diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzipci" "b/data_all_eng_slimpj/shuffled/split2/finalzzipci"
new file mode 100644--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzipci"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\nResearch in interpretable machine learning (IML) has explored means to understand how machine learning (ML) models work since the nineties ({\\it e.g}., \\citeauthor{towell1992interpretation} \\citeyear{towell1992interpretation}). Popular methods to help understand ML models are referred to as attribution methods \\citep{olah, yeh2018, koh2017understanding}, they identify features or instances responsible for a classification.\n\nWith the exception of human-centered studies \\citep{hoffman2018metrics}, the evaluation methods being used in XAI and IML include comparison to existing methods, metrics and axioms, sensitivity analyses, gold features, and demonstration of image classification (details and references in Section Background and Related Works). The problems with these methods include that they do not indicate where current XAI approaches fail thereby preventing consistent progress of the field. They do not measure accuracy as a way to validate correctness or to produce accountable agents ({\\it e.g}., \\citeauthor{Diakopoulos2014AlgorithmicAR} \\citeyear{Diakopoulos2014AlgorithmicAR}, \\citeauthor{kroll2016accountable} \\citeyear{kroll2016accountable}, \\citeauthor{doshivelez2017rigorous} \\citeyear{doshivelez2017rigorous}), and it is practically impossible to determine whether one XAI method is better than other or what the weaknesses of existing methods are, leaving researchers without guidance on which re-search questions will advance the field.\n\nThe intended purpose of this paper is to address these limitations with current XAI evaluation methods by proposing the use of data representing ground-truth explanations (GTE). In a variety of computer science tasks, it is a standard practice to treat some representation of data as ground truth. Ground-truth data, in its essence, represents data that is verifiable and considered as the most accurate against which a new system is tested against \\citep{wikilink}. Various authors agree that the lack of ground-truth for evaluating explanations is a limitation \\citep{tomsett2019sanity, hooker2019benchmark, yang2019evaluating, Montavon2019}. Consequently, we investigate the challenges in creating data representing GTEs. Our goal is to promote consistent and methodical progress of the XAI field. The scope of this paper is limited to neural networks (NN) for classification tasks.\n\nThe next section presents related methods, metrics and axioms to evaluate XAI methods. Then, we introduce how to generate three data sets representing GTEs. Next, we use the generated data to train NN models to submit to LIME \\citep{ribeiro2016should} and produce explanations while converting the GTEs into LIME's explanations format. We evaluate LIME, and analyze the evaluation, seeking support to our conclusions as a means to validate the evaluation. 
We conclude with a discussion of issues and benefits, and future work.\n\\begin{table*}[t]\n\\centering\n\\small\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{p{0.31\\textwidth}|p{0.69\\textwidth}}\n \\hline\n \\textbf{Evaluation method\/Axiom\/Metric} & \\textbf{Method proposer and\/or example authors who employed them}\\\\\n \\hline\n Sensitivity analysis & \\citet{adebayo2018sanity}\\\\\n \\hline\n Example images & \\citet{ribeiro2016should}\\\\\n \\hline\n SAT\/Model counting & \\citet{ignatiev2018, narodytska2019assessing, ignatiev2019validating} \\\\\n \\hline\n Correlation, completeness, and complexity & \\citet{cui2019integrative}\\\\\n \\hline\n Conservation, continuity & \\citet{Montavon2019}\\\\\n \\hline\n Fidelity & \\citet{alavarez2018}\\\\\n \\hline\n Gold features & \\citet{ribeiro2016should}\\\\\n \\hline\n Post-hoc accuracy & \\citet{chen2018learning, Bhatt2019BuildingHT, xie2019reparameterizable, bai2020attention}\\\\\n \\hline\n Perturbation analysis for vision & \\citet{zeiler2014visualizing}\\\\\n \\hline\n ROAR & \\citet{hooker2019benchmark}\\\\\n \\hline\n Perturbation on Time Series & \\citet{schlegel2019rigorous}\\\\\n \\hline\n Implementation invariance, sensitivity & \\citet{sundararajan2017axiomatic}\\\\\n \\hline\n Input invariance & \\citet{kindermans2019reliability}\\\\\n \\hline\n Simulated users & \\citet{ribeiro2016should}\\\\\n \\hline\n Amazon Mechanical Turk users & \\citet{ribeiro2016should, chen2018learning}\\\\\n \\hline\n In-depth Interviews & \\citet{hoffman2018metrics}\\\\\n \\hline\n\\end{tabular}\n}\n\\caption{Methods, metrics or axioms used to evaluate XAI and IML methods}\n\\label{alternativetable1}\n\\end{table*}\n\\section{Background and Related Works} \\label{BGDRW}\nTable \\ref{alternativetable1} lists various methods currently used to evaluate both XAI and IML methods. The authors referenced in the table are those who first proposed the methods or who have used them in XAI and IML works. None of these methods use data representing ground-truth explanations. The closest to ground truth is the use of gold features \\citep{ribeiro2016should}, which are a set of features used in a model that are well known to be the most important.\n\n\\citet{doshivelez2017rigorous} categorize IML evaluations as application-, human-, and functionally-grounded. The authors propose that any method should be evaluated along those three categories, one informing the other. \\citet{yang2019evaluating} are the only authors who actually present a reason against using ground truth to benchmark explanation methods, namely, that explanation quality is user-dependent. These authors propose three metrics for IML, namely, generalizability, fidelity, and persuasibility. Their fidelity metric aims to measure explanation relevance in the applied context. \\citet{gunning2019darpa} propose that XAI approaches be evaluated along five categories, namely, {\\it explanation goodness}, {\\it explanation satisfaction}, {\\it mental model understanding}, {\\it user task performance}, and {\\it appropriate trust and reliance}. Considering that human-centered studies entail a lot of subjectivity, of those, only {\\it explanation goodness} seems to be an objective category of explanation quality. All other categories are evaluated by humans or an external task. \\citet{tomsett2019sanity} conducted a meta-evaluation of saliency methods by analyzing metrics previously proposed in the literature for evaluating them. 
To do this, they adopted psychometric tests to verify the reliability and consistency of local saliency metrics that rely on perturbations. They conclude that saliency metrics can produce unreliable and inconsistent results, although not always.\n\n\\begin{figure}[ht]\n\\centerline{\\includegraphics[width=\\columnwidth, height=4cm]{GTE-diagram.png}} \n\\caption{{In green, this diagram shows the steps to generate data representing GTEs and to use the data to train models. In orange are the processes for aligning GTEs to LIME's format and for sending predictions for LIME to explain. In yellow, the evaluation compares the two orange processes.}}\n\\label{gted}\n\\end{figure}\n\n\\section{Generate Data Representing Ground-Truth Explanations (GTE)}\nFigure \\ref{gted} gives an overview of the entire approach, from generating the data up to evaluation. We start by describing how to generate the data. We propose to generate data sets from existing processes, either natural or artificial, and to identify classes from said processes. We propose to represent classes in a data set via mathematical equality or inequality constraints as a minimal canonical representation from which explanations can be aligned with the format of explanations produced by various XAI methods. We define the classes and the intervals from which feature values are populated to create instances. The intervals where instance feature values can be populated will determine noise and commonsense. The nature of values allowed for each feature in the generated equations will determine whether classes remain disjoint. Overlapping classes will impact evaluations by producing noise. Another consideration when defining intervals to populate feature values is commonsense. If an explanation indicates that the value of a feature is $0.3\\times 10^{-6}$, then the feature should not represent someone's age. Next, we describe the generation of three data sets. \n\n\\subsection{Generate Data Set Loan}\n\nThe process we chose is loan underwriting. This is a small data set, consisting of 54 instances, two classes (accept and reject), and three input features characterizing three well-known factors considered in loan underwriting, namely, job condition, credit score, and debt-to-income ratio. We created this data set manually to characterize realistic instances. \nThe instance space is given by the arrangement of the three features and their four possible values, that is, $4 \\times 4 \\times 4 = 64$. We eliminated 10 instances from the data because they were not realistic. The data is generated with a system of two equations as follows:\n\\begin{equation}\n f(x)= \n\\begin{cases}\n 8(x_1 - 2)^2 + 3x_2^3 - x_3^4 + 4, & \\text{\\it if } x_1\\ne 2\\\\\n 3x_2^3 + x_3^4 + 12, & \\text{\\it if } x_1= 2\n\\end{cases}\n\\end{equation}\n\\begin{equation}\n\\begin{cases}\n Accepted, & \\text{\\it if } f(x)\\ge 32\\\\\n Rejected, & \\text{\\it if } f(x)< 32\n\\end{cases}\n\\end{equation}\n\nAs stated above, we considered class overlap and commonsense when defining the allowable values for the three features. The first feature, $x_1$, corresponds to the job condition of the applicant. This feature can be populated with integer values along the interval [2, 5], where 2 represents lack of a job, and values 3, 4, and 5 represent, respectively, that the applicant has held a job for less than one year, less than 3 years, or more than 3 years. The second feature, $x_2$, refers to credit score, which assumes integer values in the interval [0, 3], distributed in ranges of less than 580, 580--650, 650--750, and more than 750. The third and last feature, $x_3$, refers to the ratio of debt payments to monthly income, which assumes integer values in the interval [0, 3], distributed in ranges of less than 25\\%, 25--50\\%, 50--75\\%, and more than 75\\%.
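\n\nTo make the generation procedure concrete, the following minimal sketch (in Python) enumerates the instance space and labels it with Equations (1) and (2); the variable names and the hand-filtering step are only illustrative of the procedure, whose actual scripts are in the repository linked at the end of this paper.\n\\begin{verbatim}\nimport itertools\n\ndef f(x1, x2, x3):\n    # Equation (1): piecewise score, conditioned on the job feature x1\n    if x1 != 2:\n        return 8*(x1 - 2)**2 + 3*x2**3 - x3**4 + 4\n    return 3*x2**3 + x3**4 + 12\n\ndef label(x1, x2, x3):\n    # Equation (2): decision threshold at 32\n    return 'Accepted' if f(x1, x2, x3) >= 32 else 'Rejected'\n\n# enumerate the 4 x 4 x 4 = 64 candidate instances:\n# x1 in [2, 5] (job), x2 and x3 in [0, 3] (credit, debt-to-income)\ninstances = [(x1, x2, x3, label(x1, x2, x3))\n             for x1, x2, x3\n             in itertools.product(range(2, 6), range(4), range(4))]\n# the 10 unrealistic combinations are then removed by hand, leaving 54\n\\end{verbatim}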
\n\n\\subsection{Generate Data Set Distance}\n\nWe adopt the equation used to calculate travel energy consumption based on travel distance. The Data Set Distance has a total of 2,600,000 instances, described through five features and 10 classes.\nThe five variables are Trip Frequency (TF), Trip Distance (TD), Transportation Occupancy (TO), Energy Intensity (EI), and Travel Mode (m). The Data Set Distance is generated using Equation 3 for travel energy consumption based on travel distance. \n\nUsing the base equation, we created 10 unique variations with the following goals: the variations should remain realistic, and each variation is a set of operations (such as raising to an exponent or multiplying by a scalar) performed on one or more variables. Afterward, we generate the data for the base equation by creating every permutation of the four equation variables within a specified range using a truncated normal distribution. The four variables are used as features along with a fifth variable, travel mode. For each of the 10 variations, we use the set of operations on the base equation data to generate the equivalent rows for that variation.\n\\begin{equation}\n E_m = TF \\times \\frac{TD_m}{TO_m} \\times EI_m\n\\end{equation}\n\n\\subsection{Generate Data Set Time}\n\nFor Data Sets Time and Distance, we used processes from the field of energy consumption analysis that describe various realistic processes with different focuses (e.g., distance or time) and include equations with a variety of features that can receive multiple values. These characteristics facilitate the generation of large data sets, so we can create conditions similar to those faced by XAI methods in the real world. In this paper, we generate data from transportation energy consumption, which can be used to calculate travel time and travel distances related to household energy consumption.\n\nEquation 4 is the basic equation to calculate energy (E) as a function of time. The four variables are Travel Time (TT), Speed, Fuel Economy (FE) and Travel Mode (m). The Data Set Time has a total of 504,000 instances and seven classes. Each class is defined by a small tweak to the equation. Using the base equation, we create seven unique variations with the same goals and process as we did for the Distance data set.\n\n\\begin{equation}\n E = \\sum_{m=1}^{5} TT \\times Speed_m \\times FE_m\n\\end{equation}\n\n\\subsection{Train NN Models}\nThe number of models, the type of models, and how they vary between them depend on the metrics selected in the previous step. Consider, for example, that the selected metric is {\\it implementation invariance} \\cite{sundararajan2017axiomatic}. This metric requires multiple types of models. In this paper, we trained two models for the Loan and Time data sets, and one model for the Distance data set, which we summarize next (detailed architectures are given in the GitHub link at the end of this paper).\n\n\\subsubsection{Models NN1 and NN2}\nThe changes from NN1 to NN2 for Loan and Time included the number and type of layers. Both models built for the Loan data reached 100\\% accuracy. Given the small number of 54 instances, we did not separate testing and training. For Data Set Time, NN1 reached 97\\% accuracy and NN2 96\\%. 
The generated instances were split into 403,200 for training and 100,800 for testing. The accuracy obtained for Data Set Distance was much lower, 82\\%. This is most likely due to noise from class overlaps that occurred during the data set generation. \n\n\\subsection{LIME Explains Predictions}\nAs depicted in the diagram in Figure \\ref{gted}, after training the models, the next steps can be concurrent. This section describes the step where LIME explains the predictions from the models. First, let us briefly review how LIME works. Local interpretable model-agnostic explanations (LIME) is a feature attribution method formally described in \\citet{ribeiro2016should}. LIME assumes that the behavior of a sample point ({\\it i.e}., an instance) can be explained by fitting a linear regression with the point (to be explained) and the point's close neighbors. LIME perturbs the values of the point to be explained and submits them to the model to obtain their predictions, thus creating its own data set. Next, LIME measures the cosine similarity between the point to be explained and the points generated from its perturbations to select a region of points around it. LIME then utilizes a hyperparameter, {\\it number of samples (num\\_sample)}, to select the number of points it will use in the final step, which is the fitting of a linear regression. The hyperparameter {\\it num\\_sample} determines how many of the perturbed points will be used with the point to be explained to fit a linear regression. This last step produces the coefficients of the line that expresses LIME's explanation. \n\nFor Data Set Loan, we submit to LIME all the 54 instances to be explained, and models NN1 and NN2. Note that all these instances with both NN models will have correct predictions because both models reached 100\\% accuracy. The number of samples selected for the first evaluations was 25, but we also created GTEs for 5 and 50 samples. The output we receive back from LIME is two sets of 54-by-3 coefficients, one coefficient for each of the three features, one set for NN1, and one set for NN2. \n\nThe Data Set Time has 100,800 testing instances, but the models did not reach 100\\% accuracy. Consequently, we randomly selected 10,000 instances that both models NN1 and NN2 predicted correctly and only submitted those 10,000 to LIME together with the two models. We made sure to select the instances from those correctly predicted because sending instances incorrectly predicted by the models would mislead LIME into producing poor explanations, and we wanted to provide LIME with the best data for a fair assessment. The output produced by LIME is two sets of 10,000-by-4 coefficients, accounting for the four features in this data set, one for each model NN1 and NN2. We set the hyperparameter number of samples to 1,000.\n\nThe accuracy reached by the Distance model NN1 was 82\\%. The number of testing instances was 520,000, so we randomly selected 50,000 instances from those correctly predicted. The data produced by LIME for NN1 is a 50,000-by-5 matrix, given the five features in this set. For the number of samples, we used 5,000.
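\n\nTo make this step concrete, the fragment below sketches how such coefficient matrices can be obtained with the publicly available {\\it lime} package for the Loan setting; the feature and class names, and the assumption of a scikit-learn-style classifier exposing {\\it predict\\_proba}, are illustrative rather than a description of our exact pipeline.\n\\begin{verbatim}\nimport numpy as np\nfrom lime.lime_tabular import LimeTabularExplainer\n\n# X: generated instances (rows); model: trained classifier\nexplainer = LimeTabularExplainer(\n    X,\n    feature_names=['job', 'credit', 'dti'],\n    class_names=['Rejected', 'Accepted'],\n    mode='classification')\n\ncoefficients = []\nfor row in X:\n    exp = explainer.explain_instance(\n        row, model.predict_proba,\n        num_features=X.shape[1], num_samples=25)\n    # as_map(): {label: [(feature index, Ridge coefficient), ...]}\n    coefs = dict(next(iter(exp.as_map().values())))\n    coefficients.append([coefs.get(i, 0.0) for i in range(X.shape[1])])\ncoefficients = np.array(coefficients)   # 54-by-3 matrix for Data Set Loan\n\\end{verbatim}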
\n\n\\subsection{GTE Data Aligns with LIME Explanation Format}\n\nThis step is represented in Figure \\ref{gted}, in orange, and runs concurrently with the step \"LIME Explains Predictions\". As already noted, the data we produce representing GTEs is one specific way to represent explanations. We can only use it to evaluate a target XAI method after the data representing GTEs is in the same format or has been processed under the same conditions. As a general rule of thumb, this conversion may require taking the ground-truth data and executing the last steps of the target method.\n\nFor LIME, an explanation consists of a line fitted from a target point whose prediction we want to explain and the points (as many as the hyperparameter number of samples, {\\it num\\_sample}) that are the closest to the target based on cosine similarity. Ultimately, this means that evaluating LIME amounts to determining how realistic the {\\it num\\_sample} perturbed points closest in cosine similarity are. Consequently, we take the points from the data representing GTEs and execute this same process, namely, we measure the cosine similarity to each target point to be explained and then fit linear functions using the same Ridge regression method and regression parameters used in LIME, with the same number of samples as established by the {\\it num\\_sample} hyperparameter. The result is that, for each data set, we have matrices with the same number of coefficients (one per feature per instance) as produced by each model. \n\n\\section{Evaluation Measures Compare LIME Explanations Against GTE}\n\\subsection{Propose Evaluation Measures}\n\\subsubsection{Euclidean distance (ED)} We adopt the {\\it Euclidean distance} (ED), which is a natural choice to measure how far apart two points are in $n$-dimensional space. For this reason, we compute, for each instance of each data set and NN model, the ED between the point described in the GTE data and the point described through LIME's explanation coefficients. The range of the ED is [0, +$\\infty$); however, we normalize the ED using the maximum and minimum values obtained for each data set and parameter setting. The goal is to keep the ED's results within the interval [0, 1] for better visualization. The purpose of computing the ED between the GTE data and LIME's explanation coefficients is to measure {\\it accuracy} as a measure of {\\it explanation goodness}. Because the ED is a distance, it produces results in the opposite direction of quality. For this reason, later we will compute the {\\it Complement of ED}, which we denote as {\\it C-of-ED}, as its mathematical complement.
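\n\nA minimal sketch of this computation is given below; it assumes the GTE coefficients have already been aligned to LIME's format as described in the previous subsection and that both sets are stored as instances-by-features matrices (the matrix names are placeholders).\n\\begin{verbatim}\nimport numpy as np\n\ndef c_of_ed(gte_coefs, lime_coefs):\n    # Euclidean distance between GTE and LIME coefficients, per instance\n    ed = np.linalg.norm(gte_coefs - lime_coefs, axis=1)\n    # min-max normalization to [0, 1], then the complement,\n    # so that larger values mean better (closer) explanations\n    ed_norm = (ed - ed.min()) \/ (ed.max() - ed.min())\n    return 1.0 - ed_norm\n\\end{verbatim}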
\n\n\\subsubsection{Implementation Invariance} \\citet{sundararajan2017axiomatic} proposed that explanation methods should produce identical attributions for networks that produce the same outputs while receiving the same inputs, which are referred to as functionally equivalent. This is why we created models with different architectures for two of our data sets.\n\n\\subsubsection{Measures of Order} We propose to use the order of the explanation coefficients as measures of {\\it accuracy} or {\\it explanation goodness}. In LIME \\citep{ribeiro2016should}, the explanation coefficients assign importance to each feature in the sense that the feature that is assigned the highest coefficient is the most important feature in the explanation. This is related to the use of gold features to evaluate explanations, as proposed by \\citet{ribeiro2016should}. When gold features are used, the evaluation often targets the inclusion or not of a feature in an explanation. In the studies in this paper, we do not discuss the inclusion or not of features because our data sets have three, four, and five features each. At these small numbers of features, LIME includes all of them; this way, we do not have to evaluate whether a feature is present, but how important it is considered to be. Note that the order of features is proposed particularly to evaluate LIME, given the format in which LIME presents its explanations, although order is an important aspect to consider when evaluating any explanation method. \n\nWe define two evaluation measures of order: {\\it Second Correct} and {\\it All Correct}. {\\it Second Correct} indicates whether the second feature in the descending order of importance of an explanation's coefficients is correct in the sense that it is the same feature ordered as the second most important in the GTE data. {\\it All Correct} indicates whether all the features are in the same order as the features in the GTE data. The values for these measures are counted as 1 or 0 for each instance. The comparisons include results for 100 runs, hence the values represent percentages.\n\\subsection{Comparing GTEs to LIME Explanations}\n\n\\subsubsection{NN1 vs. NN2}\nWe start by comparing the ED across the two different NN architectures, NN1 and NN2, for data sets Loan and Time to assess {\\it Implementation Invariance}. Both were executed for 100 runs, and thus we compute the average and standard deviation across the 100 runs for each instance.\n\nWe use a parametric t-test to measure whether the values differ significantly between the two samples NN1 and NN2. To conduct the t-test, we pose the hypothesis that the true difference between NN1 and NN2 is zero. The t-test determines that for $p$-values greater than 0.1, we cannot reject the hypothesis that the difference between the samples is zero. The $p$-values computed for the Loan and Time data sets are, respectively, 0.979 and 0.661. These resulting $p$-values show that for both Loan and Time, the differences between NN1 and NN2 are not statistically significant. \\citet{sundararajan2017axiomatic} suggest that explanation methods should satisfy {\\it Implementation Invariance} for functionally equivalent NNs. This means that their explanations ought to be the same. As far as the t-test shows, the explanation coefficients are not significantly different, so at this level of specificity, they satisfy {\\it Implementation Invariance}. Given these results, we will use only NN1 for the remainder of the studies.\n\n\\begin{figure}\n\\centerline{\\includegraphics[width=\\columnwidth, height=6cm]{CofEDvsMeasuresOfOrder.png}} \n\\caption{C-of-ED (red), Second Correct (green), and All Correct (black) for data sets Loan, Time, and Distance}\n\\label{CofEDvs}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{p{0.1\\textwidth}|c|c|c}\n \\hline\n Data Set & Ave. C-of-ED & Ave. Second & Ave. All \\\\\n \\hline\n Loan & 0.47 & 0.32 & \\textbf{0.179}\\\\\n \\hline\n Time & 0.76 & 0.03 & 0.0008\\\\\n \\hline\n Distance & \\textbf{0.82} & \\textbf{2.88} & 0.08\\\\ \n\\end{tabular}\n\\caption{Averages (Ave.) obtained for C-of-ED, {\\it Second correct}, and {\\it All correct} for all 100 runs and all instances for NN1 for the three data sets}\n\\label{5-25-50}\n\\end{table}
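\n\nFor concreteness, the two measures of order defined above can be computed per run from the coefficient matrices as sketched below; treating the absolute value of a coefficient as its importance is an assumption of this sketch, and the matrix names are placeholders.\n\\begin{verbatim}\nimport numpy as np\n\ndef order_measures(gte_coefs, lime_coefs):\n    # rank features from most to least important for each instance\n    gte_order  = np.argsort(-np.abs(gte_coefs), axis=1)\n    lime_order = np.argsort(-np.abs(lime_coefs), axis=1)\n    # Second Correct: second most important feature matches the GTE order\n    second_correct = (gte_order[:, 1] == lime_order[:, 1]).astype(int)\n    # All Correct: every feature is in the same position as in the GTE order\n    all_correct = (gte_order == lime_order).all(axis=1).astype(int)\n    return second_correct, all_correct\n\\end{verbatim}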
\n\n\\subsubsection{Comparing C-of-ED against Second Correct and All Correct}\nNow we compare the measures of order {\\it All Correct} and {\\it Second Correct} against the complement of the ED, {\\it C-of-ED}. We use the complement because the measures of order run in the opposite direction of the ED. Figure \\ref{CofEDvs} shows the three measures for Loan, Time, and Distance. \n\nVisual inspection suggests several observations. First, if we look at the red line for {\\it C-of-ED}, it shows that the quality of LIME's explanations seems to increase with larger values of the number-of-samples hyperparameter. Note that the charts for Time and Distance show only 100 instances because showing all 10,000 and 50,000 would make them indecipherable. For this reason, we include Table \\ref{5-25-50} with the averages for all instances to help the interpretation of the charts. The averages for Loan, Time, and Distance are, respectively, 0.47, 0.76, and 0.82. Recall that the numbers of samples submitted to LIME for these data sets were, respectively, 25, 1,000, and 5,000. This seems reasonable, as it allows LIME more chances to populate the region of the instance to be explained, thus increasing its chances of success.\n\nSecond, the measure {\\it All Correct} (black line) in Figure \\ref{CofEDvs} represents the number of times an instance has all its feature coefficients in the same order as GTE's coefficients. It is not surprising that this is the lowest line with respect to the $y$-axis, as it is more demanding than {\\it Second Correct} (in green). \n\nThird, even with limited instances in the charts for Time and Distance, we see that the quality of LIME's explanations varies. This deserves a more detailed analysis. With the small Loan data set, we can see, for example, that all three measures agree that Instance 50 is lower in quality than Instance 49. But before we examine the numbers and explore potential reasons for LIME having more difficulty explaining some instances than others, let us scrutinize these measures.\n\n\\subsection{Validating Evaluation Measures}\nIn this section, we investigate whether we have evidence to support these results by further analyzing what the measures above can tell us about LIME explanations. To do this, we now focus on the Data Set Loan because its small scale allows us to conduct a detailed and comprehensive analysis. When we described the experimental design for the Loan data above, we mentioned that we set the number-of-samples parameter to 25. We now expand the results to two more values, 5 and 50. \n\nFigure \\ref{ed-5-25-50} shows the measures {\\it C-of-ED}, {\\it Second correct}, and {\\it All correct} for the different numbers of samples used with the Loan data. We kept the colors used in earlier charts, with lighter hues for 5 samples, darker for 50, and an intermediate tone for 25. For C-of-ED, the average at 5 samples is the highest, 0.60, against 0.47 and 0.41 for 25 and 50. For {\\it Second correct}, the highest average is 0.35, obtained with 5 and 50 samples, against 0.32 with 25. For {\\it All correct}, the highest is again at 5 samples, with 0.22 against 0.18 and 0.16 for 25 and 50. \n\n\\begin{figure}[t]\n\\centerline{\\includegraphics[width=\\columnwidth, height=6cm]{ED2ndAllLoan5-25-50.PNG}} \n\\caption{Comparison of C-of-ED (top), Second Correct (middle), and All Correct (bottom) for Data Set Loan and number of samples 5, 25, and 50} \n\\label{ed-5-25-50}\n\\end{figure}\n\nThe first observation is that these results do not match the earlier conclusion that higher numbers of samples lead to more accurate explanations, although that conclusion makes sense technically. 
Thorough examination of the results for every instance reveals that, at 5 samples, the data representing GTEs have a very high proportion of coefficients that are zero. The exact number is 18 zeros for coefficient $x_1$, 23 for $x_2$, and 27 for $x_3$, corresponding to 33\\%, 43\\%, and 50\\% of the 54 instances. These high numbers of zeros can be explained by the low number of samples, which makes it hard to fit the linear regression and thus returns zeros. We then examined the number of zeros in the coefficients produced by LIME and noted that, in the 100 runs of 54 instances, the total numbers of zeros are 26, 27, and 29, respectively, for $x_1$, $x_2$, and $x_3$, representing averages of 2.6, 2.7, and 2.9 over the 54 instances (around 5\\%). Consequently, given that LIME coefficients do not have such an abundance of zeros, the ED will artificially show better results at 5 samples because these distances will be shorter. The distance between a number (which can be positive or negative) and zero tends to be shorter than the distance between two numbers that can each be positive or negative. The zeros also cause problems in computing the measures of order.\n\nTwo observations can be made from the identification of these high volumes of zeros. First, the evaluations at 5 samples for the Loan data are artificial: the measures report them as good, but the numbers do not originate from better explanations. Consequently, we do not have any reason to question that higher numbers of samples lead to better-quality explanations.\n\nSecond, these artificially produced numbers do indicate better quality, and all the proposed measures reflect them. This supports the quality of the proposed measures.\n\nFinally, these studies suggest that the best-quality explanations from LIME for the Loan data should be obtained when using 50 samples, but the measures do not show this consistently. Consider that 50 samples is almost the total number of instances in the Data Set Loan. With both the data representing GTEs and LIME using 50 samples, what would cause the difference in the coefficients? If we could tell LIME the range and precision of the allowable values for the data to use in the perturbations, with only three features and an NN with 100\\% accuracy, LIME would only generate perturbations that matched the actual data set, and given that we used the same cosine similarity and the same Ridge regression with the same parameters, LIME's perturbations would all be actual instances. When using 50, it would be 50 out of 54, exactly like the data representing GTEs. Consequently, the only point of information separating LIME from better ({\\it i.e}., more accurate) explanations is not knowing the range and precision of allowable values. In practice, in a real-world model that needs explanation, there is nothing preventing us from asking for the actual values allowed in the data to create more accurate perturbations. This demonstrates how the use of data representing ground-truth explanations can lead to analyses that will improve existing XAI methods. \n\n\\section{Discussion and Conclusions}\nThe methodology we describe to generate data representing {\\it ground-truth explanations} (GTEs) poses many challenges. It requires the identification of a data-generation process and needs equations to define classes. 
The possibility of class overlap, its benefits and limitations, and methods to avoid noise are questions for future work.\n\nThe need to align data representing GTEs with the target method to be evaluated may pose challenges, such as the one we faced when a low hyperparameter value produced artificially good results. This suggests this approach may be far from being fully automated.\n\nThe proposing authors of {\\it implementation invariance} \\cite{sundararajan2017axiomatic} suggest that explanation methods should satisfy it, which means producing the same explanation as long as NNs are functionally equivalent. If we envisage an explanation in support of accountability reports, then we want to have methods that can distinguish when a different architecture leads to a different explanation. Furthermore, when computing {\\it implementation invariance}, we face the question of at which level of specificity to compare the explanations from these models. This raises the question of what it means for two explanations to be the same. The answer will differ depending on how the XAI method formats explanations.\n\nWe analyzed the results of our evaluation of LIME and showed how that analysis led us to conclusions about how LIME could be improved. Although not explicitly shown, our proposed method is measurable and verifiable, allowing the comparison between two explanation approaches. Further work examining why a method performs better in a certain type of instance, such as outlier vs. non-outlier instances, can help direct how to improve said methods. Finally, this proposed approach sheds light on how to demonstrate accountability, create benchmarks, and contribute to the progress of the field.\n\nAll data and code necessary for reproducibility are available at https:\/\/github.com\/Rosinaweber\/DataRepresentingGroundTruthExplanations\/tree\/master.\n\n\\subsubsection{Acknowledgements}\nRosina Weber and Prateek Goel are supported by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health, through the Biomedical Data Translator program award {\\#}OT2TR003448. Any opinions expressed in this document are those of the authors and do not necessarily reflect the views of NCATS, other Translator team members, or affiliated organizations and institutions.
","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{\\label{sec:Intro}Introduction}\n\nOne way of realizing theories describing gravity as a truly\nhigher-dimensional interaction becoming effectively 4D at low-enough \nenergies is represented by the braneworld models, where the\nobservable universe is a 3-brane (domain wall) to which the\nstandard model (non-gravitational) matter fields are confined,\nwhile gravity field enters the extra spatial dimensions the size\nof which may be much larger than the Planck length scale\n$l_\\mathrm{P}\\sim 10^{-33}\\, \\mathrm{cm}$ \\cite{Ark-Dim-Dva:1998:}.\n\nAs shown by Randall and Sundrum \\cite{Ran-Sun:1999:}, gravity can be localized near the brane at low energies even with\na non-compact, infinite size extra dimension \nwith the warped spacetime satisfying the 5D Einstein equations\nwith negative cosmological constant. Then an arbitrary energy-momentum tensor\ncould be allowed on the brane \\cite{Shi-Mae-Sas:2000:}.\n\n\nThe Randall-Sundrum model gives 4D Einstein gravity in low energy\nlimit, and the conventional potential of Newtonian gravity appears\non the 3-brane with high accuracy \\cite{Ran-Sun:1999:}. Significant deviations from the\nEinstein gravity occur at very high energies, e.g., in the very\nearly universe, and in vicinity of compact objects\n\\cite{Maa:2004:,Dad-etal:2000:,Ger-Maa:2001:,Ali-Gum:2005:}.\nGravitational collapse of matter trapped on the brane results in\nblack holes mainly localized on the brane, but their horizon could\nbe extended into the extra dimension. The high-energy effects\nproduced by the gravitational collapse are disconnected from the\noutside space by the horizon, but they could have a signature on\nthe brane, influencing properties of black holes\n\\cite{Maa:2004:}. There are high-energy effects of local\ncharacter influencing pressure in collapsing matter, and also\nnon-local corrections of ``backreaction'' character arising from\nthe influence of the Weyl curvature of the bulk space on the brane\n-- the matter on the brane induces Weyl curvature in the bulk\nwhich makes influence on the structures on the brane due to the bulk\ngraviton stresses \\cite{Maa:2004:}. The combination of\nhigh-energy (local) and bulk stress (non-local) effects alters\nsignificantly the matching problem on the brane, compared to the\n4D Einstein gravity; for spherical objects, matching no longer\nleads to a Schwarzschild exterior in general\n\\cite{Maa:2004:,Ger-Maa:2001:}. The Weyl stresses\ninduced by bulk gravitons imply that the matching conditions do\nnot have unique solution on the brane; in fact, knowledge of the\n5D Weyl tensor is needed as a minimum condition for uniqueness\n\\cite{Ger-Maa:2001:}.\\footnote{At present, no exact 5D\nsolution in the braneworld model is known.} Some solutions for\nspherically symmetric black holes \\cite{Dad-etal:2000:} and\nuniform density stars \\cite{Ger-Maa:2001:} have been discussed. It has been shown\nthat in the black hole case the matching conditions could be satisfied and the bulk effects \non the black hole spacetimes could be represented by a single ``brany`` parameter. \n\nAssuming spherically symmetric metric induced on\nthe 3-brane, the constrained effective gravitational field equations on the\nbrane could be solved, giving Reissner-Nordstr\\\"{o}m static\nblack hole solutions endowed with a braneworld parameter $b$ having character of a ``tidal'' charge,\n instead of the standard electric charge\nparameter $Q^2$ \\cite{Dad-etal:2000:}. 
The tidal charge can be both positive and negative, however, there are some\nindications that negative tidal charge should properly represent the\n``backreaction'' effects of the bulk space Weyl tensor on the\nbrane \\cite{Dad-etal:2000:}.\n\nThe stationary and axisymmetric solutions describing\nrotating black holes localized in the Randall-Sundrum braneworld\nwere derived in \\cite{Ali-Gum:2005:}, having the metric tensor of the\nKerr-Newman form with a tidal charge describing the 5D correction\nterm generated by the 5D Weyl tensor stresses. The tidal charge\nhas an ``electric'' character again and arises due to the 5D\ngravitational coupling between the brane and the bulk, reflected\non the brane through the ``electric'' part of the bulk Weyl tensor\n\\cite{Ali-Gum:2005:}, in analogy with the spherically\nsymmetric case \\cite{Dad-etal:2000:}.\n\nWhen both the tidal and electric charge are present the\nblack hole spacetime structure is much more complex and additional off-diagonal metric components $g_{r\\phi}$,\n $g_{rt}$ are relevant along with the standard $g_{\\phi t}$ component, \ndue to the combination of the local bulk effects and the\nrotational dragging. This distorts the event horizon which\nbecomes a stack of non-uniformly rotating null circles having\ndifferent radii at fixed $\\theta$ while going from the equatorial\nplane to the poles \\cite{Ali-Gum:2005:}. The uniformly\nrotating horizon is recovered for the rotation\nparameter $a$ small enough where Kerr-Newman form of the metric tensor is allowed describing charged and slowly rotating black holes \\cite{Ali-Gum:2005:}. In the absence of rotation, the metric tensor reduces\nto the Reissner-Nordstr\\\"{o}m form with correction term of non-local origin\n \\cite{Cha-etal:2001:}.\n\nHere we restrict our attention to the Kerr-Newman type of\nsolutions describing the braneworld rotating black holes with no\nelectric charge, since in astrophysically relevant situations the\nelectric charge of the black hole must be exactly zero, or very\nsmall \\cite{MTW}. Then the results obtained in analysing the\nbehaviour of test particles and photons or test fields around the\nKerr-Newman black holes could be used assuming both positive and\nnegative values of the braneworld tidal parameter $b$ (used instead of\ncharge parameter $Q^2$).\n\nThe information on the properties of strong gravitational fields in vicinity of compact objects, namely\nof black holes, is encoded into optical phenomena of different kind that enable us to make estimates of the black hole parameters, including its tidal charge, when predictions of the theoretical models are confronted with the observed data. From this point of view, the spectral profiles of accretion discs around the black holes in galactic binaries, e.g., in microquasars, are most promising \\cite{Nar-Mcl-Sha:2007:,McCli-Nar-Sha:2007:}, along with profiled spectral lines in the X-ray flux \\cite{Laor:1991:,Bao-Stu:1992:,Stu-Bao:1992:,Kar-Vok-Pol:1992:,Mat-Fab-Ros:1993:,Zak:2003:}. Important information could also be obtained from the quasiperiodic oscillations observed in the X-ray flux of some low-mass black hole binaries of Galactic origin \\cite{Rem-McCli:2006:ARASTRA:}, some expected intermediate black hole sources \\cite{Stroh:2007a:}, or those observed in Galactic nuclei \\cite{Asch:2004:ASTRA:,Asch:2007:}. 
In the case of our Galaxy centre black hole Sgr~A$^*$, we should be able to measure the optical phenomena in more detail than for other sources, since it is the nearest supermassive black hole with mass estimated to be $\\sim 4\\times 10^6 M_\\odot$ \\cite{Ghez:2005:}, enabling us to measure the ``silhouette'' of the black hole and other subtle GR phenomena \\cite{Bardeen:1973:,Cun-Bar:1973:}.\n\\par\nIn the present paper, we give an introductory study of the influence of the tidal charge on the optical phenomena near a rotating black hole. We focus our attention on some characteristic phenomena in the close vicinity of the black-hole horizon, where the effects of the tidal charge could in principle be of the same order as those of the black hole mass and spin, contrary to the case of weak lensing effects. The light escape cones are given for families of astrophysically interesting sources, namely in locally non-rotating frames, and frames related to circular geodesic motion and radially free-falling sources, in Section 4 \\cite{SSJ:RAGTime:2005:Proceedings}. The silhouette of the black hole is determined in Section 5. Images of the accretion discs are determined in Section 6 using the transfer-function method. In Section 7, the time delay of hot spot radiation is determined for direct and indirect images assuming circular geodesic motion in the close vicinity of the black hole horizon. In Section 8, the relevance of some effects is estimated for the Galaxy centre Sgr~A$^*$ supermassive black hole. Concluding remarks are presented in Section 9.\n\n\n\\section{\\label{sec:GravFielEqOnBrane}Gravitational field equations on the brane}\n\nIn the 5D warped space models of Randall and Sundrum, involving a non-compact extra dimension, the gravitational field equations in the bulk can be expressed in the form \\cite{Shi-Mae-Sas:2000:,Dad-etal:2000:}\n\n\\begin{equation}\n \\tilde{G}_{AB}=\\tilde{k}^2[-\\tilde\\Lambda g_{AB}+\\delta(\\chi)(-\\lambda g_{AB}+T_{AB})],\\label{beq1}\n\\end{equation}\nwhere the fundamental 5D Planck mass $\\tilde M_P$ enters via $\\tilde{k}^2=8\\pi\/\\tilde{M}_P^3$, $\\lambda$ is the brane tension, and $\\tilde\\Lambda$ is the negative bulk cosmological constant. Denoting the fifth-dimension coordinate by $\\chi=x^4$, $\\chi=0$ determines the location of the brane in the bulk space, at the point of $Z_2$ symmetry; $g_{AB}=\\tilde{g}_{AB}-n_A n_B$ is the induced metric on the brane, with $n_A$ being the unit vector normal to the brane.\n\\par\nThe effective gravitational field equations induced on the brane are determined by the bulk field equations (\\ref{beq1}), the Gauss--Codazzi equations and the generalised matching Israel conditions with $Z_2$-symmetry. They can be expressed as modified standard Einstein equations containing additional terms reflecting bulk effects onto the brane \\cite{Shi-Mae-Sas:2000:}\n\n\\begin{equation}\n G_{\\mu\\nu}=-\\Lambda g_{\\mu\\nu}+k^2 T_{\\mu\\nu} + \\tilde{k}^2 S_{\\mu\\nu} -\\mathcal{E}_{\\mu\\nu},\\label{beq2}\n\\end{equation}\nwhere $k^2=8\\pi\/M_P^2$, with $M_P$ being the braneworld Planck mass. 
The relations of the energy scales and cosmological constants are given in the form\n\n\\begin{equation}\n M_P=\\sqrt{\\frac{3}{4\\pi}}\\left(\\frac{\\tilde{M}_P^2}{\\sqrt{\\lambda}}\\right)\\tilde{M}_P;\\quad \\Lambda=\\frac{4\\pi}{\\tilde{M}_P^3}\\left[\\tilde\\Lambda+\\left(\\frac{4\\pi}{3\\tilde{M}_P^3}\\right)\\lambda^2\\right].\\label{beq3}\n\\end{equation}\nLocal bulk effects on the matter are determined by the ``squared energy-momentum'' tensor $S_{\\mu\\nu}$, \nthat reads\n\\begin{equation}\n S_{\\mu\\nu}=\\frac{1}{12}T T_{\\mu\\nu}-\\frac{1}{4}T_\\mu^{\\phantom{\\mu}\\alpha}T_{\\nu\\alpha}+\\frac{1}{24}g_{\\mu\\nu}\\left(3T^{\\alpha\\beta}T_{\\alpha\\beta}-T^2\\right),\n\\end{equation}\nwhile the non-local bulk effects are given by the tensor $\\mathcal{E}_{\\mu\\nu}$ representing the bulk Weyl tensor $\\tilde{C}_{ABCD}$ projected onto the brane, whereas\n\n\\begin{equation}\n \\mathcal{E}_{AB}=\\tilde{C}_{ABCD}n^C n^D.\\label{beq4}\n\\end{equation}\n\nSymmetries of the Weyl tensor imply that $\\mathcal{E}_{[AB]}=\\mathcal{E}_A^{\\phantom{A}A}=0$ and $\\mathcal{E}_{AB}n^B=0$. Therefore, on the brane, $\\chi\\rightarrow 0$, there is $\\mathcal{E}_{AB}\\rightarrow \\mathcal{E}_{\\mu\\nu}\\delta_A^{\\phantom{A}\\mu}\\delta_B^{\\phantom{B}\\nu}$. The $\\mathcal{E}_{\\mu\\nu}$ tensor reflects influence of the non-local gravitational effects in the bulk, including the tidal (``Coulomb``) and transverse traceless (gravitational wave) imprints of the free gravitational field of the bulk.\n\\par\nWe restrict our attention to the vacuum (at both bulk and brane) solutions of the gravitational field equations on the brane. Assuming zero cosmological constant on the brane ($\\Lambda=0$) we arrive to the condition\n\n\\begin{equation}\n \\tilde\\Lambda=-\\frac{4\\pi\\lambda^2}{3\\tilde{M}_P^2}.\\label{beq5}\n\\end{equation}\nIn the absence of matter fields, there is $T_{\\mu\\nu}=0=S_{\\mu\\nu}$, i.e., we are not interested in the properties of the squared energy-momentum $S_{\\mu\\nu}$ representing local effects of the bulk. In the vacuum case, the effective gravitational field equations on the brane reduce to the form \\cite{Shi-Mae-Sas:2000:}\n\\begin{equation}\n R_{\\mu\\nu}=-\\mathcal{E}_{\\mu\\nu},\\quad R_\\mu^{\\phantom{\\mu}\\mu}=0=\\mathcal{E}_\\mu^{\\phantom{\\mu}\\mu}\\label{beq6}\n\\end{equation}\nimplying divergence constraint \\cite{Shi-Mae-Sas:2000:}\n\n\\begin{equation}\n \\nabla^\\mu\\mathcal{E}_{\\mu\\nu}=0\\label{beq7}\n\\end{equation}\nwhere $\\nabla_{\\mu}$ denotes the covariant derivative on the brane.\n\\par\nThe equation (\\ref{beq7}) represents Bianchi identities on the brane, i.e., an integrability condition for the field equations $R_{\\mu\\nu}=-\\mathcal{E}_{\\mu\\nu}$\\cite{Ali-Gum:2005:}. For stationary and axisymmetric (or static, spherically symmetric) solutions Eqs. (\\ref{beq6}) and (\\ref{beq7}) form a closed system of equations on the brane. \n\\par\nThe 4D general relativity energy-momentum tensor $T_{\\mu\\nu}$ (with $T_\\mu^{\\phantom{\\mu}\\mu}=0$) can be formally identified to the bulk Weyl term on the brane due to the correspondence \n\n\\begin{equation}\n k^2 T_{\\mu\\nu}\\quad\\leftrightarrow\\quad -\\mathcal{E}_{\\mu\\nu}.\\label{beq8}\n\\end{equation}\nThe general relativity conservation law $\\nabla^\\mu T_{\\mu\\nu}=0$ then corresponds to the constraints equation on the brane (\\ref{beq7}). This behaviour indicates that Einstein-Maxwell solutions in general relativity should correspond to braneworld vacuum solutions. 
This was indeed shown in the case of Schwarzschild (R-N) \\cite{Maa:2004:,Dad-etal:2000:} and Kerr (K-N) spacetimes \\cite{Ali-Gum:2005:}. In both of these solutions the influence of the non-local gravitational effects of the bulk on the brane is represented by a single ``braneworld'' parameter $b$. The Coulomb-like behaviour in the Newtonian potential\n\n\\begin{equation}\n \\Phi=-\\frac{M}{M^2_{P}r}+\\frac{b}{2r^2}\\label{beq9}\n\\end{equation}\ninspired the name tidal charge \\cite{Dad-etal:2000:}.\n \\par\n\n\n\n\\section{\\label{sec:NullGeo}Null geodesics in Kerr spacetime with a tidal charge}\n\n\\subsection{Geometry}\n\nFollowing the work of \\cite{Ali-Gum:2005:}, and using the standard Boyer-Lindquist coordinates ($t$, $r$, $\\theta$, $\\varphi$), we can write the line element of the Kerr black-hole (or naked singularity) spacetime on the three-brane in the form \n\\begin{eqnarray}\n\t\\mathrm{d} s^2 &=& -\\left(1-\\frac{2Mr - b}{\\Sigma}\\right)\\mathrm{d} t^2 + \\frac{\\Sigma}{\\Delta}\\mathrm{d} r^2 + \\Sigma \\mathrm{d} \\theta^2 + \\frac{A}{\\Sigma}\\sin^2\\theta\\,\\mathrm{d}\\varphi^2 \\nonumber\\\\\n\t&&- \\frac{2a(2Mr-b)}{\\Sigma}\\sin^2\\theta\\,\\mathrm{d} t\\,\\mathrm{d}\\varphi,\\label{eq1}\n\\end{eqnarray}\nwhere \n\n\\begin{eqnarray}\n\t\\Sigma &=& r^2 + a^2\\cos^2\\theta\\label{eq2}\\\\\n\t\\Delta &=& r^2 - 2Mr + a^2 +b\\label{eq3}\\\\\n\tA &=& (r^2 + a^2)^2 - a^2\\Delta\\sin^2\\theta\\label{eq4}.\n\\end{eqnarray}\nHere $M$ is the mass parameter, $a=J\/M$ is the specific angular momentum, and the braneworld parameter $b$ is the \\emph{tidal charge} representing the imprint of non-local gravitational effects from the bulk space. The metric (\\ref{eq1}) has the \nsame form as the Kerr-Newman metric, with the squared electric charge $Q^2$ replaced by the tidal charge $b$. \nThe stress tensor on the brane $E_{\\mu\\nu}$ takes the form \\cite{Ali-Gum:2005:}\n\n\\begin{eqnarray}\n E_t^{\\phantom{t}t}&=&-E_\\varphi^{\\phantom{\\varphi}\\varphi}=-\\frac{b}{\\Sigma^3}[\\Sigma-2(r^2+a^2)],\\\\\n E_r^{\\phantom{r}r}&=&-E_\\theta^{\\phantom{\\theta}\\theta}=-\\frac{b}{\\Sigma^2},\\\\\n E_\\varphi^{\\phantom{\\varphi}t}&=&-(r^2+a^2)\\sin^2\\theta,\\\\ \nE_t^{\\phantom{t}\\varphi}&=&-\\frac{2ba}{\\Sigma^3}(r^2+a^2)\\sin^2\\theta\n\\end{eqnarray}\nwhich is fully analogous ($b\\rightarrow Q^2$) to the components of the energy-momentum tensor for Kerr-Newman spacetimes in Einstein's general relativity \\cite{Ali-Gum:2005:}.\n\nThe roots of $\\Delta = 0$ identify the type of braneworld Kerr spacetime. There are two possibilities: a black hole or a naked singularity. By introducing $a^2\/M^2\\rightarrow a^2$, $b\/M^2\\rightarrow b$, $r_+\/M\\rightarrow r_+$, or putting $M=1$, we write the roots of $\\Delta = 0$ in the form\n\n\\begin{equation}\n r_+ = 1+\\sqrt{1-a^2-b},\\quad\\textrm{(outer horizon)}\\label{horeq2}\n\\end{equation}\nand\n\\begin{equation}\n r_- = 1-\\sqrt{1-a^2-b},\\quad\\textrm{(inner horizon)}.\\label{horeq3}\n\\end{equation}\nThe metric given by the line element (\\ref{eq1}) determines the geometry of a rotating black hole in the braneworld universe if\n\n\\begin{equation}\n \t1\\ge a^2+b.\\label{horeq1}\n\\end{equation}\nThe strong inequality refers to the case of two horizons $r_+$ and $r_-$. For extreme black holes ($1=a^2+b$) the horizons coincide, $r_+ = r_- = 1$.
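\n\nThe horizon structure is easily explored numerically; the following minimal fragment (in Python, with the $M=1$ units used above) evaluates the two horizon radii and the black-hole condition for given $a$ and $b$.\n\\begin{verbatim}\nimport math\n\ndef horizons(a, b):\n    # roots of Delta = r^2 - 2r + a^2 + b = 0 (units of M)\n    disc = 1.0 - a**2 - b\n    if disc < 0:\n        return None            # a^2 + b > 1: no horizon, naked singularity\n    return 1.0 + math.sqrt(disc), 1.0 - math.sqrt(disc)\n\nprint(horizons(0.9, -1.0))     # black hole with a negative tidal charge\n\\end{verbatim}\nFor the representative values $a=0.9$ and $b=-1.0$ used below, this gives $r_+\\approx 2.09$ and $r_-\\approx -0.09$, illustrating that the inner horizon can lie at negative $r$ when $b<0$.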
For extreme black holes ($1=a^2+b$) the horizons coincide, $r_+ = r_- = 1$.\n\n\\begin{figure}[!th]\n\n\\includegraphics[width=10cm]{fig1b}\n\n\\caption{\\label{fig_1}The plot of the inner horizon radius $r_-$ as a function of the tidal charge parameter $b$ for three representative values of the rotational parameter, $a^2=0.5$, $a^2=1.0$ and $a^2=1.5$.}\n\\end{figure}\nIt is clear that for $b\\ge 0$ the loci of the inner horizon $r_-$ are always positive. But for $b<0$, the loci of the inner horizon can also be at negative $r$, as illustrated in Figure \\ref{fig_1}. \n\\par\nNotice that $a^2>1$ is allowed neither for standard black holes nor for $b>0$ \\cite{MTW}, but such a possibility appears for $b<0$. The rotational parameter of extreme black holes is given by $a^2=1-b$. The case $a^2+b>1$ corresponds to naked-singularity spacetimes.\n\nThe reality conditions $(\\mathrm{d} r\/\\mathrm{d} w')^2 \\ge 0$ and $(\\mathrm{d}\\theta\/\\mathrm{d} w')^2 \\ge 0$ lead to the restrictions on the impact parameter $\\mathcal{L}$\n\n\\begin{equation}\n\t\\mathcal{L}_{min} \\leq \\mathcal{L} \\leq \\mathcal{L}_{max},\\label{eq12}\n\\end{equation}\nwhere\n\n\\begin{equation}\n\t\\mathcal{L}_{max} \\equiv \\frac{(a\\lambda -2r +b)^2}{\\Delta}+ r^2+2r-b,\\label{eq13}\n\\end{equation}\nand\n\\begin{equation}\n\t\\mathcal{L}_{min}\\equiv\\left\\{ \\begin{array}{lcr} \n\t\t\t\t\\lambda^2 & \\textrm{for} & |\\lambda|\\geq a,\\\\\n\t\t\t\t2a|\\lambda|-a^2 & \\textrm{for} & |\\lambda|\\leq a. \n\t\t\t \\end{array}\\right.\\label{eq14}\n\\end{equation}\nThe upper (lower) constraint, $\\mathcal{L}_{max}$ ($\\mathcal{L}_{min}$), comes from the radial-motion (latitudinal-motion) reality condition. The properties of the photon motion are determined by the behaviour of the surface $\\mathcal{L}_{max}(r;\\lambda,a,b)$, as given by (\\ref{eq13}). The extrema of the surface $\\mathcal{L}_{max}$ (giving spherical photon orbits) are determined by\n\n\\begin{eqnarray}\n\t\\lambda=\\lambda_+ &\\equiv& \\frac{r^2+a^2}{a},\\label{eq15}\\\\\n\t\\lambda=\\lambda_- &\\equiv& \\frac{r^2-b r - a^2 - r\\Delta}{a(r-1)}.\\label{eq16}\n\\end{eqnarray}\nThe values of $\\mathcal{L}_{max}$ at these extreme points are given by\n\n\\begin{eqnarray}\n\t\\mathcal{L}_{max}(\\lambda_{+})\\equiv\\mathcal{L}_+ &=& 2r^2+a^2,\\label{eq17}\\\\\n\t\\mathcal{L}_{max}(\\lambda_{-})\\equiv\\mathcal{L}_- &=&\\frac{2r(r^3-3r+4b)+a^2(r+1)^2}{(r-1)^2}.\\label{eq18}\n\\end{eqnarray}\nThe character of the extrema follows from the sign of $\\partial^2\\mathcal{L}_{max}\/\\partial r^2$. One finds that\n\n\\begin{eqnarray}\n\\frac{\\partial^2 \\mathcal{L}_{max}}{\\partial r^2} &=& \\frac{8r^2}{\\Delta},\\quad\\textrm{for}\\quad \\lambda = \\lambda_+,\\label{eq19}\\\\\n\\frac{\\partial^2 \\mathcal{L}_{max}}{\\partial r^2} &=&\\frac{8r^2}{\\Delta} - \\frac{8r}{(r-1)^2},\\quad\\textrm{for}\\quad \\lambda=\\lambda_-.\\label{eq20}\n\\end{eqnarray}\nClearly, there are only minima of $\\mathcal{L}_{max}$ along $\\lambda=\\lambda_{+}$, corresponding to unstable spherical orbits.\n\\begin{figure}[ht]\n\n\\begin{tabular}{cc}\n \\includegraphics[width=6cm]{fig2a}&\\includegraphics[width=6cm]{fig2b} \n\\end{tabular}\n\n\\caption{\\label{fig2_a_b}Left: classification of the Kerr spacetime in the braneworld universe according to the values of $a^2+b$, $b$ and $n_{ext}$ (the number of local extrema of the curves $\\tilde\\lambda_\\pm$, which is also the number of circular photon orbits in the equatorial plane). 
\nThe classification regions are: I) for $a^2+b\\leq 1$ and $n_{ext}=2$, II) for $a^2+b\\leq 1$ and $n_{ext}=4$, III) for $a^2+b>1$ and $b<1$ and $n_{ext}=2$, IV) for $a^2+b>1$ and $b>1$ and $n_{ext}=2$, V) for $a^2+b>1$ and $n_{ext}=0$, VI) for $a^2+b>1$ and $b<1$ and $n_{ext}=4$, VII) for $a^2+b>1$ and $b>1$ and $n_{ext}=4$. \nRight: zoom of the area in the dashed rectangle of the left plot, to cover regions VI and VII.} \n\\end{figure}\n\n\nFurther, we have to determine where the restrictions given by the latitudinal motion $\\mathcal{L}_{min}$ meet the restrictions on the radial motion $\\mathcal{L}_{max}$. We find that $\\mathcal{L}_{max}=\\lambda^2$ (for $|\\lambda|\\ge a$) is fulfilled where\n\n\\begin{equation}\n \\lambda=\\tilde\\lambda_\\pm\\equiv\\frac{a(b-2r)\\pm r^2\\sqrt{\\Delta}}{r^2-2r+b},\\label{eq23}\n\\end{equation}\nwhile $\\mathcal{L}_{max}= 2a|\\lambda| - a^2$ (for $|\\lambda|\\le a$) is fulfilled where $\\lambda=\\bar\\lambda$. The braneworld Kerr spacetimes can then be classified according to the relations $a^2+b^{\\phantom{i}<}_{\\phantom{i}>} 1$, $b^{\\phantom{i}<}_{\\phantom{i}>} 1$ and the number $n_{ext}$ of local extrema of the curves $\\tilde\\lambda_\\pm$. The classification is represented in Figure \\ref{fig2_a_b}. There are two different classes of the black-hole spacetimes, differing in the presence of photon circular orbits beneath the inner horizon. However, in the astrophysically relevant region outside the outer horizon, both classes are of the same character, having two unstable equatorial photon circular orbits, one corotating (at $r_{ph1}$) and the other counter-rotating (at $r_{ph2}>r_{ph1}$). The tidal charge $b$ introduces no qualitatively new feature into the behaviour of photon motion in the Kerr spacetimes, but the quantitative impact of $b<0$ with high magnitude is quite relevant, as shown in the next sections. All the braneworld Kerr black holes with tidal charge $b<0$ belong to the class II discussed in the case of standard Kerr-Newman spacetimes \\cite{Stu:1981b:}. We illustrate in Figures \\ref{fig3}-\\ref{fig5} the functions $\\lambda_\\pm$, $\\tilde\\lambda_\\pm$ and $\\bar\\lambda$ for such a black-hole spacetime with parameters $a=0.9$ and $b=-1.0$. In this case, typical for braneworld Kerr black holes with $b<0$, there exist ten significant values of $\\lambda$, as given in Figures \\ref{fig3} - \\ref{fig5}.\n\n\\begin{figure}[!ht]\n\n \\includegraphics[width=12.0cm]{fig3}\n\n\\caption{\\label{fig3}The graphs of the $\\lambda_\\pm$, $\\tilde\\lambda_\\pm$ and $\\bar\\lambda$ functions are plotted for representative values of the parameters $a=0.9$ and $b=-1.0$. The two dashed rectangle areas labeled with numbers $1$ and $2$ are zoomed in the following figures. The horizontal gray dashed lines represent special values of the impact parameter $\\lambda$, denoted according to the text as $\\lambda_A$...$\\lambda_J$.}\n\\end{figure}\nFor each interval of $\\lambda$ as determined by the sequence of $\\lambda_A$ - $\\lambda_J$ introduced in Figure \\ref{fig3}, there exists a characteristic type of behaviour of the restricting ``radial'' function $\\mathcal{L}_{max}$ and its relation to the ``latitudinal'' restricting function $\\mathcal{L}_{min}$. The details can be found in \\cite{Stu:1981b:} and will not be repeated here.\n\n\\begin{figure}[ht]\n\n\\begin{tabular}{cc}\n \\includegraphics[width=6cm]{fig4a}&\\includegraphics[width=6cm]{fig4b}\n\\end{tabular}\n\n\\caption{\\label{fig4}Left: zoom of the dashed area labelled $1$ in the previous figure. Right: zoom of the dashed area labelled $2$ in the previous figure. 
The dashed rectangle area here is zoomed in the next figure.}\n\\end{figure}\n\n\\begin{figure}[ht]\n\n \\includegraphics[width=6.2cm]{fig5}\n\n\\caption{\\label{fig5}The zoom of the dashed rectangle area in previous figure.}\n\\end{figure}\n\nThe allowed values of the impact parameter $\\mathcal{L}$ lie between the limiting functions $\\mathcal{L}_{min}$ and $\\mathcal{L}_{max}$. If the minimum $\\mathcal{L}_{max}^{min}\\equiv\\mathcal{L}_{max}(r_{min},\\lambda_0)$ of the limiting function $\\mathcal{L}_{max}$ is less than the value of the limiting function $\\mathcal{L}_{min}$, an incoming photon ($k^r < 0$) travelling from infinity will return back for all values of $\\mathcal{L}_0\\in[\\mathcal{L}_{min};\\mathcal{L}_{max}]$. If $\\mathcal{L}_{max}^{min}>\\mathcal{L}_{min}$, \nthe incoming photon ($k^r < 0$) travelling from infinity returns back if its impact parameter $\\mathcal{L}_0$ \nsatisfies the condition $\\mathcal{L}_{0}\\ge\\mathcal{L}_{max}^{min}$ and is captured by the black hole \nif $\\mathcal{L}_0<\\mathcal{L}^{min}_{max}$. \nThe minimum $\\mathcal{L}_{max}^{min}$ determines (with the particular value of $\\lambda$) a photon spherical orbit, \ni.e., a sphere where photons move with $r=const$ but with varying latitude $\\theta$ (and, of course, varying $\\varphi$). \nWhen the condition $\\mathcal{L}_0 = \\mathcal{L}_{min}$ is satisfied simultaneously, the spherical photon orbit is transformed \nto an equatorial photon circular orbit. Photons with $\\mathcal{L}_0=\\mathcal{L}_{max}^{min}$ coming from distant regions or \nregions close to the black hole horizon will wind up around the photon sphere. \n \n\\clearpage\n\n\n\n\\section{\\label{sec:LEC}Light escape cones}\nThe optical phenomena related to accretion processes in the field of rotating black holes could be efficiently studied by using the notion of light escape cones of local observers (sources) that determine which portion of radiation emitted by a source could escape to infinity and, complementary, which portion is trapped by the black hole \\cite{SSJ:RAGTime:2005:Proceedings}. Here we focus our attention to four families of observers (sources) that are of direct physical relevance.\n\n\n\\subsection{Local frames of stationary and free-falling observers}\nWe consider three families of stationary frames, namely $LNRF$ (Locally Nonrotatig Frame), $SF$ (Static Frame) and $GF_\\pm$(Circular Geodesic Frame) and one non-stationary frame, namely $RFF$ (Radially Falling Frame). \nThe $LNRF$ are of highest physical importance since the physical phenomena take the simplest form when expressed in such frames, because the rotational spacetime effects are maximally suppressed there \\cite{Bardeen:1973:,MTW}. The $GF_\\pm$ are directly related to Keplerian accretion discs in the equatorial plane of the spacetime, both corotating and counterrotating, while $RFF$ are related to free-falling spherical accretion. The $SF$ are fixed relative to distant observers. The $GF_\\pm$ and $RFF$ are geodetical frames, while $SF$ and $LNRF$ are generally accelerated frames.\n\nThe radial and latitudinal 1-forms of the three stationary frame tetrads are common for all three stationary cases and read\n\n\\begin{eqnarray}\n\t\\omega^{(r)}&=&\\left\\{0,\\sqrt{\\Sigma\/\\Delta},0,0 \\right\\},\\label{LC9}\\\\\n\t\\omega^{(\\theta)}&=&\\left\\{0,0,\\sqrt{\\Sigma},0 \\right\\}.\\label{LC10}\n\\end{eqnarray}\n$LNRF$ correspond to observers with $\\Phi=0$ (zero angular momentum observers). 
Their time and azimuthal 1-forms read\n\n\\begin{eqnarray}\n\t\\omega^{(t)}&=&\\left\\{\\sqrt{\\frac{\\Delta\\Sigma}{A}},0,0,0 \\right\\},\\label{LC11}\\\\\n\t\\omega^{(\\varphi)}&=&\\left\\{-\\Omega_{LNRF}\\sqrt{\\frac{A}{\\Sigma}}\\sin\\theta,0,0,\\sqrt{\\frac{A}{\\Sigma}}\\sin\\theta\\right\\},\\label{LC12}\n\\end{eqnarray}\nwhere \n\n\\begin{equation}\n\t\\Omega_{LNRF}=\\frac{a(2Mr-b)}{A}\\label{LC13}\n\\end{equation}\nis the angular velocity of the $LNRF$ as seen by observers at infinity. \n\\par\nThe tetrad of the $SF$, corresponding to observers with $\\Omega=0$, i.e., static relative to observers at infinity, is given by the formulae\n\n\\begin{eqnarray}\n\t\\omega^{(t)}&=&\\left\\{ \\sqrt{1-\\frac{2r-b}{\\Sigma}},0,0,\\frac{a(2r-b)\\sin^2\\theta}{\\sqrt{\\Sigma^2-(2r-b)\\Sigma}} \\right\\},\\\\\n\t\\omega^{(\\varphi)}&=&\\left\\{ 0,0,0,\\sqrt{\\frac{\\Delta\\Sigma}{\\Sigma-(2r-b)}}\\sin\\theta \\right\\}.\n\\end{eqnarray}\n\nThe $GF_\\pm$ observers move along the $\\varphi$-direction in the equatorial plane with velocity $V_{GF\\pm}$ ($+$ corotating, $-$ counterrotating) relative to the $LNRF$ and with angular velocity $\\Omega$ relative to the static observers at infinity given by \\cite{Stu-Kot:2008}\n\\begin{equation}\n\\Omega_\\pm=\\pm\\frac{\\sqrt{r-b}}{r^2 \\pm a\\sqrt{r-b}}. \\label{ang_vel_gf}\n\\end{equation}\n\nThe velocity $V_{GF\\pm}$ is given by\n\n\\begin{equation}\n\tV_{GF\\pm}=\\pm\\frac{(r^2+a^2)Y\\mp a(2r-b)}{\\sqrt{\\Delta}(r^2\\pm aY)},\\label{VGF}\n\\end{equation}\nwhere $Y=\\sqrt{r-b}$. The standard Lorentz transformation of the $LNRF$ tetrad gives the tetrad of $GF_\\pm$ in the form\n\\begin{eqnarray}\n\t\\omega^{(t)}_\\pm&=&\\left\\{ \\frac{r^2-2r+b\\pm a Y}{Z_\\pm},0,0,\\mp\\frac{(r^2+a^2)Y\\mp a(2r-b)}{Z_\\pm} \\right\\},\\\\\n\\omega^{(\\varphi)}_\\pm&=&\\left\\{\\mp \\frac{\\sqrt{\\Delta}Y}{Z_\\pm},0,0,\\frac{\\sqrt{\\Delta}(r^2\\pm a Y)}{Z_\\pm} \\right\\},\n\\end{eqnarray}\nwhere \n\n\\begin{equation}\n\tZ_\\pm = r\\sqrt{r^2-3r+2b\\pm2aY}.\n\\end{equation}\nNote that the $GF_\\pm$ family is restricted to the equatorial plane, while the $LNRF$ are defined at any $\\theta$.\n\nThe $RFF$ observers have velocity\n\n\\begin{equation}\n\tV_{RFF}=\\{V^{(r)},\\,V^{(\\theta)},\\,V^{(\\varphi)}\\}\n\\end{equation} \nas measured in the $LNRF$. The radially free-falling (or free-escaping) observers starting (finishing) at infinity move with $\\theta = const$. Using the results of \\cite{Stu-Bic-Bal:1999:}, we find the velocity components of the free-falling frames relative to the $LNRF$ \n\n\\begin{eqnarray}\n\tV^{(r)}&=&\\pm\\sqrt{1-\\frac{\\Sigma\\Delta}{A}},\\\\\n\tV^{(\\theta)}&=&0,\\\\\n\tV^{(\\varphi)}&=& 0.\n\\end{eqnarray}\nClearly, the free-falling (free-escaping) observers move only radially in the $LNRF$, in analogy to particles radially moving in the static frames of the Schwarzschild spacetimes.\n For the radially free-falling sources, the tetrad components $\\omega^{(\\theta)}$ and $\\omega^{(\\varphi)}$ coincide with those of the LNRF tetrad, while $\\omega^{(t)}$ and $\\omega^{(r)}$ are transformed. 
The local Lorentz transformation of the $LNRF$ to the $RFF_\\pm$ tetrad yields\n\n\\begin{eqnarray}\n\\omega_\\pm^{(t)}&=&\\left\\{ \\gamma\\frac{\\Delta\\Sigma}{A}, \\mp\\sqrt{\\frac{\\Sigma}{\\Delta}}V,0,0 \\right\\},\\\\\n\\omega_\\pm^{(r)}&=&\\left\\{\\mp\\gamma\\sqrt{\\frac{\\Delta\\Sigma}{A}}V,\\sqrt{\\frac{\\Sigma}{\\Delta}}\\gamma,0,0\\right\\},\\\\\n\\omega_\\pm^{(\\theta)}&=&\\{0,0,\\sqrt{\\Sigma},0\\},\\\\\n\\omega_\\pm^{(\\varphi)}&=&\\left\\{-\\Omega_{LNRF}\\sqrt{\\frac{A}{\\Sigma}}\\sin\\theta,0,0,\\sqrt{\\frac{A}{\\Sigma}}\\sin\\theta \\right\\}.\n\\end{eqnarray}\n\n\n\\subsection{Construction of escape cones}\n\n\\begin{figure}[ht]\n\t\\includegraphics[width=10cm]{fig8}\n\\caption{\\label{fig8}Definition of directional angles $\\alpha_0$, $\\beta_0$ and $\\gamma_0$ in a local frame. Vectors $\\vec{e}_r$, $\\vec{e}_\\theta$, $\\vec{e}_\\varphi$ are the basic tetrad vectors. Position of the observer (source) is given by the coordinates $(r_0,\\theta_0)$. Vector $\\vec{k}$ represents a photon as observed by the observer in the given tetrad and vector $\\vec{k}^\\prime$ is its projection into the plane ( $\\vec{e}_\\theta$, $\\vec{e}_\\varphi$). }\n\\end{figure}\n\n For each direction of emission in the local frame of a source, there is a corresponding pair of values of the impact parameters $\\lambda$ and $\\mathcal{L}$ which can be related to the directional cosines of the photon trajectory in the local frame at the position of the source. Of course, the analysis of the turning points of the radial motion of photons, presented in the previous section, is crucial in determining the local escape cones as the boundary of the escape cone is given by directional angles related to spherical photon orbits.\n\nProjection of a photon 4-momentum $\\vec{k}$ onto the local tetrad of an observer is given by the formulae\n\n\\begin{eqnarray}\nk^{(t)}&=&-k_{(t)}=1,\\label{LC1}\\\\\nk^{(r)}&=&k_{(r)}=\\cos\\alpha_0,\\label{LC2}\\\\\nk^{(\\theta)}&=&k_{(\\theta)}=\\sin\\alpha_0\\cos\\beta_0,\\label{LC3}\\\\\nk^{(\\varphi)}&=&k_{(\\varphi)}=\\sin\\alpha_0\\sin\\beta_0,\\label{LC4}\n\\end{eqnarray} \nwhere $\\alpha_0$, $\\beta_0$ are directional angles of the photon in the local\nframe (see Figure \\ref{fig8}) and $\\cos\\gamma_0=\\sin\\alpha_0\\sin\\beta_0$. \nIn terms of the local tetrad components of the photon 4-momentum and the related directional angles, the conserved quantities, namely, the azimutal momentum $\\Phi$, energy $E$ and $K$ read\n\n\\begin{eqnarray}\n\t\\Phi&=&k_\\varphi=-\\omega^{(t)}_{\\phantom{(t)}\\varphi}k^{(t)} + \\omega^{(r)}_{\\phantom{(r)}\\varphi}k^{(r)}+\\omega^{(\\theta)}_{\\phantom{(\\theta)}\\varphi}k^{(\\theta)}+\\omega^{(\\varphi)}_{\\phantom{(\\varphi)}\\varphi}k^{(\\varphi)},\\label{LC6}\\\\\n\tE&=&-k_t=\\omega^{(t)}_{\\phantom{(t)}t}k^{(t)} - \\omega^{(r)}_{\\phantom{(r)}t}k^{(r)}-\\omega^{(\\theta)}_{\\phantom{(\\theta)}t}k^{(\\theta)}-\\omega^{(t)}_{\\phantom{(\\varphi)}\\varphi}k^{(\\varphi)},\\label{LC7}\\\\\n K&=&\\frac{1}{\\Delta}\\left\\{ [E(r^2+a^2)-a\\Phi]^2-(\\Sigma k^r)^2\\right\\}.\\label{LC8}\n\\end{eqnarray}\nThe impact parameters $\\lambda$ and $\\mathcal{L}$ defined by relations (\\ref{eq9}) and (\\ref{eq10}) are thus fully determined by any double, $D$, of angles from the set $M=[\\alpha_0,\\beta_0,\\gamma_0]$.\n\n Having defined the source frame, we can construct light escape cones assuming fixed coordinates of the source $r_0$, $\\theta_0$. 
Their construction proceedes in the following steps:\n\n\\begin{itemize}\n\\item for given $D$, say $D=[\\alpha_0,\\beta_0]$, we calculate $\\lambda=\\lambda(\\alpha_0,\\beta_0)$,\n\\item $\\lambda$ determines the behaviour of $\\mathcal{L}_{max}=\\mathcal{L}_{max}(r;\\lambda)$,\n\\item from the analysis presented in the previous section we calculate minimum of $\\mathcal{L}_{max}$, which reads $\\mathcal{L}_{max}^{min}=\\mathcal{L}_{max}(r_{min};\\lambda)$,\n\\item we search for such a double $D$ which satisfies equation $\\mathcal{L}_0(\\alpha_0,\\beta_0)=\\mathcal{L}_{max}(r_{min};\\lambda)$.\n\\end{itemize}\nHere, we present in detail the construction of light escape cones in particular case of the $LNRF$. The procedure is analogous for the other stationary frames and simply modified for the free-falling frames, being radius dependent.\n\n\\begin{figure}[ht]\n\\begin{tabular}{ll}\n\\includegraphics[width=6cm]{fig9a} & \\includegraphics[width=6cm]{fig9b}\n\\end{tabular}\n\\caption{\\label{fig9_a_b}Left. The functions $\\mathcal{L}_{max}$ and $\\mathcal{L}_{min}=\\lambda_0^2$ are plotted together with representative constant functions $\\mathcal{L}_1$ and $\\mathcal{L}_2$ to demonstrate the construction of the photon escape cone. Right. The intersections of $\\mathcal{L}_{max}(\\gamma_0)$ with $\\lambda^2(\\gamma_0)$ give the interval of relevant values of $\\gamma_0\\in[\\gamma_{min};\\gamma_{max}]$.}\n\\end{figure}\n\n\n\n\\par\nThe impact parameter $\\lambda$ expressed in terms of the angle $\\gamma_0$, related to the $LNRF$, reads\n\n\\begin{equation}\n\t\\lambda_0=\\frac{1}{\\Omega_{LNRF0}+\\frac{\\Sigma_0\\sqrt{\\Delta_0}}{A_0\\sin\\theta_0\\cos\\gamma_0}},\n\\end{equation}\nwhere index '$0$' refers to the frame with coordinates $[r_0,\\theta_0]$. The minimum of $\\mathcal{L}_{max}$ is located at\n\n\\begin{equation}\n\tr_{min}=\\left\\{ \\begin{array}{lcr}\n\t\t\t\\sqrt{a\\lambda - a^2} & \\textrm{for} & \\lambda\\geq\\lambda_G = a\\\\\n\t\t\t1-\\frac{k_1}{k_2}+\\frac{k_2}{3} & \\textrm{for} & \\lambda<\\lambda_G = a\n\t\t\t\\end{array}\\right.\\label{eq_rmin}\n\\end{equation} \nwhere \n\n\\begin{eqnarray}\n\tk_1&=&a^2+2b+a\\lambda-3,\\\\\n\tk_2&=&\\left\\{ 27(1-a^2-b)+2\\sqrt{3}\\sqrt{27(1-a^2-b)^2+k_1^3}\\right\\}^{1\/3}.\n\\end{eqnarray}\nThe relevant values of $\\mathcal{L}$ lie between $\\mathcal{L}_{max}$ and\n$\\mathcal{L}_{min}$ determined by Eqs (\\ref{eq13}) and (\\ref{eq14}). The\nintersections of functions $\\mathcal{L}_{max}=\\mathcal{L}_{max}(\\gamma_0)$ and\n$\\mathcal{L}_{min}(\\gamma_0)$ give the relevant interval of angles\n$\\gamma\\in[\\gamma_{min},\\gamma_{max}]$ (see Figure \\ref{fig9_a_b}). For each $\\gamma$ from $[\\gamma_{min},\\gamma_{max}]$ we calculate minimal value of the photon impact parameter $\\mathcal{L}$ for which the photon reaches the turning point $r_{min}$ and escapes to infinity. This minimal value is the minimum of $\\mathcal{L}_{max}$ which is located at $r_{min}$, eg. 
$\\mathcal{L}_{max}=\\mathcal{L}_{max}(r_{min};\\lambda_0(\\gamma_0),a,b)$, where $r_{min}$ is given by (\\ref{eq_rmin}).\nNow we can calculate the value of $\\alpha_0$ using equation\n\n\\begin{equation}\n\t\\cos\\alpha_0=\\frac{k^{(r)}}{k^{(t)}}=\\frac{\\omega^{(r)}_{LNRF\\mu}k^\\mu}{\\omega^{(t)}_{LNRF\\mu}k^\\mu}.\n\\end{equation} \nWe arrive to the formula\n\n\\begin{equation}\n\t\\cos\\alpha_0=\\pm\\sqrt{A_0}\\frac{\\sqrt{(r_0^2+a^2-a\\lambda_0)^2-\\Delta_0(\\mathcal{L}_{max}^{min}-2a\\lambda_0+a^2)}}{-a(a\\sin^2\\theta_0-\\lambda_0)\\Delta_0+(r_0^2+a^2)(r_0^2+a^2-a\\lambda_0)},\n\\end{equation}\nwhere $A_0=A(r_0,\\theta_0)$, $\\Delta_0=\\Delta(r_0)$ and $\\mathcal{L}_{max}^{min}=\\mathcal{L}_{max}(r_{min};\\lambda_0,a,b)$. The angle $\\beta_0$ can be calculated from the formula (\\ref{LC4}).\nIn this way we obtain angles from the arc $\\beta_0\\in\\langle -\\pi\/2; \\pi\/2\\rangle$. The remaining arc $\\beta_0\\in\\langle \\pi\/2; 3\\pi\/2\\rangle$ can be obtained by turning the arc $\\beta_0\\in\\langle -\\pi\/2; \\pi\/2\\rangle$ around the symmetry axis determined by angles $\\beta_0=-\\pi\/2$ and $\\beta_0=\\pi\/2$. This procedure can be done because photons released under angles $\\beta_0$ and $\\pi-\\beta_0$ have the same constants of motion. \nClearly, for sources under the radius corresponding to the corotating\nequatorial photon circular orbit, only outward directed photons with no\nturning point of the $r$-motion can escape. With radius of the source\napproaching the event horizon ($r_0\\rightarrow r_+$), the escape cone shrinks\nto infinitesimal extension, except the case of extreme black hole \\cite{Bardeen:1973:}. For the other frames considered here, the procedure of\nthe related light escape cone construction can be directly repeated, but with\nthe relevant tetrad 1-form components being used in the procedure.\n\nIn order to reflect properly the effect of the tidal charge $b$ on the escape cone structure, we shall give the cones for black hole sequences of two kind: first we keep the spin $a$ fixed and change $b$, second we keep fixed \"distance\" to the extreme black hole states, i.e., $a^2+b$ is fixed, and both $a$ and $b$ are changed. The positive tidal charges have tendency to slightly increase the asymmetry of the cones as compared with $b=0$ case, keeping its character similar to the case of Kerr black holes (see next section). Therefore, we focus our attention to the influence of negative tidal charges. \n\n\\begin{figure}[ht]\n \\begin{tabular}{ccc}\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_a0n998_b0_re6_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a0n998_bm1_re6_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a9981_bm3_re6M_th0_ok}\\\\\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_b0_r020_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_bm1_r020_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a9981_bm3_re20M_th0_ok}\n \\end{tabular}\n\\caption{Light escape cones as seen by $LNRF$ in the vicinity of the braneworld kerr black hole. \nTop set of images is plotted for radial coordinate of emitter $r_e=6M$ and bottom set for $r_e=20M$.\nThe rotational parameter $a=0.9981$ is fixed and the representative values of the braneworld parameter $b$ are $0$ (left), $-1$ (middle) and $-3$ (right). The shaded area represents photons captured by black hole. 
}\\label{LNRF_fixed_a_on_b}\n\\end{figure}\n\n\n\n\\begin{figure}[ht]\n \\begin{tabular}{ccc}\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_a21_b0_re6M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a22_bm1_re6M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a24_bm3_re6M_th0_ok}\\\\\n \\includegraphics[width=4.0cm]{escape_cones_lnrf_a21_b0_re20M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a22_bm1_re20M_th0_ok}& \\includegraphics[width=4.0cm]{escape_cones_lnrf_a24_bm3_re20M_th0_ok}\n \\end{tabular}\n\\caption{Light escape cones as seen by $LNRF$ in the vicinity of the extreme braneworld kerr black hole. Top set of images is plotted for radial coordinate of emitter $r_e=6M$ and bottom set for $r_e=20M$. The representative rotational and braneworld parameters [$a^2$,$b$] are [$1.0$,$0.0$](left), [$2.0$,$-1.0$](middle) and [$4.0$,$-3.0$](right). The shaded area represents photons captured by black hole. }\\label{LNRF_extreme_on_b}\n\\end{figure}\n\n\nBehaviour of the $LNRF$ escape cones in dependence on the braneworld parameter\n$b$ (and the spin $a$) is represented in Figures \\ref{LNRF_fixed_a_on_b} and \\ref{LNRF_extreme_on_b}.\nThe complementary trapped cones, corresponding to photons captured by the black hole, are shaded. \n\nAt a fixed radius expressed in units of $M$ the extension of the trapped cone grows with descending of $b$ to higher negative values and fixed spin $a$ and mass $M$, demonstrating thus the growing gravitational pull of the black hole due to growing magnitude of the negative braneworld parameter. The same statement holds also in the case of extreme Kerr black holes, when $a$ grows and $b$ descends, while $M$ is fixed. Clearly, the positive braneworld parameters have tendency to increase the asymmetry of the cones, while the negative ones symmetrize the escape cones with growing of $|b|$. On the other hand, the asymmetry of the escape cone grows with descending of $b$ for extreme black holes (Figure \\ref{LNRF_extreme_on_b}).\n\n\n\t\\begin{figure}[ht]\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_b0_re1n24M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm1_re3n91M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm3_re6n27M_ok}\\\\\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_b0_re10M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm1_re10M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a0n9981_bm3_re10M_ok}\\\\\n\t\t\t\\includegraphics[width=4cm]{esape_cones_gf_a0n9981_b0_re10rh_ok}&\n\t\t\t\\includegraphics[width=4cm]{esape_cones_gf_a0n9981_bm1_re10rh_ok}&\n\t\t\t\\includegraphics[width=4cm]{esape_cones_gf_a0n9981_bm3_re10rh_ok}\n\t\t\\end{tabular}\n\t\t\\caption{Escape cones of GF+ observers. Top images are plotted for observer (emitter) at $r=r_{ms}$, middle images $r=10M$ and bottom images for \n\t\t$r=10\\cdot r_h$. The value of $a=0.9981$ is kept fixed. The representative values of $b$ are (from left to right) $0.0$, $-1.0$ and $-3$. 
}\\label{GF_escape_cones}\n\t\\end{figure}\t\n\n\t\\begin{table}[ht]\n\t\t\\tbl{Table of relevant values of $r_{ms}$ and $r_{h}$ used in plots on Figs \\ref{GF_escape_cones} and \\ref{GF_escape_cones_extreme}.}\n\t\t{\\begin{tabular}{@{}cccc@{}} \n\t\t\\toprule\n\t\t$(a^2, b)$ & (0.9981,0.0) & (0.9981,-1.0) & (0.9981,-3.0)\\\\ \n\t\t\\colrule\n\t\t$r_{ms}$ & 1.24M & 3.91M & 6.27M\\\\\n\t\t\\colrule\n\t\t$r_h$ & 1.062M & 2.002M & 2.73M\\\\\n\t\t\\botrule\n\t\t\\end{tabular}\\label{tabulka1}}\n\t\\end{table}\n\n\\begin{figure}[ht]\n\t\t\\begin{tabular}{ccc}\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a20n9999_b0_re1n062M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a21n9999_bm1_re1n06M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a23n9999_bm3_re1n06M_ok}\\\\\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a20n9999_b0_re10M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a21n9999_bm1_re10M_ok}&\n\t\t\t\\includegraphics[width=4cm]{escape_cones_gf_a23n9999_bm3_re10M_ok}\n\t\t\n\t\t\n\t\t\n\t\t\\end{tabular}\n\t\t\\caption{Escape cones of GF+ observers. Top images are plotted for observer (emitter) at $r=r_{ms}$ and bottom images for $r=10M$. The value of $a^2+b=0.9999$ is kept fixed. The representative values of $(a^2; b)$ are (from left to right) $(0.9999;0.0)$, $(1.9999;-1.0)$ and $(3.9999;-3)$. }\\label{GF_escape_cones_extreme}\n\t\\end{figure}\t\n \n Further, we represent the influence of the braneworld parameter on the escape cones for the circular (corotating) geodesic frames in Figure \\ref{GF_escape_cones}. Assuming astrophysically relevant sources in Keplerian accretion discs, their orbits must be located above the marginally stable orbit $r_{ms}$, determined implicitly by the condition \\cite{Ali-Gum:2005:,Stu-Kot:2008}\n\n\n\\begin{equation}\n \ta=a_{ms}(r;b)\\equiv\\frac{4(r-b)^{3\/2}\\mp r \\sqrt{3r^2-2r(1+2b)+3b}}{3r-4b}.\n\\end{equation}\nTherefore, we construct the escape cones for observers at $r=r_{ms}(a,b)$ and at fixed radii. In the sequence of black holes with fixed spin $a=0.9981$ (Figure \\ref{GF_escape_cones}) we include also a subsequence of escape cones constructed at the same relative distance from the black hole horizon in order to better illustrate the role of the tidal charge $b$. In the sequence of near-extreme black holes with $a^2+b=0.9999$ (Figure \\ref{GF_escape_cones_extreme}) the third sequence is not necessary as the black hole horizon is fixed at $r_h=1.01M$. Figures \\ref{GF_escape_cones} and \\ref{GF_escape_cones_extreme} demonstrate that the trapped cone expands as the tidal charge descends to lower negative values, both for black holes with fixed spin $a$ and for near-extreme holes. On the other hand, considering the cones at $r_{ms}$ we can conclude that the descending tidal charge ($b<0$) symmetrizes their shape for fixed $a$, but makes them strongly asymmetric for near-extreme black holes shrinking them strongly in the direction of the black hole rotation.\n\n\nFinally we demonstrate the relevance of the tidal charge $b$ in the character of escape cones of the $RFF_-$ (comparing them with those related to $LNRF$) in Figure \\ref{fig13a_f}. We construct the escape cones for two typical values of the tidal charge ($b=0$, $b=-3$) in a sequence of radii where the free-falling source is radiating, demonstrating thus the combined growing influence of the black hole gravitational pull on the photon motion and the velocity of the free-falling source. 
In order to illustrate the phenomena in a clear way, we compare the $RFF_-$ escape cones to the corresponding $LNRF$ escape cones. Clearly, the tidal charge descending to higher negative values makes stronger squeezing of the free-falling cones relative to the $LNRF$ escape cones at any fixed radius. Notice that both the $RFF_-$ and $LNRF$ cones are shifted due to the black hole rotational dragging. \nWe again observe the tendency of negative brany parameters to symmetrize and squeeze the escape cones. At a fixed $r$, the escape cones become smaller for growing $|b|$ due to stronger gravity. For completeness we present sequence of both the $RFF_-$ and $LNRF$ escape cones at the three fixed radii for an extreme black hole with $b=-3$ and $a^2=4$. We observe that both the $RFF_-$ and $LNRF$ cones are strongly shifted in the sense of the black hole rotation in vicinity of black hole horizon due to growing influence of the spin. The symmetrizing effect of descending values of negative tidal charge is canceled by strong influence of the rotational effects due to growing black hole spin.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{figure}\n \\begin{tabular}{ccc}\n \\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_b0_re5}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_bm3_re5}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a2_bm3_re5}\\\\\n \\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_b0_re10}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_bm3_re10}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a2_bm3_re10}\\\\\n \\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_b0_re15}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a0n9981_bm3_re15}&\\includegraphics[width=4cm]{escape_cones_rf_lnrf_a2_bm3_re15}\n \\end{tabular}\n\\caption{\\label{fig13a_f}Comparison of the effect of the tidal charge $b$ on the shape of\n light escape cones of locally nonrotating (dashed curves) frames and free falling (solid curves)\n frames. In the left column light escape cones are plotted for the tidal\n charge parameter $b=0$ and in the middle one the light escape cones are\n plotted for $b=-3$. The spin $a=0.9981$ is kept fixed in both columns. The right column gives the sequence of the escape cones for an extreme black hole with [$a^2=4;b=-3$]. Emitting sources in all plots are moving in the equatorial plane. The radial distances of emitter are $r_e=5M$ (top row), $r_e=10M$ (middle row) and $r_e=15M$ (bottom row).}\n\\end{figure}\n\n\n\n\\clearpage\n\n\n\\section{\\label{sec:Silhuette}Silhuette of braneworld Kerr black hole}\n\nIn principle, it is of astrophysical importance to consider a black hole in front of a source of illumination whose angular size is large compared with the angular size of the black hole \\cite{Bardeen:1973:}. A distant observer will see a silhuette of the black hole, i.e., a black hole in the larger bright source. The rim of the black hole silhuette corresponds to photon trajectories spiralling around the black hole many times before they reach the observer. Of course, the shape of the silhuette enables, in principle, determination of the black hole parameters. 
But we have to be aware of the strong dependency of the silhuette shape on the observer viewing angle; clearly, the shape will be circular for observers on the black hole rotation axis, and its deformation grows with observer approaching the equatorial plane.\n\nAssuming that distant observers measure photon directions relative to the symmetry center of the gravitational field, the component of the angular displacement perpendicular to the symmetry axis is given by $-p^{(\\varphi)}\/p^{(t)}$ (for black hole rotating anticlockwise relative to distant observers), while for angular displacement parallel to the axis it is given by $p^{(\\theta)}\/p^{(t)}$. These angles are proportional to $1\/r_0$, therefore, it is convenient to use the impact parameters in the form independent of $r_0$ \\cite{Bardeen:1973:}\n\n\\begin{equation}\n \\tilde{\\alpha}=-r_0\\frac{p^{(\\varphi)}}{p^{(t)}}=-\\frac{\\lambda}{\\sin\\theta_0},\\label{silalpha}\n\\end{equation}\n\nand\n\n\\begin{eqnarray}\n \\tilde{\\beta}&=&r_0\\frac{p^{(\\theta)}}{p^{(t)}}=\\left[q+a^2\\cos^2\\theta_0-\\lambda^2\\cot^2\\theta_0\\right]^{1\/2}\\nonumber\\\\\n&&=\\left[\\mathcal{L}+a^2\\cos^2\\theta-\\frac{\\lambda^2}{\\sin^2\\theta_0}\\right]^{1\/2}.\\label{silbeta}\n\\end{eqnarray}\nPhoton trajectories reaching the observer are represented by points in the $(\\tilde{\\alpha}-\\tilde{\\beta})$ plane representing a small portion of the celestial sphere of the observer.\n\nThe shape of the black hole silhuette is the boundary of the no-turning-point region, i.e., it is the curve $\\mathcal{L}=\\mathcal{L}^{min}_{max}(\\lambda)$ expressed in the $(\\tilde{\\alpha}-\\tilde{\\beta})$ plane of the impact parameters. For observers in the equatorial plane $(\\theta_0 = \\pi\/2)$, $\\tilde{\\alpha}=-\\lambda$, $\\tilde{\\beta}=(\\mathcal{L}-\\lambda^2)^{1\/2}=q^{1\/2}$.\n\n\n\\begin{figure}[ht]\n\n\\begin{tabular}{cc}\n \\includegraphics[width=5.5cm]{silhuettes_a0n6_b}&\\includegraphics[width=5.5cm]{silhuettes_extreme_b}\n\\end{tabular}\n\n\\caption{\\label{fig15}Left figure. The $(\\bar\\alpha_0,\\bar\\beta_0)$ plots of the silhuettes of braneworld Kerr black hole on a bright background for rotational parameter $a^2=0.6$ and four representative values of tidal charge parameter $b=-3.0$, $b=-0.4$, $b=0.0$ and $b=0.4$. The observer is located at $r_0=10^4 M$ and $\\theta_0=90^\\circ$.\nRight figure. The silhuettes of extreme black holes for three representative values of braneworld parameter $b=0$ (solid), $b=-1$ (dashed) and $b=-3$ (dotted). Static observer is in equatorial plane at radial distance from the centre $r_0=10^4 M$.}\n\\end{figure}\n\n\n\n\n\n\n\nWe consider that the black hole is observed by static distant observers. Therefore, we shall use the static frames introduced above. The silhuette of the black hole is quite naturally related to their trapped (escape) light cones.\n\nThe marginal values of impact parameters $\\lambda_0$ and $\\mathcal{L}_0$(resp $q_0$) are obtained from the light escape cone. Using the stationarity of the braneworld Kerr spacetime we ``shoot out`` virtual photons from observer (static frame at very large distance $r_0$) and we are looking for the light escape cone of this virtual source (using the results of the previous section). The trapped light cone of this virtual source is constructed from the light escape cone of the virtual source by transformations of directional angle $\\alpha_0$ to $\\bar{\\alpha}_0=\\pi - \\alpha_0$ and directional angle $\\beta_0$ to $\\bar{\\beta}_0=\\beta_0$. 
In this way we get marginal directions for received photons from bright background behind the black hole. Then we can use the formulas (\\ref{LC6}), (\\ref{LC7}) and (\\ref{LC8}) to calculate the marginal values of $\\lambda_0$ and $q_0$($\\mathcal{L}_0$) in order to obtain the silhuette of the braneworld Kerr black hole in the plane $(\\tilde{\\alpha}-\\tilde{\\beta})$, i.e., the set of doubles $(\\tilde{\\alpha}_0,\\tilde{\\beta}_0)$ from equations (\\ref{silalpha}) and (\\ref{silbeta}). Here we plotted the silhuette directly from the trapped light cone $(\\bar{\\alpha}_0,\\bar{\\beta}_0)$ on the observer's sky $(\\bar{\\alpha}_0\\sin\\bar{\\beta}_0,\\bar{\\alpha}_0\\cos\\bar{\\beta}_0)$. Note that the angle $\\bar\\alpha_0$ is the radial coordinate and the angle $\\bar\\beta_0$ is the polar coordinate in the polar graph of the silhuette. \n\n\n\\begin{figure}[ht]\n\\begin{tabular}{cc}\n\t\\includegraphics[width=6cm]{silhuettes_a0n8_b02_th}&\\includegraphics[width=6cm]{silhuettes_a0n8_b0_th}\\\\\n\\includegraphics[width=6cm]{silhuettes_a0n8_bm1_th}&\\includegraphics[width=6cm]{silhuettes_a0n8_bm3_th}\n\\end{tabular}\n\\caption{\\label{fig18_a_d}The silhuettes of rotating braneworld black hole on a bright background. Each image contains three black hole shapes for three representative values of observer's inclination angle $\\theta_0=\\{0^\\circ(solid),45^\\circ(dashed),90^\\circ(dotted)\\}$, observer's radial coordinate $r_0=10^4 M$ and the rotational parameter $a^2=0.8$. Top left image: $b=0.2$. Top right image: $b=0.0$. Bottom left image: $b=-1.0$. Bottom right image: $b=-3.0$.}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\begin{tabular}{cc}\n\t\\includegraphics[width=6cm]{silhuettes_a1_b0_th_extreme}&\\includegraphics[width=6cm]{silhuettes_a2_bm3_th_extreme}\n\\end{tabular}\n\\caption{The silhuettes of extreme rotating braneworld black holes on a bright background. Each image contains three black hole shapes for three representative values of observer's inclination angle $\\theta_0=\\{0^\\circ(solid),45^\\circ(dashed),90^\\circ(dotted)\\}$, observer's radial coordinate $r_0=10^4 M$. Silhuettes on the left figure are plotted for extreme black holes with $a^2=1$ and $b=0$ and on the right side for $a^2=4$ and $b=-3$.}\\label{fig19_a_b}\n\\end{figure}\n\n\n\\par \n\tWe shall give the silhuette of the black hole for observers located at fixed radius $r_0=10^4$M that corresponds to the angular size of $\\alpha\\sim 1.4$arcsec; for higher distances the angular size falls accordingly to the $1\/r_0$ dependence. \n\nFirst, we give an illustrative picture of the tidal charge influence on the silhuette properties for maximal inclination angle $\\theta_0=90^\\circ$ when the black hole rotational effects are strongest (Figure \\ref{fig15}). We present a sequence of silhuettes for fixed black hole spin and varying $b$ (left) and for extreme black holes with $a^2+b=1$ and both $a$, $b$ varying (right). We clearly see that the positive tidal charge squeezes magnitude of the silhuette making its shape more asymmetric, while negative tidal charge enlarges silhuette's diameter symmetrizing its shape when $a$ is fixed. 
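\nAs a side remark, the rim construction described above lends itself to a compact numerical sketch. The following Python fragment is only a schematic illustration (it is not the procedure used to produce the figures): it assumes an equatorial observer ($\\theta_0=\\pi\/2$), geometrized units with $M=1$, a black-hole spacetime ($a^2+b\\le 1$), and the helper names are ours; the rim is traced as the curve $\\mathcal{L}=\\mathcal{L}^{min}_{max}(\\lambda)$ with $\\tilde\\alpha=-\\lambda$ and $\\tilde\\beta=(\\mathcal{L}-\\lambda^2)^{1\/2}$.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import minimize_scalar\n\ndef L_max(r, lam, a, b):\n    # radial restriction (eq. 13), units of M\n    Delta = r*r - 2.0*r + a*a + b\n    return (a*lam - 2.0*r + b)**2 \/ Delta + r*r + 2.0*r - b\n\ndef silhouette(a, b, lam_grid):\n    r_plus = 1.0 + np.sqrt(1.0 - a*a - b)   # outer horizon\n    points = []\n    for lam in lam_grid:\n        res = minimize_scalar(L_max, bounds=(r_plus + 1e-6, 50.0),\n                              args=(lam, a, b), method='bounded')\n        q = res.fun - lam*lam                # tilde-beta squared at theta_0 = pi\/2\n        if q >= 0.0:\n            points.append((-lam, np.sqrt(q)))\n    return points\n\n# consistency check: for a = b = 0 the rim is a circle of radius sqrt(27) ~ 5.196\nprint(max(beta for _, beta in silhouette(0.0, 0.0, np.linspace(-7.0, 7.0, 1401))))\n\\end{verbatim}\nFor $a=b=0$ the sketch recovers the Schwarzschild value $\\sqrt{27}\\,M$ for the rim radius, which may also serve as a simple consistency check of (\\ref{eq13}).\n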
For extreme black holes the silhuette asymmetry is kept but its extension grows with $b$ descending to higher negative values.\n\n\tSecond, there is a crucial effect of the viewing angle $\\theta_0$ onto the shape of the black hole silhuette, demonstrated in Figure \\ref{fig18_a_d} for representative values of $b$ and fixed spin $a$, and in Figure \\ref{fig19_a_b} for extreme black holes with parameters [$a^2=1$;$b=0$] and [$a^2=4$;$b=-3$].\n\n\n\nThe rotational effect on the shape of the silhuette grows with inclination angle growing and becomes strongest when $\\theta_0=\\pi\/2$; then the suppressing effect of the braneworld parameter is given in the most explicit form as demonstrated in Figure \\ref{fig15}. \n\n\nThe negative values of the braneworld parameter have the tendency to make the silhuette of a Kerr black hole (with $a^2$ fixed and for $r_0$, $\\theta_0$ fixed) spherical, suppressing thus the rotational effects. However the symmetrizing effect of the tidal charge could be masked by symmetrizing effect of the viewing angle $\\theta_0$. Therefore, it is very important for black hole parameter estimates to have observational limits on the value of $\\theta_0$.\n\n\\begin{figure}[ht]\n\t\\includegraphics[width=8cm]{fig14} \n\\caption{\\label{fig14}We define shift $s$ and ellipticity $\\epsilon=x\/y$ as parameters enabling us to characterize the magnitude of distorsion of Kerr black hole silhuette in braneworld universe.}\n\\end{figure}\n\n\n\nIn order to characterize the influence of the tidal charge on the silhuette of a Kerr black hole we define two quantities in principle measurable by distant observers. The \\emph{shift} $s$ of the silhuette \n\\begin{equation}\n \ts=\\tilde\\alpha(\\beta_m)\\sin(\\beta_m - \\pi),\\label{eqA}\n\\end{equation}\nand its \\emph{ellipticity} $\\epsilon$\n\\begin{equation}\n \t\\epsilon=\\frac{\\tilde\\alpha(\\beta=90^\\circ)+\\tilde\\alpha(\\beta=270^\\circ)}{2\\tilde\\alpha(\\beta_m)\\cos(\\beta_m - \\pi)},\\label{eqB}\n\\end{equation}\nwhere $\\beta_m$ is defined by $\\tilde\\alpha(\\beta_m)\\sin(\\beta_m - \\pi)\\ge \\tilde\\alpha(\\beta)\\sin(\\beta - \\pi),\\quad \\forall \\beta\\in[\\pi\/2,3\/2\\pi]$ i.e., it defines maximal extension of the silhuette in the $x$-direction. The definition of \\emph{shift} $s$ and \\emph{elipticity} $\\epsilon$ is illustrated in Figure \\ref{fig14}.\n\nWe calculated shift $s$ and ellipticity $\\epsilon$ as functions of tidal\nparameter $b$ for the Kerr black hole with rotational parameter $a^2=0.9995$\n(see Figure \\ref{fig16_a_b}).\nClearly, these are quantities that could be measured and used for a black hole parameters estimates, if observational techniques could be developed to the level enabling the silhuette detailed measuring. We shall discuss such a possibility for the case of the supermassive black hole predicted in the Galaxy Centre (Sgr $A^*$).\n \n\n\n\n\\begin{figure}[ht]\n\\begin{tabular}{cc}\n\t\\includegraphics[width=6cm]{shift_th89n9_th45_a0n9995_r10p4} &\\includegraphics[width=6cm]{ellipticity_th89n9_th45_a0n9995_r10p4} \\\\\n\t\\includegraphics[width=6cm]{ellipticity_extreme_th090_r0104}&\\includegraphics[width=6cm]{shift_extreme_th090_r0104}\t\n\\end{tabular}\n\\caption{\\label{fig16_a_b\n Top row. Left figure: the shift $s=s(b)$ as a function of braneworld parameter $b$. Right figure: the ellipticity $\\epsilon=\\epsilon(b)$ as a function of $b$. There are two curves on each image, one for observer inclination angle $\\theta_0=45^\\circ$ and second for $\\theta_0=89.9^\\circ$. 
The rotational parameter of black hole is fixed to value $a=0.9995$ and the radial coordinate of observer if $r_0=10^4 M$.\nBottom row. The ellipticity $\\epsilon$ (left) and shift $s$ (right) of the extreme black hole silhuette as functions of braneworld parameter $b$. Observer's coordinates are $\\theta_0=\\pi\/2$ and $r_0=10^4 M$. }\n\\end{figure}\n\n\n\n\n\\clearpage\n\\section{\\label{sec:DirAndIndirImages}Direct and indirect images of radiating disc}\n\nModelling of spectral line profiles of a thin radiating ring rotating in the equatorial plane of a braneworld Kerr black hole or light curve of an isotropically emitting point source orbiting such a black hole will give us information about the influence of the braneworld parameter $b$ on the optical phenomena in the strong field regime \\cite{SS:b:RAGTime:2007:Proceedings}. Here we restrict our attention to images of radiating discs. We can then, at least in principle, obtain estimates on the astrophysically acceptable values of the braneworld parameter $b$. \n\n\\subsection{Images of isoradial geodesics}\n\t\nCalculating images of an accretion disc (ring) in the equatorial plane of a braneworld Kerr black hole is the first step to calculate the optical phenomena. Generally one could obtain a direct and an indirect image (see Figures \\ref{fig19} and \\ref{fig20}), but in special cases the situation can be much more complicated due to complex character of the latitudinal and azimuthal photon motion. Here we focus our attention to the direct and indirect images of isoradial geodesics.\n\nIn order to find all relevant positions of points forming the rotating ring on observer's sky, we have to find photon trajectories between the ring particles and the observer, i.e., we seek for such doubles of local observational angles $[\\alpha_0,\\beta_0]$ that satisfy the condition\n\n\n\n\\begin{figure}[ht]\n\n \\includegraphics[width=10cm]{fig19}\n\n\\caption{\\label{fig19}\\emph{Direct} image of the rotating ring in the equatorial plane at $r_e=6M$ around braneworld Kerr black hole with rotational parameter $a^2=0.5$ for four representative values of tidal charge parameter $b=-3.0$, $b=-0.4$, $b=0.0$ and $b=0.4$. The observer is located at $r_0=10^4 M$ and $\\theta_0=85^\\circ$. }\n\\end{figure}\n\n\\begin{figure}[ht]\n\n \\includegraphics[width=8cm]{fig20}\n\n\\caption{\\label{fig20}\\emph{Indirect} image of the rotating ring in the equatorial plane at $r_e=6M$ around braneworld Kerr black hole with rotational parameter $a^2=0.5$ for four representative values of tidal charge parameter $b=-3.0$, $b=-0.4$, $b=0.0$ and $b=0.4$. The observer is located at $r_0=10^4 M$ and $\\theta_0=85^\\circ$. }\\label{indirect_a_05_th85_on_b}\n\\end{figure}\n\n\n\\begin{equation}\n \tI_U(\\alpha_0,\\beta_0;n_u,u_{sgn}) - I_M(\\alpha_0,\\beta_0;n,p,s)=0.\\label{bvp}\n\\end{equation}\nHere we introduced the modified radial coordinate $u=1\/r$ and cosine of latitudinal coordinate $\\mu=\\cos\\theta$ \\cite{Rau-Bla:1994:}. In the condition (\\ref{bvp}) $n_u$ is the number of turning points in $u$ coordinate, $n$ is the number of turning points passed in $\\mu$ coordinate, $p=mod(n,2)$, $s=(1-\\mu_{sgn})\/2$. 
In terms of $u$ and $\\mu$ we define the functions $I_U$ and $I_M$ by\n\n\\begin{equation}\n\tI_U(\\alpha_0,\\beta_0;n_u,u_{sgn})\\equiv\\left\\{\\begin{array}{lcr}\n\t\t\t\t\t-u_{sgn}\\left(\\int^{u_0}_{u_t} +\\int^{u_e}_{u_t}\\right) & \\textrm{for} & n_u=1\\\\\n\t\t\t\t\tu_{sgn}\\int^{u_e}_{u_0} & \\textrm{for} & n_u=0\n\t\t\t\t\t\\end{array}\\right.\n\\end{equation}\nand\n\\begin{eqnarray}\n\tI_M(\\alpha_0,\\beta_0;n,p,s)&\\equiv&\\mu_{sgn}\\left[\\int^{\\mu_+}_{\\mu_0} + (-1)^{n+1}\\int^{\\mu_+}_{\\mu_e}+\\right.\\\\ \\nonumber\n&+&\\left.(-1)^s[(1-p)n+p[(1-s)(n-1)+s(n+1)]]\\int^{\\mu_+}_{\\mu_-} \\right]\n\\end{eqnarray}\n with\n\\begin{eqnarray}\n\t\\int^{u_2}_{u_1}&\\equiv&\\int^{u_2}_{u_1}\\frac{\\mathrm{d} u}{\\sqrt{U(u)}},\\label{u_int}\\\\\n\tU(u)&=&1+(a^2-\\lambda^2-q)u^2+2[(\\lambda-a)^2+q]u^3 - \\nonumber\\\\\n\t&-&[q(a^2+b)+b(a-\\lambda)^2]u^4\n\\end{eqnarray}\nand\n\n\\begin{eqnarray}\n\t\\int^{\\mu_2}_{\\mu_1}&\\equiv&\\int^{\\mu_2}_{\\mu_1}\\frac{\\mathrm{d} \\mu}{\\sqrt{M(\\mu)}},\\label{mu_int}\\\\\n\tM(\\mu)&=&q+(a^2-\\lambda^2-q)\\mu^2-a^2\\mu^4.\n\\end{eqnarray}\n\n\\subsection{Integration of photon trajectories}\n\nWe express the integrals (\\ref{u_int}) and (\\ref{mu_int}) in the form of the standard elliptic integrals of the first kind. Rauch and Blandford presented the tables of reductions of the $u$-integrals and $\\mu$-integrals for the case of photons in the Kerr geometry \\cite{Rau-Bla:1994:}. Here we extend those reductions to the case of a nonzero braneworld parameter $b$. Because the $\\mu$-integral does not depend on the braneworld parameter $b$, the transformations are the same as in the case of the Kerr metric \\cite{Rau-Bla:1994:}, but we include them for completeness. \n\n\nThere are two cases to distinguish in the latitudinal integral (see Table \\ref{tableEIM}). In the first case there is one positive root, $M_+>0$, and one negative root, $M_-<0$, of $M(m^2)$, implying two turning points located symmetrically about the equatorial plane at $\\pm\\sqrt{M_+}$ (the so-called orbital motion \\cite{Bic-Stu:1976:,Fel-Cal:1972:}). In the second case there are two positive roots, $0<M_-<M_+$, and the latitudinal motion does not reach the equatorial plane (the so-called vortical motion).\n\nThe reduction of the $u$-integral depends on the distribution of the roots $\\beta_i$ of $U(u)=0$; we distinguish the following cases:\n\\begin{itemize}\n\\item\nThe \\textbf{case I}: four real roots of $U(u)=0$, ordered as $\\beta_1>\\beta_2>\\beta_3>0$ and $\\beta_4<0$. The value of the modified constant of motion $\\tilde{q}>0$.\n\\item\nThe \\textbf{case II}: four real roots as in the case I but their values form the following order: $\\beta_1>\\beta_2>0$ and $\\beta_4<\\beta_3<0$. The value of the modified constant of motion $\\tilde{q}<0$. \n\\item\nThe \\textbf{case III}: two real and two complex roots of $U(u)=0$: $\\beta_1$ being a complex root, $\\beta_2=\\bar{\\beta_1}$ and $\\beta_4<\\beta_3<0$. The value of the modified constant of motion $\\tilde{q}<0$.\n\\item\nThe \\textbf{case IV}: only complex roots: $\\beta_2=\\bar{\\beta_1}$ and $\\beta_4=\\bar{\\beta_3}$. The value of the modified constant of motion $\\tilde{q}<0$. \n\\item\nThe \\textbf{case V}: two real and two complex roots of $U(u)=0$: $\\beta_1>0$, $\\beta_4<0$, $\\beta_2$ being a complex root and $\\beta_3=\\bar{\\beta_2}$. 
\n\\end{itemize}\n\n\\begin{table}[!ht]\n\\tbl{The reductions of $\\int^m_{m_1}\\mathrm{d} m'\/\\sqrt{M(m')}=I_M$} \n{\\begin{tabular}{@{}lllll@{}}\\toprule\n \tCase & $\\tan\\Psi$ & $m$ & $c_1$ & $m_1$\\\\ \\colrule\n\t\\\\\n\t$M_-<0$ & $\\sqrt{\\frac{M_+}{m^2}-1}$ & $\\frac{M_+}{M_+-M_-}$ & $\\frac{1}{\\sqrt{a^2(M_+-M_-)}}$ & $\\sqrt{M_+}$\\\\\n\t\\\\\n\t$M_->0$ & $\\sqrt{\\frac{M_+-m^2}{m^2-M_-}}$ & $\\frac{M_+-M_-}{M_+}$ &\n $\\frac{1}{a^2}$ & $\\sqrt{M_+}$\\\\ \\botrule\n \\end{tabular}\\label{tableEIM}}\n\\end{table} \n\n\\begin{table}[!ht]\n\\tbl{The reductions of $\\int^u_{u_1}\\mathrm{d} u'\/\\sqrt{U(u')}=I_U$}\n{\\begin{tabular}{@{}lllll@{}}\\toprule\n\n \tCase & $\\tan\\Psi$ & $m$ & $c_1$ & $u_1$\\\\ \\colrule\n\t\n\tI & $\\sqrt{\\frac{(\\beta_1-\\beta_3)(u-\\beta_4)}{(\\beta_1-\\beta_4)(\\beta_3-u)}}$ & $\\frac{(\\beta_1-\\beta_2)(\\beta_3-\\beta_4)}{(\\beta_1-\\beta_3)(\\beta_2-\\beta_4)}$ & $\\frac{2}{\\sqrt{\\tilde{q}(b1-b3)(b2-b4)}}$ & $\\beta_4$\\\\\n\t\\\\\n\tII & $\\sqrt{\\frac{(\\beta_1-\\beta_2)(u-\\beta_3)}{(\\beta_1-\\beta_3)(\\beta_2-u)}}$ & $\\frac{(\\beta_2-\\beta_3)(\\beta_1-\\beta_4)}{(\\beta_1-\\beta_2)(\\beta_4-\\beta_3)}$ & $\\frac{2}{\\sqrt{-\\tilde{q}(b1-b2)(b3-b4)]}}$ & $\\beta_3$\\\\\n\t\\\\\n\tIII & $\\frac{2c_2(u)}{|1-c^2_2(u)|}$ & $\\frac{4c_4 c_5 - (\\beta_3 - \\beta_4)^2 - c_4 c_5}{4c_4 c_5}$ & $\\frac{1}{\\sqrt{-\\tilde{q}c_4 c_5}}$ & $\\beta_3$\\\\\n\t\\\\\n\tIV & $\\frac{u-c_3}{\\Im(\\beta_1)(1+c_2^2)+c_2(u-c_3)}$ & $1-\\left(\\frac{c_4-c_5}{c_4+c_5}\\right)^2$ & $\\frac{2}{(c_4+c_5)\\sqrt{-\\tilde{q}}}$ & $c_3$\\\\\n\t\\\\\n\tV & $\\frac{2c_2(u)}{|1-c^2_2(u)|}$ & $1-\\frac{(c_4+c_5)^2-(\\beta_1 -\n \\beta_4)^2}{4c_4 c_5}$ & $\\frac{1}{\\sqrt{\\tilde{q}c_4 c_5}}$ &\n $\\beta_4$\\\\ \\botrule\n \\end{tabular}\\label{tableEI}}\n\\end{table}\n\n\\begin{table}[!th]\n\\tbl{Definitions for Table \\ref{tableEI}.}\n{\\begin{tabular}{@{}lll@{}}\\toprule\n \n \tCase & $^1 c_2$ & $^1 c_3$\\\\ \\colrule\n\t\n\tIII & $\\left[\\frac{c5(u-\\beta_3)}{c_4(u-\\beta_4)}\\right]^{1\/2} $ & -\\\\\n\t\\\\\n\tIV & $\\left\\{\\frac{4[\\Im(\\beta_1)]^2-(c_4-c_5)^2}{(c_4+c_5)^2-4[\\Im(\\beta_1)]^2}\\right\\}^{1\/2}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\Re(\\beta_1)+c_2\\Im(\\beta_1)$}$\\\\\n\t\\\\\n\tV & $\\left[\\frac{c4(u-\\beta_4)}{c_5(\\beta_1-u)}\\right]^{1\/2} $& -\\\\ \\botrule\n\\end{tabular}\\label{tableEI2}}\n\\end{table}\n\n\n\\begin{table}[!th]\n\\tbl{Definitions for Table \\ref{tableEI} and Table \\ref{tableEI2}.}\n {\\begin{tabular}{@{}lll@{}}\\toprule\n \n \tCase & $^1 c_4$ & $^1 c_5$\\\\ \\colrule\n\t\n\tIII & $\\mbox{\\fontsize{8}{10}\\selectfont$\\left\\{\\left[\\Re(\\beta_1)-\\beta_3\\right]^2+[\\Im(\\beta_1)]^2\\right\\}^{1\/2}$}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_1)-\\beta_4\\right]^2+[\\Im(\\beta_1)]^2\\right\\}^{1\/2}$}$\\\\\n\t\\\\\n\tIV & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_1)-\\Re(\\beta_3)\\right]^2+[\\Im(\\beta_1)+\\Im(\\beta_3)]^2\\right\\}^{1\/2}$}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_1)-\\Re(\\beta_3)\\right]^2+[\\Im(\\beta_1)-\\Im(\\beta_3)]^2\\right\\}^{1\/2}$}$\\\\\n\t\\\\\n\tV & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_2)-\\beta_1\\right]^2+[\\Im(\\beta_2)]^2\\right\\}^{1\/2}$}$ & $\\mbox{\\fontsize{8}{10}\\selectfont $\\left\\{\\left[\\Re(\\beta_2)-\\beta_4\\right]^2+[\\Im(\\beta_2)]^2\\right\\}^{1\/2}$}$\\\\ \\botrule\n\\end{tabular}}\n\\begin{tabular}{c}\n\t$^1$\\textit{The symbols $\\Re(x)$ and $\\Im(x)$ 
refer to real and imaginary part of $x$ here.}\n\\end{tabular}\n\\end{table}\n\n\n\\par\n\nUsing presented transformations we can write the integrals (\\ref{u_int}) and (\\ref{mu_int}) in the form\n\n\\begin{equation}\n \\int^{u}_{u_1}\\frac{1}{\\sqrt{U(\\tilde{u})}}\\mathrm{d} \\tilde{u} = c_1\\mathcal{F}(\\Psi;m)\\label{ellint}\n\\end{equation}\nand \n\\begin{equation}\n \\int^{\\mu}_{\\mu_1}\\frac{1}{\\sqrt{M(\\tilde{\\mu})}}\\mathrm{d} \\tilde{\\mu} = c_1\\mathcal{F}(\\Psi;m)\\label{ellintM}\n\\end{equation}\nwhere $\\mathcal{F}$ is the elliptic integral of the first kind and $u_1$(resp $\\mu_1$) depends on the case of root distribution of quartic equation $U(u)=0$ (resp. $M(\\mu)=0$) as given in Table \\ref{tableEI} (resp \\ref{tableEIM}). If, in the cases III and V, the value of $1-c_2^2(u)<0$, we have to take instead of (\\ref{ellint}) the form\n\n\\begin{equation}\n \\int^{u}_{u_1}\\frac{1}{\\sqrt{U(\\tilde{u})}}\\mathrm{d} \\tilde{u} = c_1(2\\mathcal{K}(m)-\\mathcal{F}(\\Psi;m)),\\label{ellint1}\n\\end{equation}\nwhere $\\mathcal{K}$ is the complete elliptic integral of the first kind. In the case that sign$(\\mu1\\cdot\\mu)<0$ we have to take instead of (\\ref{ellintM}) the form\n\\begin{equation}\n \\int^{\\mu}_{\\mu_1}\\frac{1}{\\sqrt{M(\\tilde{\\mu})}}\\mathrm{d} \\tilde{\\mu} = c_1(2\\mathcal{K}(m)-\\mathcal{F}(\\Psi;m)),\\label{ellintM1}\n\\end{equation}\nwhere $\\Psi$, $m$ and $c_1$ are taken from table \\ref{tableEIM}.\n We consider two basic possibilities of trajectories, namely those corresponding to direct and indirect images (Figures \\ref{fig19} and \\ref{fig20}).\n\n\\subsection{Disc images}\n\n\tIt is very important to demostrate the influence of the braneworld parameter on the shape of images of rings in the equatorial plane representing parts of Keplerian accretion discs. Of course, as well known from the Kerr (and even Schwarzchild) black holes, the images strongly depend on the latitude of the observer. We calculate the direct and indirect images of flat discs and combined, full image of the disc for two representative values of viewing angle $\\theta_0$ and appropriatelly chosen extension of radiating disc area.\n\n\n\tWe include the effect of frequency shift into the calculated images of part of the Keplerian discs assumed to be radiating at a given fixed frequency. The frequency shift $g$ is determined by the ratio of observed ($E_0$) to emitted ($E_e$) photon energy\n\\begin{equation}\n g=\\frac{E_0}{E_e}=\\frac{k_{0\\mu} u_0^\\mu}{k_{e\\mu} u_e^\\mu},\n\\end{equation}\nwhere $u_0^\\mu$($u_e^\\mu$) are components of the observer (emitter) 4-velocity and $k_{0\\mu}(k_{e\\mu})$ are components of the photon 4-momentum taken at the moment of emission (observation). For distant observers $u^\\mu_0=(1,0,0,0)$. The emitter follows an equatorial circular geodesics at $r=r_e$, $\\theta_e=\\pi\/2$. 
Therefore, $u_e^\\mu=(u^t_e,0,0,u_e^\\varphi)$, with components given by\n\n\\begin{eqnarray}\n u_e^t&=&\\left[1-\\frac{2}{r_e}(1-a\\Omega)^2-(r_e^2+a^2)\\Omega^2+\\frac{b}{r_e^2}(1-2a\\Omega)\\right]^{-1\/2},\\\\ u_e^\\varphi&=&\\Omega u_e^t,\n\\end{eqnarray}\nwhere $\\Omega=\\mathrm{d}\\varphi\/\\mathrm{d} t$ is the Keplerian angular velocity of the emitter related to distant observers, given by equation (\\ref{ang_vel_gf}).\n\nThe frequency shift including all relativistic effects is then given by\n\n\\begin{equation}\n g=\\frac{\\left[1-\\frac{2}{r_e}(1-a\\Omega)^2-(r_e^2+a^2)\\Omega^2+\\frac{b}{r_e^2}(1-2a\\Omega)\\right]^{1\/2}}{1-\\lambda\\Omega}\n\\end{equation}\nwhere $\\lambda\\equiv-k_\\varphi\/k_t$ is the impact parameter of the photon\nbeing a motion constant for an individual photon radiated at a specific\nposition of the radiating disc; notice that $g$ is independent of the second\nphoton motion constant (impact parameter) $q$. Of course, depending on the\nposition of the emitter along the circular orbit, the impact parameters\n$\\lambda$, $q$ of photons reaching a fixed distant observer will vary\nperiodically (see eg., \\cite{Bao-Stu:1992:}). For each position of the emitter\nthe impact parameters are determined by the procedure of integration of photon\ntrajectories. \n\nThe influence of the frequency shift in the disc images is demonstrated in Figures \\ref{fig24_a_i} and \\ref{fig26_a_i}. The role of the braneworld parameter is illustrated both for small ($\\theta_0=30^\\circ$) and high ($\\theta_0=80^\\circ$) inclination angles. We consider two cases of the radiating disc extension: first one with fixed inner and outer radii, independent of the black hole parameters, and the second one when the inner radius is identified with the marginally stable orbits, depending on the black hole parameters. \n\n\n\n\n\\begin{figure}[ht]\n\n \\begin{tabular}{ccc}\n \\includegraphics[width=3.6cm]{fig23d}\n& \\includegraphics[width=3.6cm]{fig23e}\n&\\includegraphics[width=3.6cm]{fig23f}\\\\\n \\includegraphics[width=3.6cm]{fig23g}\n& \\includegraphics[width=3.6cm]{fig23h}\n&\\includegraphics[width=3.6cm]{fig23i}\\\\\n \\includegraphics[width=3.6cm]{fig24d}\n& \\includegraphics[width=3.6cm]{fig24e}\n&\\includegraphics[width=3.6cm]{fig24f}\\\\\n \\includegraphics[width=3.6cm]{fig24g}\n& \\includegraphics[width=3.6cm]{fig24h}\n&\\includegraphics[width=3.6cm]{fig24i}\n \\end{tabular}\n\n\\caption{\\label{fig24_a_i}Radiating Keplerian disc images with fixed inner and outer radii. The modified frequency shift $\\bar{g}=(g-g_{min})\/(g_{max}-g_{min})$, with $g_{min}=0.4$ and $g_{max}=1.5$, of the radiation emitted from the thin disk with inner radius $r_{in}=7 M$ and outer radius $r_{out}=15 M$, encoded into colors is plotted for representative values of tidal charge parameter $b=-3.0$, $0.0$ and inclination of observer $\\theta_0=30^\\circ$, $80^\\circ$. In the left column direct images are ploted, the indirect images are ploted in the central column and the composition of direct and indirect images is plotted in the right column. The first two rows of images are plotted for the observer inclination $\\theta_0=30^\\circ$ and the second two rows of images are plotted for the observer inclination $\\theta_0=80^\\circ$. 
The first and third rows correspond to $b=0.0$, the second and fourth rows to $b=-3.0$.}\n\\end{figure}\n\nIn order to map the frequency shift $g$ into a color palette we define the modified frequency shift $\\bar{g}=(g-g_{min})\/(g_{max}-g_{min})$, where $g_{min}$ ($g_{max}$) is the minimal (maximal) value of the frequency shift, which is fixed in a particular set of images.\n\nWe can see from Figs. \\ref{fig24_a_i} and \\ref{fig26_a_i} that the negative tidal charge has the tendency to enlarge and symmetrize the disc images.\n\n\n\\begin{figure}[ht]\n\\begin{tabular}{ccc}\n \\includegraphics[width=3.6cm]{fig25a}\n& \\includegraphics[width=3.6cm]{fig25b}\n&\\includegraphics[width=3.6cm]{fig25c}\\\\\n \\includegraphics[width=3.6cm]{fig25d}\n& \\includegraphics[width=3.6cm]{fig25e}\n&\\includegraphics[width=3.6cm]{fig25f}\\\\\n \\includegraphics[width=3.6cm]{fig26a}\n& \\includegraphics[width=3.6cm]{fig26b}\n& \\includegraphics[width=3.6cm]{fig26c}\\\\\n \\includegraphics[width=3.6cm]{fig26d}\n& \\includegraphics[width=3.6cm]{fig26e}\n& \\includegraphics[width=3.6cm]{fig26f}\n \\end{tabular}\n\n\n\\caption{\\label{fig26_a_i} Radiating Keplerian disc images with $r_{in}=r_{ms}$. The modified frequency shift $\\bar{g}=(g-g_{min})\/(g_{max}-g_{min})$, with $g_{min}=0.2$ and $g_{max}=1.8$, of the radiation emitted from the thin disc with inner radius $r_{in}=r_{ms}$ (with $r_{ms}(b=0;a=0.9981)=1.3$ and $r_{ms}(b=-3;a=0.9981)=6.3$) and outer radius $r_{out}=10$, encoded into colors, is plotted for representative values of the tidal charge parameter $b=-3.0$, $0.0$ and inclinations of the observer $\\theta_0=30^\\circ$, $80^\\circ$. In the left column the direct images are plotted, the indirect images are plotted in the central column, and the composition of the direct and indirect images is plotted in the right column. The first two rows of images are plotted for the observer inclination $\\theta_0=30^\\circ$ and the last two rows for the observer inclination $\\theta_0=80^\\circ$. The first and third rows correspond to $b=0.0$, the second and fourth rows to $b=-3.0$. \n}\n\\end{figure}\n\n\n\n\\clearpage\n\\newpage\n \n\\section{Time delay}\n\nFor optical effects in the vicinity of a black hole, the time delay between signals from a time-varying system observed along two different directions, caused by the light deflection in strong gravity, can be important. The coordinate time that elapses from the instant of photon emission, $t_e$, to the instant of its reception, $t_o$, is integrated from the Carter equations and reads\n\n\\begin{eqnarray}\n t_o&=&t_e+\\mu_{sgn}\\int_{\\mu_e}^{\\mu_o}{a^2\\mu^2\\frac{\\mathrm{d}\\mu}{\\sqrt{M}}}\\nonumber\\\\\n\t&&+u_{sgn}\\int^{u_o}_{u_e}{\\frac{2a(a-\\lambda)u^3+a^2u^2+1+ab(\\lambda-a)u^4}{(u\/u_{+}-1)(u\/u_{-}-1)\\sqrt{U}}}\\mathrm{d} u.\n\\end{eqnarray}\nIn order to successfully integrate this formula, one must map all the turning points of the $\\mu$- and $u$-motion to correctly set up the signs $u_{sgn}$ and $\\mu_{sgn}$.\n\nSuppose that the two light beams, direct and indirect, are emitted at the same coordinate time $t_e$. They generally reach the observer at different coordinate times $t_o^{\\mathrm{dir}}$ (resp. $t_o^{\\mathrm{indir}}$). 
The time delay is defined here as the difference $\\Delta t\\equiv t_o^{\\mathrm{indir}}-t_o^{\\mathrm{dir}}$.\n\n\\begin{figure}[!ht]\n \\includegraphics[width=10cm]{fig27}\n\\caption{\\label{fig27}Illustration of the impact of the tidal charge parameter on the time delay $\\Delta t$ in the case of direct and indirect photons emitted from the emitter $E$ at coordinate time $t_e$ and azimuthal position $\\varphi_e=\\pi$. They are received by the observer $O$ at coordinate times $t_o^{\\mathrm{dir}}$ (resp. $t_o^{\\mathrm{indir}}$). The emitter is on a circular geodesic in the equatorial plane of the braneworld Kerr black hole at radial coordinate $r=r_e$. The observer is far from the center of the black hole at $r=r_o$. Its inclination is $\\theta=\\theta_o$.}\n\\end{figure}\n\n\nTo demonstrate the impact of the tidal charge $b$ on the time delay we\nconsider the following situation (see Figure \\ref{fig27}). Let an isotropically radiating monochromatic source orbit in the equatorial plane of the braneworld Kerr black hole at radial distance $r_e$. It can be switched on and off. When it reaches the azimuthal coordinate $\\varphi=\\pi$ it is switched on and we compare the coordinate times $t^{\\mathrm{dir}}_o$ and $t^{\\mathrm{indir}}_o$ of reception of the photons from the direct and indirect images of the source. \n\n\\begin{figure}[!ht]\n \\begin{tabular}{cc}\n \\includegraphics[width=6.2cm]{fig28a}&\\includegraphics[width=6.2cm]{fig28b}\n \\end{tabular}\n \n\\caption{\\label{fig28_a_b}The difference (``Time Delay''), $\\Delta t = t^{\\mathrm{indir}}_o -t^{\\mathrm{dir}}_o$, between the coordinate times of reception of direct and indirect photon geodesics emitted at the same coordinate time $t_e$ from the azimuthal coordinate $\\varphi=\\pi$ is plotted as a function of the tidal charge $b$. Left figure: the inclination of the observer is $\\theta_0=20^\\circ$. Right figure: the inclination of the observer is $\\theta_0=80^\\circ$. }\n\n\\end{figure}\n\nThe results are demonstrated in Figure \\ref{fig28_a_b}. We can directly see that the time delay $\\Delta t$ between the times of reception of the direct and indirect photons emitted at the same instant from the azimuthal position $\\varphi=\\pi$ increases as the value of the tidal charge parameter $b$ goes to higher negative values. When $b$ is fixed, the time delay $\\Delta t$ increases as the value of the inclination decreases. The same effects appear for other positions of the radiating spot ($\\varphi\\not= \\pi$). We can see that the time delay $\\Delta t$ depends strongly on the viewing angle $\\theta_0$. Therefore, it is extremely important to have a system with a precisely determined viewing angle.\n\n\\section{Optical phenomena related to Sgr $A^*$}\nThere is rapidly growing evidence that the center of our Galaxy harbors a supermassive black hole whose position can be almost surely identified with the extremely compact radio source Sgr $A^*$. The chain of arguments seems to be very convincing: stars orbiting an unseen mass concentration on elliptical orbits with a common focal position, the unseen mass centered on Sgr $A^*$ that seems to be motionless at the dynamical center of the Galaxy, and the extremely compact emission of the center \\cite{Reid:2008:}. Recent measurements of Ghez and collaborators \\cite{Ghez-etal:2008:} from the W.M. Keck 10-meter telescopes of a fully unconstrained Keplerian orbit of the short-period star SO-2 provide the distance $R_0=8.0\\pm 0.6$ kpc and black hole mass $M=(4.1\\pm0.6)\\times 10^6 M_\\odot$. 
If the black hole is assumed to be at rest with respect to the Milky Way Galaxy (i.e., has no massive companion to induce its motion), as argued by Reid \\cite{Reid:2008:}, the fit can be further constrained to $R_0=8.4\\pm 0.4\\,$kpc and $M=(4.5\\pm 0.4)\\times 10^6M_\\odot$ \\cite{Ghez-etal:2008:}.\n\nSuch a close and huge supermassive black hole is clearly a very convenient object, probably the best one, for testing a wide variety of optical phenomena in strong gravity in its vicinity. The time delay of events happening behind the black hole and observed along two directions could in principle be easily measured. We can even expect the possibility of black hole silhouette measurements. In this way the influence of the tidal charge could be properly tested and its value estimated, because for the Galactic supermassive black hole we can determine the inclination angle of the observer (Solar system) relatively precisely, although it is of course very close to $\\theta_0\\simeq 90^\\circ$.\n\nFor non-rotating, Schwarzschild black holes, the silhouette diameter is given by the impact parameter of the photon circular orbit\n\n\\begin{equation}\n\tD=2\\lambda_{ph}= 6\\sqrt{3} M.\n\\end{equation}\nUsing the Sgr $A^*$ mass estimate $M\\sim 4.5\\times 10^6M_\\odot$, we find $D\\simeq 55\\,\\mu$arcsec, while interferometric fringes were reported at a wavelength of $1.3$ mm and a fringe spacing of $0.00005$, comparable with the expected value of $D$. Shorter wavelengths should enable detailed measurements of the black hole silhouette and relatively precise estimates of the black hole parameters due to the very precise knowledge of the inclination angle. The angle can be given by the measurement of the Solar system position relative to the Galactic plane, $z_\\odot\\sim 14\\,$pc \\cite{Yoshi:2007:}. Then $\\theta_0\\sim 89.9^\\circ$ or, more precisely, $\\theta_0$ lies between the values of $89.8772^\\circ$ ($z_\\odot=18\\,$pc) and $89.9318^\\circ$ ($z_\\odot=10\\,$pc). Of course, considering the silhouette shape, it is quite enough to take $\\theta_0=90^\\circ$. \n\n\\begin{figure}[h]\n\t\\begin{center}\n\t\\includegraphics[width=6.1cm]{diameter_of_silhuette_schw}\n\t\\end{center}\n\t\\caption{Diameter $D$ as a function of the braneworld parameter $b$, plotted for a Schwarzschild black hole of mass $M=4.5\\times 10^6M_\\odot$. The observer is at $r_0=8.4\\,$kpc, lying in the equatorial plane.}\\label{diameter_of_silhuette_schw}\n\\end{figure}\n\nIn the case of spherically symmetric black holes, the influence of the tidal charge parameter $b$ on the silhouette diameter is given by the simple formula for the impact parameter of photon circular orbits that reads \\cite{Stu-Hle:2002:}\n\n\\begin{equation}\n\t\\lambda_{ph}(b)=\\frac{r_{ph}^2}{\\sqrt{r_{ph}-b}}M,\n\\end{equation}\nwhere\n\n\\begin{equation}\n\tr_{ph}(b)=\\frac{3}{2}\\left(1+\\sqrt{1-\\frac{8b}{9}}\\right).\n\\end{equation}\nThe resulting dependence of the diameter $D(b)$ is illustrated in Figure \\ref{diameter_of_silhuette_schw}. The diameter grows slowly with decreasing $b$; notice that its magnitude is twice the pure Schwarzschild value for $b=-12.8428$. Of course, for rotating black holes the silhouette is maximally deformed due to the influence of rotation, since the viewing angle is $\\theta_0\\sim 90^\\circ$, and it is given by the calculations and results presented above. Testing of the combined spin and tidal charge influence would be possible with the measurement precision improved by one order of magnitude relative to the currently expected state mentioned above. 
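\n\nFor orientation, these numbers can be checked directly. Assuming $M=4.5\\times 10^6M_\\odot$, i.e., $GM\/c^2\\simeq 6.6\\times 10^9\\,$m, and $R_0=8.4\\,$kpc$\\,\\simeq 2.6\\times 10^{20}\\,$m, the Schwarzschild value gives\n\\begin{equation}\n\tD=6\\sqrt{3}\\,\\frac{GM}{c^2}\\simeq 6.9\\times 10^{10}\\,\\mathrm{m},\\qquad \\frac{D}{R_0}\\simeq 2.7\\times 10^{-10}\\,\\mathrm{rad}\\simeq 55\\,\\mu\\mathrm{arcsec},\n\\end{equation}\nwhile for $b=-12.8428$ the formulae above give $r_{ph}\\simeq 6.79$ and $\\lambda_{ph}\\simeq 10.39\\,M=2\\times 3\\sqrt{3}\\,M$, confirming the quoted doubling of the silhouette diameter.\n\n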
Clearly, we can expect that the observational accuracy in the near future will be high enough to measure the Sgr $A^*$ black hole silhouette, implying relevant estimates of the black hole parameters.\n\n\\begin{figure}[ht]\n\t\\begin{center}\n\t\t\\includegraphics[width=6.1cm]{td_tab4}\n\t\\end{center}\n\t\\caption{\\label{td_table4}Comparison of the time delay effect as a function of the braneworld parameter $b$ between two rotating black holes with rotational parameters $a=0.5$ and $a=0.998$. For each $b$ the emitter is radiating from the marginally stable orbit. The relevant values of the radii $r_{ms}$ of the marginally stable orbits are listed in Table \\ref{tabulka2}. }\n\\end{figure}\n\n\\begin{table}[ht]\n\t\t\\tbl{Table of relevant values of $r_{ms}$ used in the plots in Fig. \\ref{td_table4}}\n\t\t{\\begin{tabular}{@{}ccccccc@{}} \n\t\t\\toprule\n\t\t$b$ & 0.0 & -0.5 & -1.0 & -2.0 & -3.0 & -10.0\\\\ \n\t\t\\colrule\n\t\t$r_{ms}(a=0.5)$ & 4.24M & 5.05M & 5.73M & 6.88M & 7.85M & 12.88M\\\\\n\t\t\\colrule\n\t\t$r_{ms}(a=0.998)$ & 1.24M & 3.03M & 3.91M & 5.22M & 6.28M & 11.44M\\\\\n\t\t\\botrule\n\t\t\\end{tabular}\\label{tabulka2}}\n\\end{table}\n\nConsidering the time delay effects, the exact value of $\\theta_0$ is crucial, since it plays a fundamental role in determining the time delay effect, whose scale is of the order of $t\\sim 1\\,\\mathrm{s}$. We illustrate the influence of the tidal charge on the time delay effects at the astrophysically important radii corresponding to the marginally stable circular geodesics, i.e. in the strong gravity regime, for two representative fixed values of the black hole spin (see Figure \\ref{td_table4} and Table \\ref{tabulka2}). We can expect the regions close to $r_{ms}$ to be important for the relevant optical effects due to the idea of the low angular momentum accretion in Sgr $A^*$ advocated by B. Czerny \\cite{Cze-etal:2007:}. Clearly, we can see in Figure \\ref{td_table4} that the time delay effects could be well measurable and the tidal charge influence could be well tested, if the black hole spin is properly estimated.\n\n\\section{\\label{sec:Conclusions}Conclusions}\nOne of the most promising ways of estimating the influence of hypothetical hidden extra dimensions, considered in the framework of the braneworld model with an infinite extra dimension as developed by \\cite{Ran-Sun:1999:}, seems to be the investigation of the optical phenomena caused by the black hole backgrounds. This is so because black holes represent the only case when the non-local influence of the bulk space on the braneworld spacetime structure can be fully described by a single braneworld parameter called the tidal charge, the sign of which can be either positive or negative, with the second possibility being the more realistic one \\cite{Ali-Gum:2005:,Dad-etal:2000:}.\n\nHere, we focused our attention on developing a theoretical background for treating the optical phenomena in the vicinity of braneworld rotating black holes and on presenting general tendencies of the tidal charge effect in some basic optical phenomena. 
We have shown that these phenomena could be measured and used to put limits on the tidal charge in the case of the Galactic Center Sgr $A^*$ supermassive black hole.\n\nWe generalized the approaches based on the transfer-function method as introduced and developed for Schwarzschild and Kerr backgrounds \\cite{fab-rees-ste-whi:1989:,Mat-Fab-Ros:1993:,Bao-Stu:1992:,Stu-Bao:1992:,Laor:1991:,Dov-Karas-Mas-Mar:2005:,Fan-Cal-Fel-Cad:1997:,Rau-Bla:1994:}, where the equations of photon motion are solved in terms of the elliptic integrals (see \\cite{Rau-Bla:1994:,Kra:2005:,Kra:2007:}). For the purposes of the present work, the transfer-function method seems to be the most efficient. Nevertheless, we prepared the ray-tracing method too, since it could be useful in treating other optical phenomena.\n\n Generally, an increasing magnitude of the negative tidal charge strengthens the black hole field and suppresses the rotational phenomena, when the black-hole rotation parameter is fixed. The magnitude of the optical phenomena grows with decreasing (negatively valued) tidal charge, but the rotation-induced asymmetry of phenomena like the black-hole silhouette, or the accretion disc image, decreases. The black-hole silhouette is characterized by two parameters, namely the shift of the center and the ellipticity, which could in principle be measurable in the Galactic Center black-hole system Sgr $A^*$, after the expected development of observational techniques, which at present enable measurement of the black hole diameter, but not the details of its shape. \nThe Galaxy center (Sgr $A^*$) also seems to be a promising candidate for testing the time delay effects, both for phenomena related to the accretion disc and flares observed there, and for some expected lensing phenomena connected to the observed stars orbiting the Sgr $A^*$ central black hole. \n\nWe have found that observable phenomena could be expected for the time-delay effects. Of special interest is the comparison of time delays generated for sources in the vicinity of the Sgr $A^*$ black hole (both stars and disc hot spots) and those related to weak lensing of some distant sources \\cite{Zak:2003:,Sereno:2006}. \n\nSimilarly, keeping the rotational parameter fixed, the negative tidal charge has the tendency to make the isoradial curve images (both direct and indirect) larger and less deformed, while the positive tidal charge influence is of the opposite character. On the other hand, for a fixed rotational parameter of the black hole and a disc radiating in the innermost part above the innermost stable orbit at $r=r_{ms}$, the negative tidal charge restricts the radiating ring image simply because the radius $r_{ms}$ grows with decreasing value of the braneworld parameter $b$. Suppression of the relativistic effects can be measurable also in the spectral line profiles generated by the inner hot part of the disc radiating at a specific X-ray line \\cite{SS:b:RAGTime:2007:Proceedings}. \n\nThe optical tests have to be confronted with the data obtained from quasiperiodic oscillations observed in some black-hole systems (microquasars \\cite{Rem-McCli:2006:ARASTRA:}). The orbital resonance model gives good estimates of the black-hole parameters \\cite{Tor-Abr-Klu-Stu:2005:,Tor:2005a:,Tor:2005b:}; this model has been recently generalized to the case of braneworld Kerr black holes \\cite{Stu-Kot:2008}. It is shown that in the case of the microquasar GRS 1915+105 and the Galactic Center Sgr $A^*$, black holes with a negative braneworld parameter $b$ are allowed by the observational data \\cite{Stu-Kot:2008}. 
Detailed modelling of optical phenomena connected to the oscillating discs or orbiting (oscillating) hot spots and related resonant phenomena between the oscillation modes could be very promising in putting limits on allowed values of the tidal charge of the black hole. We plan to elaborate such modelling in future. \n\n\n\\section*{Acknowledgements}\nResearch supported by the Czech grant MSM 4781305903 and LC 06014. One of the authors (Zden\\v{e}k Stuchl\\'{i}k) would like to express his gratitude to the Czech Committee for Collaboration with CERN for support.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\nIn Nielsen theory, the purpose is to find a homotopy-invariant lower bound for the number of fixed points of a map. Similarly, Nielsen periodic point theory will try to find a homotopy-invariant lower bound for the number of periodic points. Both of these theories work for all continuous maps on connected, compact absolute neighborhood retracts, but in this paper we will mainly focus on manifolds.\n\n\\medskip\n\nSo, let $f:X\\to X$ be a continuous map on a manifold. We call $x\\in X$ a periodic point if $f^n(x)=x$ for a certain integer $n>0$. The smallest $n$ for which this holds, will be called the period of $n$. In Nielsen theory, we partition the fixed point set into fixed point classes and subsequently count the number of fixed point classes that can not disappear by using a homotopy. The resulting number is the Nielsen number $N(f)$. We can do a similar thing for $f^n$ and try to approximate the number of $n$-periodic points of $f$ by the number $N(f^n)$. However, this lower bound is not sharp in general. By studying the relations between fixed point classes for different iterates of $f$ more closely, it is possible to define a better lower bound, namely the full Nielsen-Jiang periodic number $NF_n(f)$. This number has already been studied extensively for maps on tori and nilmanifolds and in this paper we will extend some of these results to infra-nilmanifolds.\n\n\\medskip\n\nIn the first two sections, we will give an introduction to the theory of infra-nilmanifolds and to Nielsen periodic point theory. Subsequently, we will prove that infra-nilmanifolds are essentially reducible to the GCD (Theorem \\ref{thmessredgcd}) and essentially toral (Theorem \\ref{thmesstor}). These structural properties also hold for maps on nilmanifolds and tori and are a necessary first step in order to be able to compute $NF_n(f)$ on infra-nilmanifolds. We will also prove that for all $n$, $NF_n(f)=N(f^n)$ for a large class of maps on infra-nilmanifolds, namely the class of semi-hyperbolic maps (Theorem \\ref{thmNF=Nf)}).\n\n\\medskip\n\nBecause maps on tori and nilmanifolds are weakly Jiang, the computation of $NF_n(f)$ turns out to be very doable on these manifolds. On infra-nilmanifolds, however, this is not the case. In the penultimate section, we will therefore develop a method that makes the computation of $NF_n(f)$ easier. Sometimes, though, the computation still can be quite hard, but this might be inherent to the problem, because, by applying our method to several examples, it becomes apparent that the expression for $NF_n(f)$ can be very complex.\n\n\\medskip\n\nIn the last section, we will look specifically at affine maps on infra-nilmanifolds. In general, these maps behave better than arbitrary continuous maps. In Theorem \\ref{theorem uiteindelijk boost inessentieel}, we prove that, under very mild conditions, these affine maps can only be Wecken (which means that $\\#\\Fix(f)=N(f)$) at every level if and only if they are semi-hyperbolic. This allows us to determine exactly for which maps $NF_n(f)=N(f^n)$, for all $n$ (Corollary \\ref{cor NF=Nf}). \n\\section{Infra-nilmanifolds}\nLet $G$ be a connected, simply connected, nilpotent Lie group. 
The group of affine transformations on $G$, $\\Aff(G)= G\\semi \\Aut(G)$, admits a natural left action on $G$:\n\\[ \\forall (g,\\alpha)\\in \\Aff(G),\\, \\forall h \\in G: \\;\\;^{(g,\\alpha)}h= g \\alpha(h).\\]Define $p:\\Aff(G)=G\\semi \\Aut(G) \\to \\Aut(G)$ as the natural projection onto the second factor of the semi-direct product. \n\n\\begin{definition} A subgroup $\\Gamma \\subseteq \\Aff(G)$ is called \\textbf{almost-crystallographic} if and only if $p(\\Gamma)$ is finite and $\\Gamma\\cap G$ is a uniform and discrete subgroup of $G$. The finite group $F=p(\\Gamma)$ is called the holonomy group of $\\Gamma$.\n\\end{definition}\n\nWith these properties, the natural action of such a group $\\Gamma$ on $G$ becomes properly discontinuous and cocompact. Moreover, when $\\Gamma$ is torsion-free, this action is free, which makes the resulting quotient space $\\Gamma\\backslash G$ a compact manifold. This idea leads to the following definition. \n\n\\begin{definition}\nA torsion-free almost-crystallographic group $\\Gamma\\subseteq \\Aff(G) $ is called an \\textbf{almost-Bieberbach group}, and the corresponding manifold $\\Gamma\\backslash G$ is called an \\textbf{infra-nilmanifold} (modeled on $G$). \n\\end{definition} \n\nWhen the holonomy group is trivial, $\\Gamma$ can be considered to be a lattice in $G$ and the corresponding manifold $\\Gamma\\backslash G$ is a nilmanifold. When $G$ is abelian, i.e. $G$ is isomorphic to $\\R^n$, $\\Gamma$ will be called a Bieberbach group and $\\Gamma\\backslash G$ a compact flat manifold. When $G$ is abelian and the holonomy group of $\\Gamma$ is trivial, then $\\Gamma\\backslash G$ is a torus. Hence, infra-nilmanifolds are a natural generalization of nilmanifolds and tori.\n\n\\medskip\n\nNow, define the semigroup $\\aff(G)=G\\semi \\Endo(G)$. Note that $\\aff(G)$ acts on $G$ in a similar way as $\\Aff(G)$:\\[ (d,D): \\; G \\rightarrow G:\\; h \\mapsto d D(h).\\]The elements of this semigroup will be called affine maps, since $\\aff(G)$ is merely a generalization of the semigroup of affine maps $\\aff(\\R^n)$ to the nilpotent case. One of the main advantages of working with infra-nilmanifolds, is the fact that every continuous map lies in the same homotopy class as a map induced by such an affine map with similar properties. These maps are often easier to handle and are therefore ideal to use in proving several theorems. This strategy will be often used throughout this paper, for example in the last section of this paper.\n\n\\begin{theorem}[K.B.\\ Lee \\cite{lee95-2}]\n\\label{leemaps} Let $G$ be a connected and simply connected nilpotent Lie group and suppose that $\\Gamma, \\Gamma'\\subseteq \\Aff(G)$ are two almost-crystallographic groups modeled on $G$. \nThen for any homomorphism $\\varphi: \\Gamma\\rightarrow \\Gamma'$ there \nexists an element $ (d, D)\\in \\aff(G)$ such that \n\\[ \\forall \\gamma \\in \\Gamma: \\; \\varphi(\\gamma) (d,D) = (d,D) \\gamma.\\] \n\\end{theorem}\n\nWe can consider the equality $ \\varphi(\\gamma) (d,D) = (d,D) \\gamma$ in $\\aff(G)$, since $\\Aff(G)$ is a subgroup of $\\aff(G)$. 
With this equality in mind, when $\\Gamma$ and $\\Gamma'$ are torsion-free, it is easy to see that the affine map $(d,D)$ induces a well-defined map between infra-nilmanifolds:\\[\\overline{(d,D)}: \\Gamma \\backslash G \\rightarrow \\Gamma' \\backslash G: \\; \\Gamma h \\rightarrow \\Gamma' d D(h),\\]\nwhich exactly induces the morphism $\\varphi$ on the level of the fundamental groups.\n\n\\medskip\n\nOn the other hand, if we choose an arbitrary map $f:\\Gamma\\backslash G\\ra \\Gamma'\\backslash G$ between two infra-nilmanifolds and choose a lifting $\\tilde{f}:G \\to G$ of $f$, then there exists a morphism $\\tilde{f}_\\ast:\\Gamma\\to \\Gamma'$ such that $\\tilde{f}_\\ast(\\gamma) \\circ \\tilde{f} = \\tilde{f}\\circ \\gamma$, for all $\\gamma\\in \\Gamma$. By Theorem~\\ref{leemaps}, an affine map $(d,D)\\in \\aff(G)$ exists which also satisfies $\\tilde{f}_\\ast(\\gamma) \\circ (d,D)= (d,D)\\circ \\gamma$ for all $\\gamma\\in \\Gamma$. Therefore, the induced map $\\overline{(d,D)} $ and $f$ are homotopic. Hence, whenever we are studying homotopy-invariant properties for maps on infra-nilmanifolds, we are free to replace an arbitrary map $f$ by its affine counterpart.\n\n\\medskip\n\nThe map $(d,D)$ will be called an \\textbf{affine homotopy lift} of $f$, while we will denote the map $\\overline{(d,D)}$ as an \\textbf{affine map on an infra-nilmanifold}.\n\n\\medskip\n\nIt might be noteworthy to mention that $(d,D)$ is not unique in the sense that it depends on the choice of lifting $\\tilde{f}:G\\to G$. For example, from \\cite{lee95-2} we know that $D$ is only determined up to an inner automorphism of $G$. \n\n\\medskip \nIn \\cite{ll09-1}, J.B. Lee and K.B. Lee gave a formula to compute Lefschetz and Nielsen numbers on infra-nilmanifolds. Pick an infra-nilmanifold $\\Gamma\\backslash G$, determined by the almost-Bieberbach group $\\Gamma\\subseteq \\Aff(G)$ and let $F\\subseteq \\Aut(G)$ denote the holonomy group of $\\Gamma$. We will write $\\lie$ for the Lie algebra of $G$. Because $G$ is a nilpotent, connected and simply connected Lie group, the map $\\exp:\\lie\\to G$ will be a diffeomorphism. Therefore, $\\Endo(G)$ and $\\Endo(\\lie)$ are isomorphic and for every endomorphism $A\\in \\Endo(G)$, we have a unique $A_\\ast\\in \\Endo(\\lie)$, which is determined by the relation $A \\circ \\exp= \\exp \\circ A_\\ast$. This $A_\\ast$ will be called the differential of $A$. Of course, $A$ is invertible if and only if $A_\\ast$ is invertible.\n\\begin{theorem}[J.B.\\ Lee and K.B.\\ Lee \\cite{ll09-1}] \\label{LeeForm}Let $\\Gamma\\subseteq \\Aff(G)$ be an almost-Bieberbach group with holonomy group $F\\subseteq \\Aut(G)$. Let $M=\\Gamma\\backslash G$ be the associated infra-nilmanifold. If \n $f:M\\ra M$ is a map with affine homotopy lift $(d,D)$, then \n\\[L(f)=\\frac{1}{\\# F}\\sum_{A \\in F}\\det(I-A_\\ast D_\\ast)\\]\nand\n\\[N(f)=\\frac{1}{\\# F}\\sum_{A \\in F}|\\det(I-A_\\ast D_\\ast)|.\\]\n\\end{theorem}\n\nWe will now list a couple of properties that we will need in this paper.\n\n\\medskip\n\nThe following lemma can be found in \\cite{ddm05-1}. We have adapted the formulation very slightly, but in essence it is the same lemma and it can be proved in a similar way.\n\n\\begin{lemma}\\label{Lemma Bram}\nSuppose that $F\\subset \\GL_n(\\C)$ is a finite group, $D\\in \\C^{n\\times n}$ and for all $A\\in F$, there exists a $B\\in F$, such that $DA=BD$. Take an arbitrary element $A_1\\in F$ and build the sequence $(A_j)_{j\\in \\N_0}$, such that $DA_i=A_{i+1}D$, for all $i$. 
Then,\n\\begin{enumerate}\n\\item $\\forall j\\in \\N_0: \\det(I-A_1D)=\\det(I-A_jD).$\n\\item $\\exists l, j\\in \\N_0: (A_jD)^l=D^l.$\n\\end{enumerate}\n\\end{lemma}\n\nIn \\cite{dp11-1}, we can find the following theorems.\n\n\\begin{theorem}\\label{thmRInf}\nLet $\\Gamma\\subseteq \\Aff(G)$ be an almost-Bieberbach group with holonomy group $F\\subseteq \\Aut(G)$. Let $M=\\Gamma\\backslash G$ be the associated infra-nilmanifold. If \n $f:M\\to M$ is a map with affine homotopy lift $(d,D)$, then\n\\[\n R(f)=\\infty \\iff \\exists A \\in F \\text{ such that } \\det(I - A_\\ast D_\\ast)=0.\n\\]\n\\end{theorem}\n\n\\begin{theorem}\\label{thmN=R}\nLet $f$ be a map on an infra-nilmanifold, such that $R(f)<\\infty$, then $N(f)=R(f)$.\n\\end{theorem}\n\nWe will also mention the following definition.\n\n\\begin{definition}\nLet $M$ be an infra-nilmanifold and $f:M\\to M$ be a continuous map, with $(d,D)$ as an affine homotopy lift. We say that $f$ is a \\textbf{hyperbolic} map if $D_\\ast$ has no eigenvalues of modulus $1$. We say that $f$ is \\textbf{semi-hyperbolic} if $D_\\ast$ has no eigenvalues which are roots of unity.\n\\end{definition}\n\nThis class of (semi-)hyperbolic maps contains for example the class of expanding maps and the class of Anosov diffeomorphisms.\n\n\\section{Nielsen periodic point theory}\n\nIn this section, we will mostly follow the outline of \\cite{jm06-1}. Many of the difficult aspects of Nielsen periodic point theory will disappear when working on infra-nilmanifolds. Therefore, we will try to present all the necessary results in a swift way and skip most of the proofs and unnecessary details. More information about Nielsen periodic point theory in general can be found in \\cite{hk97-1},\\cite{heat99-1},\\cite{jm06-1} or \\cite{jian83-1}.\n\n\\medskip\n\nWhen $f^n(x)=x$, we call $x$ a periodic point. If $n$ is the smallest integer for which this holds, $x$ is a periodic point of pure period $n$. We can apply similar techniques as in Nielsen fixed point theory to achieve Nielsen periodic point theory. Just like Nielsen fixed point theory divides $\\Fix(f)$ into different fixed point classes, Nielsen periodic point theory divides $\\Fix(f^n)$ into different fixed point classes, for all $n>0$ and looks for relations between fixed point classes on different levels. This idea is covered in the following definition.\n\n\\begin{definition}\\label{defboost}\nLet $f:X \\to X$ be a self-map. If $\\F_k$ is a fixed point class of $f^k$, then $\\F_k$ will be contained in a fixed point class $\\F_{kn}$ of $(f^k)^n$, for all $n$. We say that $\\F_k$ \\textbf{boosts} to $\\F_{kn}$. On the other hand, we say that $\\F_{kn}$ \\textbf{reduces} to $\\F_k$.\n\\end{definition}\n\nThis idea of boosting a fixed point class also has a more algebraic interpretation. Fix a lifting $\\tilde{f}$ of $f$ to the universal covering $(\\tilde{X},p)$ of $X$. Then $\\tilde{f}$ induces a homomorphism $f_\\ast$ on the group of covering transformations by using the following relation:$$f_\\ast (\\alpha)\\circ \\tilde{f}=\\tilde{f}\\circ \\alpha.$$Let us denote the set of Reidemeister classes of $f$ by $\\mathcal{R}(f)$. Any element of this set will be denoted by the Reidemeister class $[\\alpha]$, where $\\alpha$ is the coordinate of a lifting $\\alpha\\circ \\tilde{f}$. Let $k,n$ be integers, such that $k|n$. 
We then define the following boosting function:\n$$\\gamma_{nk}: \\mathcal{R}(f^k)\\to \\mathcal{R}(f^n):[\\alpha]\\mapsto [\\alpha f_\\ast^k(\\alpha)f_\\ast^{2k}(\\alpha)\\dots f_\\ast^{n-k}(\\alpha)].$$The idea behind this boosting function is the fact that $$(\\alpha\\circ \\tilde{f}^k)^\\frac{n}{k}=\\alpha f_\\ast^k(\\alpha)f_\\ast^{2k}(\\alpha)\\dots f_\\ast^{n-k}(\\alpha)\\circ \\tilde{f}^n.$$This equality immediately shows that if $\\gamma_{nk}([\\beta])=[\\alpha]$, the fixed point class $p\\Fix(\\beta\\circ \\tilde{f}^k)$ will be contained in the fixed point class $p\\Fix(\\alpha \\circ \\tilde{f}^n)$. Hence, our algebraic definition is consistent with Definition \\ref{defboost}.\n\n\\medskip\n\nIn this paper, we will often make a slight abuse of notation. Whenever we use the expression $[\\alpha]_k$, we will simultaneously consider the Reidemeister class $[\\alpha]\\in \\mathcal{R}(f^k)$ and the fixed point class $p(\\Fix(\\alpha\\circ \\tilde{f}^k))$. We will also often switch between both of these interpretations, whenever necessary. Note that both interpretations are essentially the same due to the one-to-one correspondence between Reidemeister classes and fixed point classes. This also means that we make a distinction between empty fixed point classes that come from different Reidemeister classes. In a certain sense, this approach coincides with the idea of \\textit{labeled fixed point classes} in \\cite{jian83-1}.\n\n\\medskip \n\nNote that this description only depends on the homomorphism $f_*$. So, when $f$ and $g$ induce the same morphism on the group of covering transformations, then the structure of their periodic point classes will be the same. More specifically, this means that the whole description above (and everything that will follow) is homotopy-invariant. \n\n\\medskip \n\nWhen $k|m|n$, an easy computation shows that $\\gamma_{nm}\\gamma_{mk}=\\gamma_{nk}$. Also, $\\gamma_{nn}=\\Id_{\\mathcal{R}(f^n)}$.\n\n\\medskip\n\nActually, to give the precise definition of Nielsen periodic point theory, we need a little more than this definition in terms of classes, namely a definition in terms of orbits. Define the following map:$$\\mathcal{R}_f:\\mathcal{R}(f^n)\\to \\mathcal{R}(f^n):[\\alpha]\\mapsto[f_\\ast(\\alpha)].$$One can easily see that this map is well-defined and that $\\mathcal{R}_f^n=\\Id_{\\mathcal{R}(f^n)}$. Furthermore, by using the commutativity property of the fixed point index on the maps $f$ and $f^{n-1}$, it is clear that this map preserves the index of the associated fixed point classes. By identifying $[\\alpha]$ with $[f_\\ast(\\alpha)]$ in $\\mathcal{R}(f^n)$, for all $\\alpha$, we find the quotient set $\\mathcal{OR}(f^n)$ of orbits of Reidemeister classes. Since the index is preserved in every orbit, it makes sense to talk about essential and inessential orbits. 
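\n\nAs a simple illustration, consider the torus $T^m=\\R^m\/\\Z^m$ and a map $f$ whose induced morphism on the group of covering transformations $\\Z^m$ is given by an integer matrix $M$, so that $f_\\ast^k=M^k$. In this case the Reidemeister classes of $f^n$ can be identified with the cosets of $\\Z^m\/(I-M^n)\\Z^m$ and, written additively, the boosting function becomes$$\\gamma_{nk}:[\\alpha]\\mapsto \\left[(I+M^k+M^{2k}+\\dots+M^{n-k})\\alpha\\right],$$while $N(f^n)=|\\det(I-M^n)|$ whenever this determinant is non-zero.\n\n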
One can also notice that boosting functions make sense in terms of orbits, so we can talk about reducible and irreducible orbits, depending on whether they have a pre-image under a boosting function or not.\n\n\n\\begin{lemma} [\\cite{jm06-1}, 5.1.13]\\label{lemessirred}\nIf $\\mathcal{A}\\in \\mathcal{OR}(f^n)$ is essential and irreducible, then this orbit contains at least $n$ periodic points of period $n$.\n\\end{lemma}\n\nThis lemma gives us the idea for the following definition.\n\n\\begin{definition}\nWe define the \\textbf{prime Nielsen-Jiang periodic number} $NP_n(f)$ as $$n \\times \\textrm{(number of irreducible essential orbits in }\\mathcal{OR}(f^n)).$$\n\\end{definition}\n\nIf $P_n(f)$ is the set of periodic points of $f$ of pure period $n$, then Lemma \\ref{lemessirred} ensures us that $NP_n(f)$ is a homotopy-invariant lower bound of $\\# P_n(f)$.\n\n\\medskip\n\nWe would also like to find a similar lower bound for $\\# \\Fix(f^n)$. Pick an arbitrary $\\mathcal{A}\\in \\mathcal{OR}(f^n)$. We define the depth $d(\\mathcal{A})$ to be the least divisor $k$ of $n$, such that $\\mathcal{A}\\in \\textrm{Im}(\\gamma_{nk})$.\n\n\\begin{definition}\nLet $n$ be a fixed positive integer. A subset $\\mathcal{PS}\\subset \\bigcup_{k|n}\\mathcal{OR}(f^k)$ is called a \\textbf{preceding system} if every essential orbit $\\mathcal{A}$ in $\\bigcup_{k|n}\\mathcal{OR}(f^k)$ is preceded by an element of $\\mathcal{PS}$. Such a preceding system is called \\textbf{minimal} if the number $\\sum_{\\mathcal{A}\\in \\mathcal{PS}}d(\\mathcal{A})$ is minimal.\n\\end{definition}\n\n\\begin{definition}\nThe \\textbf{full Nielsen-Jiang periodic number} $NF_n(f)$ is defined as $\\sum_{\\mathcal{A}\\in \\mathcal{PS}}d(\\mathcal{A})$, where $\\mathcal{PS}$ is a minimal preceding system.\n\\end{definition}\n\n\\begin{theorem}[\\cite{jm06-1}, 5.1.18]\n$NF_n(f)$ is a homotopy-invariant lower bound for the number $\\# \\Fix(f^n)$.\n\\end{theorem}\nNote that every preceding system must contain every essential irreducible orbit of $\\mathcal{OR}(f^n)$. Since every of these orbits has a depth of $n$, we know that the following inequality holds:$$\\sum_{k|n}NP_k(f)\\leq NF_n(f).$$\n\nAn important definition that gives some structure to the boosting and reducing relations is the following.\n\n\\begin{definition}\nA self-map $f:X\\to X$ will be called \\textbf{essentially reducible} if, for all $n,k$, essential fixed point classes of $f^{kn}$ can only reduce to essential fixed point classes of $f^k$. A space $X$ is called essentially reducible if every self-map $f:X\\to X$ is essentially reducible.\n\\end{definition}\n\nIt can be shown that the fixed point classes for maps on infra-nilmanifolds always have this nice structure for their boosting and reducing relations.\n\n\\begin{theorem}[\\cite{lz07-1}]\\label{thmleezhao}\nInfra-nilmanifolds are essentially reducible.\n\\end{theorem}\n\nA nice consequence of being essentially reducible, is the following lemma.\n\n\\begin{lemma}[\\cite{jm06-1}, 5.1.22]\\label{lemNPNF}\nIf $f:X\\to X$ is essentially reducible, then it has a unique minimal preceding system, namely the set of all the essential irreducible orbits in $\\bigcup_{k|n}\\mathcal{OR}(f^k)$. 
As a consequence, the following equality holds:$$\\sum_{k|n}NP_k(f)=NF_n(f).$$\n\\end{lemma}\n\nOf course, by using the M\\\"{o}bius inversion formula, we can also write$$NP_n(f)=\\sum_{k|n}\\mu\\left(\\frac{n}{k}\\right)NF_k(f),$$where $\\mu$ denotes the M\\\"{o}bius function.\n\n\\medskip\n\nAs a generalization of being essentially reducible, we can define two other structures on the boosting and reducing relations.\n\n\\begin{definition}\nA map $f:X\\to X$ is called \\textbf{essentially reducible to the greatest common divisor} (GCD) if it is essentially reducible and if for every essential fixed point class $[\\alpha]_n$ that reduces to both $[\\beta]_k$ and $[\\gamma]_l$, there exists a fixed point class $[\\delta]_d$, with $d=\\gcd(k,l)$, such that $[\\alpha]_n$ reduces to $[\\delta]_d$. If this holds for every self-map on $X$, we will say that $X$ is essentially reducible to the GCD.\n\\end{definition}\n\nAn easy consequence of this definition is the following lemma.\n\n\\begin{lemma}[\\cite{jm06-1}, 5.1.26]\\label{lemessredgcd}\nIf $f:X\\to X$ is essentially reducible to the GCD, then every essential fixed point class $[\\alpha]_n$ in $\\mathcal{R}(f^n)$ is preceded by a unique irreducible essential fixed point class $[\\beta]_k$. Moreover, $d([\\alpha]_n)=k$.\n\\end{lemma}\n\nBy the length $l([\\alpha]_n)$, we mean the minimal number $l|n$, such that $\\mathcal{R}_f^l([\\alpha]_n)=[\\alpha]_n$. Alternatively, this is the number of fixed point classes in an orbit $\\mathcal{A}\\in \\mathcal{OR}(f^n)$.\n\n\\medskip\n\nIt is immediately clear that $d([\\alpha])\\geq l([\\alpha])$, because every class in an orbit that reduces to depth $d$ will be a fixed point of the map $\\mathcal{R}_f^d$.\n\n\\begin{definition}\nA map $f:X\\to X$ is called \\textbf{essentially toral} if it is essentially reducible and if the following two conditions are fulfilled:\\begin{enumerate}\n\\item For every essential fixed point class in $\\mathcal{R}(f^n)$, the length and depth coincide.\n\\item If $[\\alpha]_n$ is essential and $\\gamma_{nk}([\\beta]_k)=\\gamma_{nk}([\\gamma]_k)=[\\alpha]_n$, then $[\\beta]_k=[\\gamma]_k$.\n\\end{enumerate}\nIf this holds for every self-map on $X$, we will say that $X$ is essentially toral.\n\\end{definition}\n\nBecause lengths and depths coincide, the following lemma follows easily.\n\\begin{lemma}[\\cite{jm06-1}, 5.1.30]\\label{lemclasses}\nIf $f:X\\to X$ is essentially toral, then $NP_n(f)$ equals the number of irreducible essential fixed point classes in $\\mathcal{R}(f^n)$.\n\\end{lemma}\nThis lemma actually tells us that if we are working on an essentially toral space, we are free to replace the orbit theory by a theory in terms of classes. By combining Lemma \\ref{lemNPNF} and Lemma \\ref{lemclasses}, the following can also be easily deduced.\n\n\\begin{corollary}\\label{coresstor}\nIf $f:X\\to X$ is essentially toral, then $NF_n(f)$ equals the number of irreducible essential fixed point classes in $\\bigcup_{k|n}\\mathcal{R}(f^k)$.\n\\end{corollary}\n\nIn \\cite{hk97-1}, the following theorem is proved.\n\n\\begin{theorem}\nNilmanifolds are essentially reducible to the GCD and essentially toral.\n\\end{theorem}\n\nNote that they actually proved a more general version of this theorem, as they showed that the theorem above also holds for solvmanifolds.\n\n\\begin{definition}\nA map $f:X\\to X$ is called \\textbf{weakly Jiang} if $N(f)=0$ or $N(f)=R(f)$. 
This means that all fixed point classes are simultaneously essential or inessential.\n\\end{definition}\n\n\\begin{theorem}[\\cite{hk97-1}, Theorem 5.1]\\label{thmhk}\nSuppose that $X$ is essentially toral and essentially reducible to the GCD. If $f:X\\to X$ is a map such that $f^n$ is weakly Jiang and $N(f^n)\\neq 0$, then $$NF_n(f)=N(f^n)$$and the same formula holds for every divisor of $n$.\n\\end{theorem}\nThe idea behind this proof is very simple. Since every fixed point class at level $n$ is essential, by Lemma \\ref{lemessredgcd}, we know that every such class is preceded by a unique irreducible essential class. On the other hand, every irreducible essential fixed point class in $\\bigcup_{k|n}\\mathcal{R}(f^k)$ has to boost essentially, since there are simply no inessential fixed point classes to boost to at level $n$. Because of these observations, there exists a bijection between the essential fixed point classes at level $n$ and the irreducible essential fixed point classes in $\\bigcup_{k|n}\\mathcal{R}(f^k)$. Corollary~\\ref{coresstor} then proves the theorem.\n\n\\medskip\n\nIt is known that every map on a nilmanifold is weakly Jiang, due to the result of Anosov (\\cite{anos85-1}) or Fadell and Husseini (\\cite{fh86-1}). Consequently, Theorem \\ref{thmhk} holds for nilmanifolds. Unfortunately, not every map on an infra-nilmanifold is weakly Jiang. Even on the Klein bottle, the smallest example of an infra-nilmanifold which is not a nilmanifold, it is possible to find counterexamples.\n\n\\begin{example}\\label{Example not weakly Jiang}\nSuppose we have the following presentation of the Klein bottle group: $$\\langle\\alpha,\\beta \\mid \\alpha \\beta =\\beta^{-1} \\alpha\\rangle.$$Let $k\\neq 1$ be odd. Now, let $f_*:\\alpha\\mapsto \\alpha^k, \\textrm{ } \\beta\\mapsto \\beta^{-1}$ be the induced morphism for a map $f$ on the Klein bottle. One can check that this morphism indeed induces a map on the Klein bottle, for which it holds that $R(f)=\\infty$, while $N(f)\\neq 0$.\n\\end{example}\nAn algebraic argument for the fact that maps on nilmanifolds are weakly Jiang, while maps on infra-nilmanifolds are generally not, can be found by combining Theorem \\ref{LeeForm} with Theorem~\\ref{thmRInf} and Theorem \\ref{thmN=R}. When working on nilmanifolds, the formula in Theorem \\ref{LeeForm} reduces to a single determinant. By Theorem \\ref{thmRInf}, we know that this determinant will be equal to $0$ (and hence $N(f)=0$) if and only if $R(f)=\\infty$. By combining this fact with Theorem \\ref{thmN=R}, it follows that all maps on nilmanifolds are weakly Jiang. When working on infra-nilmanifolds, the sum generally consists of multiple determinants. Therefore, it is possible that some of these determinants are $0$ and some are not. If this is the case, an argument similar to the one before will show that the map is not weakly Jiang, as $R(f)=\\infty$, while $N(f)\\neq 0$.\n\\section{Structure on the periodic point classes of infra-nilmanifolds}\nIn this section, we will show that infra-nilmanifolds are both essentially reducible to the GCD and essentially toral. As a result of these structural properties, we will be able to show a theorem similar to Theorem \\ref{thmhk} for semi-hyperbolic maps on infra-nilmanifolds.\n\n\\medskip\n\nWe will prove both of these structural properties for affine maps on infra-nilmanifolds and, because the theory described in the previous section is homotopy-invariant, this will be sufficient. As already mentioned before, affine maps are often much easier to deal with. 
This fact is exemplified in the following proposition, which can be found in \\cite{fl13-1}.\n\n\\begin{proposition}\\label{propaff}\nIf $\\overline{(d,D)}:M\\to M$ is an affine map on an infra-nilmanifold, then every non-empty fixed point class is path-connected and\n\\begin{enumerate}\n\\item every essential fixed point class of $\\overline{(d,D)}$ consists of exactly one point.\n\\item every non-essential fixed point class of $\\overline{(d,D)}$ is empty or consists of infinitely many points.\n\\end{enumerate} \n\\end{proposition}\n\nNow we can prove the two main theorems of this section.\n\n\\begin{theorem}\\label{thmessredgcd}\nInfra-nilmanifolds are essentially reducible to the GCD.\n\\end{theorem}\n\\begin{proof}\nBy Theorem \\ref{thmleezhao}, we already know that infra-nilmanifolds are essentially reducible.\n\\medskip\nIt is known from \\cite{ll06-1} that every almost-Bieberbach group $\\Gamma$ has a fully characteristic subgroup $\\Lambda$ of finite index, such that $\\Lambda\\subset G$. Therefore, every infra-nilmanifold of the form $\\Gamma\\backslash G$ is finitely covered by a nilmanifold $\\Lambda\\backslash G$, such that every continuous map $f:\\Gamma\\backslash G\\to \\Gamma\\backslash G$ can be lifted to a map $\\overline{f}:\\Lambda\\backslash G\\to \\Lambda\\backslash G$. All in all, we have the following commuting diagram, where $\\beta_n$ is a covering transformation and $\\overline{\\beta}_n$ is the natural projection of $\\beta_n$ into $\\Gamma\/ \\Lambda$.\n\\begin{displaymath}\n \\xymatrix{ G \\ar[dd]_p \\ar[dr]_{p'} \\ar[rrrrr]^{\\beta_n \\tilde{f}^n}& & & & & G \\ar[dd]^{p} \\ar[dl]^{p'} \\\\\n & \\Lambda\\backslash G \\ar[rrr]^{\\overline{\\beta}_n\\overline{f}^n} \\ar[dl]_{\\overline{p}} & & & \\Lambda\\backslash G \\ar[dr]^{\\overline{p}} & \\\\\n \\Gamma\\backslash G \\ar[rrrrr]^{f^n}& & & & & \\Gamma\\backslash G }.\n\\end{displaymath} \n\nSuppose that $g$ is the affine map on $\\Gamma\\backslash G$ that is induced by an affine homotopy lift $\\tilde{g}$ of $f$. Let $p(\\Fix(\\beta_n\\tilde{g}^n))$ be an essential fixed point class on level $n$, such that, for $r,s|n$, this fixed point class reduces to $p(\\Fix(\\beta_r\\tilde{g}^r))$ and $p(\\Fix(\\beta_s\\tilde{g}^s))$.\n\n\\medskip\n\nBecause of Proposition \\ref{propaff}, we know that there exists $x\\in \\Fix(g^n)$, such that all these fixed point classes are equal to the set $\\{x\\}$. The fixed point index is a local property and a covering map is a local homeomorphism, hence, the fixed point class $p'(\\Fix(\\beta_n\\tilde{g}^n))$ is also essential. By using Proposition \\ref{propaff} again, we know this fixed point class will consist of one point, namely a $\\overline{x}\\in \\overline{p}^{-1}(x)$. 
By a similar reasoning, there will exist $\\gamma_r,\\gamma_s \\in \\Gamma$ and accordingly, $\\overline{\\gamma_r},\\overline{\\gamma_s} \\in \\Gamma \/\\Lambda$ such that $$p'(\\Fix(\\beta_r\\tilde{g}^r))=\\{\\overline{\\gamma_r}\\cdot \\overline{ x}\\}\\textrm{ and }p'(\\Fix(\\beta_s\\tilde{g}^s))=\\{\\overline{\\gamma_s}\\cdot \\overline{ x}\\}.$$An easy calculation then shows that $$p'(\\Fix(\\gamma_r^{-1}\\beta_r g_\\ast^r(\\gamma_r)\\tilde{g}^r))=\\{\\overline{x}\\}\\textrm{ and }p'(\\Fix(\\gamma_s^{-1}\\beta_sg_\\ast^s(\\gamma_s)\\tilde{g}^s))=\\{\\overline{x}\\}.$$This actually means that if we choose good representatives in the Reidemeister classes of $[\\beta_r]_r$ and $[\\beta_s]_s$, $p'(\\Fix(\\beta_n\\tilde{g}^n))$ will reduce to both $p'(\\Fix(\\beta_r\\tilde{g}^r))$ and $p'(\\Fix(\\beta_s\\tilde{g}^s))$ on our nilmanifold $\\Lambda\\backslash G$. Since nilmanifolds are known to be essentially reducible to the GCD, there exists a $\\beta_d$, with $d=\\gcd(r,s)$, such that $p'(\\Fix(\\beta_r\\tilde{g}^r))$ and $p'(\\Fix(\\beta_s\\tilde{g}^s))$ both reduce to $p'(\\Fix(\\beta_d\\tilde{g}^d))$. By applying $\\overline{p}$ to this fixed point class, the statement is proved.\n\\end{proof}\n\n\\begin{theorem}\\label{thmesstor}\nInfra-nilmanifolds are essentially toral.\n\\end{theorem}\n\\begin{proof}\nAgain, we already know that infra-nilmanifolds are essentially reducible and again, by homotopy-invariance, it suffices to prove this theorem for affine maps $g$.\n\n\\medskip\n\nLet $[\\alpha]_n$ be an essential fixed point class of $g^n$. Since we already know that $d([\\alpha]_n)\\geq l([\\alpha]_n)$, we only need to prove that the strict inequality is impossible. So, suppose that $d=d([\\alpha]_n)> l([\\alpha]_n)=l$. Because of Proposition \\ref{propaff}, we know that there exists $x\\in \\Fix(g^n)$ such that $\\{x\\}$ is the fixed point class associated to $[\\alpha]_n$. Furthermore, $\\{g(x)\\}$ will be the fixed point class associated to $\\mathcal{R}_g([\\alpha]_n)$. By definition and because there is only one fixed point in each essential fixed point class, $g^l(x)=x$. Therefore, $[\\alpha]_n$ reduces to a fixed point class on level $l$, which is a contradiction to the fact that $d>l$. This proves the first condition.\n\n\\medskip\n\nIf $[\\beta]_k$ and $[\\gamma]_k$ are both boosted to $[\\alpha]_n$, then we know that they are both essential fixed point classes. Hence, they both have the set $\\{x\\}$ as associated fixed point class, which means that $[\\beta]_k=[\\gamma]_k$. This proves the second condition of essential torality.\n\\end{proof}\n\nNow we will use these newly obtained structural properties for infra-nilmanifolds to establish a few results concerning Nielsen periodic points. We start with the following definition.\n\n\\begin{definition}\nWe say that an essential fixed point $[\\alpha]_k$ is \\textbf{(in)essentially boosted to level $n$}, if $[\\alpha]_k$ is boosted to an (in)essential fixed point class $[\\beta]_{n}$. \n\\end{definition}\n\nLet us denote the set of all irreducible fixed point classes which are inessentially boosted to level $n$ for a continuous self-map $f$ by $IIB_n(f)$. 
Note that this is a subset of $\\bigcup_{k|n}\\mathcal{R}(f^k)$, since this set contains all fixed point classes on all levels that will boost to level $n$.\n\n\\begin{proposition}\\label{propIIB}\nWhenever a map $f$ is essentially reducible to the GCD and essentially toral, we have that\n$$NF_n(f)=N(f^n)+\\#IIB_n(f).$$\n\\end{proposition}\n\\begin{proof}\nWhen a map $f$ is essentially toral, we know by Corollary \\ref{coresstor} that $NF_n(f)$ equals the number of irreducible essential classes in $\\bigcup_{k|n}\\mathcal{R}(f^k)$. Now, pick an arbitrary irreducible essential class. We can distinguish two disjoint cases. \n\n\\medskip\n\nOn the one hand, suppose this class boosts essentially to level $n$. As $f$ is essentially reducible to the GCD, we can apply Lemma \\ref{lemessredgcd} and we know that every essential fixed point class reduces to a unique irreducible essential fixed point class. This means that there is a bijection between the irreducible essential classes that are essentially boosted to level $n$ and the essential fixed point classes of $\\mathcal{R}(f^n)$.\n\n\\medskip\n\nIf, on the other hand, our class boosts inessentially to level $n$, it belongs to $IIB_n(f)$. Since both cases are disjoint, the equality follows.\n\\end{proof}\n\nIt is quite easy to see that this proposition is a generalization of Theorem \\ref{thmhk}. In fact the proof is a slightly adapted version where we take inessential boosting into account.\n\n\\begin{theorem}\\label{thmNF=Nf)}\nWhen $f$ is a semi-hyperbolic map on an infra-nilmanifold, then for all $n>0$ $$NF_n(f)=N(f^n).$$\n\\end{theorem}\n\\begin{proof}\nSuppose that $(d,D)$ is an affine homotopy lift of $f$. By combining Theorem \\ref{thmRInf} and Theorem \\ref{thmN=R} we know that every fixed point class on level $n$ is essential if and only if for all $A\\in F$ (where $F$ is the holonomy group of our infra-nilmanifold),$$\\det(I-A_\\ast D_\\ast^n)\\neq 0.$$By Lemma \\ref{Lemma Bram}, we know that there exists $B \\in F$ and an integer $l$, such that $$(B_\\ast D_\\ast^n)^l=D_\\ast^{ln} \\textrm{ and } \\det(I-A_\\ast D_\\ast^n)=\\det(I-B_\\ast D_\\ast^n).$$Note that $\\det(I-B_\\ast D_\\ast^n)=0$ implies that $B_\\ast D_\\ast^n$ has an eigenvalue $1$, but this would mean that $D_\\ast^{ln}$ had an eigenvalue $1$, which is in contradiction with the fact that $f$ is semi-hyperbolic. Therefore, we know that every fixed point class on level $n$ is essential, which implies that $IIB_n(f)$ is the empty set. The theorem then follows from Proposition \\ref{propIIB}.\n\\end{proof}\n\nNote that the proof of this theorem actually also proves the following proposition, since we proved that every fixed point class on every level is essential.\n\n\\begin{proposition}\nWhen $f$ is a semi-hyperbolic map on an infra-nilmanifold, then for all $n>0$, $f^n$ is a weakly Jiang map.\n\\end{proposition}\n\nWith this proposition in mind, one can easily see that Theorem \\ref{thmNF=Nf)} is a special case of Theorem \\ref{thmhk}.\n\n\\medskip\n\nLater on, in the last section, we will show, under mild conditions, that semi-hyperbolic maps are the only maps for which a non-trivial equality $NF_n(f)=N(f^n)$ holds.\n\n\\medskip\n\nTheorem \\ref{thmNF=Nf)} actually has a nice corollary in the area of dynamical zeta functions. By $N_f(z)$, we mean the Nielsen zeta function, as defined in \\cite{fels00-2} ,\\cite{fels88-1} or \\cite{fp85-1}. 
In \\cite{fels00-2}, the following definition of the \\textbf{minimal dynamical zeta function} can be found: $$NF_f(z)=\\exp\\left(\\sum_{k=1}^\\infty \\frac{NF_k(f)z^k}{k}\\right).$$\n\n\nWe now have the following corollary.\n\n\\begin{corollary}\nLet $f$ be a semi-hyperbolic map on an infra-nilmanifold, then $N_f(z)=~NF_f(z)$.\n\\end{corollary}\n\nBy using the main result of \\cite{dd13-2}, which states that Nielsen zeta functions are rational for self-maps on infra-nilmanifolds, we can also conclude the following.\n\n\\begin{corollary}\nLet $f$ be a semi-hyperbolic map on an infra-nilmanifold, then $NF_f(z)$ is a rational function.\n\\end{corollary}\n\n\\section{A method for computing $NF_n(f)$}\nIn theory, we are now capable to compute $NF_n(f)$, due to Proposition \\ref{propIIB}. By using the standard formula for Nielsen numbers for maps on infra-nilmanifolds (Theorem \\ref{LeeForm}), the computation of $N(f^n)$ becomes very simple and therefore, the only thing left to check is how many fixed point classes lie in $IIB_n(f)$. \\medskip\n\nIn some cases, for example for semi-hyperbolic maps, the computation of $\\# IIB_n(f)$ becomes trivial. However, in a more general setting, this number can be a very tedious thing to compute. In this section, we will try to develop a method to make this computation a bit easier.\n\n\\subsection{$\\sim_f$-equivalence classes}\nWe start this section with the following definition.\n\\begin{definition}\nLet $f: \\Gamma\\backslash G \\to \\Gamma\\backslash G$ be a continuous map, such that $F$ is the holonomy group of $\\Gamma$. We will say that $A, B \\in F$ are \\textbf{$f$-conjugated}, if there exist $a,b\\in G$ and $\\gamma\\in \\Gamma$ such that $(a,A)$ and $(b, B)$ are elements of $\\Gamma$ and $$\\gamma\\circ (a,A) \\circ f_*(\\gamma^{-1})=(b,B).$$We will write $A\\sim_f B.$\n\\end{definition}\n\nAn alternative for this definition is given in the following lemma. In general, the definition will be more useful when one quickly wants to find elements that are $f$-conjugated. The lemma below is often more useful when it comes to finding properties of the $\\sim_f$-relation. \n\\begin{lemma}\\label{lemma alternatief definitie}\nLet $f: \\Gamma\\backslash G \\to \\Gamma\\backslash G$ be a continuous map, such that $F$ is the holonomy group of $\\Gamma$. Then $A\\sim_f B$ if and only if for all $(a,A)\\in \\Gamma$, there exist $(b,B), \\gamma \\in \\Gamma$, such that$$\\gamma\\circ (a,A) \\circ f_*(\\gamma^{-1})=(b,B).$$\n\\end{lemma} \n\\begin{proof}\nOne direction is obvious. For the other direction, pick an arbitrary $(a,A)\\in \\Gamma$ and suppose that $A\\sim_f B$. This means that there exist $a_0, b_0 \\in G$ and $\\gamma_0 \\in \\Gamma$, such that $(a_0,A),(b_0,B) \\in \\Gamma$ and $$\\gamma_0\\circ (a_0,A) \\circ f_*(\\gamma_0^{-1})=(b_0,B).$$Then $$\\gamma_0\\circ (a,A) \\circ f_*(\\gamma_0^{-1})=\\left(\\gamma_0\\circ (a_0,A) \\circ f_*(\\gamma_0^{-1})\\right) \\circ \\left(f_*(\\gamma_0) \\circ (A^{-1}(a_0^{-1}a), \\Id) \\circ f_*(\\gamma_0^{-1})\\right).$$As $\\Gamma \\cap G$ is a normal divisor of $\\Gamma$, there exists a $(c,\\Id)\\in \\Gamma\\cap G$, such that $$\\gamma_0\\circ (a,A) \\circ f_*(\\gamma_0^{-1})=(b_0,B)\\circ (c,\\Id).$$\n\\end{proof}\n\nA simple consequence of the previous lemma is the following.\n\\begin{corollary}\n$\\sim_f$ is an equivalence relation.\n\\end{corollary}\n\\begin{proof}\nThe fact that this relation is reflexive and symmetric is easy to see, so the only thing left to prove is the transitivity. 
Suppose that $A\\sim_f B$ and $B\\sim _f C$. By definition, there exist $a, b \\in G$ and $\\gamma_1 \\in \\Gamma$, such that $(a,A), (b,B)\\in \\Gamma$ and $$\\gamma_1\\circ (a,A) \\circ f_*(\\gamma_1^{-1})=(b,B).$$By Lemma \\ref{lemma alternatief definitie}, we know there exist $\\gamma_2, (c,C) \\in \\Gamma$, such that $$\\gamma_2\\circ (b,B) \\circ f_*(\\gamma_2^{-1})=(c,C).$$By combining both equations, we see$$(\\gamma_2 \\circ \\gamma_1)\\circ (a,A)\\circ f_*((\\gamma_2\\circ \\gamma_1)^{-1})=(c,C),$$which means that $A\\sim_f C$.\n\\end{proof}\n\nThe fact that $\\sim_f$ is an equivalence relation implies that we can partition $F$ into \\textbf{$\\sim_f$-equivalence classes}. There is an even more convenient way to look at the $\\sim_f$-equivalence classes for which we only need to work in the holonomy group $F$ of our infra-nilmanifold. This will be the ideal tool to compute these classes in a more effective way.\n\n\\begin{definition}\nBy $f_\\#(\\Id)$, we mean the set of all $A\\in F$, such that there exist $(g,\\Id), (a,A)\\in \\Gamma$, for which $f_*(g,\\Id)=(a,A)$. Analogously, $f_\\#(C)$ is the set of all $A\\in F$, such that there exist $(c,C), (a,A)\\in \\Gamma$, for which $f_*(c,C)=(a,A)$.\n\\end{definition}\n\nNote that it is known that $\\Gamma\\cap G$ is finitely generated. With this in mind, we can deduce the following lemma. \n\n\\begin{lemma}\\label{lemma f-equivalentieklassen}\nPick an arbitrary $(c,C)\\in \\Gamma$. Again, $p:\\Aff(G)=G\\semi \\Aut(G) \\to \\Aut(G)$ denotes the natural projection onto the second factor of the semi-direct product. Suppose that $(g_i,\\Id)_{i=1}^n$ is a set of generators for $\\Gamma\\cap G$. Then we can describe $f_\\#(\\Id)$ and $f_\\#(C)$ as follows:\n\\begin{itemize}\n\\item $f_\\#(\\Id)=\\grp\\{p(f_*(g_i,\\Id))\\}.$\n\\item $f_\\#(C)=p(f_*(c,C))f_\\#(\\Id)=f_\\#(\\Id)p(f_*(c,C)).$\n\\end{itemize}\n\n\\end{lemma}\n\\begin{proof}\nIt is clear that $f_\\#(\\Id)$ contains all elements $p(f_*(g_i,\\Id))$ and it is also clear that $f_*(\\Id)$ is precisely the set $p(f_*(\\Gamma\\cap G))=p(f_*(\\grp \\{ (g_i, \\Id)\\ \\| \\ i=1\\dots n\\}))$. As $p\\circ f_*$ is a morphism, this will be equal to $\\grp\\{p(f_*( g_i, \\Id)) \\ \\| \\ i=1\\dots n\\}$, which proves the first statement.\n\n\\medskip\n\nTake an arbitrary element of the form $(c_1, C)\\in \\Gamma$. It is clear that $p(f_*(c_1,C))\\in f_\\#(C)$. Now, an easy computation shows that $$p(f_*(c_1,C))=p(f_*(c,C))p(f_*(c,C)^{-1})p(f_*(c_1,C))=p(f_*(c,C))p(f_*(C^{-1}(c^{-1}c_1),\\Id)).$$As $p(f_*(C^{-1}(c^{-1}c_1),\\Id))\\in f_\\#(\\Id)$, the first equality of second statement is proved. The second equality can be proved in a similar way, by multiplying with $p(f_*(c,C)^{-1})p(f_*(c,C))$ on the right.\n\\end{proof}\n\nAs a side remark, note that an easy consequence of this lemma is the fact that if $p\\circ f_*:\\Gamma\\to F$ is a surjective morphism, then $f_\\#(\\Id)$ will be a normal divisor of $F$.\n\n\\medskip \n\nBy using this lemma, we can derive an easier way of determining $\\sim_f$-equivalence classes.\n\n\\begin{proposition}\nSuppose $A,B$ are elements in $F$. Then, $A\\sim_f B$ if and only if there exists a $C\\in F$, such that $B\\in CAf_\\#(C)^{-1}$. Here $f_\\#(C)^{-1}$ denotes the set of all inverses of elements in $f_\\#(C)$, or equivalently $f_\\#(C^{-1})$.\n\\end{proposition}\n\\begin{proof}\nOne direction is obvious. For the other direction, suppose that there exists a $C\\in F$, such that $B\\in CAf_\\#(C)^{-1}$. 
Then, there exist $a,c\\in G$, such that $(c,C), (a,A)\\in \\Gamma$. By Lemma~\\ref{lemma f-equivalentieklassen}, any element in $f_\\#(C)^{-1}$ will come from an element of the form $f_*(c,C)^{-1}f_*(g,\\Id)$, with $(g,\\Id)\\in \\Gamma$. As a result, we find that there exists a $b\\in G$, such that $$(c,C)(a,A)f_*(c,C)^{-1}f_*(g,\\Id)=(b,B).$$Note that $(b,B)$ will also be an element of $\\Gamma$. By multiplying both sides on the left with $(g^{-1},\\Id)$ (which is also in $\\Gamma$), we get$$ (g^{-1},\\Id)(c,C)(a,A)f_*((g^{-1},\\Id)(c,C))^{-1}=(g^{-1},\\Id)(b,B)=(g^{-1}b,B).$$This proves that $A\\sim_f B$.\n\n\\medskip\n\nIn order to see that $f_\\#(C^{-1})=f_\\#(C)^{-1}$, note that Lemma \\ref{lemma f-equivalentieklassen} tells us that $f_\\#(\\Id)$ is a group and that $f_\\#(C)=p(f_*(c,C))f_\\#(\\Id)=f_\\#(\\Id)p(f_*(c,C))$, from which this fact follows immediately, as $$f_\\#(C)^{-1}=f_\\#(\\Id)^{-1}p(f_*(c,C))^{-1}=f_\\#(\\Id)p(f_*(c,C)^{-1})=f_\\#(C^{-1}).$$\n\\end{proof}\n\n\\begin{corollary}\nThe $\\sim_f$-equivalence class of $A$ equals the set $$\\bigcup_{C\\in F} CAf_\\#(C)^{-1}.$$\n\\end{corollary}\n\n\\begin{corollary}\\label{corollary D invertible}\nLet $(d,D)$ be an affine homotopy lift of a continuous map $f$ on an infra-nilmanifold $\\Gamma\\backslash G$. When $D$ is invertible in $\\Endo(G)$, for any $C\\in F$, $f_\\#(C)$ will be a singleton.\n\\end{corollary}\n\\begin{proof}\nTake an arbitrary element $(g,\\Id)$ of $\\Gamma\\cap G$. Then $f_*(g,\\Id)\\circ (d,D)=(d,D)\\circ (g,\\Id)$ and hence also, $p(f_*(g,\\Id))\\circ D=D$. As $D$ is invertible, $p(f_*(g,\\Id))=\\Id$. As $(g,\\Id)$ was chosen arbitrarily, this means $f_\\#(\\Id)=\\{\\Id\\}$. By Lemma \\ref{lemma f-equivalentieklassen}, $f_\\#(C)$ is also a singleton. \n\\end{proof}\n\n\\subsection{Properties of $\\sim_f$-equivalence classes}\nA first sign that shows that elements in the same $\\sim_f$-equivalence class are strongly connected, can be found in the following lemma.\n\n\\begin{lemma}\\label{lemdet}\nWhen $A\\sim_f B$ and $(d,D)$ is an affine homotopy lift of $f$, then $$\\det(I-A_\\ast D_\\ast)=\\det(I-B_\\ast D_\\ast).$$\n\\end{lemma}\n\\begin{proof}\nAs $A\\sim_f B$, there exists a $(c,C) \\in \\Gamma$, such that $$(c,C)\\circ(a,A)\\circ f_*(c, C)^{-1}=(b,B).$$Of course, we can compose both sides with $(d,D)$. As a result, we get the following equality$$(c,C)\\circ(a,A)\\circ (d,D)\\circ (c,C)^{-1}=(b,B)\\circ(d,D).$$Therefore, we have $CADC^{-1}=BD$ in $\\Aut(G)$. From this, the statement follows easily.\n\\end{proof}\n\nThe following theorem is in a certain sense the heart of our computational method. It splits the formula from Theorem \\ref{LeeForm} into several parts, one for each $\\sim_f$-equivalence class. The proof is heavily influenced by the proofs in \\cite{kll05-1} and \\cite{ll06-1}.\n\n\\begin{theorem}\\label{thmNAf}\nLet $f:\\Gamma\\backslash G \\to \\Gamma\\backslash G$ be a continuous map on an infra-nilmanifold, with affine homotopy lift $(d,D)$ and holonomy group $F$. Let $(G,p)$ be a universal covering of $\\Gamma\\backslash G$, such that $\\tilde{f}$ is a reference lifting of $f$. If we fix $A \\in F$, then, the number of essential fixed point classes that can be written as $p(\\Fix((a,A)\\circ \\tilde{f}))$, which we will denote by $N_A(f)$, equals $$\\frac{1}{\\#F}\\sum_{B\\sim_f A}|\\det(I-B_\\ast D_\\ast)|.$$\n\\end{theorem}\n\\begin{proof}\nFrom now on, $\\Lambda$ will be the fully characteristic subgroup of $\\Gamma$ described in \\cite{ll06-1}. 
Similarly to Theorem \\ref{thmessredgcd}, we have the following commuting diagram:\n\n\\begin{displaymath}\n \\xymatrix{ G \\ar[dd]_p \\ar[dr]_{p'} \\ar[rrrrr]^{(a,A) \\circ \\tilde{f}}& & & & & G \\ar[dd]^{p} \\ar[dl]^{p'} \\\\\n & \\Lambda\\backslash G \\ar[rrr]^{\\overline{(a,A)} \\circ \\overline{f}} \\ar[dl]_{\\overline{p}} & & & \\Lambda\\backslash G \\ar[dr]^{\\overline{p}} & \\\\\n \\Gamma\\backslash G \\ar[rrrrr]^{f}& & & & & \\Gamma\\backslash G }.\n\\end{displaymath} \n\nFor $\\alpha \\in \\Gamma$, we will denote the Reidemeister class of $\\alpha$ in $\\Gamma$ by $[\\alpha]_\\Gamma$. Now, define an equivalence relation $\\sim{_\\Lambda}$ on $\\Gamma$ as follows:$$\\alpha\\sim{_\\Lambda}\\beta \\text{ iff } \\exists \\lambda\\in \\Lambda: \\beta=\\lambda\\circ \\alpha\\circ f_*(\\lambda)^{-1}.$$In a similar way as before, $[\\alpha]_\\Lambda$ will denote the equivalence class with respect to $\\sim_\\Lambda$ that contains $\\alpha$. It is straightforward to prove that $\\beta\\in [\\alpha]_\\Lambda$ implies that $p'(\\Fix(\\beta\\circ \\tilde{f}))=p'(\\Fix(\\alpha\\circ \\tilde{f}))$. In a similar way, one can prove that $\\beta\\not \\in [\\alpha]_\\Lambda$ implies that $p'(\\Fix(\\beta\\circ \\tilde{f}))\\cap p'(\\Fix(\\alpha\\circ \\tilde{f}))=\\emptyset$. Note that this can also mean that $p'(\\Fix(\\beta\\circ \\tilde{f}))$ and $ p'(\\Fix(\\alpha\\circ \\tilde{f}))$ are fixed point classes for different maps on the nilmanifold $\\Lambda\\backslash G$. Now, by labeling the possibly empty fixed point sets, we can say that there is a one-to-one relation between the sets $[\\alpha]_\\Lambda$ and the fixed point classes $p'(\\Fix(\\alpha\\circ \\tilde{f}))$ of liftings of $f$ to $\\Lambda\\backslash G$. Let us denote the set of $\\sim_\\Lambda$-equivalence classes by $\\mathcal{R}_\\Lambda(f)$ and the set of Reidemeister classes of $f$ by $\\mathcal{R}(f)$. As $\\Lambda\\leq \\Gamma$, we know that the map $$\\Psi:\\mathcal{R}_\\Lambda(f)\\to \\mathcal{R}(f): [\\alpha]_\\Lambda\\mapsto [\\alpha]_\\Gamma$$is a well-defined function, which is clearly surjective. We also know that $p(\\Fix(\\alpha\\circ \\tilde{f}))$ is essential if and only if $p'(\\Fix(\\alpha\\circ \\tilde{f}))$ is essential, because the fixed point index is a local property and $\\overline{p}$ is a local homeomorphism. Hence, when $[\\alpha]_\\Gamma$ is (in)essential, then every element in $\\Psi^{-1}([\\alpha]_\\Gamma)$ corresponds to an (in)essential fixed point class of a lift of $f$ to $\\Lambda\\backslash G$. Because of this property and the fact that $\\Psi$ is surjective and well-defined, we know that \\begin{equation}\\label{ineq}\nN_A(f)\\leq \\sum_{\\begin{subarray}{c}\n\\overline{(b,B)}\\in \\Gamma \/ \\Lambda \\\\ B\\sim_f A\n\\end{subarray}}N(\\overline{(b,B)}\\circ \\overline{f}).\n\\end{equation}\n\n\\medskip\n\n\nSuppose that $\\mathbb{F}$ is a fixed point class of the desired form. Then $$\\mathbb{F}=p(\\Fix((b,B)\\circ \\tilde{f})=[(b,B)]_\\Gamma,$$with $B\\sim_f A$.\nWhen $\\F$ is an inessential fixed point class, $[(b,B)]_\\Lambda$ corresponds to an inessential fixed point class of the map $\\overline{(b,B)}\\circ \\overline{f}$ on the nilmanifold $\\Lambda\\backslash G$. Due to the main result from \\cite{anos85-1} or \\cite{fh86-1}, we now know that every fixed point class of $\\overline{(b,B)}\\circ \\overline{f}$ is inessential, so that $N(\\overline{(b,B)}\\circ \\overline{f})=0$. This also means that $\\det(I-B_\\ast D_\\ast) =0$. 
By Lemma \\ref{lemdet} we know that $\\det(I-C_\\ast D_\\ast)=0$ for all $C\\sim_f B$, or equivalently, for all $C\\sim_f A$. This also means that $N(\\overline{(c,C)}\\circ \\overline{f})=0$. By definition, $N_A(f)$ is a non-negative integer and hence, it follows by inequality (\\ref{ineq}) that $$N_A(f)=0=\\frac{1}{\\#F}\\sum_{B\\sim_f A}|\\det(I-B_\\ast D_\\ast)|.$$\n\nNow, suppose that $\\F=[(b,B)]_\\Gamma$ is an essential fixed point class. The fact that$$N_A(f)\\leq \\sum_{\\begin{subarray}{c}\n\\overline{(b,B)}\\in \\Gamma \/ \\Lambda \\\\ B\\sim_f A\n\\end{subarray}}N(\\overline{(b,B)}\\circ \\overline{f})$$is not necessarily an equality comes from the fact that $\\Psi$ is not injective. This is due to situations where$$[(b,B)]_\\Gamma=[(c,C)]_\\Gamma \\text{, while } [(b,B)]_\\Lambda\\neq [(c,C)]_\\Lambda.$$So, in order to find $N_A(f)$, we need to find the number of elements in $\\mathcal{R}_\\Lambda(f)$ which are mapped to the same element of $\\mathcal{R}(f)$ by $\\Psi$. First, we will show that this number has $\\left|\\Gamma\/ \\Lambda\\right|$ as an upper bound. Suppose that $\\overline{\\gamma}_1=\\overline{\\gamma}_2 \\in \\Gamma\/ \\Lambda$, then $\\gamma_2=\\lambda\\circ \\gamma_1$, for $\\lambda\\in \\Lambda$. If $$(c_1,C_1)=\\gamma_1 \\circ (b,B) \\circ f_*(\\gamma_1^{-1}) \\textrm{ and } (c_2,C_2)=\\gamma_2 \\circ (b,B) \\circ f_*(\\gamma_2^{-1}),$$then an easy computation shows that $$(c_2,C_2)=\\lambda\\circ(c_1,C_1)\\circ f_*(\\lambda^{-1}),$$which means that $[(c_1,C_1)]_\\Lambda= [(c_2,C_2)]_\\Lambda$.\n\n\\medskip\n\nNow we will show that this upper bound is always attained by showing that $[(c_1,C_1)]_\\Lambda= [(c_2,C_2)]_\\Lambda$ implies that $\\overline{\\gamma}_1=\\overline{\\gamma}_2$ in $\\Gamma \/ \\Lambda$. Let $(d,D)$ be an affine homotopy lift of $f$. Suppose there exist a $\\lambda \\in \\Lambda$, such that $$(c_1,C_1)=\\gamma_1\\circ (b,B)\\circ f_*(\\gamma_1^{-1})=\\lambda\\circ (\\gamma_2\\circ (b,B)\\circ f_*(\\gamma_2^{-1}))\\circ f_*(\\lambda^{-1})=\\lambda\\circ (c_2,C_2)\\circ f_*(\\lambda^{-1}).$$As an easy consequence, $$(b,B)=(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)\\circ (b,B)\\circ f_*(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}.$$Note that $p(\\Fix((b,B)\\circ \\tilde{f}))$ is an essential fixed point class and therefore $p(\\Fix((b,B)\\circ (d,D)))$ will also be an essential fixed point class. Hence, there exists an $x\\in G$ such that $(b,B)\\circ (d,D)(x)=x$. This also means that $$(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)\\circ (b,B)\\circ f_*(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}\\circ (d,D) (x)=x,$$which implies that $(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}\\cdot x$ is also in $p(\\Fix((b,B)\\circ (d,D)))$. By Proposition \\ref{propaff}, we know that such a fixed point class is a singleton and hence, $(\\gamma_1^{-1}\\circ \\lambda\\circ \\gamma_2)^{-1}\\cdot x=x$. By the free action of $\\Gamma$ on $G$, this implies that $\\lambda\\circ \\gamma_2=\\gamma_1$.\n\n\\medskip\n\nNow, we know that $\\Psi$ maps $\\left|\\Gamma\/ \\Lambda\\right|$ different elements of $\\mathcal{R}_\\Lambda(f)$ to the element $[(b,B)]_\\Gamma$. As $\\F$ was chosen arbitrarily, we know that this holds for every essential fixed point class $[(c,C)]_\\Gamma$ in $\\mathcal{R}(f)$, for which $C\\sim_f A$. 
So, this means that $$N_A(f)=\\frac{1}{[\\Gamma:\\Lambda]}\\sum_{\\begin{subarray}{c}\n\\overline{(b,B)}\\in \\Gamma \/ \\Lambda \\\\ B\\sim_f A\n\\end{subarray}}N(\\overline{(b,B)}\\circ \\overline{f}).$$In a similar way as in the proof of Theorem 3.4 in \\cite{ll06-1}, we can now derive that$$N_A(f)=\\frac{1}{\\#F}\\sum_{B\\sim_f A}|\\det(I-B_\\ast D_\\ast)|.$$\n\\end{proof}\n\n\n\\begin{remark}\\label{remark essential of inessential}\nDuring the proof of this theorem, we actually also proved that the fixed point class $p(\\Fix((a,A)\\circ \\tilde{f})) $ is essential if and only if $\\det(I-A_\\ast D_\\ast)\\neq 0$. This is due to the fact that $p(\\Fix((a,A)\\circ \\tilde{f}))$ can be lifted to a fixed point class $p'(\\Fix((a,A)\\circ \\tilde{f}))$ with the same index. As this is a fixed point class for a map on a nilmanifold, we can use the result from \\cite{anos85-1} or \\cite{fh86-1}, which tells us that $p'(\\Fix((a,A)\\circ \\tilde{f})) $ is essential if and only if $\\det(I-A_\\ast D_\\ast)\\neq 0$.\n\\end{remark}\n\nIt might be noteworthy to mention that a fixed point class $p(\\Fix((b,B)\\circ \\tilde{f}))$ can be written as $p(\\Fix((a,A)\\circ \\tilde{f}))$ if and only if $A\\sim_f B$. So, in a certain sense, it is justified to say that $N_A(f)$ is the number of essential fixed point classes \\textbf{above the $\\sim_f$-equivalence class of $A$}. We will denote this equivalence class by $[A]$. Also note that every fixed point class above $A$ is simultaneously essential or inessential. Hence, it makes sense to talk about the (in)essential $\\sim_f$-equivalence class $[A]$. We can also generalize these notions when considering $\\sim_{f^{k}}$-equivalence classes of $A$. For these classes, we will use the notation $[A]_k$.\n\n\\medskip\n\nSome easy corollaries of Theorem \\ref{thmNAf} are the following.\n\n\\begin{corollary}\n$$N_A(f)=\\frac{\\#\\{B\\in F\\|B\\sim_{f} A\\}}{\\# F}\\cdot |\\det(I-A_\\ast D_\\ast)|.$$\n\\end{corollary}\n\\begin{proof}\nThis follows easily by combining Lemma~\\ref{lemdet} and Theorem~\\ref{thmNAf}.\n\\end{proof}\n\n\\begin{corollary}\nIf all elements of $F$ are in the same $\\sim_f$-equivalence class, then $$N(f)=N_A(f)=|\\det(I-A_\\ast D_\\ast)|=|\\det(I- D_\\ast)|=|L(f)|.$$\n\\end{corollary}\n\nThis condition is for example satisfied when $f_*:\\Gamma\\to \\Gamma$ maps every element into $\\Gamma\\cap G$. If this is the case, then $f_*$ induces the trivial morphism on the holonomy group, from which it follows easily that all elements in $F$ are in the same $\\sim_f$-equivalence class.\n\n\\begin{corollary}\nIf every $\\sim_{f}$-equivalence class in $F$ consists of a single element, then for all $A\\in F$, $\\det(I-A_\\ast D_\\ast)$ will be divisible by $\\#F$.\n\\end{corollary}\n\nThis is for example the case when $f_*$ induces the identity morphism on $F$, while $F$ itself is an abelian group. For instance, in Example \\ref{example Z6}, in section \\ref{examples}.\n\n\\subsection{$\\sim_f$-equivalence classes on different levels}\n\n\\begin{definition}\\label{definition boosting equivalence classes}\nLet $A\\in F$ be an element of the holonomy group $F$ of $\\Gamma\\backslash G$. Let $k|n$. 
Then we define $\\gamma_{nk}(A)$ to be the following subset of $F$:$$\\{C\\in F \\| \\textrm{ there exists } (a,A)\\in \\Gamma, \\textrm{such that } (c,C)=\\gamma_{nk}(a,A)=(a,A)f_*^k(a,A)\\dots f_*^{n-k}(a,A)\\}.$$\n\\end{definition}\n\nWith this definition in mind, we can prove the following lemma.\n\n\\begin{lemma}\\label{lemma boosting funct on f-conjugacy classes}\nIf $A\\sim_{f^k} B$, then for any $ C \\in \\gamma_{nk}(A)$, there exists a $D\\in \\gamma_{nk}(B)$ such that $C \\sim_{f^n} D$.\n\\end{lemma}\n\\begin{proof}\nBy Lemma \\ref{lemma alternatief definitie}, for any $(a,A) \\in \\Gamma$, there exist $(b,B), \\gamma \\in \\Gamma$, such that $$\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1})=(b,B).$$Because we picked $(a,A)$ arbitrarily, $C$, with $(c,C)=\\gamma_{nk}(a,A)$, is also chosen arbitrarily in the set $\\gamma_{nk}(A)$. Now, the following $(d,D)$ fulfills the necessary conditions:$$(d,D)=(b,B)f_*^k(b,B)\\dots f_*^{n-k}(b,B).$$Indeed, by using the relation $\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1})=(b,B)$, we see$$(d,D)=(\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1}))f_*^k(\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1}))\\dots f_*^{n-k}(\\gamma\\circ (a,A)\\circ f_*^k(\\gamma^{-1})).$$Since $f_*$ is a morphism, a simple computation shows that $$(d,D)=\\gamma\\circ (c,C) \\circ f_*^n(\\gamma^{-1}).$$\n\\end{proof}\n\nWith Definition \\ref{definition boosting equivalence classes}, we actually try to define boosting functions in terms of $\\sim_f$-equivalence classes. This might not necessarily be well-defined, in the sense that it might happen that for $A \\sim_{f^k} B$, not every element in $\\gamma_{nk}(A)$ is automatically $\\sim_{f^n}$-conjugated with every element in $\\gamma_{nk}(B)$. Note that Lemma \\ref{lemma boosting funct on f-conjugacy classes} tells us that this would be the case if every element in $\\gamma_{nk}(A)$ is in the same $\\sim_{f^n}$-equivalence class. Because of Corollary \\ref{corollary D invertible}, we know that $\\gamma_{nk}(A)$ will be a singleton whenever $D$ is invertible, so in that case, these boosting functions are well-defined on the equivalence classes. Note that it might also not necessarily be true that a $\\gamma_{nk}([A]_k)$, by which we mean the set $$\\bigcup_{B\\sim_{f^{k}}A}\\gamma_{nk}(B),$$is a full $\\sim_{f^n}$-equivalence class.\n\\medskip\n\nAlthough boosting functions might generally not behave well on $\\sim_{f^k}$-equivalence classes, we can still use them as a tool for the computation of $NF_n(f)$. The reason for this is the fact that two fixed point classes above the same equivalence class $[A]$ will boost in the exact same way. So, if we know how one fixed point class behaves, every fixed point class above the same equivalence class will behave in the same way.\n\n\\medskip\n\nThe following lemma is actually all we need to prove the statement above.\n\\begin{lemma}\\label{lemma determinants for boosted}\nLet $(d,D)$ be an affine homotopy lift of $f$. 
If $B, C\\in \\gamma_{nk}(A)$, then $$\\det(I-B_\\ast D_\\ast^n)=\\det(I-C_\\ast D_\\ast^n).$$\n\\end{lemma}\n\\begin{proof}\nSince there exist $(a_1,A)$ and $(a_2,A)$, such that $\\gamma_{nk}(a_1,A)=(b,B)$ and $\\gamma_{nk}(a_2,A)=(c,C)$, for certain $b,c\\in G$, we know that $$((a_1,A)(d,D)^k)^{\\frac{n}{k}}=(b,B)(d,D)^n \\textrm{ and }((a_2,A)(d,D)^k)^{\\frac{n}{k}}=(c,C)(d,D)^n.$$So, by just looking at the rotational part, we see $$BD^n=(AD^k)^{\\frac{n}{k}}=CD^n.$$By taking the differential, we obtain the desired result.\n\\end{proof}\n\n\n\\begin{proposition}\\label{prop similar boosting}\nSuppose $p(\\Fix((a,A)\\circ \\tilde{f}^k))$ is a fixed point class at level $k$ of a continuous map $f$ on an infra-nilmanifold. If this fixed point class boosts (in)essentially to level $n$, then every fixed point class of the form $p(\\Fix((b,B)\\circ \\tilde{f}^k))$, with $B\\in [A]_k$ also boosts (in)essentially to level $n$.\n\\end{proposition}\n\\begin{proof}\nSuppose that $(d,D)$ is an affine homotopy lift of $f$. By Remark \\ref{remark essential of inessential}, we know that $p(\\Fix((a,A)\\circ \\tilde{f}^k))$ is an essential fixed point class if and only if $\\det(I-A_\\ast D_\\ast^k)\\neq 0$. Take an arbitrary fixed point class $p(\\Fix((b,B)\\circ \\tilde{f}^k))$, with $B\\in [A]_k$. Because of Lemma \\ref{lemdet}, we already know that $\\det(I-B_\\ast D_\\ast^k)\\neq 0$ and that $p(\\Fix((b,B)\\circ \\tilde{f}^k))$ is an essential fixed point class.\n\n\\medskip\n\nNow suppose that $\\gamma_{nk}(a,A)=(c,C)$ and $\\gamma_{nk}(b,B)=(e,E)$. This means that $C\\in \\gamma_{nk}(A)$ and $E\\in \\gamma_{nk}(B)$. By Lemma \\ref{lemma boosting funct on f-conjugacy classes}, we know that there exists $E_0\\in \\gamma_{nk}(B)$, such that $C\\in [E_0]_n$. Lemma \\ref{lemdet} now tells us that $$\\det(I-C_\\ast D_\\ast^n)=\\det(I-E_{0 \\ast} D_\\ast^n).$$By Lemma \\ref{lemma determinants for boosted} and by the fact that $E,E_0\\in \\gamma_{nk}(B)$, we also know that $$\\det(I-E_\\ast D_\\ast^n)=\\det(I-E_{0 \\ast} D_\\ast^n).$$Because $\\det(I-C_\\ast D_\\ast^n)=\\det(I-E_\\ast D_\\ast^n)$, we know that $[(a,A)]_k$ boosts essentially to level $n$ if and only if $[(b,B)]_k$ boosts essentially to level $n$. A similar thing applies for inessential boosting.\n\\end{proof}\n\n\\subsection{Examples}\\label{examples}\n\nSo, we know that every fixed point class above $[A]$ boosts in exactly the same way and we also know that the number of essential fixed point classes above $[A]$ equals $N_A(f)$. This is a tool we can use to compute $\\# IIB_n(f)$ in a more efficient way. We show how this can be done by looking at a few examples.\n\n\\begin{example}\nLet us first try to compute the Nielsen periodic numbers of the maps in Example~\\ref{Example not weakly Jiang}. We will use the matrix description from \\cite{duga14-1}.\nLet the Klein bottle group be generated by the following two affine transformations:\n$$\\alpha=(a,A)=\\left(\\begin{pmatrix}\n\\frac{1}{2}\\\\\n\\frac{1}{2}\\\\\n\\end{pmatrix}, \\begin{pmatrix}\n1&0\\\\\n0&-1\n\\end{pmatrix}\\right) \\textrm{ and } \\beta=(e_2, \\Id),$$where $e_2$ denotes the second element of the standard basis of $\\R^2$. Suppose that $k\\neq 1$ is odd and that $p\\in \\R$. 
Then, the map induced by $$(d,D)=\\left(\\begin{pmatrix}\np\\\\\n\\frac{1}{2}\\\\\n\\end{pmatrix}, \\begin{pmatrix}\nk&0\\\\\n0&-1\n\\end{pmatrix}\\right)$$will induce the same morphism described in Example \\ref{Example not weakly Jiang}.\n\n\\medskip\n\nIt is clear that $D$ commutes with both $A$ and $\\Id$, so $f_*$ induces the identity morphism on $F$ and therefore, every $\\sim_{f^n}$-equivalence class consists of precisely one element. Also, it easy to compute that $[A]_l$ is essential if and only if $l$ is even, while $[\\Id]_l$ is essential if and only if $l$ is odd. Now, suppose that $m=ql$, then a simple computation shows that $[\\Id]_l$ always boosts to $[\\Id]_m$. Also, $[A]_l$ will boost to $[A]_m$ if $q$ is odd and to $[\\Id]_m$ if $q$ is even. The reason for this, lies in the following computation:$$(AD^l)^q=A^qD^{ql}=A^qD^m.$$All together, we see that every even boost ($q$ is even) of an essential fixed point class is inessential while every odd boost is essential.\n\n\\medskip\n\nAs a consequence, we see that if $n$ is odd, $IIB_n(f)$ is the empty set and by Proposition \\ref{propIIB}, it follows that $$NF_n(f)=N(f^n).$$On the other hand, if $n$ is even, the only essential fixed point classes that boost inessentially to level $n$ pass through level $\\frac{n}{2}$. Every essential fixed point class at this level will boost inessentially to level $n$ and every element in $IIB_{\\frac{n}{2}}(f)$ will also boost inessentially to level $n$. Therefore: $$NF_n(f)=N(f^n)+NF_{\\frac{n}{2}}(f).$$\n\nFor the sake of completeness, the case where $k=1$ also gives us a map on the Klein bottle. In this case, an easy computation shows that $N(f^n)=0$ for every $n$. As a consequence, every fixed point class at every level is inessential and hence $NF_n(f)=0$, for every integer $n$.\n\\end{example}\n\nThe following example will illustrate the use of $\\sim_f$-equivalence classes a little more. \n\n\\begin{example}\\label{example Z3}\nLet $\\Gamma$ be the Bieberbach group with generators:$$(a,A)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n\\frac{1}{3}\\\\\n\\end{pmatrix}, \\begin{pmatrix}\n-1&1&0\\\\\n -1 & 0 & 0\\\\\n 0&0&1 \\\\\n\\end{pmatrix}\\right)\n\\textrm{, }(e_1, \\Id)\\textrm{ and }(e_2, \\Id).$$In \\cite{duga14-1} one can find that the affine map $$(d,D)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n0\\\\\n\\end{pmatrix}, \\begin{pmatrix}\n0&1&0\\\\\n 1 & 0 & 0\\\\\n 0&0&2 \\\\\n\\end{pmatrix}\\right)$$induces a continuous map on the flat manifold $\\Gamma\\backslash\\R^3 $.\n\n\\medskip\n\nAn easy computation shows that $DA=A^2D$ and that $f_*$ induces a morphism $\\overline{f}_*$ on the holonomy group $\\Z_3$, such that $\\overline{f}_*^2=\\Id$. So, whenever $k$ is even, every $\\sim_{f^k}$-equivalence class is a singleton.\n\\medskip\n\nTo determine $[\\Id]_k$, with $k$ odd, note that $\\overline{f}_*^k=\\overline{f}_*$. Also, $\\overline{f}_*(A)=A^2$ and $\\overline{f}_*(A^2)=A$. So, the following are certainly subsets of $[\\Id]_k$:\n$$A\\Id f_\\#(A)^{-1}=\\{A^2\\} \\textrm{ and } A^2\\Id f_\\#(A^2)^{-1}=\\{A\\}.$$Hence, $[\\Id]_k=F$ for all odd $k$. \n\n\\medskip\n\nAn easy computation shows that $[\\Id]_k$, with $k$ odd, is always inessential. As a consequence, for every odd $n$, $$NF_n(f)=N(f^n)=0.$$When $k$ is even, $[\\Id]_k$ is inessential, while $[A]_k$ and $[A^2]_k$ are essential. In this case, every element of $F$ commutes with $D^k$, as $\\overline{f}_*^2$ is the identity morphism. Hence, the class $[A^i]_k$ boosts to the class $[A^{ip}]_{pk}$. 
So, an essential class can only boost inessentially if $p\\equiv 0 \\mod 3$. \n\n\\medskip\n\nIn Figure \\ref{fig1}, a scheme can be found where all these boosting relations are shown up to level $6$. In this scheme, inessential and essential fixed point classes are denoted by a circle and a square respectively. Only the boosting from an essential to an inessential class are drawn, since these are the only ones that need to be considered for the computation of $NF_n(f)$.\n\n\\begin{figure}\n\n\\centering\n\\begin{tikzpicture}[line cap=round,line width = .5pt,line join=round, >=triangle 45, x=1.5cm,y=1.5cm]\n\n\\draw[color=black](0,1.2) circle (0.3);\n\\draw[color=black](1.2,1.2) circle (0.3);\n\\draw[color=black](.9,-.9) rectangle (1.5,-1.5);\n\\draw[color=black](.9,-.3) rectangle (1.5,.3);\n\\draw[color=black](2.4,1.2) circle (0.3);\n\\draw[color=black](3.6,1.2) circle (0.3);\n\\draw[color=black](3.3,-.9) rectangle (3.9,-1.5);\n\\draw[color=black](3.3,-.3) rectangle (3.9,.3);\n\\draw[color=black](4.8,1.2) circle (0.3);\n\\draw[color=black](6,1.2) circle (0.3);\n\\draw[color=black](5.7,-.9) rectangle (6.3,-1.5);\n\\draw[color=black](5.7,-.3) rectangle (6.3,.3);\n\n\\draw[color=black] (0,1.2) node {$[\\Id]_1$};\n\\draw[color=black] (1.2,1.2) node {$[\\Id]_2$};\n\\draw[color=black] (1.2,0) node {$[A]_2$};\n\\draw[color=black] (1.2,-1.2) node {$[A^2]_2$};\n\\draw[color=black] (2.4,1.2) node {$[\\Id]_3$};\n\\draw[color=black] (3.6,1.2) node {$[\\Id]_4$};\n\\draw[color=black] (3.6,0) node {$[A]_4$};\n\\draw[color=black] (3.6,-1.2) node {$[A^2]_4$};\n\\draw[color=black] (4.8,1.2) node {$[\\Id]_5$};\n\\draw[color=black] (6,1.2) node {$[\\Id]_6$};\n\\draw[color=black] (6,0) node {$[A]_6$};\n\\draw[color=black] (6,-1.2) node {$[A^2]_6$};\n\n\\draw[->,color=black] (1.55,0) -- (3.25,0);\n\\draw[->,color=black] (1.55,-1.2) -- (3.25,-1.2);\n\\draw[->,color=black] (1.55,.1) .. controls (3,.6) and (5,.6) .. (5.7,1.1);\n\\draw[->,color=black] (1.55,-1.1) .. controls (3.6,-.7) .. (5.75,1);\n\\end{tikzpicture}\n\n\\caption{A scheme of $\\sim_{f^k}$-equivalence classes at different levels for Example \\ref{example Z3}} \\label{fig1}\n\\end{figure}\n\n\\medskip\n\nSo, suppose that $n=3^pq$ is even, such that $\\gcd(3,q)=1$, then $$NF_n(f)=\\sum_{i=0}^{p} N(f^{3^iq}).$$\n\\end{example}\n\nBy the following example, we clarify the use of Theorem \\ref{thmNAf}.\n\n\\begin{example}\\label{example Z6}\nLet $\\Gamma$ be the Bieberbach group with generators:\n$$(a,A)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n\\frac{1}{6}\n\\end{pmatrix}, \\begin{pmatrix}\n1 & -1 &0\\\\\n1& 0 &0 \\\\\n0&0 & 1\n\\end{pmatrix}\\right), (e_1,\\Id)\\textrm{ and } (e_2,\\Id).$$\n\nIn \\cite{duga14-1}, one can find that the following affine map induces a map on the infra-nilmanifold $\\Gamma\\backslash \\R^3$:\n\n$$(d,D)=\\left(\\begin{pmatrix}\n0\\\\\n0\\\\\n0\n\\end{pmatrix}, \\begin{pmatrix}\n0 & 1 &0\\\\\n-1& 1 &0 \\\\\n0&0 &7\n\\end{pmatrix}\\right).$$\n\nNote that every element of $F$ commutes with $D$, so every $\\sim_f$-equivalence class consists of precisely one element. This also means that the class $[A^i]_k$ boosts to the class $[A^{ip}]_{pk}$. It is also quite easy to compute that $[A^p]_k$ is inessential if and only if $p\\equiv k \\mod 6$. 
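Indeed, this can be checked directly: both $A$ and $D$ are block diagonal, the upper left $2\\times 2$ block of $D$ is the inverse of the upper left $2\\times 2$ block $A_2$ of $A$ (which has order $6$), and the last diagonal entry of $D^k$ equals $7^k$. Hence $$\\det(I-A^pD^k)=(1-7^k)\\det(I_2-A_2^{p-k}),$$which vanishes for $k\\geq 1$ if and only if $A_2^{p-k}$ has eigenvalue $1$, i.e.\\ if and only if $p\\equiv k \\mod 6$. 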
This boosting scheme can be found in Figure \\ref{fig2}.\n\n\\begin{figure}\n\n\\centering\n\\begin{tikzpicture}[line cap=round,line width = .5pt,line join=round, >=triangle 45, x=1.8cm,y=1.5cm]\n\n\\draw[color=black](0,2) circle (0.3);\n\\draw[color=black](-.3,.9) rectangle (.3,1.5);\n\\draw[color=black](-.3,.1) rectangle (.3,.7);\n\\draw[color=black](-.3,-.7) rectangle (.3,-.1);\n\\draw[color=black](-.3,-1.5) rectangle (.3,-.9);\n\\draw[color=black](-.3,-2.3) rectangle (.3,-1.7);\n\n\\draw[color=black](2.1,1.7) rectangle (2.7,2.3);\n\\draw[color=black](2.4,1.2) circle (0.3);\n\\draw[color=black](2.1,.1) rectangle (2.7,.7);\n\\draw[color=black](2.1,-.7) rectangle (2.7,-.1);\n\\draw[color=black](2.1,-1.5) rectangle (2.7,-.9);\n\\draw[color=black](2.1,-2.3) rectangle (2.7,-1.7);\n\n\\draw[color=black](4.5,1.7) rectangle (5.1,2.3);\n\\draw[color=black](4.5,.9) rectangle (5.1,1.5);\n\\draw[color=black](4.8,.4) circle (0.3);\n\\draw[color=black](4.5,-.7) rectangle (5.1,-.1);\n\\draw[color=black](4.5,-1.5) rectangle (5.1,-.9);\n\\draw[color=black](4.5,-2.3) rectangle (5.1,-1.7);\n\\draw[color=black] (0,2) node {$[A]_1$};\n\\draw[color=black] (0,1.2) node {$[A^2]_1$};\n\\draw[color=black] (0,.4) node {$[A^3]_1$};\n\\draw[color=black] (0,-.4) node {$[A^4]_1$};\n\\draw[color=black] (0,-1.2) node {$[A^5]_1$};\n\\draw[color=black] (0,-2) node {$[\\Id]_1$};\n\n\\draw[color=black] (2.4,2) node {$[A]_2$};\n\\draw[color=black] (2.4,1.2) node {$[A^2]_2$};\n\\draw[color=black] (2.4,.4) node {$[A^3]_2$};\n\\draw[color=black] (2.4,-.4) node {$[A^4]_2$};\n\\draw[color=black] (2.4,-1.2) node {$[A^5]_2$};\n\\draw[color=black] (2.4,-2) node {$[\\Id]_2$};\n\n\\draw[color=black] (4.8,2) node {$[A]_3$};\n\\draw[color=black] (4.8,1.2) node {$[A^2]_3$};\n\\draw[color=black] (4.8,.4) node {$[A^3]_3$};\n\\draw[color=black] (4.8,-.4) node {$[A^4]_3$};\n\\draw[color=black] (4.8,-1.2) node {$[A^5]_3$};\n\\draw[color=black] (4.8,-2) node {$[\\Id]_3$};\n\n\\draw[->,color=black] (.35,1.15) -- (2.05,-.3);\n\\draw[->,color=black] (.35,.35) -- (2.05,-1.9);\n\\draw[->,color=black] (.35,-.35) -- (2.05,1.2);\n\\draw[->,color=black] (.35,-1.1) -- (2.05,-.4);\n\\draw[->,color=black] (.35,-1.95) -- (2.05,-2);\n\n\n\\draw[->,color=black] (.35,1.2) .. controls (3,2) .. (4.45,-1.9);\n\\draw[->,color=black] (.35,.4) .. controls (3,1) .. (4.45,.5);\n\\draw[->,color=black] (.35,-.4) .. controls (2,-2.5) and (3,-1) .. (4.45,-2);\n\\draw[->,color=black] (.35,-1.2) .. controls (2,-.1) and (3.7,-1.8) .. (4.5,.2);\n\\draw[->,color=black] (.35,-2.05) .. controls (2.4,-2.5) .. (4.45,-2.05);\n\n\\end{tikzpicture}\n\n\\caption{A scheme of $\\sim_{f^k}$-equivalence classes at the lowest levels for Example \\ref{example Z6}} \\label{fig2}\n\\end{figure}\n\n\\medskip\n\nThe only classes at level $1$ that boost inessentially to level $2$ (to $[A^2]_2$), are the inessential class $[A]_1$ and the essential class $[A^4]_1$. It is therefore clear that $\\# IIB_2(f)=N_{A^4}(f)$. By Theorem \\ref{thmNAf} and Proposition \\ref{propIIB}:$$NF_2(f)=N(f^2)+\\frac{|\\det(I-A^4D)|}{6}.$$\nIn a similar way, one can see that the only classes that boost to $[A^3]_3$ are the inessential class $[A]_1$ and the essential classes $[A^3]_1$ and $[A^5]_1$. Hence,$$NF_3(f)=N(f^3)+ \\frac{|\\det(I-A^3D)|}{6}+ \\frac{|\\det(I-A^5D)|}{6}.$$\nComputing $NF_4(f)$ becomes a little more tricky, since fixed point classes at both level $1$ and $2$ can boost to inessential fixed point classes of level $4$. 
With an easy computation, we see that the classes that boost to $[A^4]_4$ are $[A]_1,[A^4]_1,[A^2]_2$ and $[A^5]_2$. Note that we already knew that $[A]_1$ is inessential and that the essential class $[A^4]_1$ boosts inessentially to $[A^2]_2$. Therefore, there are no essential classes at level $1$ that boosts essentially to level $2$ and inessentially to level $4$. This means that $$NF_4(f)=N(f^4)+ \\frac{|\\det(I-A^5D^2)|}{6}+\\frac{|\\det(I-A^4D)|}{6}.$$\nAs $[A^i]_k$ boosts to $[A^{ip}]_{pk}$, we know that this boosting relation is a bijection between the $\\sim_{f^k}$-equivalence classes and the $\\sim_{f^{pk}}$-equivalence classes if and only if $p$ is invertible modulo $6$. Now, suppose $n>0$ is an integer, such that $\\gcd(n,6)=1$. Note that every divisor of $n$ will also be relatively prime to $6$. Because there is only one inessential class at each level and because there is a bijection between the classes at different levels and because maps on infra-nilmanifolds are essentially reducible, every essential class that boosts to level $n$ will do so in an essential way. Therefore, if $\\gcd(n,6)=1$, $$NF_n(f)=N(f^n).$$Whenever $n$ has many prime factors $2$ and $3$, it will be much harder to compute $NF_n(f)$, because many inessential boosts occur and we have to keep track of all these boostings in order to not count some of them multiple times. As an example, let us compute $NF_6(f)$. Note that $[\\Id]_6$ is the only inessential class at level $6$. The classes that boost to $[\\Id]_6$ are $[\\Id]_3, [A^3]_3, [\\Id]_2, [A^2]_2,[A^4]_2$ and all classes at level $1$. The only essential classes at level $1$ that boost to essential classes at both level $2$ and level $3$, are $[\\Id]_1$ and $[A^2]_1$. Also, there are no essential classes at level $1$ that boost to inessential classes at both level $2$ and level $3$. Hence,\n\\begin{equation}\\nonumber\n\\begin{split}\nNF_6(f)=N(f^6)+\\frac{|\\det(I-D^3)|}{6}+\\frac{|\\det(I-D^2)|}{6}+\\frac{|\\det(I-A^4D^2)|}{6}\\\\\n-\\frac{|\\det(I-D)|}{6}-\\frac{|\\det(I-A^2D)|}{6}.\n\\end{split}\n\\end{equation}\nHere, these last two terms are precisely the number of essential fixed point classes at level $1$ that boost essentially to level $2$ and level $3$. As they are counted double, we have to subtract them once.\n\\end{example}\n\nAs one can see from this last example, it can be very hard to compute $NF_n(f)$. The tools in this section are useful, but they still require a lot of manual labor. Looking at these examples, it is not unthinkable that there might not exist a general formula for $NF_n(f)$.\n\n\\section{Some properties of affine maps on infra-nilmanifolds}\nIn the last section of this paper, we will look specifically at affine maps on infra-nilmanifolds in order to derive a nice property of these maps (Theorem \\ref{theorem uiteindelijk boost inessentieel}).\n\n\\begin{lemma}\\label{lemma alternatief lemma bram 3.1}\nIf $A,B \\in \\GL_n(\\C)$ and $D\\in \\C^{n\\times n}$, such that $DA=BD$, then, for all $n>0$ it holds that $$\\det(I-(AD)^n)=\\det(I-(BD)^n).$$\n\\end{lemma}\n\\begin{proof}\nUsing the multiplicative properties of the determinant, we find the following equalities:$$\\det(I-(AD)^n)=\\det(A^{-1})\\det(I-(AD)^n)\\det(A)=\\det(I-(DA)^n)=\\det(I-(BD)^n).$$\\end{proof}\n\nRemember that a continuous map $f$ will be called \\textbf{Wecken} if and only if $\\#\\Fix(f)=N(f)$. \n\n\\begin{theorem}\\label{theorem uiteindelijk boost inessentieel}\nLet $\\overline{(d,D)}$ be an affine map on an infra-nilmanifold. 
Suppose that there exists at least one $k$, for which $N(\\overline{(d,D)}^k)\\neq 0$. Then, $D_\\ast$ is semi-hyperbolic if and only if $\\overline{(d,D)^k}$ is a Wecken map for every $k$. \n\\end{theorem}\n\\begin{proof}\nFirst, suppose that $D_\\ast$ is semi-hyperbolic. Just like in the proof of Theorem \\ref{thmNF=Nf)}, we know that every fixed point class at every level is essential. By Proposition \\ref{propaff}, it follows that $\\overline{(d,D)^k}$ is a Wecken map.\n\n\\medskip\n\nOn the other hand, suppose that $D_\\ast$ is not semi-hyperbolic. This means that there exists an eigenvalue $\\lambda$ of $D_\\ast$, such that $\\lambda^d=1$. Now we will show that every essential fixed point class will eventually be boosted to an inessential fixed point class. Pick an essential fixed point class $[(a,A)]_k$. This is possible, because not all Nielsen numbers are $0$. Let $l$ be an arbitrary positive integer and set $m=kl$. Consider the fixed point class $\\gamma_{km}([(a,A)])$. This coincides with the set $p(\\Fix(((a,A)\\circ(d,D)^k)^l))$. Now suppose that $$(b,B)\\circ (d,D)^{m}=((a,A)\\circ(d,D)^k)^l.$$This means that $BD^{m}=(AD^k)^l$ and by Remark~\\ref{remark essential of inessential}, we now know that $\\gamma_{km}([(a,A)])$ is inessential if and only if $$\\det(I-(A_\\ast D_\\ast^k)^l)=\\det(I-B_\\ast D_\\ast^{m})=0.$$By combining Lemma~\\ref{lemma alternatief lemma bram 3.1} and Lemma \\ref{Lemma Bram}, we see there exists a $C\\in F$ and a positive integer $p$, such that $$\\det(I-(A_\\ast D_\\ast^k)^l)=\\det(I-(C_\\ast D_\\ast^k)^l)\\textrm{ and } (C_\\ast D_\\ast^k)^p=D_\\ast^{kp}.$$By taking $l=\\lcm(p,d)$, we know that $(C_\\ast D_\\ast^k)^l=D_\\ast^{kl}$. Also, $1$ is an eigenvalue of $D_\\ast^{kl}$. By combining all of the above, we see that $$\\det(I-B_\\ast D_\\ast^{m})=\\det(I-(A_\\ast D_\\ast^k)^l)=\\det(I-(C_\\ast D_\\ast^k)^l)=\\det(I-( D_\\ast^k)^l)=0.$$As there is certainly one essential fixed point class $[(a,A)]_k$, we know that it will boost to an inessential fixed point class $[(b,B)]_m$. This actually means that $[(a,A)]_k\\subset [(b,B)]_m$, which implies that the inessential fixed point class $[(b,B)]_m$ is non-empty, which implies that $\\overline{(d,D)}^m$ is not a Wecken map.\n\\end{proof}\n\n\\begin{corollary}\\label{corWecken}\nSuppose that there exists at least one $k$, such that $N(\\overline{(d,D)}^k)\\neq 0$. Whenever $\\Fix(\\overline{(d,D)}^k)$ is finite for every $k$, $\\overline{(d,D)}$ will be Wecken at every level and $D_\\ast$ is semi-hyperbolic.\n\\end{corollary}\n\\begin{proof}\nDue to Proposition~\\ref{propaff}, we know that every non-empty inessential fixed point class contains infinitely many fixed points.\n\\end{proof}\n\n\\begin{corollary}\\label{cor NF_n(f)>Nfn}\nSuppose that $f$ is a continuous map on an infra-nilmanifold that is not semi-hyperbolic. Suppose that $N(f^k)\\neq 0$ for at least one $k$. Then, at certain levels, there exist non-empty inessential fixed point classes. Also, there exist $n>0$, such that $$NF_n(f)>N(f^n).$$\n\\end{corollary}\n\\begin{proof}\nBy examining the proof of Theorem~\\ref{theorem uiteindelijk boost inessentieel}, we see that there will exist an essential fixed point class which boosts to an inessential fixed point class. Therefore, this inessential fixed point class will be non-empty. On the other hand, due to Proposition~\\ref{propIIB}, the second statement follows.\n\\end{proof}\n\n\\begin{corollary}\\label{cor NF=Nf}\nSuppose that $f$ is a continuous map on an infra-nilmanifold. 
Then, $NF_n(f)=N(f^n)$, for all $n$ if and only if $f$ is a semi-hyperbolic map or $N(f^n)=0$, for all $n$.\n\\end{corollary}\n\\begin{proof}\nWhen dealing with semi-hyperbolic maps, the statement follows from Theorem \\ref{thmNF=Nf)} and Corollary \\ref{cor NF_n(f)>Nfn}. Hence, the only thing left to prove is that $NF_n(f)=0$ if $N(f^n)=0$ for all $n$. This actually follows from Proposition \\ref{propIIB}. As all fixed point classes at all levels are inessential, we know that $\\#IIB_n(f)=0$. As we already knew that $N(f^n)=0$, it follows by Proposition \\ref{propIIB} that $NF_n(f)=0$.\n\\end{proof}\n\nAgain, we can translate some of these results into comparable results concerning dynamical zeta functions.\n\n\\begin{corollary}\nSuppose that $f$ is continuous map on an infra-nilmanifold. Then, $NF_f(z)=N_f(z)$ if and only if $f$ is a semi-hyperbolic map or $N(f^n)=0$, for all $n$.\n\\end{corollary}\n\\begin{proof}\nThis follows immediately from Corollary \\ref{cor NF=Nf}.\n\\end{proof}\n\nWe can actually say something more about another zeta function. In \\cite{fels88-1}, the following dynamical zeta function was defined: $$M_g(z)=\\exp\\left(\\sum_{k=1}^\\infty \\frac{\\#\\Fix(g^k)z^k}{k}\\right).$$When working with affine semi-hyperbolic maps on infra-nilmanifolds, we actually know how this zeta function looks, due to the following result.\n\n\\begin{corollary}\nSuppose that $g$ is an affine semi-hyperbolic map on an infra-nilmanifold, then $$M_g(z)=NF_g(z).$$ \n\\end{corollary}\n\\begin{proof}\nEvery such a map is a Wecken map on every level, due to Theorem \\ref{theorem uiteindelijk boost inessentieel}. From this it follows, for all $k>0$, that $$\\#\\Fix(g^k)=N(g^k)=NF_k(g).$$ \n\\end{proof}\n\nThis result partially answers a question asked in \\cite{jezi03-01} in the case of infra-nilmanifolds. Given a map $f$ on a manifold, the author of this paper asked if it would be possible to find a map $g$ homotopic to $f$, such that $$\\#\\Fix(g^k)=NF_k(g)=NF_k(f),$$for all $k$. This question is equivalent to asking whether there exists a map $g$, homotopic to $f$, such that $M_g(z)$ and $NF_g(z)$ coincide.\n\n\\section*{Acknowledgements}\n\n\nI would like to thank my advisor Karel Dekimpe for all useful comments on the first versions of this paper.\n\n\\medskip\n\nThis work was supported by the research fund KU Leuven.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Acknowledgement}\n\\section{Acknowledgement}\nWe thank the reviewers for their valuable comments. The work was supported by the National Key Research and Development Program of China (No. 2019YFB1704003), the National Nature Science Foundation of China (No. 62021002 and No. 71690231), NSF under grants III-1763325, III-1909323, III-2106758, SaTC-1930941, Tsinghua BNRist and Beijing Key Laboratory of Industrial Bigdata System and Application.\n\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.4]{figures\/example.pdf}\n \n \\caption{Example images and their corresponding scene graphs. Given the query, the original scene graph (left) is modified to be the target scene graph (right). }\n \\label{fig:example}\n \n\\end{figure}\nA scene graph is a structural representation that captures the semantics of visual scenes by encoding object instances, attributes of objects, and relationships between objects.\n~\\citep{Johnson2015ImageRU}. As shown in Figure~\\ref{fig:example}, the scene graph encodes objects (e.g.\\ ``\\textit{Boy}'', ``\\textit{Racket}''), attributes (e.g.\\ ``\\textit{Girl is standing}''), and relations (``\\textit{Boy holding racket}''). Scene graphs are able to capture the interactions between text and images by associating objects in the graph with regions of an image and modeling the relations between objects. Therefore, it has been used in the cross modality task such as image retrieval, image captioning, and visual question answering~\\citep{Schuster2015GeneratingSP, shi2019explainable, YangTZC19, Wang2020CrossmodalSG}. \n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.63]{figures\/framework.pdf}\n \n \\caption{Examples of basic operations INSERT and DELETE for scene graph modification. $Q$ denotes the textual query, $G_{S}$ denotes the source scene graph, $G_{T}$ denotes the target scene graph and $G_{I}$ is the extended graph. }\n \\label{fig:ise}\n \n\\end{figure*}\n\nRecently, modifying the scene graph based on the input becomes an emerging research direction as cross-modal systems may need to resort to an interactive process through multiple iterations~\\citep{Ramnath2019SceneGB,He2020SceneGM}. Take text-based image retrieval as an example, users start with a query describing the main objects or topics they are looking for, then modify the query to add more constraints or provide additional information based on previous search results. Instead of directly manipulating images, scene graphs can be used to convert the image-editing problem into a graph-editing problem, conditioned on the textual query. As shown in Figure~\\ref{fig:example}, given a retrieved image from the last turn, if the user wants to see a girl rather than a boy holding a racket, he will enter the query ``\\textit{I would like to see a girl holding racket}'' to the system. According to the query, the object ``\\textit{Boy}'' in the original scene graph will be substituted with the object ``\\textit{Girl}''. The target image can be retrieved given the updated scene graph. The key challenge in this process is how to modify the corresponding partial structure in the original scene graph based on understanding the natural language query. 
\n\n\n\n\n\n\n\n\nPrior effort framed this scene graph modification (SGM) task as conditional graph generation~\\citep{He2020SceneGM}, where the scene graph is generated from scratch, conditioned on the original graph and the query~\\citep{You2018GraphRNNGR,Guo2019DenselyCG,Cai2020GraphTF}. However, rebuilding the entire scene graph may not be an optimal solution, as the model has to generate the partial structure of the original graph that should remain unmodified. Moreover, nodes and edges of the scene graph are constructed separately in their proposed framework, which generates all the nodes first and then attaches edges between the generated nodes in a second pass. Such an approach limits the model's capability to capture interactions between node prediction and edge prediction.\n\nInstead of rebuilding the whole scene graph, we introduce a novel formulation for SGM -- incremental structure expanding (ISE), which is able to build the target graph by gradually expanding the original structure. At each step, ISE generates the connecting edges between the existing nodes and the newly generated node, upon which the type of the new node is jointly decided. Based on this formalism, our proposed model is able to iterate between finding the relevant part in the query and reading the partially constructed scene graph, inferring more accurate and harmonious expansion decisions progressively. \nExperiments on three SGM benchmarks demonstrate the effectiveness of the proposed approach, which is able to outperform previous state-of-the-art models by large margins. To test the ability of models in a more complex scenario, we further construct a more challenging dataset from the remote sensing domain~\\citep{lu2017exploring}, which involves many more modification operations and more complicated queries than the existing scene graph modification datasets. \nOur key contributions are summarized as follows: \n\n\\begin{itemize}\n \\item We propose a novel formulation for scene graph modification, allowing incremental expansion of the source scene graph rather than the regeneration of the target graph.\n \\item We further construct a challenging dataset that contains more complicated queries and larger scene graphs.\n \\item Experiments on four benchmarks demonstrate the effectiveness of our approach, which surpasses the previous state-of-the-art model by large margins. \n\\end{itemize}\n\n\n\n\n\n\n\n\n\\section{Incremental Structure Expanding}\n\\label{sec:ise}\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.67]{figures\/model.pdf}\n \\caption{Overview of the model architecture. }\n \\label{fig:model}\n \n\\end{figure*}\n\nIn scene graph modification, one or multiple nodes can be inserted into, deleted from, or replaced with other nodes in the scene graph. \\citet{He2020SceneGM} defined the scene graph modification task as a conditional graph generation problem. Formally, given the source scene graph $G_{S}$ and the natural language query $Q$, the target scene graph $G_{T}$ is generated by maximizing the conditional probability $p(G_{T}|G_{S}, Q)$. \n\nInstead of generating the entire target graph $G_{T}$, we frame the task as incremental structure expanding, which extends the source scene graph $G_{S}$ one node at a time, as well as the edges associated with the node. 
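In other words, writing $G_{0}=G_{S}$ and letting $G_{t}$ denote the partial graph after $t$ expansion steps, the generation probability can be factorized (the notation here is only meant to be illustrative) as $$p(G_{T}|G_{S}, Q)=\\prod_{t=1}^{T}p(E_{t}|G_{t-1}, Q)\\,p(o_{t}|E_{t}, G_{t-1}, Q),$$where $o_{t}$ is the node added at step $t$ and $E_{t}$ denotes its incident edges, so that only the new node and its edges need to be predicted at each step. 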
Such a formulation does not require the model to rebuild the unmodified structure of the source scene graph.\n\nUnder this formulation, we first define two basic operations: INSERT and DELETE. Scene graph modification can be viewed as combining and applying these two operations multiple times. Formally, given the query $Q$, a sequence of $n$ operations $\\{a_{1}, a_{2}, ..., a_{n}\\}$ is selected from the set of graph modification operations $\\mathcal{A}$ = \\{INSERT, DELETE\\}. After applying the operations to the source scene graph $G_{S}$, the target scene graph $G_{T}$ is derived. Each operation is defined as: \n\n\\begin{itemize}\n \\item \\textbf{INSERT}: A new node $o$ is added to $G_{S}$, and edges are attached between $o$ and existing nodes in $G_{S}$. As shown in Figure~\\ref{fig:ise} (a), the node ``\\textit{Ball}'' is added to $G_{S}$ and an edge between ``\\textit{Ball}'' and ``\\textit{Holding}'' is attached, according to the query ``\\textit{holding a racket and a ball}''.\n \\item \\textbf{DELETE}: A node $o$ is removed from $G_{S}$, as well as its associated edges. As shown in Figure~\\ref{fig:ise} (b), the node ``\\textit{Blue}'' is removed from $G_{S}$ and the edge between ``\\textit{Racket}'' and ``\\textit{Blue}'' is removed as well, according to the query ``\\textit{a racket not a blue}''.\n\\end{itemize}\n\n\n\n\nInspired by incremental parsing~\\citep{Nivre2004IncrementalityID,DyerBLMS15,Cai2020AMRPV,zhang2021crowdsourcing,zhang2022identifying}, we design a data structure called the extended graph $G_{I}$, which can be used to model INSERT and DELETE under the graph expansion setting. As shown in Figure~\\ref{fig:ise} (a), the extended graph $G_{I}$ is identical to the target graph $G_{T}$ after applying INSERT. As for DELETE, we introduce a dummy node ``Delete'', which is attached to the node in the source graph $G_{S}$ that should be removed. For example, the dummy node ``Delete'' is attached to the node ``Blue'' in $G_{I}$. In the postprocessing stage, nodes attached to the dummy node ``Delete'' are removed. Using this formulation, we are able to model scene graph modification by incrementally expanding the source graph $G_{S}$ to the extended graph $G_{I}$, which can be converted to the target graph $G_{T}$ without any loss. \n\nIf the modification requires multiple operations, there will exist multiple node orderings. Take node substitution as an example: replacing a node $o_{i}$ with $o_{j}$ in $G_{S}$ can be viewed as first applying DELETE to the node $o_{i}$ and then INSERT for the node $o_{j}$, or vice versa. In practice, we impose that DELETE operations always come before INSERT operations, and breadth-first search is then used to define a deterministic node ordering. \n\n\n\\section{Model Architecture}\n\\label{sec:model}\n\n\n\nIn this section, we will present the model based on the incremental structure expanding formulation. Figure~\\ref{fig:model} gives an overview of the proposed model, which consists of five components: query encoder, graph encoder, feature fusion, edge decoder and node decoder. \n\n\n\\paragraph{Query Encoder} This module is used to encode the query $Q$ by generating a representation for each of its tokens.\n\n\\paragraph{Graph Encoder} This module is used to encode the graph by generating a representation for each of its nodes. Note that the representations of the graph are constructed incrementally as the expansion progresses, based on the updated graph from the last time step. 
The graph is the source graph $G_{S}$ at the first time step.\n\n\\paragraph{Feature Fusion} This module combines the representations from the query and graph encoders and serves as a writable memory, which is updated with information from the edge and node decoders during the incremental expansion. \n\n\\paragraph{Edge Decoder} This module is used to predict the edges between the newly generated node and the existing nodes of the graph, and then updates the memory of the feature fusion module with edge information.\n\n\\paragraph{Node Decoder} This module is used to generate a new node of the graph, and then updates the memory of the feature fusion module with node information.\n\n\n\n\n\\subsection{Query Encoder \\& Graph Encoder}\nFor fair comparisons with the previous work~\\citep{He2020SceneGM}, our query encoder and graph encoder are based on the vanilla transformer~\\citep{VaswaniSPUJGKP17}, which consists of multi-head self-attention (MSA) and position-wise feed-forward network (FFN) blocks. The FFN contains two layers with a ReLU non-linearity. Layer normalization (\\citealt{BaKH16}) is applied before every block, and residual connections~\\citep{HeZRS16} after every block. \n\nFormally, given an input query $Q$ with $n$ tokens, each token embedding is randomly initialized and positional encoding is added to the token embedding to retain positional information. The resulting embeddings are denoted as $\\mathbf{x} = \\{x_0, x_1,..., x_n\\}$. Similar to BERT~\\citep{DevlinCLT19}, a special token is appended to the query as $x_{0}$ for sentence encoding. Transformations in the query encoder can be denoted as:\n\n\\begin{align}\n \\mathbf{x}^{l^{\\prime}} = LN(MSA(\\mathbf{x}^{l-1}) + \\mathbf{x}^{l-1}), \\\\\n \\mathbf{x}^{l} = LN(FFN(\\mathbf{x}^{l^{\\prime}}) + \\mathbf{x}^{l^{\\prime}}).\n\\end{align}\n\nAfter stacking $L$ blocks, we obtain the contextualized token representations from the query encoder, denoted as $\\{x_0^L, x_1^L,..., x_n^L\\}$. The first vector $x_{0}$ is treated as the sentence-level representation of the query and will be used as the initial state during expansion. For clarity, we denote the vectors as $\\mathbf{x} \\in \\mathbb{R}^{(n+1) \\times d}$, where $d$ is the dimension. \n\n\nAs for the graph encoder, we treat the input graph as a sequence of nodes in the chronological order in which they are inserted into the graph, as discussed in Section~\\ref{sec:ise}. Formally, given the graph $G_{t}$ at time step $t$, we take its node sequence $\\{o_1, o_2, ..., o_{t-1}\\}$ as the input. A transformer architecture is also applied to obtain the contextualized node embeddings. Notice that the contextualized representation of the graph is constructed incrementally as the expansion progresses. Therefore, we apply the vanilla transformer with masked self-attention as the graph encoder, which only allows each position in the node sequence to attend to all positions up to and including that position. For brevity, we denote the resulting contextualized node representations as $\\mathbf{y} \\in \\mathbb{R}^{m \\times d}$.\n\n\n\n\\subsection{Feature Fusion}\nUnlike the conventional sequence-to-sequence model that only has one encoder, our model contains two encoders. Previous work~\\citep{He2020SceneGM} proposed to use a gating mechanism and cross attention to combine the representations from the query and graph encoders. We choose to use the vanilla multi-head attention mechanism~\\citep{VaswaniSPUJGKP17} to fuse the features from these encoders. 
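A minimal PyTorch-style sketch of one such fusion block is given below (module and variable names are ours; the block is a simplified illustration of the update equations that follow, not the released implementation):\n\\begin{verbatim}\nimport torch.nn as nn\n\nclass FusionBlock(nn.Module):\n    # Expansion state h attends to query reps x, then to graph reps y,\n    # with post-layer-norm residual connections and a final FFN.\n    def __init__(self, d_model, n_heads):\n        super().__init__()\n        self.attn_q = nn.MultiheadAttention(d_model, n_heads, batch_first=True)\n        self.attn_g = nn.MultiheadAttention(d_model, n_heads, batch_first=True)\n        self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),\n                                 nn.Linear(4 * d_model, d_model))\n        self.ln1 = nn.LayerNorm(d_model)\n        self.ln2 = nn.LayerNorm(d_model)\n        self.ln3 = nn.LayerNorm(d_model)\n\n    def forward(self, h, x, y):\n        # h: (batch, 1, d) expansion state; x: query reps; y: node reps\n        z, _ = self.attn_q(h, x, x)      # attend to the query\n        z = self.ln1(z + h)\n        z2, _ = self.attn_g(z, y, y)     # attend to the partial graph\n        z2 = self.ln2(z2 + z)\n        return self.ln3(self.ffn(z2) + z2)\n\\end{verbatim}\n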
Formally, at each time step $t$, the feature fusion component combines the query and graph representations for gradually locating and collecting the most relevant information for the next expansion:\n\n\\begin{align}\n z_t^{l} = LN(MSA(h_t^{l-1}, \\mathbf{x}) + h_t^{l-1}), \\\\\n z_t^{l^{\\prime}} = LN(MSA(z_t^{l}, \\mathbf{y}) + z_t^{l}), \\\\\n h_t^{l} = LN(FFN(z_t^{l^{\\prime}}) + z_t^{l^{\\prime}}).\n\\end{align}\n\nThe initial expansion state $h_t^{0}$ is initialized with $x_{0}$. For clarity, we denote the last hidden state $h_{t}^{L}$ as $h_{t}$, which is the expansion state at time step $t$. We now proceed to present the details of each decision stage of one expansion step.\n\n\\subsection{Edge Decoder}\nAt the $t$-th time step, the edge decoder takes the expansion state $h_{t}$ from the feature fusion module and the contextualized representation $\\mathbf{y}$ from the graph encoder as the inputs, and predicts which nodes in the current graph should be attached to the new node. Inspired by~\\citet{CaiL19a} and~\\citet{Cai2020AMRPV}, we leverage multi-head attention and take the maximum over different heads as the final edge probabilities.\nFormally, given $h_t$ and $\\mathbf{y}$, a set of attention weights can be obtained by using the multi-head attention mechanism:\n$\\{ \\alpha_{t}^{g_{i}}\\}_{i=1}^{k}$, where $k$ is the number of attention heads and $\\alpha_{t}^{g_{i}}$ is the $i$-th probability vector. The probability of the edge between the new node and the node $o_{j}$ is then computed by $\\alpha_{t}^{g} = \\max_{i}(\\alpha_{t}^{g_{i}})$. Intuitively, each head is in charge of a set of possible relations (though not explicitly specified). The maximum pooling reflects that the edge should be built once one relation is activated. \n\nFinally, the edge decoder passes the edge information to the feature fusion module by updating the expansion state $h_{t+1}$ as follows:\n\n\\begin{align}\n h_{t+1} = LN(MSA(h_{t},\\mathbf{y}) + h_{t} ).\n\\end{align}\n\n\n\\subsection{Node Decoder}\nThe node decoder needs to look at the input query and determine which tokens are the most important ones. This choice is expressed as a weight matrix that gives an attention probability between each token in the query and the generated nodes in the target graph. \nConcretely, a single-head attention $\\alpha_{t}^{s}$ is computed based on the state $h_t$ and the query representation $\\mathbf{x}$, where $\\alpha_{t}^{s}[i]$ denotes the attention weight of the word $w_i$ at the current time step.\nThis component then updates the expansion state with the alignment information via the following equation:\n\\begin{align}\n h_{t+1} = LN(MSA(h_{t},\\mathbf{x}) + h_{t} ).\n\\end{align}\n\nWe then compute the probability distribution of the new node through a hybrid of two channels. The new node can either be a DELETE node or a token copied from the input query. First, $h_{t}$ is fed through a softmax layer to obtain a probability distribution over a pre-defined vocabulary, which contains the DELETE node and other dummy nodes such as $\\mathsf{EOS}$. The probability of the new node is calculated as $P_{vocab} = softmax(W^{vocab}h_{t} + b^{vocab})$.\n\nSecond, we use the attention scores $\\alpha_{t}^{s}$ as the probabilities of copying a token from the input query as a node label, similar to the copy mechanism~\\citep{GuLLL16,SeeLM17}. 
Therefore, the final prediction probability of a node $o$ is defined as:\n\n\\begin{align}\n P(o) = p_{gen} \\cdot P^{vocab}(o) + p_{copy} \\cdot \\sum_{i \\in T(o)} \\alpha_{t}^{s}[i],\n\\end{align}\n\nwhere $[i]$ indexes the $i$-th element, and $T(o)$ is the set of indices of the query tokens that have the same surface form as $o$. $p_{gen}$ and $p_{copy}$ are the probabilities of generating and copying a node, respectively. They are computed by a single-layer neural network with a $softmax$ activation:\n\\begin{align}\n [p_{gen}, p_{copy}] = softmax(W^{gate}h_{t}).\n\\end{align}\n\n\nThe whole expansion procedure terminates when the newly generated node is the special node $\\mathsf{EOS}$.\n\n\n\n\n\\section{Dataset Construction}\n\\label{sec:dataset}\n\n\\input{tables\/dataset}\n\n\nExisting SGM datasets are constructed either synthetically, based on scene graphs from MSCOCO~\\citep{Lin2014MicrosoftCC} and GCC~\\citep{Sharma2018ConceptualCA}, or via crowd sourcing. To construct scene graphs, \\citet{He2020SceneGM} used an in-house scene graph parser to parse a random subset of MSCOCO description data and GCC captions, thus the constructed scene graphs are relatively simple. As shown in Table \\ref{tab:stats}, the average numbers of nodes and edges per graph are only 2.9 and 1.9, respectively. GCC is more complicated than MSCOCO, with larger graphs, but the percentages of nodes and edges in the development\/test sets that do not appear in the training set ($\\mathrm{OOV}$ Nodes, $\\mathrm{OOV}$ Edges) are still low, which makes it easy for a model to overfit the dataset. To verify the generalization ability and the scalability of the model on more complex scene graphs, we constructed our own Scene Graph Modification dataset based on the Remote Sensing Image Captioning Dataset (RSICD)~\\citep{lu2017exploring}, which was originally collected for the remote sensing image captioning task.\n\n\n\\input{tables\/main}\n\nWe follow the modification methods proposed by~\\citet{He2020SceneGM}. First, we adopt the parser~\\citep{Schuster2015GeneratingSP} to parse each caption and generate the original scene graph $\\textbf{x}$. Then we define three types of graph modification operations $\\mathcal{A}$ = \\{INSERT, DELETE, SUBSTITUTE\\}, and randomly apply them to the original scene graph to generate the query ($\\textbf{q}$) and the modified scene graph ($\\textbf{y}$). The resulting data consists of triples ($\\textbf{x, y, q}$).\\footnote{We give the three detailed operations and examples in the Appendix \\ref{operations}.}\n\nCompared with the existing SGM datasets, each graph in RSICD has more nodes and edges, with averages of 5.9 and 3.7 on the training\/development\/test sets, almost twice those of User Generated and MSCOCO. In addition, the dataset comes from the field of remote sensing. Due to the large number of geographical terms, the $\\mathrm{OOV}$ Nodes of the development\/test sets relative to the training set reach 12\\%\/11\\%, and the $\\mathrm{OOV}$ Edges reach 8\\%\/8\\%, which are much higher than those of the MSCOCO and GCC datasets. Considering the complexity of RSICD, we construct it in addition to User Generated, MSCOCO and GCC to further analyse the generalization and scalability of ISE.\n\n\n\\section{Experiments and Analyses}\n\\label{sec:experiments}\n\n\n\\subsection{Data}\nWe evaluated our model on four benchmarks, including User Generated, MSCOCO and GCC proposed by~\\citet{He2020SceneGM}, and the RSICD dataset proposed in this work. 
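\n\nAs a concrete illustration of how the synthetic portion of these benchmarks is built (Section~\\ref{sec:dataset}), the following minimal sketch constructs one ($\\textbf{x, y, q}$) triple from a parsed caption. It assumes networkx-style graph objects and a hypothetical \\texttt{parse\\_scene\\_graph} helper; the templates shown are a small illustrative subset of those of \\citet{manuvinakurike-etal-2018-edit}, and the actual pipeline additionally handles SUBSTITUTE operations.\n\n\\begin{verbatim}\nimport random\n\nDELETE_TEMPLATES = ["I do not want {node}."]   # illustrative subset\nINSERT_TEMPLATES = ["Show me {node}."]\n\ndef make_triple(caption, parse_scene_graph):\n    # Build one (source graph x, target graph y, query q) triple.\n    x = parse_scene_graph(caption)               # original scene graph\n    node = random.choice(list(x.nodes))\n    op = random.choice(["DELETE", "INSERT"])     # SUBSTITUTE omitted here\n    if op == "DELETE":\n        y = x.copy()\n        y.remove_node(node)                      # also drops incident edges\n        q = random.choice(DELETE_TEMPLATES).format(node=node)\n    else:\n        # INSERT is the reverse process: the source graph lacks the node.\n        y, x = x, x.copy()\n        x.remove_node(node)\n        q = random.choice(INSERT_TEMPLATES).format(node=node)\n    return x, y, q\n\\end{verbatim}\n\n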
MSCOCO, GCC and RSICD are constructed synthetically from publicly available datasets~\\citep{Lin2014MicrosoftCC, SoricutDSG18, lu2017exploring}, while the User Generated dataset is created via crowd sourcing. Detailed statistics of datasets are shown in Table~\\ref{tab:stats}.\n\n\n\n\\subsection{Setup}\n\nFor fair comparisons, we used the same data splits for User Generated, MSCOCO and GCC datasets as in ~\\citet{weber2021extend}. For RSICD, we randomly split the data into 8K\/1K\/1K for training\/development\/test. Following~\\citet{weber2021extend}, we use three automatic metrics for the evaluation, including node-level and edge-level F1 score, and graph-level accuracy. Graph-level accuracy is computed based on exact string match, which requires the generated scene graph to be identical to the target scene graph for a correct prediction. We reported the mean score and standard deviation by using 5 models from independent runs. We refer to the Appendix \\ref{Hyper-parameters} for the detailed implementation.\n\n\n\n\n\\subsection{Baselines}\nFor comprehensive comparisons, we include six baselines as follows. Except for the CopyGraph, all of them aim to rebuild the target scene graph. \n\n\\paragraph{CopyGraph} This baseline directly copies the source scene graph as the target scene graph, which can be viewed as the lower bound.\n\n\\paragraph{Text2Text} This baseline is introduced by~\\citet{He2020SceneGM}. They used the standard sequence-to-sequence architecture by linearizing the scene graph based on depth-first search. \n\n\\paragraph{GRNN} Graph RNN~\\citep{You2018GraphRNNGR} is used as the graph encoder and edge decoder. Specifically, the edges are represented by an adjacency matrix, which is then generated in an auto-regressive manner. Both the query encoder and node decoder are based on Gated Recurrent Units~\\citep{ChoMGBBSB14}.\n\n\\paragraph{DCGCN} Densely-Connected Graph Convolutional Networks ~\\citep{Guo2019DenselyCG} are used as the graph encoder. Other components are kept the same as the GRNN.\n\n\\paragraph{GTran} Graph Transformer~\\citep{Cai2020GraphTF} is used as the graph encoder, while other modules are the same as GRNN and DCGCN.\n\n\\paragraph{STran} The sparsely-connected transformer~\\citep{He2020SceneGM} is used to encode the source graph. In addition, a cross-attention mechanism is applied to fuse the features from graph encoder and query encoder. Node decoder and edge decoder are the same as GRNN.\n\n\\paragraph{EGraph} This is the state-of-the-art model on graph modification task. Concretely, \\citet{weber2021extend} considerably increases performance on the graph modification by phrasing it as a sequence labelling task.\n\n\n\n\\subsection{Main Results}\nAccording to Table~\\ref{tab:main}, our proposed approach (ISE) significantly outperforms the state-of-the-art model~\\citep{weber2021extend} on three datasets. Specifically, ISE outperforms EGraph 1.81, 1.11 and 1.33 percentage points in terms of graph accuracy on User Generated, MSCOCO and GCC datasets, respectively. We observe that the improvement is especially prominent on the User Generated dataset, which is \nmore challenging than the other two synthetic datasets in terms of the diversity in graph semantics and natural language expressions. All baseline models suffer from performance degradation as it is much harder to rebuild the entire target scene graph on this dataset. 
On the other hand, ISE constructs the target scene graph by incrementally expanding the source scene graph without changing the unmodified structure. We believe this formulation is able to effectively cope with this difficulty. \n\n\nWe also observe that both EGraph and ISE achieve lower graph accuracy on the GCC dataset. The main reason is the difficulty of predicting the correct edges between generated nodes. For example, EGraph achieves 98.62 Node F1 score on GCC, higher than 97.62 Node F1 score on the User Generated dataset. However, EGraph only achieves 75.01 Edge F1 score on GCC, while it can attain 88.26 Edge F1 score on User Generated. Our proposed model has larger improvements upon EGraph in terms of Edge F1 score on the same dataset (93.06 vs. 91.64). We attribute this stronger improvement to iterations between nodes prediction and edge prediction, which allows more accurate and harmonious expansion decisions progressively. On the other hand, EGraph predicts nodes and edges at two independent stages. Such an approach may lead to the lack of the modeling capability of interactions between node prediction and edge prediction.\n\n\\input{tables\/rsicd}\n\nWe further compare our model with EGraph on the newly constructed dataset RSICD as shown in Table~\\ref{tab:rs}. ISE is able to achieve a graph accuracy of 44.20\\% and improves upon the EGraph model by 21 percentage points. However, the graph accuracy of all the models is much lower than the one attained on the previous three SGM datasets. One reason is that RSICD has more complex queries paired with larger scene graph, which brings a challenge to existing models. The RSICD dataset also suffers from the data sparsity issue where many words (39\\%) and nodes (42\\%) only appear once in the training data. Incorrect node prediction will further propagate the errors to edge prediction. Our iterative node and edge prediction paradigm help to alleviate this issue. Specifically, ISE only outperforms EGraph 9.69 percentage points on Node F1 score, while the improvement on Edge F1 score is 13.05\\%. Therefore, ISE is able to achieve a higher accuracy. In order to further address this data sparsity issue, one potential solution is transfer learning, where the model is pretrained on User Generated dataset first then fine-tuned on RSICD. However, this approach may suffer from a domain-shift problem, as RSICD is constructed based on the remote sensing domain. We leave this direction as future works.\n\n\n\n\n\n\n\\subsection{Analysis and Discussion}\nIn this section, we provided a fine-grained analysis of our proposed model. We reported all the results on the development set by using the ISE model without contextualized embeddings from BERT.\n\n\\input{tables\/ablation}\n\n\\paragraph{Ablation Study} \nAs shown in Table~\\ref{tab:ablation}, we examine the contributions of two main components used in our model. The first one is the incremental structure expanding. We use the same model architecture but try to rebuild the target scene graph similar to previous efforts. We can observe significant drops on three SGM datasets, which further confirms the effectiveness of the extending strategy. The second one is the copy mechanism, which directly copies the token from the query as nodes in the target scene graph. 
It plays a significant role in predicting nodes, especially when the training data is limited (User Generated).\n\n\\input{tables\/Robustness}\n\\paragraph{Performance against Training Data Size} Table~\\ref{tab:size} shows the performance of STran and ISE against different training settings on the MSCOCO dataset. We considered five training settings (20\\%, 40\\%, 60\\%, 80\\% and 100\\% of the training data). ISE consistently outperforms STran under the same amount of training data. When the size of the training data decreases, we can observe that the performance gap becomes more obvious. Particularly, using 40\\% of the training data, ISE is able to achieve a graph accuracy of 88.64\\%, higher than STran trained on the whole dataset. These results demonstrate that our model uses the training data more efficiently and is more robust when the training data is limited.\n\n\\input{tables\/SenLength}\n\\paragraph{Performance against Query Length} Table~\\ref{tab:query} shows the results of STran and ISE under different query lengths on the GCC dataset. We partitioned the query length into three classes (\\textless5, [5, 10), $\\geq$10). In general, ISE outperforms STran across the various query lengths. When the length of the query increases, we can observe that the performance gap becomes more obvious in terms of graph accuracy. Intuitively, as the query length increases, it is more challenging for the model to comprehend the sentence. This suggests that ISE is able to handle more complex instructions.\n\n\n\\paragraph{Performance against Graph Size}\nTable~\\ref{tab:graph} shows the results of STran and ISE against different target scene graph sizes on the GCC dataset. We partitioned the target graph size into three classes (\\textless5, [5, 10), $\\geq$10). Based on the formulation of extending the source scene graph, our model is required to deal with larger graphs. For example, deleting a node in the scene graph becomes adding a special ``Delete'' node in the extended graph. However, ISE consistently outperforms STran across the various target graph sizes, even when the target scene graph is large. This result suggests the superiority of the proposed formulation.\\footnote{We give an error analysis in the Appendix \\ref{error}.} \n\\input{tables\/NodeLength}\n\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.26]{figures\/case.pdf}\n \n \\caption{Two cases of STran and ISE for scene graph modification on User Generated. $Q$ denotes the textual query, $G_{S}$ denotes the source scene graph, $G_{T}$ denotes the target scene graph generated by STran and ISE.}\n \\label{fig:case}\n \n\\end{figure*}\n\n\\paragraph{Case Study}\n\\label{case}\n\n\n\nWe give two cases in Figure \\ref{fig:case}. STran generates the scene graph from scratch conditioned on the original graph and the query, which may limit its ability to model the interactions between node prediction and edge prediction. For example, in Figure \\ref{fig:case} (a), STran omitted the attribute ``Velvet'' during node prediction. In addition, during edge prediction, STran redundantly generated the relation ``Of'' in Figure \\ref{fig:case} (b). However, these structures do not need to be modified in the source scene graph. ISE can infer a more accurate target graph by incrementally expanding the source graph without changing the unmodified structure. \n\\section{Related Work}\n\\label{sec:related}\n\nWe refer to Appendix \\ref{scene graph} for detailed related work on scene graphs. 
Scene graph builds a bridge between image domain and text domain. Vision and natural language are all tremendously promoted by studying into scene graphs. Recently, scene graph modification becomes an emerging research\ndirection. \\citet{Chen2020GraphED} proposed a framework based on scene graph editing for text-based image retrieval.\nOn the other hand, \\citet{He2020SceneGM} took the scene graph and the textual query as inputs and modified the source graph according to the query. They viewed the task as conditional graph generation, which is further decomposed into node prediction and edge prediction. For node prediction, all the nodes in the target scene graph is generated based on a graph-to-sequence model with dual encoder~\\citep{Song2018AGM,Beck2018GraphtoSequenceLU,ZhangGTLCLB20}, then a graph RNN is adopted to predict the edges between generated nodes~\\citep{You2018GraphRNNGR}. More recently, \\citet{weber2021extend} developed an alternative formulation of this problem in which they model the modification as an auto-regressive sequence labelling task.\n\nInstead of rebuilding the entire target graph, we framed the scene graph modification task as incremental graph expansion. This formulation is related to incremental parsing, where a sentence is scanned from left-to-right and the structured is built incrementally by inserting a node or attaching an edge. Incremental parsers are widely used in semantic parsing~\\citep{ZhouXUQLG16,ChengRSL17,GuoL18, Naseem2019RewardingST,liu2022semantic} and syntactic parsing~\\citep{HuangS10,DyerBLMS15,LiuZ17a}, as they are computationally efficient, and can use machine learning to predict actions based on partially generated structures. Our feature fusion module can be viewed as the parser state as it carries the structural information and serves as a writable memory during the expansion step. Unlike \\citet{weber2021extend} linearize the scene graph and label it in an auto-regressive manner, our model iterates between finding the relevant part in the query and reading the partially constructed scene graph, inferring more accurate and harmonious expansion decisions progressively. \n\n\n\\section{Conclusion}\n\nIn this paper, we designed a novel formulation for scene graph modification, which allows us to incrementally expand the source scene graph instead of rebuilding the entire graph. Based on the formalism, we further propose a model that is able to leverage the mutual causalities between node prediction and edge prediction. Experiments on three SGM benchmarks demonstrate the effectiveness.\nTo test our model under a complex scenario, we constructed a more challenging dataset from the remote sensing domain, which has more modification operations based on the more complicated queries compared with existing SGM datasets. For future work, we would like to explore how to integrate the model into the text-based image retrieval task.\n\n\n\\section{Appendix}\n\n\n\\subsection{Operations in RSICD}\n\n\n\\input{tables\/RSICD_Case}\n\\label{operations}\nWe introduce three operations in RSICD in details:\n\\begin{itemize}\n \\item \\textbf{\\texttt{DELETE}}: The original scece graph is $\\textbf{x}$. We randomly select a node $\\textbf{o}$ in $\\textbf{x}$, and delete it both with related edges. The deleted graph is defined as $\\textbf{y}$. 
We choose a random sentence from the \\textit{DELETE Template}~\\citep{manuvinakurike-etal-2018-edit}, for example, `` I do not want \\textbf{**}.'' We replace \\textbf{**} with $\\textbf{o}$ to get modification operation $\\textbf{q}$.\n \\item \\textbf{\\texttt{INSERT}}: It is the reverse process of \\textbf{\\texttt{DELETE}}. The graph before deleting the node is regarded as $\\textbf{y}$, and the corresponding graph after deletion is treated as $\\textbf{x}$. The modification operation is randomly selected from the \\textit{INSERT Template}~\\citep{manuvinakurike-etal-2018-edit}, for example, `` Show me \\textbf{**}.'' We replace \\textbf{**} with $\\textbf{o}$ to obtain query $\\textbf{q}$.\n \\item \\textbf{\\texttt{SUBSTITUTE}}: We randomly select a node $\\textbf{o}$, use the AllenNLP toolkit~\\citep{gardner-etal-2018-allennlp} to find the three most similar semantics nodes compared with $\\textbf{o}$. We randomly choose a node $\\textbf{m}$, and select a sentence from the \\textit{SUBSTITUTE Template}~\\citep{manuvinakurike-etal-2018-edit}, for example, `` I prefer \\textbf{@@} to \\textbf{**}, modify \\textbf{**} to \\textbf{@@}.'' We replace \\textbf{**} and \\textbf{@@} with $\\textbf{o}$ and $\\textbf{m}$, and get modification operation $\\textbf{q}$. Note that SUBSTITUTE operation could be viewed as DELETE the node $\\textbf{o}$ first and then INSERT the node $\\textbf{m}$, or vice versa.\n\\end{itemize}\n\nIn Table \\ref{tab:RSICD_case_study}, we give the simple examples in RSICD to better understand three types of graph modification operations.\n\n\n\\subsection{Implementation Details}\n\\label{Hyper-parameters}\nHyper-parameters of the model are tuned on the development set. All transformer~\\citep{VaswaniSPUJGKP17} layers share the same hyper-parameter settings. Following~\\citet{He2020SceneGM}, we randomly initialized the word and node embeddings. We also report results with contextualized embeddings from BERT~\\citep{DevlinCLT19}. Specifically, we used the BERT-base-uncased implemented by~\\citep{wolf-etal-2020-transformers}. The parameters in BERT are fixed during training. To mitigate over-fitting, we apply dropout~\\citep{SrivastavaHKSS14} with the drop rate 0.2 between different layers. Following~\\citet{Cai2020AMRPV}, we use a special UNK token to replace the out-of-vocabulary lemmas of the input query and remove the UNK token in the generated graph. Parameter optimization is performed with the ADAM optimizer~\\citep{KingmaB14} with $\\beta_{1}$ = 0.9 and $\\beta_{2}$ = 0.999. The learning rate schedule is similar to that in~\\citet{VaswaniSPUJGKP17}, where warm-up steps being set to 2K. We used early stopping on the development set for choosing the best model. 
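\n\nAs a concrete illustration, the learning rate schedule of~\\citet{VaswaniSPUJGKP17} with 2K warm-up steps can be sketched as follows; this is a minimal sketch, and the function name and the commented usage are illustrative rather than a description of our exact implementation.\n\n\\begin{verbatim}\ndef transformer_lr(step, d_model=512, warmup=2000):\n    # Inverse-square-root schedule with linear warm-up:\n    # lr = d_model^{-0.5} * min(step^{-0.5}, step * warmup^{-1.5})\n    step = max(step, 1)\n    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)\n\n# Sketch of usage with Adam (beta1 = 0.9, beta2 = 0.999), e.g. in PyTorch:\n#   optimizer = torch.optim.Adam(model.parameters(), lr=1.0,\n#                                betas=(0.9, 0.999))\n#   scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, transformer_lr)\n\\end{verbatim}\n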
\nPlease refer to Table \\ref{tab:hyper-parameters} for the detailed hyper-parameters settings for ISE.\n\n\\begin{table}\n\\centering\n\\scalebox{0.9}{\n\\begin{tabular}{lr}\n\\toprule\n\\multicolumn{2}{l}{\\textbf{Embeddings}}\\\\\n\\midrule\nconcept & 300 \\\\\nword & 300 \\\\\nrelation & 100\\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Query Encoder}}\\\\\ntransformer layers & 4 \\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Graph Encoder}}\\\\\ntransformer layers & 2 \\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Feature Fusion}}\\\\\nheads & 8 \\\\\nhidden size & 512 \\\\\nfeed-forward hidden size & 1024 \\\\\n\\midrule\n\\multicolumn{2}{l}{\\textbf{Node Decoder\/ Edge Decoder}}\\\\\nheads & 8 \\\\\nfeed-forward hidden size & 1024 \\\\\n\n\n\\bottomrule\n\\end{tabular}}\n\n\\caption{Hyper-parameters settings for ISE.}\n\\label{tab:hyper-parameters}\n\n\\end{table}\n\n\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[scale=0.26]{figures\/error.pdf}\n \n \\caption{Two errors of ISE for scene graph modification on User Generated. $Q$ denotes the textual query, $G_{S}$ denotes the source scene graph, $G_{T}$ denotes the target scene graph generated by ISE. $G_{G}$ denotes the gold target scene graph.}\n \\label{fig:error}\n \n\\end{figure*}\n\n\\subsection{Scene Graph and Application}\n\\label{scene graph}\nDeep learning has significantly promoted the advancement of computer vision~\\citep{liang2017deep, ren2021comprehensive}. Simple visual understanding tasks such as object detection and recognition are no longer sufficient. To depict the relationship between objects in the scene as a driving force, higher-level visual understanding and reasoning skills are frequently necessary. Scene graphs were created specifically to address this issue. Scene graph was first proposed by~\\citet{Johnson2015ImageRU} for image retrieval, which describes objects, their attributes, and relationships in images with a graph. A complete scene graph could represent the semantics of a dataset's scenes, not just a single image or video; additionally, it contains powerful representations that encode 2D\/3D images~\\citep{Johnson2015ImageRU,armeni20193d}, and videos~\\citep{qi2018scene,wang2020storytelling} into their abstract semantic elements. Scene graph is beneficial for various downstream tasks, such as information extraction \\cite{hu2020selfore,hu2021semi,hu2021gradient,liu2022hierarchical}, natural language summarization \\cite{liu2022psp}, and natural language inference \\cite{li2022pair}.\n\nFollowing the graph representation paradigm, different methods have been proposed to generate scene graphs from images~\\citep{XuZCF17,WangLZY18, ZellersYTC18}. Many cross-modal tasks that require understanding and reasoning on image and text are able to benefit from incorporating scene graphs, such as visual question answering~\\citep{TeneyLH17,shi2019explainable}, grounding referring expressions~\\citep{wang19}, image captioning~\\citep{YangTZC19,yao2018exploring}, and image retrieval~\\citep{Wang2020CrossmodalSG,Schroeder2020StructuredQI}. \n\n\n\n\n\n\\subsection{Error Analysis} \n\\label{error}\nWe give two wrong scene graphs generated by ISE in Figure \\ref{fig:error}. We can observe in Figure \\ref{fig:error} (a) that although ISE successfully predicts the need to insert a relation between object ``Plants'' and attribute ``Surface'', since the User Generated dataset contains a total of 2078 relations and the relations have serious long-tail effects. 
It is difficult for ISE to learn sparse relations with few occurrences, which leads it to incorrectly predict the relation ``in growing over'' as ``on''. We plan to address the long-tail effect of relations in future work.\nSince a node can be attached to multiple nodes, a common error arises when the edge decoder determines which nodes in the current graph should be attached to the new node: the wrong node is chosen for attachment. As shown in Figure \\ref{fig:error} (b), ISE incorrectly places the relation ``behind'' between ``Giraffe'' and ``Tree'' instead of between ``Head'' and ``Tree''.\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nA phased mission system (PMS) is one that performs several different tasks or functions in sequence. The periods in which each of these successive tasks or functions takes place are known as phases \\citep{xing2008reliability,la2004phased}. Examples of PMSs can be found in many practical applications, such as electric power systems, aerospace systems, weapon systems and computer systems. A typical example of a PMS is the monitoring system in a satellite-launching mission with three phases: launch, separation, and orbiting.\n\nA PMS is considered to be functioning if all of its phases are completed without failure, and failed if failure occurs in any phase. Therefore, the reliability of a PMS with $N$ phases is the probability that it operates successfully in all of its phases:\n\\begin{equation}\n R_S = \\mathbb{P}(\\mbox{Phase 1 works} \\cap \\mbox{Phase 2 works} \\cap \\dots \\cap \\mbox{Phase $N$ works})\n \\label{eq:allphases} \n\\end{equation}\n\nThe calculation of the reliability of a PMS is more complex than that of a single phase system, because the structure of the system varies between phases and the component failures in different phases are mutually dependent \\citep{xing2008reliability}.\n\nOver the past few decades, there have been extensive research efforts to analyze PMS reliability. Generally, there are two classes of models to address such scenarios: state space oriented models \\citep{kim1994phased,chew2008phased,lu2014reliability,wang2017competing} and combinatorial methods \\citep{xing2015binary,ma1999algorithm,la2004phased,zang1999bdd,tang2006bdd,mo2009variable,reed2011improved,xing2007reliability,xing2013bdd}. The main idea of state space oriented models is to construct Markov chains and\/or Petri nets to represent the system behaviour, since these provide flexible and powerful options for modelling complex dependencies among system components. However, the cardinality of the state space can become exponentially large as the number of components increases. The remaining approaches exploit combinatorial methods, Boolean algebra and various forms of decision diagrams for reliability analysis of PMSs.\n\nIn particular, in recent years the Binary Decision Diagram (BDD) --- a combinatorial method --- has become more widely used in reliability analysis of PMSs due to its computationally efficient and compact representation of the structure function compared with other methods. Zang \\emph{et al.}\\ \\citep{zang1999bdd} first used the BDD method to analyze the reliability of PMSs. Tang \\emph{et al.}\\ \\citep{tang2006bdd} developed a new BDD-based algorithm for reliability analysis of PMSs with multimode failures. Mo \\citep{mo2009variable} and Reed \\emph{et al.}\\ \\citep{reed2011improved} improved the efficiency of Tang's method by proposing a heuristic selection strategy and reducing the BDD size, respectively. Xing \\emph{et al.}\\ \\citep{xing2007reliability,xing2013bdd} and Levitin \\emph{et al.}\\ \\citep{levitin2013reliability} proposed BDD based methods for the reliability evaluation of PMSs with common-cause failures and propagated failures. 
Wang \\emph{et al.}\\ \\citep{wang2007reliability} and Lu \\emph{et al.}\\ \\citep{lu2015reliability} studied modular methods for reliability analysis of PMSs with repairable components, by combining BDDs with state-enumeration methods.\n\nWhile the BDD method has been shown to be a very efficient combinatorial method, it is still difficult to analyze large systems without considerable computational expense \\citep{xing2008reliability,reed2011improved}. In this paper, we propose a combinatorial analytical approach providing a new survival signature methodology for reliability analysis of PMSs. This paper is organized as follows: \\cref{sec:PMS} gives a brief background on PMSs; \\cref{sec:survsig} first shows how the standard survival signature can be used to evaluate PMSs with similar component types in each phase, before providing a novel methodology which facilitates heterogeneity of components across the phases. \\Cref{sec:examples} presents illustrative examples showing numerical agreement with existing literature, but where the full benefits of the interpretability of survival signatures is now available due to this work. Finally, \\cref{sec:conclusion} presents some conclusions ideas for future work.\n\n\\section{Phased mission systems}\n\\label{sec:PMS}\n\n\\Cref{fig:pms1} shows a simple system that performs a series of functions or tasks which are carried out over consecutive periods of time to achieve a certain overall goal (or `mission'). Such a system --- where the structure (and possibly operating environment) of the system changes over time --- is known as a Phased Mission System (PMS), with each period of operation being referred to as a `phase'. Each phase therefore corresponds to one structural configuration and components in different phases are taken to be mutually dependent.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS1}\n \\caption{A PMS with similar components in each phase.}\n \\label{fig:pms1}\n\\end{figure}\n\nLet us consider a system consisting of $N \\ge 2$ phases, with $n_i$ components in phase $i \\in \\{1, \\dots, N\\}$. The binary state indicator variable $X_{ij}, j \\in \\{1, \\dots, n_i\\}$ denotes the operational status of the $j$th component in phase $i$:\n\\[ X_{ij} = \\begin{cases}\n1 & \\mbox{if component $j$ works for all of phase $i$} \\\\\n0 & \\mbox{if component $j$ fails before the end of phase $i$}\n\\end{cases} \\]\n\nThe vectors $\\mathbf{X}_i = (X_{i1}, \\dots, X_{in_i}), i \\in \\{1, \\dots, N\\}$, represent the states of all components in the $i$th phase and the full vector $\\mathbf{X} = (\\mathbf{X}_1, \\dots, \\mathbf{X}_N) = (X_{11}, \\dots, X_{1n_1}, \\dots, X_{N1}, \\dots, X_{Nn_N})$ represents the states of all components during the full mission.\n\nThe state of the system in each phase is also a binary random variable, which is completely determined by the states of the components in that phase. Let $\\phi_i$ represent the system state in the $i$th phase, that is:\n\\[ \\phi_i = \\varphi_i(\\mathbf{X}_i) = \\varphi_i(X_{i1}, \\dots, X_{in_i}) \\]\nwhere $\\varphi_i(\\cdot)$ is the structure function of the system design in phase $i$. 
The structure function evaluates to $\\phi_i = 1$ if the system functions for state vector $\\mathbf{X}_i$, and $\\phi_i = 0$ if not.\n\nSimilarly, the structure function of the full PMS (that is, the operational state of the system across \\emph{all} phases) is also a binary random variable, which is completely determined by the states of all the components in the PMS\n\\begin{equation}\n \\phi_S = \\varphi_S(\\mathbf{X}) \\triangleq \\prod_{i=1}^N \\varphi_i(X_{i1}, \\dots, X_{in_i})\n \\label{eq:strfnpms}\n\\end{equation}\n\nThe structure function as shown in \\cref{eq:strfnpms} is again a Boolean function which is derived from the truth table of the structure functions for each phase of operation. The truth tables depend uniquely on the system configurations and simply provide a means of tabulating all the possible combinational states of each component to realise the operational state of the system in each case. The state vectors for which $\\varphi_S(\\mathbf{X})=1$ provide a logical expression for the functioning of the system, while the states when $\\varphi_S(\\mathbf{X})=0$ provide a logical expression for the failure of the system. It should be noted that, unlike non-PMSs, there exist impossible combinations of states which should be deleted from the truth table when performing a reliability analysis. For example, if both the system and its components are non-repairable during the mission, then if a component is failed in a certain phase it cannot be working in subsequent phases.\n\nFinally, if all phases are completed successfully, the mission is a success, that is:\n\\[ \\phi_S = \\prod_{i=1}^N \\phi_i = 1 \\iff \\phi_i = 1 \\ \\forall\\,i \\]\n\n\\section{Survival signature}\n\\label{sec:survsig}\n\nFor larger systems, working with the full structure function can be complicated and as the system size grows it becomes hard to intuit anything meaningful from the particular algebraic form it takes. In particular, one may be able to summarize the structure function when it consists of exchangeable components of one or more types \\citep{samaniego2007system,coolen2013generalizing,coolen2014nonparametric}.\n\nRecently, the concept of the survival signature has attracted substantial attention, because it provides such a summary which enables insight into the system design even for large numbers of components of differing types. Coolen and Coolen-Maturi \\citep{coolen2013generalizing} first introduced the survival signature, using it to analyze complex systems consisting of multiple types of component. Subsequently, \\citep{coolen2014nonparametric,coolen2015predictive,Aslett2015} presented the use of the survival signature in an inferential setting, with nonparametric predictive inference and Bayesian posterior predictive inference respectively, and \\citep{feng2016imprecise} presented methods for analyzing imprecise system reliability using the survival signature. Patelli \\emph{et al.}\\ \\citep{patelli2017simulation} developed a survival signature-based simulation method to calculate the reliability of large and complex systems and \\citep{Aslett2017} presents a simulation method which can be used if the dependency structure is too complex for a survival signature approach. Walter \\emph{et al.}\\ \\citep{walter2017condition} proposed a new condition-based maintenance policy for complex systems using the survival signature. 
Moreover, Eryilmaz \\emph{et al.}\\ \\citep{eryilmaz2016generalizing} generalized the survival signature to multi-state systems.\n\nEfficient computation of the survival signature was addressed by Reed \\citep{reed2017efficient}, using reduced order binary decision diagrams (ROBDDs). The survival signature of a system can be easily computed by specifying the reliability block diagram as a simple graph by using the \\texttt{ReliabilityTheory} R package \\citep{Aslett2012}.\n\nIn this section, the survival signature is first shown to apply directly to full mission-length PMSs where there is a single component type in each phase. Thereafter, an extension is presented which enables heterogeneity of component types across phases, providing novel methodology for reliability analysis of PMSs.\n\n\\subsection{PMSs with similar components in each phase}\n\\label{sec:pms.same}\n\nWe consider a system with $N \\ge 2$ phases, with $n$ components in each phase (e.g.\\ the PMS as shown in \\cref{fig:pms1}), and let phase $i \\in \\{1, \\dots, N\\}$ run from time $\\tau_i$ to time $\\tau_{i+1}$ with $\\tau_1 \\triangleq 0$ and $\\tau_i < \\tau_{i+1} \\ \\forall\\,i$. Thus the full mission time is denoted $\\tau_{N+1}$.\n\nWe assume that the random failure times of components in the same phase are fully independent, and in addition that the components are exchangeable. Let $\\Phi(l_1, \\dots, l_N)$ denote the probability that the PMS functions by the end of the mission given that precisely $l_i, i \\in \\{1, \\dots, N\\}$, of its components functioned in phase $i$. Both the system and its components are non-repairable during the mission, so $n \\ge l_1 \\ge l_2 \\ge \\dots \\ge l_N \\ge 0$ and the number of components that function at the beginning of phase $i$ is $m_i = l_{i-1},$ with $m_1=n$ --- so all components appear in all phases. Subject to these constraints which do not apply in a non-PMS, the survival signature can then be applied without further modification for the mission completion time.\n\nThere are $\\binom{m_i}{l_i}$ state vectors where precisely $l_i$ components function. Because the random failure times of components in the same phase are independent and exchangeable, the survival signature is equal to:\n\\begin{equation}\n \\Phi(l_1, \\dots, l_N) = \\left[ \\prod_{i=1}^N \\binom{m_i}{l_i}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\varphi_S(\\mathbf{X})\n \\label{eq:survsig0}\n\\end{equation}\nwhere $\\mathcal{S}$ denotes the set of all possible state vectors for the whole system where $l_i$ components in phase $i$ are functioning. This step is of the same form as the standard survival signature for a static system \\citep{coolen2013generalizing}, but note one immediate subtle difference: as noted above, $m_i$ is not fixed across evaluations of $\\Phi(\\cdot)$, but rather is determined by $l_{i-1}$, since the maximum number of functioning components in the $i$th phase is determined by how many components completed phase $i-1$ still functioning.\n\nA further subtlety arises as soon as we consider any time leading up to the mission completion time, because the structure of the system changes. Although the standard survival signature can be used in computing the reliability of a static system at any point in its life \\citep{coolen2013generalizing}, this is no longer true in this extension to PMSs. 
Consequently, \\eqref{eq:survsig0} is the survival signature which represents the probability that the whole mission completes successfully given that $l_i$ components are working in phase $i$. For the survival function of a PMS, we must extend the survival signature to create a family of survival signatures which account for the temporally changing structure. Let $\\Phi_p(l_1, \\dots, l_p)$ denote the survival signature of a PMS up to and including phase $p \\le N$, which is the probability that the mission has not yet failed by phase $p$ given that $l_i$ components are working in phase $i \\in \\{1, \\dots, p\\}$. Then,\n\\begin{equation}\n \\Phi_p(l_1, \\dots, l_p) = \\left[ \\prod_{i=1}^p \\binom{m_i}{l_i}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\prod_{i=1}^p \\varphi_i(\\mathbf{X})\n \\label{eq:survsig0t}\n\\end{equation}\n\nWe define a function mapping mission time $t$ to the current phase\n\\begin{equation}\n \\rho(t) : [0,\\tau_{N+1}] \\to \\{1, \\dots, N\\}, \\mbox{ as } \\rho(t) \\triangleq \\max\\{ i \\,:\\, \\tau_i < t \\} \\label{eq:currentphase}\n\\end{equation}\n\nFrom \\cref{eq:allphases} and \\eqref{eq:survsig0t}, the reliability of the PMS at time $t$ can then be expressed pointwise as:\n\\begin{equation}\n R(t) = \\sum_{l_1=0}^{m_1} \\cdots \\sum_{l_{\\rho(t)}=0}^{m_{\\rho(t)}} \\left[ \\Phi_{\\rho(t)}(l_1, \\dots, l_{\\rho(t)}) \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\left\\{ C_i(t) = l_i \\right\\} \\right) \\right]\n \\label{eq:survsig1}\n\\end{equation}\nwhere $C_i(t)$ is the random variable denoting the number of components in phase $i$ which function at time $t \\in [\\tau_i, \\tau_{i+1})$. If $R(t)$ is being evaluated at $t \\ge \\tau_{i+1}$ then $C_i(t) \\triangleq C_i(\\tau_{i+1})$. By the definition of $\\rho(t)$, $R(t)$ will never be evaluated for $t < \\tau_{i}$.\n\nBecause components are of the same type they share a common lifetime distribution as long as they all appear in all phases (and hence age together). As a result, the sequential nature of a PMS means that components in the same phase have common conditional CDF, $F_i(t)$, for phase $i$, where conditioning is on the component having worked at the beginning of phase $i$. 
That is, if the components have common CDF $F(t)$ and all components appear in every phase (in possibly different configurations), then the conditional CDF in phase $i$ is:\n\\begin{align}\n F_i(t) &= \\mathbb{P}(T < t \\,|\\, \\tau_i, \\tau_{i+1}, T > \\tau_i) \\nonumber \\\\\n &= \\frac{1}{1-F(\\tau_i)} \\int_{\\tau_i}^{\\min \\{t, \\tau_{i+1}\\}} dF(z) \\nonumber \\\\\n &= \\frac{F(\\min \\{t, \\tau_{i+1}\\}) - F(\\tau_i)}{1-F(\\tau_i)} \\label{eq:condcdf}\n\\end{align}\nwhere $\\tau_i$ is the start time of phase $i$ ($\\tau_1 \\triangleq 0$) and $T$ is the random variable representing component lifetime.\n\nProceeding with this conditional CDF, the last term in \\cref{eq:survsig1} can be simplified as\n\\begin{align*}\n \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\left\\{ C_i(t) = l_i \\right\\} \\right) &= \\prod_{i=1}^{\\rho(t)} \\mathbb{P}\\left( C_i(t) = l_i \\right) \\\\\n &= \\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right]\n\\end{align*}\nwhere\n\\begin{equation}\n R_i(t) = 1-F_i(t) = \\frac{1-F(\\min \\{t, \\tau_{i+1}\\})}{1-F(\\tau_i)} \\label{eq:comprel}\n\\end{equation}\nis the reliability of the components at time $t$ in phase $i$.\n\nThus, \\cref{eq:survsig1} can be rewritten pointwise in $t$ as\n\\begin{align}\n R(t) &= \\sum_{l_1=0}^{m_1} \\cdots \\sum_{l_{\\rho(t)}=0}^{m_{\\rho(t)}} \\left\\{ \\Phi_{\\rho(t)}(l_1, \\dots, l_{\\rho(t)}) \\phantom{\\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right]} \\right. \\nonumber \\\\\n &\\qquad\\qquad\\qquad\\qquad \\times \\left. \\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right] \\right\\} \\label{eq:survsigPMS1a}\n\\end{align}\n\nSince in the general case (see special case exception in the sequel) every component appears in every phase, this can be written\n\\begin{align}\n R(t) &= \\sum_{l_1=0}^{l_0} \\cdots \\sum_{l_{\\rho(t)}=0}^{l_{\\rho(t)-1}} \\left\\{ \\Phi_{\\rho(t)}(l_1, \\dots, l_{\\rho(t)}) \\phantom{\\prod_{i=1}^{\\rho(t)} \\left[ \\binom{m_i}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{m_i-l_i} \\right]} \\right. \\nonumber \\\\\n &\\qquad\\qquad\\qquad\\qquad \\times \\left. \\prod_{i=1}^{\\rho(t)} \\left[ \\binom{l_{i-1}}{l_i} (R_i(t))^{l_i} (1-R_i(t))^{l_{i-1}-l_i} \\right] \\right\\} \\label{eq:survsigPMS1b}\n\\end{align}\nwhere we define $l_0 \\triangleq n$. Writing in this final form stresses the sequential dependence in the computation, in stark contrast to the standard survival signature for a static system.\n\n\\subsubsection{Special case: Exponentially distributed component lifetime}\n\nThere are two simplifications that arise when components are Exponentially distributed. Firstly, $F_i(t) \\equiv F(t) \\ \\forall\\, i$, so that $R_i(t) = R(t) = 1-F(t-\\tau_i) \\ \\forall i$.\n\nThe second simplification is that not all components need to appear in all phases. It may be that some components appear only in later phases (but continue to appear after the first phase they are in). In this case, one should be careful not to use \\eqref{eq:survsigPMS1b}, but instead \\eqref{eq:survsigPMS1a} where now $m_i=l_{i-1}+m_i^\\star$ where $m_i^\\star$ is the number of components appearing in the system for the first time at phase $i$.\n\n\\subsubsection{Modelling constraints}\n\nNote that considerable care is required in the specification of --- and implicit assumptions made for --- $F_i(t)$. 
In particular, when a component is not present in a phase, then whether ageing continues (i.e.\\ time passes) or not is crucial in determining whether the assumption of identical component lifetime distribution still holds in all phases. For example, in \\cref{fig:pms1} each component appears in all phases and therefore experiences the same wear, but in \\cref{fig:pms2} each component is in precisely 2 of the 3 phases. Consequently, even though one might assume all components are of the same type initially, if component $C$ is considered not to `age' during phase 1 (where it is not present) then it will in fact not have identical conditional lifetime distribution to $A$ and $E$ during phase 2, since the latter will have already experienced wear from phase 1.\n\nThis imposes rather unattractive modelling strictures: all components of similar type must appear in the same phases; or all components must have constant failure rate (Exponentially distributed lifetime). These modelling strictures severely limit applicability to real world systems, thus motivating the novel methodological extension of survival signatures hereinafter.\n\n\\subsection{PMSs with different components in different phases}\n\\label{sec:pms.diff}\n\nMost practical PMSs for which the reliability is modelled consist of heterogeneous component types both within and between phases. Therefore, a more interesting challenge is to extend the methodology of survival signatures to this more general setting.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS2}\n \\caption{A PMS with multiple types of components.}\n \\label{fig:pms2}\n\\end{figure}\n\nWe now consider this setting in generality and show that the problem again simplifies in the special case of Exponentially distributed lifetimes, which is the only case that most of the literature has addressed to date. The only constraint we impose is that components of the same type appear in the same phases (since then the conditional CDFs within phases remain in agreement). However, note that this does not limit the scenarios that can be modelled, since components of the same physical type can still be split into multiple `meta-types'.\n\n\\begin{definition}{(Meta-type)}\n Components are defined to be of the same \\emph{meta-type} when they are of the same physical type and appear in the same phases.\n\\end{definition}\n\nLet there be a total of $K$ different meta-types of component. We take the multi-type, multi-phase survival signature to be denoted by the function $\\Phi(l_{11}, \\dots, l_{1K}, \\dots, l_{N1}, \\dots, l_{NK})$, the probability that the system functions given that precisely $l_{ik}$, components of type $k$ function in phase $i$. That is,\n\\[ \\Phi(l_{11}, \\dots, l_{1K}, \\dots, l_{N1}, \\dots, l_{NK}) = \\left[ \\prod_{i=1}^N \\prod_{k=1}^{K} \\binom{m_{ik}}{l_{ik}}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\varphi_S(\\mathbf{X}) \\]\nwhere $\\mathcal{S}$ denotes the set of all possible state vectors for the whole system. Not all component types need necessarily appear in all phases, so we admit the possibility that $m_{ik}=0$ when a component type is absent from a phase and observe the standard definition that $\\binom{0}{0} \\triangleq 1$ --- this simplifies notation versus having varying numbers of $l_{i\\cdot}$ for each phase.\n\nAs before, the above survival signature is only applicable to the full mission time and we define a family of survival signatures corresponding the successive phases of the mission. 
Let $\\Phi_p(l_{11}, \\dots, l_{1K}, \\dots, l_{p1}, \\dots, l_{pK})$ denote the survival signature of a PMS up to and including phase $p \\le N$, which is the probability that the mission has not yet failed by phase $p$ given that $l_{ik}$ components of type $k$ are working in phase $i \\in \\{1, \\dots, p\\}$. Then,\n\\begin{equation}\n \\Phi_p(l_{11}, \\dots, l_{1K}, \\dots, l_{p1}, \\dots, l_{pK}) = \\left[ \\prod_{i=1}^p \\prod_{k=1}^{K} \\binom{m_{ik}}{l_{ik}}^{-1} \\right] \\sum_{\\mathbf{X} \\in \\mathcal{S}} \\prod_{i=1}^p \\varphi_i(\\mathbf{X})\n \\label{eq:survsig0t2}\n\\end{equation}\n\nWe retain the definition of $\\rho(t)$ given in \\eqref{eq:currentphase}. It then follows from \\cref{eq:allphases} and \\cref{eq:survsig0t2} that the reliability of the PMS can be characterised as:\n\\begin{align}\n R(t) &= \\sum_{l_{11}=0}^{m_{11}} \\cdots \\sum_{l_{\\rho(t),K}=0}^{m_{\\rho(t),K}} \\left[ \\Phi_{\\rho(t)}(l_{11}, \\dots, l_{1K}, \\dots, l_{\\rho(t),1}, \\dots, l_{\\rho(t),K}) \\vphantom{\\mathbb{P}\\left( \\bigcap_{i=1}^N \\bigcap_{k=1}^{K_i} \\left\\{ C_{ik}(t) = l_{ik} \\right\\} \\right)} \\right. \\nonumber \\\\\n & \\qquad\\qquad\\qquad\\qquad\\qquad \\left. \\times \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\bigcap_{k=1}^{K} \\left\\{ C_{ik}(t) = l_{ik} \\right\\} \\right) \\right] \\label{eq:survsig2}\n\\end{align}\nwhere $C_{ik}(t)$ is the random variable denoting the number of components of type $k$ in phase $i$ which function at time $t \\in [\\tau_i, \\tau_{i+1})$. In the same vein as \\cref{sec:pms.same}, if $R(t)$ is being evaluated at $t \\ge \\tau_{i+1}$ then $C_{ik}(t) \\triangleq C_{ik}(\\tau_{i+1})$. By the definition of $\\rho(t)$, $R(t)$ will never be evaluated for $t < \\tau_{i}$.\n\nWe can simplify, by defining that $\\mathbb{P}\\left( C_{ik}(t) = 0 \\right) = 1$ when $m_{ik}=0$.\n\\begin{align*}\n \\mathbb{P}\\left( \\bigcap_{i=1}^{\\rho(t)} \\bigcap_{k=1}^{K} \\left\\{ C_{ik}(t) = l_{ik} \\right\\} \\right) &= \\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\mathbb{P}\\left( C_{ik}(t) = l_{ik} \\right) \\\\\n &= \\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\left[ \\binom{m_{ik}}{l_{ik}} (R_{ik}(t))^{l_{ik}} (1-R_{ik}(t))^{m_{ik}-l_{ik}} \\right]\n\\end{align*}\nwith\n\\begin{equation}\n R_{ik}(t) = \\frac{1-F_k(\\min \\{t, \\tau_{i+1}\\})}{1-F_k(\\tau_i)} \\label{eq:comprel2}\n\\end{equation}\nwhere $F_k(\\cdot)$ is the CDF of the component lifetime distribution for the meta-type $k$.\n\nConsequently, for any time $t$ during the mission, we have the reliability of the system characterised by:\n\n\\begin{align}\n R(t) &= \\sum_{l_{11}=0}^{m_{11}} \\cdots \\sum_{l_{\\rho(t),K}=0}^{m_{\\rho(t),K}} \\left\\{ \\Phi_{\\rho(t)}(l_{11}, \\dots, l_{1K}, \\dots, l_{\\rho(t),1}, \\dots, l_{\\rho(t),K}) \\vphantom{\\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\left[ \\binom{m_{ik}}{l_{ik}} (R_{ik}(t))^{l_{ik}} (1-R_{ik}(t))^{m_{ik}-l_{ik}} \\right]} \\right. \\nonumber \\\\\n & \\qquad\\qquad\\ \\left. \\times \\prod_{i=1}^{\\rho(t)} \\prod_{k=1}^{K} \\left[ \\binom{m_{ik}}{l_{ik}} (R_{ik}(t))^{l_{ik}} (1-R_{ik}(t))^{m_{ik}-l_{ik}} \\right] \\right\\} \\label{eq:survsigPMS2}\n\\end{align}\nwhere $m_{ik} = l_{jk}$ for $j = \\max \\{ j : j < i, m_{jk} > 0 \\}$. That is, $m_{ik}$ is the number components which were working in the most recent preceding phase where this component meta-type appears.\n\n\\subsubsection{Special case: Exponential component lifetimes}\n\nExponentially distributed component lifetimes again provide simplifications. 
Now, the $R_{ik}(t) \\equiv R_k(t)$ due to the memoryless property of the Exponential distribution.\n\nFurthermore, we can relax the definition of a meta-type of component. The definition of component meta-types serves two purposes: (i) to ensure that $m_{ik}$ can be determined without tracking the individual functioning status of all components; and (ii) to ensure that the conditional CDFs of all components of the same meta-type in a phase are the same. The second purpose is made entirely redundant by the memoryless nature of the Exponential distribution. The first purpose remains, but can be achieved with a weaker definition of meta-type.\n\n\\begin{definition}{(Exponential meta-type)}\n Components are defined to be of the same \\emph{exponential meta-type} when they are of the same Exponentially distributed physical type, and if once any pair of components of the same \\emph{exponential meta-type} appear in a phase together, they both appear in all subsequent phases where either component appears.\n\\end{definition}\n\nIn other words, components of the same exponential meta-type may first appear in the system at different phases, but thereafter should appear whenever at least one such exponential meta-type component appears. This definition enables the determination of $m_{ik}$ as $m_{ik} = l_{jk} + m_{ik}^\\star$ for $j = \\max \\{ j : j < i, m_{jk} > 0 \\}$, where $m_{ik}^\\star$ is the number of components of exponential meta-type $k$ appearing for the first time in phase $i$.\n\nThe benefits of Exponential component lifetimes can be mixed in a system containing both meta-type and exponential meta-types since a crucial feature of survival signatures is the factorisation of such types so that they do not interact.\n\n\\section{Numerical examples}\n\\label{sec:examples}\n\n\\subsection{Example 1}\n\nWe first consider the PMS shown in \\cref{fig:pms1}. The duration of each phase is taken to be 10 hours, and the failure rate of each component in each phase is $10^{-4}$\/hour.\n\nThe survival signatures of this PMS can be obtained using \\cref{eq:survsig0}. The elements of the survival signature which are non-zero are shown in \\cref{tab:pms1.survsig} --- that is, rows where $\\Phi(l_1)=0, \\Phi(l_1, l_2)=0$ and $\\Phi(l_1, l_2, l_3)=0$ are omitted. The table is grouped into a nested sequence of phases, with just the first phase shown, followed by the first two phases together and finally all phases --- this helps emphasise and clarify the sequential dependence of phases, where $m_k$ depends on $l_{k-1}$.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular}{ccccccccc}\n \\hline\n \\multicolumn{2}{c}{First phase} & \\multicolumn{3}{c}{Phase 1+2} & \\multicolumn{4}{c}{All Phases} \\tabularnewline\n \\multicolumn{2}{c}{$0 \\le t \\le 10$} & \\multicolumn{3}{c}{$10 < t \\le 20$} & \\multicolumn{4}{c}{$20 < t \\le 30$} \\tabularnewline\n \\hline \n $l_1$ & $\\Phi(l_1)$ & $l_1$ & $l_2$ & $\\Phi(l_1, l_2)$ & $l_1$ & $l_2$ & $l_3$ & $\\Phi(l_{1},l_{2},l_{3})$ \\tabularnewline\n \\hline \n \\hline\n 3 & 1 & 3 & 1 & 1 & 3 & 2 & 2 & $\\frac{2}{3}$ \\tabularnewline\n & & 3 & 2 & 1 & 3 & 3 & 2 & $\\frac{2}{3}$ \\tabularnewline\n & & 3 & 3 & 1 & 3 & 3 & 3 & 1 \\tabularnewline\n \\hline \n \\end{tabular}\n \\caption{Survival signature of the PMS shown in \\cref{fig:pms1}}\n \\label{tab:pms1.survsig}\n\\end{table}\n\nWe can obtain the conditional reliability of components using the conditional failure rate of the component in each phase. 
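\n\nFor this example the calculation is small enough to sketch in a few lines of code. The following Python snippet evaluates the conditional component reliabilities of \\cref{eq:comprel} together with the survival signature entries of \\cref{tab:pms1.survsig}, assuming exponential lifetimes with rate $10^{-4}$\/hour; the function names are illustrative and this is only a sketch of the calculation, not the implementation used to produce the results reported below.\n\n\\begin{verbatim}\nfrom itertools import product\nfrom math import comb, exp\n\nlam, tau = 1e-4, [0.0, 10.0, 20.0, 30.0]  # failure rate, phase boundaries\n\n# Non-zero entries of the survival signature family (see the table above).\nPhi = {1: {(3,): 1.0},\n       2: {(3, 1): 1.0, (3, 2): 1.0, (3, 3): 1.0},\n       3: {(3, 2, 2): 2/3, (3, 3, 2): 2/3, (3, 3, 3): 1.0}}\n\ndef r_phase(i, t):\n    # Conditional reliability of a component within phase i (exponential).\n    return exp(-lam * (min(t, tau[i]) - tau[i - 1]))\n\ndef reliability(t, n=3):\n    p = max(i for i in (1, 2, 3) if tau[i - 1] < t) if t > 0 else 1\n    total = 0.0\n    for ls in product(range(n + 1), repeat=p):\n        phi = Phi[p].get(ls, 0.0)\n        if phi == 0.0:\n            continue\n        prob, prev = 1.0, n\n        for i, l in enumerate(ls, start=1):\n            r = r_phase(i, t if i == p else tau[i])\n            prob *= comb(prev, l) * r ** l * (1 - r) ** (prev - l)\n            prev = l\n        total += phi * prob\n    return total\n\n# reliability(10.0)  ~ 0.99700   (end of phase 1)\n# reliability(20.01) ~ 0.99601   (just after the change to phase 3)\n# reliability(30.0)  ~ 0.99501   (end of the mission)\n\\end{verbatim}\n\n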
\\Cref{eq:survsigPMS1a} then renders the reliability of the PMS as a whole. The results are shown in \\cref{tab:pms1.R} and \\cref{fig:pms1.R}. These results concord with those found using an independent method in \\citep{zang1999bdd}.\n\nOf note is the jump discontinuity in the reliability function at $t=20$, as shown in \\cref{fig:pms1.R}. This occurs because a failure of component $A$ during phase 2 does not necessarily cause failure of the system at that point, so long as at least one of components $B$ or $C$ works. However, in this situation the PMS will fail instantaneously upon commencing phase 3 at $t=20^+$. Consequently, the size of the jump discontinuity in fact corresponds to the probability of the event $\\{ A$ fails in phase 2, but the system still functions$\\}$.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.85\\textwidth}{@{\\extracolsep{\\fill}}ccccccc@{\\extracolsep{\\fill}}}\n \\hline\n $t$ & $0$ & $10^{-}$ & $10^{+}$ & $20^{-}$ & $20^{+}$ & $30$ \\tabularnewline\n $R$ & 1 & 0.99700 & 0.99700 & 0.99700 & 0.99601 & 0.99501 \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Reliability of the PMS in example 1}\n \\label{tab:pms1.R}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS1R}\n \\caption{Reliability of the PMS in example 1.}\n \\label{fig:pms1.R}\n\\end{figure}\n\n\n\\subsection{Example 2}\n\nFor the PMS shown in \\cref{fig:pms2}, phases 1, 2 and 3 last for 10, 90 and 100 hours respectively. All components in each phase are of the same type and the lifetime distribution of these components follows a two-parameter Weibull distribution. \\Cref{tab:pms2.pars} summarises the distribution information of the components in each phase.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.75\\textwidth}{@{\\extracolsep{\\fill}}lccc@{\\extracolsep{\\fill}}}\n \\hline\n Parameter & Phase 1 & Phase 2 & Phase 3 \\tabularnewline\\hline \n Scale & 250 & 1000 & 300 \\tabularnewline\n Shape & 2.6 & 3.2 & 2.6 \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Conditional distribution information of the components in each phase}\n \\label{tab:pms2.pars}\n\\end{table}\n\nAs described in \\cref{sec:pms.diff}, if some components of the same type appear in a phase and also appear in some subsequent phases --- but not simultaneously --- then these components should be considered as different types of component. For this example, this means that despite the fact that they all share a common conditional lifetime distribution within each phase, components $A$ and $E$ need to be labelled as type 1 and the remainder as type 2, because ageing will have been different.\n\nThe survival signatures of this PMS are shown in \\cref{tab:pms2.survsig}, with rows where $\\Phi(l_{11},l_{12})=0, \\Phi(l_{11},l_{12},l_{21},l_{22})=0$ and $\\Phi(l_{11},l_{12},l_{21},l_{22},l_{32})=0$ suppressed. The reliability of the PMS is shown in \\cref{tab:pms2.R} and \\cref{fig:pms2.R}.\n\nWe again see a jump discontinuity in the reliability curve depicted in \\cref{fig:pms2.R}, at $t=10$. In this instance, if component $E$ fails during phase 1, the system will still function, but instantaneous failure will occur once phase 2 commences. This is evident in \\cref{tab:pms2.R}, which shows the jump discontinuity is of size $2.3 \\times 10^{-4}$. Indeed, this should correspond to the probability that the system survives phase 1 but with component $E$ failing during that phase. 
That is:\n\\begin{align*}\n & \\mathbb{P}\\left(\\{A, B \\mbox{ function}\\} \\cap \\{E \\mbox{ fails in phase 1}\\}\\right) \\\\\n & \\quad = \\mathbb{P}(E \\mbox{ fails in phase 1}) \\, \\mathbb{P}(A, B \\mbox{ function} \\,|\\, E \\mbox{ fails in phase 1}) \\\\\n & \\quad = \\int_0^{10} b a^{-b} t^{b-1} e^{-(t\/a)^b}\\,dt \\times \\left(1 - \\int_0^{10} b a^{-b} t^{b-1} e^{-(t\/a)^b}\\,dt \\right)^2 \\\\\n & \\quad \\approx 2.3 \\times 10^{-4} \\ \\ \\mbox{for } a=250,\\ b=2.6,\n\\end{align*}\nas required, where the conditional probability in the second line reduces to an unconditional one because the component lifetimes are independent. Hence, a PMS can exhibit jump discontinuities when the system layout switches, with probability mass from non-critical failures in one phase accumulating at the phase-change boundary.\n\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular}{cccccccccccccc}\n \\hline\n \\multicolumn{3}{c}{First phase} & \\multicolumn{5}{c}{First two phases} & \\multicolumn{6}{c}{All phases} \\tabularnewline\n \\hline \n $l_{11}$ & $l_{12}$ & $\\Phi_1$ & $l_{11}$ & $l_{12}$ & $l_{21}$ & $l_{22}$ & $\\Phi_{12}$ & $l_{11}$ & $l_{12}$ & $l_{21}$ & $l_{22}$ & $l_{32}$ & $\\Phi_S$ \\tabularnewline\n \\hline\n \\hline\n 1 & 1 & 1 & 1 & 1 & 1 & 1 & $\\frac{1}{2}$ & 1 & 1 & 1 & 1 & 1 & $\\frac{1}{2}$ \\tabularnewline\n 2 & 1 & 1 & 2 & 1 & 1 & 1 & $\\frac{1}{2}$ & 1 & 1 & 1 & 1 & 2 & $\\frac{1}{2}$ \\tabularnewline\n & & & 2 & 1 & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 3 & $\\frac{1}{2}$ \\tabularnewline\n & & & 2 & 1 & 2 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & $\\frac{1}{2}$ \\tabularnewline\n & & & & & & & & 2 & 1 & 1 & 1 & 2 & $\\frac{1}{2}$ \\tabularnewline\n & & & & & & & & 2 & 1 & 1 & 1 & 3 & $\\frac{1}{2}$ \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 0 & 1 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 0 & 2 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 1 & 1 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 1 & 2 & 1 \\tabularnewline\n & & & & & & & & 2 & 1 & 2 & 1 & 3 & 1 \\tabularnewline\n \\hline\n \\end{tabular}\n \\caption{Survival signature of the PMS shown in \\cref{fig:pms2}.}\n \\label{tab:pms2.survsig}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS2R}\n \\caption{Reliability of the PMS in example 2.}\n \\label{fig:pms2.R}\n\\end{figure}\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.85\\textwidth}{@{\\extracolsep{\\fill}}ccccccc@{\\extracolsep{\\fill}}}\n \\hline\n $t$ & $0$ & $10^{-}$ & $10^{+}$ & $100^{-}$ & $100^{+}$ & $200$ \\tabularnewline\n $R$ & 1 & 0.999768 & 0.999536 & 0.999086 & 0.999086 & 0.998910 \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Reliability of the PMS in example 2.}\n \\label{tab:pms2.R}\n\\end{table}\n\n\\subsection{Example 3}\n\nIn this final example, we replicate the space application mission discussed in \\citep{zang1999bdd, mural1999dependability}. This example includes the full complexity of real-world PMSs, where there is now heterogeneity of component types within phases. This means that multiple component types arise necessarily, and not merely as a side effect of identical components appearing in differing phases. There are five phases involved in this space mission: Launch is the first phase, followed by Hibern.1, Asteroid, Hibern.2, and finally Comet. The reliability block diagram is shown in \\cref{fig:pms3}. The five phases last for 48, 17520, 672, 26952 and 672 hours, respectively.
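Accumulating these durations, the phase transitions occur at cumulative mission times\n\\begin{equation*}\n t = 48, \\quad 48 + 17520 = 17568, \\quad 17568 + 672 = 18240, \\quad 18240 + 26952 = 45192 \\mbox{ hours},\n\\end{equation*}\nwith the mission concluding at $45192 + 672 = 45864$ hours; these are the times at which the reliability is reported in \\cref{tab:pms3.R}.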
The failure rates of the components in each phase are given in \\cref{tab:pms3.lambda}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=1\\textwidth]{PMS3}\n \\caption{Reliability block diagram of the space application.}\n \\label{fig:pms3}\n\\end{figure}\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular}{llllll}\n \\hline\n & Phase 1 & Phase 2 & Phase 3 & Phase 4 & Phase 5 \\tabularnewline\n \\hline\n $H_a$, $H_b$, $H_c$, $H_d$ & $10^{-5}$ & $10^{-6}$ & $10^{-5}$ & $10^{-6}$ & $10^{-5}$ \\tabularnewline\n $L_a$, $L_b$ & $5 \\times 10^{-5}$ & 0 & 0 & 0 & 0 \\tabularnewline\n $A_a$, $A_b$ & 0 & 0 & $10^{-5}$ & 0 & 0 \\tabularnewline\n $C_a$, $C_b$ & 0 & 0 & 0 & 0 & $10^{-4}$ \\tabularnewline\n \\hline\n \\end{tabular}\n \\caption{Failure rates of the components in each phase (per hour).}\n \\label{tab:pms3.lambda}\n\\end{table}\n\nAs shown in \\cref{tab:pms3.types}, in order to calculate the reliability of the PMS, the 4 `real' component types must be divided into 5 types when using the methodology presented in this paper. That is, although $H_a$, $H_b$, $H_c$ and $H_d$ have identical failure rates throughout all phases, they do not always appear together and so will exhibit different ageing; consequently, they are split into two `pseudo' types.\n\nThe results of the reliability analysis of this PMS are shown in \\cref{tab:pms3.R} and \\cref{fig:pms3.R}. The results found using the new methodology presented in this paper agree with those of the entirely independent method in \\citep{zang1999bdd}.\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\begin{tabular*}{0.85\\textwidth}{@{\\extracolsep{\\fill}}ccccc@{\\extracolsep{\\fill}}}\n \\hline\n Type 1 & Type 2 & Type 3 & Type 4 & Type 5 \\tabularnewline\n $H_a$, $H_b$ & $H_c$, $H_d$ & $L_a$, $L_b$ & $A_a$, $A_b$ & $C_a$, $C_b$ \\tabularnewline\n \\hline \n \\end{tabular*}\n \\caption{Types of components in example 3.}\n \\label{tab:pms3.types}\n\\end{table}\n\n\\begin{table}\n \\centering\\renewcommand{\\arraystretch}{1.25}\n \\resizebox{0.9\\textwidth}{!}{%\n \\begin{tabular}{ccccccccccc}\n \\hline\n $t$ & $0$ & $48^{-}$ & $48^{+}$ & $17568^{-}$ & $17568^{+}$ & $18240^{-}$ & $18240^{+}$ & $45192^{-}$ & $45192^{+}$ & $45864$ \\tabularnewline\n $R$ & 1 & 0.99999 & 0.99999 & 0.99968 & 0.99964 & 0.99862 & 0.99862 & 0.99670 & 0.99600 & 0.98943 \\tabularnewline\n \\hline \n \\end{tabular}}\n \\caption{Reliability of the PMS in example 3.}\n \\label{tab:pms3.R}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{PMS3R}\n \\caption{Reliability of the PMS in example 3, with inset graph showing enlarged detail of the first 200 hours of operation.}\n \\label{fig:pms3.R}\n\\end{figure}\n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nComputing the reliability of a PMS is considerably more complex than that of a non-PMS, due to the variation in system structure between phases and the dependencies between component failures in different phases. Consequently, reliability analysis of PMSs has become one of the most challenging topics in the field of system reliability evaluation and maintenance engineering in recent decades.
Despite some progress towards efficient and effective methods for evaluating the reliability of PMSs, it remains difficult to analyse large systems without considerable computational expense, and even where this is possible, many methods fail to convey intuition about the reliability of the system.\n\nIn this paper, a new and efficient method for the reliability analysis of PMSs is proposed using the survival signature. Signatures have proven to be an efficient tool for estimating the reliability of systems. A new kind of survival signature is derived to represent the structure function of a PMS, and is then applied to calculate the reliability of the PMS. Reliability analysis using signatures separates the system structure from the probabilistic failure distributions of the components. Consequently, the proposed approach is straightforward to implement in practice and has high computational efficiency.\n\n\nNote that reliability analysis of PMSs whose components have multiple failure modes is not studied in this paper, although in practice components may have more than one failure mode. In ongoing work, the authors are considering component importance analysis, extending work such as \\cite{feng2016imprecise, eryilmaz2018marginal} to PMSs.\n\\section*{Acknowledgements}\n\nThe authors gratefully acknowledge the support of the National Natural Science Foundation of China (51575094), the China Postdoctoral Science Foundation (2017M611244), the China Scholarship Council (201706085013) and the Fundamental Research Funds for the Central Universities (N160304004).\n\nThis work was performed whilst the first author was a visitor at Durham University.\n\n\\section*{References}\n\n\\bibliographystyle{elsarticle-num}\n