\\section{Conclusion}\nIn this paper, we propose a robust character-level consistency regularization method for STR.\nOur framework consists of a supervised branch trained with synthetic labeled data, and an unsupervised branch trained on two augmented views of real unlabeled images.\nAn asymmetric structure is designed with EMA, weight decay and domain adaption to encourage stable model training and overcome the domain gap between synthetic and real images.\nMoreover, a character-level consistency regularization unit is proposed to ensure better character alignment.\nWithout using any human-annotated data, our method improves existing STR models by a large margin, and achieves new SOTA performance on STR benchmarks.\n\n\\section*{Acknowledgements}\nThis work was supported by\nNational Key R\\&D Program of China (No.2020AAA0106900),\nthe National Natural Science Foundation of China (No.U19B2037, No.61876152),\nShaanxi Provincial Key R\\&D Program (No.2021KWZ-03),\nNatural Science Basic Research Program of Shaanxi (No.2021JCW-03)\nand Ningbo Natural Science Foundation (No.202003N4369).\n\n \n {\\small\n \\bibliographystyle{ieee_fullname}\n \n\\section{Experiment}\n\n\\subsection{Datasets}\n\nTwo types of data are used for training, \\ie, synthetic data with annotations and real data without labels.\n\nTwo widely used synthetic datasets are adopted, including \\textbf{SynthText (ST)}~\\cite{synthetic_dataset\/SynthText} and \\textbf{MJSynth (MJ)}~\\cite{synthetic_dataset\/MJSynth2}, which together provide $14.5$M samples, referred to as \\textbf{synthetic labeled data (SL)}.\n\nFor real unlabeled scene text data, we collected images from three publicly available datasets, Places2~\\cite{Places2}, 
OpenImages\\footnote{https:\/\/storage.googleapis.com\/openimages\/web\/index.html} and ImageNet ILSVRC 2012~\\cite{ImageNet}. CRAFT~\\cite{CRAFT} was employed to detect text in these images.\nWe then cropped text regions with detection scores larger than $0.7$.\nImages with low resolution (width times height less than $1000$) were also discarded.\nThis results in $10.5$M images, denoted as \\textbf{real unlabeled data (RU)}.\n\nIn addition, in the ablation study, to demonstrate the superiority of the proposed framework, we also conduct experiments using the real labeled data collected by~\\cite{semisupervised_STR\/fewlabels}. It contains $278$K images in total, referred to as \\textbf{real labeled data (RL)}.\n\nSix commonly used scene text recognition benchmarks are adopted to evaluate our method.\n\n\\textbf{ICDAR 2013 (IC13)} contains $1095$ cropped word images. Following~\\cite{semanticinformation\/YuLZLHLD20}, we remove images that contain non-alphanumeric characters, which results in $857$ test patches.\n\n\\textbf{IIIT5K-Words (IIIT)}~\\cite{IIIT5k} has $3000$ nearly horizontal word patches for test.\n\n\\textbf{Street View Text (SVT)}~\\cite{DBLP:conf\/iccv\/WangBB11} consists of $647$ word images collected from Google Street View for test.\n\n\\textbf{SVT-Perspective (SVTP)}~\\cite{SVTP} contains $645$ images for test, which are cropped from side-view snapshots in Google Street View.\n\n\\textbf{CUTE80 (CUTE)}~\\cite{CUTE} has $288$ curved text images.\n\n\\textbf{ICDAR 2015 (IC15)}~\\cite{IC15} contains $2077$ word images cropped from incidental scene images. After removing images with non-alphanumeric characters, $1811$ word patches remain for test.\n\n\\subsection{Evaluation Metric}\n\nFollowing common practice, we report word-level accuracy for each dataset. 
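The word-level accuracy above admits a compact reference implementation. Below is a minimal Python sketch; the case-insensitive, alphanumeric-only normalization is an assumption borrowed from common STR evaluation practice, not something specified here:

```python
import re

def normalize(word: str) -> str:
    # Assumed evaluation protocol: lowercase, keep alphanumeric characters only.
    return re.sub(r"[^0-9a-z]", "", word.lower())

def word_accuracy(predictions, ground_truths) -> float:
    # Fraction of test images whose predicted word matches its label exactly.
    assert len(predictions) == len(ground_truths) and predictions
    correct = sum(normalize(p) == normalize(g)
                  for p, g in zip(predictions, ground_truths))
    return correct / len(predictions)
```

A prediction counts as correct only if the whole word matches, so a single wrong character makes the sample incorrect.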
Moreover, to comprehensively evaluate models on both regular and irregular text, following~\\cite{semisupervised_STR\/fewlabels}, we introduce an average score (Avg), \\ie, the accuracy over the union of samples in all six datasets.\n\n\\subsection{Implementation Details}\n\nThe whole model is trained end-to-end without pre-training.\nWe use a batch size of $384$ for labeled data and $288$ for unlabeled data.\nBy default, we set the target decay rate $\\alpha=0.999$ and the\nconfidence threshold $\\beta_{U}=0.5$, respectively.\nThe supervised and unsupervised branches are jointly trained, while only the model in the supervised branch is used at inference time.\n\nFour STR models are adopted to validate the effectiveness of the proposed framework, with their default model configurations:\nCRNN~\\cite{DBLP:journals\/pami\/ShiBY17}, MORAN~\\cite{MORAN}, HGA~\\cite{2Dattention\/YangWLLZ20} and TRBA~\\cite{TRBA}. Note that CRNN uses CTC for character decoding, which is non-autoregressive. Hence, CCR is not adopted when training with CRNN.\n\nWe adopt Adadelta when training MORAN or HGA, following their original optimization methods. The learning rate is $1.0$ initially and decreases during the training process.\nThe AdamW~\\cite{AdamW} optimizer is adopted for CRNN and TRBA.\nFollowing~\\cite{semisupervised_STR\/fewlabels}, we use the one-cycle learning rate scheduler~\\cite{Super-convergence} with a maximum\nlearning rate of $0.001$. The weight decay rate follows that of the adopted STR model.\n\nThe unsupervised branch takes two augmented views of an image as input. 
Here we define two types of augmentations, \\ie, StrongAug and WeakAug.\nStrongAug is borrowed from RandAugment~\\cite{Randaugment}, which includes multiple augmentation strategies covering both geometric transformations and color jitter.\nConsidering that Cutout may crop characters out of the image and thus corrupt the semantic information of the text, we remove the \"Cutout\" operation from RandAugment.\nWeakAug only applies color jitter, including brightness, contrast, saturation and hue.\nIn our framework, we use WeakAug for the target model and StrongAug for the online models of both the supervised and unsupervised branches.\n\n\n\n\\begin{table*}[ht!]\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\begin{center}\n \\scalebox{0.8}{\n \\begin{tabular}{ccccccccccc}\n \\toprule\n \\multirow{2}{*}{} & Methods & \\multirowcell{2}{Labeled \\\\Dataset} & \\multirowcell{2}{Unlabeled\\\\Dataset} & \\multicolumn{3}{c}{Regular Text} & \\multicolumn{3}{c}{Irregular Text} & \\multirow{2}{*}{Avg} \\\\\n\n & & & & IC13 & SVT & IIIT & IC15 & SVTP & CUTE & \\\\\n \\midrule\n \\multirow{14}{*}{\\rotatebox{90}{SOTA Methods}}\n & Shi~\\etal~\\cite{DBLP:journals\/pami\/ShiBY17} (CRNN) & MJ & - & - & 80.8 & 78.2 & - & - & - & - \\\\\n & Luo~\\etal~\\cite{MORAN}(MORAN) & SL & - & - & 88.3 & 93.4 & 77.8 & 79.7 & 81.9 & - \\\\\n & Yang~\\etal~\\cite{2Dattention\/YangWLLZ20}(HGA) & SL & - & - & 88.9 & 94.7 & 79.5 & 80.9 & 85.4 & - \\\\\n & Baek~\\etal~\\cite{TRBA}(TRBA) & SL & - & - & 87.5 & 87.9 & - & 79.2 & 74.0 & - \\\\\n & Liao~\\etal~\\cite{MaskTextSpotter}(Mask TextSpotter) & SL & - & 95.3 & 91.8 & 93.9 & 77.3 & 82.2 & 87.8 & 88.3 \\\\\n & Wan~\\etal~\\cite{TextScanner}(TextScanner) & SL & - & 92.9 & 90.1 & 93.9 & 79.4 & 84.3 & 83.3 & 88.5 \\\\\n & Wang~\\etal~\\cite{attentiondrift\/WangZJLCWWC20}(DAN) & SL & - & 93.9 & 89.2 & 94.3 & 74.5 & 80.0 & 84.4 & 87.2 \\\\\n & Yue~\\etal~\\cite{attentiondrift\/YueKLSZ20}(RobustScanner) & SL & - & 94.8 & 88.1 & 95.3 & 77.1 & 79.5 & 
\\underline{90.3} & 88.4 \\\\\n & Qiao~\\etal~\\cite{semanticinformation\/QiaoZYZ020}(SRN) & SL & - & 95.5 & 91.5 & 94.8 & 82.7 & 85.1 & 87.8 & 90.4 \\\\\n & Zhang~\\etal~\\cite{SPIN}(SPIN) & SL & - & - & 90.9 & 95.2 & 82.8 & 84.3 & 83.2 & - \\\\\n & Mou~\\etal~\\cite{PlugNet}(PlugNet) & SL & - & - & 92.3 & 94.4 & - & 84.3 & 84.3 & - \\\\\n & Qiao~\\etal~\\cite{PIMNet}(PIMNet) & SL & - & 95.2 & 91.2 & 95.2 & 83.5 & 84.3 & 84.4 & 90.5 \\\\\n & Fang~\\etal~\\cite{semisupervised_STR\/ABINet}(ABINet) & SL & - & \\underline{97.4} & \\underline{93.5} & \\underline{96.2} & \\underline{86.0} & 89.3 & 89.2 & \\underline{92.7} \\\\\n \\cline{2-11}\n & Gao~\\etal~\\cite{DBLP:journals\/tip\/GaoCWL21} & $10\\%$ SL & $90\\%$ SL & - & 78.1 & 74.8 & - & - & - & - \\\\\n & Baek~\\etal~\\cite{semisupervised_STR\/fewlabels}(CRNN) & RL & Book32 et al. & - & 84.3 & 89.8 & - & 74.6 & 82.3 & - \\\\\n & Baek~\\etal~\\cite{semisupervised_STR\/fewlabels}(TRBA) & RL & Book32 et al. & - & 91.3 & 94.8 & - & 82.7 & 88.1 & - \\\\\n & Fang~\\etal~\\cite{semisupervised_STR\/ABINet}(ABINet) & SL & Uber-Text & 97.3 & 94.9 & \\textbf{96.8} & 87.4 & 90.1 & 93.4 & 93.5 \\\\\n\n \\midrule\n \\midrule\n \\multirow{9}{*}{\\rotatebox{90}{Ours}}\n & CRNN-pr & SL & - & 91.0 & 82.2 & 90.2 & 71.6 & 70.7 & 81.3 & 82.8 \\\\\n & CRNN-cr & SL & RU & 92.4 & 87.9 & 92.0 & 75.8 & 75.7 & 85.8 & 85.9 \\\\\n \\cline{2-11}\n & MORAN-pr & SL & - & 95.1 & 90.4 & 93.4 & 79.7 & 80.6 & 85.4 & 88.5 \\\\\n & MORAN-cr & SL & RU & 96.5 & 93.0 & 94.1 & 82.6 & 82.9 & 88.5 & 90.2 \\\\\n \\cline{2-11}\n & HGA-pr & SL & - & 95.0 & 89.5 & 93.6 & 79.8 & 81.1 & 87.8 & 88.7 \\\\\n & HGA-cr & SL & RU & 95.4 & 93.2 & 94.9 & 84.0 & 86.8 & 92.0 & 91.2 \\\\\n \\cline{2-11}\n & TRBA-pr & SL & - & 97.3 & 91.2 & 95.3 & 84.2 & 86.4 & 92.0 & 91.5 \\\\\n & TRBA-cr & $10\\%$ SL & $10\\%$ RU & 97.3 & 94.7 & 96.2 & 87.0 & 89.6 & \\textbf{94.4} & 93.2 \\\\\n & TRBA-cr & SL & RU & \\textbf{98.3} & \\textbf{96.3} & 96.5 & \\textbf{89.3} & \\textbf{93.3} & 
93.4 & \\textbf{94.5} \\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{Comparison with SOTA methods on STR test accuracy.\n In each column, the best result is shown in bold, and the best result in the supervised setting is\n underlined. \"-pr\" denotes our reproduced results and \"-cr\" denotes using our \\textbf{c}onsistency \\textbf{r}egularization method. Our method consistently improves STR models, and propels TRBA to new SOTA performance on the test benchmarks.}\n \\label{comparison with SOTA}\n \\end{center}\n \\vspace{-6mm}\n\\end{table*}\n\n\\subsection{Comparison with SOTA}\n\nWe perform experiments using different STR models.\nFor a fair comparison, we also reproduce these models in the supervised setting using the same data augmentation strategy as in our semi-supervised training.\nAs presented in Table~\\ref{comparison with SOTA}, our reproduced models achieve comparable or even higher accuracies than those reported in the original papers. These results provide a fair baseline to show the advantage of our method. 
Experiments with their original settings can be found in the Supplementary.\n\nBy training with the proposed framework using additional unlabeled real images, all models gain improvement.\nTo be specific, CRNN improves by $3.1\\%$ (from $82.8\\%$ to $85.9\\%$) on average, and MORAN increases from $88.5\\%$ to $90.2\\%$ (+$1.7\\%$).\nHGA has an accuracy increase of $2.5\\%$ (from $88.7\\%$ to $91.2\\%$) and TRBA of $3.0\\%$ (from $91.5\\%$ to $94.5\\%$).\nThe consistent enhancement over different STR models shows the effectiveness and universality of our proposed method.\nIn particular, the performance gain on irregular text (IC15, SVTP and CUTE) is more obvious, since irregular text has more variance in appearance, which is hard to generate with a synthesis engine.\n\nNote that although TRBA is worse than ABINet~\\cite{semisupervised_STR\/ABINet} in the supervised setting ($91.5\\%$ \\vs $92.7\\%$), our framework helps TRBA outperform ABINet with self-training in the semi-supervised setting ($94.5\\%$ \\vs $93.5\\%$), which again proves the superiority of our proposed CR method. Compared with other SOTA work, our proposed framework with TRBA achieves the highest accuracies on the vast majority of test datasets (all except IIIT), which demonstrates its robustness for both regular and irregular text recognition.\n\nIn addition, to accelerate the training process, we perform an experiment with TRBA using only $10\\%$ of the synthetic labeled data (denoted as ``$\\text{SL}_{sm}$'', which contains only $1.45$M images) and $10\\%$ of the real unlabeled data (denoted as ``$\\text{RU}_{sm}$'', which has $1.05$M images). Surprisingly, the experimental results are fairly good, with an average score of $93.2\\%$, even higher than that obtained by $\\text{TRBA}_{pr}$ ($91.5\\%$) and ABINet~\\cite{semisupervised_STR\/ABINet} ($92.7\\%$). It should be noted that $\\text{TRBA}_{pr}$ and ABINet are trained in a fully supervised manner using all synthetic data ($14.5$M). 
Their training data is $5.8$ times more than that used for $\\text{TRBA}_{sm}$. These excellent results suggest the necessity of using real images for training STR models and the advantage of our semi-supervised training framework.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{correction_cases_v2.pdf}\n \\vspace{-5mm}\n \\caption{Hard examples that can be successfully recognized by using our method. The first line shows the recognition results of $\\text{TRBA}_{pr}$, which include mistakes (red characters), while the second line shows the results of $\\text{TRBA}_{cr}$. Our method enables TRBA to address even tough samples that are dark, blurred, or severely distorted.\n }\n \\label{fig:vis}\n \\vspace{-3mm}\n\\end{figure}\n\nIn Figure~\\ref{fig:vis}, we present several examples that are correctly recognized by $\\text{TRBA}_{cr}$ but fail with $\\text{TRBA}_{pr}$.\nAlthough the employed real images are unlabeled, STR models still benefit from our method, particularly when recognizing text that is severely blurred, distorted, or rendered in artistic fonts.\n\n\\subsection{Ablation Study}\n\nIn order to analyze the proposed model,\nwe conduct a series of ablation experiments in this section. All ablation experiments are performed using TRBA because of its good performance. $\\text{SL}_{sm}$ and $\\text{RU}_{sm}$ are employed for fast training. 
More experiments with different data sizes can be found in the Supplementary.\n\\vspace{-4mm}\n\\subsubsection{Effect of domain gap on model stability}\n\\vspace{-1mm}\nIn this work, we propose a stable CR based SSL framework for STR.\nAs stated in Section~\\ref{sec:intro}, we conjecture that it is the domain inconsistency among the training data used in STR that causes the instability or even failure of previous CR methods.\n\nTo verify this conjecture, we perform experiments using domain-consistent training data (in-domain data).\nSpecifically, we split the real labeled training data RL into $\\text{RL}_{20p}$ and $\\text{RL}_{80p}$\nwith a ratio of 1:4. $\\text{RL}_{20p}$ is adopted with labels while $\\text{RL}_{80p}$ is employed without annotations.\nSOTA CR methods are tested, including FixMatch~\\cite{consitency_regularization\/FixMatch} and UDA~\\cite{consitency_regularization\/UDA}.\nAs presented in Table~\\ref{tab: in domain and cross domain}, when the training data is from the same domain, they work well.\nThe test accuracy increases by $3.6\\%$ using FixMatch and $2.6\\%$ using UDA.\nHowever, when the training data is from different domains, \\eg, $\\text{SL}_{sm}$ and $\\text{RU}_{sm}$, their training processes become unstable. We test the models before collapse. 
The recognition accuracies are even lower than that obtained by using $\\text{SL}_{sm}$ alone, with performance degradation of $11.0\\%$ (FixMatch) and $4.6\\%$ (UDA), respectively.\n\nBy contrast, our method is able to improve the recognition accuracy regardless of whether the training data comes from the same domain.\nCompared to the results of fully supervised training, our method steadily improves STR model accuracy by $4.5\\%$ ($84.8\\%$ to $89.3\\%$) using in-domain data and $3.3\\%$ ($89.9\\%$ to $93.2\\%$) in the cross-domain setting.\nThe performance gain in the in-domain setting is even larger than that brought by FixMatch and UDA.\n\n\n\\begin{table}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\begin{center}\n \\setlength{\\tabcolsep}{1.8mm}\n \\scalebox{0.75}{\n \\begin{tabular}{ccccccc}\n \\toprule\n & \\tabincell{c}{Labeled\/ \\\\ Unlabeled Data} & Methods & \\tabincell{c}{IC13 \\\\IC15} & \\tabincell{c}{SVT\\\\ SVTP} &\\tabincell{c}{IIIT\\\\CUTE}& Avg \\\\\n \\midrule\n \\multirow{4}{*}{\\rotatebox{90}{In-domain}} & \\tabincell{c}{$\\text{RL}_{\\text{20p}}$(55.7K)\/ \\\\- } &Sup & \\tabincell{c}{90.1 \\\\77.6 } & \\tabincell{c}{87.5 \\\\78.0 } & \\tabincell{c}{88.8 \\\\83.0 } &84.8 \\\\\n \\cline{2-7} & \\multirowcell{3}{$\\text{RL}_{\\text{20p}}$(55.7K)\/ \\\\$\\text{RL}_{\\text{80p}}$(223K) } & FixMatch & \\tabincell{c}{93.0 \\\\82.3 } & \\tabincell{c}{88.6 \\\\82.5 } & \\tabincell{c}{92.0 \\\\88.5 } &88.4 \\\\\n \\cline{3-7} & & UDA & \\tabincell{c}{92.5 \\\\80.7 } & \\tabincell{c}{88.6 \\\\80.9 } & \\tabincell{c}{91.4 \\\\88.5 } &87.4 \\\\\n \\cline{3-7} & & Ours & \\tabincell{c}{93.8 \\\\82.5 } & \\tabincell{c}{91.5 \\\\83.6 } & \\tabincell{c}{92.9 \\\\88.5 } &89.3 \\\\\n \\hline\\hline\n \\multirow{4}{*}{\\rotatebox{90}{Cross-domain}} & \\tabincell{c}{$\\text{SL}_{\\text{sm}}$(1.45M)\/ \\\\- } &Sup & \\tabincell{c}{96.0 \\\\82.4 } & \\tabincell{c}{90.0 \\\\82.6 } & \\tabincell{c}{94.4 \\\\88.9 } &89.9 \\\\\n \\cline{2-7} & 
\\multirowcell{3}{$\\text{SL}_{\\text{sm}}$(1.45M)\/ \\\\$\\text{RU}_{\\text{sm}}$(1.06M) } &FixMatch & \\tabincell{c}{90.0 \\\\72.6 } & \\tabincell{c}{86.2 \\\\77.2 } & \\tabincell{c}{79.2 \\\\69.1 } &78.9 \\\\\n \\cline{3-7} & & UDA & \\tabincell{c}{94.2 \\\\75.7 } & \\tabincell{c}{85.3 \\\\79.5 } & \\tabincell{c}{90.0 \\\\82.3 } &85.3 \\\\\n \\cline{3-7} & & Ours & \\tabincell{c}{97.3 \\\\87.0 } & \\tabincell{c}{94.7 \\\\89.6 } & \\tabincell{c}{96.2 \\\\94.4 } &\\textbf{93.2} \\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{Experiments with CR methods on in-domain and cross-domain data settings. Our method can consistently improve recognition accuracy. The results of FixMatch and UDA in cross-domain setting are obtained by the models before collapse. }\n \\label{tab: in domain and cross domain}\n \\end{center}\n \\vspace{-10mm}\n\\end{table}\n\n\\vspace{-3mm}\n\\subsubsection{Ablation on model units}\n\\vspace{-1mm}\n\nThe techniques used in our method\ninclude an additional projection layer for asymmetric structure, EMA, domain adaption and weight decay.\nHere we analyze the effect of each unit in detail. 
The experiments are performed with CCR added to benefit character-level consistency.\n\nAs presented in Table~\\ref{tab:ablation_for_PCT}, the use of an additional projection layer improves the final average score by $0.7\\%$.\nHowever, the performance is still lower than that obtained in the fully supervised setting ($87.7\\%$ \\vs $89.9\\%$).\nAs indicated in~\\cite{weight_decay}, without weight decay, the consistency between online and target outputs depends mainly on the projection layer, rendering the online model weights inferior.\nWeight decay helps dynamically balance the weights between the online model and the projection layer.\nUsing weight decay together with the projection layer increases the average score on the test data by another $3.5\\%$, surpassing the supervised results.\nThe EMA mechanism brings a further accuracy gain of $1.6\\%$, as it keeps the projection layer near-optimal and improves training stability.\nLastly, adding domain adaption improves the average test accuracy to $93.2\\%$.\n\n\\begin{table}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\begin{center}\n \\scalebox{0.8}{\n \\begin{tabular}{cccccccc}\n \\toprule\n Projection & WD & EMA & DA & \\tabincell{c}{IC13 \\\\IC15} & \\tabincell{c}{SVT\\\\ SVTP} &\\tabincell{c}{IIIT\\\\CUTE}& Avg \\\\\n \\midrule\n & & & & \\tabincell{c}{94.2 \\\\80.5 } & \\tabincell{c}{91.5\\\\84.0 } & \\tabincell{c}{88.7 \\\\84.0 } &87.0\\\\\n \\cline{5-8}\n \\CheckmarkBold & & & & \\tabincell{c}{94.5 \\\\81.6 } & \\tabincell{c}{90.1\\\\86.1 } & \\tabincell{c}{89.5 \\\\85.4 } &87.7\\\\\n \\cline{5-8}\n \\CheckmarkBold & \\CheckmarkBold & & & \\tabincell{c}{97.2 \\\\85.9 } & \\tabincell{c}{93.0\\\\87.0 } & \\tabincell{c}{93.5 \\\\91.3 } &91.2\\\\\n \\cline{5-8}\n \\CheckmarkBold & \\CheckmarkBold & \\CheckmarkBold & & \\tabincell{c}{96.7 \\\\86.7 } & \\tabincell{c}{94.6\\\\89.3 } & \\tabincell{c}{95.9 \\\\92.7 } &92.8\\\\\n \\cline{5-8}\n \\CheckmarkBold & \\CheckmarkBold & 
\\CheckmarkBold & \\CheckmarkBold & \\tabincell{c}{97.3 \\\\87.0 } & \\tabincell{c}{94.7\\\\89.6 } & \\tabincell{c}{96.2 \\\\94.4 } &\\textbf{93.2}\\\\\n \\bottomrule\n \\end{tabular}\n }\n \\caption{Ablation on model units.\n ``Projection'' means using additional projection layer before classifier. \"WD\" means weight decay,\n \"EMA\" means using EMA for target model. \"DA\" means domain adaption.}\n \\label{tab:ablation_for_PCT}\n \\end{center}\n \\vspace{-5mm}\n\\end{table}\n\n\\begin{table}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\begin{center}\n \\setlength{\\tabcolsep}{3mm}\n \\scalebox{0.8}\n {\n \\begin{tabular}{ccccc}\n \\toprule\n Method & \\tabincell{c}{IC13 \\\\IC15} & \\tabincell{c}{SVT\\\\ SVTP} &\\tabincell{c}{IIIT\\\\CUTE}& Avg \\\\\n \\midrule\n SCR & \\tabincell{c}{96.6 \\\\84.9 } & \\tabincell{c}{93.0 \\\\85.9 } & \\tabincell{c}{96.4 \\\\93.1 } &92.2\\\\\n \\hline\n CCR & \\tabincell{c}{97.3 \\\\87.0 } & \\tabincell{c}{94.7 \\\\89.6 } & \\tabincell{c}{96.2 \\\\94.4 } &\\textbf{93.2}\\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{Effect of our proposed CCR. 
Compared to standard consistency regularization, training with CCR yields a $1\\%$ average score increase for TRBA.}\n \\label{tab:comparison CCR with SCR}\n \\end{center}\n \\vspace{-5mm}\n\\end{table}\n\n\\begin{table}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\begin{center}\n \\setlength{\\tabcolsep}{3mm}\n \\scalebox{0.8}{\n \\begin{tabular}{cccccccc}\n \\toprule\n Consistency Loss & \\tabincell{c}{IC13 \\\\IC15} & \\tabincell{c}{SVT\\\\ SVTP} &\\tabincell{c}{IIIT\\\\CUTE}& Avg \\\\\n \\midrule\n MSE & \\tabincell{c}{96.3 \\\\84.0 } & \\tabincell{c}{92.0 \\\\86.8 } & \\tabincell{c}{94.2 \\\\92.0 } &91.0\\\\\n \\hline\n CE & \\tabincell{c}{97.4 \\\\86.9 } & \\tabincell{c}{94.3 \\\\89.8 } & \\tabincell{c}{96.3 \\\\92.7 } &\\textbf{93.2}\\\\\n \\hline\n KL-divergence & \\tabincell{c}{97.3 \\\\87.0 } & \\tabincell{c}{94.7 \\\\89.6 } & \\tabincell{c}{96.2 \\\\94.4 } &\\textbf{93.2}\\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{Ablation on different distance functions used in the consistency loss. 
CE and KL-divergence lead to similar performance, better than MSE.}\n \\label{tab:consistency_loss}\n \\end{center}\n \\vspace{-5mm}\n\\end{table}\n\n\\vspace{-5mm}\n\\subsubsection{Effect of CCR}\n\\vspace{-2mm}\nAnother contribution of this work is a character-level consistency regularization (CCR) unit to handle the sequential nature of the STR task.\nInstead of letting the online model and target model run separately in the unsupervised branch (standard consistency regularization, SCR), and only restricting their final outputs by the consistency loss,\nwe propose CCR to enforce the same context information for both the online and target models.\nExperimental results in Table~\\ref{tab:comparison CCR with SCR} prove the effectiveness of CCR.\nIt improves TRBA by another $1\\%$ on the final test accuracy.\n\n\\subsubsection{Ablation on distance measure functions}\nBy default, we use KL-divergence to measure the consistency in loss function (\\ref{eq:cons}).\nHere we test other distance measure functions, such as CE and MSE. As presented in Table~\\ref{tab:consistency_loss}, empirically, CE leads to recognition performance similar to KL-divergence, while MSE results in lower accuracies ($93.2\\%$ \\vs $91.0\\%$).\n\n\\subsection{Comparison with Other Semi-supervised Methods}\n\nWe compare our method with other SSL approaches that have been successfully used in STR, including Pseudo Label (PL)~\\cite{selftraining\/pseudo-label} and Noisy Student (NS)~\\cite{selftraining\/noisystudent}. TRBA is used as the basic model.\nPL based SSL follows the practice in~\\cite{semisupervised_STR\/fewlabels},\nwhile NS based SSL follows~\\cite{selftraining\/noisystudent}, with the threshold $\\beta_U=0.5$ and $3$ iterations of re-training.\n\nThe results are shown in Table~\\ref{tab:semi supervised comparison}. Our CR based method outperforms all the others, with the resulting average score $2.3\\%$ higher than PL and $0.8\\%$ higher than NS. 
Note that compared to NS, our training process is more efficient, without time-consuming iterations.\n\n\\begin{table}\n \\newcommand{\\tabincell}[2]{\\begin{tabular}{@{}#1@{}}#2\\end{tabular}}\n \\begin{center}\n \\setlength{\\tabcolsep}{3mm}\n \\scalebox{0.8}{\n \\begin{tabular}{ccccccc}\n \\toprule\n Method & \\tabincell{c}{IC13 \\\\IC15} & \\tabincell{c}{SVT\\\\ SVTP} &\\tabincell{c}{IIIT\\\\CUTE}& Avg \\\\\n \\midrule\n Pseudo Label (PL) & \\tabincell{c}{95.9 \\\\82.9 } & \\tabincell{c}{91.2 \\\\85.7 } & \\tabincell{c}{95.4 \\\\90.6 } &90.9\\\\\\hline\n Noisy Student (NS) & \\tabincell{c}{96.3 \\\\85.5 } & \\tabincell{c}{94.4 \\\\86.7 } & \\tabincell{c}{96.1 \\\\94.1 } &92.4\\\\\\hline\n Ours & \\tabincell{c}{97.3 \\\\87.0 } & \\tabincell{c}{94.7 \\\\89.6 } & \\tabincell{c}{96.2\\\\94.4 } &\\textbf{93.2}\\\\\n \\bottomrule\n \\end{tabular}\n }\n \n \\caption{Comparison with other semi-supervised methods. Our method brings more benefit to STR model and outperforms the other approaches.}\n \\label{tab:semi supervised comparison}\n \\end{center}\n \\vspace{-8mm}\n\\end{table}\n\n\n\\section{Introduction}\n\\label{sec:intro}\n\n\\begin{figure}[t]\n \\centering\n \\begin{subfigure}{0.49\\linewidth}\n \\includegraphics[width=\\textwidth]{out_domain.png}\n \\vspace{-6mm}\n \\caption{cross-domain.}\n \\label{fig:out domain}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}{0.49\\linewidth}\n \\includegraphics[width=\\textwidth]{in_domain.png}\n \\vspace{-6mm}\n \\caption{in-domain.}\n \\label{fig:in domain}\n \\end{subfigure}\n \\caption{Scene text recognition test accuracy by using supervised training, existing consistency regularization SSL (UDA~\\cite{consitency_regularization\/UDA} and FixMatch~\\cite{consitency_regularization\/FixMatch}) and our method. Cross-domain means the labeled and unlabeled training data are from different domains (\\eg. 
synthetic labeled \\vs real unlabeled in our setting), while in-domain means they are from similar conditions.\n UDA and FixMatch are feasible in the in-domain condition but fail in the cross-domain setting.\n It is observed that the test accuracy drops drastically during the training process, and the highest accuracy is even lower than that obtained by supervised training.\n By contrast, our method is able to stabilize the training process and improve test performance in both in-domain and cross-domain conditions. }\n \\label{fig:example of in-domain and cross-domain}\n \\vspace{-5mm}\n\\end{figure}\n\nScene text recognition (STR) aims to recognize text in natural scenes and is widely used in many applications\nsuch as image retrieval, robot navigation and instant translation.\nCompared to traditional OCR, STR is more challenging because of multiple variations from the environment, various font styles and complicated layouts.\n\n\nAlthough STR has achieved great success, it has mainly been studied in a fully supervised manner. Real labeled datasets in STR are usually small because the annotation work is expensive and time-consuming. Hence, two large synthetic\ndatasets, MJSynth~\\cite{synthetic_dataset\/MJSynth1,synthetic_dataset\/MJSynth2} and SynthText~\\cite{synthetic_dataset\/SynthText}, are commonly used to train STR models and produce competitive results.\nHowever, there exists a domain gap between synthetic and real data, which restricts the effect of synthetic data.\nBriefly speaking, synthetic datasets can improve STR performance, but STR models are still hungry for real data.\n\n\nConsidering that it is easy to obtain unlabeled data in the real world,\nmany researchers intend to leverage unlabeled data and train models in a Semi-Supervised Learning (SSL) manner. 
Baek~\\etal~\\cite{semisupervised_STR\/fewlabels} and Fang~\\etal~\\cite{semisupervised_STR\/ABINet}\nintroduced self-training methods to train STR models and achieved improved performance.\nNevertheless, self-training requires a pre-trained model to predict pseudo-labels for unlabeled data and then re-trains the model, which affects the training efficiency.\nBy contrast, Consistency Regularization (CR), another important component of state-of-the-art (SOTA) SSL algorithms, has not been well exploited in STR.\n\nIn this paper, we explore a CR-based SSL approach to improve STR models, where only synthetic data and unlabeled real data are used for training, exempting human annotation cost thoroughly. CR assumes that the model should output similar predictions when fed perturbed versions of the same image~\\cite{consistencyRegul}.\nIt tends to outperform self-training on several SSL benchmarks~\\cite{consistencyRegul2,consistencyRegul3}.\nNevertheless, it is non-trivial to apply existing CR methods to STR directly.\nWe experimented with two representative CR approaches, UDA~\\cite{consitency_regularization\/UDA} and FixMatch~\\cite{consitency_regularization\/FixMatch}. Neither of them is feasible in our setting. 
As shown in Figure~\\ref{fig:out domain}, the models are quite unstable during the training process.\nCompared with experiments on image classification, where they show clear superiority, we assume the reasons lie in the following two aspects.\n\n\n1) Our labeled images are synthetic while unlabeled images are from real scenarios.\nThe domain gap between synthetic and real images affects the training stability.\nActually, it is found that the collapsed models recognize synthetic inputs with a reasonable accuracy, but generate nearly identical outputs for all real inputs.\nWe conjecture that they incorrectly utilize the domain gap to minimize the overall loss: they learn to distinguish between synthetic and real data, and learn reasonable representations for synthetic data to minimize the supervised loss, but simply project real data to identical outputs such that the consistency loss is zero.\nTo validate this conjecture, we perform another experiment using training images all drawn from real scenes.\nAs shown in Figure~\\ref{fig:in domain}, the training processes of UDA and FixMatch become stable in such a setting.\nHowever, we aim to relieve the human labeling cost, so the introduced domain gap remains an issue.\n\n2) Different from image classification, STR is a sequence prediction task.\nThe alignment between character sequences brings another difficulty to consistency training.\n\n\\begin{figure*}[htb]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{overall_framework.pdf}\n \\vspace{-2mm}\n \\caption{Overall framework of our proposed consistency regularization method for STR. Our model takes advantage of labeled synthetic data and unlabeled real data, exempting human annotation cost thoroughly. An asymmetric structure is designed with EMA and domain adaption to encourage stable model training. 
}\n \\setlength{\\abovecaptionskip}{-0.15cm}\n \\label{fig:overall_framework}\n \\vspace{-2mm}\n\\end{figure*}\n\nTo address the aforementioned problems, we propose a robust character-level consistency regularization based framework for STR.\nFirstly, inspired by BYOL~\\cite{self_supervised\/BYOL}, which prevents model collapse\nwithout using negative samples in contrastive learning, we propose an asymmetric consistency training structure for STR.\nSecondly, a character-level CR unit is proposed to ensure character-level consistency during the training process.\nThirdly, some techniques are subtly adopted in the training process, such as weight decay and domain adaption, which further improve the STR model.\n\nThe main contributions are summarized as follows:\n\n1) We propose a robust consistency regularization based semi-supervised framework for STR.\nIt is capable of tackling the cross-domain setting, thus more easily benefitting from labeled synthetic data and unlabeled real data.\nCompared with self-training approaches, our method is more efficient, without iterative predicting and re-training.\n\n2) Considering the sequential property of text,\nwe propose a character-level consistency regularization (CCR) unit to ensure better sequence alignment between the outputs of two siamese models.\n\n\n3) Extensive experiments are performed to analyze the effectiveness of the proposed framework.\nIt boosts the performance of a variety of existing STR models.\nDespite being free of human annotation, our method achieves new SOTA performance on several standard text recognition benchmarks for both regular and irregular text.\n\n\\section{Proposed Method}\n\\subsection{Overview}\n\nAs shown in Figure~\\ref{fig:overall_framework}, our framework consists of an STR model for text recognition and\na CR architecture to integrate information from both labeled and unlabeled data.\nWe adopt the attention-based encoder-decoder STR model here for illustration. 
However, our framework is not restricted to autoregressive STR models.\nThe encoder extracts discriminative features from input images, while the decoder generates character-level features. The classifier maps features into probabilities over character space via a linear transformation and Softmax.\n\nWe define two modes for STR model, named training mode and inference mode, according to whether the ``ground-truth'' character sequence is provided.\nIn training mode, ``ground-truth'' characters are sent to the decoder for next character prediction.\nBy contrast, in inference mode, the output of previous step is fed into decoder to infer next character. Both modes receive a special ``BOS'' token at the first step which means the start of decoding.\nTraining mode ends when all ground-truth characters are input, while inference mode ends when generating an ``EOS'' token.\n\nThe CR architecture is inspired by UDA~\\cite{consitency_regularization\/UDA}, which consists of two branches, namely supervised and unsupervised branch, as demonstrated in Figure~\\ref{fig:overall_framework}.\nThe supervised branch is trained on labeled data, while the unsupervised branch takes two augmented views of an unlabeled image as input, and requests the outputs to be similar to each other.\nMotivated by BYOL~\\cite{self_supervised\/BYOL}, we employ STR models with the same architecture but different parameters in unsupervised branch for the two views of inputs, denoted as online model and target model separately.\nThe online model shares parameters with the one used in supervised branch.\nTo overcome the instability during model training and improve STR performance,\nan additional projection layer is introduced before classifier in online model of the unsupervised branch.\n\n\\subsection{Supervised Branch}\n\nSupervised branch adopts the online STR model and runs in training mode, using the labeled synthetic data.\nSpecially, denote the weight of online STR model as $\\theta_{o}$, which is 
comprised of parameters from three modules, \\ie, encoder, decoder and classifier, referring to Figure~\\ref{fig:overall_framework}.\nGiven the input image $\\textbf{X}^L$ and the ground-truth character sequence $\\textbf{Y}^{gt} = \\{ y_1^{gt}, y_2^{gt}, \\dots, y_T^{gt} \\} $, the supervised branch outputs a sequence of vectors $\\textbf{P}^{L} = \\{ \\textbf{p}_1^L, \\textbf{p}_2^L, \\dots, \\textbf{p}_T^L \\} $.\nCross-entropy loss is employed to train the model, \\ie,\n\\vspace{-3mm}\n\\begin{equation}\n \\mathcal{L}_{reg} = -\\frac{1}{T} \\sum_{t=1}^{T} \\log p_t^L(y_{t}^{gt}\\mid \\textbf{X}^L)\n \\vspace{-3mm}\n \\label{eq:recognition loss}\n\\end{equation}\nwhere $p_t^L(y_{t}^{gt})$ represents the predicted probability of the output being $y_{t}^{gt}$ at time step $t$, and $T$ is the sequence length.\n\n\\subsection{Unsupervised Branch}\nDifferent from~\\cite{consitency_regularization\/UDA} and inspired by~\\cite{self_supervised\/BYOL}, the unsupervised branch in our framework relies on two models, referred to as the online STR model (with model parameters $\\theta_{o}$) and the target STR model (with model parameters $\\theta_{t}$) respectively. The two models interact and learn from each other.\n\nGiven the input image without label $\\textbf{X}^U$, two different augmentation approaches are adopted which produce two augmented views of the image, denoted as $\\textbf{X}^{U_w}$ and $\\textbf{X}^{U_s}$ respectively.\nThe online STR model takes $\\textbf{X}^{U_s}$ as input and runs in training mode.\nMotivated by the collapse-preventing solution in~\\cite{self_supervised\/BYOL},\nan additional projection layer is introduced between the decoder and classifier, as shown in Figure~\\ref{fig:overall_framework}, whose parameters are denoted as $\\theta_{p}$. 
It is a two-layer perceptron with ReLU activation.\nThe added projection layer makes the architecture asymmetric between the online and target model, which contributes to a stable training process.\nThe classifier then transforms the output vector into probabilities over character space, denoted as $\\textbf{P}^{U_s} = \\{\\textbf{p}_1^{U_s}, \\textbf{p}_2^{U_s}, \\dots, \\textbf{p}_T^{U_s} \\} $.\n\nThe target STR model takes $\\textbf{X}^{U_w}$ as input and runs in inference mode, which generates a sequence of probabilities $\\textbf{P}^{U_w} = \\{\\textbf{p}_1^{U_w}, \\textbf{p}_2^{U_w}, \\dots, \\textbf{p}_T^{U_w} \\} $. The output sequence is used as the reference target to train the online model.\nA stop-gradient operation is applied to the target model, and its parameters $\\theta_{t}$ are an exponential moving average (EMA) of the online model parameters $\\theta_{o}$, \\ie,\n\\vspace{-1mm}\n\\begin{equation}\n \\theta_t = \\alpha\\theta_t + (1-\\alpha)\\theta_o\n \\vspace{-1mm}\n \\label{eq:EMA update}\n\\end{equation}\nwhere $\\alpha \\in [0,1]$ is the target decay rate.\nEMA makes the target model produce relatively stable targets for the online model, which helps keep the projection layer near optimal and also benefits model training.\n\nAs indicated in~\\cite{selftraining\/entropyminimization,consitency_regularization\/UDA},\nregularizing predictions to have low entropy is beneficial to SSL. 
We sharpen the output of the target STR model $\\textbf{P}^{U_w}$ by using a low Softmax temperature $\\tau$.\nDenote the output vector at step $t$ before Softmax as $\\textbf{z}_t^{U_w} = \\{z_1^{U_w}, z_2^{U_w},\\dots, z_C^{U_w} \\}$, where $C$ is the number of character classes; then\n\\vspace{-2mm}\n\\begin{equation}\n p_t^{U_w}(y_t)=\\frac {\\exp(z^{U_w}_{y_t}\/\\tau)} {{\\sum_{y^\\prime_t} \\exp(z^{U_w}_{y^\\prime_t}\/\\tau)}}\n \\vspace{-2mm}\n \\label{eq:caculate confident score}\n\\end{equation}\nWe set $\\tau=0.4$ following~\\cite{consitency_regularization\/UDA}.\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{CCR_v31.pdf}\n \\vspace{-6mm}\n \\caption{Character-level consistency regularization (CCR).\n At each time step $t$, the target and online decoders share the same input character, which is produced by the target decoder at the previous time step, so as to maintain character alignment.\n The consistency loss is computed between the outputs at each time step.}\n \\setlength{\\abovecaptionskip}{-0.15cm}\n \\label{fig:character-level consistency regularization}\n \\vspace{-1mm}\n\\end{figure}\n\nConsistency training regularizes the outputs $\\textbf{P}^{U_w}$ and $\\textbf{P}^{U_s}$ to be invariant to the augmentations. However, given that STR is a sequence recognition task, a character-level consistency regularization (CCR) unit is proposed for the autoregressive decoder, so as to maintain good sequence alignment.\nAs shown in Figure~\\ref{fig:character-level consistency regularization},\nat decoding time step $t$, a pseudo label is generated from the target model by taking the class that has the highest probability in $\\textbf{p}_t^{U_w}$. 
The pseudo label is then used as the input for both the online and target decoders at the next time step.\nThis design enforces that the online and target decoders share the same context information, which benefits character-level alignment and thus ensures stable consistency training.\n\nTo alleviate the influence of noisy samples in the training process, we filter them out based on their recognition confidence scores.\nThe confidence score is the cumulative product of the maximum output probability from the target model at each decoding step, \\ie,\n\\vspace{-3mm}\n\\begin{equation}\n \\mathcal{\\textbf{s}}^{U_w} = \\prod_{t=1}^{T}p_t^{U_w}(y_t \\mid \\textbf{X}^{U_w})\n \\vspace{-3mm}\n \\label{eq:confscore}\n\\end{equation}\nThe consistency loss used in the unsupervised branch is then defined as:\n\\vspace{-3mm}\n\\begin{equation}\n \\mathcal{L}_{cons} = \\mathbb I (\\mathcal{\\textbf{s}}^{U_w}>\\beta_{U}) \\frac{1}{T} \\sum_{t=1}^{T} Dist(\\textbf{p}_t^{U_w}, \\textbf{p}_t^{U_s})\n \\vspace{-3mm}\n \\label{eq:cons}\n\\end{equation}\nwhere $ \\mathbb I (\\mathcal{\\textbf{s}}^{U_w}>\\beta_{U})$ is an indicator, $\\beta_{U}$ is a threshold for filtering out noisy samples, and\n$Dist(\\cdot)$ is a function to measure the character-level distance between $\\textbf{P}^{U_w}$ and $\\textbf{P}^{U_s}$.\nThere are several choices for $Dist$, such as Cross Entropy (CE), KL-divergence or Mean Squared Error (MSE).\nKL-divergence is adopted in our framework by default.\n\n\\subsection{Additional Training Techniques}\n\\textbf{Weight Decay.}\nWeight decay is an important component in contrastive learning~\\cite{simclr,self_supervised\/BYOL} and SSL~\\cite{consitency_regularization\/FixMatch}.\nIt is claimed in~\\cite{weight_decay} that weight decay in BYOL helps dynamically balance the weights between the predictor and the online model, and improves the representation ability of the online model. 
Here we also adopt it in our model training to improve the feature learning capability of the online model.\n\n\\textbf{Domain Adaption.}\nTo mitigate the domain shift in training data, a character-level domain adaptation unit is employed between the supervised and unsupervised branches, referring to~\\cite{semisupervised_STR\/scene_domain}.\nSpecifically, at each decoding step, the decoder of the online model extracts visual features for the character to be decoded,\ndenoted as $\\textbf{H}^L = \\{\\textbf{h}^L_1, \\textbf{h}^L_2, \\cdots, \\textbf{h}^L_T\\}$\nand $\\textbf{H}^{U_s} = \\{\\textbf{h}^{U_s}_1, \\textbf{h}^{U_s}_2, \\cdots, \\textbf{h}^{U_s}_T\\}$\nfor the features extracted in the supervised and unsupervised branches respectively.\nThe domain adaption loss is defined as\n\n\\vspace{-2mm}\n\\begin{equation}\n \\mathcal{L}_{da} = \\frac{1}{4d^2}\\Vert cov(\\textbf{H}^L)-cov(\\textbf{H}^{U_s})\\Vert^2_F\n \\vspace{-2mm}\n \\label{eq: domain adaption loss}\n\\end{equation}\nwhere\n$\\Vert\\cdot\\Vert^2_F$ denotes the squared matrix Frobenius norm, $cov(\\textbf{H})$ is the covariance matrix, and\n$d$ is the feature dimension.\n\n\\subsection{Overall Objective Function}\n\nWe sum the three loss functions defined above. The overall objective function for training our proposed model is:\n\\begin{equation}\n \\mathcal{L}_{overall} = \\mathcal{L}_{reg} + \\lambda_{cons}\\mathcal{L}_{cons}+\\lambda_{da}\\mathcal{L}_{da}\n \\label{eq:overall loss}\n\\end{equation}\nwhere $\\lambda_{cons}$ and $\\lambda_{da}$ are hyper-parameters to balance the three terms.\nWe set $\\lambda_{cons}=1$ and $\\lambda_{da}=0.01$ empirically.\n\n\\section{Related Work}\n\\label{sec:related work}\n\n\\subsection{Scene Text Recognition}\nResearchers usually treat text recognition as a sequence prediction task and employ RNNs to model the sequences for recognition without character separation. 
Connectionist temporal classification (CTC) model~\\cite{DBLP:conf\/nips\/wang18,DBLP:journals\/pami\/ShiBY17} and attention-based encoder-decoder model~\\cite{DBLP:conf\/cvpr\/LeeO16,Rectification\/ShiWLYB16} are two commonly used frameworks for STR.\nThe success of regular\ntext recognition leads researchers to turn their attention to irregular text recognition.\n~\\cite{Rectification\/LiuCW18,Rectification\/ShiWLYB16,Rectification\/ShiYWLYB19,Rectification\/YangGLHBBYB19,Rectification\/ZhanL19,Rectification\/LuoJS19}\nrectified irregular text into regular ones to alleviate the difficulty in recognition.\n~\\cite{2Dattention\/00130SZ19} and ~\\cite{2Dattention\/YangWLLZ20}\nemployed 2D attention to handle the complicated layout of irregular text.\n~\\cite{attentiondrift\/ChengBXZPZ17,attentiondrift\/WangZJLCWWC20,attentiondrift\/YueKLSZ20}\nattempted to improve recognition accuracy by mitigating the alignment drift in attention.\n~\\cite{semanticinformation\/FangXZSTZ18,semanticinformation\/QiaoZYZ020,semanticinformation\/YuLZLHLD20}\ntried to integrate semantic information from language model to enhance word recognition.\nAll those methods need to be trained in a fully supervised manner.\n\\subsection{Semi-Supervised Learning}\nSemi-Supervised Learning (SSL) aims to use labeled data and additional unlabeled data to boost model performance. There are mainly two types of SSL methods that relate to our work, self-training~\\cite{selftraining\/entropyminimization,selftraining\/noisystudent,selftraining\/pseudo-label,selftraining\/S4L}\nand consistency regularization (CR)~\\cite{consitency_regularization\/temporal_ensembling,consitency_regularization\/meanteacher,consitency_regularization\/VAT,consitency_regularization\/FixMatch,consitency_regularization\/UDA}.\nSelf-training is simple and effective. 
It first employs labeled data to train a teacher model, then predicts pseudo labels for unlabeled data, and finally trains a student model using both labeled and pseudo-labeled data. Pseudo Label~\\cite{selftraining\/pseudo-label} and Noisy Student~\\cite{selftraining\/noisystudent} are two popular variants.\nCR is based on the manifold assumption that model outputs should be consistent when fed different augmentation views of the same image. For example, Temporal Ensembling~\\cite{consitency_regularization\/temporal_ensembling} encourages a consensus prediction of\nthe unknown labels using the outputs of the network-in-training on different epochs.\nMean Teacher~\\cite{consitency_regularization\/meanteacher} requires the outputs from teacher model and student model to be consistent, and updates teacher model by averaging student model weights.\nFixMatch~\\cite{consitency_regularization\/FixMatch} combines CR and pseudo-labeling for better performance.\nUDA~\\cite{consitency_regularization\/UDA} argues the importance of noise injection in consistency training, and achieves SOTA performance on a wide variety of language and vision SSL tasks.\n\n\\subsection{Semi-Supervised Text Recognition}\nSome work has been proposed to train STR model with SSL.\nFor instance, Gao~\\etal~\\cite{semisupervised&STR\/GaoCWL21} adopted reinforcement learning techniques to exploit unlabeled data for STR performance improvement. 
However, both labeled and unlabeled data are drawn from synthetic data, so there is no domain gap issue.\n\\cite{semisupervised_STR\/scene_domain} and \\cite{semisupervised_STR\/hand_domain} utilized domain adaption techniques to mitigate the domain shift between source and target data, so as to improve recognition results on the target domain.\nBaek~\\etal~\\cite{semisupervised_STR\/fewlabels} attempted to train STR models by using real data only, and tried both Pseudo Label and Mean Teacher to enhance STR performance.\nFang~\\etal~\\cite{semisupervised_STR\/ABINet} proposed an autonomous, bidirectional and iterative language modeling for STR. A self-training strategy was applied with the ensemble of iterative prediction to\nincrease STR performance further.\n\n\\section{Leveraging SCM for an improved defense} \n\\subsection{Ablation Studies}\n\n\\label{sec:ablations}\n\n\n\\noindent{\\textbf{Choice of $\\tau$, $m$ in HTC:}} HTC (Sec 3.3) depends on the softening temperature ($\\tau$), which adjusts the optimal representation, and the multiplier $m$, which adjusts the optimal weight to transfer this knowledge. Thus, we now study each of these parameters separately. We also note that $\\alpha$ primarily controls the ground-truth signal; hence, we omit varying $\\alpha$ here and set it to 0.9. We vary $\\tau$ over 10, 20, 50 and 100 with $m=50$ on ShuffleNetV2 and present results in Fig. \\ref{fig:ablations-temp} (a),(b) to demonstrate the effect of setting the temperature to an optimal higher value. We then vary $m$ while keeping $\\tau$ at 50 to motivate its careful selection in Fig. \\ref{fig:ablations-temp}(d).
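The role of the softening temperature ablated above can be sketched in a few lines of plain Python; the logits below are illustrative, not taken from any trained model:

```python
import math

def softmax(logits, tau=1.0):
    """Temperature-softened softmax; a higher tau flattens the distribution."""
    m = max(z / tau for z in logits)          # subtract max for numerical stability
    exps = [math.exp(z / tau - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative "nasty" logits: a misleading peak competing with the target class.
logits = [6.0, 5.5, -2.0, -3.0, -4.0]

low = softmax(logits, tau=4.0)    # low temperature, as in plain KD
high = softmax(logits, tau=50.0)  # high temperature, as ablated for HTC

# High temperature shrinks the gap between the two competing peaks,
# which is the effect the temperature ablation above measures.
assert max(high) - min(high) < max(low) - min(low)
```

At a sufficiently high temperature the output approaches uniform, so the misleading incorrect peaks carry much less weight relative to one another.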
\\begin{wraptable}{r}{0.6\\linewidth}\n \n \\centering\n \\begin{tabular}{cc|cc}\n \\hline\n & CIFAR10 & CIFAR100 & CIFAR100 \\\\\n & ResNet18 & ResNet18 & ResNet50 \\\\\n \\hline\n Nasty Teacher & 0.4446 & 14.6363 & 10.4892 \\\\\n SCM Ensemble & \\textbf{0.2471} & \\textbf{5.8398} & \\textbf{5.3165} \\\\\n \\hline\n \\end{tabular}\n \\caption{KL-Divergence scores of a given model (col 1) with the original teacher.}\n \n \\label{tab:ens-kl-div}\n\\end{wraptable} \n\n\\noindent\\textbf{Choice of Architecture and $k$ in SCM:} Given a Nasty Teacher, we currently use the same architecture type to generate ${S}^{1}, {S}^{2}, \\dots, {S}^{k}$. However, we now explore if SCM generalizes to different architecture choices in Table \\ref{tab:hyp-ens}(a). Rather than obtaining sequence items as ResNet18 $\\rightarrow$ ResNet18, i.e., ${S^{i}}_{ResNet18}=G({S^{i-1}}_{ResNet18})$, we instead use ${S^{i}}_{ShuffleNetV2}=G({S^{i-1}}_{ResNet18})$, and finally ensemble the generated ShuffleNetV2 ($S^{2}$) and the original ResNet18 ($S^{1}$) to train the students (or stealers). Table \\ref{tab:hyp-ens} shows that even with a different architecture, SCM can extract information from the Nasty Teacher. Moving ahead, we now ablate on the \\textit{sequence length $k$}. From Table \\ref{tab:hyp-ens}(b), we observe improvement in performance as we increase the number of sequence items. This can be intuitively linked to obtaining a better estimate of the essential class relationships with more models, hence improved distillation. \\\\\n\n\\noindent\\textbf{Similarity of SCM and Original Teacher:} Since SCM seeks to recover the original teacher distribution, in addition to the visualization presented in Fig. \\ref{fig:nasty_illustration}, we conduct this study to quantitatively estimate the closeness of the SCM ensemble and the normal teacher. We use KL-Divergence as our metric and present results in Table \\ref{tab:ens-kl-div}. 
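For reference, the per-sample KL-Divergence underlying this comparison can be computed as below; the three probability vectors are illustrative assumptions, not outputs of the actual models:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Illustrative 4-class output distributions (assumed, for demonstration only).
normal_teacher = [0.90, 0.05, 0.03, 0.02]   # clear peak on the target class
nasty_teacher  = [0.45, 0.40, 0.10, 0.05]   # misleading secondary peak
scm_ensemble   = [0.80, 0.10, 0.06, 0.04]   # closer to the normal teacher

# The ensemble sits closer to the normal teacher than the nasty teacher does.
assert kl_divergence(normal_teacher, scm_ensemble) < \
       kl_divergence(normal_teacher, nasty_teacher)
```

In practice this quantity would be averaged over the test set, one distribution pair per image.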
It can be clearly seen that, compared to the Nasty Teacher, the SCM ensemble consistently results in lower KL-Divergence, thereby supporting our motivation of recovering the original teacher. \\\\\n\n\\subsection{Limited and No Data Setting} \nIn this section, we consider settings for learning from the nasty teacher using only a part of the data. Specifically, we experiment with three settings and, for simplicity, use HTC to investigate learning from the nasty teacher under them.\n\n\\noindent \\textbf{No Label Available:} Here, we assume no access to the labels of the data used in \\ref{sec:setup}. Table \\ref{tab:nolabel-results} reports the results for this setting. \n\n\n\\noindent \\textbf{Limited Data Available:} Here, we evaluate the learning by varying the percentage of data used [10\\%, 30\\%, 50\\%, 70\\%, 90\\%]. From Fig. \\ref{fig:ablations-temp}(c), we observe that our approach extracts knowledge better across all data fractions. The performance difference becomes even more notable when a very low fraction is used (in the 10\\% setting, we perform approx. 15\\% better than the Nasty Teacher). In addition, we also observe that our results always remain significantly closer to the original teacher. \n\n\\begin{table}\n \n \\begin{minipage}[c]{0.45\\linewidth}\n \\centering\n \\begin{tabular}{ccc}\n \\hline\n \\hline\n Network & Nasty & HTC \\\\\n \\hline\n CNN & 16.38 & \\textbf{26.03} \\\\\n MobileNetV2 & 41.79 & \\textbf{52.96} \\\\\n ShuffleNetV2 & 70.04 & \\textbf{76.29} \\\\\n \\hline\n \\end{tabular}\n \n \\caption{\\label{tab:datafree-results} No Data Available Results for CIFAR10.}\n \\end{minipage}\n \\begin{minipage}[c]{0.55\\linewidth}\n \\centering\n \\begin{tabular}{cccc}\n \\hline\n \\hline\n Dataset & Network & Nasty & HTC \\\\\n \\hline\n CIFAR-10 & CNN & 82.64 & \\textbf{87.48} \\\\\n \n CIFAR-100 & ShuffleNetV2 & 64.41 & \\textbf{74.17} \\\\\n \n \\hline\n \\end{tabular}\n \n \\caption{\\label{tab:nolabel-results} No Label Available results. 
Using images without their ground truth.} \n \\end{minipage}\n \n\\end{table}\n\\noindent \\textbf{No Data Available:} Here, we consider the setting where no data is available. More precisely, we take the existing data-free distillation method \\cite{DAFL}, and add our method to it. Table \\ref{tab:datafree-results} demonstrates consistent improvement while learning from the nasty teacher in this setting.\n\n\\begin{figure}[t]\n\\centering\n \\includegraphics[width=0.8\\linewidth]{.\/imgs\/ablations_horizontal.jpg}\n \n \\caption{Effect of Temperature: \\textbf{(a)} C10, Student: ShuffleNetV2, Teacher: ResNet18 and \\textbf{(b)} C100, Student: ShuffleNetV2, Teacher: ResNet50. \\textbf{(c)} Effect of percentage of data. \\textbf{(d)} Effect of multiplier $m$.}\n \n \\label{fig:ablations-temp}\n\\end{figure}\n\\subsection{Leveraging SCM for an improved defense}\n\\noindent Previously, we discussed the vulnerabilities of the Nasty Teacher \\cite{nasty}. We now discuss how to improve the ``Nasty Teacher'' defense.\n\n\n\n\n\\noindent Nasty Teacher derives its efficacy from the peaks created for the incorrect classes \n\\begin{wrapfigure}{r}{0.5\\textwidth}\n \n \\begin{tabular}{cc}\n \\includegraphics[width=0.4\\linewidth]{.\/imgs\/temp_tsne_normal.jpg} & \n \\includegraphics[width=0.4\\linewidth]{.\/imgs\/temp_tsne_nasty.jpg} \\\\\n (a) & (b) \\\\\n \\includegraphics[width=0.4\\linewidth]{.\/imgs\/temp_tsne_seq1.jpg}\n & \\includegraphics[width=0.4\\linewidth]{.\/imgs\/temp_tsne_seq2.jpg} \\\\\n (c) & (d) \\\\\n \\end{tabular}\n \\caption{t-SNE plots of features before FC layers for SCM on CIFAR-10. Model: ResNet-18, \\textbf{a)} Normal Teacher \\textbf{b)} Nasty Teacher \\textbf{c)} Seq-1 \\textbf{d)} Seq-2}\n \\label{fig:tsne-scm}\n \n\\end{wrapfigure}\n(Fig. \\ref{fig:nasty_illustration}). 
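The notion of incorrect peaks can be made concrete with a small sketch; the probability vectors and the 0.05 threshold below are illustrative assumptions, not values from the paper:

```python
def count_incorrect_peaks(probs, target, threshold=0.05):
    """Count non-target classes whose probability exceeds a threshold.

    The threshold is an illustrative assumption used only for this sketch.
    """
    return sum(1 for c, p in enumerate(probs) if c != target and p > threshold)

# Assumed output distributions over 5 classes (target class is index 0).
nasty = [0.50, 0.35, 0.10, 0.03, 0.02]  # few, strong incorrect peaks
seq1  = [0.40, 0.20, 0.15, 0.15, 0.10]  # more incorrect peaks, more confusion

# A sequence model spreads confusion over more incorrect classes.
assert count_incorrect_peaks(nasty, target=0) < count_incorrect_peaks(seq1, target=0)
```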
\nSince each incorrect peak adds to the confusion during learning, we hypothesize that the number of incorrect peaks in the output distribution affects the effectiveness of the defense. In general, more peaks correlate with increased confusion, and hence a better defense. However, in the case of the Nasty Teacher, we often find the number of peaks to be small compared to the total number of classes (see Fig. \\ref{fig:nasty_illustration}(a) for an illustration), which creates a chance to improve its defense by increasing the number of incorrect peaks. In light of this observation, we propose to use our previously proposed Sequence of Contrastive Model Training for an improved defense. As the model obtained at each SCM step displays a diverse set of peaks, we expect these models to exhibit more peaks, largely because they were trained to eventually contrast with the nasty teacher (which has few peaks). Fig. \\ref{fig:nasty_illustration} illustrates this via columns (b),(c), where Seq-1\/Seq-2 have more peaks than the Nasty Teacher. Thus, we choose one of these intermediate sequence models ($S^{1}, S^{2}, \\dots, S^{k}$) as the teacher and show our results (denoted by \\textbf{Ours}) in Table \\ref{tab:defense}. For ease of exploration, we typically use the first or second model from our sequence in \\ref{sec:ensmeth} as the defense teacher. Though our model incurs a small drop in accuracy ($<2\\%$), the significant improvement (sometimes as much as ${\\sim}10\\%$) it produces while defending makes it attractive for application. \nOne may attribute this degradation in KD to the decrease in accuracy; we now evaluate this aspect for a better understanding. Specifically, we obtain a model with an accuracy similar to ours by training a Nasty Teacher (refer \\ref{sec:background}) and early stopping it at the desired accuracy. 
We dub this approach Nasty KD ES (Early Stopped) and include the results in Table \\ref{tab:defense}. We observe that, for a similar accuracy, Nasty KD ES has a significantly weaker defense against \\textbf{our method}, and in some cases it even defends worse than the original Nasty KD. Moreover, we visualize the features of the intermediate sequence models in the t-SNE \\cite{tsne} plots of Fig. \\ref{fig:tsne-scm} and observe that the intermediates, i.e., Seq-1 and Seq-2, possess class separation similar to the normal teacher (Fig. \\ref{fig:tsne-scm}(a)) while maintaining the desired defense.\n\\begin{table}[t]\n \n \\centering\n \\begin{tabular}{cccc|c}\n \\hline\n \\hline\n CIFAR-10 & Normal Teacher & Nasty KD & Nasty KD ES & Ours \\\\\n ResNet18 & (Acc=95.09) & (Acc=94.28) & (Acc=93.02) & (Acc=92.99) \\\\\n \\hline\n CNN & 85.99 & \\underline{82.27} & 84.62 & \\textbf{80.13}\\\\\n MobileNetV2 & 89.58 & 31.73 & \\underline{28.58} & \\textbf{22.12} \\\\\n ShuffleNetV2 & 91.03 & \\underline{79.73} & 87.81 & \\textbf{73.23}\\\\\n \\hline\n \\hline\n \\hline\n CIFAR-100 & Normal Teacher & Nasty KD & Nasty KD ES & Ours \\\\\n ResNet50 & (Acc=77.96) & (Acc=77.4) & (Acc=76.2) & (Acc=75.44) \\\\\n \\hline\n CNN & 58.38 & 58.93 & \\underline{58.45} & \\textbf{54.26} \\\\\n MobileNetV2 & 71.43 & 3.03 & \\underline{2.16} & \\textbf{1.4} \\\\\n ShuffleNetV2 & 68.90 & 63.16 & \\underline{62.95} & \\textbf{ 54.00}\\\\\n \\hline\n \\hline\n \\end{tabular}\n \\caption{Results of our defense, Nasty KD and Nasty KD ES. Training hyperparameters are the same as discussed in section\n \\ref{sec:setup}.}\n \\label{tab:defense}\n\\end{table}\n\n\n\n\n \n \n\n \n \n\n\n\n\\section{Preliminaries}\n\\section{Method}\n\\subsection{Background: Knowledge Distillation}\n\\label{background:kd}\n\\noindent Ordinary training of a neural network involves matching the output logits $z$ of the network with the ground-truth label $y$ using the cross-entropy loss function. 
\n\\begin{equation}\n\\label{eq:orig-ce-loss}\n\\mathcal{L}_{CE} = H(softmax(z), y)\n\\end{equation}\n\n\\noindent In Knowledge distillation, a smaller Student Network ($f_s$) is trained with the output of a large Pre-Trained Teacher Network ($f_t$) alongside the ground-truth labels. Formally, with input image as $x$, and the output logits of student and teacher as $z_s= f_s(x)$ and $z_t= f_t(x)$ respectively, the logits are first softened using temperature $(\\tau)$, transformed with softmax to obtain outputs $y_s$ and $y_t$:\n\\begin{equation}\n y_{s} = \\textit{softmax}(z_{s}\/\\tau), \n y_{t} = \\textit{softmax}(z_{t}\/\\tau)\n\\end{equation}\nand then passed through Kullback-Leibler(KL) divergence balanced by weight parameter $\\lambda$ (generally $\\lambda=\\tau^{2}$ for KD) to compute KD loss as:\n\\begin{equation}\n \\label{eq:orig-kd-loss}\n \\mathcal{L}_{KD} = \\lambda \\cdot KL(y_s, y_t)\n\\end{equation}\nThe final loss is now computed by combining $\\mathcal{L}_{KD}$ with Eq. \\ref{eq:orig-ce-loss} via parameter $\\alpha$:\n\\begin{equation}\n \\mathcal{L} = \\alpha \\cdot \\mathcal{L}_{KD} + (1 - \\alpha) \\cdot \\mathcal{L}_{CE}\n \\label{eq:hinton-loss}\n\\end{equation}\n\n\n\\begin{figure}[h]\n \\centering\n \n\t\\includegraphics[width=\\linewidth]{.\/imgs\/ensv3.jpg}\n \\caption{Probabilities of Nasty Teacher (a), Seq-1(b), Seq-2(c), their Ensemble(d) as used in SCM , and Normal Teacher(e). As can be seen, probability distribution of Ensemble is very similar to Normal-Teacher. Note, Seq-i is $S^{i}$, Target Class is colored maroon and the remaining classes are colored orange.}\n \\label{fig:nasty_illustration}\n \n\\end{figure} \n\n\\begin{figure}\n \n \\centering\n \n \n \n \\begin{tabular}{cc}\n \n \n \n \\includegraphics[width=\\linewidth]{.\/imgs\/tempmeth_logits_horizontal.png}\n \\\\\n\n \\end{tabular}\n \\caption{The effect of temperature in HTC. 
We demonstrate the logits and probabilities at low temperature (as generally used in KD) and high temperature (as used in HTC). \\textbf{(a)} Logits at low temperature $T_{low} = 4$, \\textbf{(b)} Logits at high temperature $T_{high} = 50$, \\textbf{(c)} Probabilities at $T_{low}$, \\textbf{(d)} Probabilities at $T_{high}$, \\textbf{(e)} Composing with one-hot to increase the peak.}\n \\label{fig:tempmeth-vis}\n \\vspace*{-4mm}\n\\end{figure}\n\n\\subsection{Definition: KD Based Model Stealing}\n\\noindent\\underline{\\textbf{KD-based Stealing}: }Given a stealer (or student) model $\\theta_{st}$ and a victim (or teacher) $\\theta_{vi}$, the stealer is said to succeed in stealing knowledge using KD if, by using the input-output information of the victim, it is able to acquire additional knowledge that it would not have access to in the absence of the teacher. Following \\cite{nasty}, we formalize this phenomenon in terms of the maximum accuracy of the stealer with and without stealing from the victim. Formally, considering the maximum accuracy of the stealer model with and without stealing as $Acc_{w}(KD(\\theta_{st}, \\theta_{vi}))$ and $Acc_{wo}(\\theta_{st})$, stealing is said to happen if\\\\\n\\begin{equation}\n Acc_{w}(KD(\\theta_{st}, \\theta_{vi})) > Acc_{wo}(\\theta_{st})\n\\end{equation}\nIntuitively, the accuracy gain of the stealer in the presence of the victim (or teacher) indicates the additional beneficial information it was able to extract from the victim, hence the stealing. The aforementioned settings are the same as in \\cite{nasty}, and we discuss them further in \\ref{ablations}. \\\\\n\\underline{\\textbf{Defense against KD based Stealing}: } Following \\cite{nasty}, we consider a method $M$ a defense if it degrades the stealer's tendency (or accuracy) of stealing. 
Formally, considering the accuracy of the stealer without defense $M$ as $Acc_{w}(KD(\\theta_{st},\\theta_{vi}))$ and with defense as $Acc_{wm}(KD(M(\\theta_{st}, \\theta_{vi})))$, $ M $ is said to be a defense if:\n\\begin{equation}\n Acc_{wm}(KD(M(\\theta_{st},\\theta_{vi}))) < Acc_{w}(KD(\\theta_{st},\\theta_{vi})) \n\\end{equation}\nIn what follows, we discuss the Nasty Teacher \\cite{nasty} and then motivate our approach.\n\n\n\\section{Conclusion}\nIn this work, we focus on the threat of KD-based model stealing; specifically, the recent work Nasty Teacher \\cite{nasty}, which proposes a defense against such stealing. We study \\cite{nasty} from two different directions and systematically show that we can still extract knowledge from the Nasty Teacher with our approaches: HTC and SCM. Extensive experiments demonstrate the efficacy of our approaches in extracting knowledge from the Nasty Teacher. Leveraging the insights we gain in our approaches, we finally also discuss an extension of Nasty Teacher that serves as a better defense. Concretely, we highlight a few dimensions that affect the defense against KD-based stealing to facilitate subsequent efforts in this direction. As future work, we intend to improve the existing defense and simultaneously explore such stealing in a relatively white-box setting, wherein the goal will be to defend even if the adversary gets hold of the model features\/parameters.\n\n\n\\section{Introduction}\n\\label{sec:intro}\nKnowledge Distillation utilizes the outputs of a pre-trained model (i.e., the teacher) to train a generally smaller model (i.e., the student). 
Typically, KD methods are used to compress models that are wide, deep and require significant computational resources and pose challenges to model deployment.\nOver the years, KD methods have seen success in various settings beyond model compression including few-shot learning \\cite{fskd}, continual learning {\\cite{delange2021pami-cl-survey}}, and adversarial robustness \\cite{ard}, to name a few -- highlighting its importance in training DNN models. However, recently, there has been a growing concern of misusing KD methods as a means to steal the implicit model knowledge of a teacher model that could be proprietary and confidential to an organization. \nKD methods provide an inadvertent pathway for leak of intellectual property that could potentially be a threat for science and society. Surprisingly, the importance of defending against such KD-based stealing was only recently explored in \\cite{nasty,skeptical}, making this a timely and important topic.\n\n\nIn particular, \\cite{nasty} recently proposed a defense mechanism to protect such KD-based stealing of intellectual property using a training strategy called the `Nasty Teacher'. This strategy attempts to transform the original teacher into a model that is `\\textit{undistillable}', i.e., any student model that attempts to learn from such a teacher gets significantly degraded performance. This method maximally disturbs incorrect class logits (a significant source of model knowledge), which produces confusing outputs devoid of clear, meaningful information. This method showed promising results in defending against such KD-based stealing from DNN models. However, any security-related technology development requires simultaneous progress of both attacks and defenses for sturdy progress of the field, and eventually lead to the development of robust models. 
In this work, we seek to test the extent of the defense obtained by the `Nasty Teacher' \\cite{nasty}, and show that it is possible to recover model knowledge despite this defense using the logit outputs of such a teacher. Subsequently, we leverage the garnered insights and propose a simple yet effective defense strategy, which significantly improves defense against KD-based stealing. \n\nTo this end,\nwe ask two key questions: (i) can we transform the outputs of the Nasty Teacher to reduce the extent of confusion, and thus be able to steal despite its defense? and (ii) can we transform the outputs of the Nasty Teacher to recover hidden essential relationships between the class logits? To answer these two questions, we propose two approaches -- High Temperature Composition (HTC), which systematically reduces confusion in the logits, and Sequence of Contrastive Models (SCM), which systematically recovers relationships between the logits. These approaches result in improved KD performance, thereby highlighting the continued vulnerability of DNN models to KD-based stealing. Because of their generic formulation and simplicity, we believe our proposed ideas could apply well to similar approaches that may be developed in the future along the same lines as the Nasty Teacher. To summarize, this work analyzes key attributes of output scores (which capture the strength and clarity of model knowledge) that could stimulate knowledge stealing, and thereby leverages those to strengthen defenses against such attacks too. Our key contributions are summarized as follows:\n\\begin{itemize}\n \\item We draw attention to the recently identified vulnerability of KD methods in model-stealing, and analyze the first defense method in this direction, i.e., Nasty Teacher, from two perspectives: (i) reducing the extent of confusion in the class logit outputs; and (ii) extracting essential relationship information from the class logit outputs. 
We develop two simple yet effective strategies -- High Temperature Composition (HTC) and Sequence of Contrastive Models (SCM) -- which can undo the defense of the Nasty Teacher, pointing to the need for better defenses in this domain.\n \\item We leverage our obtained insights and propose an extension of Nasty Teacher, which outperforms the earlier defense under similar settings.\n \\item We conduct exhaustive experiments and ablation studies on standard benchmark datasets and models to demonstrate the effectiveness of our approaches.\n\\end{itemize}\nWe hope that our efforts in this work will provide important insights and encourage further investigation of a critical problem with DNN models in contemporary times, where privacy and confidentiality are increasingly valued.\n\n\n\\section{Learning from a Nasty Teacher}\n\\label{sec:method}\n\\subsection{Background}\n\\label{sec:background}\n\\noindent \\textbf{Knowledge Distillation (KD)}: KD methods train a smaller student network, $\\theta_s$, with the outputs of a typically large pre-trained teacher network, $\\theta_t$, alongside the ground-truth labels. Given an input image $\\textbf{x}$, with the student logits given by $\\textbf{z}_s= \\theta_s(\\textbf{x})$ and the teacher logits given by $\\textbf{z}_t= \\theta_t(\\textbf{x})$, a temperature parameter $\\tau$ is used to soften the logits and obtain transformed output probability vectors using the softmax function:\n\\begin{equation}\n \\textbf{y}_{s} = \\textit{softmax}(\\textbf{z}_{s}\/\\tau), \n \\textbf{y}_{t} = \\textit{softmax}(\\textbf{z}_{t}\/\\tau)\n\\end{equation}\nwhere $\\textbf{y}_s$ and $\\textbf{y}_t$ are the new output probability vectors of the student and teacher, respectively. 
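As an illustration, the temperature-softened softmax above can be sketched in plain Python (the logit values below are made up for illustration and are not from the paper's code):

```python
import math

def softened_softmax(logits, tau=1.0):
    """softmax(z / tau): a higher tau flattens the distribution,
    exposing the relative scores of low-probability classes."""
    scaled = [z / tau for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative teacher logits with one dominant (correct) class.
z_t = [8.0, 1.0, 0.5, -2.0]
y_sharp = softened_softmax(z_t, tau=1.0)  # near one-hot
y_soft = softened_softmax(z_t, tau=4.0)   # incorrect-class scores become visible
```

At $\tau=1$ the output is nearly one-hot, while at $\tau=4$ the incorrect-class probabilities become non-negligible; this softened signal is what standard KD transfers to the student.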
The final loss function used to train the student model is given by:\n\\begin{equation}\n \\mathcal{L} = \\alpha \\cdot \\lambda \\cdot KL(\\textbf{y}_s, \\textbf{y}_t) + (1 - \\alpha) \\cdot \\mathcal{L}_{CE}\n \\label{eq:hinton-loss}\n\\end{equation}\nwhere $KL$ stands for the Kullback-Leibler divergence, $\\mathcal{L}_{CE}$ represents the standard cross-entropy loss, and $\\lambda, \\alpha$ are two hyperparameters that control the importance of the loss function terms ($\\lambda=\\tau^{2}$ generally).\n\n\\noindent \\textbf{KD-based Stealing}: Given a stealer (or student) model, denoted by $\\theta_{s}$, and a victim (or teacher) $\\theta_{t}$, the stealer is said to succeed in stealing knowledge using KD if, by using the input-output information of the victim, it can grasp some additional knowledge which is not accessible in the victim's absence. As stated in \\cite{nasty}, this phenomenon can be measured in terms of the difference in the maximum accuracy of the stealer with and without stealing from the victim. Formally, stealing is said to happen if:\n\\begin{equation}\n Acc_{w}(KD(\\theta_{s}, \\theta_{t})) > Acc_{wo}(\\theta_{s})\n\\end{equation}\nwhere the left expression refers to the accuracy with stealing, and the right one refers to the accuracy without stealing.\n\n\\noindent \\textbf{Defense against KD-based Stealing}: Following \\cite{nasty}, we consider a method $M$ a defense if it degrades the student's ability (or accuracy) in stealing. 
Formally, considering the accuracy of the stealer without defense $M$ as $Acc_{w}(KD(\\theta_{s},\\theta_{t}))$ and with defense as $Acc_{wm}(KD(M(\\theta_{s}, \\theta_{t})))$, $M$ is said to be a defense if:\n\\begin{equation}\n Acc_{wm}(KD(M(\\theta_{s},\\theta_{t}))) < Acc_{w}(KD(\\theta_{s},\\theta_{t})) \n\\end{equation}\n\\input{motivation}\n\\subsection{Feasibility of KD-Based Stealing}\n\\label{stealing-kd}\nAs discussed earlier, standard KD techniques \\cite{hinton,survey1,survey2,survey3} learn well with just the outputs of the teacher and hence are well-suited for stealing models released as APIs, MLaaS, and so on (known as the black-box setting). However, it is also true that the performance of KD methods relies on factors such as training data, architecture choice and the amount of information revealed in the outputs. Thus, one might argue that we can permanently restrict attackers' access to these and prevent KD-based stealing attacks. We now discuss each of these and illustrate the feasibility of KD-based attacks.\n\\textbf{(1) Restricting Access to Training Data:} While developers try their best to protect such IP assets, as discussed by \\cite{skeptical}, there can continue to be numerous reasons for concern: (i) The developers might have bought the data from a vendor who can potentially sell it to others; (ii) Intentionally or not, there is a distinct possibility of data leaks; (iii) Many datasets are either similar to or subsets of large-scale public datasets (ImageNet\\cite{imgnet}, BDD100k\\cite{bdd100k}, and so on), which can be effectively used as proxies; or (iv) Model inversion techniques can be used to recover training data from a pre-trained model \\cite{dream2distill, secret_reveal, labonly_inv, bb_frec, GAMIN} in both white-box and black-box settings. Such methods \\cite{labonly_inv}, in fact, do not even require the soft outputs; just the hard predicted label from the model suffices. 
Thus, as these methods evolve, we can only expect an adversary to become capable of obtaining training data of sufficient quantity and quality to allow such KD-based stealing.\n\\textbf{(2) Restricting Access to Architecture:} While developers may not reveal the architecture information completely, common development practices still pose concerns for safety: (i) Most applications simply utilize architectures from existing model hubs \\cite{torchhub,tensorflowhub}, sometimes even with the same pre-trained weights (transfer learning), which narrows down the options an adversary needs to try; (ii) The availability of additional tools such as AutoML \\cite{automl-example2-learningfeatureengg,automl-example2-oneplayergame} and Neural Architecture Search (NAS) \\cite{nas-example1-simpleandefficient,nas-example2-darts} can help attackers search for architectures that match specific criteria. When combined with the increasingly available compute power (in terms of TPUs, GPUs), this can make models significantly vulnerable to attacks;\n(iii) With advancements in KD methods, it has been shown that knowledge can be distilled from any architecture to any other: \\cite{ban} shows distillation between same-capacity networks, \\cite{tfkd} shows distillation from even poorly trained networks, and so on. Besides, KD has also witnessed techniques that require no data information and still achieve distillation (i.e., data-free distillation \\cite{DAFL, dream2distill}). This gives an attacker even more opportunities to carry out KD-based stealing.\n\\textbf{(3) Releasing Incomplete or Randomly Noised Teacher Outputs:} The degree of exposition (i.e., the number of classes revealed) and the clarity of scores (i.e., the ease of understanding and inferring from scores) play a vital role in ensuring the quality of knowledge transfer. 
Hence, one might argue that releasing only the top-K class scores can help contain attackers; however, such an approach is infeasible in many use cases where it is necessary to fetch the entire score map. One might also consider adding random noise to make teacher outputs undistillable. This approach not only lowers model performance but also makes failure tracking difficult.\\\\\n\\noindent The above discussion motivates the need to develop fundamentally sound strategies to protect against stealing. To this end, we analyze ``Nasty Teacher\" \\cite{nasty} from two different directions and subsequently leverage insights from the KD literature to propose simple yet effective approaches that show it is still vulnerable to knowledge stealing. We name our attack methods \\textit{``High Temperature Composition (HTC)\"} and \\textit{``Sequence of Contrastive Models (SCM)\"}, and describe them below. We explain our methods using the Nasty Teacher as the pivot, primarily because it is the only known KD-stealing defense method at this time. Our strategies, however, are general, and can be applied against any defense of this kind.\n\n\\subsection{High Temperature Composition: HTC }\n\\label{sec:tempmeth}\n\\textbf{Motivation: Nasty Teacher Creates Confusing Signals.} In Figure \\ref{fig:nasty_illustration}, we observe that the original teacher $\\theta_{t}$ emulates a single-peak distribution and consistently has low scores for incorrect classes. Now, because the Nasty Teacher is trained to contrast with the original teacher, it produces high scores for a few incorrect classes, and thus results in a multi-peak distribution (see Fig \\ref{fig:nasty_illustration}b). In particular, a few incorrect classes score almost as high as the correct class while other incorrect classes score almost zero, which, as discussed in \\cite{nasty}, introduces confusion in the outputs. 
We attribute this confusion to two key aspects: (i) some low-scoring incorrect classes getting ignored, and (ii) some high-scoring incorrect classes appearing as important as the correct class. Fig \\ref{fig:tempmeth-vis}c shows a visualization of this observation.\nSince the student model is now forced to learn these incorrect peaks as equally important as the correct class, it gets a false signal and diverges while training.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\linewidth]{.\/imgs\/tempmeth_logits_horizontalv2.jpg}\n \\caption{The effect of temperature in HTC. We demonstrate the logits and probabilities at low temperature (as generally used in KD) and high temperature (as used in HTC). \\textbf{(a)} Logits at low temperature $\\tau_{low} = 4$, \\textbf{(b)} Logits at high temperature $\\tau_{high} = 50$, \\textbf{(c)} Probabilities at $\\tau_{low}$, \\textbf{(d)} Probabilities at $\\tau_{high}$, \\textbf{(e)} Composing with one-hot to increase the peak.}\n \\label{fig:tempmeth-vis}\n\\end{figure}\n\n\\noindent {\\textbf{Proposition.}} \\textit{Transform the outputs to reduce the degree of confusion in them.}\n\n\\noindent {\\textbf{Method.}}\nWe hypothesize that distillation from defenses such as Nasty Teacher can be improved by increasing the relative importance of low-scoring incorrect classes and including their presence in the output. \nIncreasing the importance of low-scoring incorrect classes makes the high-scoring incorrect classes less important, thus reducing confusion. We note that this idea can be used generically, even independent of the Nasty Teacher's defense.\nTo this end, we first soften the teacher's outputs with a high temperature $\\tau$ ($\\tau \\geq 50$ in our case). Figure \\ref{fig:tempmeth-vis}b shows how this reduces the relative disparity among the scores and brings them closer, thus helping reduce confusion. 
From the softmax outputs in Figure \\ref{fig:tempmeth-vis}d, we further see that this not only allows the other incorrect classes to be viewed in the output but also gives rise to relationships (or variations) that were not visible earlier. Formally, the above operation can be written as: \n\\begin{equation}\n \\textbf{y}_{nasty} = \\textit{softmax}(\\textbf{z}_{nasty} \/ \\tau) \n \\label{eq:y-nasty-htc}\n\\end{equation}\nAlthough we get a much more informative output, the above transformation also reduces the relative peak of the correct class, which for distillation may not be ideal. We overcome this by using a convex combination of $\\textbf{y}_{nasty}$ with the one-hot target vector to obtain the final output $\\textbf{y}_{net}$, which makes this strategy more meaningful (see Figure \\ref{fig:tempmeth-vis}):\n\\begin{equation}\n \\textbf{y}_{net} = (1 - \\alpha) \\cdot \\textbf{y} + \\alpha \\cdot \\textbf{y}_{nasty}\n \\label{eq:y-net-htc}\n\\end{equation}\n\n\\noindent In the above discussion, we propose to create our own training targets, which satisfy the two properties needed to learn despite the Nasty Teacher's defense: (i) a high peak for the correct class; and (ii) rich semantic class score information. 
Finally, to learn from this teacher and match its distributions, we minimize the cross-entropy loss between the student probabilities ($\\textbf{s} = \\textit{softmax}(\\textbf{z}_{student})$) and the HTC teacher targets ($\\textbf{y}_{net}$) as:\n\\[ \\mathcal{L}_{HTC} = - \\sum_{i} \\textbf{y}_{net, i} \\cdot \\log{\\textbf{s}_i} \\]\n\\noindent Combining the above with Eqn \\ref{eq:y-net-htc}, $\\mathcal{L}_{CE}$ as the cross-entropy loss and $\\mathcal{L}_{KL}$ as the KL-divergence, we can now write the above as:\n\\begin{align}\n \\mathcal{L}_{HTC} &= -\\sum_{i} ((1 - \\alpha) \\cdot \\textbf{y}_i + \\alpha \\cdot \\textbf{y}_{nasty, i}) \\log{\\textbf{s}_i} \\nonumber \\\\\n &= -(1 - \\alpha) \\cdot \\sum_{i} \\textbf{y}_i \\cdot \\log{\\textbf{s}_i} - \\alpha \\cdot \\sum_{i} \\textbf{y}_{nasty, i} \\cdot \\log{\\textbf{s}_i} \\nonumber \\\\\n &= (1 - \\alpha) \\cdot \\mathcal{L}_{CE}(\\textbf{s}, \\textbf{y}) + \\alpha \\cdot \\mathcal{L}_{CE}(\\textbf{s}, \\textbf{y}_{nasty}) \\nonumber \\\\\n \\mathcal{L}_{HTC} &= (1 - \\alpha) \\cdot \\mathcal{L}_{CE}(\\textbf{s}, \\textbf{y}) + \\alpha \\cdot m \\cdot \\mathcal{L}_{KL}(\\textbf{s}, \\textbf{y}_{nasty})\n \\label{eq:tempmeth-loss}\n\\end{align}\n\n\\noindent In Equation \\ref{eq:tempmeth-loss}, we see that $\\mathcal{L}_{CE}$ is replaced with $\\mathcal{L}_{KL}$. Here, we use the fact that the cross-entropy loss and the KL-divergence differ by a term $\\sum_{i} \\textbf{y}_{nasty,i} \\log{\\textbf{y}_{nasty,i}}$, which remains constant for the student's gradient computation because of the fixed teacher outputs $\\textbf{y}_{nasty}$. $\\mathcal{L}_{HTC}$ is thus a KD loss similar to the one discussed in Sec \\ref{sec:background}. To adjust the extent of knowledge transfer, we finally re-weight the KL-divergence term with a hyperparameter multiplier $m$. Conceptually, $m$ does not make a difference to the idea, but we observe it to be useful while training. 
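A compact sketch of the HTC target construction and its plain cross-entropy form is given below; the helper names and example logits are our own illustration (not the authors' code), and the re-weighted KL variant with multiplier $m$ is omitted for brevity:

```python
import math

def softmax(logits, tau=1.0):
    scaled = [z / tau for z in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def htc_targets(nasty_logits, correct_class, tau=100.0, alpha=0.9):
    """y_net = (1 - alpha) * one_hot + alpha * softmax(z_nasty / tau).

    The high temperature flattens the nasty multi-peak output (reducing
    confusion); composing with one-hot restores the correct-class peak."""
    y_nasty = softmax(nasty_logits, tau)
    one_hot = [0.0] * len(nasty_logits)
    one_hot[correct_class] = 1.0
    return [(1 - alpha) * y + alpha * yn for y, yn in zip(one_hot, y_nasty)]

def htc_loss(student_probs, y_net):
    """Cross-entropy between student probabilities and HTC targets."""
    return -sum(t * math.log(s) for t, s in zip(y_net, student_probs))

# Nasty-style logits: a spurious high peak on the incorrect class 2.
z_nasty = [9.0, 0.1, 8.5, 0.2]
y_net = htc_targets(z_nasty, correct_class=0)
```

In `y_net` the spurious peak is flattened while the correct class keeps the single highest target probability, matching the two properties stated above.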
Figure \\ref{fig:into-vis}a provides a visualization for this approach.\n\n\n\n\n\n\n\n\n\\subsection{Sequence of Contrastive Models: SCM }\n\\label{sec:ensmeth}\n\\noindent{\\textbf{Motivation: Nasty Hides Essential Information.}} As Nasty Teacher selectively causes some incorrect classes to exhibit peaks while inhibiting scores for others, it hides certain inter-class relationships to protect against KD-based attacks.\n\n\\noindent {\\textbf{Proposition.}}: \\textit{Transform the output to extract\/recover the essential class relationships or possibly the entire original teacher distribution}\n\n\\begin{figure}[t]\n \\centering\n \n\t\\includegraphics[width=0.8\\linewidth]{.\/imgs\/diagramv15.jpg}\n \\caption{\\textit{Illustration of our Methods}: High Temperature Composition (HTC) and Sequence of Contrastive Models (SCM). \\textit{HTC} learns from Nasty Teacher (NT) outputs by reducing their confusion, while \\textit{SCM} learns by extracting semantic information from them. }\n \\label{fig:into-vis}\n \n\\end{figure}\n\\noindent {\\textbf{Method.}}: To begin with, we ask the question - what would happen if we used Eqn \\ref{eq:nasty} as is with the given nasty teacher $\\theta_{n}$? We would expect to see a model with accuracy as high as the teacher but with presence of class distribution peaks different from it. If we perform this operation for ``k\" such sequential stjpg, we can expect to obtain ``k\" potentially different output distributions. Building on this thought, we introduce a Sequential Contrastive Training strategy, wherein we form a sequence of ``k\" contrastive models by training each model to contrast with the model just before it. 
Formally, taking the nasty teacher $\\theta_{n}$ as the starting point of the sequence and $G$ as the method for generating the next contrastive item in the sequence, which in our approach is the same as the one described in Eqn \\ref{eq:nasty}, the sequence of length $k$ can be written as:\n\\begin{equation}\nS^{i} = \\begin{cases} \n \\theta_{n} & i=1 \\\\\n G(S^{i-1}) & 1 < i\\leq k\n \\end{cases}\n\\end{equation}\nThus, while maximizing the KL-divergence between models, each model learns a different probability distribution and has its own unique set of confusing class relationships. We then take an ensemble of their outputs, denoted as $\\textbf{z}_{ens}$, and use it alongside the ground-truth labels $\\textbf{y}$ to train the student (or stealer) model, i.e.,\n\\begin{multline}\n \\mathcal{L}_{SCM} = (1 - \\alpha) \\cdot \\mathcal{L}_{CE} (\\textit{softmax}(\\textbf{z}_{s}), \\textbf{y}) \\\\\n + \\alpha \\cdot m \\cdot \\mathcal{L}_{KL} (\\textit{softmax}(\\textbf{z}_{s}\/\\tau), \\textit{softmax}(\\textbf{z}_{ens}\/\\tau))\n\\end{multline} \nwhere the hyperparameters $m$, $\\alpha$, $\\tau$ have the same meaning as in Eqn \\ref{eq:tempmeth-loss}.\nThe core intuition behind SCM is that a relationship may be important in its own distribution, but only the essential relationships will be important across distributions. Therefore, by taking the ensemble, we not only capture such relationships but also obtain a distribution closer to the original teacher. This idea is visualized in Figure \\ref{fig:nasty_illustration}, where (b) and (c) represent two successive checkpoints in the sequence, and (d) represents their ensemble. 
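The ensembling step of SCM can be sketched as follows; training each contrastive model is omitted, and the two sets of logits are made-up stand-ins for two items in the sequence:

```python
def scm_ensemble(sequence_logits):
    """Average the logits of the k models in the contrastive sequence.

    Relationships consistent across the diverse per-model distributions
    survive the average, while model-specific spurious peaks are damped."""
    k = len(sequence_logits)
    n_classes = len(sequence_logits[0])
    return [sum(model[c] for model in sequence_logits) / k
            for c in range(n_classes)]

# Hypothetical sequence items: different spurious peaks (classes 1 and 2),
# but a consistent peak on the correct class (index 0).
seq1 = [9.0, 7.5, 0.2, 0.1]
seq2 = [9.0, 0.3, 7.4, 0.2]
z_ens = scm_ensemble([seq1, seq2])
```

In `z_ens` the correct class keeps its full score, while each spurious peak is roughly halved, i.e., the ensemble moves toward the original teacher's single-peak shape.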
It can be clearly noted that different items in the sequence consistently exhibit diverse class relationships, and further, their ensemble illustrates a class distribution closer to the original teacher (Figure \\ref{fig:nasty_illustration}e).\n\n\\section{Motivation}\n\\label{sec:nastyteacher}\n\\noindent \\textbf{Nasty Teacher (NT)}~\\cite{nasty}: The Nasty Teacher methodology transforms the original model into a model with accuracy as high as the original (to ensure model usability) but whose output distribution (or logits) significantly camouflages the meaningful information.\n\\begin{figure}[t]\n \\centering\n\t\\includegraphics[width=0.65\\linewidth]{.\/imgs\/ensv3.jpg}\n \\caption{Softmax outputs of: (a) Nasty Teacher; (b) Seq-1 (intermediate model $S^{i}$ in Sec \\ref{sec:ensmeth}); (c) Seq-2 (similar to Seq-1); (d) Ensemble of Seq-1 and Seq-2 as used in SCM; and (e) Normal Teacher. Note that the class output distribution of the ensemble is similar to the original (normal) teacher. Maroon = target class; Orange = other classes.}\n \\label{fig:nasty_illustration}\n \\vspace{-15pt}\n\\end{figure}\n\nFormally, given a teacher model $\\theta_{t}$, they output a nasty teacher model $\\theta_{n}$ trained by minimizing the cross-entropy loss $\\mathcal{L}_{CE}$ with the target labels $y$ (to ensure high accuracy) and also by maximizing the KL-divergence $\\mathcal{L}_{KL}$ with the outputs of the original teacher (to maximally contrast with, or disturb, the original and create a confusing distribution). This can be written as:\n\\begin{equation}\n\\label{eq:nasty}\n \\mathcal{L}_{n}(\\textbf{x},y) = \\mathcal{L}_{CE}(\\theta_{n}(\\textbf{x}), y) - \\omega \\cdot \\tau_{A}^2 \\cdot \\mathcal{L}_{KL} (\\theta_{n}(\\textbf{x}), \\theta_{t}(\\textbf{x}))\n\\end{equation}\n\\noindent where $\\omega$ is a weighting parameter and $\\tau_{A}$ is a temperature parameter. 
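For concreteness, the objective in Eqn \ref{eq:nasty} can be sketched in plain Python as below; the logits are made up, and the KL argument order follows the equation as written:

```python
import math

def softmax(logits, tau=1.0):
    scaled = [z / tau for z in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_div(p, q):
    """KL(p || q) for two probability vectors."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def nasty_loss(nasty_logits, teacher_logits, label, omega=0.04, tau_a=4.0):
    """CE keeps the nasty model accurate on the true label; the subtracted,
    temperature-scaled KL term rewards diverging from the original teacher."""
    ce = -math.log(softmax(nasty_logits)[label])
    kl = kl_div(softmax(nasty_logits, tau_a), softmax(teacher_logits, tau_a))
    return ce - omega * (tau_a ** 2) * kl
```

When the nasty model matches the teacher exactly, the KL term vanishes and only the cross-entropy term remains; any divergence from the teacher lowers the loss, which is what drives the confusing multi-peak outputs.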
Figure \\ref{fig:nasty_illustration}(a) provides a visual illustration of Nasty Teacher's outputs after softmax. We see that when compared to a normal teacher model in Figure \\ref{fig:nasty_illustration}(e), it maintains correct class assignments but significantly changes the semantic information contained in the class distribution.\n\n\n\\section{Related Work}\n\\label{sec:related-work}\nWe discuss prior work both from perspectives of Knowledge Distillation (KD) as well as its use in model-stealing below.\n\n\\noindent \\textbf{Knowledge Distillation:} KD methods transfer knowledge from a larger network (referred to as \\textit{teacher}) to a smaller network (referred to as \\textit{student}) by enforcing students to match the teacher's output. With seminal works \\cite{bucilua,hinton} laying the foundation, KD has gained wide popularity in recent years. The initial techniques for KD mainly focused on distilling knowledge from logits or probabilities. This idea got further extended to distilling features in \\cite{fitnet,at,fsp,rkd}, and many others. In all such methods, KD is used to improve the performance of the student model in various settings. More detailed surveys on KD can be found in \\cite{survey1, survey2, survey3}. 
Our focus in this work, however, is on recent works {\\cite{nasty,dream2distill,skeptical}}, which have discussed how KD can unintentionally expose threats to the Intellectual Property (IP) and private content of the underlying DNN models and data, thereby motivating a new, important direction in KD methods.\n\n\\noindent \\textbf{Model Stealing and KD:} Model stealing involves stealing any information from a DNN model that is desired to be inaccessible to an adversary\/end-user.\nSuch stealing can happen in multiple ways: \\textit{(1) Model Extraction as a Black Box.} An adversary could query existing model-based software and, with just its outputs, clone the knowledge into a model of its own; \\textit{(2) Using Data Inputs.} An adversary may potentially access similar\/same data as the victim, which can be used to extract knowledge\/IP; or \\textit{(3) Using Model Architecture\/Parameters.} An adversary may attempt to extract critical model information -- such as the architecture type or the entire model file -- through unintentional leaks, academic publications or other means. There have been a few disparate efforts in the past to protect against model\/IP stealing in different contexts, such as watermark-based methods \\cite{watermark1, watermark2}, passport-based methods \\cite{passport1, passport2}, dataset inference \\cite{di}, and so on. These methods focused on verifying ownership, while other methods such as \\cite{me1, me2} focused on defending against some model extraction attacks. However, the focus of these efforts was different from the one discussed herein. In this work, we specifically explore the recently highlighted problem of KD-based model stealing \\cite{nasty, skeptical}. As noted in \\cite{nasty,skeptical}, most existing verification and defense methods do not address KD-based stealing, leaving models vulnerable to this rather critical problem. 
Our work helps analyze the first defense for KD-based stealing \\cite{nasty}, identifies loopholes using simple strategies, and also leverages them to propose a newer defense for this problem. We believe our findings will accelerate further efforts in this important space.\nThe work closest to ours is one that has been recently published -- Skeptical Student \\cite{skeptical} -- which probes the confidentiality of \\cite{nasty} by appropriately designing the student (or hacker) architecture. Our approach in this work is different, and focuses on mechanisms of student training, without changing the architecture.\\footnote{Code available at \\url{https:\/\/github.com\/surgan12\/NastyAttacks}.}\n\n\\section{Experiments and Results}\n\\label{sec:results}\n\\subsection{Experimental Setup}\n\\label{sec:setup}\n\\noindent For all the experiments, we follow the same models and training configurations as used in the code provided by \\cite{nasty}\\footnote{https:\/\/github.com\/VITA-Group\/Nasty-Teacher}.\n\n\\noindent \\textbf{Datasets and Network Baselines:} We use the CIFAR-10 (C10), CIFAR-100 (C100), and TinyImageNet (TIN) datasets in our evaluation. For teacher (or victim) networks, we use ResNet18 \\cite{resnet} for CIFAR-10, and both ResNet18 and ResNet50 for CIFAR-100 and TinyImageNet. For student (or stealer) networks, we use MobileNetV2 \\cite{mobilev2}, ShuffleNetV2 \\cite{shufflev2}, and Plain CNNs \\cite{takd} for CIFAR-10 and CIFAR-100, and MobileNetV2 and ShuffleNetV2 for TinyImageNet. For baselines, \\textit{Vanilla} in Table \\ref{tab:main} refers to cross-entropy training, \\textit{Normal KD}\/\\textit{Nasty KD} refer to distillation (or stealing) with the original\/nasty teacher, \\textit{Skeptical} refers to the recent work \\cite{skeptical}, and \\textit{HTC, SCM} refer to our approaches. \n\n\\noindent \\textbf{Training Baselines:} We follow the parameter choice from \\cite{nasty}. For generating the Nasty Teacher in Eq. 
\\ref{eq:nasty}, $\\tau_{A}$ is set to 4 for CIFAR-10 and 20 for CIFAR-100\/TinyImageNet, correspondingly $\\omega$ to 0.04, 0.005, 0.01 for CIFAR-10, CIFAR-100, TinyImageNet respectively. For both Nasty KD and Normal KD, ($\\alpha$, $\\tau$) parameters of distillation in Eq. \\ref{eq:hinton-loss} are set to (0.9, 4) in plain CNNs and (0.9, 20) in MobileNetV2, ShuffleNetV2. Plain CNNs are optimized with Adam\\cite{adam} (LR=1e-3) for 100 epochs while MobileNetV2, ShuffleNetV2 are optimized with SGD (momentum=0.9, weight decay=5e-4, initial LR=0.1) for 160 epochs with LR decay of 10 at epoch [80, 120] in CIFAR-10 and for 200 epochs with LR decay of 5 at epoch [60, 120, 160] in CIFAR-100. Training settings in TinyImageNet are same as used in CIFAR-100. Moreover, all the experiments use batch size of 128 with standard image augmentations.\n\n\\noindent \\textbf{Training HTC and SCM :} For both HTC and SCM, we search $m$\\ in \\{1, 5, 10, 50\\}, $\\tau$ in \\{4, 20, 50, 100\\}, $\\alpha$ in \\{0.1,....0.9\\}. For TinyImageNet, we include $\\tau=200$ also in the earlier set of $\\tau$. While any number of sequence models can conceptually be chosen for SCM , our experiments only leverage a max. of 4 such Sequence Contrastive models. Note, for all these we choose the search space rather intuitively and finally present their impact in Sec. \\ref{sec:ablations}.\\\\\n\\subsection{Quantitative Results}\n\\noindent We report the results in Table \\ref{tab:main} and clearly observe the degradation of student performance while learning from Nasty teacher. Subsequently, our approaches: HTC and SCM increase learning from Nasty by upto 58.75\\% in CIFAR-10, 68.63\\% in CIFAR-100 and 60.16\\% in TinyImageNet. We significantly outperform the recent state of the art Skeptical \\cite{skeptical} and unlike them we also consistently outperform the Vanilla training. 
Moreover, we often achieve performance very close to, and in a few cases even better than, Normal KD (i.e., stealing from the unprotected victim model); see the $\\ssymbol{4}$-marked cells in Table \\ref{tab:main}. Further, we combine our training methods HTC and SCM with the novel stealing architecture Skeptical Student \\cite{skeptical} and report improvements in Table \\ref{tab:main}.\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{ccc cc|cccc H | cc}\n \\hline\n \\hline\n Dataset & Tch. & Stu. & Vanilla & Normal & Nasty & Skep. & HTC & SCM & $\\Delta$ & Skep. & Skep. \\\\\n & & & & KD & KD & (NeurIPS'21) & & & & +HTC & +SCM \\\\\n \\hline\n \\hline\n \\multirow{3}{*}{C10} & \\multirow{3}{*}{Res18} & CNN & 86.31 & 87.83 & 82.27 & 86.71 & \\underline{87.38} & \\textbf{87.85} & +1.14 & 87.17 & 87.04 \\\\\n & & Mob & 89.58 & 89.30 & 31.73 & 90.53 & 90.03$^\\ssymbol{4}$ & \\underline{90.48}$^\\ssymbol{4}$ & -0.05 & \\underline{91.45}$^\\ssymbol{4}$ & \\textbf{91.55}$^\\ssymbol{4}$ \\\\\n & & Shuf & 91.03 & 91.17 & 79.73 & 91.34 & 91.61$^\\ssymbol{4}$ & 91.93$^\\ssymbol{4}$ & +0.59 & \\underline{92.45}$^\\ssymbol{4}$ & \\textbf{92.76}$^\\ssymbol{4}$ \\\\\n \\hline\n \\multirow{3}{*}{C100} & \\multirow{3}{*}{Res18} & CNN & 58.38 & 62.35 & 58.62 & 58.38 & \\underline{61.21} & \\textbf{61.31} & +2.39 & 59.64 & 59.17 \\\\\n & & Mob & 68.90 & 72.75 & 3.15 & 66.89 & 71.01 & 71.06 & +4.17 & \\underline{71.48} & \\textbf{71.74} \\\\\n & & Shuf & 71.43 & 74.43 & 63.67 & 70.00 & 74.04 & \\textbf{75.23}$^\\ssymbol{4}$ & +5.23 & 73.78 & \\underline{74.23} \\\\\n \\hline\n \\multirow{3}{*}{C100} & \\multirow{3}{*}{Res50} & CNN & 58.38 & 61.84 & 58.93 & 59.15 & \\textbf{61.24} & \\underline{59.58} & +2.09 & 59.48 & 59.17 \\\\\n & & Mob & 68.90 & 72.22 & 3.03 & 66.65 & 70.49 & \\textbf{71.66} & +5.01 & \\underline{71.54} & 71.16 \\\\\n & & Shuf 
& 71.43 & 73.91 & 62.8 & 70.02 & 72.37 & 73.25 & +3.23 & \\underline{73.60} & \\textbf{74.73}$^\\ssymbol{4}$ \\\\\n \\hline\n \\multirow{2}{*}{TIN} & \\multirow{2}{*}{Res18} & Mob & 55.69 & 61.00 & 0.85 & 47.37 & 56.28 & \\textbf{61.01}$^\\ssymbol{4}$ & +13.64 & \\underline{59.05} & 58.88 \\\\\n & & Shuf & 60.30 & 63.45 & 23.78 & 54.78 & 60.46 & 62.09 & +7.31 & \\textbf{63.05} & \\underline{62.28} \\\\\n \\hline\n \\multirow{2}{*}{TIN} & \\multirow{2}{*}{Res50} & Mob & 55.89 & 57.84 & 1.10 & 48.21 & 56.00 & \\textbf{59.46}$^\\ssymbol{4}$ & +11.25 & 58.56$^\\ssymbol{4}$ & \\underline{58.88}$^\\ssymbol{4}$ \\\\\n & & Shuf & 60.30 & 62.02 & 24.27 & 56.08 & 60.80 & 61.55 & +5.47 & \\textbf{63.14}$^\\ssymbol{4}$ & \\underline{61.92} \\\\\n \\hline\n \\hline\n \\end{tabular}\n \n \\caption{{\\textit{Accuracy (higher is better)}} of HTC and SCM against baselines. \\textbf{bold} represents best performance, \\underline{underline} the second best in learning from Nasty Teacher. {$\\ssymbol{4}$ \\textit{represents instances}} that even outperform the Normal KD. 
{\\textit{Abbreviations}} -- Tch : Teacher, Stu : Student, Skep : Skeptical \\cite{skeptical}, Res50 : ResNet50, Res18 : ResNet18, Mob : MobileNetV2, Shuf : ShuffleNetV2.}\n \n \\label{tab:main}\n\\end{table}\n\n\n\\begin{table}[t]\n\n \n \n \\begin{minipage}{0.5\\linewidth}\n \\centering \n \\begin{tabular}{ccccc}\n \\hline\n \\hline\n & Vanilla & Nasty KD & Ours \\\\\n \\hline\n CNN & 58.38 & 58.62 & \\textbf{61.25} \\\\\n ShuffleNetV2 & 71.43 & 63.67 & \\textbf{72.71} \\\\\n \\hline\n \\hline\n \n \\end{tabular}\n \\caption*{(a) Effect of architecture choice.}\n \\end{minipage}\n \\begin{minipage}{0.5\\linewidth}\n \\centering\n \n \\begin{tabular}{ccccc}\n \\hline\n \\hline\n & 2 & 3 & 4 \\\\\n \\hline\n CNN & 60.99 & 61.24 & \\textbf{61.31} \\\\\n ShuffleNetV2 & 74.45 & 74.24 & \\textbf{75.23} \\\\\n \\hline\n \\hline\n \\end{tabular}\n \\caption*{(b) Effect of sequence length.}\n \n \\end{minipage}\n\n \n \n \n \n \n \n\\caption{Analysing hyperparameter choice for SCM.}\n\\label{tab:hyp-ens}\n\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n \n\n \n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\input{sec\/introduction}\n\n\\section{Background}\n\\input{sec\/background}\n\n\\section{Bitcoin Unreachable Nodes}\n\\input{sec\/unreachable}\n\n\\section{Our Solution}\n\\input{sec\/protocol}\n\n\\section{Discussion}\n\\input{sec\/discussion}\n\n\\section{Related Work}\n\\label{sec:related}\n\\input{sec\/related}\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\n\\input{sec\/conclusion}\n\n\n\n\\printbibliography\n\n\n\n\n\\end{document}\n\n\n\n\n\\subsection{NAT and P2P networks}\n\\textit{Network Address Translation} (NAT) \\cite{rfc1999nat} is a method to map IP addresses between incompatible networks.\nThe most common type, known as \\textit{Network Address and Port Translation} (NAPT), is often used to connect private networks to the Internet 
without the need to assign a unique address to each device.\nNAPT is often regarded as a solution to the IPv4 address exhaustion problem \\cite{richter2015primer}, since it allows a large number of devices to connect through a shared IP address.\nAs a side effect, such devices cannot be reached from the Internet, unless they first open a connection.\n\nWhile this is not a problem in a client-server setting, it is a serious limitation for P2P networks.\nNotably, it prevents NATted nodes from connecting to each other.\nTo overcome this limitation, \\textit{NAT traversal} techniques have been devised \\cite{hu2005nat}.\n\nThe Bitcoin reference client implements \\textit{Universal Plug-and-Play} (UPnP), which, however, is incompatible with CG-NATs, as it needs direct access from the host.\nFurthermore, the UPnP option is disabled by default due to a known vulnerability in the protocol.\nAs a consequence, NATted nodes only establish outbound connections, which in the reference client are limited to just 8.\n\n\\subsection{Transaction Propagation and Anonymity}\nForwarding a transaction in Bitcoin is a three-step process.\nFirst, the sender transmits an inventory (INV) message to advertise the hash of the transaction.\nThe receiver then checks the hash and, if it is unknown, requests the full transaction data with a GETDATA message.\nFinally, the sender transmits the full transaction in a TX message.\nThe INV-based transmission allows sending the transaction only to those nodes that have not yet received it.\nWhen a node receives a new transaction, it relays it to its peers following the same process.\n\nThe relay step is fundamental in determining the propagation pattern.\nSince 2015, Bitcoin has adopted the \\textit{diffusion spreading} protocol, where nodes relay transactions to each neighbor with an independent, exponential delay.\nNewly-generated transactions are transmitted in the same way by their source.\n\nAs shown in \\cite{fanti2017deanonimization}, the pattern 
generated by this gossip-like protocol leads to possible deanonymization attacks based on the so-called \\textit{rumor centrality}~\\cite{shah2012rumor}.\nIn simple words, since a transaction spreads symmetrically from each node to its peers, it is possible to determine the origin of the spreading (i.e., the first node that transmitted the transaction) by observing the state of its propagation through the network.\nGiven that the source broadcasts the transaction in the same way as the relays, detecting the origin of the propagation often means identifying the creator of the transaction (in terms of the device address).\n\nBased on this fact, several attacks~\\cite{koshy2014analysis,fanti2017deanonimization,biryukov2019deanonymization} have been shown where an \\textit{eavesdropper adversary} connects to all reachable nodes and applies the so-called \\textit{first-spy} estimator, which simply associates a transaction to the first node that relays it.\nFanti et al.~\\cite{fanti2017anonymity} showed that this type of strategy often has very high levels of accuracy.\n\n\n\n\n\\subsection{Unreachable nodes}\nUnreachable nodes have been extensively studied by Wang et al.~\\cite{wang2017towards}.\nIn order to perform their analyses, they deployed around 100 nodes, through which they collected information on more than 100 K unreachable peers, which generated more than 2 M transactions.\nTheir findings show that most connections last for less than 60 seconds, while, at the same time, most transaction propagations are sent over long-lived connections (more than 100 seconds), showing a high degree of centrality.\nFinally, they show a method to deanonymize transactions coming from unreachable peers, with the help of an external listener node.\nTheir results show that unreachable nodes are also susceptible to the first-spy estimator attack.\nNote that our protocol makes this attack much less effective, since new transactions are proxied to a single reachable node, reducing the 
probability that the attacker receives the transaction.\nAt the same time, new transactions are mixed with transactions proxied from its peers, thus reducing the accuracy of the attack.\n\nOther deanonymization attacks also target U nodes globally by means of fingerprinting techniques.\nBiryukov et al. \\cite{biryukov2014deanonymisation} make use of ADDR messages to uniquely identify U nodes by the set of their peers.\nTheir technique allows linking multiple transactions created by the same node over a single session.\nIn this paper, we proposed a change in the address advertisement by U nodes that would make this attack ineffective.\n\nIn \\cite{biryukov2015tor}, Biryukov et al. devise a technique to deanonymize U nodes connecting via Tor, even through multiple sessions. \nHowever, their attack is specific to Tor users.\n\nFinally, Mastan et al. \\cite{mastan2018new} exploit block request patterns to identify U nodes over consecutive sessions.\nHowever, their technique only allows linking sessions and thus needs to be used in conjunction with other deanonymization techniques.\n\n\\subsection{Transaction Propagation Anonymity}\nAnonymity properties of transaction propagation have been extensively studied.\n\nKoshy et al. \\cite{koshy2014analysis} are among the first to show a practical deanonymization technique for Bitcoin, based on transaction propagation analysis.\nThey show that anomalies in the propagation can be exploited to identify the source of transactions.\n\nNeudecker et al.~\\cite{neudecker2017could} combine observations of the message propagation with Bitcoin address clustering techniques. \nHowever, their results show that for the vast majority of users this information does not facilitate deanonymization.\n\nIn \\cite{fanti2017anonymity}, Fanti et al. 
thoroughly analyze the anonymity properties of both the former and the current Bitcoin propagation protocols (Trickle and Diffusion).\nThey theoretically prove that both protocols offer poor anonymity on networks with a regular-tree topology.\nThey also identify the symmetry of current spreading protocols as the main characteristic that allows deanonymization attacks.\nAn alternative protocol, called Dandelion, is then proposed in \\cite{venkatakrishnan2017dandelion,fanti2018dandelion++} that specifically addresses this issue.\nDandelion breaks the symmetry by proxying the broadcast of new transactions through a network-wide circuit of nodes.\nFurthermore, they increase anonymity by mixing new transactions over the same path.\nHowever, their protocol is somewhat hard to implement and only applies to R nodes. \nOur protocol, instead, involves both R and U nodes and does not present comparable implementation difficulties.\n\n\\subsection{Limitations}\nOur protocol requires R nodes to have U peers connected to them.\nHowever, newly-joined R nodes usually have to wait some time before other peers connect to them.\nWe address this limitation by having new R nodes use the diffusion protocol until they have a sufficient number of U peers.\nAdditionally, to prevent an adversary from taking advantage of this situation (by filling up all inbound slots), we also adopt the bucketing strategy.\nSpecifically, we make R nodes use our protocol only when enough U peers from different buckets are connected.\n\n\\subsection{Propagation and anonymity}\nTo better understand our protocol it is useful to depict the propagation pattern of a transaction.\n\nLet us consider an R node $O$ generating a transaction $tx$.\nThe following sequence of events happens:\n\\begin{enumerate}\n\t\\item $O$ selects a proxy $P$ from its proxy set, marks $tx$ as $proxying$, and sends it to $P$;\n\t\\item $P$ receives $tx$ and proxies it with probability $p$, or diffuses it otherwise;\n\t\\item If proxying 
$tx$, $P$ selects a node $R$ from its proxy set $S$ and sends it $tx$;\n\\end{enumerate}\nProxying transactions are relayed through a sequence of R and U nodes until they get diffused.\nDiffusion can happen at any step, except for the first one.\nPropagation from a U node follows a similar pattern.\n\nA major risk of proxied broadcast is that a transaction might take too long to diffuse, or not be diffused at all.\nAs for diffusion time, we can statistically guarantee that every transaction is diffused within a reasonable time.\nSince at every hop the transaction $tx$ is diffused with probability $1-p$, it is possible to tune this value to obtain a target number of hops through which $tx$ is proxied on average.\nThe use of timeouts allows dealing with a transaction not being diffused.\n\nWith respect to anonymity, our protocol is designed to be resistant against a first-spy estimator.\nThis type of adversary connects to all R nodes and links each transaction to the first node from which it has been received.\nAs demonstrated in \\cite{fanti2017anonymity}, this strategy is very effective with the current propagation model.\nHowever, the changes introduced by our protocol make it very unlikely for a node to first receive a transaction from its source.\nOn the contrary, most of the time, transactions will be received from a node different from the origin, thanks to proxying.\nFurthermore, each transaction is mixed with many others generated by nodes in the proxying path, which are indistinguishable from each other to the receiving node.\nThis means that any claim about the origin of a transaction can be easily denied.\n\nNote that our protocol is designed to resist very powerful adversaries controlling several nodes and maintaining multiple connections to all reachable nodes.\nThe adversary can combine information from all of its nodes and coordinate them to influence or track the mixing set of a target node.\nHowever, we showed in the previous section how such an 
adversary has limited capabilities to affect the security of the protocol.\n\n\\subsection{Ephemerality of U nodes}\nA possible issue in our design is the short connection time of many U nodes.\nIn fact, while R nodes are relatively stable \\cite{statoshi-peers}, U nodes often experience very short-lived connections \\cite{wang2017towards}.\nThis behavior might affect the efficiency of the protocol.\nHowever, the timeout mechanism is also meant to deal with this kind of problem and can be fine-tuned independently by each node, depending on the experienced churn.\n\nMoreover, the presence of short-lived proxy nodes, if properly exploited, might add to the anonymity of our protocol, as it makes it harder to trace a transaction back to its origin.\n\n\\subsection{NAT adoption}\nAnother potential limitation of our solution is that it is based on the unreachability of NATted nodes.\nHowever, if IPv6 gets adopted by the majority of nodes, it is possible that NATs will cease to be used. \nThe introduction of IPv6, in fact, was mainly intended to deal with the IPv4 address exhaustion problem and remove the need for NATting~\\cite{rfc2007ip6}.\n\nAlthough growing, the adoption rate of IPv6 seems to be variable \\cite{mccarthy2018ipv6} and not uniform worldwide, with statistics strongly dependent on the adopted metrics~\\cite{czyz2014measuring}.\nAs for the Bitcoin network, Neudecker et al.~\\cite{neudecker2019characterization} showed that, unlike IPv4, IPv6 connections have not grown over the past two years.\n\nEither way, the most optimistic estimates predict a complete adoption within 7-8 years~\\cite{howard2019ipv6}.\nFrom this perspective, our protocol should be considered as a medium-term solution, likely able to work for the next decade.\n\n\\subsection{Network Changes}\nWe first describe the changes to basic network protocol behavior that allow us to improve security and efficiency. 
\n\n\\subsubsection{Explicitly distinguish between R and U nodes}\nAlthough there is some difference in the behavior of R nodes and U nodes in the reference client, the Bitcoin network protocol does not make any explicit distinction between them.\nHowever, our solution is based on the different characteristics of the two types of node, as shown in \\S \\ref{sect:unreachable}.\nAs such, explicitly distinguishing between R nodes and U nodes is a necessary step.\n\nDifferent strategies can be followed by a node to determine its reachability.\nA naive approach would be to verify if the client accepts incoming connections.\nHowever, it might be the case that a node is accepting connections but its address is unreachable from the outside.\nA better approach is to have the node connect to its own address, as seen by its peers, and set itself as reachable if the attempt succeeds, and as unreachable otherwise.\n\n\\subsubsection{Increase U node connections}\nThe second modification we propose is to increase the number of outbound connections of U nodes.\nThis change has several effects.\nFirstly, it helps level the imbalance of connectivity between R and U nodes.\nIn fact, while R nodes can reach 125 connections, U nodes only maintain up to 8, corresponding to their outbound peers.\nOn the other hand, inbound slots are often underutilized by R nodes~\\cite{decker2013information}, which means they can handle a higher number of connections.\n\nSecondly, increasing the number of peers means receiving, and relaying, more transactions per unit of time.\nGiven the great number of U nodes, even a small increase in their connections might produce a significant improvement in the propagation speed of transactions and blocks.\n\nFurthermore, from a security perspective, it has been shown that increasing the number of outbound connections can improve resistance against DoS attacks~\\cite{neudecker2019network}, eclipse attacks~\\cite{heilman2015eclipse}, and isolation 
attacks~\\cite{apostolaki2017hijacking}.\n\nFinally, a higher number of connections for U nodes can be beneficial for the anonymity of our propagation protocol, as we will show in \\S \\ref{protocol}.\n\n\\subsubsection{Do not advertise U nodes addresses}\nIn the current protocol, U nodes, like R nodes, advertise their public address to their peers.\nThese addresses represent 90\\% of those being spread through the network~\\cite{wang2017towards,miller2015discovering}.\nHowever, being unreachable, these addresses are of no use to any other node.\nAt the same time, they increase network traffic~\\cite{biryukov2014deanonymisation} and likely produce a high number of failed connection attempts.\nAdditionally, they potentially reduce the availability of reachable addresses, since new (often unreachable) addresses replace old ones in the node database when the address pool is full. \n\nFrom a security perspective, these addresses enable fingerprinting techniques, which allow for deanonymization attacks~\\cite{biryukov2014deanonymisation,biryukov2015tor}.\nDisabling their advertisement to outbound peers would effectively invalidate the few known deanonymization attacks targeting U nodes.\n\n\\subsection{Transaction Propagation Protocol}\n\\label{protocol}\nIn the following, we discuss and describe our new propagation protocol design.\nThe protocol explicitly leverages U nodes to improve resiliency against deanonymization.\nSimilarly to Dandelion~\\cite{venkatakrishnan2017dandelion,fanti2018dandelion++}, we include two essential concepts in our design: proxied broadcast and transaction mixing. 
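The effect of proxying on path length can be illustrated with a toy Monte Carlo sketch (hypothetical Python, not part of the Bitcoin reference client): each holder of a proxied transaction keeps forwarding it with the per-hop proxy probability $p$ discussed earlier, and diffuses it otherwise, so the number of forwards before diffusion is geometric with mean $1/(1-p)$.

```python
import random

def proxied_hops(p, rng):
    """Number of forwards a transaction makes before being diffused.

    The origin always hands the transaction to a proxy (first hop);
    every subsequent holder keeps proxying with probability p and
    diffuses it otherwise, so the hop count is geometric.
    """
    hops = 1
    while rng.random() < p:
        hops += 1
    return hops

rng = random.Random(1)
p = 0.5
trials = 20000
mean_hops = sum(proxied_hops(p, rng) for _ in range(trials)) / trials
# mean_hops is close to 1 / (1 - p) = 2 for p = 0.5
```

Tuning $p$ therefore trades anonymity (longer proxy paths, more mixing opportunities) against propagation delay.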
\nProxied broadcast consists in delegating the diffusion of a new transaction to another node (called \\textit{proxy}), making it possible to hide the real origin of the transaction.\nMixing consists in sending to the proxy other new (proxied) transactions, received from other peers.\nThis makes it hard for the proxy to distinguish between transactions generated by the sender and others relayed by the sender, but generated by other nodes.\n\nGiven what was said about U nodes in \\S \\ref{sect:unreachable}, we want to leverage their protected position in the network to conceal the origin of a transaction.\nThe core idea is to make R nodes, which are more susceptible to deanonymization, use U nodes as proxies for their transactions.\nThis way, such transactions will appear to be generated by their proxies instead of the actual source.\nAdditionally, we hinder proxies from distinguishing such transactions by mixing them with transactions from other nodes.\n\nBefore detailing the propagation protocol, we define our adversary model and motivate our design choices.\n\n\\paragraph{Network and Adversary Model}\n\\begin{figure}[tbp]\n\t\\centerline{\\includegraphics[width=0.7\\columnwidth]{img\/network}}\n\t\\caption{Our view of the Bitcoin network: the origin O of a transaction is connected to R and U nodes. 
The adversary (colored in red) deploys both R and U nodes and connects to all reachable nodes.}\n\t\\label{fig:netmodel}\n\\end{figure}\nTo describe our protocol, we model the Bitcoin network as in Figure \\ref{fig:netmodel}.\nWe call $O$ the R node running the protocol and generating new transactions.\nOther nodes are denoted by $R_i$ if reachable and $U_i$ if unreachable, for $i=1,2,...$.\nThe adversary $A$ aims at deanonymizing transactions generated by $O$ and can control various nodes, both reachable and unreachable, which we denote by $R^A_i$ and $U^A_i$, respectively.\n$A$ can connect to all reachable nodes and also create multiple connections to the same node (including $O$).\nNonetheless, $A$ cannot directly connect to other U nodes.\nIn that respect, $A$ can only deploy multiple R nodes to increase the chance of having honest U nodes connecting to it.\nAdditionally, the adversary can create and transmit transactions, as well as relay or retain others received from its peers.\n\n\\paragraph{Design}\nIn our protocol, R nodes leverage U nodes as proxies and use transactions coming from other U peers for mixing.\nConversely, U nodes use R nodes as proxies and mix new transactions with those coming from other R peers.\n\nThis scheme protects both R and U nodes.\nIn fact, U nodes cannot distinguish between transactions generated by their R peers and those proxied by such peers but generated by other U nodes.\nSimilarly, U-generated transactions are indistinguishable to R nodes from those generated by other R nodes and proxied by their U peers.\n\nHowever, a naive design could lead to easy deanonymization attacks, and also to an ineffective propagation of new transactions through the network.\nTherefore, we need to define (1) which peers are used for proxying and (2) which transactions are used for mixing.\n\nAs for point (1), an R node can select one, all, or a subset of its U peers.\nNote that an adversary can control a large subset of U peers of R 
nodes.\nThis increases her probability of being selected as the first proxy for many R-generated transactions, allowing an effective use of the first-spy estimator.\nAt the same time, if we send all transactions to a single proxy, it will be easy for that proxy to narrow down the set of transactions possibly generated by the sender.\nAs such, we first select a subset of peers to be used as proxies and pick a random one within this subset for every proxy operation.\nWe call this subset the \\textit{proxy set}.\nIn order to distribute transactions among all nodes and minimize the risk of a proxy collecting all new transactions from a node, we change the proxy set at a certain rate. \nWe call \\textit{epoch} the time frame in which a proxy set is used.\n\nAs for point (2), we first need to identify which transactions are suitable for mixing.\nNote that transactions received by an R node from other R peers following our protocol have already been diffused, making them unsuitable for mixing (since the adversary might already know them).\nSimilarly, transactions diffused by U peers might have already been received by the adversary.\nOn the other hand, it is easy to see that proxied transactions are the least likely to be known to the adversary, and thus best suited for mixing.\n\nTherefore, we need to identify which transactions are being proxied and which are being diffused.\nTo do so, we mark proxied transactions and distinguish between two propagation phases: the \\textit{proxying phase} and the \\textit{diffusion phase}.\nWe call transactions in the proxying phase \\textit{proxy transactions}.\nWhen a new transaction is created, it is marked as proxying and sent to a node of the proxy set.\nAs for mixing, nodes use proxy transactions coming from their peers.\nWe call the set of proxy transactions used for mixing the \\textit{mixing set} of a node.\nTransactions in the mixing set are relayed through the same path as newly-generated ones so as to make them indistinguishable 
from each other.\n\nIdeally, we would like the mixing set to be as large as possible.\nHowever, if we used all incoming proxy transactions, they would never be diffused.\nInstead, we include only a fraction of such transactions in the mixing set, and diffuse the rest.\nTo do so, we need to decide which transactions to diffuse and which to relay.\nA possible strategy is to select some peers in each epoch, and only use transactions coming from them.\nHowever, if an adversary controls many of these peers and also the selected proxy, she could track most transactions in the mixing set of the target, leading to an easy deanonymization.\nTo avoid such a risk, we select proxy transactions from all of our peers and probabilistically include them in our mixing set.\nIn particular, for each proxy transaction, we keep proxying it with a certain probability $p$ and diffuse it otherwise.\nThis way, despite being able to track or inject proxy transactions for a specific node, an adversary cannot affect the number of honest transactions included in its mixing set.\nA correct choice of $p$ will be fundamental for the effectiveness and efficiency of our protocol.\n\nTo further protect R nodes from adversaries controlling many inbound connections, \nwe adopt the \\textit{bucketing} strategy used in Bitcoin Core for managing addresses.\nThis mechanism is used to prevent an adversary from filling up the address database with malicious IPs,\nand it is based on the assumption that the attacker only controls nodes from a limited address space~\\cite{neudecker2015simulation}.\nIn particular, each bucket contains addresses from a different subnet.\nSimilarly, we make R nodes select proxies and transactions for the mixing set uniformly at random among peers from different buckets.\n\nFinally, to cope with the risk of a transaction not being diffused, due to a DoS attack by a proxy or to an excessively long proxying phase, each node sets a timeout $t$ for every proxied transaction.\nWhen $t$ 
expires, the node verifies if the transaction has been diffused by checking if the majority of outbound peers have advertised it back to the node.\nWe choose to monitor outbound peers to minimize the risk of an adversary deceiving an R node by relaying proxied transactions from other adversary-controlled U peers.\nThe same rule is applied to both new and relayed transactions, so as to avoid deanonymization due to rebroadcast.\nIn the current protocol, in fact, a rebroadcast is only done by the source of the transaction, and can thus reveal its origin~\\cite{koshy2014analysis}.\nIn our protocol, instead, rebroadcast applies to all proxied transactions, thus leaking no new information.\n\n\\paragraph{Protocol Rules}\nTo detail the propagation rules of our protocol, we first define the \\textit{proxy} operation on a transaction $tx$ as follows:\n\\begin{algorithm}\n\\SetAlgoLined\nPick a random peer $P$ from the proxy set\\;\nSend $tx$ to $P$ and set a timeout $t$\\;\nWhen $t$ expires:\\\\\n\\eIf {\\text{The majority of outbound peers advertised} $tx$}\n{Return}\n{Repeat}\n\\caption{Proxy($tx$)}\n\\end{algorithm}\n\nNext, we define the propagation rules for R nodes:\n\\begin{algorithm}\n\\SetAlgoLined\nDivide time into epochs\\;\n\\If{\\text{New epoch begins}}\n{Select subset $S$ from U peers uniformly at random from different buckets\\;\n Set $S$ as the proxy set}\n \n\\If{\\text{Create new transaction $tx$}}\n{Mark $tx$ as $proxying$\\;\n Run $proxy(tx)$} \n\\If{\\text{Receive a $proxying$ transaction $tx_m$ from a U peer}}\n{with probability $p$, execute $proxy(tx_m)$\\;\n otherwise, $diffuse(tx_m)$}\n\n\\caption{R Propagation Rules}\n\\end{algorithm}\n\nU nodes follow the same rules, except they use R peers instead of U peers and do not use buckets.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{S0} \nThe ongoing discovery of extrasolar planets has resulted\nin increased investigation of theories of planet formation\n(e.g. 
Mayor \\& Queloz 1995;\nMarcy, Cochran, \\& Mayor 2000; Vogt et al. 2002).\nPossible formation scenarios for gas giant planets are either through\ndirect gravitational instability occurring in\na young protostellar disk (e.g. Boss 2001) or through \nthe accumulation of a solid core that\nundergoes rapid gas accretion once its mass has \nreached a critical value\n$\\simeq 15$ Earth masses (e.g. \nPollack et al. 1996). In either case planetary formation is likely\nto be initiated at significantly greater orbital radii than those\ncurrently observed implying that a process of orbital\nmigration has brought them closer to the central star.\nDisk protoplanet interaction provides a natural migration mechanism.\n\nA protoplanet exerts torques on a protostellar disk\nthrough the excitation of spiral density waves \n(e.g. Goldreich \\& Tremaine 1979;\nPapaloizou \\& Lin 1984; Lin \\& Papaloizou 1993).\nThese waves carry away either positive or negative angular momentum \nwhich is deposited in the disk at the locations\nwhere the\nwaves are damped. As a result of this, a negative torque is exerted\non the\nprotoplanet by the outer disk and a positive torque \nis exerted on it by the disk interior to its orbit.\n\nIn recent years much numerical\nwork on disk protoplanet interactions\nhas been undertaken for protoplanets with a range\nof masses in both laminar and turbulent disks\nin which the turbulence is maintained by the magnetorotational\ninstability\n(eg. Bryden et al. 
1999; Kley 1999; Lubow, Seibert \\&\nArtymowicz 1999;\nD'Angelo, Henning \\& Kley 2002; Nelson \\& Papaloizou 2003;\nWinters, Balbus \\& Hawley 2003a; Papaloizou, Nelson \\& Snellgrove \n2004; Nelson \\& Papaloizou 2004).\nThis work indicates that a\nsufficiently massive protoplanet can open up an annular gap in the disk\ncentred on its orbital radius.\nFor typical protostellar disk models, gap formation at $5AU$ starts to\noccur for protoplanet masses of around a Saturn mass, with the gap\nbecoming deep for a Jovian mass.\n\nFor low-mass disks, gap formation, or the transition\nfrom linear to nonlinear disk response, is associated with the transition\nbetween type I migration (e.g. Ward 1997) and type II migration\n(e.g. Lin \\& Papaloizou 1986). For both of these regimes the\ntime scale of the migration in standard disk models,\nwith mass comparable to that of the minimum mass solar nebula,\nis shorter than the disk lifetime, making it a threat to the\nsurvival of protoplanets (see e.g. Terquem, Papaloizou \\& Nelson 2000\nfor a review) as well as a mechanism for producing close-orbiting giant planets. \n\nMore recently Masset \\& Papaloizou (2003) considered a form of\npotentially very fast migration induced by the action\nof coorbital torques acting on disk material as it passes\nthrough the coorbital region of a migrating protoplanet.\nThis is in contrast to torques produced by waves dissipating\nin the disk away from the orbit. They found, using two-dimensional\nglobal simulations at rather low resolution, that such torques\ncould lead to a positive feedback and a fast migration for\ntypically Saturn-mass protoplanets in massive disks.\nThis may provide a mechanism for bringing sub-Jovian \nmass objects close to the star which, if they are then\nslowed down and undergo\nsome additional accretion, may become hot Jupiters. 
\n\nIn this paper we explore some calculations of disk\nplanet interactions and related migrational torque calculations using local \nshearing box simulations. These focus on a local patch on the disk\nand so may be done at high resolution for relatively low cost.\nWe consider both non turbulent disks and disks with turbulence driven\nby the magnetorotational instability or MRI (Balbus \\& Hawley 1991).\nThis is the most likely mechanism producing angular\nmomentum transport and thus the '{\\it viscous}' evolution of the disk\nthat ultimately drives type II migration (Lin \\& Papaloizou 1986).\n\nWe discuss the noisy type I migration in turbulent\ndisks and the condition for gap formation.\nWe also explore at length coorbital torques associated with migrating\nprotoplanets and find that the local box simulations are consistent\n with the lower resolution global ones. \n\nThe plan of the paper is as follows.\nIn section \\ref{S1}\nwe give the basic equations and describe the\nlocal shearing box model in frames that may be either uniformly rotating \nor migrating with an angular velocity which is a function of time. \nIn section \\ref{S3} we \ndescribe the numerical procedure and models simulated.\nIn section \\ref{S4} we go on to\ndescribe results of simulations with protoplanets and the calculation\nof the\nforces acting \non them that lead to migration. In particular we examine the coorbital\ntorque generated by material as it passes through the coorbital region\nfrom one side of the protoplanet to the other and present numerical\nresults for two different flow through speeds.\nFinally in section \\ref{S5} we discuss our results\nand their implications for extrasolar planetary configurations.\n \n\n\n\\section{Basic Equations and Model set up}\\label{S1} \n\nIn this paper we describe local simulations\nof protoplanets interacting with a Keplerian disk flow\nwith and without \nMHD turbulence driven by the MRI. 
We consider simulations \nfor which the magnetic field, when present, has zero net flux so that \nan internally generated dynamo is maintained. Simulations with turbulence\nrequire high resolution with usually not less than $32$ grid cells per scale\nheight $H.$ They are greatly facilitated by considering\na local shearing box \nrather than a complete disk (Goldreich \\& Lynden-Bell 1965). \n\n\\noindent\nThe governing equations for ideal MHD written in an inertial frame \nare:\n\\begin{equation}\n\\frac{\\partial \\rho}{\\partial t}+ \\nabla \\cdot (\\rho {\\bf v})=0, \\label{cont}\n\\end{equation}\n\\begin{equation}\n\\rho \\left(\\frac{\\partial {\\bf v}}{\\partial t}\n + {\\bf v}\\cdot\\nabla{\\bf v}\\right)= \n-\\nabla P -\\rho \\nabla\\Phi +\n\\frac{1}{4\\pi}(\\nabla \\times {\\bf B}) \\times {\\bf B}, \\label{mot}\n\\end{equation}\n\\begin{equation}\n\\frac{\\partial {\\bf B}}{\\partial t}=\\nabla \\times ({\\bf v} \\times {\\bf B}),\n\\label{induct}\n\\end{equation}\nwhere ${\\bf v}, P, \\rho$ and ${\\bf B}$ denote the fluid\nvelocity, pressure, density and magnetic field respectively.\nThe gravitational potential is $\\Phi.$ It contains contributions\nfrom a central mass $M_*$ and an orbiting secondary mass or planet\n$M_p.$\nThus adopting a cylindrical coordinate system $(r, \\varphi, z)$\ncentred\non $M_*,$\n\\begin{equation} \\Phi=-GM_*\/r \n -{GM_p\\over \\sqrt{r^2 + R^2 -2r R \\cos(\\varphi-\\varphi_s)+ z^2 +b^2}},\\end{equation}\n where $(R, \\varphi_s , 0)$ are the coordinates of the secondary,\n$M_p$ is\nthe secondary mass and $G$ the gravitational constant.\n\n\n\n\\subsection{The Local Shearing Box}\nFollowing Goldreich \\& Lynden-Bell (1965)\nwe consider a small Cartesian box centred on a point at which the Keplerian\nangular velocity is $\\Omega_p.$\nNormally one takes the centre of the box to be at a fixed point\nin the underlying disk flow, which corresponds to\nthe motion of a secondary mass in circular orbit. 
However, it is possible\nto consider a box centred on a point with changing radius $R(t)$\nand varying angular velocity $\\Omega_p(t) = \\sqrt{GM_*\/R(t)^{3}},$ \n$M_*$ being the central mass and $G$ being the gravitational constant.\nFor a box centred on the secondary mass this corresponds to the situation\nwhen the secondary migrates.\nWe also suppose that as the point moves the \ndisk semi-thickness is $H(t)$ and \nthat this determines the radial scale of the flow.\n \nWe use the local Cartesian coordinates\n$x = r - R(t),$ $z,$ and the mutually orthogonal coordinate\n$y,$ to define local dimensionless \nCartesian coordinates $(x' , y' , z')=\n( x\/H(t), y\/H(t), z\/H(t))$ with associated\nunit vectors $({\\bf i}, {\\bf j}, {\\bf k}).$ \n\n\\noindent The direction ${\\bf i}$\nis outwards along the line joining the origin\nto the central object while ${\\bf k}$ points in the vertical direction.\nThe direction ${\\bf j}$ is that of the unperturbed Keplerian shear flow.\nIn this non-inertial frame the flow velocity is ${\\bf u}.$\nIt is also convenient to use a rescaled time\n$t'$ such that $d t' = \\Omega_p(t)dt.$\n\n The idea behind using the local box is that there is a small\nparameter $ h = H\/R,$ measuring the aspect ratio of the disk.\nWe adopt a spatially isothermal equation of state $P =\\rho c_s^2,$\nwith $c_s$ being the sound speed, which may depend on time, such that\nthe aspect ratio is also equal to the\nratio of sound speed to orbital velocity so that\n$h = c_s\/(R\\Omega_p).$\n\n For a moving centre we suppose that there is another\nsmall parameter $|\\epsilon| = |dR\/dt\/(\\Omega_p R)| \\sim |dH\/dt\/(\\Omega_p H)|.$\nThis measures the ratio of the orbital time to\nthe time for the centre to migrate significantly in the radial direction.\n\n\\noindent We shall assume that $|\\epsilon| \/h = O(h^{k}),$\n for some $ 1 > k > 0.$ This enables us to\n retain terms of order $\\epsilon \/h = dR\/dt\/(\\Omega_p H)$\nbut neglect terms of order $\\epsilon$ 
and higher.\n\n\n\\subsection{Dimensionless Variables}\n We introduce a dimensionless velocity, sound\nspeed and planet mass through\n${\\bf u}' = {\\bf u}\/(\\Omega_p H),$ $c_s' = c_s\/(\\Omega_p H),$ and\n$q = M_p R^3\/(M_* H^3)$ respectively.\nAs we do not include the self-gravity of the disk in calculating\nits response, the magnitude of the disk density may be arbitrarily\nscaled. All the simulations presented here start with a uniform density\n$\\rho_0$ which, together with $H,$ may be used to specify the mass scale.\nWe define a dimensionless density and magnetic field through\n$\\rho' = \\rho \/\\rho_0$ and\n${\\bf B}' = {\\bf B}\/(\\Omega_p H \\sqrt{4\\pi\\rho_0}).$\nThe equation of motion may be written in terms of these\ndimensionless variables, after taking the limit\n$\\epsilon \\rightarrow 0$ and $h \\rightarrow 0$\nsimultaneously but retaining the ratio $\\epsilon \/h,$ in the form\n\n\\begin{equation}\n\\frac{\\partial {\\bf u}'}{\\partial t'}\n+{\\bf u}'\\cdot\\nabla{\\bf u}'\n+ 2{\\bf {\\hat k}}{\\bf \\times}{\\bf u}' -3x'{\\bf{\\hat i}}= -{\\epsilon{\\bf{\\hat j}}\n\\over 2h}\n-{\\nabla (\\rho' c_s^{'2}) \\over \\rho'} -\\nabla\\Phi_p' +\n{(\\nabla \\times {\\bf B}') \\times {\\bf B}' \\over \\rho'}, \\label{boxmotdim}\n\\end{equation}\nwhere\n\\begin{equation} \\Phi_p' = -{q \\over \\sqrt{x^{'2} + y^{'2} + z'^{2} + b^{'2}}},\n\\label{dimpot}\\end{equation}\nwith dimensionless softening parameter $b' = b\/H;$\nthe spatial derivatives are of course with respect to the dimensionless\ncoordinates.\nSimilarly the dimensionless induction equation becomes\n\\begin{equation}\n\\frac{\\partial {\\bf B}'}{\\partial t'}=\n\\nabla \\times ({\\bf u}' \\times {\\bf B}'),\n\\label{inductdim}\n\\end{equation}\nand the dimensionless continuity equation is\n\\begin{equation}\n\\frac{\\partial \\rho'}{\\partial t' }+ \\nabla \\cdot (\\rho' {\\bf u}') = 0.\n\\label{contdim}\n\\end{equation}\n\nWe note that the term $\\propto x'$ in equation 
(\\ref{boxmotdim})\nis derived from a first order Taylor expansion about $x=0$\nof the combination of the\ngravitational acceleration due to the central mass and the centrifugal\nacceleration (Goldreich \\& Lynden-Bell 1965).\nHowever, in order to reduce computational requirements,\nin the work presented here, we have neglected the $z'$ dependence\nof the gravitational potential due to the secondary\n given by equation (\\ref{dimpot})\nand also of that due to the central mass.\nTherefore vertical\nstratification is neglected.\nSimulations of unstratified\nboxes are often undertaken \nin MHD because they contain the essential\nphysics and ease computational requirements\n(eg. Hawley, Gammie \\& Balbus 1995).\n\n\n\\noindent In the steady state box with a fixed centre and\nno planet or magnetic field, the equilibrium\nvelocity is due to Keplerian shear\nand thus\n\\begin{equation} {\\bf u'} = (0,-3 x'\/2 ,0). \\label{boxv}\\end{equation}\n\n\\subsection{Steady and Moving Frames}\nWhen the centre is fixed so that $\\Omega_p,$ $R$ and $H$ are constant,\nequations (\\ref{boxmotdim}- \\ref{contdim}) \nare the standard equations for a shearing box\n(Goldreich \\& Lynden-Bell 1965).\n\nWhen the centre migrates with $\\Omega_p,$ $R$ and $H$\nbeing functions of time, in the limit of small $\\epsilon$ and $h$\nbut with $\\epsilon \/h$ retained, the equations are the same but for one added\ndimensionless acceleration $ -\\epsilon {\\hat j}\/(2h).$\nThis corresponds to the dimensionless torque required to\nmake the disk gas move with speed $-dR\/dt$ in the absence\nof the secondary. 
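This identification can be checked directly from equation (\\ref{boxmotdim}):\nwriting ${\\bf u}' = (v'_x, -3x'\/2 + v'_y, 0)$ and seeking a steady, spatially\nuniform drift with no planet or magnetic field, the $y$ component balances\nthe advection of the shear and the Coriolis term against the added\nacceleration,\n\\begin{equation}\n-{3\\over 2}v'_x + 2v'_x = -{\\epsilon \\over 2h}, \\qquad {\\rm giving} \\qquad\nv'_x = -{\\epsilon \\over h} = -{dR\/dt \\over \\Omega_p H},\n\\end{equation}\nwhich in dimensional terms is a uniform radial drift of the gas at speed\n$-dR\/dt.$\n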
This situation means that a migrating\nplanet moving through non migrating gas with speed $dR\/dt$ \nis equivalent \nto a non migrating planet immersed in a disk with gas given a torque\nwhich results in it migrating with speed $-dR\/dt.$ \nThis symmetry has already been exploited without justification in two dimensional global\nsimulations by Masset \\& Papaloizou (2003).\nHowever, the approximation scheme requires $\\epsilon$ and $h$ to be small.\nThis means that the migration\ntime to move through $R,$ measured by\n$1\/ \\epsilon,$ has to be very long compared to the orbital time.\nBut the time to migrate through $H,$ which is measured by\n$h\/ \\epsilon,$ can be significantly faster.\n \n\\subsection{Boundary Conditions}\n\nThe shearing box is presumed to represent a local patch of a differentially\nrotating disk. Thus the appropriate boundary conditions on the bounding faces\n$y = {\\rm constant} = \\pm Y$ and $z = {\\rm constant} = \\pm Z$ come\nfrom the requirement of periodicity in the local Cartesian coordinate\ndirections normal to the boundaries.\nOn the boundary faces $x = {\\rm constant} = \\pm X,$ \nthe boundary requirement is for periodicity in local shearing coordinates.\nThus for any state variable\n$F(x,y,z),$ the condition is that $F(X,y,z) = F(-X, y - 3Xt', z).$\nThis means that information on one radial boundary face is communicated\nto the other boundary face at a location in the azimuthal coordinate\n$y$ shifted by the distance\nthe faces have sheared apart since the start of the simulation\n(see Hawley, Gammie \\& Balbus 1995).\n\nIn dimensionless coordinates\nthe boundaries of the box are $x' =\\pm X\/H,$ $y' =\\pm Y\/H,$ $z' =\\pm Z\/H.$\nAlso if the magnetic field has zero net flux and is dynamo generated, and\nthus a spontaneous product of the simulation, then given that\nthe only parameter occurring in equations\n(\\ref{boxmotdim}-\\ref{contdim})\nis the dimensionless mass $q = M_p R^3\/(M_* H^3),$\nwe should be able to consider the 
dependence of the\ntime averaged outcome of a simulation to be only on\n$ X\/H, Y\/H, Z\/H,$\nand $q = M_p R^3\/(M_* H^3).$\n\n \\begin{table}\n \\begin{tabular}{ccccc}\n \\hline\n & & & & \\\\\n Model & $ q $ & $b\/H$ & $|\\epsilon|\/h$ & ${\\bf B}$ \\\\\n & & & & \\\\\n \\hline\n A & $0.0$ & - & $0.0$ & NO \\\\\n B & $0.1$ & $0.3$ & $0.0$ & NO \\\\\n C & $0.1$ & $0.3$ & $0.0$ & YES \\\\\n D & $0.3$ & $0.3$ & $0.0$ & YES \\\\\n E & $1.0$ & $0.3$ & $0.0$ & YES \\\\\n F & $2.0$ & $0.3$ & $0.0$ & NO \\\\\n G & $2.0$ & $0.3$ & $3\/(8\\pi)$ & NO \\\\\n H & $2.0$ & $0.3$ & $3\/(64\\pi)$ & NO \\\\\n I & $2.0$ & $0.6$ & $3\/(8\\pi)$ & NO \\\\\n J & $2.0$ & $0.6$ & $3\/(64\\pi)$ & NO \\\\\n \\hline\n \\hline\n\\end{tabular}\n\\caption{ \\label{table1}\nThe first column gives the model label, the second the value of $q,$\nthe third the softening parameter $ b\/H,$ the fourth\n$|\\epsilon|\/h,$ whose magnitude is the ratio of the induced\ndisk flow-through speed to the sound speed, and\nthe fifth column indicates whether a magnetic field was present.\nModel A was a reference model used to check that a box with\nno protoplanet and no magnetic field did not show any evolution\nor instabilities.}\n\\end{table}\n\n\nNote that the above discussion implies that for a box of fixed dimension, the\nonly distinguishing parameter is $ q = M_p R^3\/(M_* H^3).$ \nThis parameter may also be interpreted as the cube of the ratio of the Hill\nradius to disk scale height and the condition that $q > \\sim 1$ \nleads to the thermal condition of \nLin \\& Papaloizou (1993) that the Hill radius should exceed the disk\nscale height for gap formation to occur. 
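The identification of $q$ with the cube of the Hill-radius to scale-height ratio can be made explicit: with the usual Hill radius $R_H = R(M_p\/(3M_*))^{1\/3}$ and $M_p\/M_* = q h^3,$ one finds $R_H\/H = (q\/3)^{1\/3},$ independent of $h.$ A minimal numerical check (our own sketch; the function and variable names are illustrative):

```python
# Check that q = M_p R^3 / (M_* H^3) is, up to a factor 3, the cube of the
# ratio of the Hill radius R_Hill = R * (M_p / (3 M_*))**(1/3) to the disk
# scale height H.  (Illustrative sketch; names are our own.)

def hill_over_H(q, h):
    """Hill radius in units of H for dimensionless mass q and aspect ratio h."""
    mass_ratio = q * h**3                          # M_p / M_* = q h^3 by definition
    return (mass_ratio / 3.0) ** (1.0 / 3.0) / h   # R_Hill / H, using R / H = 1 / h

# The result depends on q alone, not on the aspect ratio h:
for h in (0.01, 0.05, 0.1):
    print(h, hill_over_H(1.0, h))                  # always (1/3)^(1/3) ~ 0.69
```

so $q=3$ corresponds exactly to $R_H = H,$ and $q$ of order unity to a Hill radius comparable to the scale height.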
\n From Korycansky \\& Papaloizou (1996)\nthis condition is also required in order that the perturbation\ndue to the protoplanet be non linear.\n\n\\begin{figure}\n\\centerline{\n\\epsfig{file= alpha0and0.1.ps , height=10.cm,width=10.cm,angle =270}}\n\\caption[]{The volume averaged stress parameter $\\langle \\alpha \\rangle$\nis plotted as a function of dimensionless\ntime for no planet, $q=0$ at all times,\nand for the case\nwhen the planet corresponding to $q=0.1$ was inserted at time $353.$\nAlthough both curves are independently very noisy after this time\nthey cannot be separated. }\n\\label{fig1}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\n\\epsfig{file= density4low.ps , height=10.cm,width=10.cm,angle =0}}\n\\caption[]{Typical mid plane\n density contour plots near the end of the simulations\nfor upper left panel, $q =0.1$ and no magnetic field (model B), upper right panel, $q=0.1$ with magnetic\nfield (model C), lower left panel, $q =0.3$ with magnetic field (model D) and \nlower right panel, $q=1.0$ with magnetic field (model E).\n The ratios of maximum to minimum densities in these\nplots were 1.42, 2.9, 3.27, and 11.22 respectively. In the case of the upper left panel,\nthe ratio was reduced to 1.25\n if the wakes produced by the protoplanet were excluded.}\n\\label{fig23}\n\\end{figure}\n\n\n\n\n\\section{Numerical Procedure and Models Simulated} \\label{S3}\n\\noindent\nThe numerical\nmethod is the method of characteristics\nconstrained transport (MOCCT) scheme (eg. Hawley \\& Stone 1995).\nThe code used has been developed from a version\nof NIRVANA originally written by U. Ziegler\n(see Ziegler \\& R\\\"udiger 2000 and\nreferences therein). It has been used in a number of simulations of \ndisk planet interactions in two and three dimensions with and\nwithout magnetic fields\n(eg. Kley 1999; Nelson et al. 
2000; Steinacker \\& Papaloizou 2002;\n Papaloizou Nelson \\& Snellgrove 2004).\n\nAll of the simulations were for shearing boxes with $X=4H, Y = 2\\pi H,$\nand $ Z= H\/2.$ These boxes are larger than the standard one often used in\nMHD simulations (eg. Hawley Gammie \\& Balbus 1995)\nwhich has $X = H\/2, Y = \\pi H$\nand $ Z= H\/2.$ Such a larger box is required to cover the scale\nof the disk planet interaction (Papaloizou Nelson \\& Snellgrove 2004;\nNelson \\& Papaloizou 2004). As in those works, the simulations\nwere carried out with $261,$ $200,$ and $35$\ngrid points in the $x',$ $ y' $ and $z'$ directions respectively.\nAt this resolution, MHD turbulence and the associated dynamo can be maintained\nfor more than one hundred orbits of the box centre.\nSome parameters associated with the simulations are given in table \\ref{table1}.\n\n\nIn order to investigate conditions for gap formation we have run models\nwith a variety of values of $q =M_p R^3\/(M_* H^3)$ with and without magnetic fields.\nWe have also studied the torques acting on a planet moving outwards by applying a torque\nto the disk material to make it flow inwards (see discussion above).\nValues of $|\\epsilon|\/h$ are given in table \\ref{table1}.\nBut note that because of the symmetry of the box, equivalent results are obtained\nif the flow is in the opposite direction. \n\nThe softening parameter should be chosen so as\nto represent the effects of vertical stratification and its value is expected to\nbe $\\sim H$ (eg. Papaloizou 2002; Masset 2001; Nelson \\& Papaloizou 2004).\nThe value adopted here was usually $b=0.3H.$\n\nAt this point a cautionary note should be inserted. 
For sustainable\ncoorbital torques to be realized, a quasi steady state must be attained\nin which the planet migrates through the disk with a mass consistent\nwith that assumed when calculating the tidal interaction.\nIf a very small softening parameter is used, the diverging point\nmass potential coupled with a fixed low temperature\nisothermal equation of state results in arbitrarily\nlarge amounts of mass being deposited on the planet.\nApart from preventing states with a steady flow through the coorbital region\nbeing attained, large amounts of accretion onto the planet\ncannot be handled correctly in calculations such as those \n performed here and elsewhere for disk planet interactions\nthat neglect self-gravity of the disk and the proper\nphysics of protoplanetary structure.\nSteady coorbital torques require a planet that self-consistently\ndoes not increase\nits mass significantly through disk mass accretion, as can be modeled\nwith a softening parameter $b \\sim H$ such as is adopted here.\n\nBut note that there is some expected sensitivity of torques to the \nvalue of the softening\nparameter (eg. Artymowicz 1993; Papaloizou 2002; Nelson \\& Papaloizou 2004).\nAccordingly for purposes of comparison some models were run with $b=0.6H.$ \nWe found negligible mass accumulation on the protoplanet for $b=0.6H$\nbut a more noticeable effect when $b=0.3H.$\n\nBecause they are not dimensionless and can be scaled away, the following\nnumerical values have no significance, but for convenience we adopted \n$h = 0.01,$ $\\rho_0 = 0.001,$ $GM_*=1$ and $R =1.$\nAll our results below are obtained using these values.\n\n\\noindent The gravitational potential was flattened (made to attain a constant value\nin a continuous manner)\nat distances exceeding $3H$ from the protoplanet in order to satisfy the \nperiodic boundary conditions. 
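The shearing-periodic radial boundary condition described above, $F(X,y,z) = F(-X, y - 3Xt', z),$ is commonly implemented by filling radial ghost zones from the opposite face with an azimuthally shifted, interpolated copy of the data. A minimal two-dimensional sketch of this standard technique (our own illustration, not the code actually used here; the linear interpolation and all names are our choices):

```python
# Sketch of shearing-periodic ghost-zone filling in x for a 2D field
# F[i, j] on (x, y), following F(X, y) = F(-X, y - 3 X t'); the azimuthal
# shift is handled by a periodic roll plus linear interpolation.
import numpy as np

def fill_x_ghosts(F, X, Y, t, ng=2):
    """Return F extended by ng ghost cells on each x face.

    F has shape (nx, ny), covering x in [-X, X) and y in [-Y, Y),
    cell centred; t is the rescaled time t'."""
    nx, ny = F.shape
    dy = 2.0 * Y / ny
    shift = 3.0 * X * t                 # distance the faces have sheared apart
    s, frac = divmod(shift / dy, 1.0)
    s = int(s)

    def shifted(plane, sign):
        # periodic azimuthal shift by sign*shift with linear interpolation
        a = np.roll(plane, sign * s, axis=-1)
        b = np.roll(plane, sign * (s + 1), axis=-1)
        return (1.0 - frac) * a + frac * b

    left = shifted(F[-ng:, :], -1)      # ghosts at x < -X come from x ~ +X
    right = shifted(F[:ng, :], +1)      # ghosts at x > +X come from x ~ -X
    return np.vstack([left, F, right])
```

At $t'=0$ this reduces to plain periodicity; as the faces shear apart, the azimuthal offset $3Xt'$ grows linearly with time.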
Tests with larger flattening distance\nhave indicated insensitivity to the precise choice.\n\n\n\\begin{figure}\n\\centerline{\n\\epsfig{file=tavepl.ps, height=10.0cm,width=10.cm,angle =0} }\n\\caption[]{ Running time averages of the force acting \non the planet in the direction\nof orbital motion plotted as a function of dimensionless time. These are for, \n the upper left panel, $q =0.1$ and no magnetic field (model B), upper right panel, $q=0.1$ with magnetic\nfield (model C), lower left panel, $q =0.3$ with magnetic field (model D) and\n lower right panel, $q=1.0$ with magnetic field (model E).\nFor each panel the upper curve represents the force due to material\nexterior to the protoplanet, the lower curve \n represents the force due to material\ninterior to the protoplanet while the central curve gives\nthe total contribution.}\n\\label{fig9}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\n\\epsfig{file= density2flowlow.ps ,height=10.cm,width=10.cm,angle =0} }\n\\caption[]{Typical mid plane density contour plots\nfor left panel $q=2.0$ with torque induced fast flow through (model G)\nand right panel $q=2.0$\nwith a torque induced slower flow through of disk material (model H).\nThe softening parameter was $b=0.3H.$\nIn these cases the magnitude of the torque applied to the disk material\nwas chosen to produce expected inflow speeds of\n$(3\/(8\\pi))c_s$ and $(3\/(64\\pi))c_s$ respectively.\nThese inflow speeds correspond to $|\\epsilon|\/h = 3\/(8\\pi)$ (fast flow through)\nand $|\\epsilon|\/h = 3\/(64\\pi)$ (slower flow through).\n These models had no magnetic field.}\n\\label{fig01}\n\\end{figure}\n\n\n\\noindent\n\\begin{figure}\n\\centerline{\n\\epsfig{file=rhoave2010.ps,height=10.cm,width=10.cm,angle =0} }\n\\caption[]{The mean surface density averaged over $y$ and $z$\nas a function of radial\ncoordinate $x\/H$ for $q=2.0$ with no flow through (dotted curve, model F)\nand with flow through (full curve, model G).\n These models had $b=0.3H$ 
and have no\nmagnetic field.\n}\n\\label{fig20}\n\\end{figure}\n\n\n\\noindent\n\\begin{figure}\n\\centerline{\n\\epsfig{file=meandensityflow.ps,height=10.cm,width=10.cm,angle =0} }\n\\caption[]{The mean surface density averaged over $y$ and $z$\nas a function of radial\ncoordinate $x\/H$ for $q=2.0$ with fast flow through and $b=0.3H$ \n(model G, full curve),\nwith slower flow through and $b=0.3H$ ( model H, dotted curve),\nwith fast flow through and $b=0.6H$ (model I, dashed curve) ,\nwith slower flow through and $b=0.6H$ ( model J, dot-dashed curve).\nThese models have no\nmagnetic field.\n}\n\\label{fig2}\n\\end{figure}\n\nModels with magnetic fields all had conserved zero net flux. \nFor these the initial field was taken to be vertical, independent of $y$ and $z$ \nand to vary sinusoidally in $x$ with a wavelength of $H.$\nThe amplitude was chosen to make the initial ratio \nof the total magnetic energy to volume integrated pressure \nto be $0.0025.$ The initial velocity in the $x$ direction at each\ngrid point was\nchosen to be the product of a random number between $-1.0$ and $1.0$\nand $0.1c_s.$\nWithout a perturbing secondary, these models reach a\nturbulent state in which the ratio of volume mean magnetic energy to volume mean pressure\nis in the range $0.002 - 0.02$ and the radially or volume averaged Shakura \\& Sunyaev $\\alpha$ parameter\nis in the range $0.003 - 0.02.$ The time averaged value of the volume averaged $\\alpha$ is typically\n$0.008$ (see eg. Winters Balbus \\& Hawley 2003b, Papaloizou Nelson \\& Snellgrove 2004\nfor details). 
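The initial magnetic field and velocity perturbation just described can be sketched as follows (a minimal reconstruction for illustration only, not the production setup; the grid dimensions and variable names are our own assumptions):

```python
# Sketch of the zero-net-flux initial condition: a vertical field varying
# sinusoidally in x with wavelength H, scaled so that the total magnetic
# energy is 0.0025 of the volume-integrated pressure, plus random
# x-velocities of amplitude 0.1 c_s.  (Our own illustrative reconstruction.)
import numpy as np

rng = np.random.default_rng(0)
H, cs, rho0 = 1.0, 1.0, 1.0                       # arbitrary code units
nx, ny, nz = 64, 32, 16
x = (np.arange(nx) + 0.5) / nx * 8 * H - 4 * H    # x in [-4H, 4H), cell centred

# Vertical field, sinusoidal in x with wavelength H => zero net flux.
Bz_profile = np.sin(2 * np.pi * x / H)
Bz = np.broadcast_to(Bz_profile[:, None, None], (nx, ny, nz)).copy()

# Scale so total magnetic energy / total pressure = 0.0025
# (per-cell energies: B^2/2 in units with the 4 pi absorbed, P = rho cs^2).
P = rho0 * cs**2
ratio = 0.0025
amp = np.sqrt(ratio * P * Bz.size / (0.5 * np.sum(Bz**2)))
Bz *= amp

# Random x-velocity: 0.1 c_s times a uniform deviate in [-1, 1) per cell.
vx = 0.1 * cs * rng.uniform(-1.0, 1.0, size=(nx, ny, nz))
```

Because the field covers whole sinusoidal wavelengths in $x,$ the net vertical flux through the box vanishes, and it stays zero under the ideal induction equation.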
A plot of the evolution of the volume averaged $\\alpha$ for a simulation with no\nsecondary mass which begins with the above initial\nconditions for the magnetic field is presented in figure \\ref{fig1}.\nModels with magnetic fields and protoplanets were\ninitiated by inserting the protoplanet into this\nmodel with established MHD turbulence after $353$ time units.\n\nModels without magnetic fields all had perturbing protoplanets\nso that these could be initiated from rest. \n\n\\subsubsection{Vertical and Horizontal Averages\n}\\label{S3a}\n\\noindent\nWe find it useful to consider quantities that are\nvertically and azimuthally averaged over the $(y ,z)$ domain ${\\cal D}.$ \nSometimes an additional running time average may be adopted.\nThe vertical and azimuthal average of some quantity $Q$ is defined by\n\n\\noindent\n\\begin{figure}\n\\centerline{\n\\epsfig{file= velocity10low.ps ,height=10.cm,width=10.cm,angle =0} }\n\\caption[]{ Velocity vectors in the mid plane coorbital region for $q=2.0$\n no magnetic field and no flow through (model F) }\n\\label{fig10}\n \\end{figure}\n\n\\begin{equation}\n{\\overline {Q}} ={\\int_{\\cal D} \\rho Q dz dy \\over \\int_{\\cal D} \\rho dz dy}.\n\\end{equation}\n\n\\noindent The disk surface density is given by\n\\begin{equation}\n\\Sigma(x,t) = {1\\over 2Y }\\int_{\\cal D} \\rho dz dy.\n\\end{equation}\n\n\\noindent The vertically and azimuthally\naveraged Maxwell and\nReynolds stresses, are given by:\n\\begin{equation}\nT_M= 2Y\n\\Sigma{\\overline{\\left({B_x B_y \\over 4\\pi\\rho}\\right)}}\n\\end{equation}\nand\n\\begin{equation}\nT_{Re}=2Y\n\\Sigma\n{\\overline{\\delta v_x\\delta v_y}}.\n\\end{equation}\n Here the velocity fluctuations $\\delta v_x$ and $\\delta v_y$\nare defined through,\n\\begin{equation}\n\\delta v_x=v_x-{\\overline{v_x}},\n\\end{equation}\n\\begin{equation}\n\\delta v_y=v_y- {\\overline{v_y}}.\n\\end{equation}\nThe horizontally and vertically averaged Shakura \\& Sunyaev (1973)\n$\\alpha$ stress 
parameter appropriate to the\ntotal stress is given by\n\\begin{equation}\n\\alpha=\\frac{T_{Re}-T_M}{2Y\n\\Sigma{\\overline{ \\left(P\/\\rho\\right)}}}.\n\\end{equation}\nThe angular momentum flow across a line of constant $x$\nis given by\n\n\\begin{equation}\n {\\cal F} =2Y R \\Sigma \\alpha\\overline{P\/\\rho}.\n\\label{transport}\n\\end{equation}\n\n\n\n\n\\begin{figure}\n\\centerline{\n\\epsfig{file=velocity20low.ps,height=10.cm,width=10.cm,angle =0} }\n\\caption[]\n{ Velocity vectors in the mid plane\ncoorbital region for $q=2.0,$ $b=0.3H,$\n with no magnetic field\nand fast flow through induced by torqued disk material (model G).\n}\n\\label{fig3a}\n\\end{figure}\n\n\n\\begin{figure}\n\\centerline{\n\\epsfig{file= taveflpllow.ps, height=10.cm,width=10.cm,angle =0} }\n\\caption[]\n{Running time averages of the force acting on the planet in the direction\nof orbital motion as a function of dimensionless time. \nThese are for $q = 2.0$ with\nfast flow through and $b=0.3H$ (model G, upper left panel), $q = 2.0$\nwith fast flow through and $b=0.6H$ (model I, upper right panel), $q = 2.0$\nwith slower flow through and $b=0.3H$\n(model H, lower left panel), and $q = 2.0$\nwith slower flow through and $b=0.6H$ (model J, lower right panel).\nThese models have no\nmagnetic field. 
For each panel the upper curve\nrepresents the force due to material\nexterior to the protoplanet, the lower curve\nrepresents the force due to material\ninterior to the protoplanet while the central curve gives\nthe total contribution.}\n\\label{fig3}\n\\end{figure}\n\n\\section{ Results of Simulations with Protoplanets} \\label{S4}\nWe have performed simulations with fixed protoplanets with\n$q=0.1, 0.3, 1.0$ and $2.0,$ with and without magnetic fields.\nAs $q$ increases, a gap is formed once $q$ exceeds a number around unity.\nIf magnetic fields are present this is also the value beyond which perturbations\nfrom the protoplanet dominate the turbulence.\n\nIn figure \\ref{fig1} we plot the volume averaged\nstress parameter $\\langle \\alpha \\rangle$\nas a function of dimensionless\ntime for no planet, $q=0,$ \ntogether with the same quantity for the run in which a protoplanet with\n$q=0.1$ was inserted at time $353.$\nAlthough both curves are independently very noisy after this time\nthey cannot be separated, verifying that a protoplanet with $q=0.1$\nis not strong enough to perturb the turbulence significantly. \n\nNonetheless all the simulations show the prominent density wake\nassociated with the protoplanet.\nTypical mid plane density contour plots near the end of the simulations\nfor upper left panel, model B ($q=0.1$), upper right panel, model C ($q=0.1$), \nlower left panel, model D ($q=0.3$) and\nlower right panel, model E ($q=1$) are given in figure \\ref{fig23}.\nThe ratios of maximum to minimum densities were \n1.42, 2.9, 3.27, and 11.22 respectively.\nWhen the dominant wakes\nproduced by the protoplanet are excluded\nin model B, this ratio is reduced to 1.25. 
This ratio\nmeasures the effect of waves excited by \nghost protoplanets in neighbouring boxes\nwhich are there on account of the periodic\nboundary conditions in shearing coordinates.\n It indicates that the fluctuations\ndue to ghost neighbours are relatively small compared to the effects\ndue to either magnetic fields or dominant\nperturbations close to the protoplanet due to its own gravity. \n\nThe simulations with $ q < 1$ all have embedded protoplanets\nwhile for $q \\ge 1$ a prominent gap is formed (see also Papaloizou Nelson \n\\& Snellgrove 2004). This occurs with and without a magnetic field.\nThe condition for gap formation that $q >1$ is equivalent to requiring\nthat the size of the Hill sphere exceed the disk thickness, $H.$ \nThe additional \nrequirement discussed by Lin \\& Papaloizou (1993), that tidal forces\nshould be strong enough to overcome viscous diffusion, is expressed\nby $q > 40 \\alpha (R\/H).$ To compare with the box simulations\nconsidered here, we should set $ 2R = L_y \/\\pi = 4H$ and\nthe value of $\\alpha$ has to be related to the MHD turbulence\nwhen present. We adopt the value\n$\\alpha \\sim 0.005$ corresponding to a mean value of the Maxwell\nstress. Thus we see that for the MHD cases \nthe viscous criterion for gap formation is satisfied \nfor the boxes when $q > 0.4.$ Thus $q \\ge 1$ gives the expected condition\nfor gap formation in all the box simulations. \nAn extrapolation to make the azimuthal extent of the box equal\nto $2\\pi R$ would give $q > 0.2(R\/H).$\nThus it should be borne in mind that the tendency for the gap to\nfill in a full circle may be underestimated for thin disks with MHD\n in a local box\nof the type we have used.\n\n\\subsubsection{Torques and Migration}\nAn important aspect of the disk protoplanet interaction is that\nit produces torques on the protoplanet that can lead to orbital migration.\nFor a box simulation the torque is $RF_{y}$ where $F_{y}$ is the force\nin the $y$ direction. 
We consider the contributions from the regions\nexterior to and interior to the protoplanet separately.\nOne can see from eg. figure \\ref{fig23} that the density enhancement\nin the wake exterior to the protoplanet tends to drag it backwards\nwhile the density enhancement\ninterior to the protoplanet tends to accelerate it. By the\n symmetry of the box these\ncontributions cancel on average. However, features which break the symmetry\nsuch as a radial flow through can result in a non zero net torque\/force.\n \nRunning time averages of the forces acting\non the planet in the direction\nof orbital motion are plotted as a function of dimensionless time in figure\n\\ref{fig9}. These are for,\nthe upper left panel, model B, upper right panel, model C, \nlower left panel, model D, \nlower right panel, model E.\nWe normalize these using the fiducial form\n\\begin{equation} F_{y0} = q^2 \\Omega_p^2 H^3 \\Sigma. \\end{equation}\nFor our units, this gives\n\\begin{equation} F_{y0} = 10^{-11}q^2 . \\end{equation}\nFor $q=0.1$ and $q=0.3$ a gap does not form\nand a reasonable fit to the one sided outward force \nproduced by regions exterior to the secondary is \n\\begin{equation} F_y = 0.5 F_{y0}.\\end{equation}\n\nWe remark that the time averaged forces from the exterior and interior regions\nare very similar with and without turbulence due to a magnetic field.\nBecause of the symmetry of the shearing box, \ncontributions from the regions exterior to the\nprotoplanet orbit must eventually \nbalance the contributions from the interior regions.\nHowever, this may take a long time to achieve in the MHD case \nwhere noise can produce a ten percent bias for embedded protoplanets\neven after fifty orbits. 
Effects not yet modelled,\nsuch as long timescale turbulent fluctuations in global\ndisks where there is bias introduced both by the geometry, initial conditions\nand external boundary conditions,\nmay have significant consequences for the net migration of embedded\nobjects (see Nelson \\& Papaloizou 2004).\n\n\nTo compare with the expected rates of type I migration\nin a disk without a magnetic field\n(eg. Ward 1997), we assume there is an asymmetry between outward and inward\nforces introduced by global geometrical effects of magnitude $C_1 h,$\nwhere $C_1$ is a constant between $1$ and $10$ (Ward 2000).\nThis is taken to be in favour of a net drag force\nand vanishes in the limit $h \\rightarrow 0$ as required.\nThe net force is then\n\\begin{equation} F_y = - 0.5 F_{y0}C_1 h.\\end{equation}\nThis leads to an expected inflow time\n\\begin{equation} \\tau_{mig} = - {R \\over dR\/dt} =\n \\frac{M_*}{M_p} \\frac{M_{*}}{{C_1\\Sigma R^2}}\n h^2 \\Omega_p^{-1}. \\label{boxmig}\\end{equation} \n\nA physical explanation for why the net force amounts to a drag\ncan be based on the fact that the wakes originate\nas a density wave launched starting from the location \nwhere the disk shears past the protoplanet with the sound speed.\nIn a constant aspect ratio disk, geometrical effects (not present in the box) \nmake this location closer to the protoplanet in the outer\nregions resulting in a larger contribution to the net force.\nThe resulting asymmetry is of order $h.$\n\n\nThe migration time of a non gap forming protoplanet embedded in a three\ndimensional disk has been calculated using a linear\nanalysis by Tanaka, Takeuchi \\& Ward (2002). 
They derive the following\nexpression for the migration time of a protoplanet embedded in a\nlocally isothermal disk\nwith a uniform surface density profile:\n\n\\begin{equation}\n\\tau_{mig}= \\frac{M_*}{2.7M_p}\\frac{M_{*}}{{\\Sigma R^2}}\nh^2 \\Omega_p^{-1} \\label{ward-tmig}\n\\end{equation}\nThese authors comment\nthat in general this expression gives a migration time that is a factor\nof between 2 -- 3 times slower than similar expressions derived for\nflat, two--dimensional disks (e.g. Ward 1997). Interestingly\nboth (\\ref{boxmig}) and (\\ref{ward-tmig})\n agree for $C_1 = 2.7$ \nand correspond to a rather fast \ninward migration time of $8 \\times 10^5 y$ at $5 AU$ for\n $1 M_{\\oplus}$ in a gas disk with surface density\nchosen so that there is two Jupiter masses within $5 AU.$\nBut note that the magnitude is sensitive to the softening parameter\nwhich needs to be comparable to the vertical thickness or $\\sim H.$\n\n\n\\subsection{Coorbital Torques}\n\nBy coorbital torque we mean a torque exerted\non the protoplanet \nby disk material flowing through the orbit. 
This has been discussed recently\nby Masset \\& Papaloizou (2003).\nWe have shown that in the local approximation adopted here\nthe torque on a slowly radially migrating protoplanet\nis the same as that due to disk material\nradially migrating with the same speed in the opposite direction \nflowing past a protoplanet in fixed orbit.\n\nAs mentioned above, in the absence of migration the symmetry\nof the shearing box results in zero net torque on the protoplanet.\nHowever, introducing a migration direction breaks the symmetry\nand results in a non zero torque on the protoplanet\nwhich changes sign with the direction of migration.\nFrom very general considerations we expect the net torque\nto be proportional to the migration velocity.\nThis can result in either positive feedback when\nthe torque assists the protoplanet migration or negative feedback\nwhen it acts in reverse.\n\nMasset \\& Papaloizou (2003) argue for positive feedback because for an outwardly\nmigrating protoplanet material traverses the coorbital zone\nfrom outside to inside. As it does so it moves along the outer boundary\nof the coorbital zone occupied by material librating around the coorbital \nequilibrium (Lagrange) points, passing close to the protoplanet.\nAs this passage occurs angular momentum is transferred to the protoplanet,\na process which acts to assist the migration of the protoplanet\nand which accordingly gives a positive feedback.\nWe now look at this process in more detail.\n\nThe expected force exerted on the secondary by material flowing through the coorbital region\nis estimated as follows. The mass flow through the box is at rate\n$ 2Y\\Sigma dR\/dt.$ The outward momentum per \nunit mass imparted to the secondary when\nthe disk matter moves across the coorbital \nregion is $w\\Omega_p\/2,$ where $w$ is its\nwidth. 
For this we adopt $w=2H$ (see below ).\nThen the rate of transfer of outward momentum to the planet is\n$F_{cr} = 2Y\\Omega_p H \\Sigma (dR\/dt).$ This can be expressed as\n\n\\begin{equation} F_{cr} = 2Y\\Omega_p^2 H^2 \\Sigma ((dR\/dt)\/c_s) .\\end{equation} \nThis may also be written in the form\n\n\\begin{equation} F_{cr} = {1\\over 2} M_d \\Omega_p (dR\/dt) , \\label{crt00}\\end{equation}\nwhere $ M_d = 4Y \\Sigma H$\nis the disk mass that would fill the coorbital zone of width\n$2H$ were it to do so at the background surface density.\n\nHowever, torques on the protoplanet do not only arise from material\npassing through the coorbital zone. Material that is forced to\ncomove with the protoplanet, either because it has been accreted by\nit, or because it librates about the coorbital equilibrium\npoints has to be supplied with the same force per \nunit mass as the protoplanet, by the protoplanet\n so as to maintain its migration.\nThis results in an additional force acting on the protoplanet given by\n\n\\begin{equation} F_{crb} = -{1\\over 2} M_b \\Omega_p (dR\/dt) ,\\end{equation}\nwhere $ M_b$ is the coorbital bound mass.\nThe total force so far is thus\n\n\\begin{equation} F_{cr} = {1\\over 2}( M_d - M_b) \\Omega_p (dR\/dt) . \\label{crto}\\end{equation}\n\nHowever, there are other forces. 
Because of the breaking\nof the symmetry of the box, the forces acting due to density\nwaves from the two sides do not necessarily cancel.\nIn this context note that there is an asymmetry in the surface density profile.\nThus we should expect a further wave torque component\nwhich is proportional to the migration speed.\nThe indication from our simulations is that this acts\nas a drag on the protoplanet as does $M_b.$\nAccordingly we shall consider the effects of asymmetric wave torques\nas modifying the value of $M_b$ and continue to use (\\ref{crto}).\n\n\nSuppose now the protoplanet is acted on by some external torque $T_{ext}.$\nThe equation of motion governing the migration of the protoplanet\nof mass $M_p$ with speed $dR\/dt,$\nobtained by considering the conservation of angular\nmomentum, is\n\n\\begin{equation} {1\\over 2} M_p R \\Omega_p (dR\/dt) = {1\\over 2}( M_d - M_b)R\\Omega_p \n(dR\/dt) + T_{ext}.\n\\label{crto1}\\end{equation}\nAccordingly we can consider the planet to move with an effective mass\n\n\\begin{equation} M_{eff} = M_p- ( M_d - M_b). \\end{equation}\nThe quantity $( M_d - M_b)$ has been called the coorbital mass deficit\nby Masset \\& Papaloizou (2003). When this is positive\nthere is an effective reduction in the inertia of the protoplanet.\nIf there were no asymmetry in wave torques, the coorbital mass deficit \nwould be\nthe amount of mass evacuated from the gap region were it initially\nfilled with the background density. Gap filling accordingly reduces\nthe coorbital mass deficit.\nIt is also clear that because at least a partial\ngap is required, the protoplanet must be massive enough to\nproduce a non linear response in the disk. Hence we have considered $q=2.$\n\nMasset \\& Papaloizou (2003) indeed found the coorbital mass deficit \nto be positive resulting in positive feedback\nfrom coorbital torques. 
They also found that in some circumstances\nthe coorbital mass deficit could become as large as the planet mass,\nso reducing the effective inertia to zero. Under these circumstances\na very fast or runaway migration may occur.\n\nWe have considered four simulations with torqued disk material\ngiving flow through the coorbital\nregion, which correspond to a migrating protoplanet.\nThese are model G with $q=2.0, b\/H =0.3$ and $\\epsilon\/h = 3\/(8\\pi),$\nmodel H with $q=2.0, b\/H =0.3$ and $\\epsilon\/h = 3\/(64\\pi),$\nmodel I with $q=2.0, b\/H =0.6$ and $\\epsilon\/h = 3\/(8\\pi),$ and\nmodel J with $q=2.0, b\/H =0.6$ and $\\epsilon\/h = 3\/(64\\pi).$\nAll of these models are inviscid with no magnetic field.\nResults for models with flow through and\nmagnetic fields will be presented elsewhere.\nThey all had $q=2.0$ which, for $h=0.05,$\ncorresponds to a mass ratio of $2.5 \\times 10^{-4},$\nwhich is close to a Saturn mass for a central solar mass.\nIn each case the breakdown of the box symmetry\nresulted in a net torque acting on\nthe protoplanet.\n\n\nTypical density contour plots\nfor model G (left panel) and model H (right panel),\nobtained after a quasi steady state had been attained,\nare given in figure \\ref{fig01}.\nIn these cases the magnitude of the torque applied to the disk material\nwas chosen to produce expected inflow speeds of\n$(3\/(8\\pi))c_s$ and $(3\/(64\\pi))c_s$ respectively.\nThe effect of introducing a flow through the coorbital region is\nto tend to fill in the gap and produce an asymmetry in the density profile,\nthese effects being larger for a faster flow. 
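The quoted mass ratio follows directly from the definition $q = M_p R^3/(M_* H^3)$, since $M_p/M_* = q\,(H/R)^3$; a one-line numerical check:

```python
# Mass ratio implied by q = M_p R^3 / (M_* H^3) with aspect ratio h = H/R.
q = 2.0
h = 0.05
mass_ratio = q * h**3
assert abs(mass_ratio - 2.5e-4) < 1e-15  # close to Saturn's mass for a solar-mass star
```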
In particular there is a density\nenhancement leading the planet, which produces a positive torque on the protoplanet\nassisting its outward migration.\n\nThe mean surface density averaged over $y$ and $z$\nas a function of radial\ncoordinate $x\/H$ for $q=2.0$ with no flow through (model F)\nand for model G are plotted in figure \\ref{fig20}.\nIt is seen that the effect of the flow is to partially fill the gap\nand to produce a density asymmetry that gives a relative density increase\njust beyond the outer gap edge. This is responsible for an increase in the torque\ncoming from these regions that acts against the sense of protoplanet migration\nand can be viewed as reducing the coorbital mass deficit.\n\nMean surface density averaged over $y$ and $z$\nas a function of radial\ncoordinate $x\/H$ for models G, H, I, and J are plotted in\nfigure \\ref{fig2}. These plots show an increasing amount of gap filling\nas the flow increases. But note that this effect is less pronounced\nfor the smaller softening parameter because of the more effective gap production\nin that case.\n\nTheoretical considerations (Masset \\& Papaloizou 2003)\nsuggest the importance of trapped librating\ncoorbital material.\nVelocity vectors in the coorbital region for model F\nwith no flow through are plotted\nin figure \\ref{fig10}. This indicates trapped librating\ntrajectories in the coorbital zone essentially\nsymmetrically placed with respect to the\nprotoplanet and occupying a region of width $ w \\sim 2H.$\nFor comparison, velocity vectors in the coorbital region for model G\nwith fast flow through induced by torqued disk material\nare plotted in figure \\ref{fig3a}. 
This shows a trapped region of librating\norbits predominantly leading the protoplanet\nand shifted inwards.\n\nRunning time averages of the force acting on the planet in the direction\nof orbital motion as a function of dimensionless time for\nmodels G, H, I and J are\nplotted in figure \\ref{fig3}.\nFor comparison with theoretical expectations,\nequation (\\ref{crt00}) gives,\nin our arbitrary units for the original background\ndensity and for the fast flow through rate\nwith $(dR\/dt)\/c_s = 3\/(8 \\pi),$ $F_{cr} = 3.0 \\times 10^{-11}.$\nSimilarly for the slow flow through rate with $(dR\/dt)\/c_s = 3\/(64 \\pi)$ we get\n$F_{cr} = 4.0 \\times 10^{-12}.$\nWe see that the latter net running means are consistent with these values.\nAfter allowing for adjustment of the background value,\nmodel G is a factor of two smaller, model H is a factor of two smaller, model\nI is a factor of eight smaller while model J is a factor of six smaller.\nWe see therefore that, consistently with their more\npronounced gaps, the cases with smaller softening give larger\ncoorbital torques. This appears to be because gap clearance is more effective,\nleading to a larger coorbital mass deficit.\nIn addition the faster flow rates show indications of gap filling\nand a consequent tendency towards\ntorque saturation. An interesting aspect of these\nnet torques is that in the faster flow through cases, the final net torque is\ncomparable\nto the initial one-sided torque. 
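Because equation (\ref{crt00}) is linear in the flow-through rate, the two quoted expectations can be cross-checked against each other; the sketch below infers the prefactor $2Y\Omega_p^2 H^2 \Sigma$ from the fast case rather than computing it from the disk parameters.

```python
import math

# F_cr = 2 Y Omega_p^2 H^2 Sigma * ((dR/dt)/c_s): linear in the flow rate.
fast_rate = 3.0 / (8.0 * math.pi)    # (dR/dt)/c_s for the fast flow through
slow_rate = 3.0 / (64.0 * math.pi)   # (dR/dt)/c_s for the slow flow through

prefactor = 3.0e-11 / fast_rate      # 2 Y Omega_p^2 H^2 Sigma, inferred
predicted_slow = prefactor * slow_rate

# The quoted slow value 4.0e-12 agrees within rounding (the exact ratio is 8):
assert abs(predicted_slow - 4.0e-12) / 4.0e-12 < 0.1
```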
This means that the resulting\nnet torque is comparable to the extrapolated type I value assuming no gap (see also Masset \\&\nPapaloizou 2003).\n\n\\subsection{Occurrence of Fast or Runaway Migration}\nFast or runaway migration occurs when the effective mass becomes\nvery small or zero (Masset \\& Papaloizou 2003).\nThen the migrating planet loses inertia and\ncan migrate rapidly in response to even small external torques.\nThis requires $M_p= ( M_d - M_b).$ If we set $( M_d - M_b) =f M_d,$\n$f$ is given above as $0.5$ for models G and H, $0.17$ for model J,\nand $0.125$ for model I, respectively.\nThe smaller values apply for the larger softening parameter.\nTo obtain $ M_{eff}=0$ one requires $M_d =M_p\/f.$ This means\nthe amount of background disk material required to fill the gap region\nshould be $M_p\/f.$ One might expect this to be achievable for a massive\nenough disk. For example, taking the case with the weakest coorbital\ntorque with $f=0.125,$ $M_p= 2.5\\times 10^{-4}M_{\\odot}$ corresponding\nto $q=2$ for $h =0.05,$ one requires a disk about $20$ times more massive\nthan the minimum mass solar nebula in the $5$ AU region.\nThis is essentially consistent with Masset \\& Papaloizou (2003)\nbut cautionary remarks need to be inserted regarding the dependence\non softening and the extrapolation of the behaviour in the local box to the\nwhole disk annulus.\n\n\\section{Discussion} \\label{S5}\nIn this paper we have reviewed local disk simulations\nperformed using a local shearing box. 
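The disk masses required for runaway follow immediately from the measured values of $f$; a short tabulation (masses in units of the central solar mass):

```python
# Runaway needs M_eff = 0, i.e. a coorbital mass deficit equal to M_p,
# which requires M_d = M_p / f of background material in the gap region.
M_p = 2.5e-4  # q = 2 with h = 0.05, in units of the central (solar) mass

f_values = {"G": 0.5, "H": 0.5, "I": 0.125, "J": 0.17}
M_d_required = {model: M_p / f for model, f in f_values.items()}

# The weakest coorbital torque (model I, f = 0.125) is the most demanding case:
assert abs(M_d_required["I"] - 2.0e-3) < 1e-12
```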
\nSuch simulations\nhave the advantage\nof giving higher resolution for the same computational cost\nas a global simulation, but their associated boundary condition\nof periodicity in shearing coordinates may introduce artefacts\ndue to neighbouring ghost boxes and protoplanets.\nHowever, we saw no evidence\nthat such features, although they can never\nbe removed entirely, were very significant.\n\nWe considered protoplanets interacting with\nboth quiescent disks and\ndisks undergoing MHD turbulence with zero net flux fields.\nThe latter case results in angular momentum transport\nwith an effective Shakura \\& Sunyaev (1973)\nparameter $\\alpha,$ related to the effective kinematic\nviscosity $\\nu$ by $\\nu = \\alpha H^2 \\Omega_p,$\ngiven by $\\alpha \\sim 5\\times 10^{-3}.$\n\nIn both cases, when a protoplanet is present,\nthere exists a natural scaling indicating that results\ndepend only on the parameter $q = M_p R^3\/(M_* H^3),$\nand the condition for gap formation is\n$M_p R^3\/(M_* H^3) \\gtrsim 1,$\nthis being the thermal condition of\nLin \\& Papaloizou (1993).\n\nThe symmetry of a shearing box for a non-migrating protoplanet\nensures cancellation of the torque contributions\nfrom exterior and interior to the orbit. In a\nquiescent disk a natural bias occurs when non-local\ncurvature effects are introduced. The interaction is then stronger\nin the region exterior to the planet, leading to a net inward migration\nthat can be estimated using the expression of\nTanaka, Takeuchi \\& Ward (2002) for the migration time\n\\begin{equation}\n\\tau_{mig}= \\frac{M_*}{2.7M_p}\\frac{M_*}{\\Sigma R^2}\nh^2 \\Omega_p^{-1} .\\label{ward-tmigr}\n\\end{equation}\nHowever, the torques acting in a turbulent disk from either\nside of the orbit are very noisy,\nto the extent that a ten percent difference between the contributions\ncan remain even after fifty orbits. 
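The migration time of equation (\ref{ward-tmigr}) is straightforward to evaluate; the sketch below uses illustrative assumed parameters (an Earth-mass planet at $5$ AU in a minimum-mass-solar-nebula-like disk), not values from the simulations.

```python
import math

# Type I migration time of Tanaka, Takeuchi & Ward (2002), cgs units:
# tau = (M_*/(2.7 M_p)) * (M_*/(Sigma R^2)) * h^2 / Omega_p.
G = 6.674e-8            # gravitational constant
M_star = 1.989e33       # solar mass in g
M_p = 3.0e-6 * M_star   # roughly an Earth mass (assumed)
R = 5.0 * 1.496e13      # 5 AU in cm
Sigma = 150.0           # g cm^-2, MMSN-like at 5 AU (assumed)
h = 0.05                # disk aspect ratio

Omega_p = math.sqrt(G * M_star / R**3)
tau = (M_star / (2.7 * M_p)) * (M_star / (Sigma * R**2)) * h**2 / Omega_p
tau_yr = tau / 3.156e7
```

With these assumed numbers the timescale comes out at order $10^6$ yr, illustrating why the type I rate is considered uncomfortably fast compared with disk lifetimes.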
The noise makes type I migration stochastic\nand evaluation of the final outcome will require\nlong time global simulations which can simulate the fluctuations\nthat occur on the longest timescales associated with the disk\n(see Nelson \\& Papaloizou 2004).\n\nWe then considered\nthe coorbital torques experienced by a moving protoplanet\nin an inviscid disk.\nThis was done by\ndemonstrating, in an appropriate\nslow migration limit, the equivalence of the problem for a moving\nprotoplanet to one where the protoplanet is in a fixed orbit through which the\ndisk material flows radially as a result of the action of\nan appropriate external torque.\n\nBy considering the torques exerted\nby material flowing around and through the coorbital region,\nwe showed that we could regard the\nprotoplanet as moving with an effective mass\n\\begin{equation} M_{eff} = M_p- ( M_d - M_b). \\end{equation}\nThe quantity $( M_d - M_b)$ was called the coorbital mass deficit\nby Masset \\& Papaloizou (2003). When this is positive\nthere is an effective reduction in the inertia of the protoplanet\nby this amount. It is related to, but not exactly equivalent to,\nthe amount\nof mass evacuated in the gap region were it to be initially\nfilled with the background density. Because at least a partial\ngap is required, the protoplanet must be large enough to\nproduce a non-linear response in the disk.\n\nIn addition, in order to\nobtain measurable quasi steady\ncoorbital torques, a quasi steady state must be realized\nin which the planet does not accrete\nsignificant mass. 
We found, as did\nMasset \\& Papaloizou (2003), that\nthe coorbital torques are\nproportional to the migration speed, when that is sufficiently\nsmall, and result in a positive\nfeedback on the migration, enhancing it and potentially leading to a runaway.\nThis could lead to fast migration for protoplanets in the Saturn mass\nrange in massive disks and may be relevant to the mass--period correlation\nfor extrasolar planets, which gives a preponderance of sub-Jovian masses\nat short orbital periods (see Zucker \\& Mazeh 2002).\n\nThe fast migration is limited in that the time to\nmigrate through the coorbital region of width $w \\sim 2H$\nshould be less than the time it takes material to circulate around\nthe trapped coorbital region. When this is taken to extend\nfor the full $2\\pi,$ this limit gives\n$R\/(dR\/dt) > 2\\pi(R\/H)^2\/(3\\Omega_p),$ which corresponds to a time\non the order of a hundred orbits.\nThus a migration stopping or slowing mechanism is required.\nThis may be a result of the radial density profile of the disk,\nsuch as for example a reduction of the density at smaller radii,\nand\/or a transition to type II\nmigration, which operates on the longer viscous time scale\n(see Masset \\& Papaloizou 2003 for more discussion).\n\n\n\n\n\\section*{References}\n\n\\noindent\nArtymowicz, P., 1993, ApJ, 419, 166\n\n\\noindent\nBalbus, S. A., Hawley, J. F., 1991, ApJ, 376, 214\n\n\\noindent\nBoss, A.P., 2001, ApJ, 563, 367\n\n\\noindent\nBryden, G., Chen, X., Lin, D.N.C., Nelson, R.P., Papaloizou, J.C.B., 1999, ApJ, 514, 344\n\n\\noindent\nD'Angelo, G., Henning, Th., Kley, W., 2002, A\\&A, 385, 647\n\n\\noindent\nGoldreich, P., Lynden-Bell, D., 1965, MNRAS, 130, 159\n\n\\noindent\nGoldreich, P., Tremaine, S.D., 1979, ApJ, 233, 857\n\n\\noindent\nHawley, J. F., Gammie, C. F., Balbus, S. A., 1995, ApJ, 440, 742\n\n\\noindent\nHawley, J. F., Stone, J. 
M., 1995, Computer Physics Communications, 89, 127\n\n\\noindent\nKley, W., 1999, MNRAS, 303, 696\n\n\\noindent\nKorycansky, D. G., Papaloizou, J. C. B., 1996, ApJS, 105, 181\n\n\\noindent\nLin, D.N.C., Papaloizou, J.C.B., 1986, ApJ, 309, 846\n\n\\noindent\nLin, D.N.C., Papaloizou, J.C.B., 1993, Protostars and Planets III, p. 749-835\n\n\\noindent\nLubow, S.H., Seibert, M., Artymowicz, P., 1999, ApJ, 526, 1001\n\n\\noindent\nMarcy, G. W., Cochran, W. D., Mayor, M., 2000, Protostars and Planets IV (Tucson: University of Arizona Press; eds Mannings, V., Boss, A.P., Russell, S. S.), p. 1285\n\n\\noindent\nMasset, F.S., 2001, ApJ, 558, 453\n\n\\noindent\nMasset, F.S., Papaloizou, J.C.B., 2003, ApJ, 588, 494\n\n\\noindent\nMayor, M., Queloz, D., 1995, Nature, 378, 355\n\n\\noindent\nNelson, R.P., Papaloizou, J.C.B., Masset, F.S., Kley, W., 2000, MNRAS, 318, 18\n\n\\noindent\nNelson, R.P., Papaloizou, J.C.B., 2003, MNRAS, 339, 993\n\n\\noindent\nNelson, R.P., Papaloizou, J.C.B., 2004, MNRAS, 350, 849\n\n\\noindent\nPapaloizou, J.C.B., Lin, D.N.C., 1984, ApJ, 285, 818\n\n\\noindent\nPapaloizou, J.C.B., 2002, A\\&A, 388, 615\n\n\\noindent\nPapaloizou, J.C.B., Nelson, R.P., Snellgrove, M.D., 2004, MNRAS, 350, 829\n\n\\noindent\nPollack, J.B., Hubickyj, O., Bodenheimer, P., Lissauer, J.J., Podolak, M., Greenzweig, Y., 1996, Icarus, 124, 62\n\n\\noindent\nShakura, N. I., Sunyaev, R. A., 1973, A\\&A, 24, 337\n\n\\noindent\nSteinacker, A., Papaloizou, J.C.B., 2002, ApJ, 571, 413\n\n\\noindent\nTanaka, H., Takeuchi, T., Ward, W.R., 2002, ApJ, 565, 1257\n\n\\noindent\nTerquem, C., Papaloizou, J.C.B., Nelson, R.P., 2000, SSRv, 92, 323\n\n\\noindent\nVogt, S. S., Butler, R. P., Marcy, G. W., Fischer, D. A., Pourbaix, D., Apps, K., Laughlin, G., 2002, ApJ, 568, 352\n\n\\noindent\nWard, W., 1997, Icarus, 126, 261\n\n\\noindent\nWard, W., 2000, Protostars and Planets IV (Tucson: University of Arizona Press; eds Mannings, V., Boss, A.P., Russell, S. 
S.), p. 1485\n\n\\noindent\nWinters, W., Balbus, S., Hawley, J., 2003a, ApJ, 589, 543\n\n\\noindent\nWinters, W., Balbus, S., Hawley, J., 2003b, MNRAS, 340, 519\n\n\\noindent\nZiegler, U., R\\\"udiger, G., 2000, A\\&A, 356, 1141\n\n\\noindent\nZucker, S., Mazeh, T., 2002, ApJ, 568, L113\n\n\n\n\n{}\n\\end{document}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\\section{Hubble flow equations in detail}\n\\label{app:detail}\n\n\\subsection*{The effective potential of the Mukhanov--Sasaki equation}\nLet us derive the explicit form of the effective potential Eq. (\\ref{eq:explicit-effp}):\n\\begin{equation}\n\\frac{z''}{z}=a^2 H^2 \\left\\{2-\\epsilon_1 +\\frac{3}{2}\\epsilon_2 +\\frac{1}{4}\\epsilon_2^2 -\\frac{1}{2}\\epsilon_1\\epsilon_2 +\\frac{1}{2}\\epsilon_2 \\epsilon_3 +(3-\\epsilon_1 +\\epsilon_2)\\delta_1 +\\delta^2_1 +\\delta_1 \\delta_2 \\right\\}~.\n\\tag{\\ref{eq:explicit-effp}}\n\\end{equation}\n\nFrom Eq. (\\ref{eq:z-epsilon}),\n\\begin{equation}\nz=\\frac{\\sqrt{2\\epsilon_1}a}{c_s}\n\\tag{\\ref{eq:z-epsilon}}\n\\end{equation}\nand\n\\begin{equation}\nz''=a\\frac{d}{dt}\\left(a\\frac{dz}{dt}\\right)=a\\dot{a}\\dot{z}+a^2\\ddot{z}=a^2 (H\\dot{z}+\\ddot{z})~.\n\\label{eq:z-2ndd}\n\\end{equation}\n\nThen let us calculate the time derivative of $z$.\nAfter some calculation, we get\n\\begin{equation}\n\\dot{z}=z\\left(H-\\frac{\\dot{H}}{H}+\\frac{1}{2}\\frac{\\ddot{H}}{\\dot{H}}-\\frac{\\dot{c_s}}{c_s}\\right)~,\n\\end{equation}\n\\begin{eqnarray}\n\\ddot{z}&=&\\dot{z}\\left(H-\\frac{\\dot{H}}{H}+\\frac{1}{2}\\frac{\\ddot{H}}{\\dot{H}}-\\frac{\\dot{c_s}}{c_s}\\right)+z\\frac{d}{dt}\\left(H-\\frac{\\dot{H}}{H}+\\frac{1}{2}\\frac{\\ddot{H}}{\\dot{H}}-\\frac{\\dot{c_s}}{c_s}\\right)\\nonumber\\\\\n&=&z\\left\\{\\left(H-\\frac{\\dot{H}}{H}+\\frac{1}{2}\\frac{\\ddot{H}}{\\dot{H}}-\\frac{\\dot{c_s}}{c_s}\\right)^2 + 
\\left(\\dot{H}-\\frac{\\ddot{H}}{H}+\\frac{\\dot{H}^2}{H^2}+\\frac{1}{2}\\frac{H^{(3)}}{\\dot{H}}-\\frac{1}{2}\\frac{\\ddot{H}^2}{\\dot{H}^2}-\\frac{\\ddot{c_s}}{c_s}+\\frac{\\dot{c_s}^2}{c_s^2}\\right)\\right\\}\\nonumber\\\\\n&=&z\\left\\{H^2 -\\dot{H} +\\frac{H\\ddot{H}}{\\dot{H}}+2\\frac{\\dot{H}^2}{H^2}-2\\frac{\\ddot{H}}{H}-\\frac{1}{4}\\frac{\\ddot{H}^2}{\\dot{H}^2}+\\frac{1}{2}\\frac{H^{(3)}}{\\dot{H}}\\right.\\nonumber\\\\\n &&\\left.+\\left(-2H+2\\frac{\\dot{H}}{H}-\\frac{\\ddot{H}}{\\dot{H}}\\right)\\frac{\\dot{c_s}}{c_s}+2\\frac{\\dot{c_s}^2}{c_s^2}-\\frac{\\ddot{c_s}}{c_s}\\right\\}~.\n\\end{eqnarray}\n\\if0{\n \\begin{eqnarray}\n \\frac{\\ddot{z}}{z}&=&\n H^2-2\\dot{H}-2H\\frac{\\dot{c_s}}{c_s}+\\frac{H\\ddot{H}}{\\dot{H}}+\\frac{\\dot{H}^2}{H^2}+2\\frac{\\dot{H}}{H}\\frac{\\dot{c_s}}{c_s}-\\frac{\\ddot{H}}{H}+\\frac{\\dot{c_s}^2}{c_s^2}-\\frac{\\dot{c_s}}{c_s}\\frac{\\ddot{H}}{\\dot{H}}+\\frac{1}{4}\\frac{\\ddot{H}^2}{\\dot{H}^2}\\nonumber\\\\\n &&+\\dot{H}-\\frac{\\ddot{H}}{H}+\\frac{\\dot{H}^2}{H^2}-\\frac{\\ddot{c_s}}{c_s}+\\frac{\\dot{c_s}^2}{c_s^2}+\\frac{1}{2}\\frac{H^{(3)}}{\\dot{H}}-\\frac{1}{2}\\frac{\\ddot{H}^2}{\\dot{H}^2}\\nonumber\\\\\n &=&H^2 -\\dot{H} +\\frac{H\\ddot{H}}{\\dot{H}}+2\\frac{\\dot{H}^2}{H^2}-2\\frac{\\ddot{H}}{H}-\\frac{1}{4}\\frac{\\ddot{H}^2}{\\dot{H}^2}+\\frac{1}{2}\\frac{H^{(3)}}{\\dot{H}}\n +\\left(-2H+2\\frac{\\dot{H}}{H}-\\frac{\\ddot{H}}{\\dot{H}}\\right)\\frac{\\dot{c_s}}{c_s}+2\\frac{\\dot{c_s}^2}{c_s^2}-\\frac{\\ddot{c_s}}{c_s}\n \\end{eqnarray}\n}\\fi\n\nSubstitute them into Eq. 
(\\ref{eq:z-2ndd}).\n\\begin{eqnarray}\n\\frac{z''}{z}&=&\\frac{a^2 (H\\dot{z}+\\ddot{z})}{z}=a^2 H^2 \\left(\\frac{\\dot{z}}{Hz}+\\frac{\\ddot{z}}{H^2 z}\\right)\\nonumber\\\\\n&=&a^2 H^2\\left\\{2-2\\frac{\\dot{H}}{H^2} +\\frac{3}{2}\\frac{\\ddot{H}}{H\\dot{H}}+2\\frac{\\dot{H}^2}{H^4}-2\\frac{\\ddot{H}}{H^3}-\\frac{1}{4}\\frac{\\ddot{H}^2}{H^2\\dot{H}^2}+\\frac{1}{2}\\frac{H^{(3)}}{H^2\\dot{H}}\\right.\\nonumber\\\\\n&&\\left.+\\left(-\\frac{3}{H}+2\\frac{\\dot{H}}{H^3}-\\frac{\\ddot{H}}{H^2\\dot{H}}\\right)\\frac{\\dot{c_s}}{c_s}+\\frac{2}{H^2}\\frac{\\dot{c_s}^2}{c_s^2}-\\frac{1}{H^2}\\frac{\\ddot{c_s}}{c_s}\\right\\}\n\\label{eq:explicit-effp-t}\n\\end{eqnarray}\n\nThen rewrite this in terms of the Hubble and sound flow functions.\nEq. (\\ref{eq:explicit-effp-t}) contains the third derivative of $H$, so we need the Hubble flow functions up to $\\epsilon_3$.\n\nThe explicit forms of the flow functions are given below.\n\\begin{equation}\n\\epsilon_1 =-\\frac{\\dot{H}}{H^2}\n\\end{equation}\n\\begin{equation}\n\\epsilon_1 \\epsilon_2\n=\\frac{\\dot{\\epsilon_1}}{H}\n=-\\frac{\\ddot{H}}{H^3}+2\\frac{\\dot{H}^2}{H^4}\n\\end{equation}\n\\begin{equation}\n\\epsilon_2 = \\frac{1}{H}\\frac{\\dot{\\epsilon_1}}{\\epsilon_1}=\\frac{\\ddot{H}}{H\\dot{H}}-2\\frac{\\dot{H}}{H^2}\n\\end{equation}\n\\begin{equation}\n\\epsilon_2 \\epsilon_3=\\frac{\\dot{\\epsilon_2}}{H}=-3\\frac{\\ddot{H}}{H^3}+4\\frac{\\dot{H}^2}{H^4}-\\frac{\\ddot{H}^2}{H^2 \\dot{H}^2}+\\frac{H^{(3)}}{H^2 \\dot{H}}\n\\end{equation}\n\nThe term $\\frac{1}{2}\\frac{H^{(3)}}{H^2\\dot{H}}$ can be rewritten using $\\epsilon_2 \\epsilon_3$, but then the term proportional to $\\frac{\\ddot{H}^2}{H^2\\dot{H}^2}$ remains.\nTo rewrite it we need the square of $\\epsilon_2$.\n\\begin{equation}\n\\epsilon_2^2 =\\frac{\\ddot{H}^2}{H^2 \\dot{H}^2}+4\\frac{\\dot{H}^2}{H^4}-4\\frac{\\ddot{H}}{H^3}\n\\end{equation}\n\nRewrite the second line of Eq. 
(\\ref{eq:explicit-effp-t}).\n\\begin{eqnarray}\n&&2-2\\frac{\\dot{H}}{H^2} +\\frac{3}{2}\\frac{\\ddot{H}}{H\\dot{H}}+2\\frac{\\dot{H}^2}{H^4}-2\\frac{\\ddot{H}}{H^3}-\\frac{1}{4}\\frac{\\ddot{H}^2}{H^2\\dot{H}^2}+\\frac{1}{2}\\frac{H^{(3)}}{H^2\\dot{H}}\\nonumber\\\\\n&=&2-2\\frac{\\dot{H}}{H^2} +\\frac{3}{2}\\frac{\\ddot{H}}{H\\dot{H}}-\\frac{1}{2}\\frac{\\ddot{H}}{H^3}+\\frac{1}{4}\\frac{\\ddot{H}^2}{H^2\\dot{H}^2}+\\frac{1}{2}\\epsilon_2 \\epsilon_3\\nonumber\\\\\n&=&2-2\\frac{\\dot{H}}{H^2} +\\frac{3}{2}\\frac{\\ddot{H}}{H\\dot{H}}-\\frac{\\dot{H}^2}{H^4}+\\frac{1}{2}\\frac{\\ddot{H}}{H^3}+\\frac{1}{4}\\epsilon_2^2+\\frac{1}{2}\\epsilon_2 \\epsilon_3\\nonumber\\\\\n&=&2-2\\frac{\\dot{H}}{H^2} +\\frac{3}{2}\\frac{\\ddot{H}}{H\\dot{H}}+\\frac{1}{4}\\epsilon_2^2-\\frac{1}{2}\\epsilon_1 \\epsilon_2+\\frac{1}{2}\\epsilon_2 \\epsilon_3\\nonumber\\\\\n&=&2-\\epsilon_1+\\frac{3}{2}\\epsilon_2+\\frac{1}{4}\\epsilon_2^2-\\frac{1}{2}\\epsilon_1 \\epsilon_2+\\frac{1}{2}\\epsilon_2 \\epsilon_3\n\\label{eq:detail1}\n\\end{eqnarray}\n\nAlso up to the second sound flow function $\\delta_2$ is needed.\n\\begin{equation}\n\\delta_1 =-\\frac{1}{H}\\frac{\\dot{c_s}}{c_s}\n\\end{equation}\n\\begin{equation}\n\\delta_1 \\delta_2\n=\\frac{\\dot{\\delta_1}}{H}\n=\\frac{\\dot{H}}{H^3}\\frac{\\dot{c_s}}{c_s}-\\frac{1}{H^2}\\frac{\\ddot{c_s}}{c_s}+\\frac{1}{H^2}\\frac{\\dot{c_s}^2}{c_s^2}\n\\end{equation}\n\nRewrite the third line of Eq. 
(\\ref{eq:explicit-effp-t}).\n\\begin{eqnarray}\n&&\\left(-\\frac{3}{H}+2\\frac{\\dot{H}}{H^3}-\\frac{\\ddot{H}}{H^2\\dot{H}}\\right)\\frac{\\dot{c_s}}{c_s}+\\frac{2}{H^2}\\frac{\\dot{c_s}^2}{c_s^2}-\\frac{1}{H^2}\\frac{\\ddot{c_s}}{c_s}\\nonumber\\\\\n&=&\\left(-\\frac{3}{H}+\\frac{\\dot{H}}{H^3}-\\frac{\\ddot{H}}{H^2\\dot{H}}\\right)\\frac{\\dot{c_s}}{c_s}+\\frac{1}{H^2}\\frac{\\dot{c_s}^2}{c_s^2}+\\delta_1 \\delta_2\\nonumber\\\\\n&=&\\left(3-\\frac{\\dot{H}}{H^2}+\\frac{\\ddot{H}}{H\\dot{H}}\\right)\\delta_1+\\delta_1^2+\\delta_1 \\delta_2\\nonumber\\\\\n&=& (3-\\epsilon_1 +\\epsilon_2)\\delta_1+\\delta_1^2+\\delta_1 \\delta_2\n\\label{eq:detail2}\n\\end{eqnarray}\n\nBy combining Eq. (\\ref{eq:detail1}) and Eq. (\\ref{eq:detail2}), we obtain the explicit form of the effective potential Eq. (\\ref{eq:explicit-effp}).\n\n\\subsection*{The third term of the adiabatic subtraction term}\nNext, let us rewrite the third term of the subtraction term Eq. (\\ref{eq:explicit-third}).\n\\begin{equation}\n\\frac{1}{4}\\frac{c''_s}{c_s}-\\frac{3}{8}\\frac{c'_s {}^2}{c_s^2}\n=- \\frac{1}{8}a^2 H^2\\delta_1 (2-2\\epsilon_1+\\delta_1+2\\delta_2)\n\\tag{\\ref{eq:explicit-third}}\n\\end{equation}\n\nTo use the above calculation, we need to translate these terms into proper-time derivatives.\n\\begin{eqnarray}\n\\frac{1}{4}\\frac{c''_s}{c_s}-\\frac{3}{8}\\frac{c'_s {}^2}{c_s^2}\n&=&\\frac{1}{4}\\left(\\frac{a\\dot{a}\\dot{c_s}+a^2 \\ddot{c_s}}{c_s}\\right)-\\frac{3}{8}\\frac{a^2 \\dot{c_s}^2}{c_s^2}\\nonumber\\\\\n&=&a^2 H^2 \\left\\{\\frac{1}{4}\\frac{\\dot{c_s}}{Hc_s}+\\frac{1}{4}\\frac{\\ddot{c_s}}{H^2 c_s}-\\frac{3}{8}\\frac{\\dot{c_s}^2}{H^2 c_s^2}\\right\\}\\nonumber\\\\\n&=&a^2 H^2 \\left\\{-\\frac{1}{4}\\delta_1 +\\frac{1}{4}\\delta_1 (\\epsilon_1 +\\delta_1 -\\delta_2 )-\\frac{3}{8}\\delta_1^2\\right\\}\\nonumber\\\\\n&=&-a^2 H^2 \\cdot \\frac{1}{8}\\delta_1 (2-2\\epsilon_1+\\delta_1+2\\delta_2)\n\\label{eq:detail3}\n\\end{eqnarray}\n\nCombining Eq. 
(\\ref{eq:detail3}) with the effect of the time-dependent sound speed in the effective potential Eq. (\\ref{eq:detail2}), we obtain\n\\begin{equation}\n\\frac{1}{2}\\left\\{(3-\\epsilon_1 +\\epsilon_2)\\delta_1+\\delta_1^2+\\delta_1 \\delta_2\\right\\}\n-\\frac{1}{8}\\delta_1 (2-2\\epsilon_1+\\delta_1+2\\delta_2)\n=\\frac{1}{8}\\delta_1 (10-2\\epsilon_1 +4\\epsilon_2 +3\\delta_1 +2\\delta_2)~.\n\\end{equation}\n\nThen we have rewritten the adiabatic subtraction term in terms of the flow functions:\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2\n=\\frac{1}{2z^2 c_s k}\\left\\{1+\\left(\\frac{aH}{c_s k}\\right)^2 \\left(1+\\delta\\epsilon+\\delta c_s \\right)\\right\\}\n\\tag{\\ref{eq:sub-in-sr}}\n\\end{equation}\nwhere\n\\begin{equation}\n\\delta\\epsilon \\equiv \\frac{1}{2}\\left(-\\epsilon_1 +\\frac{3}{2}\\epsilon_2 +\\frac{1}{4}\\epsilon_2^2 -\\frac{1}{2}\\epsilon_1\\epsilon_2 +\\frac{1}{2}\\epsilon_2 \\epsilon_3\\right)~,\n\\tag{\\ref{eq:del-e}}\n\\end{equation}\n\\begin{equation}\n\\delta c_s \\equiv\n\\frac{1}{8}\\delta_1 (10-2\\epsilon_1 +4\\epsilon_2 +3\\delta_1 +2\\delta_2)~.\n\\tag{\\ref{eq:del-c}}\n\\end{equation}\n\\section{The time development of the adiabatic subtraction terms during inflation}\\label{chap:sub-dev}\nWe have investigated the time dependence of the adiabatic subtraction term for non-canonical inflation in Section \\ref{sec:timedev}.\nThe zeroth adiabatic order term decays in general, while we cannot say anything about the second order term unless we specify the model (see \\cite{Alinea:2015-1}).\n\nFrom the viewpoint we take in Section \\ref{sec:timedev}, the time dependence of the subtraction term during inflation is irrelevant.\nHowever, some authors still claim that the subtraction term should be estimated at the horizon exit \\cite{Agullo:2010-1,Agullo:2011-1}.\n\nHence we estimate the time development of the subtraction term during inflation in the following two cases: (1) the slow-roll inflation model in quasi-de 
Sitter spacetime and (2) the DBI inflation model.\nAs a result, we find that the subtraction terms at the horizon exit strongly suppress the bare power spectrum in these two cases.\n\n\\if0{\n From the definition of the sound flow functions Eq. (\\ref{eq:soundflow-def}), $c_s$ can be expressed by\n \\begin{equation}\n c_s (N)\\propto\\exp{\\left(-\\int^N \\delta_1 (\\tilde{N})d\\tilde{N}\\right)}~.\n \\end{equation}\n Then the $N$-dependence of the second order term can be obtained\n \\begin{eqnarray}\n \\frac{H^2}{4\\epsilon_1 c_s k^3}&\\propto&\\frac{1}{k^3}\n \\exp{\\left\\{\\int^N d\\tilde{N}\\left(-2\\epsilon_1 (\\tilde{N}) -\\epsilon_2 (\\tilde{N})+\\delta_1 (\\tilde{N})\\right)\\right\\}}\\nonumber\\\\\n &=&\\frac{1}{k^3}\\exp{\\left\\{-\\left[2\\ln{\\epsilon_0}+\\ln{\\epsilon_1}+\\ln{c_s}\\right]+\\left[2\\ln{\\epsilon_0}+\\ln{\\epsilon_1}+\\ln{c_s}\\right]_{\\text{ini}}\\right\\}}\n \\end{eqnarray}\n where we use $H(N)\\propto\\exp{(-\\int^N \\epsilon_1 (\\tilde{N})d\\tilde{N})}$.\n\nIn the models which the equation of state parameter $w\\equiv P\/E$ is larger than $-1$\\footnote{Otherwise the metric will diverge within finite time.}, $\\epsilon_1 \\propto\\dot{\\epsilon_0}$ is positive because from Eq.(\\ref{eq:Friedmann1}),\n\\begin{equation}\n\\dot{H}=-\\frac{1}{2}(E_c +P_c )=-\\frac{1}{2}E_c (1+w)<0~.\n\\end{equation}\nMoreover, the slow-roll function $\\epsilon_1$ approaches to $1$ at the end of inflation, we expect that $\\dot{\\epsilon_1}$ is also positive.\nTherefore in the canonical kinetic term case the factor Eq. (\\ref{eq:second-factor}) always decays as insisted by \\cite{Urakawa:2009-1}.\nIn exact de Sitter spacetime, the flow functions are zero and the factor Eq. 
(\\ref{eq:second-factor}) becomes constant as we saw in the previous section.\nIt depends on how $c_s$ develops in non-canonical kinetic term case.\n}\\fi\n\n\\subsection{The slow-roll inflation model in quasi-de Sitter spacetime\\label{sec:quasi-dS}}\n\nLet us first review the case of the slow-roll inflation model in a de Sitter background spacetime.\nStrictly speaking, an exact de Sitter universe can only be achieved by a cosmological constant rather than by an inflaton.\nWe set the slow-roll inflaton with perturbations on a de Sitter background, and hence the derivation of the power spectrum is an approximation.\n\nIn the case of an exact de Sitter universe,\n\\begin{equation}\na(t)= e^{Ht} \\longrightarrow a(\\eta)=-\\frac{1}{H\\eta}\n\\label{eq:exactdS}\n\\end{equation}\nand $H$ is constant.\nThe Hubble flow functions $\\epsilon_i$ ($i\\geq 1$) are equal to zero by definition.\nThe sound flow functions are also zero.\nFurthermore, we are assuming a model with a canonical kinetic term.\n\nThen the effective potential of the Mukhanov--Sasaki equation becomes\n\\begin{equation}\n\\frac{z''}{z}=2a^2 H^2=\\frac{2}{\\eta^2}\n\\end{equation}\nand the following expression is obtained.\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2=\\frac{H^2}{2a^2\\dot{\\varphi}^2 k}\\left(1+\\frac{1}{k^2 \\eta^2}\\right)\n=\\frac{H^4}{2\\dot{\\varphi}^2 k^3}\\left(k^2 \\eta^2 +1\\right)\n\\label{eq:sub-dS}\n\\end{equation}\nHere we use $E+P=\\dot{\\varphi}^2$.\nIn the large scale limit,\n\\begin{equation}\n\\lim_{-k\\eta \\to 0}|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2\n=\\frac{H^4}{2\\dot{\\varphi}^2 k^3}~.\n\\end{equation}\nThis is equal to the bare power spectrum of the slow-roll model in the large scale limit.\n\nUnder the slow-roll condition, we assume that the second derivative of the inflaton, $\\ddot{\\varphi}$, is small. We can neglect the time dependence of $\\dot{\\varphi}$ in the denominator of Eq. 
(\\ref{eq:sub-dS}) during inflation, and find that the first term decays exponentially while the second term remains constant in the time development.\nThis is shown in Fig. \\ref{fig:sub-dS}.\n\\begin{figure}[h] \\begin{center}\n \\includegraphics[width=10cm]{Figures\/subterm-timedev-dS.pdf}\n \\caption{The time development of the subtraction term Eq. (\\ref{eq:sub-dS}) in the case of de Sitter spacetime. We are assuming $\\dot{\\varphi}=1$ (constant) to plot. We also set $H=k=1$ because these constants affect only the overall magnitude of the power spectrum. The subtraction term does not become zero even at late times in the inflationary era.}\n \\label{fig:sub-dS}\n\\end{center} \\end{figure}\n\nTake the difference between the bare power spectrum after the horizon exit (in the large scale limit) and the subtraction term,\n\\begin{equation}\n\\lim_{-k\\eta \\to 0}|\\mathcal{R}_k (\\eta)|_{\\text{phys}}^2\n=\\frac{H^4}{2\\dot{\\varphi}^2 k^3}\\biggr|_{\\text{exit}}-\\frac{H^4}{2\\dot{\\varphi}^2 k^3}~.\n\\label{eq:phys-dS}\n\\end{equation}\nBecause $H$ and $\\dot{\\varphi}$ are nearly constant during slow-roll inflation, the physical power spectrum in the large scale limit is strongly suppressed by the subtraction term.\nThis result agrees with prior research \\cite{Parker:2007-1,Agullo:2010-1,Agullo:2011-1}, which argues that the amplitude of the fluctuation of the inflaton itself rather than the comoving curvature, $\\langle|\\delta\\varphi|^2\\rangle_{\\text{phys}}$, is also suppressed by the subtraction term during inflation.\n\n\\if0{\n However, we note that this is limited in the de Sitter case: only when we set the scale factor as Eq. (\\ref{eq:exactdS}).\nThe time dependence of the subtraction term is not canceled in general.\n\n Moreover, in the ``exact'' de Sitter case, there is no perturbation in the universe and no time-dependence in the first term and the second term of Eq. 
(\\ref{eq:phys-dS}).\n In this case the physical power spectrum is exactly zero.\n From this, we regard the second order part of the adiabatic subtraction term as the vacuum energy in the exact de Sitter spacetime.\n}\\fi\n\n\\subsection{DBI inflation model}\nNext, let us see how the adiabatic subtraction term behaves in the DBI inflation model \\cite{Silverstein:2004-1}.\nDBI inflation is motivated by string theory\nand it achieves inflation by the ``D-cceleration'' mechanism.\nThe cosmological properties of this model have been analyzed; see e.g. \\cite{Silverstein:2004-1,Alishahiha:2004-1}.\n\nThe Lagrangian of the DBI inflation model is given by\n\\begin{equation}\nP(\\varphi,X)=-\\frac{1}{f_{\\text{D}}(\\varphi)}\\left(\\sqrt{1-2f_{\\text{D}}(\\varphi)X}-1\\right)-V(\\varphi)\n\\end{equation}\nwhere $f_{\\text{D}}(\\varphi)$ is the (squared) warp factor and we redefine $X$ in terms of the proper time metric\\footnote{We set the string coupling $g_s =1$ because it only affects the overall magnitude of the power spectrum in the large scale limit.}, i.e., $X=\\frac{1}{2}\\dot{\\varphi}^2$.\n\nThe adiabatic subtraction term in this model can be calculated by using the general result Eq. (\\ref{eq:mini-sub}) or Eq. (\\ref{eq:sub-in-sr}).\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2\n=\\frac{1}{2z^2 c_s k}\\left\\{1+\\left(\\frac{aH}{c_s k}\\right)^2 \\left(1+\\delta\\epsilon+\\delta c_s \\right)\\right\\}\n\\tag{\\ref{eq:sub-in-sr}}\n\\end{equation}\n\\if0{\nBecause it is convenient to use the proper time in this model, we rewrite the subtraction term Eq. 
(\\ref{eq:mini-sub}) in terms of the proper time derivative.\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2\n=\\frac{1}{2z^2 c_s k}\\left[1+\\frac{1}{2c_s^2 k^2}\\frac{a\\dot{a}\\dot{z}+a^2 \\ddot{z}}{z}+\\frac{1}{c_s^2 k^2}\\left\\{\\frac{1}{4}\\left(\\frac{a\\dot{a}\\dot{c_s}+a^2 \\ddot{c_s}}{c_s}\\right)-\\frac{3}{8}\\frac{a^2\\dot{c_s}^2}{c_s^2}\\right\\}\\right]\n\\label{eq:sub-t}\n\\end{equation}\n}\\fi\n\nLet us calculate the speed of sound and $z$.\n\\begin{equation}\nP,_X = \\frac{1}{\\sqrt{1-2f_{\\text{D}}(\\varphi)X}},\n~P,_{XX}=\\frac{f_{\\text{D}}(\\varphi)}{(1-2f_{\\text{D}}(\\varphi)X)^{\\frac{3}{2}}}\n\\end{equation}\n\\begin{equation}\nE+P=2XP,_X=\\frac{2X}{\\sqrt{1-2f_{\\text{D}}(\\varphi)X}}\n\\end{equation}\n\\begin{equation}\n\\therefore c_s^2\\equiv\\frac{P,_X}{2XP,_{XX}+P,_X}=1-2f_{\\text{D}}(\\varphi)X\n\\end{equation}\n\\begin{equation}\n\\therefore z^2 =\\frac{a^4 (E+P)}{c_s^2 \\mathcal{H}^2}\n=\\frac{2a^2}{H^2}\\frac{X}{(1-2f_{\\text{D}}(\\varphi)X)^{\\frac{3}{2}}}\n\\end{equation}\nNote that both $c_s$ and $z$ are independent of the potential $V(\\varphi)$ in general (not limited to the DBI model), because $E+P=2XP,_X$ involves only the derivative of $P$ with respect to $X$.\nHowever, the behavior of $\\varphi$ is determined by $f_{D}(\\varphi)$ and $V(\\varphi)$.\n\n\\begin{figure}[h] \\begin{center}\n \\includegraphics[width=10cm]{Figures\/subterm-timedev-DBI.pdf}\n \\caption{The time development of the subtraction term in the case of a DBI inflation model $f_{D}(\\varphi)=\\frac{\\lambda}{\\varphi^4}$, $V(\\varphi)=m^2\\varphi^2$.\n We set $k=1$, $\\epsilon_D =0.01$ and $m=10^{-15}$ to plot.\n The subtraction term decreases very rapidly, but the second order term, which cancels the bare spectrum, remains.\n }\n \\label{fig:sub-DBI}\n\\end{center} \\end{figure}\n\nWe can obtain the subtraction term in terms of $f_D (\\varphi)$ and $X$ by substituting $z$ and $c_s$ into Eq. 
(\\ref{eq:sub-in-sr}).\nHowever, there are many possible choices for the two functions.\nIn this section, we assume a simple model which has $f_{D}(\\varphi)=\\lambda\/\\varphi^4$ and $V(\\varphi)=m^2\\varphi^2$.\n$\\lambda$ is the 't Hooft coupling and $m$ is the mass of the inflaton.\nThis model is analyzed in detail in \\cite{Alishahiha:2004-1} and can achieve power-law inflation.\n\nIn this model, each function behaves at late times (but still during the inflationary era) as follows \\cite{Alishahiha:2004-1}.\n\\begin{equation}\na(t)\\rightarrow a_{\\text{ini}} t^{1\/\\epsilon_{D}},\\hspace{1em}\nH\\rightarrow\\frac{1}{\\epsilon_D t},\\hspace{1em}\n\\varphi\\rightarrow\\frac{\\sqrt{\\lambda}}{t},\\hspace{1em}\nc_s\\rightarrow\\sqrt{\\frac{3\\lambda}{4}}\\frac{1}{mt^2},\\hspace{1em}\nz\\rightarrow z_{\\text{ini}} t^{(2\\epsilon_D +1)\/\\epsilon_D}\n\\label{eq:late-DBI}\n\\end{equation}\nwhere $\\epsilon_D$ is a slow-roll parameter.\n\\begin{equation}\n\\epsilon_D \\equiv 2c_s \\left(\\frac{H,_{\\varphi}}{H}\\right)^2\n\\end{equation}\ni.e., $c_s \\ll 1$ is necessary to achieve accelerated expansion.\n\n$\\epsilon_D$ can be calculated by using the Friedmann equation and Eq. (\\ref{eq:late-DBI}),\n\\begin{equation}\n\\epsilon_D\n=\\frac{3}{\\sqrt{3m^2 \\lambda +2\\sqrt{3m^2 \\lambda}}}\n\\approx \\sqrt{\\frac{3}{\\lambda}}m^{-1}\n\\end{equation}\nand it is approximately constant at late times.\nLet us see the relationship between $\\epsilon_D$ and the flow functions.\n\\begin{equation}\n\\epsilon_1 =-\\frac{\\dot{H}}{H^2}=\\epsilon_D\n\\end{equation}\n\\begin{equation}\n\\delta_1 =-\\frac{\\dot{c_s}}{Hc_s}=-2\\epsilon_D\n\\end{equation}\nTherefore $\\epsilon_D =\\epsilon_1 = -\\frac{1}{2}\\delta_1$, and $\\delta\\epsilon$ and $\\delta c_s$ are constants during inflation.\n\nThen we can also use the factors $\\frac{1}{2z^2 c_s k}$ and $\\frac{1}{2z^2 c_s k}\\left(\\frac{aH}{c_s k}\\right)^2$ to see the time dependence of the subtraction term.\nSubstitute Eq. 
(\\ref{eq:late-DBI}) into them, and we obtain the following expressions.\n\\begin{equation}\n\\frac{1}{2z^2 c_s k}=\\frac{c_s}{4\\epsilon_1 a^2 k}\n=\\frac{3}{8\\epsilon_D^2 m^2 k} t^{-2(1+1\/\\epsilon_D)}\n\\end{equation}\n\\begin{equation}\n\\frac{1}{2z^2 c_s k}\\left(\\frac{aH}{c_s k}\\right)^2\n=\\frac{H^2}{4\\epsilon_D c_s k^3}\n=\\frac{m^2}{6 \\epsilon_D^2 k^3}\n=\\text{const.}\n\\end{equation}\nThe zeroth adiabatic order factor becomes small, while the second order term is constant.\nThis matches the bare power spectrum of this model, which is given by (in the Bunch--Davies vacuum and in the large scale limit)\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{bare}}^2 =\\frac{1}{2\\lambda\\epsilon_D^4 k^3}\n=\\frac{m^2}{6 \\epsilon_D^2 k^3}~.\n\\end{equation}\nThe time development is shown in Fig. (\\ref{fig:sub-DBI}).\n\nTherefore the regularized power spectrum at large scales is also strongly suppressed in this model.\n\\section{The time development of the adiabatic subtraction terms after inflation}\\label{chap:sub-dev-after}\n\nAs mentioned, we assume that the subtraction terms do not freeze out at horizon crossing.\nThe subtraction terms shrink after inflation in the canonical kinetic term case \\cite{Urakawa:2009-1}.\nIn the non-canonical case, they also decrease if we assume a nearly scale invariant power spectrum \\cite{Alinea:2015-1}.\n\nThen the following question comes to mind:\nhow small do the subtraction terms become?\nNo one seems to have checked this quantitatively.\nIn this appendix, we examine it via a very rough estimate as a guide for future research.\n\nWe restrict ourselves to models which have neither non-canonical kinetic terms nor a coupling with the scalar curvature.\nAlso we assume that the scalar perturbation obeys a single equation throughout the history of the universe, and we neglect the dark energy dominated era.\n\nThe second adiabatic order subtraction term is\n\\begin{equation}\n\\frac{1}{2z^2 
k}\\left(\\frac{aH}{k}\\right)^2 (1+\\delta\\epsilon)\n\\approx \\frac{H^2}{4\\epsilon_1 k^3}~.\n\\label{eq:2ndsub}\n\\end{equation}\n\nFor simplicity of the formula, we set the magnitude of the dimensionless bare power spectrum to $1\/8\\pi^2$.\nThen, to compare the magnitude of the subtraction terms with the bare one, we only have to follow the time development of Eq. (\\ref{eq:2ndsub}) for a specific mode.\nEq. (\\ref{eq:2ndsub}) can be rewritten in terms of slow-roll parameters \\cite{Urakawa:2009-1}.\n\\begin{equation}\n\\frac{H^2}{4\\epsilon_1 k^3}=\n\\frac{1}{4k^3}\\exp{\\left(-\\int^N_{N_*} d\\tilde{N}(2\\epsilon_1 (\\tilde{N})+\\epsilon_2 (\\tilde{N}))\\right)}\n\\label{eq:rewritten}\n\\end{equation}\nwhere $N_*$ is the e-folding at horizon exit.\nHere we define the e-folding $N(a)\\equiv\\ln{(a\/a_*)}$ with $a_0 =1$.\n\nWe need to separate the range of integration to calculate this.\nWe consider three parts: the inflationary era (from $N_*$ to $N_{\\text{end}}$), the radiation dominated era (from $N_{\\text{end}}$ to $N_{\\text{eq}}$), and after that.\n\nWe assume the inflation ends abruptly at $N_{\\text{end}}$.\nThe behavior of the subtraction terms during the inflationary era can be calculated (see Appendix \\ref{chap:sub-dev}).\nIt has model dependence through the slow-roll parameters and e-foldings, but is typically reduced to $10^{-1}$--$10^{-3}$.\n\nAfter inflation, the slow-roll parameters become large and the time behavior of the subtraction terms loses its dependence on the inflation model (except through the sound speed).\n\nIn the radiation dominated era, the integrand of Eq. 
(\\ref{eq:rewritten}) is equal to $4$.\nThe subtraction terms are reduced by a factor $\\exp{\\left(-4(N_{\\text{eq}}-N_{\\text{end}})\\right)}$.\n$N_{\\text{eq}}-N_{\\text{end}}$ depends on the temperature at the end of the inflationary era.\nHowever, using the scale factor at recombination and at the time the energy densities of radiation and matter became equal, the subtraction terms are reduced at least to $10^{-2}$ of the value at the end of inflation,\nbecause\n\\begin{equation}\nN_{\\text{eq}}-N_{\\text{end}} > \\ln{\\left(\\frac{a_\\text{CMB}}{a_{\\text{eq}}}\\right)}\\approx 0.8\n\\end{equation}\nwhere $a_\\text{CMB}$ is the scale factor at recombination.\n\nThe ambiguity of the calculation becomes negligible in the matter dominated era.\nIn this era, the integrand of Eq. (\\ref{eq:rewritten}) is equal to $3$ and\n\\begin{equation}\nN_0 -N_{\\text{eq}}=\\ln{\\left(\\frac{1}{a_\\text{eq}}\\right)}\\approx 6~,\n\\end{equation}\nso the subtraction terms are reduced to about $10^{-8}$.\n\nCombining these results, the adiabatic subtraction terms are reduced at least to $10^{-11}$ by the present.\nHowever, the power spectrum including the subtraction terms should be estimated at the time it becomes ``classical''.\n\nIt is unknown when and how the power spectrum became classical.\nThis perhaps happened in the inflationary era or the radiation dominated era.\nIf the subtraction terms were not sufficiently small at that time, we may have the opportunity to observe the difference.\n\nThis calculation is very rough.\nWe need a more precise treatment including the effects of reheating and higher order quantum corrections.\n\\section{The subtraction terms in the Jordan frame}\\label{chap:jordan-frame}\nIn this section, we calculate the explicit form of the subtraction terms in the Jordan frame by using the conformal transformation to see how the non-minimal coupling term affects it.\n\nThe subtraction term has been derived in section \\ref{sec:conformal}.\n\\begin{equation}\n|\\widehat{\\mathcal{R}} 
(\\eta)|_{\\text{sub}}^2=\n\\frac{1}{2\\widehat{z}^2 \\widehat{c}_s k}\\left\\{1+\\frac{\\widehat{z}''}{2\\widehat{z}}\\frac{1}{\\widehat{c}_s^2 k^2}+\\frac{1}{\\widehat{c}_s^2 k^2}\\left(\\frac{1}{4}\\frac{\\widehat{c}''_s}{\\widehat{c}_s}-\\frac{3}{8}\\frac{\\widehat{c}'_s {}^2}{\\widehat{c}_s^2}\\right)\\right\\}\n\\tag{\\ref{sub:einstein}}\n\\end{equation}\nwhere\n\\begin{equation}\n\\widehat{z}^2=2\\left\\{3e^{2\\theta}\\left(\\frac{\\mathcal{H}}{\\theta'}-1\\right)^2+\\frac{a^4 \\Sigma}{\\theta'^2}\\right\\}\n\\tag{\\ref{eq:zcon}}\n\\end{equation}\n\\begin{equation}\n\\widehat{c}_s^2=\\frac{e^{2\\theta}\\left(1-\\frac{\\theta''}{\\theta' {}^{2}}\\right)}{3e^{2\\theta}\\left(\\frac{\\mathcal{H}}{\\theta'}-1\\right)^2+\\frac{a^4 \\Sigma}{\\theta'^2}}=\\frac{\\theta'^2-\\theta''}{3(\\mathcal{H}-\\theta')^2+\\frac{a^4 \\Sigma}{e^{2\\theta}}}\n=\\frac{2\\widehat{\\epsilon}_1 \\widehat{a}^2}{\\widehat{z}^2}~.\n\\tag{\\ref{eq:ccon}}\n\\end{equation}\n\nLet us rewrite $\\widehat{z}$ in terms of $z$, the corresponding variable in the minimal case.\n\\begin{equation}\n\\widehat{z}\n=\\frac{\\mathcal{H}}{\\theta'}\\sqrt{6e^{2\\theta}\\left(1-\\frac{\\theta'}{\\mathcal{H}}\\right)^2 +2\\frac{a^4 \\Sigma}{\\mathcal{H}^2}}\n=\\frac{\\mathcal{H}}{\\theta'}\\sqrt{6e^{2\\theta}\\left(1-\\frac{\\theta'}{\\mathcal{H}}\\right)^2 +z^2}\n\\end{equation}\n\n\nTo derive the explicit form of the subtraction terms in the Jordan frame, we need the second derivatives of $\\widehat{z}$ and $\\widehat{c}_s$.\nWe can use some convenient identities as 
below.\n\\begin{equation}\n\\widehat{z}'=\\frac{1}{2\\widehat{z}}\\frac{d}{d\\eta}\\widehat{z}^2,\\hspace{1em}\n\\frac{\\frac{d}{d\\eta}\\widehat{z}'^2}{\\frac{d}{d\\eta}\\widehat{z}^2}\n=\\frac{2\\widehat{z}'\\widehat{z}''}{2\\widehat{z}\\widehat{z}'}\n=\\frac{\\widehat{z}''}{\\widehat{z}}\n\\label{eq:derivative}\n\\end{equation}\n\\begin{equation}\n\\frac{d}{d\\eta}\\left(\\frac{AB\\dots}{CD\\dots}\\right)^2\n=2\\left(\\frac{AB\\dots}{CD\\dots}\\right)^2 \\left(\\frac{A'}{A}+\\frac{B'}{B}+\\dots-\\frac{C'}{C}-\\frac{D'}{D}-\\dots\\right)\n\\end{equation}\n\n\\subsection*{Derivatives of $\\widehat{z}$}\nLet us calculate the derivatives of $\\widehat{z}$ first.\nFor simplicity, define a function $\\alpha$,\n\\begin{equation}\n\\alpha^2 \\equiv 6e^{2\\theta}\\left(1-\\frac{\\theta'}{\\mathcal{H}}\\right)^2.\n\\end{equation}\nThis function approaches zero in the minimal coupling limit because $\\theta'$ is equal to $\\mathcal{H}$ in the minimal coupling case.\n\nThen $\\widehat{z}^2$ becomes\n\\begin{equation}\n\\widehat{z}^2 = \\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2 (z^2 +\\alpha^2)\n\\label{zhatal}\n\\end{equation}\nand\n\\begin{eqnarray}\n\\frac{d}{d\\eta}\\widehat{z}^2 &=&\n2\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2 z^2 \\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}+\\frac{z'}{z}-\\frac{\\theta''}{\\theta'}\\right) +2\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2 \\alpha^2 \\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}+\\frac{\\alpha'}{\\alpha}-\\frac{\\theta''}{\\theta'}\\right)\\nonumber\\\\\n&=& 2\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\left\\{(z^2 +\\alpha^2)\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)+zz'+\\alpha\\alpha'\\right\\}~.\n\\end{eqnarray}\nUsing the relation Eq. 
(\\ref{eq:derivative}),\n\\begin{eqnarray}\n\\widehat{z}'^2 &=& \\frac{1}{4\\widehat{z}^2}\\left(\\frac{d}{d\\eta}\\widehat{z}^2\\right)^2\\nonumber\\\\\n&=& \\frac{4\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^4\\left\\{(z^2 +\\alpha^2)\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)+zz'+\\alpha\\alpha'\\right\\}^2}{4\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2 (z^2 +\\alpha^2)}\\nonumber\\\\\n&=&\\frac{1}{z^2 +\\alpha^2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\left\\{zz'+\\alpha\\alpha'+(z^2 +\\alpha^2)\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)\\right\\}^2~.\n\\end{eqnarray}\n\nAgain, to simplify this, define $\\beta$, which vanishes in the minimal coupling limit,\n\\if0{\n \\begin{equation}\n \\beta\\equiv\\frac{\\alpha}{z}\\alpha'+\\frac{z^2 +\\alpha^2}{z}\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)\n \\end{equation}\n Then\n \\begin{equation}\n \\widehat{z}'^2 = \\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2 \\frac{z^2}{z^2 +\\alpha^2}(z'+\\beta)^2\n \\end{equation}\n}\\fi\n\\begin{equation}\n\\beta\\equiv\\alpha\\alpha'+(z^2 +\\alpha^2)\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)\n\\end{equation}\nso that\n\\begin{equation}\n\\widehat{z}'^2=\\frac{1}{z^2 +\\alpha^2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\left(zz'+\\beta\\right)^2~.\n\\end{equation}\nSimilarly,\n\\begin{equation}\n\\frac{d}{d\\eta}\\left(\\widehat{z}'^2\\right)\n=\\frac{2}{z^2 +\\alpha^2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\left(zz'+\\beta\\right)^2 \\left\\{\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)-\\frac{zz'+\\alpha\\alpha'}{z^2+\\alpha^2}+\\frac{z'^2 +zz''+\\beta'}{zz'+\\beta}\\right\\}~,\n\\end{equation}\n\\begin{eqnarray}\n\\frac{\\widehat{z}''}{\\widehat{z}}=\\frac{\\frac{d}{d\\eta}\\widehat{z}'^2}{\\frac{d}{d\\eta}\\widehat{z}^2}\n&=&\\frac{\\frac{2}{z^2 
+\\alpha^2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\left(zz'+\\beta\\right)^2 \\left\\{\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)-\\frac{zz'+\\alpha\\alpha'}{z^2+\\alpha^2}+\\frac{z'^2 +zz''+\\beta'}{zz'+\\beta}\\right\\}}{2\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\left\\{(z^2 +\\alpha^2)\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)+zz'+\\alpha\\alpha'\\right\\}}\\nonumber\\\\\n&=&\\left(\\frac{zz'+\\beta}{z^2+\\alpha^2}\\right)^2 \\frac{\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)-\\frac{zz'+\\alpha\\alpha'}{z^2+\\alpha^2}+\\frac{z'^2 +zz''+\\beta'}{zz'+\\beta}}{\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)+\\frac{zz'+\\alpha\\alpha'}{z^2+\\alpha^2}}~.\n\\end{eqnarray}\n\nThis is the expression for the coefficient function of the second term of Eq. (\\ref{sub:einstein}) in the Jordan frame.\n\n\\subsection*{Derivatives of $\\widehat{c}_s$}\nNext, we need the expressions for the derivatives of $\\widehat{c}_s$ in the Jordan frame.\nBy definition,\n\\begin{equation}\n\\widehat{c}_s^2 =\\frac{2\\widehat{\\epsilon}_1 \\widehat{a}^2}{\\widehat{z}^2}~.\n\\end{equation}\nUsing Eq. (\\ref{zhatal}) and Eq. 
(\\ref{eq:hatep}),\n\\begin{equation}\n\\frac{1}{\\widehat{c}_s^2}=\\frac{1}{2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\frac{z^2 +\\alpha^2}{1-\\frac{\\theta''}{\\theta'^2}}e^{-2\\theta}~.\n\\end{equation}\n\nTo see the difference from the minimal coupling model, rewrite $\\widehat{\\epsilon}_1$.\n\\begin{equation}\n\\widehat{\\epsilon}_1 =1-\\frac{\\theta''}{\\theta'^2}\n=1-\\frac{\\mathcal{H}'}{\\mathcal{H}^2}+\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}^2}-\\frac{\\theta''}{\\theta'^2}\\right)\n\\end{equation}\nwhere we define $\\gamma$ as\n\\begin{equation}\n\\gamma\\equiv\\frac{\\mathcal{H}'}{\\mathcal{H}^2}-\\frac{\\theta''}{\\theta'^2}\n\\end{equation}\nand it vanishes in the minimal coupling limit.\n$1-\\frac{\\mathcal{H}'}{\\mathcal{H}^2}=\\epsilon_1$ is the first Hubble flow function of the minimal coupling model.\n\nThen we get\n\\begin{equation}\n\\frac{1}{\\widehat{c}_s^2}=\\frac{1}{2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\frac{z^2 +\\alpha^2}{\\epsilon_1 +\\gamma}e^{-2\\theta}\n\\end{equation}\nand calculate the derivatives.\n\\begin{eqnarray}\n&&\\frac{d}{d\\eta}\\widehat{c}_s^2 =\\frac{d}{d\\eta}\\left[2e^{2\\theta}\\left(\\frac{\\theta'}{\\mathcal{H}}\\right)^2 \\frac{\\epsilon_1 +\\gamma}{z^2 +\\alpha^2}\\right]\\nonumber\\\\\n&=&2e^{2\\theta}\\left(\\frac{\\theta'}{\\mathcal{H}}\\right)^2 \\frac{\\epsilon_1 +\\gamma}{z^2 +\\alpha^2}\\left\\{2\\theta'+2\\left(\\frac{\\theta''}{\\theta'}-\\frac{\\mathcal{H'}}{\\mathcal{H}}\\right)+\\frac{\\epsilon'_1 +\\gamma'}{\\epsilon_1 +\\gamma}-\\frac{2(zz'+\\alpha\\alpha')}{z^2 +\\alpha^2}\\right\\}\n\\end{eqnarray}\n\nDefine $\\kappa$ as\n\\begin{equation}\n\\kappa\\equiv2\\theta'+2\\left(\\frac{\\theta''}{\\theta'}-\\frac{\\mathcal{H'}}{\\mathcal{H}}\\right)+\\frac{\\epsilon'_1 +\\gamma'}{\\epsilon_1 +\\gamma}-\\frac{2(zz'+\\alpha\\alpha')}{z^2 +\\alpha^2}\n\\label{eq:kappa}\n\\end{equation}\nso that\n\\begin{equation}\n\\frac{d}{d\\eta}\\widehat{c}_s^2 = \\widehat{c}_s^2 
\\kappa~.\n\\end{equation}\n$\\kappa$ approaches $2\\frac{c'_s}{c_s}$ in the minimal coupling limit.\n\nTherefore\n\\begin{equation}\n\\widehat{c}'_s =\\frac{1}{2\\widehat{c}_s}\\frac{d}{d\\eta}\\widehat{c}_s^2\n=\\frac{1}{2}\\widehat{c}_s \\kappa\n\\end{equation}\n\\begin{equation}\n\\frac{d}{d\\eta}(\\widehat{c}'_s)^2 =\\frac{1}{4}\\kappa^2 \\frac{d}{d\\eta}\\widehat{c}_s^2 +\\frac{1}{4}\\widehat{c}_s^2 \\frac{d}{d\\eta}\\kappa^2\n=\\frac{1}{4}\\widehat{c}_s^2 \\kappa^3+\\frac{1}{4}\\widehat{c}_s^2 \\frac{d}{d\\eta}\\kappa^2\n\\end{equation}\n\\begin{equation}\n\\frac{\\widehat{c}''_s}{\\widehat{c}_s}=\\frac{\\frac{d}{d\\eta}(\\widehat{c}'_s )^2}{\\frac{d}{d\\eta}\\widehat{c}_s^2}\n=\\frac{1}{4}\\kappa^2+\\frac{1}{4}\\frac{1}{\\kappa}\\frac{d}{d\\eta}\\kappa^2\n=\\frac{1}{4}\\kappa^2 +\\frac{1}{2}\\kappa'\n\\end{equation}\n\\begin{equation}\n\\frac{\\widehat{c}'_s {}^2}{\\widehat{c}_s^2}=\\frac{\\frac{1}{4}\\widehat{c}_s^2 \\kappa^2}{\\widehat{c}_s^2}=\\frac{1}{4}\\kappa^2\n\\end{equation}\n\\begin{equation}\n\\therefore \\frac{1}{4}\\frac{\\widehat{c}''_s}{\\widehat{c}_s}-\\frac{3}{8}\\frac{\\widehat{c}'_s {}^2}{\\widehat{c}_s^2}\n=\\frac{1}{8}\\left(\\kappa'-\\frac{1}{4}\\kappa^2\\right)\n\\label{eq:kappas}\n\\end{equation}\nwhere $\\kappa$ is defined in Eq. 
(\\ref{eq:kappa}).\n\n\\subsection*{Results}\nSubstituting these expressions into Eq. (\\ref{sub:einstein}),\nwe get\n\\begin{eqnarray}\n|\\widehat{\\mathcal{R}} (\\eta)|_{\\text{sub}}^2&=&\n\\frac{1}{2k}\\frac{1}{\\sqrt{2(\\epsilon_1 +\\gamma)(z^2 +\\alpha^2)}}\\left(\\frac{\\theta'}{\\mathcal{H}}\\right)e^{-\\theta}\\left[1+\n\\frac{1}{4k^2}\\left(\\frac{\\mathcal{H}}{\\theta'}\\right)^2\\frac{z^2+\\alpha^2}{\\epsilon_1 +\\gamma}e^{-2\\theta}\\right.\\\\\n&\\times&\\left.\\left\\{\n\\left(\\frac{zz'+\\beta}{z^2+\\alpha^2}\\right)^2 \\frac{\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)-\\frac{zz'+\\alpha\\alpha'}{z^2+\\alpha^2}+\\frac{z'^2 +zz''+\\beta'}{zz'+\\beta}}{\\left(\\frac{\\mathcal{H}'}{\\mathcal{H}}-\\frac{\\theta''}{\\theta'}\\right)+\\frac{zz'+\\alpha\\alpha'}{z^2+\\alpha^2}}\n+\\frac{1}{4}\\left(\\kappa'-\\frac{1}{4}\\kappa^2\\right)\n\\right\\}\\right]~.\\nonumber\n\\end{eqnarray}\n\nWe have to calculate the derivatives of $\\alpha$, $\\beta$, and $\\kappa$ to obtain the final expression.\nHowever, the expression is very complicated and there seems to be little use in continuing in this fashion. It is actually enough to check that the expression coincides with the minimal coupling result in the appropriate limit, and it indeed does. 
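As an aside, the combination of $\widehat{c}_s$ derivatives in Eq. (\ref{eq:kappas}) can be cross-checked symbolically. The sketch below is illustrative and not part of the derivation: $c(\eta)$ stands for $\widehat{c}_s$, and $\kappa$ is taken from its defining relation $\frac{d}{d\eta}\widehat{c}_s^2=\widehat{c}_s^2\kappa$.

```python
import sympy as sp

# Illustrative symbolic cross-check: with kappa defined by
#   d(c^2)/d eta = c^2 * kappa  (i.e. kappa = 2 c'/c),
# verify  (1/4) c''/c - (3/8) (c'/c)^2 = (1/8) (kappa' - kappa^2/4).
eta = sp.symbols('eta')
c = sp.Function('c', positive=True)(eta)        # stands for \hat{c}_s
kappa = sp.diff(c**2, eta) / c**2               # = 2 c'/c by construction
lhs = sp.diff(c, eta, 2) / (4 * c) \
    - sp.Rational(3, 8) * (sp.diff(c, eta) / c)**2
rhs = (sp.diff(kappa, eta) - kappa**2 / 4) / 8
assert sp.simplify(lhs - rhs) == 0
print("identity verified")
```

The same combination, multiplied by $\frac{1}{2\widehat{c}_s^2 k^2}$, is what enters the bracketed expression of the final result above.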
Although our expression of the adiabatic subtraction terms for ``general'' (non-minimal) k-inflation depends on the non-minimal coupling term in a complex manner, it includes the expression for the usual (minimal) k-inflation as a special case.\n\\part{Introduction}\nThe inflationary paradigm \\cite{Guth:1981-1,Sato:1981-1,Linde:1982-1,Albrecht:1982-1} is a fascinating approach to solving various cosmological problems.\nAlthough inflation can provide these solutions naturally, we do not know how it occurs, or even whether or not it really occurred.\n\nTo examine inflation, we have a tool to see the early universe: the cosmic microwave background (CMB) radiation.\nThe CMB consists of photons which became free at early times (the last scattering surface) and now fill the observable universe.\nWe can observe the temperature and the polarization, and we know that the temperature of these photons from each direction is almost identical.\n\nHowever, tiny fluctuations in the temperature and the polarization carry a large amount of information about the early universe, because they are considered to reflect the primordial perturbations of the inflaton or of gravitation.\nIf we can analyze the observational data properly, we can effectively see the universe at the last scattering surface.\nIndeed, the CMB is the oldest light that we can observe. To know the details of the universe before the last scattering surface directly, we are forced to detect other signals such as gravitational wave signatures. \n\nWe can infer from the CMB that the early universe went through an inflationary era \\cite{Planck:2013-1}.\nHowever, the precision is still not enough to determine which inflation model was realized, and many inflation models have been proposed from \nvarious theoretical motivations. To compare theoretical inflation models with the observed CMB, the power spectrum, which will be explained shortly in section (\\ref{sec:powerspectrum}), is used. 
Therefore the method used to estimate the theoretical power spectrum is an important matter in cosmology.\n\nAlthough the variance of the perturbation, whose integrand is the power spectrum, has divergences (before regularization), it has usually \nbeen assumed that regularization is not necessary\\footnote{%\nRecently, a review of the problem \\cite{BasteroGil:2013-1} has even claimed that there is no need to regularize the power spectrum from the standpoint of observations (although interacting theories would still require a regularization according to \\cite{BasteroGil:2013-1}, see also \\cite{delRio:2014-1}).\n}.\nHowever, since the paper of Parker \\cite{Parker:2007-1}, quite a lot of discussion has arisen concerning this issue.\nHe applied the adiabatic regularization method, which is a regularization method used in quantum field theory in curved spacetime, to the power spectrum.\n\nThe purpose of this thesis is to review the adiabatic regularization of the power spectrum and to extend it to more general cases.\nIn the rest of this part, Part 1, we review cosmological perturbation theory; we then review adiabatic regularization and k-inflation in the first half of Part 2. \nThen we expand our interest to non-minimally coupled k-inflation, and derive the adiabatic subtraction terms for the model in the last part of Part 2.\nIn Part 3, we discuss the issue of the physicality of the Einstein and Jordan frames, which are related by a conformal transformation; of relevance to this thesis, we show the conformal equivalence of the adiabatic subtraction terms in each frame. In Part 4 we conclude and mention future work.\nWe include detailed calculations in Appendices: In Appendix A, we calculate Hubble flow functions in detail. In Appendices B and C, the time development of the adiabatic subtraction terms is discussed in some specific models. 
Finally, in Appendix D, we try to express the subtraction terms in the Jordan frame and discuss the generality of the derived subtraction terms.\n\n\\section{ Cosmological perturbation theory}\n\\subsection{The Friedmann equations}\nObservationally, the universe is nearly homogeneous and isotropic, and \nhence perturbation theory is usually considered around such a background.\nIn this thesis, we go up to linear order in perturbations, although in principle higher orders could be considered.\n\n\\subsubsection*{The background metric}\nThe homogeneous and isotropic background universe is described by the Friedmann--Lema\\^{i}tre--Robertson--Walker (FLRW) metric.\n\\begin{eqnarray}\nds^2&=&dt^2-a^2(t)d\\vec{x}^2\\nonumber\\\\\n&=& dt^2 -a^2 (t) \\left[ \\frac{dr^2}{1-Kr^2} +r^2 (d\\theta^2 +\\sin^2 \\theta d\\phi^2) \\right]\n\\label{eq:FLRW-t}\n\\end{eqnarray}\nwhere $K$ is the spatial curvature and $a(t)$ is the scale factor, which denotes the size of the universe at a given time. In this thesis, only the spatially flat ($K=0$) case will be considered.\n\nTo simplify the calculations, the conformal time $\\eta$ defined as\n\\begin{equation}\n\\eta =\\int^t \\frac{dt'}{a(t')}\n\\end{equation}\nis also used. The metric is written in terms of $\\eta$ as follows\n\\begin{equation}\nds^2=a^2(\\eta)\\left[d\\eta^2-d\\vec{x}^2\\right]~.\n\\label{eq:FLRW}\n\\end{equation}\nBoth proper time $t$ and conformal time $\\eta$ are used in this thesis.\nDerivatives are written as\n\\begin{equation}\n\\dot{f}(t)\\equiv\\frac{d}{dt}f(t),\\hspace{1em}f'(\\eta)\\equiv\\frac{d}{d\\eta}f(\\eta)~.\n\\end{equation}\nHowever, we use Eq. 
(\\ref{eq:FLRW}) as the background metric unless otherwise specified.\n\n\\subsubsection*{Metric perturbations}\nLet us now consider the perturbations.\nThe metric perturbations have three types of modes:\n1) scalar perturbations $\\phi(\\eta,\\vec{x}),\\psi(\\eta,\\vec{x}),B(\\eta,\\vec{x}),\\tilde{E}(\\eta,\\vec{x})$;\n2) vector perturbations $F(\\eta,\\vec{x})_i ,S(\\eta,\\vec{x})_i$;\n3) tensor perturbations $h(\\eta,\\vec{x})_{ij}$.\n\\begin{eqnarray}\n\\delta g_{00}(x)&=&2a^2\\phi\\\\\n\\delta g_{0i}(x)&=&a^2(B,_i +S_i)\\\\\n\\delta g_{ij}(x)&=&a^2(2\\psi\\delta_{ij}+2\\tilde{E},_{ij}+F_{i,j}+F_{j,i}+h_{ij})\n\\label{eq:metricP3}\n\\end{eqnarray}\n\nIf we keep only the scalar modes, the line element is written as\n\\begin{equation}\nds^2=a^2(\\eta)\\left[\\left(1+2\\phi\\right)d\\eta^2+2B,_i d\\eta dx^i -\\left\\{\\left(1-2\\psi\\right)\\delta_{ij} -2\\tilde{E},_{ij}\\right\\}dx^i dx^j \\right]~.\n\\label{eq:perturbedm}\n\\end{equation}\nThe above metric will eventually be used to derive an equation of motion for the inflaton perturbations, known as the Mukhanov--Sasaki equation.\n\n\n\n\\subsubsection*{Hydrodynamical perturbations}\nClassically, the universe is homogeneous and isotropic, and matter can be approximated by a perfect fluid.\n\nThe energy-momentum tensor of a perfect fluid is given by the following form.\n\\begin{equation}\nT_{\\mu\\nu}=(E+P)u_{\\mu}u_{\\nu}-Pg_{\\mu\\nu}\n\\label{eq:EMtensor}\n\\end{equation}\nwhere $E$ is the energy density of the matter and $P$ is the pressure.\n$u^{\\mu}(x)$ is a four-vector whose classical part is $u_{c\\mu} =(a,0,0,0)$ and which satisfies $u_{\\mu}u^{\\mu}=1$.\n\nIn perturbation theory, $E$, $P$ and $u^{\\mu}$ also have spatially dependent perturbation parts.\nLet us denote the classical part with a subscript $c$, e.g., 
$E(x)=E_c(\\eta)+\\delta E(\\eta,\\vec{x})$.\nThe perturbation of $u_i$ can be decomposed into a scalar part and a vector part, which we write explicitly as\n\\begin{equation}\n\\delta u_i =\\delta u_{\\parallel},_i +\\delta u_{\\perp i}\n\\end{equation}\nwhere $\\partial^i \\delta u_{\\perp i} =0$ and $\\delta u_{\\parallel}$ is the scalar part.\n\n\\if0{\n \\begin{eqnarray}\n T^0_{\\ 0} &=& (E+P)u^0 u_0 -P=E\\\\\n T^0_{\\ i} &=& (E+P)u^0 \\delta u_i =\\frac{1}{a}(E_c +P_c )\\delta u_i\\\\\n T^i_{\\ j} &=& -P\\delta^i_{\\ j}\n \\label{eq:EMtensor2}\n \\end{eqnarray}\n}\\fi%\nThe classical part of the energy-momentum tensor becomes\n\\begin{equation}\nT^0_{c\\ 0} =E_c ,\\hspace{1em}\nT^0_{c\\ i} =0 ,\\hspace{1em}\nT^i_{c\\ j} = -P_c \\delta^i_{\\ j}\n\\end{equation}\nand the linear perturbation part (only scalar modes) is given by\n\\begin{eqnarray}\n\\delta T^0_{\\ 0} &=& \\delta E~,\\nonumber\\\\\n\\delta T^0_{\\ i} &=& \\frac{1}{a}(E_c +P_c )\\delta u_{\\parallel},_i ~, \\label{eq:EMtensor3}\\\\\n\\delta T^i_{\\ j} &=& -\\delta P\\delta^i_{\\ j}~.\\nonumber\n\\end{eqnarray}\nThese equations will be used later on to derive the Mukhanov--Sasaki equation after we have first defined the cosmological background.\n\n\\subsubsection*{The Friedmann equations}\nUsing this metric and energy-momentum tensor, we can obtain the Friedmann equations from the Einstein equation $R_{\\mu\\nu}-\\frac{1}{2}Rg_{\\mu\\nu}=T_{\\mu\\nu}$\\footnote{In this thesis $\\sqrt{8\\pi G}\\equiv M_{\\text{Pl}}^{-1}$ is normalized so that $M_{\\text{Pl}}=1$.}.\n\nThe classical part is\n\\begin{equation}\nH^2=\\frac{E_c}{3},\\hspace{1em}\\frac{\\ddot{a}}{a}=-\\frac{1}{6}(E_c +3P_c)\n\\label{eq:Friedmann1}\n\\end{equation}\nwhere $H\\equiv\\frac{\\dot{a}}{a}$.\nThe continuity equation is also derived:\n\\begin{equation}\n\\dot{E}_c +3H(E_c +P_c)=0~.\n\\label{eq:continuity1}\n\\end{equation}\n\nOr in terms of conformal time,\n\\begin{equation}\n\\mathcal{H}^2=\\frac{a^2 
E_c}{3},\\hspace{1em}\\frac{a''}{a}-\\frac{a'^2}{a^2}=-\\frac{a^2}{6}(E_c +3P_c)\n\\label{eq:Friedmann2}\n\\end{equation}\n\\begin{equation}\nE'_c +3\\mathcal{H}(E_c +P_c)=0\n\\label{eq:continuity2}\n\\end{equation}\nwhere $\\mathcal{H}\\equiv\\frac{a'}{a}=aH$.\n\nThe equations for the scalar perturbations are more complicated because the metric perturbation contains non-physical degrees of freedom. To deal with this problem, we fix the gauge or compose gauge invariant quantities, as discussed in the following section.\n\n\\subsection{Gauge invariance}\nGauge fixing can remove the non-physical degrees of freedom in the perturbed metric Eq. (\\ref{eq:perturbedm}).\nHowever, the variables in a given gauge are not always physical quantities, i.e., they may not be gauge invariant.\n\nIn this thesis the conformal-Newtonian gauge will be considered.\n\n\\subsubsection*{The conformal-Newtonian gauge}\n\nThe conformal-Newtonian (longitudinal) gauge is defined by\n\\begin{equation}\nds^2=a^2(\\eta)\\left[\\left(1+2\\phi\\right)d\\eta^2-\\left(1-2\\psi\\right)d\\vec{x}^2\\right]~.\n\\label{eq:CNgauge}\n\\end{equation}\ni.e., $B=\\tilde{E}=0$.\n\nIn this gauge, the perturbations of the metric and inflaton are gauge invariant.\nThus we can rewrite the metric Eq. (\\ref{eq:CNgauge}), or equations in the conformal-Newtonian gauge, in terms of gauge invariant variables. Using the gauge invariant perturbations $\\Phi$ and $\\Psi$, the metric becomes\n\\begin{equation}\nds^2=a^2(\\eta)\\left[\\left(1+2\\Phi\\right)d\\eta^2-\\left(1-2\\Psi\\right)d\\vec{x}^2\\right]~.\n\\label{eq:CNgauge-i}\n\\end{equation}\n$\\Phi$ corresponds to the Newtonian potential.\nThere seem to be two degrees of freedom for the scalar mode.\nActually, $\\Phi$ and $\\Psi$ are related via the Einstein equations and are not independent.\nIn the minimal coupling case, the two are identical: $\\Phi=\\Psi$.\n\nFor other perturbations, overlines will be used to denote gauge invariant variables. 
e.g., $\\delta\\varphi(\\eta,\\vec{x})\\rightarrow\\overline{\\delta\\varphi}(\\eta,\\vec{x})$.\n\n\\subsubsection*{The comoving gauge}\nThe comoving gauge, which is another way of fixing the gauge, is also used in some references.\nWe will not use it in this thesis, but we briefly explain it.\n\nThe comoving gauge is defined by the condition\n\\begin{equation}\n\\phi=0,\\ \\partial^i \\delta u_{\\parallel} =0~.\n\\end{equation}\nThe scalar part of the metric becomes\n\\begin{equation}\nds^2 = a^2 (\\eta)\\left[d\\eta^2 -e^{2\\mathcal{R}}\\delta_{ij}dx^i dx^j \\right]\n\\end{equation}\nwhere $\\mathcal{R}$ is the comoving curvature perturbation, a gauge-invariant quantity composed of the scalar perturbations of the metric and the inflaton.\n\nThis gauge is particularly well suited to performing calculations using the ADM formalism \\cite{Arnowitt:1959-1} in cosmology, e.g., see \\cite{Maldacena:2002-1,Baumann:2009-1}.\n\n\\subsection{Mukhanov--Sasaki equation}\n\nAfter gauge fixing, the first order Einstein equations can be solved.\nWe can calculate in the conformal-Newtonian gauge, and then replace the perturbations by the gauge invariant ones.\n\nIn the minimal coupling case, we can combine the Einstein equations and get the Mukhanov--Sasaki equation describing the behavior of the comoving curvature perturbation $\\mathcal{R}$.\n\\begin{equation}\nv''_k +\\left(c_s^2 k^2 - \\frac{z''}{z}\\right)v_k=0\n\\label{MSeq}\n\\end{equation}\nwhere\n\\begin{equation}\nv_k\\equiv z\\mathcal{R}_k,\\hspace{1em}\nz\\equiv\\frac{a^2 \\sqrt{E+P}}{c_s \\mathcal{H}},\\hspace{1em}\nc_s^2\\equiv\\frac{\\partial P}{\\partial E}~.\n\\label{eq:zdef}\n\\end{equation}\n\nThe comoving curvature perturbation $\\mathcal{R}\\equiv\\Psi-\\frac{\\mathcal{H}}{a^2}\\delta u_{\\parallel}$ \\cite{Baumann:2009-1} is a gauge invariant scalar which is conserved on superhorizon scales and is equal there to the curvature perturbation on uniform-density hypersurfaces $\\zeta$.\n\nFinally, we should mention that 
$c_s$ is the ``sound speed'', the propagation speed of the perturbation.\nIn models with canonical kinetic terms, $c_s=1$.\n\n\\section{ The power spectrum\\label{sec:powerspectrum}}\n\\subsection{Quantization}\n\nIn this section, the definition and the physical properties of the power spectrum of the CMB temperature are introduced. \n\nFirst, we consider how to quantize the Mukhanov--Sasaki variable $v(x)$ into $\\hat{v}(x)$.\nTo begin, define the canonical momentum $\\hat{\\pi}$,\n\\begin{equation}\n\\hat{\\pi}\\equiv\\frac{\\partial\\mathcal{L}}{\\partial\\hat{v}'}\n\\end{equation}\nwhere $\\mathcal{L}$ is the Lagrangian. We then assume that the quantized variable $\\hat{v}$ also obeys the Mukhanov--Sasaki equation.\n\nWe now require the canonical commutation relations on $\\hat{v}$ and $\\hat{\\pi}$:\n\\begin{equation}\n[\\hat{v}(\\eta,\\vec{x}),\\hat{v}(\\eta,\\vec{y})]=[\\hat{\\pi}(\\eta,\\vec{x}),\\hat{\\pi}(\\eta,\\vec{y})] =0\n\\end{equation}\n\\begin{equation}\n[\\hat{v}(\\eta,\\vec{x}),\\hat{\\pi}(\\eta,\\vec{y})] = i\\delta (\\vec{x}-\\vec{y})\n\\end{equation}\n\nLet us consider the Fourier components\n\\begin{equation}\n\\hat{v}(\\eta,\\vec{x})=\\int\\frac{d^3 k}{(2\\pi)^{\\frac{3}{2}}}\\left[\nA_{\\vec{k}}v_k (\\eta)e^{i\\vec{k}\\cdot\\vec{x}}+\nA_{\\vec{k}}^{\\dagger} v_k^* (\\eta)e^{-i\\vec{k}\\cdot\\vec{x}}\n\\right]\n\\label{eq:Fourierv}\n\\end{equation}\nwith the normalization\n\\begin{equation}\nv_k^* v'_k -v_k^{*} {}' v_k =-i~.\n\\end{equation}\nThen $A_{\\vec{k}}$ and $A_{\\vec{k}}^{\\dagger}$ satisfy the canonical commutation relations\n\\begin{equation}\n[A_{\\vec{k}},A_{\\vec{k}'}]=[A_{\\vec{k}}^{\\dagger},A_{\\vec{k}'}^{\\dagger}]=0,\\hspace{1em}\n[A_{\\vec{k}},A_{\\vec{k}'}^{\\dagger}]=\\delta (\\vec{k}-\\vec{k}')~.\n\\label{eq:Acom}\n\\end{equation}\n\nThe power spectrum of the two point correlation function of the scalar perturbation, $\\mathcal{P}_{\\mathcal{R}}(k)$, is defined 
by\n\\begin{equation}\n\\langle\\mathcal{R}_{\\vec{k}}\\mathcal{R}_{\\vec{k}'}\\rangle\\equiv\\delta(\\vec{k}+\\vec{k}')\\mathcal{P}_{\\mathcal{R}}(k)~.\n\\label{eq:Pdef}\n\\end{equation}\n\nUsing $v_k\\equiv z\\mathcal{R}_k$ and Eqs. (\\ref{eq:Fourierv})--(\\ref{eq:Acom}):\n\\begin{eqnarray}\n\\langle\\mathcal{R}(\\eta,\\vec{x})\\mathcal{R}(\\eta,\\vec{y})\\rangle\n&=&\\frac{1}{z^2}\\langle\\hat{v}(\\eta,\\vec{x})\\hat{v}(\\eta,\\vec{y})\\rangle\\nonumber\\\\\n&=&\\frac{1}{z^2}\\frac{1}{(2\\pi)^3}\\int d^3 k |v_k|^2 e^{i\\vec{k}\\cdot(\\vec{x}-\\vec{y})}\\nonumber\\\\\n&=&\\int\\frac{dk}{k}\\frac{\\sin{(k|\\vec{x}-\\vec{y}|)}}{k|\\vec{x}-\\vec{y}|}\\frac{k^3}{2\\pi^2} |\\mathcal{R}_k|^2~.\n\\end{eqnarray}\nIf we set $\\vec{x}=\\vec{y}$,\n\\begin{equation}\n\\langle\\mathcal{R}^2 (\\eta,\\vec{x})\\rangle=\\int\\frac{dk}{k}\\frac{k^3}{2\\pi^2} |\\mathcal{R}_k|^2~.\n\\label{eq:powerspectrum}\n\\end{equation}\n\nOn the other hand, from Eq. (\\ref{eq:Pdef}):\n\\begin{equation}\n\\langle\\mathcal{R}^2 (\\eta,\\vec{x})\\rangle=\\frac{1}{2\\pi^2}\\int dk\\ k^2 \\mathcal{P}_{\\mathcal{R}}(k)~.\n\\end{equation}\n\\begin{equation}\n\\therefore \\mathcal{P}_{\\mathcal{R}}(k) = |\\mathcal{R}_k|^2\n\\end{equation}\n\nThe integrand of Eq. (\\ref{eq:powerspectrum}), $\\Delta^2_{\\mathcal{R}}(k)\\equiv\\frac{k^3}{2\\pi^2} \\mathcal{P}_{\\mathcal{R}}(k)$, is called the dimensionless power spectrum. In this thesis, we will sometimes call $\\Delta^2$ the power spectrum.\nWhen the distinction is needed, however, it will be clear from the context which one is meant.\n\n\\subsection{Behavior of the power spectrum}\n\nGiven that the comoving curvature perturbation $\\mathcal{R}$ is conserved after the Hubble horizon crossing, the power spectrum is also time-independent after crossing. 
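As a minimal numerical illustration of this freezing (a sketch, not part of the thesis), one can take the well-known Bunch--Davies mode function in de Sitter spacetime, $v_k=(1-i/(k\eta))\,e^{-ik\eta}/\sqrt{2k}$, hold $\epsilon_1$ fixed with $c_s=1$ so that $z=a\sqrt{2\epsilon_1}$, and watch $\Delta^2_{\mathcal{R}}=\frac{k^3}{2\pi^2}|v_k/z|^2$ settle onto the constant plateau $H^2/(8\pi^2\epsilon_1)$ once $-k\eta\ll1$:

```python
import numpy as np

def delta2_R(k, eta, H=1.0, eps1=0.01):
    """Dimensionless spectrum k^3 |R_k|^2 / (2 pi^2) for the Bunch-Davies
    mode in de Sitter, with c_s = 1 and eps1 treated as a fixed constant."""
    v_k = (1.0 - 1j / (k * eta)) * np.exp(-1j * k * eta) / np.sqrt(2.0 * k)
    a = -1.0 / (H * eta)           # de Sitter scale factor in conformal time
    z = a * np.sqrt(2.0 * eps1)    # z = a sqrt(2 eps1) / c_s with c_s = 1
    return k**3 / (2.0 * np.pi**2) * np.abs(v_k / z)**2

# Two late times, both with -k*eta << 1: the spectrum has frozen out,
# and its value is close to H^2 / (8 pi^2 eps1) independently of k and eta.
early, late = delta2_R(1.0, -1e-3), delta2_R(1.0, -1e-5)
plateau = 1.0 / (8.0 * np.pi**2 * 0.01)
```

Analytically $\Delta^2_{\mathcal{R}}=\frac{H^2}{8\pi^2\epsilon_1}(1+k^2\eta^2)$ in this setup, so the residual time and scale dependence is of order $k^2\eta^2$ and disappears outside the horizon.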
This is the reason why we are especially interested in large scales.\nThe modes on large scales exit the horizon early in the history of the universe, and they keep more information about the primordial perturbations than the modes on small scales.\n\nIt is known that the dimensionless power spectrum of the scalar perturbation in de Sitter spacetime can be solved exactly, and it does not depend on $k$. Moreover, the observations support nearly $k$-independent power spectra \\cite{Planck:2013-1}.\nIf the dimensionless power spectrum can be expressed (or approximated) by a power series, the integral Eq. (\\ref{eq:powerspectrum}), which is the variance of the perturbation, diverges on UV scales.\nThus we need to regularize the power spectrum\nin theoretical arguments.\n\nIn the next part, we will discuss adiabatic regularization, which is one way to regularize the power spectrum.\nWe will see the behavior of the adiabatic subtraction term in the latter half of Part 2.\n\\part{Regularization of the power spectrum}\n\\setcounter{section}{0}\n\nTo obtain a finite expression for the power spectrum, we consider adiabatic regularization \\cite{Parker:2007-1,ParkerToms200908}, one of the regularization schemes of quantum field theory in curved spacetime.\nIn this regularization, we perform a WKB-like expansion of the solution for the perturbations, and subtract the isolated divergent part.\nWe then regard the regularized power spectrum as the one corresponding to the observable power spectrum.\n\nIn this chapter, we review the adiabatic regularization method,\nand then apply it to the power spectrum of the k-inflation model.\n\n\n\\section{ Adiabatic regularization\\label{sec:adiabatic}}\nLet us consider a scalar field $\\phi(x)$ obeying the following equation of motion\\footnote{We formulate the adiabatic regularization method using the conformal time for convenience. 
The original formulation \\cite{Parker:2007-1,ParkerToms200908} uses the proper time, and both subtraction terms are equivalent. We can obtain the subtraction term in terms of the proper time either by using the differential equation in proper time or by a variable transformation of the subtraction term derived from Eq. (\\ref{eq:genEOM}).}.\n\\begin{equation}\n\\phi''(x)+f(\\eta)\\phi'(x)+\\left[g(\\eta)\\partial_i \\partial^i +h(\\eta) \\right]\\phi(x)=0\n\\label{eq:genEOM}\n\\end{equation}\nwhere $f(\\eta)$, $g(\\eta)$ and $h(\\eta)$ are functions of time which can be derived from the given metric and Lagrangian\\footnote{We assume that the potential is not steep.}.\n\nIn this case, the coefficient functions from the metric depend only on time because we are working up to first order in the perturbations.\nFor the power spectra, we will consider the equation of motion of the perturbation part only.\nThe background metric is\nEq. (\\ref{eq:CNgauge-i}), which depends on both time and the spatial coordinates. However, the dependence on the spatial coordinates appears only through the perturbations.\nTherefore the spatial dependence of the metric appears only in the second- or higher-order perturbation parts, and we can use Eq. 
(\\ref{eq:genEOM}) in our discussion without loss of generality.\n\nConsider the Fourier transformation:\n\\begin{equation}\n\\phi(x)=\\int \\frac{d^3 k}{(2\\pi)^{\\frac{3}{2}}} \\left(A_{\\vec{k}}\\phi_{\\vec{k}}(x)+A_{\\vec{k}}^{\\dagger}\\phi_{\\vec{k}}^{*}(x)\\right)\n\\label{eq:fourier}\n\\end{equation}\nThe following normalization of $\\phi$ is also imposed.\n\\begin{equation}\n\\phi_k^* \\phi'_k -\\phi_k^{*} {}' \\phi_k =-i\n\\end{equation}\n\nIf we define $\\chi_k (\\eta)$ as\n\\begin{equation}\n\\phi_{\\vec{k}}(x)\\equiv\\chi_k (\\eta)\\exp{\\left(i\\vec{k}\\cdot\\vec{x}-\\frac{1}{2}\\int f(\\eta)d\\eta\\right)}~,\n\\end{equation}\n$\\chi_k (\\eta)$ obeys the following equation,\n\\begin{equation}\n\\chi''_k (\\eta)+\\Omega^2(\\eta)\\chi_k (\\eta)=0\n\\label{eq:chiEOM}\n\\end{equation}\nwhere\n\\begin{equation}\n\\Omega^2 (\\eta)\\equiv-g(\\eta)k^2 +h(\\eta)-\\frac{1}{4}f^2 (\\eta)-\\frac{1}{2}f'(\\eta)~.\n\\label{eq:Omega}\n\\end{equation}\nThis is the equation of motion of a harmonic oscillator if we regard $\\Omega(\\eta)$ as the frequency.\nIndeed, in the case of a massive scalar field in Minkowski spacetime,\n\\begin{equation}\nf(\\eta)=0,~g(\\eta)=-1,~h(\\eta)=m_{\\phi}^2\n\\end{equation}\nand Eq. (\\ref{eq:chiEOM}) becomes\n\\begin{equation}\n\\chi''_k (\\eta)+\\left(k^2 +m_{\\phi}^2 \\right) \\chi_k (\\eta)=0~.\n\\end{equation}\n$k^2 +m_{\\phi}^2 \\equiv\\omega_k^2$ is the square of the frequency and the solution is\n\\begin{equation}\n\\chi_k (\\eta) \\propto \\exp{\\left(-i\\omega_k \\eta \\right)}~.\n\\end{equation}\n\nIn a curved spacetime, the square of the effective frequency $\\omega^2_k (\\eta) \\equiv-g(\\eta)k^2 +h(\\eta)$ depends on time, and the coefficient function $f(\\eta)$, which comes from the Christoffel symbols in the covariant derivative, is present. 
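This reduction to oscillator form is easy to check symbolically. The following sketch (my own check, assuming only SymPy) substitutes $\phi_{\vec{k}}=\chi_k\exp(i\vec{k}\cdot\vec{x}-\frac{1}{2}\int f\,d\eta)$ into the equation of motion, with $g\,\partial_i\partial^i\rightarrow -g k^2$ already applied for a Fourier mode, and confirms the effective frequency of Eq. (\ref{eq:Omega}):

```python
import sympy as sp

eta, k = sp.symbols('eta k', positive=True)
f, g, h, chi = (sp.Function(name)(eta) for name in ('f', 'g', 'h', 'chi'))

# phi_k = chi(eta) * exp(-(1/2) * Integral(f, eta)); the spatial factor
# e^{i k.x} has already turned g * d_i d^i into -g * k**2.
damp = sp.exp(-sp.Rational(1, 2) * sp.Integral(f, eta))
eom = (sp.diff(chi * damp, eta, 2) + f * sp.diff(chi * damp, eta)
       + (-g * k**2 + h) * chi * damp)

# Omega^2 as quoted in the text: -g k^2 + h - f^2/4 - f'/2.
Omega2 = (-g * k**2 + h - sp.Rational(1, 4) * f**2
          - sp.Rational(1, 2) * sp.diff(f, eta))

# residual vanishing confirms chi'' + Omega^2 chi = 0.
residual = sp.simplify(sp.expand(eom - damp * (sp.diff(chi, eta, 2) + Omega2 * chi)))
```

The friction term $f\phi'$ is absorbed by the exponential damping factor, which is why only the combination $-\frac{1}{4}f^2-\frac{1}{2}f'$ survives in $\Omega^2$.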
Therefore we perform a WKB-like (adiabatic) expansion of the solution.\n\nWe introduce a fictitious parameter $T$ into the metric,\n\\begin{equation}\ng_{\\mu\\nu} (x) \\rightarrow g_{\\mu\\nu} (x\/T)~.\n\\end{equation}\nThe order of differentiation of the metric gives the adiabatic order of a term. We now require the adiabatic condition, i.e., at lowest adiabatic order, $\\chi_k$ should have the form\n\\begin{equation}\n\\chi_k \\sim \\frac{1}{\\sqrt{2 \\omega_k (\\eta)}} \\exp{\\left(-i\\int^{\\eta} \\omega_k (\\eta')d\\eta'\\right)}~.\n\\end{equation}\nUnder the adiabatic condition, we can uniquely obtain the adiabatic expansion of the solution $\\chi_{k}$, and the solution is schematically written as\n\\begin{equation}\n\\chi_k (\\eta) = \\frac{1}{\\sqrt{2W_{2n}(\\eta)}}\\exp{\\left(-i\\int^{\\eta}W_{2n}(\\eta')d\\eta'\\right)}+\\mathcal{O}(T^{-2n-2})\n\\label{eq:yuugen}\n\\end{equation}\nup to $2n$th adiabatic order. $W_{2n}$ is obtained below, order by order, from the effective frequency of the equation of motion.\n\nThe $\\phi_{\\vec{k}}$ form a complete orthogonal set of solutions, and $A_{\\vec{k}}$ and $A^{\\dagger}_{\\vec{k}}$ become annihilation and creation operators after requiring the canonical commutation relations\n\\begin{equation}\n[A_{\\vec{k}},A_{\\vec{k}'}]=[A_{\\vec{k}}^{\\dagger},A_{\\vec{k}'}^{\\dagger}]=0,\\hspace{1em}\n[A_{\\vec{k}},A_{\\vec{k}'}^{\\dagger}]=\\delta (\\vec{k}-\\vec{k}'),\n\\end{equation}\nand an adiabatic vacuum state $|0\\rangle$ is constructed:\n\\begin{equation}\nA_{\\vec{k}}|0\\rangle=0\\hspace{3em}(\\text{for all }\\vec{k})~.\n\\end{equation}\n\nWe now neglect the higher order terms in Eq. 
(\\ref{eq:yuugen}), i.e., consider the $2n$th adiabatic order solution\n\\begin{equation}\n\\chi_k^{(2n)} (\\eta) = \\frac{1}{\\sqrt{2W_{2n}(\\eta)}}\\exp{\\left(-i\\int^{\\eta}W_{2n}(\\eta')d\\eta'\\right)}~.\n\\end{equation}\n$\\chi_k^{(2n)} (\\eta)$ obeys the following equation.\n\\begin{equation}\n\\chi''_k {}^{(2n)} +\\left[W_{2n}^2\n-W_{2n}^{\\frac{1}{2}}\\left(W_{2n}^{-\\frac{1}{2}}\\right)''\\right]\\chi_k^{(2n)}=0\n\\label{eq:2nth}\n\\end{equation}\n\nRewrite Eq. (\\ref{eq:chiEOM}) as\n\\begin{equation}\n\\chi''_k +\\left[W^2 -W^{\\frac{1}{2}}\\left(W^{-\\frac{1}{2}}\\right)''\\right]\\chi_k\n=\\left[W^2 -W^{\\frac{1}{2}}\\left(W^{-\\frac{1}{2}}\\right)''-\\Omega^2\\right]\\chi_k~.\n\\end{equation}\nAt $2n$th adiabatic order, we can choose $W$ so that the left-hand side vanishes by using Eq. (\\ref{eq:2nth}):\n\\begin{equation}\n0=\\left[W^2 -W^{\\frac{1}{2}}\\left(W^{-\\frac{1}{2}}\\right)''-\\Omega^2\\right]\\chi_k~,\n\\label{eq:higher-rhs}\n\\end{equation}\ni.e., the right-hand side is of higher adiabatic order than $2n$. From Eq. (\\ref{eq:higher-rhs}) we get\n\\begin{equation}\nW^2 =\\Omega^2 +W^{\\frac{1}{2}}\\left(W^{-\\frac{1}{2}}\\right)''~,\n\\label{eq:Weq}\n\\end{equation}\nwhich holds for $W$ at arbitrary order.\n\nNote that $\\Omega^2$ of Eq. (\\ref{eq:Omega}) can be decomposed into terms of 0th and 2nd adiabatic order because $f(\\eta)$ is of 1st adiabatic order.\nTherefore we can write $\\Omega^2$ in terms of the 0th adiabatic order part $\\omega_k^2$ and the 2nd order part $\\sigma\\equiv-\\frac{1}{4}f^2 (\\eta)-\\frac{1}{2}f'(\\eta)$:\n\\begin{equation}\n\\Omega^2 =\\omega_k^2 +\\sigma~.\n\\end{equation}\n$W$ is decomposed as follows by using Eq. 
(\\ref{eq:Weq}).\n\\begin{eqnarray}\nW&=&W^{(0)}+W^{(1)}+W^{(2)}+\\cdots\\\\\nW^{(0)}&=&\\omega_k\\nonumber\\\\\nW^{(1)}&=&0\\nonumber\\\\\nW^{(2)}&=&\\frac{1}{2}\\omega_k^{-\\frac{1}{2}}\\left(\\omega_k^{-\\frac{1}{2}}\\right)'' +\\frac{1}{2}\\omega_k^{-1}\\sigma\\nonumber\n\\end{eqnarray}\n\nThe unregularized (``bare'') two point function of $\\phi$ becomes\n\\begin{eqnarray}\n\\langle0|\\phi(x)\\phi(\\tilde{x})|0\\rangle &=&\n\\frac{1}{(2\\pi)^3}\\int d^3\\! kd^3\\! k' \\phi_{\\vec{k}}(x)\\phi_{\\vec{k}'}^* (\\tilde{x})\\delta (\\vec{k}-\\vec{k}')\\nonumber\\\\\n&=&\\frac{1}{(2\\pi)^3}\\int d^3\\! k \\phi_{\\vec{k}}(x)\\phi_{\\vec{k}}^* (\\tilde{x})\\nonumber\\\\\n&=&\\frac{1}{(2\\pi)^3}\\frac{1}{\\exp{\\left(\\int\\! f(\\eta)d\\eta\\right)}}\\int d^3\\! k e^{i\\vec{k}\\cdot (\\vec{x}-\\vec{\\tilde{x}})} \\chi_k (x)\\chi_k^* (\\tilde{x})\n\\end{eqnarray}\nand we take the limit $\\tilde{x}\\rightarrow x$:\n\\begin{eqnarray}\n\\langle0|\\phi^2 (x)|0\\rangle&=&\\frac{1}{(2\\pi)^3}\\frac{1}{\\exp{\\left(\\int\\! f(\\eta)d\\eta\\right)}}\\int d^3\\! k |\\chi_k (x)|^2\\nonumber\\\\\n&=& \\frac{1}{2\\pi^2}\\frac{1}{\\exp{\\left(\\int\\! f(\\eta)d\\eta\\right)}}\\int dk\\ k^2 |\\chi_k (x)|^2\n\\end{eqnarray}\n\nTo regularize this, we need $W^{-1}$ up to second order.\n\\begin{equation}\nW^{-1}=\\frac{1}{W^{(0)}+W^{(1)}+W^{(2)}+\\cdots}=W^{-1(0)}+W^{-1(1)}+W^{-1(2)}+\\cdots\n\\end{equation}\n\\begin{equation}\nW^{-1(0)}=\\omega_k^{-1},\\hspace{1em}W^{-1(1)}=0,\\hspace{1em}W^{-1(2)}=-\\omega_k^{-2}W^{(2)}\n\\label{eq:W}\n\\end{equation}\n\nNow the divergent part is uniquely isolated by the adiabatic expansion.\nBecause only $W^{-1(0)}\\propto k^{-1}$ and $W^{-1(2)}\\propto k^{-n}\\ (n\\geq3)$ contain the divergent parts, we subtract these two terms and define the physical two point function (minimal subtraction scheme)\n\\begin{equation}\n\\langle0|\\phi^2 (x)|0\\rangle_{\\text{phys}}\n\\equiv\\frac{1}{2\\pi^2}\\frac{1}{\\exp{\\left(\\int\\! 
f(\\eta)d\\eta\\right)}}\\int dk\\ k^2 \\left[|\\chi_k (\\eta)|^2-\\frac{1}{2}\\left\\{W^{-1(0)}+W^{-1(2)}\\right\\}\\right]~.\n\\end{equation}\nFrom Eq. (\\ref{eq:W}), it is found that the 0th order subtraction term $W^{-1(0)}$ corresponds to the zero-point energy, and that its time dependence is due to the expansion of the universe and\/or the non-canonical kinetic term.\nThe second order term does not exist in Minkowski spacetime.\n\nWe can use this method to regularize the power spectrum.\nThe calculation for the usual slow-roll inflation model has been done, and it has been shown that the subtraction terms for the scalar perturbation and the tensor perturbation become sufficiently negligible after the horizon crossing \\cite{Urakawa:2009-1} (see also \\cite{Agullo:2010-1}).\n\nWhile the adiabatic subtraction terms for the slow-roll inflation model have been studied very well, those for other inflation models have not.\nHence we generalize them to the k-inflation model, which has a more generic Lagrangian than the slow-roll model.\n\n\\section{ Minimally coupled k-inflation} \n\\subsection{k-inflation}\nk-inflation (kinetically driven inflation) is an inflation model advocated in \\cite{ArmendarizPicon:1999-1,Garriga:1999-1}.\nIn this model, the inflaton has non-canonical kinetic terms, which make it possible for the universe to expand exponentially.\nWe briefly review the k-inflation model in this section.\n\nThe Lagrangian of the inflaton in the k-inflation model is given by\n\\begin{equation}\n\\mathcal{L}_{\\varphi}=P(\\varphi,X)=-V(\\varphi)+K(\\varphi)X+L(\\varphi)X^2+\\cdots\n\\label{eq:k-lag}\n\\end{equation}\nwhere $\\varphi$ is the inflaton and $X\\equiv\\frac{1}{2}\\partial_{\\mu}\\varphi\\partial^{\\mu}\\varphi$.\nHowever, the explicit form of the Lagrangian will not be specified except in Appendix \\ref{chap:sub-dev}, and the general form $P(\\varphi,X)$ will be used.\n\nThe action in the minimally coupled case is\n\\begin{equation}\nS=\\frac{1}{2}\\int d^4 x\\sqrt{-g}\\left[-R+2P(\\varphi,X) 
\\right]~.\n\\end{equation}\n\nThe energy-momentum tensor under the perfect fluid approximation is\n\\begin{equation}\nT_{\\mu\\nu}=(E+P)u_{\\mu}u_{\\nu}-Pg_{\\mu\\nu}~.\n\\tag{\\ref{eq:EMtensor}}\n\\end{equation}\nUsing this expression and the definition of the energy-momentum tensor\n\\begin{equation}\nT_{\\mu\\nu}\\equiv\\frac{2}{\\sqrt{-g}}\\frac{\\delta S}{\\delta g^{\\mu\\nu}}\n=P,_X \\partial_{\\mu}\\varphi \\partial_{\\nu}\\varphi-Pg_{\\mu\\nu}~,\n\\end{equation}\nthe energy $E$ is obtained as $E=2XP,_{X}-P$, where the four-velocity is $u_{\\mu}\\equiv\\frac{\\partial_{\\mu}\\varphi}{\\sqrt{2X}}$.\n\nTo derive the adiabatic subtraction terms in the following sections, we need the expression for the sound speed.\nThe sound speed of k-inflation is given by\n\\begin{equation}\nc_s^2 \\equiv\\frac{P,_X}{E,_X}=\\frac{P,_X}{2XP,_{XX}+P,_X}~.\n\\end{equation}\nBy this definition, $c_s^2$ can be negative and $c_s$ can exceed $1$.\nHowever, $c_s^2 \\geq0$ is required in general for stability \\cite{ArmendarizPicon:1999-1}, and $c_s$ does not exceed the speed of light as long as the Lagrangian does not contain negative powers of $X$.\n\nAlso, $z$ is expressed in terms of $P$ and $X$ as\n\\begin{equation}\nz\\equiv\\frac{a^2 \\sqrt{E+P}}{c_s \\mathcal{H}}=\\frac{a^2 \\sqrt{2XP,_X}}{c_s \\mathcal{H}}~.\n\\end{equation}\n\n\\subsection{The adiabatic subtraction term for the scalar perturbation}\nThe Mukhanov--Sasaki equation Eq. 
(\\ref{MSeq}) is a second-order differential equation, and from it we can derive the adiabatic subtraction term for the comoving curvature (scalar) perturbation $\\mathcal{R}$.\n\\begin{equation}\nv''_k +\\left(c_s^2 k^2 - \\frac{z''}{z}\\right)v_k=0\n\\tag{\\ref{MSeq}}\n\\end{equation}\nwhere\n\\begin{equation}\nv_k\\equiv z\\mathcal{R}_k,\\hspace{1em}\nz\\equiv\\frac{a^2 \\sqrt{2XP,_X}}{c_s \\mathcal{H}},\\hspace{1em}\nc_s^2\\equiv\\frac{P,_X}{2XP,_{XX}+P,_X}~.\n\\end{equation}\n\nIn this case, using the notation of Section \\ref{sec:adiabatic},\n\\begin{equation}\n\\omega_k^2=c_s^2 k^2,\\hspace{1em}\\sigma=-\\frac{z''}{z}\n\\end{equation}\nand the physical amplitude is schematically given by\n\\begin{equation}\n\\langle|\\mathcal{R} (x)|^2\\rangle_{\\text{phys}} \\equiv \\int^{\\infty}_0 \\frac{dk}{2\\pi^2} k^2 \\left[ |\\mathcal{R}_k (\\eta)|^2_{\\text{bare}} -|\\mathcal{R}_k (\\eta)|^2_{\\text{sub}} \\right]\n\\end{equation}\nwhere\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|^2_{\\text{sub}} \\equiv |\\mathcal{R}_k (\\eta)|^{2(0)} +|\\mathcal{R}_k (\\eta)|^{2(2)}\n\\label{eq:subterm}\n\\end{equation}\nis the adiabatic solution of $\\mathcal{R}$ up to second order.\n\nThen the adiabatic subtraction term for the k-inflation model can be calculated:\n\\begin{eqnarray}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2 &=&\\frac{1}{2z^2}\\left\\{\\omega_k^{-1} -\\omega_k^{-2} \\left(\\frac{1}{2}\\omega_k^{-\\frac{1}{2}}\\frac{d^2}{d\\eta^2}\\omega_k^{-\\frac{1}{2}}+\\frac{1}{2}\\omega_k^{-1}\\sigma\\right)\\right\\}\\nonumber\\\\\n&=&\\frac{1}{2z^2 c_s k}\\left\\{1+\\frac{1}{2c_s^2 k^2}\\frac{z''}{z}+\\frac{1}{c_s^2 k^2}\\left(\\frac{1}{4}\\frac{c''_s}{c_s}-\\frac{3}{8}\\frac{c'_s {}^2}{c_s^2}\\right)\\right\\}~.\n\\label{eq:mini-sub}\n\\end{eqnarray}\n\nThe third term in the parenthesis in the second line comes from the derivative of $\\omega_k$. 
In models with canonical kinetic terms (for example, the slow-roll model), the sound speed $c_s$ is equal to one, and these derivatives are equal to zero. This result accords with the result in \\cite{Urakawa:2009-1}, which considered the slow-roll inflation model.\n\n\\subsection{The adiabatic subtraction term for the tensor perturbation}\nThe tensor perturbation, or the gravitational wave $h_{ij}$, is one of the modes of the metric perturbations Eq. (\\ref{eq:metricP3}).\nThe tensor perturbation is gauge invariant itself, and has only two degrees of freedom because of the properties of the metric.\n\nWhile the scalar perturbation propagates at the sound speed, the tensor perturbation propagates at the speed of light even in the k-inflation case.\nEach degree of freedom of the tensor perturbation obeys the following Mukhanov--Sasaki-like equation with $c_s =1$,\n\\begin{equation}\nv''_{Tk} +\\left(k^2 -\\frac{a''}{a}\\right)v_{Tk} =0\n\\end{equation}\nwhere $v_{Tk}=\\frac{a}{2} h_k$. $h_k$ denotes one of the two polarizations of the gravitational wave, and we must add both results at the end of the calculation.\n\nThe subtraction terms can be derived similarly.\nIn this case,\n\\begin{equation}\n\\omega_k^2 =k^2,\\ \\sigma =-\\frac{a''}{a}~.\n\\end{equation}\nThe calculation of the subtraction term has been done by \\cite{Urakawa:2009-1} in the slow-roll model case, and the same analysis can be used in the k-inflation model case.\nThe expression of the subtraction term for the tensor perturbation is obtained as\n\\begin{equation}\n|h_{k}(\\eta)|_{\\text{sub}}^2=\n\\frac{2}{a^2 k}\\left(1+\\frac{1}{2k^2}\\frac{a''}{a}\\right),\n\\end{equation}\nand it is efficiently suppressed at late times.\n\n\\section{ Non-minimally coupled k-inflation}\nInflation models which have coupling terms between the inflaton and the scalar curvature have also been considered.\nThe Higgs inflation model (see e.g. 
\\cite{Bezrukov:2008-1}) is one of these models, and of course there is no reason in general to prohibit the inflaton from coupling to the scalar curvature.\nLet us then generalize the Lagrangian further and obtain the adiabatic subtraction term for non-minimally coupled inflation models.\n\nThe action of the non-minimally coupled k-inflation model is given by\n\\begin{equation}\nS=\\frac{1}{2}\\int d^4 x\\sqrt{-g}\\left[-f(\\varphi)R+2P(\\varphi,X) \\right]~.\n\\label{non-minimal-k}\n\\end{equation}\n$f(\\varphi)=1$ corresponds to the minimally coupled case and $f(\\varphi)=1+\\frac{1}{6}\\varphi^2$ corresponds to the conformally coupled case.\n\nIn the non-minimal case, the Einstein equation differs from that of the minimal coupling case, and the scalar perturbation does not obey the Mukhanov--Sasaki equation.\nHowever, according to \\cite{Qiu:2011-1}, the comoving curvature perturbation obeys a Mukhanov--Sasaki-like equation, which is shown by using the ADM formalism \\cite{Arnowitt:1959-1}:\n\\begin{equation}\nv''_k +\\left(c_{s,\\text{eff}}^2 k^2 - \\frac{z''_{\\text{eff}}}{z_{\\text{eff}}}\\right) v_k =0,\\ v_k \\equiv z_{\\text{eff}}\\mathcal{R}_k~.\n\\label{eq:Qiu's}\n\\end{equation}\n$z_{\\text{eff}}$ and $c_{s,\\text{eff}}$ have been estimated directly by the ADM formalism in \\cite{Qiu:2011-1}:\n\\begin{eqnarray}\nz_{\\text{eff}}^2 &=& 6e^{2\\theta}\\left(\\frac{\\mathcal{H}}{\\theta'}-1\\right)^2+2\\frac{a^4 \\Sigma}{\\theta'^2}\\label{eq:zeff}\\\\\nc_{s,\\text{eff}}^2 &=& \\frac{\\theta'^2-\\theta''}{3(\\mathcal{H}-\\theta')^2+\\frac{a^4 \\Sigma}{e^{2\\theta}}}\n\\label{eq:ceff}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n\\theta\\equiv\\frac{1}{2}\\ln{(fa^2)},\\hspace{1em}\n\\Sigma\\equiv XP,_X +2X^2 P,_{XX}~.\n\\label{eq:thetasigma}\n\\end{equation}\nThat is, $\\theta'$ and $\\dot{\\theta}$ correspond to $\\mathcal{H}$ and $H$, respectively, in the minimal coupling case.\n\nUsing Eq. 
(\\ref{eq:Qiu's}), we can get the subtraction term for the non-minimally coupled case,\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2=\n\\frac{1}{2z_{\\text{eff}}^2 c_{s,\\text{eff}} k}\\left\\{1+\\frac{1}{2c_{s,\\text{eff}}^2 k^2}\\frac{z''_{\\text{eff}}}{z_{\\text{eff}}}+\\frac{1}{c_{s,\\text{eff}}^2 k^2}\\left(\\frac{1}{4}\\frac{c''_{s,\\text{eff}}}{c_{s,\\text{eff}}}-\\frac{3}{8}\\frac{c'_{s,\\text{eff}} {}^2}{c_{s,\\text{eff}}^2}\\right)\\right\\}~.\n\\label{sub:jordan}\n\\end{equation}\n\nThis subtraction term depends on time and on $f(\\varphi)$ in a complex manner.\nMoreover, the ADM formalism requires a very long calculation to derive the equations of motion.\nOnce we have the expression Eq. (\\ref{sub:jordan}), there is no need to search for other methods.\nHowever, to check its physicality and to simplify the process, we will consider a different method to obtain the subtraction terms for the non-minimal coupling case in Part 3 (based upon using the equations of motion).\n\\section{ Time development of the subtraction terms\\label{sec:timedev}}\n\\subsection{Freezing out and the subtraction terms}\n\nThe bare power spectrum has the property that its modes ``freeze out'': they do not evolve after the horizon crossing $aH=k$ in the canonical case, or after the sound horizon crossing $aH=c_s k$ in the non-canonical case. The reason is that the comoving curvature perturbation $\\mathcal{R}$ becomes constant in the large scale limit $-k\\eta \\ll 1$ or $-c_s k\\eta \\ll 1$.\n\nThe frozen modes of the perturbation become observable after the horizon re-entry, which occurs in the radiation dominant era or the matter dominant era, see Fig. 
(\\ref{fig:crossing}).\n\\begin{figure}[h] \\begin{center}\n \\includegraphics[width=7cm]{Figures\/horizon-crossings-2.pdf}\n \\caption{The time development of the comoving Hubble radius $1\/aH$ (log-log graph).}\n \\label{fig:crossing}\n\\end{center} \\end{figure}\nThe arrows indicate a specific comoving scale $1\/k$ of the power spectrum. \nThe comoving horizon (or the comoving Hubble radius) shrinks during inflation, and the scale exits the horizon at (a).\nAfter reheating around (b), the radiation dominant era begins and the comoving Hubble radius grows.\nThe mode re-enters the horizon at (c).\nIt is considered that the primordial perturbations are transformed into the observables by some processes after (a).\nHowever, it is unclear when this occurred.\n\nThe longer the perturbations stay frozen, the more primordial information we can obtain.\nTherefore we are interested in the large scale limit.\n\nWhen the adiabatic regularization scheme was first adopted to evaluate the power spectrum, it was considered that the subtraction terms also freeze out after crossing \\cite{Parker:2007-1}.\nThe subtraction terms for slow-roll inflation models at horizon crossing are not small and cannot be neglected; therefore the author claimed, see \\cite{Parker:2007-1}, that the amplitude of the perturbations was changed from the bare one.\n\nHowever, the time dependence of the subtraction term is not obvious from Eq. (\\ref{eq:mini-sub}) even if we take the large scale limit.\nMoreover, it should be estimated up to the stage of reheating, or up to times just before the primordial perturbations are transformed into classical quantities.\n\nThe method of subtraction has been argued by various authors (see the review \\cite{BasteroGil:2013-1}). Based on the above, we adopt the scheme of Urakawa and Starobinsky \\cite{Urakawa:2009-1}.\nTheir scheme is similar to Parker's, but not quite the same. 
In this scheme:\n\\begin{itemize}\n\\item We subtract up to the second adiabatic order term for the power spectra.\n\\item We consider the time development of the subtraction terms even after horizon crossing.\n\\item To consider the time development, we do not require any cutoff of the range of the $k$ integrals.\n\\end{itemize}\nThe first condition is the minimal subtraction scheme as usual.\nThe second condition is different from the original idea of Parker \\cite{Parker:2007-1}, while the third condition is required to maintain the equation of motion of the inflaton in coordinate space \\cite{BasteroGil:2013-1}.\n\nImportantly, we are of the view that this scheme is simple and does not need any fictitious assumptions or processes; hence we use this method to analyze the observable power spectrum.\n\nIn the following sections, we see how large a subtraction term remains after the inflation era.\nAlthough it may not be so important for observational physics, we also investigate the time development of the subtraction term during inflation in two specific cases, see Appendix \\ref{chap:sub-dev}.\n\n\\subsection{The subtraction term in terms of the slow-roll parameters}\nIt is useful to rewrite the subtraction term in terms of the slow-roll parameters, with which we are familiar, to see the time development.\nThere are several ways to define the slow-roll parameters.\nAmong them, we use the Hubble flow functions and the sound flow functions.\nThe Hubble flow functions $\\epsilon_i$ are defined by\n\\begin{equation}\n\\epsilon_{n+1}\\equiv\\frac{d\\ln{\\epsilon_n}}{dN},\\hspace{1em}\\epsilon_0 = \\frac{H_{\\text{ini}}}{H}\n\\end{equation}\nand the sound flow functions $\\delta_i$ are defined by\n\\begin{equation}\n\\delta_{n+1}\\equiv\\frac{d\\ln{\\delta_n}}{dN},\\hspace{1em}\\delta_0 = \\frac{c_{s,\\text{ini}}}{c_s}\n\\label{eq:soundflow-def}\n\\end{equation}\nwhere $N=\\ln{a\/a_{\\text{ini}}}$ is the $e$-folding number.\nThe subscript ${}_{\\text{ini}}$ denotes the initial value, 
which is irrelevant, but we should fix it at the horizon exit to compare the subtraction term to the frozen bare spectrum.\n\nThe definitions can also be written in the alternative forms\n\\begin{equation}\n\\epsilon_{n+1}=\\frac{1}{H}\\frac{\\dot{\\epsilon_n}}{\\epsilon_n}=\\frac{1}{\\mathcal{H}}\\frac{\\epsilon'_n}{\\epsilon_n},\\hspace{1em}\n\\delta_{n+1}=\\frac{1}{H}\\frac{\\dot{\\delta_n}}{\\delta_n}=\\frac{1}{\\mathcal{H}}\\frac{\\delta'_n}{\\delta_n}~.\n\\end{equation}\n\nLet us then rewrite $z$ in terms of $\\epsilon_i$. First,\n\\begin{equation}\n\\epsilon_1 =\\frac{1}{H}\\frac{\\dot{\\epsilon_0}}{\\epsilon_0}\n=-\\frac{\\dot{H}}{H^2}~.\n\\end{equation}\nThis is equal to the slow-roll parameter $\\varepsilon\\equiv-\\frac{\\dot{H}}{H^2}$ defined as usual.\n\nFrom the Friedmann equations Eq. (\\ref{eq:Friedmann1}) and Eq. (\\ref{eq:Friedmann2})\\footnote{In first order perturbation theory, it is enough to consider only the zeroth order part of $z$ because the comoving curvature perturbation $\\mathcal{R}$ itself is first order in the perturbations.},\n\\begin{equation}\n\\dot{E}=-3H(E+P)=6H\\dot{H}\n\\end{equation}\n\\begin{equation}\n\\therefore E+P=-2\\dot{H}~.\n\\end{equation}\nSubstituting this into Eq. (\\ref{eq:zdef}),\n\\begin{equation}\nz\\equiv\\frac{a^2 \\sqrt{E+P}}{c_s \\mathcal{H}}=\\frac{a \\sqrt{E+P}}{c_s H}\n=\\frac{a}{c_s}\\sqrt{\\frac{-2\\dot{H}}{H^2}}=\\frac{\\sqrt{2\\epsilon_1}a}{c_s}~.\n\\label{eq:z-epsilon}\n\\end{equation}\n\nUsing this expression, we can rewrite $z'' \/z$ \\cite{Lorenz:2008-1}:\n\\begin{equation}\n\\frac{z''}{z}=a^2 H^2 \\left\\{2-\\epsilon_1 +\\frac{3}{2}\\epsilon_2 +\\frac{1}{4}\\epsilon_2^2 -\\frac{1}{2}\\epsilon_1\\epsilon_2 +\\frac{1}{2}\\epsilon_2 \\epsilon_3 +(3-\\epsilon_1 +\\epsilon_2)\\delta_1 +\\delta^2_1 +\\delta_1 \\delta_2 \\right\\}~.\n\\label{eq:explicit-effp}\n\\end{equation}\n\nWe can also rewrite the third term of Eq. 
(\\ref{eq:mini-sub}):\n\\begin{equation}\n\\frac{1}{4}\\frac{c''_s}{c_s}-\\frac{3}{8}\\frac{c'_s {}^2}{c_s^2}\n=-\\frac{1}{8}a^2 H^2\\delta_1 (2-2\\epsilon_1+\\delta_1+2\\delta_2)~.\n\\label{eq:explicit-third}\n\\end{equation}\nThe detailed calculation is given in Appendix \\ref{app:detail} for completeness.\n\nFinally, we obtain the expression of the adiabatic subtraction term\n\\begin{equation}\n|\\mathcal{R}_k (\\eta)|_{\\text{sub}}^2\n=\\frac{1}{2z^2 c_s k}\\left\\{1+\\left(\\frac{aH}{c_s k}\\right)^2 \\left(1+\\delta\\epsilon+\\delta c_s \\right)\\right\\}~.\n\\label{eq:sub-in-sr}\n\\end{equation}\nHere we defined two variables,\n\\begin{equation}\n\\delta\\epsilon \\equiv \\frac{1}{2}\\left(-\\epsilon_1 +\\frac{3}{2}\\epsilon_2 +\\frac{1}{4}\\epsilon_2^2 -\\frac{1}{2}\\epsilon_1\\epsilon_2 +\\frac{1}{2}\\epsilon_2 \\epsilon_3\\right)\n\\label{eq:del-e}\n\\end{equation}\n\\begin{equation}\n\\delta c_s \\equiv\n\\frac{1}{8}\\delta_1 (10-2\\epsilon_1 +4\\epsilon_2 +3\\delta_1 +2\\delta_2)~,\n\\label{eq:del-c}\n\\end{equation}\nand have isolated the explicit effect of the non-canonical kinetic terms.\n\n During inflation, we usually expect the slow-roll parameters, which can be constructed from the Hubble flow functions and the sound flow functions, to be small, so that $\\delta\\epsilon$ and $\\delta c_s$ are smaller than $\\mathcal{O}(1)$.\nEven after inflation, $\\delta\\epsilon$ is not so large because $\\epsilon_1 =2$ and $\\epsilon_2 =0$ in the radiation dominant era $a(t)\\propto\\sqrt{t}$, and $\\epsilon_1 =3\/2$ and $\\epsilon_2 =0$ in the matter dominant era $a(t)\\propto t^{2\/3}$.\nHow large $\\delta c_s$ is depends on the model.\nHowever, the ``sound speed'' $c_s$ does not exceed $1$ in usual k-inflation models, and we expect that $\\delta c_s$ does not become too large.\n\nTherefore, how the subtraction term develops depends on the behavior of the factors $\\frac{1}{2z^2 c_s k}$ and $\\frac{1}{2z^2 c_s k}\\left(\\frac{aH}{c_s k}\\right)^2$.\nLet us see how each term 
behaves at late time.\n\nThe zeroth order subtraction term $\\frac{1}{2z^2 c_s k}$ becomes exponentially small in general because\n\\begin{equation}\n\\frac{1}{2z^2 c_s k}=\\frac{c_s}{4\\epsilon_1 a^2 k}\n\\end{equation}\nand $a$ expands exponentially during inflation;\n$\\epsilon_1$ also becomes large,\nwhile $c_s$ cannot become too large.\nThis is a natural consequence because the zeroth order term corresponds to the zero-point energy in Minkowski spacetime.\n\nConcerning the second order subtraction term, the behavior mainly depends on $c_s$, because it is known that $\\frac{1}{2z^2 k}\\left(\\frac{aH}{k}\\right)^2=\\frac{H^2}{4\\epsilon_1 k^3}$ at late times is sufficiently suppressed compared to its value at the horizon crossing in the canonical case \\cite{Urakawa:2009-1}.\nThis is because $\\epsilon_1$ in the post-inflation era is bigger than during inflation, and $H\\propto t^{-1}$ in the radiation and matter dominant eras.\n\nIn the non-canonical case,\n\\begin{equation}\n\\frac{1}{2z^2 c_s k}\\left(\\frac{aH}{c_s k}\\right)^2\n=\\frac{H^2}{4\\epsilon_1 c_s k^3}~.\n\\label{eq:second-factor}\n\\end{equation}\nTherefore, if there is a model in which $c_s$ becomes 
sufficiently small, the subtraction terms of the model can be large enough to affect the power spectrum.\n\n\\part{Conformal transformations and physicality}\n\\setcounter{section}{0}\n\nIn Part 2, we discussed how the adiabatic subtraction term for the non-minimal k-inflation model can be derived by using the ADM formalism. \nHowever, minimal coupling models are studied more often than non-minimal inflation models, partly because the calculations are easier. \nTherefore, it is convenient if we can analyze non-minimal models by applying knowledge from minimal inflation models in some form.\n\nUsing a conformal transformation, we can rewrite the Lagrangian of the non-minimal coupling model as that of a minimal coupling model.\nThe metric and the pressure are altered by such a conformal transformation, hence the physical properties of the conformally-transformed universe are \ndifferent from the true minimal coupling case, even though the action has the same form as a minimal coupling model.\n\nThe frame with the non-minimal coupling term is called the Jordan frame, while the conformally-transformed frame is called the Einstein frame. It is interesting to note that the physical interpretation is completely altered, yet the observations do not change under the transformation (see e.g. 
\\cite{Domenech:2015-1}).\n\nImportantly, in this chapter, we will show that we can derive the adiabatic subtraction terms (extracted from the Mukhanov--Sasaki equation) for non-minimal k-inflation with the conformal transformation, without using the ADM formalism.\n\n\\section{ Einstein equation for non-minimal coupling model in Jordan frame}\nBefore discussing the conformal transformation, let us review the calculations in the Jordan frame.\nWe also discuss how they differ from the minimal case.\n\nThe action in the Jordan frame is\n\\begin{equation}\nS=\\frac{1}{2}\\int d^4 x \\sqrt{-g} [-f(\\varphi)R+2P(\\varphi,X)]\n\\tag{\\ref{non-minimal-k}}\n\\end{equation}\nand the Einstein equation becomes\n\\begin{equation}\ng_{\\mu\\nu}\\nabla_{\\lambda}\\nabla^{\\lambda}f-\\nabla_{\\mu}\\nabla_{\\nu}f\n+f\\left(R_{\\mu\\nu}-\\frac{1}{2}Rg_{\\mu\\nu}\\right)\n-P,_{X}\\nabla_{\\mu}\\varphi\\nabla_{\\nu}\\varphi+Pg_{\\mu\\nu}=0~.\n\\label{eq:einnon}\n\\end{equation}\nTo simplify, we define three tensors and rewrite the Einstein equation as follows:\n\\begin{eqnarray}\nG_{\\mu\\nu}&\\equiv&R_{\\mu\\nu}-\\frac{1}{2}Rg_{\\mu\\nu}\\label{eq:g}\\\\\nT_{\\mu\\nu}&\\equiv&P,_{X}\\nabla_{\\mu}\\varphi\\nabla_{\\nu}\\varphi-Pg_{\\mu\\nu}\\label{eq:p}\\\\\nF_{\\mu\\nu}&\\equiv&g_{\\mu\\nu}\\nabla_{\\lambda}\\nabla^{\\lambda}f-\\nabla_{\\mu}\\nabla_{\\nu}f\\label{eq:f}\n\\end{eqnarray}\n\\begin{equation}\nfG^{\\mu}_{\\ \\nu}=T^{\\mu}_{\\ \\nu}-F^{\\mu}_{\\ \\nu}\n\\label{eq:EinsteinT}\n\\end{equation}\n\nPicking up only the first order part of the perturbations, we get\n\\begin{equation}\n\\delta (fG^{\\mu}_{\\ \\nu})=G^{\\ \\mu}_{c\\ \\nu}\\delta f+f_c\\delta G^{\\mu}_{\\ \\nu}\n=\\delta T^{\\mu}_{\\ \\nu}-\\delta F^{\\mu}_{\\ \\nu}~.\n\\end{equation}\nHereafter, the subscript $c$ denotes the classical part and $\\delta$ denotes the first order part of the perturbations.\n\nGauge fixing is needed to calculate the equation for the scalar perturbation.\nWe use the 
conformal-Newtonian gauge Eq. (\\ref{eq:CNgauge}) in this section because the correspondence to gauge-invariant variables is clearer than in other gauges.\n\nFrom the transformation law of the tensor, the gauge-invariant $\\overline{\\delta G}^{\\mu}_{~\\nu}$, $\\overline{\\delta T}^{\\mu}_{~\\nu}$, and $\\overline{\\delta F}^{\\mu}_{~\\nu}$ are written as\n\\begin{eqnarray}\n\\overline{\\delta G}^0_{\\ 0} &\\equiv& \\delta G^0_{\\ 0} -(G^{\\ 0}_{c\\ 0})' (B-\\tilde{E}')\\nonumber\\\\\n\\overline{\\delta G}^i_{\\ j} &\\equiv& \\delta G^i_{\\ j} -(G^{\\ i}_{c\\ j})' (B-\\tilde{E}') \\label{eq:gtensor}\\\\\n\\overline{\\delta G}^0_{\\ i} &\\equiv& \\delta G^0_{\\ i} -\\left(G^{\\ 0}_{c\\ 0}-\\frac{1}{3}G^{\\ k}_{c\\ k}\\right) (B-\\tilde{E}')\\nonumber\n\\end{eqnarray}\nand so on.\nThe gauge-invariant $\\overline{\\delta f}$ is also obtained from the transformation law of the scalar,\n\\begin{equation}\n\\overline{\\delta f}=\\delta f-f'_c (B-\\tilde{E}')~.\n\\end{equation}\n\nIn the conformal-Newtonian gauge, $B=\\tilde{E}=0$, and we get the gauge-invariant Einstein equation by replacing the tensors with their gauge-invariant counterparts:\n\\begin{equation}\nG^{\\ \\mu}_{c\\ \\nu}\\overline{\\delta f}+f_c\\overline{\\delta G}^{\\mu}_{\\ \\nu}\n=\\overline{\\delta T}^{\\mu}_{\\ \\nu}-\\overline{\\delta F}^{\\mu}_{\\ \\nu}~.\n\\end{equation}\n\nWe then calculate the tensors and the equation in accordance with the definitions Eqs. 
(\\ref{eq:g})--(\\ref{eq:EinsteinT}).\nAfter tedious calculations, we obtain $\\overline{\\delta G}^{\\mu}_{\\ \\nu}$ \\cite{Mukhanov200511}\n\\begin{eqnarray}\n\\overline{\\delta G}^0_{\\ 0} &=& \\frac{2}{a^2} \\{\\Delta\\Psi-3\\mathcal{H}(\\Psi'+\\mathcal{H}\\Phi)\\}\\\\\n\\overline{\\delta G}^0_{\\ i} &=& \\frac{2}{a^2}(\\Psi'+\\mathcal{H}\\Phi),_{i}\\\\\n\\overline{\\delta G}^i_{\\ j} &=& -\\frac{2}{a^2}\\left[\\left\\{\\Psi''+\\mathcal{H}(2\\Psi+\\Phi)'+(2\\mathcal{H}'+\\mathcal{H}^2)\\Phi+\\frac{\\Delta (\\Phi-\\Psi)}{2}\\right\\}\\delta_{ij}\\right.\\nonumber\\\\\n&&\\left.-\\frac{(\\Phi-\\Psi),_{ij}}{2}\\right]\n\\label{eq:Gpertss}\n\\end{eqnarray}\nwhere $\\Delta\\equiv\\delta^{ij}\\partial_{i}\\partial_{j}$ is the spatial Laplacian, and $\\overline{\\delta T}^{\\mu}_{\\ \\nu}$ is given by\n\\begin{eqnarray}\n\\overline{\\delta T}^0_{\\ 0} &=& 2X\\delta P,_X -\\delta P\\\\\n\\overline{\\delta T}^0_{\\ i} &=& \\frac{1}{a^2}P_{c,X}\\varphi'\\delta\\varphi,_i\\\\\n\\overline{\\delta T}^i_{\\ j} &=& -\\delta P \\delta^i_{\\ j}\n\\end{eqnarray}\nThe Einstein equations are then as follows:\n\\begin{eqnarray}\n\\frac{3}{2}\\mathcal{H}^2 \\overline{\\delta f} +\nf_c\\{\\Delta\\Psi-3\\mathcal{H}(\\Psi'+\\mathcal{H}\\Phi)\\}\n&=&\\frac{1}{2}a^2 (\\overline{\\delta T}^{0}_{\\ 0}-\\overline{\\delta F}^{0}_{\\ 0})\\\\\nf_c (\\Psi'+\\mathcal{H}\\Phi),_i &=&\\frac{1}{2}a^2 (\\overline{\\delta T}^{0}_{\\ i}-\\overline{\\delta F}^{0}_{\\ i})\n\\end{eqnarray}\n\\begin{eqnarray}\n\\left(\\mathcal{H}'+\\frac{1}{2}\\mathcal{H}^2\\right)\\overline{\\delta f}\\delta_{ij}\n&+&f_c\\left\\{\\Psi''+\\mathcal{H}(2\\Psi+\\Phi)'+(2\\mathcal{H}'+\\mathcal{H}^2)\\Phi+\\frac{1}{2}\\Delta (\\Phi-\\Psi)\\right\\}\\delta_{ij}\\nonumber \\\\\n&-&\\frac{1}{2}f_c(\\Phi-\\Psi),_{ij}\n=-\\frac{1}{2}a^2 (\\overline{\\delta T}^{i}_{\\ j}-\\overline{\\delta F}^{i}_{\\ j})\n\\label{eq:third}\n\\end{eqnarray}\n\nLet us set $i\\neq j$ in Eq. 
(\\ref{eq:third}).\nBecause the off-diagonal components of $\\overline{\\delta T}^i_{~j}$ are zero by definition, we obtain\n\\begin{equation}\nf_c(\\Phi-\\Psi),_{ij}=-a^2 \\overline{\\delta F}^{i}_{\\ j}~.\n\\label{eq:third'}\n\\end{equation}\nThe spatial off-diagonal components of the Einstein equations are zero in the minimal coupling case.\nThe existence of this anisotropic inertia is a major difference between the minimal and non-minimal coupling models.\n\nThe explicit form of $F^{\\mu}_{\\ \\nu}$ depends on the model, but we calculate it in general form.\n\nFrom Eq. (\\ref{eq:f}),\n\\begin{eqnarray}\nF^{\\mu}_{~\\nu} &=&\\delta^{\\mu}_{~\\nu}g_c^{\\lambda\\rho}\\left(\\partial_{\\rho}\\partial_{\\lambda}-^{(0)}\\!\\Gamma^{\\alpha}_{\\rho\\lambda}\\partial_{\\alpha}\\right)f_c-g_c^{\\mu\\rho}\\left(\\partial_{\\rho}\\partial_{\\nu}-^{(0)}\\!\\Gamma^{\\alpha}_{\\rho\\nu}\\partial_{\\alpha}\\right)f_c\\nonumber\\\\\n &+&\\delta^{\\mu}_{~\\nu}\\left(g_c^{\\lambda\\rho}\\partial_{\\rho}\\partial_{\\lambda}\\delta f\n +\\delta g^{\\lambda\\rho}\\partial_{\\rho}\\partial_{\\lambda}f_c\n -g_c^{\\lambda\\rho} {}^{(0)}\\Gamma^{\\alpha}_{\\rho\\lambda}\\partial_{\\alpha}\\delta f\n -g_c^{\\lambda\\rho}\\delta\\Gamma^{\\alpha}_{\\rho\\lambda}\\partial_{\\alpha} f_c\n -\\delta g^{\\lambda\\rho} {}^{(0)}\\Gamma^{\\alpha}_{\\rho\\lambda}\\partial_{\\alpha} f_c\n\\right)\\nonumber\\\\\n &-&\\!\\!\\!g_c^{\\mu\\rho}\\partial_{\\rho}\\partial_{\\nu}\\delta f\n -\\delta g^{\\mu\\rho}\\partial_{\\rho}\\partial_{\\nu}f_c\n +g_c^{\\mu\\rho} {}^{(0)}\\Gamma^{\\alpha}_{\\rho\\nu}\\partial_{\\alpha}\\delta f\n +g_c^{\\mu\\rho}\\delta\\Gamma^{\\alpha}_{\\rho\\nu}\\partial_{\\alpha} f_c\n +\\delta g^{\\mu\\rho} {}^{(0)}\\Gamma^{\\alpha}_{\\rho\\nu}\\partial_{\\alpha} f_c\n\\end{eqnarray}\nwhere the Christoffel symbol $\\Gamma^{\\alpha}_{\\rho\\lambda}=^{(0)}\\!\\!\\Gamma^{\\alpha}_{\\rho\\lambda}+\\delta\\Gamma^{\\alpha}_{\\rho\\lambda}$ (the notation \nis temporarily changed for 
simplicity).\nThe first line is the classical part $F^{\\ \\mu}_{c\\ \\nu}$, and the rest is the linear order part $\\delta F^{\\mu}_{\\ \\nu}$.\nUp to first order in perturbation theory, $\\overline{\\delta F^{\\mu}_{\\ \\nu} (\\delta f)}=\\delta F^{\\mu}_{\\ \\nu}(\\overline{\\delta f})$ and we get\n\\begin{eqnarray}\n\\overline{\\delta F}^{0}_{~0}&=&\\frac{1}{a^2}\\left(-\\Delta\\overline{\\delta f}+3\\mathcal{H}\\overline{\\delta f}'-6\\mathcal{H}\\Phi f'_c-3\\Psi'f'_c\\right)\\\\\n\\overline{\\delta F}^{0}_{~i}&=&\\frac{1}{a^2}\\left(-\\overline{\\delta f}'+\\mathcal{H}\\overline{\\delta f}+\\Phi f'_c\\right),_i\\\\\n\\overline{\\delta F}^{i}_{~j}&=&\\frac{1}{a^2}\\left[\\delta^i_{~j}\\left\\{\\overline{\\delta f}''-\\Delta\\overline{\\delta f}-2\\Phi f''_c+\\mathcal{H}\\overline{\\delta f}'-(\\Phi+2\\Psi)'f'_c-2\\mathcal{H}\\Phi f'_c\\right\\}\\right.\\nonumber\\\\\n&&\\left.+\\partial_i \\partial_j \\overline{\\delta f}\\right]\n\\end{eqnarray}\n\nThen, when $i\\neq j$,\n\\begin{equation}\nf_c(\\Phi-\\Psi),_{ij}=-a^2 \\overline{\\delta F}^{i}_{\\ j}=-\\partial_i \\partial_j \\overline{\\delta f}\n\\label{eq:third-3}\n\\end{equation}\nIn the minimal coupling case, there is no anisotropic inertia and we can combine the Einstein equations to get the Mukhanov--Sasaki \nequation \\cite{Mukhanov200511}. 
However, due to the anisotropic inertia term and other additional terms from $\\delta f$, it is difficult to \ncombine the equations straightforwardly in the non-minimal coupling case without resorting to other mathematical techniques, such as \nthe ADM formalism.\n\n\\section{ Conformal transformation and invariance}\n\\subsection{Conformal transformation}\nConsider again the non-minimal coupling k-inflation model in the Jordan frame:\n\\begin{equation}\nS=\\frac{1}{2}\\int d^4 x\\sqrt{-g}\\left[-f(\\varphi)R+2P(\\varphi,X) \\right]\n\\tag{\\ref{non-minimal-k}}\n\\end{equation}\nTo derive the adiabatic subtraction term without the ADM formalism, we perform the conformal transformation \n$g_{\\mu\\nu}\\rightarrow \\widehat{g}_{\\mu\\nu}=f(\\varphi)g_{\\mu\\nu}$ to obtain the action in the Einstein frame \\cite{Wald198406,Kubota:2012-1}:\n\\begin{equation}\nS=\\frac{1}{2}\\int d^4 \\widehat{x}\\sqrt{-\\widehat{g}}\\left[-\\widehat{R}+2\\widehat{P}(\\varphi,\\widehat{X}) \\right]\n\\label{eq:Eaction}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n\\widehat{a}(\\widehat{\\eta})&\\equiv& \\sqrt{f}a(\\eta)\\nonumber\\\\\n\\widehat{R}&\\equiv&\\frac{1}{f}\\left[R-3\\nabla_{\\mu}\\nabla^{\\mu}\\ln{f}-\\frac{3}{2}(\\partial_{\\mu}\\ln{f})(\\partial^{\\mu}\\ln{f})\\right]\\nonumber\\\\\n\\widehat{X}&\\equiv&\\frac{1}{2f}g^{\\mu\\nu}\\partial_{\\mu}\\varphi\\partial_{\\nu}\\varphi=\\frac{1}{f}X\\nonumber\\\\\n\\widehat{P}&\\equiv&\\frac{1}{f^2}P+\\frac{3}{4f}(\\partial_{\\mu}\\ln{f})(\\partial^{\\mu}\\ln{f})~.\n\\label{eq:translaw}\n\\end{eqnarray}\n\nNote that the coordinates are not changed by the conformal transformation when we use conformal time, i.e., 
$d\\widehat{\\eta}=d\\eta$, $d\\widehat{x}^i =dx^i$.\nThen the line element in the conformal-Newtonian gauge becomes\n\\begin{eqnarray}\nd\\widehat{s}^2 &\\equiv& f(\\varphi)ds^2\\nonumber\\\\\n&=& f(\\varphi)a^2 (\\eta)\\left[(1+2\\Phi)d\\eta^2 -(1-2\\Psi)d\\vec{x}^2\\right]\\nonumber\\\\\n&=& \\widehat{a}^2 (\\eta)\\left[(1+2\\widehat{\\Phi})d\\eta^2 -(1-2\\widehat{\\Psi})d\\vec{x}^2\\right]~.\n\\end{eqnarray}\nAfter expanding each coefficient up to linear order in perturbations, we get the transformation laws of the scalar perturbations \\cite{Chiba:2008-1}:\n\\begin{equation}\n\\widehat{\\Phi}=\\Phi+\\frac{\\overline{\\delta f}}{2f_c},~\n\\widehat{\\Psi}=\\Psi-\\frac{\\overline{\\delta f}}{2f_c}\n\\label{eq:transPhiPsi}\n\\end{equation}\n\nLet us check whether or not the anisotropic inertia in the Jordan frame vanishes in the Einstein frame.\nSubstituting Eq. (\\ref{eq:transPhiPsi}) into Eq. (\\ref{eq:third-3}) leads to\n\\begin{equation}\nf_c\\left[\\left(\\widehat{\\Phi}-\\frac{\\overline{\\delta f}}{2f_c}\\right)-\\left(\\widehat{\\Psi}+\\frac{\\overline{\\delta f}}{2f_c}\\right)\\right],_{ij}=f_c\\left(\\widehat{\\Phi}-\\widehat{\\Psi}\\right),_{ij}-\\partial_i \\partial_j \\overline{\\delta f}=-\\partial_i \\partial_j \\overline{\\delta f}~.\n\\end{equation}\nTherefore we recover the expected relation $\\widehat{\\Phi}=\\widehat{\\Psi}$ in the Einstein frame.\n\n\\subsection{Conformal invariance and physicality\\label{sec:conformal}}\n\nThe conformal invariance of the comoving curvature perturbation $\\mathcal{R}$ and its correlation functions \n$\\langle\\mathcal{R}\\dots\\rangle$ was shown in \\cite{Chiba:2008-1,Kubota:2012-1}, i.e.,\n\\begin{equation}\n\\widehat{\\mathcal{R}}=\\mathcal{R},\\hspace{1em}\n\\langle\\widehat{\\mathcal{R}}\\dots\\rangle = \\langle\\mathcal{R}\\dots\\rangle\n\\end{equation}\nThen, if we regularize the correlation functions by adiabatic regularization, the subtraction terms in each frame must also agree with each other. 
Otherwise, the regularized power spectrum would depend on the frame, and the method of either the conformal transformation or the adiabatic regularization would be spoiled.\n\nIn this section, we will derive the adiabatic subtraction terms in the Einstein frame and show the conformal invariance of the \nadiabatic subtraction terms in both the Jordan and Einstein frames.\n\nFirst, consider the equation of motion derived from the action Eq. (\\ref{eq:Eaction}),\n\\begin{equation}\n\\frac{d^2}{d\\widehat{\\eta}^2}\\widehat{v}_k +\\left(\\widehat{c}_s^2 k^2 - \\frac{1}{\\widehat{z}}\\frac{d^2 \\widehat{z}}{d\\widehat{\\eta}^2}\\right)\\widehat{v}_k=0\n\\end{equation}\nwhere\n\\begin{equation}\n\\widehat{v}_k\\equiv\\widehat{z}\\widehat{\\mathcal{R}}_k =\\widehat{z}\\mathcal{R}_k,\\hspace{1em}\n\\widehat{z}\\equiv\\frac{\\sqrt{2\\widehat{\\epsilon}_1}\\widehat{a}}{\\widehat{c}_s},\\hspace{1em}\n\\widehat{c}_s^2\\equiv\\frac{\\widehat{P},_{\\widehat{X}}}{2\\widehat{X}\\widehat{P},_{\\widehat{X}\\widehat{X}}+\\widehat{P},_{\\widehat{X}}}~.\n\\end{equation}\nSince the conformal time is unchanged, the equation becomes\n\\begin{equation}\n\\widehat{v}''_k +\\left(\\widehat{c}_s^2 k^2 - \\frac{\\widehat{z}''}{\\widehat{z}}\\right)\\widehat{v}_k=0~.\n\\label{eq:EMS}\n\\end{equation}\nWe can get the adiabatic subtraction terms for the power spectrum from Eq. (\\ref{eq:EMS}) in the same way as in the minimal coupling case:\n\\begin{equation}\n|\\widehat{\\mathcal{R}} (\\eta)|_{\\text{sub}}^2=\n\\frac{1}{2\\widehat{z}^2 \\widehat{c}_s k}\\left\\{1+\\frac{\\widehat{z}''}{2\\widehat{z}}\\frac{1}{\\widehat{c}_s^2 k^2}+\\frac{1}{\\widehat{c}_s^2 k^2}\\left(\\frac{1}{4}\\frac{\\widehat{c}''_s}{\\widehat{c}_s}-\\frac{3}{8}\\frac{\\widehat{c}'_s {}^2}{\\widehat{c}_s^2}\\right)\\right\\}\n\\label{sub:einstein}\n\\end{equation}\n\nFrom the adiabatic subtraction term in the Jordan frame Eq. (\\ref{sub:jordan}) and in the Einstein frame Eq. 
(\\ref{sub:einstein}), we see that the adiabatic subtraction terms of the non-minimally coupled k-inflation model depend only on $z_{\\text{eff}}$ \/ $\\widehat{z}$ and $c_{s,\\text{eff}}$ \/ $\\widehat{c}_s$.\nTherefore, if $\\widehat{z}$ and $\\widehat{c}_s$ are equal to the effective quantities Eq. (\\ref{eq:zeff})--(\\ref{eq:ceff}) after the conformal transformation, the subtraction terms in the Jordan frame and in the Einstein frame correspond to each other.\n\nOnce the correspondence of the subtraction terms in both frames has been shown, it is not necessary to derive them in the Jordan frame by using the ADM formalism.\nWe can calculate them in the Einstein frame, and transform them into the Jordan frame if we want to see the physical properties due to the non-minimal coupling.\n\nFor the conformal transformation, it is convenient to define two functions, as in Eq. (\\ref{eq:thetasigma}):\n\\begin{equation}\n\\theta\\equiv\\frac{1}{2}\\ln{(fa^2)}=\\frac{1}{2}\\ln{(\\widehat{a}^2)},\\hspace{1em}\n\\widehat{\\Sigma}\\equiv\\widehat{X}\\widehat{P}_{,\\widehat{X}}+2\\widehat{X}^2 \\widehat{P}_{,\\widehat{X}\\widehat{X}}\n\\end{equation}\nUsing the Friedmann equations in the Einstein frame, $\\widehat{c}_s$ and $\\widehat{z}$ are expressed as\n\\begin{equation}\n\\widehat{c}_s^2 =\\frac{\\widehat{X}\\widehat{P},_{\\widehat{X}}}{\\widehat{\\Sigma}}=\\frac{\\widehat{\\epsilon}_1 \\widehat{H}^2}{\\widehat{\\Sigma}},\\hspace{1em}\n\\widehat{z}^2 =\\frac{2\\widehat{\\epsilon}_1 \\widehat{a}^2}{\\widehat{c}_s^2}=\\frac{2\\widehat{a}^2 \\widehat{\\Sigma}}{\\widehat{H}^2}~.\n\\label{eq:Ecs}\n\\end{equation}\nThus we need to transform the three quantities $\\widehat{H}$, $\\widehat{\\epsilon}_1$ and $\\widehat{\\Sigma}$.\n\\begin{eqnarray}\n\\widehat{H}&=&\\frac{\\widehat{\\mathcal{H}}}{\\widehat{a}}\n =\\frac{\\widehat{a}'}{\\widehat{a}^2}\n =e^{-\\theta}\\theta'\\\\\n\\widehat{\\epsilon}_1&\\equiv&\\frac{1}{\\widehat{\\mathcal{H}}}\\frac{\\widehat{\\epsilon}'_0}{\\widehat{\\epsilon}_0}\n 
=1-\\frac{\\widehat{\\mathcal{H}}'}{\\widehat{\\mathcal{H}}^2}\n =1-\\frac{\\theta''}{\\theta' {}^{2}}\\label{eq:hatep}\\\\\n\\widehat{\\Sigma}&=&\\frac{X}{f}\\left(f\\frac{\\partial}{\\partial X}\\right)\\left\\{\\frac{1}{f^2}P+\\frac{3X}{2f}\\left(\\frac{d\\ln{f}}{d\\varphi}\\right)^2\\right\\}\\nonumber\\\\\n&&+2\\left(\\frac{X}{f}\\right)^2 \\left(f\\frac{\\partial}{\\partial X}\\right)^2 \\left\\{\\frac{1}{f^2}P+\\frac{3X}{2f}\\left(\\frac{d\\ln{f}}{d\\varphi}\\right)^2\\right\\}\\nonumber\\\\\n&=&\\frac{1}{f^2}\\left\\{XP,_X+2X^2 P,_{XX}+\\frac{3}{2}fX\\left(\\frac{d\\ln{f}}{d\\varphi}\\right)^2\\right\\}\\nonumber\\\\\n&=&\\frac{1}{f^2}\\left\\{\\Sigma+\\frac{3}{4f}\\left(\\frac{f'}{a}\\right)^2\\right\\}\\nonumber\\\\\n&=&e^{-4\\theta}a^4\\left\\{\\Sigma+3\\frac{e^{2\\theta}}{a^4}(\\theta'-\\mathcal{H})^2\\right\\}\\nonumber\\\\\n&=&e^{-4\\theta}\\theta'^2 \\left\\{3e^{2\\theta}\\left(\\frac{\\mathcal{H}}{\\theta'}-1\\right)^2+\\frac{a^4 \\Sigma}{\\theta'^2}\\right\\}\n\\end{eqnarray}\nTo transform $\\widehat{\\Sigma}$, we used Eq. 
(\\ref{eq:translaw}) and the following relations.\n\\begin{eqnarray}\n\\widehat{X}&\\equiv&\\frac{1}{2}\\widehat{g}^{\\mu\\nu}\\partial_{\\mu}\\varphi\\partial_{\\nu}\\varphi\n=\\frac{2}{3}\\left(\\frac{d\\varphi}{d\\ln{f}}\\right)^2\\cdot\\frac{3}{4}\\widehat{g}^{\\mu\\nu}(\\partial_{\\mu}\\ln{f})(\\partial_{\\nu}\\ln{f})\\\\\n\\widehat{P}&\\equiv&\\frac{1}{f^2}P+\\frac{3}{4}\\widehat{g}^{\\mu\\nu}(\\partial_{\\mu}\\ln{f})(\\partial_{\\nu}\\ln{f})\\nonumber\\\\\n&=&\\frac{1}{f^2}P+\\frac{3}{2}\\widehat{X}\\left(\\frac{d\\ln{f}}{d\\varphi}\\right)^2\n\\end{eqnarray}\nSubstituting them into Eq. (\\ref{eq:Ecs}), we get\n\\begin{eqnarray}\n\\widehat{z}^2&=&2\\left\\{3e^{2\\theta}\\left(\\frac{\\mathcal{H}}{\\theta'}-1\\right)^2+\\frac{a^4 \\Sigma}{\\theta'^2}\\right\\}\\label{eq:zcon}\\\\\n\\widehat{c}_s^2&=&\\frac{e^{2\\theta}\\left(1-\\frac{\\theta''}{\\theta' {}^{2}}\\right)}{3e^{2\\theta}\\left(\\frac{\\mathcal{H}}{\\theta'}-1\\right)^2+\\frac{a^4 \\Sigma}{\\theta'^2}}=\\frac{\\theta'^2-\\theta''}{3(\\mathcal{H}-\\theta')^2+\\frac{a^4 \\Sigma}{e^{2\\theta}}}\\label{eq:ccon}\n\\end{eqnarray}\nComparing the results with Eq. (\\ref{eq:zeff})--(\\ref{eq:ceff}), we find that the effective variables in the Jordan frame and the variables in the Einstein frame agree with each other:\n\\begin{equation}\nz_{\\text{eff}}=\\widehat{z},\\hspace{1em}c_{s,\\text{eff}}=\\widehat{c}_s\n\\end{equation}\nTherefore the adiabatic subtraction terms in the Einstein and Jordan frames are identical, and we get the same regularized\npower spectrum in either frame.\n\nOf course, because the comoving curvature perturbation itself is conformally invariant, the equations the perturbations obey should also be \nconformally invariant. 
The above calculations show this explicitly.\n\\part{Conclusion}\n\\setcounter{section}{0}\n\nWe have investigated the adiabatic regularization of the power spectrum in general single scalar field inflation.\nWe derived the adiabatic subtraction terms and found that (due to the sound speed) their behavior is not trivial. \nThe analysis was partly done by assuming some specific models or approximations \\cite{Alinea:2015-1}, and there may be models which have growing or non-vanishing subtraction terms.\n\nWe have also calculated the subtraction terms for the non-minimally coupled inflaton case.\nThe subtraction term is completely determined by the equation of motion of the frame-invariant scalar curvature perturbation, and we confirmed its frame invariance explicitly.\n\n\\section*{Discussion and future work}\n\\subsection*{What do vanishing subtraction terms mean?}\n\nIn section \\ref{sec:timedev}, we considered the time development of the adiabatic subtraction terms.\nWe saw that the speed of sound may be able to make the subtraction terms large enough to affect the observable power spectrum: in canonical inflation, they generally decay after the inflationary era (see Appendix \\ref{chap:sub-dev-after}) and we expect that they become sufficiently small by the time they are transferred to observables.\n\n\nWe assumed a scheme in which the bare spectrum freezes at the (sound) horizon crossing, while the subtraction terms remain time-developing.\nVanishing subtraction terms mean that the integral of the bare spectrum at the crossing has no divergence to begin with.\nThis result agrees with the conventional interpretation of the observable power spectrum: the observable spectrum has no high frequency modes because they have not yet exited the horizon and not yet become ``classical''.\n\nHowever, even if the result with regularization does not differ from the result ``without'' regularization, it does not mean that the argument for regularization of the power spectrum is 
unnecessary. Indeed, it gives mathematical support to the observable power spectrum, and it is also needed in an interacting theory.\n\n\n\\subsection*{Future work}\n\nIn section \\ref{sec:timedev}, we investigated how the subtraction terms behave qualitatively at late times with the subtraction scheme used in \\cite{Urakawa:2009-1}. Related to this scheme, there are some problems still to be solved.\n\nThe subtraction scheme has not been completely established.\nThere are many discussions about this issue (e.g. \\cite{Finelli:2007-1,Urakawa:2009-1,Haro:2010-1,Marozzi:2011-1,BasteroGil:2013-1}) and some papers such as \\cite{Finelli:2007-1,BasteroGil:2013-1} claim that the regularization of the power spectrum is unnecessary in the first place.\nSome of these claims have already been rejected; nevertheless, the debate is still continuing.\nThis is possibly due to our poor knowledge of the ``freezing out'' and the unclear nature of the renormalization condition using adiabatic subtraction (see also \\cite{Markkanen:2013-1}).\n\nUnderstanding ``freezing out'' is an important problem in cosmology.\nWhile it is true that the comoving curvature perturbation loses its time-dependence at large scales, $-k\\eta\\ll 1$, how it becomes classical is still unclear.\nThe scheme we take in this thesis assumes, in a sense, that the freezing out does not occur at the horizon crossing.\nWhen the quantum fluctuations become observables is a fundamental problem we should consider in more detail.\n\nWe should also mention that we have only calculated the value of the subtraction terms qualitatively.\nAs we saw in section \\ref{sec:timedev}, it is possible for terms to survive or become large when the model has non-canonical kinetic terms.\nWe therefore need to know how large these surviving terms are and compare the regularized power spectrum with the bare one.\nSome models might need to be modified to achieve a realistic power spectrum if the subtraction terms remain significant at the time 
of classicalization:\nwe can use the regularized power spectrum to constrain an inflationary model by combining it with other theoretical constraints.\n\\part{Appendices}\n\\input{Appendices\/AppendixA}\n\\input{Appendices\/AppendixB}\n\\input{Appendices\/AppendixC}\n\\input{Appendices\/AppendixD}\n\n\n\\label{Bibliography}\n\\bibliographystyle{unsrt} \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}