diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzfgfr" "b/data_all_eng_slimpj/shuffled/split2/finalzzfgfr" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzfgfr" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\n\n\n\n\n\n\n\n\n\nMachine learning models, such as neural networks, random forests, and gradient boosted trees, are widely used in various fields, including computer vision and transportation, and are transforming the field of computer science~\\cite{Survey1,transport}. However, the probabilistic nature of classifications made by these models means that misclassifications are inevitable. As a result, estimating the uncertainty for a particular input is a crucial challenge in machine learning. In fact, many machine learning models have some built-in measure of confidence that is often provided to the user for risk management purposes. The field of \\emph{uncertainty calibration} aims to improve the accuracy of the confidence estimates made by machine learning models~\\cite {pmlr-v70-guo17a}. \n\n\n\n %\n Confidence evaluation, or the model's prediction of its success rate on a specific input, is a crucial aspect of mission-critical machine learning applications, as it provides a realistic estimate of the probability of success for a classification and enables informed decisions about the \\emph{current} situation. Even a highly accurate model may encounter an unexpected situation, which can be communicated to the user through confidence estimation. For example, consider an autonomous vehicle using a model to identify and classify traffic signs. The model is very accurate, and in most cases, its classifications are correct with high confidence. However, one day, it encounters a traffic sign that is obscured, e.g., by heavy vegetation. In this case, the model's classification is likely to be incorrect.\n Estimating confidence, or uncertainty, is a crucial tool for assessing unavoidable risks, allowing system designers to address these risks more effectively and potentially avoid unexpected and catastrophic consequences. For example, our autonomous vehicle may reduce its speed and activate additional sensors until it reaches higher confidence. \n Therefore, all popular machine learning models have mechanisms for determining confidence that can be calibrated to maximize the quality of confidence estimates~\\cite{Niculescu2005Predicting,CNNCalibration,Ana2019Verified}, and there is ongoing research to calibrate models more effectively and enable more reliable applications~\\cite{Survey2,Sun2007}. \n\nExisting calibration methods can be divided into two categories: post-hoc methods that perform a transformation that maps the raw outputs of classifiers to their expected probabilities~\\cite{NEURIPS2019_8ca01ea9,pmlr-v70-guo17a,gupta2021distribution}, and ad-hoc methods that adapt the training process to produce better calibrated models~\\cite{ThulasidasanCBB19,pmlr-v97-hendrycks19a}. Post-hoc calibration methods are easier to apply because they do not change the model and do not require retraining. However, ad-hoc methods may lead to better model training in the first place and more reliable models. 
With the success of both approaches, recent research has focused on using ensemble methods whose estimates are a weighted average of multiple calibration methods~\\cite{Ashukha2020Pitfalls,pmlr-v161-ma21a,pmlr-v119-zhang20k,naeini2016binary,naeini2015obtaining}.\nAnother recent line of work attempts to further refine the uncertainty estimations by refining the grouping of confidence estimations, e.g.,~\\cite{ByoundCalibration,grouppingConfidence}.\n\n\n\n\n\n\nIn principle, post-hoc calibration can be viewed as cleaning up a signal, namely the model's original confidence estimate. Interestingly, if we follow this logic, it is clear that the maximal attainable benefit lies in the quality of the signal. To see this, consider a model that plots the same confidence for all inputs. In this case, the best result that can be achieved is to set that confidence to the model's average accuracy over all inputs. Therefore, finding better signals to calibrate is a promising direction for research.\n\n\n\n\n\n\n\n\n\n\n %\n\n\nIn this work, we introduce a novel approach for improving uncertainty estimates in machine learning models \\emph{using geometry}. \nWe first provide an algorithm for calculating the maximal geometric \\emph{separation} of an input.\nHowever, calculating the geometric separation of an input requires evaluating the whole space of training inputs, making it a computationally expensive method that is not always feasible. \nTherefore, we suggest multiple methods to accelerate the process, including a lightweight approximation called \\emph{fast-separation} and several data reduction methods that shorten the geometric calculation. \n\n\nWe demonstrate that using our geometric-based method, combined with a standard calibration method, leads to more accurate confidence estimations than calibrating the model's original signal across different models and datasets. \nEven more, our approach yields better estimation even when compared to state-of-the-art calibration methods~\\cite{Ana2019Verified,gupta2021distribution,pmlr-v70-guo17a,pmlr-v119-zhang20k,naeini2015obtaining,kull2017beta}. \nAdditionally, we show that our approach can be implemented in near real-time on a variety of datasets through the use of multiple levels of approximation and optimization. This is particularly useful for practical applications that require rapid decision-making, such as autonomous driving.\nThe entire code is available at our Github~\\cite{Code}. \n\n\n\n\n\\section{Related Work}\n\n\n\nAs mentioned above, uncertainty calibration is about estimating the model's success probability of classifying a given example. Post-hoc calibration methods apply some transformation to the model's confidence (without changing the model) such transformations include Beta calibration (Beta)~\\cite{kull2017beta}, Platt scaling (Platt)~\\cite{platt1999probabilistic}, Temperature Scaling (TS)~\\cite{pmlr-v70-guo17a,NEURIPS2019_8ca01ea9}, Ensemble Temperature Scaling (ETS)~\\cite{pmlr-v119-zhang20k}, and cubic spline~\\cite{gupta2021distribution}.\nIn brief, these methods are limited by the best learnable mapping between the model's confidence estimations, and the actual confidence. That is, post-hoc calibration map each confidence value to another calibrated value whereas our method introduces a new signal that can be calibrated just like the model's original signal. Another work that uses a geometric distance in this context is \\cite{Dalitz09Reject}. 
There, the confidence score is computed directly from the geometric distance, while we first fit a function on a subset of the data to learn the specific behavior of the dataset and model. Moreover, the work in~\\cite{Dalitz09Reject} only applies to the k-nearest neighbor model, while our method is applicable to all models.\n\n\nThe recently proposed Scaling Binning Calibrator (SBC) of~\\cite{Ana2019Verified} uses a fitting function on the confidence values, divides the inputs into bins of equal size, and outputs the function's average in each bin. Histogram Binning (HB) \\cite{gupta2021distribution} uses a similar idea but divides the inputs into uniform-mass (rather than equal-size) bins.\nInterestingly, while most post-hoc calibration methods are model agnostic, recent methods have begun to look at a neural network non-probabilistic output called logits~(before applying softmax)~\\cite{CNNCalibration,Ding2020,wenger2019}. Thus, some new post-hoc calibration methods apply only to neural networks. \n\n\nEnsemble methods are similar to post-hoc calibration methods as they do not change the model, but they consider multiple signals to determine the model's confidence~\\cite{Ashukha2020Pitfalls,pmlr-v161-ma21a}. \nFor example, Bayesian Binning into \nQuantiles (BBQ)~\\cite{naeini2015obtaining} is an extension of HB that uses multiple histogram binning models with different bin numbers, and partitions then outputs scores according to Bayesian averaging.\nThe same methodology of Bayesian averaging is applied in Ensemble of Near Isotonic Regression~\\cite{naeini2016binary},\nbut instead of histogram binning, they use nearly isotonic regression models.\n\n\n\nAd-hoc calibration is about training models in new manners aimed to yield better uncertainty estimations. Important techniques in this category include mixup training~\\cite{ThulasidasanCBB19}, pre-training~\\cite{pmlr-v97-hendrycks19a}, \nlabel-smoothing~\\cite{NEURIPS2019_f1748d6b}, data augmentation \\cite{Ashukha2020Pitfalls}, self-supervised learning~\\cite{NEURIPS2019_a2b15837}, Bayesian approximation (MC-dropout)~\\cite{pmlr-v48-gal16,NIPS2017_84ddfb34}, Deep Ensemble (DE)~\\cite{DeepEnsembles}, Snapshot\nEnsemble~\\cite{SnapshotEnsemble}, Fast Geometric Ensembling (FGE)~\\cite{FastEnsembling}, and SWA-Gaussian (SWAG)~\\cite{SWAG}. \nA notable approach is to use geometric distances in the loss function while training the model~\\cite{Xing2020distance}. The authors work with a representation space that maximizes intra-class distances, minimizes inter-class distances, and uses the distances to estimate the confidence.\nAd-hoc calibration is perhaps the best approach in public as it tackles the core of models' calibration directly. However, because it offers specific training methods, it is of less use to large and already trained models, and the impact of each workshop is limited to a specific model type (e.g., DNNs in~\\cite{FastEnsembling}). In comparison, post-hoc and ensemble methods (and our own method) often work for numerous models. \n\n\n\n\n\n\n\n\n\n\n\nOur geometric method is largely inspired by the approach of robustness proving in machine learning models. In this field, formal methods are used to prove that specific inputs are robust to small adversarial perturbations. That is, we formally prove that all images in a certain geometric radius around a specific train-set image receive the same classification~\\cite{mooly, KatzBDJK17,marta,Gehr2018AISA,Ehlers17,DBLP:conf\/aaai\/EinzigerGSS19}. 
These works \nrely on formal methods produced in an offline manner and thus apply only to training set inputs (known apriori). Whereas confidence estimation reasons about the current input. However, the underlying intuition, i.e., that geometrically similar inputs should be classified in the same manner is also common to our work. \n\nIndeed, our work shows that geometric properties of the inputs can help us quantify the uncertainty in certain inputs and that, in general, inputs that are less geometrically separated and are 'on the edge' between multiple classifications are more error-prone than inputs that are highly separated from other classes. Thus our work reinforces the intuition behind applying formal methods to prove robustness and supports the intuition that more robust training models would be more dependable. \n\n\n\n\n\\section{Geometric Separation}\nIn this section, we define a geometric separation measure that reasons about the distance of a given input from other inputs with different classifications. Our end goal is to use this measure to provide confidence estimations. Formally, a model receives a data input, $x$, and outputs the pair $\\langle\\mathcal{C}(x),\\mathit{conf}(x)\\rangle$, where $\\mathcal{C}(x)$ is the model's classification of $x$ and $\\mathit{conf}(x)$ reflects the probability that the classification is correct. We estimate the environment around $x$ where inputs are closer to inputs of certain classifications over the others. Our work assumes that the inputs are normalized, and thus these distances carry the same significance between the different inputs. \n\nIn~\\cref{sec:sep}, we define geometric separation and provide an algorithm to calculate it. Our evaluation shows that geometric separation produces a valuable signal that improves confidence estimations. However, calculating geometric separation is too cumbersome for real-time systems, so we suggest a lightweight approximation in~\\Cref{sec:stab}.\nFinally,~\\cref{sec:conf} explains how we use the geometric signal to derive $\\mathit{conf}(x)$. That is, mapping a real number corresponding to the geometric separation to a number in $[0,1]$ corresponding to the confidence ratio. \n\n\n\n\\subsection{Separation Measure}\n\\label{sec:sep}\nWe look at the displacement of $x$ compared to nearby data inputs within the training set. Intuitively, when $x$ is close to other inputs in $\\mathcal{C}(x)$ (i.e., inputs with the same classification as $x$) and is far from inputs with other classifications, then the model is correct with a high probability, implying that $\\mathit{conf}(x)$ should be high. On the other hand, when there are training inputs with a different classification close to $x$, we estimate that $\\mathcal{C}(x)$ is more likely to be incorrect. \n\nBelow we provide definitions that allow us to formalize this intuitive account. In what follows, we consider a model $\\mathcal{M}$ to consist of a machine learning model (e.g., a gradient boosted tree or a neural network), along with a labeled train set, $\\mathit{Tr}$, used to generate the model. \nWe use an implicit notion of distance and denote by $\\mathit{d}(x,y)$ the distance between inputs $x$ and $y$, and by $\\mathit{D}(x,A)$ the distance between the input $x$ and the set $A$ (i.e., the minimal distance between $x$ and the inputs in $A$).\n\n\\begin{defn}[Safe and Dangerous inputs]\n\\label{def:Tr_C(x)}\nLet $\\mathcal{M}$ be a model. 
\nFor an input $x$ in the sample space we define:\n\\[\\mathit{F}_\\model(x) :=\\{x'\\in \\mathit{Tr} : \\mathcal{C}(x')=\\mathcal{C}(x)\\}.\\]\nWe denote by $\\overline{\\mathit{F}}_\\model(x)$ the set $\\mathit{Tr} \\setminus \\mathit{F}_\\model(x)$.\nAn input $x\\in \\mathcal{X}$ is labeled as \\emph{safe} if $D(x,\\mathit{F}_\\model(x)) < D(x,\\overline{\\mathit{F}}_\\model(x))$, and it is labeled as \\emph{dangerous} otherwise.\n\\end{defn} \n\n\n\\begin{defn}[Zones]\n\\label{def:zone}\nLet $x$ be a safe (dangerous) input. \nA \\emph{zone} for $x$, denoted $z_x$, is such that for any input $y$, if $d(x,y)0$ is a tuning parameter.\n\nAs pointed out in Electra~\\cite{clark2020electra}, we believe that our per-token positional-congruence loss ${\\cal L}_\\text{BCE}$ provides a richer ``dense'' feedback to the model during training, as opposed to when only the CLS token is used in the contrastive loss, and this might help convergence. As in MoCoV3 we use a symmetrized loss.\n\n\\subsection{Implementation}\n\n\\noindent\\textbf{Architecture.} We use Vision Transformers (ViT)~\\cite{dosovitskiy2020image} with a patch size of $16\\times 16$ pixels and an input image size of $224\\times 224$ pixels, which gives a total of $(224\/16)^2 = 196$ tokens. Due to computational limitations, we only use the small variant of the Vision Transformer (ViT-S) which has $12$ transformer blocks and $384$ channels. Following MoCoV3~\\cite{chen2021empirical} we use $12$ attention heads in each attention layer. This is different from most ViT-S implementations, which use 6 heads. This does not change the total number of parameters of the model, but incurs a slight speed penalty. We use a 3-layer MLP for the projection and prediction heads with synchronized batch normalization. We also freeze the weights of the patch embedding layer for better stability.\\\\\n\n\\noindent\\textbf{Pre-training Setup.} We pre-train {DILEMMA} on ImageNet-1K~\\cite{deng2009imagenet} with the exact same hyper-parameters (including learning rate, learning rate scheduler, optimizer, and warm-up epochs) of MoCoV3 using three GeForce RTX 3090 GPUs for 100 epochs with a base batch size of 345. We set the $\\lambda_\\text{DILEMMA}$ to 0.4 and the probability of positional embedding mismatch $\\theta=0.2$. We use sparsity ratios of 0\\%, 40\\%, 55\\%, 65\\% with $1\\times$, $2\\times$, $3\\times$, $4\\times$ base batch size and disable the {DILEMMA} loss when the input is dense. To compare with DINO~\\cite{caron2021emerging}, we trained a DINO network with a batch size of $480$ for $100$ epochs without multi-cropping. For the sake of completeness we also trained a {DILEMMA} for 150 epochs which takes the same amount of time as training MoCov3 for 100 epochs.\\\\\n\n\\noindent\\textbf{Linear Probing.} To evaluate the pre-trained features for image classification, we train a simple linear layer on top of \\underline{frozen features}, without any data augmentation (Linear$_{F}$). Note that it is different from the standard linear probing, and we opt to use this method for its simplicity and speed. It is also more aligned with the end goal of representation learning. 
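As a concrete illustration of the Linear$_{F}$ protocol, the following minimal PyTorch-style sketch first caches the frozen encoder's CLS embeddings (no augmentation) and then fits a single linear layer on them; the backbone, the loader and the full-batch optimization loop are placeholders assumed for brevity, not the actual evaluation code.

\\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

@torch.no_grad()
def extract_features(backbone, loader, device="cuda"):
    # Cache CLS embeddings once with the frozen, eval-mode encoder (no augmentation).
    backbone.eval()
    feats, labels = [], []
    for images, targets in loader:
        feats.append(backbone(images.to(device)).cpu())  # assumed: backbone returns the CLS embedding
        labels.append(targets)
    return torch.cat(feats), torch.cat(labels)

def linear_probe(feats, labels, num_classes, epochs=100, lr=0.1):
    # Fit one linear layer on the cached frozen features (optionally normalized beforehand).
    head = nn.Linear(feats.size(1), num_classes)
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(head(feats), labels)
        loss.backward()
        opt.step()
    return head
\\end{verbatim}

Because the features are computed only once, sweeping learning rates, batch sizes and normalization choices in the grid search described next is cheap.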
In all the linear probing experiments, we use the embedding of the CLS token of the last layer (unlike in DINO~\\cite{caron2021emerging}, which uses the CLS token of the last four attention layers of the network and concatenates them) and perform a coarse grid search over learning rates, batch sizes and whether to normalize the data before feeding them to the linear layer or not (similarly to the added BatchNorm layer~\\cite{ioffe2015batch} in MAE~\\cite{he2021masked}).\n\n\\section{Experiments}\n\nWe evaluate {DILEMMA} on several datasets, compare it to existing methods in SSL, perform ablations to show the role of each loss function and analyze its shape vs texture bias. \nWe refer to MoCoV3~\\cite{chen2021empirical} as our baseline.\n\n\\subsection{Classification on ImageNet-1K}\n\n\\noindent\\textbf{k-NN and Linear Probing.}\nIn order to evaluate the quality of the pre-trained features, we either use a weighted $k$ nearest neighbor classifier (we always use $k=20$)~\\cite{Wu2018UnsupervisedFL} or a simple linear layer on top of a frozen backbone and \\underline{frozen features}. \nIn Table~\\ref{tab:imagenet_fast}, {DILEMMA} outperforms the base model by 1.5\\% after 2\/3 of the training time. \nIt also outperforms DINO in similar settings when multi-crop training is disabled (multi-crop training could also improve the results of MoCoV3 and {DILEMMA}, but this is beyond the focus of this work). We also show {DILEMMA} trained for the same amount of time as MoCoV3 in gray and denote it with the $(\\uparrow)$ symbol. In either case, {DILEMMA} shows a consistent and significant improvement.\nNote that Table~\\ref{tab:imagenet_others} includes all the results from other methods with their reported numbers. Reported numbers for the linear accuracy are based on a linear layer trained with data augmentation.\\\\\n\\begin{table}[t]\n\\begin{minipage}[c]{.45\\linewidth}\n \\captionsetup{width=0.98\\linewidth}\n \\caption{$k$-NN and linear probing on ImageNet-1K. The $(\\uparrow)$ model was trained for 150 epochs, which takes the same amount of time as training MoCoV3 for 100 epochs}\n \\label{tab:imagenet_fast}\n \\centering\n \\begin{tabular*}{\\linewidth}{cccc}\n \\toprule\n Method & {$k$-NN} & Linear$_{F}$ & Linear\\\\ \\midrule\n DINO & 61.66 & 64.60 & -\\\\\\midrule\n MoCoV3 & 60.11 & 63.81 & 65.1\\\\ \\midrule\n DILEMMA & \\bf 61.73 & \\bf 65.30 & \\bf 66.6\\\\ \\midrule\n \\rowcolor{Gray}\n DILEMMA($\\uparrow$) & 63.42 & 67.24 & -\\\\\n \\bottomrule\n \\end{tabular*}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[c]{.5\\linewidth}\n \\captionsetup{width=0.98\\linewidth}\n \\caption{Low-shot learning on ImageNet-1K. 
The $(\\uparrow)$ model was trained for 150 epochs, which takes the same amount of time as training MoCoV3 for 100 epochs}\n \\label{tab:imagenet_semisupervised}\n \\centering\n \\resizebox{1.0\\textwidth}{!}{\n \\begin{tabular}{@{}lcccc@{}}\n \\toprule\n {} & \\multicolumn{2}{c}{IN-1\\%} & \\multicolumn{2}{c}{IN-10\\%}\\\\\n Method & {$k$-NN} & Linear$_{F}$ & {$k$-NN} & Linear$_{F}$\\\\ \\midrule\n DINO & \\bf 40.60 & 45.24 & \\bf 52.95 & 58.35\\\\ \\midrule\n MoCoV3 & 38.77 & 44.48 & 51.03 & 57.90\\\\ \\midrule\n DILEMMA & 40.27 & \\bf 45.69 & 52.75 & \\bf 59.22\\\\ \\midrule\n \\rowcolor{Gray}\n DILEMMA($\\uparrow$) & 42.45 & 48.67 & 54.83 & 61.47\\\\ \\bottomrule\n \\end{tabular}\n }\n\\end{minipage}\n\\end{table}\n\n\\noindent\\textbf{Low-shot learning.}\nTo simulate transfers to small datasets, we use the model pre-trained on the whole unlabeled ImageNet dataset and then train a linear layer on top of the frozen features of the 1\\% or 10\\% subsets \\cite{chen2020simple} of ImageNet and then evaluate the results on the whole ImageNet validation set. Results in Table~\\ref{tab:imagenet_semisupervised} show that {DILEMMA} is more label efficient than MoCoV3. Notice that in this implementation {DILEMMA} is based on MoCoV3, which, as was observed in DINO~\\cite{caron2021emerging}, has a consistently worse $k$-NN accuracy than DINO. Nonetheless, {DILEMMA} is able to almost compensate for the gap deficit.\\\\\n\n\\noindent\\textbf{ImageNet-*.}\nTo measure the out of distribution (OOD) generalization of the representation, we use a linear layer trained on the non augmented training set of ImageNet-1K and evaluate it on different OOD datasets. Results in Table~\\ref{tab:imagenet_variants} show that {DILEMMA} has a better OOD generalization than the baseline.\n\\begin{table*}[t]\n \\centering\n \\caption{Accuracy on out of domain datasets. We show the transfer learning accuracy of an ImageNet frozen linear classifier to other test sets}\n \\label{tab:imagenet_variants}\n \\resizebox{1.0\\textwidth}{!}{\n \\begin{tabular}{@{}lcccccc@{}}\n \\hline\n Method & IN-A \\cite{hendrycks2021natural}& IN-O (AUPR)~\\cite{hendrycks2021natural} & IN-R \\cite{hendrycks2021many} & IN-Sketch \\cite{wang2019learning} & IN-ReaL \\cite{beyer2020we} & ObjectNet \\cite{barbu2019objectnet}\\\\\n \\hline\nMoCoV3 & 2.57 & 16.49 & 26.77 & 14.28 & 71.30 & 20.33 \\\\\nDILEMMA & \\bf 3.35 & \\bf 17.32 & \\bf 28.62 & \\bf 15.91 & \\bf 72.31 & \\bf 20.51\\\\\n \\hline\n \\end{tabular}\n }\n\\end{table*}\n\n\\subsection{Downstream Tasks}\n\\noindent\\textbf{Semantic Segmentation on ADE20K.}\nSemantic segmentation is a task that strongly relates to the shape of objects. Thus, we expect to see a significant improvement from a boost in the shape discriminability. The semantic segmentation capability of self-supervised methods is usually evaluated by fine-tuning the model with an extra decoder. For that we use UPerNet~\\cite{xiao2018unified} on the ADE20K~\\cite{zhou2017scene} dataset and train the model for $64K$ iterations with a batch size of 12. We also follow the evaluation protocol of iBOT~\\cite{zhou2021ibot} and just train a linear layer (for $64K$ iterations and a batch size of 16) for semantic segmentation with a frozen backend to directly assess the per token representation. 
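To make the linear segmentation probe concrete, the sketch below classifies every frozen ViT-S patch token (a 14x14 grid of 384-dimensional embeddings for 224x224 inputs) with a 1x1 convolution and bilinearly upsamples the logits to the target resolution; the class name, its defaults and the choice of 150 ADE20K classes are illustrative assumptions, not taken from the evaluation code.

\\begin{verbatim}
import torch.nn as nn
import torch.nn.functional as F

class LinearSegHead(nn.Module):
    # Per-token linear classifier over the frozen patch-token grid of a ViT backbone.
    def __init__(self, embed_dim=384, num_classes=150, grid=14):
        super().__init__()
        self.grid = grid
        self.cls = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, patch_tokens, out_size):
        # patch_tokens: (B, grid*grid, embed_dim) from the frozen encoder, CLS token removed.
        b, n, c = patch_tokens.shape
        x = patch_tokens.transpose(1, 2).reshape(b, c, self.grid, self.grid)
        x = self.cls(x)  # (B, num_classes, grid, grid)
        return F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
\\end{verbatim}

Only the 1x1 convolution is trained while the backbone stays frozen, so this probe directly measures how linearly separable the per-token representation is.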
Results in Table~\\ref{tab:semantic_segmentation} show that {DILEMMA} is also better than the base model for dense classification tasks and yields a remarkable mIoU gap of $4.6$ percentage points between {DILEMMA} and MoCoV3 in the linear settings.\n\\begin{table*}[t]\n \\centering\n \\caption{Semantic Segmentation on ADE20K}\n \\label{tab:semantic_segmentation}\n \\centering\n \\setlength{\\tabcolsep}{4.0pt}\n \\resizebox{0.6\\textwidth}{!}{\n \\begin{tabular}{@{}lcccccc@{}}\n \\toprule \n \\multirow{2}{*}{Method} & \\multicolumn{3}{c}{Seg. w\/ Lin.} & \\multicolumn{3}{c}{Seg. w\/ UPerNet} \\\\\n \\cmidrule(lr){2-4}\\cmidrule(lr){5-7}\n & mIoU & mAcc & aAcc & mIoU & mAcc & aAcc \\\\\n \\toprule\n MoCoV3 & 11.23 & 14.56 & 65.31 & 33.80 & 44.71 & 77.65\\\\ \\midrule\n DILEMMA & \\bf 15.90 & \\bf 20.08 & \\bf 67.46 & \\bf 33.97 & \\bf 44.73 & \\bf 77.95\\\\ \\bottomrule\n \\end{tabular}\n }\n\\end{table*}\\\\\n\n\\noindent\\textbf{Transfer Learning.}\nIn order to evaluate the transfer capability of our representations we do image classification on a diverse set of datasets. We again train a linear layer on top of the frozen features to accelerate the process. The results are in Table~\\ref{tab:many_shot_evaluation}. As can be seen {DILEMMA} performs well in transfer learning across all datasets and significantly outperforms the base model in Yoga$_{82}$~\\cite{verma2020yoga} (a yoga position classification dataset). Correctly classifying Yoga$_{82}$ images requires a solid understanding of object shape and texture alone is not sufficient.\n\\begin{table*}[t]\n \\centering\n \\caption{Transfer learning for image classification}\n \\label{tab:many_shot_evaluation}\n \\resizebox{1.0\\textwidth}{!}{\n \\begin{tabular}{clcccccccccccc|c}\n \\toprule\n & & Aircraft & Caltech$_{101}$ & Cars & CIFAR$_{10}$ & CIFAR$_{100}$ & DTD & Flowers$_{102}$ & Food$_{101}$ & INat$_{19}$ & Pets & STL$_{10}$ & Yoga$_{82}$ & \\multirow{2}{*}{Avg.} \\\\\n & & \\cite{maji13fine-grained} & \\cite{FeiFei2004LearningGV} & \\cite{KrauseStarkDengFei-Fei_3DRR2013} & \\cite{Krizhevsky2009LearningML} & \\cite{Krizhevsky2009LearningML} & \\cite{cimpoi14describing} & \\cite{Nilsback2008AutomatedFC} & \\cite{bossard14} & \\cite{inaturalist19} & \\cite{parkhi12a} & \\cite{Coates2011AnAO} & \\cite{verma2020yoga} \\\\\n \\midrule\n\\multirow{2}{*}{\\rot{$k$-NN}} & MoCoV3 & 23.82 & 79.31 & 19.67 & \\bf 90.87 & 70.98 & 59.84 & 81.40 & 57.89 & 26.67 & 74.22 & 94.08 & 32.73 & 59.29 \\\\\n{} & DILEMMA & \\bf 24.18 & \\bf 80.72 & \\bf 19.85 & 90.60 & \\bf 72.60 & \\bf 62.55 & \\bf 84.45 & \\bf 58.97 & \\bf 28.68 & \\bf 78.82 & \\bf 94.75 & \\bf 38.81 & \\bf 61.14 \\\\\n\\midrule \\midrule\n\\multirow{2}{*}{\\rot{Lin.$_{F}$}} & MoCoV3 & 43.53 & 87.77 & 48.39 & \\bf 92.49 & 76.74 & \\bf 64.89 & 93.07 & 70.38 & 44.09 & 84.55 & 95.72 & 59.92 & 71.79 \\\\\n{} & DILEMMA & \\bf 45.09 & \\bf 89.37 & \\bf 48.87 & 92.30 & \\bf 77.50 & 64.41 & \\bf 93.66 & \\bf 71.41 & \\bf 45.12 & \\bf 86.02 & \\bf 96.09 & \\bf 63.49 & \\bf 72.77 \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table*}\\\\\n\n\\noindent\\textbf{Nearest Neighbor Retrieval.}\nFollowing the evaluation protocol of DINO~\\cite{caron2021emerging}, we report the Mean Average Precision (mAP) for the Medium (M) and Hard (H) subsets of revisited Oxford and Paris retrieval datasets~\\cite{radenovic2018revisiting} using a $k$-NN retriever. 
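As a rough sketch of this evaluation, the code below ranks database features by cosine similarity to each query and averages the per-query average precision. It treats images of the same landmark as relevant and ignores the junk-image handling and the separate medium/hard query lists of the official revisited-Oxford/Paris protocol, so it is a simplification for illustration rather than the exact benchmark code.

\\begin{verbatim}
import torch
import torch.nn.functional as F

def knn_retrieval_map(query_feats, db_feats, query_labels, db_labels):
    # Rank the database for every query by cosine similarity and average the AP.
    q = F.normalize(query_feats, dim=1)
    d = F.normalize(db_feats, dim=1)
    sims = q @ d.t()                          # (num_queries, num_db)
    aps = []
    for i in range(q.size(0)):
        order = sims[i].argsort(descending=True)
        rel = (db_labels[order] == query_labels[i]).float()
        if rel.sum() == 0:
            continue
        prec_at_k = rel.cumsum(0) / torch.arange(1, rel.numel() + 1)
        aps.append((prec_at_k * rel).sum() / rel.sum())
    return torch.stack(aps).mean()
\\end{verbatim}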
Results in Table~\\ref{tab:retrieval} show that our results are comparable to the baseline, but, as it is explained in iBOT~\\cite{zhou2021ibot}, this evaluation metric is hyper-parameter sensitive (image resolution, whether to use multi-scale evaluation or not) and we do not investigate it further.\n\\begin{table}[t]\n\\begin{minipage}[t]{.44\\linewidth}\n \\captionsetup{width=0.98\\linewidth}\n \\caption{Image retrieval. mAP on medium and hard subsets of the revisited Oxford and Paris retrieval datasets~\\cite{radenovic2018revisiting}}\n \\label{tab:retrieval}\n \\centering\n \\resizebox{1.0\\textwidth}{!}{\n \\begin{tabular}{@{}l cc cc@{}}\n \\toprule\n & \\multicolumn{2}{c}{$\\mathcal{R}$Ox} & \\multicolumn{2}{c@{}}{$\\mathcal{R}$Par} \\\\\n \\cmidrule{2-3}\n \\cmidrule{4-5}\n Method & M & H & M & H\\\\\n \\midrule\n MoCoV3 & 26.02 & 7.16 & \\bf 49.25 & \\bf 19.52 \\\\\n DILEMMA & \\bf 26.14 & \\bf 7.26 & 48.15 & 18.84 \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{.5\\linewidth}\n \\captionsetup{width=0.98\\linewidth}\n \\caption{Unsupervised object segmentation. For DAVIS we report mean region similarity $\\mathcal{J}_m$ and mean contour-based accuracy $\\mathcal{F}_m$. For VOC12 we report Jaccard similarity}\n \\label{tab:video}\n \\centering\n \\resizebox{1.0\\textwidth}{!}{\n \\begin{tabular}{@{}lcccc@{}}\n \\toprule\n {} & \\multicolumn{3}{c}{DAVIS} & VOC12 \\\\\n Method & $ (\\mathcal{J}$\\&$\\mathcal{F})_m$ & $\\mathcal{J}_m$ & $\\mathcal{F}_m$ & Jac.$_{sim.}$\\\\\n \\midrule\n MoCoV3 & 58.28 & 57.46 & 59.09 & 46.50 \\\\\n DILEMMA & \\bf 60.00 & \\bf 57.99 & \\bf 62.02 & \\bf 48.89 \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{minipage}\n\\end{table}\\\\\n\n\\noindent\\textbf{Unsupervised Object Segmentation.}\nFor single frame object segmentation we use the mask generated from the attention of the CLS token (thresholded to keep 0.9 of the mass) as in DINO~\\cite{caron2021emerging} and report the Jaccard similarity between the ground truth and the mask evaluated on the validation set of PASCAL-VOC12~\\cite{Everingham2009ThePV}. For the videos we use the DAVIS-2017 video instance segmentation benchmark~\\cite{Pont-Tuset_arXiv_2017} and by following the protocol introduced in Space-time by Jabri et al.~\\cite{jabri2020space} we segment scenes via the nearest neighbor propagation of the mask. Results in Table~\\ref{tab:video} show that {DILEMMA} performs well also in these dense tasks. \n\n\\subsection{Shape vs Texture Bias}\n\nWe also evaluate the shape bias of {DILEMMA} according to the metrics defined by Geirhos et al.~\\cite{geirhos2018imagenet} and Tartaglini et al.~\\cite{tartaglini2022developmentally}. In Table~\\ref{tab:shape_bias} (Texture Bias column) we fine-tune our ViT model pre-trained with {DILEMMA} on ImageNet and then measure the shape vs texture bias on the Cue-Conflicting dataset \\cite{geirhos2018imagenet}. The reported Texture Bias indicates how often the model has preferred class discrimination based on texture rather than shape. The Linear Accuracy is obtained through fine-tuning on half of the Cue-Conflicting dataset and then tested on the other half.\nIn Fig.~\\ref{fig:shape_bias_analysis} we use instead a dataset \\cite{tartaglini2022developmentally}, where the parameter \\texttt{Alpha} indicates the degree of removal of the background ($0$ no removal, $1$ full removal). These metrics could be used as predictors of the quality of pre-trained models. 
This would allow the ranking of models without the need for extensive experimental validation, which now requires the transfer to several downstream tasks. However, the results in Table~\\ref{tab:shape_bias} and Fig.~\\ref{fig:shape_bias_analysis} seem to contradict this conclusion. In fact, {DILEMMA} appears to be more texture biased than the baseline, but its performance on a wide range of new tasks and in particular on shape-based tasks seems to be consistently better (see all other experiments). In fact, in the Linear Accuracy column we see that as soon as {DILEMMA} is trained on a shape-based task, it delivers a better performance than the baseline.\nWe argue that perhaps the ability of a model to classify objects based on shape does not necessarily imply that they must have a weaker texture discriminability. In conclusion, to have a better predictor for generalization based on shape, it would be more desirable to have a shape discriminability measure that is somehow unrelated to texture.\n\n\\begin{table*}[t]\n\\begin{minipage}{0.45\\linewidth}\n \\centering\n \\caption{Shape Bias. The Texture Bias is evaluated on the Cue-Conflicting dataset \\cite{geirhos2018imagenet}. The Linear Accuracy is obtained through fine-tuning on half of the Cue-Conflicting dataset and then tested on the other half}\n \\label{tab:shape_bias}\n \\centering\n \\resizebox{1.0\\textwidth}{!}{\n \\begin{tabular}{@{}lcc@{}}\n \\toprule \n Method & Texture Bias (\\%) & Linear Acc. \\\\\n\\toprule\n MoCoV3 & \\bf 63.58 & 59.89\\\\ \\midrule\n DILEMMA & 69.10 & \\bf 62.68\\\\ \\bottomrule\n \\end{tabular}\n }\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.5\\linewidth}\n\\begin{subfigure}[b]{0.95\\linewidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/alpha.png}\n \\end{subfigure}\n\\captionof{figure}{Shape bias Analysis \\cite{tartaglini2022developmentally}. \\texttt{Alpha} indicates the transparency of the background ($1$ means full transparency).}\n\\label{fig:shape_bias_analysis}\n\\end{minipage}\n\\end{table*}\n\n\\subsection{ViT Properties}\n\nIn this section, we carry out experiments that are unique to the ViT architecture.\n\n\\noindent\\textbf{Robustness against Occlusions and Shuffle.}\nSince our model was trained with sparse inputs, we should expect a gradual loss of performance with increased sparsity. Fig.~\\ref{fig:drop} shows that the performance drop for MoCoV3 is more severe than in {DILEMMA}. Moreover, {DILEMMA} is able to preserve its accuracy with larger sparsity ratios. In Table~\\ref{tab:shuffle} we see that {DILEMMA} can be fed with completely wrong position encodings and still obtain a reasonable classification accuracy compared to MoCoV3. We explain this result as the consequence of training the model with mismatched positional embeddings.\n\n\\begin{figure}[t]\n\\centering\n\\begin{minipage}{0.5\\textwidth}\n\\centering\n\\begin{subfigure}[b]{1.0\\textwidth}\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/random_drop.png}\n\\end{subfigure}\n\\caption{Token Dropping Analysis. Classification accuracy of a pre-trained head as a function of input sparsity ratio.}\n\\label{fig:drop}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.45\\linewidth}\n\\centering\n\\captionsetup{type=table}\n\\caption{Input shuffling effect. 
Classification accuracy of a pre-trained head with correct or random position encodings}\n \\label{tab:shuffle}\n \\centering\n \\begin{tabularx}{\\linewidth}{@{}l@{\\hspace{1.5em}}c@{\\hspace{1.5em}}c@{}}\n \\toprule \n Method & Correct & Random \\\\\n\\toprule\n MoCoV3 & 63.8 & 26.1\\\\ \\midrule\n DILEMMA & \\bf 65.3 & \\bf 45.1\\\\ \\bottomrule\n \\end{tabularx}\n\\end{minipage}\n\\end{figure}\n\\noindent\\textbf{Robustness against Background Change.}\nFollowing the background challenge evaluation metric~\\cite{xiao2020noise}, we compute the classification accuracy of the model on a subset of ImageNet (IN-9) by changing the background and foreground. As shown in Table~\\ref{tab:background}, in O\/N.F. (Only\/No Foreground), M.S\/R\/N. (Mixed Same\/Random\/Next), where the foreground is visible or accurately masked out, we outperform the base model. When the foreground is not visible (O.BB. (Only Background with foreground box Blacked out) and O.BT. (Only Background with foreground replaced with Tiled background)) the model performs correctly and does not just rely on the background for image classification.\n\\begingroup\n\\setlength{\\tabcolsep}{4.pt}\n\\begin{table}[t]\n\\caption{Robustness of pre-trained models against background changes}\n\\label{tab:background}\n\\centering\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular}{@{}lcccccccc@{}}\n\\multirow{2}{*}{Method} & \\multicolumn{7}{c}{Background Change} & Clean \\\\\n\\cmidrule(lr){2-8}\\cmidrule(lr){9-9}\n & \\it O.F.$(\\uparrow)$ & \\it M.S.$(\\uparrow)$ & \\it M.R.$(\\uparrow)$ & \\it M.N.$(\\uparrow)$ & \\it N.F.$(\\uparrow)$ & \\it O.BB.$(\\downarrow)$ & \\it O.BT.$(\\downarrow)$ & IN-9$(\\uparrow)$\\\\\n\\toprule\nMoCoV3 & 77.26 & 78.05 & 64.96 & 64.07 & 38.02 & 9.36 & 10.72 & 91.53 \\\\\nDILEMMA & \\bf 77.75 & \\bf 79.43 & \\bf 67.63 & \\bf 64.84 & \\bf 38.79 & \\bf 8.64 & \\bf 9.33 & \\bf 91.75 \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{table}\n\\endgroup\n\n\\subsection{Ablations}\nAblation studies are conducted either on ImageNet100 or ImageNet-1K. For the smaller dataset we train the dense models for 300 epochs and the sparse models for 450 epochs (with the same hardware and time settings). For ImageNet-1K experiments we train all models for 50 epochs unless stated otherwise.\\\\\n\n\\noindent\\textbf{Image Size.}\nWe compare a model trained on $112\\times112$ images and a sparse model trained on $25\\%$ of the tokens of $224\\times224$ images. Results in Table~\\ref{tab:image_size} show that simply feeding smaller images is worse than feeding sparse large inputs.\\\\\n\n\\noindent\\textbf{Token Dropping Policy.}\nWe tried dropping the tokens that were less important based on the attention of the teacher network~\\cite{li2021mst} compared to randomly dropping the tokens. Results in Table~\\ref{tab:drop_policy} show that simple random dropping works well and there is no need to introduce extra complexity to the model.\n\n\\begin{table*}[t]\n\\begin{minipage}[t]{0.45\\linewidth}\n\\caption{Sparsity vs smaller inputs. Results are evaluated on IN100}\n\\label{tab:image_size}\n\\centering\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}cr@{}}\n\\toprule\nInput & $k$-NN & Linear\\\\\n\\hline\nLow Res. & 72.86 & 75.18\\\\\nSparse & \\bf 73.98 & \\bf 77.78\\\\\n\\bottomrule\n\\end{tabular*}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.5\\linewidth}\n\\caption{Token dropping policy. 
Results are evaluated on IN100}\n\\label{tab:drop_policy}\n\\centering\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}cr@{}}\n\\toprule\nSampling Method & $k$-NN & Linear\\\\\n\\hline\nImportance Based & 71.88 & 76.76\\\\\nRandom & \\bf 73.98 & \\bf 77.78\\\\\n\\bottomrule\n\\end{tabular*}\n\\end{minipage}\n\\end{table*}\n\n\\noindent\\textbf{Random Dropping Ratio.}\nTo verify that a random dropping ratio is better than a constant one, we conducted two experiments: one on IN100 and one on IN-1K. The results in Table~\\ref{tab:random_drop_ratio} show that a random dropping ratio performs better than a constant one. On the more difficult IN-1K dataset just applying the sparsity is worse than using the dense model. Only with a random dropping ratio the sparse model can outperform the dense model.\\\\\n\n\\noindent\\textbf{Mismatch Detection.}\nTo verify that mismatch detection helps, we trained a dense model with the mismatch detection task. Results are in Table~\\ref{tab:electra_helps}. Surprisingly, even though this is a trivial task (note that since only 20\\% of the positions are mismatched, a model can easily achieve an 80\\% accuracy in detecting the mismatches), the dense model can still improve the performance of the model. The performance improvement for a task like in $\\text{Yoga}_{82}$, which requires a better understanding of shape, is quite significant.\n\n\\begin{table*}[t]\n\\begin{minipage}[t]{0.45\\linewidth}\n\\caption{Random Dropping Ratio. Results on the left are evaluated on IN100 and on the right on IN-1K}\n\\label{tab:random_drop_ratio}\n\\centering\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}cc|cc@{}}\n\\toprule\n{} & \\multicolumn{2}{c}{IN100} & \\multicolumn{2}{c}{IN-1K}\\\\\nSparsity & $k$-NN & Linear & $k$-NN & Linear\\\\\n\\hline\n0\\% (Dense) & \\bf 76.16 & 77.50 & 53.27 & 58.20\\\\\n75\\% & 73.98 & 77.78 & 52.99 & 57.90\\\\\nRandom & 74.46 & \\bf 78.82 & \\bf 55.71 & \\bf 59.55\\\\\n\\bottomrule\n\\end{tabular*}\n}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.5\\linewidth}\n\\caption{Mismatch Detection (MD)}\n\\label{tab:electra_helps}\n\\centering\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}cc|cc|c@{}}\n\\toprule\n{} & \\multicolumn{2}{c}{IN-1K} & \\multicolumn{2}{c}{Yoga82} & {}\\\\\n{} & $k$-NN & Linear & $k$-NN & Linear & MD Acc.\\\\\n\\hline\nDense & 53.27 & 58.20 & 31.60 & 51.27 & -\\\\\n+MD & 54.18 & 58.78 & 35.78 & 54.53 & 100.00\\\\\n\\hline\nRand. & \\bf 55.71 & 59.55 & 32.73 & 50.9 & -\\\\\n+MD & 55.63 & \\bf 59.84 & \\bf 35.94 & \\bf 57.26 & 96.21\\\\\n\\bottomrule\n\\end{tabular*}\n}\n\\end{minipage}\n\\end{table*}\n\n\\noindent\\textbf{Skip Dense.}\nIn order not to encourage the model to use the tile edge shortcut, we disabled the calculation of the {DILEMMA} loss when the sparsity ratio is zero (the input is dense). Table~\\ref{tab:skip_dense} shows that this choice helps (though only marginally). Thus, we use it in all of our main results.\\\\\n\n\\noindent\\textbf{{DILEMMA} Variants.}\nWe also tried some variants of DILEMMA. Instead of just detecting the misplaced tokens, we predict the right position (as a classification task of $196$ classes). The other variant, which we call \\emph{Partial Jigsaw}, is to feed some tokens without position encoding and ask the network to predict their position given the other (sparse) correctly position-encoded tokens. 
Lastly, instead of corrupting the position, one can corrupt the content of a patch. Instead of using complex methods like inpainting we simply horizontally flip some of the patches and use the binary cross-entropy as our loss. Table~\\ref{tab:task_variants} shows that even though all of these methods do help in terms of shape discrimination, {DILEMMA} is the one with the best performance both on IN-1K and $\\text{Yoga}_{82}$.\n\n\\begin{table*}[t]\n\\begin{minipage}[t]{0.35\\linewidth}\n\\caption{Skip Dense. Results are evaluated on IN100}\n\\label{tab:skip_dense}\n\\centering\n\\begin{tabular}{lcc}\n\\toprule\n{} & $k$-NN & Linear\\\\\n\\midrule\nNo Difference & 76.04 & 79.94\\\\\nSkip on Dense & \\bf 76.34 & \\bf 80.56\\\\\n\\bottomrule\n\\end{tabular}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.6\\linewidth}\n\\caption{Variants of the loss. Although the variants improve the performance wrt the base dense model, {DILEMMA} is the most effective one}\n\\label{tab:task_variants}\n\\centering\n\\begin{tabular*}{\\textwidth}{@{}l@{\\extracolsep{\\fill}}cc|cc@{}}\n\\toprule\n{} & \\multicolumn{2}{c}{IN-1K} & \\multicolumn{2}{c}{Yoga82}\\\\\nTask & $k$-NN & Linear & $k$-NN & Linear\\\\\n\\hline\nPos. Correction & 54.77 & 58.95 & 35.74 & 56.15\\\\\nPartial Jigsaw & \\bf 55.72 & 59.19 & 34.77 & 56.79\\\\\nFlip Detection & 55.69 & 59.59 & 35.09 & 55.00\\\\\nDILEMMA & 55.63 & \\bf 59.84 & \\bf 35.94 & \\bf 57.26\\\\\n\\bottomrule\n\\end{tabular*}\n\\end{minipage}\n\\end{table*}\n\n\\noindent\\textbf{Mismatch Probability.} The probability of mismatch $\\theta$ is one of the most important hyper-parameters of {DILEMMA}. Early in our experiments, we found out that 20\\% is much better than 15\\%. In Table~\\ref{tab:mismatch_prob} we show that 30\\% yields worse performance than the default 20\\%.\\\\\n\n\\noindent\\textbf{Timing.} \nWe measure the time for an epoch of pre-training on ImageNet100 with three GeForce RTX 3090 GPUs and the maximum batch size possible. Note that we multiply the number of batches proportional to the sparsity ratio and the reported number is for dense batches. Results in Table~\\ref{tab:timing} show that {DILEMMA} is $1.5\\times$ faster than MoCoV3 due to a larger average batch size.\n\n\\begin{table*}[t]\n\\begin{minipage}[t]{0.5\\linewidth}\n\\caption{Mismatch Probability}\n\\label{tab:mismatch_prob}\n\\centering\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}cc|cc@{}}\n\\toprule\n{} & \\multicolumn{2}{c}{IN-1K} & \\multicolumn{2}{c}{Yoga82}\\\\\nProb. & $k$-NN & Linear & $k$-NN & Linear\\\\\n\\hline\n0.3 & 55.34 & 59.79 & 34.95 & 56.63\\\\\n0.2 & \\bf 55.63 & \\bf 59.84 & \\bf 35.94 & \\bf 57.26\\\\\n\\bottomrule\n\\end{tabular*}\n\\end{minipage}\n\\hfill\n\\begin{minipage}[t]{0.45\\linewidth}\n\\caption{Training Times}\n\\label{tab:timing}\n\\centering\n\\resizebox{1.0\\textwidth}{!}{\n\\begin{tabular*}{\\linewidth}{@{}l@{\\extracolsep{\\fill}}|c|c@{}}\n\\toprule\nMethod & Time(Sec.) & Batch Size\\\\\n\\hline\nDINO & 218 & 480\\\\\nMoCoV3 & 335 & 345\\\\\nDILEMMA & 223 & 345\\\\\n\\bottomrule\n\\end{tabular*}\n}\n\\end{minipage}\n\\end{table*}\n\n\\section{Conclusions}\nWe introduced a novel SSL method based on a location classification pseudo-task and a contrastive loss. We showed that awareness of the relative location of tiles of the input image is important for generalization and in particular when fine-tuning on shape-based downstream tasks. 
We observe also that current indicators of shape bias for pre-trained models may not always be predictive of the performance of such models on novel shape-based tasks. Our method is based on the ViT architecture. We introduce sparsity in the input (\\emph{i.e.}, dropping image tiles), to both speed up the training and also to avoid trivial degenerate learning.\n\n\\begin{table*}[t]\n \\centering\n \\caption{ImageNet classification results on 224$\\times$224 Images w\/o extra data. An epoch is calculated based on the number of full images processed during pre-training~\\cite{zhou2021ibot}, and non integer multipliers indicate usage of multi cropping~\\cite{caron2020unsupervised}. AUG means extra data augmentations (different from~\\cite{Grill2020BootstrapYO}), HED means more heads in the vision transformer compared to the base model, CLS indicates usage of more than just the last CLS token as the representation of the image, DAL indicates usage of DALLE's encoder~\\cite{ramesh2021zero}, and RRC indicates only random resized cropping and random horizontal flipping as data augmentations. $\\dagger$ indicates a linear layer trained without data augmentation}\n \\label{tab:imagenet_others}\n \\resizebox{0.91\\textwidth}{!}{\n \\begin{tabular}{lccccccc|c}\n \\toprule\n Method & Architecture & EffectiveEpoch & BatchSize & $k$-NN & Linear & Fine-tune & Source & Notes \\\\\n \\midrule\nDINO & ViT-S & 100x2 & 128 & 57.9 & - & - & \\cite{caron2021emerging} & - \\\\\nDINO & ViT-S & 100x2 & 256 & 59.1 & - & - & \\cite{caron2021emerging} & - \\\\\nMoCoV3 & ViT-S & 100x2 & 345 & 60.1 & 65.1 & - & - & HED \\\\\nDILEMMA & ViT-S & 100x2 & 345 & 61.7 & 66.6 & - & - & HED \\\\\nDILEMMA & ViT-S & 150x2 & 345 & 63.4 & 67.2$^\\dagger$ & - & - & HED \\\\\nDINO & ViT-S & 100x2 & 480 & 61.6 & 64.6$^\\dagger$ & - & - & - \\\\\nDINO & ViT-S & 100x2 & 512 & 59.6 & - & - & \\cite{caron2021emerging} & - \\\\\nDINO & ViT-S & 100x2 & 1024 & 59.9 & 67.8 & - & \\cite{caron2021emerging} & CLS \\\\\n\\midrule\n\\midrule\nSimCLR & ViT-S & 300x2 & 1024 & - & 69.0 & - & \\cite{chen2021empirical} & HED \\\\\nBYOL & ViT-S & 300x2 & 1024 & 66.6 & 71.4 & - & \\cite{caron2021emerging} & - \\\\\nSwAV & ViT-S & 300x2 & 1024 & 60.5 & 68.5 & - & \\cite{caron2021emerging} & - \\\\\nSwAV & ViT-S & 300x3.1 & 1024 & 64.7 & 71.8 & - & \\cite{caron2021emerging} & - \\\\\nMoBY & ViT-S & 300x2 & 512 & - & 72.8 & - & \\cite{xie2021self} & - \\\\\nMoCoV2 & ViT-S & 300x2 & 1024 & 62.0 & 71.6 & - & \\cite{caron2021emerging} & - \\\\\nMoCoV2 & ViT-S & 300x3.1 & 1024 & 65.4 & 73.4 & - & \\cite{caron2021emerging} & - \\\\\nMoCoV3 & ViT-S & 300x2 & 1024 & - & 72.5 & - & \\cite{chen2021empirical} & HED \\\\\nMoCoV3 & ViT-S & 300x2 & 4096 & - & 73.2 & 81.4 & \\cite{chen2021empirical} & HED \\\\\nMoCoV3 & ViT-S & 600x2 & 1024 & - & 73.4 & - & \\cite{chen2021empirical} & HED \\\\\nTWIST & ViT-S & 300x3.1 & 1024 & - & 76.3 & - & \\cite{wang2021self} & - \\\\\nDINO & ViT-S & 300x2 & 1024 & 67.9 & 72.5 & - & \\cite{caron2021emerging} & - \\\\\nDINO & ViT-S & 100x3.5 & 512 & 69.3 & 74.0 & - & \\cite{caron2021emerging} & CLS \\\\\nDINO & ViT-S & 300x3.1 & 1024 & 72.7 & 75.9 & - & \\cite{caron2021emerging} & - \\\\\nDINO & ViT-S & 300x3.5 & 1024 & 73.3 & 76.0 & - & \\cite{caron2021emerging} & CLS \\\\\nDINO & ViT-S & 800x3.8 & 1024 & 74.5 & 76.1 & 82.0 & \\cite{caron2021emerging} & - \\\\\nDINO & ViT-S & 800x3.8 & 1024 & 74.5 & 77.0 & 82.0 & \\cite{caron2021emerging} & CLS \\\\\niBOT & ViT-S & 800x2 & 1024 & 72.4 & 76.2 & - & \\cite{zhou2021ibot} & CLS \\\\\niBOT 
& ViT-S & 800x3.8 & 1024 & 75.2 & 77.9 & 82.3 & \\cite{zhou2021ibot} & CLS \\\\\nSplitMask & ViT-S & 300x2 & 1024 & - & - & 81.5 & \\cite{el2021large} & RRC \\\\\nCIM & ViT-S & 300x1 & 2048 & - & - & 81.6 & \\cite{Fang2022CorruptedIM} & DAL, RRC \\\\\nBEiT & ViT-S & 300x1 & 1024 & - & - & 81.7 & \\cite{zhou2021ibot} & DAL \\\\\nCAE & ViT-S & 300x1 & 2048 & - & 50.8 & 81.8 & \\cite{chen2022context} & RRC \\\\\n\\midrule\nSupervised & ViT-S & 300x1 & 1024 & - & - & 79.8 & \\cite{touvron2021training} & - \\\\\n\\midrule\n\\midrule\nDINO & ViT-B\/8 & 300x3.8 & 1056 & 77.4 & 80.1 & - & \\cite{caron2021emerging} & - \\\\\nDINO & XCiT-M\/8 & 300x3.5 & 1024 & 77.9 & 80.3 & - & \\cite{ElNouby2021XCiTCI} & - \\\\\nSimCLRv2 & ResNet152x3-SK & 800x2 & 4096 & - & 79.8 & 80.5 & \\cite{chen2020big} & - \\\\\nReLICv2 & ResNet200x2 & 1000x4.4 & 4096 & - & 80.6 & - & \\cite{tomasev2022pushing} & AUG \\\\\nMoCoV3 & ViT-L\/16 & 300x2 & 4096 & - & 77.6 & 84.1 & \\cite{chen2021empirical} & - \\\\\nMoCoV3 & ViT-BN\/7 & 300x2 & 4096 & - & 81.0 & - & \\cite{chen2021empirical} & - \\\\\nEsViT & Swin-B\/w14 & 300x3.5 & 1536 & 79.3 & 81.3 & - & \\cite{li2021efficient} & CLS, AUG \\\\\niBOT & ViT-L\/16 & 250x3.8 & 1024 & 77.7 & 81.3 & 85.0 & \\cite{zhou2021ibot} & CLS \\\\\nMAE & ViT-H & 1600x1 & 4096 & - & 77.2 & 86.9 & \\cite{he2021masked} & - \\\\\nPeCo & ViT-H & 800x1 & 2048 & - & - & 87.5 & \\cite{dong2021peco} & - \\\\\n \\bottomrule\n \\end{tabular}\n }\n\\end{table*}\n\n\\begin{ack}\nThis work was supported by grant 200020\\_188690 of the Swiss National Science Foundation.\n\\end{ack}\n\n\\clearpage\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\bf Introduction}\n\nAll graphs considered here are finite, undirected and simple. For\nstandard graph theory terminology not given here we refer to \\cite{West}. Let $%\nG=(V,E) $ be a graph with the \\emph{vertex set} $V$ of \\emph{order}\n$n(G)$ and the \\emph{edge set} $E$ of \\emph{size} $m(G)$. The\n\\emph{open neighborhood} and the \\emph{closed neighborhood} of a\nvertex $v\\in V$ are $N_{G}(v)=\\{u\\in V\\ |\\ uv\\in E\\}$ and\n$N_{G}[v]=N_{G}(v)\\cup \\{v\\}$, respectively. The \\emph{degree} of a\nvertex $v$ is also $deg_G(v)=|N_{G}(v)|$.\nIf $\\deg(v)=k$ for every vertex $v\\in V$, then $G$ is called $k$-\\emph{regular}.\nFor a subset $S$ of $V$, the set $N(S)=\\bigcup_{s\\in S} N(s)$ is called the open neighborhood of $S$.\n\nA subset $L$ of vertices of a graph $G$ is an \\emph{open packing} if the\nopen neighborhoods of vertices in $L$ are pairwise disjoint. The \\emph{open packing number} $\\rho^o(G)$ of $G$ is the maximum cardinality of an open packing in $G$.\n\nLet $1\\leq a_1=n}\n\\sum\\limits_{i=1}^{\\ell} |CN_G(V_i)|\\geq n.\n\\end{equation}\n\nIn this paper, for a proper coloring $f=(V_1,V_2,\\ldots,V_{\\ell})$ of a circulant graph $G$, we assume $v_i=|V_i|$, $v_i'=|CN_G(V_i)|$, and $v_1\\geq v_2\\geq \\cdots \\geq v_{\\ell}$ (so $v_1$ is at most $\\alpha (G)$, the independence number of $G$).\n\nFor integers $a_1,a_2,\\cdots,a_{\\ell}\\geq 0$, by $(v_1,v_2, \\cdots ,v_{\\ell}) \\leq (a_1,a_2, \\cdots ,a_{\\ell})$ we mean that $v_i \\leq a_i$ for each $i$. 
When vertices of $G$ are labeled 1, 2, $\\cdots$, $n$, then we may use $V_o$ (and $V_e$)\nto denote vertices with odd (even) labels.\n\nThe main goal of this paper is to prove the following theorem.\n\n\\begin{thm} \\label{TDCN Cn(a,b)}\n{\\rm\nGiven the circulant graph $C_n(a,b)$, where $n\\geq 6$, $gcd(a,n)=1$ and $ a^{-1}b\\equiv 3 \\pmod{n}$, we have\n\\begin{equation*}\n\\chi_{d}^t(C_n(a,b))=\\left\\{\n\\begin{array}{ll}\n2\\lceil \\frac{n}{8} \\rceil & \\mbox { if } 8\\leq n \\leq 10,\\\\\n2\\lceil \\frac{n}{8}\\rceil +1 & \\mbox{ if }n\\equiv 1\\pmod{8} \\mbox{ or } n=11,\\\\\n2\\lceil \\frac{n}{8}\\rceil +2 & \\mbox{ otherwise.}\n\\end{array}\n\\right.\n\\end{equation*}\n}\\end{thm}\n\nThe proof of the following theorem can be found in \\cite{Heu}.\n\\begin{thm} \\label{HeuTheorem}\n{\\rm\nLet $C_n(a,b)$ be a circulant graph and $gcd(a,n)=1$. Then the graph $C_n(a,b)$ is isomorphic to the graph $C_n(1, c)$ where $c \\equiv a^{-1}b \\pmod{n}$.\n}\n\\end{thm}\n\nHence by Theorem \\ref{HeuTheorem}, in order to prove Theorem \\ref{TDCN Cn(a,b)}, it is sufficient to prove\nthe following result.\n\n\\begin{thm} \\label{chi_{d}^t(C_n(1,3))}\n{\\rm\nFor any $n\\geq 6$,\n\\begin{equation*}\n\\chi_{d}^t(C_n(1,3))=\\left\\{\n\\begin{array}{ll}\n2\\lceil \\frac{n}{8} \\rceil & \\mbox{ if } n=6 \\mbox{ or } 8\\leq n \\leq 10,\\\\\n2\\lceil \\frac{n}{8}\\rceil +1 & \\mbox{ if }n\\equiv 1\\pmod{8} \\mbox{ or } n=11,\\\\\n2\\lceil \\frac{n}{8}\\rceil +2 & \\mbox{ otherwise.}\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{thm}\n\n\\section{Preliminary Results}\n\nIn this section, we will calculate the independence and the packing numbers of the circulant graphs $C_n(1,3)$ and also we will present some crucial lemmas. First an observation.\nRecall that for a proper coloring $f=(V_1,V_2,\\ldots,V_{\\ell})$ of a circulant graph $G$, we assume $v_i=|V_i|$, $v_i'=|CN_G(V_i)|$, and $v_1\\geq v_2\\geq \\cdots \\geq v_{\\ell}$. We also assume the vertex set of the circulant graph $C_n(1,3)$ is $\\{1,2,3,\\ldots,n\\}$.\n\n\\begin{obs} \\label{Obs.1}\n{\\rm\nFor any proper coloring of the circulant graph $C_n(1,3)$ of order $n\\geq 9$,\n\\begin{itemize}\n\\item $v_i+v_i'\\leq 5$ if $v_i \\leq 4$, and \n\\item $v_i'=0$ if $v_i \\geq 5$.\n\\end{itemize}\n}\n\\end{obs}\n\n\\begin{prop} \\label{alpha_Cn(1,3)}\n{\\rm\nFor any $n\\geq 4$,\n\\begin{equation*}\n\\alpha(C_n(1,3))= \\left\\{\n\\begin{array}{ll}\n\\frac{n}{2} & \\mbox{ if }n \\mbox{ is even,} \\\\\n\\frac{n-3}{2} & \\mbox{ if }n \\mbox{ is odd.}\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{prop}\n\n\\begin{proof}\nLet $L$ be an independent set in $C_n(1,3)$. Then the distance between the vertices in L must be at least 2. Hence, if $n$ is even, then the even vertices form an independent set of the largest size. If $n$ is odd, then there is an edge between vertex $n-1$ and $2$. Hence, the even vertices minus vertex $n-1$ form an independent set of the largest size. This completes the proof.\n\\end{proof}\n\n\n\\begin{prop} \\label{open-packing Cn(1,3)}\n{\\rm\nFor any $n\\geq 3$,\n\\begin{equation*}\n\\rho^o(C_n(1,3))= \\left\\{\n\\begin{array}{ll}\n\\lfloor\\frac{n}{3}\\rfloor & \\mbox{ if } 3\\leq n\\leq 6,\\\\\n\\lfloor\\frac{n}{4}\\rfloor-1 & \\mbox{ if } n\\equiv 4,6\\pmod{8},\\\\\n\\lfloor\\frac{n}{4}\\rfloor & \\mbox{ otherwise.}\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{prop}\n\n\\begin{proof}\nIf $3\\leq n \\leq 6$, then it is easy to see that\n$\\rho^o(C_n(1,3))=\\lfloor \\frac{n}{3}\\rfloor$. Now let\n$n\\geq 7$. 
By the 4-regularity of\n$C_n(1,3)$, we have $\\rho^o(C_n(1,3))\\leq \\lfloor \\frac{n}{4}\\rfloor$.\n\nLet $n\\equiv 4,6 \\pmod{8}$, and let $L$ be an open packing in $C_n(1,3)$ with cardinality\n$\\lfloor\\frac{n}{4}\\rfloor$. Let $V(C_n(1,3))=V_o\\cup V_e$ be the\npartition of the vertex set of the graph to the set of odd numbers\n$V_o$ and the set of even numbers $V_e$. For simplicity set $n_e=|L\\cap V_e|$ and $n_o=|L\\cap V_o|$.\nSince $n$ is even, it follows that $N(i)\\subseteq V_o$ if $i\\in L\\cap V_e$\nand $N(j)\\subseteq V_e$ if $j\\in L\\cap V_o$. We may assume that $n_e>n_o$. Since $n_o+n_e=\\lfloor\\frac{n}{4}\\rfloor=2k+1$, $n_e\\geq k+1$. So the number of vertices dominated by $n_e$ even vertices is at least $4k+4$ which is a contradiction, because the number of odd vertices is at most 4k+3. \\\\\nTherefore, for any $n\\geq 7$,\n\\begin{equation*}\n\\rho^o(C_n(1,3))\\leq \\left\\{\n\\begin{array}{ll}\n\\lfloor\\frac{n}{4}\\rfloor-1 & \\mbox{ if } n\\equiv 4,6\\pmod{8} , \\\\\n\\lfloor\\frac{n}{4}\\rfloor & \\mbox{ otherwise}.\n\\end{array}\n\\right.\n\\end{equation*}\nNow since the sets $\\{1+8t, 2+8t~|~0\\leq t\\leq\n\\lfloor\\frac{n}{8}\\rfloor-1\\}$, $\\{8t, 8t-5~|~1\\leq t\\leq\n\\lfloor\\frac{n}{8}\\rfloor\\}\\cup\\{n\\}$ and $\\{1+8t,\n2+8t~|~0\\leq t\\leq \\lfloor\\frac{n}{8}\\rfloor-1\\}\\cup\\{n-6\\}$ are all\nopen packing for $C_n(1,3)$ when $n\\not\\equiv 5,7\\pmod{8}$, $n\\equiv\n5\\pmod{8}$ and $n\\equiv 7\\pmod{8}$, respectively, the equality is obtained.\n\\end{proof}\n\\begin{obs}\\label{obs2}\n{\\rm\nLet $u,v,w\\in V(C_n(1,3))$.\n\\begin{itemize}\n\\item If the distance between $u$ and $v$ on the cycle $(1,2,\\ldots,n)$ is $2, 4, 6$, then\n$|CN(\\{u,v\\})|=3,2,1$, respectively, otherwise $|CN(\\{u,v\\})|=0$.\n\n\\item If the distances between $u$ and $v$, $v$ and $w$, $u$ and $w$ are $2,2,4,$ respectively, then\n$|CN(\\{u,v,w\\})|=2$ and if the distances are $2,4,6$, then $|CN(\\{u,v,w\\})|=1$, otherwise $|CN(\\{u,v,w\\})|=0$.\n\\end{itemize}\n}\n\\end{obs}\n\\begin{obs}\\label{obs3}\n{\\rm\nLet $L$ be an open packing in $C_n(1,3)$ of cardinality $\\rho^o(C_n(1,3))$.\n\\begin{itemize}\n\\item If $n\\not\\equiv 5, 7\\pmod 8$, then the subgraph of $C_n(1,3)$ induced by the vertices in $L$\nconsists of $\\lfloor\\frac{n}{8}\\rfloor $ edges.\n\n\\item If $n\\equiv 5, 7\\pmod 8$, then the subgraph of $C_n(1,3)$ induced by the vertices in $L$\nconsists of $\\lfloor\\frac{n}{8}\\rfloor$ edges and an isolated vertex.\n\\end{itemize}\n}\n\\end{obs}\n\nThe next lemma states that for even $n$ how large the cardinality of an independent set in $C_n(1,3)$ can be\nwhen it contains numbers with different parity.\n\n\\begin{lem} \\label{Max.ind.dif.parity}\n{\\rm\nFor even $n\\geq 8$, let $I$ be an independent set in $C_n(1,3)$ such\nthat the numbers in $I$ have different parity. Then $|I|\\leq \\frac{n}{2}-3$.\n}\n\\end{lem}\n\n\\begin{proof}\nLet $I$ be an arbitrary independent set in $C_n(1,3)$ with the\npartition $I= A_1\\cup \\cdots \\cup A_t$, where $t\\geq 2$, such the set\n$A_i$ consists of odd numbers if and only if $i$ is odd. Let also\n\\[\nA_i=\\{x_1^{(i)},\\ldots, x_{a_i}^{(i)}\\}\n\\]\nfor $1\\leq i\\leq t$, where $|A_i|=a_i$. Without loss of\ngenerality, we may assume $x_1^{(i)}< x_2^{(i)}< \\cdots <\nx_{a_i}^{(i)}$ and $x_1^{(1)} =1$. Then we have\n$x_1^{(i+1)}\\geq x_{a_i}^{(i)}+5$ for $1\\leq i\\leq t-1$. 
\\begin{obs}\\label{obs2}\n{\\rm\nLet $u,v,w\\in V(C_n(1,3))$.\n\\begin{itemize}\n\\item If the distance between $u$ and $v$ on the cycle $(1,2,\\ldots,n)$ is $2, 4, 6$, then\n$|CN(\\{u,v\\})|=3,2,1$, respectively; otherwise $|CN(\\{u,v\\})|=0$.\n\n\\item If the distances between $u$ and $v$, $v$ and $w$, $u$ and $w$ are $2,2,4,$ respectively, then\n$|CN(\\{u,v,w\\})|=2$, and if the distances are $2,4,6$, then $|CN(\\{u,v,w\\})|=1$; otherwise $|CN(\\{u,v,w\\})|=0$.\n\\end{itemize}\n}\n\\end{obs}\n\\begin{obs}\\label{obs3}\n{\\rm\nLet $L$ be an open packing in $C_n(1,3)$ of cardinality $\\rho^o(C_n(1,3))$.\n\\begin{itemize}\n\\item If $n\\not\\equiv 5, 7\\pmod 8$, then the subgraph of $C_n(1,3)$ induced by the vertices in $L$\nconsists of $\\lfloor\\frac{n}{8}\\rfloor$ edges.\n\n\\item If $n\\equiv 5, 7\\pmod 8$, then the subgraph of $C_n(1,3)$ induced by the vertices in $L$\nconsists of $\\lfloor\\frac{n}{8}\\rfloor$ edges and an isolated vertex.\n\\end{itemize}\n}\n\\end{obs}\n\nThe next lemma states how large the cardinality of an independent set in $C_n(1,3)$ can be, for even $n$, when it contains numbers of both parities.\n\n\\begin{lem} \\label{Max.ind.dif.parity}\n{\\rm\nFor even $n\\geq 8$, let $I$ be an independent set in $C_n(1,3)$ such\nthat the numbers in $I$ are not all of the same parity. Then $|I|\\leq \\frac{n}{2}-3$.\n}\n\\end{lem}\n\n\\begin{proof}\nLet $I$ be such an independent set in $C_n(1,3)$ with the\npartition $I= A_1\\cup \\cdots \\cup A_t$, where $t\\geq 2$, such that the set\n$A_i$ consists of odd numbers if and only if $i$ is odd. Let also\n\\[\nA_i=\\{x_1^{(i)},\\ldots, x_{a_i}^{(i)}\\}\n\\]\nfor $1\\leq i\\leq t$, where $|A_i|=a_i$. Without loss of\ngenerality, we may assume $x_1^{(i)}< x_2^{(i)}< \\cdots <\nx_{a_i}^{(i)}$ and $x_1^{(1)} =1$. Then we have\n$x_1^{(i+1)}\\geq x_{a_i}^{(i)}+5$ for $1\\leq i\\leq t-1$. Therefore\n\\[\nn \\geq \\sum\\limits_{i=1}^{t}(2a_i+3)\n = 2|I|+3t\n \\geq 2|I|+6,\n\\]\nwhich implies that $|I|\\leq \\frac{n}{2}-3$.\n\\end{proof}\n\n\\begin{cor}\\label{cor1}\n{\\rm\nLet $f=(V_1,\\cdots, V_{\\ell})$ be a proper coloring of $C_n(1,3)$ for even $n \\geq 18$. If $|V_i|\\geq \\frac{n}{2}-2$ for some $i$, then\n$f$ is not a TDC.\n}\n\\end{cor}\n\n\\begin{proof}\nBy Lemma \\ref{Max.ind.dif.parity}, the vertices in $V_i$ have the same parity. Without loss of generality, we may assume the vertices in $V_i$ are odd. Since $N(x) \\subseteq V_e$ for any odd $x$, $N(y)\\subseteq V_o$ for any even $y$ and $|V_o\\cap(\\bigcup_{j=1,j\\neq i}^{\\ell}V_j)|\\leq 2$, the condition $n\\geq 18$ implies that there exists\nan even $x$ such that $x\\nsucc_t V_j$ for each $1\\leq j \\leq \\ell$, a contradiction.\n\\end{proof}\n\\section{A Lower Bound}\nIn this section we find a lower bound for the total dominator chromatic number of $C_n(1,3)$ for $n\\geq 7$.\nWe make use of the following result in this section.\n\n\\begin{thm} \\emph{\\cite{Rad}}\n\\label{TDN C_n(1,3)}\n{\\rm\nFor any $n\\geq 4$,\n\\begin{equation*}\n\\gamma_t(C_n(1,3))= \\left\\{\n\\begin{array}{ll}\n\\lceil\\frac{n}{4}\\rceil+1 & \\mbox{ if } n\\equiv\n2,4\\pmod{8}, \\\\\n\\lceil\\frac{n}{4}\\rceil & \\mbox{ otherwise.}\\\\\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{thm}\n\nFirst we find lower bounds for the total dominator chromatic number of $C_n(1,3)$ when $n\\in\\{7,8,9,10,11,12,14,19\\}$.\n\n\\begin{lem} \\label{TDCN the 7 numbers}\n{\\rm\nFor any circulant graph $C_n(1,3)$,\n\\begin{equation*}\n\\chi_{d}^t(C_n(1,3)) \\geq \\left\\{\n\\begin{array}{ll}\n2\\lceil \\frac{n}{8} \\rceil & \\mbox{ if } 8\\leq n \\leq 10,\\\\\n2\\lceil \\frac{n}{8}\\rceil +1 & \\mbox{ if } n=11,\\\\\n2\\lceil \\frac{n}{8}\\rceil +2 & \\mbox{ if } n=7,12,14,19.\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{lem}\n\n\\begin{proof}\nWe know\n\\begin{equation*}\n\\begin{array}{ll}\n\\chi_{d}^t(C_n(1,3))\\geq \\chi(C_n(1,3))=2\\lceil \\frac{n}{8} \\rceil +2 & \\mbox{ if } n=7, \\mbox{ ~~~~(by (\\ref{max{chi,gamma_t} =< chi_d^t}))}\\\\\n\\chi_{d}^t(C_n(1,3))\\geq \\gamma_t(C_n(1,3))=2\\lceil \\frac{n}{8} \\rceil & \\mbox{ if } n=8,10, \n\\mbox{ (by (\\ref{max{chi,gamma_t} =< chi_d^t}) and Theorem \\ref{TDN C_n(1,3)})}.\n\\end{array}\n\\end{equation*}\nLet $f=(V_1,V_2,V_3)$ be a TDC of $C_9(1,3)$, $v_i=|V_i|$ and $v_i'=|CN_G(V_i)|$. Then\n\\[\n\\sum\\limits_{i=1}^{3}v_i' \\leq \\sum\\limits_{i=1}^{3}(5-v_i)\n = 15-\\sum\\limits_{i=1}^{3}v_i\n = 6\n < 9,\n\\]\nwhich contradicts (\\ref{sum v_i'>=n}). Hence, $\\chi_{d}^t(C_9(1,3)) \\geq 4$. In a similar fashion, it can be proved that $\\chi_d^t(C_{11}(1,3))\\geq 5$. Now we prove $\\chi_{d}^t(C_n(1,3)) \\geq 2\\lceil \\frac{n}{8}\\rceil +2$ for $n=12,14,19$. Since their proofs are similar, we only prove $\\chi_d^t(C_{12}(1,3))\\geq 6$.\n\nLet $f=(V_1, \\cdots, V_5)$ be a proper coloring of $C_{12}(1,3)$, where $v_1\\geq v_2\\geq \\cdots \\geq v_{5}$, and let $m=|\\{i~|~v_i\\geq 5\\}|$. Note that $m=0$ or $1$. First let $m=0$. Then $(v_1,\\cdots, v_5)\\in \\{(4,3,2,2,1),(4,2,2,2,2),(3,3,3,2,1)\\}$. Let $(v_1,\\cdots, v_5)=(4,3,2,2,1)$ (the other cases are similar). Hence, $(v_1',\\cdots, v_5') \\leq (1,2,3,3,4)$ by Observation \\ref{Obs.1}. If the numbers in some $V_i$ have different parity, then $|V_o|=|V_e|=6$ implies that the numbers in some $V_j$, $i\\neq j$, have different parity. 
Hence, $v_i'=v_j'=0$ and so $\\sum\\limits_{i=1}^{5}v_i' < 12$, a contradiction with (\\ref{sum v_i'>=n}).\nTherefore, we assume the numbers in each $V_i$ have the same parity. Then $V_i$ is a subset of $V_e$ (or $V_o$) if and only if $CN_{C_{12}(1,3)}(V_i)$ is a subset of $V_o$ (or $V_e$). Without loss of generality, let $V_1\\subseteq V_e$. Then either $V_3\\subseteq V_e$ or $V_4\\subseteq V_e$ (not both). Therefore\n\\[\n\\begin{array}{ll}\nV_o=CN_{C_{12}(1,3)}(V_1) \\cup CN_{C_{12}(1,3)}(V_3) \\mbox{ or}\\\\\nV_o=CN_{C_{12}(1,3)}(V_1) \\cup CN_{C_{12}(1,3)}(V_4),\n\\end{array}\n\\]\nwhich contradicts the fact that $|CN_{C_{12}(1,3)}(V_1) \\cup CN_{C_{12}(1,3)}(V_i)| < |V_o|$ for $i=3,4$.\n\nNow let $m=1$. Then $5\\leq v_1\\leq 6=\\alpha (C_{12}(1,3))$. Lemma \\ref{Max.ind.dif.parity} implies that the vertices in $V_1$ have the same parity. Without loss of generality, we may assume the numbers in $V_1$ are odd. Hence, $|(V_2\\cup \\cdots \\cup V_5)\\cap V_o|\\leq 1$. If $v_1=6$, then $|(V_2\\cup \\cdots \\cup V_5)\\cap V_o|=0$, and so for any even $v$, $v\\nsucc_t V_i$ for each $i$. Now assume $v_1=5$. Then $|(V_2\\cup \\cdots \\cup V_5)\\cap V_o|=1$. If $V_o=V_1\\cup V_5$, then for some even $v$, we will have $v\\nsucc_t V_i$ for each $i$. Now assume $V_o\\neq V_1\\cup V_5$. Without loss of generality, we may assume the numbers in $V_2$ have different parity, and so $v_2'=0$. Also, we know that $v_1=5$ implies that $(v_2,\\cdots, v_5)=(4,1,1,1)$ or $(v_2,\\cdots, v_5)=(3,2,1,1)$, or $(v_2,\\cdots, v_5)=(2,2,2,1)$, and so $(v_2',\\cdots, v_5')\\leq (1,4,4,4)$ or $(v_2',\\cdots, v_5')\\leq (2,3,4,4)$ or $(v_2',\\cdots, v_5')\\leq (3,3,3,4)$, respectively. Let $(v_2',\\cdots, v_5')=(1,4,4,4)$ (the proofs of the other cases are left to the reader). Since $V_3\\cup V_4\\cup V_5 \\subseteq V_e$, we obtain\n\\[\n\\bigcup\\limits_{i=3}^{5} CN_{C_{12}(1,3)}(V_i)\\subseteq V_o,\n\\]\nand so $CN_{C_{12}(1,3)}(V_1)=CN_{C_{12}(1,3)}(V_2)=\\emptyset$, and therefore $(V_1\\cup \\cdots \\cup V_5)\\cap V_e=\\emptyset$, a contradiction.\n\\end{proof}\n
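\nThe colorings that attain the bounds of Lemma \\ref{TDCN the 7 numbers} are listed explicitly in the last section, and such colorings can be checked mechanically. The following Python sketch is an auxiliary check only and is not part of the proofs; the helper names are ours. It verifies that a given partition of $V(C_n(1,3))$ into color classes is a proper coloring in which every vertex totally dominates some color class and, as an example, it confirms the $4$-coloring of $C_9(1,3)$ given in the proof of Theorem \\ref{chi_{d}^t(C_n(1,3))}.\n\\begin{verbatim}\n# Checks whether a partition of V(C_n(1,3)) is a total dominator coloring.\ndef neighborhood(v, n):\n    return {(v + d - 1) % n + 1 for d in (1, -1, 3, -3)}\n\ndef is_tdc(classes, n):\n    # the coloring must be proper: no class contains two adjacent vertices\n    for C in classes:\n        if any(u != v and u in neighborhood(v, n) for v in C for u in C):\n            return False\n    # every vertex must totally dominate some (nonempty) color class\n    for v in range(1, n + 1):\n        if not any(C and set(C).issubset(neighborhood(v, n)) for C in classes):\n            return False\n    return True\n\n# The 4-coloring of C_9(1,3) used later: {1,8}, {2,9}, {3,5,7}, {4,6}.\nprint(is_tdc([{1, 8}, {2, 9}, {3, 5, 7}, {4, 6}], 9))  # True\n\\end{verbatim}\n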
\nNext we find lower bounds for the total dominator chromatic number of $C_n(1,3)$ when $n\\geq 13$.\n\\begin{lem} \\label{TDCN n>=13 except 14,19}\n{\\rm\nFor any circulant graph $C_n(1,3)$ of order $n\\geq 13$,\n\\begin{equation*}\n\\chi_{d}^t(C_n(1,3))\\geq \\left\\{\n\\begin{array}{ll}\n2\\lceil \\frac{n}{8}\\rceil +1 & \\mbox{ if }n\\equiv 1\\pmod{8},\\\\\n2\\lceil \\frac{n}{8}\\rceil +2 & \\mbox{ otherwise.}\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{lem}\n\n\\begin{proof}\nThe statement is true for $n=14, 19$ by Lemma \\ref{TDCN the 7 numbers}. Now assume $n\\neq 14,19$.\nLet $f=(V_1,V_2,\\cdots,V_{\\ell})$ be a TDC of $C_n(1,3)$, where $v_1\\geq v_2\\geq \\cdots \\geq v_{\\ell}$ and\n\\begin{equation*}\n\\ell = \\left\\{\n\\begin{array}{ll}\n2\\lceil \\frac{n}{8}\\rceil & \\mbox{ if }n\\equiv 1\\pmod{8},\\\\\n2\\lceil \\frac{n}{8}\\rceil +1 & \\mbox{ if }n\\not\\equiv 1\\pmod{8}.\n\\end{array}\n\\right.\n\\end{equation*}\n\n\\noindent Let $m=|\\{i~|~v_i\\geq 5\\}|$.\n\\noindent If $m\\geq 3$, then $v_1'=v_2'=v_3'=0$ by Observation \\ref{Obs.1}, and so\n$\n\\sum_{i=1}^{\\ell}v_i' = \\sum_{i=4}^{\\ell}v_i'\n \\leq 4(\\ell-3)\n < n,\n$\na contradiction.\n\n\\noindent If $m=0$, then $5\\ell< 2n$ and so $\\sum\\limits_{i=1}^{\\ell} v_i'< n$, which is a contradiction with (\\ref{sum v_i'>=n}).\nNow let $m=1$. Hence, $v_1'=0$.\nBy Observation \\ref{Obs.1}, Proposition \\ref{alpha_Cn(1,3)} and Lemma \\ref{Max.ind.dif.parity},\n\\[\n\\sum\\limits_{i=1}^{\\ell}v_i' = \\sum\\limits_{i=2}^{\\ell}v_i'\n \\leq \\sum\\limits_{i=2}^{\\ell}(5-v_i)\n = 5(\\ell-1)-n+v_1 < n,\n\\]\na contradiction with (\\ref{sum v_i'>=n}).\nNow suppose that $m=2$. If $n\\equiv 0,1,5,6,7\\pmod{8}$, then\n\\[\n\\sum\\limits_{i=1}^{\\ell}v_i' = \\sum\\limits_{i=3}^{\\ell}v_i'\n \\leq 4(\\ell-2)\n < n,\n\\]\na contradiction with (\\ref{sum v_i'>=n}).\n\nThe remaining cases are $n\\equiv 2,3,4\\pmod{8}$ and $m=2$.\n\n\\textbf{Case 1.} $n\\equiv 3\\pmod{8} \\mbox{ and } m=2$.\n\n\\noindent Let $n=8k+3$ for some nonnegative integer $k$.\nSince $m=2$, it follows that $v_1\\geq v_2\\geq 5$. Note that, by definition, $\\ell=2k+3$. Since\n$\\ell -2\\leq n-(v_1+v_2),$ it follows that $v_1+v_2\\leq 6k+2$.\nIf $v_1+v_2\\leq 6k$, then\n\\[\n\\sum\\limits_{i=1}^{\\ell}v_i' = \\sum\\limits_{i=3}^{\\ell}v_i'\n \\leq \\sum\\limits_{i=3}^{\\ell} (5-v_i)\n = 5(\\ell-2)-n+v_1+v_2 < n,\n\\]\nwhich contradicts (\\ref{sum v_i'>=n}). We consider the following two subcases.\n\\begin{itemize}\n\\item{Subcase 1.1.} $v_1+v_2=6k+2$.\n\n\\noindent Then $n-(v_1+v_2)=2k+1$ and $V_i=\\{w_i\\}$ for $3\\leq i\\leq \\ell=2k+3$.\nIn order to satisfy the inequality $\\sum_{i=1}^{\\ell}v_i'\\geq n$, given in (\\ref{sum v_i'>=n}), we must have\n$\\bigcup_{i=3}^{\\ell} N(w_i)= V(C_n(1,3))$.\nHence, there exists a set $L\\subset \\{w_i: 3\\leq i\\leq \\ell\\}$ of cardinality $2k$ which is an open packing in $C_n(1,3)$. Let $\\{w_j\\}=\\{w_i: 3\\leq i\\leq \\ell\\}\\setminus L$. Obviously, if $i,i+3\\in L$ and\n$w_j=i+1$ or\n$w_j=i+2$ (addition is modulo $n$ with residue in $\\{1,2,\\ldots,n\\}$), then $|N(w_j)\\cap N\\{i,i+3\\}|=3$ by Observation \\ref{obs2}. So $\\sum_{i=1}^{\\ell}v_i'<n$, which contradicts (\\ref{sum v_i'>=n}).\n\n\\item{Subcase 1.2.} $v_1+v_2=6k+1$.\n\n\\noindent Then $n-(v_1+v_2)=2k+2$, so that $V_3=\\{a,b\\}$ and $V_i=\\{w_i\\}$ for $4\\leq i\\leq \\ell$. In order to satisfy the inequality given in (\\ref{sum v_i'>=n}), we must have\n$(\\bigcup_{i=4}^{\\ell} N(w_i))\\cup CN(V_3)= V(C_n(1,3))$. Therefore $L=\\{w_i\\mid 4\\leq i\\leq \\ell\\}$\nforms an open packing of size $2k$ in $C_n(1,3)$. In addition, $|CN(V_3)|=3$ and\n$N(L)\\cap CN(V_3)=\\emptyset$.\nBy Observation \\ref{obs2}, the equality $|CN(V_3)|=3$ implies that the distance between\nvertices $a$ and $b$ on the cycle $(1,2,\\ldots,n)$ must be $2$.\nLet $S$ be as in Subcase 1.1. It is easy to see that for every two vertices $a$ and $b$ of distance two\nwhich are between any two vertices of $S$, we obtain\n$|N(L)\\cap CN\\{a, b\\}|\\geq 1$, which is a contradiction.\n\\end{itemize}\n\\textbf{Case 2.} $n\\equiv 2, 4\\pmod{8} \\mbox{ and } m=2$.\n\n\\noindent Let $n=8k+2$ for some nonnegative integer $k$.\nSince $m=2$, it follows that $v_1\\geq v_2\\geq 5$. Note that, by definition, $\\ell=2k+3$. Since\n$\\ell -2\\leq n-(v_1+v_2),$ it follows that $v_1+v_2\\leq 6k+1$.\nOn the other hand, if $v_1+v_2\\leq 6k-2$, then\n\\[\n\\sum\\limits_{i=1}^{\\ell}v_i' = \\sum\\limits_{i=3}^{\\ell}v_i'\n \\leq \\sum\\limits_{i=3}^{\\ell} (5-v_i)\n = 5(\\ell-2)-n+v_1+v_2 < n,\n\\]\nwhich contradicts (\\ref{sum v_i'>=n}).\n\n\\noindent So we have three subcases to consider, namely $v_1+v_2=6k+1$, $6k$, and $6k-1$.\nIf $v_1+v_2=6k+1$, then $n-(v_1+v_2)=2k+1$.\nAn argument similar to that described in Subcase 1.1 leads to a contradiction.\nIf $v_1+v_2=6k$, then $n-(v_1+v_2)=2k+2$.\nAn argument similar to that described in Subcase 1.2 leads to a contradiction.\n\nNow let $v_1+v_2=6k-1$. Then $n-(v_1+v_2)=2k+3$.\nFirst let $V_3=\\{a,b,c\\}$.\nIf $v_3'\\leq 1$, then $\\sum_{i=1}^{\\ell} v_i'\\leq 8k+1<n$, which contradicts (\\ref{sum v_i'>=n}).\n\n\\noindent Now let $n=8k+4$ for some nonnegative integer $k$, so that $\\ell=2k+3$ and, as before, $v_1+v_2\\leq 6k+3$. If $v_1+v_2\\leq 6k+2$, then $\\sum\\limits_{i=1}^{\\ell}v_i'\\leq 5(\\ell-2)-n+v_1+v_2<n$, which contradicts (\\ref{sum v_i'>=n}).\n\n\\noindent Hence, $v_1+v_2=6k+3$. 
So $n-(v_1+v_2)=2k+1$. In order to satisfy\n$\\bigcup_{i=3}^{\\ell} CN(V_i)=V(C_n(1,3))$, we must have $|V_i|=1$ for $3\\leq i\\leq \\ell$ and\nthe set $\\bigcup_{i=3}^{\\ell}V_i$ must be an open packing in $C_n(1,3)$ of cardinality\n$2k+1$, which is impossible by Proposition \\ref{open-packing Cn(1,3)}.\n\\end{proof}\n\n\n\\section{Proof of Theorem \\ref{chi_{d}^t(C_n(1,3))}}\n\n\n\\noindent First note that $C_6(1,3)$ is isomorphic to the complete bipartite graph $K_{3,3}$ and so $\\chi_d^t(C_6(1,3))$ $=\\chi(K_{3,3})=2$. Now for each $n\\geq 7$ we present color classes for $C_n(1,3)$ which satisfy the equality in Lemma \\ref{TDCN the 7 numbers} when $7\\leq n\\leq 12$, $n=14$ or $n=19$ and the equality in\nLemma \\ref{TDCN n>=13 except 14,19} when $n\\geq 13$ and $n\\neq 14,19$. Hence, the proof of the theorem follows.\n\n\\noindent If $7\\leq n\\leq 11$, then the color classes are:\\\\\n\\begin{tabular}{ll}\n$n=7:$ & $\\{1\\}, \\{2,7\\}, \\{3,5\\}, \\{4,6\\},$\\\\\n$n=8:$ & $\\{1,3,5,7\\}, \\{2,4,6,8\\},$\\\\\n$n=9:$& $\\{1,8\\}, \\{2,9\\}, \\{3,5,7\\}, \\{4,6\\},$\\\\\n$n=10:$ & $\\{1\\}, \\{2\\}, \\{3,5,7,9\\}, \\{4,6,8,10\\},$\\\\\n$n=11:$& $\\{1,3,5\\}, \\{2,11\\}, \\{7,9\\}, \\{8,10\\}, \\{4,6\\}.$\n\\end{tabular}\n\n\\noindent If $n\\geq 12$, we proceed as follows. Let $E$ and $O$ consist of even and odd numbers less\nthan or equal to $n$, respectively.\nDefine\n$$\n\\begin{array}{lll}\nA_1=\\{n-6,n-4,n-2,n\\},&&\nA_2 = \\{n-7, n-5, n-3, n-1\\},\\\\\nA_3 = \\{n-6, n-4, n-2\\},&&\nA_4 = \\{n-7, n-5, n-3\\},\\\\\nA_5 = \\{n-6, n-4\\}, &&\nA_6 = \\{n-7, n-5\\}.\n\\end{array}\n$$\n\\noindent Consider the open packing $L=\\{8i+1,8i+2\\mid 0\\leq i\\leq k-1\\}$. The color classes are:\n$n=8k:$\\hspace{7mm} $\\{v\\} \\mbox{ for } {v\\in L}, O\\setminus L, E\\setminus L,$\\\\\n$n=8k+1:$ $\\{v\\} \\mbox{ for } {v\\in L},A_1,O\\setminus(L\\cup A_1), E\\setminus L,$\\\\\n$n=8k+2:$ $\\{v\\} \\mbox{ for } {v\\in L},A_1,A_2,O\\setminus(L\\cup A_2), E\\setminus (L\\cup A_1),$\\\\\n$n=8k+3:$ $\\{v\\} \\mbox{ for } {v\\in L},A_2,A_3, O\\setminus(L\\cup A_3), (E\\setminus (L\\cup A_2))\\cup\\{n\\},$\\\\\n$n=8k+4:$ $\\{v\\} \\mbox{ for } {v\\in L},A_3,A_4, O\\setminus(L\\cup A_4), E\\setminus (L\\cup A_3),$\\\\\n$n=8k+5:$ $\\{v\\} \\mbox{ for } {v\\in L},A_4,A_5, (O\\setminus(L\\cup A_5))\\cup\\{n-1\\}, (E\\setminus (L\\cup A_4))\\cup\\{n-2,n\\},$\\\\\n$n=8k+6:$\\ $\\{v\\} \\mbox{ for } {v\\in L},A_5,A_6, O\\setminus(L\\cup A_6), E\\setminus (L\\cup A_5),$\\\\\n$n=8k+7:$ $\\{v\\} \\mbox{ for } {v\\in L},\\{n-6\\}, A_6, (O\\setminus L)\\cup\\{n-3, n-1\\}, (E\\setminus (L\\cup A_6))\\cup\\{n-4, n-2, n\\}.$\n\nTheorems \\ref{chi_{d}^t(C_n(1,3))} and \\ref{TDN C_n(1,3)}\nlead to the following corollary.\n\n\\begin{cor}\\label{corollary 2}\n{\\rm\n\\begin{equation*}\n\\chi_{d}^t(C_n(1,3))=\\left\\{\n\\begin{array}{ll}\n\\gamma_t & \\mbox{ if }n=8,10 \\\\\n\\gamma_t+1 & \\mbox{ if }n=9 \\\\\n\\gamma_t+3 & \\mbox{ if }n\\equiv 3\\pmod{8}, n\\neq 11\\\\\n\\gamma_t+2 & \\mbox{ otherwise.}\n\\end{array}\n\\right.\n\\end{equation*}\n}\n\\end{cor}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe attraction between two parallel neutral perfectly conducting plates separated by a gap was first predicted by Casimir in 1948 \\cite{Casimir48}. This force was explained in terms of quantum vacuum fluctuations and is a well known result in quantum field theory \\cite{Greiner}. 
For two plates made of arbitrary materials, Lifshitz \\cite{Lifshitz56,Lifshitz61} derived a more general theory for dispersive forces based on Rytov's formalism for fluctuating electromagnetic fields \\cite{Rytov67,Vinogradov09}. By solving Maxwell's equations with the proper boundary conditions and using the fluctuation-dissipation theorem, the force can be written in terms of the reflection coefficients between the gap and the slabs.\n Lifshitz theory is applicable at any temperature and at any separation between the bodies, provided the separation is larger than the interatomic distances. For short separations the Lifshitz formula gives the non-retarded Van der Waals force. At large separations retardation is included in the theory, giving the Casimir force.\n The success of the Lifshitz formula is evident from the variety of systems that have been studied, ranging from ferromagnets~\\cite{Bruno02} and semiconductors~\\cite{Lamoreaux08,Dalvit09} to metamaterials~\\cite{Rosa08,Footnote1}. \n\nRenewed interest in the Casimir force came from measurements made in the mid-1990s. Lamoreaux \\cite{Lamoreaux97}, using a torsional balance, and Mohideen \\cite{Mohideen98}, with an atomic force microscope, performed the first precise measurements of the Casimir force. Later, micro torsional balances were used by several authors \\cite{Dec03,Ian04,Vanzwol09}. With the exception of one experiment \\cite{Onofrio02}, all the measurements were made between a large sphere and a plane rather than between two parallel plates. Recently, possible measurements of the Casimir force between a plane and a cylinder have been considered \\cite{Wei10}.\n\nIt has also been suggested that dispersive forces could play an important role in the operation of micro-electro-mechanical\nsystems (MEMS) \\cite{Serry95,Serry98, Roukes01}. Experiments show that Casimir forces have a strong influence on the oscillatory behavior of microstructures, driving them into their nonlinear modes \\cite{Capasso01}. The role of Casimir and Van der Waals forces in MEMS, in particular in pull-in dynamics, has also been a topic of intensive study \\cite{Zhao03,Zhao07,Delrio05,Esquivelapl,Esquivelnjp,Batra07}. A possibility that has been considered by the authors \\cite{Garcia09,Esquivel09,Esquivel10} is the use of external magnetic fields to decrease the magnitude of the Casimir force, thus inhibiting the pull-in or jump to contact. \n\nLifshitz theory was developed for isotropic media. When anisotropic media are considered, an additional effect appears: there is a torque associated with the dispersive forces. In 1972, Parsegian and Weiss \\cite{Parsegian72} derived an expression for the Van der Waals energy between anisotropic slabs in the general case when the medium between the slabs is also anisotropic. The resulting free energy depends on the angle between the principal axes of the slabs, giving rise to a torque. Later, Barash \\cite{Barash78} extended these results to include retardation effects, predicting the Casimir torque. In the non-retarded limit the results of Parsegian and Weiss were recovered.\n\n At the molecular level, the influence of structural chirality on Van der Waals forces between two solids has been studied by introducing a helical space variation of the dielectric tensor \\cite{Galatry82}. 
This work was motivated by the need to understand the macroscopic properties of optically active species.\n\nDifferent techniques have been used to derive the expression for the Van der Waals torque. For example, Zhao \\cite{Zhao05} calculated the Casimir torque using the method of quantized surface modes, later extending these results to anisotropic metamaterials \\cite{Deng08}. Philbin and Leonhardt \\cite{Philbin08} calculated the electromagnetic stress tensor between anisotropic plates to obtain the same expression for the Casimir torque.\nThe Casimir torque has also been derived as a consequence of angular momentum transfer of the fluctuating electromagnetic fields between the two anisotropic planes \\cite{Enk95,Torres06}.\n\nAlthough dispersive torques have not been measured, Munday {\\it et al.} \\cite{Munday05} made detailed calculations of the Casimir torque and proposed an experiment to measure the torque between a barium titanate disk immersed in ethanol and a second anisotropic material. By means of an external laser, the top plate is initially rotated, and the Casimir torque then realigns the optical axes of the plates. That work showed that dispersive torques can be measured with current experimental techniques.\n\nIn this paper we show that a dispersive torque can be induced using a constant magnetic field, and that the magnitude of the torque depends on the intensity of the magnetic field. This is achieved by the excitation of magnetoplasmons in III-V semiconductors such as $InSb$.\n\n\n\\section{Anisotropy due to external magnetic fields}\n\nWhen a dc external magnetic field is applied to a metal or a highly doped semiconductor, the normal modes of the free charges change significantly, giving rise to magnetoplasmons. For example, the external magnetic field induces an optical anisotropy in an otherwise isotropic material. Also, the dispersion relation of the magnetoplasmons presents a frequency gap that depends on the intensity of the external magnetic field \\cite{Palik70,Wames72,Wallis74,Aers78}. Typically, magnetoplasmons are excited in III-V semiconductors such as InSb, InAs or GaAs. Magnetoplasmons have reemerged in the context of plasmonics and its applications \\cite{Kong08,Berman08,Liu09}.\n\nTo illustrate the effect of the external magnetic field, consider the classical equation of motion for the electrons in the material:\n\\begin{equation}\nm\\frac{d{\\bf v}}{dt}=q({\\bf E}+{\\bf v}\\times {\\bf B})-\\frac{m}{\\tau}{\\bf v}\n\\label{drude}\n\\end{equation}\nwhere $m$ is the effective mass of the electron, $q$ its charge, and $\\tau$ the relaxation time. 
Assuming a harmonic time dependence $e^{-i\\omega t}$ for the electric field, the current ${\\bf j}=nq{\\bf v}$, where $n$ is the carrier density, can be found, and thus the conductivity can be calculated \\cite{Palik70}\n\\begin{equation}\n\\sigma_{ij}(\\omega,{\\bf B}_0)=\\frac{nq^2\\tau^*}{m}\\frac{\\delta_{ij}+\\omega_c\\tau^* \\textit{e}_{ijk}(B_k\/B_0)+(\\omega_c\\tau^*)^2 (B_iB_j\/B_0^2)}{1+(\\omega_c \\tau^*)^2},\n\\end{equation}\nwhere $\\tau^*=\\tau\/(1-i\\omega \\tau)$, $\\omega_c=q|{\\bf B}_0|\/mc$ is the cyclotron frequency, and $\\textit{e}_{ijk}$ is the Levi-Civita symbol.\nThe dielectric tensor is obtained from\n\\begin{equation}\n\\epsilon_{ij}(\\omega,{\\bf B}_0)=\\delta_{ij}+\\frac{4 \\pi i}{\\omega}\\sigma_{ij}.\n\\label{epsilon}\n\\end{equation}\nClearly, if ${\\bf B}_0=0$ we recover the results for the isotropic case.\n\nFor an arbitrary direction of the magnetic field, the calculation of the dispersion relation of the surface magnetoplasmons and of the optical reflectivity is difficult. To simplify the problem, specific directions of the magnetic field have to be chosen \\cite{Manvir06}. In the so-called {\\it Faraday} configuration, the magnetic field is perpendicular to the slab. In this case, there is mode conversion upon reflection from the slab. That is, for an incident TE wave, the reflected wave will consist of TE and TM modes, and similarly for an incident TM mode.\n\nThe second configuration, which will be used in this paper, is the {\\it Voigt} geometry, where the magnetic field is parallel to the slabs. In this case there is no mode conversion upon reflection. Consider a slab parallel to the $x-z$ plane. In the Voigt geometry the external magnetic field points along the $z$ axis. In this case, the components of the dielectric tensor are given by \\cite{Manvir06,Garcia09}\n\\begin{eqnarray}\n\\epsilon_{xx}&=&\\epsilon_L\\left[ 1-\\frac{\\omega_p^2}{\\omega(\\omega+i \\gamma)} \\right ], \\nonumber \\\\\n\\epsilon_{yy}&=&\\epsilon_L\\left[ 1-\\frac{(\\omega+i\\gamma)\\omega_p^2}{\\omega ((\\omega+i\\gamma)^2-\\omega_c^2)} \\right ] ,\\nonumber \\\\\n\\epsilon_{yz}&=&\\epsilon_L\\left[ \\frac{i\\omega_c\\omega_p^2}{\\omega ((\\omega+i\\gamma)^2-\\omega_c^2)} \\right ],\n\\end{eqnarray}\nand $\\epsilon_{zz}=\\epsilon_{yy}$ and $\\epsilon_{zy}=-\\epsilon_{yz}$. The other components are equal to zero.\nIn these equations $\\epsilon_L$ is the background dielectric function, $\\omega_p$ the plasma frequency and $\\gamma$ the damping parameter. The factor $\\epsilon_L$ accounts for the fact that we are working with semiconductors. \nIn the absence of the magnetic field, $\\omega_c=0$ and the plates become isotropic. In the rest of the paper we will use the dimensionless variable $\\Omega_c=\\omega_c\/\\omega_p$, which gives the relative importance of the external magnetic field. In Figure (2) we have plotted the dielectric function components as given by Eq. (4), for a value of $\\Omega_c=0.2$, showing the anisotropy of the system. The parameters used are $\\epsilon_L=15.8$, $m=0.014 m_0$ and $\\gamma=\\omega_p\/100$, which are typical of $InSb$ \\cite{Cunningham74}. In this figure we performed a rotation of the frequency to the complex plane, $\\omega\\rightarrow i\\zeta$, a common and convenient practice when calculating dispersive forces. For clarity, the relation between the values of the parameter $\\Omega_c$ and the magnetic field for $InSb$ is presented in the Appendix.\n
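\nThe behavior of the dielectric tensor along the imaginary frequency axis, which is what enters the dispersive quantities evaluated below, is straightforward to reproduce numerically. The short Python sketch below is an illustration only: it evaluates the components of Eq. (4) at $\\omega=i\\zeta$ using the $InSb$ parameters quoted above and prints the anisotropy measure $|\\epsilon_{xx}\/\\epsilon_{zz}-1|$ for a few values of $\\Omega_c$; the identification of the parallel and perpendicular components with $\\epsilon_{xx}$ and $\\epsilon_{zz}$, as well as all variable names, are our own choices.\n\\begin{verbatim}\n# Components of Eq. (4) at omega = i*zeta, in units of the plasma frequency.\n# Illustration only; taking eps_par = eps_xx and eps_perp = eps_zz is our reading.\neps_L = 15.8          # background dielectric constant of InSb\ng = 0.01              # gamma / omega_p\n\ndef eps_voigt(z, Wc):\n    # z = zeta/omega_p, Wc = omega_c/omega_p; returns (eps_xx, eps_zz, eps_yz)\n    w = 1j * z\n    exx = eps_L * (1.0 - 1.0 / (w * (w + 1j * g)))\n    eyy = eps_L * (1.0 - (w + 1j * g) / (w * ((w + 1j * g) ** 2 - Wc ** 2)))\n    eyz = eps_L * (1j * Wc / (w * ((w + 1j * g) ** 2 - Wc ** 2)))\n    return exx.real, eyy.real, eyz.real   # imaginary parts vanish on this axis\n\nfor Wc in (0.0, 0.1, 0.2, 0.5):\n    exx, ezz, _ = eps_voigt(1.0, Wc)      # eps_zz = eps_yy in this geometry\n    delta = abs(exx / ezz - 1.0)\n    print(Wc, delta)                      # delta = 0 when the field is off\n\\end{verbatim}\nAs expected, the anisotropy vanishes for $\\Omega_c=0$ and grows with the applied field.\n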
\n\n\n\\section{Torque of the Van der Waals force}\n\n\nIn this section we briefly review the theory for calculating the torque associated with the Casimir force. We follow the formalism of Barash \\cite{Barash78}.\nLet us consider two anisotropic plates labeled $i=1,2$, as depicted in Figure (1). Their optical axes are not aligned and form an angle $\\theta$. The dielectric tensor of each plate can be described by\n\n\\begin{equation}\n\\left( \\begin{array}{ccc}\n\\epsilon_{1 ||} & 0 & 0 \\\\\n0& \\epsilon_{1 \\bot} & 0 \\\\\n0& 0 & \\epsilon_{1 \\bot} \\end{array} \\right)\n\\end{equation}\n\nand\n\n\n\\begin{equation}\n\\left( \\begin{array}{ccc}\n\\epsilon_{2 ||} \\cos^2(\\theta)+\\epsilon_{2 \\bot} \\sin^2(\\theta)& (\\epsilon_{2\\bot}-\\epsilon_{2 ||})\\sin(\\theta)\\cos(\\theta) & 0 \\\\\n(\\epsilon_{2\\bot}-\\epsilon_{2 ||})\\sin(\\theta)\\cos(\\theta)& \\epsilon_{2 ||} \\sin^2(\\theta)+\\epsilon_{2 \\bot} \\cos^2(\\theta)& 0 \\\\\n0& 0 & \\epsilon_{2 \\bot} \\end{array} \\right)\n\\end{equation}\n\n\nIn the case of anisotropic plates the free energy of the system depends on the separation between the plates and on the angle between their optical axes; we denote it by ${\\cal F}(\\theta,L)$. The magnitude of the attractive force per unit area is obtained, as usual, from $F(\\theta,L)=-\\partial {\\cal F}(\\theta,L)\/\\partial L$, and the torque from $\\tau(\\theta,L)=-\\partial {\\cal F}(\\theta,L)\/\\partial \\theta$. Although the calculation of the free energy for anisotropic systems is complicated, some simplifications are possible, depending on the separation between the plates and the degree of anisotropy.\nThe degree of anisotropy is quantified using the relation\n\n\\begin{equation}\n\\delta=\\left |\\frac{\\epsilon_{||}}{\\epsilon_{\\bot}}-1\\right |.\n\\label{delta}\n\\end{equation}\nFor $\\delta<1$ we are in the low-anisotropy regime. In Figure (2), we plot the values of $\\delta$ for different values of the parameter $\\Omega_c$. The values of $\\Omega_c$ chosen also correspond to magnetic fields attainable in current laboratories.\nIn all cases, the anisotropy is small and an approximate expression for the torque is possible in the non-retarded regime; it is given by \\cite{Barash78,Munday05}\n\\begin{equation}\n\\tau(L,\\theta)=-\\frac{\\hbar S}{64 \\pi^2 L^2} \\bar{w} \\sin(2\\theta),\n\\label{torque}\n\\end{equation}\nwhere $S$ is the surface of the plates and $\\bar{w}$ is given by \\cite{Munday05}\n\\begin{equation}\\label{barw}\n\\bar{w}=\\int_0^{\\infty} d\\zeta \\frac{(\\eps_{2 ||}-\\eps_{2\\bot})(\\eps_{1||}-\\eps_{1 \\bot}) \\eps_3^2}{(\\eps_{1 \\bot}^2-\\eps_3^2)(\\eps_{2\\bot}^2-\\eps_3^2)}\\times \\ln \\left ( 1-\\frac{(\\eps_{1 \\bot}-\\eps_3)(\\eps_{2\\bot}-\\eps_3)}{(\\eps_{1 \\bot}+\\eps_3)(\\eps_{2\\bot}+\\eps_3)}\\right ).\n\\end{equation}\n\nThis expression considers that between the plates there could be a fluid with a dielectric function $\\epsilon_3$ and that temperature effects are not important and the separation $L$ satisfies $L<