diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzlmio" "b/data_all_eng_slimpj/shuffled/split2/finalzzlmio" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzlmio" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction} \\label{Introduction}\n\nTranslation suggestion (TS) is a scheme to simplify post-editing (PE) by automatically providing alternative suggestions for incorrect spans in machine translation outputs. \\citet{yang2021wets} formally define TS and build a high-quality dataset with human annotation, establishing a benchmark for TS. Based on the machine translation framework, the TS system takes the spliced source sentence $\\mathbf{x}$ and the translation sentence $\\mathbf {\\Tilde{m}}$ as the input, where the incorrect span of $\\mathbf {\\Tilde{m}}$ is masked, and its output is the correct alternative $\\mathbf y$ for the incorrect span. The TS task is still at an early research stage; to spur research on it, WMT released the translation suggestion shared task.\n\nThis WMT'22 shared task consists of two subtasks: Naive Translation Suggestion and Translation Suggestion with Hints. We participate in the former, which covers the bidirectional translation suggestion task for two language pairs, English-Chinese and English-German; we take part in all four translation directions.\n\nOur TS systems are built on several machine translation models, including Transformer \\citep{vaswani2017attention}, SA-Transformer \\citep{yang2021wets}, and DynamicConv \\citep{wu2018pay}. To make up for the lack of training data, we use parallel corpora to construct synthetic data based on three strategies. Firstly, we randomly sample a sub-segment in each target sentence of the golden parallel data, mask the sampled sub-segment to simulate an incorrect span, and use the sub-segment as an alternative suggestion. 
Secondly, the same strategy as above is used for pseudo-parallel data with the target side substituted by machine translation results. Finally, we use a quality estimation (QE) model \\citep{zheng2021self} to estimate the translation quality of the words in the translation output sentence and select the span with low confidence for masking; we then utilize an alignment tool to find the sub-segment corresponding to the span in the reference sentence and use it as the alternative suggestion for the span.\n\nConsidering that there is a domain difference between the synthetic corpus and the human-annotated corpus, we add an additional pre-training phase. Specifically, we train a discriminator and use it to filter sentences from the synthetic corpus that are close to the golden corpus, which we deem in-domain data. After pre-training with large-scale synthetic data, we perform an additional pre-training with in-domain data, thereby reducing the domain gap. We will describe our system in detail in Section \\ref{Method}.\n\n\\section{Related Work} \\label{Related_Work}\n\nThe translation suggestion (TS) task is an important part of post-editing (PE), which combines machine translation (MT) and human translation (HT), and improves translation quality by having human translators correct incorrect spans in machine translation outputs. To simplify PE, some early scholars studied translation prediction (\\citet{green2014human}, \\citet{knowles2016neural}), which provides predictions for the next word (or phrase) when given a prefix. Some scholars have also studied prediction given hints from translators \\citep{huang2015new}.\n\nIn recent years, some scholars have devoted themselves to researching methods to provide suggestions to human translators. \\citet{santy2019inmt} present a proof-of-concept interactive translation system that provides human translators with instant hints and suggestions. 
\\citet{lee2021intellicat} utilize two quality estimation models and a translation suggestion model to provide alternatives for specific words or phrases for correction. \\citet{yang2021wets} propose a transformer model based on segment-aware self-attention, provide strategies for constructing synthetic corpora, and release the human-annotated golden corpus of TS, which has become a benchmark for TS tasks.\n\n\n\\section{Method} \\label{Method}\n\nIn this section, we describe the translation suggestion system, followed by our strategies for building synthetic corpora, and finally the details of the additional pre-training phase.\n\n\\subsection{Translation Suggestion System}\n\nAs defined by \\citet{yang2021wets}, given the source sentence $\\mathbf{x}$, its translation sentence $\\mathbf{m}$, the incorrect span $\\mathbf{w}$ in $\\mathbf{m}$, and its corresponding correct translation $\\mathbf{y}$, the translation suggestion task first masks the incorrect span $\\mathbf{w}$ in $\\mathbf{m}$ to get $\\mathbf{m^{-w}}$, and then maximizes the following conditional probability:\n\\begin{equation}\n p(\\mathbf{y} | \\mathbf{x},\\mathbf{m^{-w}}; \\boldsymbol{\\theta})\n\\end{equation}\n\\noindent where $\\boldsymbol{\\theta}$ denotes the parameters of the model.\n\nThe construction of the TS system is based on common machine translation models. We introduce the models used in our TS system below:\n\n\\begin{itemize}\n \\item \\textbf{Transformer-base \\citep{vaswani2017attention}.} The naive transformer model. The encoding and decoding layers are both set to 6, the word embedding size is set to 512, and the attention head is set to 8.\n \\item \\textbf{Transformer-big \\citep{vaswani2017attention}.} The widened transformer model. 
The encoding and decoding layers are both set to 6, the word embedding size is set to 1024, and the attention head is set to 16.\n \\item \\textbf{SA-Transformer \\citep{yang2021wets}.} The segment-aware transformer model, which replaces the self-attention of the naive transformer with the segment-aware self-attention, further injects segment information into the self-attention, so that it behaves differently according to the segment information of the token. Its parameter settings are the same as those of Transformer-base.\n \\item \\textbf{DynamicConv \\citep{wu2018pay}.} The dynamic convolution model that predicts a different convolution kernel at every time-step. We set both encoding GLU and decoding GLU to 1 in the experiment.\n\\end{itemize}\n\n\n\\subsection{Build Synthetic Corpora}\n\\label{build}\nSince there are few golden corpora available for training, it is necessary to build a synthetic corpus to make up for the lack of data. We build synthetic data through the following three strategies and use the mixed data for model pre-training.\n\n\\subsubsection{Building on Golden Parallel Data}\n\\label{golden}\n\nFollowing the method of \\citet{yang2021wets}, we construct synthetic data on the golden parallel corpus. Given a sentence pair $\\mathbf x = \\{x_1, x_2, \\ldots, x_n \\}$ and $\\mathbf r = \\{ r_1, r_2, \\ldots, r_m \\}$ from the golden parallel corpus, we randomly sample a sub-segment $\\mathbf w = \\{r_i, r_{i+1}, \\ldots, r_j\\}$ of $\\mathbf r$, we mask the sub-segment in sentence $\\mathbf r$ to get $\\mathbf{r^{-w}} = \\{r_1, r_2, \\ldots, r_{i-1},\\mathrm{[MASK]},r_{j+1}, \\ldots, r_m\\}$, and use $\\mathbf w$ as an alternative suggestion. 
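The random span-masking construction just described can be sketched as follows. This is a minimal illustration (function and variable names are ours, not from the paper), with span-length sampling simplified to a uniform choice:

```python
import random

MASK = "[MASK]"

def make_ts_example(src_tokens, tgt_tokens, rng=random):
    """Build one synthetic TS example from a golden sentence pair:
    sample a random sub-segment of the target, replace it by [MASK],
    and keep the sub-segment as the alternative suggestion."""
    m = len(tgt_tokens)
    i = rng.randrange(m)       # span start
    j = rng.randrange(i, m)    # span end (inclusive)
    suggestion = tgt_tokens[i:j + 1]
    masked_tgt = tgt_tokens[:i] + [MASK] + tgt_tokens[j + 1:]
    return src_tokens, masked_tgt, suggestion
```

Training pairs are then formed by splicing the source and the masked target as the model input, with the suggestion as the target output.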
We compute statistics on the span lengths in the golden data and use them to determine the lengths of the masked spans, so that they better match the golden distribution.\n\n\\subsubsection{Building on Pseudo Parallel Data}\n\nThe prediction of alternative suggestions requires the translation context, which cannot be provided by the golden parallel corpus. Therefore, we still follow \\citet{yang2021wets} and use the same approach as described in Section \\ref{golden} to construct synthetic data on the pseudo-parallel corpora consisting of source sentences and machine translation output sentences. \n\n\\subsubsection{Building with Quality Estimation}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.45]{figure1.pdf}\n \\caption{Schematic diagram of building synthetic corpora with quality estimation. $\\mathbf x$ is the source sentence, $\\mathbf m$ is the machine translation sentence, $\\mathbf r$ is the reference sentence, and $W_h$ and $W_l$ represent words with high and low confidence, respectively.}\n \\label{fig:qe}\n\\end{figure*}\n\nThe TS task is to predict the correct alternative suggestion given the translation context. However, when sampling on the golden parallel corpus, the context does not match the translation output, and when sampling on the pseudo-parallel corpus, the alternative suggestions may be incorrect. Therefore, the above two construction strategies are not optimal.\n\nWe explore a method that is closer to real scenarios, as shown in Figure~\\ref{fig:qe}. First, the word-level translation quality estimation (QE) model is used to estimate the confidence of the words in the translation sentence, and the continuous span with low confidence (that is, poor translation) is selected. 
Then, the translation sentence is aligned with the reference sentence through the alignment model, and the sub-segment corresponding to the span in the reference is selected as the alternative suggestion.\n\nMore specifically, we use a masked language model as our QE model, following the method of \\citet{zheng2021self}. To train the QE model, we splice the source sentence $\\mathbf x_i$ and the reference sentence $\\mathbf r_i$ of the golden parallel corpus, where some words in $\\mathbf r_i$ are masked to get $\\mathbf r^{-w}_i$, and the QE model is optimized to minimize the following loss function:\n\n\\begin{equation}\n \\mathcal{L} = -\\sum_{i=1}^N \\log p(\\mathbf{r}^w_i | \\mathbf{x}_i,\\mathbf{r}^{-w}_i; \\boldsymbol{\\theta})\n\\end{equation}\n\n\\noindent where $N$ is the number of golden parallel sentences, $\\mathbf{r}^w_i$ is the masked part of the reference sentence and $\\boldsymbol{\\theta}$ is the model parameter.\n\nDuring inference, the source and translation sentences of the pseudo-parallel corpus are spliced and fed into the QE model. The model scores each word of the translation sentence according to the probability of recovering it after masking, and words with lower scores are considered poor translations.\n\nAfter that, we train a word alignment model \\citep{Lai2022cross} using the translated sentences and reference sentences. To ensure high alignment quality, we filter out sentences shorter than 5 or longer than 100 tokens and randomly sample 5M sentence pairs for training. We use the trained alignment model to align the machine translation sentence and the reference sentence. The sub-segment in the reference that aligns with the poorly translated words described above is selected as the alternative suggestion.\n\n\n\\subsection{Additional Pre-Training Phase with In-Domain Data}\n\nThe large-scale synthetic corpus and the human-annotated golden corpus are built from data of different domains. 
To bridge this difference, we introduce an additional pre-training stage. We filter data similar to the golden corpus as in-domain data and use them for a second pre-training phase after pre-training the model on the large-scale synthetic corpus.\n\nIn particular, we use BERT \\citep{devlin2019bert} to construct a discriminator to identify in-domain data. The discriminator consists of a binary classifier trained to distinguish between in-domain and out-of-domain sentences. To train it, source sentences from the golden corpus are used as positive examples and source sentences from the synthetic corpus as negative examples. We upsample the golden corpus by a factor of 10, and randomly subsample the same number of sentences from the synthetic corpus. For each input source sentence, the discriminator predicts the probability that the sentence is in-domain. Sentences with probabilities greater than a certain threshold are classified as in-domain sentences.\n\nAfter the above two phases of pre-training, we use the human-annotated golden corpus for fine-tuning and test the final model.\n\n\n\\section{Experiments and Results}\n\n\\subsection{Setup}\n\n\\begin{table}[]\n \\centering\n \\begin{tabular}{lc c c c}\n \\midrule[1pt]\n \\textbf{Corpus} & \\textbf{golden} & \\textbf{pseudo} & \\textbf{with QE} \\\\\n \\hline\n LS en$\\Leftrightarrow$de & 9.8M & 9.8M & 4.7M \\\\\n \n LS en$\\Leftrightarrow$zh & 20M & 20M & \u2013 \\\\\n \n IND en$\\Rightarrow$de & 0.8M & 0.8M & 0.4M \\\\\n \n IND de$\\Rightarrow$en & 0.7M & 0.7M & 0.3M \\\\\n \\midrule[1pt]\n \\end{tabular}\n \\caption{Statistics of constructed synthetic data in our experiments, where LS stands for large-scale data and IND stands for in-domain data.}\n \\label{tab:data}\n\\end{table}\n\n\\begin{table}[]\n \\centering\n \\begin{tabular}{c|c c c c}\n \\midrule[1pt]\n \\multirow{2}*{System} & \\multicolumn{4}{c}{Translation direction} \\\\\n & zh-en & en-zh & de-en & en-de\\\\\n \\hline\n Baseline & 
25.51 & \\textbf{36.28} & 31.20 & 29.48 \\\\\n Ours & \\textbf{28.56} & 33.33 & \\textbf{36.30} & \\textbf{42.61} \\\\\n \\midrule[1pt]\n \\end{tabular}\n \\caption{BLEU scores on the WMT 2022 TS test set.}\n \\label{tab:result}\n\\end{table}\n\nWe submitted systems for the English-Chinese (en-zh) and English-German (en-de) bidirectional translation suggestion tasks. We mix en-zh data from WMT'19 and WikiMatrix, and en-de data from WMT'14 and WikiMatrix, respectively, to construct a synthetic dataset. We follow \\citet{yang2021wets} to preprocess the data, and mix the data constructed by the three strategies described in Section \\ref{build} as our large-scale synthetic data. The statistics of the constructed large-scale (LS) synthetic data and in-domain (IND) synthetic data are shown in Table \\ref{tab:data}. Note that for the experiments in the en-zh translation direction, we do not apply the construction strategy with QE or the pre-training phase with in-domain data. All our models are implemented based on Fairseq \\citep{ott2019fairseq}. We use the same data for all models in the two pre-training phases and in fine-tuning.\n\n\n\\subsection{Results}\n\nWe report the results of our method on the development and test sets of the translation suggestion task of WMT'22. SacreBLEU\\footnote{\\url{https:\/\/github.com\/mjpost\/sacrebleu}} is used to compute the BLEU score as a quality estimate relative to a human reference. 
We report the experimental results of our system and the baseline system \\citep{yang2021wets} on the test set in Table \\ref{tab:result}, and for the baseline system, we directly use their experimental results.\n\n\\begin{table}[]\n \\centering\n \\begin{tabular}{l|c}\n \\midrule[1pt]\n \\textbf{System} & \\textbf{BLEU} \\\\\n \\hline\n Do nothing & 18.24 \\\\\n \\ + on golden and pseudo corpus & 26.91 \\\\\n \\ + with quality estimation & 30.72 \\\\\n \\ + IND pre-training phase & 32.95 \\\\\n \\midrule[1pt]\n \\end{tabular}\n \\caption{BLEU scores on the English-German development set for systems based on the SA-Transformer model under different strategies.}\n \\label{tab:strategy}\n\\end{table}\n\n\nAs can be seen from Table \\ref{tab:result}, our system beats the baseline system in three translation directions, especially in the en-de direction, where our system surpasses the baseline by 13.13 BLEU. \n\n\n\\begin{table}[]\n \\centering\n \\begin{tabular}{c|c}\n \\midrule[1pt]\n \\textbf{Model} & \\textbf{BLEU} \\\\\n \\hline\n Transformer-base (A) & 32.92 \\\\\n Transformer-big (B) & 34.73 \\\\\n SA-Transformer (C) & 32.95 \\\\\n DynamicConv (D) & 34.03 \\\\\n Ensemble (A + B + C + D) & \\textbf{35.81} \\\\\n \n \\midrule[1pt]\n \\end{tabular}\n \\caption{BLEU scores on the development set for systems under different models in the English-German direction.}\n \\label{tab:result_dev}\n\\end{table}\n\n\nWe also report the results of the system on the development set of English-German translation directions to analyze the effectiveness of different models and strategies. In Table \\ref{tab:strategy}, we show the results of the system based on the SA-Transformer model under different strategies. ``Do nothing'' means we only train with the provided training set. It can be seen that the strategy of constructing synthetic data with quality estimation (QE) and the additional pre-training with the in-domain (IND) data stage can bring about a great improvement. 
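The QE-based construction strategy evaluated above can be sketched as follows. The per-token confidence scores and the MT-to-reference alignment are assumed to be produced by the QE and alignment models described earlier; all names and the fixed span width are illustrative:

```python
MASK = "[MASK]"

def lowest_confidence_span(scores, width):
    """Indices (start, end) of the contiguous span of `width` tokens
    with the lowest summed QE confidence, i.e. the worst-translated span."""
    start = min(range(len(scores) - width + 1),
                key=lambda i: sum(scores[i:i + width]))
    return start, start + width - 1

def build_qe_example(src, mt, ref, scores, align, width=2):
    """Mask the low-confidence span of the MT output and take the
    aligned reference sub-segment as the alternative suggestion.
    `align` maps MT token positions to lists of reference positions."""
    i, j = lowest_confidence_span(scores, width)
    ref_pos = sorted(p for k in range(i, j + 1) for p in align.get(k, []))
    suggestion = ref[ref_pos[0]:ref_pos[-1] + 1]
    masked_mt = mt[:i] + [MASK] + mt[j + 1:]
    return src, masked_mt, suggestion
```

In the actual pipeline the masked span boundaries come from the QE model's word-level confidences and the suggestion from the trained word aligner, rather than from the toy inputs used here.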
\n\nIn Table \\ref{tab:result_dev}, we present the results of systems based on different models and the model ensemble. The ensemble brings a clear improvement and achieves the best results. \n\n\\section{Conclusion}\n\n We describe our contribution to the Translation Suggestion Shared Task of WMT'22. We propose a strategy that uses a quality estimation model to construct synthetic data, making the masked data closer to real scenarios. Furthermore, we introduce an additional phase of pre-training with in-domain data to reduce the gap between the synthetic corpus and the golden corpus. Experimental results demonstrate the effectiveness of our strategy. Considering the heavy labor of annotating TS data, we think data augmentation is the most important issue that should be addressed. In the future, we will put more effort into the data generation method, to make the most of openly-accessible parallel data.\n\n\n\n\\section*{Limitations}\n\nThe strategy of constructing synthetic data based on quality estimation proposed in this paper can automatically sample the incorrectly translated spans in the translations, and find the correct alternative suggestions through the alignment. It is a solution that conforms to real scenarios, and the experimental results have also shown it to be effective. However, in our experiments, we find that the quality estimation and alignment phases require a large additional time overhead, and we hope to explore more efficient solutions in future research.\n\n\n\\section*{Acknowledgements}\nThe research work described in this paper has been supported by the National Key R\\&D Program of China (2020AAA0108001) and the National Natural Science Foundation of China (No. 61976016, 61976015, and 61876198). 
The authors also would like to thank the WMT'22 shared task organizers for organizing this competition and for providing open source code and models.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\n\nA detailed and precise knowledge of the nucleon spectroscopy is undoubtedly one of the cornerstones for our \nunderstanding of the strong interaction in the non-perturbative regime. Today's privileged way to get information \non the excited states of the nucleon is light meson photo- and electroproduction. The corresponding database \nhas considerably expanded over the last years thanks to a combined effort of a few dedicated facilities worldwide. \nNot only did the recent experiments bring a quantitative improvement by measuring cross sections with \nunprecedented precision for a large number of channels but they also allowed a qualitative leap by providing \nfor the first time high quality data on polarization observables. It is well known -- and now \nwell established -- that these variables, being interference terms of various multipoles, bring unique \nand crucial constraints for partial wave analysis, hence facilitating the identification of resonant \ncontributions and making parameter extraction more reliable.\n\nFrom this perspective, $K^+\\Lambda$ photoproduction offers unique opportunities. Because the \n$\\Lambda$ is a self-analyzing particle, several polarization observables can be \"easily\" measured \nvia the analysis of its decay products. As a consequence, this reaction already possesses\nthe richest database with results on the differential cross section \\cite{bra06}-\\cite{sum06}, \ntwo single polarization observables ($\\Sigma$ and $P$) \\cite{gla04}-\\cite{zeg03} and two double polarization \nobservables ($C_x$ and $C_z$) recently measured by the CLAS collaboration \\cite{bra07}. 
\nOn the partial wave analysis side, the situation \nis particularly encouraging, with most models concluding that it is necessary to incorporate \nnew or poorly known resonances to reproduce the full set of data. Some discrepancies do \nremain nonetheless, either on the number of resonances used or on their identification. \nTo lift the remaining ambiguities, new polarization observables are needed, calling for new experiments.\n\nIn the present work, we report on first measurements of the beam-recoil observables $O_x$ and $O_z$\nfor the reaction $\\gamma p \\rightarrow K^+\\Lambda$ over large energy (from threshold to 1500 MeV)\nand angular ($\\theta_{cm} = 30-140^0$) ranges. The target asymmetry $T$, indirectly extracted from the data, \nis also presented.\n \n\\section{Experimental set-up}\n\\label{setup}\n\nThe experiment was carried out with the GRAAL facility (see \\cite{bar05} \nfor a detailed description), installed at the\nEuropean Synchrotron Radiation Facility (ESRF) in Grenoble (France). The\ntagged and linearly polarized $\\gamma$-ray beam is produced by Compton scattering of \nlaser photons off the 6.03~GeV electrons circulating in the storage ring.\n\nIn the present experiment, we have used a set of UV lines at 333, 351 and \n364~nm produced by an Ar laser, giving 1.53, 1.47 and 1.40~GeV $\\gamma$-ray \nmaximum energies, respectively. Some data were also taken with the green line \nat 514~nm (maximum energy of 1.1 GeV).\n\nThe photon energy is provided by an internal tagging system. The position of the\nscattered electron is measured by a silicon microstrip detector (128 strips with \na pitch of 300~$\\mu$m and a thickness of 1000~$\\mu$m). The measured energy resolution of \n16~MeV is dominated by the energy dispersion of the electron beam\n(14 MeV - all resolutions are given as FWHM). 
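The quoted maximum energies follow from Compton backscattering kinematics; a quick numerical cross-check (head-on collision assumed, so the computed values come out slightly above the quoted ones, which include beam emittance and crossing effects):

```python
ME = 0.51099895e-3   # electron rest energy [GeV]
HC = 1239.84193      # eV*nm; laser photon energy E_L[eV] = HC / lambda[nm]

def compton_edge(e_beam, wavelength_nm):
    """Maximum energy (GeV) of a laser photon of the given wavelength
    backscattered off a beam electron of energy e_beam (GeV),
    assuming a head-on collision."""
    e_laser = HC / wavelength_nm * 1e-9   # laser photon energy in GeV
    x = 4.0 * e_beam * e_laser / ME**2
    return e_beam * x / (1.0 + x)

# UV lines: 333 nm -> ~1.54 GeV, 351 nm -> ~1.48 GeV, 364 nm -> ~1.44 GeV;
# green line: 514 nm -> ~1.10 GeV, matching the value quoted in the text.
```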
\nThe energy calibration is extracted run by run from the fit of the Compton edge position with a\nprecision of $\\sim$10~$\\mu$m, equivalent to\n$\\Delta E_\\gamma\/E_\\gamma \\simeq 2 \\times 10^{-4}$ (0.3~MeV at 1.5~GeV).\nA set of plastic scintillators used for time measurements is placed \nbehind the microstrip detector. Thanks to a specially designed electronic module which\nsynchronizes the detector signal with the RF of the machine, the\nresulting time resolution is $\\approx$100~ps.\nThe coincidence between the detector signal and the RF is used as a start for all Time-of-Flight (ToF) \nmeasurements and is part of the trigger of the experiment. \n\nThe energy dependence of the $\\gamma$-ray beam polarization was determined \nfrom the Klein-Nishina formula, taking into account the laser and electron beam emittances.\nThe UV beam polarization is close to 100\\% at the maximum energy and decreases smoothly with\nenergy to around 60\\% at the $K\\Lambda$ \nthreshold (911~MeV). Based on detailed studies \\cite{bar05}, it was found that\nthe only significant source of error for the\n$\\gamma$-ray polarization $P_\\gamma$ comes from the laser beam\npolarization ($\\delta P_\\gamma \/ P_\\gamma$=2\\%).\n\nA thin monitor is used to measure the beam flux (typically 10$^6$ $\\gamma$\/s). The monitor \nefficiency (2.68$\\pm$0.03\\%) was estimated by comparison with the response at low rate of\na lead\/scintillating fiber calorimeter. \n\nThe target cell consists of a hollow aluminum cylinder of 4~cm in diameter closed by thin\nmylar windows (100~$\\mu$m) at both ends. Two different target lengths (6\nand 12~cm) were used for the present experiment. The target was filled with \nliquid hydrogen at 18~K ($\\rho \\approx 7 \\times 10^{-2}$~g\/cm$^3$).\n\nThe 4$\\pi$ LA$\\gamma$RANGE detector of the GRAAL set-up allows the detection of\nboth neutral and charged particles (fig. \\ref {sch}). 
The\napparatus is composed of two main parts: a central one (25$^0\\leq \\theta \\leq\n155^0$) and a forward one ($\\theta \\ \\leq \\ 25^0$).\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{schema_det.eps} \n\\end{center}\n\\caption{Schematic view of the LA$\\gamma$RANGE detector: BGO calorimeter (1), plastic scintillator barrel (2),\ncylindrical MWPCs (3), target (4), plane MWPCs (5), double plastic scintillator hodoscope (6)\n(the drawing is not to scale).}\n\\label{sch}\n\\end{figure}\n\nThe charged particle tracks are measured by a set of \nMultiWire Proportional Chambers (MWPC) (see \\cite{lle07} \nfor a detailed description). \nTo cover forward angles, two plane chambers,\neach composed of two planes of wires, are used.\nThe detection efficiency of a track is about 95\\% and \nthe average polar and azimuthal resolutions are 1.5$^0$ and 2$^0$, respectively.\nThe central region is covered by two coaxial cylindrical chambers.\nSingle-track efficiencies have been extracted for $\\pi^0 p$ and $\\pi^+ n$ reactions\nand were found to be $\\geq$90\\%, in agreement with the simulation.\nSince this paper deals with polarization observables, no special study was done\nto assess the efficiency of multi-track events.\nAngular resolutions were also estimated via simulation, giving 3.5$^0$ in $\\theta$\nand 4.5$^0$ in $\\varphi$.\n\nCharged particle identification in the central region\nis obtained by the dE\/dx technique thanks to a plastic scintillator barrel \n(32 bars, 5~mm thick, 43~cm long) with an energy resolution\nof $\\approx$20\\%. \nFor the charged particles emitted in the forward\ndirection, a Time-of-Flight measurement is provided by a double plastic\nscintillator hodoscope (300$\\times$300$\\times$3~cm$^3$) placed at a distance of \n3~m from the target and having a resolution of $\\approx$600~ps. This detector also provides \na measure of the energy loss dE\/dx. 
Energy calibrations \nwere extracted\nfrom the analysis of the $\\pi^0 p$ photoproduction reaction while\nthe ToF calibration of the forward wall was obtained from fast electrons\nproduced in the target.\n\nPhotons are detected in a BGO calorimeter made of 480 ($15 \\theta \\times 32 \\varphi$)\ncrystals, each of 21 radiation lengths. They are identified\nas clusters of adjacent crystals (3 on average for an energy threshold of 10~MeV per crystal)\nwith no associated hit in the barrel.\nThe measured energy resolution is 3\\% on average ($E_\\gamma$=200-1200~MeV). \nThe angular resolution is 6$^0$ and 7$^0$ for polar and azimuthal\nangles, respectively ($E_\\gamma \\geq$ 200~MeV and $l_{target}$=3~cm).\n\n\\section{Data analysis}\n\\label{analysis}\n\n\\subsection{Channel selection}\n\\label{event_sel}\n\nFor the present results, the charged decay of the $\\Lambda$ ($\\Lambda \\rightarrow p\\pi^-$, BR=63.9\\%) \nwas considered and the same selection method used in our previous publication on $K\\Lambda$ photoproduction \n\\cite{lle07} was applied. Only the main points will be recalled in the following.\n\nOnly events with three tracks and no neutral cluster detected in the BGO \ncalorimeter were retained. In the absence of a direct \nmeasurement of energy and\/or momentum of the charged particles,\nthe measured angles ($\\theta$, $\\varphi$) of the three tracks were combined with\nkinematical constraints to calculate momenta. Particle identification was then\nobtained from the association of the calculated momenta with dE\/dx and\/or ToF\nmeasurements. \n\nThe main source of background is the $\\gamma p \\rightarrow p \\pi^+ \\pi^-$\nreaction, a channel with a similar final state and a cross section a hundred times larger.\nSelection of the $K\\Lambda$ final state was achieved by applying narrow cuts on the following set of\nexperimental quantities:\n\n\\begin{itemize}\n\n\\item[.] Energy balance.\n\\vspace{0.3cm}\n\n\\item[.] 
Effective masses of the three particles \nextracted from the combination of measured dE\/dx and ToF (only at forward angles) with\ncalculated momenta.\n\\vspace{0.3cm}\n\n\\item[.] Missing mass $m_{\\gamma p- K^{+}}$ evaluated from\n$E_\\gamma$, $\\theta_K$ (measured) and $p_K$ (calculated).\n\\vspace{0.3cm}\n\n\\end{itemize}\n\nFor each of these variables, the width $\\sigma$ of the corresponding distribution\n(Gaussian-like shape) was extracted from a Monte-Carlo simulation of the \napparatus response based on the GEANT3 package of the CERN library. \n\nTo check the quality of the event selection, the distribution of the $\\Lambda$ decay length\nwas used due to its high sensitivity to background contamination.\n\nEvent by event, track information and $\\Lambda$ momentum were combined to\nobtain the distance $d$ between the reaction and decay vertices. \nThe $\\Lambda$ decay length\nwas then calculated with the usual formula $ct_{\\Lambda}=d\/(\\beta_{\\Lambda}*\\gamma_{\\Lambda})$.\nFig. \\ref {tfkl} shows the resulting\ndistributions for events selected with all cuts at $\\pm$2$\\sigma$ (closed circles)\ncompared with events without cuts (open circles). \nThese spectra were corrected\nfor detection efficiency losses estimated from the Monte-Carlo simulation\n(significant only for ct$\\ge$5~cm). It should be noted that the deficit in \nthe first bins is attributed to finite resolution effects not fully taken \ninto account in the simulation.\n\nThe first spectrum was fitted for ct$\\geq$1~cm by an exponential function\n$\\alpha*exp(-ct\/c\\tau)$ with $\\alpha$ and $c\\tau$ as free parameters.\nThe fitted $c\\tau$ value (8.17$\\pm$0.31~cm) is in good agreement with the PDG expectation \nfor the $\\Lambda$ mean free path ($c\\tau_\\Lambda$=7.89~cm) \\cite{pdg04}.\n\nBy contrast, the spectrum without cuts is dominated by $p \\pi^+ \\pi^-$ background\nevents. 
As expected, they contribute mostly to small ct values ($\\le$2-3~cm),\nmaking the shape of this distribution highly sensitive to background contamination.\nFor instance, a pronounced peak already shows up when opening selection cuts at \n$\\pm$3$\\sigma$.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{temps_log_2.eps} \n\\end{center}\n\\caption{Reconstructed $\\Lambda$ decay length spectrum after all selection cuts (closed circles)\nfor events with at least two tracks in the cylindrical chambers.\nThe solid line represents the fit with an \nexponential function $\\alpha*exp(-ct\/c\\tau)$ where $\\alpha$ and\n$c\\tau$ are free parameters. The second distribution \n(open circles) was obtained without applying selection cuts.\nIt corresponds to the main background reaction\n($\\gamma p \\rightarrow p\\pi^{+}\\pi^{-}$) which, as expected, contributes\nonly to small ct values.}\n\\label{tfkl}\n\\end{figure}\n\nA remaining source of background, which cannot be seen in the ct plot presented\nabove, originates from the contamination by the reaction \n$\\gamma p \\rightarrow K^+\\Sigma^0$. Indeed, events where the decay photon is not detected\nare retained by the first selection step. Although these events are kinematically\nanalyzed as $K\\Lambda$ ones, most of them are nevertheless rejected by the selection cuts.\nFrom the simulation, this contamination was found to be of the order of 2\\%. \n\nAs a further check of the quality of the data sample, the missing mass spectrum\nwas calculated. One should remember that the missing mass is not directly \nmeasured and is not used as a criterion for the channel identification.\nThe spectrum presented in fig. \\ref {mkl} (closed circles) is in fair\nagreement with the simulated distribution (solid line). 
Some slight\ndiscrepancies can nevertheless be seen in the high energy tail of the spectra.\nThe simulated missing mass distribution of the contamination from the\n$\\gamma p \\rightarrow K^+\\Sigma^0$ reaction,\nalso displayed in fig. \\ref {mkl}, clearly indicates that\nsuch a background cannot account for the observed differences. Rather,\nthese are attributed to the summation of a large number of data taking periods\nwith various experimental configurations (target\nlength, wire chambers, green vs UV laser line, ...). Although these\nconfigurations were implemented in corresponding simulations,\nsmall imperfections (misalignments in particular) could not be\ntaken into account.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{masse_k_lambda.eps} \n\\end{center}\n\\caption{Distribution of the missing mass $m_{\\gamma p- K^{+}}$ reconstructed from measured $E_\\gamma$ and $\\theta_K$ and calculated $p_K$.\nData after all selection cuts (closed circles) are compared to the simulation (solid line). \nThe expected contribution from the reaction $\\gamma p \\rightarrow K^+\\Sigma^0$\nis also plotted (note that it is not centered on the $\\Sigma^0$ mass due to kinematical constraints \nin the event analysis). \nThe vertical arrow indicates the $\\Lambda$ mass.}\n\\label{mkl}\n\\end{figure}\n\nTo summarize, thanks to these experimental checks, we are confident that the level of \nbackground in our selected sample is limited. 
This is corroborated by the simulation from which the\nestimated background contamination (multi-pions and $K^+\\Sigma^0$ contributions) never exceeds 5\\% \nwhatever the incident photon energy or the meson recoil angle.\n\n\\subsection{Measurement of $O_x$, $O_z$ and $T$}\n\nAs will be shown below, the beam-recoil observables $O_x$ and $O_z$, as well as the target \nasymmetry $T$, can be extracted from the angular distribution of the $\\Lambda$ decay proton.\n\n\\subsubsection{Formalism}\n\\label{expl}\n\nFor a linearly polarized beam and an unpolarized target, the differential cross section\ncan be expressed in terms of the single polarization observables $\\Sigma$, $P$, $T$ \n(beam asymmetry, recoil polarization, target asymmetry, respectively) and of the double polarization observables $O_x$,\n$O_z$ (beam-recoil), as follows \\cite{ade90}:\n\n\\noindent\n\\begin{eqnarray} \n\\rho_f \\frac{d\\sigma}{d\\Omega}&=& \\frac{1}{2} \\biggl (\\frac{d\\sigma}{d\\Omega} \\biggr)_{0}\n[1 - P_{\\gamma}\\Sigma \\cos 2\\varphi_\\gamma \\nonumber \\\\\n+\\sigma_{x'} P_{\\gamma} O_x \\sin 2\\varphi_{\\gamma} \\nonumber \\\\\n+\\sigma_{y'} (P - P_{\\gamma} T \\cos 2\\varphi_{\\gamma}) \\nonumber \\\\\n+\\sigma_{z'} P_{\\gamma} O_z \\sin 2\\varphi_{\\gamma}]\n\\label{eq1}\n\\end{eqnarray}\n\n\\noindent\n$\\rho_f$ is the density\nmatrix for the lambda final state and $(d\\sigma \/d\\Omega)_{0}$ the unpolarized differential cross section.\nThe Pauli matrices $\\sigma_{x',y',z'}$ refer to the\nlambda quantization axes defined by $\\hat{z}'$ along the lambda momentum in the center-of-mass frame\nand $\\hat{y}'$ perpendicular to the reaction plane (fig. \\ref{ax}).\n$P_{\\gamma}$ is the degree of linear polarization of the beam along an axis defined by\n$\\hat{n}=\\hat{x}\\cos \\varphi_{\\gamma}+\\hat{y}\\sin \\varphi_{\\gamma}$; the photon quantization axes are\ndefined by $\\hat{z}$ along the proton center-of-mass momentum and $\\hat{y}$=$\\hat{y}'$ (fig. 
\\ref{ax}).\nWe have $\\varphi_{\\gamma}\n=\\varphi_{lab}-\\varphi$, where $\\varphi_{lab}$ and $\\varphi$ are the azimuthal angles of the\nphoton polarization vector and of the reaction plane in the laboratory axes, respectively (fig. \\ref{ax2}).\n\nThe beam-recoil observables $C_x$ and $C_z$ measured by the CLAS collaboration with a \ncircularly polarized beam \\cite{bra07} were obtained\nusing another coordinate system for describing the hyperon polarization,\nthe $\\hat{z}'$ axis being along the incident beam direction instead of the momentum of one of the recoiling\nparticles (see fig. \\ref{ax}). Such a non-standard coordinate system was chosen to give the results their \nsimplest interpretation in terms of polarization transfer but required the model calculations to be adapted. \nTo check the consistency of our results with the CLAS values (see sect. \\ref{combi}), our $O_x$ and $O_z$ \nvalues were converted using the following rotation:\n\n\\noindent\n\\begin{eqnarray}\nO_x^c=-O_x \\cos\\theta_{cm}-O_z \\sin\\theta_{cm} \\nonumber \\\\\nO_z^c=O_x \\sin\\theta_{cm}-O_z \\cos\\theta_{cm}\n\\label{eqconv}\n\\end{eqnarray}\n\nIt should be noted that our definition for $O_x$ and $O_z$ (eq. \\ref{eq1})\nhas the opposite sign with respect to the definition given in \\cite{bar75}, which is used in several hadronic models. 
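The conversion of eq. (\ref{eqconv}) is a plane rotation of the pair $(O_x, O_z)$; a minimal numerical sketch (illustrative only, the function name is ours) is:

```python
import math

def graal_to_clas(o_x, o_z, theta_cm):
    """Rotate (O_x, O_z) from the frame with z' along the Lambda momentum
    to the CLAS frame with z' along the beam, following eq. (eqconv)."""
    c, s = math.cos(theta_cm), math.sin(theta_cm)
    o_x_c = -o_x * c - o_z * s
    o_z_c = o_x * s - o_z * c
    return o_x_c, o_z_c
```

Being a rotation composed with an overall sign, the transformation preserves $O_x^2 + O_z^2$, which provides a quick consistency check.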
\nWe chose the same sign convention as the CLAS collaboration.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{axes.eps} \n\\end{center}\n\\caption{Definition of the coordinate systems and polar angles in the center-of-mass frame (viewed in the reaction plane).\nThe [$\\hat{x}'$,$\\hat{y}'$,$\\hat{z}'$] system is used to specify the polarization of the outgoing $\\Lambda$ baryon:\n$\\hat{z}'$ is along the $\\Lambda$ momentum and $\\hat{y}'$ perpendicular to the reaction plane.\nThe [$\\hat{x}$,$\\hat{y}$,$\\hat{z}$] system is used to specify the incident photon polarization:\n$\\hat{z}$ is along the incoming proton momentum and $\\hat{y}$ identical to $\\hat{y}'$.\nThe polar angle $\\theta_{cm}$ of the outgoing $K^+$ meson is defined with respect to the incident beam\ndirection $\\hat{z}_{lab}$. [$\\hat{x}'_c$,$\\hat{y}'_c$,$\\hat{z}'_c$] is the coordinate system chosen by the\nCLAS collaboration for the $\\Lambda$ polarization. The $\\hat{x}'_c$ and $\\hat{z}'_c$ axes are obtained from\n$\\hat{x}'$ and $\\hat{z}'$ by a rotation of angle $\\pi+\\theta_{cm}$.}\n\\label{ax}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=0.9\\linewidth]{axes_2.eps} \n\\end{center}\n\\caption{Definition of the coordinate systems and azimuthal angles in the center-of-mass frame (viewed perpendicularly to\nthe beam direction).\nThe [$\\hat{x}_{lab}$,$\\hat{y}_{lab}$,$\\hat{z}_{lab}$] system corresponds to the laboratory axes with $\\hat{z}_{lab}$\nalong the incident beam direction. 
\nThe [$\\hat{x}$,$\\hat{y}$,$\\hat{z}$] system, used to define the incident photon polarization,\nhas its axes $\\hat{x}$ and $\\hat{y}$ along and perpendicular to the reaction plane (azimuthal angle $\\varphi$), respectively.\nThe polarization of the beam is along $\\hat{n}$ (azimuthal angle $\\varphi_{lab}$).\nThe two beam polarization states correspond to $\\varphi_{lab}=0^0$ (horizontal) and $\\varphi_{lab}=90^0$\n(vertical) ($\\varphi_{lab}=\\varphi_{\\gamma}+\\varphi$).}\n\\label{ax2}\n\\end{figure}\n\nFor an outgoing lambda with an arbitrary quantization axis $\\hat{n}'$, the \ndifferential cross section becomes:\n\n\\noindent\n\\begin{eqnarray} \n\\mathbf{P}_{\\Lambda} \\cdot \\hat{n}' \\frac{d\\sigma}{d\\Omega}&=&Tr \\Big[ \\mathbf{\\sigma} \\cdot \\hat{n'} \\rho_f \\frac{d\\sigma}{d\\Omega} \\Big]\n\\label{eq2}\n\\end{eqnarray}\n\n\\noindent\nwhere $\\mathbf{P}_{\\Lambda}$ is the polarization vector of the lambda.\nIf the polarization is not observed, the expression for the differential cross section reduces to:\n\n\\noindent\n\\begin{eqnarray} \n\\frac{d\\sigma}{d\\Omega}&=&Tr \\Big[\\rho_f \\frac{d\\sigma}{d\\Omega} \\Big]\n\\label{eq3}\n\\end{eqnarray}\n\n\\noindent\nwhich leads to:\n\n\\noindent\n\\begin{eqnarray} \n\\frac{d\\sigma}{d\\Omega}&=& \\biggl (\\frac{d\\sigma}{d\\Omega} \\biggr)_{0}\n [1 - P_{\\gamma}\\Sigma \\cos 2\\varphi_\\gamma]\n\\label{eq4}\n\\end{eqnarray}\n\n\\noindent\nFor horizontal ($\\varphi_{lab}=0^0$) and vertical ($\\varphi_{lab}=90^0$) photon polarizations,\nthe corresponding azimuthal distributions of the reaction plane are therefore:\n\n\\noindent\n\\begin{eqnarray} \n\\frac{d\\sigma}{d\\Omega}(\\varphi_{lab}=0^0)&=& \\biggl (\\frac{d\\sigma}{d\\Omega} \\biggr)_{0}\n [1 - P_{\\gamma}\\Sigma \\cos 2\\varphi]\n\\label{eq5}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray} \n\\frac{d\\sigma}{d\\Omega}(\\varphi_{lab}=90^0)&=& \\biggl (\\frac{d\\sigma}{d\\Omega} \\biggr)_{0}\n [1 + P_{\\gamma}\\Sigma \\cos 
2\\varphi]\n\\label{eq6}\n\\end{eqnarray}\n\n\\noindent\nThe beam asymmetry values $\\Sigma$ published in \\cite{lle07} were extracted from the fit of the azimuthal\ndistributions of the ratio:\n\\noindent\n\\begin{eqnarray} \n\\frac{N(\\varphi_{lab}=90^0)-N(\\varphi_{lab}=0^0)}{N(\\varphi_{lab}=90^0)+N(\\varphi_{lab}=0^0)}=P_\\gamma \\Sigma \\cos 2\\varphi\n\\label{eqsig}\n\\end{eqnarray}\n\\noindent\n\n\\subsubsection{$\\Lambda$ polarization and spin observables}\n\\label{polo}\n\nThe components of the lambda polarization vector deduced from\neqs. \\ref{eq1} to \\ref{eq4} are:\n\n\\noindent\n\\begin{eqnarray} \nP_{\\Lambda}^{x',z'}&=&\\frac{P_{\\gamma} O_{x,z} \\sin 2\\varphi_{\\gamma}}{1 - P_{\\gamma} \\Sigma \\cos 2\\varphi_{\\gamma}} \n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray} \nP_{\\Lambda}^{y'}&=&\\frac{P - P_{\\gamma} T \\cos 2\\varphi_{\\gamma}}{1 - P_{\\gamma} \\Sigma \\cos 2\\varphi_{\\gamma}}\n\\end{eqnarray}\n\n\\noindent\nThese equations provide the connection between the $\\Lambda$ polarization $\\mathbf{P}_{\\Lambda}$\nand the spin observables $\\Sigma$, $P$, $T$, $O_x$ and $O_z$.\n\nThe integration of the polarization components over the azimuthal angle $\\varphi$ of the reaction plane reads:\n\n\\noindent\n\\begin{eqnarray} \n\\langle P_{\\Lambda}^{i} \\rangle&=&\\frac{\\int P_{\\Lambda}^{i}(\\varphi)\\frac{d\\sigma}{d\\Omega}(\\varphi)d\\varphi}\n{\\int \\frac{d\\sigma}{d\\Omega}(\\varphi)d\\varphi}\n\\end{eqnarray}\n\n\\noindent\nwhere $i$ stands for $x'$, $y'$ or $z'$.\n\nWhen integrating over the full angular domain, the averaged $x'$ and $z'$\ncomponents of the polarization vector vanish while the $y'$ component is equal to $P$.\nOn the other hand, when integrating over appropriately chosen angular sectors, all three averaged \ncomponents can remain different from zero. 
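The two expressions for the polarization components above can be evaluated directly from the five observables; the following sketch (illustrative only, all names are ours) returns the three components of $\mathbf{P}_\Lambda$ for a given beam polarization degree and angle:

```python
import math

def lambda_polarization(sigma, p, t, o_x, o_z, p_gamma, phi_gamma):
    """Lambda polarization components (P^{x'}, P^{y'}, P^{z'}) from the
    spin observables (Sigma, P, T, O_x, O_z), the degree of linear beam
    polarization p_gamma and the angle phi_gamma = phi_lab - phi."""
    denom = 1.0 - p_gamma * sigma * math.cos(2.0 * phi_gamma)
    p_xp = p_gamma * o_x * math.sin(2.0 * phi_gamma) / denom
    p_zp = p_gamma * o_z * math.sin(2.0 * phi_gamma) / denom
    p_yp = (p - p_gamma * t * math.cos(2.0 * phi_gamma)) / denom
    return p_xp, p_yp, p_zp
```

For $\varphi_{\gamma}=0$ the $x'$ and $z'$ components vanish, in line with the remark above that only suitably chosen angular sectors keep all three averaged components non-zero.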
For horizontal and vertical beam polarizations,\nthe following expressions are obtained when considering the four particular $\\varphi$ domains \ndefined hereafter \\cite{cal97} (recalling $\\varphi_{\\gamma}=\\varphi_{lab}-\\varphi$):\n\n\\begin{itemize}\n\n\\item[.] $S_1 = [\\pi \/4,3\\pi \/4] \\cup [5\\pi \/4,7\\pi \/4]$: \\\\\n$\\langle P_{\\Lambda}^{y'} \\rangle(\\varphi_{lab}=0^0)=(P\\pi + 2P_{\\gamma} T) \/ (\\pi + 2P_{\\gamma} \\Sigma)$ \\\\\n$\\langle P_{\\Lambda}^{y'} \\rangle(\\varphi_{lab}=90^0)=(P\\pi - 2P_{\\gamma} T) \/ (\\pi - 2P_{\\gamma} \\Sigma)$\n\\vspace{0.3cm}\n\n\\item[.] $S_2 = [-\\pi \/4,\\pi \/4] \\cup [3\\pi \/4,5\\pi \/4]$: \\\\\n$\\langle P_{\\Lambda}^{y'} \\rangle(\\varphi_{lab}=0^0)=(P\\pi - 2P_{\\gamma} T) \/ (\\pi - 2P_{\\gamma} \\Sigma)$ \\\\\n$\\langle P_{\\Lambda}^{y'} \\rangle(\\varphi_{lab}=90^0)=(P\\pi + 2P_{\\gamma} T) \/ (\\pi + 2P_{\\gamma} \\Sigma)$\n\\vspace{0.3cm}\n\n\\item[.] $S_3 = [0,\\pi \/2] \\cup [\\pi ,3\\pi \/2]$: \\\\\n$\\langle P_{\\Lambda}^{x',z'} \\rangle(\\varphi_{lab}=0^0)=-2 P_{\\gamma} O_{x,z} \/ \\pi$ \\\\\n$\\langle P_{\\Lambda}^{x',z'} \\rangle(\\varphi_{lab}=90^0)=+2 P_{\\gamma} O_{x,z} \/ \\pi$\n\\vspace{0.3cm}\n\n\\item[.] $S_4 = [\\pi \/2,\\pi] \\cup [3\\pi \/2 ,2\\pi]$ : \\\\\n$\\langle P_{\\Lambda}^{x',z'} \\rangle(\\varphi_{lab}=0^0)=+2 P_{\\gamma} O_{x,z} \/ \\pi$ \\\\\n$\\langle P_{\\Lambda}^{x',z'} \\rangle(\\varphi_{lab}=90^0)=-2 P_{\\gamma} O_{x,z} \/ \\pi$\n\\vspace{0.3cm}\n\n\\end{itemize}\n\n\\noindent\nIt should be noted that these four sectors cover the full $\\varphi$ range.\nIn the following, these different combinations of $\\varphi$ sectors and polarization states\nwill be labelled by the sign plus or minus appearing in the corresponding expressions for $\\langle P_{\\Lambda}^{i} \\rangle$.\n\n\\subsubsection{Decay angular distribution}\n\nIn the lambda rest frame, the angular distribution of the decay proton is given by \\cite{lee57}:\n\n\\noindent\n\\begin{eqnarray} \nW(\\cos\\theta_{p})=\\frac{1}{2} \\big(1+\\alpha |\\mathbf{P}_{\\Lambda}| \\cos\\theta_{p} \\big)\n\\label{dist_ang}\n\\end{eqnarray} \n\n\\noindent\nwhere $\\alpha$=0.642$\\pm$0.013 \\cite{pdg04} is the $\\Lambda$ decay parameter and $\\theta_{p}$ the\nangle between the proton direction and the lambda polarization vector.\n\nFrom this
expression, one can derive an angular distribution for each \ncomponent of $\\mathbf{P}_{\\Lambda}$:\n\n\\noindent\n\\begin{eqnarray} \nW(\\cos\\theta_{p}^i)=\\frac{1}{2} \\big(1+\\alpha P_{\\Lambda}^{i} \\cos\\theta_{p}^i \\big)\n\\label{dist_ang2}\n\\end{eqnarray}\n\n\\noindent\nwhere $\\theta_{p}^i$ is now the angle between the proton direction and the quantization axis $i$ ($x'$, $y'$ or $z'$).\n\nSince the components are\ndetermined in the $\\Lambda$ rest frame, a suitable transformation should be applied to calculate them\nin the center-of-mass frame. However, as the boost direction is along the lambda momentum,\nit can be shown that the polarization measured in the lambda rest frame remains unchanged in the\ncenter-of-mass frame \\cite{bra07}.\n\nWhen integrating over all possible azimuthal angles $\\varphi$, the proton angular distribution\nwith respect to the $y'$-axis simply reads:\n\n\\noindent\n\\begin{eqnarray} \nW(\\cos\\theta_{p}^{y'})=\\frac{1}{2} (1 + \\alpha P \\cos\\theta_{p}^{y'})\n\\label{eqwyp}\n\\end{eqnarray}\n\n\\noindent\nwhere $P$ is the recoil polarization.\nOur $P$ results published in \\cite{lle07} were determined directly from the measured up\/down\nasymmetry:\n\n\\noindent\n\\begin{eqnarray}\n\\frac{N(\\cos\\theta_{p}^{y'}>0)-N(\\cos\\theta_{p}^{y'}<0)}{N(\\cos\\theta_{p}^{y'}>0)+N(\\cos\\theta_{p}^{y'}<0)}=\\frac{1}{2} \\alpha P\n\\label{eqp}\n\\end{eqnarray}\n\nWhen integrating over the different angular domains specified above (sectors $S_1+S_2$ for the $y'$-axis and \n$S_3+S_4$ for the $x'$-,$z'$-axes, appropriately combined with the two beam polarization states),\nthe proton angular distributions with respect to \nthe three quantization axes can be written as follows:\n\n\\noindent\n\\begin{eqnarray} \nW_\\pm(\\cos\\theta_{p}^{x',z'})=\\frac{1}{2} \\big(1 \\pm \\alpha \\frac{2P_{\\gamma}O_{x,z}}{\\pi} \\cos\\theta_{p}^{x',z'} \\big)\n\\label{eqwxz}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray} \nW_\\pm(\\cos\\theta_{p}^{y'})=\\frac{1}{2} 
\\big(1 + \\alpha \\frac{P\\pi \\pm 2P_{\\gamma}T}{\\pi \\pm 2P_{\\gamma}\\Sigma} \\cos\\theta_{p}^{y'} \\big)\n\\label{eqwy}\n\\end{eqnarray}\n\n\\subsubsection{Experimental extraction}\n\nAs for $\\Sigma$ and $P$, the observables $O_x$, $O_z$ and $T$ were extracted from ratios of the angular\ndistributions, in order to get rid of most of the distortions introduced \nby the experimental acceptance.\n\nIncluding the detection efficiencies, the yields measured as a function of\nthe proton angle with respect to the different axes read:\n\n\\noindent\n\\begin{eqnarray} \nN_\\pm^{x',z'}=\\frac{1}{2} N_{0\\pm}^{x',z'} \\epsilon_\\pm (\\cos\\theta_{p}^{x',z'}) \\big(1 \\pm \\alpha \\frac{2P_{\\gamma}O_{x,z}}{\\pi} \\cos\\theta_{p}^{x',z'} \\big)\\nonumber \\\\\n\\label{eqnx}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray} \nN_\\pm^{y'}=\\frac{1}{2}N_{0\\pm}^{y'} \\epsilon_\\pm (\\cos\\theta_{p}^{y'}) \\big(1 + \\alpha \\frac{P\\pi \\pm 2P_{\\gamma}T}{\\pi \\pm 2P_{\\gamma}\\Sigma} \\cos\\theta_{p}^{y'} \\big)\n\\label{eqny1}\n\\end{eqnarray}\n\n\\noindent\nFrom the integration of the azimuthal distributions given by eqs. 
\\ref{eq5} and \\ref{eq6} over the different angular sectors,\nit can be shown that:\n\n\\noindent\n\\begin{eqnarray} \nN_{0+}^{x',z'}=N_{0-}^{x',z'}\n\\label{eq7}\n\\end{eqnarray} \n\n\\noindent\n\\begin{eqnarray} \n\\frac{N_{0+}^{y'}}{N_{0-}^{y'}}=\\frac{\\pi + 2P_{\\gamma}\\Sigma}{\\pi - 2P_{\\gamma}\\Sigma}\n\\label{eq8}\n\\end{eqnarray} \n\n\\noindent\nAssuming that the detection efficiencies do not depend on the considered $\\varphi$ sectors\n($\\epsilon_+(\\cos\\theta_{p}^{i})=\\epsilon_-(\\cos\\theta_{p}^{i})$ - \nthe validity of this assumption will be discussed later on), we can then calculate\nthe following sums and ratios from which the efficiency cancels out:\n\n\\noindent\n\\begin{eqnarray}\nN_+^{x',z'} + N_-^{x',z'} = \\frac{1}{2} (N_{0+}^{x',z'}+N_{0-}^{x',z'})\\epsilon_\\pm (\\cos\\theta_{p}^{x',z'})\\nonumber \\\\\n\\label{eqnxz}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\nN_+^{y'} + N_-^{y'} = \\frac{1}{2} (N_{0+}^{y'}+N_{0-}^{y'})\\epsilon_\\pm (\\cos\\theta_{p}^{y'})(1 + \\alpha P \\cos\\theta_{p}^{y'})\\nonumber \\\\\n\\label{eqny}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray} \n\\frac {2 N_+^{x',z'}}{N_+^{x',z'}+ N_-^{x',z'}}=(1+\\alpha \\frac{2P_{\\gamma}O_{x,z}}{\\pi} \\cos\\theta_{p}^{x',z'})\n\\label{eqrxz}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray} \n\\frac {2 N_+^{y'}}{N_+^{y'}+ N_-^{y'}}=\\big( 1+\\frac{2P_\\gamma \\Sigma}{\\pi}\\big) \\big( \\frac{1 + \\alpha \\frac{P\\pi+2P_{\\gamma}T}{\\pi+2P_{\\gamma}\\Sigma} \\cos\\theta_{p}^{y'}}{1+\\alpha P \\cos\\theta_{p}^{y'}} \\big)\n\\label{eqry}\n\\end{eqnarray}\n\nTo illustrate the extraction method of $O_{x}$, $O_{z}$ and $T$, the $N_+$ and $N_-$ experimental\ndistributions together with their sums and ratios,\nsummed over all photon energies and meson polar angles, \nare displayed in figs. \\ref{ox_fit} ($x'$-axis), \\ref{oz_fit} ($z'$-axis) and \\ref{t_fit} ($y'$-axis). \nThanks to the efficiency correction given by the distributions\n$N_++N_-$ (figs. 
\\ref{ox_fit},\\ref{oz_fit},\\ref{t_fit}-c), the ratios $2N_+\/(N_++N_-)$ (figs. \\ref{ox_fit},\\ref{oz_fit},\\ref{t_fit}-d), \nfrom which the efficiency drops out, exhibit the expected dependence\nin $\\cos\\theta_{p}$ and can therefore be fitted by the functions given in the r.h.s. of eqs. \\ref{eqrxz} and \\ref{eqry}. \nThe known energy dependence of $P_{\\gamma}$ and the previously measured values for\n$\\Sigma$ and $P$ \\cite{lle07} are then used to deduce $O_{x}$, $O_{z}$ and $T$ from\nthe fitted slopes.\n\nThe validity of the hypothesis $\\epsilon_+(\\cos\\theta_{p}^{i})=\\epsilon_-(\\cos\\theta_{p}^{i})$ was studied via the Monte Carlo \nsimulation in which a polarized $\\Lambda$ decay was included. \nThe efficiencies $\\epsilon_\\pm$ calculated from the simulation are presented in plots e) of\nfigs. \\ref{ox_fit} to \\ref{t_fit} and the ratios $\\epsilon_-\/\\epsilon_+$ in plots f) (open circles). \nAs one can see, for the $y'$ case, this ratio remains very close to 1 whatever the angle while, for $x'$ and $z'$, \nthe discrepancy from 1 is more pronounced and evolves with the angle. This shows that some corrections\nshould be applied to the measured ratios $2N_+\/(N_++N_-)$ to take into account the non-negligible differences\nobserved between $\\epsilon_+$ and $\\epsilon_-$. The correction factors, plotted in figs. 
\\ref{ox_fit},\\ref{oz_fit},\\ref{t_fit}-f) \n(closed circles), were calculated through the following expression:\n\n\\noindent\n\\begin{eqnarray}\nCor=\\big( \\frac {2 N_+^i}{N_+^i+ N_-^i} \\big)_{gen}\/\\big( \\frac {2 N_+^i}{N_+^i+ N_-^i} \\big)_{sel}\n\\label{rcor}\n\\end{eqnarray}\n\n\\noindent\nwhere {\\it gen} and {\\it sel} stand for generated and selected events.\nSince $\\epsilon_\\pm=(N_\\pm)_{sel}\/(N_\\pm)_{gen}$, it can be re-written as:\n\n\\noindent\n\\begin{eqnarray}\nCor=\\frac{1}{2}\\big( \\frac {2 N_+^i}{N_+^i+ N_-^i} \\big)_{gen} [1+\\frac{\\epsilon_-^i}{\\epsilon_+^i} \\big( \\frac {N_-^i}{N_+^i} \\big)_{gen}]\n\\label{rcor2}\n\\end{eqnarray}\n\n\\noindent\nThe corrected distributions are displayed in the plots g) of figs. \\ref{ox_fit} to \\ref{t_fit}. \nAfter correction, as expected, the slope of the $y'$ distribution is unaffected while the slopes of the $x'$ and $z'$ \ndistributions are slightly modified. These distributions were again fitted \nto obtain the final values of $O_{x}$, $O_{z}$ and $T$.\n\nAs the detection efficiencies and the correction factors calculated from\nthe simulation depend on the input values of $O_{x}$, $O_{z}$ and $T$, an iterative method\nwas used. 
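The core of the first extraction method, fitting the slope of the ratio $2N_+/(N_++N_-)$ and converting it to $O_{x,z}$ via eq. \ref{eqrxz}, can be sketched as follows. This is an illustration only (names are ours); the actual analysis uses a $\chi^2$ fit together with the iterative efficiency correction described above.

```python
import math

ALPHA = 0.642  # Lambda decay parameter

def extract_O(cos_theta, ratio, p_gamma):
    """Least-squares slope of 2N+/(N+ + N-) versus cos(theta_p),
    converted to O_{x,z} = slope * pi / (2 * alpha * P_gamma)."""
    n = len(cos_theta)
    mx = sum(cos_theta) / n
    my = sum(ratio) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(cos_theta, ratio))
             / sum((x - mx) ** 2 for x in cos_theta))
    return slope * math.pi / (2.0 * ALPHA * p_gamma)
```

On an ideal linear ratio generated with a known $O$ value, the input is recovered exactly.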
Three iterations were sufficient to reach stable values.\n\nAs a consistency check, an alternative extraction method was implemented.\nThe angular distributions were directly corrected by the simulated efficiencies \nand fitted according to:\n\n\\noindent\n\\begin{eqnarray}\n\\frac{N_+^{x',z'}+N_-^{x',z',inv}} {\\epsilon_+^{x',z'}+\\epsilon_-^{x',z',inv}}=\\frac{1}{2} N_{0+}^{x',z'}\\big(1+\\alpha \\frac{2P_{\\gamma}O_{x,z}}{\\pi} \\cos\\theta_{p}^{x',z'} \\big)\\nonumber \\\\\n\\label{eqfxz}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\n\\frac{N_+^{y'}} {\\epsilon_+^{y'}}=\\frac{1}{2} N_{0+}^{y'}\\big(1+\\alpha \\frac{P\\pi+2P_{\\gamma}T}{\\pi+2P_{\\gamma}\\Sigma} \\cos\\theta_{p}^{y'} \\big)\n\\label{eqfyp}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\n\\frac{N_-^{y'}} {\\epsilon_-^{y'}}=\\frac{1}{2} N_{0+}^{y'}\\frac{\\pi-2P_{\\gamma}\\Sigma}{\\pi+2P_{\\gamma}\\Sigma}\\big(1+\\alpha \\frac{P\\pi-2P_{\\gamma}T}{\\pi-2P_{\\gamma}\\Sigma} \\cos\\theta_{p}^{y'} \\big)\n\\label{eqfym}\n\\end{eqnarray}\n\n\\noindent\nwhere $N^{inv}$ and $\\epsilon^{inv}$ stand for $N(-\\cos\\theta_{p})$ and $\\epsilon(-\\cos\\theta_{p})$, respectively.\nThis trick, used for the $x'$ and $z'$ cases, allows one to combine the $N_+$ and $N_-$ distributions which have opposite slopes\n(eq. \\ref{eqnx}).\n\nTo illustrate this second extraction method, the corrected distributions, summed over all photon energies\nand meson polar angles, are displayed in figs. \\ref{ox_fit},\\ref{oz_fit}-j) ($x',z'$-axes) and \\ref{t_fit}-h),i) ($y'$-axis).\nThey were obtained by dividing the originally measured distributions (figs. \\ref{ox_fit},\\ref{oz_fit}-h and\n\\ref{t_fit}-a,b) by the corresponding efficiency distributions (figs. 
\\ref{ox_fit},\\ref{oz_fit}-i and \\ref{t_fit}-e).\nIn the $y'$-axis case, the two corrected spectra $N_\\pm\/\\epsilon_\\pm$ were simultaneously fitted.\n\nThis method gives results in good agreement with those extracted from the first method.\nNevertheless, the resulting $\\chi^2$ were found to be significantly larger \n(the global reduced-$\\chi^2$ values are given in figs. \\ref{ox_fit} to \\ref{t_fit} - they are close to 1 for the first method\nand five to ten times larger for the second one). The first method, which\nrelies upon ratios leading to an intrinsic first order efficiency correction, is\nless dependent on the simulation details and was therefore preferred.\n\nThree sources of systematic errors were taken into account: the laser beam polarization ($\\delta P_\\gamma \/ P_\\gamma$=2\\%),\nthe $\\Lambda$ decay parameter $\\alpha$ ($\\delta\\alpha=0.013$) and the hadronic background.\nThe error due to the hadronic background was estimated from the variation\nof the extracted values when cuts were changed from $\\pm$2$\\sigma$ to $\\pm$2.5$\\sigma$.\nGiven the good agreement between the two extraction methods, no corresponding systematic error was considered.\nFor the $T$ observable, the measured values for\n$\\Sigma$ and $P$ being involved, their respective errors were included in the estimation of the uncertainty.\nAll systematic and statistical errors have been summed quadratically.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\linewidth]{ox_kl_fit.eps} \n\\end{center}\n\\caption{Angular distributions for the decay proton in the lambda rest frame with respect to the $x'$-axis:\na) distribution $N_+$;\nb) distribution $N_-$; \nc) sum $N_++N_-$; \nd) ratio $2N_+\/(N_++N_-)$; \ne) efficiencies $\\epsilon_+$ (triangles) and $\\epsilon_-$ (circles) calculated from the simulation;\nf) ratio $\\epsilon_-\/\\epsilon_+$ (open circles) and correction factor $Cor$ (closed circles) given by eq. 
\\ref{rcor} calculated from the simulation;\ng) ratio $2N_+\/(N_++N_-)$ corrected by the factor $Cor$;\nh) distribution $N_++N_-^{inv}$, with $N^{inv}=N(-\\cos\\theta_{p})$; \ni) efficiency $\\epsilon_++\\epsilon_-^{inv}$, with $\\epsilon^{inv}=\\epsilon(-\\cos\\theta_{p})$, calculated from the simulation;\nj) distribution $N_++N_-^{inv}$ corrected by the efficiency $\\epsilon_++\\epsilon_-^{inv}$ .\nThe solid line in d) and g) represents the fit by the (linear) function given in the r.h.s. of eq. \\ref{eqrxz}.\nThe solid line in j) represents the fit by the (linear) function given in the r.h.s. of eq. \\ref{eqfxz}.\nThe reduced-$\\chi^2$ and the $O_x$ value obtained from the fits are reported in d), g) and j).}\n\\label{ox_fit}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\linewidth]{oz_kl_fit.eps} \n\\end{center}\n\\caption{Angular distributions for the decay proton in the lambda rest frame with respect to the $z'$-axis\n(all distributions as in fig. \\ref{ox_fit}).\nThe reduced-$\\chi^2$ and the $O_z$ value obtained from the fits are reported in d), g) and j).}\n\\label{oz_fit}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics[width=1.0\\linewidth]{t_kl_fit.eps} \n\\end{center}\n\\caption{Angular distributions for the decay proton in the lambda rest frame with respect to the $y'$-axis:\na) distribution $N_+$;\nb) distribution $N_-$; \nc) sum $N_++N_-$; \nd) ratio $2N_+\/(N_++N_-)$; \ne) efficiencies $\\epsilon_+$ (triangles) and $\\epsilon_-$ (circles) calculated from the simulation - they are\nsymmetrical about $\\theta_{cm}=90^0$ (we find $\\epsilon_{down}\/\\epsilon_{up}$=1.03);\nf) ratio $\\epsilon_-\/\\epsilon_+$ (open circles) and correction factor $Cor$ (closed circles) given by eq. 
\\ref{rcor} calculated from the simulation;\ng) ratio $2N_+\/(N_++N_-)$ corrected by the factor $Cor$;\nh) distribution $N_+$ corrected by the efficiency $\\epsilon_+$;\ni) distribution $N_-$ corrected by the efficiency $\\epsilon_-$.\nThe solid line in d) and g) represents the fit by the (non-linear) function given in the r.h.s. of eq. \\ref{eqry}.\nThese distributions exhibit a linear behaviour since the overall recoil polarization $P$ is very low (the value\nextracted from the up\/down asymmetry of the raw distribution $N_++N_-$ is -0.12).\nThe solid line in h) and i) represents the simultaneous fit by the (linear) functions given in the r.h.s. of eqs. \\ref{eqfyp} and \\ref{eqfym}.\nThe reduced-$\\chi^2$ and the $T$ value obtained from the fits are reported in d), g), h) and i).}\n\\label{t_fit}\n\\end{figure}\n\n\\section{Results and discussions}\n\nThe complete set of beam-recoil polarization and target asymmetry data is displayed in figs. \\ref{oxkl} to \\ref{pcokl}. These data\ncover the production threshold region ($E_\\gamma$=911-1500 MeV) and a large angular range ($\\theta_{cm}^{kaon}=30-140^0$).\nNumerical values are listed in tables \\ref{table_oxkl} to \\ref{table_tkl}. Error bars are the quadratic sum of statistical\nand systematic errors.\n\n\\subsection{Observable combination and consistency check}\n\\label{combi}\n\nIn pseudoscalar meson photoproduction, one can extract experimentally 16 different quantities: the \nunpolarized differential cross section $(d\\sigma\/d\\Omega)_0$,\n3 single polarization observables ($P$, $T$, $\\Sigma$), 4 beam-target polarizations ($E$, $F$, $G$, $H$), 4 beam-recoil polarizations\n($C_x$, $C_z$, $O_x$, $O_z$) and 4 target-recoil polarizations ($T_x$, $T_z$, $L_x$, $L_z$).\nThe various spin observables are not independent but are constrained by non-linear identities and various\ninequalities \\cite{ade90}, \\cite{bar75}, \\cite{chi97}, \\cite{art07}. 
\nIn particular, of the seven single and beam-recoil polarization observables, only five are independent being\nrelated by the two equations:\n\n\\noindent\n\\begin{eqnarray}\nC_x^2+C_z^2+O_x^2+O_z^2=1+T^2-P^2-\\Sigma^2\n\\label{eqobs1}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\nC_z O_x-C_x O_z=T- P \\Sigma\n\\label{eqobs1b}\n\\end{eqnarray}\n\n\\noindent\nThere are also a number of inequalities involving three of these observables:\n\n\\noindent\n\\begin{eqnarray}\n|T \\pm P| \\leq 1 \\pm \\Sigma\n\\label{eqobs6}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\nP^2+O_x^2+O_z^2 \\leq 1\n\\label{eqobs2}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\n\\Sigma^2+O_x^2+O_z^2 \\leq 1\n\\label{eqobs3}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\nP^2+C_x^2+C_z^2 \\leq 1\n\\label{eqobs4}\n\\end{eqnarray}\n\n\\noindent\n\\begin{eqnarray}\n\\Sigma^2+C_x^2+C_z^2 \\leq 1\n\\label{eqobs5}\n\\end{eqnarray}\n\nThese different identities and inequalities can be used to test the consistency of our present and previous measurements.\nThey can also be used to check the compatibility of our data with the results on $C_x$ and $C_z$ recently published by \nthe CLAS collaboration \\cite{bra07}. \n\nOur measured values for $\\Sigma$, $P$, $T$, $O_x$ and $O_z$ were combined to test the above inequalities. \nEquation \\ref{eqobs1} was used to calculate the quantity $C_x^2+C_z^2$ appearing in expressions \\ref{eqobs4} and \\ref{eqobs5}.\nThe results for the two combinations $|T\\pm P|\\mp \\Sigma$ of the three single polarizations are\npresented in fig. \\ref{tpskl}.\nThe results for the quantities:\n\n\\begin{itemize}\n\n\\item[.] $(P^2+O_x^2+O_z^2)^{1\/2}$, \n\\item[.] $(\\Sigma^2+O_x^2+O_z^2)^{1\/2}$,\n\\item[.] $(1+T^2-P^2-O_x^2-O_z^2)^{1\/2} = (\\Sigma^2+C_x^2+C_z^2)^{1\/2}$,\n\\item[.] $(1+T^2-\\Sigma^2-O_x^2-O_z^2)^{1\/2} = (P^2+C_x^2+C_z^2)^{1\/2}$, \n\n\\end{itemize}\n\n\\noindent\nwhich combine single and double polarization observables, are displayed in figs. 
\\ref{psockl} and \\ref{pckl}. \nAll these quantities should be $\\leq 1$. The plotted uncertainties are given by the standard error\npropagation. Whatever the photon energy or the meson polar angle, \nno violation of the expected inequalities is observed, confirming the internal consistency of our set of data.\n\nSince all observables entering in eqs. \\ref{eqobs1} and \\ref{eqobs1b} were measured either by GRAAL \n($\\Sigma$, $P$, $T$, $O_x$, $O_z$) or by CLAS ($P$, $C_x$, $C_z$ - their $P$ data were confirmed by our\nmeasurements \\cite{lle07}), the\ntwo sets of data can therefore be compared and combined. Within the error bars,\nthe agreement between the two sets of equal combinations $(1+T^2-\\Sigma^2-O_x^2-O_z^2)^{1\/2}$ (GRAAL) and \n$(P^2+C_x^2+C_z^2)^{1\/2}$ (CLAS) is fair (fig. \\ref{pckl}) and tends to confirm the previously observed\nsaturation to the value 1 of $R=(P^2+C_x^2+C_z^2)^{1\/2}$, whatever the energy or angle.\nFig. \\ref{pcokl} displays the values for the combined GRAAL-CLAS quantity $C_z O_x-C_x O_z-T+P\\Sigma$.\nWithin the uncertainties, the expected value (0, from eq. \\ref{eqobs1b}) is obtained, confirming again the overall consistency of\nthe GRAAL and CLAS data.\n\nIt has been demonstrated \\cite{chi97} that the knowledge of the unpolarized\ncross section, the three single-spin observables and at least four double-spin observables - provided\nthey are not all of the same type - is sufficient to determine uniquely the four complex reaction amplitudes.\nTherefore, only one additional double polarization observable measured using a polarized target\nwill suffice to extract these amplitudes unambiguously.\n\n\\subsection{Comparison to models}\n\nWe have compared our results with two models: the Ghent isobar RPR \n(Regge-plus-resonance) model \\cite{cor06}-\\cite{cov08} and the coupled-channel partial wave analysis developed by \nthe Bonn-Gatchina collaboration \\cite{ani05}-\\cite{sar08}. In the following, these models will be referred to as RPR \nand BG, respectively. 
The comparison is shown in figs. \\ref{oxkl} to \\ref{tkl}.\n\nThe RPR model is an isobar model for $K\\Lambda$ photo- and electroproduction. \nIn addition to the Born and kaonic contributions, it includes a Reggeized \nt-channel background which is fixed to high-energy data. \nThe fitted database includes differential cross section, beam asymmetry and recoil polarization \nphotoproduction results. The model variant presented here contains, besides the known $N^*$ resonances \n($S_{11}$(1650), $P_{11}$(1710), $P_{13}$(1720)), the $P_{13}$(1900) state (** in the PDG \\cite{pdg04}) \nand a missing $D_{13}$(1900) resonance. This solution was found \nto provide the best overall agreement with the combined photo- and electroproduction database. \nAs one can see in figs. \\ref{oxkl} to \\ref{tkl}, the RPR prediction (dashed line) qualitatively reproduces \nall observed structures. Interestingly enough, the model best reproduces the data at high energy (1400-1500 MeV), \nwhere the $P_{13}$(1900) and $D_{13}$(1900) contributions are maximal.\n\nThe BG model is a combined analysis of experiments with $\\pi N$, $\\eta N$, \n$K\\Lambda$ and $K\\Sigma$ final states. As compared to the other models,\nthis partial-wave analysis takes into account a much larger database which includes\nmost of the available results (differential cross sections and polarization observables).\nFor the $\\gamma p \\rightarrow K^+\\Lambda$ reaction, the main resonant contributions come from the \n$S_{11}$(1535), $S_{11}$(1650), $P_{13}$(1720), $P_{13}$(1900) and $P_{11}$(1840) resonances.\nTo achieve a good description of the recent $C_x$ and $C_z$ CLAS measurements, the ** $P_{13}$(1900)\nhad to be introduced. It should be noted that, at this stage of the analysis, the contribution \nof the missing $D_{13}$(1900) is significantly reduced as compared to previous versions of the model.\nAs shown in figs. 
\\ref{oxkl}-\\ref{tkl}, this last version (solid line) provides a good overall agreement.\nOn the contrary, the solution without the $P_{13}$(1900) (not shown) fails to reproduce the data.\n\nMore refined analyses with the RPR and BG models are in progress and will be published later on.\nComparison with the dynamical coupled-channel model of Saclay-Argonne-Pittsburgh \n\\cite{jul06}-\\cite{sag08} has also started.\n\n\\section{Summary}\n\nIn this paper, we have presented new results for the reaction $\\gamma p \\rightarrow K^+\\Lambda$\nfrom threshold to $E_\\gamma \\sim$ 1500 MeV. Measurements of the beam-recoil observables $O_x$, $O_z$ and target\nasymmetries $T$ were obtained over a wide angular range. \nWe have compared our results with two isobar models which are in reasonable agreement with the whole\ndata set. They both confirm the necessity to introduce new or poorly known resonances in the 1900 MeV mass region\n($P_{13}$ and\/or $D_{13}$).\n\nIt should be underlined that from now on only one additional double polarization \nobservable (beam-target or target-recoil) would be sufficient to extract the four helicity \namplitudes of the reaction.\n\n\\vspace{5mm}\n\n\\noindent\n{\\bf Acknowledgements}\n\n\\noindent\nWe are grateful to A.V. Sarantsev, B. Saghai, T. Corthals, J. Ryckebusch and P. Vancraeyveld\nfor communication of their most recent analyses and J.M. Richard for fruitful discussions on\nthe spin observable constraints. We thank R. Schumacher for communication of the CLAS\ndata. The support of the technical groups from all contributing\ninstitutions is greatly acknowledged. It is a pleasure to thank the ESRF as a host institution\nand its technical staff for the smooth operation of the storage ring.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe better understanding of the behavior of novel materials with unusual mechanical properties is important in many applications. 
As is well known, the optimization of the topology and geometry of a structure will greatly impact its performance. Topology optimization, in particular, has found many uses in the aerospace industry, the automotive industry, and acoustic devices, to name a few. As one of the most demanding undertakings in structural design, topology optimization has undergone tremendous growth over the last thirty years. Generally speaking, topology optimization of continuum structures has branched out in two directions. One is structural optimization of macroscopic designs, where methods like Solid Isotropic Material with Penalization (SIMP) \\cite{BS04} and the homogenization method \\cite{AllHom}, \\cite{ABFJ97} were first introduced. The other branch deals with optimization of micro-structures in order to elicit a certain macroscopic response or behavior of the resulting composite structure \\cite{BK88}, \\cite{GM14}, \\cite{Sig94}, \\cite{WMW04}. The latter will be the focal point of the current work. \n\nIn the context of linear elastic materials and small-deformation kinematics there is quite a body of work on the design of mechanical meta-materials using inverse homogenization. One of the first works on the subject was carried out in \\cite{Sig94}. The author used a modified optimality criteria method, proposed in \\cite{RZ93}, to optimize a periodic micro-structure so that the homogenized coefficients attained certain target values.\n\nIn the same vein, the authors of \\cite{WMW04} used inverse homogenization and a level set method coupled with the Hadamard boundary variation technique \\cite{AllCon}, \\cite{AJT} to construct elastic and thermo-elastic periodic micro-structures that exhibited certain prescribed macroscopic behavior for a single material and void. 
More recent work was done in \\cite{GM14}, where again inverse homogenization and a level set method coupled with the Hadamard shape derivative were used to extend the class of optimized micro-structures in the context of the smoothed interface approach \\cite{ADDM}, \\cite{GM14}. Namely, for mathematical or physical reasons, a smooth, thin transitional layer of size $2\\epsilon$, where $\\epsilon$ is small, replaces the sharp interface between material and void or between two different materials. The theory that \\cite{ADDM}, \\cite{GM14} develop in obtaining the shape derivative is based on the differentiability properties of the signed distance function \\cite{DZ11} and is mathematically rigorous.\n\nTopology optimization under finite deformation has not undergone the same rapid development as in the case of small-strain elasticity, for obvious reasons. One of the first works on topology optimization in non-linear elasticity appeared as part of \\cite{AJT}, where a non-linear hyper-elastic material of St. Venant-Kirchhoff type was considered in designing a cantilever using a level set method. More recent work was carried out by the authors of \\cite{WSJ14}, who utilized the SIMP method to design non-linear periodic micro-structures using a modified St. Venant-Kirchhoff model.\n\nThe rapid advances of 3D printers have made it possible to print many of these micro-structures, which are characterized by complicated geometries, and this in turn has enabled the testing and evaluation of the mechanical properties of such structures. For instance, the authors of \\cite{Clauetal15} 3D printed and tested a variety of the non-linear micro-structures from the work of \\cite{WSJ14} and showed that the structures, similar in form to the one in {\\sc figure} \\ref{fig:Clauu}, exhibited an apparent Poisson ratio between $-0.8$ and $0$ for strains up to $20\\%$. Preliminary experiments by P. 
Rousseau \\cite{Rou16} on the printed structure of {\\sc figure} \\ref{fig:Clauu} showed that opposite branches of the structure came into contact with one another at a strain of roughly $25\\%$ which matched the values reported in \\cite{Clauetal15}. \n\\begin{figure}[h]\n\\label{fig:Clauu}\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{Clauetal15_a_.png}}\n {(a)}\n&\n\\subf{\\includegraphics[width=56mm]{Clauetal15_b_.png}}\n {(b)}\n\\end{tabular}\n\\caption{A 3D printed material with all four branches on the same plane achieving an apparent Poisson ratio of $-0.8$ with over $20\\%$ strain. On sub-figure (a) is the uncompressed image and on sub-figure (b) is the image under compression. Used with permission from \\cite{Rou16}.}\n\\end{figure}\nTo go beyond the $25\\%$ strain mark, the author of \\cite{Rou16} designed a material where the branches were distributed over different parallel planes (see {\\sc figure} \\ref{fig:Rou}). The distribution of the branches on different planes eliminated contact of opposite branches up to a strain of $50\\%$. A question remains whether or not the shape of the unit cell in {\\sc figure} \\ref{fig:Rou} is optimal. We suspect that it is not, however, the novelty of the actual problem lies in its multi-layer character within the optimization framework of a unit cell with respect to two desired apparent elastic tensors. \n\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{Rou16_a_.png}}\n {(a)}\n&\n\\subf{\\includegraphics[width=35mm]{Rou16_b_.png}}\n {(b)}\n\\end{tabular}\n\\caption{A 3D printed material with two of the branches on a different plane achieving an apparent Poisson ratio of approximately $-1.0$ with over $40\\%$ strain. Sub-figure (a) is the uncompressed image and sub-figure (b) is the image under compression. 
Used with permission from \\cite{Rou16}.}\n\\label{fig:Rou}\n\\end{figure}\n\nOur goal in this work is to design a multi-layer periodic composite with desired elastic properties. In other words, we need to specify the micro-structure of the material in terms of both the distribution as well as its topology. In section 2 we specify the problem setting, define our objective function that needs to be optimized and describe the notion of a Hadamard shape derivative. In section 3 we introduce the level set that is going to implicitly characterize our domain and give a brief description of the smoothed interface approach. Moreover, we compute the shape derivatives and describe the steps of the numerical algorithm. Furthermore, in Section 4 we compute several examples of multi-layer auxetic material that exhibit negative apparent Poisson ratio in 2D. For full 3D systems the steps are exactly the same, albeit with a bigger computational cost.\n\n\\noindent {\\bf Notation}. Throughout the paper we will be employing the Einstein summation notation for repeated indices. As is the case in linear elasticity, $\\vc{\\varepsilon}(\\vc{u})$ will indicate the strain defined by: $\\vc{\\varepsilon}(\\vc{u}) = \\frac{1}{2} \\left ( \\nabla \\vc{u} + \\nabla \\vc{u}^\\top \\right)$, the inner product between matrices is denoted by $\\vc{A}$:$\\vc{B}$ = $tr(\\vc{A}^\\top \\vc{B}) = A_{ij} \\, B_{ji}$. Lastly, the mean value of a quantity is defined as $\\mathcal{M}_Y(\\gamma) = \\frac{1}{|Y|}\\int_Y \\gamma(\\vc{y}) \\, d\\vc{y}$.\n\n\\section{Problem setting}\nWe begin with a brief outline of some key results from the theory of homogenization \\cite{AllHom}, \\cite{BP89}, \\cite{CD00}, \\cite{MV10}, \\cite{SP80}, that will be needed to set up the optimization problem. Consider a linear, elastic, periodic body occupying a bounded domain $\\Omega$ of $ {\\mathbb R} ^N, N = 2, 3$ with period $\\epsilon$ that is assumed to be small in comparison to the size of the domain. 
Moreover, denote by $Y=\\left(-\\dfrac{1}{2},\\dfrac{1}{2}\\right)^N$ the rescaled periodic unit cell. The material properties in $\\Omega$ are represented by a periodic fourth order tensor $\\mathbb{A}(\\vc{y})$ with $\\vc{y}=\\vc{x}\/\\epsilon \\in Y$ and $\\vc{x} \\in \\Omega$ carrying the usual symmetries and it is positive definite:\n\\[ \nA_{ijkl}=A_{jikl}=A_{klij} \\text{ for } i,j,k,l \\in \\{1, \\ldots, N \\}\n\\]\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tikzpicture}[scale=1.0]\n\\draw [step=0.5,thin,gray!40] (-2.6,-1.7) grid (2.6,1.7);\n\n\n\\draw [semithick,black] (0,0) ellipse (2.1 and 1.2);\n\n\\draw [semithick,black] (2.0,1.0) node [left] {$\\Omega$};\n\n\n\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,0.3) (0.3,0.2) (0.2,0.1) (0.2,0.3)}; \n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,0.3) (0.8,0.2) (0.7,0.1) (0.7,0.3)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(1.25,0.3) (1.3,0.2) (1.2,0.1) (1.2,0.3)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,0.8) (0.3,0.7) (0.2,0.6) (0.2,0.8)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,0.8) (0.8,0.7) (0.7,0.6) (0.7,0.8)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,0.3) (-0.2,0.2) (-0.3,0.1) (-0.3,0.3)}; \n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,0.3) (-0.7,0.2) (-0.8,0.1) (-0.8,0.3)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-1.25,0.3) (-1.2,0.2) (-1.3,0.1) (-1.3,0.3)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,0.8) (-0.2,0.7) (-0.3,0.6) (-0.3,0.8)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,0.8) (-0.7,0.7) (-0.8,0.6) (-0.8,0.8)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,-0.3) (-0.2,-0.2) (-0.3,-0.1) (-0.3,-0.3)}; \n\\draw[semithick,gray,fill=gray] plot [smooth cycle] 
coordinates {(-0.75,-0.3) (-0.7,-0.2) (-0.8,-0.1) (-0.8,-0.3)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-1.25,-0.3) (-1.2,-0.2) (-1.3,-0.1) (-1.3,-0.3)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.25,-0.8) (-0.2,-0.7) (-0.3,-0.6) (-0.3,-0.8)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(-0.75,-0.8) (-0.7,-0.7) (-0.8,-0.6) (-0.8,-0.8)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,-0.3) (0.3,-0.2) (0.2,-0.1) (0.2,-0.3)}; \n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,-0.3) (0.8,-0.2) (0.7,-0.1) (0.7,-0.3)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(1.25,-0.3) (1.3,-0.2) (1.2,-0.1) (1.2,-0.3)};\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.25,-0.8) (0.3,-0.7) (0.2,-0.6) (0.2,-0.8)};\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(0.75,-0.8) (0.8,-0.7) (0.7,-0.6) (0.7,-0.8)};\n\n\\draw [->] (1.5,0) -- (5,-1);\n\\draw [->] (1.5,0.5) -- (5,2);\n\n\\draw [semithick,lightgray] (5,-1) -- (8,-1) -- (8,2) -- (5,2) -- (5,-1);\n\n\\draw[semithick,gray,fill=gray] plot [smooth cycle] coordinates {(6.2,-0.3) (6.8,-0.1) (7,0.8) (6.3,1.2)};\n\n\\draw [semithick,black] (8.2,2.3) node [left] {$\\epsilon \\, Y$};\n\n\\draw [<->,semithick,lightgray] (8.2,-1) -- (8.2,2);\n\\draw [semithick,black] (8.2,0.5) node [right] {$\\epsilon$};\n\n\\draw [<->,semithick,lightgray] (5,-1.2) -- (8,-1.2);\n\\draw [semithick,black] (6.5,-1.2) node [below] {$\\epsilon$};\n\n\\end{tikzpicture}\n\\end{center}\n\\caption{Schematic of the elastic composite material that is governed by eq. \\eqref{elas}. 
}\n\\label{fig:hom_schem}\n\\end{figure}\n\nDenoting by $\\vc{f}$ the body force and enforcing a homogeneous Dirichlet boundary condition the description of the problem is,\n\\begin{align}\\label{elas}\n- \\dv{ \\vc{\\sigma}^\\epsilon } &= \\vc{f} & \\text{in } &\\Omega,\\nonumber \\\\\n\\vc{\\sigma}^\\epsilon &= \\mathbb{A}(\\vc{x}\/\\epsilon) \\, \\vc{\\varepsilon}(\\vc{u}^\\epsilon) & \\text{in } &\\Omega, \\\\ \n\\vc{u}^\\epsilon &= \\vc{0} & \\text{on } &\\partial \\Omega. \\nonumber\n\\end{align} \n\nWe perform an asymptotic analysis of $\\eqref{elas}$ as the period $\\epsilon$ approaches $0$ by searching for a displacement $\\vc{u}^{\\epsilon}$ of the form \n\\[\n\t\\vc{u}^{\\epsilon}(\\vc{x}) = \\sum_{i=0}^{+\\infty} \\epsilon^i \\, \\vc{u}^i(\\vc{x},\\vc{x}\/\\epsilon)\n\\]\n\nOne can show that $\\vc{u}^0$ depends only on $\\vc{x}$ and, at order $\\epsilon^{-1}$, we can obtain a family of auxiliary periodic boundary value problems posed on the reference cell $Y.$ To begin with, for any $m,\\ell \\in \\{1, \\ldots, N\\}$ we define $\\vc{E}^{m\\ell}=\\frac{1}{2}(\\vc{e}_m \\otimes \\vc{e}_\\ell + \\vc{e}_{\\ell} \\otimes \\vc{e}_m),$ where $(\\vc{e}_k)_{1 \\le k \\le N}$ is the canonical basis of $ {\\mathbb R} ^N.$ For each $\\vc{E}^{m\\ell}$ we have\n\\begin{align} \\label{local}\n-&\\dv_y \\left( { \\mathbb{A}(\\vc{y})(\\vc{E}^{m\\ell} + \\vc{\\varepsilon}_y(\\vc{\\chi}^{m\\ell})) } \\right) = \\vc{0} & \\text{in } Y,\\nonumber \\\\\n&\\vc{y} \\mapsto \\vc{\\chi}^{m\\ell}(\\vc{y}) &Y-\\text{ periodic}, \\nonumber \\\\\n&\\mathcal{M}_Y (\\vc{\\chi}^{m\\ell}) = \\vc{0}. 
\\nonumber \n\\end{align} \nwhere $\\vc{\\chi}^{m\\ell}$ is the displacement created by the mean deformation equal to $\\vc{E}^{m\\ell}.$ In its weak form the above equation looks as follows:\n\\begin{equation} \\label{local:sol}\n\\text{Find } \\vc{\\chi}^{m\\ell} \\in V \\text{ such that } \\int_Y \\mathbb{A}(\\vc{y}) \\, \\left( \\vc{E}^{m\\ell} + \\vc{\\varepsilon}(\\vc{\\chi}^{m\\ell}) \\right) : \\vc{\\varepsilon}(\\vc{w}) \\, d\\vc{y} = 0 \\text{ for all } w \\in V, \n\\end{equation}\nwhere $V=\\{ \\vc{w} \\in W^{1,2}_{per}(Y; {\\mathbb R} ^N) \\mid \\mathcal{M}_Y(\\vc{w}) = 0 \\}.$ Furthermore, matching asymptotic terms at order $\\epsilon^0$ we can obtain the homogenized equations for $\\vc{u}^0,$\n\\begin{align}\n- \\dv_x{ \\vc{\\sigma}^0 } &= \\vc{f} & \\text{in } &\\Omega,\\nonumber \\\\\n\\vc{\\sigma}^0 &= \\mathbb{A}^H \\, \\vc{\\varepsilon}(\\vc{u}^0) & \\text{in } &\\Omega, \\\\ \n\\vc{u}^0 &= \\vc{0} & \\text{on } &\\partial \\Omega. \\nonumber\n\\end{align} \nwhere $\\mathbb{A}^H$ are the homogenized coefficients and in their symmetric form look as follows,\n\\begin{equation}\\label{hom:coef}\n\tA^H_{ijm\\ell} = \\int_{Y} \\mathbb{A}(\\vc{y})(\\vc{E}^{ij} + \\vc{\\varepsilon}_y(\\vc{\\chi}^{ij})):(\\vc{E}^{m\\ell} + \\vc{\\varepsilon}_y(\\vc{\\chi}^{m\\ell})) \\, d\\vc{y}.\n\\end{equation}\n\n\\subsection{The optimization problem}\n\nAssume that $Y$ is a working domain and consider $d$ sub-domains labeled $S_1,\\ldots,S_d \\subset Y$ that are smooth, open, bounded subsets. 
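On the computational side, once the cell problems \\eqref{local:sol} have been solved, the homogenized coefficients \\eqref{hom:coef} reduce to a cell average that can be approximated by quadrature. The following NumPy sketch is an illustration only (for $N=2$ on a uniform grid; the total strain fields $\\vc{E}^{m\\ell} + \\vc{\\varepsilon}_y(\\vc{\\chi}^{m\\ell})$ are assumed to have been precomputed at the quadrature points, e.g. by a finite element solver):

```python
import numpy as np

def homogenized_tensor(A, strains, cell_area=1.0):
    """Quadrature approximation of the homogenized coefficients
        A^H_{ij ml} = 1/|Y| * integral_Y A(y) (E^{ij} + e_y(chi^{ij}))
                                            : (E^{ml} + e_y(chi^{ml})) dy.

    A       : (P, 2, 2, 2, 2) array, fourth-order tensor at P quadrature points
    strains : dict mapping (m, l) -> (P, 2, 2) total strain E^{ml} + e_y(chi^{ml})
    """
    P = A.shape[0]
    w = cell_area / P                       # uniform quadrature weight
    pairs = [(0, 0), (1, 1), (0, 1)]        # independent strain modes in 2D
    AH = np.zeros((2, 2, 2, 2))
    for (i, j) in pairs:
        for (m, l) in pairs:
            # stress-like field A : (E^{ml} + e(chi^{ml})) at each point
            sig = np.einsum('pijkl,pkl->pij', A, strains[(m, l)])
            val = w * np.einsum('pij,pij->', sig, strains[(i, j)])
            # fill all entries related by the minor symmetries
            for (a, b) in {(i, j), (j, i)}:
                for (c, d) in {(m, l), (l, m)}:
                    AH[a, b, c, d] = val
    return AH
```

For a spatially constant tensor and zero correctors the routine returns the tensor itself, which is a convenient sanity check.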
Define the objective function, \n\\begin{equation} \\label{objective}\nJ(\\mathbf{S}) = \\frac{1}{2} \\norm{\\mathbb{A}^H - \\mathbb{A}^t}^2_{\\eta} \\text{ with } \\mathbf{S}=(S_1,\\ldots,S_d).\n\\end{equation}\nwhere $\\norm{\\cdot}_{\\eta}$ is the weighted Euclidean norm, $\\mathbb{A}^t$, written here component wise, are the specified elastic tensor values, $\\mathbb{A}^H$ are the homogenized counterparts, and $\\eta$ are the weight coefficients carrying the same type of symmetry as the homogenized elastic tensor. We define a set of admissible shapes contained in the working domain $Y$ that have a fixed volume by\n\\[\n \\mathcal{U}_{ad} = \\left \\{ S_i \\subset Y \\text{is open, bounded, and smooth},\\text{ such that } |S_i| = V^t_i, i=1,\\ldots,d \\right \\}.\n\\] \n\nThus, we can formulate the optimization problem as follows, \n\\begin{gather} \\label{opti:prob}\n\\begin{aligned}\n& \\inf_{\\mathbf{S} \\subset \\mathcal{U}_{ad}} J(\\mathbf{S}) \\\\\n& \\vc{\\chi}^{m\\ell} \\text{ satisfies } \\eqref{local:sol}\n\\end{aligned} \n\\end{gather}\n\n\\subsection{Shape propagation analysis}\nIn order to apply a gradient descent method to \\eqref{opti:prob} we recall the notion of shape derivative. As has become standard in the shape and topology optimization literature we follow Hadamard's variation method for computing the deformation of a shape. The classical shape sensitivity framework of Hadamard provides us with a descent direction. The approach here is due to \\cite{MS76} (see also \\cite{AllCon}). 
Assume that $\\Omega_0$ is a smooth, open subset of a design domain $D.$ In the classical theory one defines the perturbation of the domain $\\Omega_0$ in the direction $\\vc{\\theta}$ as \n\n\\[\n\t(Id + \\vc{\\theta})(\\Omega_0) := \\{ \\vc{x} + \\vc{\\theta}(\\vc{x}) \\mid \\vc{x} \\in \\Omega_0 \\}\n\\]\nwhere $\\vc{\\theta} \\in W^{1,\\infty}( {\\mathbb R} ^N; {\\mathbb R} ^N)$ and is tangential on the boundary of $D.$ For small enough $\\vc{\\theta}$, $(Id + \\vc{\\theta})$ is a diffeomorphism in $ {\\mathbb R} ^N$. In other words, every admissible shape is represented by the vector field $\\vc{\\theta}$. This framework allows us to define the derivative of a functional of a shape as a Fr\\'echet derivative.\n\n\\begin{deff} \nThe shape derivative of $J(\\Omega_0)$ at $\\Omega_0$ is defined as the Fr\\'echet derivative in $W^{1,\\infty}( {\\mathbb R} ^N; {\\mathbb R} ^N)$ at $\\vc{0}$ of the mapping $\\vc{\\theta} \\to J((Id + \\vc{\\theta})(\\Omega_0))$:\n\n\\[\n\tJ((Id + \\vc{\\theta})(\\Omega_0)) = J(\\Omega_0) + J'(\\Omega_0)(\\vc{\\theta}) + o(\\vc{\\theta})\t\n\\]\nwith $\\lim_{\\vc{\\theta} \\to \\vc{0}} \\frac{|o(\\vc{\\theta})|}{\\norm{\\vc{\\theta}}_{W^{1,\\infty}}} = 0,$ and $J'(\\Omega_0)(\\vc{\\theta})$ a continuous linear form on $W^{1,\\infty}( {\\mathbb R} ^N; {\\mathbb R} ^N).$\n\\end{deff}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=2.5in]{ShapeD.png}\n\t\\caption{Perturbation of a domain in the direction $\\vc{\\theta}.$}\n\\end{figure}\n\n\\begin{remark}\nThe above definition does not provide a constructive computation of $J'(\\Omega_0)(\\vc{\\theta}).$ There is more than one way to compute the shape derivative of $J(\\Omega_0)$ (see \\cite{AllCon} for a detailed presentation). In the following section we compute the shape derivative associated with \\eqref{opti:prob} using the formal Lagrangian method of J. Cea \\cite{Cea86}. 
\n\\end{remark}\n\n\\section{Level set representation of the shape in the unit cell}\n\nFollowing the ideas of \\cite{ADDM}, \\cite{WMW04}, the $d$ sub-domains in the cell $Y$ labeled $S_i$, $i \\in \\{1, \\ldots, d\\}$ can treat up to $2^d$ distinct phases by considering a partition of the working domain $Y$ denoted by $F_j$, $j \\in \\{1, \\ldots, 2^d \\}$ and defined the following way,\n\n\\begin{align*}\nF_1 =& S_1 \\cap S_2 \\cap \\ldots \\cap S_d \\\\\nF_2 =& \\overline{S_1^c} \\cap S_2 \\cap \\ldots \\cap S_d \\\\\n&\\vdots\\\\\nF_{2^d} =& \\overline{S_1^c} \\cap \\overline{S_2^c} \\cap \\ldots \\cap \\overline{S_d^c} \n\\end{align*}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=2in]{cell}\n\t\\caption[Representation of different phases in the unit cell for $d=2$.]{Representation of different material in the unit cell for $d=2$.}\n\\end{figure}\n\nDefine for $i \\in \\{ 1, \\ldots, d \\}$ the level sets $\\phi_i$,\n\n\n\\[\n \\phi_i(\\vc{y}) \n\t\\begin{cases}\n \t= 0 & \\text{ if } \\vc{y} \\in \\partial S_i \\\\\n \t> 0 & \\text{ if } \\vc{y} \\in S_i^c \\\\\n\t< 0 & \\text{ if } \\vc{y} \\in S_i\n \t\\end{cases}\n\\]\n\nMoreover, denote by $\\Gamma_{km} = \\Gamma_{mk} = \\overline{F}_m \\cap \\overline{F}_k$ where $k \\ne m$, the interface boundary between the $m^{th}$ and the $k^{th}$ partition and let $\\Gamma = \\cup_{i,j=1\\\\ i \\ne j}^{2^d} \\Gamma_{ij}$ denote the collective interface to be displaced. 
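For $d=2$ the partition above can be evaluated pointwise from the two level sets. The small NumPy sketch below is an illustrative sharp-interface classification on a hypothetical grid (points with $\\phi_i = 0$ are attached to $S_i$, so that the masks form a true partition of the grid):

```python
import numpy as np

def phase_masks(phi1, phi2):
    """Pointwise classification of the unit cell Y into the 2^d = 4
    phases F_1, ..., F_4 from two level sets (phi_i < 0 inside S_i,
    phi_i > 0 in the complement S_i^c, phi_i = 0 on the interface)."""
    in1, in2 = phi1 <= 0.0, phi2 <= 0.0
    F1 = in1 & in2        # S_1 intersect S_2
    F2 = ~in1 & in2       # S_1^c intersect S_2
    F3 = in1 & ~in2       # S_1 intersect S_2^c
    F4 = ~in1 & ~in2      # S_1^c intersect S_2^c
    return F1, F2, F3, F4

# hypothetical example: a disk S_1 and a horizontal band S_2 in Y = (-1/2, 1/2)^2
y = np.linspace(-0.5, 0.5, 101)
Y1, Y2 = np.meshgrid(y, y)
phi1 = np.sqrt(Y1**2 + Y2**2) - 0.3   # level set of the disk
phi2 = np.abs(Y2) - 0.2               # level set of the band
F1, F2, F3, F4 = phase_masks(phi1, phi2)
```

Each grid point then receives the tensor $\\mathbb{A}^j$ of the phase it belongs to; the smoothed interface approach below replaces this hard classification by the interpolation \\eqref{smoothing}.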
The properties of the material occupying each phase $F_j$ are characterized by an isotropic fourth order tensor\n\n\\[\n \\mathbb{A}^j = 2 \\, \\mu_j \\, I_4 + \\left( \\kappa_j - \\frac{2\\,\\mu_j}{N} \\right) \\, I_2 \\otimes I_2, \\quad j \\in \\{ 1, \\ldots, 2^d\\}\n\\]\nwhere $\\kappa_j$ and $\\mu_j$ are the bulk and shear moduli of phase $F_j$, $I_2$ is the second order identity matrix, and $I_4$ is the identity fourth order tensor acting on symmetric matrices.\n\n\\begin{remark}\nThe expression of the layer $F_k$, $1 \\leq k \\leq 2^d$, in terms of the sub-domains $S_i$, $1 \\leq i \\leq d$, is simply given by the representation of the number $k-1$ in base 2: a sequence of $d$ digits, each $0$ or $1$, where the digit in position $i$ is replaced by $S_i$ if it is $0$ and by $\\overline{S_i^c}$ if it is $1$. In this way one can express the subsequent formulas compactly. However, for the sake of simplicity, we shall restrict the development in the paper to $d=2$, so that $1 \\leq j \\leq 4$. \n\\end{remark}\n\n\\begin{remark}\nAt the interface boundary between the $F_j$'s there is a jump in the coefficients that characterize each phase. In the sub-section that follows we will relax this sharp interface assumption and allow for a smooth passage from one material to the other as in \\cite{ADDM}, \\cite{GM14}.\n\\end{remark}\n\n\\subsection{The smoothed interface approach}\n\nWe model the interface as a smooth, thin transition layer of width $2 \\, \\epsilon > 0$ (see \\cite{ADDM}, \\cite{GM14}) rather than a sharp interface. 
This regularization is carried out in two steps: first by re-initializing each level set, $\\phi_i$ to become a signed distance function, $d_{S_i}$ to the interface boundary and then use an interpolation with a Heaviside type of function, $h_\\epsilon(t)$, to pass from one material to the next,\n\n\\[\n\\phi_i \\rightarrow d_{S_i} \\rightarrow h_\\epsilon(d_{S_i}).\n\\]\n\nThe Heaviside function $h_\\epsilon(t)$ is defined as,\n\\begin{equation}\\label{heavy}\nh_{\\epsilon}(t) =\n\\begin{cases} \n 0 & \\text{if } t < -\\epsilon, \\\\\n \\frac{1}{2}\\left(1+\\frac{t}{\\epsilon} + \\frac{1}{\\pi} \\, \\sin\\left( \\frac{\\pi \\, t}{\\epsilon} \\right) \\right) & \\text{if } |t| \\le \\epsilon,\\\\\n 1 & \\text{if } t > \\epsilon.\n \\end{cases}\n\\end{equation}\n\n\\begin{remark}\nThe choice of the regularizing function above is not unique, it is possible to use other type of regularizing functions (see \\cite{WW04}).\n\\end{remark}\n\nThe signed distance function to the domain $S_i, i=1,2$, denoted by $d_{S_i}$ is obtained as the stationary solution of the following problem \\cite{OS88},\n\n\\begin{gather} \\label{reinit}\n\\begin{aligned}\n\\frac{\\partial d_{S_i}}{dt} + sign(\\phi_i) (|\\nabla d_{S_i}| - 1) &= 0 \\text{ in } {\\mathbb R} ^+ \\times Y, \\\\ \nd_{S_i}(0,\\vc{y}) &= \\phi_i (\\vc{y}) \\text{ in } Y,\n\\end{aligned} \n\\end{gather}\nwhere $\\phi_i$ is the initial level set for the subset $S_i.$ Hence, the properties of the material occupying the unit cell $Y$ are then defined as a smooth interpolation between the tensors $\\mathbb{A}^j$'s $j \\in \\{1,\\ldots,2^d \\}$,\n\n\\begin{align} \\label{smoothing}\n\\mathbb{A}^{\\epsilon}(d_{\\mathbf{S}}) &= (1-h_\\epsilon(d_{S_1})) \\, (1-h_\\epsilon(d_{S_2})) \\, \\mathbb{A}^1 + h_\\epsilon(d_{S_1}) \\, (1-h_\\epsilon(d_{S_2})) \\, \\mathbb{A}^2 \\nonumber \\\\\n&+ (1-h_\\epsilon(d_{S_1})) \\, h_\\epsilon(d_{S_2}) \\, \\mathbb{A}^3 + h_\\epsilon(d_{S_1}) \\, h_\\epsilon(d_{S_2}) \\, 
\\mathbb{A}^4.\n\\end{align}\nwhere $d_{\\mathbf{S}}=(d_{S_1},d_{S_2})$. Lastly, we remark that the volume of each phase is written as \n\n\\[\n\t\\int_Y \\iota_k \\, d\\vc{y} = V_k\n\\]\nwhere $\\iota_k$ is defined as follows,\n\n\\begin{equation}\\label{vol:const}\n\\begin{cases} \n \\iota_1 &= (1-h_\\epsilon(d_{S_1})) \\, (1-h_\\epsilon(d_{S_2})), \\\\\n \\iota_2 &= h_\\epsilon(d_{S_1}) \\, (1-h_\\epsilon(d_{S_2})), \\\\\n \\iota_3 &= (1-h_\\epsilon(d_{S_1})) \\, h_\\epsilon(d_{S_2}), \\\\\n \\iota_4 &= h_\\epsilon(d_{S_1}) \\, h_\\epsilon(d_{S_2}).\n\\end{cases}\n\\end{equation}\n\n\n\\begin{remark}\nOnce we have re-initialized the level sets into signed distance functions we can obtain the shape derivatives of the objective functional with respect to each sub-domain $S_i.$ In order to do this we require certain differentiability properties of the signed distance function. Detailed results pertaining to the aforementioned properties can be found in \\cite{ADDM}, \\cite{GM14}. We encourage the reader to consult their work for the details. 
For our purposes, we will make heavy use of Propositions $2.5$ and $2.9$ in \\cite{ADDM} as well as certain results therein.\n\\end{remark}\n\n\\begin{thm} \\label{Shape:Thm}\nAssume that $S_1, S_2$ are smooth, bounded, open subsets of the working domain $Y$ and $\\vc{\\theta^1}, \\vc{\\theta^2} \\in W^{1,\\infty}( {\\mathbb R} ^N; {\\mathbb R} ^N).$ The shape derivatives of \\eqref{opti:prob} in the directions $\\vc{\\theta^1}, \\vc{\\theta^2}$ respectively are,\n\n\\begin{align*}\n\t\\frac{\\partial J}{\\partial S_1}(\\vc{\\theta}^1) = \n&-\\int_{\\Gamma} \\vc{\\theta^1} \\cdot \\vc{n}^1 \\Big (\\eta_{ijk\\ell} \\, \\left( A^H_{ijk\\ell} - A^t_{ijk\\ell} \\right) A^{\\epsilon*}_{mqrs}(d_{S_2}) (E^{k\\ell}_{mq} + \\varepsilon_{mq}(\\vc{\\chi^{k\\ell}})) (E^{ij}_{rs} + \\varepsilon_{rs}(\\vc{\\chi}^{ij})) \\\\\n&- h^{*}_{\\epsilon}(d_{S_2}) \\Big ) d\\vc{y} \n\\end{align*}\n\n\\begin{align*}\n\t\\frac{\\partial J}{\\partial S_2}(\\vc{\\theta}^2) = \n&-\\int_{\\Gamma} \\vc{\\theta^2} \\cdot \\vc{n}^2 \\Big (\\eta_{ijk\\ell} \\, \\left( A^H_{ijk\\ell} - A^t_{ijk\\ell} \\right) \\, A^{\\epsilon*}_{mqrs}(d_{S_1}) \\, (E^{k\\ell}_{mq} + \\varepsilon_{mq}(\\vc{\\chi^{k\\ell}})) \\, (E^{ij}_{rs} + \\varepsilon_{rs}(\\vc{\\chi}^{ij})) \\\\ \n&- h^{*}_{\\epsilon}(d_{S_1}) \\Big) d\\vc{y} \n\\end{align*}\nwhere, for $i=1,2$, $\\mathbb{A}^{\\epsilon*}(d_{S_i})$, written component wise above, denotes,\n\n\\begin{equation} \\label{A:star}\n\t\\mathbb{A}^{\\epsilon*}(d_{S_i}) = \\mathbb{A}^{2} - \\mathbb{A}^{1} + h_{\\epsilon}(d_{S_i}) \\, \\left( \\mathbb{A}^{1} - \\mathbb{A}^{2} - \\mathbb{A}^{3} + \\mathbb{A}^{4} \\right),\n\\end{equation}\n\n\\begin{equation} \\label{h:star}\n\th^{*}_{\\epsilon}(d_{S_i}) = (\\ell_2 - \\ell_1+ h_{\\epsilon}(d_{S_i})(\\ell_1 - \\ell_2 - \\ell_3 + \\ell_4) )\n\\end{equation}\nand $\\ell_j, j \\in \\{1, \\ldots, 4 \\}$ are the Lagrange multipliers for the weight of each phase.\n\\end{thm}\n\n\\begin{proof}\nFor each $k,\\ell$ we introduce the 
following Lagrangian for $(\\vc{u}^{k\\ell},\\vc{v},\\vc{\\mu}) \\in V \\times V \\times {\\mathbb R} ^{2d}$ associated to problem \\eqref{opti:prob},\n\n\\begin{gather}\\label{Lagrangian}\n\\begin{aligned} \n\\mathcal{L}(\\vc{S}, \\vc{u}^{k\\ell}, \\vc{v}, \\vc{\\mu}) \n= J(\\vc{S})\n& + \\int_Y \\mathbb{A}^{\\epsilon}(d_{\\vc{S}}) \\, \\left( \\vc{E}^{k\\ell} + \\vc{\\varepsilon}(\\vc{u}^{k\\ell}) \\right): \\vc{\\varepsilon}(\\vc{v}) \\, d\\vc{y} + \\vc{\\mu} \\cdot \\left( \\int_Y \\vc{\\iota} \\, d\\vc{y} - \\vc{V}^t \\right),\n\\end{aligned}\n\\end{gather}\nwhere $\\vc{\\mu}=(\\mu_1, \\ldots, \\mu_4)$ is a vector of Lagrange multipliers for the volume constraint, $\\vc{\\iota}=(\\iota_1, \\ldots, \\iota_4)$, and $\\vc{V}^t=(V_1^t,\\ldots,V_4^t)$.\n\n\\begin{remark}\nEach variable of the Lagrangian is independent of one another and independent of the sub-domains $S_1$ and $S_2$. \n\\end{remark}\n\n\\subsubsection*{Direct problem}\nDifferentiating $\\mathcal{L}$ with respect to $\\vc{v}$ in the direction of some test function $\\vc{w} \\in V$ we obtain,\n\\[\n\\dpair{\\frac{ \\partial \\mathcal{L} }{ \\partial \\vc{v} }}{ \\vc{w} } = \\int_Y A^{\\epsilon}_{ijrs}(d_{\\vc{S}}) \\, (E^{k\\ell}_{ij} + \\varepsilon_{ij}(\\vc{u^{k\\ell}})) \\, \\varepsilon_{rs}(\\vc{w}) \\, d\\vc{y},\n\\]\nupon setting this equal to zero we obtain the variational formulation in \\eqref{local:sol}.\n\n\\subsubsection*{Adjoint problem}\nDifferentiating $\\mathcal{L}$ with respect to $\\vc{u}^{k\\ell}$ in the direction $\\vc{w} \\in V$ we obtain,\n\n\\begin{align*}\n\\dpair{\\frac{ \\partial \\mathcal{L} }{ \\partial \\vc{u}^{k\\ell} }}{ \\vc{w} } \n& = \\eta_{ijk\\ell} \\, \\left( A^H_{ijk\\ell} - A^t_{ijk\\ell} \\right) \\, \\int_Y A^{\\epsilon}_{mqrs}(d_{\\vc{S}}) \\, (E^{k\\ell}_{mq} + \\varepsilon_{mq}(\\vc{u^{k\\ell}})) \\, \\varepsilon_{rs}(\\vc{w}) \\, d\\vc{y} \\\\\n&+\\int_Y A^{\\epsilon}_{mqrs}(d_{\\vc{S}}) \\, \\varepsilon_{mq}(\\vc{w}) \\, \\varepsilon_{rs}(\\vc{v}) \\, 
d\\vc{y}.\n\\end{align*}\nWe immediately observe that the integral over $Y$ on the first line is equal to $0$ since it is the variational formulation \\eqref{local:sol}. Moreover, if we chose $\\vc{w} = \\vc{v}$ then by the positive definiteness assumption of the tensor $\\mathbb{A}$ as well as the periodicity of $\\vc{v}$ we obtain that adjoint solution is identically zero, $\\vc{v} \\equiv 0.$\n\n\\subsubsection*{Shape derivative}\nLastly, we need to compute the shape derivative in directions $\\vc{\\theta}^1$ and $\\vc{\\theta}^2$ for each sub-domain $S_1$, $S_2$ respectively. Here we will carry out computations for the shape derivative with respect to the sub-domain $S_1$ with calculations for the sub-domain $S_2$ carried out in a similar fashion. We know (see \\cite{AllCon}) that \n\n\\begin{equation} \\label{SD}\n\t\\dpair{\\frac{\\partial J}{\\partial S_i}(\\vc{S})}{\\vc{\\theta}^i} = \\dpair{\\frac{\\partial \\mathcal{L}}{\\partial S_i}(\\vc{S},\\vc{\\chi}^{k\\ell},\\vc{0},\\vc{\\lambda})}{\\vc{\\theta}^i} \\text{ for } i=1,2.\n\\end{equation}\n\nHence, \n\\begin{align*}\n\\frac{ \\partial \\mathcal{L} }{ \\partial S_1 }( \\vc{\\theta}^1 ) \n& = \\eta_{ijk\\ell} \\left( A^H_{ijk\\ell} - A^t_{ijk\\ell} \\right) \\int_Y d'_{S_1}(\\vc{\\theta}^1) \\frac{\\partial A^{\\epsilon}_{mqrs}}{\\partial S_1} (d_{\\vc{S}}) (E^{k\\ell}_{mq} + \\varepsilon_{mq}(\\vc{u^{k\\ell}})) \\, (E^{ij}_{rs} + \\varepsilon_{rs}(\\vc{u}^{ij})) d\\vc{y} \\\\\n&+\\int_Y d'_{S_1}(\\vc{\\theta}^1) \\frac{\\partial A^{\\epsilon}_{ijrs}}{\\partial d_{S_1}} (d_{\\vc{S}}) (E^{k\\ell}_{ij} + e_{yij}(\\vc{u^{k\\ell}})) \\varepsilon_{rs}(\\vc{v}) d\\vc{y} \\\\\n&+ \\ell_1 \\int_Y - \\, d'_{S_1}(\\vc{\\theta}^1) \\frac{\\partial h_{\\epsilon}(d_{S_1})}{\\partial d_{S_1}} (1 - h_{\\epsilon}(d_{S_2})) d\\vc{y}\n+ \\ell_2 \\, \\int_Y d'_{S_1}(\\vc{\\theta}^1) \\, \\frac{\\partial h_{\\epsilon}(d_{S_1})}{\\partial d_{S_1}} \\, (1 - h_{\\epsilon}(d_{S_2})) \\, d\\vc{y} \\\\\n&+ \\ell_3 \\, \\int_Y - 
\\, d'_{S_1}(\\vc{\\theta}^1) \\, \\frac{\\partial h_{\\epsilon}(d_{S_1})}{\\partial d_{S_1}} \\, h_{\\epsilon}(d_{S_2}) \\, d\\vc{y} \n+ \\ell_4 \\, \\int_Y d'_{S_1}(\\vc{\\theta}^1) \\, \\frac{\\partial h_{\\epsilon}(d_{S_1})}{\\partial d_{S_1}} \\, h_{\\epsilon}(d_{S_2}) \\, d\\vc{y}.\n\\end{align*}\nThe term on the second line is zero due to the fact that the adjoint solution is identically zero. Moreover, applying Proposition $2.5$ and then Proposition $2.9$ from \\cite{ADDM} as well as using the fact that we are dealing with thin interfaces we obtain, \n\n\\begin{align*}\n\\frac{ \\partial \\mathcal{L} }{ \\partial S_1 }( \\vc{\\theta}^1 ) \n& = -\\eta_{ijk\\ell} \\, \\left( A^H_{ijk\\ell} - A^t_{ijk\\ell} \\right) \\, \\int_{\\Gamma} \\vc{\\theta}^1 \\cdot \\vc{n}^1 \\, A^{\\epsilon*}_{mqrs}(d_{S_2}) \\, (E^{k\\ell}_{mq} + \\varepsilon_{mq}(\\vc{u^{k\\ell}})) \\, (E^{ij}_{rs} + \\varepsilon_{rs}(\\vc{u}^{ij})) \\, d\\vc{y} \\\\\n&+ \\ell_1 \\, \\int_{\\Gamma} \\vc{\\theta}^1 \\cdot \\vc{n}^1 \\, (1 - h_{\\epsilon}(d_{S_2})) \\, d\\vc{y}\n- \\ell_2 \\, \\int_{\\Gamma} \\vc{\\theta}^1 \\cdot \\vc{n}^1 \\, (1 - h_{\\epsilon}(d_{S_2})) \\, d\\vc{y} \\\\\n&+ \\ell_3 \\, \\int_{\\Gamma} \\vc{\\theta}^1 \\cdot \\vc{n}^1 \\, h_{\\epsilon}(d_{S_2}) \\, d\\vc{y} \n- \\ell_4 \\, \\int_{\\Gamma} \\vc{\\theta}^1 \\cdot \\vc{n}^1 \\, h_{\\epsilon}(d_{S_2}) \\, d\\vc{y} \n\\end{align*}\nwhere $\\vc{n}^1$ denotes the outer unit normal to $S_1.$ Thus, if we let $\\vc{u}^{k\\ell} = \\vc{\\chi}^{k\\ell}$, the solution to the unit cell \\eqref{local:sol} and collect terms the result follows.\n\\end{proof}\n\n\\begin{remark}\nThe tensor $\\mathbb{A}^{\\epsilon *}$ in \\eqref{A:star} as well $h^{\\epsilon*}$ in \\eqref{h:star} of the shape derivatives in {\\bf Theorem \\ref{Shape:Thm}} depend on the signed distance function in an alternate way which provides an insight into the coupled nature of the problem. 
We further remark that, in the smooth interface context, the collective boundary $\\Gamma$ to be displaced in {\\bf Theorem \\ref{Shape:Thm}} is not an actual boundary but rather a tubular neighborhood.\n\\end{remark}\n\n\\subsection{The numerical algorithm}\nThe result of {\\bf Theorem \\ref{Shape:Thm}} provides us with the shape derivatives in the directions $\\vc{\\theta}^1$, $\\vc{\\theta}^2$ respectively. If we denote\n\\[\n\tv^1 = \\frac{\\partial J}{\\partial S_1}(\\vc{S}), \\quad v^2 = \\frac{\\partial J}{\\partial S_2}(\\vc{S}),\n\\]\na descent direction is then found by selecting the vector fields $\\vc{\\theta}^1=v^1\\vc{n}^1$, $\\vc{\\theta}^2=v^2\\vc{n}^2.$ Moving the shapes $S_1, S_2$ in the directions $v^1, v^2$ is done by transporting each level set $\\phi_i$, $i=1,2$, independently, by solving the Hamilton-Jacobi type equation\n\n\\begin{equation} \\label{HJ:phi}\n\t\\frac{\\partial \\phi^i}{\\partial t} + v^i \\, |\\nabla \\phi^i| = 0, \\quad i=1,2.\n\\end{equation}\nMoreover, we extend and regularize the scalar velocity $v^i, \\, i=1,2$ to the entire domain $Y$ as in \\cite{AJT}, \\cite{ADDM}. The extension is done by solving the following problem for $i=1,2$,\n\n\\begin{align*}\n- \\alpha^2 \\, \\Delta \\vc{\\theta}^i + \\vc{\\theta}^i & = 0 \\text{ in } Y, \\nonumber \\\\\n\\nabla \\vc{\\theta}^i \\, \\vc{n^i} & = v^i\\vc{n}^i \\text{ on } \\Gamma, \\nonumber \\\\\n\\vc{\\theta}^i & \\text{ Y--periodic}, \\nonumber\n\\end{align*}\nwhere $\\alpha > 0$ is a small regularization parameter. Hence, using the same algorithm as in \\cite{AJT}, for $i=1,2$ we have:\n\n\\subsubsection{Algorithm} We initialize $S_i^0 \\subset U_{ad}$ through the level sets $\\phi^i_0$, defined as the signed distance functions of the chosen initial topology; then\n\n{\\small \\it\n\\begin{itemize}\n\\item[1.] iterate until convergence for $k \\ge 0$:\n\t\\begin{itemize}\n\t\t\n\t\t\\item[a.] 
Calculate the local solutions $\\vc{\\chi}^{m\\ell}_k$ for $m,\\ell=1,2$ by solving the linear \nelasticity problem \\eqref{local:sol} on $\\mathcal{O}^k := S_1^k \\cup S_2^k$.\n\t\t\\item[b.] Deform the domain $\\mathcal{O}^k$ by solving the Hamilton-Jacobi equations \\eqref{HJ:phi} for $i=1,2$. The new shape $\\mathcal{O}^{k+1}$ is characterized by the level sets $\\phi_i^{k+1}$, solutions of \\eqref{HJ:phi} after a time step $\\Delta t_k$ starting from the initial condition $\\phi_i^k$ with velocity $v^i_k$ computed in terms of the local problems $\\vc{\\chi^{m\\ell}_k}$ for $i=1,2$. The time step $\\Delta t_k$ is chosen so that $J(\\vc{S}^{k+1}) \\le J(\\vc{S}^k)$. \n\t\\end{itemize}\n\\item[2.] From time to time, for stability reasons, we re-initialize the level set functions $\\phi_i^k$ by solving \\eqref{reinit} for $i=1,2$.\n\\end{itemize}}\n\n\\section{Numerical examples}\n\nFor all the examples that follow we have used a symmetric $100 \\times 100$ mesh of $P1$ elements. We imposed volume equality constraints for each phase. In the smooth interpolation of the material properties in formula \\eqref{smoothing}, we set $\\epsilon$ equal to $2\\Delta x$ where $\\Delta x$ is the grid size. The parameter $\\epsilon$ is held fixed throughout (see \\cite{ADDM} and \\cite{GM14}). The Lagrange multipliers were updated at each iteration as follows: $\\ell^{n+1}_j = \\ell^{n}_j - \\beta \\, \\left( \\int_Y \\iota^n_j \\, d\\vc{y} -V^t_j \\right)$, where $\\beta$ is a small parameter. \nBecause this type of problem suffers from many local minima that may not result in a shape, instead of using a stopping criterion in the algorithm we fix, a priori, the number of iterations. Furthermore, since we have no knowledge of what volume constraints make sense for a particular shape, we chose not to strictly enforce the volume constraints for the first two examples. 
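The Lagrange multiplier update above can be sketched in a few lines. This is our own standalone illustration, not the FreeFem++ implementation; the names (`ell`, `volumes`, `targets`) and the value of the damping parameter `beta` are placeholders:

```python
# Sketch of the update l^{n+1}_j = l^n_j - beta * (int_Y iota^n_j dy - V^t_j),
# assuming the current phase volumes have already been computed by the solver.
# All names and the value beta=0.05 are illustrative assumptions.

def update_multipliers(ell, volumes, targets, beta=0.05):
    """One damped multiplier step per volume-constrained phase j."""
    return [l - beta * (v - vt) for l, v, vt in zip(ell, volumes, targets)]

# A phase above its target volume lowers its multiplier; one below raises it.
ell = update_multipliers([1.0, 1.0], volumes=[0.35, 0.02], targets=[0.30, 0.04])
```

The small `beta` damps the update so the multipliers change slowly between shape iterations.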
However, for examples $3$ and $4$ we use an augmented Lagrangian to actually enforce the volume constraints,\n\\[\nL(\\vc{S} , \\vc{\\mu}, \\vc{\\beta}) = J(\\vc{S}) - \\sum_{i=1}^4 \\mu_i C_i(\\vc{S}) + \\sum_{i=1}^4 \\frac{1}{2} \\, \\beta_i C_i^2(\\vc{S}),\n\\]\nwhere $C_i(\\vc{S})$ are the volume constraints and the $\\beta_i$ are penalty parameters. The Lagrange multipliers are updated as before; however, this time we also update the penalty parameters $\\beta_i$ every $5$ iterations. All the calculations were carried out using the software {\\tt FreeFem++} \\cite{FH12}. \n\n\\begin{remark}\nWe remark that the augmented Lagrangian requires computing the new shape derivative that results. The calculations are similar to those of Theorem \\ref{Shape:Thm} and, therefore, we do not detail them here for the sake of brevity. \n\\end{remark}\n\n\\subsection{Example 1}\n\nThe first structure to be optimized is a multilevel material that attains an apparent Poisson ratio of $-1$. The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$. Here phases $2$ and $4$ represent void, while phase $3$ represents a material that is twice as stiff as the material in phase $1$. The Poisson ratio of each phase is set to $\\nu=0.3$ and the volume constraints were set to $V^t_1=30\\%$ and $V^t_3=4\\%.$\n\\begin{table}[h]\n\\center\n\\begin{tabular}{c|ccc}\n$ijkl$ & $1111$ & $1122$ & $2222$ \\\\\n\\hline\n$\\eta_{ijkl}$ & $1$ & $30$ & $1$ \\\\\n$A^H_{ijkl}$ & $0.12$ & $-0.09$ & $0.12$ \\\\\n$A^t_{ijkl}$ & $0.1$ & $-0.1$ & $0.1$ \n\\end{tabular}\n\\caption{Values of the weights, final homogenized coefficients and target coefficients}\n\\end{table}\n\\vspace{-0.5cm}\nFrom {\\sc figure} \\ref{Im:aux3} we observe that the volume of the stiffer material does not adhere to the target volume constraint. 
In this case the algorithm used roughly $16\\%$ of the material with Young modulus $1.82$, while the volume constraint for the weaker material was more or less adhered to. \n\n\\newpage\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=60mm]{M3_0_2_}}\n {Initial shape}\n&\n\\subf{\\includegraphics[width=60mm]{M3_5_2_}}\n {iteration $5$}\n\\\\\n\\subf{\\includegraphics[width=60mm]{M3_10_2_}}\n {iteration $10$}\n&\n\\subf{\\includegraphics[width=60mm]{M3_50_2_}}\n {iteration $50$}\n\\\\\n\\subf{\\includegraphics[width=60mm]{M3_100_2_}}\n {iteration $100$}\n&\n\\subf{\\includegraphics[width=60mm]{M3_200_2_}}\n {iteration $200$}\n\\end{tabular}\n\\caption{The design process of the material at different iteration steps. \\break \\protect \\newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \\protect \\newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \\protect \\newboxsymbol{yellow}{yellow} void.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{M3_200_2_}}\n{}\n&\n\\subf{\\includegraphics[width=55mm]{img_macro3_2_}} \n{}\n\\end{tabular}\n\\caption{On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-1$.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{Aux3_sqerror}}\n {Evolution of the values of the objective }\n&\n\\subf{\\includegraphics[width=55mm]{Aux3_vol}}\n {Evolution of the volume constraints}\n\\end{tabular}\n\\caption{Convergence history of the objective function and the volume constraints.}\n\\label{Im:aux3}\n\\end{figure}\n\n\\subsection{Example 2}\n\nThe second structure to be optimized is a multilevel material that also attains an apparent Poisson ratio of $-1$. Every assumption remains the same as in the first example. 
The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$. The Poisson ratio of each material is set to $\\nu=0.3$; however, this time we require that the volume constraints be set to $V^t_1=33\\%$ and $V^t_3=1\\%.$\n\\begin{table}[h]\n\\center\n\\begin{tabular}{c|ccc}\n$ijkl$ & $1111$ & $1122$ & $2222$ \\\\\n\\hline\n$\\eta_{ijkl}$ & $1$ & $30$ & $1$ \\\\\n$A^H_{ijkl}$ & $0.11$ & $-0.09$ & $0.12$ \\\\\n$A^t_{ijkl}$ & $0.1$ & $-0.1$ & $0.1$ \n\\end{tabular}\n\\caption{Values of the weights, final homogenized coefficients and target coefficients}\n\\end{table}\n\nAgain, from {\\sc figure} \\ref{Im:aux4} we observe that the volume of the stiffer material does not adhere to the target volume constraint. In this case the algorithm used roughly $15\\%$ of the material with Young modulus $1.82$, while the volume constraint for the weaker material was more or less adhered to. \n\n\\newpage\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=60mm]{M4_0_2_}}\n {Initial shape}\n&\n\\subf{\\includegraphics[width=60mm]{M4_5_2_}}\n {iteration $5$}\n\\\\\n\\subf{\\includegraphics[width=60mm]{M4_10_2_}}\n {iteration $10$}\n&\n\\subf{\\includegraphics[width=60mm]{M4_50_2_}}\n {iteration $50$}\n\\\\\n\\subf{\\includegraphics[width=60mm]{M4_100_2_}}\n {iteration $100$}\n&\n\\subf{\\includegraphics[width=60mm]{M4_200_2_}}\n {iteration $200$}\n\\end{tabular}\n\\caption{The design process of the material at different iteration steps. 
\\break \\protect \\newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \\protect \\newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \\protect \\newboxsymbol{yellow}{yellow} void.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{M4_200_2_}}\n{}\n&\n\\subf{\\includegraphics[width=55mm]{img_macro4_2_}}\n{}\n\\end{tabular}\n\\caption{On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-1$.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{Aux4_sqerror}}\n {Evolution of the values of the objective }\n&\n\\subf{\\includegraphics[width=55mm]{Aux4_vol}}\n {Evolution of the volume constraints}\n\\end{tabular}\n\\caption{Convergence history of the objective function and the volume constraints.}\n\\label{Im:aux4}\n\\end{figure}\n\n\\subsection{Example 3}\n\nThe third structure to be optimized is a multi-layer material with a target apparent Poisson ratio of $-0.5$. For this example we used an augmented Lagrangian to enforce the volume constraints. The Lagrange multiplier was updated the same way as before; however, the penalty parameter $\\beta$ was updated every five iterations. 
The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$ and the volume target constraints were set to $V^t_1=38.5\\%$ and $V^t_3=9.65\\%.$\n\n\\begin{table}[h]\n\\center\n\\begin{tabular}{c|ccc}\n$ijkl$ & $1111$ & $1122$ & $2222$ \\\\\n\\hline\n$\\eta_{ijkl}$ & $1$ & $10$ & $1$ \\\\\n$A^H_{ijkl}$ & $0.18$ & $-0.08$ & $0.18$ \\\\\n$A^t_{ijkl}$ & $0.2$ & $-0.1$ & $0.2$ \n\\end{tabular}\n\\caption{Values of the weights, final homogenized coefficients and target coefficients}\n\\end{table}\n\\vspace{-0.5cm}\nAgain, just as in the previous two examples, we observe that the volume of the stiffer material does not adhere to the target volume constraint, even though for this example an augmented Lagrangian was used. In this case the algorithm used roughly $20\\%$ of the material with Young modulus $1.82$, while the volume constraint for the weaker material was more or less adhered to. \n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=60mm]{M2_0}}\n {Initial shape}\n&\n\\subf{\\includegraphics[width=60mm]{M2_5}}\n {iteration $5$}\n\\\\\n\\subf{\\includegraphics[width=60mm]{M2_10}}\n {iteration $10$}\n&\n\\subf{\\includegraphics[width=60mm]{M2_50}}\n {iteration $50$}\n\\\\\n\\subf{\\includegraphics[width=60mm]{M2_100}}\n {iteration $100$}\n&\n\\subf{\\includegraphics[width=60mm]{M2_200}}\n {iteration $200$}\n\\end{tabular}\n\\caption{The design process of the material at different iteration steps. 
\\break \\protect \\newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \\protect \\newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \\protect \\newboxsymbol{yellow}{yellow} void.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{M2_200}}\n{}\n&\n\\subf{\\includegraphics[width=55mm]{img_macro2}}\n{}\n\\end{tabular}\n\\caption{On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-0.5$.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{ConvHist2.png}}\n {Evolution of the values of the objective }\n&\n\\subf{\\includegraphics[width=55mm]{vol2.png}}\n {Evolution of the volume constraints}\n\\end{tabular}\n\\caption{Convergence history of the objective function and the volume constraints.}\n\\end{figure}\n\n\\subsection{Example 4}\n\nThe fourth structure to be optimized is a multilevel material that attains an apparent Poisson ratio of $-0.5$. An augmented Lagrangian was used to enforce the volume constraints for this example as well. The Lagrange multiplier was updated the same way as before, as was the penalty parameter $\\beta$. The Young moduli of the four phases are set to $E^1=0.91$, $E^2=0.0001$, $E^3=1.82$, $E^4=0.0001$. 
The Poisson ratio of each material is set to $\\nu=0.3$; however, this time we require that the volume constraints be set to $V^t_1=53\\%$ and $V^t_3=7\\%.$\n\n\\begin{table}[h]\n\\center\n\\begin{tabular}{c|ccc}\n$ijkl$ & $1111$ & $1122$ & $2222$ \\\\\n\\hline\n$\\eta_{ijkl}$ & $1$ & $10$ & $1$ \\\\\n$A^H_{ijkl}$ & $0.18$ & $-0.08$ & $0.18$ \\\\\n$A^t_{ijkl}$ & $0.2$ & $-0.1$ & $0.2$ \n\\end{tabular}\n\\caption{Values of the weights, final homogenized coefficients and target coefficients}\n\\end{table}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=60.3mm]{M1_0}}\n {Initial shape}\n&\n\\subf{\\includegraphics[width=60.3mm]{M1_5}}\n {iteration $5$}\n\\\\\n\\subf{\\includegraphics[width=60.3mm]{M1_10}}\n {iteration $10$}\n&\n\\subf{\\includegraphics[width=60.3mm]{M1_50}}\n {iteration $50$}\n\\\\\n\\subf{\\includegraphics[width=60.3mm]{M1_100}}\n {iteration $100$}\n&\n\\subf{\\includegraphics[width=60.3mm]{M1_200}}\n {iteration $200$}\n\\end{tabular}\n\\caption{The design process of the material at different iteration steps. 
\\break \\protect \\newboxsymbol{magenta}{magenta} Young modulus of $1.82$, \\protect \\newboxsymbol{cyan}{cyan} Young modulus of $0.91$, \\protect \\newboxsymbol{yellow}{yellow} void.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{M1_200}}\n{}\n&\n\\subf{\\includegraphics[width=55mm]{img_macro1}}\n{}\n\\end{tabular}\n\\caption{On the left we have the unit cell and on the right the macro-structure obtained by periodic assembly of the material with apparent Poisson ratio $-0.5$.}\n\\end{figure}\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cc}\n\\subf{\\includegraphics[width=55mm]{ConvHist1.png}}\n {Evolution of the values of the objective }\n&\n\\subf{\\includegraphics[width=55mm]{vol1.png}}\n {Evolution of the volume constraints}\n\\end{tabular}\n\\caption{Convergence history of the objective function and the volume constraints.}\n\\end{figure}\n\n\n\n\\section{Conclusions and Discussion}\nThe problem of an optimal multi-layer micro-structure is considered. We use inverse homogenization, the Hadamard shape derivative and a level set method to track boundary changes, within the context of the smooth interface, in the periodic unit cell. We produce several examples of auxetic micro-structures with different volume constraints as well as different ways of enforcing the aforementioned constraints. The multi-layer interpretation suggests a particular way to approach 3D printing of the micro-structures. The magenta material is essentially the cyan material layered twice, producing a small extrusion, with the process repeated several times. This multi-layer approach has the added benefit that some of the contact among the material parts is eliminated, thus allowing the structure to be further compressed than if the material were in the same plane.\n \nThe algorithm used does not allow ``nucleations'' (see \\cite{AJT}, \\cite{WMW04}). 
Moreover, due to the non-uniqueness of the design, the numerical results depend on the initial guess. Furthermore, volume constraints also play a role in the final form of the design. \n\nThe results in this work are in the process of being physically realized and tested for both polymer and metal structures. The additive manufacturing itself introduces further constraints into the design process which need to be accounted for in the algorithm if one wishes to produce composite structures. \n\n\\section*{Acknowledgments}\nThis research was initiated during the sabbatical stay of A.C. in the group of Prof. Chiara Daraio at ETH, under the mobility grant DGA-ERE (2015 60 0009). Funding for this research was provided by the grant \\textit{''MechNanoTruss\"}, Agence National pour la Recherche, France (ANR-15-CE29-0024-01). The authors would like to thank the group of Prof. Chiara Daraio for the fruitful discussions. The authors are indebted to Gr\\'egoire Allaire and Georgios Michailidis for their help and fruitful discussions as well as to Pierre Rousseau who printed and tested the material in {\\sc figure} \\ref{fig:Clauu} \\& {\\sc figure \\ref{fig:Rou}}.\n\n\\bigskip \n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nQuantum Chromodynamics (QCD) is believed to describe hadrons in the universe. While much of its perturbative dynamics is by now fairly well understood, it is still hard to analyze, in convincing ways, many non-perturbative phenomena that are relevant in the low energy regime. Among these are the confinement of quarks and the spontaneous breaking of their (approximate) chiral symmetry (S$\\chi$SB). Although these two aspects of QCD have the same origin in strongly interacting dynamics, no logical connection has been found between the two phenomena. 
In this paper, we provide what we believe is a convincing example of the logical separation between confinement and S$\\chi$SB. Our analysis seems to suggest that spontaneous chiral symmetry breaking of massless quarks may happen without any need for a confining potential between them.\n\nOur analysis is based on the proposal of the AdS\/CFT correspondence, in which Type IIB string theory on the $AdS_5\\times S^5$ background is equivalent to the ${\\cal N}=4$ SYM theory on the boundary of $AdS_5$\\cite{Mal}.\nThe duality between the two descriptions is supposed to hold even at the level of the Hilbert spaces of the corresponding quantum theories; (semi-classical) deformations of $AdS_5\\times S^5$ which vanish asymptotically at the boundary correspond to some quantum states in the dual gauge theory\\cite{Wit,Gub1}.\nDepending on the deformations in the bulk that we are considering, these states share several interesting properties with the usual vacuum states of realistic gauge theories, such as homogeneity over space and non-vanishing gluon condensation. Hence, studying confinement and S$\\chi$SB on these states may give us an important laboratory for unraveling the relation between the two phenomena.\n\nThe deformed backgrounds of our interest are a family of dilatonic deformations of $AdS_5\\times S^5$ that were found in Ref.\\cite{Bak:2004yf}. A nice fact about these solutions is the existence of a single adjustable parameter, $k\/ \\mu$, which enables us to scan a range of corresponding quantum states. On the gauge theory side, this parameter represents the ratio of the gluon condensation to the energy density of the quantum states. The analysis in Ref.\\cite{Bak:2004yf} showed that for ${k\/ \\mu} <-12$, the potential between a (heavy) quark\/anti-quark pair is confining, while states with ${k\/\\mu}>-12$ were argued to exhibit Coulomb-like behavior. 
However, in section 3, we perform a more careful study of the cases ${k\/ \\mu}>-12$ and find that they are instead in a screening phase. We also look at the response to magnetically charged objects and obtain an interesting phase structure.\n\nTo study S$\\chi$SB on these background states, a small number of light quarks\/anti-quarks are introduced in section 4 via probe D7-branes, a la Karch and Katz \\cite{Kar}. They are $N_f$, ${\\cal N}=2$ hypermultiplets in the fundamental representation of the $SU(N)$ gauge group, and their effect on the ${\\cal N}=4$ $SU(N)$ SYM dynamics may be neglected in the $N\\gg N_f$ limit via the quenched approximation. The D7-brane probe for studying S$\\chi$SB was first analyzed in \\cite{Babington:2003vm}, and its use also for hadron physics \\cite{Karch:2002xe,Kruczenski:2003be,Burrington:2004id,Hong:2003jm,Rho:1999jm} is by now a well established method in the literature (see \\cite{Sakai:2003wu,Ouyang:2003df,Wang:2003yc,Kruczenski:2003uq,Erdmenger:2004dk} for other set-ups of introducing flavors).\nFrom careful numerical work, we find what seems to be convincing evidence that S$\\chi$SB persists in the region of our parameter space in which confinement no longer exists. Therefore, on the basis of the validity of the AdS\/CFT correspondence, it is clear that some states in large $N$ ${\\cal N}=4$ SYM theory, which have non-vanishing gluon condensation, serve as a rare ground for the logical separation between S$\\chi$SB and confinement. 
We summarize and conclude in section 5.\n\n\n\\section{Bulk Solutions: Dilatonic Deformation in $AdS_5\\times S^5$}\n\nIn Ref.\\cite{Bak:2004yf}, a family of non-supersymmetric solutions of type IIB supergravity with asymptotic $AdS_5\\times S^5$ geometry was found by turning on a generic dilaton deformation of the maximally supersymmetric $AdS_5\\times S^5$ background\\footnote{It is also possible to have solutions with the axion field turned on, but these solutions are readily obtained from the current ones by an $SL(2,Z)$ action. See also Ref.~\\cite{Gut} for the nonsingular class of dilatonic deformations in $AdS_5$. } (For an earlier example, see \\cite{Kehagias:1999tr,Nojiri:1999gf}).\nAnalytic solutions are available only for the cases in the Poincare patch, which preserve the ${\\bf R}\\times ISO(3)\\times SO(6)$ subgroup of the full $SO(2,4)\\times SO(6)$ symmetry of $AdS_5\\times S^5$. Explicitly, these solutions are\n\n\\begin{eqnarray}\nds^2&=&\n(y-b)^{1-a\\over 4}(y+b)^{1+a\\over 4}\\left(\n-\\left(y-b\\over y+b\\right)^a dt^2 +\\frac{dy^2}{16(y-b)^{5-a\\over 4}\n(y+b)^{5+a\\over 4}}+d\\vec{x}^2\\right)\n+d\\Omega_5^2\\,,\\nonumber\\\\\n\\phi&=& \\phi_0 +{k\\over 8b}\\log\\left(y-b\\over y+b\\right)\\,,\\quad\\quad\n F_5= Q\\left(\\omega_5+\\ast \\omega_5\\right)\\quad, \\label{bulkmetric} \\end{eqnarray}\nwhere the metric is in the Einstein frame and we let the AdS radius be unity for simplicity. Here $Q$ is the constant that counts the number $N$ of D3-branes, $d\\Omega_5^2$ and $\\omega_5$ are the metric and the volume form of the unit five sphere, respectively. Clearly, the $S^5$ part of the original $AdS_5\\times S^5$ is intact and the $SO(6)$ R-symmetry of the ${\\cal N}=4$ SYM theory is unbroken at this level. 
The parameters $a$ and $b$ are defined in terms of two quantities, $k$ and $\\mu$; \\begin{equation} a\\equiv\n\\left(1+{k^2\\over 6\\mu^2}\\right)^{-{1\\over 2}}\\quad,\\quad b\\equiv\n{\\mu\\over 2}\\left(1+{k^2\\over 6\\mu^2}\\right)^{1\\over 2}\\quad. \\end{equation}\nThe solutions have a time-like naked singularity at $y=b$. Up to over-all scaling, these solutions are parameterized by essentially a single variable, $k\/ \\mu$. They can be thought of as describing some quantum states in the bulk $AdS_5$ spacetime, because their deformations of the maximally supersymmetric $AdS_5\\times S^5$ solution decay sufficiently fast as we approach the boundary. According to the AdS\/CFT correspondence, we therefore interpret them as the dual geometries of some quantum states of the ${\\cal N}=4$ SYM gauge theory living on the boundary $R^{1,3}$.\n\nAn element of the standard AdS\/CFT dictionary gives us important information about these quantum states in the gauge theory. In terms of the coordinate $r$ defined by $r^2=\\sqrt{b\\over 2}\\,e^s$ and $y=b\\,\\cosh(2s)$, the bulk metric goes to the standard $AdS_5\\times S^5$ metric for large $r$, and $r$ becomes the usual radial coordinate of the asymptotic $AdS_5$. 
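As a small consistency check on these definitions (our own illustration, not part of the paper), note that the product $ab$ equals $\mu/2$ identically, independently of $k$; this identity is convenient later when locating turning points:

```python
import math

def ab_params(k, mu):
    """Parameters a and b of the dilatonic deformation, per their definitions."""
    f = math.sqrt(1.0 + k**2 / (6.0 * mu**2))
    return 1.0 / f, (mu / 2.0) * f

# a*b = mu/2 for any k, and 0 < a <= 1 with a = 1 only at k = 0.
a, b = ab_params(k=-13.0, mu=1.0)
```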
The dilaton field then asymptotes to\n\\begin{equation}\n\\phi\\,\\,=\\,\\,\\phi_0+\n{k\\over 8b}\\log\\left({y-b\\over y+b}\\right)\\,\\,\\sim\\,\\, \\phi_0\n-\\frac{k}{4}\\,\n\\frac{1}{r^4}\\quad,\n\\end{equation}\nwhich implies that the corresponding quantum states in the gauge theory have a non-vanishing expectation value of ${\\cal L}_{\\rm CFT}\\sim {1\\over 2g_{YM}^2}{\\rm tr}\\, F^2$;\n\\begin{equation}\n\\langle {\\cal L}_{\\rm CFT} \\rangle\\,\\,=\\,\\,{k\\over 4}\\quad.\n\\end{equation}\nThe ADM energy density of these states was calculated to be proportional to $\\mu$.\n\n\n\n\n\n\n\n\\section{Phases of Dual ${\\cal N}=4$ SYM States : Confinement vs Screening}\n\nThe family of supergravity backgrounds in the previous section with varying dilaton profile is supposed to describe some quantum states of ${\\cal N}=4$ SYM theory on $R^{1,3}$. According to the AdS\/CFT dictionary, these states are characterized by expectation values of ${\\rm tr}\\, F^2$ and the Hamiltonian density. Roughly, we have seen that\n\\begin{eqnarray} k& \\sim & {1\\over 2 g^2_{YM}}\\left<\\,{\\rm tr}(F^2)\\,\\right>\\,\\,=\\,\\, {1\\over 2 g^2_{YM}}\\ \\left<{\\rm tr}\n({\\vec E}^2-{\\vec B}^2)\\right>\\quad,\\nonumber\\\\\n\\mu &\\sim & {1\\over 2 g^2_{YM}}\n\\left<{\\rm tr}\n({\\vec E}^2+{\\vec B}^2)\\right>\n\\,\\,=\\,\\,{\\cal E}\\quad, \\end{eqnarray} where we denote the energy density by ${\\cal E}$. Though these states are quantum states of the superconformal ${\\cal N}=4$ SYM theory, they have certain properties that mimic those of interesting vacuum states of more realistic gauge theories; they are homogeneous over spatial $R^3$ and have non-vanishing gluon condensation. The latter property has long been suspected of being one of the crucial characteristics of the QCD vacuum \\cite{shifman}. 
It is thus a meaningful endeavor to study the quantum structure of these states and talk about their ``phases\". One has to bear in mind that the strength of the gluon condensate here characterizes the macroscopic states of the ${\\cal N}=4$ SYM theory and works as a tunable parameter.\n\n\nOne of the key aspects of a given phase of a gauge theory is how it reacts to external charges. In the screening phase, external charges are compensated by conducting currents, and subsequently screened within some characteristic length scale. Equivalently, the gauge boson gets massive and does not propagate beyond its mass scale. On the other hand, the confining phase does not break gauge symmetry and charge conservation. Instead, electric flux is confined to a narrow string, resulting in a linear potential between two charges. One of the most profound observations in gauge theory is that magnetic screening due to condensation of magnetically charged objects leads to electric confinement and vice versa. However, there is a caveat here: if there is also a condensation of electrically charged objects at the same time, electric confinement will be ruined. In this case, the most plausible expectation is that both electric and magnetic charges are screened.\n\nIn this section, we analyze the response of our states of ${\\cal N}=4$ SYM theory to various types of external charges, and find some aspects of the interesting phases discussed above. In the spirit of the AdS\/CFT correspondence, external charges are described by strings stretched through the supergravity background to the boundary \\cite{Rey,Maldacena}. The Wilson line expectation value is obtained from the effective world-sheet dynamics of the stretched strings in the supergravity background. 
For electric charges, the world-sheet dynamics is dictated by the F1 Nambu-Goto action, while magnetic or dyonic cases are described by the D1-DBI action with or without world-sheet gauge flux turned on.\n\n\n\\subsection{Electric Confinement\/Screening Transition}\nElectric confinement in the above dilatonic backgrounds has been shown to occur in Ref.~\\cite{Bak:2004yf} for $k\/\\mu < -12$. A Nambu-Goto string stretched between a heavy quark\/anti-quark pair through the bulk corresponds to the Wilson loop in the dual gauge theory \\cite{Rey,Maldacena}. In the large AdS radius limit, the string behaves classically and one may get the interaction potential via a classical analysis of the Nambu-Goto string dynamics. Here we would like to analyze more general cases including magnetic charges as well.\n\nTo deal with a general $(p,\\,\\, q)$ string, let us begin with the Dirac-Born-Infeld action,\n\\begin{equation}\nS= -{1\\over 2\\pi \\alpha'} \\int d\\tau d\\sigma\ne^{-\\phi}\\sqrt{ -\\mbox{det}(g_{\\mu\\nu} \\partial_a X^\\mu\\partial_b X^\\nu +2\\pi\n \\alpha' F_{ab})}\\ ,\n\\end{equation}\nwhere $g_{\\mu\\nu}$ is the string frame metric, which is related to the Einstein frame metric by\n\\begin{equation}\ng_{\\mu\\nu}= e^{\\phi\\over 2} \\, g^E_{\\mu\\nu}\\,.\n\\end{equation}\nDenoting\n\\begin{equation}\nM= -\\mbox{det}(g_{\\mu\\nu} \\partial_a X^\\mu\\partial_b X^\\nu)\\,,\n\\end{equation}\nthe Lagrangian density may be written as\n\\begin{equation}\n{\\cal L}= -{1\\over 2\\pi \\alpha' e^{\\phi}} \\sqrt{M -(2\\pi \\alpha' \\, E)^2} \\,,\n\\end{equation}\nwhere $E= F_{01}$. 
Let us introduce the displacement $D$ by\n\\begin{equation}\nD={\\partial{\\cal L}\\over \\partial E}={2\\pi\\alpha' \\, E\\over e^{\\phi}\n \\sqrt{M -(2\\pi \\alpha' \\, E)^2}\n} \\,.\n\\label{electric}\n\\end{equation}\n$D$ is conserved, ${\\partial_\\sigma D}=0$; one may obtain an equivalent description of the system by the Legendre transformation,\n\\begin{equation}\n{\\cal L'}=-D\\cdot E + {\\cal L} =\n-{1\\over 2\\pi \\alpha' e^{\\phi}} \\sqrt{M (1+e^{2\\phi} D^2)}\n\\end{equation}\nby eliminating $E$ using\n(\\ref{electric}). The displacement $D$ counts the number of fundamental strings immersed and is quantized to take an integer value, which we denote as $p$. Using the Einstein frame metric, the above Lagrangian may be written as\n\\begin{equation}\n{\\cal L}= -{1\\over 2\\pi \\alpha'} \\int d\\sigma\n\\sqrt{ -\\mbox{det}(g^E_{\\mu\\nu} \\partial_a X^\\mu\\partial_b X^\\nu)}\\sqrt{p^2 e^\\phi\n+q^2 e^{-\\phi}\n}\\quad,\n\\label{pqlag}\n\\end{equation}\nwhere we also introduced an integer $q$ counting the number of D-strings. The derivation of the $(p,\\,\\, q)$ string action here is strictly for $q=1$, but we generalize it to arbitrary $q$. From the above action for the $(p,\\,\\,q)$ string, the S-duality of the IIB string theory is manifest. 
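Concretely, the tension factor $\sqrt{p^2 e^{\phi}+q^2 e^{-\phi}}$ appearing in the $(p,\,q)$ action is unchanged under swapping $p\leftrightarrow q$ together with $\phi\to-\phi$; a quick numerical check of this (our own illustration, with arbitrary sample values) is:

```python
import math

def pq_tension_factor(p, q, phi):
    """Tension factor of the (p, q) string action, sqrt(p^2 e^phi + q^2 e^-phi)."""
    return math.sqrt(p**2 * math.exp(phi) + q**2 * math.exp(-phi))

# S-duality: p <-> q combined with phi -> -phi leaves the factor invariant.
t1 = pq_tension_factor(3, 1, 0.7)
t2 = pq_tension_factor(1, 3, -0.7)
```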
Namely, the above is invariant under the transformation\n\\begin{equation}\ng'_{E\\mu\\nu }=g_{E\\mu\\nu},\\ \\ \\phi'= -\\phi,\\ \\ p \\ \\leftrightarrow \\ q\\,.\n\\end{equation}\nNote that in our dilaton-deformed solutions, the S-duality corresponds to simply changing $k\\ \\rightarrow\\ -k$ and $\\phi_0\\ \\rightarrow\\ -\\phi_0$.\nFrom this S-duality transformation, it is clear that magnetic charges are confined for $k\/\\mu > 12$, just as electrically charged quarks are confined for $k\/\\mu< -12$.\n\nTo see the details of the interaction and the phase structure, let us assume that the $(p,\\,\\, q)$ string is static and choose the gauge $\\tau=t$ and $\\sigma=y$.\nWe shall consider the case where the $(p,\\,\\,q)$ string trajectory is independent of $x_2$ and $x_3$. The $(p,\\,\\,q)$ string Lagrangian then becomes\n\\begin{equation}\n{\\cal L}= - \\sqrt{\\lambda}\\int dy \\sqrt{ A(y)\\left(B(y)+ C(y)\\left(\n{dx\/ dy}\n\\right)^2\\right)}\\ ,\n\\end{equation}\nwhere $\\lambda=g_{YM}^2 N$ is the 't Hooft coupling and\n\\begin{eqnarray}\nA(y) &=& (y-b)^{\\frac{1+3a}{4}}\n(y+b)^{\\frac{1-3a}{4}}\n\\left(p^2 e^{\\phi_0}\n\\left({y-b\\over y+b} \\right)^{\\frac{k}{8b}}\n+ q^2\ne^{-\\phi_0}\n\\left({y-b\\over y+b}\\right)^{-\\frac{k}{8b}}\n\\right)\n\\ , \\nonumber\\\\\nB(y) &=& {1\\over 16(y-b)(y+b)}\\ , \\\\\nC(y) &=& (y-b)^{\\frac{1-a}{4}}\n(y+b)^{\\frac{1+a}{4}}\\ . 
\\nonumber\n\\end{eqnarray}\nThe computation\nshowing heavy quark confinement follows closely the one in Ref.~\\cite{Bak:2004yf}.\nThe equation of motion,\n\\begin{equation}\n{d\\over dy}\\left({ \\sqrt{A}\\, C\\, {dx\/dy} \\over \\sqrt{ B+ C\\left(\n{dx\/ dy}\n\\right)^2}\n} \\right)=0\\,,\n\\end{equation}\nmay be integrated once, and one gets\n\\begin{equation}\n{ \\sqrt{A}\\, C\\, {dx\/dy} \\over \\sqrt{ B+ C\\left(\n{dx\/ dy}\n\\right)^2}\n}=\\pm q^{-2}\\ ,\n\\label{velo}\n\\end{equation}\nwith an integration constant $q^2$.\nTo understand the dynamical implication,\nwe rewrite (\\ref{velo}) into the form\n\\begin{equation}\n\\left({dy\\over dx}\\right)^2+ {V}(y)=0\\ ,\n\\end{equation}\nwith the potential\n\\begin{equation}\n{ V}(y)={C\\over B} (1-q^4 AC)\n\\ .\n\\end{equation}\nThis can be viewed as a particle moving in one\ndimension\nunder the potential ${V}$,\nregarding the coordinate $x$ as the `time'.\n\nThe confinement occurs when the\n`particle' spends\nan arbitrarily large `time' when it approaches the\nturning point denoted by $y_0$. At the turning point, one has $dy\/dx=0$ and\nthus ${V}(y_0)=0$, which implies that\n\\begin{equation}\nq_0^4 A(y_0)C(y_0)=1\\ ,\n\\end{equation}\nfor an appropriate choice of the integration constant $q=q_0$.\nThe condition of spending arbitrarily large `time'\nis fulfilled if ${ V'}(y_0)=0$. 
This leads to\n\\begin{equation}\n( A C)'\\big|_{y=y_0}=0\n\\ ,\n\\end{equation}\nwhere the condition ${ V}(y_0)=0$ is used.\n\nLet us first consider the case of a\n$(1,\\, 0)$ string connecting an electrically\ncharged quark\/anti-quark pair.\nIn this case, the latter condition is solved by\n\\begin{equation}\ny_0 =- ab- k\/4\n\\ .\n\\end{equation}\nFor the existence of\nthe solution in the range $y \\in (b,\\infty)$, one has to impose\n\\begin{equation}\ny_0-b = -ab- k\/4-b \\equiv 2b \\beta> 0\\quad,\n\\end{equation}\nwhich is equivalent to\n\\begin{equation}\n{k\\over \\mu} < -12\\quad.\n\\label{wilsoncon}\n\\end{equation}\nThen ${ V}(y_0)=0$ is satisfied by choosing the\nintegration constant $q$ as\n\\begin{equation}\nq_0^4= {1\\over 2b}\\beta^\\beta (1+\\beta)^{-(1+\\beta)}\ne^{-\\phi_0}\\ .\n\\end{equation}\n\n\nFor small $q$, the separation between the quark\/anti-quark pair is of\norder $q$ according to the IR\/UV relation.\nThe energy scale here is much higher than that of\nconfinement. Thus the quark\/anti-quark potential for sufficiently small separation is of Coulomb type,\nas expected.\n\n\n\n\nWhen $\\beta>0$ and $q$ approaches $q_0$ from below, the string spends\nmore and more \\lq time' near the turning point $y\\sim y_0$.\nConsequently, the separation between the quark and anti-quark becomes larger and larger\nas one sends $q$ to $q_0$ from below.\n\nIn the limit $q \\to q_0$, we can compute the tension of the string and\nthe energy scale of confinement. The energy of the string is given by\n\\begin{equation}\nE_s= \\sqrt{\\lambda}\\int dy \\sqrt{ A\\left(B+ C\\left(\n{dx\/ dy}\n\\right)^2\\right)}\n= \\sqrt{\\lambda}\\int dx \\sqrt{q^4 A^2 C^2}\\ ,\n\\label{qenergy}\n\\end{equation}\nwhere we have used the equation of motion. 
The integral in fact diverges,\nand one may regulate it by subtracting the self-energy of the quark and anti-quark.\n\nSince $q_0^4 A(y_0) C(y_0)=1$ and the string stays\nnear the turning point\nfor most of the \\lq time', we find from (\\ref{qenergy}) the tension of\nthe confining string to be\n\\begin{equation}\nT_{QCD}= \\sqrt{\\lambda}\\, \\sqrt{A(y_0)C(y_0)}= \\sqrt{\\lambda}\\, q_0^{-2}\n= \\sqrt{\\lambda\\,\\mu}\\,\\, {(1+\\beta)^{1+\\beta\\over 2}\n\\over \\sqrt{a}\\,\\, \\beta^{\\beta\\over 2}}e^{\\phi_0\\over2}\\quad.\n\\end{equation}\nThis sets the scale of\nconfinement. Our result agrees with the previously calculated one in the $\\mu\\to 0$ limit \\cite{Gub}.\n\n\n\n\n\n\n\n\\subsection{Screening}\n\nIn the analysis of Ref.~\\cite{Bak:2004yf}, the region $k\/\\mu > -12$,\ncorresponding to $\\beta < 0$, was not analyzed carefully because that paper was mainly\nconcerned with the existence of confinement phenomena.\nWe would like to show that this region corresponds, in fact, to a screening phase.\nIn this region the potential $V$ always has a turning point beyond which the\nsingularity is located. This inaccessibility of the singularity holds for all values of the integration\nconstant $q^2$. One may then ask the following. Can a quark\/anti-quark pair at arbitrarily large\nseparation be connected through this string solution, by\nadjusting\nthe integration constant $q$? 
The answer turns out to be no.\nNamely, there is an upper limit on the separation length between the\nquark and anti-quark in the above string configurations.\n\nTo show this, let us first note that\nthe separation length is given by\n\\begin{equation}\nL=2 \\int^\\infty_{y_\\star} dy {1\\over\\sqrt{-V}}=\n2 \\int^\\infty_{y_\\star} dy\n{ \\sqrt{B}\\over \\sqrt{C} \\sqrt{q^4 AC-1 }\n}\\ ,\n\\end{equation}\nwhere $y_\\star$ is the turning point.\nFor small $q$ satisfying\nthe condition $e^{\\phi_0} \\, b\\, q^4 \\ll 1$, the turning point\n$y_\\star \\sim 1\/(e^{\\phi_0} q^4)$ is much larger than $b$\nand, thus,\nthe potential $V$ may well be approximated by\n\\begin{equation}\nV \\sim 16 y^{5\\over 2} (1-e^{\\phi_0} q^4 y)\\,.\n\\end{equation}\nThen the separation is approximately given by\n\\begin{equation}\nL \\sim {e^{\\phi_0\\over 4} q\\over 2} \\int^\\infty_{1} {dt\\over t^{5\\over 4}}\n{ 1\\over\\sqrt{t-1 }}\n\\quad.\n\\end{equation}\nThis is the regime of sufficiently small separation, and the expression for the separation\nis essentially the same as the one for strings in the pure AdS case, because\nthe strings stay in the near-boundary\nregion of the asymptotically AdS space.\n\nTo study the upper limit on the separation distance, one should look\nat the large $q$ behavior of $L$. When\n$e^{\\phi_0}\\, b\\, q^4 \\gg 1$, the contribution to the integral\nfrom infinity down to $y-b=O(b)$ is of order $1\/(e^{\\phi_0 \\over 2}q^2)$,\nwhich is small. 
The turning point occurs in the regime\n$y-b \\ll b$, and the contribution from near the turning point reads\n\\begin{equation}\n\\delta L\n= 2 \\int_{z_\\star} {dz\\over 4z^{5-a\\over 8} (2b)^{5+a\\over 8}}\n{1\\over\n\\sqrt{\ne^{\\phi_0} q^4 z^{|\\beta|} (2b)^{1+\\beta}-1\n }\n}\\,.\n\\end{equation}\nThus we conclude that\n\\begin{equation}\n\\delta L\n\\sim q^{-{3+a\\over 2|\\beta|}}\n\\quad,\n\\end{equation}\nwhich is negligible in the large $q$ limit.\nObviously, the intermediate region contributes only an order-one amount to the integral.\nThis shows that the separation has a maximum value for some $q$, which\nwe denote by $L_{max}$.\n\nWhat really happens if the separation of the external quarks becomes larger than\n$L_{max}$? In this case, the strings follow the trajectory of the trivial\nsolution, ${dx\\over dy}=0$. The strings from the boundary quarks\nand antiquarks are stretched straight toward the singularity without\nany change of the $x$\ncoordinate. Very near the singularity, corresponding to the IR regime of\nthe dual field theory, the strings are joined by changing the $x$\ncoordinate. This situation is depicted in Figure \\ref{sc}.\nOne may worry about the part of the string very near the singularity.\n(This part describes the physics of the field lines\nin the extreme IR regime of the energy scale.) 
However, one may see that\nthe contribution of this part to the energy is zero at any rate.\nTo see this, let us first note that the energy integral\nof the configuration is given by\n\\begin{equation}\nE_s= \\sqrt{\\lambda}\\int \\sqrt{AB dy^2+ AC dx^2}\\,.\n\\end{equation}\nSince $dy=0$ for the part joining the two straight strings,\n\\begin{equation}\nE_{joint}= \\sqrt{\\lambda}\\int \\sqrt{AC} dx= \\sqrt{\\lambda}\\,L\\, { (y-b)^{|\\beta|\\over 2}\n\\over (y+b)^{-{1+\\beta\\over 2}\n}\n}\\Big|_{y=b}=0\\,,\n\\end{equation}\nwhere in the last equality we have used the fact that $\\beta < 0$.\nThus the boundary condition at the singularity does not matter,\nand the result of vanishing string energy remains valid despite\nthe singularity.\n\n\\begin{figure}[htb]\n\\vskip .5cm\n\\epsfxsize=3.6in\n\\centerline{\n\\epsffile{figs.eps}\n}\n\\vspace{.1in}\n\\caption{\\small A string configuration touching the singularity represents\n the screening.}\n\\label{sc}\n\\end{figure}\n\nFrom the above discussion, the nature of the interaction is clear.\nThe charges are not confined for $k\/\\mu > -12$. When the separation\nbecomes larger than $L_{max}$, the quark\/anti-quark potential diminishes,\nrepresenting the\nphenomenon of screening. Therefore we conclude that\nthe regime $k\/\\mu > -12$ corresponds to a screening phase.\n\n\n\n\n\\subsection{Confinement vs Screening of Heavy Quarks}\n\nFrom the discussion above, one may expect that the system shows\na phase transition as one varies the parameter\n$k\/\\mu$ by adjusting $k$ or $\\mu$. At $k\/\\mu =-12$, the system undergoes\na phase transition between a confinement phase and a screening phase for heavy\nquarks.\nThe appearance of a tension for the electric-flux string in the confining phase may serve\nas an order parameter. 
At the critical\npoint $k\/\\mu=-12$, i.e., in the $\\beta\\ \\rightarrow\\ 0$\nlimit, the electric-flux string tension takes the finite value\n\\begin{equation}\nT_{QCD}=5 \\sqrt{\\lambda\\,\\mu}e^{\\phi_0\\over 2}\n\\,.\n\\end{equation}\nNamely, the tension jumps to a finite value at the phase transition.\nThis may be understood as follows. Due to the Gauss law, the total electric\nflux around charges should remain preserved irrespective of confinement or\nscreening. Then, when quarks become confined through the transition,\nthe electric flux lines form a linear tube,\nand the finite tension simply comes from the energy of the existing field profile.\nThus the tension should start with a finite value.\n\n\n\n\\subsection{``Doubly Screening Phase\"}\nIn this subsection, let us consider the response of D-strings,\ndescribing the interaction between magnetically charged\n objects. From the Lagrangian in (\\ref{pqlag}), the dynamics of\n$(1,\\,0)$ strings for a given $k$ is mapped into that of $(0,\\,1)$ strings\nwith $-k$. Thus, without further computation, one may see that\n magnetically charged objects are confined when $k\/\\mu> 12$ and\nscreened otherwise.\n\n\n\n\\begin{figure}[htb]\n\\vskip .5cm\n\\epsfxsize=3.0in\n\\centerline{\n\\epsffile{figph.eps}\n}\n\\vspace{.1in}\n\\caption{\\small The full phase diagram.}\n\\label{phase}\n\\end{figure}\n\nThe full phase structure is drawn in Figure \\ref{phase}.\nRegion I, with $k\/\\mu < -12$, describes the phase where\nelectrically charged quarks are confined. The\nmagnetically charged\nobjects should then be screened,\nwhich\nis indeed the case as discussed above.\nRegion III, with $k\/\\mu > 12$, corresponds to the phase\nwhere magnetic charges are confined\nwhile quarks are screened. This is the\nS-dual of region I.\n\nRegion II, with $-12 < k\/\\mu < 12$, describes the phase where both\nthe quarks and the magnetic charges are screened, which we call the\n`doubly screening phase'. 
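The phase boundaries at $k/\mu=\pm 12$ and the S-duality map $k\rightarrow -k$ can be summarized in a short numerical sketch. The snippet below is our own illustration (the function names are not from the original analysis); it also checks the relation $k/b = 2(k/\mu)\left(1+k^2/6\mu^2\right)^{-1/2}$ quoted in the D7-probe analysis later in the paper against the numerical values cited there.

```python
import math

def phase(k_over_mu):
    """Heavy-quark phase as a function of k/mu (boundaries at +-12)."""
    if k_over_mu < -12:
        return "electric confinement"
    if k_over_mu > 12:
        return "magnetic confinement"
    return "doubly screening"

def k_over_b(k_over_mu):
    """Relation k/b = 2 (k/mu) (1 + k^2/(6 mu^2))^(-1/2) from the text."""
    x = k_over_mu
    return 2.0 * x / math.sqrt(1.0 + x * x / 6.0)

# S-duality (k -> -k) exchanges electric and magnetic confinement
# and maps the doubly screening region to itself.
dual = {"electric confinement": "magnetic confinement",
        "magnetic confinement": "electric confinement",
        "doubly screening": "doubly screening"}
for k in (-20.0, -12.5, -7.0, 0.0, 7.0, 12.5, 20.0):
    assert phase(-k) == dual[phase(k)]

# Quoted values: k/mu = -12, -7, -2.97 correspond to
# k/b = -4.8, -4.62, -3.78 respectively.
assert round(k_over_b(-12.0), 2) == -4.8
assert round(k_over_b(-7.0), 2) == -4.62
assert round(k_over_b(-2.97), 2) == -3.78
```

In particular, the critical point $k/\mu=-12$ sits exactly at $k/b=-24/\sqrt{25}=-4.8$, in agreement with the values used in the figure discussion.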
As far as we know, there were no previous\nexamples in which both the electric and magnetic charges are\nscreened.\nPresumably this phase structure is possible due to the S-duality\nsymmetry of the underlying ${\\cal N}=4$ SYM theory.\n\n\n\n\n\n\n\n\n\n\\section{Spontaneous Chiral Symmetry Breaking: D7 Probe Analysis}\n\n\\subsection{Generalities}\n\nA few ($N_f$) D7-branes parallel to a stack of a large number\n$N$ of D3-branes introduce $N_f$\nfundamental ${\\cal N}=2$ hypermultiplets in the low energy gauge\ndynamics on the D3-branes.\nFor $N$ large enough, $N\\gg N_f$, the back reaction of the\nD7-branes on the near horizon limit of the supergravity\nbackground may be neglected, and the low energy\ngauge dynamics on the D3-branes with ${\\cal N}=2$ fundamental\nhypermultiplets is supposed to be\ndual to $AdS_5\\times S^5$ with probe D7-branes \\cite{Kar}.\nThe open string dynamics on the probe D7-branes corresponds\nto the dynamics of the ${\\cal N}=2$ hypermultiplets\nin the ``ambient\" ${\\cal N}=4$ SYM theory. This is because the\ngauge theory interpretation of the probe approximation\nis the quenched approximation, neglecting the effects of the\nhypermultiplets on the dynamics of ${\\cal N}=4$ $SU(N)$ SYM at large $N$.\nHowever, it should be noted that these probe ${\\cal N}=2$ fundamental\nhypermultiplets experience the full dynamics of the ${\\cal N}=4$ SYM theory.\n\nIn the D-brane picture in flat 10-dimensional space-time,\nlet the D3-brane world-volume span the\n$\\{0123\\}$ directions,\nand the D7-brane the\n$\\{01234567\\}$\ndirections. The distance between D3 and D7 in the\ntransverse $\\{89\\}$ space gives rise to a mass term in the Lagrangian for the\nhypermultiplets. 
More specifically, the asymptotic value\nof $w =\\sqrt{x_8^2+x_9^2}$ at large\n$\\rho^2=x_4^2+x_5^2+x_6^2+x_7^2$ on the D7 world-volume\ncorresponds to the bare mass $m_f$ of the\nhypermultiplets\\footnote{We use the notation $x_i=x^i$ for the\nspatial coordinates.}.\nFor the maximally supersymmetric configuration,\n$w$ is constant on D7 (say, D7 lies at constant $(x_8,x_9)=(w_0,0)$ while\nD3 is sitting at the origin). However, for non-supersymmetric states\nsuch as those we are considering,\n$w$ is generically a varying function of $\\rho$.\n\nIn the supergravity picture, it is not difficult to identify the bare\nmass of the hypermultiplets in the framework of\nthe AdS\/CFT correspondence.\nThe maximally supersymmetric supergravity background in the near horizon\nlimit is $AdS_5\\times S^5$ with the string frame metric\n\\begin{eqnarray}\nds^2&=&\\frac{1}{f(r)}\\left(\\sum_{\\mu=0}^{3}dx_{\\mu}\ndx^{\\mu}\\right)+f(r)\\left(\\sum_{i=4}^{9}dx^i dx^i\\right)\\nonumber\\\\\n&=& \\frac{r^2}{l^2}\\left(\\sum_{\\mu=0}^{3}dx_{\\mu}dx^{\\mu}\\right)+\n\\frac{l^2}{r^2} dr^2 +l^2 d\\Omega_5^2\\quad,\\label{adsmetric} \\end{eqnarray}\nwhere $r^2=\\sum_{i=4}^9 x_i^2=\\rho^2+w^2$ and $f(r)=\\frac{4\\pi N\ng_s}{r^2}= \\frac{l^2}{r^2}$ is the warping factor. The\nworld-volume profile of the probe D7-brane in this background is\nsimply given by identifying the flat D-brane picture coordinate\n$\\{x^M\\}$ ($M=0,\\ldots,9$) with the coordinate $\\{x^M\\}$ in\n(\\ref{adsmetric}). For example, the maximally supersymmetric D7 lying\non the plane $(x_8,x_9)=(w_0,0)$ fills the $AdS_5$ part of\n$\\{x^{\\mu},r\\}$ for $w_0\\leq r<\\infty$, in addition to wrapping an\n$S^3$-cycle in the $S^5$. 
The wrapped $S^3$ is defined by\n$x_4^2+x_5^2+x_6^2+x_7^2=\\rho^2=r^2-w_0^2$ and it vanishes at\n$r=w_0$, ensuring a smooth D7 world-volume in $AdS_5\\times S^5$.\nNote that the D7-brane is absent at energy scales below $r=w_0$; this\nis consistent with the field theory expectation that we shouldn't\nfind any hypermultiplet below its mass scale $m_f=w_0$. Moreover,\n$w_0$ is a free parameter representing a family of D7 profiles;\nthis gives us the freedom to change the bare mass of the\nhypermultiplets. An especially interesting limit is the chiral\nsymmetry limit\\footnote{By chiral symmetry, we mean a chiral\nU(1)-symmetry which we discuss more in section (\\ref{xsym}).} of\n$m_f=w_0=0$.\n\n\nFor the non-supersymmetric backgrounds of our interest, it is possible\nto identify a suitable coordinate $\\{x^M\\}$ that\nhas a natural interpretation as the flat coordinate\nof the D-brane picture. The string frame metric in this coordinate\nis\\footnote{From now on, we set $\\phi_0=0$ because it plays no special role except\nfor a trivial overall scaling.}\n\\begin{eqnarray}\nds^2= \\left(\\frac{r^4-1}{r^4+1}\\right)^{\\frac{k}{8b}}\\Bigg\\{\\!\\!\\!\\!&-&\\!\\!\\!\\!\n\\left(r^2-r^{-2}\\right)^{\\frac{1+3a}{2}}\\left(r^2+r^{-2}\n\\right)^{\\frac{1-3a}{2}}dx^0 dx^0+\\frac{1}{r^2}\\left(\\sum_{i=4}^{9}\ndx^i dx^i\\right)\\nonumber\\\\\n&+&\\left(r^2-r^{-2}\\right)^{\\frac{1-a}{2}}\\left(r^2+r^{-2}\n\\right)^{\\frac{1+a}{2}}d \\vec{x} \\cdot d\\vec{x}\\Bigg\\}\\quad,\\label{newmetric}\n\\end{eqnarray}\nwhere $r^2=(x_4^2+x_5^2+x_6^2+x_7^2)\\,+\\,(x_8^2+x_9^2)\\equiv\\rho^2+w^2$ as before. The above metric is obtained from (\\ref{bulkmetric})\nby combining $dy^2$ and $d\\Omega_5^2$ with the change\nof variables $y=b\\cosh(2s)$ and $r^2=e^s$. The $R^{1,3}$ coordinate $\\{x^0,\\vec{x}\\}$\nhas also been rescaled appropriately. 
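As a consistency check (our own remark, not part of the original derivation), the deformed metric (\ref{newmetric}) reduces to the $AdS_5\times S^5$ form (\ref{adsmetric}) with $l=1$ in the asymptotic region $r\to\infty$:

```latex
% For r -> infinity the dilatonic prefactor tends to unity,
%   ((r^4-1)/(r^4+1))^{k/8b} -> 1 ,
% while
%   (r^2 - r^{-2})^{(1+3a)/2} (r^2 + r^{-2})^{(1-3a)/2} -> r^2 ,
%   (r^2 - r^{-2})^{(1-a)/2}  (r^2 + r^{-2})^{(1+a)/2}  -> r^2 ,
% so the metric approaches
ds^2 \;\longrightarrow\;
 r^2\left(-dx^0 dx^0 + d\vec{x}\cdot d\vec{x}\,\right)
 + \frac{1}{r^2}\sum_{i=4}^{9} dx^i dx^i
 \;=\; r^2\sum_{\mu=0}^{3} dx_{\mu}dx^{\mu}
 + \frac{dr^2}{r^2} + d\Omega_5^2 \,,
% i.e. the AdS_5 x S^5 metric with l = 1, consistent with the
% singularity sitting at the fixed scale r = 1 below.
```

This also makes manifest that the deformation affects only the interior of the geometry, so the usual near-boundary AdS/CFT dictionary applies.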
The singularity\nis now positioned at $r^2=\\rho^2+w^2=1$.\nThe probe D7 world-volume covers $x^0,\\ldots,x^7$ and its transverse\nposition is given by $(x_8,x_9)=(w(\\rho),0)$ without loss of generality.\nHence, it fills the (approximate) $AdS_5$ space over $w(0)\\leq r <\\infty$\n(equivalently $0\\leq \\rho <\\infty$), and the wrapped $S^3$-cycle\nin $S^5$ shrinks to zero at $r=w(0)$. As we have explained in the previous\nparagraph, the field theory\nsituation dual to this configuration is a quantum state of ${\\cal N}=4$ SYM\nwith an ${\\cal N}=2$ fundamental hypermultiplet in the quenched approximation,\nwhose bare mass is identified with $m_f=w(\\infty)$. A family of profiles\nwith varying $w(\\infty)$ allows us to tune the bare mass,\nand, in fortunate cases, to reach the chiral limit.\n\n\nAccording to the AdS\/CFT proposal, however, the bare mass is not the only information we can extract from $w(\\rho)$.\nViewing $w(\\rho)$ as an effective scalar field in the $AdS_5$, its asymptotic value as $\\rho\\rightarrow\\infty$\ncouples to a scalar operator in the field theory. 
We have actually identified this operator: we have seen that\n$w(\\infty)$ couples to the mass operator of the fundamental ${\\cal N}=2$ hypermultiplet,\n\\begin{eqnarray}\n\\delta {\\cal L}_{\\rm SYM}&=&w(\\infty)\\int d^2\\theta \\,\\,\\tilde Q_f Q_f- ({\\rm h.c.})\\nonumber\\\\\n&\\sim &w(\\infty)(\\tilde q_L^f q_L^f+{\\rm h.c.})+({\\rm bosonic})\n=w(\\infty)\\bar q_D^f q_D^f+({\\rm bosonic})\\,,\n\\end{eqnarray}\nwhere we have introduced the Dirac fermions,\n\\begin{equation}\nq_D^f=\\left(\\begin{array}{c} q_L^f \\\\ i\\sigma^2 (\\tilde q^f_L)^* \\end{array}\\right)\\quad.\n\\end{equation}\nThe fermion mass operator is of dimension 3, and the AdS\/CFT dictionary tells\nus that its expectation value\nis encoded in the coefficient of the sub-leading $\\sim \\frac{1}{\\rho^2}$ behavior\n of $w(\\rho)$ as $\\rho\\rightarrow\\infty$.\nNote that the bosonic piece in the above has vanishing expectation value in\nthe symmetric phase. Since we are free to choose the bare mass $w(\\infty)$,\nwe can study the dependence\nof the bi-fermion mass operator condensate on the bare mass parameter. In\nthe chiral limit, we may discuss the occurrence of\nspontaneous chiral symmetry breaking.\n\n\\subsection{A Subtlety}\n\nIn this subsection, we show that the expectation value of the fermion mass operator\nfor the hypermultiplet is precisely given by\nthe coefficient of the sub-leading $\\frac{1}{\\rho^2}$ term in $w(\\rho)$\nas $\\rho\\rightarrow\\infty$. This is equivalent to the subtle\nquestion of choosing the correct field variable, from whose\nasymptotic behavior we should read off the expectation value of\nthe field theory operator. 
This is a relevant caveat to\ncare about because $w(\\rho)$ has a highly non-standard form of\naction functional in $AdS_5$ derived from the D7 DBI action.\nThe relevant part of the D7-brane DBI action is\n\\begin{equation}\nS_{\\rm D7}=\\tau_7\\,\\int d^8\\xi \\,e^{-\\phi}\\sqrt{-\n\\det\\left(\\frac{\\partial x^M}{\\partial \\xi^\\mu}\n\\frac{\\partial x^N}{\\partial \\xi^\\nu}G^{(10)}_{MN}\\right)}\\quad,\n\\end{equation}\nwhere $G^{(10)}$ is the 10-dimensional metric\n of (\\ref{newmetric}), and the dilaton profile is\n\\begin{equation}\ne^{-\\phi}=\\left(\\frac{r^4+1}{r^4-1}\\right)^{\\frac{k}{4b}}\\quad.\n\\end{equation}\nChoosing the gauge $\\xi^i=x^i$ ($i=0,\\ldots,7$), and\n$(x^8,x^9)=(w(\\rho),0)$, we obtain the effective action for $w(\\rho)$,\n\\begin{equation}\nS\\sim \\int d^4x\\,\\int_0^{\\infty}d\\rho \\,\\, \\rho^3 \\, Z\\left(\\rho^2+w^2\\right)\\,\\sqrt{1+\\left(\\frac{d w}{d\\rho}\\right)^2}\\quad,\n\\label{rhoaction}\n\\end{equation}\nwhere $Z(x)$ is a complicated function which goes to unity for\nlarge $x$;\n\\begin{equation}\nZ(x)=\\left(1-{1\\over x^4}\\right)\\left({x^2-1}\\over{x^2+1}\\right)^{k\\over{4b}}\\quad.\n\\end{equation}\nAnother fact that will be important for us later is $Z'(x)\\sim\n{1\\over x^3}$ for large $x$.\n\nFor a smooth D7 embedding, we need to impose the boundary condition, ${dw\\over d\\rho}(0)=0$.\nIn the asymptotic $\\rho\\rightarrow \\infty$ region, $Z$ goes to unity and the solution of the equation of motion behaves as\n\\begin{equation}\nw(\\rho)\\sim m+{C\\over \\rho^2}\\quad.\n\\end{equation}\nNaively, the $\\rho$ integration in (\\ref{rhoaction}) diverges because for large $\\rho$,\n\\begin{equation}\n\\rho^3 \\, Z\\left(\\rho^2+w^2\\right)\\,\\sqrt{1+\\left(\\frac{d w}{d\\rho}\\right)^2}\\,\\,\\sim\\,\\,\n\\left(\\rho^3-\\frac{k}{2b}\\frac{1}{\\rho}\\right)+\\left(\\frac{k}{2b}(2m^2+1)+2C^2\\right)\\frac{1}{\\rho^3}+\\cdots\n\\label{wexpand}\n\\end{equation}\nand we need a suitable regularization 
procedure.\nHowever, what we are interested in are variations of the value of (\\ref{rhoaction}) under changes of $w(\\infty)=m$,\nand for this purpose it is enough to regularize (\\ref{rhoaction})\nby subtracting its value at some fixed reference solution $w_0(\\rho)$:\n\\begin{equation}\nS_{\\rm R}\\,\\equiv\\,\\int d^4x\\,\\int_0^{\\infty}d\\rho \\,\\, \\rho^3\n\\left( Z\\left(\\rho^2+w^2\\right)\\,\\sqrt{1+\\left(\\frac{d w}{d\\rho}\\right)^2}\n-Z\\left(\\rho^2+w_0^2\\right)\\,\\sqrt{1+\\left(\\frac{d w_0}{d\\rho}\\right)^2}\\,\\,\\right)\\quad,\n\\end{equation}\nwhich is now convergent due to (\\ref{wexpand}). The standard AdS\/CFT correspondence then reads\n\\begin{equation}\n\\exp\\left(i\\,S_{\\rm R}[m]\\right)\\,\\,=\\,\\,\\left< \\exp\\left(i\\,\\int d^4x \\,m\\,\\bar q_D^f q_D^f\\right)\\right>\\quad,\n\\end{equation}\nwhere $S_{\\rm R}[m]$ is the above regularized action evaluated on the solution of the equation of motion with $w(\\infty)=m$.\nHence, we have\n\\begin{equation}\n\\frac{\\delta S_{\\rm R}[m]}{\\delta m}= \\int d^4 x\\,\\left< \\bar q_D^f q_D^f\\right>\\quad.\\label{condensate}\n\\end{equation}\n\nIn fact, it is not difficult to calculate the left-hand side of the above relation. Suppose that $w+\\delta w$ is the solution\nof the equation of motion with $(w+\\delta w)(\\infty)=m+\\delta m$ for infinitesimal $\\delta m$. The variation of $S_{\\rm R}[m]$ is\n\\begin{equation}\n\\delta S_{\\rm R}[m]=\\int d^4x\\,\\int_0^{\\infty}d\\rho \\,\\, \\rho^3 \\,\\left(2w\\,Z'(\\rho^2+w^2)\\sqrt{1+\\left(\\frac{d w}{d\\rho}\\right)^2}\\,\\delta w\n+Z(\\rho^2+w^2){{d w\\over d \\rho}{d\\,\\delta w \\over d \\rho}\\over \\sqrt{1+\\left(\\frac{d w}{d\\rho}\\right)^2}}\\right)\\,.\n\\end{equation}\nThe convergence of this expression may easily be seen from the property $Z'(x)\\sim {1\\over x^3}$,\nand we are allowed to perform integration by parts\non the second term. 
The resulting integrand which is proportional to $\\delta w$ vanishes because $w(\\rho)$ satisfies the equation of motion,\nand the surviving surface contribution at $\\rho=\\infty$ is\n\\begin{equation}\n\\int d^4x\\, \\lim_{\\rho\\rightarrow \\infty} \\left(\\rho^3 Z(\\rho^2+w^2)\n{{d w\\over d \\rho}\\over \\sqrt{1+\\left(\\frac{d w}{d\\rho}\\right)^2}}\\,\\delta w\\right)=\\int d^4x\\,(-2C\\delta m)\\quad,\n\\end{equation}\nusing $\\rho^3 {d w\\over d \\rho}\\sim -2C$ and $\\delta w \\sim \\delta m$ for large $\\rho$.\nComparing with (\\ref{condensate}), we thus have\n\\begin{equation}\n\\left< \\bar q_D^f q_D^f\\right>\\,\\,=\\,\\,-2 C\\quad.\n\\end{equation}\n\n\\subsection{The Chiral Symmetry $U(1)_c$ \\label{xsym}}\n\nIn this subsection, we discuss more about the chiral U(1)-symmetry of the\ngauge theory we are considering.\nIn fact, it is clear in the D-brane picture of D3-D7 system that there must\nbe a global U(1)-symmetry which\ncorresponds to the rotation of D7's position in the transverse $(x_8,x_9)$ plane.\n(Recall that the D3-branes are aligned along $\\{0123\\}$, while D7-branes\nare along $\\{01234567\\}$.\nLet us put $N$ D3-branes at $(x_8,x_9)=(0,0)$, and $N_f$ D7-branes at\n$(x_8,x_9)=(w_1,w_2)$.)\nThe distance between D3 and D7 introduces a mass term for ${\\cal N}} \\newcommand{\\hg}{\\hat{g}=2$\nfundamental hypermultiplets,\n\\begin{equation}\n(w_1+i w_2)\\,\\int d^2\\theta\\,\\,\\tilde Q_f Q^f +{\\rm h.c.}\\,\\,\\,\\sim\\,\\,\\,\n(w_1+i w_2)\\, \\bar q_D^f q_D^f \\,\\,+\\,\\,{\\rm (bosonic)}\\quad.\n\\end{equation}\nThe rotation $(w_1+iw_2)\\rightarrow\ne^{2i\\alpha}\\cdot(w_1+i w_2)$ of the D7-brane's position does not change\nanything on the D3-brane world-volume in the D-brane picture, and hence\nthere should exist\na compensating chiral rotation which is a\nglobal symmetry of the gauge theory.\nWe should also expect the same chiral\nsymmetry in the non-supersymmetric backgrounds of our interest, because\nthe solutions preserve the 
$SO(6)$ symmetry\nof $S^5$, which includes the $(x_8,x_9)$-plane rotation as a subgroup.\n\n\n\nLooking at the superpotential term,\n\\begin{equation}\n\\int d^2\\theta\\,\\,\\tilde Q_f Z Q^f\\,\\,+\\,\\,\\int d^2\\theta\\,\\,\n{\\rm tr}\\left(Z[X,Y]\\right)\\quad,\n\\end{equation}\nwhere $X,Y$ and $Z$ are adjoint chiral superfields in the ${\\cal N}=4$ SYM theory\n($Z=X_8+i X_9$),\nit is easy to realize that this symmetry is an $R$-symmetry. From the D-brane\npicture, we should assign\ncharges $1$ and $0$ to $Z$ and $X,Y$ respectively. Then $d^2\\theta$ has\ncharge $-1$ (or $\\theta_\\alpha$ has charge $1\\over 2$),\nand this forces us to take charge $0$ for $\\tilde Q_f$ and $Q_f$.\nThe reason behind the $R$-symmetry is clear:\nwhen we rotate the D7-branes in the $(x^8,x^9)$ plane, the corresponding\n10-dimensional type IIB Killing spinor of the D3-D7 system with eight real\ncomponents also rotates accordingly.\n\nNote that $\\tilde q_f$ and $q_f$ both have charge $-{1\\over 2}$\nunder this $U(1)_c$. In terms of the Dirac spinor $q_D^f$, the\ncharge is ${1\\over 2}\\gamma^5={1\\over 2} \\left(\\begin{array}{cc}\n-1&0\\\\0&1\\end{array}\\right)$, and a non-vanishing expectation value of\n$\\left<\\bar q_D^f q_D^f\\right>$ will break this chiral symmetry\nspontaneously. Although $U(1)_c$ has a quantum anomaly which is\nproportional to $-N_f\\times C_2(F)$, where $C_2(F)$ is the Casimir\ninvariant of the fundamental representation, it is negligible in the\nlarge-$N$ 't Hooft limit \\cite{Wit1,Wit2}.\nFrom the D-brane\npicture, it comes as a surprise that there is an anomaly for\n$U(1)_c$ in the effective field theory on D3, because this is a\nsimple coordinate rotation in the $(x^8,x^9)$-plane. 
The\nresolution of the puzzle lies in the fact that a D7-brane sources a\nnon-trivial profile of the RR-scalar $C_0$ around it, such that a\nrotation in the $(x^8,x^9)$-plane induces a shift monodromy of the\n$C_0$ field which is exactly proportional to the number of\nD7-branes, $N_f$ \\cite{Ouyang:2003df}. The RR-scalar $C_0$,\nhowever, couples to the D3-branes by \\begin{equation} C_0\\, \\int {\\rm tr}\n(F\\wedge F)\\quad.\\end{equation} Therefore, the shift of the $\\theta$\nparameter due to the field theory anomaly of the $U(1)_c$ rotation is\nprecisely cancelled by the shift monodromy of the bulk field\n$C_0$, and the total anomaly is absent in the whole system. This\nmay well be called an example of anomaly inflow\\footnote{We thank\nJaemo Park for a discussion on this.} (See also \\cite{Armoni:2004dc} for a related discussion).\n\n\n\n\n\\subsection{Separation Between S$\\chi$SB and Confinement}\n\n\nThe equation of motion for $w(\\rho)$ from the effective action (\\ref{rhoaction}) is somewhat\ncomplicated, and does not seem to have any analytic solutions.\nWe have performed a numerical analysis to solve the equation of motion, and have identified\nthe asymptotic data, $m$ and $C$, for each solution.\nIn the previous subsections, we have seen that $m$ corresponds to the bare mass of the ${\\cal N}=2$ fundamental\nhypermultiplets, while $C$ is directly proportional to the condensate of the bi-fermion mass operator for\nthe hypermultiplets. 
Hence, a solution whose asymptotic behavior is characterized by $m=0$, but $C\\neq 0$,\nsignals that chiral symmetry is spontaneously broken in the gauge theory living on the boundary.\n\nThe effective action (\\ref{rhoaction}) for the probe D7-brane has a parameter\n\\begin{equation}\n{k\\over b}= 2\\left(k\\over \\mu\\right)\\left(1+{k^2\\over 6\\mu^2}\\right)^{-{1\\over 2}}\\quad,\n\\end{equation}\nrepresenting a family of bulk type-IIB supergravity backgrounds, which in turn correspond to a family of homogeneous\nquantum states of the ${\\cal N}=4$ SYM theory with ${\\cal N}=2$ hypermultiplets in the AdS\/CFT correspondence.\nIn section 2, we analyzed ``phases\" of these states and observed that their phase structure is sensitive to the value of $k\/ \\mu$\n(equivalently, $k\/ b$).\nSpecifically, for ${k\/ \\mu }<-12$ ($ {k\/ b} <-4.8$), we have electric confinement, while for\n${k\/ \\mu }>+12$ ($ {k\/ b} >+4.8$), magnetically charged objects are confined. An interesting phase occurs for $-12<{k\/ \\mu }<+12$ ($-4.8 < {k\/ b} <+4.8$),\nin which both electric charges and magnetic charges are screened.\n\nThe existence of spontaneous chiral symmetry breaking (S$\\chi$SB),\nthat is, whether there is a solution with $m=0$ but $C\\neq 0$ in the bulk,\nalso depends on the parameter $k\/ \\mu$ (or $k\/ b$). Our numerical study shows that\nthere is such a solution for ${k\/\\mu}<-2.97$ (or ${k\/ b} <-3.78$). As a representative case,\nFig.~\\ref{xsb} shows solutions with varying $w(\\infty)=m$ for\n${k\/\\mu}=-7$ (or ${k\/ b}=-4.62$).\nIt is evident from the figure that the value of $C$ does not vanish for the solution with $m=0$.\nWhat happens for ${k\/ \\mu}>-2.97$ (or ${k\/ b} >-3.78$) is that the solutions start to meet the singularity at $\\rho^2+w^2=1$\nas we lower the value of $m$. 
We thus cannot extract useful information in these cases.\n\\begin{figure}[t]\n\\begin{center}\n\\scalebox{1.12}[1.2]{\\includegraphics{figure.eps}}\n\\par\n\\vskip-2.0cm{}\n\\end{center}\n\\caption{\\small Numerical solutions for $w(\\rho)$ for ${k\/ \\mu}=-7$ (or\n${k\/ b}=-4.62$). It is clear that\na solution with $m=0$, but $C\\neq 0$, exists. The line $\\rho^2+w^2=1$ is the position of the singularity. }\n\\label{xsb}\n\\end{figure}\n\nThe above analysis has a profound implication. For $-12<{k\/ \\mu}<-2.97$ (or $-4.8<{k\/ b} <-3.78$),\nthe corresponding quantum states of the gauge theory are in the screening phase, while the massless fermions in the\nfundamental representation form a non-vanishing bi-fermion condensate. This contradicts the prevailing lore\nthat a bi-fermion condensate would require a confining potential between two charges.\nOn the basis of the AdS\/CFT correspondence for probe D7-branes, we thus claim to have provided the first example\nof a separation between spontaneous chiral symmetry breaking and confinement.\n\n\n\n\n\n\n\n\n\\section{Conclusion}\n\n\n\nIn this work, we have considered dilatonic deformations of the AdS geometry that are\ndual to some quantum states of the ${\\cal N}=4$ SYM theory with a non-vanishing gluon\ncondensate, $k$, as well as a homogeneous energy density $\\mu$. Varying the parameter\n$k\/\\mu$, we have identified the phases of these states by studying the\ninteraction between quarks\/anti-quarks, and also between magnetically charged objects.\nThe regime $k\/\\mu < -12$\nis electrically confining, where quarks are confined and magnetic charges are\nscreened. The opposite regime of $k\/\\mu > 12$ corresponds to the S-dual transformed\nphase, where magnetic charges are confined. 
For $-12 < k\/\\mu < 12$,\ninterestingly, both fundamental quarks and magnetic charges are screened, a phase\n we call the ``doubly screening phase''.\n\nWe then introduced the probe D7-branes and studied possible spontaneous\nchiral symmetry breaking. The ${\\cal N}=2$ fundamental hypermultiplet\narising from the D3-D7 strings possesses the classical chiral $U(1)_c$,\nwhich suffers from a quantum anomaly. However, we are working in the large $N$\nlimit of the D3-branes, where the effect of the anomaly may be ignored.\nBy studying the D7 moduli dual to the fermion mass\noperator of the hypermultiplet, we have shown that there is a non-vanishing\n bi-fermion condensate in the zero-mass limit, leading to the\nspontaneous breaking of the chiral symmetry. We demonstrated that this\nhappens even within the screening phase, with no confinement.\n\n\n\n\n\nIt is our hope that the conclusions we have drawn from analyzing these\nstates of the ${\\cal N}=4$ SYM theory\nreflect some general truths about confining gauge theories. At the least,\nthey seem to suggest that\nspontaneous chiral symmetry breaking does not necessarily require confinement.\n\n\n\n\n\\vskip 1cm \\centerline{\\large \\bf Acknowledgement} \\vskip 0.5cm\n\n\nWe would like to thank Kimyeong Lee, Jaemo Park and Soo-Jong Rey \nfor helpful discussions. We also thank the other participants of\n``AdS\/CFT and Quantum Chromodynamics\" (Oct. 28-30, 2004), Hanyang\nUniversity, Korea, for inspiring discussions.\nD.B. is supported in part by KOSEF ABRL\nR14-2003-012-01002-0 and KOSEF\nR01-2003-000-10319-0. D.B. would also like to thank KIAS for its warm hospitality\nwhile part of this work was done.\nH.-U.Y. is partly supported\nby grant No. 
R01-2003-000-10391-0 from the Basic Research Program\nof the Korea Science \\& Engineering Foundation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Proofs for Section~\\ref{DPtestingsec}}\\label{DPtestingappendix}\n\n\\subsection{Proof of Proposition~\\ref{robustthresholds}}\\label{arobustthresholds}\n\n\\restaterobustthresholds*\n\nAs in \\cite{Canonne:2019}, define \\[\\tau = \\tau(P,Q) = \\max\\left\\{ \\int \\max\\{P(x)-e^{\\epsilon}Q(x), 0\\}dx, \\int \\max\\{Q(x)-e^{\\epsilon}P(x), 0\\}dx\\right\\}\\] and assume without loss of generality that $\\tau = \\int \\max\\{P(x)-e^{\\epsilon}Q(x), 0\\}dx$. Let $0\\le\\epsilon'\\le\\epsilon$ be the smallest value such that $\\tau = \\int \\max\\{Q(x)-e^{\\epsilon'}P(x), 0\\}dx$.\nDefine $P'=\\frac{1}{1-\\tau}\\min\\{P,e^{\\epsilon} Q\\}$ and $Q'=\\frac{1}{1-\\tau}\\min\\{Q,e^{\\epsilon'} P\\}$.\n\n\\begin{lemma} \\label{getridofepsprime} For any $\\epsilon\\in[0, 1]$, and distributions $P$ and $Q$ with the same support,\n\\[{\\rm SC}_{\\operatorname{ncLLR}_{-\\epsilon'}^{\\epsilon}}(P,Q)=\\Theta({\\rm SC}_{\\operatorname{ncLLR}_{-\\epsilon}^{\\epsilon}}(P,Q)).\\]\n\\end{lemma}\n\n\\begin{proof} First note that since $\\operatorname{ncLLR}_{-\\epsilon'}^{\\epsilon}$ is an optimal test up to a constant factor \\citep{Canonne:2019}, \\[{\\rm SC}_{\\operatorname{ncLLR}_{-\\epsilon'}^{\\epsilon}}(P,Q)=O({\\rm SC}_{\\operatorname{ncLLR}_{-\\epsilon}^{\\epsilon}}(P,Q)).\\]\n\n\\cite{Canonne:2019} show that the following two inequalities are sufficient to prove that ${\\rm SC}_{\\operatorname{ncLLR}_{-\\epsilon'}^{\\epsilon}}(P,Q) = \\Theta\\left(\\frac{1}{\\tau\\epsilon+(1-\\tau)H^2(P',Q')}\\right)$:\n\\[\\mathbb{E}_{P^n}[\\operatorname{cLLR}_{-\\epsilon'}^{\\epsilon}]-\\mathbb{E}_{Q^n}[\\operatorname{cLLR}_{-\\epsilon'}^{\\epsilon}]\\ge 
\Omega(n(\tau\epsilon+(1-\tau)H^2(P',Q')))\]\nand\n\[\max\left\{\mathbb{E}_{P}\left[\left[\ln\frac{P(x)}{Q(x)}\right]_{-\epsilon'}^{\epsilon}\right]^2, \mathbb{E}_{Q}\left[\left[\ln\frac{P(x)}{Q(x)}\right]_{-\epsilon'}^{\epsilon}\right]^2\right\}\le O(\tau\epsilon+(1-\tau)H^2(P',Q')).\]\n\nWe first note that the gap between the expectations increases when we move from $\operatorname{cLLR}_{-\epsilon'}^{\epsilon}$ to $\operatorname{cLLR}_{-\epsilon}^{\epsilon}$.\nIf $P(x)<e^{-\epsilon}Q(x)$, the statistic decreases by exactly $\epsilon-\epsilon'$; let $C$ denote this set and let $-g(x)$ denote the pointwise decrease. Since $0\le\epsilon'\le\epsilon$, we have $\epsilon-\epsilon'\le e^{\epsilon}-e^{\epsilon'}$, so \n\begin{align*}\n\int_{x\in C} P(x)(-g(x))dx &= \int_{x\in C} P(x)(\epsilon-\epsilon')dx\\\n&\le \int_{x\in C} P(x)(e^{\epsilon}-e^{\epsilon'})dx\\\n&\le \int_{x\in C} Q(x)-e^{\epsilon'}P(x) dx,\n\end{align*}\nwhere the last inequality follows from the fact that $e^{\epsilon}P(x)\le Q(x)$ for all $x\in C$.\n\end{proof}\n\n\subsection{Proof of Corollary~\ref{monotonecontinuous}}\n\n\begin{proof}\nFor monotonicity, note that by the monotone likelihood ratio property, for $\theta<\theta'$ and any threshold $t$ we have $\mathbb{P}_{\theta}(x>t)<\mathbb{P}_{\theta'}(x>t)$. Suppose $\theta_1\le\theta_2\le\theta_3$ then \n\begin{align*}\n\text{\rm TV}(P_{\theta_1}, P_{\theta_2}) &= \mathbb{P}_{\theta_2}\left(x\ge \frac{A(\theta_2)-A(\theta_1)}{\theta_2-\theta_1}\right)- \mathbb{P}_{\theta_1}\left(x\ge \frac{A(\theta_2)-A(\theta_1)}{\theta_2-\theta_1}\right)\\\n&\le \mathbb{P}_{\theta_3}\left(x\ge \frac{A(\theta_2)-A(\theta_1)}{\theta_2-\theta_1}\right)- \mathbb{P}_{\theta_1}\left(x\ge \frac{A(\theta_2)-A(\theta_1)}{\theta_2-\theta_1}\right)\\\n&\le \text{\rm TV}(P_{\theta_1}, P_{\theta_3}).\n\end{align*} \nThus, monotonicity holds. 
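The monotonicity just established can be checked numerically. The sketch below is an illustration only (not part of the proof): it fixes the Poisson family, where $A(\theta)=e^{\theta}$, and `poisson_pmf` and `tv_poisson` are hypothetical helpers that truncate the infinite sum.

```python
import math

def poisson_pmf(lam, k):
    # Poisson is the exponential family with A(theta) = e^theta (so lam = e^theta);
    # computed via logs to avoid overflow for large k
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def tv_poisson(theta_a, theta_b, kmax=200):
    # total variation distance (1/2) * sum_k |p(k) - q(k)|, truncated at kmax
    la, lb = math.exp(theta_a), math.exp(theta_b)
    return 0.5 * sum(abs(poisson_pmf(la, k) - poisson_pmf(lb, k)) for k in range(kmax))

# TV(P_theta, P_{theta + h}) is nondecreasing in h, as the monotone
# likelihood ratio argument predicts
tvs = [tv_poisson(0.0, h) for h in (0.1, 0.2, 0.4, 0.8, 1.6)]
assert all(tvs[i] < tvs[i + 1] for i in range(len(tvs) - 1))
```

The same check can be repeated at any base parameter $\theta_1$; only the truncation point of the sum needs to grow with $e^{\theta_1}$.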
To prove continuity, note that \n\\begin{align*}\n\\text{KL}(P_{\\theta}\\|P_{\\theta+h}) &= \\int p_{\\theta}(x)\\ln\\frac{p_{\\theta}(x)}{p_{\\theta+h}(x)} d\\mu\\\\\n&= \\int p_{\\theta}(x)\\left(\\theta x-A(\\theta)-(\\theta+h) x+A(\\theta+h)\\right) d\\mu\\\\\n&= -h\\mathbb{E}_{\\theta}[x]+A(\\theta+h)-A(\\theta)\\\\\n&= A(\\theta+h)-A(\\theta)-hA'(\\theta)\\\\\n&\\le \\max_{\\theta'\\in[\\theta, \\theta+h]}A''(\\theta')h^2.\n\\end{align*} \nTherefore, by Pinsker's inequality, \\begin{equation}\\label{upperboundonTV}\n\\text{\\rm TV}(P_{\\theta}, P_{\\theta+h})\\le |h|\\sqrt{\\max_{\\theta'\\in[\\theta, \\theta+h]}A''(\\theta')}.\n\\end{equation} Therefore,\n\\[\\text{\\rm TV}(P_{\\theta}, P_{\\theta+h_1})-\\text{\\rm TV}(P_{\\theta}, P_{\\theta+h_2})\\le \\text{\\rm TV}(P_{\\theta+h_1}, P_{\\theta+h_2}) \\le |h_1-h_2|\\sqrt{\\max_{\\theta'\\in[\\theta+h_1, \\theta+h_2]}A''(\\theta')}.\\] Since $A''$ is continuous, there exists $\\gamma$ such that if $|h_1-h_2|\\le\\gamma$ then $\\sqrt{\\max_{\\theta'\\in[\\theta+h_1, \\theta+h_2]}A''(\\theta')}\\le 2\\sqrt{A''(\\theta+h_1)}$. Thus for any $\\rho\\ge0$, if $|h_1-h_2|\\le \\min\\left\\{\\gamma,\\frac{\\rho}{2\\sqrt{A''(\\theta+h_1)}}\\right\\}$ then $\\text{\\rm TV}(P_{\\theta}, P_{\\theta+h_1})-\\text{\\rm TV}(P_{\\theta}, P_{\\theta+h_2})\\le\\rho$. 
Therefore, $h\\to \\text{\\rm TV}(P_{\\theta}, P_{\\theta+h})$ is continuous and monotone.\n\\end{proof}\n\n\\subsection{Proof of Lemma~\\ref{concentrationexp}}\\label{aconcentrationexp}\n\n\\rconcentrationexp*\n\n\\begin{proof}[Proof of Lemma~\\ref{concentrationexp}] \nRecall that $\\int e^{\\theta x}d\\mu = e^{A(\\theta)}.$\nLet $\\lambda\\le\\kappa(\\theta)$ then \n\\begin{align*}\n\\mathbb{E}_{P_{\\theta}}\\left[e^{\\lambda x}\\right] &= \\int e^{\\lambda x}e^{\\theta x-A(\\theta)} d\\mu\\\\\n&= e^{-A(\\theta)} \\int e^{(\\lambda +\\theta) x} d\\mu\\\\\n&= e^{A(\\theta+\\lambda)-A(\\theta)}.\n\\end{align*}\nNow, \\begin{align*}\n\\mathbb{E}_{P_{\\theta}}\\left[e^{|\\lambda (x-A'(\\theta))|}\\right] &= \\mathbb{E}_{P_{\\theta}}\\left[e^{\\lambda (x-A'(\\theta))}\\textbf{1}_{\\lambda (x-A'(\\theta))\\ge0}\\right]+ \\mathbb{E}_{P_{\\theta}}\\left[e^{-\\lambda (x-A'(\\theta))}\\textbf{1}_{\\lambda (x-A'(\\theta))\\le0}\\right]\\\\\n&\\le \\mathbb{E}_{P_{\\theta}}\\left[e^{\\lambda x-\\lambda A'(\\theta)}\\right]+ \\mathbb{E}_{P_{\\theta}}\\left[e^{-\\lambda x+\\lambda A'(\\theta)}\\right]\\\\\n&= e^{A(\\theta+\\lambda)-A(\\theta)-\\lambda A'(\\theta)}+e^{A(\\theta-\\lambda)-A(\\theta)+\\lambda A'(\\theta)}\\\\\n&\\le e^{\\frac{\\lambda^2}{2}\\max_{\\theta'\\in[\\theta, \\theta+\\lambda]}A''(\\theta')}+e^{\\frac{\\lambda^2}{2}\\max_{\\theta'\\in[\\theta-\\lambda, \\theta]}A''(\\theta')}\\\\\n&\\le 2e^{\\lambda^2 A''(\\theta)}\n\\end{align*}\nwhere the last inequality follows since $\\lambda\\le\\kappa(\\theta)$. 
Therefore for any $u>0$,\n\begin{align*}\n\mathbb{P}_{P_{\theta}}(|x-A'(\theta)|\ge u)\n&= \mathbb{P}_{P_{\theta}}(e^{\lambda|x-A'(\theta)|}\ge e^{\lambda u})\\\n&\le \frac{\mathbb{E}_{P_{\theta}}\left[e^{|\lambda (x-A'(\theta))|}\right]}{e^{\lambda u}}\\\n&\le \frac{2e^{\lambda^2 A''(\theta)}}{e^{\lambda u}},\n\end{align*}\nwhere the first inequality follows from Markov's inequality.\nLet $u=2\sqrt{A''(\theta)}\sqrt{\ln(2\/\beta)}+\frac{\ln(2\/\beta)}{\kappa(\theta)}$ and $\lambda=\min\left\{\kappa(\theta), \frac{\sqrt{\ln(2\/\beta)}}{\sqrt{A''(\theta)}}\right\}$, so\n$u \ge \lambda A''(\theta)+\frac{\ln(2\/\beta)}{\lambda}$ and\n\begin{align*}\n\mathbb{P}_{P_{\theta}}\left(|x-A'(\theta)|\ge 2\sqrt{A''(\theta)}\sqrt{\ln(2\/\beta)}+\frac{\ln(2\/\beta)}{\kappa(\theta)}\right)\n&\le \mathbb{P}_{P_{\theta}}\left(|x-A'(\theta)|\ge \lambda A''(\theta)+\frac{\ln(2\/\beta)}{\lambda}\right)\\\n&\le 2e^{\lambda^2A''(\theta)-\lambda^2A''(\theta)-\ln(2\/\beta)}\\\n&= 2e^{-\ln(2\/\beta)}\\\n&=\beta.\n\end{align*}\nThe second statement follows immediately.\n\n\end{proof}\n\n\input{initialestimator}\n\n\subsection{Proof of Corollary~\ref{allforalog}}\n\n\rallforalog*\n\n\begin{proof}\nBy Theorem~\ref{nonprivupper} and Theorem~\ref{initialestthm}, there exists a constant $C>0$ such that for all $\theta_0\in\Phi$ satisfying the two conditions and all $n\ge \frac{c\zeta^2\ln(1\/\delta)}{\epsilon}$, with probability 0.8 we have \n\[|\mathcal{M}_{\fourthmoment, C}(X)-A'(\theta_0)| \le \frac{C}{2}\left( \frac{\sqrt{A''(\theta_0)}}{\sqrt{n}}+ \frac{\sqrt{A''(\theta_0)}}{n\epsilon}\sqrt{\ln\left(n\right)}\right).\] \nNow, since $\kappa(\theta)\ge \frac{1}{C}\frac{\sqrt{\ln(2\/\beta)}}{\sqrt{A''(\theta)}}$\nand $\epsilon=\Omega(\frac{\ln n}{n})$, there exists $N\in\mathbb{N}$ such that for all $n>N$, 
\[\frac{C}{2}\left(\frac{\sqrt{A''(\theta_0)}}{\sqrt{n}}+\frac{\sqrt{A''(\theta_0)}\sqrt{\ln n}}{\epsilon n}\right)\le \frac{1}{2}A''(\theta_0)\kappa(\theta_0)\] combined with Lemma~\ref{belongsinkappa} implies that $A'^{-1}(\mathcal{M}_{\fourthmoment, C}(X))\in\Phi(\theta_0)$.\nTherefore, \n\begin{align*}\n|(A')^{-1}(\mathcal{M}_{\fourthmoment, C}(X))-\theta_0| &\le \max_{t\in[\mathcal{M}_{\fourthmoment, C}(X), A'(\theta_0)]}\left((A')^{-1}\right)'(t)\,|\mathcal{M}_{\fourthmoment, C}(X)-A'(\theta_0)|\\\n&= \max_{t\in[\mathcal{M}_{\fourthmoment, C}(X), A'(\theta_0)]}\frac{1}{A''(A'^{-1}(t))}|\mathcal{M}_{\fourthmoment, C}(X)-A'(\theta_0)|\\\n&\le 2\frac{1}{A''(\theta_0)}\frac{C}{2}\left( \frac{\sqrt{A''(\theta_0)}}{\sqrt{n}}+ \frac{\sqrt{A''(\theta_0)}}{n\epsilon}\sqrt{\ln\left(n\right)}\right)\\\n&= C\left( \frac{1}{\sqrt{nA''(\theta_0)}}+ \frac{1}{n\epsilon\sqrt{A''(\theta_0)}}\sqrt{\ln\left(n\right)}\right).\n\end{align*}\n\end{proof}\n\n\subsection{Proof of Lemma~\ref{boundonmodulus}}\label{aboundonmodulus}\n\n\rboundonmodulus*\n\n\begin{proof}\n\textbf{Lower Bound:}\nRecall from the proof of Corollary~\ref{monotonecontinuous} that $\text{\rm TV}(P_{\theta}, P_{\theta+h})\le |h|\sqrt{\max_{\theta'\in[\theta, \theta+h]}A''(\theta')}$. Now, let $h$ be such that $\text{\rm TV}(P_{\theta}, P_{\theta+h})=\beta$, so \[|h|\ge\frac{\beta}{\sqrt{\max_{\theta'\in[\theta, \theta+h]}A''(\theta')}}.\] By assumption, \n$\kappa(\theta)\ge J^{-1}_{\text{\rm TV},\theta}(\beta)\ge |h|$\n and $\max_{\theta'\in[\theta, \theta+h]}A''(\theta')\le2A''(\theta)$. Therefore $|h|\ge \frac{\beta}{\sqrt{2A''(\theta)}}$, \nso $J_{\text{\rm TV}, \theta}^{-1}(\beta)~\ge~\frac{\beta}{\sqrt{2A''(\theta)}}$. 
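Both directions of this sandwich can be checked numerically for the unit-variance Gaussian family, where $A(\theta)=\theta^2 \cdot \frac{1}{2}$ so $A''\equiv 1$. The sketch below is illustrative only; it uses the exact formula $\text{TV}(N(\theta,1), N(\theta+h,1)) = 2\Phi(|h|\,/\,2)-1$.

```python
import math

def tv_gauss(h):
    # exact TV between N(theta, 1) and N(theta + h, 1); here A(theta) = theta^2 / 2,
    # so A''(theta) = 1 for every theta
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return 2.0 * Phi(abs(h) / 2.0) - 1.0

for h in (0.05, 0.1, 0.5, 1.0):
    beta = tv_gauss(h)
    # lower-bound direction of the lemma: |h| >= beta / sqrt(2 * A''(theta))
    assert h >= beta / math.sqrt(2.0)
    # Pinsker-style upper bound from the continuity proof: beta <= |h| * sqrt(A''(theta))
    assert beta <= h
```

For small $h$ the exact TV behaves like $h\,/\,\sqrt{2\pi}\approx 0.399\,h$, which indeed sits between the two bounds $\beta\,/\,\sqrt{2}$ and $h$.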
\n\n\noindent\textbf{Upper Bound:}\nSince $\beta<\frac{9}{128\zeta}$, there exists a constant $C$ such that $\frac{16\zeta}{9}\le C\le \frac{1}{8\beta}$. Let $h=\frac{C\beta}{\sqrt{A''(\theta)}}$. If $h\ge\kappa(\theta)$ then we are done since $J_{\text{\rm TV}, \theta}^{-1}(\beta)\le\kappa(\theta)\le h$, so assume that $h\le\kappa(\theta)$.\nIt suffices to prove that $\text{\rm TV}(P_{\theta}, P_{\theta+h})\ge\beta$, since then we are done by the monotonicity and continuity of $h\to \text{\rm TV}(P_{\theta}, P_{\theta+h})$. By the Paley-Zygmund inequality, \[\mathbb{P}_{P_{\theta}}\left(X\ge A'(\theta)+\frac{1}{2}\sqrt{A''(\theta)}\right)\ge \left(1-\frac{1}{4}\right)^2\frac{(\mathbb{E}(X-\mathbb{E}(X))^2)^2}{\mathbb{E}[(X-\mathbb{E}(X))^4]}\ge \frac{9}{16\zeta}.\] Then,\n\begin{align*}\n\text{\rm TV}(P_{\theta}, P_{\theta+h}) &= \int_{\frac{A(\theta+h)-A(\theta)}{h}}^{\infty} \left(e^{(\theta+h)x-A(\theta+h)}-e^{\theta x-A(\theta)}\right)d\mu\\\n&\ge \int_{\frac{A(\theta+h)-A(\theta)}{h}+\frac{1}{4}\sqrt{A''(\theta)}}^{\infty} \left(e^{(\theta+h)x-A(\theta+h)}-e^{\theta x-A(\theta)}\right)d\mu\\\n&= \int_{\frac{A(\theta+h)-A(\theta)}{h}+\frac{1}{4}\sqrt{A''(\theta)}}^{\infty} \left(e^{hx-(A(\theta+h)-A(\theta))}-1\right)P_{\theta}(x)d\mu\\\n&\ge \left(e^{\frac{1}{4}h\sqrt{A''(\theta)}}-1\right) \mathbb{P}_{P_{\theta}}\left(X\ge \frac{A(\theta+h)-A(\theta)}{h}+\frac{1}{4}\sqrt{A''(\theta)}\right).\n\end{align*}\nNow, \n\begin{align*}\n\frac{A(\theta+h)-A(\theta)}{h}+\frac{1}{4}\sqrt{A''(\theta)}&\le A'(\theta)+h\max_{\theta'\in[\theta, \theta+h]}A''(\theta')+\frac{1}{4}\sqrt{A''(\theta)}\\\n&\le A'(\theta)+\frac{1}{8\sqrt{A''(\theta)}}2A''(\theta)+\frac{1}{4}\sqrt{A''(\theta)}\\\n&\le A'(\theta)+\frac{1}{2}\sqrt{A''(\theta)},\n\end{align*}\nwhere the second inequality holds since $h\le
\frac{1}{8}\frac{1}{\sqrt{A''(\theta)}}$.\nThus,\n\begin{align*}\n\text{\rm TV}(P_{\theta}, P_{\theta+h})\n&\ge \left(e^{\frac{1}{4}h\sqrt{A''(\theta)}}-1\right) \mathbb{P}_{P_{\theta}}\left(X\ge A'(\theta)+\frac{1}{2}\sqrt{A''(\theta)}\right)\\\n&\ge \frac{1}{4}\frac{9}{4\zeta} h\sqrt{A''(\theta)}\\\n&\ge \beta,\n\end{align*}\nwhere the second inequality follows from the fact that $e^x-1\ge x$ for all $x>0$, and the final inequality follows from the definition of $h$ and the assumptions on $C$.\n\end{proof}\n\n\subsection{Proof of Lemma~\ref{highprivexptest}}\label{ahighprivexptest}\n\n\rhighprivexptest*\n\n\begin{proof}[Proof of Lemma~\ref{highprivexptest}] \nBy assumption there exist constants $A_1$ and $A_2$ such that $A_1\frac{\ln n}{n}\le\epsilon\le A_2\frac{1}{\sqrt{n}}$.\nSet $N=\frac{8\sqrt{2}}{\epsilon_n}\max\{\frac{1}{J_{TV,\theta_0}(\kappa(\theta_0))}, \frac{128\zeta}{9}\}$; then $n\ge N$ implies\n$\frac{8\sqrt{2}}{\epsilon n}\le \frac{9}{128\zeta}$ and $\kappa(\theta_0)\ge J_{\text{\rm TV},\theta_0}^{-1}(\frac{8\sqrt{2}}{\epsilon n})$.\nCombined with the first assumption and Lemma~\ref{boundonmodulus}, this implies that there exist constants $C_1$ and $C_2$ (depending only on $\zeta$) such that for all $\beta\le\frac{8\sqrt{2}}{\epsilon n}$, \[J_{\text{\rm TV}, \theta_0}^{-1}(\beta)\in\left[\frac{C_1\beta}{ \sqrt{A''(\theta_0)}}, \frac{C_2\beta}{\sqrt{A''(\theta_0)}}\right].\] Let $C=\frac{C_2 8\sqrt{2}}{C_1}$. 
Then \\[|\\theta_0-\\theta_1|\\ge CJ_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{1}{\\epsilon n}\\right) \\ge \\frac{C_28\\sqrt{2}}{C_1}\\frac{C_1}{\\epsilon n\\sqrt{A''(\\theta_0)}}\\ge J_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{8\\sqrt{2}}{\\epsilon n}\\right).\\]\nThus, $\\text{\\rm TV}(P_{\\theta_0}, P_{\\theta_1})\\ge \\frac{8\\sqrt{2}}{\\epsilon n}$.\n\n\nNext, assume that $|\\theta_0-\\theta_1|=CJ_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{1}{\\epsilon_nn}\\right)$. Assume without loss of generality that $\\theta_0\\le\\theta_1$. Note that by Markov's inequality, it is sufficient to show that \n\\begin{align*}\n&\\mathbb{E}_{\\theta_1}\\left[f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\right]-\\mathbb{E}_{\\theta_0}\\left[f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\right] \\\\\n&\\hspace{1in}\\ge \\frac{1}{4}\\min\\left\\{\\sqrt{\\text{var}_{\\theta_0}\\left(f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\right)}, \\sqrt{\\text{var}_{\\theta_1}\\left(f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\right)}\\right\\}\n\\end{align*}\nLet us first analyze the gap in expectations in the test statistic $f_{\\hat{t}}(\\cdot)+\\mathrm{Lap}(\\frac{1}{\\epsilon n})$.\nNote, \\[\\mathbb{P}_{\\theta_1}\\left[T(X)>\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}\\right]-\\mathbb{P}_{\\theta_0}\\left[T(X)>\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}\\right] = TV(P_{\\theta_0}, P_{\\theta_1}).\\] Therefore, \n\\begin{align*}\n&\\mathbb{E}_{\\theta_1}\\left[f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\right]-\\mathbb{E}_{\\theta_0}\\left[f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\right]\\\\\n&\\hspace{1in}= TV(P_{\\theta_0}, P_{\\theta_1})+\\mathbb{P}_{\\theta_1}[T(x)\\in I]-\\mathbb{P}_{\\theta_0}[T(x)\\in I],\n\\end{align*} where $I$ has endpoints $\\hat{t}$ and 
$\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}$. Assume, for ease of notation, that $\\hat{t}\\ge \\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}$ so $\\hat{t}=\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}+\\Gamma$ where \n\\begin{align*}\n\\Gamma &= \\hat{t}-\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}\\\\\n&= \\hat{t}-A'(\\theta_0)+A'(\\theta_0)-\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}\\\\\n&\\le \\textcolor{black}{b\\left(\\sqrt{\\frac{A''(\\theta_0)}{n}}+\\frac{\\sqrt{A''(\\theta_0)\\ln\\left(n\\right)}}{\\epsilon n}\\right)}+\\max_{\\theta'\\in[\\theta_0, \\theta_1]}A''(\\theta')|\\theta_0-\\theta_1|.\n\\end{align*}\nNow, \n\\begin{align*}\n\\mathbb{P}_{\\theta_1}[T(x)\\in I]-\\mathbb{P}_{\\theta_0}[T(x)\\in I] &= \\int_{a\\in I} e^{a\\theta_1-A(\\theta_1)} d\\mu - \\int_{a\\in I} e^{a\\theta_0-A(\\theta_0)} d\\mu\\\\\n&= \\int_{a\\in I} e^{a(\\theta_1-\\theta_0)+A(\\theta_0)-A(\\theta_1)}e^{a\\theta_0-A(\\theta_0)} d\\mu - \\int_{a\\in I} e^{a\\theta_0-A(\\theta_0)} d\\mu\\\\\n&\\le \\max_{a\\in I}(e^{a(\\theta_1-\\theta_0)+A(\\theta_0)-A(\\theta_1)}-1)\\mathbb{P}_{\\theta_0}[T(X)\\in I]\\\\\n&\\le \\max_{a\\in I}(e^{a(\\theta_1-\\theta_0)+A(\\theta_0)-A(\\theta_1)}-1)\\\\\n&= e^{\\hat{t}(\\theta_1-\\theta_0)+A(\\theta_0)-A(\\theta_1)}-1\\\\\n&= e^{\\Gamma(\\theta_1-\\theta_0)}-1. 
\n\end{align*}\nNow $|\theta_0-\theta_1|=CJ_{\text{\rm TV}, \theta_0}^{-1}\left(\frac{1}{\epsilon_nn}\right)\le\frac{C_2C}{\epsilon n \sqrt{A''(\theta_0)}}$ and thus there exists a constant $C_3$ (depending on $\zeta, C_1, C_2, A_1$ and $A_2$) such that\n\begin{align*}\n\mathbb{P}_{\theta_1}[T(x)\in I]-\mathbb{P}_{\theta_0}[T(x)\in I] \n&\le e^{\left[\textcolor{black}{b\left(\sqrt{\frac{A''(\theta_0)}{n}}+\frac{\sqrt{A''(\theta_0)\ln\left(n\right)}}{\epsilon n}\right)}+\max_{\theta'\in[\theta_0, \theta_1]}A''(\theta')|\theta_0-\theta_1|\right](\theta_1-\theta_0)}-1\\\n&\le e^{\frac{bC_2C}{\epsilon n^{1.5}}+\frac{bC_2C\sqrt{\ln n}}{\epsilon^2 n^2}+\frac{2C_2^2C^2}{\epsilon^2 n^2}}-1\\\n&\le e^{\frac{C_3\sqrt{\ln n}}{\epsilon^2 n^{2}}}-1,\n\end{align*}\nwhere the second inequality follows from $\epsilon=O(1\/\sqrt{n})$, which implies that $\epsilon n^{1.5}=\Omega(\epsilon^2n^2)$. Now $\epsilon=\Omega(\ln n\/n)$ implies $e^{\frac{C_3\sqrt{\ln n}}{\epsilon^2 n^{2}}}-1=o(\frac{1}{\epsilon n})$, thus since $\text{\rm TV}(P_{\theta_0}, P_{\theta_1})\ge \frac{8\sqrt{2}}{\epsilon n}$, there exists $N$ (depending on $C, b, A_1$ and $A_2$) such that if $n\ge \max\{N, \frac{8\sqrt{2}}{\epsilon_nJ_{TV,\theta_0}(\kappa(\theta_0))}, \frac{8\sqrt{2}}{\epsilon_n}\frac{128\zeta}{9}\}$ then \[\mathbb{E}_{\theta_1}\left[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right]-\mathbb{E}_{\theta_0}\left[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right] \ge \frac{1}{2}\text{\rm TV}\left(P_{\theta_0}, P_{\theta_1}\right).\]\nAlso, \[\min\left\{\text{var}_{\theta_0}\left(f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right), \text{var}_{\theta_1}\left(f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right)\right\}\le \frac{1}{n}+\frac{1}{\epsilon^2n^2}\le
2\left(\frac{1}{\epsilon^2n^2}\right),\] where the second inequality holds since $\epsilon\le1\/\sqrt{n}$. \nThus, since $\text{\rm TV}(P_{\theta_0}, P_{\theta_1})\ge\frac{8\sqrt{2}}{\epsilon n}$, we have \[\sqrt{\text{var}_{\theta_0}\left(f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right)}\le \frac{1}{4}\left( \mathbb{E}_{\theta_1}\left[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right]-\mathbb{E}_{\theta_0}\left[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right]\right),\] so the test distinguishes between $P_{\theta_0}$ and $P_{\theta_1}$.\n\nNext, assume that $|\theta_0-\theta_1| \ge CJ_{\text{\rm TV}, \theta_0}^{-1}\left(\frac{1}{\epsilon_nn}\right)$. Let $\theta_1'$ be such that $|\theta_0-\theta_1'|= CJ_{\text{\rm TV}, \theta_0}^{-1}\left(\frac{1}{\epsilon_nn}\right)$ and let $n>\max\{N, \frac{8\sqrt{2}}{\epsilon_nJ_{TV,\theta_0}(\kappa(\theta_0))}, \frac{8\sqrt{2}}{\epsilon_n}\frac{128\zeta}{9}\}$. 
Then by Lemma~\ref{monotonelikelihood},\n\begin{align*}\n&\mathbb{E}_{\theta_1}\Big[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\Big]-\mathbb{E}_{\theta_0}\Big[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\Big]\\\n&\hspace{1in}\ge \mathbb{E}_{\theta_1'}\Big[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\Big]-\mathbb{E}_{\theta_0}\Big[f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\Big]\\\n&\hspace{1in}\ge \frac{1}{2}\text{\rm TV}\left(P_{\theta_0}, P_{\theta_1'}\right)\n\end{align*}\nand as above \[\min\left\{\text{var}_{\theta_0}\left(f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right), \text{var}_{\theta_1}\left(f_{\hat{t}}(X)+\mathrm{Lap}\left(\frac{1}{\epsilon n}\right)\right)\right\}\le \frac{1}{n}+\frac{1}{\epsilon^2n^2},\] so we are done.\n\end{proof}\n\n\n\subsection{Proof of Proposition~\ref{lowprivmodcontinuity}}\label{alowprivmodcontinuity}\n\n\input{lowprivacynoprivacy}\n\n\subsection{Proof of Proposition~\ref{mainexp}}\n\n\rmainexp*\n\n\begin{proof} \n\nThe proposition follows from a combination of Corollary~\ref{maincorlowpriv} and Corollary~\ref{allforalog}. Let $N, k, C_1, D_1$ and $D_2$ be as in Corollary~\ref{maincorlowpriv}. Note that since $\frac{D_2}{A''(\theta_0)(\kappa(\theta_0))^2}\le D_2C^2=O(1)$ we can set $N$ large enough that $N\ge \frac{D_2}{A''(\theta_0)(\kappa(\theta_0))^2}$. 
By Corollary~\ref{allforalog}, there exists a constant $C_2$ such that\n\[|A'^{-1}(\mathcal{M}_{\fourthmoment, C}(X))-\theta_0|\le C_2\left(\frac{1}{\sqrt{nA''(\theta_0)}}+\frac{1}{n\epsilon\sqrt{A''(\theta_0)}}\sqrt{\ln(n)}\right).\]\nAgain since $\kappa(\theta_0)\sqrt{A''(\theta_0)}\ge \frac{1}{C}$, there exist constants $N_1$ and $D$ such that for all $n>N_1$, if $\frac{\sqrt{\ln n}}{\sqrt{n}}\le D A''(\theta_0)$ then \[C_2\left(\frac{1}{\sqrt{nA''(\theta_0)}}+\frac{1}{n\epsilon\sqrt{A''(\theta_0)}}\sqrt{\ln(n)}\right)\le \min\left\{\kappa(\theta_0), D_1\epsilon\sqrt{nA''(\theta_0)}\right\}.\] The condition $\frac{\sqrt{\ln n}}{\sqrt{n}}\le D A''(\theta_0)$ is implied by the assumption that $n\ge \frac{2\ln(1\/(DA''(\theta_0))^2)}{(DA''(\theta_0))^2}$.\nThus, the estimator from Corollary~\ref{allforalog} satisfies the requirements of Corollary~\ref{maincorlowpriv} and so we are done.\n\end{proof}\n\n\subsection{Proof of Lemma~\ref{mainlemmaexp}}\label{amainlemmaexp}\n\n\rmainlemmaexp*\n\n\begin{proof}[Proof of Lemma~\ref{mainlemmaexp}] Let $\tilde{\theta} = A'^{-1}(\hat{t})$ and recall that for ease of notation we let $\alpha_n(\theta) = \frac{1}{\sqrt{nA''(\theta)}}$. \nSince $|\theta_0-A'^{-1}(\hat{t})|\le\kappa(\theta_0)$, we have $\alpha_n(\theta_0)\in[\frac{1}{2}\tilde{\alpha}, 2\tilde{\alpha}]$. Thus by Proposition~\ref{lowprivmodcontinuity}, there exist constants $k$, $C_1$, and $C_2$ (depending only on $\zeta$) such that $\omega_n(P_{\theta_0}, \mathcal{F}_{\epsilon}^{\text{\rm test}})\in[C_1\alpha_n(\theta_0), C_2\alpha_n(\theta_0)]$. 
Further, by Lemma~\ref{threshrobust}, there exists a constant $C_3$ (depending on $C_1$ and $C_2$) such that if $a, b\in[C_1\epsilon\/4, 4C_2\epsilon]$ then \[{\rm SC}_{\operatorname{ncLLR}_a^b}(P,Q)\le C_3\cdot{\rm SC}_{\epsilon}(P,Q).\] Now, set $C=\sqrt{C_3}\frac{C_2}{C_1}$ and assume first that $|\theta_0-\theta_1|=C\cdot\omega_n(P_{\theta_0}, \mathcal{F}_{\epsilon}^{\text{\rm test}})$. Then \n\begin{align*}\n|\theta_0-\theta_1|&= C\omega_n(P_{\theta_0}, \mathcal{F}_{\epsilon}^{\text{\rm test}})\\\n&\ge\frac{CC_1}{\sqrt{nA''(\theta_0)}}\\\n&=\frac{C_2}{\sqrt{(C_2\/C_1C)^2nA''(\theta_0)}}\\\n&\ge \omega_{(C_2\/C_1C)^2n, {\rm SC}_{\epsilon}}(P_{\theta_0}).\n\end{align*}\nTherefore, ${\rm SC}_{\epsilon}(P_{\theta_0}, P_{\theta_1})\le (\frac{C_2}{C_1C})^2n$. Thus, if $a,b\in[C_1\epsilon\/4, 4C_2\epsilon]$ then \begin{equation}\label{equalSC}\n{\rm SC}_{\operatorname{ncLLR}_a^b}(P_{\theta_0}, P_{\theta_1})\le C_3\left(\frac{C_2}{C_1C}\right)^2n\le n.\n\end{equation}\n\nNow, recall that $P_{\theta}(x)=e^{\theta x-A(\theta)}$ and $|\theta_0-\theta_1|=C\cdot \omega_n(P_{\theta_0}, \mathcal{F}_{\epsilon}^{\text{\rm test}})$ so\n\begin{align*}\n(\theta_1-\theta_0)[x-\hat{t}]_{-\epsilon\/C\tilde{\alpha}}^{\epsilon\/C\tilde{\alpha}} &= \left[(\theta_1-\theta_0)(x-\hat{t})\right]_{-\epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}}^{\epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}}\\\n&=\left[\ln\frac{P_{\theta_1}(x)}{P_{\theta_0}(x)}+(A(\theta_1)-A(\theta_0))- (\theta_1-\theta_0)\hat{t}\right]_{-\epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}}^{\epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}}\\\n&= \left[\ln\frac{P_{\theta_1}(x)}{P_{\theta_0}(x)}\right]_{-\epsilon\frac{ \omega_{n,{\rm 
SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}-\Gamma}^{\epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}-\Gamma}+\Gamma,\n\end{align*}\nwhere \n\begin{align*}\n\Gamma &= A(\theta_1)-A(\theta_0)-(\theta_1-\theta_0)\hat{t} \\\n&\le A(\theta_1)-A(\theta_0)-(\theta_1-\theta_0)A'(\theta_0)+(\theta_1-\theta_0)(A'(\theta_0)-\hat{t})\\\n&\le \left(\max_{\theta'\in[\theta_0, \theta_1]}A''(\theta')\right)(\theta_1-\theta_0)^2+|\theta_1-\theta_0||A'(\theta_0)-\hat{t}|.\n\end{align*}\nNow, if we let $D_2=C_2^2C^2$ then $n\ge \frac{D_2}{A''(\theta_0)(\kappa(\theta_0))^2}$ implies that \[|\theta_0-\theta_1|=C\omega_{n, {\rm SC}_{\epsilon}}(\theta_0)\le C\frac{C_2}{\sqrt{n A''(\theta_0)}}= \frac{\sqrt{D_2}}{\sqrt{nA''(\theta_0)}}\le\kappa(\theta_0).\] \nTherefore, \[\Gamma\le 2A''(\theta_0)(\theta_1-\theta_0)^2+|\theta_1-\theta_0||A'(\theta_0)-\hat{t}|\le 2C^2C_2^2\frac{1}{n}+C_2C\alpha_n(\theta_0)|A'(\theta_0)-\hat{t}|.\] \nThus, noting that $1\/n\ll\epsilon$ and setting $b=\frac{C_1}{8CC_2}$, if $|\theta_0-A'^{-1}(\hat{t})|\le \frac{b\epsilon}{\alpha_n(\theta_0)}=b\epsilon\sqrt{nA''(\theta_0)}$, then there exists $N$ such that for all $n\ge \max\{N, \frac{D_2}{A''(\theta_0)(\kappa(\theta_0))^2}\}$, $\Gamma\le \frac{C_1\epsilon}{4}$.\nTherefore, for all $n\ge N$, the truncation parameters satisfy $\epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}-\Gamma, \epsilon\frac{ \omega_{n,{\rm SC}_{\epsilon}}(\theta_0)}{\tilde{\alpha}}+\Gamma\in[C_1\epsilon\/4, 4C_2\epsilon]$. \nSo, by eqn~\eqref{equalSC}, ${\rm SC}_{\operatorname{ncLLR}_{-a}^b}(P_{\theta_0}, P_{\theta_1})\le n$, which implies that $n$ samples are sufficient for the test statistic $[x-\hat{t}]_{-\epsilon\/C\tilde{\alpha}}^{\epsilon\/C\tilde{\alpha}}$ to distinguish between $P_{\theta_0}$ and $P_{\theta_1}$. 
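The behaviour of such a clamped, noise-perturbed test statistic can be illustrated numerically. The sketch below is not the construction analysed in the proof: it fixes the unit-variance Gaussian family, illustrative constants `n`, `c`, and `eps`, a midpoint threshold `tau` estimated by Monte Carlo, and Laplace noise scaled to the sensitivity $2c$ of the clamped sum.

```python
import random

def clamped_stat(xs, t_hat, c):
    # sum over samples of the per-sample clamped statistic [x - t_hat]_{-c}^{c}
    return sum(max(-c, min(c, x - t_hat)) for x in xs)

def laplace(scale):
    # Laplace(scale) noise as a difference of two independent exponentials
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

random.seed(0)
n, c, eps = 200, 0.5, 1.0
theta0, theta1 = 0.0, 1.0        # N(theta, 1), so A'(theta) = theta
t_hat = (theta0 + theta1) / 2.0  # stand-in for the estimated threshold

# midpoint-of-expectations threshold tau, estimated by Monte Carlo
m0 = sum(clamped_stat([random.gauss(theta0, 1) for _ in range(n)], t_hat, c)
         for _ in range(50)) / 50.0
m1 = sum(clamped_stat([random.gauss(theta1, 1) for _ in range(n)], t_hat, c)
         for _ in range(50)) / 50.0
tau = (m0 + m1) / 2.0

# fresh noisy statistics: the sum has sensitivity 2c, so add Laplace(2c/eps) noise
s0 = clamped_stat([random.gauss(theta0, 1) for _ in range(n)], t_hat, c) + laplace(2 * c / eps)
s1 = clamped_stat([random.gauss(theta1, 1) for _ in range(n)], t_hat, c) + laplace(2 * c / eps)
assert s0 < tau < s1
```

With these illustrative values the per-sample expectation gap dwarfs both the sampling noise and the Laplace noise, so thresholding at `tau` separates the two hypotheses by a wide margin.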
The threshold $\tau$ can be chosen as the midpoint between $\mathbb{E}_{X\sim P_{\theta_0}^n}[\hat{f}_{\tilde{\alpha}}(X)]$ and $\mathbb{E}_{X\sim P_{\theta_1}^n}[\hat{f}_{\tilde{\alpha}}(X)]$.\n\nNow, assume that $|\theta_0-\theta_1|\ge C\cdot\omega_n(P_{\theta_0}, \mathcal{F}_{\epsilon}^{\text{\rm test}})$. Let $\theta_1'$ be such that $\theta_0<\theta_1'<\theta_1$ and $|\theta_0-\theta_1'|=C\cdot\omega_n(P_{\theta_0}, \mathcal{F}_{\epsilon}^{\text{\rm test}})$. Then, by the previous argument, there exists a threshold $\tau$ such that $n$ samples are sufficient for the test statistic $[x-\hat{t}]_{-\epsilon\/C\tilde{\alpha}}^{\epsilon\/C\tilde{\alpha}}$ to distinguish between $P_{\theta_0}$ and $P_{\theta_1'}$. Noting that this test statistic is monotone in $x$, we have by Lemma~\ref{monotonelikelihood} (the fact that $P_{\theta_1}$ stochastically dominates $P_{\theta_1'}$) that this test statistic also distinguishes between $P_{\theta_0}$ and $P_{\theta_1}$ with $n$ samples. Additionally, since $\mathbb{E}_{\theta_1}[\operatorname{ncLLR}(X)]\ge\mathbb{E}_{\theta_1'}[\operatorname{ncLLR}(X)]\ge \mathbb{E}_{\theta_0}[\operatorname{ncLLR}(X)] $, we maintain that $|\mathbb{E}_{X\sim P_{\theta_0}^n}[\hat{f}_{\tilde{\alpha}}(X)]-\tau|\le |\mathbb{E}_{X\sim P_{\theta_1}^n}[\hat{f}_{\tilde{\alpha}}(X)]-\tau|$.\n\end{proof}\n\n\section{Proofs for Section~\ref{tailrates}}\n\n\subsection{Proof of Theorem~\ref{maintailtheorem}}\n\n\rmaintailtheorem*\n\n\begin{proof}[Proof of Theorem~\ref{maintailtheorem}] We can think of Algorithm~\ref{algo:BS} as, at each step, shrinking the distance between $t_{\min}$ and $t_{\max}$ to $2\/3$ of its previous value and concluding that the true value $t^*$ lies between $t_{\min}$ and $t_{\max}$. 
Thus, in order to show that $\ierror{\hat{\theta}}{N}{f}\le \MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}$, it suffices to show that it is possible to run for $k^*(n)=\left\lceil\log_{\frac{3}{2}}\left (\frac{|t_1-t_0|}{\MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}}\right)\right\rceil$ iterations with $N=n\cdot \lceil \log k^*(n)\rceil \cdot k^*(n)$ samples. In order to make the correct decision with error probability at most $1\/3k^*(n)$, it suffices for the last iteration to use $n\cdot \lceil \log k^*(n)\rceil$ samples. Since the hypothesis test at the last step has the largest sample size, $n\cdot\lceil \log k^*(n)\rceil\cdot k^*(n)$ samples suffice to run $k^*(n)$ rounds.\n\end{proof}\n\section{One-Parameter Exponential Families: Characterising the Optimal Local Estimation Rate and Uniform Achievability}\label{expfams}\n\nWe now turn our attention to an example where uniform achievability is possible under differential privacy: one-parameter exponential families. In this section we will characterize the optimal local estimation rate for estimating the parameter of a one-parameter exponential family, then show that this optimal local estimation rate is uniformly achievable under differential privacy. In particular, we will see how the results of Section~\ref{DPtestingsec} on the form of the optimal DP simple hypothesis test, and its sample complexity, inform the design of the locally minimax estimator.\n\nOne-parameter exponential families are a broad class of families of distributions that encompasses many natural distributions. 
Examples of exponential families include Poisson distributions, Binomial distributions, normal distributions with known variance and normal distributions with known mean.\nFormally, a \\emph{one parameter exponential family}, $\\mathcal{P_{\\mu}}=\\{P_{\\theta}\\}$, is determined by a base measure $\\mu$ such that for each $\\theta$, the distribution $P_{\\theta}$ has density \\[p_{\\theta}(x) := e^{\\theta x-A(\\theta)} \\quad \\text{(relative to $\\mu$)},\\] where $A(\\theta) = \\ln\\int e^{\\theta x} d\\mu(x)$ is the normalisation.\\footnote{It is common to see a sufficient statistic, $T(x)$, included in the definition of an exponential family so that $p_{\\theta}(x) := e^{\\theta T(x)-A(\\theta)}$. Defined in this way, an exponential family can be defined over any space, not simply $\\mathbb{R}$. However, for the purpose of estimating $\\theta$, the two definitions are equivalent up to a change in the base measure, $\\mu$.} Note that the mean and the variance have the following simple formulations: $\\mathbb{E}_{\\theta}[x]=A'(\\theta)$ and $\\text{var}_{\\theta}(x)=A''(\\theta)$. The formula for $p_{\\theta}$ does not give a well defined distribution for values of $\\theta$ for which $A(\\theta)=\\infty$, so each measure $\\mu$ has an associated range which we will denote $\\Phi_{\\mu}=\\{\\theta\\;|\\; A(\\theta)<\\infty\\}$. When $\\mu$ is clear from context, we will drop the dependence on $\\mu$ and refer to $\\Phi_{\\mu}$ simply as $\\Phi$. \n\n Let us begin with the characterization of the optimal local estimation rate. 
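As a quick numerical sanity check of the identities $\mathbb{E}_{\theta}[x]=A'(\theta)$ and $\text{var}_{\theta}(x)=A''(\theta)$, the following sketch (illustrative only) uses the Poisson family, for which $A(\theta)=e^{\theta}$ and hence $A'(\theta)=A''(\theta)=e^{\theta}$; the choice $\theta=0.7$ and the truncation of the sum are assumptions made for the example.

```python
import math

# Poisson as a one-parameter exponential family: base measure mu({k}) = 1/k!,
# so A(theta) = ln sum_k e^{theta k} / k! = e^theta and A'(theta) = A''(theta) = e^theta
theta = 0.7
lam = math.exp(theta)  # = A'(theta) = A''(theta)

def p_theta(k):
    # density e^{theta k - A(theta)} against mu, i.e. the Poisson(lam) pmf
    return math.exp(theta * k - lam - math.lgamma(k + 1))

ks = range(80)  # the tail beyond k = 80 is negligible for lam ~ 2
mean = sum(k * p_theta(k) for k in ks)
var = sum((k - mean) ** 2 * p_theta(k) for k in ks)
assert abs(mean - lam) < 1e-9  # E_theta[x] = A'(theta)
assert abs(var - lam) < 1e-9   # var_theta(x) = A''(theta)
```

The same check works at any $\theta\in\Phi_{\mu}$, with the truncation point grown in proportion to $e^{\theta}$.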
The formal version of this theorem is a combination of Corollary~\\ref{smallepslowerexp} and Corollary~\\ref{largeepslowerexp}, which characterize the optimal local estimation rate separately for the high and low privacy regimes.\n\n\\begin{theorem}[Characterization of Optimal Local Estimation Rate---Simplified from Corollaries \\ref{smallepslowerexp} and \\ref{largeepslowerexp}]\\label{thm:informalrate}\nFor all exponential families (i.e., measures $\\mu$), $\\delta>0$, all sequences of privacy parameters $\\epsilon_n\\in[0,1]$, $n\\in\\mathbb{N}$, and $\\theta_0\\in\\Phi_{\\mu}$,\n \\[\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}}) = \\Theta_{\\mu}\\left( \\frac{1}{\\sqrt{A''(\\theta_0)}\\min\\{n\\epsilon_n, \\sqrt{n}\\}} \\right) ,\\]\n where the $\\Theta_{\\mu}$ notation hides constants depending only on $\\mu$ (but not $\\theta_0$).\n\\end{theorem}\n\nThis convergence result is uniform in a fairly strong sense. Given a family defined by a measure $\\mu$, there exist constants $C_1,C_2$ such that for all sequences $\\epsilon_n\\in[0,1]$, sufficiently large $n$, and $\\theta_0\\in\\Phi_{\\mu}$, $$\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}}) \\in \\left[\\frac{C_1}{\\sqrt{A''(\\theta_0)}\\min\\{n\\epsilon_n, \\sqrt{n}\\}}, \\frac{C_2}{\\sqrt{A''(\\theta_0)}\\min\\{n\\epsilon_n, \\sqrt{n}\\}}\\right].$$\nThe formal statements of this theorem are slightly stronger than Theorem~\\ref{thm:informalrate} in that we show that the constants $C_1$ and $C_2$ depend only on a few properties of $\\mu$. We will discuss these properties later in this section. The non-private local estimation rate for exponential families is $\\omega_n(P_{\\theta_0}, \\mathcal{F}^{\\text{\\rm test}})~=~\\Theta_{\\mu}\\left( \\frac{1}{\\sqrt{A''(\\theta_0)n}} \\right)$, so we can see that in the low privacy regime, privacy comes for free. 
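To make the two regimes concrete, the following numerical sketch (ours, not part of the formal development; the helper name \texttt{local\_rate} is ours) evaluates the quantity $\\frac{1}{\\sqrt{A''(\\theta_0)}\\min\\{n\\epsilon_n, \\sqrt{n}\\}}$ and exhibits the phase transition at $\\epsilon_n \\approx 1\/\\sqrt{n}$:

```python
import math

def local_rate(n, eps, a2):
    """Evaluate 1 / (sqrt(A''(theta_0)) * min(n*eps, sqrt(n))), up to constants."""
    return 1.0 / (math.sqrt(a2) * min(n * eps, math.sqrt(n)))

n = 10_000
a2 = 4.0  # A''(theta_0), the variance of P_{theta_0}

# High privacy (eps << 1/sqrt(n)): the n*eps term attains the minimum.
high = local_rate(n, 1e-3, a2)   # min(10, 100) = 10, so the value is 1/(2*10)

# Low privacy (eps >> 1/sqrt(n)): the sqrt(n) term attains the minimum,
# matching the non-private rate 1/sqrt(A''(theta_0) * n).
low = local_rate(n, 0.5, a2)     # min(5000, 100) = 100, so the value is 1/(2*100)
```

For $\\epsilon_n \\gg 1\/\\sqrt{n}$ the value coincides with the non-private rate $1\/\\sqrt{A''(\\theta_0)n}$, which is the sense in which privacy comes for free in the low privacy regime.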
In the high privacy regime, this characterisation matches the $L_1$-information at $P$ as expected from Theorem~\\ref{smallepslower}.\n\nUnder some mild conditions, this optimal local estimation rate is actually uniformly achievable. That is, there exists an algorithm that achieves the optimal local estimation rate. The following is an informal statement of Proposition~\\ref{prop:highprivupper} and Proposition~\\ref{mainexp}, which contain the formal uniform achievability statements in the high and low privacy regimes separately.\n\n\\begin{theorem}[Uniform Achievability---Simplified from Propositions~\\ref{prop:highprivupper} and~\\ref{mainexp}]\\label{informalmainexp} For all exponential families (i.e., all measures $\\mu$), there is an algorithm $\\mathcal{A}_\\mu$ such that for all $\\theta_0\\in\\Phi$, and for all sequences $\\epsilon_n\\in[0,1]$, $\\delta\\in[0,1]$ and $n\\in\\mathbb{N}$, $\\mathcal{A}_{\\mu}(\\epsilon_n, \\delta, \\cdot)$ is $(\\epsilon_n, \\delta)$-DP and\n \\[\\ierror{\\mathcal{A}_{\\mu}}{n}{\\theta_0}= O_\\mu( \\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}}) )\\, .\\]\n\nThe $O_\\mu$ notation hides constants depending on $\\mu$ (but not $\\theta_0$). \n\\end{theorem}\n\nIn the low privacy regime, the optimal test is based on the clamped log-likelihood ratio test from \\cite{Canonne:2019}. \nMuch of the work of both Theorem~\\ref{thm:informalrate} and Theorem~\\ref{informalmainexp} goes into finding the right conditions for uniform convergence. There are several key quantities that determine the optimal local estimation rate, and when it is uniformly achievable.\n\n\\begin{itemize}\n \\item Define the radius of smoothness of $A''$ around $\\theta$ as \\[\\kappa(\\theta)= \\max\\left\\{r\\;\\Big|\\; \\forall \\theta'\\in[\\theta-r, \\theta+r], \\frac{A''(\\theta)}{A''(\\theta')}\\in\\left[\\frac{1}{2}, 2\\right]\\right\\}.\\] By the continuity of $A''$, $\\kappa(\\theta)>0$ for all $\\theta$. 
Recall that $A''(\\theta)$ is the variance of $P_{\\theta}$, so $\\kappa(\\theta)$ is related to the smoothness of the variance. Our theorems will be strongest for families where $\\kappa(\\theta)$ is large for most $\\theta$ of interest. \n Given the characterisation of $\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})$ in Theorem~\\ref{thm:informalrate}, if $\\kappa(\\theta)$ is large then the local rate varies slowly. \n The parameter $\\kappa(\\theta)$ affects the achievability in two main ways.\n \\begin{itemize}\n \n \\item The form we give for the local estimation rate holds for sample sizes $n$ above some threshold that depends on $\\kappa(\\theta)$. Specifically, one requirement is that $\\kappa(\\theta) \\geq J_{\\text{\\rm TV}, \\theta}^{-1}\\left(\\frac{1}{\\epsilon n}\\right)$. \n That is, $n$ must be large enough that if $\\theta'$ satisfies $\\text{\\rm TV}(P_\\theta, P_{\\theta'}) \\leq 1\/\\epsilon n$, then $\\theta'\\in\\Phi_{\\mu}(\\theta)$, where $\\Phi_{\\mu}(\\theta)= [\\theta - \\kappa(\\theta), \\theta + \\kappa(\\theta)]\n $. \n This condition ensures that with high probability our private estimate lies within $\\kappa(\\theta)$ of $\\theta$.\n \n \\item In order for our procedure to succeed (that is, produce an accurate estimate) with probability at least $1-\\beta$, we require that there exists a constant $C>0$ such that $\\kappa(\\theta)~\\geq~\\frac 1 C \\cdot \\frac {\\sqrt{\\log(2\/\\beta)}}{ \\sqrt{A''(\\theta)}}$ for all $\\theta$.\n Under this condition, the distributions $P_{\\theta}$ are sub-Gaussian, that is \n \\[\\mathbb{P}_{x\\sim P_{\\theta}}\\left(|x-A'(\\theta)|\\ge (2+C)\\sqrt{A''(\\theta)}\\sqrt{\\ln(2\/\\beta)}\\right)\\le\\beta.\\] \n This light-tailed property ensures that with high probability a dataset sampled from $P_{\\theta}$ lies mostly in an interval of width $O(\\sqrt{A''(\\theta)}\\sqrt{\\ln(2\/\\beta)})$. 
\n This allows us to limit the amount of noise added for privacy to also scale with the standard deviation $\\sqrt{A''(\\theta_0)}$. Without a light-tailed assumption, additional noise needs to be added to maintain privacy, resulting in a worse estimation rate. We see this effect in estimating the parameter of a Bernoulli distribution, where the scale of the noise needed to maintain privacy scales with $\\frac{1}{\\epsilon n}$, rather than the $\\frac{\\sqrt{p(1-p)}}{\\epsilon n}$ that would be predicted by Theorem~\\ref{thm:informalrate}. The family of Bernoulli distributions fails to satisfy this assumption unless we constrain $\\min(p,1-p)$ to be at least a constant.\n \\end{itemize}\n \\item We will also require that the central standardised fourth moment is bounded. That is, there exists a constant $\\zeta$ such that \\[\\frac{\\mathbb{E}_{\\theta}\\Big(x-A'(\\theta)\\Big)^4}{A''(\\theta)^2} = \\frac{\\mathbb{E}_{\\theta}\\Big(x - \\mathbb{E}_{\\theta} x\\Big)^4}{\\text{var}(P_{\\theta})^2} \\le\\zeta.\\] \n The central standardised fourth moment is also known as the \\emph{kurtosis}. This assumption allows us to give a \\emph{lower} bound on the tails of $P_{\\theta}$. That is, there exists a constant $c$ such that $\\Pr_{P_{\\theta}}[X\\ge A'(\\theta)+\\frac{1}{2}\\sqrt{A''(\\theta)}]\\ge c$. This assumption is required for our algorithm to properly estimate the standard deviation $\\sqrt{A''(\\theta)}$, which plays a crucial role in our estimator. It is possible that this assumption can be weakened with an improved private variance estimator. 
\n\\end{itemize}\n\n\\subsection{Examples of Exponential Families}\n\nBefore we move on to the proofs of Theorems~\\ref{thm:informalrate} and~\\ref{informalmainexp}, let us consider a few examples of simple exponential families and the implications of these theorems.\n\n\\begin{example}[Gaussian mean with known variance] Note we can write \\[\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{1}{2}\\left(\\frac{x-\\mu}{\\sigma}\\right)^2} = e^{\\frac{x}{\\sigma}\\cdot\\frac{\\mu}{\\sigma}-\\frac{1}{2}\\left(\\frac{\\mu}{\\sigma}\\right)^2}\\cdot\\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{1}{2}\\frac{x^2}{\\sigma^2}}.\\] Thus, if $\\sigma$ is known we can define an exponential family, with sufficient statistic $T(x)=x\/\\sigma$, by $\\theta=\\mu\/\\sigma$, $A(\\theta)=\\frac{1}{2}\\theta^2$ and $d\\mu(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}}e^{-\\frac{1}{2}\\frac{x^2}{\\sigma^2} }dx$. Notice that $A''(\\theta)=1$ so $\\kappa(\\theta)=\\infty$ for all $\\theta$. Further, the central standardised fourth moment is 3. This is the ideal behavior for the conditions needed for Theorem~\\ref{thm:informalrate} and Theorem~\\ref{informalmainexp} to hold.\nTherefore, the local minimax optimal rate for privately estimating $\\theta$ is $\\left[\\frac{C_1}{\\min\\{n\\epsilon_n, \\sqrt{n}\\}}, \\frac{C_2}{\\min\\{n\\epsilon_n, \\sqrt{n}\\}}\\right]$, which implies that the local minimax optimal rate for privately estimating the mean $\\mu$ is \\[\\left[\\frac{C_1\\sigma}{\\min\\{n\\epsilon_n, \\sqrt{n}\\}}, \\frac{C_2\\sigma}{\\min\\{n\\epsilon_n, \\sqrt{n}\\}}\\right].\\] This recovers a result of \\cite{Karwa:2018}.\n\\end{example}\n\n\\begin{example}[Poisson family] Recall that the Poisson distribution, characterized by parameter $\\lambda>0$, assigns mass to nonnegative integers according to \\[\\frac{\\lambda^xe^{-\\lambda}}{x!} = e^{x \\ln\\lambda-\\lambda}\\frac{1}{x!}.\\] We can define an exponential family by taking $\\theta=\\ln\\lambda$, $A(\\theta)=\\lambda=e^{\\theta}$ and the base measure $\\mu$ that assigns mass $\\frac 1 {x!}$ to all nonnegative integers $x$. 
\nThus, $A''(\\theta) = e^{\\theta}=\\lambda$, which implies that $\\kappa(\\theta)=\\ln 2$ for all $\\theta$. Further, the central fourth moment is $3\\lambda^2+\\lambda$; normalized by the square of the variance, we get $3+\\frac 1 \\lambda$.\nSuppose there exists a constant $c>0$ such that it is guaranteed that $\\lambda>c$. Once $n$ is sufficiently large, the local minimax optimal rate for privately estimating $\\theta$ is $\\left[\\frac{C_1}{\\sqrt{\\lambda}\\min\\{n\\epsilon_n, \\sqrt{n}\\}}, \\frac{C_2}{\\sqrt{\\lambda}\\min\\{n\\epsilon_n, \\sqrt{n}\\}}\\right]$. Using the first-order Taylor approximation for $\\lambda=e^{\\theta}$, we see that the local minimax optimal rate for privately estimating $\\lambda$ is in \\[\\left[\\frac{C_1\\sqrt{\\lambda}}{\\min\\{n\\epsilon_n, \\sqrt{n}\\}}, \\frac{C_2\\sqrt{\\lambda}}{\\min\\{n\\epsilon_n, \\sqrt{n}\\}}\\right]\\]\nfor constants $C_1$ and $C_2$ depending on $c$.\n\\end{example}\n\n\n\\subsection{Basic Facts about Exponential Families}\n\nLet us begin by reviewing some basic properties of exponential families. A family $\\{p_{\\theta}(x)\\}$ has monotone likelihood ratio if for all $ \\theta<\\theta'$, $\\frac{p_{\\theta'}(x)}{p_{\\theta}(x)}$ is a non-decreasing function of $x$. Exponential families have monotone likelihood ratio.\n\n\\begin{lemma}[Lehmann \\& Romano, Lemma 3.4.2]\\label{monotonelikelihood}\nLet $\\{p_{\\theta}(x)\\}$ be a family with monotone likelihood ratio, then\n\\begin{itemize}\n\\item If $g$ is a nondecreasing function of $x$, then $\\mathbb{E}_{\\theta}g(x)$ is a nondecreasing function of $\\theta$.\n\\item For any $\\theta<\\theta'$, and any $t$, $\\mathbb{P}_{\\theta}(x>t)<\\mathbb{P}_{\\theta'}(x>t).$\n\\end{itemize}\n\\end{lemma}\n\n\\begin{restatable}{corollary}{rmonotonecontinuous}\n\\label{monotonecontinuous} Assume that $A''$ is continuous. 
Then \nfor all $\\theta$, the function $h\\mapsto \\text{\\rm TV}(P_{\\theta}, P_{\\theta+h})$ is continuous and monotonically increasing on $h\\ge 0$.\n\\end{restatable}\n\n\\begin{lemma}\\label{belongsinkappa} For any $\\theta, \\theta'$,\nif $|A'(\\theta')-A'(\\theta)|\\le \\frac{1}{2}A''(\\theta)\\kappa(\\theta)$ then $\\theta'\\in\\Phi(\\theta).$\n\\end{lemma}\n\n\\begin{proof}\nWe prove the contrapositive; by the monotonicity of $A'$, it suffices to consider $|\\theta-\\theta'|=\\kappa(\\theta)$. In that case, $|A'(\\theta)-A'(\\theta')|\\ge \\min_{\\theta''\\in[\\theta,\\theta']}A''(\\theta'')|\\theta-\\theta'|\\ge \\frac{1}{2}A''(\\theta)\\kappa(\\theta).$ \n\\end{proof}\n\nThe following concentration inequality is proved in Section~\\ref{aconcentrationexp}.\n\n\\begin{restatable}{lemma}{rconcentrationexp}{\\em [Concentration Inequality for Exponential Families]}\n\\label{concentrationexp} For all measures $\\mu$, $\\theta\\in\\Phi$, and $\\beta\\in[0,1]$, \n\n{\\color{black} \n\\[\\mathbb{P}_{x\\sim P_{\\theta}}\\left(|x-A'(\\theta)|\\ge 2\\sqrt{A''(\\theta)}\\sqrt{\\ln(2\/\\beta)} + \\frac{\\ln(2\/\\beta)}{\\kappa(\\theta)}\\right)\\le\\beta.\\]\n\nIn particular, if $\\kappa(\\theta)\\geq \\frac 1 C \\cdot \\frac {\\sqrt{\\log(2\/\\beta)}}{ \\sqrt{A''(\\theta)}}$,\n\\[\\mathbb{P}_{x\\sim P_{\\theta}}\\left(|x-A'(\\theta)|\\ge (2+C)\\sqrt{A''(\\theta)}\\sqrt{\\ln(2\/\\beta)}\\right)\\le\\beta.\\]\n\n}\n\n\\end{restatable}\n\nLemma~\\ref{concentrationexp} shows that the tail of a distribution in an exponential family transitions from Gaussian to exponential as we move further out into the tail. How far out the Gaussian behaviour persists is a function of the standard deviation $\\sqrt{A''(\\theta)}$ and the stability of the standard deviation $\\kappa(\\theta)$. \n\n\\subsection{Non-Private Estimation}\n\nBefore we start designing our differentially private locally optimal estimator, let us first discuss the locally optimal estimator in the non-private setting. 
\nGiven a sample $X\\sim P_{\\theta}^n$, let \\[\\mathcal{A}_{opt}(X) = A'^{-1}\\left(\\frac{1}{n}\\sum_{i=1}^n x_i\\right) \\, .\\]\n\n\\begin{proposition}[Characterization of Optimal Local Estimation Rate in Non-Private Regime \\citep{BN:1978}] \n\\label{nonprivupper}\n$\\mathcal{A}_{opt}$ is the optimal non-private estimation algorithm and, for all measures $\\mu$ and $\\theta_0\\in\\Phi$, has rate \\[\\mathcal{E}_n^{\\rm loc}(P_{\\theta_0}, \\mathcal{P_{\\mu}}, \\mathcal{Q}_{\\rm est}, \\theta_0) = \\Theta\\left(\\frac{1}{\\sqrt{A''(\\theta_0) n}}\\right).\\] \n\\end{proposition}\n\n\\begin{restatable}{proposition}{rnonprivuniformachievability}{\\em [Uniform Achievability in Non-Private Regime]}\\label{nonprivuniformachievability}\nFor all measures $\\mu$ and $\\theta_0\\in\\Phi$ and $n\\in\\mathbb{N}$, if $n\\ge \\frac{36}{\\kappa(\\theta_0)^2A''(\\theta_0)}$, then \\[\\ierror{\\mathcal{A}_{opt}}{n}{\\theta_0} \\le \\frac{6}{\\sqrt{nA''(\\theta_0)}}.\\] \n\\end{restatable}\n\n\\begin{proof}[Proof of Proposition~\\ref{nonprivuniformachievability}]\nNote that $\\mathbb{E}_{\\theta_0}[\\frac{1}{n}\\sum_{i=1}^n x_i]=A'(\\theta_0)$ and $\\text{var}_{\\theta_0}[\\frac{1}{n}\\sum_{i=1}^n x_i]=\\frac{A''(\\theta_0)}{n}$. Thus, by Chebyshev's inequality, with probability 8\/9, \n\\begin{equation}\\label{meanacc}\n\\left|\\frac{1}{n}\\sum_{i=1}^n x_i-A'(\\theta_0)\\right|\\le 3\\sqrt{\\frac{A''(\\theta_0)}{n}}.\n\\end{equation}\nNow, \n$\\frac{6}{\\sqrt{nA''(\\theta_0)}}\\le\\kappa(\\theta_0)$ implies that $3\\sqrt{\\frac{A''(\\theta_0)}{n}}\\le\\frac{1}{2}A''(\\theta_0)\\kappa(\\theta_0)$. 
So, by Lemma~\\ref{belongsinkappa}, if Equation~\\eqref{meanacc} holds then $\\mathcal{A}_{opt}(X)\\in\\Phi(\\theta_0)$.\nTherefore, with probability at least 8\/9,\n\\begin{align*}\n\\left|A'^{-1}\\left(\\frac{1}{n}\\sum_{i=1}^n x_i\\right)-\\theta_0\\right|&\\le \\max_{t\\in[\\frac{1}{n}\\sum_{i=1}^n x_i, A'(\\theta_0)]} (A'^{-1})'(t)\\left|\\frac{1}{n}\\sum_{i=1}^n x_i-A'(\\theta_0)\\right|\\\\\n&\\le \\max_{\\theta'\\in[A'^{-1}(\\frac{1}{n}\\sum_{i=1}^n x_i), \\theta_0]} \\frac{1}{A''(\\theta')} 3\\sqrt{\\frac{A''(\\theta_0)}{n}}\\\\\n&\\le 6\\frac{1}{\\sqrt{nA''(\\theta_0)}}. \\qedhere\n\\end{align*}\n\\end{proof}\n\n\\subsection{Initial Estimator}\n\nIn both the high and low privacy settings, our first step will be to get a crude estimate of $\\mathbb{E}_{\\theta_0}[x]$.\nThis initial estimate will then be used to obtain a more refined estimate of $\\theta_0$. In both cases a sufficient initial estimate is given by a slight variation of the mean estimator given in \\cite{Karwa:2018}. Note that we could use this estimate of $A'(\\theta_0)=\\mathbb{E}_{\\theta_0}[x]$ to get an estimate of $\\theta_0$ in the same way we did in $\\mathcal{A}_{opt}$. However, the resulting estimator of $\\theta$ is suboptimal by a factor of $\\sqrt{\\ln n}$. 
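For intuition, here is a minimal sketch (ours, not the paper's code; the function names are ours) of the plug-in map used by $\\mathcal{A}_{opt}$ for the Poisson family of the earlier example, where $A'(\\theta)=e^{\\theta}$ and hence $A'^{-1}=\\ln$; the sampler is the textbook product-of-uniforms method:

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw one Poisson(lam) variate (Knuth's product-of-uniforms method, lam > 0)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def a_opt(sample):
    """Plug-in estimate theta_hat = (A')^{-1}(sample mean); here (A')^{-1} = ln."""
    return math.log(sum(sample) / len(sample))

rng = random.Random(0)
lam = 5.0                                  # true lambda, so theta_0 = ln(5)
sample = [sample_poisson(lam, rng) for _ in range(20_000)]
theta_hat = a_opt(sample)                  # concentrates around ln(5)
```

Consistent with the proposition above, the error of this estimate scales as $1\/\\sqrt{nA''(\\theta_0)} = 1\/\\sqrt{n\\lambda}$.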
A full description of the initial estimator is given in Appendix~\\ref{appendix:meanest}; we will denote it by $\\mathcal{M}_{\\fourthmoment, C}$.\n\n\\begin{restatable}{theorem}{rinitialestthm}\n\\label{initialestthm}\nThere exist constants $c>0$ and $b>0$ such that for all $\\epsilon>0$, $\\delta\\in[0,1]$, $\\zeta>0$, and $C>0$, there exists an $(\\epsilon, \\delta)$-DP algorithm, $\\mathcal{M}_{\\fourthmoment, C}$, such that for all measures $\\mu$ and $\\theta_0\\in\\Phi$, if \n\\begin{itemize}\n\\item $\\frac{\\mathbb{E}_{\\theta_0}[|x-A'(\\theta_0)|^3]}{\\sqrt{A''(\\theta_0)}^3}\\le \\zeta$\n\\item $\\kappa(\\theta_0)\\ge \\frac{1}{C}\\frac{1}{\\sqrt{A''(\\theta_0)}}$\n\\end{itemize}\nthen for all $n\\in\\mathbb{N}$ such that $n\\ge \\frac{c\\zeta^2\\ln(1\/\\delta)}{\\epsilon}$, if $X\\sim P_{\\theta_0}^n$, then with probability $0.8$, \\[\\left|\\mathcal{M}_{\\fourthmoment, C}(X)-\\frac{1}{n}\\sum_{x\\in X} x\\right| \\le b(6+C)\\left(\\frac{\\sqrt{A''(\\theta_0)}}{n\\epsilon}\\sqrt{\\ln(n)}\\right). 
\\]\n\\end{restatable}\n\n\n\\begin{restatable}{corollary}{rallforalog}\n\\label{allforalog}\nThere exists a constant $c>0$ such that for all $\\epsilon_n=\\Omega(\\frac{\\ln n}{n})$, $\\delta\\in[0,1]$, $\\zeta>0$, $C>0$ there exists an $(\\epsilon_n, \\delta)$-DP algorithm, $\\mathcal{M}_{\\fourthmoment, C}$ and constants $D>0$ and $N\\in\\mathbb{N}$ such that for all measures $\\mu$ and $\\theta_0\\in\\Phi$, if \n\\begin{itemize}\n\\item $\\frac{\\mathbb{E}_{\\theta_0}[|x-A'(\\theta_0)|^3]}{\\sqrt{A''(\\theta_0)}^3}\\le \\zeta$\n\\item $\\kappa(\\theta_0)\\ge \\frac{1}{C}\\frac{1}{\\sqrt{A''(\\theta_0)}}$\n\\end{itemize}\nthen for all $n\\in\\mathbb{N}$ such that $n\\ge \\max\\{N, \\frac{c\\zeta^2\\ln(1\/\\delta)}{\\epsilon_n}\\}$, with probability at least 0.8, \\[|A'^{-1}(\\mathcal{M}_{\\fourthmoment, C}(X))-\\theta_0|\\le D\\left(\\frac{1}{\\sqrt{nA''(\\theta_0)}}+\\frac{1}{n\\epsilon_n\\sqrt{A''(\\theta_0)}}\\sqrt{\\ln(n)}\\right).\\]\n\\end{restatable}\n\n\\begin{proof}\nBy Proposition~\\ref{nonprivupper} and Theorem~\\ref{initialestthm}, there exists a constant $C>0$ such that for all $\\theta_0\\in\\Phi$ satisfying the two conditions and $n\\ge \\frac{c\\zeta^2\\ln(1\/\\delta)}{\\epsilon_n}$ with probability 0.8 we have \n\\[|\\mathcal{M}_{\\fourthmoment, C}(X)-A'(\\theta_0)| \\le \\frac{C}{2}\\left( \\frac{\\sqrt{A''(\\theta_0)}}{\\sqrt{n}}+ \\frac{\\sqrt{A''(\\theta_0)}}{n\\epsilon_n}\\sqrt{\\ln\\left(n\\right)}\\right).\\] \nNow, since $\\kappa(\\theta_0)\\sqrt{A''(\\theta_0)}\\ge \\frac{1}{C}$ and $\\epsilon_n=\\Omega(\\frac{\\ln n}{n})$, there exists $N\\in\\mathbb{N}$ such that for all $n>N$, \\[\\frac{C}{2}\\left(\\frac{\\sqrt{A''(\\theta_0)}}{\\sqrt{n}}+\\frac{\\sqrt{A''(\\theta_0)}\\sqrt{\\ln n}}{\\epsilon_n n}\\right)\\le \\frac{1}{2}A''(\\theta_0)\\kappa(\\theta_0),\\] which combined with Lemma~\\ref{belongsinkappa} implies that $A'^{-1}(\\mathcal{M}_{\\fourthmoment, C}(X))\\in\\Phi(\\theta_0)$.\nTherefore, \n\\begin{align*}\n|(A')^{-1}(\\mathcal{M}_{\\fourthmoment, 
C}(X))-\\theta_0| &\\le \\max_{t\\in[\\mathcal{M}_{\\fourthmoment, C}(X), A'(\\theta_0)]}\\left((A')^{-1}\\right)'(t)\\left|\\mathcal{M}_{\\fourthmoment, C}(X)-A'(\\theta_0)\\right|\\\\\n&= \\max_{t\\in[\\mathcal{M}_{\\fourthmoment, C}(X), A'(\\theta_0)]}\\frac{1}{A''(A'^{-1}(t))}\\left|\\mathcal{M}_{\\fourthmoment, C}(X)-A'(\\theta_0)\\right|\\\\\n&\\le 2\\frac{1}{A''(\\theta_0)}\\frac{C}{2}\\left( \\frac{\\sqrt{A''(\\theta_0)}}{\\sqrt{n}}+ \\frac{\\sqrt{A''(\\theta_0)}}{n\\epsilon_n}\\sqrt{\\ln\\left(n\\right)}\\right)\\\\\n&= C\\left( \\frac{1}{\\sqrt{nA''(\\theta_0)}}+ \\frac{1}{n\\epsilon_n\\sqrt{A''(\\theta_0)}}\\sqrt{\\ln\\left(n\\right)}\\right). \\qedhere\n\\end{align*}\n\\end{proof}\n\n\\subsection{High Privacy Regime}\n\nWe begin with the high privacy regime. While the noisy clamped log-likelihood ratio test is optimal in general for private simple hypothesis testing, a simpler test works in the high privacy regime. This test, which informs our design of the private estimator in this section, is a simple noisy counting test, and looks very similar to the optimal test in the local DP setting, presented in \\cite{Duchi:2018}. The form of the estimation rate is also simpler in this section since, as we saw in Lemma~\\ref{smalleps}, the sample complexity of differentially private simple hypothesis testing takes on a simpler form in this regime. \n\n\\subsubsection{Characterising the Optimal Local Estimation Rate in High Privacy Regime}\n\nRecall from Corollary~\\ref{smallepslower} that the optimal local estimation rate in the high privacy regime is characterized by the $L_1$-information, defined in Equation~\\eqref{L1info}: for $\\beta\\in[0,1]$\n\\[J_{\\text{\\rm TV}, \\theta}^{-1}(\\beta) = \\sup\\{|h|\\;|\\; \\text{\\rm TV}(P_{\\theta}, P_{\\theta+h})\\le\\beta\\}.\\]\nThe following lemma characterizes the $L_1$-information, and hence the optimal local estimation rate, in terms of properties of the one-parameter exponential family. 
The proof can be found in Appendix~\\ref{aboundonmodulus}.\n\n\\begin{restatable}{lemma}{rboundonmodulus}\n\\label{boundonmodulus} For all $\\zeta>0$, there exists a constant $C$ such that for all measures $\\mu$, $\\theta_0\\in\\Phi$, and $\\beta\\in[0,1]$, if $ \\frac{\\mathbb{E}_{P_{\\theta_0}}(X-A'(\\theta_0))^4}{A''(\\theta_0)^2}\\le \\zeta\\le \\frac{9}{128\\beta}$ and $\\kappa(\\theta_0)\\ge J_{\\text{\\rm TV},\\theta_0}^{-1}(\\beta)$ then, \\[J_{\\text{\\rm TV}, \\theta_0}^{-1}(\\beta)\\in \\left[\\frac{1}{\\sqrt{2}} \\frac{\\beta}{\\sqrt{A''(\\theta_0)}}, C \\frac{\\beta}{\\sqrt{A''(\\theta_0)}}\\right].\\]\n\\end{restatable}\n\n\nThe following corollary follows immediately from Theorem~\\ref{smallepslower} and Lemma~\\ref{boundonmodulus}.\n\n\\begin{corollary}[Optimal Local Estimation Rate in the High Privacy Regime]\\label{smallepslowerexp} For all constants $k$ and $\\zeta>0$, there exist constants $C_1$, $C_2$ and $C_3$ such that for all measures $\\mu$ and $\\theta_0\\in\\Phi$, if $ \\frac{\\mathbb{E}_{P_{\\theta_0}}(X-A'(\\theta_0))^4}{A''(\\theta_0)^2}\\le \\zeta$, $\\epsilon_n\\le \\frac{k}{\\sqrt{n}}$, $\\kappa(\\theta_0)\\ge J^{-1}_{\\text{\\rm TV},\\theta_0}(\\frac{C_3}{n\\epsilon_n})$ and $\\frac{C_3}{n\\epsilon_n}\\le \\frac{9}{128\\zeta}$ then for all $n\\in\\mathbb{N}$, \\[\\mathcal{E}^{\\rm loc}_n(P_{\\theta_0}, \\mathcal{P_{\\mu}}, \\mathcal{Q}_{\\rm est, \\epsilon}, \\theta_0) \\in \\left[\\frac{C_1}{n\\epsilon_n\\sqrt{A''(\\theta_0)}}, \\frac{C_2}{n\\epsilon_n\\sqrt{A''(\\theta_0)}}\\right].\\]\n\\end{corollary}\n\n\\subsubsection{Uniform Achievability in High Privacy Regime}\n\nIn this section we show that in the high privacy regime, uniform achievability holds for a simple estimator based on estimating $\\mathbb{P}_{\\theta}(x>\\mathbb{E}_{\\theta}[x])$.\nLet $f_t(X) = \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{x_i>t}$. 
Note that by Lemma~\\ref{monotonelikelihood},\n\\[g_t(\\theta):=\\mathbb{E}_{\\theta}[f_t(x)]=\\mathbb{P}_{\\theta}(x>t)\\]\nis monotone and invertible in $\\theta$, so Algorithm~\\ref{algo:estexphighprivstep1} is well-defined. Our estimator $\\mathcal{A}_{\\rm high}$ requires as input a DP mean estimator, so $\\hat{t}$ is an estimate of $\\mathbb{E}_{\\theta_0}[x]$. \nWe refine the estimate $\\hat{t}$ by using an estimator with lower sensitivity. Any sufficiently accurate mean estimator can be used for $\\mathcal{M}$, but we note that the estimator described in Theorem~\\ref{initialestthm} (derived from \\cite{Karwa:2018}) is sufficient.\n\n\\begin{algorithm} \\caption{$\\mathtt{Count}$}\\label{algo:estexphighprivstep1}\n\\begin{algorithmic}[1]\n\\Require{Sample $X \\sim P_{\\theta_0}^{n}$, $\\hat{t}$, $\\epsilon$}\n\\State $\\hat{\\theta} = g_{\\hat t}^{-1}(f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right))$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm} \\caption{$\\mathcal{A}_{{\\rm high}}$}\\label{algo:estexphighprivstep2}\n\\begin{algorithmic}[1]\n\\Require{Sample $X_1 \\sim P_{\\theta_0}^{n}$ and $X_2 \\sim P_{\\theta_0}^{n}$, an $(\\epsilon\/2, \\delta)$-DP mean estimator $\\mathcal{M}$}\n\\State $\\hat{t}=\\mathcal{M}(X_1)$.\n\\State $\\hat{\\theta} = \\mathtt{Count}(X_2,\\hat{t}, \\epsilon\/2)$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{restatable}[Uniform Achievability in High Privacy Regime]{proposition}{rprophighprivupper} \\label{prop:highprivupper} For any $\\epsilon_n>0$ and $\\delta_n\\in[0,1]$, $\\mathcal{A}_{\\rm high}$ is $(\\epsilon_n,\\delta_n)$-DP.\nFurther, there exists a constant $c>0$ such that for all constants $\\zeta>0$, $C>0$, and $\\delta\\in[0,1]$, there exists an estimator $\\mathcal{M}_{\\fourthmoment, C}$ such that there exist constants $N\\in\\mathbb{N}$ and $C'>0$ such that for all exponential families (i.e. 
any measure $\\mu$) and $\\theta_0\\in\\Phi$, if \n\\begin{itemize}\n\\item $\\Omega\\left(\\frac{\\ln n}{n}\\right)\\le \\epsilon_n\\le O\\left(\\frac{1}{\\sqrt{n}}\\right)$\n\\item $\\frac{\\mathbb{E}_{P_{\\theta_0}}(X-A'(\\theta_0))^4}{A''(\\theta_0)^2}\\le \\zeta$,\n\\item $\\kappa(\\theta_0)\\ge \\frac{1}{C}\\frac{1}{\\sqrt{A''(\\theta_0)}}$\n\\end{itemize}\nthen for all $n\\in\\mathbb{N}$ such that $\\kappa(\\theta_0)\\ge J^{-1}_{\\text{\\rm TV},\\theta_0}(\\frac{1}{\\epsilon_n n})$ and $n\\ge \\max\\{N, \\frac{c\\zeta^2\\ln(1\/\\delta_n)}{\\epsilon_n}\\}$,\n\\[\\ierror{\\mathcal{A}_{{\\rm high}, C, \\fourthmoment}}{n}{\\theta_0} \\le C'\\cdot J_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{1}{\\epsilon_n n}\\right)=O\\left(\\frac{1}{n\\epsilon_n\\sqrt{A''(\\theta_0)}}\\right),\\]\nwhere $\\mathcal{A}_{{\\rm high}, C, \\fourthmoment}$ is $\\mathcal{A}_{{\\rm high}}$ with initial mean estimator $\\mathcal{M}_{\\fourthmoment, C}$.\n\\end{restatable}\n\nNote that the upper bound in Proposition~\\ref{prop:highprivupper} matches the characterization of the optimal local estimation rate given in Corollary~\\ref{smallepslowerexp}. \nThus, Proposition~\\ref{prop:highprivupper} implies uniform achievability; there exists an algorithm that achieves the optimal local estimation rate for every $\\theta_0\\in\\Phi$. Note that many of the conditions required in Proposition~\\ref{prop:highprivupper} were already present in Corollary~\\ref{allforalog}. Indeed, we will primarily use these conditions to ensure that our initial estimate $\\mathcal{M}(X_1)$ is sufficiently accurate. 
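To illustrate the two-step estimator, here is a minimal sketch (ours; all function names are ours) of $\\mathtt{Count}$ for the Gaussian family with known variance $1$, where $g_t(\\theta)=1-\\Phi(t-\\theta)$ has a closed form and can be inverted by bisection; we take the initial estimate $\\hat{t}$ as given rather than running $\\mathcal{M}$:

```python
import math
import random

def gauss_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def count_estimator(sample, t_hat, eps, rng):
    """Sketch of Count for N(theta, 1): privatise the empirical tail frequency
    f_t(X) with Laplace(1/(eps*n)) noise, then invert g_t(theta) = P_theta(x > t)."""
    n = len(sample)
    f = sum(1 for x in sample if x > t_hat) / n
    # Laplace(1/(eps*n)) noise via inverse-CDF sampling.
    u = rng.random() - 0.5
    f += -math.copysign(math.log(1.0 - 2.0 * abs(u)), u) / (eps * n)
    # Clamp the noisy frequency into (0, 1) and invert the monotone map
    # g_t(theta) = 1 - Phi(t - theta) by bisection.
    f = min(max(f, 1e-6), 1.0 - 1e-6)
    lo, hi = t_hat - 20.0, t_hat + 20.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if 1.0 - gauss_cdf(t_hat - mid) < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = random.Random(0)
theta0, n, eps = 0.3, 10_000, 0.05
sample = [rng.gauss(theta0, 1.0) for _ in range(n)]
theta_hat = count_estimator(sample, t_hat=0.25, eps=eps, rng=rng)
```

The Laplace noise has scale $1\/(\\epsilon n)$, so when $\\epsilon_n \\lesssim 1\/\\sqrt{n}$ the privacy noise, rather than the sampling error of the tail frequency, dominates, matching the $1\/(n\\epsilon_n\\sqrt{A''(\\theta_0)})$ rate above; the sketch is purely illustrative.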
\n\nThe proof proceeds by arguing that the test defined by the test statistic $f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)$ is good enough to distinguish $P_{\\theta_0}$ from $P_{\\theta_1}$ provided $|\\theta_0-\\theta_1|\\ge CJ_{\\text{\\rm TV},\\theta_0}^{-1}\\left(\\frac{1}{n\\epsilon_n}\\right)$, and thus the estimator derived from this tester will, with high probability, not output such a $\\theta_1$.\nThe main technical challenge in this section will be showing that $f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)$ is a good test statistic for distinguishing between $P_{\\theta_0}$ and $P_{\\theta_1}$ when $|\\theta_0-\\theta_1|\\ge CJ_{\\text{\\rm TV},\\theta_0}^{-1}\\left(\\frac{1}{n\\epsilon_n}\\right)$. We first show that if $\\hat{t}$ is a good enough estimate for $A'(\\theta_0)$, then $|\\mathbb{P}_{\\theta_0}(X>\\hat{t})-\\mathbb{P}_{\\theta_1}(X>\\hat{t})|\\approx \\text{\\rm TV}(P_{\\theta_0}, P_{\\theta_1})$. Note that this would be obviously true if $\\hat{t}=\\frac{A(\\theta_0)-A(\\theta_1)}{\\theta_0-\\theta_1}$, so the majority of the work goes into proving that $\\hat{t}$ is close enough to this ideal boundary point. Then, we show that the standard deviation of the statistic $f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)$ is $\\approx \\frac{1}{\\epsilon n}$, so the test will distinguish $P_{\\theta_0}$ and $P_{\\theta_1}$ provided $\\text{\\rm TV}(P_{\\theta_0}, P_{\\theta_1})=\\Omega(\\frac{1}{\\epsilon n})$, as required. The following is the main technical lemma in this section; it is proved in Section~\\ref{ahighprivexptest}. 
\n\n\n\\begin{restatable}{lemma}{rhighprivexptest}\n\\label{highprivexptest} For all positive constants $\\zeta$ and $b$, and $\\Omega\\left(\\frac{\\ln n}{n}\\right)\\le \\epsilon_n\\le O\\left(\\frac{1}{\\sqrt{n}}\\right)$, there exist constants $N\\in\\mathbb{N}$ and $C>0$ such that for all measures $\\mu$, $\\theta_0\\in\\Phi$ and $n\\in\\mathbb{N}$ such that $n\\ge N$, if \n\\begin{enumerate}\n\\item $\\max_{\\theta\\in\\Phi(\\theta_0)}\\frac{\\mathbb{E}_{P_{\\theta}}[(X-A'(\\theta_0))^4]}{A''(\\theta)^2}\\le \\zeta$.\n\\item\\label{initialacc} $|\\hat{t}-A'(\\theta_0)|\\le \\textcolor{black}{b\\left(\\sqrt{\\frac{A''(\\theta_0)}{n}}+\\frac{\\sqrt{A''(\\theta_0)\\ln\\left(n\\right)}}{\\epsilon n}\\right)}$\n\\item $\\kappa(\\theta_0)\\ge J^{-1}_{\\text{\\rm TV},\\theta_0}\\left(\\frac{8\\sqrt{2}}{\\epsilon_n n}\\right)$\n\\end{enumerate} \nthen for all $\\theta_1\\in\\Phi(\\theta_0)$ such that $|\\theta_0-\\theta_1|\\ge CJ_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{1}{\\epsilon_nn}\\right)$, there exists a threshold $\\tau$ such that the test \\[\\begin{cases}\nP_{\\theta_0} & \\text{if } f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)\\le \\tau\\\\\nP_{\\theta_1} & \\text{if } f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)> \\tau\n\\end{cases}\n\\]\ndistinguishes between $\\theta_0$ and $\\theta_1$. Furthermore, $\\mathbb{E}_{P_{\\theta_0}}[f_{\\hat{t}}(X)]\\le \\tau\\le \\mathbb{E}_{P_{\\theta_1}}[f_{\\hat{t}}(X)].$\n\\end{restatable}\n\nThe following corollary translates from the above testing result to an estimation result. The intuition for this conversion is that if the test statistic $f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right)$ can distinguish $\\theta_0$ from $\\theta_1$, then the estimation algorithm $\\mathtt{Count}(X,\\hat{t})$ is unlikely to output $\\theta_1$ when the data is drawn from $P_{\\theta_0}$. 
\n\n\\begin{restatable}{corollary}{rhighprivoracleest}\n\\label{highprivoracleest} For all positive constants $\\zeta$ and $b$, and $\\Omega\\left(\\frac{\\ln n}{n}\\right)\\le \\epsilon\\le O\\left(\\frac{1}{\\sqrt{n}}\\right)$, there exist constants $N\\in\\mathbb{N}$ and $C>0$ such that for all measures $\\mu$, $\\theta_0\\in\\Phi$ and $n\\in\\mathbb{N}$ such that $n\\ge N$, if \n\\begin{enumerate}\n\\item $\\max_{\\theta\\in\\Phi(\\theta_0)}\\frac{\\mathbb{E}_{P_{\\theta}}[(X-A'(\\theta_0))^4]}{A''(\\theta)^2}\\le \\zeta$,\n\\item $|\\hat{t}-A'(\\theta_0)|\\le \\textcolor{black}{b\\left(\\sqrt{\\frac{A''(\\theta_0)}{n}}+\\frac{\\sqrt{A''(\\theta_0)\\ln\\left(n\\right)}}{\\epsilon n}\\right)}$,\n\\item $\\kappa(\\theta_0)\\ge J^{-1}_{\\text{\\rm TV}, \\theta_0}\\left(\\frac{1}{\\epsilon_n n}\\right)$,\n\\end{enumerate} \nthen we have with probability 0.75,\n\\[|\\mathtt{Count}(X,\\hat{t})-\\theta_0| \\le C\\cdot J_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{1}{\\epsilon_n n}\\right)=O\\left(\\frac{1}{n\\epsilon_n\\sqrt{A''(\\theta_0)}}\\right).\\]\n\\end{restatable}\n\n\\begin{proof} Let $C$ and $N$ be as in Lemma~\\ref{highprivexptest}.\nLet $X\\sim P_{\\theta_0}^n$ and suppose for sake of contradiction that $\\mathtt{Count}(X,\\hat{t})~=~\\theta_1$ where $|\\theta_1-\\theta_0|\\ge CJ_{\\text{\\rm TV}, \\theta_0}^{-1}\\left(\\frac{1}{\\epsilon_nn}\\right)$. Then by definition, $f_{\\hat{t}}(X)+\\mathrm{Lap}\\left(\\frac{1}{\\epsilon n}\\right) = \\mathbb{E}_{Y\\sim P_{\\theta_1}}[f_{\\hat{t}}(Y)]$. Therefore, the test in Lemma~\\ref{highprivexptest} would have rejected $P_{\\theta_0}$, which only happens with probability $0.25$.\n\\end{proof}\n\nThe proof of Proposition~\\ref{prop:highprivupper} follows directly from Corollary~\\ref{highprivoracleest} and Theorem~\\ref{initialestthm}. 
Note that the conditions in Proposition~\\ref{prop:highprivupper} are sufficient to ensure that the mean estimator from Theorem~\\ref{initialestthm} is accurate enough that $\\hat{t}$ satisfies the required condition in Corollary~\\ref{highprivoracleest} with high probability.\n\n\\subsection{Low Privacy Regime}\n\nIn the low privacy regime, $\\epsilon=\\Omega\\left(\\frac{1}{\\sqrt{n}}\\right)$, we claim that the optimal local estimation rate under privacy is asymptotically the same as the non-private rate. The estimator that achieves the optimal local estimation rate is derived from the noisy clamped log-likelihood test outlined in Section~\\ref{cllr}, the optimal algorithm for privately distinguishing two distributions.\n\n\\subsubsection{Characterising the Optimal Local Estimation Rate in Low Privacy Regime}\n\nA main component of this claim is that for exponential families, the modulus of continuity of the non-private sample complexity, ${\\rm SC}$, is equal to the modulus of continuity of the private sample complexity, ${\\rm SC}_{\\epsilon_n}$, in this parameter regime. 
\nThe proof of the following proposition is found in Section~\\ref{alowprivmodcontinuity}.\n\n\\begin{restatable}{proposition}{rlowprivmodcontinuity}\n\\label{lowprivmodcontinuity} For all $\\zeta>0$, there exist positive constants $k$, $C_1$, $C_2$ and $N$ such that for all measures $\\mu$, $\\theta\\in\\Phi$ and $n\\in\\mathbb{N}$ such that $n\\ge N$, if \n\\begin{itemize}\n\\item $\\epsilon_n\\ge \\frac{k}{\\sqrt{n}}$, \n\\item $\\max_{\\theta'\\in\\Phi(\\theta)}\\frac{\\mathbb{E}_{\\theta'}((x-A'(\\theta'))^4)}{A''(\\theta')^2}\\le \\zeta$, \n\\item $\\kappa(\\theta)\\ge \\frac{C_2}{\\sqrt{n A''(\\theta)}}$,\n\\end{itemize} \nthen \\[\\frac{C_1}{\\sqrt{nA''(\\theta)}}\\le\\omega_n(P_{\\theta}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})\\le \\frac{C_2}{\\sqrt{nA''(\\theta)}}.\\]\nThis implies $\\omega_n(P_{\\theta}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}}) =\\Theta\\left(\\omega_n(P_{\\theta}, \\mathcal{F}^{\\text{\\rm test}})\\right)$.\n\\end{restatable}\n\nThe following corollary follows immediately from Theorem~\\ref{lowerbound} and Proposition~\\ref{lowprivmodcontinuity}.\n\n\\begin{corollary}[Lower Bound in the Low Privacy Regime]\\label{largeepslowerexp}\nFor all $\\zeta>0$, there exist positive constants $k$, $C_1$, $C_2$ and $N$ such that under the same conditions as Proposition~\\ref{lowprivmodcontinuity}, \\[\\mathcal{E}^{\\rm loc}_n(P_{\\theta_0}, \\mathcal{P_{\\mu}}, \\Qest_\\epsilon, \\theta) \\in \\left[\\frac{C_1}{2\\sqrt{nA''(\\theta_0)}}, \\frac{C_2}{2\\sqrt{nA''(\\theta_0)}}\\right]. \\]\n\\end{corollary}\n\n\\subsubsection{Uniform Achievability in the Low Privacy Regime}\n\nOur estimator in the low privacy regime is based on the $\\operatorname{ncLLR}$ test described in Section~\\ref{DPtestingsec}. Recall that the probability density function of $P_{\\theta}$ has the form $p_{\\theta}(x)=e^{\\theta x-A(\\theta)}$. 
Intuitively, the idea is that if we appropriately clamp the test statistic $x-\\hat{t}$ then it contains roughly the same amount of information as the clamped log-likelihood ratio. Suppose $|\\theta_1-\\theta_0|=\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})$; then\n\\begin{align*}\n(\\theta_1-\\theta_0)[x-\\hat{t}]_{-\\epsilon\/\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})}^{\\epsilon\/\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})} &= [(\\theta_1-\\theta_0)x-(\\theta_1-\\theta_0)\\hat{t}]_{-\\epsilon}^{\\epsilon} \\\\\n&= \\left[\\ln\\frac{P_{\\theta_1}(x)}{P_{\\theta_0}(x)}+A(\\theta_1)-A(\\theta_0)-(\\theta_1-\\theta_0)\\hat{t}\\right]_{-\\epsilon}^{\\epsilon} \\\\\n&\\approx \\left[\\ln\\frac{P_{\\theta_1}(x)}{P_{\\theta_0}(x)}\\right]_{-\\epsilon}^{\\epsilon}\n\\end{align*}\nwhere the final approximation holds since $\\hat{t}\\approx \\mathbb{E}_{\\theta_0}[x]=A'(\\theta_0)$. The final term is the optimal test statistic for distinguishing the two distributions. The main technical difficulty is in showing that the approximations do not affect the sample complexity of the test too much. As in the high privacy setting, we need an initial mean estimator to estimate $\\hat{t}\\approx\\mathbb{E}_{\\theta_0}[x]$ and $\\tilde{\\alpha}\\approx \\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})$. Again the estimator from Theorem~\\ref{initialestthm} will suffice. For ease of notation, let $\\alpha_n(\\theta) = \\frac{1}{\\sqrt{nA''(\\theta)}}$.\n\n\\begin{algorithm} \\caption{${\\rm nCLLRE}$}\\label{algo:estexp}\n\\begin{algorithmic}[1]\n\\Require{Sample $X_1 = (x_1, \\cdots, x_n)\\sim P_{\\theta_0}^n$, $\\hat{t}$, $C$, $\\epsilon$}\n\\State $\\tilde{\\alpha} = \\alpha_n(A'^{-1}(\\hat{t}))$. 
\n\\State $f_{\\tilde{\\alpha}}(X_1) = \\frac{1}{n}\\sum_{i=1}^n [x_i-\\hat{t}]_{\\frac{-\\epsilon}{C\\tilde{\\alpha}}}^{\\frac{\\epsilon}{C\\tilde{\\alpha}}}$\n\\State $\\widehat{f} = f_{\\tilde{\\alpha}}(X_1) + \\mathrm{Lap}\\left(\\frac{2}{\\epsilon n C\\tilde{\\alpha}}\\right)$\n\\State \\Return $\\hat{\\theta} = \\arg\\min_{\\theta} \\left|\\widehat{f}-\\mathbb{E}_{X\\sim P_{\\theta}^n}\\left(f_{\\tilde{\\alpha}}(X)\\right)\\right|$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{algorithm} \\caption{$\\mathcal{A}_{{\\rm low}}$, estimating exponential family parameters}\\label{algo:estlow}\n\\begin{algorithmic}[1]\n\\Require{Sample $X_1\\sim P_{\\theta_0}^n$, $X_2\\sim P_{\\theta_0}^n$, an $(\\epsilon\/2, \\delta)$-DP mean estimator $\\mathcal{M}$, and $C$}\n\\State $\\hat{t} = \\mathcal{M}(X_2)$. \n\\State \\Return $\\hat{\\theta} = {\\rm nCLLRE}(X_1, \\hat{t}, C, \\epsilon\/2)$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{restatable}{proposition}{rmainexp}{\\em [Uniform Achievability in Low Privacy Regime]}\\label{mainexp}\nFor any $\\epsilon_n>0, \\delta_n\\in[0,1], C>0$, $\\mathcal{A}_{{\\rm low}}$ is $(\\epsilon_n, \\delta_n)$-DP. Further, for all $\\zeta>0$, $C>0$, $\\epsilon_n>0$, and $\\delta_n\\in[0,1]$, there exists an initial estimator $\\mathcal{M}_{\\fourthmoment, C}$ such that there exist constants $N\\in\\mathbb{N}$, $k\\ge 0$, $C'>0$, $D>0$ such that for all exponential families (i.e. 
any measure $\\mu$) and $\\theta_0\\in\\Phi$ if \n\\begin{itemize}\n\\item $\\epsilon_n\\ge\\frac{k}{\\sqrt{n}}$, \n\\item $\\frac{\\mathbb{E}_{\\theta_0}((x-\\mathbb{E}_{\\theta_0}(x))^4)}{A''(\\theta_0)^2}\\le \\zeta$, \n\\item $\\kappa(\\theta_0)\\ge \\frac{1}{C}\\frac{1}{\\sqrt{A''(\\theta_0)}}$,\n\\end{itemize}\nthen for all $n\\in\\mathbb{N}$ such that $n\\ge \\max\\left\\{N, \\frac{c\\zeta^2\\ln(1\/\\delta_n)}{\\epsilon_n}, \\frac{2\\ln\\left(\\frac{1}{(DA''(\\theta_0))^2}\\right)}{(DA''(\\theta_0))^2}\\right\\}$ we have \n\\[\\ierror{\\mathcal{A}_{{\\rm low}, C, \\fourthmoment}}{n}{\\theta_0}\\le C'\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}}),\\]\nwhere $\\mathcal{A}_{{\\rm low}, C, \\fourthmoment}$ is $\\mathcal{A}_{{\\rm low}}$ with initial mean estimator $\\mathcal{M}_{\\fourthmoment, C}$.\n\\end{restatable}\n\nThe main technical part of our proof is the following lemma, whose proof we defer to Section~\\ref{amainlemmaexp}. \n\n\n\\begin{restatable}{lemma}{rmainlemmaexp}\n\\label{mainlemmaexp} \nFor all $\\zeta>0$, there exist constants $N\\in\\mathbb{N}$, $k\\ge 0$, $C>0$, $b>0$ and $D>0$ such that for all measures $\\mu$ and $\\theta_0\\in\\Phi$ if \n\\begin{itemize}\n\\item $\\epsilon_n\\ge\\frac{k}{\\sqrt{n}}$, \n\\item $\\frac{\\mathbb{E}_{\\theta_0}((x-\\mathbb{E}_{\\theta_0}(x))^4)}{A''(\\theta_0)^2}\\le \\zeta$, \n\\item $|\\theta_0-A'^{-1}(\\hat{t})|\\le \\min\\{\\kappa(\\theta_0), b\\epsilon_n\\sqrt{nA''(\\theta_0)}\\}$, \n\\end{itemize}\nthen for all $n\\in\\mathbb{N}$ such that $n\\ge \\max\\{N, \\frac{D}{A''(\\theta_0)(\\kappa(\\theta_0))^2}\\}$ and all $\\theta_1$ such that $|\\theta_1-\\theta_0|\\ge C\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})$, there exists a threshold $\\tau$ such that the test\n\\begin{equation}\\label{testworks}\n\\begin{cases}\nP_{\\theta_0} & \\text{if } f_{\\tilde{\\alpha}}(X)+\\mathrm{Lap}\\left(\\frac{2}{\\epsilon_n nC\\tilde{\\alpha}}\\right) \\le\\tau\\\\\nP_{\\theta_1} & 
\\text{if } f_{\\tilde{\\alpha}}(X)+\\mathrm{Lap}\\left(\\frac{2}{\\epsilon_n nC\\tilde{\\alpha}}\\right) \\ge\\tau\n\\end{cases}\n\\end{equation} distinguishes between $P_{\\theta_0}$ and $P_{\\theta_1}$ with $n$ samples. \nFurthermore, $\\tau$ can be chosen so $|\\mathbb{E}_{X\\sim P_{\\theta_0}^n}[\\hat{f}_{\\tilde{\\alpha}}(X)]-\\tau|\\le |\\mathbb{E}_{X\\sim P_{\\theta_1}^n}[\\hat{f}_{\\tilde{\\alpha}}(X)]-\\tau|$. \n\\end{restatable}\n\nAs in the previous section, in the translation from the testing result in Lemma~\\ref{mainlemmaexp} to the bound on the estimation rate, we argue that algorithm ${\\rm nCLLRE}$ is unlikely to return $\\theta$ such that $|\\theta_0-\\theta|\\ge D\\omega_{n, {\\rm SC}_{\\epsilon}}(\\theta_0)$ since this would result in the induced test failing, which is unlikely to occur.\n\n\\begin{restatable}{corollary}{rmaincorlowpriv}\\label{maincorlowpriv}\nFor all $\\zeta>0$, there exist constants $N\\in\\mathbb{N}$, $k\\ge 0$, $C>0$, $b>0$ and $D>0$ such that for all measures $\\mu$ and $\\theta_0\\in\\Phi$ if \n\\begin{itemize}\n\\item $\\epsilon_n\\ge\\frac{k}{\\sqrt{n}}$, \n\\item $\\frac{\\mathbb{E}_{\\theta_0}((x-\\mathbb{E}_{\\theta_0}(x))^4)}{A''(\\theta_0)^2}\\le \\zeta$, \n\\item $|\\theta_0-A'^{-1}(\\hat{t})|\\le \\min\\{\\kappa(\\theta_0), b\\epsilon_n\\sqrt{nA''(\\theta_0)}\\}$, \n\\end{itemize}\nthen for all $n\\in\\mathbb{N}$ such that $n\\ge \\max\\{N, \\frac{D}{A''(\\theta_0)(\\kappa(\\theta_0))^2}\\}$ we have with probability 0.75,\n\\[|{\\rm nCLLRE}(X,\\hat{t}, C, \\epsilon_n)-\\theta_0|\\le C\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}}).\\]\n\\end{restatable}\n\n\\begin{proof}[Proof of Corollary~\\ref{maincorlowpriv}] \nSuppose that ${\\rm nCLLRE}(X, \\hat{t}, C, \\epsilon_n) = \\theta_1$ and $|\\theta_1-\\theta_0|\\ge C\\omega_n(P_{\\theta_0}, \\mathcal{F}_{\\epsilon}^{\\text{\\rm test}})$. 
Then, $|\\hat{f}-\\mathbb{E}_{\\theta_1}(f_{\\tilde{\\alpha}})| \\le |\\hat{f}-\\mathbb{E}_{\\theta_0}(f_{\\tilde{\\alpha}})|,$ which implies that the test in Lemma~\\ref{mainlemmaexp} would have rejected $\\theta_0$, which only happens with probability 0.25.\n\\end{proof}\nFinally, as in the previous section, Proposition~\\ref{mainexp} follows by showing that the estimator from Corollary~\\ref{allforalog} satisfies the conditions of Corollary~\\ref{maincorlowpriv}. \n\n\\begin{proof}[Proof of Proposition~\\ref{mainexp}] \n\nThe proposition follows from a combination of Corollary~\\ref{maincorlowpriv} and Corollary~\\ref{allforalog}. Let $N$, $k$, $C_1$, $D_1$ and $D_2$ be as in Corollary~\\ref{maincorlowpriv}. Note that since $\\frac{D_2}{A''(\\theta_0)(\\kappa(\\theta_0))^2}\\le \\frac{D_2}{B^2}$, we can assume that $N\\ge \\frac{D_2}{A''(\\theta_0)(\\kappa(\\theta_0))^2}$. By Corollary~\\ref{allforalog}, there exists a constant $C_2$ such that\n\\[|A'^{-1}(\\mathcal{M}_{\\fourthmoment, C}(X))-\\theta_0|\\le C_2\\left(\\frac{1}{\\sqrt{nA''(\\theta_0)}}+\\frac{1}{n\\epsilon\\sqrt{A''(\\theta_0)}}\\sqrt{\\ln(n)}\\right).\\]\nAgain since $\\kappa(\\theta_0)\\sqrt{A''(\\theta_0)}\\ge B$, there exist constants $N_1$ and $D$ such that for all $n>N_1$, if $\\frac{\\sqrt{\\ln n}}{\\sqrt{n}}\\le D A''(\\theta_0)$ then \\[C_2\\left(\\frac{1}{\\sqrt{nA''(\\theta_0)}}+\\frac{1}{n\\epsilon\\sqrt{A''(\\theta_0)}}\\sqrt{\\ln(n)}\\right)\\le \\min\\{\\kappa(\\theta_0), D_1\\epsilon\\sqrt{nA''(\\theta_0)}\\}.\\] Thus, the estimator from Corollary~\\ref{allforalog} satisfies the requirements of Corollary~\\ref{maincorlowpriv} and so we are done.\n\\end{proof}\n\n\\subsection{Initial Estimator - Proof of Theorem~\\ref{initialestthm}}\\label{appendix:meanest}\n\n\\rinitialestthm*\n\nIn this section we slightly generalise the algorithm and analysis from \\cite{Karwa:2018} beyond Gaussian distributions. 
We will show that their algorithm provides accurate estimates of the mean of sufficiently nice exponential families. This algorithm first estimates the variance of the distribution, then estimates the mean. Both steps of the estimation are performed using differentially private histogram queries.\n\nLet $\\rho = \\mathbb{E}_{P}[|X-\\mathbb{E}_P(x)|^3]$ be the absolute third moment of $P$ and $\\sigma$ be the standard deviation. Since the algorithm of \\cite{Karwa:2018} is designed for Gaussian distributions, we will use the following lemma that describes the rate of convergence in the central limit theorem.\n\n\\begin{lemma}[Berry-Esseen theorem]\\label{BE}\nLet $n\\in\\mathbb{N}$ and $X_1, \\cdots, X_n$ be iid samples from a distribution $P$, and $\\rho = \\mathbb{E}_{P}[|X-\\mathbb{E}_P(x)|^3]$. Set $S_n =\\frac{1}{n} \\sum_{j=1}^n X_j$, $\\mu=\\mathbb{E}_P[x]$ and $\\sigma^2=\\text{var}(P)$, and let $Y\\sim \\mathcal{N}(\\mu, \\frac{\\sigma^2}{n})$. Then for some absolute constant $\\nu>0$,\n\\begin{itemize}\n\\item (Uniform) For all $a>0$, \\[|\\mathbb{P}[S_n\\le a]-\\mathbb{P}[Y\\le a]|\\le \\frac{\\nu\\rho}{\\sigma^3\\sqrt{n}}\\]\n\\item (Non-uniform) For all $a>0$, \\[|\\mathbb{P}[S_n\\le a]-\\mathbb{P}[Y\\le a]|\\le \\frac{\\nu\\rho}{(1+|a|)^3\\sigma^3\\sqrt{n}}.\\]\n\\end{itemize}\n\\end{lemma}\n\n\\begin{lemma}[Histogram Learner \\cite{DworkMNS06, Bun:2016, Vadhan:2016}]\\label{histlearn} For all $K\\in\\mathbb{N}$ and domain $\\Omega$, for any collection of disjoint bins $B_1, \\cdots, B_K$ defined on $\\Omega, n\\in\\mathbb{N}, \\epsilon\\ge0, \\delta\\in(0,1\/n), \\lambda>0$ and $\\beta\\in(0,1)$ there exists an $(\\epsilon,\\delta)$-DP algorithm $M:\\Omega^n\\to\\mathbb{R}^K$ such that for every distribution $D$ on $\\Omega$, if \n\\begin{enumerate}\n\\item $X_1, \\cdots, X_n\\sim D$ and $p_k=\\mathbb{P}(X_i\\in B_k)$\n\\item $(\\tilde{p_1}, \\cdots, \\tilde{p_K})=M(X_1, \\cdots, X_n)$ and \n\\item \\[n\\ge 
\\max\\left\\{\\min\\left\\{\\frac{8}{\\epsilon\\lambda}\\ln\\left(\\frac{2K}{\\beta}\\right), \\frac{8}{\\epsilon\\lambda}\\ln\\left(\\frac{4}{\\beta\\delta}\\right)\\right\\}, \\frac{1}{2\\lambda^2}\\ln\\left(\\frac{4}{\\beta}\\right)\\right\\}\\]\n\\end{enumerate}\nthen, \\[\\mathbb{P}_{X\\sim D, M}(\\max_k|\\tilde{p_k}-p_k|\\le\\lambda)\\ge 1-\\beta\\;\\;\\;\\text{ and },\\]\n\\[\\mathbb{P}_{X\\sim D, M}(\\arg\\max_k\\tilde{p_k}=j)\\le \\begin{cases}\nnp_j+2e^{-(\\epsilon n\/8)\\cdot(\\max_kp_k)} & \\text{ if } K< 2\/\\delta\\\\\nnp_j & \\text{ if } K\\ge 2\/\\delta\n\\end{cases}\\]\nwhere the probability is taken over the randomness of $M$ and the data $X_1, \\cdots, X_n$.\n\\end{lemma}\n\n\\begin{algorithm}[ht] \\caption{Variance estimator}\\label{algo:varianceestimate}\n\\begin{algorithmic}[1]\n\\Require{Sample $X = (x_{1},\\dots,x_{n})\\sim P, \\epsilon, \\delta, \\sigma_{\\min}, \\sigma_{\\max}, \\beta, \\zeta$.}\n\\State Let $\\phi= \\lceil(600\\nu\\zeta)^2\\rceil$, where $\\nu$ is the absolute constant from Lemma~\\ref{BE}.\n\\State If \\[n< c \\zeta^2\\min\\left\\{\\frac{1}{\\epsilon}\\ln\\left(\\frac{\\ln\\left(\\frac{\\sigma_{\\max}}{\\sigma_{\\min}}\\right)}{\\beta}\\right), \\frac{1}{\\epsilon}\\ln\\left(\\frac{1}{\\delta\\beta}\\right)\\right\\},\\] output $\\perp$.\n\\State Set $m=\\lfloor\\frac{n}{2\\phi}\\rfloor$ and for $i=1, \\cdots, m$ let \\[Y_i = \\frac{1}{\\phi}\\sum_{j=1}^{\\phi}\\left(x_{2\\phi(i-1)+2j}-x_{2\\phi(i-1)+2j-1}\\right),\\] so that each $Y_i$ has mean $0$ and variance $\\frac{2\\sigma^2}{\\phi}$.\n\\State For $j\\in\\{\\lfloor \\log_2\\frac{\\sigma_{\\min}}{\\sqrt{\\phi}}\\rfloor-1, \\cdots, \\lceil \\log_2\\frac{\\sigma_{\\max}}{\\sqrt{\\phi}}\\rceil\\}$, let $B_j=(2^j, 2^{j+1}]$.\n\\State Run the histogram learner of Lemma~\\ref{histlearn} with privacy parameters $(\\epsilon, \\delta)$ and bins $B_j$ on input $|Y_1|, \\cdots, |Y_m|$, and let $\\hat{l}$ be the bin with the largest noisy mass.\n\\State Output $\\hat{\\sigma}=2^{\\hat{l}+2}\\sqrt{\\phi}$.\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{lemma}\\label{highprobstd}\nFor all $n\\in\\mathbb{N}$, $\\epsilon>0, \\delta\\in(0,\\frac{1}{n}], \\beta\\in(0,1\/2), \\zeta>0,$ Algorithm~\\ref{algo:varianceestimate} is $(\\epsilon, \\delta)$-DP and satisfies that if $X_1, \\cdots, X_n$ are iid draws from $P$, where $P$ has standard deviation $\\sigma\\in[\\sigma_{\\min}, \\sigma_{\\max}]$ and \\textcolor{black}{$\\frac{\\rho}{\\sigma^3}\\le \\zeta$} then if \\[n\\ge c \\zeta^2\\min\\left\\{\\frac{1}{\\epsilon}\\ln\\left(\\frac{\\ln\\left(\\frac{\\sigma_{\\max}}{\\sigma_{\\min}}\\right)}{\\beta}\\right), \\frac{1}{\\epsilon}\\ln\\left(\\frac{1}{\\delta\\beta}\\right)\\right\\},\\] (where $c$ is a universal constant), we have \\[\\mathbb{P}_{X\\sim P, M} (\\sigma\\le\\hat{\\sigma}\\le 8\\sigma)\\ge 1-\\beta.\\]\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{highprobstd}]\nThis proof follows almost directly from Theorem 3.2 of \\cite{Karwa:2018}. 
Note that each $Y_i$ is sampled from a distribution with mean 0 and variance $\\frac{2\\sigma^2}{\\phi}$, and in addition is the sum of $\\phi$ independent random variables. As in \\cite{Karwa:2018}, there exists a bin $B_l$ with label $l\\in(\\lfloor \\log_2\\frac{\\sigma_{\\min}}{\\sqrt{\\phi}}\\rfloor-1, \\lceil \\log_2\\frac{\\sigma_{\\max}}{\\sqrt{\\phi}}\\rceil)$ such that $\\frac{\\sigma}{\\sqrt{\\phi}}\\in(2^l, 2^{l+1}]=B_l$. Define \\[p_j = \\mathbb{P}(|Y_i|\\in B_j).\\] Sort the $p_j$'s as $p_{(1)}\\ge p_{(2)}\\ge\\cdots$ and let $j_{(1)}, j_{(2)}, \\cdots$ be the corresponding bins. Then, the following two facts imply the result (as in \\cite{Karwa:2018}).\n\n\\noindent \\textbf{Fact 1:} The bins corresponding to the largest and second largest mass $p_{(1)}, p_{(2)}$ are $(j_{(1)}, j_{(2)})\\in\\{(l,l-1), (l, l+1), (l+1,l)\\}$.\n\n\\noindent \\textbf{Fact 2:} $p_{(1)}-p_{(3)}>1\/300$.\n\nNow, let $W_i\\sim \\mathcal{N}\\left(0, \\frac{2\\sigma^2}{\\phi}\\right)$ and let $q_i, q_{(i)}$ be the corresponding probabilities for $W_i$. Then \\cite{Karwa:2018} showed that: \n\\begin{itemize}\n\\item The bins corresponding to the largest and second largest mass $q_{(1)}, q_{(2)}$ are $(j_{(1)}, j_{(2)})\\in\\{(l,l-1), (l, l+1), (l+1,l)\\}$.\n\\item $q_{(1)}-q_{(3)}>1\/100$.\n\\end{itemize}\n\nBy Lemma~\\ref{BE}, since $\\phi= \\lceil(600\\nu\\zeta)^2\\rceil$, for all $j$, $|p_j-q_j|\\le 1\/300$. Therefore, $\\{p_{(1)}, p_{(2)}\\}=\\{q_{(1)}, q_{(2)}\\}$, which implies both Fact 1 and Fact 2. \n\\end{proof}\n\n\\begin{algorithm}[ht] \\caption{Range estimator}\\label{rangeestalg}\n\\begin{algorithmic}[1]\n\\Require{Sample $X = (x_{1},\\dots,x_{n})\\sim P, \\epsilon> 0, \\delta\\in[0,1], \\beta\\in (0,1\/2), R\\in(0,\\infty), \\sigma>0, B>0$.}\n\\State Let $r=\\left\\lceil \\frac{R}{2\\sigma}\\right\\rceil$. 
Divide $[-R-\\sigma\/2, R+\\sigma\/2]$ into $2r+1$ bins of length at most $2\\sigma$ each in the following manner: bin $B_j$ equals $(2(j-0.5)\\sigma, 2(j+0.5)\\sigma]$, for $j\\in\\{-r,\\cdots, r\\}$. \n\\State Run the histogram learner of Lemma~\\ref{histlearn} with privacy parameters $(\\epsilon, \\delta)$ and bins $B_{-r}, \\cdots, B_{r}$ on input $x_1, \\cdots, x_n$ to obtain noisy estimates $\\tilde{p_{-r}}, \\cdots, \\tilde{p_r}$. Let \\[\\hat{l} = \\arg\\max_{j=-r,\\cdots, r} \\tilde{p_j}.\\]\n\\State Output $(x_{\\min}, x_{\\max})$, where \\[x_{\\min} = 2\\sigma\\hat{l}-\\sigma(6+C)\\sqrt{\\ln(4n\/\\beta)}, \\;\\;\\; x_{\\max}=2\\sigma\\hat{l}+\\sigma(6+C)\\sqrt{\\ln(4n\/\\beta)}.\\]\n\\end{algorithmic}\n\\end{algorithm}\n\n\\begin{theorem}\\label{rangeest} \nFor all $n\\in\\mathbb{N}$, $\\sigma>0, \\epsilon>0, \\delta\\in[0,1], \\beta\\in(0,1\/2), R\\in(0,\\infty]$, $C\\ge 0$, Algorithm~\\ref{rangeestalg} is $(\\epsilon, \\delta)$-DP. For all measures $\\mu$ and $\\theta\\in\\Phi$, if $x_1, \\cdots, x_n$ are sampled from $P_{\\theta}$ where $A'(\\theta)\\in(-R,R)$, $A''(\\theta)\\le \\sigma^2$,\n$\\kappa(\\theta)\\ge \\frac{1}{C}\\frac{\\sqrt{\\log(2\/\\beta)}}{\\sqrt{A''(\\theta)}}$,\nand \\[n\\ge c\\min\\left\\{\\frac{1}{\\epsilon}\\ln\\left(\\frac{R}{\\sigma\\beta}\\right), \\frac{1}{\\epsilon}\\ln\\left(\\frac{1}{\\delta\\beta}\\right)\\right\\},\\] (where $c$ is a universal constant), then we have \\[\\mathbb{P}_{x\\sim P_{\\theta}, M}(\\forall i,\\;\\; x_{\\min}\\le x_i\\le x_{\\max})\\ge 1-\\beta\\] and \\[|x_{\\max}-x_{\\min}|=\\textcolor{black}{2\\sigma(6+C)\\sqrt{\\ln(4n\/\\beta)}}.\\]\n\\end{theorem}\n\n\\begin{proof}\nBy Lemma~\\ref{concentrationexp} and a union bound, with probability $1-\\beta\/2$, we have \\[\\forall i\\;:\\; |x_i-\\mu|\\le \\sigma(2+C)\\sqrt{\\ln(4n\/\\beta)}.\\] Next, as in the proof of Theorem 3.1 from \\cite{Karwa:2018}, we want to show that with probability $1-\\beta\/2$, we have \\[|\\mu-2\\hat{l}\\sigma|\\le 4\\sigma.\\] Note that by Chebyshev's inequality $\\mathbb{P}[|x-\\mu|\\le 2\\sigma]\\ge 3\/4$, so there exists a pair of neighbouring bins $B_j, B_{j+1}$ such that $\\mathbb{P}[x\\in B_j\\cup B_{j+1}]\\ge 3\/4$ and $\\mu\\in B_j\\cup B_{j+1}$. Also, for all $i\\notin\\{j,j+1\\}$, $\\mathbb{P}[x\\in B_i]\\le 1\/4$. Let $j^*=\\arg\\max_{k=j,j+1} \\mathbb{P}[x\\in B_k]$. Then $\\mathbb{P}[x\\in B_{j^*}]\\ge 3\/8$, and $\\mathbb{P}[x\\in B_{j^*}]-\\mathbb{P}[x\\in B_{i}]\\ge 1\/8$ for all $i\\notin\\{j,j+1\\}$. Then by Lemma~\\ref{histlearn}, setting $\\lambda=1\/8$, $n$ is large enough that with probability $1-\\beta\/2$, $\\hat{l}\\in\\{j,j+1\\}$. Therefore, $|\\mu-2\\hat{l}\\sigma|\\le 4\\sigma$. Hence, with probability $1-\\beta$, for all $i$, \\[|x_i-2\\hat{l}\\sigma|\\le |x_i-\\mu|+|\\mu-2\\hat{l}\\sigma|\\le \\sigma(2+C)\\sqrt{\\ln(4n\/\\beta)}+4\\sigma \\le \\sigma(6+C)\\sqrt{\\ln(4n\/\\beta)}.\\]\n\\end{proof}\n\nAs in \\cite{Karwa:2018}, combining these two algorithms gives us an estimator of the range with unknown variance. Since this range contains all the data points with high probability, we can clamp the data to this range, and add noise proportional to the width of the range. 
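This clamp-and-noise step can be sketched as follows (a minimal stand-in: the range is taken as given here, in place of the output of the range estimator, and the half-width formula, parameters, and seed are illustrative):

```python
import math
import random

def sample_laplace(scale, rng):
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def clamped_noisy_mean(xs, x_min, x_max, eps, rng):
    """Clamp each point to [x_min, x_max]; the clamped average moves by at
    most (x_max - x_min)/n when one point changes, so Laplace noise at
    scale (x_max - x_min)/(eps * n) makes the release eps-DP."""
    n = len(xs)
    clamped = [min(max(x, x_min), x_max) for x in xs]
    noise = sample_laplace((x_max - x_min) / (eps * n), rng)
    return sum(clamped) / n + noise

rng = random.Random(0)
mu, sigma, n, beta = 5.0, 1.0, 50_000, 0.1
xs = [rng.gauss(mu, sigma) for _ in range(n)]
# half-width of the same order as the range estimator's output
half_width = 8 * sigma * math.sqrt(math.log(4 * n / beta))
est = clamped_noisy_mean(xs, mu - half_width, mu + half_width, eps=1.0, rng=rng)
```

Because the range has width $O(\sigma\sqrt{\ln(n/\beta)})$ and contains all points with high probability, the clamping introduces negligible bias while keeping the sensitivity, and hence the noise, small.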
Note that we can remove the dependence on the range $[0,R]$ and $[\\sigma_{\\min}, \\sigma_{\\max}]$ in the sample complexity since $n\\ge \\frac{c\\zeta}{\\epsilon}\\ln(\\frac{1}{\\delta\\beta})$ is sufficient to ensure that the bounds required in both Theorem~\\ref{rangeest} and Lemma~\\ref{highprobstd} hold.\nThis completes the proof of Theorem~\\ref{initialestthm}.\n\n\\begin{algorithm}[ht] \\caption{Initial mean estimator, $\\mathcal{M}_{\\fourthmoment, C}$}\\label{meanest}\n\\begin{algorithmic}[1\n\\Require{$X_1, \\cdots, X_n, \\beta, \\epsilon, \\delta, \\zeta, B$.}\n\\State If \\[n<\\frac{c\\max\\{\\zeta^2, 1\\} }{\\epsilon}\\ln\\left(\\frac{1}{\\delta\\beta}\\right),\\] output 0.\n\\State Run Algorithm~\\ref{algo:varianceestimate} to obtain an estimate $\\hat{\\sigma}$ of the variance with privacy parameters $(\\epsilon, \\delta)$, $\\sigma_{\\min}=0$ and $\\sigma_{\\max}=\\infty$.\n\\State Run Algorithm~\\ref{rangeestalg} with privacy parameters $(\\epsilon, \\delta)$, $R=\\infty$, and standard deviation $\\hat{\\sigma}$ to obtain a range $[X_{\\min}, X_{\\max}]$.\n\\State Let \\[Y_i = \\begin{cases}\nX_i & \\text{if } X_i\\in[X_{\\min}, X_{\\max}]\\\\\nX_{\\max} & \\text{if } X_i>X_{\\max}\\\\\nX_{\\min} & \\text{if } X_i>X_{\\min}\n\\end{cases}\\]\n\\State Let $Z$ be a Laplace random variable with mean 0 and scale parameter $\\frac{X_{\\max}-X_{\\min}}{\\epsilon n}$.\n\\State Output \\[\\frac{\\sum_{i=1}^n Y_i}{n}+Z.\\]\n\\end{algorithmic}\n\\end{algorithm}\n\\section{Introduction}\nWhile the primary goal of statistical inference is to reveal properties of a population, many statistical estimators also reveal a significant amount of information about their sample, and this becomes a serious problem when the sample contains sensitive private information about individuals. As a response, \\emph{differential privacy}~\\citep{DworkMNS06} has emerged as a strong formal criterion for a statistical procedure to protect individual privacy. 
Differentially private algorithms are deployed in a variety of settings, from the public data products for the 2020 US decennial census to Google's keyboard prediction models~\\citep{Google-FL-blog} and Apple device analytics~\\citep{Apple17}.\n\nDifferential privacy is a constraint on an estimator that requires the distribution of the estimator's outputs to be insensitive to changing a single individual's data, and it offers a strong semantic guarantee that no attacker can infer much more about any individual than they could have inferred had that individual's data never been collected~\\citep{KasiviswanathanS08}. This semantic guarantee does not rely on any assumptions about the adversary's background knowledge and capabilities. In contrast, alternative approaches to protecting privacy have often been undermined by underestimating the abilities of the attacker. Although differential privacy is a constraint that significantly limits inference with small sample sizes,\nmost statistical tasks are compatible with differential privacy given a large enough sample. \n\nThere is now a large body of work on differentially private estimation, which includes \\emph{minimax optimal} differentially private estimators for many estimation tasks (e.g.~\\citet{DuchiJW13,BunUV14,DworkSSUV15}). A minimax optimal estimator is one that minimizes the maximum loss over all distributions in some family. However, even a minimax optimal estimator can be undesirable in practice because it might achieve the same error on all distributions, even if some distributions are \\emph{easier} than the worst-case distributions.\n\nA more refined guarantee is called \\emph{local minimax optimality}. While the actual definition is necessarily subtle, intuitively a local minimax optimal estimator simultaneously has the best possible error on every distribution, which means the error must automatically adapt to distributions that are easier. 
To illustrate this with a simple example from the non-private setting, suppose we are given a sample of size $n$ from a Bernoulli distribution $P = \\mathrm{Ber}(\\theta)$ and want to estimate the parameter $\\theta \\in (0,1)$. The empirical mean has mean-squared error $\\theta(1-\\theta)\/n \\leq 1\/4n$. No estimator can have better error than $1\/4n$ on all Bernoulli distributions (roughly because samples of size $n$ from $\\mathrm{Ber}(\\frac12 - \\frac 1{2\\sqrt{n}})$ and $\\mathrm{Ber}(\\frac12 + \\frac 1{2\\sqrt{n}})$ are hard to reliably distinguish), so the empirical mean is (globally) minimax optimal. But it is also \\textit{locally} minimax optimal because it adapts automatically to the ``easy'' values of $\\theta$ close to 0 or 1. In contrast, a hypothetical estimator that had mean-squared error exactly $1\/4n$ for all values of $\\theta$ would be minimax optimal but \\textit{not} locally minimax optimal.\n\nWe study the design of locally minimax differentially private estimators. We provide:\n\\begin{itemize}\n \\item A connection between locally minimax differentially private estimators and differentially private simple hypothesis testing: namely, the local estimation rate for the class of differentially private estimators is given by inverting the sample complexity of the optimal differentially private hypothesis test.\n Such a connection was previously shown in the non-private setting \\citep{Donoho:1991} and in the more restrictive locally differentially private setting \\citep{Duchi:2018,Rohde:2019}.\n \n \\item Locally minimax differentially private estimators for one-parameter exponential families. In the small $\\epsilon$ (that is, $\\epsilon=O(1\/\\sqrt{n})$) regime, our estimator is directly informed by the locally\\footnote{\\textit{Local differential privacy} refers to the model of differential privacy where data subjects randomize their own data points before sending them to the server. 
It is a more restricted model than central differential privacy, the main privacy model of interest in this paper. See Section~\\ref{sec:related}.} DP estimator introduced by \\citet{Duchi:2018}, who show the locally differentially private version of this estimator is locally minimax optimal. For larger $\\epsilon$, our estimators are directly informed by the structure of the approximately optimal differentially private simple hypothesis tests of \\citet{Canonne:2019}. In particular, our estimator critically relies on a refined version of their optimal test, introduced in this work, with additional properties. \n \n \\item A general approach to nonparametric estimation of one-dimensional functionals. We illustrate its application to estimating tail decay rates. \n\\end{itemize}\n\n\n\\mypar{Simple Hypothesis Testing and Local Estimation Rates (Section~\\ref{sec:testing}).} \nAs shown by \\citet{Donoho:1991}, local minimax estimation is closely related to simple hypothesis testing. The connection was originally developed in the non-private setting, but applies more generally to any restricted estimation setting. Suppose we have a population $P$ from some family $\\mathcal{P}$ and want to estimate a statistic $\\theta(P)$. We have a sample $X \\sim P^n$ and an estimator $\\hat\\theta(X)$. Given two distributions $P,Q \\in \\mathcal{P}$, we can use $\\hat\\theta$ as the basis for a simple hypothesis test that distinguishes $P$ and $Q$ by looking at $\\hat\\theta(X)$ and checking whether it is closer to $\\theta(P)$ or $\\theta(Q)$; this approach yields a successful hypothesis test if and only if $\\hat\\theta$ has sufficiently small error for both populations $P$ and $Q$. See Figure~\\ref{graphofconnection} for a pictorial representation of how $\\hat\\theta$ can distinguish $P$ and $Q$. Some pairs $P,Q \\in \\mathcal{P}$ cannot be reliably distinguished with a sample of size $n$ and some can. 
\nInformally, we say that $\\hat\\theta$ is \\textit{locally minimax optimal} if it can be used in this fashion to obtain a hypothesis test for \\emph{any} pair of distributions in $\\mathcal{P}$ that can be distinguished using $n$ samples. This formulation makes it clear that lower bounds for simple hypothesis testing automatically give lower bounds on the local estimation rate. Although hypothesis tests for specific pairs of distributions do not inherently yield optimal estimators, the structure of optimal tests can guide the construction of locally minimax estimators. We show that this process of converting hypothesis testing results into estimation rates can be carried out in the private setting, and instantiate it for several univariate estimation problems. \n\n\\begin{figure}\n\t\\centering\n \\includegraphics[scale=0.4]{estimation_is_hard.png}\n \\caption{Graphical representation of the connection between simple hypothesis testing and local estimation rates.}\n \\label{graphofconnection}\n\\end{figure}\n\nIn the non-private setting, the sample complexity of distinguishing between two distributions $P$ and $Q$ is $\\Theta(1\/H^2(P,Q))$, where $H(P,Q)$ is the Hellinger distance, and hence the Hellinger distance is the relevant distance when characterising local estimation rates in the non-private setting.\n\\cite{Duchi:2018} showed that in the \\textit{local DP} setting, the sample complexity is $\\Theta(1\/\\epsilon^2 \\text{\\rm TV}(P,Q)^2)$, where $\\text{\\rm TV}(P,Q)$ is the total variation distance. \\cite{Canonne:2019} showed that the sample complexity in the central DP setting is more nuanced. However, in this work, we show that it has a simple form in the high privacy regime. 
When $\\epsilon=O(1\/\\sqrt{n})$, the sample complexity is $\\Theta(1\/\\epsilon \\text{\\rm TV}(P,Q))$, the square root of the sample complexity in the local DP setting.\nThis move from the Hellinger distance to the total variation distance has implications for how well one can expect estimation algorithms to adapt to problem-specific difficulty. For example, the fact that non-private algorithms for Bernoulli parameter estimation can adapt to problem-specific difficulty, while local DP algorithms and central DP algorithms in the high privacy regime cannot, is a direct consequence of the fact that $H({\\rm Bernoulli}(\\theta),{\\rm Bernoulli}(\\theta+\\alpha))$ is a function of $\\theta$, while $\\text{\\rm TV}({\\rm Bernoulli}(\\theta),{\\rm Bernoulli}(\\theta+\\alpha))$ is independent of $\\theta$. We discuss this further in Section~\\ref{generalhighprivacy}.\n\nWhile we show that this framework is suitable for univariate estimation problems, it is not generally suitable for estimating multivariate statistics, as the simple-hypothesis-testing formulation does not fully capture private estimation in that setting. In particular, one provably cannot achieve the local estimation rate even for simple tasks like estimating the mean of a multivariate Gaussian with identity covariance~\\citep{BunUV14,DworkSSUV15} since the lower bounds on hypothesis testing and estimation depend on the dimension in different ways\\footnote{For example, the sample complexity for privately distinguishing between two Gaussian distributions with identity covariance at total variation distance $\\alpha$ is \n$O(\\sqrt{d})$ (for constant $\\alpha$ and $\\epsilon$)\n(see, e.g., \\citet{Narayanan2022PrivateHH}), while the sample complexity required for privately estimating a Gaussian with identity covariance to within total variation distance $\\alpha$ is \n$\\Omega(d \/ \\log(d))$\n\\citep{KamathLSU19}.\n}. 
\nWe leave it to future work to develop a suitable notion of local minimax estimation for higher-dimensional problems.\n\n\\mypar{Exponential Families (Section~\\ref{expfams})} We give a DP estimator for one-parameter exponential families that uniformly achieves the private, locally minimax-optimal error under suitable regularity conditions. The estimator works (and is optimal) for any setting of $\\epsilon=O(1)$. We identify two qualitatively different regimes: the ``low privacy'' regime, $\\epsilon = \\Omega(1\/\\sqrt{n})$, and the ``high privacy'' regime, $\\epsilon = O(1\/\\sqrt{n})$. In the low-privacy regime, privacy can be achieved without increasing the asymptotic error of the estimator, while in the high-privacy regime, the error due to privacy dominates the sampling error. A weaker version of the low-privacy result appears in \\citet{Smith11}; however, that result matches the best nonprivate error only for $\\epsilon = \\omega(1\/\\sqrt[4]{n})$, instead of $\\epsilon = \\Omega(1\/\\sqrt{n})$.\n\n\nIn both regimes, our algorithm first uses a subroutine of \\citet{Karwa:2018} to identify a rough, initial approximation $\\hat\\theta_0$ to the true parameter. \nThe next step is to compute and release a (noisy) test statistic $\\hat f = f_{n,\\epsilon,\\hat\\theta_0}(X)$. In the low privacy regime, this statistic is, very roughly, \nthe same one that arises in\nthe private simple hypothesis test of \\citet{Canonne:2019} for distinguishing $\\hat\\theta_0$ from $\\hat\\theta_0 + \\alpha$, where $\\alpha$ is roughly the local minimax error at $\\hat\\theta_0$. 
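Schematically, the released quantity is an average of clamped per-sample scores plus Laplace noise. The snippet below sketches that shape (the clamping radius $\epsilon/(C\tilde\alpha)$ matches the appendix's $f_{\tilde\alpha}$, but the constant $C$ and the inputs here are illustrative, not the tuned values from the analysis):

```python
def clamped_statistic(xs, t_hat, eps, alpha, C=1.0):
    """Average of the per-sample clamped score [x - t_hat] truncated to
    [-eps/(C*alpha), eps/(C*alpha)]; adding Laplace(2/(eps*n*C*alpha))
    noise to this average yields an eps-DP release."""
    lo, hi = -eps / (C * alpha), eps / (C * alpha)
    return sum(min(max(x - t_hat, lo), hi) for x in xs) / len(xs)

# clamping radius is +/- 2 here, so the scores are 0, 2, -2, 1
stat = clamped_statistic([0.0, 10.0, -10.0, 1.0], t_hat=0.0, eps=1.0, alpha=0.5)
```

Clamping bounds each sample's influence on the average by $2\epsilon/(C\tilde\alpha n)$, which is what makes the subsequent Laplace noise calibration possible.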
\nThe exact form of the statistic is more subtle, and relies on a linearization of the model in a neighborhood of $\\hat\\theta_0$.\nThe statistic takes a simpler form in the high privacy regime.\nFinally, we take the estimate $\\hat \\theta$ to be the unique solution to $\\hat f = \\mathbb{E}_{X\\sim P_{\\hat \\theta}^n}\\paren{f_{n,\\epsilon,\\hat\\theta_0}(X)}$, which finds the value $ \\hat \\theta$ for which the expected value of the test statistic matches the observation $\\hat f$. The key in both regimes is to prove that $f_{n,\\epsilon,\\hat\\theta_0}$ is a good test statistic not only for distinguishing $\\hat \\theta_0$ from $\\hat \\theta_0+\\alpha$, but for distinguishing all pairs of the form $(\\theta, \\theta+\\alpha)$ for $\\theta$ in a neighborhood of $\\hat \\theta_0$.\n\nOur approach parallels that of \\citet{Duchi:2018}, who developed a similar result for the more restricted setting of \\textit{locally differentially private} algorithms. Indeed, in the high privacy regime, the structure of the optimal estimator is very similar to theirs, and the asymptotic sample complexity of the optimal (central-model) private estimator is exactly the square root of that of the optimal locally-private estimator. In the low-privacy regime, however, the estimators' structure differs. In all cases, the lower bound techniques are quite different.\n\n\\mypar{Estimation of More General Functionals (Section~\\ref{sec:noparametric}).} In addition to parametric estimation problems, our framework applies to the estimation of one-dimensional functionals $T(P)$ of distributions, even when the functional of interest does not completely describe the underlying data distribution $P$. We discuss general approaches to such problems and explore the estimation of tail decay rates in real-valued distributions, an example also studied in depth by \\citet{Donoho:1991}.\n\nThere are several natural notions of local optimality in such a setting. 
\nFollowing \\citet{Donoho:1991}, we seek estimation algorithms that, for each $\\theta$, achieve error rate $\\mathfrak{R}(\\theta)$ for all distributions $P$ in the subfamily $\\set{P \\in \\mathcal{P}: T(P)=\\theta}$, where $\\mathcal{P}$ is the family of distributions to which the true population is assumed to belong and $\\mathfrak{R}(\\theta)$ is the optimal local estimation rate for at least one distribution in this set.\nFairly generically, one can devise near-optimal differentially private algorithms whenever testing the compound hypothesis $(\\set{P \\in \\mathcal{P}: T(P) \\leq \\theta_0},\\set{P \\in \\mathcal{P}: T(P) \\geq \\theta_1})$ is equivalent to a simple hypothesis testing problem of distinguishing two specific distributions (with parameters $\\theta_0$ and $\\theta_1$, respectively). We illustrate this with the design of near-optimal estimators for tail decay rates. \n\n\n\n\\subsection{Related Work}\n\\label{sec:related}\n\nWhile the literature on differentially private statistical inference is too vast to survey, we give an overview of the most closely related work. For additional discussion of the literature, we direct the reader to the survey of \\citet{KamathU20}.\n\n\\mypar{Minimax Optimality Under Privacy Constraints.} There is now an extensive body of literature on differentially private estimation. The most technically relevant prior work to ours is the result of \\citet{Canonne:2019} characterizing optimal differentially private simple hypothesis testing. The first global minimax lower bounds for multivariate differentially private estimation were given by \\citet{BunUV14,DworkSSUV15, SteinkeU17}, based on a technique called \\emph{fingerprinting} or \\emph{tracing}. 
Work by \\citet{DuchiF14, KamathSU20} also gave minimax lower bounds for private mean estimation of univariate heavy-tailed statistics, and \\citet{AlonLMM19} gave minimax lower bounds for privately estimating a univariate distribution in CDF distance.\n\nThere are also numerous constructions of minimax optimal differentially private estimators for specific tasks. Perhaps most closely related to our work are the estimators of~\\citet{Karwa:2018}, who construct locally minimax optimal estimators for the parameters of a univariate Gaussian, which is a special case of our constructions.\n\n\\mypar{Beyond Global Sensitivity.} Several works in the differential privacy literature give general purpose techniques for privately estimating \\emph{empirical quantities} in a way that adapts to easy datasets (datasets on which the empirical quantity is stable). These techniques include smooth sensitivity \\citep{NissimRS07}, propose-test-release \\citep{DworkL09}, and the use of Lipschitz extensions to extend regions of low variability in the quantity of interest \n\\citep{Chen:2013, BlockiBDS13, KasiviswanathanNRS13}. \nThe most closely related work to ours is that of \\cite{Asi:2020}, who give a general class of differentially private estimators for computing empirical quantities that are locally optimal (under some regularity assumptions). \nHowever, in this work we study estimators for \\emph{population quantities}. While the empirical and population versions of an estimation problem are closely related, they are fundamentally distinct. To see why, consider the example of computing the mean of a Gaussian random variable $N(\\mu,\\sigma^2)$. In the non-private setting, the empirical mean gives a locally minimax optimal estimator for $\\mu$. However, applying the locally minimax optimal estimator of Asi and Duchi for the empirical mean will have mean-squared error $\\infty$ for any sample size. 
In contrast, there is a differentially private estimator for the quantity $\\mu$ that has mean-squared error roughly $\\sigma^2 \/ n + \n\\sigma^2 \/ \\varepsilon^2 n^2$ for $\\epsilon\\leq 1$ (e.g., \\cite{Karwa:2018}).\nThus, we have to reason directly about population statistics when we try to construct locally minimax private estimators, and cannot simply apply the transformation of Asi and Duchi to an arbitrary locally minimax non-private estimator.\n\n\\mypar{Local Differential Privacy.} Our work studies the standard \\emph{centralized model} of differential privacy, where we assume that the estimator $M$ receives the samples $X_1,\\dots,X_n$ as input. There is also a large body of research on so-called \\emph{local differential privacy}~\\citep{KasiviswanathanLNRS08}, where we assume that differential privacy is applied to each sample before it is collected. In its most basic non-interactive form, this means that the mechanism can be written in the form $A(M(X_1),\\dots,M(X_n))$ where $M$ is differentially private and $A$ is arbitrary.\n\nLocally differentially private estimators are known to have significantly worse rates than general differentially private estimators~\\citep{KasiviswanathanLNRS08,BeimelNO08,ChanSS11,DuchiJW13, EdmondsNU20}. Recent work gives locally minimax optimal estimators subject to local differential privacy~\\citep{Duchi:2018,Rohde:2019}. In addition to different minimax rates, there are key conceptual differences between the local and centralized settings that make the centralized setting more complex to reason about. 
In particular: (1) The complexity of simple hypothesis testing under local differential privacy is characterized by the total variation distance between the two distributions, whereas a much more subtle notion is required for centralized differential privacy, and (2) The local minimax rate subject to local differential privacy is always larger than that of non-private estimation, whereas our results show that the local minimax rate subject to centralized differential privacy can be either the same as or larger than that of non-private estimation in different ranges of the privacy parameter.\n\n\\vspace{0.1in}\n\n\\section{Local Estimation Rates and Simple Hypothesis Testing}\n\n\\subsection{Local Estimation Rates and Uniform Achievability}\n\nLet $\\Delta(\\chi)$ be the set of all distributions on a space $\\chi$ and $\\mathcal{P}\\subset\\Delta(\\chi)$ be a set of distributions on $\\chi$. Let $\\theta: \\mathcal{P}\\to \\mathbb{R}$ be a functional on $\\mathcal{P}$, so for any distribution $P\\in\\mathcal{P}$, $\\theta(P)$ is the parameter that we want to estimate. \nLet $\\mathcal{F}$ be a class of (potentially randomised) functions $\\hat{\\theta}:\\chi^n \\to \\mathbb{R}$. We say an estimator $\\hat{\\theta}$ in $\\mathcal{F}$ has local error rate $\\ierror{\\hat{\\theta}}{n}{P}$ if for all $P\\in\\mathcal{P}$ and $n\\in\\mathbb{N}$, whenever $X_1, \\cdots, X_n\\sim P$, with probability 0.75: \\[|\\hat{\\theta}(X_1, \\cdots, X_n)-\\theta(P)|\\le \\ierror{\\hat{\\theta}}{n}{P}.\\]\nNotice that this error rate is \\emph{instance specific} in the sense that it is a function of the distribution being sampled from. Worst-case analysis can be too pessimistic in practice, and the local rate allows the error rate to adapt to \\emph{easy} instances of the problem. 
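The instance-specific error rate just defined is simply the 0.75-quantile of the estimator's absolute error under $P$, so it can be probed empirically. Below is a minimal Monte Carlo sketch (ours, not from the paper; the helper names are illustrative) for the empirical mean on Bernoulli data. Consistent with the non-private rate $\Theta(\sqrt{p(1-p)/n})$ discussed later in this section, the observed rate is much smaller for $p$ near $0$ than for $p=1/2$.

```python
import random

def local_error_rate(estimator, sampler, true_theta, n, trials=1000, level=0.75):
    # Monte Carlo estimate of the smallest r such that
    # |estimator(X_1..X_n) - theta(P)| <= r with probability ~level.
    errs = sorted(abs(estimator([sampler() for _ in range(n)]) - true_theta)
                  for _ in range(trials))
    return errs[int(level * trials)]

def empirical_mean(xs):
    return sum(xs) / len(xs)

random.seed(0)
for p in [0.5, 0.01]:
    rate = local_error_rate(empirical_mean,
                            lambda: int(random.random() < p), p, n=1000)
    print(p, rate)  # the rate adapts to the "easy" instance p = 0.01
```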
Defining a notion of \\emph{instance optimality} is nuanced since no algorithm can be optimal for all $P$; that is, one can not define an algorithm $\\hat{\\theta}$ such that $\\ierror{\\hat{\\theta}}{n}{P}\\le \\min_{\\theta'\\in\\mathcal{F}}\\ierror{\\hat{\\theta'}}{n}{P}$ for all $P$. This is easy to see since for any algorithm $\\hat{\\theta}$ and distribution $P$, the algorithm $\\hat{\\theta}'(X_1, \\cdots, X_n)=P$ satisfies $\\ierror{\\hat{\\theta'}}{n}{P}=0\\le \\ierror{\\hat{\\theta}}{n}{P}$. Of course, this algorithm is not a good point of comparison because it does poorly on distributions that are not $P$. Thus, we want to compare to algorithms that perform well on at least two distributions. This leads us to the following definition of the\n\\emph{optimal local estimation rate} at $P$ by:\n\\begin{equation}\\label{localerrorrate}\\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta} = \\sup_{Q\\in\\mathcal{P}}\\inf_{\\hat{\\theta}\\in\\mathcal{F}}\\max\\{\\ierror{\\hat{\\theta}}{n}{P}, \\ierror{\\hat{\\theta}}{n}{Q}\\}. \\end{equation}\nWe call this definition the local estimation rate based on the intuition that the hardest distributions to distinguish from $P$ are those that are ``close'' or ``local'' to $P$ (Fig.~\\ref{graphofconnection}). The local estimation rate is also sometimes to referred to as the rate of the \\emph{hardest one dimensional sub-problem}.\nWe say an algorithm $\\hat{\\theta}$ is \\emph{instance optimal} if $\\ierror{\\hat{\\theta}}{n}{P} = \\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$ for all $P\\in\\mathcal{P}$. Intuitively, if $\\hat{\\theta}$ is instance optimal then for every distribution $P$, if $\\hat{\\theta}$ performs poorly on $P$, then there exists another distribution $Q$ such that no algorithm $\\hat{\\theta}'$ performs well on both $P$ and $Q$. 
In contrast, the trivial algorithm $\\hat{\\theta}'(X_1, \\cdots, X_n)=\\theta(P)$ performs well on $P$, but unnecessarily sacrifices performance on distributions $Q$ far from $P$. Hence the optimal local estimation rate gives a specific kind of lower bound on the performance of any algorithm.\n\nThe estimator $\\hat{\\theta}$ in eqn~\\eqref{localerrorrate} has the advantage of being told the two distributions $P$ and $Q$. Hence, unlike worst-case optimality, which is always achieved by some algorithm, an instance optimal algorithm does not necessarily exist for every estimation problem. \nIn fact, a main question in this area is \\emph{when} do instance optimal algorithms exist? When an instance optimal algorithm exists we will say the estimation problem satisfies \\emph{uniform achievability}. This question of uniform achievability, under the constraint of differential privacy, is the main question of interest in this work. This question has been studied previously in the non-private setting \\citep{Donoho:1991} and under the constraint of local differential privacy \\citep{Rohde:2019, Duchi:2018}. \nWe will refer to the subset of $\\mathcal{F}$ that contains all $\\epsilon$-DP estimators (defined in Section~\\ref{DP}) as $\\Qest_\\epsilon$. \n\n\\subsection{Simple Hypothesis Testing}\n\nThe crucial insight for understanding the optimal local estimation rate is the connection to simple hypothesis testing. In simple hypothesis testing, we are given two distributions $P$ and $Q$ and the goal is to design an algorithm that, given $n$ samples drawn from either $P$ or $Q$, will, with high probability, correctly guess which distribution the samples were drawn from. \nWe say a test $T:\\chi^n\\to\\{0,1\\}$ distinguishes between $P$ and $Q$ with $n$ samples if $\\mathbb{P}(T(P^n)=0)\\ge0.75$ and $\\mathbb{P}(T(Q^n)=1)\\ge0.75$, where the probability is taken over both the randomness in the sample and the randomness in $T$. Let $\\chi^*=\\cup_{n\\in\\mathbb{N}}\\chi^n$. 
We will use ${\\rm SC}_T(P,Q)$ to denote the sample complexity of a test $T$, i.e., \\[{\\rm SC}_T(P,Q) = \\inf\\{ n\\in\\mathbb{N}\\;|\\; \\text{for all } N\\ge n, \\mathbb{P}(T(P^N)=0)\\ge 0.75 \\text{ and } \\mathbb{P}(T(Q^N)=1)\\ge 0.75\\}.\\]\n\nFor every estimator class $\\mathcal{F}$, we can define an associated class of binary testing algorithms, $\\mathcal{F}^{\\text{\\rm test}}$, to be the class of binary (potentially randomised) functions $T:\\chi^n\\to\\{0,1\\}$ obtained from $\\mathcal{F}$ by thresholding: \\begin{equation}\n \\mathcal{F}^{\\text{\\rm test}} = \\left\\{ T_{f,\\tau}(X) = \\begin{cases} 0 & {\\rm if } f(X)<\\tau \\\\ 1 & {\\rm otherwise} \\end{cases} \\;\\Bigg|\\; f\\in\\mathcal{F}, \\tau\\in\\mathbb{R}\\right\\}.\n \\label{eq:ftest} \n\\end{equation}\nWe will use this translation throughout this work.\nGiven a class of tests $\\mathcal{F}^{\\text{\\rm test}}$, define ${\\rm SC}_{\\mathcal{F}^{\\text{\\rm test}}}(P,Q) = \\inf_{T\\in\\mathcal{F}^{\\text{\\rm test}}} {\\rm SC}_T(P,Q)$. That is, ${\\rm SC}_{\\Qtest}(P,Q)$ is the smallest $n$ such that there exists a test $T\\in\\mathcal{F}^{\\text{\\rm test}}$ that distinguishes $P$ and $Q$.\n\n\\subsection{Connecting Local Estimation Rates and Simple Hypothesis Testing}\n\nConsider the definition of the optimal local estimation rate given in eqn~\\eqref{localerrorrate}. Given two distributions $P$ and $Q$, if $\\theta(P)$ and $\\theta(Q)$ are close then it is easy to find an estimator that performs well on both $P$ and $Q$ (e.g., the constant estimator that outputs the midpoint $\\frac{1}{2}(\\theta(P)+\\theta(Q))$, which has error $\\frac{1}{2}|\\theta(P)-\\theta(Q)|$ on both). Similarly, if there exists a test that distinguishes $P$ and $Q$, then it is easy to define an estimator that performs well on both $P$ and $Q$ (e.g., by outputting $\\theta(P)$ or $\\theta(Q)$ according to the test result). Thus, the supremum in the definition is achieved at a distribution $Q$ that is as far as possible from $P$, while still being indistinguishable from $P$. 
This intuition gives rise to the definition of the \\emph{modulus of continuity} at $P\\in\\mathcal{P}$:\n\\[\\MOCall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta} = \\sup\\{\\;|\\theta(P)-\\theta(Q)|\\;\\mid\\; {\\rm SC}_{\\Qtest}(P,Q)> n \\text{ and } Q\\in\\mathcal{P}\\}.\\]\n\nThe following theorem formalises the intuition above and allows us to translate the question of characterizing $\\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$ into characterizing ${\\rm SC}_{\\Qtest}$. This is useful since characterizations of ${\\rm SC}_{\\Qtest}$ already exist in a variety of settings; in particular, a characterization of ${\\rm SC}_{\\Qtest}$ when $\\mathcal{F}$ is the class of all differentially private estimators was given in \\cite{Canonne:2019}.\nWe say $\\mathcal{F}$ is closed under post-processing if for any $\\hat{\\theta}\\in\\mathcal{F}$ and $f:\\mathbb{R}\\to\\mathbb{R}$, $f\\circ\\hat{\\theta}\\in\\mathcal{F}$. \\cite{Donoho:1987} studied the characterisation of $\\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$ where $\\mathcal{F}$ is the class of all possible estimators; their work can be extended to any class of estimators closed under post-processing.\n\n\\begin{restatable}{theorem}{prop:rlowerbound}\n\\label{lowerbound} \nFor any $\\mathcal{P}\\subset\\Delta(\\chi)$, statistic $\\theta$, and class of estimators $\\mathcal{F}$, if $\\mathcal{F}$ is closed under post-processing and contains all constant functions then \nfor all $P\\in\\mathcal{P}$ and $n\\in\\mathbb{N}$, \n\\[\\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta}=\\ \\tfrac 1 2 \\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta},\\] \nwhere $\\mathcal{F}^{\\text{\\rm test}}$ is as defined in eqn~\\eqref{eq:ftest}.\n\\end{restatable}\n\nWhen $\\mathcal{P}$ and $\\theta$ are clear from context, we write $\\MOC{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}$ for $\\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta}$, and similarly 
$\\errorloc{n}{P}{\\mathcal{F}}$ for $\\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$.\nWe will primarily be concerned with the class of differentially private estimators in this paper, which is closed under post-processing and contains all constant functions. We include the proof below to build intuition for this connection. \n\n\\begin{proof} \n\nLet us first prove that $\\mathcal{E}_n^{\\rm loc}(P, \\mathcal{P}, \\mathcal{F}, \\theta)\\ge \\frac12 \\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta}$. \nSuppose for the sake of contradiction that $\\mathcal{E}_n^{\\rm loc}(P, \\mathcal{P}, \\mathcal{F}, \\theta)< \\frac12 \\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta}$. Then there exist $\\hat\\theta\\in\\mathcal{F}$ and $Q\\in\\mathcal{P}$ such that ${\\rm SC}_{\\Qtest}(P,Q)> n$, $\\frac12 \\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta}~\\le~\\frac12 |\\theta(P)~-~\\theta(Q)|$, and \n\\[\\ierror{\\hat{\\theta}}{n}{P}< \\tfrac12 \\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta} \\;\\;\\; \\text{and }\\;\\;\\;\\ierror{\\hat{\\theta}}{n}{Q}< \\tfrac12 \\MOCall{n}{P}{\\mathcal{F}^{\\text{\\rm test}}}{\\mathcal{P}}{\\theta}.\\]\nTherefore, $T_{\\hat{\\theta}, \\frac{1}{2}(\\theta(P)+\\theta(Q))}$ (as defined in eqn~\\eqref{eq:ftest}) distinguishes $P$ and $Q$ with $n$ samples, which is a contradiction since ${\\rm SC}_{\\Qtest}(P,Q)> n$. Figure~\\ref{graphofconnection} gives a graphical representation of this: if the balls do not overlap, then we have a test that distinguishes $P$ and $Q$.\n\nFor the opposite inequality, we need to show that for all $Q\\in\\mathcal{P}$, there exists an estimator $\\hat{\\theta}~\\in~\\mathcal{F}$ such that $\\max\\{\\ierror{\\hat{\\theta}}{n}{P}, \\ierror{\\hat{\\theta}}{n}{Q}\\}\\le \\tfrac12 \\omega_{n, {\\rm SC}_{\\Qtest}}(P) $. 
First suppose that ${\\rm SC}_{\\Qtest}(P,Q)>n$, that $Q$ lies inside the blue ball around $P$ in Figure~\\ref{graphofconnection}, and $\\frac12 |\\theta(P)-\\theta(Q)|\\le \\frac12 \\omega_{n, {\\rm SC}_{\\Qtest}}(P)$. Let $\\hat{\\theta}$ be the constant function that always outputs $\\frac12 |\\theta(P)~-~\\theta(Q)|$ so $\\ierror{\\hat{\\theta}}{n}{P}=\\frac12 |\\theta(P)-\\theta(Q)|$ and $\\ierror{\\hat{\\theta}}{n}{Q}=\\frac12 |\\theta(P)-\\theta(Q)|$, so we are done. Finally, suppose that ${\\rm SC}_{\\Qtest}(P,Q)\\le n$ so there exists $\\hat{\\theta}\\in\\mathcal{F}$ and $\\tau\\in\\mathbb{R}$ such that \n\\[\\mathbb{P}[\\hat{\\theta}(P^n)\\le\\tau]\\ge 0.75 \\;\\;\\;\\text{ and }\\;\\;\\; \\mathbb{P}[\\hat{\\theta}(Q^n)\\le\\tau]\\ge 0.75.\\] \n\nLet $f:\\mathbb{R}\\to\\mathbb{R}$ be defined by $f(x)=\\theta(P)$ if $\\hat{\\theta}(x)\\le\\tau$ and $f(x)=\\theta(Q)$ is $\\hat{\\theta}(x)\\ge \\tau$ so $f\\circ\\hat{\\theta}\\in\\mathcal{F}$ and $\\ierror{f\\circ\\hat{\\theta}}{n}{P}=0$ and $\\ierror{f\\circ\\hat{\\theta}}{n}{Q}=0$, so we are done. \n\\end{proof}\n\nFor distributions $P$ and $Q$, let $$\\text{H}(P,Q) = \\sqrt{\\int_{\\chi} (P(x)-Q(x))^2~\\mathrm{d}x}$$ be the Hellinger distance between $P$ and $Q$. It is well known that for the class of all estimators, \\[{\\rm SC}_{\\Qtest}(P,Q)~=~\\Theta\\left(\\frac{1}{\\text{H}^2(P,Q)}\\right),\\] so the following corollary is an immediate consequence of Theorem~\\ref{lowerbound}. 
Define the $\\text{H}$-information by:\n\\[J^{-1}_{\\text{H}, P}(\\beta) = \\sup\\left\\{|\\theta(P)-\\theta(Q)|\\;\\Big|\\; \\text{H}(P,Q)\\le \\beta, Q\\in\\mathcal{P}\\right\\}.\\]\n\n\\begin{corollary}[Non-private optimal local estimation rate \\citep{Donoho:1987}]\\label{nonprivLER}\nLet $\\mathcal{F}$ be the set of all functions. Then there exist constants $C_1$ and $C_2$ such that for any family $\\mathcal{P}$, any statistic $\\theta$,\nany distribution $P$, and $n\\in\\mathbb{N}$, \\[\\mathcal{E}_n^{\\rm loc}(P, \\mathcal{P}, \\mathcal{F}, \\theta)\\in\\left[J^{-1}_{\\text{H}, P}\\left(\\frac{C_1}{\\sqrt{n}}\\right), J^{-1}_{\\text{H}, P}\\left(\\frac{C_2}{\\sqrt{n}}\\right)\\right].\\]\n\\end{corollary}\n\n\n\\subsection{Super-efficiency}\n\\label{sec:supereff}\n\n\nThe optimal local estimation rate $\\errorlocall{n}{\\cdot}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$ has the property that for any estimator $\\hat{\\theta}$, if $\\hat{\\theta}$ achieves better accuracy than $\\errorlocall{n}{P}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$ at some distribution $P$, then there exists a distribution $Q$ such that the accuracy of $\\hat{\\theta}$ at $Q$ is \\emph{at least} as bad as $\\errorlocall{n}{Q}{\\mathcal{F}}{\\mathcal{P}}{\\theta}$.\nOne can also ask if an estimation rate satisfies the stronger condition of having a \\emph{super-efficiency} result. Roughly, an estimation rate $R$ has a super-efficiency result if for any estimator $\\hat{\\theta}$ that achieves better accuracy than $R(P)$ at a particular value $P$, there exists another value $Q$ where the accuracy of $\\hat{\\theta}$ is \\emph{strictly} worse than $R(Q)$. \nA super-efficiency result for a given rate $R$ shows, in a sense, that $R$ is a meaningful target rate.\nThe optimal local estimation rate does not necessarily satisfy a super-efficiency result for general families.\nSuper-efficiency may hold for \\emph{specific} families, but a general result seems to require further assumptions. 
We leave the question of super-efficiency of the optimal local estimation rate to future work, since our focus is the general regime. \n\n\\subsection{A Lower Bound for Instance Optimal Estimation in the High Privacy Regime}\\label{generalhighprivacy}\n\nThe characterization of the local estimation rate is significantly more complex in the central DP regime than in the local DP or non-private regimes. This is a direct consequence of the characterisation of the optimal sample complexity of simple hypothesis testing being more nuanced in the central DP regime than the local DP or non-private regimes. However, the existence of a simple characterisation of the sample complexity in the high privacy regime allows us to give a simple characterisation of the local estimation rate in that regime.\n\nFor distributions $P$ and $Q$, let $\\text{\\rm TV}(P,Q) = (1\/2) \\int |P(x)-Q(x)|dx$ be the total variation distance. For $\\beta\\in[0,1]$, we define the $L_1$-information at a distribution $P\\in\\mathcal{P}$ by\n\\begin{equation}\\label{L1info}\nJ_{\\text{\\rm TV}, P}^{-1}(\\beta) = (1\/2)\\cdot\\sup\\left\\{|\\theta(P)-\\theta(Q)|\\;\\Big|\\; \\text{\\rm TV}(P, Q)\\le\\beta, Q\\in\\mathcal{P}\\right\\}.\n\\end{equation}\n\nNote that the $L_1$-information is the analogue of the $\\text{H}$-information, which characterizes the sample complexity in the non-private setting, using the total variation distance (also known as the $L_1$-norm) instead of the Hellinger distance. Our estimation rate in the high privacy regime is characterized by the $L_1$-information. This follows immediately from Theorem~\\ref{lowerbound} and Lemma~\\ref{smalleps} which we'll state below. \n\n\\begin{theorem}\\label{smallepslower} Let $\\Qest_\\epsilon$ be the set of all $\\epsilon$-differentially private estimators. 
For any constant $k$ there exist constants $C_1$ and $C_2$ such that if $\\epsilon_n\\le \\frac{k}{\\sqrt{n}}$, then for all families $\\mathcal{P}$, $P\\in\\mathcal{P}$, and $n\\in\\mathbb{N}$, \\[J_{\\TV, P}^{-1}\\left(\\frac{C_1}{n\\epsilon_n}\\right)\\le \\mathcal{E}^{\\rm loc}_n(P, \\mathcal{P}, \\Qest_\\epsilon, \\theta) \\le J_{\\TV, P}^{-1}\\left(\\frac{C_2}{n\\epsilon_n}\\right).\\]\n\\end{theorem}\n\nTheorem~\\ref{smallepslower} is interesting to contrast with Corollary~\\ref{nonprivLER}, which gives the estimation rate in the non-private regime. Note first that if $\\epsilon_n=O(\\frac{1}{\\sqrt{n}})$ then $n\\epsilon_n=O(\\sqrt{n})$, so the estimation rate is indeed slower under the constraint of privacy. Further, the metric characterizing the problem changes from the Hellinger distance to the total variation distance. A similar phenomenon is observed under local differential privacy constraints in \\cite{Duchi:2018}.\n\n\\begin{theorem}[Local DP \\citep{Duchi:2018}]\nLet $\\mathcal{F}_{\\text{local},\\epsilon}$ be the set of all $\\epsilon$-locally differentially private functions. There exist constants $C_1$ and $C_2$ such that for all families $\\mathcal{P}$, statistics $\\theta$, \nany $P\\in\\mathcal{P}$, and $n\\in\\mathbb{N}$, \\[J_{\\TV, P}^{-1}\\left(\\frac{C_1}{\\epsilon\\sqrt{n}}\\right)\\le \\mathcal{E}_n^{\\rm loc}(P, \\mathcal{F}_{\\text{local},\\epsilon}) \\leq J_{\\TV, P}^{-1}\\left(\\frac{C_2}{\\epsilon\\sqrt{n}}\\right).\\]\n\\end{theorem}\n\nThe corresponding class of testing functions $\\mathcal{F}^{\\text{\\rm test}}$ contains the set of all $\\epsilon$-local DP binary functions. 
\\cite{Duchi:2014} showed that the sample complexity for distinguishing between two distributions $P$ and $Q$ under local differential privacy is \n$\\Theta\\left(\\frac{1}{\\epsilon^2\\,\\text{\\rm TV}^2(P,Q)}\\right)$.\n\n\nAs discussed in \\cite{Duchi:2018}, the change from the Hellinger modulus of continuity to the total variation modulus of continuity has implications for how well one can expect estimation algorithms in the high privacy setting to adapt to problem-specific difficulty. For example, in the case of Bernoulli estimation, the non-private local estimation rate for a Bernoulli with parameter $p\\in[0,1]$ is $\\Theta(\\sqrt{p(1-p)\/n})$, which shows that estimation algorithms in the non-private (and low central privacy) settings are able to adapt to ``easy'' instances of the problem. In contrast, in the high privacy setting, the local estimation rate is $\\Theta\\left(\\frac{1}{\\epsilon n}\\right)$, which is the same for all $p$, showing that private algorithms in this regime are not able to adapt to ``easy'' instances. As mentioned earlier, this is a direct consequence of the fact that the Hellinger distance between ${\\rm Bernoulli}(p)$ and ${\\rm Bernoulli}(p+\\alpha)$ is a function of $p$, while the total variation distance between these two distributions is independent of $p$. \n\nTheorem~\\ref{smallepslower} is a direct consequence of the following characterisation of the sample complexity of private hypothesis testing in the high privacy regime. 
The proof follows from the fact that in the high privacy regime, $\\epsilon\\le k\/\\sqrt{n}$, a noisy Scheff\\'e test performs as well as the optimal test $\\operatorname{ncLLR}_{-\\epsilon}^{\\epsilon}$.\n\n\\begin{lemma}[High Privacy Sample Complexity Characterisation]\n\\label{smalleps}\nFor any constant $k$, there exist constants $C_1$ and $C_2$ such that for any distributions $P$ and $Q$, if $\\epsilon_n\\le\\frac{k}{\\sqrt{n}}$ then \\[SC_{\\epsilon_n}(P,Q)\\in\\left[\\frac{C_1}{\\epsilon_n \\cdot \\text{\\rm TV}(P,Q)}, \\frac{C_2}{\\epsilon_n \\cdot \\text{\\rm TV}(P,Q)}\\right].\\]\n\\end{lemma}\n\nBefore we prove Lemma~\\ref{smalleps}, a quick note on the privacy parameters. We will allow our privacy parameter, $\\epsilon$, to vary with the size of the database, $n$, so let $\\epsilon_n$ be a sequence and $n:[0,\\infty)\\to\\mathbb{N}$ be such that $\\epsilon_{n(\\epsilon)}=\\epsilon$ and $n(\\epsilon_n)=n$. We will often abuse notation and drop the argument of the function, e.g., referring to $\\epsilon_n$ as simply $\\epsilon$. We will assume that $\\epsilon$ is decreasing, so the larger the dataset, the more private we require our algorithm to be. We will say a simple hypothesis testing problem has sample complexity $n=SC_{\\epsilon_n}(P,Q)$ if $n$ is the smallest value such that $SC_{\\epsilon_n}(P,Q)\\le n$.\n\n\\begin{proof}\nThe lower bound portion of this lemma is not specific to the high privacy setting; there exists $C_1$ such that for all $\\epsilon$, $SC_{\\epsilon}(P,Q)\\ge\\frac{C_1}{\\epsilon\\text{\\rm TV}(P,Q)}$. One way to prove this is as a direct consequence of \\cite[Theorem 11]{Acharya:2018}. This theorem argues that one can lower bound the sample complexity of an $\\epsilon$-DP test by upper bounding the Hamming distance between two datasets of size $n$ drawn from either $P$ or $Q$, i.e., the Hamming distance between $X$ and $Y$ where $X\\sim P^n$ and $Y\\sim Q^n$. 
\n\nFor the upper bound, we will show that a noisy version of the simple Scheff\\'e test has sample complexity $O(1\/(\\epsilon\\,\\text{TV}(P,Q)))$ in the high privacy regime. Let $E=\\{x\\;\\mid\\; P(x)>Q(x)\\}$ be the Scheff\\'e set and define the test statistic $f_E$ by, for any database $X=\\{x_1, \\cdots, x_n\\}$, \\[f_E(X) = \\frac{1}{n}\\sum_{i=1}^n \\mathds{1}_{x_i\\in E}+\\text{Lap}\\left(\\frac{1}{\\epsilon n}\\right).\\]\nThen by definition of the total variation distance, \\[\\mathbb{E}_{X\\sim P^n}[f_E(X)]-\\mathbb{E}_{X\\sim Q^n}[f_E(X)] = \\Pr_{x\\sim P}(x\\in E)-\\Pr_{x\\sim Q}(x\\in E) = \\text{TV}(P,Q).\\]\nFurther, \n\\[\\max\\{\\text{var}_{X\\sim P^n}(f_E(X)), \\text{var}_{X\\sim Q^n}(f_E(X))\\}\\le \\frac{1}{n}+\\frac{1}{\\epsilon^2n^2}\\le \\frac{1+k^2}{\\epsilon^2 n^2},\\]\nwhere the last inequality follows since $\\epsilon\\le\\frac{k}{\\sqrt{n}}$. Therefore, if $n\\ge \\sqrt{\\frac{1+k^2}{12}}\\frac{1}{\\epsilon_n \\text{TV}(P,Q)}$, we have that \\[\\mathbb{E}_{X\\sim P^n}[f_E]-\\mathbb{E}_{X\\sim Q^n}[f_E]\\ge \\frac{1}{12}\\max\\{\\sqrt{\\text{var}_{X\\sim P^n}(f_E(X))}, \\sqrt{\\text{var}_{X\\sim Q^n}(f_E(X))}\\}.\\] A simple application of Chebyshev's inequality (for details see \\cite[Lemma 2.6]{Canonne:2019}) implies that there exists a threshold $\\tau$ such that the test that outputs $P$ if $f_E(X)\\ge \\tau$ and $Q$ otherwise distinguishes between $P$ and $Q$ with sample complexity $\\sqrt{\\frac{1+k^2}{12}}\\frac{1}{\\epsilon_n \\text{TV}(P,Q)}$.\n\\end{proof}\n\n\n\\section*{Acknowledgments}\n\nWe thank Cl\u00e9ment Canonne and John Duchi for helpful conversations and comments. This work was started while the authors were visiting the Simons Institute for the Theory of Computing. Part of this work was done while AM was at Boston University and Northeastern University, where she was supported by BU's Hariri Institute for Computing, NSF award CCF-1763786, and Northeastern's Cybersecurity and Privacy Institute. 
\nJU's work on this project was supported by NSF awards CCF-1750640 and CNS-2120603. Part of this work was done while JU was visiting Apple. \nAS was supported in part by NSF award CCF-1763786 and a Sloan Foundation Research Award. \n\n\n\\addcontentsline{toc}{section}{References}\n\n\\bibliographystyle{abbrvnat}\n\n\n\\section{Differentially Private Simple Hypothesis Testing and the Optimal Local Estimation Rate in the High Privacy Setting}\n\\label{sec:testing}\n\nIn this section, we will discuss the optimal test statistic for differentially private simple hypothesis testing and characterise the optimal local estimation rate in the high privacy setting. The test statistic we give is a slight variant on that presented in \\cite{Canonne:2019}, who first characterised the sample complexity of differentially private simple hypothesis testing. The test statistic given here is more efficient and more amenable to the estimation problem. The characterisation of the local estimation rate in the high privacy regime is simpler than in other regimes, and offers a direct comparison to the local estimation rates in the non-private and local differential privacy regimes.\n\n\\subsection{Differential Privacy}\\label{DP} \nIn this work we are concerned with estimators that satisfy \\emph{differential privacy}, which we will formally define in this section. Let $\\mathcal{X}$ be a data universe \nand $\\mathcal{X}^n$ be the space of datasets of size $n$. Two datasets $d, d' \\in \\mathcal{X}^n$ are neighboring, denoted $d \\sim d'$, if they differ on a single record.\nLet $\\mathcal{Y}$ be an output space. 
\begin{definition}[$(\epsilon,\delta)$-Differential Privacy \citep{DworkMNS06}]\label{def:DP} Given privacy parameters $\epsilon\ge0$ and $\delta\in[0,1]$,
a randomized mechanism $M: \mathcal{X}^n \rightarrow \mathcal{Y}$ is $(\epsilon,\delta)$-\emph{differentially private} if for all datasets $d \sim d' \in \mathcal{X}^n$ and events $E\subseteq\mathcal{Y}$,
\begin{align*}
&\Pr[M(d) \in E]
\leq e^\epsilon \cdot\Pr[M(d') \in E]+\delta,
\end{align*}
where the probabilities are taken over the randomness of $M$. When $\delta=0$, we say that $M$ is $\epsilon$-\emph{differentially private}.
\end{definition}

The key intuition for this definition is that the distribution of outputs on input dataset $d$ is almost indistinguishable from the distribution of outputs on input dataset $d'$. Therefore, given the output of a differentially private mechanism, it is impossible to confidently determine whether the input dataset was $d$ or $d'$.
For strong privacy guarantees, the privacy-loss parameter $\epsilon$ is typically taken to be a small constant less than $1$ (note that $e^\epsilon \approx 1+\epsilon$ as $\epsilon \rightarrow 0$) and $\delta$ is taken to be very small (say $10^{-6}$). In fact, for simple hypothesis testing, we can show that if $\epsilon>1$, then for any $\delta\in[0,1]$, the private sample complexity is within a constant factor of the non-private sample complexity, i.e., ${\rm SC}_{\epsilon, \delta}(P,Q)=\Theta({\rm SC}(P,Q))$. Hence, for the remainder of this work, we will assume that $\epsilon\le 1$.
Note that if $\Qest_\epsilon$ is the set of all $\epsilon$-DP estimators, then $\mathcal{F}_{\epsilon}^{\text{\rm test}}$ is the set of all $\epsilon$-DP tests.


\subsection{An Optimal Differentially Private Simple Hypothesis Test}\label{DPtestingsec}

A characterisation of the sample complexity of differentially private simple hypothesis testing was given in \cite{Canonne:2019}.
They showed that a simple \emph{noisy and clamped} version of the log-likelihood ratio test gives a differentially private simple hypothesis test with optimal sample complexity. Given distributions $P$ and $Q$, let $\operatorname{cLLR}_a^b$ be the clamped log-likelihood-ratio statistic with thresholds $a$ and $b$, and $\operatorname{ncLLR}_a^b$ be a noisy version:
\begin{equation}\label{cllr}
\operatorname{cLLR}_a^b(X) = \sum_{i=1}^n \left[\ln\frac{P(x_i)}{Q(x_i)}\right]_a^b\;\; \text{ and }\;\; \operatorname{ncLLR}_a^b(X)=\operatorname{cLLR}_a^b(X)+\mathrm{Lap}\left(\frac{|b-a|}{\epsilon }\right).
\end{equation}
In the original version of this test, the authors proved that this test statistic gives rise to an optimal test when one sets $b=\epsilon$ and $a=-\epsilon'$, where $\epsilon'$ is some function of $\epsilon$, $P$ and $Q$. In Appendix~\ref{DPtestingappendix}, we improve on their results to show that setting $b=\Theta(\epsilon)$ and $-a=\Theta(\epsilon)$ is sufficient. This
extension is crucial to our estimation algorithm, where $\epsilon'$ cannot be computed. It is also of independent interest as an improvement of the testing result: unlike the original test presented in \cite{Canonne:2019}, setting $a=-\epsilon$ and $b=\epsilon$ results in an efficient test which only requires oracle access to $P$ and $Q$. The original result in \cite{Canonne:2019} required full knowledge of the distributions $P$ and $Q$ in order to compute $\epsilon'$.
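As a rough illustration of the statistics in \eqref{cllr}, the following Python sketch computes a clamped log-likelihood ratio and adds Laplace noise calibrated to its sensitivity. The two-point distributions, the seed, and all parameter values are arbitrary choices for the example, not values from the paper.

```python
import math
import random

def clamp(v, a, b):
    """Clamp v to the interval [a, b]."""
    return max(a, min(b, v))

def cllr(sample, P, Q, a, b):
    """Clamped log-likelihood ratio: sum of per-record LLRs, each clamped to [a, b]."""
    return sum(clamp(math.log(P[x] / Q[x]), a, b) for x in sample)

def ncllr(sample, P, Q, a, b, eps, rng):
    """Noisy clamped LLR: cLLR plus Laplace noise of scale (b - a)/eps.

    Changing one record moves cLLR by at most b - a, so Laplace noise at
    this scale makes the released statistic eps-differentially private."""
    u = rng.random() - 0.5  # inverse-CDF sampling of a Laplace variate
    lap = -((b - a) / eps) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return cllr(sample, P, Q, a, b) + lap

# Example: two distributions on {0, 1} (hypothetical numbers).
P = {0: 0.7, 1: 0.3}
Q = {0: 0.4, 1: 0.6}
eps = 0.1
rng = random.Random(0)
sample = [0 if rng.random() < P[0] else 1 for _ in range(1000)]
stat = ncllr(sample, P, Q, a=-eps, b=eps, eps=eps, rng=rng)
```

With $a=-\epsilon$ and $b=\epsilon$ as above, each clamped term lies in $[-\epsilon,\epsilon]$, so the statistic's sensitivity is $2\epsilon$ regardless of how extreme the individual likelihood ratios are.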
In order to simplify notation we use ${\rm SC}_{\epsilon}(P,Q):= {\rm SC}_{\Qest_\epsilon}(P,Q)$ to denote the optimal sample complexity for distinguishing $P$ and $Q$ using an $\epsilon$-DP algorithm.
The proof of the following proposition is found in Section~\ref{arobustthresholds}.

\begin{restatable}{proposition}{restaterobustthresholds}
\label{robustthresholds}
If $\epsilon=O(1)$, then for all $-a=\Theta(\epsilon)$ and $b=\Theta(\epsilon)$, there exist constants $C_1$ and $C_2$ such that for all distributions $P$ and $Q$,
\[{\rm SC}_{\operatorname{ncLLR}_a^b}(P,Q)\in[C_1 \cdot {\rm SC}_{\epsilon}(P,Q), C_2 \cdot {\rm SC}_{\epsilon}(P,Q)].\]
\end{restatable}

The sample complexity of $\operatorname{ncLLR}_{-\epsilon}^{\epsilon}$, characterised in \cite{Canonne:2019}, has a nuanced dependence on $\epsilon$, $P$ and $Q$. If $\epsilon$ is large enough, privacy comes for free, and ${\rm SC}_{\epsilon}(P,Q)=\Theta\left({\rm SC}(P,Q)\right)$. As $\epsilon$ decreases, the dependence becomes more complicated. However, in Lemma~\ref{smalleps} we will show that once $\epsilon$ is small enough, namely $\epsilon\le\frac{1}{\sqrt{n}}$, the dependence is once again simple.

For hypothesis tests with constant error probabilities, the sample complexity bounds are equivalent, up to constant factors, for pure $\epsilon$-differential privacy and the less strict notions of approximate $(\epsilon,\delta)$-differential privacy and concentrated differential privacy \citep{Dwork2016ConcentratedDP, Bun:CDP} (see \citet[Lemma 5]{Acharya:2018}). Consequently, the test $\operatorname{ncLLR}_{-\epsilon}^{\epsilon}$ is optimal (up to constants) for each of these weaker notions. The class of estimators defined by each of these notions is closed under post-processing and thus, by Theorem~\ref{lowerbound}, the optimal local estimation rate is, up to constants, the same for each of these notions.
This may seem like a contradiction, since there are many well-known cases of asymptotic gaps in the estimation rate between pure differential privacy and approximate differential privacy. However, the optimal local estimation rate need not be \emph{uniformly achievable} under all (or any) of these notions of privacy, leaving room for a gap in the achievable estimation rate under pure, concentrated and approximate DP.


\section{Nonparametric Estimation of Functionals}
\label{sec:noparametric}

In the previous section, the statistic of interest fully characterised the distribution. In this section, we will study the problem of estimating a statistic that does not characterise the distribution. While one can still define the local estimation rate as in Equation~\eqref{localerrorrate} in this setting, we will follow the standard set by \cite{Donoho:1991} by focusing on a slightly different notion of local estimation rate and modulus of continuity. Given a family of distributions $\mathcal{P}$ and a statistic $\theta:\mathcal{P}\to\mathbb{R}$, we define the \emph{modulus of continuity with respect to $\theta$} at any value $t\in\mathbb{R}$ as
\begin{align}
\nonumber\MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}}{\mathcal{P}}{\theta} &= \sup\left\{|\theta(P)-\theta(Q)|\;\Big| \; {\rm SC}_{\Qtest}(P,Q)\ge n, \theta(P)=t, P, Q\in\mathcal{P}\right \}\\
&= \max_{P\in\mathcal{P}, \theta(P)=t} \MOCallt{n}{P}{\mathcal{F}^{\text{\rm test}}}{\mathcal{P}}{\theta}. \label{modulust}
\end{align}
This quantity is the worst-case modulus of any distribution $P$ in the family $\mathcal{P}$ such that $\theta(P)=t$. \cite{Donoho:1991} showed that for some estimation problems, in the non-private setting, one can design an algorithm that universally achieves error $\max_{\theta(P)=t}\ierror{\hat{\theta}}{n}{P}\le \MOCallt{n}{t}{\mathcal{F}}{\mathcal{P}}{\theta}$.
Rather than using simple hypothesis tests, \cite{Donoho:1991} turn to the problem of distinguishing
\begin{equation}
 \mathcal{P}_{\le t}=\{f \mid \theta(f)\le t\}\;\;\;\text{ and }\;\;\; \mathcal{P}_{\ge t+\Delta}=\{f \mid \theta(f)\ge t+\Delta\}.
 \label{eq:t-Delta-tests}
\end{equation}
Using this test, given the promise that $t\in[t_{\min},t_{\max}]$ and setting $\Delta=|t_{\max}-t_{\min}|/3$, we can rule out the true parameter lying in either $[t_{\min}, t_{\min}+\Delta]$ or $[t_{\max}-\Delta, t_{\max}]$, shrinking the search space to $2/3$ of its previous size for the next round. We will call this ternary search.
If we run this algorithm for $\lceil\frac{\log{\left(|t_1-t_0|/\omega\right)}}{\log(3/2)}\rceil$ steps, then the resulting error on the final estimate is at most $\omega$. The total sample complexity of the estimator is the sum of the sample complexities of the tests performed at each step. This is at most a logarithmic factor times the sample complexity of the most stringent (final) test; however, in many cases it is quite a bit lower than that, since the exponential decrease in $\Delta$ can mean that
the sample complexity of the final test dominates the overall sample complexity of the estimator.


\begin{algorithm}[t] \caption{Ternary Search for Functional Estimation, $\hat{\theta}$}\label{algo:BS}
\begin{algorithmic}[1]
\Require{Sample oracle for $P$; endpoints $t_0 < t_1$; target error $\omega$; tests $T^*_{t,\Delta}$ distinguishing $\mathcal{P}_{\le t}$ and $\mathcal{P}_{\ge t+\Delta}$}
\State $t_{\min}\gets t_0$, $t_{\max}\gets t_1$
\While{$t_{\max}-t_{\min} > \omega$}
\State $\Delta \gets (t_{\max}-t_{\min})/3$
\State Run $T^*_{t_{\min}+\Delta,\,\Delta}$ on fresh samples to distinguish $\mathcal{P}_{\le t_{\min}+\Delta}$ from $\mathcal{P}_{\ge t_{\max}-\Delta}$
\If{the test accepts $\mathcal{P}_{\le t_{\min}+\Delta}$}
\State $t_{\max}\gets t_{\max}-\Delta$
\Else
\State $t_{\min}\gets t_{\min}+\Delta$
\EndIf
\EndWhile
\State \Return $(t_{\min}+t_{\max})/2$
\end{algorithmic}
\end{algorithm}

A key condition on the family $\mathcal{P}$ is that for every $t$ and $\Delta>0$, there should exist distributions $P_0^*$ and $P_1^*$ such that the number of samples needed to distinguish $\mathcal{P}_{\le t}$ and $\mathcal{P}_{\ge t+\Delta}$ is equal to the number of samples needed to distinguish between $P_0^*$ and $P_1^*$. Suppose we would like a test which competes with the modulus of continuity at some target sample size $m$.
Given this condition, if we run Algorithm~\ref{algo:BS} for $\lceil\frac{\log{\left(\MOCallt{m}{t}{\mathcal{F}^{\text{\rm test}}}{\mathcal{P}}{\theta}/|t_1-t_0|\right)}}{\log(2/3)}\rceil$ rounds, then it achieves error at most $ \MOCallt{m}{t}{\mathcal{F}^{\text{\rm test}}}{\mathcal{P}}{\theta}$. The final, most stringent test requires a sample of size $m$, by definition.\footnote{This discussion elides the dependency on the tests' error probability, which must be set sufficiently low to ensure that the decisions made at every round are correct; see Algorithm~\ref{algo:BS}.} The overall sample size is thus at most a logarithmic factor larger than $m$, and in some cases even closer to $m$.

Algorithm~\ref{algo:BS} can be adapted to the private setting by letting $T^*_{t,\Delta}$ be the optimal $\epsilon$-DP test. In the following section, we adapt an example of this framework from \cite{Donoho:1991} to the private setting. The key difficulty is showing that the sample complexity of distinguishing $\mathcal{P}_{\le t}$ and $\mathcal{P}_{\ge t+\Delta}$ is characterised by the hardest simple test in the private setting.
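The interval-shrinking loop of Algorithm~\ref{algo:BS} can be sketched in a few lines of Python. The sketch below abstracts the tests into a black-box callable and ignores error probabilities and privacy; the toy oracle and the value $0.42$ are invented for the example.

```python
def ternary_search(test, t_min, t_max, omega):
    """Shrink [t_min, t_max] to width at most omega using interval tests.

    `test(lo, hi)` is a black box distinguishing P_{<= lo} from P_{>= hi}:
    it returns True when the data look consistent with theta <= lo and
    False when they look consistent with theta >= hi.  Each round rules
    out the left or right third of the interval, shrinking the search
    space to 2/3 of its previous size."""
    while t_max - t_min > omega:
        delta = (t_max - t_min) / 3
        if test(t_min + delta, t_max - delta):
            t_max -= delta  # rule out [t_max - delta, t_max]
        else:
            t_min += delta  # rule out [t_min, t_min + delta]
    return (t_min + t_max) / 2

# Toy usage with a noiseless oracle for a true parameter theta = 0.42;
# a private instantiation would replace `oracle` with an eps-DP test.
theta = 0.42
oracle = lambda lo, hi: theta < hi
estimate = ternary_search(oracle, 0.0, 1.0, omega=0.01)
```

Since the interval length shrinks geometrically, the number of rounds is logarithmic in $|t_1-t_0|/\omega$, matching the $\log(3/2)$ bound in the text.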
This alteration cannot be adapted to the private setting.
As a result, unlike in the non-private setting, where the resulting algorithm is instance optimal, our private analogue will be instance optimal up to a logarithmic factor.

As in Donoho and Liu, rather than estimate the rate at which the tail of the density approaches $0$ as $x\to\infty$, we will consider a transformation of the problem to observations $Y_i=1/X_i$. This leads us to estimating the rate at which a density approaches $0$ as $x \to 0^+$. Let $\mathcal{P}~=~\textbf{Tails}(C_-, C_+, \delta, t_0, t_1, \gamma, p)$ be the set of distributions defined on $[0,\infty)$ with densities satisfying
\[f(x) = Cx^t(1+h(x)) \;\;\text{ for } 0\le x\le\delta<1,\]
where
\begin{align*}
0< t_0\le t\le t_1<\infty, \hspace{0.1in}
0< C_-\le C\le C_+<\infty, \hspace{0.1in} \text{ and } \hspace{0.1in}
|h(x)|&\le \gamma x^p.
\end{align*}
For such a density function, the \emph{tail rate} is given by $\theta(f)=t$. The statistic $\theta(f)$ is one dimensional and lies in the bounded interval $[t_0,t_1]$.

\begin{theorem}\label{maintailtheorem-overall} For every positive integer $n$:
\begin{enumerate}
 \item For every $t \in [t_0,t_1]$ there exists a density $f$ with $\theta(f)=t$ such that every differentially private estimator has error at least $\frac{1}{2}\MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}$ on some distribution in a neighborhood of $f$. That is,
\[\errorlocall{n}{f}{\mathcal{F}_{\epsilon}}{\mathcal{P}}{\theta}\ge \frac{1}{2}\MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}.\]

 \item There exists an $\epsilon$-differentially private estimator $\hat{\theta}$ with the following property.
For all $t\in [t_0,t_1]$, if $k^*(n)= \lceil\log_{3/2}(\frac{|t_1-t_0|}{ \MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}})\rceil$ and $N=n\cdot k^*(n)\cdot \lceil\log k^*(n)\rceil $, then $\hat{\theta}$ has error at most $\MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}$ when run on $N$ samples. That is, \[\ierror{\hat{\theta}}{N}{f}\le \MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}.\]

\end{enumerate}
\end{theorem}

Typically, we expect $N$ to be $O(n \cdot\log (n|t_1-t_0|)\cdot\log\log (n|t_1-t_0|))$ for $\epsilon \leq 1$ when $\MOCallt{n/\log k^*(n)}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}~\approx~1/\mathrm{poly}(n)$.
As in Donoho and Liu, the first step of the proof is to show that the corresponding testing problem has the property that the difficulty of distinguishing between two intervals in $[t_0,t_1]$ is captured by the difficulty of the hardest two-point testing problem. That is, for $t\in[t_0,t_1]$ and $\Delta>0$, there exist distributions $f_0^*$ and $f_1^*$ such that the number of samples needed to distinguish $\mathcal{P}_{\le t}$ and $\mathcal{P}_{\ge t+\Delta}$ is equal to the number of samples needed to distinguish between $f_0^*$ and $f_1^*$.
\cite{Donoho:1991} showed that in the non-private case, the distributions $f_0^*$ and $f_1^*$ satisfy the following conditions:
\begin{align*}
f_0^*(x) &= C_-x^t(1-\gamma x^p), \;\;\; x\le a_1(t,\Delta),\\
f_1^*(x) &= C_+x^{t+\Delta}(1+\gamma x^p), \;\;\; x\le a_1(t,\Delta),
\end{align*}
and
\[\frac{f_0^*(x)}{f_1^*(x)}=\frac{f_0^*(a_1(t,\Delta))}{f_1^*(a_1(t,\Delta))}, \;\;\; x>a_1(t,\Delta),\] and \[\frac{f_0^*(a_1(t,\Delta))}{f_1^*(a_1(t,\Delta))} = \frac{1-\int_0^{a_1(t,\Delta)}f_0^*(v)dv}{1-\int_0^{a_1(t,\Delta)}f_1^*(v)dv}.\]
In the following lemma, we mirror their proof to show that the same distributions also satisfy this condition in the private case.

We say a real random variable $X$ is stochastically less than a random variable $Y$, denoted $X\preceq Y$, if \[\mathbb{P}(X>x)\le \mathbb{P}(Y>x) \;\;\text{ for all } x\in(-\infty, \infty).\]

\begin{lemma}\label{stochord}
If $u$ is non-decreasing and $X\preceq Y$, then $u(X)\preceq u(Y)$. If $X_i\preceq Y_i$ for $i=1,\cdots, n$, then $\sum_{i=1}^n X_i\preceq \sum_{i=1}^n Y_i$.
\end{lemma}

\begin{lemma}\label{worstcasedists}
For the distributions $f_0^*$ and $f_1^*$ described above, we have \[{\rm SC}_{\epsilon}(\mathcal{P}_{\le t}, \mathcal{P}_{\ge t+\Delta}) = {\rm SC}_{\epsilon}(f_0^*, f_1^*).\] Furthermore, the test statistic for distinguishing $\mathcal{P}_{\le t}$ and $\mathcal{P}_{\ge t+\Delta}$ is the clamped log-likelihood ratio between $f_0^*$ and $f_1^*$.
\end{lemma}

\begin{proof} To simplify notation, we will let $a=a_1(t,\Delta)$.
Recall from Proposition~\ref{robustthresholds} that the optimal test statistic for distinguishing between $f_0^*$ and $f_1^*$ is given by the noisy clamped log-likelihood ratio.
Now, let \[L_{t,\Delta}(x) = \begin{cases}
\ln\left(\frac{C_+}{C_-}x^{\Delta}\,\frac{1+\gamma x^p}{1-\gamma x^p}\right), & 0<x\le a,\\[4pt]
\ln\left(\frac{C_+}{C_-}a^{\Delta}\,\frac{1+\gamma a^p}{1-\gamma a^p}\right), & x>a,
\end{cases}\]
so that $L_{t,\Delta}(x)=\ln\left(f_1^*(x)/f_0^*(x)\right)$ for every $x>0$.
\end{proof}

For $\Delta>0$, Donoho and Liu define the following estimator
\begin{equation}\label{donohoestimator}
\theta^*_{n, \Delta}(X) = \frac{\Delta}{2}+\sup\{t\;|\; L_{n,t,\Delta}(X)\le0\},
\end{equation} which outputs the largest $t$ for which the hypothesis $\mathcal{P}_{\le t}$ would be accepted. This estimator is well-defined since $L_{n,t,\Delta}$ is deterministic, a crucial distinction when we move to the private setting. Donoho and Liu show that if $\Delta$ is sufficiently small, then $L_{n,t,\Delta}$ is monotonically decreasing in $t$ for a given $x$, which implies that, given an input distribution $f$ such that $\theta(f)=t$, the estimator $\theta^*_{n, \Delta}$ has error rate $\MOCallt{n}{t}{\mathcal{F}^{\text{\rm test}}_{\epsilon}}{\mathcal{P}}{\theta}.$

The estimator $\theta^*_{n, \Delta}(X)$ in eqn~\eqref{donohoestimator} can be viewed as performing the test $L_{n,t,\Delta}(X)$ on every $t$ value and outputting the threshold point where the test flips from \texttt{accept} to \texttt{reject}. We cannot replicate this directly in the private setting, both because the private test is stochastic (so there are likely to be some false negatives and false positives), and because performing the test on every $t$ value would result in an unreasonably large privacy cost.
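For intuition, the sup in \eqref{donohoestimator} can be approximated by scanning a finite grid of $t$ values. The sketch below uses a hypothetical stand-in for the statistic $L_{n,t,\Delta}$ (a function whose sign flips once, at an arbitrarily chosen $t=0.3$); the true statistic from Donoho and Liu is not reproduced here.

```python
def grid_estimator(L, t_grid, delta):
    """Approximate delta/2 + sup{t : L(t) <= 0} over a finite grid of t values.

    `L` stands in for the deterministic statistic L_{n,t,Delta}:
    L(t) <= 0 means the hypothesis P_{<= t} is accepted."""
    accepted = [t for t in t_grid if L(t) <= 0]
    if not accepted:  # no hypothesis accepted; fall back to the grid's left end
        return t_grid[0] + delta / 2
    return max(accepted) + delta / 2

# Toy usage: a stand-in statistic whose sign flips at t = 0.3 (hypothetical numbers).
t_grid = [i / 100 for i in range(101)]  # grid over [0, 1] with step 0.01
estimate = grid_estimator(lambda t: t - 0.3, t_grid, delta=0.02)
```

The deterministic scan works because every grid point can be tested on the same data at no statistical cost; as noted above, a private analogue cannot afford one $\epsilon$-DP test per grid point, which is what motivates the different approach in the private setting.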