{"text":"\\section{Introduction}\n \nThe growing popularity of voice-controlled digital assistants, such as Google Assistant, brings with it new types of security challenges. Input to a speech interface is difficult to control. Furthermore, attacks via a speech interface are not limited to voice commands which are detectable by human users. Malicious input may also come from the space of sounds to which humans allocate a different meaning to the system, or no meaning at all. In this paper, we show that it is possible to hide malicious voice commands to the voice-controlled digital assistant Google Assistant in `nonsense' words which are perceived as meaningless by humans. \n\nThe remainder of the paper is structured as follows. Section II outlines the prior work in this area. Section III provides some relevant background on phonetics and speech recognition. Section IV details the methodology that was applied in the experimental work. This includes the process used to generate potential adversarial commands consisting of nonsensical word sounds. It also includes the processes for testing the response of Google Assistant to the potential adversarial commands, and for testing the comprehensibility of adversarial commands by humans. Section V presents the results of the experimental work. Section VI discusses the implications of the experimental results. Section VII makes some suggestions for future work and concludes the paper. \n\n\\section{Prior Work}\n\nThe idea of attacking voice-controlled digital assistants by hiding voice commands in sound which is meaningless or imperceptible to humans has been investigated in prior work. Carlini et al. \\cite{carlini2016hidden} have presented results showing it is possible to hide malicious commands to voice-controlled digital assistants in apparently meaningless noise, whereas Zhang et al.\\cite{zhang2017dolphinatack} have shown that it is possible to hide commands in sound which is inaudible. Whereas this prior work demonstrated voice attacks which are perceived by humans as noise and silence, the aim of this work was to develop a novel attack based on `nonsense' sounds which have some phonetic similarity with the words of a relevant target command. In related work by Papernot et al. \\cite{papernot2016crafting}, it was shown that a sentiment analysis method could be misled by input which was `nonsensical' at the sentence level, i.e. the input consisted of a nonsensical concatenation of real words. By contrast, this work examines whether voice-controlled digital assistants can be misled by input which consists of nonsensical word sounds. Nonsense attacks are one of the categories of possible attacks via the speech interface which have been identified in a taxonomy developed by Bispham et al. \\cite{bispham2018taxonomy}. \n\nOutside the context of attacks via the speech interface, differences between human and machine abilities to recognise nonsense syllables have been studied for example by Lippmann et al. \\cite{lippmann1997speech} and Scharenborg and Cooke \\cite{scharenborg2008comparing}. Bailey and Hahn \\cite{bailey2005phoneme} examine the relationship between theoretical measures of phoneme similarity based on phonological features, such as might be used in automatic speech recognition, and empirically determined measures of phoneme confusability based on human perception tests. Machine speech recognition has reached parity with human abilities in terms of the ability correctly to transcribe meaningful speech (see Xiong et al. 
\\cite{xiong2016achieving}), but not in terms of the ability to distinguish meaningful from meaningless sounds. The inability of machines to identify nonsense sounds as meaningless is exploited for security purposes by Meutzner et al. \\cite{meutzner2015constructing}, who have developed a CAPTCHA based on the insertion of random nonsense sounds in audio. The opposite scenario, i.e. the possible security problems associated with machine inability to distinguish sense from nonsense, has to the best of our knowledge not been exploited in prior work.\n\n\n\\section{Background}\n\nThe idea for this work was inspired by the use of nonsense words to teach phonics to primary school children.\\footnote{See The Telegraph, 1st May 2014, ``Infants taught to read `nonsense words' in English lessons\"} `Nonsense' is defined in this context as sounds which are composed of the sound units which are used in a given language, but to which no meaning is allocated within the current usage of that language. Such sound units are known as `phonemes'.\\footnote{See for example https:\/\/www.britannica.com\/topic\/phoneme} English has around 44 phonemes.\\footnote{See for example https:\/\/www.dyslexia-reading-well.com\/44-phonemes-in-english.html} The line between phoneme combinations which carry meaning within a language and phoneme combinations which are meaningless is subject to change over time and place, as new words evolve and old words fall out of use (see Nowak and Krakauer \\cite{nowak1999evolution}). The space of meaningful word sounds within a language at a given point in time is generally confirmed by the inclusion of words in a generally established reference work, such as, in the case of English, the Oxford English Dictionary.\\footnote{See for example https:\/\/blog.oxforddictionaries.com\/press-releases\/new-words-added-oxforddictionaries-com-august-2014\/} In this work, we tested the response of Google Assistant to English word sounds which were outside this space of meaningful word sounds, but which had a `rhyming' relationship with meaningful words recognised as commands by Google Assistant. The term `rhyme' is used to refer to a number of different sound relationships between words (see for example McCurdy et al. \\cite{mccurdy2015rhymedesign}), but it is most commonly used to refer to a correspondence of word endings.\\footnote{see https:\/\/en.oxforddictionaries.com\/definition\/rhyme} For the purposes of our experimental work we define rhyme according to this commonly understood sense as words which share the same ending. \n\nThere are a number of features of speech recognition in voice-controlled digital assistants which might affect the processing of nonsense syllables by such systems. One of these features is the word space which the assistant has been trained to recognise. The number of words which a voice assistant such as Google Assistant can transcribe is much larger than the number of words which it can `understand' in the sense of being able to map them to an executable command. In order to be able to perform tasks such as web searches by voice and note taking, a voice-controlled digital assistant must be able to transcribe all words in current usage within a language. It can therefore be assumed that the speech recognition functionality in Google Assistant must have access to a phonetic dictionary of all English words. 
We conducted some preliminary tests to determine whether this phonetic dictionary also includes nonsense words, so as to enable the assistant to recognise such words as meaningless. Using the example of the nonsense word sequence `voo terg spron', we tested the response of Google Assistant to nonsense syllables by speaking them in natural voice to a microphone three times. The nonsense word sequence was variably transcribed as `bedtime song', `who text Rob', and `blue tux prom', i.e. the Assistant sought to match the nonsense syllables to meaningful words, rather than recognising them as meaningless. This confirmed the viability of our experiment in which we sought to engineer the matching of nonsense words to a target command. \n\nAnother feature of speech recognition in voice assistant which might affect the processing of nonsense syllables is the influence of a language model. Modern speech recognition technology includes both an acoustic modelling and a language modelling component. The acoustic modelling component computes the likelihood of the acoustic features within a segment of speech having been produced by a given word. The language modelling component calculates the probability of one word following another word or words within an utterance. The acoustic model is typically based on Gaussian Mixture Models or deep neural networks (DNNs), whereas the language model is typically based on n-grams or recurrent neural networks (RNNs). Google's speech recognition technology as incorporated in Google Assistant is based on neural networks.\\footnote{See Google AI blog, 11th August 2015, `The neural networks behind Google Voice transcription' https:\/\/ai.googleblog.com\/2015\/08\/the-neural-networks-behind-google-voice.html} The words most likely to have produced a sequence of speech sounds are determined by calculation of the product of the acoustic model and the language model outputs. The language model is intended to complement the acoustic model, in the sense that it may correct `errors' on the part of the acoustic model in matching a set of acoustic features to words which are not linguistically valid in the context of the preceding words. This assumption of complementary functionality is valid in a cooperative context, where a user interacts via a speech interface in meaningful language. However, the assumption of complementarity is not valid in an adversarial context, where an attacker is seeking to engineer a mismatch between a set of speech sounds as perceived by a human, such as the nonsensical speech sounds generated here, and their transcription by a speech-controlled device. In an adversarial context such as that investigated here, the language model may in fact operate in the attacker's favour, in that if one `nonsense' word in an adversarial command is misrecognised as a target command word, subsequent words in the adversarial command will be more likely to be misrecognised as target command words in turn, as the language model trained to recognise legitimate commands will allocate a high probability to the target command words which follow the initial one. Human speech processing also uses an internal `lexicon' to match speech sounds to words (see for example Roberts et al. \\cite{roberts2013aligning}). However, as mentioned above, unlike machines, humans also have an ability to recognise speech sounds as nonsensical. 
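The decoding step sketched above amounts to choosing the word sequence $\hat{W} = \arg\max_W P(O \mid W)\, P(W)$, where $P(O \mid W)$ is the acoustic likelihood of the observed audio and $P(W)$ is the language-model prior. The following minimal Python sketch uses made-up scores, not anything taken from Google's system, to illustrate how this product lets a command-trained language model pull an acoustically ambiguous nonsense syllable towards a command word once the preceding word has already been matched:

\begin{verbatim}
# Toy rescoring example (illustrative scores only, not Google's decoder).
import math

# Hypothetical acoustic log-likelihoods for the second word of "hurn zof ...",
# after the first word has already been (mis)recognised as "turn".
acoustic = {"off": -4.0, "of": -3.8, "zof": -3.5}  # "zof" is out of vocabulary

# Hypothetical bigram log-probabilities from a language model trained on
# many commands of the form "turn off/on ...".
bigram_after_turn = {"off": -0.7, "of": -5.0}

def rescore(acoustic_scores, lm_scores, lm_weight=1.0):
    """Combine acoustic and language-model scores (a product in log domain)."""
    return {w: a + lm_weight * lm_scores.get(w, -math.inf)
            for w, a in acoustic_scores.items()}

scores = rescore(acoustic, bigram_after_turn)
print(max(scores, key=scores.get))  # 'off': the language model wins
\end{verbatim}

Because an out-of-vocabulary candidate such as `zof' receives zero prior probability, a decoder of this kind can only ever emit real words, which is consistent with the behaviour observed in the preliminary tests above.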
This discrepancy between machine and human processing of word sounds was the basis of our attack methodology for hiding malicious commands to voice assistants in nonsense words. \n\n\n\\section{Methodology}\nThe experimental work comprised three stages. The first stage involved generating from a set of target commands a set of potential adversarial commands consisting of nonsensical word sequences. These potential adversarial commands were generated using a mangling process which involved replacing consonant phonemes in target command words to create a rhyming word sound, and then determining whether the resulting rhyming word sound was a meaningful word in English or a `nonsense word'. For the purposes of this work, the Unix word list was considered representative of the current space of meaningful sounds in English. Word sounds identified as nonsense words were used to create potential adversarial commands. Audio versions of these potential adversarial commands were created using speech synthesis technology. The second stage of the experimental work was to test the response of the target system to the potential adversarial commands. The target system for experiments on machine perception of nonsensical word sequences was the voice-controlled digital assistant Google Assistant. The Google Assistant system was accessed via the Google Assistant Software Development Kit (SDK).\\footnote{See https:\/\/developers.google.com\/assistant\/sdk\/} The third stage of the experimental work was to test the human comprehensibility of adversarial commands which were successful in triggering a target action in the target system. \n\\subsection{Adversarial Command Generation}\n\nA voice-controlled digital assistant such as Google Assistant typically performs three generic types of action, namely information extraction, control of a cyber-physical action, and data input. The data input category may overlap with the control of cyber-physical action category where a particular device setting needs to be specified, eg. light color or thermostat temperature. The three generic action categories are reflected in three different command structures for commands to Google Assistant and other voice-controlled digital assistants. The three command structures are: vocative + interrogative (eg. `Ok Google, what is my IP address'), vocative + imperative (eg. `Ok Google, turn on the light'), and vocative + imperative + data (eg. 'Ok Google, take a note that cats are great'). For our experimental work, we chose 5 three-word target commands corresponding to 5 target actions, covering all three possible target action categories. These target commands were: ``What's my name\" (target action: retrieve username, action category: information extraction), ``Turn on light\" (target action: turn light on, action category: control of cyber-physical action), ``Turn off light\" (target action: turn light off, action category: control of cyber-physical action), ``Turn light red\" (target action: turn light to red, action category: data input), ``Turn light blue\" (target action: turn light to blue, action category: data input). We originally included a sixth target command, which would have represented a second target command for the information extraction category: ``Who am I\". However, no successful adversarial commands could be generated from this target command. \n\nA set of potential adversarial commands was created from the target commands using a mangling process. 
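A minimal sketch of this mangling step, which is described in full in the next paragraph, is given below. It assumes the Linux `espeak' tool; the onset list, the stress-mark splitting heuristic and the word-list handling are illustrative simplifications rather than the exact scripts used in the experiments.

\begin{verbatim}
# Generate rhyming nonsense candidates for a (monosyllabic) command word.
import subprocess

ONSETS = ["b", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "r", "s",
          "t", "v", "w", "z", "S", "Z", "bl", "gl", "kl", "pr", "tr", "str",
          "sk", "skw", "sm", "sp", "st", "sw", "Tr"]

def phonemes(word):
    """Kirschenbaum-style phoneme string for `word', e.g. 'light' -> l'aIt."""
    out = subprocess.run(["espeak", "-q", "-x", word],
                         capture_output=True, text=True)
    return out.stdout.strip()

def rhyming_nonsense(word, real_phonetic):
    """Yield phoneme strings that keep `word's ending but start with a
    different consonant onset and do not occur in the phonetic word list."""
    ph = phonemes(word)
    # Crude onset split at the primary stress mark, adequate for the
    # monosyllabic command words used here ("t'3:n" -> onset "t", rhyme "3:n").
    onset, _, rhyme = ph.partition("'")
    for new_onset in ONSETS:
        candidate = new_onset + "'" + rhyme
        if new_onset != onset and candidate not in real_phonetic:
            yield candidate

# real_phonetic would be built once by running phonemes() over every entry of
# the Unix word list (e.g. /usr/share/dict/words).
\end{verbatim}

Audio versions of the surviving candidates can then be synthesised directly from the phoneme strings, as described below.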
This mangling process was based on replacing consonant phonemes in the target command words to generate nonsensical word sounds which rhymed with the original target command word.\\footnote{Our approach was inspired by an educational game in which a set of nonsense words is generated by spinning lettered wooden cubes - see https:\/\/rainydaymum.co.uk\/spin-a-word-real-vs-nonsense-words\/}\nThe target commands were first translated to a phonetic representation in the Kirschenbaum phonetic alphabet\\footnote{See http:\/\/espeak.sourceforge.net\/phonemes.html} using the `espeak' functionality in Linux. The starting consonant phonemes of each word of the target command were then replaced with a different starting consonant phoneme, using a Python script and referring to a list of starting consonants and consonant blends.\\footnote{See https:\/\/k-3teacherresources.com\/teaching-resource\/printable-phonics-charts\/} Where the target command word began with a vowel phoneme, a starting consonant phoneme was prefixed to the vowel. The resulting word sounds were checked for presence in a phonetic representation of the Unix word list, also generated with espeak, to ascertain whether the word sound represented a meaningful English word or not. If the sound did correspond to a meaningful word, it was discarded. This process thus generated from each target command a number of rhyming nonsensical phoneme sequences to which no English meaning was attached. Audio versions of the phoneme sequences were then created using espeak. A similar process was followed to generate a set of potential adversarial commands from the wake-up word `Hey Google'. In addition to replacing the starting consonants `H' and `G', the second `g' in `Google' was also replaced with one of the consonants which are found in combination with the `-le' ending in English.\\footnote{See https:\/\/howtospell.co.uk\/} \n\nNonsensical word sequences generated from the `Hey Google' wake-up word and nonsensical word sequences generated from target commands which were successful respectively in activating the assistant and triggering a target action in audio file input tests (see Results section for details) were combined with one another to generate a set of potential adversarial commands for over-the-air tests. This resulted in a total of 225 nonsensical word sequences representing a concatenation of each of 15 nonsensical word sequences generated from the wake-up word with each of 15 nonsensical word sequences generated from a target command. Audio versions of these 225 nonsensical word sequences were generated using the Amazon Polly speech synthesis service, generating a set of .wav files.\\footnote{See https:\/\/aws.amazon.com\/polly\/} Amazon Polly is the speech synthesis technology used by Amazon Alexa, hence the over-the-air tests represented a potential attack on Google Assistant with `Alexa's' voice. The audio contained a brief pause between the wake-up word and the command, as is usual in natural spoken commands to voice assistants. As Amazon Polly uses the x-sampa phonetic alphabet rather than the Kirschenbaum format, it was necessary prior to synthesis to translate the phonetic representations of the potential adversarial commands from Kirschenbaum to x-sampa format.\n\n\n\\subsection{Assistant Response Tests}\n The Google Assistant SDK was integrated in a Ubuntu virtual machine (version 18.04). 
The Assistant was integrated in the virtual machine using two options; firstly, the Google Assistant Service, and secondly the Google Assistant Library. The Google Assistant Service is activated via keyboard stroke and thus does not require a wake-up word, and voice commands can be inputted as audio files as well as over the air via a microphone. The Google Assistant Library, on the other hand, does require a wake-up word for activation, and receives commands via a microphone only. The Google Assistant Service could therefore be used to test adversarial commands for target commands and for the wake-up word separately and via audio file input rather than via a microphone. The Google Assistant Library could be used to test the activation of the Assistant and the triggering of a target command by an adversarial command in combination over the air, representing a more realistic attack scenario. \n\nSome additions to the source code for the Google Assistant Service were made in order to print the Assistant's spoken responses to commands to the terminal in text form, as well to print a confirmation of two non-verbal actions by the Assistant, namely `turn light red' and `turn light blue'. Similar amendments were made to the source code for the Google Assistant Library in order to print a confirmation of these two non-verbal actions for which the Assistant did not print a confirmation to the terminal by default. \n\n\n\nWe first tested the Assistant's response to plain-speech versions of each target command to confirm that these triggered the relevant target action. Using Python scripts, we then generated nonsense word sequences from the wake-up word `hey Google' and from each target command in batches of 100 and tested the response of Google Assistant Service to audio file input of the potential adversarial commands for wake-up word and target commands separately. The choice of consonant phoneme to be replaced to generate nonsense words was performed randomly by the Python scripts for each batch of 100 potential adversarial commands. We continued the testing process until we had generated 15 successful adversarial commands for the wake-up word, and 3 successful adversarial commands for each target command, i.e. 15 successful adversarial commands in total. Each successful adversarial command for the wake-up word and each successful adversarial command for a target command were then combined to generate potential adversarial commands for the over-the-air tests as described above.\n\nIn the over-the-air tests, the 225 potential adversarial commands generated from the adversarial commands for the wake-up word and target commands which had been successful in the audio file input tests were played to the Google Assistant Library via a USB plug-in microphone from an Android smartphone. \n\n\\subsection{Human Comprehensibility Tests}\n\nWe next tested the human comprehensibility of adversarial commands which had successfully triggered a target action by the Assistant. Human experimental subjects were recruited via the online platform Prolific Academic.\\footnote{https:\/\/prolific.ac\/} All subjects were native speakers of English. The subjects were asked to listen to audio of twelve successful adversarial commands, which were the successful adversarial commands shown in Tables 1 and 2 for the audio file input and over-the-air tests respectively (see Results section for further details). 
The audio which subjects were asked to listen to also included as `attention tests', two files consisting of synthesised audio of two easily understandable utterances, ``Hello how are you\" and ``Hi how are you\". Subjects were then asked to indicate whether they had identified any meaning in the audio. If they had identified meaning, they were asked to indicate what meaning they heard. The order in which audio clips were presented to the participants was randomised.\n\n\n\\section{Results}\n\n\n\\subsection{Assistant Response Tests}\nThrough application of the methodology described above, the audio file input tests for the wake-up word `Hey Google' identified 15 successful adversarial commands which triggered activation of the device. The audio file input tests for target commands identified 3 successful adversarial commands for each target action, i.e. 15 successful adversarial commands in total, in around 2000 tests. Three examples of the successful adversarial commands for the wake-up word and one example of an adversarial command for each of the target commands is shown in Table 1. The over-the-air tests identified 4 successful adversarial commands in the 225 tests (representing all possible combinations of each of the 15 successful adversarial commands for the wake-up word with each of the successful adversarial commands for the target commands). One of the successful over-the-air adversarial commands triggered the `turn on light' target action and three of the successful over-the-air adversarial commands triggered the `turn light red' target action. The 4 successful over-the-air adversarial commands are shown in Table 2. Also shown below, in Figures \\ref{fig:transcription_1} and \\ref{fig:transcription_2}, are examples of the print-out to terminal of the Google Assistant Service's response to a successful adversarial command for a wake-up word and for a target command. 
Further shown below is an example of the print-out to terminal of the Google Assistant Library's response to a successful over-the-air adversarial command (see Figure \\ref{fig:transcription_3}).\n\n\n\n\n\n\n\\begin{table}[htbp!]\n\\centering\n\\begin{center}\n \\begin{tabular}{|p{1.7cm}||p{1.7cm}|p{1.7cm}|p{1.7cm}|} \n \\hline\n \\footnotesize{Target Command} & \\footnotesize{Adversarial Command (Kirschenbaum phonetic symbols)} & \\footnotesize{Text Transcribed} & \\footnotesize{Action Triggered} \\\\ [0.5ex] \n \\hline\\hline\n \\footnotesize{Hey Google} & \\footnotesize{S'eI j'u:b@L (``shay yooble\")} & \\footnotesize{hey Google} & \\footnotesize{\\textit{assistant activated}}\\\\ \n \\hline\n \\footnotesize{Hey Google} & \\footnotesize{t'eI g'u:t@L (``tay gootle\")} & \\footnotesize{hey Google} & \\footnotesize{\\textit{assistant activated}} \\\\\n \\hline\n \\footnotesize{Hey Google} & \\footnotesize{Z'eI d'u:b@L (``zhay dooble\")} & \\footnotesize{hey Google} & \\footnotesize{\\textit{assistant activated}} \\\\\n \\hline\n \\footnotesize{turn off light} & \\footnotesize{h'3:n z'0f j'aIt (``hurn zof yight)} & \\footnotesize{turns off the light} & \\footnotesize{Turning device off}\\\\[1ex]\n \\hline\n \\footnotesize{turn light blue} & \\footnotesize{h'3:n gl'aIt skw'u: (``hurn glight squoo\"} & \\footnotesize{turn the lights blue} & \\footnotesize{color is blue}\\\\[1ex]\n \\hline\n \\footnotesize{turn light red} & \\footnotesize{str'3:n j'aIt str'Ed (``strurn yight stred\"} & \\footnotesize{turn the lights to Red} & \\footnotesize{color is red}\\\\[1ex]\n \\hline\n \\footnotesize{what's my name} & \\footnotesize{sm'0ts k'aI sp'eIm (``smots kai spaim\")} & \\footnotesize{what's my name} & \\footnotesize{You told me your name was MK}\\\\[1ex]\n \\hline\n \\footnotesize{turn on light} & \\footnotesize{p'3:n h'0n kl'aIt (``purn hon klight\")} & \\footnotesize{turn on light} & \\footnotesize{Turning device on}\\\\[1ex]\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Examples of successful adversarial commands in audio file input experiments}\n\\label{table:1}\n\\end{table}\n\n\n\\begin{table}[htbp!]\n\\centering\n\\begin{center}\n \\begin{tabular}{|p{1.7cm}||p{1.7cm}|p{1.7cm}|p{1.7cm}|} \n \\hline\n \\footnotesize{Target Command} & \\footnotesize{Adversarial Command (x-sampa phonetic symbols)} & \\footnotesize{Text Transcribed} & \\footnotesize{Action Triggered} \\\\ [0.5ex] \n \\hline\\hline\n \\footnotesize{Hey Google turn on light} & \\footnotesize{t'eI D'u:bl= s'3:n Z'Qn j'aIt (``tay dooble surn zhon yight\")} & \\footnotesize{switch on the light} & \\footnotesize{Turning the LED on}\\\\ \n \\hline\n \\footnotesize{Hey Google turn light red} & \\footnotesize{t'eI D'u:bl= tr'3:n Tr'aIt str'Ed (``tay dooble trurn thright stred\")} & \\footnotesize{turn lights to Red} & \\footnotesize{The color is red} \\\\\n \\hline\n \\footnotesize{Hey Google turn light red} & \\footnotesize{t'eI D'u:bl= pr'3:n j'aIt sw'Ed (``tay dooble prurn yight swed\")} & \\footnotesize{turn the lights red} & \\footnotesize{The color is red} \\\\\n \\hline\n \\footnotesize{Hey Google turn light red} & \\footnotesize{t'eI D'u:bl= str'3:n j'aIt str'Ed (``tay dooble strurn yight stred\")} & \\footnotesize{turn lights to Red} & \\footnotesize{The color is red}\\\\[1ex]\n \\hline\n\\end{tabular}\n\\end{center}\n\\caption{Successful adversarial commands in over-the-air experiments}\n\\label{table:2}\n\\end{table}\n\n\n\\begin{figure}\n \\tiny\n \\centering\n \\noindent\\caption{\\small\\textbf{{Transcription of 
response to adversarial command for `Hey Google' from audio file}}}\n\n \\begin{verbatim}\n Wakeup word triggered by nonsense_wakeup\/Z'eI d'u:[email protected], nonsense_wakeup\/Z'eI d'u:b@L\n INFO:root:Connecting to embeddedassistant.googleapis.com\n \n INFO:root:Recording audio request.\n INFO:root:Transcript of user request: \"change\".\n INFO:root:Transcript of user request: \"JD\".\n INFO:root:Transcript of user request: \"hey dude\".\n INFO:root:Transcript of user request: \"hey Google\".\n INFO:root:Transcript of user request: \"hey Google\".\n INFO:root:Transcript of user request: \"hey Google\".\n INFO:root:End of audio request detected.\n INFO:root:Stopping recording.\n INFO:root:Transcript of user request: \"hey Google\".\n INFO:root:Expecting follow-on query from user.\n INFO:root:Playing assistant response.\n \\end{verbatim}\n \\label{fig:transcription_1}\n\\end{figure}\n\\normalsize\n\n\n\\begin{figure}\n \\tiny\n \\noindent\\caption{\\small\\textbf{{Transcription of response to adversarial command for `what's my name' (sm'0ts k'aI sp'eIm) from audio file}}}\n\n \\begin{verbatim}\n\n INFO:root:Recording audio request.\n INFO:root:Transcript of user request: \"what's\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"some\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"summer\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"what's on Sky\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"what's my IP\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"some months cause pain\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"what's my car's paint\".\n INFO:root:Playing assistant response.\n INFO:root:Transcript of user request: \"what's my car's paint\".\n INFO:root:Playing assistant response.\n INFO:root:End of audio request detected\n INFO:root:Transcript of user request: \"what's my name\".\n INFO:root:Playing assistant response.\n INFO:root:You told me your name was MK\n I could never forget that \ufffd\ufffd\n INFO:root:Finished playing assistant response.\n \\end{verbatim}\n \\label{fig:transcription_2}\n\\end{figure}\n\\normalsize\n\n\n\n\\begin{figure}\n \\tiny\n \\centering\n \\noindent\\caption{\\small\\textbf{{Transcription of response to adversarial command for `Hey Google turn on light' (t'eI D'u:bl= s'3:n Z'Qn j'aIt) from over-the-air audio}}}\n\n \\begin{verbatim}\n \n ON_CONVERSATION_TURN_STARTED\n ON_END_OF_UTTERANCE\n ON_RECOGNIZING_SPEECH_FINISHED:\n {\"text\": \"switch on the light\"}\n \n Do command action.devices.commands.OnOff with params {u'on': True}\n Turning the LED on.\n ON_RESPONDING_STARTED:\n {\"is_error_response\": false}\n ON_RESPONDING_FINISHED\n ON_CONVERSATION_TURN_FINISHED:\n {\"with_follow_on_turn\": false}\n \n \\end{verbatim}\n \\label{fig:transcription_3}\n\\end{figure}\n\\normalsize\n\nIn repeated tests, it was shown that the audio file input results were reproducible, whereas the over-the-air results were not, i.e. a successful adversarial command did not necessarily trigger the target action again on re-playing. Apart from the triggering target commands as described, a certain proportion of the nonsensical word sequences tested in the experiments were transcribed as other meaningful word sequences, prompting the Assistant to run web searches. 
For other nonsensical word sequences, the Assistant's response was simply to indicate non-comprehension of the input. \n\n\n\\subsection{Human Comprehensibility Tests}\n\nAs stated above, audio clips of the twelve successful adversarial commands shown in Tables 1 and 2, as well as two audio clips representing attention tests, were played to human subjects in an online experiment. There were 20 participants in the experiment, from whom 17 sets of valid results could be retrieved. All 17 participants who generated these results transcribed the attention tests correctly as `hi how are you' and 'hello how are you'. Three participants transcribed one adversarial command as the target command `turn on light', but did not identify any of the other target commands or the wake-up word `Hey Google' in either the audio file input clips or the over-the-air clips. None of the other participants identified any of the target commands or the wake-up word in any of the clips. Eight of the participants identified no meaning at all in any of the clips which did not represent attention tests. The other participants all either indicated incomprehension of the nonsensical sounds as well or else transcribed them as words which were unrelated to the target command for Google Assistant. Some examples of unrelated transcriptions were `hands off the yacht' and `smoking cause pain'. One participant also transcribed some of the nonsensical sounds as nonsense syllables e.g. `hurn glights grew' and `pern pon clight'. Another participant also transcribed a couple of the nonsensical sounds as the French words `Je du blanc'. \n\n\n\n\n\n\n\n\n\\section{Discussion}\n\nThe combined results from our machine response and human comprehensibility tests confirm that voice-controlled digital assistants are potentially vulnerable to covert attacks using nonsensical sounds. The key findings are that voice commands to voice-controlled digital assistant Google Assistant are shown to be triggered by nonsensical word sounds in some instances, whereby the same nonsensical word sounds are perceived by humans as either not having any meaning at all or as having a meaning unrelated to the voice commands to the Assistant. One notable feature of the results is that the transcription of the adversarial command by the Assistant does not need to match the target command exactly in order to trigger the target action; for example, an adversarial command for the target command `turn on light' is transcribed as `switch on the light' in one instance (see Table 2). In one case, the transcription of an adversarial command does not even need to be semantically equivalent to the target command in order to trigger the target action, as for example in the transcription of an adversarial command for ``turn off light\" as ``turns off the light\". This attack exploits a weakness in the natural language understanding functionality of the Assistant as well as in its speech recognition functionality.\n\nThe machine and human responses to nonsensical word sounds in general were comparable, in that both machine and humans frequently indicated incomprehension of the sounds, or else attempted to fit them to meaningful words. However, in the specific instances of nonsensical word sounds which triggered a target command in Google Assistant, none of the human listeners heard a Google Assistant voice command in the nonsensical word sounds which had triggered a target command. 
Another difference between the machine and human results was that whereas in addition to either indicating incomprehension or transcribing the nonsensical sounds as real words, human subjects on occasion attempted to transcribe the nonsensical word sounds phonetically as nonsense syllables, the Assistant always either indicated incomprehension or attempted to match the nonsensical sounds to real words. This confirms that, unlike humans, the Assistant does not have a concept of word sounds which have no meaning, making it vulnerable to being fooled by word sounds which are perceived by humans as obviously nonsensical.\n\n\n\n\n\n\n\n\n\\section{Future Work and Conclusions}\n\nBased on this small-scale study, we conclude that voice-controlled digital assistants are potentially vulnerable to malicious input consisting of nonsense syllables which humans perceive as meaningless. A focus of future work might be to conduct a larger scale study and to conduct a more fine-tuned analysis of successful and unsuccessful nonsense attacks, to determine which nonsense syllables are most likely to be confused with target commands by machines, whilst still being perceived as nonsensical by humans. This would enable investigation of more targeted attacks. Ultimately the focus of future work should be to consider how voice-controlled systems might be better trained to distinguish between meaningful and meaningless sound in terms of the language to which they are intended to respond. \n\n\n\\section*{Acknowledgments}\n\nThis work was funded by doctoral training grant from the Engineering and Physical Sciences Research Council (EPSRC).\n\n\n\n\n\n{\\footnotesize \\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Toy Model}\nAs mentioned, the toy model consists of a potential $H_Z$ that depends only on the total Hamming weight, or, in physics language, on the total $Z$ polarization, i.e., on the value of $Z=\\sum_i Z_i$. The eigenvalues of $Z$ are $-N,-N+2,-N+4,\\ldots,+N$.\nWe define the Hamming weight $w=(N-Z)\/2$, so that a qubit with $Z_i=1$ corresponds to a bit $0$ while $Z_i=-1$ corresponds to a bit $1$.\nAs a toy model, we consider mostly the following simple piecewise linear form of $H_Z$ as a function of the Hamming weight $w$\n\\begin{eqnarray}\nw\\leq N\/2+\\delta_w & \\quad \\rightarrow \\quad &H_Z=V_{max} \\frac{w}{N\/2+\\delta_w}\\\\ \\nonumber\nw\\geq N\/2+\\delta_w & \\quad \\rightarrow \\quad & H_Z=\\delta_V+(V_{max}-\\delta_V)\\frac{N-w}{N\/2-\\delta_w}.\n\\end{eqnarray}\nThat is, the function $H_Z$ increases linearly from $0$ at $w=0$ to $V_{max}$ at $w=N\/2+\\delta_w$, then it decreases linearly to $\\delta_V$ at $w=N$.\n\nHere we choose $V_{max}>0$ so that the function has two minima, one at $w=0$ and one at $w=N$. This is done to produce more interesting behavior showing small gaps; if the function $H_Z$ were chosen instead to simply be proportional to $w$ then no small gaps appear for annealing.\n\nWe pick $\\delta_V>0$ so that the global minimum is at $w=0$. However, we pick $\\delta_w<0$ so that the minimum at $w=0$ has a smaller basin around it, i.e., so that the maximum of the potential is closer to $w=0$ than to $w=N$.\nSee Fig.~\\ref{figp} for a plot of the potential $H_Z$.\n\n\\begin{figure}\n\\includegraphics[width=4in]{figpotential.png}\n\\caption{$H_Z$ for $N=60,V_{max}=N,\\delta_V=N\/4,\\delta_w=-N\/4$.}\n\\label{figp}\n\\end{figure}\n\n\nWe also later consider a slight modification of this potential, adding an additional small fluctuation added to the potential. This is done to investigate the effect of small changes in the potential which, however, lead to additional minima and may lead to difficulties for classical annealing algorithms which may have trouble getting stuck in additional local minima which are created. We do not perform an extensive investigation of this modification as our goal is not to consider in detail the effect on classical algorithms; rather, our goal is to understand the effect on the quantum algorithm.\n\nThe Hamiltonian of Eq.~(\\ref{Hsdef}) depends upon a parameter $K$. In previous work on the short path algorithm, this parameter was chosen to be an integer as that allowed an analytical treatment of the overlap using Brillouin-Wigner perturbation theory, and further it was chosen to be odd, again for technical reasons. However, for numerical work, there is no reason to restrict ourselves to this specific choice of $K$ (on a gate model quantum computer, arbitrary values of $K$ may be implemented, using polynomial overhead to compute the exponent to exponential accuracy).\nSo, we investigate more general choices of $K$.\nWhenever $K$ is taken to be an integer, we indeed will mean the Hamiltonian of Eq.~(\\ref{Hsdef}).\nHowever, if $K$ is taken non-integer, we instead consider the Hamiltonian\n\\begin{equation}\n\\label{Hsdef2}\nH_s=H_Z-sB (|X|\/N)^K,\n\\end{equation}\nwhere the absolute value of $X$ means that we consider the operator with the same eigenvectors as $X$ but with the eigenvalues replaced by their absolute values. 
We make this choice so that $|X|^K$ will still be a Hermitian operator; if instead we considered the operator $X^K$, then for non-integer $K$ this operator would have complex eigenvalues due to the negative eigenvalues of $X$.\n\nEven with the choice of non-integer powers of $K$, the simulation of $H_s$ can still be performed efficiently as in Ref.~\\onlinecite{Hastings_2018}\nby implementing the operator $|X|^K$ in an eigenbasis of the operators $X_i$ where it is diagonal.\n\n\\section{Numerical and Analytical Results}\nWe now give numerical and analytical results. Analytically, we consider some approximate expressions to locate the critical $b_{cr}$ at which the gap tends to zero as $N\\rightarrow \\infty$ and use this to help understand some of the scaling of the algorithms.\n\nWe consider three different algorithms and estimate their performance numerically.\nThese algorithms are the short path algorithm, the adiabatic algorithm, and a Grover speedup of a simple classical algorithm which picks a random initial state and then follows a greedy descent, repeating until it finds the global minimum.\nIn all cases, we will estimate the time as being $2^{CN}$ up to polynomial factors and we compute the constant $C$. Smaller values of $C$ are better. $C=1$ corresponds to brute force classical search while $C=0.5$ is a Grover speedup.\n\nWe consider three different choices of $H_Z$. First, an ``extensive\" $\\delta_V$, i.e., one that is proportional to $N$.\nThis makes the situation much better for both the short path and adiabatic algorithm since the local minimum of the potential at $w=N$ is at a much larger energy than that of the minimum at $w=0$.\nThis situation is somewhat unrealistic, as in general we may expect a much smaller energy difference. In this case, we are able to do the short path algorithm with $K=1$.\nSecond we consider $\\delta_V=1$. Here we find super-exponentially small gaps if $K=1$ and the location of the minimum gap tends to zero transverse field for $K=1$. Since the location of the minimum gap tends to zero for $K=1$, we instead use larger $K$ for the short path algorithm so that the location of the minimum gap becomes roughly independent of $N$.\n\nUp to this point, we find that the greedy descent is actually the fastest algorithm; perhaps this is no surprise since the potential is linear near each minimum so that so long as one is close enough, the descent works.\nTo get a more realistic situation, while still considering potentials that depends only on total Hamming weight,\nour third choice of $H_Z$ has additional ``fluctuations\" on top of the potential. We take the potential $H_Z$ given before and add many small additional peaks to it so that the greedy descent will {\\it not} work, instead getting trapped in local minima. Thus, in this case, we do not consider at all the time for the greedy algorithm; however, we find that there is only a small effect on the performance of the short path algorithm.\n\nThe idea of adding fluctuations is similar to that of the idea of adding a ``spike\"\\cite{spike,Crosson_2016}, where one considers a potential that depends only on total Hamming weight which is linear with an added spike. Here, however, we instead add many small peaks in the potential. We do not in detail analyze the effect on the classical algorithm (for example, adding multi-bit-flip moves to the classical algorithm may enable to to avoid being trapped in minima). 
Rather, the goal is just to consider the effect on the quantum algorithm.\n\nWe emphasize that, in contrast to previous work on spikes where the overall structure of the potential is linear with an added spike (so that without the added spike there is just one minimum), here we consider a potential which has multiple well-separated minima, even without any added spike. The $\\delta_w$ that we choose leads to only a very modest\nspeedup for the short path algorithm; different constants (in particular, making $|\\delta_w|$ smaller) make the speedup more significant. We choose the given value of constants as the interpretation of the numerical results was cleaner here.\n\n\n\n\\subsection{Results with Extensive $\\delta_V$}\nHere we consider extensive $\\delta_V$. In this subsection we use $K=1$ for the short path algorithm; later we will need larger $K$.\n\nConsider the Hamiltonian $H_Z-bX$ for extensive $\\delta_V$.\nWe can analytically estimate the location of the minimum gap and the value of the minimum gap as follows.\nThe minimum gap is due to an avoided level crossing. To a good approximation, at small $b$, there is one eigenstate with its probability maximum at Hamming weight zero and another eigenstate with its probability maximum at Hamming weight $N$. We can approximate these eigenstates by replacing the piecewise linear potential $H_Z$ by a linear potential that correctly describes the behavior near the probability maximum of the given eigenstate.\n\nSo, first we consider the Hamiltonian\n$H=V_{max} \\frac{w}{N\/2+\\delta_w}-bX$, which roughly describes the first eigenstate.\nThis Hamiltonian is equivalent to $\\sum_i V_{max}(\\frac{1-Z_i}{2})\/(N\/2+\\delta_w)-bX_i$, which describes $N$ decoupled spins.\nEach spin has ground state energy\n\\begin{equation}\nE_0\\equiv \\frac{V_{max}}{N+2\\delta_w}-\\sqrt{\\Bigl(\\frac{V_{max}}{N+2\\delta_w}\\Bigr)^2+b^2},\n\\end{equation}\nand so the total ground state energy is equal to $N$ times this.\n\nThe second eigenstate is roughly described by the Hamiltonian\n$H=\\delta_V+(V_{max}-\\delta_V)\\frac{N-w}{N\/2-\\delta_w}-bX$. Again this describes $N$ decoupled spins, with ground state energy\nper spin equal to\n\\begin{equation}\nE_1 \\equiv \\frac{\\delta_V}{N}+\n\\frac{V_{max}-\\delta_V}{N-2\\delta_w}\n-\\sqrt{\\Bigl(\\frac{V_{max}-\\delta_V}{N-2\\delta_w}\\Bigr)^2+b^2}.\n\\end{equation}\n\nWe can estimate the value of $b$ where the gap is minimum by looking for a level crossing between $E_0$ and $E_1$.\nThis simple estimate is in fact highly accurate.\nFor example, for $V_{max}=N,\\delta_V=N\/4,\\delta_w=-N\/4$, using a Golden section search we find that $E_0=E_1$ at\n$b=0.718070330\\ldots$, while a numerical study of the exact solution with $N=40$ gave the crossing\nat $b=0.718070335\\ldots$, also using a Golden section search.\n\nThe important thing to note is that\nthe location of the level crossing occurs at a value of $b$ that is roughly independent of $N$ and that has a limit as $N\\rightarrow \\infty$ at some nonzero value of $b$.\nAt such a value of $b$, the level splitting is exponentially small in $N$.\nA more careful treatment should also be able to estimate this level splitting quantitatively, i.e., one should be able to calculate the splitting scaling as $\\exp(-c N)$ up to subleading corrections and to calculate the value of $c$. 
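The location of this level crossing is straightforward to reproduce from the two per-spin energies; a minimal sketch, using a standard root finder on $E_0(b)-E_1(b)$ rather than the golden section search used above, is:

\begin{verbatim}
# Decoupled-spin estimate of the crossing for V_max=N, delta_V=N/4, delta_w=-N/4.
# (With all three parameters proportional to N the result is N-independent.)
import numpy as np
from scipy.optimize import brentq

N = 40.0
V_max, delta_V, delta_w = N, N / 4, -N / 4

def E0(b):
    """Per-spin ground energy of the state localised near w = 0."""
    a = V_max / (N + 2 * delta_w)
    return a - np.sqrt(a ** 2 + b ** 2)

def E1(b):
    """Per-spin ground energy of the state localised near w = N."""
    a = (V_max - delta_V) / (N - 2 * delta_w)
    return delta_V / N + a - np.sqrt(a ** 2 + b ** 2)

b_cross = brentq(lambda b: E0(b) - E1(b), 0.01, 2.0)
print(b_cross)  # ~0.7180703..., matching the value quoted above
\end{verbatim}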
We do not give this analytic treatment here and instead we are content to use numerical solution on finite sizes.\n\nFig.~\\ref{fige} shows a plot of gap and overlap as a function of the transverse field strength $b$ for the case $N=50, V_{max}=N,\\delta_V=N\/4,\\delta_w=-N\/4$. \nAs one can see, the overlap changes rapidly near where the gap becomes small. The minimum gap was in fact $1.938\\ldots\\times 10^{-10}$, but the figure does not have enough resolution in the regime where the gap becomes small to see this small gap. Let $b_{cr}$ be the value of the\ntransverse field where the gap tends to zero as $N\\rightarrow\\infty$.\nOne can also see that so long as we choose a value of $b=b_{cr}-\\delta$ for some small $\\delta>0$, then the exact choice of $\\delta$ does not matter too much for the overlap. For studying the short path algorithm, we picked $b=0.7$ for this specific set of parameters, which (for all sizes we studied) is comfortably far away from the gap closing.\n\n\\begin{figure}\n\\includegraphics[width=4in]{figextensive.png}\n\\caption{Gap and overlap for $N=50,V_{max}=N,\\delta_V=N\/4,\\delta_w=-N\/4$. Gap closing is not resolved finely enough to show the minimum gap on this figure.}\n\\label{fige}\n\\end{figure}\n\nWe now estimate the time required for three different algorithms explained above.\nTo estimate the time for the short path, since we have picked a value of $b$ that is small enough that the gap is $N$-independent, the time is obtained from the scaling of the overlap. We have computed the logarithm of the overlap to base $2$, and divided by $N$, for a range of sizes from $N=30$ to $N=50$. For the adiabatic algorithm, we have computed the minimum gap, and taken the inverse square of the minimum gap as a time estimate, again using a range of sizes from $N=30$ to $N=50$. \nOne finds that for both algorithms, the time estimate depends only weakly on the choice of $N$; it does get slightly worse as $N$ increases but the change is slow enough that we are confident that the numerical results provide a good estimate.\n\nNote that in some cases, if one knows the location of the gap minimum to high accuracy and the gap grows rapidly away from the minimum, it is possible to use the techniques of Ref.~\\onlinecite{roland2002quantum} to reduce the time so that it scales only as the inverse gap; in such cases the value of $C$ for the adiabatic algorithm is half that given here, however even if we halve the value of $C$ for the adiabatic algorithm the resulting $C$ is still larger than for other algorithms.\n\n\n\nOf course, since the size of the Hilbert space is only {\\it linear} in $N$ since we restrict to the subspace which is symmetric under permutation of qubits, it is possible to simulate systems of size much bigger than $N=50$. However, since the minimum gap tends to zero exponentially, we run into issues with numerical precision when computing the gap at larger sizes, so we have chosen to limit to this range of sizes (if we were only interested in the short path algorithm, it would be possible to go to larger sizes since the overlap is not vanishing as rapidly).\n\n\nTo estimate the time for the classical algorithm, we compute the fraction of volume of the hypercube which is within distance $N\/2-\\delta_w$ of the all $0$ string. This number gives the success probability; we take the inverse square-root of this number to get the time required using a Grover speedup. 
This gives\n$$2^{\\frac{1}{2}\\Bigl(\n1-H(\\frac{N\/2-\\delta_w}{N})\n\\Bigr)\n},$$\nwhere $H(p)=-p \\log_2(p) - (1-p) \\log_2(1-p)$ is the binary entropy function.\n\n\n\n\nThe results for the constant $C$ are $C=0.292\\ldots$ for short path, $C=1.29\\ldots$ for adiabatic and $C=0.094\\ldots$ for the Groverized greedy algorithm.\nNotably, the short path algorithm is significantly faster than the adiabatic algorithm. However, the Groverized algorithm is the fastest. In a later\nsubsection, we consider the effect of including ``fluctuations\" to the potential which will prevent this simple Groverized algorithm from working but which have only a small effect on the performance of the short path algorithm.\nChanging $\\delta_w$ to $-3N_{spin}\/8$ so that the basin near the global minimum becomes narrower, all algorithms slow down but the relative performance is similar:\n$C=0.404\\ldots$ for short path, $C=1.525$\\ldots for adiabatic, and $C=0.22\\ldots$ for the Groverized greedy.\n\n\n\n\\subsection{Results with $\\delta_V=1$}\nWe now consider the case of $\\delta_V=1$, keeping $\\delta_w=-N\/4$.\nIn this case, if we pick $K=1$, the location of the minimum gap tends to zero as $N\\rightarrow\\infty$. To understand this, note that from\nthe decoupled spin approximation before, both eigenvalues decrease to second order in $b$. The first eigenvalue is $0-c_1 N b^2+\\ldots$ where the $\\ldots$ denote higher terms in $b$ and where $c_1$ is some positive constant and the second eigenvalue is $1-c_2 N b^2+\\ldots$, for some other constant $c_2$ with $c_2>c_1$. These two eigenvalues cross at some value of $b$ which is proportional to $1\/\\sqrt{N}$.\n\nThe numerical results support this. We considered a range of $N$ from $N=20$ to $N=60$. Defining $b_{min}$ to denote the value of $b$ which gave the minimum gap, we found that $b_{min}*\\sqrt{N}$ was equal to $1.41\\ldots$ over this entire range.\n\nAs a result of this change in the location of the minimum gap, the value of the minimum gap is super-exponentially small. This shifting in the location of the minimum gap is the mechanism that leads to small gaps as in Ref.~\\onlinecite{altshuler2010anderson,Laumann_2015}. The minimum gap is predicted to scale as $\\exp(-c N \\log(N))$ for some constant $c$. We were not able to estimate this constant $c$ very accurately as the gap rapidly became much smaller than numerical precision. However, even for very small $N$, the adiabatic algorithm becomes significantly worse than even a \nbrute force classical search without Grover amplification; taking the time for the adiabatic algorithm as simply being the inverse square of the minimum gap, the crossover happens at $N \\approx 10$.\n\nThis $N$-dependence of $b_{min}$ means that for the short path algorithm we cannot take $K=1$ and expect to get any nontrivial speedup. However, we can take larger $K$.\n\n\nChoosing $K=3$, the gap and overlap are shown\nin Fig.~\\ref{figK3} for $N=40$. The figure does not have enough resolution to show the minimum gap accurately; the true minimum gap is roughly\n$1.4\\times 10^{-7}$.\nHowever, now the location of the jump in overlap (and the local minimum in gap which occurs near that jump) become roughly $N$-independent. Fig.~\\ref{figK3G} and Fig.~\\ref{figK3O} show the gap and overlap respectively for a sequence of sizes $N=20,40,80$. The lines in Fig.~\\ref{figK3G} all cross at roughly the same value of $b$. 
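Incidentally, the value $b_{min}\sqrt{N}\approx 1.41$ quoted above for $K=1$ also follows directly from the decoupled-spin picture. Expanding the per-spin energies of the previous subsection to second order in $b$, the total energies of the two competing states are
\begin{eqnarray}
E^{tot}_0 &\approx& -\frac{N+2\delta_w}{2V_{max}}\,N b^2 \;=\; -\frac{N b^2}{4},\\ \nonumber
E^{tot}_1 &\approx& \delta_V-\frac{N-2\delta_w}{2(V_{max}-\delta_V)}\,N b^2 \;\approx\; 1-\frac{3 N b^2}{4},
\end{eqnarray}
for $V_{max}=N$, $\delta_w=-N/4$, $\delta_V=1$, i.e. $c_1=1/4$ and $c_2=3/4$. The crossing condition $(c_2-c_1)Nb^2=\delta_V$ then gives $b_{cr}=\sqrt{2/N}$, so that $b_{cr}\sqrt{N}=\sqrt{2}\approx 1.41$, in agreement with the numerics.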
The scaling behavior is less clear when the overlap is considered but is consistent with the jump in overlap becoming $N$-independent at large $N$ (the curve for $N=20$ shows some differences).\n\nUsing the short path scaling with $b=0.5$ (which is comfortably below the value at which the gap becomes small) we find a time $2^{CN}$ with $C=0.42\\ldots$. For smaller $\\delta_w$, the value of $C$ reduces.\n\n\\begin{figure}\n\\includegraphics[width=4in]{figK3na.png}\n\\caption{Gap and overlap for $N=40$, $K=3$.}\n\\label{figK3}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=4in]{figK3gapna.png}\n\\caption{Gaps for $N=20,40,80$, $K=3$.}\n\\label{figK3G}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=4in]{figK3overlapna.png}\n\\caption{Overlaps for $N=20,40,80$, $K=3$.}\n\\label{figK3O}\n\\end{figure}\n\nIn contrast, for $K=2$, the location of the minimum gap does {\\it not} have\nan $N$-independent limit. \nFirst note that Fig.~\\ref{figK2} shows a complicated behavior of the gap, which reduces rapidly near where the overlap jumps, but continues to stay small beyond that point. The reason that the gap stays small even at large $b$ is that for $K=2$, the term $-X^K$ has a doubly degenerate ground state, one at $X=+N$ and one at $X=-N$. Further, although the figure does not have the resolution to show it, the gap also becomes small near where the overlap jumps, i.e., there is another minimum of the gap for $b$ slightly larger than $0.4$.\nSo, there is a phase transition where the overlap jumps, with the gap becoming small there, and a degenerate ground state as $b\\rightarrow \\infty$.\n\n\\begin{figure}\n\\includegraphics[width=4in]{figK2.png}\n\\caption{Gap and overlap for $N=40$, $K=2$.}\n\\label{figK2}\n\\end{figure}\n\n Fig.~\\ref{figK2G}\nshows the gap for a sequence of sizes $N=20,40,80$. Here we see that the curves shift leftwards as $N$ increases.\nPreviously, with the decoupled spin approximation we considered the Hamiltonian\n$H=V_{max} \\frac{w}{N\/2+\\delta_w}-bX\n=\\sum_i V_{max}(\\frac{1-Z_i}{2})\/(N\/2+\\delta_w)-bX_i$ to approximately describe the lowest eigenstate.\nNow, we can try considering \n$H=V_{max} \\frac{w}{N\/2+\\delta_w}-bX^2\/N$.\nThis Hamiltonian does not describe decoupled spins.\nRather, it is equivalent to (up to an additive constant)\n$$-\\frac{V_{max}}{N+2\\delta_w} Z-bX^2\/N,$$\nwhere $Z=\\sum_i Z_i$.\nWe have $Z^2+X^2+Y^2=N(N+2)$ where $Y=\\sum_i Y_i$. So, for small $X,Y$ and large $N$ we can approximate $Z=N-(X^2+Y^2)\/(2N)$. Treating $X,Y,Z$ as classical variables (which becomes more accurate as $N$ becomes larger), we see that the minimum is obtained by taking $Y=0$ and then the Hamiltonian is only a function of $X^2$. It is approximated by\n$$X^2\\cdot\\Bigl(\n\\frac{V_{max}}{N+2\\delta_w}\\frac{1}{2N}-\\frac{b}{N} \n\\Bigr),$$\nup to an additive constant.\nThis exhibits a phase transition as a function of $b$. For $V_{max}=N$ and $\\delta_w=-N\/4$, this phase transition occurs at $b=1$.\nFor the other eigenstate, we consider the Hamiltonian\n$H=\\frac{V_{max}-\\delta_V}{N-2\\delta_w}Z-bX^2\/N,$ up to an additive constant.\nUsing the same approximation $Z=N-(X^2+Y^2)\/(2N)$, for $V_{max}=N$ and $\\delta_w=-N\/4$ this Hamiltonian\nhas a phase transition at $b=1\/4$.\nThe plot shows values of the critical $b$ which are intermediate between $1$ and $1\/4$, i.e., below the first phase transition but above the second. 
Thus, it is possible that at large enough $N$, the leftward shift stops at $b=1\/4$, so that the second eigenvalue reduces its energy due to the transverse field term but the first does not. We leave this for future work.\n\n\n\\begin{figure}\n\\includegraphics[width=4in]{figK2G.png}\n\\caption{Gaps for $N=20,40,80$, $K=2$.}\n\\label{figK2G}\n\\end{figure}\n\n\nOne might try to consider $K$ intermediate between $2$ and $3$ to see if some further speedup is possible. We leave this for the future.\n\n\\subsection{Added Fluctuations}\nWe now consider the effect of adding fluctuations. We again took $\\delta_V=1,\\delta_w=-N\/4$ as in the previous subsection. However, we modified the potential by adding on an additional value $f$ if the Hamming weight was equal to $1$ mod $2$. This choice was chosen so that for $K=3$ (the case we took) the transverse field term connects computational basis states which differ by $1$ mod $2$ so that the added fluctuations will have some effect. We picked a large value of $f$, equal to $N\/4$, so that classical simulated annealing would have exponentially small probability to move over the fluctuations. We picked $N$ even so that the added fluctuations have no effect on the values of $H_Z$\n at $w=0$ and $w=N$.\n \nAs seen in Fig.~\\ref{figf}, the shape of the gap and overlap is similar to the case without fluctuations, except that there is an overall rightward shift. As seen in Fig.~\\ref{figfg}, the location of the small gap is again roughly $N$-independent.\nBecause of the rightward shift we are able to take a larger value of $b$ in the short path than we could without fluctuations; however, the overlap at this value of $b$ is roughly the same as without fluctuations. Thus, we find almost the same time scaling as before, in this case $C=0.43\\ldots$\n\n\\begin{figure}\n\\includegraphics[width=4in]{figfluct.png}\n\\caption{Gap and overlap for $N=40$, $K=3$, with added fluctuations.}\n\\label{figf}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[width=4in]{figfluctgap.png}\n\\caption{Gaps for $N=20,40,80$, $K=3$, with added fluctuations.}\n\\label{figfg}\n\\end{figure}\n\n\\section{Quantum Algorithms Projected Locally}\nAs expected, the adiabatic algorithm has problems with multiple minima. Depending on the values of $\\delta_V,\\delta_w$, this can lead to either exponentially small gaps (sufficiently small that in many cases the algorithm is slower than brute force search) or even super-exponentially small gaps. 
Thus, we suggest that it may be natural to consider the following modification of the adiabatic algorithm.\nIndeed, this modification could be applied to the short path algorithm as well, though we explain it first for the adiabatic algorithm\n\nWe explain this modification for {\\it arbitrary} functions $H_Z$ rather than just the specific choices here which depend only on $w$.\nDefine as a subroutine an ``adiabatic algorithm projected locally\" as follows: pick some bit string $b$ and some distance $d$.\nThen, consider the family of Hamiltonians\n$sH_Z-(1-s)X,$\nrestricted to the set of computational basis states within Hamming distance $d$ of bit string $b$.\nThat is, defining $\\Pi_{d,b}$ to project onto that set of computational basis states, we consider the family\n$$\\Pi_{d,b} \\Bigl(sH_Z-(1-s)X\\Bigr)\\Pi_{d,b}$$\nrestricted to the range of $\\Pi_{d,b}$.\nOne applies the adiabatic algorithm to this Hamiltonian for the given choice of $d,b$ and (hopefully) if $d$ is small enough, no small gap will appear so that the adiabatic algorithm will be able to efficiently search within distance $d$ of bit string $b$.\n\nTo implement the projector $\\Pi_{d,b}$, one first must decide on a representation for the set of states within Hamming distance $d$ of $b$.\nThe simplest representation is an overcomplete one: simply use all bit strings of length $N$ (other representations are possible but they make the circuits more complicated). Then, the projector $\\Pi_{d,b}$ can be computed using a simple quantum circuit: given a bit string, exclusive-OR the bit string with $b$ and then use an adder to compute the Hamming weight of the result, then compare the result of the addition to $d$, and finally uncompute the addition and exclusive-OR. The transverse field terms $X_i$ in the adiabatic algorithm can then be replaced with the corresponding terms $\\Pi_{d,b} X_i \\Pi_{d,b}$ (to do this in a gate model, once $\\Pi_{d,b}$ is computed on a given basis state, one can use it to control the application of $X_i$ and then uncompute $\\Pi_{d,b}$, so that now the Hamiltonian commutes with $\\Pi_{d,b}$).\n\nThere are a couple ways to\nprepare the initial state which is a ground state of $-\\Pi_{d,b} X \\Pi_{d,b}$. We very briefly sketch this here, leaving the details for future work. One is to note that the amplitudes in such a state depend only on the Hamming distance from $b$ (states closer to $b$ may have larger amplitude than those further, for example) and such a state is a matrix product state (applying an arbitrary ordering of qubits to regard them as lying on a one-dimensional line) with bond dimension at most $d$ and so can be prepared in polynomial time. More simply, one can adiabatically evolve to such a state by considering a path of Hamiltonians $V(s)-X$. Here, the potential $V(s)$ is diagonal in the computational basis and depends on a parameter $s$. We let the potential $V(s)$ equal $0$ for $s=0$, and (as $s$ increases) the potential gradually increases at large Hamming distance from $b$, while keeping $V$ equal to $0$ on states within Hamming distance $d$ of $b$. One may find a path of such potentials so that the amplitude at distance greater than $d$ from $b$ becomes negligible and such that the gap does not become small (note that here we choose $V$ to increase monotonically with increasing Hamming distance from $b$, rather than having multiple minima, to avoid any small gaps). 
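\n\nAs a minimal illustration of this construction (an addition for exposition, not part of the original algorithm), the following sketch builds $\\Pi_{d,b}$ and the restricted Hamiltonian $\\Pi_{d,b}\\Bigl(sH_Z-(1-s)X\\Bigr)\\Pi_{d,b}$ as explicit matrices for a small $N$, so that the spectrum along the path can be checked numerically; the potential used for $H_Z$ and all parameter values are placeholders.\n\\begin{verbatim}
import itertools
import numpy as np

N, d, s = 6, 2, 0.5
b_str = np.array([1, 0, 1, 1, 0, 0])   # reference bit string b (arbitrary example)
states = np.array(list(itertools.product([0, 1], repeat=N)))

def H_Z(x):
    # placeholder potential depending only on the Hamming weight w = sum(x)
    return abs(int(x.sum()) - N // 2)

# indices of computational basis states within Hamming distance d of b
idx = np.where((states != b_str).sum(axis=1) <= d)[0]

# transverse field X = sum_i X_i in the full computational basis
dim = 2 ** N
X = np.zeros((dim, dim))
for i, x in enumerate(states):
    for k in range(N):
        y = x.copy()
        y[k] ^= 1
        X[i, int(''.join(map(str, y)), 2)] = 1.0

H = s * np.diag([H_Z(x) for x in states]) - (1 - s) * X
H_proj = H[np.ix_(idx, idx)]           # restriction to the range of Pi_{d,b}
evals = np.linalg.eigvalsh(H_proj)
print('gap at s = %.2f: %.4f' % (s, evals[1] - evals[0]))
\\end{verbatim}\n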
\n\nThen, given this subroutine, we consider the problem of minimizing $H_Z$.\nWe consider this as a decision problem: does there exist a computational basis state such that\n$H_Z$ has some given value $E_0$?\nSo, one can define an algorithm which takes a given $d$ and then chooses $b$ randomly, and applies the adiabatic algorithm projected locally for the given $d,b$ in an attempt to find such a computational basis state; if $E_0$ is the minimum value of $H_Z$ within distance $d$ of bit string $b$, and if no small gap arises, then the adiabatic algorithm will succeed.\n\nFinally, one takes this algorithm with random choice of $d$ and applies amplitude amplification to it. Thus, in the case that $d=N$, the choice of $b$ is irrelevant and we have the original adiabatic algorithm, while if $d=0$ the algorithm reduces to Grover search.\n\nOne can do a similar thing for the short path: choose a random $d$, restrict to the the set of computational basis states within Hamming distance $d$ of bit string $b$, and apply the short path algorithm on that set. Finally, apply amplitude amplification to that algorithm rather than choosing $d$ randomly.\n\nIt may be worth investigating such algorithms. Interestingly, it is possible to study this algorithm efficiently on a classical computer for choices of $H_Z$ which depend only on total Hamming weight, such as those considered above.\nTo do this, suppose that bit string $b$ has total Hamming weight $w_b$. Then, without loss of generality, suppose that bit string $b$ is equal to $1$ on bits $1,\\ldots,w_b$ and is equal to $0$ on bits $w_b+1,\\ldots,N$.\nThen, the Hamiltonians and projectors considered are invariant under permutation of bits $1,\\ldots,w_b$ and under permutation of bits $w_b+1,\\ldots,N$, so that we may work in the symmetric subspace under both permutation.\nThe basis vectors in this symmetric subspace can be labelled by two integers, $w_1,w_2$, where $w_1=0,\\ldots,w_b$ is the total Hamming weight of bits $1,\\ldots,w_b$ and $w_2=0,\\ldots,N-w_b$ is the total Hamming weight of bits $w_b+1,\\ldots,N$.\nThen, this gives us a basis of size $O(N^2)$ and hence we can perform the classical simulation efficiently.\n\nHowever, we suspect that while this may be useful for the very simple piecewise linear potentials considered here, it will probably not be useful for more general $H_Z$. If there are many local minima (so that for any $b$ which is proportional to $N$ there are many comparable local minima in that basin), then there will probably still be a slowdown for either algorithm.\n\n\\section{Discussion}\nWe have considered the short path algorithm in some toy settings. We have considered a case with multiple well-separated local minima in the potential $H_Z$, where one minimum (not the global minimum) is wider and so its energy drops more rapidly as a function of transverse field.\nThis is the setting which is worst for the adiabatic algorithm, but some nontrivial speedup is still found for the short path algorithm.\nThe speedup is modest, but this may be because we have taken such a large $\\delta_w$. For smaller $\\delta_w$ (which also may be more realistic) the speedup becomes bigger.\n\nFor the case of a piecewise linear potential, a Grover speedup of a greedy classical algorithm works best. However, we find that adding fluctuations to the potential (which will defeat this simple algorithm) has little effect on the short path. Of course, this is not to be interpreted as implying that no classical algorithm can do well in this case. 
Rather, it is a simple case with many minima that can still be studied numerically at large sizes.\n\nThe potential $H_Z$ that we consider is not one of those for which the previous proofs on the short path algorithm work, since it cannot be written as a homogeneous polynomial in the variables $Z_i$. This means that $H_Z$, averaged over points at a given Hamming distance from the global minimum, may behave in a more complicated way than expected; the proofs regarding the short path algorithm rely heavily on properties of this average.\nNevertheless, some speedup is still found, which suggests that the algorithm may be worth applying more broadly.\n\n{\\it Acknowledgments---} I thank D. Wecker for useful comments.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\\label{sec:intro}\nA dusty plasma comprises of electrons, ions, neutrals and micron sized charged dust particles. The great diversity in the space and time scales of these constituent components make for a rich collective dynamics of this medium and has made dusty plasmas an active field of research for the last three decades or so \\cite{pkshukla,goreerfanddc,morfill2009complex,firstcrystal}. One of the spectacular phenomena that can occur in a dusty plasma is the formation of an ordered arrangement of the dust component with characteristic features of a crystalline structure. This happens because the dust component can develop strong correlations due to the large charge on each dust particle and its low thermal energy. When the Coulomb parameter $\\Gamma$, quantifying the ratio of the dust electrostatic potential energy to its thermal energy, exceeds a critical value the system can undergo a phase transition from a liquid state to a solid (crystalline) phase. The first experimental observation of such a Couloumb dusty plasma crystal, by two independent groups \\cite{LinI94,firstcrystal} in 1994, opened up a whole new area of research in dusty plasma physics. \n Dusty plasma crystals provide an excellent experimental platform for investigating a host of fundamental physics problems associated with phase transitions and allied topics that have relevance for areas as diverse as statistical mechanics, soft condensed matter, strongly and weakly coupled systems, active matter dynamics and warm dense matter. The ease of observation of the individual dust dynamics coupled with the convenient time scales of the collective dynamics of such systems have spurred a great deal of laboratory investigations of such crystals. Some of these investigations include melting processes in two and three dimensional crystals \\cite{plasmacrystalmelting,fullmelting, ShockMelting}, dust lattice waves \\cite{wakes,dustlatticewaves}, heat transport \\cite{HeatTransfer,heattransport}, viscosity \\cite{Viscosity}, recrystallization\\cite{Recrystallization}, instabilities \\cite{modeinstability}, dust oscillations in magnetic field\\cite{magneticfield}, effect of magnetic field on phase transition \\cite{magneticfieldphasetransition}, photophoretic force \\cite{Photophoretic} and entropy production \\cite{naturepaper}. \\par \n\nMost of the above laboratory investigations have been carried out on dusty plasma crystals created in a RF discharge plasma \\cite{firstcrystal,Dustyplasmacavities,modeinstability,PhaseSeparation,Photophoretic,naturepaper,wakes,\ndustlatticewaves} and there are very few reported studies of dusty plasma crystals in DC glow discharge plasmas. Among such few works are those of Vladimir \\textit{et al.} \\cite{dcglowdischarge} and Maiorov \\textit{et al.} \\cite{dcgasmixture} who reported the formation of such crystals by the trapping of dust particles in the standing striations that appear under certain conditions in the positive column of a DC glow discharge. One of the limitations of investigating crystals created in such a manner is that the strong variations in the electric field in the striations lead to small scale inhomogeneities in the particle clouds. Another disadvantage is the narrow range of discharge parameters over which such crystals can be formed and sustained. This prevents the exploration of such structures over a wide parametric domain. 
Mitic {\\it et al} \\cite{Mitic2008} were successful in avoiding the formation of striations by adjusting the parameters of the plasma discharge and trapping the dust particles in the plasma sheath region. They used a very high discharge voltage ($\\sim 1000\\;V$), which led to the creation of a longitudinal electric field and a concomitant large directed ion flow. To mitigate the effects of the longitudinal electric field on the particles in the plasma, they switched the polarity of the voltage between the electrodes with a frequency of 1 kHz, thereby substantially reducing the directed flow of ions in a time-averaged manner. While the detailed underlying reasons behind the difficulty of forming dust crystals in a DC glow discharge, compared to an RF discharge, are not fully understood, there appear to be two contributory factors. These are lower charging of the dust particles and excessive heating of the dust by ion bombardment. Since in a DC plasma the instantaneous and time-averaged electric fields are the same \\cite{goreerfanddc}, the dust and the electrons behave similarly despite their tremendously different masses.\nIn RF plasmas, on the other hand, electrons can respond to megahertz frequency reversals of the electric field, whereas the heavier dust particles cannot. As a result, the electrons can easily move into spatial regions where dust cannot. This difference between DC and RF plasmas has a significant effect on the charging of dust particles \\cite{goreerfanddc}. In a DC plasma, the sheath has an insignificant electron density and therefore a dust particle that is immersed deep inside a DC sheath cannot collect many electrons. So the charge acquired by a dust particle in the DC sheath region is considerably less than that in an RF sheath and, consequently, the Coulomb parameter (for the same dust temperature) tends to be relatively lower. The second factor, namely the heating due to the impact of streaming ions on the dust particles, is related to the geometry of the conventional DC glow discharge systems where the electrodes are symmetric and placed directly facing each other. The ions accelerated by the DC voltage then directly impinge on the levitated dust cloud that is formed between the electrodes and cause its temperature to rise. This lowers the Coulomb parameter, thereby inhibiting the formation of a crystalline structure. Any provision to mitigate these constraints in a conventional DC glow discharge set-up could facilitate the formation of dust crystals in such devices. We believe our present system configuration succeeds in doing that, as will be discussed later. \\par\n\nIn this paper we report the first observation of a dusty plasma Coulomb crystal in the cathode sheath region of a DC glow discharge plasma. The experiments have been carried out in the DPEx device \\cite{jaiswal2015dusty} with a unique electrode configuration that permits investigation of dust Coulomb crystals over a wide range of discharge parameters with excellent particle confinement. The crystalline character of the configurations has been confirmed by a variety of diagnostic analyses including the pair correlation function, the Voronoi diagram, the structural order parameter, the dust temperature and the coupling parameter, which are obtained from the time evolution images of the particle positions. \\par\nThe paper is organized as follows. Section II contains a brief description of the experimental set-up (the DPEx device) and the configuration of the electrodes. 
Section III presents our main results of the crystal formation along with its analysis using various methods. Section IV provides some concluding remarks including a discussion on the possible advantages arising from the present configuration that facilitates crystal formation. \\\\\n\\section{Experimental Set-up}\\label{sec:setup}\n\\begin{figure}[ht]\n\\includegraphics[scale=1.0]{fig1}\n\\caption{\\label{fig:fig1} A schematic diagram of the dusty plasma experimental (DPEx) setup. }\n\\end{figure}\n\nOur experiments were carried out in the DPEx device, consisting of a $\\Pi$-shaped glass vacuum chamber in which an Argon plasma is created by striking a discharge between a disc-shaped steel anode and a long grounded steel plate as a cathode. Fig.~\\ref{fig:fig1} provides a schematic diagram of the experimental arrangement. More details about the DPEx device and its operational characteristics are given in Jaiswal {\\it et al.} \\cite{jaiswal2015dusty}.\nA base pressure of $0.1$~Pa is achieved in the chamber with a rotary pump and the working pressure is set to 8-12 Pa by adjusting the pumping rate and the gas flow rate. Argon plasma is then produced by applying a DC discharge voltage of 280-320 V between the anode and the cathode, and the discharge current is measured to be 1-4 mA. Mono-disperse melamine formaldehyde (MF) particles of diameter 4.38 $\\mu$m are then introduced in the chamber by a dispenser to form the dust component. These dust particles get negatively charged by accumulating more electrons (due to their higher mobility) than ions and levitate in the cathode sheath region due to a balance between the gravitational force and the vertical electrostatic force of the sheath. A metal ring of diameter 5 cm is placed on the tray, as shown in Fig.~\\ref{fig:fig1}; the ring facilitates the radial confinement and also gives the crystals a circular shape. The dust particles usually levitate at the centre of the ring at a height of $\\sim$ 1-2 cm above the cathode plate such that the laser light gets scattered from the particles. In addition, it is possible to vary the discharge parameters to manipulate the sheath around the ring and cathode to control the crystal size and the crystal levitation height, respectively. The confinement ring also influences the dynamics of the streaming ions in a significant manner, as will be discussed later. A green laser is used to illuminate the micron-sized particles and the Mie-scattered light is captured by a CCD camera and stored on a computer for further analysis. \\par\n\n\\section{Experimental Results and Data Analysis}\n\\subsection{Crystal structure}\nA crystalline structure of the dust mono-layer is obtained by a careful manipulation of the neutral pressure for a given discharge voltage. Fig.~\\ref{fig:fig2}(a) shows a camera image of a typical Coulomb-coupled dusty plasma crystal with dust particles of diameter 4.38 $\\mu m$ at a discharge voltage of 300 V and a neutral pressure of 10 Pa. The crystal is circular in shape due to the horizontal (radial) confinement provided by the circular metal ring. The structure is seen to be made up of hexagonal cells which display a good translational periodicity indicative of long range order. 
\nWe have been successful in obtaining such crystalline structures over a range of neutral pressures (8 Pa to 12 Pa), different discharge voltages (280 V to 360 V) and for two different particle sizes, namely of diameter 4.38 $\\mu m$ as shown in Fig.~\\ref{fig:fig2}.(a) and diameter 10.66 $\\mu$m as shown in Fig.~\\ref{fig:fig2}.(b). Once formed these crystals can be stably maintained for hours and can be conveniently used to conduct a variety of detailed parametric studies. To confirm and establish the true crystalline nature of the structure we have also carried out a set of diagnostic tests on the visual data in the form of calculating the pair correlation function, constructing a Voronoi diagram, estimating the dust temperature and the Coulomb coupling parameter. Below we provide a detailed description of the results of our analysis. \n\\begin{figure}[ht]\n\\includegraphics[scale=1.0]{fig2}\n\\caption{\\label{fig:fig2} Direct camera images of dusty plasma crystals obtained for dust particle sizes of (a) $4.38 \\mu m$ and b) 10.66 $\\mu$m.}\n\\end{figure}\n\\subsection{Radial Pair Correlation function}\nThe radial pair correlation function (RPDF) \\cite{paircorrelation}, $g(r)$, is a measure of the probability of finding other particles in the vicinity of a given particle and provides useful information about the structural properties, such as the range of order, of a system. The relevant pair correlation function for the structure in Fig.~\\ref{fig:fig2}.(a), calculated by taking into account all pairs of particles, is plotted in Fig.~\\ref{fig:fig3}.(a). The occurrence of multiple periodic peaks in this function establishes the long range ordering of the structure and indicates the formation of a crystalline structure. The position of the first peak provides information on the inter-particle distance and its variation with discharge parameters can serve as a useful diagnostic to study the spatial distribution of the particles in the crystal. For the crystal shown in Fig.~\\ref{fig:fig2}.(a) the average inter-particle distance is $\\sim 300 \\mu m$. The RPDF is also a useful tool to estimate the correlation void ($l$) which is defined as the distance at which probability to find a particle around a reference particle becomes half (i.e $g(r)|_{r=l} =0.5$). It gives an indirect measure of the repulsive force experienced by a dust particle due to other neighboring dust particles. In this particular case, the void length turns out to be $\\sim 130 \\mu m$. Using the correlation function we have also studied the spatial variation of the inter-particle distance as a function of the radial distance away from the center of the crystal. For doing this, the crystal data has been sampled over small regions of 5 $mm^{2}$ area and the RPDF has been calculated in each region and the inter-particle distance obtained from the position of the first peak. The results for the crystal structure of Fig.~\\ref{fig:fig2}.(b) are shown in Fig.~\\ref{fig:fig3}.(b). As can be seen the inter particle distance is minimum at the centre and increases as one moves away to the outer edge in a radial direction. This spatial inhomogeneity of a dust crystal is consistent with earlier numerical simulation results of Totsuji et.al \\cite{totsuiji} and is also a characteristic feature of two dimensional finite crystalline structures \\cite{Filinov2001}. As a further confirmation of the crystalline structure we have also measured the orientation ordering of the dust particle formation. 
The orientation order of a system is generally quantified by estimating the bond order parameter $(\\psi_6)$ \\cite{Bonitzbook}. For a perfect crystal, $\\mid{\\psi_6}\\mid$ would be one, but in most experimental situations a value greater than 0.45 is considered to be indicative of a crystalline state \\cite{Bonitzbook}. In our experiments, the bond order parameter corresponding to Fig.~\\ref{fig:fig2}(a) is estimated by calculating the local order parameter for each particle in the system and then averaging over all particles; it comes out to be 0.68.\n\n\\begin{figure}[ht]\n\\includegraphics[scale=0.75]{fig3}\n\\caption{\\label{fig:fig3} (a) Pair correlation function corresponding to the crystal shown in Fig.~\\ref{fig:fig2}(a). (b) Spatial variation of the inter-particle distance.}\n\\end{figure}\n\n\\subsection{Voronoi Diagram and Delaunay Triangulation}\n The Voronoi diagram \\cite{voronoi} is another useful tool to portray the amount of order (or measure the amount of disorder) in a particular configuration. In the case of the two-dimensional dusty plasma crystal, it consists of a partitioning of the plane into regions based on distances to each dust position. For each dust particle position there is a corresponding region consisting of all points closer to it than to any other dust particle position. A convenient way to construct the Voronoi diagram is through the Delaunay Triangulation, which forms a network of triangles by connecting each dust particle to its nearest neighbours. The perpendicular bisectors of each of these connecting lines (bonds) then create the Voronoi diagram. Thus, if a particle has six neighboring particles, it will be surrounded by a six-sided polygon. If there are fewer or more neighbors, it will have a polygon with that many sides in the Voronoi diagram, and these cells are classified as defects. Fig.~\\ref{fig:fig4}(a) and (b) show the Delaunay Triangulation and the Voronoi diagram, respectively, for the dusty plasma crystal shown in Fig.~\\ref{fig:fig2}(b). The yellow-colored cells represent hexagonal cells while the other polygons shown in different colors represent defects. \n\nAs can be seen, most of the cells are hexagonal, indicating a highly ordered crystalline structure with very few defects. In the Delaunay diagram the same information is provided by the number of lines passing through each node of the diagram and any deviation from six would indicate a defect.\nThe nature of the defects is highlighted by the two circles shown in Fig.~\\ref{fig:fig4}(a). The dashed circle shows a disclination in the crystal since, in the region inside the circle, seven lines pass through the node instead of six. Correspondingly, in the Voronoi diagram the disclinations appear as green and red polygons that are not hexagons. An array of disclinations is also visible at the bottom right side of Fig.~\\ref{fig:fig4}(b). This array of disclinations leads to line defects in the crystal, which can be traced out from the Delaunay Triangulation. In Fig.~\\ref{fig:fig4}(a), the area under the solid circle has line defects in the form of one line splitting into two and the bending of grain boundaries. Such defects have also been observed previously in dust crystals created in RF plasma discharges \\cite{firstcrystal,Quinn1996}. \n \n \\begin{figure}[ht]\n\\includegraphics[scale=0.7]{fig4}\n\\caption{\\label{fig:fig4} (a) Delaunay Triangulation with defect regions marked by circles. (b) Corresponding Voronoi diagram.}\n\\end{figure}
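\n\nAs an illustration of how these diagnostics can be obtained from the recorded particle positions, the following minimal sketch computes the radial pair correlation function $g(r)$ and the bond order parameter $\\psi_6$ (using Delaunay neighbours); the input file name, the binning and the neglect of edge corrections are illustrative assumptions rather than the exact procedure used for the figures above.\n\\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

# pos: (n, 2) array of particle coordinates from one camera frame (file name is hypothetical)
pos = np.loadtxt('frame_positions.txt')

def pair_correlation(pos, dr, r_max):
    # radial pair correlation g(r), ignoring edge corrections
    n = len(pos)
    rho = n / (np.ptp(pos[:, 0]) * np.ptp(pos[:, 1]))   # mean areal density
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    dist = dist[np.triu_indices(n, k=1)]                # unordered pairs
    hist, edges = np.histogram(dist, bins=np.arange(dr, r_max + dr, dr))
    shell = 2.0 * np.pi * edges[:-1] * dr               # annulus areas
    return edges[:-1], hist / (0.5 * n * rho * shell)

def bond_order(pos):
    # mean |psi_6| using Delaunay nearest neighbours
    tri = Delaunay(pos)
    indptr, nbrs = tri.vertex_neighbor_vertices
    psi = np.empty(len(pos), dtype=complex)
    for j in range(len(pos)):
        nb = nbrs[indptr[j]:indptr[j + 1]]
        theta = np.arctan2(pos[nb, 1] - pos[j, 1], pos[nb, 0] - pos[j, 0])
        psi[j] = np.exp(6j * theta).mean()
    return np.abs(psi).mean()
\\end{verbatim}\n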
\n\nFurther, in order to quantify the ordering of the dust cluster, a structural order parameter P can be defined as the ratio of the number of hexagonal cells to the total number of polygons in the Voronoi diagram, \n\\begin{eqnarray}\nP=\\frac{N_H}{N_T}\\times 100 \\%\n\\end{eqnarray}\nwhere $N_H$ and $N_T$ are the number of hexagonal structures and the total number of polygons, respectively. The structural order parameter corresponding to Fig.~\\ref{fig:fig2}(b) is estimated to be 88\\%. \\\\\n \\subsection{Estimation of dust temperature and Coulomb coupling parameter using Langevin Dynamics}\n Dust temperature and the Coulomb coupling constant are two important parameters that help in determining the state of the dusty plasma system. To determine these parameters, the dust particles are assumed to be distinguishable classical particles and to obey a Maxwell-Boltzmann distribution. In local equilibrium, when the interaction of these dust particles with the plasma as well as with individual neutral atoms is, on average, balanced by neutral friction, the dynamics of individual particles in the lattice can, in principle, be described by a Langevin equation \\cite{langevindynamics,morfill2009complex}. In such cases, for each lattice cell one can write the probability distribution \\cite{langevindynamics} as, \\par\n\\begin{eqnarray}\nP(r,v)\\propto \\exp\\left[-\\frac{m{(v-\\langle v\\rangle)}^2}{2T}-\\frac{m{\\Omega_E}^2r^2}{2T}\\right],\n\\end{eqnarray}\nwith all $v$ available in phase space and with $T$ being the particle temperature, $\\Omega_E$ the Einstein frequency and $m$ the mass of the dust particle. The standard deviation of the velocity distribution and of the displacement distribution independently yield the dust temperature and the coupling parameter, respectively. The displacement distribution primarily provides the Einstein frequency, which can be used to calculate the coupling parameter once the inter-particle distance is known. The standard deviation of the velocity distribution is given by ${\\sigma_v}= \\sqrt{\\frac{T}{m}}$ and the standard deviation of the displacement distribution is given by ${\\sigma_r}= \\sqrt{\\frac{T}{m\\Omega_E^2}}=\\sqrt{\\frac{\\Delta^2}{\\Gamma_{eff}}}$, where ${\\Gamma_{eff}}=f^2(k)\\times\\Gamma$ and $\\Gamma=\\frac{Q^2}{T\\Delta}$. $f^{2}(k)$ is the correction factor for Yukawa screening with $k$ as the ratio of inter-particle distance to the Debye length, and for a 2D dusty structure it can be expressed as $3e^{-k}(1+k+k^{2})$. $Q$ and $\\Delta$ are the charge acquired by a single dust particle and the inter-particle distance, respectively. Here, $\\Gamma_{eff}$ is the modified coupling parameter that takes into account the Yukawa screening potential and its corrections \\cite{langevindynamics}. Fig.~\\ref{fig:fig5} shows the velocity distribution and the displacement distribution for our experimental data, where approximately 100 particles have been tracked over 200 consecutive frames. The dust temperature is estimated to be 0.1 eV from the velocity distribution function and the coupling parameter is found to be about 350 from the displacement distribution function. Hence, it can be concluded that the dusty plasma system is in the crystalline phase, as the Coulomb coupling parameter value is much above the theoretical critical value \\cite{ikezi1986} of 172. 
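\n\nA minimal sketch of this estimate, following the relations $T=m\\sigma_v^2$ and $\\Gamma_{eff}=\\frac{\\Delta^2}{\\sigma_r^2}$ quoted above, is given below; the arrays of tracked velocities and displacements are assumed to be available from the particle tracking, and the commented values are nominal rather than measured.\n\\begin{verbatim}
import numpy as np

e_charge = 1.602e-19                      # J per eV

def dust_temperature_eV(velocities, m_d):
    # T = m * sigma_v^2, from the spread of the velocity distribution
    sigma_v = np.std(velocities)          # fluctuation about the mean drift, m/s
    return m_d * sigma_v**2 / e_charge    # temperature in eV

def coupling_parameter(displacements, delta):
    # Gamma_eff = Delta^2 / sigma_r^2, from the spread of the displacement distribution
    sigma_r = np.std(displacements)       # displacement about the equilibrium site, m
    return (delta / sigma_r) ** 2

# example with assumed inputs: 4.38 um MF particle (nominal density ~1510 kg/m^3), Delta ~ 300 um
# m_d = 1510.0 * (4.0 / 3.0) * np.pi * (2.19e-6) ** 3
# print(dust_temperature_eV(vx, m_d), coupling_parameter(dx, 300e-6))
\\end{verbatim}\n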
By scanning across the radial width of the crystal, we also determine the local variation of the temperature $T$ and the coupling parameter $\\Gamma$ as a function of the distance from the center. These variations are shown in Fig.~\\ref{fig:fig6}. The dust temperature is found to be maximum at the center and reduces towards the edge. At the center, the dust temperature is around 0.12 eV and reduces to 0.08 eV at the edge. The maximum of the temperature at the centre arises from the strong inter-particle potential energy and the high Coulomb pressure there, as the inter-particle distance is minimum at the centre. A small displacement of the particles from their equilibrium position makes them move more randomly around the equilibrium position. As a result, the screened Coulomb potential energy gets converted into kinetic energy, which leads to a gain in the temperature at the centre. As a consequence, the coupling parameter shows a variation similar to the variation of the inter-particle distance as shown in Fig.~\\ref{fig:fig3} (b). The coupling parameter has a minimum of 180 at the centre and a maximum at the edge. The higher temperature at the centre thus leads to a lower coupling parameter, while the cooler region towards the edge has a higher coupling parameter, even though the inter-particle distance is smaller at the centre. Our present analysis suggests that the dust crystal obtained in our experiments is spatially inhomogeneous -- a characteristic feature of finite-sized two-dimensional ordered structures \\cite{Filinov2001}. \n\\begin{figure}[ht]\n\\includegraphics[scale=0.8]{fig5}\n\\caption{\\label{fig:fig5} (a) Velocity distribution (b) Displacement distribution. }\n\\end{figure}\n\n\n \\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.9]{fig6}\n\\end{center}\n\\caption{\\label{fig:fig6} (a) Dust temperature and (b) coupling parameter as a function of radial distance. }\n\\end{figure}\n\\subsection{Characterization of dust crystal with neutral gas pressure}\nWe have next studied the dependence of some of the characteristic features of the crystal on the neutral gas pressure in the DC device. Fig.~\\ref{fig:fig7}(a) shows the variation of the inter-particle distance in the central region of the crystal over a range of neutral pressures. It is seen that the inter-particle distance increases with an increase in the neutral pressure. This can be understood in terms of earlier observations made in the DPEx device on the variation of plasma parameters with pressure \\cite{jaiswal2015dusty}. It has been shown that an increase in the neutral pressure raises the plasma density (due to increased ionization) and lowers the electron temperature (due to friction), which results in a net decrease in the plasma Debye length. This leads to a shrinking of the sheath width and thereby an expansion of the crystal area and hence an increase in the inter-particle distance. The dust temperature also falls with an increase in the neutral pressure, as shown in Fig.~\\ref{fig:fig7}(b), which can be directly attributed to the collisional cooling of the dust particles. The dependencies of the inter-particle distance and the dust temperature on the neutral pressure in turn affect the coupling parameter $\\Gamma$, whose variation is shown in Fig.~\\ref{fig:fig7}(c). The coupling parameter is found to increase with neutral pressure up to $p=9.5$ Pa, after which it is observed to decrease. 
This is because, at first, even though the inter-particle distance increases, the dust temperature falls rapidly, which causes the coupling parameter to increase. When the neutral pressure is increased further, the inter-particle distance increases and the dust temperature decreases. However, the fall of the dust temperature is no longer as steep. This leads to a decrease in the coupling parameter, since the coupling parameter depends inversely (and, through the screening factor, exponentially) on the inter-particle distance. Hence the variation of the coupling parameter with neutral gas pressure as shown in Fig.~\\ref{fig:fig7}(c) is divided into two regions. In Region-I, the coupling parameter is temperature dominated whereas in Region-II it is inter-particle distance dominated. It is also to be noted that the coupling parameter remains in the solid state regime over this entire range of pressures. Fig.~\\ref{fig:fig7}(d) shows the structural order parameter variation. It is found to be almost constant ($\\sim 85\\%$) with the change of neutral gas pressure in the range of 8 Pa to 11 Pa, which essentially indicates that the order of the dusty plasma crystal does not depend on the neutral gas pressure in this specific range. \n\\begin{figure}\n\\begin{center}\n\\includegraphics[scale=0.7]{fig7}\n\\end{center}\n\\caption{\\label{fig:fig7} Variation of (a) inter-particle separation, (b) dust temperature, (c) coupling parameter and (d) structural order parameter (P) with neutral gas pressure. }\n\\end{figure}\n\\subsection{Dependence of the dust temperature on the particle size}\n\nTo investigate the dependence of the dust temperature on the particle size, comparisons were made between the characteristics of the two crystalline structures of Fig.~\\ref{fig:fig2} under similar discharge conditions. It was found that the dust temperature of the larger particle crystal was higher at around 2 eV compared to 0.1 eV in the case of particles of diameter 4.38 $\\mu$m. According to the capacitive model \\cite{dustcharge1,dustcharge2} for a spherical capacitor, the charge residing on a dust particle of radius \\lq$a$' is estimated to be $Q=CV_s$, where $C=4\\pi\\epsilon_0a$ is the capacitance of a spherical dust particle and $V_s$ is the surface potential, which can be estimated using a Collision-Enhanced Collection (CEC) model \\cite{khrapak2005,khrapak2006}. It essentially indicates that the bigger particle acquires a higher charge than a smaller one for a given discharge condition and, as a result, also suffers larger dust charge fluctuations. This observation agrees well with the theoretical prediction of Vaulina \\textit{et al.} \\cite{chargefluctuationheating}, which identifies dust charge fluctuations as one of the main mechanisms heating the dust particles. Their study also concludes that the dust charge fluctuation directly depends on two basic parameters of a dusty plasma, namely the charge acquired by the dust particles and their mass. Consequently, the higher charge leads to a higher temperature because of the charge fluctuation heating mechanism, which is exactly what is observed in our experiments. 
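\n\nFor orientation, the capacitive estimate can be evaluated directly for the two particle sizes used here; the surface potential below is an assumed illustrative value of the order of a few electron temperatures, not a measured quantity.\n\\begin{verbatim}
import numpy as np

eps0 = 8.854e-12                 # vacuum permittivity, F/m
e = 1.602e-19                    # elementary charge, C
V_s = 6.0                        # assumed surface potential in volts (illustrative only)

for dia_um in (4.38, 10.66):
    a = 0.5 * dia_um * 1e-6                  # particle radius in metres
    Q = 4.0 * np.pi * eps0 * a * V_s         # capacitive charge Q = C V_s
    print('d = %5.2f um: Q = %.2e C (about %d electrons)' % (dia_um, Q, round(Q / e)))
\\end{verbatim}\n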
According to the theoretical model \\cite{chargefluctuationheating}, the temperature rise from the charge fluctuations is given by \n\\begin{eqnarray}\nT_f=\\frac{\\mid{Z_d}\\mid^3 e^4}{2m_dl^4}+\\frac{m_dg^2}{2\\mid{Z_d}\\mid\\xi}\n\\end{eqnarray}\nwhere $\\mid{Z_d}\\mid$, $m_d$, $e$ and $g$ are the charge acquired by the dust, the dust mass, the electronic charge and the acceleration due to gravity, respectively. $l$ and $\\xi$ are parameters that depend on the system dimension, the electron temperature, the ion mass and the dust radius. In order to compare the theoretical and experimental temperature rise from the charge fluctuations of the dust particles, an experiment is carried out to measure the dust temperature at a neutral pressure of 10 Pa, an electron temperature of 3 eV and an ion density of $10^{15}m^{-3}$ \\cite{jaiswal2015dusty} for dust particles of diameters 4.38 $\\mu$m and 10.66 $\\mu$m. Theoretically, it is found that the temperature rise from the charge fluctuation is $\\sim$35 times higher for the case of particles of diameter 10.66 $\\mu$m compared to the particles of diameter 4.38 $\\mu$m. In the experimental observations, this ratio comes out to be $\\sim$20. The difference may arise from dust heating mechanisms other than the charge fluctuation mechanism.\n\\begin{figure}[ht]\n\\includegraphics[scale=0.6]{fig8}\n\\caption{\\label{fig:fig8} Cathode after being exposed to the plasma for several hours. }\n\\end{figure}\n\n\\section{Summary and Conclusions}\n\\label{sec:Conclusion} \n\nTo summarize, we have successfully demonstrated the creation and sustenance of dusty plasma crystals in a DC glow discharge plasma over a wide range of discharge parameters. These crystals are long-lived and can be sustained for hours by maintaining the background plasma conditions. To establish the crystalline nature of these structures, we have carried out a number of diagnostic tests, including determination of the radial pair correlation function, the orientational bond order parameter, and Delaunay triangulations and Voronoi diagrams of the structures. Other characteristic features of these finite-sized crystals, such as inhomogeneity in particle spacing and spatial variations of the dust temperature, have also been experimentally demonstrated. We have also studied the dependence of the crystal properties on the background neutral pressure and size of the dust particles and delineated these dependencies in terms of the underlying changes in the basic plasma properties sustaining the crystal. \\\\\n\nOn the question of what factors have been responsible for the present success in creating a dust crystal in a DC glow discharge, we believe our experiments carried out in the DPEx setup have benefited from two novel features of the device. The asymmetrical electrode configuration (a small circular anode and a large rectangular cathode), in which the electrodes do not face each other but are arranged in the geometry shown in Fig.~\\ref{fig:fig1}, has helped in reducing the heating effects associated with ion streaming. The ion path has been further influenced by the metal confinement ring, which appears to have kept the streaming ions away from the inner central region of the ring where the crystals are formed. We have found experimental evidence of such a behaviour of the ions by examining the surface features of the cathode after several hours of experimental shots and exposure of the cathode to the plasma. Fig.~\\ref{fig:fig8} shows a snapshot of the cathode after such an exposure. 
The regions of strong sputtering arising from the impact of the energetic ions are seen as dark burnt areas. We notice that the ion bombardment is primarily restricted to two circular regions marked as (b) and (d) that encircle the confining ring on the inside and outside. The regions marked (a) and (c) are relatively free of energetic ion impacts -- with (a) being just above the ring and (c) being the central region where the dust particles are levitated to form a mono-layer. When the ring is removed, we are unable to obtain a proper crystalline structure. Thus the combination of the geometry of the electrodes and the presence of the confining ring plays an important role in facilitating the creation of dusty plasma crystals in a DC glow discharge plasma. Our findings could be useful for exploring other innovative modifications in various DC glow discharge devices to make them suitable for the study of dusty plasma crystals.\\\\\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\nCompared to other buildings, supermarkets consume proportionately more energy~\\cite{TIMMA2016435,Mylona}.\nThis is mainly due to refrigeration needed to slow down the deterioration of food, by retaining them on a predetermined temperature~\\cite{Mylona}.\nElectricity costs associated with refrigeration accounts for a large part of the operating costs because these machines are continually utilizing energy, day and night.\nAs a result, costs associated with refrigerator equipment can represent more than 50\\% of the total energy costs~\\cite{Opti,Mavro,FoodRetail,TIMMA2016435}.\nRetailers operate in an industry that is characterized as \ncompetitive and low-margin~\\cite{Opti}.\nIf they are able to become more energy efficient this can make them more competitive.\nThis outlines the importance of operating the system at its optimum performance level so the associated energy costs can be reduced.\n\nEnergy baselining makes it possible to analyze the energy consumption by comparing it to a reference behavior~\\cite{Stulka}.\nFurthermore, it can be used to measure the effectiveness of energy efficiency policies by monitoring energy usage over time.\nChanges in energy policies, such as retrofitting the equipment, can require high investments.\nThis makes it important for a retailer to know if the investments are truly effective, in the reduction of energy consumption.\nTo estimate energy savings with reasonable accuracy, the energy baselines need to be accurate. \nIt can be challenging to estimate the quality of these energy baselines.\nOne way is to run the old policies in parallel with the new ones, which is often impossible.\nDetermining the quality of these baselines can yield significant results for supermarkets.\n\nThe objective of this work is to develop energy baselines using off-the-shelf data science technologies.\nDifferent technologies will be tested and applied on the data obtained from several supermarkets to test their performance.\nFives supermarkets in Portugal, will be analyzed as a case study with a methodology based on energy baselining.\n\n\n\n\n\n\\section{Background}\nThe characteristics of the food-retail industry, such as fierce competition and low margins, makes retailers continually search for ways to operate more efficiently~\\cite{Opti}.\nSince energy costs are the second highest costs for a retailer~\\cite{HPAC}, a decent energy management process is vital for improving efficiency~\\cite{SCHULZE20163692}.\n\nEnergy Management (EM) has been the subject of numerous studies throughout the years, and, because the field of EM is wide, it can be described in many different ways~\\cite{SCHULZE20163692}.\nA purpose of EM is to search for improved strategies to consume energy in a more efficient way.\nFrom a business point of view, greater energy efficiency is of importance because it provides a number of direct, and indirect, economic benefits~\\cite{WORRELL20031081}.\n\nSeveral reasons can keep companies from investing in energy efficiency measures ~\\cite{Gillingham}.\nFor example, when inadequate information is available about the results of these investments,\nthis can limit companies to invest in them~\\cite{Gillingham}.\nEnergy management can focus on addressing these factors to enable businesses to invest.\nIn order to evaluate the efficiency an energy efficiency measure the observed energy consumption of the store\/system must be compared to a \\emph{reference behavior}~\\cite{Stulka}.\nOne way to create this reference behavior is 
to use energy baselining, where the reference behavior is defined as the previous, historically best, or ideal, theoretical performance of the given store~\\cite{Mavro}.\nEnergy baselines are usually created based on the analysis of historical data~\\cite{Stulka} and can be developed using traditional data mining techniques.\n\nTime-series prediction is a method of forecasting future values based on historical data~\\cite{CHOU2016751}.\nIn time series forecasting, forecasts are made on the basis of data comprising one or more time series~\\cite{chatfield2000time}.\nTime series data are defined as the sort of data that is captured over a period of time~\\cite{hamilton1994time} (Eq.~\\ref{eq:TimeseriesFormula}).\n\\begin{equation}\n\\label{eq:TimeseriesFormula}\nX_{1},X_{2},\\ldots, X_{t-1},X_{t},\\ldots\n\\end{equation}\nwhere $X_t$ is the value measured at time $t$.\nCreating energy forecasts is an important aspect of the energy management of buildings~\\cite{WANG2017796}.\nFinally, making forecasts can also help in model evaluation when testing different time series algorithms~\\cite{chatfield2000time}.\n\nWe want to be able to use domain-specific knowledge to engineer new features; therefore, we decided to follow a regression approach.\nRegression is not a time series specific algorithm for forecasting; however, it can be applied to make time series forecasts.\nIn multiple regression models, we forecast the dependent variable using a linear combination of the independent variables.\nBased on this relationship, the algorithm will be able to predict a value for the dependent variable.\n\nWe selected off-the-shelf machine learning algorithms like Multiple Linear Regression (MLR), Random Forests (RF) and Artificial Neural Networks (ANN) to perform the regression.\nOne way to test the accuracy of the algorithms is to compare the predicted values with the actual observed values.\n\nNowadays, Machine Learning models and methods are applied in various areas and are used to make important decisions which can have far-reaching consequences~\\cite{BERGMEIR2012192}.\nTherefore, it is important to evaluate their performance.\nCurrently, Cross-Validation (CV) is the most widely accepted and most used evaluation technique in data analysis and machine learning~\\cite{JIANG2017219,BERGMEIR2012192}.\nHowever, Cross-Validation does not work well in evaluating the predictive performance of time series~\\cite{JIANG2017219}.\nOne way to validate the prediction performance of a time series model is to make use of a Sliding Window design~\\cite{HOOT2008116} (Figure~\\ref{fig:TS}).\nIn this method, the algorithm is trained and tested on different periods of time.\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/SlidingWindowValidation.jpg}\n \\caption{Example of a Sliding Window Validation} \n \\label{fig:TS}\n\\end{figure}\nTo evaluate the prediction performance of the algorithms we used the Mean Absolute Error (MAE) as the error metric because the MAE is the most natural measure of the average prediction error~\\cite{MAE,Wilmott}.\nThe following formula shows how the Mean Absolute Error is calculated:\n\\begin{equation} \\label{eq:MAE}\nMAE = \\frac{1}{N}\\sum_{i = 1}^{N} \\left| \\hat{Y}_i - Y_i \\right|\n\\end{equation}\nHere \\(\\hat{Y}_i\\) is the predicted value and \\(Y_i\\) is the observed value.
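\n\nA minimal sketch of such a sliding-window evaluation with the MAE as error metric is shown below; the window sizes and the choice of regressor are illustrative and do not correspond to a specific configuration used in this study.\n\\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def sliding_window_mae(X, y, train_days=60, test_days=10, step=10):
    # X, y: daily feature matrix and refrigeration consumption (kWh), ordered in time
    errors = []
    start = 0
    while start + train_days + test_days <= len(y):
        tr = slice(start, start + train_days)
        te = slice(start + train_days, start + train_days + test_days)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[tr], y[tr])
        errors.append(mean_absolute_error(y[te], model.predict(X[te])))
        start += step
    return float(np.mean(errors))
\\end{verbatim}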
\n\nNumerous studies focused on energy prediction because forecasting the energy consumption is an important component of any energy management system~\\cite{Nasr}.\nIn New Zealand~\\cite{SAFA2017107}, researchers used MLR to calculate the optimal energy usage level for office buildings, based on monthly outside temperatures and numbers of full-time employees.\nWith this knowledge, they could build an energy monitoring and auditing system for the optimization and reduction of energy consumption.\nIn the UK~\\cite{SPSS}, researchers used an MLR to forecast the expected effect of climate change on the energy consumption of a specific supermarket.\nThey estimated that, by 2040, the gas consumption will increase by 28\\%, compared to the electricity usage, which will increase by only 5.5\\%.\n\nIn the UK, most supermarkets negotiate energy prices and, when they exceed their predicted demand, they have to pay a penalty.\nTherefore, their ability to accurately predict energy consumption will facilitate their negotiations on electricity tariffs with suppliers.\nOne supermarket in the UK used ANNs to analyze the store's total electricity consumption as well as its individual systems, such as refrigeration and lighting~\\cite{Mavro}.\nFor each of these systems, they developed a model to provide an energy baseline.\nThis baseline is used for performance monitoring, which is vital to ensure that systems perform adequately and to guarantee that operating costs and energy use are kept to a minimum.\nFinally, ANNs have been used for energy prediction with the final goal of estimating the supermarket's future CO$_2$ emissions~\\cite{Chari}.\n\nA recent paper~\\cite{WANG2017796} provides a detailed literature review on the state-of-the-art developments of Artificial Intelligence (AI) based models for building energy use prediction.\nIt provides insight into ensemble learning, which combines multiple AI-based models to improve prediction accuracy.\nThe paper concludes that ensemble methods have the best prediction accuracy but that a high level of technical knowledge and computational resources is required to develop them.\nConsequently, this has hindered their application in real practice.\nAn advantage of high prediction accuracy is that this can allow early detection of equipment faults that could disrupt store operations~\\cite{Mavro}.\n\nThese studies show that predicting energy consumption is possible with data mining techniques and that they can predict energy usage within acceptable errors.\nCompared to other engineering methods, ensemble methods require less detailed information about the physical building parameters~\\cite{WANG2017796}.\nThis saves money and time in conducting predictions compared to simulation tools.\nHence, they could replace them in the future.\nBecause studies use different types and volumes of input data, there is no unified input data format.\nTherefore, knowledge of the methods and a variety of data is needed to create meaningful and accurate predictions.\n\n\\section{Defining baselines with Machine Learning Algorithms}\nEvery forecast $\\hat{Y_i}$ of an observed value ${Y_i}$ will have a forecast error $E$, which describes the deviation between them.\nThese deviations\ncan result from poor prediction performance or energy savings\/losses.\nIt is very hard to forecast a numeric value exactly; the deviations can be larger or smaller.\nThus, to provide good estimates of the effect of changes in energy management policies, it is important to have a learning model that can create energy baselines that are as accurate as possible.\n\nThe objective of this study is to assess the reliability of the learning model in different aspects.\nFirst, we want to determine 
which model is best in creating a reliable baseline with the smallest number of training days.\nThis can be beneficial in two specific situations: when a retailer opens a new store or implements new energy policies.\nWhen a new store is opened, no data has been collected about the energy performance of \\emph{this} specific store.\nTo create a baseline as soon as possible, it is essential to know how many days it takes to collect sufficient data.\nTherefore, we study the minimum number of days needed to create a reliable baseline.\nThis information is also suitable for updating the baseline when the configuration of the store changes, e.g., due to upgrades of the refrigeration equipment.\n\nWhen we know this setup, we want to discover the lifespan of this prediction, i.e., how long this energy baseline remains reliable after being learned.\nIt is important to determine how reliable the baseline is and if it needs updating, because we expect that the prediction error will grow over time.\nAs a result, the prediction error will behave differently for short- and long-term predictions.\nWith this information, the life-cycle of a model can be determined, which defines how often the model needs to be updated.\n\nWhen a new energy saving policy is implemented, the Retailer wants to estimate how much energy is saved.\nTherefore, a model has to be developed which is able to make long-term predictions based on the old configuration of the store.\nWith this baseline, the Retailer can see what the estimated energy consumption would be if the configuration of the store had not been changed.\nBy comparing this baseline with the observed energy consumption or the new baseline, the difference can be estimated.\nWe will examine the behavior of the model for long-term predictions because the Retailer needs to know for how long he can estimate, with reasonable accuracy, the energy gains from a certain energy policy.\n\n\\subsection{Approach}\nWe obtained time series data from five supermarkets across Portugal, which consist of measurements of the \\emph{Refrigeration Energy Consumption}, \\emph{Outside temperature} and the \\emph{Timestamp}.\nThe original time series data was provided in (sometimes irregular) 15-minute intervals.\nAfter regularizing these intervals, the data is converted into hourly values and eventually transformed to a daily format.\nThe energy consumption is measured in kilowatt hours (kWh) from the Retailer's energy monitoring system.\nThe weather data consists of the outside temperature derived from a sensor placed on the roof of the store and is measured in degrees Celsius ($^{\\circ}$C).\n\nIn order to apply a similar approach to the data of each store, we decided to work separately with datasets that have a similar structure.\nWe will use domain knowledge to create features for the datasets.\nThe process of designing new features, based on domain knowledge, is called Feature engineering~\\cite{LI2017232}.\nBefore creating these datasets, we first identified the dependent and the independent variables.\nIn this study, an energy baseline will be created that reflects the estimated refrigeration energy consumption.\nConsequently, this will be the dependent variable, and the independent variables are the ones influencing this consumption.\nOnly the factors that are measured by all stores can be used here as independent variables.
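\n\nA minimal sketch of the resampling described above (15-minute measurements to hourly and daily values) with pandas is given below; the file name and column names are assumptions about the input format rather than the Retailer's actual export.\n\\begin{verbatim}
import pandas as pd

# 15-minute measurements with a timestamp column (assumed input format)
raw = pd.read_csv('store_data.csv', parse_dates=['timestamp'], index_col='timestamp')

hourly = raw.resample('1H').agg({'energy_kwh': 'sum', 'temp_c': 'mean'})
daily = hourly.resample('1D').agg({'energy_kwh': 'sum',
                                   'temp_c': ['min', 'mean', 'max']})
\\end{verbatim}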
\n\n\\subsection{Estimating Reliability}\nFor a retailer it is important to estimate, with reasonable accuracy, the energy savings resulting from energy policies.\nIf we train an algorithm with data before an energy policy change, we can create an energy baseline that shows what the energy consumption would have been if this policy had not been changed.\nBy comparing this energy baseline with the observed consumption, after the policy change, we can estimate the energy savings.\n\nThe first objective of this study is to define the minimal set of training examples needed to build a reliable energy baseline.\nTo do this, we train the machine learning algorithms with different numbers of training days.\nIn each iteration we increase the number of training examples and evaluate the models' prediction accuracy.\nWhen all iterations have been completed, we are ready to plot the error metrics in the learning curves.\nBecause this approach is replicated for the three algorithms, this also reveals which one performs best.\n\nAfter we have selected the learning model that is able to create the baseline with the least amount of data, we define the update frequency of this setup.\nWe expect the prediction error to grow over time, and therefore the energy baseline will become unreliable at some point when the prediction error becomes too high.\nTo find the point at which we recommend updating, we use the previously defined setup to make predictions for the remaining dataset.\nAs soon as the predictions are made, we compute an MAE for every 10 subsequent predictions.\nOnce all the errors are computed, we can plot them to see how the prediction error develops over time.\nThis enables us to analyze how the prediction accuracy develops along the prediction horizon, and define the update frequency.\n\nFinally, the third part of this research is to analyze the long-term prediction performance.\nThis was done by training each model with various sizes of training data and letting it predict for the remaining dataset.\nAfter the predictions were made, we calculated an MAE for every 10 subsequent predictions.\nPlotting these error metrics allowed us to study the performance over time.\n\n\\section{Experimental Setup}\nIn order to study the three objectives described before, we designed an approach based on learning curves in combination with sliding windows.\nOur experimental setup is a variation of the time series approach used by~\\cite{Busetti,vanRijn2015}.\nThe method we propose is visualized in Figure~\\ref{fig:SDLC}.\nWe decided to use this particular method because we want to train machine learning models with different sizes of historical training data.\nThe learning curves enable us to visualize and evaluate their performance.\n\n\\subsection{Data}\nThe studied datasets are mainly based on the energy consumption and weather data for the whole year of 2016 and the first half of 2017 (Table~\\ref{tab:OriginalData}).\nThe data for each store is available from the moment the store opened or started to collect the data.\nHence, for each store, the maximum amount of data is available.\n\n\\begin{table}[hpt]\n\\caption{Overview Datasets}\n\\label{tab:OriginalData}\n\\centering\n\\small\n\\renewcommand{\\arraystretch}{1.25}\n\\begin{tabular}{llll}\n\\hline \\hline\nStore & First day & Last day & Observations \\\\ \n\\hline\nAveiro & 04\/12\/2015 & 26\/04\/2017 & 510 days \\\\ \nFatima & 07\/01\/2016 & 26\/04\/2017 & 476 days \\\\ %\nMacedo de Cavaleiros & 13\/11\/2015 & 26\/04\/2017 & 531 days \\\\ %\nMangualde & 16\/05\/2016 & 16\/05\/2017 & 366 days \\\\ %\nRegua & 16\/05\/2016 & 16\/05\/2017 & 366 days \\\\ \n\\hline \\hline\n\\end{tabular}\n\\normalsize\n\\end{table}\n\nBased on 
the two available variables, \\emph{Timestamp} and \\emph{Outside temperature}, we created new features with additional information that the algorithm can use.\nDesigning appropriate features is one of the most important steps to create good predictions because they can highly influence the results that will be achieved with the learning model~\\cite{SILVA2014395}.\nTo determine which features to create, knowledge about the behavior of the store is important~\\cite{Mavro}.\nThe domain knowledge required for this process, was acquired through conversations with experts, reviewing similar studies~\\cite{Mavro,SPSS,SAFA2017107,Chari,KARATASOU2006949,Jacob,OROSA201289} and using descriptive data mining techniques, e.g., Subgroup Discovery (SD).\nSD is a method to identify, unusual, behaviors between dependent and independent variables in the data~\\cite{WIDM1144,Herrera2011}.\nIn this study, SD will be used to improve our understanding of the behavior of the energy consumption.\nTable~\\ref{tab:Features} gives an overview of the created features.\n\\begin{table}[hpt]\n\\caption{Overview Features}\n\\label{tab:Features}\n\\centering\n\\tiny\n\\renewcommand{\\arraystretch}{1.25}\n\\begin{tabular}{llll}\n\\hline \\hline\nName & Type & Description & Derived from \\\\ \n\\hline\nWeekday & Categorical (1-7) & Day of the week & Timestamp \\\\ \nWeek of the Month & Categorical (1-4) & Week of the Month & Timestamp \\\\\nWorkday & Binary (0-1) & Workday or Weekend & Timestamp \\\\\n\nMax Temperature & Numerical & Max Temperature of the Day & Temperature \\\\\nMean Temperature & Numerical & Mean Temperature of the Day & Temperature \\\\ \nMin Temperature & Numerical & Min Temperature of the Day & Temperature \\\\ \nTemperature Amplitude & Numerical & Absolute Difference Min and Max & Temperature \\\\\n\nMax Temperature Y.. & Numerical & Max Temperature of Yesterday & Temperature \\\\ \nMean Temperature Y.. & Numerical & Mean Temperature of Yesterday & Temperature \\\\ \nMin Temperature Y.. & Numerical & Min Temperature of Yesterday & Temperature \\\\ \nTemperature Amplitude Y.. 
& Numerical & Absolute Difference Min and Max & Temperature \\\\\n\\hline \\hline\n\\end{tabular}\n\\normalsize\n\\end{table}\n\n\\subsection{Algorithms}\nWe selected off-the-shelf machine learning algorithms like Multiple Linear Regression (MLR), Random Forests (RF) and Artificial Neural Networks (ANN) to perform the regression.\n\nLinear regression is a simple and widely used statistical technique for predictive modeling~\\cite{SPSS}.\nIt has been used before to predict the future energy consumption of a supermarket in the UK~\\cite{SPSS}.\nThe RF is considered to be one of the most accurate general-purpose learning techniques available and is popular because of its good off-the-shelf performance~\\cite{Fernandez,Biau}.\nFinally, Artificial Neural Networks have successfully been used in recent studies to predict energy consumption~\\cite{Mavro,Chari,WANG2017796,KARATASOU2006949,Nasr,FOUCQUIER2013272}.\n\n\n\\subsection{Performance Estimation}\n\nIn Machine Learning, learning curves are used to reflect the predictive performance as a function of the number of training examples~\\cite{LC}.\nFigure~\\ref{fig:LC} reveals the developing learning ability of a model when the number of training examples increases.\nThe curve indicates how much better the model gets in predicting when more training examples are used.\nThe general idea is to find out how good the model can become in predicting and what the subsequent number of training examples is~\\cite{LC}.\nSince we are searching for the minimum number of training days to create a baseline, we can use the learning curves to identify this number.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale=0.6]{Images\/LearningCurve.jpg}\n \\caption{A graphical representation of a learning curve}\n \\label{fig:LC}\n\\end{figure}\n\nTo test the learning ability of a model one can create several training sets of data and evaluate their performance on a test set~\\cite{Langley1988}.\nThese training sets can differ in, e.g., volume.\nIt is preferred that the data for these sets are randomly selected from the available data~\\cite{Langley1988}.\nThe purpose is to train the model multiple times, and after every training, the model performance should be tested.\nThe results of these tests can be plotted to draw a learning curve which shows the evolution in the performance of the model.\nThese curves can be clarifying, especially when the performance of multiple models is compared.\nBesides for model selection, also the performance of a model can be compared in relation to the number of training examples used~\\cite{LC}.\nSuch a learning curve will tell how the model behaves when it is constructed with varying volumes of training data.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[scale=0.45]{Images\/SlidingWindowandLC.jpg}\n \\caption{Example of our Sliding Window Process} \n \\label{fig:SDLC}\n\\end{figure}\n\n\n\n\\section{Results}\n\n\n\n\\subsection{Reliability of baselines}\n\\label{sec:RS2}\n\nIn Figure~\\ref{fig:TotalModels}, we see how the error evolves as we train the model with more data points i.e.\\ days.\nThis plot displays the learning curves obtained for each of the trained models, MLR, ANN, and RF.\nThe number of training examples ranged from 10 up to 180 days, with threads of 10, and have been tested for a period of 50 days.\nEach line represents the mean of 18 iterations, for all stores, Aveiro, Fatima, and Macedo Cavaleiros, we performed six iterations regarding the method visualized in~\\ref{fig:SDLC}.\n\nIn 
Figure~\\ref{fig:TotalModels}, we observe that the MLR is the most reliable at 30 training days, with a MAE of 0.25.\nWe also observe that, for the MLR, the MAE increases as we expand the training set.\nFurthermore, the other two learning models behave differently.\nThe performance of the RF stabilizes once the training set grows beyond 70 training examples, up to 180.\nMoreover, the MAE of the ANN keeps decreasing as more training examples, up to 180, are added to the training set.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/Total_Models.PNG}\n \\caption{Learning curves, based on an average for all stores and methods} \n \\label{fig:TotalModels}\n\\end{figure}\n\nThe learning curves in Figure~\\ref{fig:TotalModels} reveal that each of the learning models is affected differently by the change in the training set size.\nWe notice that the MLR outperforms the other two methods at creating a reliable baseline with the fewest training days.\nFurthermore, we see that the performance of the MLR worsens when we increase the number of training examples.\nThis can be explained by the nonstationary nature of the datasets.\nThis non-stationarity is a problem for the MLR since it has difficulties with nonlinear relationships. Because the MLR works well with a smaller number of training examples, we assume that the dataset contains periods of local stationarity.\nOne study~\\cite{LOCALStationary} shows that it is possible for nonstationary time series to appear stationary when examined close up.\nWithin such a local period, the statistical properties change slowly over time.\nAs a consequence, the data that lies close to the forecast period is more likely to be predictive for that period.\n\nFor the ANN and RF, stationarity is less of an issue since they are able to handle more complex, nonlinear relations.\nWe see evidence for this in our results: the associated learning curves develop promisingly over time.\nWe believe that, with more diverse data, the ANN could be able to predict a baseline with fewer training days than the MLR.\nUnfortunately, we were not able to investigate this further.\n\nAs shown in Figure~\\ref{fig:TotalModels}, we are able to create a reliable model with the MLR trained on 30 days.\nTherefore, we trained the MLR for each of the stores during the same period of the year, March 2016, and we estimated the energy consumption for a period of one year, from April 2016 until February 2017.\n\nFigure~\\ref{fig:LC30} shows the evolution of the MAE throughout this period.\nWe observe that during the first 30 days of predictions, the MAE remains quite low, under 0.5.\nNext, we see that between 50 and 180 days, the MAE is higher for all the stores.\nThis period corresponds to the months of June, July, August, and September.\nTable~\\ref{tab:AVGTEMP} shows that, throughout these months, temperature levels reach higher values than in March, the period that was used for training the model.\nThis explains why the MAE is higher.\nTo avoid this problem, we could train a different model for each of the two energy profiles.\nBecause our dataset is limited, we were not able to test this in practice.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/LifeCycle_30Days_New.PNG}\n \\caption{MAE over time using MLR} \n \\label{fig:LC30}\n\\end{figure}\n\nWe 
observe, in Figure~\\ref{fig:LC30}, that in Aveiro the influence of seasonality is less evident than for the supermarkets in Fatima and Macedo Cavaleiros.\nSince all stores are trained and tested with the same model and in the same period of time, the most plausible explanation lies in the temperature-related variables.\nThe average temperatures of the three stores follow a similar pattern, higher in the summer and lower in the winter.\nHowever, if we focus on the amplitudes of the average temperatures per month (Table~\\ref{tab:AVGTEMP}), we observe that Aveiro registered the smallest amplitude, with a difference of $9 C^{\\circ}$.\nThe other stores, Fatima and Macedo Cavaleiros, registered amplitudes of $13 C^{\\circ}$ and $18 C^{\\circ}$, respectively.\nThis seems to explain why the model trained for the store of Aveiro is less affected by seasonality.\n\nIn Figure~\\ref{fig:LC30}, we notice that after 220 days the accuracy of the model increases again.\nWhen we look at Table~\\ref{tab:AVGTEMP}, we see that the temperature values from November onwards are comparable to the ones in March.\nNevertheless, the error is still higher than in the period of the first 30 days.\nWe applied this method in different periods of time and observed similar behavior.\n\nIn conclusion, we base our decision on the average prediction error.\nFigure~\\ref{fig:LC30} shows that the average prediction error remains stable for the first 30 days; therefore, we recommend updating the model after at most 30 days.\n\n\\begin{table}[hpt]\n\\caption{Average Temperature (C$^{\\circ}$) per Month and Store}\n\\label{tab:AVGTEMP}\n\\centering\n\\small\n\\setlength\\tabcolsep{3.5pt}\n\\begin{tabular}{lllllllllllll}\n\\hline \\hline\nStore & Jan & Feb & Mar & Apr & May & June & July & Aug & Sep & Oct & Nov & Dec \\\\ \n\\hline \nAveiro & 12 & 13 & 14 & 16 & 17 & \\textbf{20} & \\textbf{21} & \\textbf{21} & 19 & 18 & 14 & 14 \\\\\nFatima & 9 & 11 & 11 & 14 & 15 & 19 & \\textbf{22} & \\textbf{22} & \\textbf{20} & 17 & 12 & 10 \\\\\nM. Cav. 
& 8 & 10 & 12 & 15 & 16 & \\textbf{22} & \\textbf{26} & \\textbf{25} & \\textbf{22} & 16 & 10 & 8 \\\\ \n\\hline \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Estimated energy savings}\n\nEach store has a different number of observations, collected over different periods of time.\nWe train the MLR, RF, and ANN with the first 180 and 360 days of data and test on the remaining days.\nWe do this for the stores located in Aveiro, Fatima, and Macedo Cavaleiros.\nConsequently, the model for each store is trained on a different period, not on a common one.\n\nIn Figure~\\ref{fig:LC30}, we saw that 30 training days were not enough to make accurate long term predictions.\nTherefore, we include more days in the training set.\nEach of the plots in Figures~\\ref{fig:AV180},~\\ref{fig:FA180},~\\ref{fig:MC180},~\\ref{fig:AV360},~\\ref{fig:FA360}, and~\\ref{fig:MC360} shows how the prediction error evolves over time, per store, model, and number of training days.\nEach point shows the average error over 10 subsequent predictions.\n\nFigures~\\ref{fig:AV180},~\\ref{fig:FA180}, and~\\ref{fig:MC180} show the evolution of the prediction error when the models are trained on the first 180 days of data.\nWe observe that each store shows behavior similar to that in Figure~\\ref{fig:LC30}.\nThis is especially evident when we compare the error of the MLR (red line) with the error in Figure~\\ref{fig:LC30}.\nOverall, the MAE is lower for the stores of Fatima and Macedo Cavaleiros if we use 180 days instead of 30.\nThese results also show that the effect of the different consumption modes is still visible, although less pronounced.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/AV_180.jpg}\n \\caption{MAE over time using 180 training days, Aveiro} \n \\label{fig:AV180}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/FA_180.jpg}\n \\caption{MAE over time using 180 training days, Fatima} \n \\label{fig:FA180}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/MC_180.jpg}\n \\caption{MAE over time using 180 training days, Macedo Cavaleiros} \n \\label{fig:MC180}\n\\end{figure}\n\nWe expect long term predictions to become more accurate with 360 training days because the model is then trained on data from all periods of the year, so a larger variation of temperature values is included in the training set.\nTherefore, we trained the models, for all stores, on the first 360 days and studied the predictions on the remaining days.\nFigures~\\ref{fig:AV360},~\\ref{fig:FA360}, and~\\ref{fig:MC360} show how the MAE evolves over this period.\nWe observe that, for the corresponding period of time, the MAE is somewhat lower than for the models trained on 180 days.\n\nIn contrast to Figure~\\ref{fig:TotalModels}, the MLR has the worst performance, while the RF and ANN perform similarly.\nThe results of this experimental part support the general idea that when we train the models with more data, our predictions improve.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/AV_360.jpg}\n \\caption{MAE over time using 360 training days, Aveiro} \n \\label{fig:AV360}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/FA_360.jpg}\n 
\\caption{MAE over time using 360 training days, Fatima} \n \\label{fig:FA360}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/MC_360.jpg}\n \\caption{MAE over time using 360 training days, Macedo Cavaleiros} \n \\label{fig:MC360}\n\\end{figure}\n\nWhen the algorithms are trained with 180 training days, the effect of the different energy consumption modes is still visible.\nWhen we use 360 training days, we observe that the predictions become more accurate.\nTherefore, we advise training the algorithms on 360 days to create long term predictions.\n\n\n\\section{Estimating Energy Savings}\n\\label{sec:Energysavings}\nThe Retailer wants to estimate, with reasonable accuracy, the energy savings resulting from its energy policies.\nChanges in energy policies, such as retrofitting equipment, require high investments.\nThis makes it important for the Retailer to know whether the investments are truly effective in reducing energy consumption.\nIf we use a baseline trained with data from before some measure is implemented, we can estimate the energy savings by comparing its estimates with the observed consumption.\n\nWe selected two stores that have undergone a retrofit of the equipment.\nFor these stores, exactly one year of data is available.\nMangualde and Regua had, respectively, 170 and 200 training days available before the Retrofit.\nBecause we have less than a year of data available, we use the MLR trained on 30 days, which shows the best performance in Figure~\\ref{fig:TotalModels}.\n\nFigures~\\ref{fig:Mangualde} and~\\ref{fig:Regua} show the observed consumption (orange lines) versus the baseline estimates (blue lines) for these two stores.\nWe trained the MLR for both stores on the 30 days falling between 50 and 20 days before the Retrofit, and we predicted for 50 days.\nThis makes it easier to visualize how the baseline compares with the energy consumption before and after the Retrofit.\n\nThe deviations between the baseline and the observed energy consumption can result from poor prediction performance or from energy savings\/losses.\nWe chose a setup that gives us a reliable baseline; therefore, we believe that the deviations are caused by energy savings.\nIn both Figures~\\ref{fig:Mangualde} and~\\ref{fig:Regua}, we observe that, before the Retrofit, the baseline and the real energy consumption intertwine at several points.\nThis behavior, which was also seen before, shows that the predictions are close to the real consumption.\nAfter the Retrofit, however, the observed consumption is always lower than the prediction, which offers strong evidence that the implemented measure was effective.\n\nHence, if we assume that the baseline is accurate enough, we can estimate the energy savings as the difference between the predicted and observed energy consumption.\n\n\\begin{figure}[hpt!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/Mangualde_Retrofit.png}\n \\caption{Example of the predicted and observed Energy Consumption, Mangualde} \n \\label{fig:Mangualde}\n\\end{figure}\n\n\\begin{figure}[hpt!]\n \\centering\n \\includegraphics[width=\\textwidth]{Images\/Regua_Retrofit.png}\n \\caption{Example of the predicted and observed Energy Consumption, Regua} \n \\label{fig:Regua}\n\\end{figure}\n\n\n\n\n\\section{Conclusions}\n\nEnergy efficiency measures can require high investments.\nThis makes it important for the Retailer to know whether the investments are truly effective in reducing energy consumption.\nEnergy 
baselines can be used to study the effectiveness of energy efficiency measures.\nThe results can simplify decisions on reserving funding for the required investments in other stores.\n\nIn this study, we investigated whether off-the-shelf data science technologies can be used to create energy baselines that support improved energy management. Before that, we also performed some exploratory analysis to better understand the data.\n\n\nOur first goal was to determine the minimum number of training days needed to create a reliable baseline, and which model performs best.\nFor that, we studied the prediction accuracy of three machine learning models, ANN, RF, and MLR, based on various datasets.\nFor the experiments, we proposed a sliding window approach in which we systematically expanded the size of the training set with historical data.\nOur experiments show that the MLR has a clear advantage over the other two methods for creating a baseline with a minimum number of days.\nThis model needs 30 training days to estimate a reliable baseline.\n\nThe second goal was to determine how often the model needs to be updated when the MLR is trained on 30 training days.\nWe trained our model multiple times, on all stores, and in different time periods.\nOur analysis shows that the MAE stays low for a period of 30 days; after this, it increases dramatically.\nMoreover, we observed that the energy consumption follows a different profile when average temperatures are higher than 20 degrees.\nThese findings are in line with our insights derived from Subgroup Discovery.\nOur analysis also shows that the amplitude of the average temperature affects the prediction performance.\nHence, we advise updating the model after at most 30 days.\n\nOur third goal was to determine whether we can estimate energy savings after implementing an energy efficiency measure.\nTo answer this question, we trained our models with 180 and 360 training days and predicted for the remaining days.\nOur findings show that the predictions are most accurate when the models are trained with 360 training days.\nWith 360 training days, a larger variation of temperature values is included in the training set.\nThis supports the general idea that when we train the models with more data, our predictions improve.\nWith a baseline trained on 360 training days, the Retailer is able to estimate, with reasonable accuracy, the energy savings resulting from its energy policies.\nMoreover, the Retailer can compare the energy savings to the investment made for the measure, which supports better-informed investment decisions.\n\nIn summary, the results of this study show that we have been able to create reliable energy baselines using off-the-shelf data science technologies.\nMoreover, we found a way to create them using short term historical data.\n\n\\section*{Acknowledgments}\nThis work is financed by the ERDF \u2013 European Regional Development Fund through the COMPETE Programme (operational programme for competitiveness) and by National Funds through the FCT \u2013 Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia (Portuguese Foundation for Science and Technology) within project 3GEnergy (AE2016-0286).\n\n\\section*{References}\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
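The sliding-window evaluation described in the study above can be illustrated with a short Python sketch. This is not the authors' code: the DataFrame layout, the target column name kwh and the feature column list are illustrative assumptions, and only the MLR (scikit-learn's LinearRegression) is shown; the RF and ANN models used in the paper could be swapped into the same loops.

\begin{verbatim}
# Minimal sketch of the two evaluation loops described above (assumption:
# `df` is a per-store pandas DataFrame with one row per day, a target
# column "kwh" and engineered calendar/temperature feature columns).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def learning_curve(df, feature_cols, target_col="kwh",
                   train_sizes=range(10, 181, 10), test_horizon=50):
    """One learning-curve point per training size: MAE on the next 50 days."""
    points = []
    for n in train_sizes:
        train, test = df.iloc[:n], df.iloc[n:n + test_horizon]
        model = LinearRegression().fit(train[feature_cols], train[target_col])
        pred = model.predict(test[feature_cols])
        points.append((n, mean_absolute_error(test[target_col], pred)))
    return points

def mae_over_horizon(df, feature_cols, n_train=30, target_col="kwh", block=10):
    """Train once on the first n_train days, then report one MAE per block
    of 10 subsequent predictions, as in the life-cycle plots."""
    train, test = df.iloc[:n_train], df.iloc[n_train:]
    model = LinearRegression().fit(train[feature_cols], train[target_col])
    errors = np.abs(model.predict(test[feature_cols]) - test[target_col].to_numpy())
    return [errors[i:i + block].mean()
            for i in range(0, len(errors) - block + 1, block)]
\end{verbatim}

Averaging the learning-curve points over stores and over shifted window positions, as done in the paper, then yields curves comparable to the ones discussed above.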
{"text":"\\section{Appendix A: : Derivation of Eq. (\\ref{e3}) for the middle monomer for\nphantom Rouse, Zimm, polymers in a $\\theta$-solvent, reptation, and\nself-avoiding Rouse polymers}\n\nI tag the middle monomer of the polymer, and here I obtain\nEq. (\\ref{e3}) for its dynamics.\n\nI start with the Langevin equation (\\ref{e3}) describing the evolution\nof the $p$-th mode amplitude ($p=0,1,2,\\ldots$), viz.,\n\\begin{eqnarray} \\gamma_p\\frac{X_{p\\sigma}(t)}{\\partial t}=-k_p\nX_{p\\sigma}(t)+f_{p\\sigma}(t)\\,,\n\\label{s1}\n\\end{eqnarray} with $\\langle f_{p\\sigma}\\rangle=0$ and the FDT\n$\\langle\nf_{p\\sigma}(t)f_{q\\lambda}(t')\\rangle=2\\gamma_pk_BT\\delta(t-t')\\delta_{pq}\\delta_{\\sigma\\lambda}$.\nAs noted in Table \\ref{table1}, Eq. (\\ref{s1}) can be derived for\nphantom Rouse, Zimm, polymers in a $\\theta$-solvent and reptation. A\nstraightforward result that follows from Eq. (\\ref{s1}) is that\n\\begin{eqnarray} \\langle\nX_{p\\sigma}(t)X_{q\\lambda}(t')\\rangle=\\delta_{pq}\\delta_{\\sigma\\lambda}(k_BT\/k_p)e^{-k_p(t-t')\/\\gamma_p}=\\delta_{pq}\\delta_{\\sigma\\lambda}(k_BT\/k_p)e^{-(t-t')\/\\tau_p},\n\\label{s2}\n\\end{eqnarray} where $\\tau_p$ is the relaxation time of the $p$-th\nmode ($p\\neq0$) for the polymer. When Eq. (\\ref{s2}) is combined with\nthe corresponding time correlation function for mode amplitudes for\nself-avoiding polymers \\cite{prouse}, namely $\\langle\\vec\nX_p(t)\\cdot\\vec X_q(t')\\rangle\\propto\nN^{2\\nu}p^{-(1+2\\nu)}e^{-t\/\\tau_p}\\delta_{pq}$ with\n$\\tau_p\\sim(N\/p)^{1+2\\nu}$, one can formulate an effective\nEq. (\\ref{e4}) with both $\\gamma_0$ and $\\gamma_{p\\neq0}$ independent\nof $p$, and $k_p\\sim p^{-(1+2\\nu)}$ [this implies that for a self\navoiding polymer $\\tau_p\\sim(N\/p)^{1+2\\nu}$]. Given this, I will\nhenceforth use Eq. (\\ref{s1}) also for self-avoiding Rouse polymers.\n\nThe Fokker-Planck equation for the probability ${\\cal\nP}(X_{p\\sigma},t)$ that corresponds to the LE is given by\n\\cite{vankampen}\n\\begin{eqnarray} \\frac{\\partial {\\cal P}(X_{p\\sigma},t)}{\\partial\nt}=\\underbrace{\\frac{k_p}{\\gamma_p}}_{=\\tau^{-1}_p\\,\\,\\mbox{for}\\,\\,p\\neq0}\\frac{\\partial}{\\partial\nX_{p\\sigma}}\\left[X_{p\\sigma}{\\cal\nP}(X_{p\\sigma},t)\\right]+\\underbrace{\\frac{k_BT}{\\gamma_p}}_{=a_p\\tau^{-1}_p\\,\\,\\mbox{for}\\,\\,p\\neq0}\\frac{\\partial^2\n{\\cal P}(X_{p\\sigma},t)}{\\partial X^2_{p\\sigma}},\n\\label{s3}\n\\end{eqnarray} where $a_p=k_BT\/k_p$ for $p\\neq0$, and\n$a_0=k_BT\/\\gamma_0$. The solution of Eq. (\\ref{s3}), with the initial\ncondition that\n$P(X_{p\\sigma},0)=\\delta(X_{p\\sigma}-X^{(0)}_{p\\sigma})$, is obtained\nas follows.\n\\begin{itemize}\n\\item[(i)] For $p=0$, $k_p=0$ i.e., (\\ref{s3}) is a simple diffusion\nequation. Its solution is given by \\cite{uh}\n\\begin{eqnarray} {\\cal P}(X_{0\\sigma},t)=\\frac{1}{\\sqrt{2\\pi\na_0t}}\\,\\exp\\left[-\\frac{(X_{0\\sigma}-X^{(0)}_{0\\sigma})^2}{2a_0t}\\right].\n\\label{s4}\n\\end{eqnarray}\n\\item[(ii)] For $p\\neq0$, Eq. (\\ref{s3}) can be verified by direct\nsubstitution of its solution\n\\begin{eqnarray} {\\cal P}(X_{p\\sigma},t)=\\frac{1}{\\sqrt{2\\pi\na_p(1-e^{-2t\/\\tau_p})}}\\,\\exp\\left[-\\frac{(X_{p\\sigma}-X^{(0)}_{p\\sigma}e^{-t\/\\tau_p})^2}{2a_p(1-e^{-2t\/\\tau_p})}\\right].\n\\label{s5}\n\\end{eqnarray}\n\\end{itemize} Next, as noted above Eq. 
(\\ref{e4}), in terms of the\nmode amplitudes, the location of the middle monomer ($n=N\/2$) at any\ntime $t$ is given by\n\\begin{eqnarray} \\vec r(t)=\\vec X_0(t)+2\\sum_{p=1}^\\infty\\vec\nX_p(t)\\,\\cos\\frac{p\\pi}{2}.\n\\label{s6}\n\\end{eqnarray} Using Eq. (\\ref{s6}), I obtain, upon averaging over\nall possible initial states of the polymer at $t=0$\n\\begin{eqnarray} P(r_\\sigma,t|r_{0\\sigma},0)=\\prod_{p=0}^\\infty\n\\int_{-\\infty}^\\infty dX^{(0)}_{p\\sigma}\\,{\\cal\nP}_{\\text{eq}}(X^{(0)}_{p\\sigma})\\,\\,\\delta\\!\\!\\left[r_{0\\sigma}-\\left(X^{(0)}_{0\\sigma}+2\\sum_{q=1}^\\infty\nX^{(0)}_{q\\sigma}\\,\\cos\\frac{q\\pi}{2}\\right)\\right]\\nonumber\\\\&&\\hspace{-10cm}\\times\\int_{-\\infty}^\\infty\ndX_{p\\sigma}\\,{\\cal\nP}(X_{p\\sigma},t)\\,\\,\\delta\\!\\!\\left[r_\\sigma-\\left(X_{0\\sigma}+2\\sum_{q=1}^\\infty\nX_{q\\sigma}\\,\\cos\\frac{q\\pi}{2}\\right)\\right],\n\\label{s7}\n\\end{eqnarray} where ${\\cal P}_{\\text{eq}}(X)$ is the equilibrium\nprobability of $X$, i.e., a Gaussian, obtained by taking the\n$t\\rightarrow\\infty$ limit of Eq. (\\ref{s5}).\n\nAt this stage, because of the $\\delta$-functions in Eq. (\\ref{s7}), it\nis easiest to Fourier transform $P(r_\\sigma,t|r_{0\\sigma},0)$, defined\nas\n\\begin{eqnarray} \\tilde{\\cal P}_{k,k';t}=\\frac1{2\\pi}\\int dr_\\sigma\\,\ndr_{0\\sigma} e^{i\\left[k'r_\\sigma+kr_{0\\sigma}\\right]}\\,{\\cal\nP}(r_\\sigma,t|r_{0\\sigma},0],\n\\label{s8}\n\\end{eqnarray} which reduces Eq. (\\ref{s7}) to\n\\begin{eqnarray}\n2\\pi\\,e^{-i\\left[kr_\\sigma+k'r_{0\\sigma}\\right]}\\tilde {\\cal\nP}_{k,k';t}=\\prod_{p=0}^\\infty \\int_{-\\infty}^\\infty\ndX^{(0)}_{p\\sigma}\\,{\\cal\nP}_{\\text{eq}}(X^{(0)}_{p\\sigma})\\,e^{-ik\\left[X^{(0)}_{0\\sigma}+2\\sum_{q=1}^\\infty\nX^{(0)}_{q\\sigma}\\,\\cos\\frac{q\\pi}{2}\\right]}\\nonumber\\\\&&\\hspace{-8cm}\\times\\int_{-\\infty}^\\infty\ndX_{p\\sigma}\\,{\\cal\nP}(X_{p\\sigma},t)\\,e^{-ik'\\left[X_{0\\sigma}+2\\sum_{q=1}^\\infty\nX_{q\\sigma}\\,\\cos\\frac{q\\pi}{2}\\right]},\n\\label{s9}\n\\end{eqnarray} At this point, in order to follow through the\ncalculation of $\\tilde {\\cal P}_{k,k';t}$, I need the two following\nintegrals:\n\\begin{itemize}\n\\item[(a)]\n\\begin{eqnarray} \\int_{-\\infty}^\\infty\ndX_{0\\sigma}\\,\\frac{1}{\\sqrt{2\\pi\na_0t}}\\,\\exp\\left[-\\frac{(X_{0\\sigma}-X^{(0)}_{0\\sigma})^2}{2a_0t}-ik'X_{0\\sigma}\\right]=e^{-iX^{(0)}_{0\\sigma}k'-\\frac12a_0tk'^2}.\n\\label{s10}\n\\end{eqnarray}\n\\item[(b)] for $p\\neq0$:\n\\begin{eqnarray} \\int_{-\\infty}^\\infty\ndX_{p\\sigma}\\,\\frac{1}{\\sqrt{2\\pi\na_p(1-e^{-2t\/\\tau_p})}}\\,\\exp\\left[-\\frac{(X_{p\\sigma}-X^{(0)}_{p\\sigma}e^{-t\/\\tau_p})^2}{2a_p(1-e^{-2t\/\\tau_p})}-2ik'X_{p\\sigma}\\cos\\frac{p\\pi}2\\right]\\nonumber\\\\&&\\hspace{-10cm}=e^{-2iX^{(0)}_{p\\sigma}e^{-t\/\\tau_p}k'\\cos(p\\pi\/2)-2a_p(1-e^{-2t\/\\tau_p})k'^2\\cos^2(p\\pi\/2)}.\n\\label{s11}\n\\end{eqnarray}\n\\end{itemize} Using (a-b), I now integrate over $X^{(0)}_{0\\sigma}$\n(i.e., the location the center-of-mass of the polymer) with a uniform\nprobability density measure yields $\\sqrt{2\\pi}\\delta(k+k')$, which\nleads me to\n\\begin{eqnarray}\n\\hspace{-5mm}\\sqrt{2\\pi}\\,e^{-i\\left[k'r_\\sigma+kr_{0\\sigma}\\right]}\\tilde\n{\\cal P}_{k,k';t}=e^{-k^2\\left[\\frac12a_0t+2\\sum_{q=1}^\\infty\na_q(1-e^{-2t\/\\tau_q})\\cos^2(q\\pi\/2)\\right]}\\delta(k+k')\\nonumber\\\\&&\\hspace{-7.5cm}\\times\n\\prod_{p=1}^\\infty\\!\\!\\left[ 
\\int_{-\\infty}^\\infty\\!\\!\\!\\!\\!\ndX^{(0)}_{p\\sigma}\\,{\\cal\nP}_{\\text{eq}}(X^{(0)}_{p\\sigma})e^{-2ik\\left[X^{(0)}_{p\\sigma}(1-e^{-t\/\\tau_p})\\cos(p\\pi\/2)\\right]}\\right]\\nonumber\\\\&&\\hspace{-8.1cm}=e^{-k^2\\left[\\frac12a_0t+2\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)\\{(1-e^{-2t\/\\tau_q})+(1-e^{-t\/\\tau_q})^2\\}\\right]}\\delta(k+k')\n\\nonumber\\\\&&\\hspace{-8.1cm}=e^{-k^2\\left[\\frac12a_0t+4\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)(1-e^{-t\/\\tau_q})\\right]}\\delta(k+k');\n\\label{s13}\n\\end{eqnarray} ${\\cal P}_{k,k';t}\\propto\\delta(k+k')$ implies that\n$P(r_\\sigma,t|r_{0\\sigma},0)$ is a function of\n$(r_\\sigma-r_{0\\sigma})$.\n\nFinally, I now need to evaluate the discrete sum in the exponent of\nEq. (\\ref{s13}). Having noticed that $\\cos(q\\pi\/2)=0$ for odd\n$q$-values and $\\cos^2(q\\pi\/2)=1$ for even $q$-values, the sum can be\nconverted into an integral; thereafter the inverse Fourier transform\nfrom $k$ to $(r_\\sigma-r_{0\\sigma})$ leads to Eq. (\\ref{e3}), with the\nbehavior of $\\Delta(t)$ presented in Table \\ref{table2}. With the\ncorresponding scaling of $\\gamma_p$ and $\\tau_p=\\gamma_p\/k_p$ for\nphantom Rouse, Zimm, polymers in a $\\theta$-solvent, reptation, and\nself-avoiding Rouse polymers (see Table \\ref{table1}), these integrals\nare listed below. Note that in Eqs. (\\ref{s14}-\\ref{s18}) I omit\nconstants in converting the discrete sums to integrals.\n\\begin{itemize}\n\\item[A.] Phantom Rouse:\n\\begin{eqnarray} 4\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)(1-e^{-t\/\\tau_q})\\rightarrow k_BT\\int_0^\\infty\n\\frac{dq}{q^2}\\,(1-e^{-cq^2t})\\sim\\sqrt t.\n\\label{s14}\n\\end{eqnarray}\n\\item[B.] Phantom Zimm and polymers in a $\\theta$-solvent:\n\\begin{eqnarray} 4\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)(1-e^{-t\/\\tau_q})\\rightarrow k_BT\\int_0^\\infty\n\\frac{dq}{q^2}\\,(1-e^{-cq^{3\/2}t})\\sim t^{2\/3}.\n\\label{s15}\n\\end{eqnarray}\n\\item[C.] (self-avoiding) Zimm:\n\\begin{eqnarray} 4\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)(1-e^{-t\/\\tau_q})\\rightarrow k_BT\\int_0^\\infty\n\\frac{dq}{q^{1+2\\nu}}\\,(1-e^{-cq^{3\\nu}t})\\sim t^{2\/3}.\n\\label{s16}\n\\end{eqnarray}\n\\item[D.] reptation (curvilinear co-ordinate):\n\\begin{eqnarray} 4\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)(1-e^{-t\/\\tau_q})\\rightarrow k_BT\\int_0^\\infty\n\\frac{dq}{q^2}\\,(1-e^{-cq^2t})\\sim\\sqrt t.\n\\label{s17}\n\\end{eqnarray}\n\\item[E.] self-avoiding Rouse:\n\\begin{eqnarray} 4\\sum_{q=1}^\\infty\na_q\\cos^2(q\\pi\/2)(1-e^{-t\/\\tau_q})\\rightarrow k_BT\\int_0^\\infty\n\\frac{dq}{q^{1+2\\nu}}\\,(1-e^{-cq^{1+2\\nu}t})\\sim t^{2\\nu\/(1+2\\nu)}.\n\\label{s18}\n\\end{eqnarray}\n\\end{itemize} Clearly, these power-law behavior of $\\Delta(t)$ cannot\nhold longer than time $\\tau$, this is also noted in Table\n\\ref{table2}.\n\n\n\\end{widetext}\n\n\\section{Appendix B: Simulation details}\n\nOver the past years, a highly efficient simulation approach to polymer\ndynamics has been developed in our group. This is made possible via a\nlattice polymer model, based on Rubinstein's repton model \\cite{rub}\nfor a single reptating polymer, with the addition of sideways moves\n(Rouse dynamics). A detailed description of this model, its\ncomputationally efficient implementation and a study of some of its\nproperties and applications can be found in \\cite{heuk1}.\n\nIn this model, each polymer is represented by a sequential string of\nmonomers, living on a face-centered-cubic lattice with periodic\nboundary conditions in all three spatial directions. 
Hydrodynamic\ninteractions between the monomers are not taken into account in this\nmodel. Monomers adjacent in the string are located either in the same,\nor in neighboring lattice sites. The polymers are self-avoiding:\nmultiple occupation of lattice sites is not allowed, except for a set\nof adjacent monomers. The number of stored lengths within any given\nlattice site is one less than the number of monomers occupying that\nsite. The polymers move through a sequence of random single-monomer\nhops to neighboring lattice sites. These hops can be along the contour\nof the polymer, thus explicitly providing reptation dynamics. They can\nalso change the contour ``sideways'', providing Rouse dynamics. Each\nkind of movement is attempted with a statistical rate of unity, which\ndefines the unit of time. This model has been used before to simulate\nthe diffusion and exchange of polymers in an equilibrated layer of\nadsorbed polymers \\cite{wolt1}, dynamics self-avoiding Rouse polymers\n\\cite{prouse1}, polymer translocation under a variety of circumstances\n\\cite{vocksa,anom,panjatrans}, and the dynamics of polymer adsorption\n\\cite{adsorb}.\n\nThe same model has been used for the polymer melt simulations (here\nthe polymers are both self- and mutually-avoiding) for a system of\nsize $60^3$ with an overall monomer density unity per lattice\nsite. Due to the possibility that adjacent monomers belonging to the\nsame polymer can occupy the same site, overall approximately 40\\% of\nthe sites typically remain empty.\n\nInitial thermalizations were performed as follows: completely crumpled\nup polymers are placed in lattice sites at random. The system is then\nbrought to equilibrium by letting it evolve up to $10^9$ units of\ntime, with a combination of random intermediate redistribution of\nstored lengths within each polymer. Additional details on the melt\nsimulations can be found in \\cite{panja2}.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
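As a small numerical cross-check of the scaling quoted in Eqs. (\ref{s14}) and (\ref{s17}), and not part of the original derivation, one can evaluate $\int_0^\infty dq\,(1-e^{-cq^2t})/q^2$ for a range of $t$ and confirm that it grows as $\sqrt{t}$; integration by parts gives the exact value $\sqrt{\pi c t}$. The sketch below uses Python with NumPy and SciPy, and the choices $c=1$ and the logarithmic time grid are arbitrary.

\begin{verbatim}
# Numerical sanity check that Delta(t) ~ int_0^inf dq (1 - exp(-c q^2 t))/q^2
# grows as sqrt(t).  The exact value is sqrt(pi*c*t); c = 1 is arbitrary.
import numpy as np
from scipy.integrate import quad

def delta(t, c=1.0):
    # The integrand tends to c*t as q -> 0, so the improper integral converges.
    integrand = lambda q: -np.expm1(-c * q * q * t) / (q * q)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

ts = np.logspace(0, 4, 9)
slope = np.polyfit(np.log(ts), np.log([delta(t) for t in ts]), 1)[0]
print(f"fitted log-log slope: {slope:.3f} (expected 0.5)")
\end{verbatim}

The fitted slope should come out at $0.5$, consistent with the $\Delta(t)\sim t^{1/2}$ behavior for phantom Rouse and for the curvilinear reptation coordinate; replacing the exponents of $q$ in the integrand reproduces the other cases (\ref{s15})--(\ref{s18}) in the same way.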