\\section{Introduction} \\label{sec1}\nBuildings are responsible for more than 32 percent of the overall energy consumed worldwide, and this percentage is expected to double by 2050 as a result of well-being improvements and the wide use of electrical appliances and central heating\/cooling systems \\cite{Elattar2020}. Specifically, this is due to population growth, enhanced house comfort and improvements in wealth and lifestyle. To that end, reducing wasted energy and promoting energy saving in buildings have nowadays emerged as hot research topics. One cost-effective solution is to encourage energy-efficient behaviors among building end-users by analyzing the energy consumption footprints of individual appliances. Tailored recommendations can then be generated to help end-users improve their behavior \\cite{sardianos2020emergence}.\n\nIn this regard, load monitoring of appliances not only provides end-users with their fine-grained consumption footprints, but also promptly contributes to promoting sustainability and energy-efficient behaviors \\cite{REHABC2020}. Moreover, it can significantly contribute to elaborating and developing reliable smart-grid demand management systems. Load consumption monitoring principally encompasses two broad groups, namely intrusive load monitoring (ILM) and non-intrusive load monitoring (NILM). ILM necessitates installing smart-meters at the front-end of each electrical appliance to collect real-time energy consumption patterns. Even though this strategy achieves high performance in accurately gathering appliance-specific data, it requires heavy lifting with high installation costs, since a large number of sub-meters must be installed. In addition, an intrusive transformation of the available power grid is essential \\cite{alsalemi2020micro}. 
On the contrary, no additional sub-meters are required when the NILM strategy (also known as energy disaggregation) is adopted to infer device-specific consumption footprints, since the latter are directly extracted from the main load using feature extraction and learning models \\cite{HIMEUR2020114877}. \n\n\nIn this context, the NILM issue has been investigated for many years and extensive efforts are still devoted to this problem because of its principal contributions to improving the energy consumption behavior of end-users \\cite{PEREIRA2020102399,LIU2020101918}. Specifically, it can help in achieving a better understanding of consumers' consumption behavior through supplying them with appliance-specific data. Put differently, the NILM task indirectly aims at (i) promoting the energy-efficient behavior of individuals, (ii) reducing energy bills and diminishing reliance on fossil fuels, and (iii) reducing carbon emissions and improving environmental conditions \\cite{alsalemi2020achieving}.\n \n\nTwo crucial stages in NILM are feature extraction and the inference and learning procedure. The feature extraction step aims at deriving pertinent characteristics of energy consumption signals so that appliances from the same category are represented with similar signatures while power signals from different classes are differentiated \\cite{Welikala8039522}. On the other hand, the inference and learning step is essentially reserved for training classifiers in order to identify appliances and extract appliance-level power footprints \\cite{He8669739}. It can be achieved either by using conventional classification models, such as artificial neural networks (ANN), support vector machines (SVM) and k-nearest neighbors (KNN), or novel classifiers, including deep neural networks (DNNs). Consequently, the identification of electrical devices simultaneously operating during an interval of time in a household is the central part of the NILM architecture. 
Its performance is highly dependent on the deployed feature extraction and inference models. To that end, the development of robust schemes belonging to these two modules has attracted considerable interest in recent years \\cite{Park2019,Ma8118142}.\n\nIn this paper, recent NILM systems are first reviewed based on the principal components contributing to the implementation of such architectures, including feature extraction and learning models. In this respect, techniques pertaining to three main feature extraction categories are described, among them graph signal processing (GSP), sparse coding features and binary encoding schemes. A discussion of their limitations and drawbacks is then presented after conducting a deep comparison of their performances and properties. Moving forward, a non-intrusive appliance identification architecture is proposed, which is mainly based on a novel local power histogramming (LPH) descriptor. The latter relies on (i) representing power signals in a 2D space, (ii) performing a binary power encoding in small regions using square patches of $3 \\times 3$ samples and (iii) returning to the initial 1D space by extracting histograms of the 2D representations. Following this, an improved k-nearest neighbors (IKNN) classifier is introduced to effectively identify appliance-level fingerprints and reduce the computational cost. This results in very short appliance signatures of 256 samples, in which each power signal is represented by a unique histogram, and thus leads to better appliance identification performance at a low computational complexity. Moreover, it is worth noting that, to the best of the authors' knowledge, this paper is the first work that discusses the applicability of 2D local descriptors for identifying electrical appliances using their power consumption signals. 
Overall, the main contributions of this paper can be summarized as follows:\n\n\\begin{itemize}\n\\item We present a comprehensive overview of recent trends in event-based NILM systems along with a description of their drawbacks and limitations.\n\\item We propose a novel NILM framework based on an original 2D descriptor, namely LPH, which can be considered as an interesting research direction to develop robust and reliable NILM solutions. Explicitly, after converting appliance power signals into a 2D space, appliance identification becomes a content-based image retrieval (CBIR) problem and a powerful short description is extracted to represent each electrical device. Accordingly, LPH also operates as a dimensionality reduction technique, where each resulting appliance signature has only 256 samples.\n\\item We design a powerful IKNN model that efficiently aids in recognizing appliances from the extracted LPH fingerprints and significantly reduces the computational cost.\n\\item We evaluate the performance of the proposed LPH-IKNN based NILM system on four different data sets with distinct sampling frequency rates, in comparison with various recent NILM systems and other 2D descriptors. \n\\end{itemize}\n\nThe remainder of this paper is structured as follows. An overview of NILM systems is introduced in Section \\ref{sec2} along with a discussion of their drawbacks and limitations. In Section \\ref{sec3}, the main steps of the proposed NILM system based on the LPH descriptor and IKNN are described in detail. The performance results of the exhaustive empirical evaluation conducted in this framework are presented and thoroughly discussed in Section \\ref{sec4}, in which different comparisons are conducted with state-of-the-art works. 
Finally, Section \\ref{sec5} concludes the paper, discusses the important findings and highlights future works.\n\n\n\n\\section{Related work} \\label{sec2}\n\\subsection{Overview of NILM techniques}\nNILM frameworks can be categorized into two major groups. The first comprises non-event-based approaches, which focus on using algorithms that do not depend on training\/learning procedures (using data from a particular building). They can segregate the main power signal collected from the overall circuit into various appliance-level fingerprints. Explicit examples of this kind of technique that have typically been studied relate to the deployment of statistical analysis, including hidden Markov models (HMM) \\cite{Makonin7317784}, higher-order statistics (HOS) \\cite{Guedes2015} or probabilistic models \\cite{Ji8684887}. The second group deals with methods allowing the identification of state changes occurring in power consumption signals using different types of event detectors and classifiers, and further implementing appropriate techniques to calculate an individual load usage fingerprint for each electrical device. In this section, we focus on describing recent NILM systems pertaining to the second category because the proposed framework is an event-based NILM framework.\n\n\nExplicitly, this category of NILM systems deploys two principal components. The first one is a feature descriptor to extract pertinent characteristics of electrical appliances, while the second is a learning algorithm that can help in detecting and classifying each device based on its features. Conventional NILM methods have basically concentrated on extracting features related to steady states and transient states, in addition to the adoption of conventional machine learning (ML) classifiers. On the other side, novel strategies have been introduced in recent years to deal with the NILM issue based on the use of new signal analysis procedures and innovative learning models. 
This class of NILM frameworks is defined as non-conventional; they are classified into four principal sub-categories as follows: \n\\vskip2mm\n\\noindent \\textbf{Graph signal processing (GSP):} A trending research field aiming at describing stochastic characteristics of power signals based on graph theory. \nIn \\cite{He7539273}, a graph-based method for identifying individual appliances has been introduced after detecting appliance events. This results in a better detection of appliance-level fingerprints and further a reduction of computation time compared to conventional graph-based techniques. \nIn \\cite{Li8437176}, various multi-label graphs have been developed to detect individual devices based on a semi-supervised procedure. In \\cite{Zhao2018Access}, NILM performance has been enhanced via the use of a generic GSP-based technique, which is built upon the application of graph-based filters. This results in a better detection of on\/off appliance states via the mitigation of the electric noise produced by appliances.\n\n\n\\vskip2mm\n\n\n\n\\noindent \\textbf{Sparse coding features:} In this category, the NILM framework is treated as a blind source separation problem and recent sparse coding schemes are then applied to split an aggregated power consumption signal into appliance-specific profiles \\cite{Kolter2010EDV}. In \\cite{Singh2019SG}, a co-sparse analysis dictionary learning is proposed to segregate the total energy consumption into device-level data and significantly shorten the training process. In \\cite{Singh7847445}, a deep learning architecture is used for designing a multi-layer dictionary of each appliance rather than constructing a one-level codebook. The obtained multi-layer codebooks are then deployed as features for the source-separation algorithm in order to break down the aggregated energy signal. 
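To make the dictionary-based source separation idea concrete, the following minimal sketch decomposes a synthetic aggregate signal over concatenated per-appliance codebooks. It uses scikit-learn's basic NMF and non-negative least squares as simple stand-ins for the more elaborate learned dictionaries and sparse solvers of the cited works; all data and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic per-appliance training windows (rows) -- purely illustrative data.
fridge = np.abs(rng.normal(100.0, 5.0, (50, 60)))
kettle = np.abs(rng.normal(2000.0, 50.0, (50, 60))) * (rng.random((50, 60)) > 0.7)

def learn_dictionary(X, n_atoms=4):
    """Learn a small non-negative basis (dictionary) for one appliance."""
    model = NMF(n_components=n_atoms, init="nndsvda", max_iter=500, random_state=0)
    model.fit(X)
    return model.components_  # shape: (n_atoms, n_samples)

# Concatenate the per-appliance dictionaries into one codebook.
B = np.vstack([learn_dictionary(fridge), learn_dictionary(kettle)])

# Aggregate load = sum of one window per appliance.
aggregate = fridge[0] + kettle[0]

# Decompose the aggregate over the codebook with non-negative least squares.
a, err = nnls(B.T, aggregate)

# Appliance-level estimates come from each appliance's own atoms.
fridge_est = a[:4] @ B[:4]
kettle_est = a[4:] @ B[4:]
```

The key design point is that each appliance contributes its own block of atoms, so the coefficients recovered for the aggregate can be split back into per-appliance reconstructions.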
In \\cite{Rahimpour7835299}, an improved non-negative matrix factorization is used to pick up perceptibly valuable appliance-level signatures from the aggregated mixture.\n\n\\vskip2mm\n\n\\noindent \\textbf{Binary descriptions:} Most recently, binary descriptors have been investigated for the classification and fault detection of 1D signals such as electroencephalogram (EEG), electrocardiogram (ECG), and myoelectric signals \\cite{HAMMAD2019180}. For power consumption signals, this concept is novel. The few works that can be found in the literature mainly focus on representing the power signal in a novel space that is directly used to train a convolutional neural network (CNN). In \\cite{Du7130652}, power fingerprints are derived by estimating the similarity of voltage-current (V-I) shapes, encoding it using a binary dictionary and then extracting graphical image footprints that are directly fed to a self-organizing map (SOM) classifier, which is based on neural networks. In \\cite{Gao7418189}, a V-I binary representation is employed through converting the normalized V-I magnitude into binary matrices using a thresholding process before being fed to a CNN. More specifically, this approach relies on binary coding of the V-I edges plotted in the new representation. These data are then fed into an ML classifier in order to identify each appliance class. In \\cite{Liu8580416}, a color encoder is proposed to draw V-I signatures that can also be translated into visual plots. These footprints are then fed to a deep learning classifier to identify each electrical appliance. In \\cite{DEBAETS2019645}, a siamese neural network is employed to map the V-I trajectories into a novel characteristic representation plane. \n\n\\vskip2mm\n\n\\noindent \\textbf{Time-frequency analysis:} Time-frequency analysis is an imperative research topic, to which much attention has been devoted in the past and even nowadays. 
It is applied in several applications, among them energy efficiency \\cite{JUNKER2018175}, NILM or energy disaggregation \\cite{Himeur2020icict} and power consumption anomaly detection \\cite{WANG2020114145}. In \\cite{himeur2020effective,Himeur2020iscas}, a novel NILM descriptor is proposed based on the fusion of different time-domain descriptors. In \\cite{HIMEUR2020114877}, a novel time-scale analysis is adopted based on the use of multi-scale wavelet packet trees (MSWPT) and a cepstrum-based event detection scheme to glean appliance-level power consumption patterns from the aggregated load. \n\n\n\\subsection{Classification}\n\\subsubsection{Improved k-nearest neighbors} \\label{sec221}\nIn the building energy sector, KNN has been widely deployed in the literature for different purposes, such as energy disaggregation \\cite{shi2019nonintrusive} and anomaly detection \\cite{Himeur2020IJIS-AD,mulongo2020anomaly,himeur2020novel}, although it has some issues, e.g. the sensitivity to the neighborhood size $k$, which could significantly degrade its performance \\cite{mehta2018new,abu2019effects}. To that end, an improved version, named generalized mean distance-based k-nearest neighbor, is proposed in \\cite{gou2019generalized} to address this issue. Specifically, multi-generalized mean distances are introduced along with the nested generalized mean distance, which rely on the properties of the generalized mean. Accordingly, multi-local mean vectors of a specific pattern in every group are estimated through deploying its class-specific $k$ nearest neighbors. Using the obtained $k$ local mean vectors per group, the related $k$ generalized mean distances are estimated and thereby deployed for designing the categorical nested generalized mean distance.\nSimilarly, in \\cite{gou2019local}, the authors introduce a local mean representation-based KNN aiming at further improving the classification performance and overcoming the principal drawbacks of conventional KNN. 
Explicitly, they select the categorical KNN coefficients of a particular pattern to estimate the related categorical k-local mean vectors. Following this, a linear combination of the categorical k-local mean vectors is used to represent the particular pattern. Moving forward, in order to determine the class of the latter, group-specific representation-based distances between the particular pattern and the categorical k-local mean vectors are then considered.\n\nMoreover, in \\cite{gou2019locality}, two locality-constrained representation-based KNN rules are presented to design an improved KNN classifier. The first one is a weighted representation-based KNN rule, in which the test pattern is considered as a linear aggregation of its KNN samples from every group, while the localities of the KNN samples per group are represented as weights constraining their related representation elements. Following this, a classification decision rule is used to calculate the representation-based distance between the test pattern and the group-specific KNN coefficients.\nOn the other side, the second rule is a weighted local mean representation-based KNN, where the k-local mean vectors of the KNN coefficients per group are initially estimated and then utilized to represent the test pattern. On the other hand, aiming at improving the performance of existing KNN classifiers and making them scalable and automatic, granular ball computing has been used in various frameworks. This is the case of \\cite{xia2019granular}, where a granular ball KNN (GBKNN) algorithm is developed, which can perform the classification task on large-scale data sets with low computation. In addition, it provides a solution to automatically select the number $k$ of clusters.\n\n\n\n\\subsubsection{Improved k-means clustering}\nIn addition to the use of KNN and its variants, k-means clustering (KMC) is another important data clustering method. 
It has been widely investigated to classify similar data into the same cluster in large-scale data sets for different applications, such as appliance identification \\cite{chui2013appliance}, anomaly detection \\cite{henriques2020combining}, cancer detection \\cite{saba2020recent}, and social media analysis \\cite{alsayat2016social}. Despite the simplicity of KMC, its performance has not been convincing in some applications. To that end, different variants have been proposed in the literature to design efficient, scalable and robust KMC classifiers. For example, in \\cite{yu2018two}, to overcome the vulnerability of the conventional KMC classifier to outliers and noisy data, a tri-level k-means approach is introduced. This is made possible by updating the cluster centers, because the data in a specific data set usually change after a period of time; without updating the cluster centers, it is not possible to accurately represent the data in every cluster. In \\cite{zhang2018improved}, by contrast, the authors focus on improving both the accuracy and stability of the KMC classifier. This has been achieved by proposing a k-means scheme based on density Canopy, which aims at solving the issue of determining the optimal number $k$ of clusters along with the optimum initial seeds. Specifically, the density Canopy has been utilized as a pre-processing step and its feedback has then been considered as the cluster number and initial clustering center of the improved KMC technique. Similarly, in \\cite{lu2019improved}, an incremental KMC scheme is introduced using density estimation for improving the clustering accuracy. Explicitly, the density of the input samples is first estimated, where every primary cluster consists of the center points having a density higher than a given threshold along with the points within a specific density range. 
Following this, the initial clusters are merged with reference to the distance between the two cluster centers before dividing the points without any cluster affiliation into the clusters nearest to them.\n\n\nOn the other hand, in some specific data sets, e.g. real-world medical data sets with intrinsically overlapping information, data samples can pertain to more than one cluster simultaneously, while traditional KMC methods do not allow that since they are developed based on an exclusive clustering process. Therefore, an overlapping k-means clustering (OKMC) scheme is proposed in \\cite{whang2015non} to overcome that issue. Similarly, in \\cite{khanmohammadi2017improved}, the authors introduce a hybrid classifier that aggregates k-harmonic means and OKMC to address the sensitivity problem of the latter to initial cluster centroids.\n\n\n\n\n\n\n\n\n\\subsection{Drawbacks and limitations}\nDespite the fact that the outlined event-based NILM systems have recently been widely examined in the state-of-the-art, they can be affected by certain problems and limitations, which impede the development of powerful NILM architectures and even increase the difficulty of implementing real-time NILM systems. Moreover, most of these issues have not yet been overcome. For example, most existing solutions suffer from a low disaggregation accuracy. Therefore, these approaches need deeper investigation in order to improve their performance. Moreover, they are usually built upon detecting transient states, which can limit their detection accuracy if multiple appliances are turning on\/off simultaneously. In addition, most of the reviewed NILM systems are only validated on one category of data with a unique sampling frequency. This restricts the applicability of these techniques to different data repositories. 
On the other hand, most of the existing classifiers have difficulty accurately identifying appliance-level data, especially if the validation data set is imbalanced.\n\nTo overcome the aforementioned limitations, we present, in this framework, a novel non-intrusive load identification scheme, which relies on (i) shifting power fingerprints into a 2D space, (ii) deriving binary characteristics in local regions, (iii) representing the extracted features in the decimal field, and (iv) going back to the 1D space via capturing novel histograms of the 2D representations. These steps help in designing a robust identification approach, which has various benefits: (i) by transforming the appliance signatures into a 2D space, novel appliance footprints are developed that describe each appliance fingerprint in another way, and texture descriptions are derived from local regions using square kernels; (ii) the proposed strategy helps in identifying appliances accurately without depending on the devices' states (i.e. steady or transient); (iii) the proposed scheme can support real-time applications because it can be run at a low computational cost. Specifically, it also acts as a dimensionality reduction component, where short characteristic histograms having only 256 samples are collected at the final stage to represent every appliance; and (iv) an improved KNN algorithm has been developed to overcome the issues occurring with imbalanced data sets and improve the appliance identification performance. 
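The four steps just outlined (1D-to-2D folding, local binary encoding, decimal conversion and histogramming) can be sketched in a few lines of Python. This is an illustrative sketch, not the exact implementation: the 2D width, the clockwise neighbour ordering and the sum-to-one normalization of the final histogram are assumptions made for the example.

```python
import numpy as np

def lph_signature(power, width=32):
    """Sketch of the LPH idea: 1D power signal -> 256-bin normalized histogram."""
    # (i) fold the 1D signal into a 2D array (row-major; width is an assumption)
    p = np.asarray(power, dtype=float)
    p = p[: (len(p) // width) * width]
    img = p.reshape(-1, width)

    # (ii) binary-encode each 3x3 patch: compare the 8 neighbours
    # (clockwise from the top-left) against the central sample
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for u in range(1, img.shape[0] - 1):
        for v in range(1, img.shape[1] - 1):
            c = img[u, v]
            bits = [1 if img[u + du, v + dv] >= c else 0 for du, dv in offsets]
            # (iii) each 8-bit binary sequence becomes one decimal code in 0..255
            codes.append(sum(b << n for n, b in enumerate(bits)))

    # (iv) back to 1D: a 256-bin histogram, normalized so the bins sum to one
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

# Hypothetical periodic appliance load used only to exercise the sketch.
sig = lph_signature(np.sin(np.linspace(0, 20, 2048)) ** 2 * 1500.0)
```

Whatever the length of the input signal, the output signature always has 256 samples, which is the dimensionality-reduction property claimed above.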
Moreover, our 2D descriptor can be trained via simple machine learning algorithms without the need to deploy deep learning models, which usually have a high computational complexity.\n\n\\begin{figure}[b!]\n\\begin{center}\n\\includegraphics[width=16cm, height=10.9cm]{Fig1.pdf}\n\\end{center}\n\\caption{Flowchart of the proposed NILM framework.}\n\\label{NILM-system}\n\\end{figure}\n\n\\section{Proposed NILM based on 2D feature extraction} \\label{sec3}\nThis section focuses on presenting the principal steps of the proposed appliance identification system, which relies on the application of an original 2D descriptor. Accordingly, the flowchart of the proposed NILM system is portrayed in Fig. \\ref{NILM-system}. It is clear that the 2D-based load identification system represents the fundamental part of the NILM system.\n\n\\subsection{Background of local 2D feature extraction}\nIn recent years, 2D local feature extraction schemes have received significant attention in various research topics, including image and video processing \\cite{Tao2019}, breast cancer diagnosis \\cite{Kumar9097394}, face identification \\cite{Gong7812744} and fingerprint recognition \\cite{Ramirez8681394}. They are generally deployed to derive fine-grained characteristics after partitioning the overall 2D representation into various local regions using small kernels. Explicitly, local feature extraction can be applied at each local region of the 2D representation to draw pertinent features about the neighborhood of each key-point. The multiple features derived from the several regions are then fused into a unique, spatially augmented characteristic vector, in which the initial signals are effectively represented.\n\n\n\n\n\\subsection{Event detection}\nFor the event detection step, various event detection schemes have been proposed in the state-of-the-art. 
Event detection techniques are split into three main groups \\cite{Batra:2019:TRS}: specialized heuristics, probabilistic models and matched filters \\cite{Batra:2019:DRS,Lu8090442}. In this framework, the pre-processed aggregated power is segregated into different sections using the edge detector module \\cite{Batra2019} implemented in the NILMTK platform \\cite{Batra2014ACM}. Accordingly, the on\/off events of electrical devices are generally picked up via the analysis of power level variations in the aggregated signal. This event detector has been selected because of its simplicity and the availability of its source code in the NILMTK platform.\n\n\n\\subsection{Local power histogramming (LPH) descriptor}\nThe proposed appliance identification scheme relies mainly on transforming the appliance consumption signals into a 2D space and therefore treating the appliance recognition task as a CBIR problem. With this in mind, any image descriptor could be utilized to extract the fine-grained properties of the obtained 2D power signal representations.\n \nIn that respect, the proposed LPH-based feature extraction scheme transforms appliance signals into image representations. Following this, an examination of the local regions around each power sample is performed using a block partition procedure to collect local features. Explicitly, the LPH descriptor is introduced to extract histogram-based descriptions of the 2D representations of the power observations. Accordingly, LPH performs a binary encoding of power blocks through comparing the central power sample of each block with its neighbors. \n\n\\begin{figure}[b!]\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Fig2.pdf}\n\\end{center}\n\\caption{Block diagram of the LPH descriptor: Example using a patch of size $3 \\times 3$ (N = 8).}\n\\label{2D-flowchart}\n\\end{figure}\n\nFig. \\ref{2D-flowchart} explains the flowchart of the proposed LPH description scheme. 
Each central power observation is compared with its power neighbors in a kernel of $S \\times S$ power samples by subtracting the central power value from the neighboring power patterns. Following this, a binary encoding procedure is applied, where the positive values of the subtractions are mapped to 1 while the negative values are set to 0. Next, a binary sequence is acquired by means of a clockwise comparison process. Consequently, the gathered binary samples represent the corresponding LPH codes. Moving forward, the overall binary sequences are gleaned from all the regions (kernels) to form a binary array, which, in turn, is converted to the decimal field. Specifically, each binary sequence extracted from a specific block is converted to a decimal value (as illustrated in Fig. \\ref{2D-flowchart}). Lastly, a histogramming procedure is applied to the resulting decimal array, in which an LPH histogram is extracted to represent the initial power signal. The overall steps of the proposed LPH descriptor are summarized in Algorithm \\ref{algo1}. \n\n\n\n\n\n\\begin{algorithm}[t!]\n\\SetAlgoLined\n\n\\KwResult{ $\\mathbf{B}_{LPH}$: The histogram of local power histograms (LPH) }\n\na. Define the array $Y(i,j)$ of the appliance power signatures, where $i$ denotes the index of the appliance power sequences, and $j$ stands for the index of the samples in every sequence;\n\n \\While{$i \\leq M$ (\\textnormal{with} $M$ \\textnormal{the total number of appliance signatures in the overall database})}{\n\n\\textbf{Step 1.} Normalize and transform the appliance signature $Y(i,:)$ into 2D space (image representation), as explained in Fig. 
\\ref{1D-to-2D}.\n\n\\textbf{Step 2.} Calculate the LPH values of each power pattern $(u_{c},v_{c})$ in each specific kernel of size $S \\times S$, by comparing the central power pattern with its neighbors as follows:\n\\begin{equation}\nLPH_{n,S}(u_{c},v_{c})=\\sum\\limits_{n=1}^{N}b(j_{n}-j_{c})2^{n-1}\n\\end{equation}\nwhere $j_{c}$ refers to the central power sample, $j_{n}$ represents the $n^{th}$ surrounding power neighbor in a patch of size $S \\times S$ and $N = S^{2}-1$. The binary encoding function $b(u)$ is defined as:\n\\begin{equation}\nb(u)=\\left\\{ \n\\begin{array}{cc}\n1 & \\mathrm{if}~u \\geq 0 \\\\ \n0 & \\mathrm{if}~u < 0%\n\\end{array}%\n\\right. \n\\end{equation}\n\n\\textbf{Step 3.} Glean the binary samples $LPH_{n,S}(u_{c},v_{c})$ generated from every kernel and transform the obtained binary data into the decimal field in order to design a new decimal array $I_{D}$ (as explained in Fig. \\ref{2D-flowchart}).\n\n\\textbf{Step 4.} Perform a histogramming procedure on the obtained decimal matrix to extract an LPH histogram $H_{LPH}(n,S)$. The resulting histogram is then used as a texture feature vector to represent the initial appliance signature. After conducting the histogramming process, the description histogram $H_{LPH}(n,S)$ has $2^{N}$ bins (i.e. corresponding to the $2^{N}$ binary codes generated by the $N$ power sample neighbors of each block of data). 
\n\\begin{equation}\nH_{LPH}(n,S) = hist(I_{D}) = [H_{1}, H_{2}, \\cdots, H_{2^{N}}]\n\\end{equation}\n\n\\textbf{Step 5.} Normalize the resulting histogram so that the value of each bin lies in the range [0,1].\n\\begin{equation}\n\\mathbf{B}_{LPH}^{i}= \\mathrm{Normalize}(H_{LPH}(n,S))=b_{1},b_{2},\\cdots ,%\n~b_{2^{N}}=\\frac{H_{1}}{\\textstyle\\sum_{m=1}^{2^{N}}H_{m}},~\\frac{%\nH_{2}}{\\textstyle\\sum_{m=1}^{2^{N}}H_{m}},~\\cdots ,~\\frac{H_{2^{N}}}{%\n\\textstyle\\sum_{m=1}^{2^{N}}H_{m}} \n\\label{eq8}\n\\end{equation}\n\n\n}\n\\caption{The principal steps of the proposed LPH descriptor deployed to derive LPH features from the $M$ appliance power signals.}\n\\label{algo1}\n\\end{algorithm}\n\n\n\nMoving forward, a histogram of 256 samples is derived to represent each appliance signature, which has a significantly lower number of samples compared to the initial signal. Accordingly, LPH also helps in reducing the dimensionality of the appliance power signals, which efficaciously reduces the computational cost of our NILM system. \n\n\n\n\n\n\n\n\n\\subsection{Improved k-nearest neighbors (IKNN)} \\label{ClassModel}\nThis stage is responsible for predicting the class label of each extracted appliance histogram. Consequently, the class identification step is applied in two stages using a 10-fold validation, i.e. training and testing. In the first one, device load usage fingerprints are learned along with their class labels. Accordingly, $9$ folds of the database are utilized randomly in each training phase while the remaining fold is employed for testing purposes. \n\n \nMoving forward, selecting the value of $K$ is of utmost importance for the KNN model. However, appliance identification data sets suffer from the imbalanced classes issue, in which some classes include more consumption observations (i.e. 
majority classes) than others (i.e. minority classes). Accordingly, a salient drawback of conventional KNN schemes is that if $K$ is a fixed, user-defined value, the classification output will be biased towards the majority groups in most application scenarios, which results in a misclassification problem.\n\nTo avoid the issue encountered with imbalanced data sets, some works have been proposed with the aim of optimizing the value of $K$, such as \\cite{Liu2011,Zhang7858565}. However, they are very complex to implement and can significantly increase the computational cost, which hinders the development of real-time NILM solutions. In contrast, in this paper, we introduce a simple yet effective improvement of KNN, which maintains a low computational cost. It is applied as explained in Algorithm \\ref{improvedKNN}.\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=1.0\\textwidth]{Fig3.pdf}\n\\end{center}\n\\caption{Conversion of 1D signal into 2D representation.}\n\\label{1D-to-2D}\n\\end{figure}\n\n\n\n\\begin{algorithm}[t!]\n\\SetAlgoLined\n\\KwResult{Predicting class labels of test samples}\nRead the training appliance histograms extracted using LPH in Algorithm \\ref{algo1}. \\\\\n\n\\While{$j \\leq J$ (\\textnormal{with} $J$ \\textnormal{the number of test appliance histograms to be identified})}{\n\n\\textbf{Step 1:} Compute the information-entropy of every appliance histogram $b$, which is deployed to estimate its information gain. 
Thus, it acts as a weight for the appliance histograms, allocating a priority to each of them;\n\n\\begin{equation}\nE(B)=-\\sum\\limits_{i=1}^{n}a_{i}\\log _{2}(a_{i})\n\\end{equation}\n\nwhere $B$ is the training ensemble, $\\left\\vert B\\right\\vert $ is the number of training samples, and $a_{i}=\\left\\vert c_{i},B\\right\\vert \/ \\left\\vert B\\right\\vert $ is the probability that a random histogram in $B$ pertains to class $c_{i}$;\n\n\\textbf{Step 2:} Define the value of $k$ for the training ensemble;\n\n\\textbf{Step 3:} Partition the training ensemble into $m$ sub-groups;\n\n\\textbf{Step 4:} Estimate the mean value of each sub-group to derive its center;\n\n\\textbf{Step 5:} Identify the sub-group that is closest to a test histogram $b_{j}$ by estimating the Euclidean distance between the test observation and the center of each sub-group as follows:\n\n\\begin{equation}\nd_{j}(b_{c_{i}},b_{j})=\\sqrt{(b_{c_{i}}-b_{j})^{2}}\n\\end{equation}\nwhere $c_{i}$ represents the central instance of the $i^{th}$ sub-group and $i=1,2,\\cdots ,m$.\n\n\\textbf{Step 6:} Estimate the weighted Euclidean distance $wd_{j}$ between the test histogram $b_{j}$ and every histogram in the closest sub-group as follows: \n\n\\begin{equation}\nwd_{j}(b_{i},b_{j})=\\sqrt{w_{i}(b_{i}-b_{j})^{2}}\n\\end{equation}\n\nThis determines the $k$ nearest neighbors;\n\n\\textbf{Step 7:} Compute the weighted class probability of the test histogram $b_{j}$ as follows:\n\\begin{equation}\nc(b_{j})=\\arg \\underset{c\\in C}{\\max }\\sum\\limits_{i}^{k}w_{i}\\delta(c,c(y_{i})) ~~ \\label{eq5}\n\\end{equation}\nwhere $y_{1},y_{2},\\cdots ,y_{k}$ denote the $k$ nearest neighbors of the test histogram $b_{j}$, $C$ denotes the finite set of appliance class labels, $\\delta (c,c(y_{i}))=1$ if $c=c(y_{i})$ and $\\delta (c,c(y_{i}))=0$ otherwise. 
\n\n}\n\n\\caption{IKNN algorithm used to classify appliances based on their LPH signatures.}\n\\label{improvedKNN}\n\\end{algorithm}\n\n\n\n\n\nOverall, the proposed improved KNN helps improve the appliance identification performance by enhancing the classification accuracy and F1 score while reducing the execution time, as will be demonstrated in the next section. Therefore, it could help in developing real-time abnormality detection solutions. \n\n\n\n\n\n\\section{Evaluation and discussion} \\label{sec4}\nWe concentrate in this section on presenting the outcomes of an extensive empirical evaluation conducted on four real-world data sets, namely UK-DALE \\cite{UK-DALE2015}, GREEND \\cite{GREEND2014}, PLAID \\cite{PLAID2014} and WHITED \\cite{WHITED2016}, which are widely deployed to validate NILM and load identification frameworks in the state of the art. \n\n\\subsection{Data set description}\nThe four power consumption repositories considered in this framework are gleaned at distinct sampling rates (i.e. 1\/6 Hz, 1 Hz, 30 kHz and 44 kHz) to perform a thorough evaluation study and assess the effectiveness of the proposed solution when the sampling rate of the recorded appliance consumption signals varies.\n\nUnder UK-DALE, power usage footprints have been gathered over a long time period, ranging from 2 to 4 years, at sampling frequencies of 1\/6 Hz and 16 kHz (for aggregated data). In order to assess the performance of the proposed scheme, we exploit the consumption fingerprints gleaned from a specific household at 1\/6 Hz, which encompass nine appliance categories, each including a large number of daily consumption signatures. Moving forward, power traces of six different appliances collected under GREEND \\cite{GREEND2014} are also considered, in which a sampling frequency of 1 Hz has been used to record energy consumption footprints for a period of more than six months. 
Under PLAID, the power signatures of 11 device groups have been recorded at a sampling frequency of 30 kHz. Moreover, load usage footprints of WHITED have been gleaned for 11 appliance classes at a sampling frequency of 44 kHz. The properties of each data set, their appliance categories and the number of observed appliances\/days are recapitulated in Table \\ref{WHITED}.\n\n\\begin{table}[t!]\n\\caption{Properties of power consumption data sets considered in this framework, i.e. appliance classes and their number for both PLAID and WHITED, and appliance classes and number of observed days for both UK-DALE and GREEND.}\n\\label{WHITED}\n\\begin{center}\n\n\\begin{tabular}{lll|lll|lll|lll}\n\\hline\n\\multicolumn{3}{c|}{\\small UK-DALE} & \\multicolumn{3}{|c|}{GREEND}\n& \\multicolumn{3}{|c|}{\\small PLAID} & \\multicolumn{3}{|c}{\\small WHITED} \\\\ \n\\hline\n{\\small \\#} & {\\small Device} & {\\small \\#} & \\# & Device\n& \\multicolumn{1}{l|}{\\#} & {\\small \\#} & {\\small Device} & {\\small \\# } & {\\small \\#} & {\\small Device} & {\\small \\#} \\\\ \n& {\\small class} & {\\small days} & & class & \\multicolumn{1}{l|}{days} & & {\\small class} & {\\small app} & & {\\small class} & \n{\\small app} \\\\ \\hline\n{\\small 1} & {\\small Dishwasher} & {\\small 183} & 1 & Coffee & 242 & {\\small 1} & {\\small Fluorescent lamp} & {\\small 90} & \n{\\small 1} & {\\small Modems\/receivers} & \\multicolumn{1}{r}{\\small 20} \\\\ \n{\\small 2} & {\\small Refrigerator} & {\\small 214} & & machine & & \n{\\small 2} & {\\small Fridge} & {\\small 30} & {\\small 2} & {\\small Compact\nfluorescent} & \\multicolumn{1}{r}{\\small 20} \\\\ \n{\\small 3} & {\\small Washing machine} & {\\small 210} & 2 & Radio & 242 & {\\small 3} & {\\small Hairdryer} & {\\small 96} & & \n{\\small lamp} & \\multicolumn{1}{r}{} \\\\ \n{\\small 4} & {\\small Microwave} & {\\small 171} & 3 & Fridge\n& 240 & {\\small 4} & {\\small Microwave} & {\\small 94} & 
{\\small 3}\n& {\\small Charger} & \\multicolumn{1}{r}{\\small 30} \\\\ \n{\\small 5} & {\\small Stove} & {\\small 193} & & w\/freezer & & \n{\\small 5} & {\\small Air conditioner} & {\\small 51} & {\\small 4} & {\\small Coffee machine} & \\multicolumn{1}{r}{\\small 20} \\\\ \n{\\small 6} & {\\small Oven} & {\\small 188} & 4 & Dishwasher & 242 & {\\small 6} & {\\small Laptop} & {\\small 107} & {\\small 5} & \n{\\small Drilling machine} & \\multicolumn{1}{r}{\\small 20} \\\\ \n{\\small 7} & {\\small Washer\/dryer} & {\\small 216} & 5 & Kitchen & 242 & {\\small 7} & {\\small Vacuum cleaner} & {\\small 8}\n& {\\small 6} & {\\small Fan} & \\multicolumn{1}{r}{\\small 30} \\\\ \n{\\small 8} & {\\small Air conditioner} & {\\small 157} & & lamp & & \n{\\small 8} & {\\small Incandescent light bulb} & {\\small 79} & {\\small 7} & \n{\\small Flat iron} & \\multicolumn{1}{r}{\\small 20} \\\\ \n{\\small 9} & {\\small LED light} & {\\small 172} & 6 & TV & \n242 & {\\small 9} & {\\small Fan} & {\\small 96} & {\\small 8} & \n{\\small LED light} & \\multicolumn{1}{r}{\\small 20} \\\\ \n& & & & & & {\\small 10} & {\\small Washing machine} & {\\small 22} & \n{\\small 9} & {\\small Kettles} & \\multicolumn{1}{r}{\\small 20} \\\\ \n& & & & & & {\\small 11} & {\\small Heater} & {\\small 30} & {\\small 10} & \n{\\small Microwave} & \\multicolumn{1}{r}{\\small 20} \\\\ \n& & & & & & & & & {\\small 11} & {\\small Iron} & \\multicolumn{1}{r}{\\small 20} \\\\ \\hline\n\\end{tabular}\n\n\n\\end{center}\n\\end{table}\n\n\n\n\n\\subsection{Evaluation metrics}\nAiming at evaluating the quality of the proposed appliance identification objectively, various metrics are considered, including accuracy, F1 score, normalized cross-correlation (NCC) and histogram length. 
The accuracy measures the ratio of successfully recognized devices in the testbed, but it is not enough to evaluate the performance of an appliance identification system, given that alone it is not regarded as a reliable measure. This is mainly the case for imbalanced data sets, in which the power samples are not uniformly distributed (e.g. in this framework, both the PLAID and WHITED data sets are imbalanced). To reinforce the objectivity of the evaluation study, the F1 score is also recorded, which is considered a fairly trustworthy metric in such scenarios. Explicitly, the F1 score is defined as the harmonic average of the precision and recall measures.\n\n\n\n\\begin{equation}\nAccuracy=\\frac{TP+TN}{TP+FP+TN+FN}\n\\end{equation}\nwhere $TP$, $TN$, $FP$ and $FN$ denote the number of true positives, true negatives, false positives and false negatives, respectively. \n\\begin{equation}\nF1~score = 2\\times \\frac{precision \\times recall}{precision + recall}\n\\end{equation}\nwhere $precision=\\frac{TP}{TP+FP}$ and $recall=\\frac{TP}{TP+FN}$.\n\nAdditionally, the normalized cross-correlation (NCC) has been deployed to measure the similarity between the raw appliance signatures and the LPH histograms derived from the original power signals. NCC is described by calculating the cosine of the angle $\\theta$ between two power signals (or extracted characteristic histograms) $x$ and $y$:\n\n\\begin{equation}\nNCC=Cos(\\theta )=\\frac{x\\cdot y}{\\left\\vert x\\right\\vert \\left\\vert y\\right\\vert }=\\frac{\\sum\\nolimits_{i}x_{i}\\cdot y_{i}}{\\sqrt{\\sum\\nolimits_{i}x_{i}^{2}}\\sqrt{\\sum\\nolimits_{i}y_{i}^{2}}}, ~~~~-1\\leq NCC\\leq 1\n\\end{equation}\n\n\n\\subsection{Performance in terms of the NCC}\nIt is of utmost importance to comprehend at the outset how LPH histograms vary from the initial appliance power signatures. 
Accordingly, this subsection focuses on investigating the nature of the relation between appliance power signatures that pertain to the same appliance class. In addition, this can aid in understanding how LPH histograms improve the discrimination between appliances belonging to different classes while, on the other hand, increasing the similarity between appliances from the same group. \n\nTo that end, six appliance signatures $s1, s2, \\cdots , s6$ have been randomly selected from each device category of the UK-DALE data set. Moving forward, the NCC has been measured between these signatures to demonstrate why LPH can result in a better correlation between signatures of the same device category. Fig. \\ref{CorrMat} outlines the obtained NCC matrices, which are calculated between the six raw power signals (left side) and the LPH feature vectors (right side), respectively. Both raw power signals and LPH vectors are gleaned from four device groups, namely the washing machine, fridge w\/ freezer, coffee machine and radio. It can be seen from the plots on the left side of Fig. \\ref{CorrMat} that the NCC rates are quite low and vary randomly. Specifically, it is hard to identify a certain interval specifying the limits of the NCC rates. On the other side, when measuring the correlation between LPH vectors, as indicated on the right side of Fig. \\ref{CorrMat}, the NCC values outperform those obtained from the raw power signals. 
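A minimal sketch of the NCC computation used in these matrices is given below; the two short histograms are hypothetical stand-ins for 256-bin LPH vectors:

```python
import numpy as np

def ncc(x, y):
    """Normalized cross-correlation: the cosine of the angle between x and y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

# Hypothetical normalized histograms of two appliances from the same class.
h1 = np.array([0.2, 0.3, 0.1, 0.4])
h2 = np.array([0.25, 0.28, 0.12, 0.35])
print(ncc(h1, h2))   # close to 1: highly similar signatures
```

Identical vectors give an NCC of exactly 1, orthogonal vectors give 0, which is why a tight interval near 1 indicates a strong within-class correlation.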
Overall, NCC rates gleaned from LPH vectors are generally above 0.97 for all appliance groups investigated in this correlation study.\n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4a.pdf}\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4b.pdf}\\\\\n(I) Coffee machine\\\\\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4c.pdf}\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4d.pdf}\\\\\n(II) Radio \\\\\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4e.pdf}\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4f.pdf}\\\\\n(III) Fridge w\/ freezer \\\\\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4g.pdf}\n\\includegraphics[width=6.5cm, height=4.2cm]{Fig4h.pdf}\\\\\n(IV) Washing machine \n\n\\end{center}\n\\caption{Correlation arrays computed for: (a) raw appliance signatures belonging to the same appliance category and (b) their LPH histograms.}\n\\label{CorrMat}\n\\end{figure}\n\n\nFig. \\ref{HLPH} portrays an example of six appliance signals extracted from the UK-DALE database, their encoded 2D representations and the final histograms collected using the LPH descriptor. It is worth noting that each appliance signal is represented by a unique image in 2D space and further by a specific histogram in the final step. \nClearly, by transforming the appliance signals into 2D space, they can be treated as images, to which any 2D feature extraction scheme can be applied to collect pertinent features. Moreover, through the 2D representation, every power sample is surrounded by eight neighboring samples instead of only two neighbors in the 1D space. Therefore, more opportunities are available for extracting fine-grained characteristics from each device signature in a reliable way. Consequently, this helps effectively correlate devices that pertain to the same group while increasing the distance between devices corresponding to distinct categories. 
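This 1D-to-2D pipeline can be sketched as follows, assuming a simple row-major reshape and an LBP-style 8-neighbor binary code; the exact encoding used by LPH is the one given in Algorithm \ref{algo1}, and the signal here is synthetic:

```python
import numpy as np

def to_2d(signal, width):
    """Reshape a 1D power signal into a 2D array (row-major), zero-padding the tail."""
    rows = -(-len(signal) // width)          # ceiling division
    padded = np.zeros(rows * width)
    padded[:len(signal)] = signal
    return padded.reshape(rows, width)

def binary_pattern_histogram(img, bins=256):
    """Encode each interior sample against its 8 neighbors, then build a
    normalized histogram so every bin lies in [0, 1]."""
    codes = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            c = img[i, j]
            nbrs = (img[i-1, j-1], img[i-1, j], img[i-1, j+1], img[i, j+1],
                    img[i+1, j+1], img[i+1, j], img[i+1, j-1], img[i, j-1])
            codes.append(sum(1 << k for k, v in enumerate(nbrs) if v >= c))
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / max(hist.sum(), 1)

sig = np.sin(np.linspace(0.0, 20.0, 600))    # synthetic stand-in for a power trace
feat = binary_pattern_histogram(to_2d(sig, 30))
print(feat.shape)                            # fixed 256-bin signature: (256,)
```

Whatever the length of the input signal, the output is a fixed-length 256-bin vector, which is exactly the dimensionality-reduction property discussed above.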
In addition, the LPH-based load identification system does not rely on capturing the appliances' states (steady or transient). This represents an additional benefit of the proposed solution, which can recognize each electrical device without the need to collect state information. \n\n\nMoreover, even though the neighborhood is temporally distant in 2D space, this gives us various possibilities to encode the power signal. Therefore, this results in better correlation and discrimination abilities and hence a better classification performance on the power signals. On the contrary, if the signal is processed in the original 1D space, the possibilities for encoding the signal are very limited. Thus, the correlation and discrimination abilities lose their efficacy, since classifiers frequently confuse appliances when using features generated in this space.\n\n\n\\begin{figure*}[t!]\n\\begin{center}\n\\includegraphics[width=12.9cm, height=4.3cm]{Fig5a.pdf}\\\\\nI. Example of power signals from the UK-DALE data set.\n\\includegraphics[width=14.5cm, height=6.9cm]{Fig5b.pdf}\\\\ \nII. Image representation of LPH encoding of the power signals.\n\\includegraphics[width=12.9cm, height=4.3cm]{Fig5c.pdf}\\\\ \nIII. Histograms of LPH representations of the power signals.\n\\end{center}\n\\caption{Example of appliance signals, their 2D LPH representations and their LPH histograms from the UK-DALE data set: (a) television, (b) Network Attached Storage (NAS), (c) washing machine, (d) dishwasher, (e) notebook and (f) coffee machine.}\n\\label{HLPH}\n\\end{figure*} \n\n\n\n\\subsection{Performance compared to different ML classifiers }\nWe present in this subsection the performance of the proposed LPH-IKNN based appliance identification system in comparison with different classifiers, namely conventional KNN, DT, SVM, DNN and EBT. 
Specifically, Table \\ref{ACCFscore} reports the accuracy and F1 scores collected under the UK-DALE, GREEND, PLAID and WHITED data sets, in which a 10-fold cross validation is adopted. \n\nIt is clearly shown that the proposed IKNN classifier, based on both the Euclidean distance and the weighted Euclidean distance, outperforms the other classification models; it provides the best results on all data sets considered in this framework. For instance, it achieves 98.50\\% accuracy and 98.49\\% F1 score under UK-DALE, while the performance has slightly dropped for both the PLAID and WHITED data sets. Accordingly, 96.85\\% accuracy and 96.48\\% F1 score have been obtained under PLAID, and 96.55\\% accuracy and 96.34\\% F1 score have been attained under WHITED.\nThis might be explained by the higher sampling frequency of both the PLAID and WHITED data sets, where data have been gleaned at 30 kHz and 44 kHz, respectively, in contrast to UK-DALE, in which consumption footprints have been gathered at a resolution of 1\/6 Hz. Moreover, it is important to notice that the LPH descriptor serves not only as a feature extraction descriptor but as a dimensionality reduction approach as well. Explicitly, for each appliance signal, the resulting LPH vector encompasses only 256 samples while the initial appliance signals include many more samples (e.g. 22491, 57600 and 30000 samples for WHITED, UK-DALE and PLAID, respectively).\nThis leads us to conclude that the proposed LPH-IKNN solution operates better at low sampling frequencies. 
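The 10-fold protocol used for Table \ref{ACCFscore} can be sketched with a plain (unweighted) k-NN on synthetic stand-in histograms; the data, class counts and resulting score below are illustrative assumptions, not the paper's results:

```python
import numpy as np

def kfold_knn_accuracy(X, y, n_folds=10, k=5, seed=0):
    """Mean accuracy of a plain k-NN classifier under n-fold cross validation."""
    idx = np.random.default_rng(seed).permutation(len(y))
    accs = []
    for fold in np.array_split(idx, n_folds):
        test = np.zeros(len(y), dtype=bool)
        test[fold] = True
        Xtr, ytr = X[~test], y[~test]
        # Euclidean distance from every test sample to every training sample.
        d = np.linalg.norm(X[test][:, None, :] - Xtr[None, :, :], axis=2)
        nearest = np.argsort(d, axis=1)[:, :k]
        preds = np.array([np.bincount(ytr[row]).argmax() for row in nearest])
        accs.append(float((preds == y[test]).mean()))
    return float(np.mean(accs))

# Synthetic stand-in for LPH features: 4 classes, 40 histograms of 256 bins each.
rng = np.random.default_rng(1)
protos = rng.random((4, 256))
X = np.vstack([p + 0.05 * rng.standard_normal((40, 256)) for p in protos])
y = np.repeat(np.arange(4), 40)
print(kfold_knn_accuracy(X, y))
```

Because every sample serves as a test point exactly once across the ten folds, the reported mean is less sensitive to a lucky train/test split than a single hold-out evaluation.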
All in all, the performance obtained with LPH-IKNN is very promising, since the accuracy and F1 scores are above 96\\% for all the data sets considered in this study.\n\n\n \n\\begin{table}[t!]\n\\caption{Performance of the proposed appliance identification system using the LPH descriptor in terms of the accuracy and F1 score with reference to various classifiers.}\n\\label{ACCFscore}\n\\begin{center}\n\n\\begin{tabular}{lccccccccc}\n\\hline\n{\\small Classifier } & {\\small Classifier} & \\multicolumn{2}{c}{\\small UK-DALE} & \\multicolumn{2}{c}{GREEND} & \\multicolumn{2}{c}{\\small PLAID} & \\multicolumn{2}{c}{\\small WHITED} \\\\ \\cline{3-10}\n& {\\small \\ parameters} & {\\small Accuracy} & {\\small F1 score} & Accuracy & F1 score & {\\small Accuracy} & {\\small F1 score} & \n{\\small Accuracy} & {\\small F1 score} \\\\ \\hline\n{\\small LDA} & {\\small \/} & {\\small 93.71} & {\\small 93.53} & 94.55\n& 94.37 & {\\small 84.71} & {\\small 77.93} & {\\small 82.50} & \n{\\small 77.41} \\\\ \n{\\small DT} & {\\small Fine, 100 splits} & {\\small 97.42} & {\\small 97.37} & \n97.81 & 97.69 & {\\small 75.42} & {\\small 66.9} & {\\small 92.5} & {\\small 90.49} \\\\ \n{\\small DT} & {\\small Medium, 20 splits} & {\\small 96.51} & {\\small 96.5} & \n96.77 & 96.70 & {\\small 65.85} & {\\small 50.20} & {\\small 91.25} & {\\small 90.84} \\\\ \n{\\small DT} & {\\small Coarse, 4 splits} & {\\small 73.86} & {\\small 69.38} & \n75.11 & 71.36 & {\\small 49} & {\\small 31.15} & {\\small 34.16} & {\\small 28.36} \\\\ \n{\\small DNNs} & {\\small 50 hidden layers} & {\\small 71.69} & {\\small 69.82}\n& 74.3 & 72.42 & {\\small 78.14} & {\\small 76.09} & {\\small 82.37} & {\\small 81.86} \\\\ \n{\\small EBT} & {\\small 30 learners, 42 k} & {\\small 82.51} & {\\small 81.26}\n& 84.66 & 82.71 & {\\small 82.57} & {\\small 74.98} & \n{\\small 91.66} & {\\small 88.67} \\\\ \n& {\\small splits} & & & & & & & & \\\\ \n{\\small SVM} & {\\small Linear Kernel} & {\\small 
94.84} & {\\small 95} & \n95.39 & 95.48 & {\\small 81.85} & {\\small 71.61} & {\\small 84.58} & {\\small 82.52} \\\\ \n{\\small SVM} & {\\small \\ Gaussian kernel} & {\\small 89.31} & {\\small 98.93}\n& 90.61 & 90.05 & {\\small 85} & {\\small 77.57} & {\\small 84.91} & {\\small 87.91} \\\\ \n{\\small SVM} & {\\small Quadratic kernel} & {\\small 93.93} & {\\small 93.81} & \n94.72 & 94.13 & {\\small 89.14} & {\\small 85.34} & {\\small 92.5} & {\\small 89.07} \\\\ \n{\\small KNN} & {\\small k=10\/Weighted} & {\\small 96.96} & {\\small 96.81} & \n97.23 & 96.97 & {\\small 82.14} & {\\small 73.57} & {\\small 87.91} & {\\small 82.71} \\\\ \n& {\\small \\ Euclidean dist} & & & & & & & & \\\\ \n{\\small KNN} & {\\small k=10\/Cosine dist} & {\\small 96.13} & {\\small 96.01} & \n96.79 & 96.55 & {\\small 75.57} & {\\small 65.57} & {\\small 84.58} & {\\small 80.1} \\\\ \n{\\small KNN} & {\\small k=1\/Euclidean \\ dist} & {\\small 97.45} & {\\small 97.41} & 97.60 & 97.36 & {\\small 91.75} & {\\small 89.07} & \n{\\small 92.43} & {\\small 89.97} \\\\ \n{\\small IKNN} & {\\small k=5\/Weighted Euclidean } & {\\small \\textbf{98.50}} & {\\small \\textbf{98.49}} & \\textbf{98.84} & \\textbf{98.77} & {\\small \\textbf{96.85}} & {\\small \\textbf{96.48}} & \n{\\small \\textbf{96.55}} & {\\small \\textbf{96.34}} \\\\ \n& {\\small \\ distance + Euclidean} & & & & & & & & \\\\ \n& {\\small \\ \\ distance } & & & & & & & & \\\\ \\hline\n\\end{tabular}\n\n\\end{center}\n\\end{table}\n\n\nOn the other side, it is worth noting that the proposed LPH descriptor can be trained using simple ML algorithms without the need to deploy deep learning models, which usually have a high computational complexity. In this direction, it is obvious that conventional classifiers, e.g. LDA, DT, EBT, SVM and KNN, significantly outperform the DNN classifier, especially under the UK-DALE and GREEND data sets. 
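To illustrate the decision rule that gives IKNN its edge, the sketch below follows Steps 3-7 of Algorithm \ref{improvedKNN}: partition the training set into sub-groups, keep only the sub-group closest to the test histogram, then take a weighted vote among its $k$ nearest neighbors. The toy 2D points, the class-wise partition and the uniform weights are illustrative assumptions; in the paper the weights come from the entropy step.

```python
import numpy as np

def iknn_predict(b_test, X_train, y_train, w, m=2, k=3):
    """Sketch of the IKNN decision rule (sub-group pruning + weighted vote)."""
    # Steps 3-4: partition the training set into m sub-groups and compute their
    # centers (here we simply split the class-sorted indices into m chunks).
    groups = np.array_split(np.argsort(y_train, kind="stable"), m)
    centers = np.array([X_train[g].mean(axis=0) for g in groups])
    # Step 5: keep only the sub-group whose center is closest to the test sample.
    g = groups[int(np.argmin(np.linalg.norm(centers - b_test, axis=1)))]
    # Step 6: weighted Euclidean distance inside the selected sub-group.
    wd = np.sqrt(w[g] * np.sum((X_train[g] - b_test) ** 2, axis=1))
    nearest = g[np.argsort(wd)[:k]]
    # Step 7: weighted vote among the k nearest neighbors.
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w[i]
    return max(votes, key=votes.get)

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])
y = np.array([0, 0, 0, 1, 1, 1])
w = np.ones(len(y))                      # uniform weights for the toy example
print(iknn_predict(np.array([0.05, 0.05]), X, y, w))   # → 0
```

Pruning to the nearest sub-group before the neighbor search is also what keeps the computational cost low, since distances are only computed against a fraction of the training set.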
\n\n\n\\subsection{Comparison with existing 2D descriptors}\nThe promising results of the proposed LPH obtained under the four data sets considered in this study have pushed us to investigate the performance of other 2D descriptors in comparison with our solution. Accordingly, in this section, we investigate the performance of five other feature extraction schemes.\n\n\\begin{itemize}\n\\item \\textit{Local directional patterns (LDP)}: After transforming the power signal into 2D space, for each pattern of the power array, an 8-bit binary sequence is derived using LDP \\cite{Perumal2016}. The latter is measured via the convolution of small kernels from the power array (e.g. $3 \\times 3$) with the Kirsch blocks in 8 different orientations. Fig. \\ref{kenels} portrays an example of the Kirsch blocks used in LDP. \n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=9.5cm, height=4.9cm]{Fig6.pdf}\n\\end{center}\n\\caption{Kirsch kernels utilized in the LDP approach.}\n\\label{kenels}\n\\end{figure}\n\n\\item \\textit{Local ternary pattern (LTeP)}: Unlike LPH, LTeP does not encode the difference of power patterns in every kernel into 0 or 1, but encodes them into three quantization levels using a thresholding process \\cite{Yuan2014}. Let $thr$ be the threshold parameter, $s_{c}$ the central power pattern in a $3 \\times 3$ patch, and $s_{n}$ a neighbor pattern; every neighbor pattern is then encoded as follows: \n\n\\begin{equation}\ns_{n}^{\\prime }=\\left\\{ \n\\begin{array}{cc}\n1 & \\mathrm{if}~ s_{n} > s_{c} + thr \\\\ \n0 & \\mathrm{if}~ s_{n} > s_{c} - thr ~ \\mathrm{and}~s_{n} < s_{c} + thr \\\\ \n-1 & \\mathrm{if}~ s_{n} < s_{c} - thr\n\\end{array}\n\\right. \n\\end{equation}\n\n\n\\item \\textit{Local Transitional Pattern (LTrP)}: It compares the transitions of intensity changes in small local regions (e.g. 
kernels of $3 \\times 3$) in different orientations in order to binary encode the 2D representations of appliance power signals. Specifically, LTrP generates a bit (0\/1) via comparing the central power pattern of a $3 \\times 3$ patch with only the intensities of two neighbors along two particular directions \\cite{Ahsan2013}.\n\n\\item \\textit{Local binary pattern (LBP):} A texture descriptor that presents a low computational complexity along with the capability to capture a good part of the textural patterns of 2D representations. LBP represents micro-patterns in power matrices by an ensemble of simple computations around each power sample \\cite{Tabatabaei2018}.\n\n\\item \\textit{Binarized statistical image features (BSIF):} It constructs local descriptions of 2D representations via effectively encoding texture information and extracting histograms of local regions. Accordingly, binary codes for power patterns are extracted via the projection of local power regions onto a subspace, where the basis vectors were learned from natural images \\cite{Kannala6460393}.\n\n\\end{itemize}\n\nTable \\ref{LPHvriants} along with Fig. \\ref{LPHvariant} portray the performance of LPH in comparison with the aforementioned 2D descriptors, namely LBP, LDP, LTeP, BSIF and LTrP, with regard to the histogram length, accuracy and F1 score. The results have been obtained by considering the IKNN for all descriptors (K=5). It has been clearly shown that high performance has been obtained with all the descriptors under UK-DALE. Explicitly, all the descriptors have achieved an accuracy and F1 score of more than 96\\%. On the other hand, the LDP and LTeP descriptors marginally surpass LPH under this repository. On the contrary, the performance of the other descriptors drops sharply under PLAID and WHITED, and only LPH maintains good accuracy and F1 score results. 
For instance, LPH has attained 96.85\\% accuracy and 96.48\\% F1 score under PLAID, and 96.55\\% accuracy and 96.34\\% F1 score under WHITED. In this context, under PLAID, LPH outperforms BSIF, LBP, LTrP, LTeP and LDP in terms of the accuracy by more than 6\\%, 5\\%, 11\\%, 5.5\\% and 7\\%, respectively, while in terms of the F1 score it outperforms them by 7\\%, 5.5\\%, 15\\%, 7\\% and 10\\%, respectively.\n\n\nMoreover, the performance variation reported across the different data sets is due to the variation of the frequency resolution, and also to the fact that UK-DALE records appliance power consumption over multiple days (i.e. it collects the consumption of the same devices on distinct days over a long period), while the PLAID and WHITED data sets observe different devices belonging to the same device category but from distinct manufacturers (brands). \n\n\\begin{table}[t!]\n\\caption{Performance of the LPH-based descriptor vs. other 2D descriptors with reference to the histogram length, accuracy and F1 score. 
}\n\\label{LPHvriants}\n\\begin{center}\n\n\\begin{tabular}{llllllllll}\n\\hline\n{\\small Descriptor} & {\\small Histogram} & \\multicolumn{2}{c}{\\small UK-DALE}\n& \\multicolumn{2}{c}{GREEND} & \\multicolumn{2}{c}{\\small PLAID} & \n\\multicolumn{2}{c}{\\small WHITED} \\\\ \\cline{3-10}\n& {\\small length} & {\\small Accuracy} & {\\small F1 score} & Accuracy\n& F1 score & {\\small Accuracy} & {\\small F1 score} & {\\small %\nAccuracy} & {\\small F1 score} \\\\ \\hline\n{\\small LDP} & {\\small 56} & {\\small \\textbf{99.46}} & {\\small \\textbf{99.50}} & 99.62\n& \\textbf{99.59} & {\\small 89.79} & {\\small 85.82} & {\\small 81.66} & \n{\\small 79.38} \\\\ \n{\\small LTeP} & {\\small 512} & {\\small 98.86} & {\\small 98.80} & 98.95 & 98.91 & {\\small 91.28} & {\\small 88.97} & {\\small 82.08} & \n{\\small 80.15} \\\\ \n{\\small LTrP} & {\\small 256} & {\\small 97.04} & {\\small 96.99} & 97.11 & 97.05 & {\\small 85.81} & {\\small 81.37} & {\\small 81.25} & \n{\\small 78.78} \\\\ \n{\\small LBP} & {\\small 256} & {\\small 97.21} & {\\small 96.56} & 97.35 & 97.13 & {\\small 91.83} & {\\small 90.72} & {\\small 92.5} & \n{\\small 92.03} \\\\ \n{\\small BSIF} & {\\small 256} & {\\small 96.75} & {\\small 96.12} & 96.88 & 96.50 & {\\small 90.33} & {\\small 89.41} & {\\small 88.94} & \n{\\small 87.77} \\\\ \n{\\small LPH} & {\\small 256} & {\\small 98.51} & {\\small 98.49} & \\textbf{99.65} & 99.55 & {\\small \\textbf{96.85}} & {\\small \\textbf{96.48}} & {\\small \\textbf{96.55}} & \n{\\small \\textbf{96.34}} \\\\ \\hline\n\\end{tabular}\n\n\\end{center}\n\\end{table}\n\n\n\\begin{figure}[t!]\n\\begin{center}\n\\includegraphics[width=0.46\\textwidth]{Fig7a.pdf}\n\\includegraphics[width=0.46\\textwidth]{Fig7b.pdf}\n\n\\end{center}\n\\caption{The performance of LPH descriptor compared to other 2D feature extraction schemes in terms of (a) accuracy, and (b) F1 score.}\n\\label{LPHvariant}\n\\end{figure}\n\n\nMoving forward, we have evaluated the computation cost of the 
proposed appliance identification scheme based on different 2D descriptors, in order to demonstrate its applicability in real-time scenarios. Accordingly, the computation times of the training and test phases of our approach have been measured using MATLAB 9.4 on a laptop with a Core i7-85500 CPU (1.97 GHz) and 32 GB RAM. Table \\ref{time} depicts the obtained computational times (in sec) with regard to various 2D descriptors under the four data sets adopted in this framework.\n\nAccordingly, it is clearly seen that the LPH-based appliance identification achieves the lowest computational time in comparison with the other descriptors for both the training and test stages under the four data sets. Moreover, the test time of the LPH-based solution under PLAID and WHITED is less than 1 sec, which proves that it is possible to implement it for real-time applications, since most transmitters send data at intervals of more than 1 sec. On the other hand, the test time of the LPH-based solution increases under UK-DALE to more than 2 sec, because in this case long daily consumption signatures are analyzed, in contrast to PLAID and WHITED, where short appliance fingerprints are considered. \n\n\n\n\\begin{table}[t!]\n\\caption{Computational time (in sec) of the proposed appliance identification based on different 2D descriptors. 
}\n\\label{time}\n\\begin{center}\n\n\\begin{tabular}{lcccccccc}\n\\hline\n& \\multicolumn{8}{c}{\\small Time (in sec)} \\\\ \\cline{2-9}\n{\\small 2D descriptors} & \\multicolumn{2}{c}{\\small UK-DALE} & \n\\multicolumn{2}{c}{GREEND} & \\multicolumn{2}{c}{\\small PLAID} & \n\\multicolumn{2}{c}{\\small WHITED} \\\\ \\cline{2-9}\n& {\\small training} & {\\small test} & training & test & {\\small training} & {\\small test} & {\\small training} & {\\small test} \\\\ \\hline\n{\\small LDP} & {\\small 25.18} & {\\small 3.71} & 32.23 & 4.73 & {\\small 8.89} & {\\small 1.27} & {\\small 6.17} & {\\small 0.88} \\\\ \n{\\small LTeP} & {\\small 31.22} & {\\small 3.86} & 39.97 & 4.96 & {\\small 11.39} & {\\small 1.69} & {\\small 7.76} & {\\small 1.19} \\\\ \n{\\small LTrP} & {\\small 34.69} & {\\small 4.38} & 44.41 & 5.61 & {\\small 12.48} & {\\small 1.44} & {\\small 8.42} & {\\small 1.03} \\\\ \n{\\small LBP} & {\\small 19.55} & {\\small 2.96} & 25.44 & 3.77 & {\\small 6.26} & {\\small 0.97} & {\\small 4.13} & {\\small 0.69} \\\\ \n{\\small BSIF} & {\\small 39.17} & {\\small 5.11} & 49.17 & 6.61 & {\\small 13.75} & {\\small 1.76} & {\\small 9.27} & {\\small 1.25} \\\\ \n\n{\\small LPH} & {\\small \\textbf{19.45}} & {\\small \\textbf{2.92}} & \\textbf{21.89} & \\textbf{3.76} & {\\small \\textbf{5.91}} & {\\small \\textbf{0.93}} & {\\small \\textbf{3.68}} & {\\small \\textbf{0.64}} \\\\ \\hline\n\\end{tabular}\n\n\\end{center}\n\\end{table}\n\n\n\n\n\n\n\n\n\\subsection{Comparison with existing load identification frameworks} \nTable \\ref{AIScomp} recapitulates the results of various existing load identification frameworks under the REDD data set, in comparison with the proposed LPH-IKNN solution, with reference to different criteria, among them the learning approach, its type, the number of device categories and the accuracy. 
It has been clearly seen that the LPH-IKNN framework outperforms all other architectures considered in this study. Moreover, LPH-IKNN has a low computational cost, which makes it a candidate for real-time applications. On the other side, it is worth noting that the proposed method is evaluated under four distinct power repositories with different sampling rates, in which it presents promising results. In contrast, each of the other methods is only validated under one data set, which increases the credibility of our study and proves that it could be deployed under different scenarios regardless of the sampling rate. \n\n \n\\begin{table}[t!]\n\\caption{Performance of the proposed LPH-IKNN based load identification system vs. existing solutions with reference to different criteria.}\n\\label{AIScomp}\n\\begin{center}\n\n\\begin{tabular}{ccccc}\n\\hline\n{\\small Framework} & {\\small Approach} & {\\small Learning } & {\\small \\ \\ \\#\nappliance \\ \\ \\ } & {\\small \\ \\ \\ Accuracy \\ \\ \\ } \\\\ \n& & {\\small type} & {\\small classes} & {\\small (\\%)} \\\\ \\hline\n\\multicolumn{1}{l}{{\\small \\cite{HIMEUR2020114877}}} & {\\small MSWPT + DBT}\n& {\\small supervised} & {\\small 9} & {\\small 96.81} \\\\ \n\\multicolumn{1}{l}{{\\small \\cite{Park2019}}} & {\\small ANN} & {\\small supervised} & {\\small 8} & {\\small 83.8} \\\\ \n\\multicolumn{1}{l}{{\\small \\cite{Ma8118142}}} & {\\small fingerprint-weighting KNN} & {\\small supervised } & {\\small 6} & {\\small 83.25} \\\\ \n\\multicolumn{1}{l}{{\\small \\cite{Guedes2015}}} & {\\small HOS} & {\\small supervised} & {\\small 11} & {\\small 96.8} \\\\ \n\\multicolumn{1}{l}{{\\small \\cite{Wang2012}}} & {\\small \\ \\ \\ mean-shift\nclustering \\ \\ \\ \\ } & {\\small \\ \\ unsupervised \\ \\ } & {\\small 13} & {\\small 80} \\\\ \n\\multicolumn{1}{l}{{\\small \\cite{Dinesh2016}}} & {\\small Karhunen Lo\\'{e}ve}\n& {\\small supervised} & {\\small N\/A} & {\\small 87} \\\\ 
\cite{Morais2019} & AANN & supervised & 5 & 97.7 \\
\cite{Zhiren2019} & AdaBoost & supervised & 5 & 94.8 \\
\cite{Ghosh2019} & Fuzzy model & supervised & 7 & 91.5 \\
\cite{YAN2019101393} & Bayesian classifier + correlation & supervised & 29 & 95.6 \\
Ours & LPH + IKNN & supervised & 9 & 98.5 \\ \hline
\end{tabular}
\end{center}
\end{table}

All in all, it is important to mention that the proposed LPH-IKNN has been validated on four different data sets (i.e., UK-DALE, GREEND, PLAID and WHITED) that include different kinds of power signatures recorded (i) with distinct frequency resolutions, and (ii) under different scenarios. For instance, on UK-DALE and GREEND, we collect power consumption signatures that vary over time for a set of appliances (i.e., each daily consumption trace represents a power signature), while on PLAID and WHITED, the power traces of each appliance category are gleaned from devices of different manufacturers. In this regard, validating our solution on these data sets under different scenarios has helped in (i) demonstrating its high performance even when the frequency resolution changes, and (ii) proving its capability to be deployed in real-application scenarios, since it can identify appliance-level data even when the appliances come from different manufacturers and the power signatures change from one day to another.
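To make the identification pipeline concrete, the following minimal Python sketch folds a power signal into a 2D array, extracts a local-binary-pattern histogram, and classifies it with a plain k-NN vote. This is an illustration only: the classic LBP descriptor stands in for LPH (whose exact coding scheme is not detailed here), the 1D-to-2D folding is an assumed mapping, and the plain majority vote omits IKNN's improved strategy.

```python
import numpy as np

def to_2d(signal, width):
    """Fold a 1D power signal into a 2D array (a simple stand-in for the
    paper's 2D signal representation; the exact mapping is an assumption)."""
    n = (len(signal) // width) * width
    return np.asarray(signal[:n], dtype=float).reshape(-1, width)

def lbp_histogram(img):
    """256-bin local-binary-pattern histogram (classic LBP, used here as an
    illustrative stand-in for the LPH descriptor)."""
    c = img[1:-1, 1:-1]
    neighbors = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                 img[2:, 2:], img[2:, 1:-1], img[2:, :-2], img[1:-1, :-2]]
    code = np.zeros_like(c, dtype=np.int64)
    for bit, nb in enumerate(neighbors):
        # Set bit if the neighbor is at least as large as the center pixel.
        code += (nb >= c).astype(np.int64) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()  # normalized appliance fingerprint

def knn_predict(train_feats, train_labels, feat, k=3):
    """Plain k-NN majority vote (the paper's IKNN adds a smarter strategy)."""
    d = np.linalg.norm(np.asarray(train_feats) - feat, axis=1)
    votes = [train_labels[i] for i in np.argsort(d)[:k]]
    return max(set(votes), key=votes.count)
```

Because LBP codes depend only on pairwise comparisons, the resulting fingerprint is invariant to positive affine rescaling of the signal, which hints at why such histograms can be robust across recordings of the same appliance.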
\section{Conclusion} \label{sec5}

In this paper, a novel method for performing accurate appliance identification, and hence improving the performance of NILM systems, has been presented. The applicability of a local 2D descriptor, namely LPH, combined with the IKNN classifier to recognize electrical devices has been successfully validated. Other types of 2D descriptors can also be investigated in order to further improve the identification accuracy, such as local texture descriptors, color histograms, moment-based descriptors and scale-invariant descriptors. This line of research is full of challenges, and plenty of opportunities are available. In addition to the high accuracy reached, the LPH-IKNN based appliance identification scheme has shown a low computational cost thanks to the use of a fast 2D descriptor along with the IKNN, which uses a smart strategy to reduce the training and test times. Furthermore, LPH-IKNN also acts as a dimensionality reduction step, since very short histograms are derived to represent appliance fingerprints.

On the other hand, although LPH-IKNN has shown very promising performance, it still has some drawbacks, among them its reliance on a supervised learning process. Explicitly, this could limit its application in scenarios where it is difficult to collect data to train the proposed model. To that end, part of our future work is to change the learning process by building an improved version of LPH-IKNN using an unsupervised learning approach. Another option is to add a transfer learning module, which would eliminate the need to collect new training data when the sampling frequency of the collected data changes. Moreover, the IKNN classifier could be replaced by any other improved algorithm that enables an automatic selection of the $k$ value, so as to simplify the use of LPH-IKNN in real application scenarios.
In this context, the GBKNN classifier discussed in Section \ref{sec221} seems to be a good alternative that could be investigated in our future work.

On the other hand, because appliance identification data sets are not very large, it will be of significant importance to investigate in our future work feature extraction methods that are well suited to small data sets, e.g., rough set based techniques \cite{xia2020gbnrs,xia2020lra}. The latter also help with attribute reduction and feature selection, and hence could further reduce the computational cost of the appliance identification task to support real-time applications. Finally, it will also be part of our future work to develop a powerful recommender system that uses the output of the LPH-IKNN based NILM system to detect abnormal power consumption in buildings before triggering tailored recommendations that help end-users reduce wasted energy.

\section*{Acknowledgements}\label{acknowledgements}
This paper was made possible by National Priorities Research Program (NPRP) grant No. 10-0130-170288 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

\section{Introduction}

Community Question Answering (CQA) on web forums such as Stack Overflow\footnote{\url{http://stackoverflow.com/}} and Qatar Living\footnote{\url{http://www.qatarliving.com/forum}} is gaining popularity, thanks to the flexibility of forums in providing information to a user \cite{Moschitti:2016:SIGIR:workshop}. Forums are only indirectly moderated via the community; they are rather open and subject to few restrictions, if any, on who can post and answer a question, or on what questions can be asked. On the positive side, a user can freely ask any question and can expect a variety of answers.
On the negative side, it takes effort to go through the provided answers of varying quality and to make sense of them. It is not unusual for a popular question to have hundreds of answers, and it is very time-consuming for a user to inspect them all.

\noindent Hence, users can benefit from automated tools that help them navigate these forums, including support for finding existing questions similar to a new question, and for identifying good answers, e.g.,~by retrieving similar questions that already provide an answer to the new question.

Given the important role that natural language processing (NLP) plays for CQA, we have organized a challenge series to promote related research for the past three years. We have provided datasets and annotated data, and we have developed robust evaluation procedures, in order to establish a common ground for comparing and evaluating different approaches to CQA.

In greater detail, in SemEval-2015 Task 3 ``Answer Selection in Community Question Answering'' \cite{nakov-EtAl:2015:SemEval},\footnote{\url{http://alt.qcri.org/semeval2015/task3}} we mainly targeted conventional Question Answering (QA) tasks, i.e.,~answer selection.
In contrast, in SemEval-2016 Task 3 \cite{nakov-EtAl:2016:SemEval}, we targeted a fuller spectrum of CQA-specific tasks, moving closer to real application needs,\footnote{A system based on SemEval-2016 Task 3 was integrated in Qatar Living's betasearch \cite{hoque-EtAl:2016:COLINGDEMO}:\\ \indent \url{http://www.qatarliving.com/betasearch}} particularly in Subtask C, which was defined as follows: ``given (\emph{i})~a new question and (\emph{ii})~a large collection of question-comment threads created by a user community, rank the comments that are most useful for answering the new question''.
A test question is new with respect to the forum, but can be related to one or more questions that have been previously asked in the forum.
The best answers can come from different question--comment threads.
The threads are independent of each other, the lists of comments are chronologically sorted, and there is meta information, e.g., the date of posting, the user who asked/answered the question, the category the question was asked in, etc.

\noindent The comments in a thread are intended to answer the question initiating that thread, but since this is a resource created by a community of casual users, there is a lot of noise and irrelevant material, in addition to the complications of informal language use, typos, and grammatical mistakes. Questions in the collection can also be related in different ways, although in general there is no explicit representation of this structure.

In addition to Subtask C, we designed Subtasks A and B to give participants the tools to create a CQA system that solves Subtask C.
Specifically, Subtask A (\emph{Question-Comment Similarity}) is defined as follows: ``given a question from a question--comment thread, rank the comments according to their relevance (similarity) with respect to the question.'' Subtask B (\emph{Question-Question Similarity}) is defined as follows: ``given a new question, rerank all similar questions retrieved by a search engine, assuming that the answers to the similar questions should also answer the new question.''

The relationship between subtasks A, B, and C is illustrated in Figure~\ref{fig:triangle}. In the figure, $q$ stands for the new question, $q'$ is an existing related question, and $c$ is a comment within the thread of question $q'$.
The edge $\overline{qc}$ relates to the main CQA task (subtask C), i.e., deciding whether a comment for a potentially related question is a good answer to the original question. This relation captures the \emph{relevance} of $c$ for $q$.
The edge $\overline{qq'}$ represents the similarity between the original and the related questions (subtask B).
This relation captures the \\emph{relatedness} of $q$ and $q'$. \nFinally, the edge $\\overline{q'c}$ represents the decision of whether $c$ is a good answer for the question from its thread, $q'$ (subtask A). This relation captures the \\emph{appropriateness} of $c$ for $q'$. \nIn this particular example, $q$ and $q'$ are indeed related, and $c$ is a good answer for both $q'$ and $q$.\n\n\n\\begin{figure}[t]\n\\centering\n\\hspace*{-4mm}\n\\includegraphics[width=.50\\textwidth]{cQA-triangle_v4_cropped.pdf}\n\\caption{\\label{fig:triangle}The similarity triangle for CQA, showing the three pairwise interactions between the original question $q$, the related question $q'$, and a comment $c$ in the related question's thread.}\n\\end{figure}\n\nThe participants were free to approach Subtask C with or without solving Subtasks A and B, and participation in the main subtask and\/or the two subtasks was optional. \n\nWe had three objectives for the first two editions of our task:\n(\\emph{i})~to focus on semantic-based solutions beyond simple ``bag-of-words'' representations and ``word matching'' techniques; (\\emph{ii})~to study new NLP challenges arising in the CQA scenario, e.g., relations between the comments in a thread, relations between different threads, and question-to-question similarity; and (\\emph{iii}) to facilitate the participation of non-IR\/QA experts. \n\n\\noindent The third objective was achieved by \nproviding the set of potential answers \nand asking the participants to (re)rank the answers, and also by defining two optional subtasks (A and B), in addition to the main subtask (i.e., C).\n\nLast year, we were successful in attracting a large number of participants to all subtasks. 
However, as the task design was new (we added subtasks B and C in the 2016 edition of the task), we felt that participants would benefit from a rerun, with new test sets for subtasks A--C.\n\nWe preserved the multilinguality aspect (as in 2015 and 2016), providing data for two languages: English and Arabic. In particular, we had an Arabic subtask D, which used data collected from three medical forums. This year, we used a slightly different procedure for the preparation of test set compared to the way the training, development, and test data for subtask D was collected last year. \n\nAdditionally, we included a new subtask, subtask E, which enables experimentation on \\emph{Question--Question Similarity} on a large-scale CQA dataset, i.e., StackExchange, based on the CQADupStack data set \\cite{hoogeveen2015cqadupstack}.\nSubtask E is a \\emph{duplicate question detection} task, and like Subtask B, it is focused on question--question similarity. Participants were asked to rerank 50 candidate questions according to their relevance with respect to each query question. The subtask included several elements that differentiate it from Subtask B (see \\secref{taskE}).\n\n\\noindent We provided manually annotated training data for both languages and for all subtasks. All examples were manually labeled by a community of annotators using a crowdsourcing platform. The datasets and the annotation procedure for the old data for subtasks A, B and C are described in \\cite{nakov-EtAl:2016:SemEval}. 
In order to produce the new data for Subtask D, we used a slightly different procedure compared to 2016, which we describe in \\secref{arabic-data}.\n\nThe remainder of this paper is organized as follows:\n\\secref{sec:related} introduces related work.\n\\secref{taskdef} gives a more detailed definition of the subtasks; it also describes the datasets and the process of their creation, and it explains the evaluation measures we used.\n\\secref{sec:results} presents the results for all subtasks and for all participating systems.\n\\secref{sec:discussion} summarizes the main approaches used by these systems and provides further discussion.\nFinally, \\secref{sec:conclusion} presents the main conclusions.\n\n\n\n\n\\section{Related Work}\n\\label{sec:related}\n\n\n\nThe first step to automatically answer questions on CQA sites is to retrieve a set of questions similar to the question that the user has asked. This set of similar questions is then used to extract possible answers for the original input question. Despite its importance, question similarity for CQA is a hard task due to problems such as the ``lexical gap'' between the two questions. \n\n\\emph{Question-question similarity} has been featured as a subtask (subtask B) of SemEval-2016 Task 3 on Community Question Answering \\cite{nakov-EtAl:2016:SemEval}; there was also a similar subtask as part of SemEval-2016 Task 1 on Semantic Textual Similarity \\cite{agirre-EtAl:2016:SemEval1}. Question-question similarity is an important problem with application to question recommendation, question duplicate detection, community question answering, and question answering in general.\nTypically, it has been addressed using a variety of textual similarity measures. 
Some work has paid attention to modeling the question topic, which can be done explicitly, e.g., using question topic and focus \\cite{duan2008searching} or using a graph of topic terms \\cite{Cao:2008:RQU:1367497.1367509}, or implicitly, e.g., using a language model with a smoothing method based on the category structure of Yahoo!~Answers \\cite{cao2009use} or using LDA topic language model that matches the questions not only at the term level but also at the topic level \\cite{zhang2014question}.\n\n\\noindent Another important aspect is syntactic structure, e.g., \\citet{wang2009syntactic} proposed a retrieval model for finding similar questions based on the similarity of syntactic trees, and \\citet{DaSanMartino:CIKM:2016} used syntactic kernels. Yet another emerging approach is to use neural networks, e.g., \\citet{dossantos-EtAl:2015:ACL-IJCNLP} used convolutional neural networks (CNNs), \\citet{Romeo:2016coling} used long short-term memory (LSTMs) networks with neural attention to select the important part of text when comparing two questions, and \\citet{LeiJBJTMM16} used a combined recurrent--convolutional model to map questions to continuous semantic representations.\nFinally, translation \\cite{jeon2005finding,zhou2011phrase} and cross-language models \\cite{SIGIR2017:cross-lang} have also been popular for question-question similarity.\n\n\n\n\n\\emph{Question-answer similarity} has been a subtask (subtask A) of our task in its two previous editions \\cite{nakov-EtAl:2015:SemEval,nakov-EtAl:2016:SemEval}.\nThis is a well-researched problem in the context of general question answering.\nOne research direction has been to try to match the syntactic structure of the question to that of the candidate answer. For example, \\newcite{wang:2007} proposed a probabilistic quasi-synchronous grammar to learn syntactic transformations from the question to the candidate answers. 
\\newcite{heilman:naacl:2010} used an algorithm based on Tree Edit Distance (TED) to learn tree transformations in pairs. \\newcite{wang_manning:acl:2010} developed a probabilistic model to learn tree-edit operations on dependency parse trees. \\newcite{yao:naacl:2013} applied linear chain conditional random fields (CRFs) with features derived from TED to learn associations between questions and candidate answers.\nMoreover, syntactic structure was central for some of the top systems that participated in SemEval-2016 Task 3 \\cite{SemEval2016:task3:KeLP,SemEval2016:task3:ConvKN}.\n\nAnother important research direction has been on using neural network models for question-answer similarity \\cite{feng2015applying,severyn2015sigir,wang-nyberg:2015:ACL-IJCNLP, tan2015lstm,SemEval2016:task3:ConvKN,SemEval2016:task3:KeLP,SemEval2016:task3:SLS}. \nFor instance, \\newcite{tan2015lstm} used neural attention over a bidirectional long short-term memory (LSTM) neural network in order to generate better answer representations given the questions.\nAnother example is the work of \\newcite{DBLP:conf\/cikm\/TymoshenkoBM16}, who combined neural networks with syntactic kernels.\n\n\\noindent Yet another research direction has been on using machine translation models as features for question-answer similarity\n\\cite{Berger:2000:BLC:345508.345576,Echihabi:2003:NAQ:1075096.1075099,jeon2005finding,Soricut:2006:AQA:1127331.1127342,riezler-EtAl:2007:ACLMain,li2011improving,Surdeanu:2011:LRA:2000517.2000520,tran-EtAl:2015:SemEval,SemEval2016:task3:UniMelb,SemEval2016:task3:ICL00},\ne.g., a variation of IBM model 1 \\cite{Brown:1993:MSM}, to compute the probability that the question is a ``translation'' of the candidate answer. 
Similarly, \\cite{guzman-marquez-nakov:2016:P16-2,guzman-nakov-marquez:2016:SemEval} ported an entire machine translation evaluation framework \\cite{guzman-EtAl:2015:ACL-IJCNLP} to the CQA problem.\n\n\n\nUsing information about the answer thread is another important direction, which has been explored mainly to address Subtask A.\nIn the 2015 edition of the task, the top participating systems used thread-level features, in addition to local features that only look at the question--answer pair.\nFor example, the second-best team, HITSZ-ICRC, used as a feature the position of the comment in the thread, such as whether the answer is first or last\n\\cite{hou-EtAl:2015:SemEval1}.\nSimilarly, the third-best team, QCRI, used features to model a comment in the context of the entire comment thread, focusing on user interaction~\\cite{nicosia-EtAl:2015:SemEval}.\nFinally, the fifth-best team, ICRC-HIT, treated the answer selection task as a sequence labeling problem and proposed recurrent convolutional neural networks to recognize good comments \\cite{zhou-EtAl:2015:SemEval}.\n\nIn follow-up work, \\newcite{zhou-EtAl:2015:ACL-IJCNLP} included long-short term memory (LSTM) units in their convolutional neural network to model the classification sequence for the thread, and \\newcite{barroncedeno-EtAl:2015:ACL-IJCNLP} exploited the dependencies between the thread comments to tackle the same task. This was done by designing features that look globally at the thread and by applying structured prediction models, such as CRFs.\n\nThis research direction was further extended by \\newcite{joty:2015:EMNLP}, who used the output structure at the thread level in order to make more consistent global decisions about the goodness of the answers in the thread. 
They modeled the relations between pairs of comments at any distance in the thread, and combined the predictions of local classifiers using graph-cut and Integer Linear Programming.\nIn follow up work, \n\\newcite{Joty:2016:NAACL} proposed joint learning models that integrate inference within the learning process using global normalization and an Ising-like edge potential.\n\n\n\n\\emph{Question--External comment similarity} is our main task (subtask C), and it is inter-related to subtasks A and B, as described in the triangle of Figure~\\ref{fig:triangle}. This task has been much less studied in the literature, mainly because its definition is specific to our SemEval Task 3, and it first appeared in the 2016 edition~\\cite{nakov-EtAl:2016:SemEval}.\nMost of the systems that took part in the competition, including the winning system of the SUper team \\cite{SemEval2016:task3:SUper}, approached the task indirectly by solving subtask A at the thread level and then using these predictions together with the reciprocal rank of the related questions in order to produce a final ranking for subtask C.\nOne exception is the \\emph{KeLP} system \\cite{SemEval2016:task3:KeLP}, which was ranked second in the competition. This system combined information from different subtasks and from all input components. It used a modular kernel function, including stacking from independent subtask A and B classifiers, and applying SVMs to train a Good vs. Bad classifier \\cite{SemEval2016:task3:KeLP}. \nIn a related study, \\newcite{nakov-marquez-guzman:2016:EMNLP2016} discussed the input information to solve Subtask C, and concluded that one has to model mainly question-to-question similarity (Subtask B) and answer goodness (subtask A), while modeling the direct relation between the new question and the candidate answer (from a related question) was found to be far less important. 
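As a concrete illustration of this indirect strategy, the sketch below combines a subtask-A goodness score with the reciprocal rank of the related question to produce a subtask-C ranking. The multiplicative combination and the \texttt{good\_prob} interface are illustrative assumptions, not the exact formula of any participating system.

```python
def rank_for_subtask_c(threads, good_prob):
    """Rank all comments from the related-question threads for subtask C by
    combining a subtask-A goodness score with the reciprocal rank of the
    related question (the general scheme used by several SemEval-2016
    systems; the exact combination here is an assumption).

    threads: list of (related_question_id, [comment, ...]),
             in search-engine order
    good_prob: callable giving P(comment is Good in its own thread),
               i.e. a subtask-A classifier assumed trained elsewhere
    """
    scored = []
    for rank, (q_id, comments) in enumerate(threads, start=1):
        for c in comments:
            # Comments from higher-ranked related questions get a boost.
            scored.append((good_prob(q_id, c) * (1.0 / rank), q_id, c))
    scored.sort(key=lambda x: -x[0])  # highest combined score first
    return scored
```

In this scheme, a mediocre comment in the top-ranked thread can still outrank a good comment buried in a low-ranked thread, which is exactly the trade-off the combined score is meant to balance.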
\n\nFinally, in another recent approach, \\newcite{bonadiman-uva-moschitti:2017:EACLshort} studied how to combine the different CQA subtasks. They presented a multitask neural architecture where the three tasks are trained together with the same representation. The authors showed that the multitask system yields good improvement for Subtask C, which is more complex and clearly dependent on the other two tasks.\n\n\\emph{Some notable features across all subtasks.} Finally, we should mention some interesting features used by the participating systems across all three subtasks. This includes fine-tuned word embeddings\\footnote{\\url{https:\/\/github.com\/tbmihailov\/semeval2016-task3-cqa}} \\cite{SemEval2016:task3:SemanticZ}; features modeling text complexity, veracity, and user trollness\\footnote{Using a heuristic that if several users call somebody a troll, then s\/he should be one \\cite{mihaylov-georgiev-nakov:2015:CoNLL,mihaylov-EtAl:2015:RANLP2015,mihaylov-nakov:2016:trolls,DarkWeb:2017}.}\n\\cite{SemEval2016:task3:SUper}; sentiment polarity features \\cite{nicosia-EtAl:2015:SemEval}; and PMI-based goodness polarity lexicons \\cite{SemEval2016:task3:PMI-cool,PMI:SIGIR:2017}. 
\n\n\\begin{table*}[t]\n\\small\n\\begin{center}\n\\begin{tabular}{l@{}rrrr}\n\\multirow{2}{*}{\\bf Category} & \\bf Train+Dev+Test & \\bf Train(1,2)+Dev+Test & \\multirow{2}{*}{\\bf Test}\\\\\n & \\bf from SemEval-2015 & \\bf from SemEval-2016 & \\\\\n\\hline\n\\hline\n\\bf Original Questions & \\multicolumn{1}{c}{\\bf --} & \\bf (200+67)+50+70 & \\bf 88 & \\\\\n\\\\\n\\bf Related Questions & \\bf 2,480+291+319 & \\bf (1,999+670)+500+700 & \\bf 880 \\\\\n -- Perfect Match & \\multicolumn{1}{c}{--} & (181+54)+59+81 & 24 \\\\\n -- Relevant & \\multicolumn{1}{c}{--} & (606+242)+155+152 & 139 \\\\\n -- Irrelevant & \\multicolumn{1}{c}{--} & (1,212+374)+286+467 & 717 \\\\\n & \\bf & \\bf & \\bf \\\\\n\\bf Related Comments & \\multicolumn{1}{c}{\\bf --} & \\bf (19,990+6,700)+5,000+7,000 & \\bf 8,800\\\\\n\\bf (with respect to Original Question) & & & & \\\\\n -- Good & \\multicolumn{1}{c}{--} & (1,988+849)+345+654 & 246\\\\\n -- Bad & \\multicolumn{1}{c}{--} & (16,319+5,154)+4,061+5,943 & 8,291 \\\\\n -- Potentially Useful & \\multicolumn{1}{c}{--} & (1,683+697)+594+403 & 263 \\\\\n & \\bf & \\bf & \\bf \\\\\n\\bf Related Comments & \\bf 14,893+1,529+1,876 & \\bf (14,110+3,790)+2,440+3,270 & \\bf 2,930\\\\\n\\bf (with respect to Related Question) & & & & \\\\\n -- Good & 7,418+813+946 & (5,287+1,364)+818+1,329 & 1,523 \\\\\n -- Bad & 5,971+544+774 & (6,362+1,777)+1,209+1,485 & 1,407 \\\\\n -- Potentially Useful & 1,504+172+156 & (2,461+649)+413+456 & 0 \\\\\n\\hline\n\\end{tabular}\n\\caption{Statistics about the English CQA-QL dataset. 
\n\t\t Note that the \\emph{Potentially Useful} class was merged with \\emph{Bad} at test time for SemEval-2016 Task 3, and was eliminated altogether at SemEval-2017 task 3.}\n\\label{table:statistics:english}\n\\end{center}\n\\end{table*}\n\n\n\\section{Subtasks and Data Description}\n\\label{taskdef}\n\\label{sec:task}\n\nThe 2017 challenge was structured as a set of five subtasks, four of which (A, B, C and E) were offered for English, while the fifth (D) one was for Arabic. \nWe leveraged the data we developed in 2016 for the first four subtasks, creating only new test sets for them, whereas we built a completely new dataset for the new Subtask E.\n\n\\subsection{Old Subtasks}\nThe first four tasks and the datasets for them are described in \\cite{nakov-EtAl:2016:SemEval}. Here we review them briefly.\n\\paragraph{English subtask A} \\emph{Question-Comment Similarity}.\nGiven a question $Q$ and the first ten comments\\footnote{We limit the number of comments we consider to the first ten only in order to spare some annotation efforts.} in its question thread ($c_1,\\dots,c_{10}$), the goal is to rank these ten comments according to their relevance with respect to that question.\n\nNote that this is a ranking task, not a classification task; we use mean average precision (MAP) as an official evaluation measure. This setting was adopted as it is closer to the application scenario than pure comment classification. For a perfect ranking, a system has to place all ``Good'' comments above the ``PotentiallyUseful'' and the ``Bad'' comments; the latter two are not actually distinguished and are considered ``Bad'' at evaluation time. 
This year, we eliminated the ``PotentiallyUseful'' class for the test set at annotation time.


\paragraph{English subtask B} \emph{Question-Question Similarity}.
Given a new question $Q$ (aka \emph{original question}) and the set of the first ten related questions from the forum ($Q_1,\dots,Q_{10}$) retrieved by a search engine, the goal is to rank the related questions according to their similarity with respect to the original question.

In this case, we consider both the ``PerfectMatch'' and the ``Relevant'' questions as good (i.e., we do not distinguish between them and we consider them both ``Relevant''), and they should be ranked above the ``Irrelevant'' questions.
As in subtask A, we use MAP as the official evaluation measure. To produce the ranking of related questions, participants have access to the corresponding related question-thread.\footnote{Note that the search engine indexes entire Web pages, and thus it has compared the original question to the related questions together with their comment threads.}
More precisely, this subtask could thus have been named \emph{Question --- Question+Thread Similarity}.

\paragraph{English subtask C} \emph{Question-External Comment Similarity}.
Given a new question $Q$ (also known as the \emph{original question}), and the set of the first ten related questions ($Q_1,\dots,Q_{10}$) from the forum retrieved by a search engine for $Q$, each associated with the first ten comments appearing in its thread ($c_1^1,\dots,c_1^{10},\dots,c_{10}^1,\dots,c_{10}^{10}$), the goal is to rank these 10$\times$10 = 100 comments $\{c_i^j\}_{i,j=1}^{10}$ according to their relevance with respect to the original question $Q$.

\noindent This is the main English subtask.
As for subtask A, we want the ``Good'' comments to be ranked above the ``PotentiallyUseful'' and the ``Bad'' comments, which are both considered bad in terms of evaluation.
Although the systems are supposed to work on 100 comments, we take an application-oriented view in the evaluation, assuming that users would like to have the good comments concentrated in the first ten positions. We believe users care much less about what happens in lower positions in the ranking (e.g., after the 10th), as they typically do not ask for the next page of results in a search engine such as Google or Bing. This is reflected in our primary evaluation score, MAP, which we restrict to consider only the top ten results for subtask C.

\paragraph{Arabic subtask D} \emph{Rank the correct answers for a new question}.
Given a new question $Q$ (aka the original question) and the set of the first 30 related questions retrieved by a search engine, each associated with one correct answer ($(Q_1,c_1),\dots,(Q_{30},c_{30})$), the goal is to rank the 30 question-answer pairs according to their relevance with respect to the original question. We want the ``Direct'' and the ``Relevant'' answers to be ranked above the ``Irrelevant'' answers; the former two are considered ``Relevant'' in terms of evaluation. We evaluate the position of the ``Relevant'' answers in the ranking, so this is again a ranking task.
Unlike the English subtasks, here we use 30 answers, since the retrieval task is much more difficult, leading to low recall, and the number of correct answers is much lower. Again, the systems were evaluated using MAP, restricted to the top-10 results.


\subsubsection{Data Description for A--D}

The English data for subtasks A, B, and C comes from the Qatar Living forum, which is organized as a set of seemingly independent question--comment threads. In short, for subtask A, we annotated the comments in a question-thread as ``Good'', ``PotentiallyUseful'' or ``Bad'' with respect to the question that started the thread.
Additionally, given original questions, we retrieved related question--comment threads and annotated the related questions as ``PerfectMatch'', ``Relevant'', or ``Irrelevant'' with respect to the original question (Subtask B). We then annotated the comments in the threads of related questions as ``Good'', ``PotentiallyUseful'' or ``Bad'' with respect to the original question (Subtask C).

\noindent For Arabic, the data was extracted from medical forums and has a different format. Given an original question, we retrieved pairs of the form (related\_question, answer\_to\_the\_related\_question). These pairs were annotated as ``Direct'' answer, ``Relevant'' and ``Irrelevant'' with respect to the original question.


\paragraph{For subtasks A, B, and C} we annotated new English test data following the same setup as for SemEval-2016 Task 3 \cite{nakov-EtAl:2016:SemEval}, except that we eliminated the ``Potentially Useful'' class for subtask A. We first selected a set of questions to serve as original questions. In a real-world scenario those would be questions that had never been asked previously, but here we used existing questions from Qatar Living.

From each original question, we generated a query, using the question's subject (after some word removal if the subject was too long). Then, we executed the query against Google, limiting the search to the Qatar Living forum, and we collected up to 200 resulting question-comment threads as related questions. Afterwards, we filtered out threads with fewer than ten comments, as well as those for which the question was more than 2,000 characters long.
Finally, we kept the top-10 surviving threads, keeping just the first 10 comments in each thread.\n\nWe formatted the results in XML with UTF-8 encoding, adding metadata for the related questions and for their comments; however, we did not provide any meta information about the original question, in order to emulate a scenario where it is a new question, never asked before in the forum. In order to have a valid XML, we had to do some cleansing and normalization of the data. We added an XML format definition at the beginning of the XML file and we made sure it validated.\n\nWe organized the XML data as a sequence of original questions (OrgQuestion), where each question has a subject, a body, and a unique question identifier (ORGQ\\_ID). Each such original question is followed by ten threads, where each thread consists of a related question (from the search engine results) and its first ten comments.\n\nWe made available to the participants for training and development the data from 2016 (and for subtask A, also from 2015), and we created a new test set of 88 new questions associated with 880 question candidates and 8,800 comments; details are shown in Table~\\ref{table:statistics:english}. 
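For illustration, data structured as described above can be parsed with a few lines of Python. The fragment below is a hand-made toy example: \texttt{OrgQuestion} and \texttt{ORGQ\_ID} appear in the description above, while the remaining tag and attribute names (\texttt{OrgQSubject}, \texttt{Thread}, \texttt{RelQuestion}, \texttt{RELQ\_ID}, \texttt{RelComment}, \texttt{RELC\_ID}) are assumptions about the schema, not the exact official format.

```python
import xml.etree.ElementTree as ET

# Minimal hand-made fragment in the spirit of the format described above.
# Tag/attribute names beyond OrgQuestion and ORGQ_ID are illustrative.
SAMPLE = """<xml>
<OrgQuestion ORGQ_ID="Q1">
  <OrgQSubject>Visa renewal</OrgQSubject>
  <OrgQBody>How do I renew my visa?</OrgQBody>
  <Thread>
    <RelQuestion RELQ_ID="Q1_R1">Visa renewal process?</RelQuestion>
    <RelComment RELC_ID="Q1_R1_C1">Go to the immigration office.</RelComment>
    <RelComment RELC_ID="Q1_R1_C2">Not sure, sorry.</RelComment>
  </Thread>
</OrgQuestion>
</xml>"""

def load_org_questions(xml_text):
    """Parse original questions and their related threads into plain dicts."""
    root = ET.fromstring(xml_text)
    out = []
    for orgq in root.iter("OrgQuestion"):
        threads = []
        for th in orgq.iter("Thread"):
            threads.append({
                "relq_id": th.find("RelQuestion").get("RELQ_ID"),
                "comments": [(c.get("RELC_ID"), c.text)
                             for c in th.iter("RelComment")],
            })
        out.append({"orgq_id": orgq.get("ORGQ_ID"),
                    "subject": orgq.findtext("OrgQSubject"),
                    "threads": threads})
    return out
```

Keeping each original question as a self-contained element with its ten threads makes it easy to emulate the ``new question'' scenario: a system sees the original question's text only, with no forum metadata attached to it.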
\n\n\n\n\\begin{table}[t]\n\\small\n\\begin{center}\n\\begin{tabular}{lrrrr}\n\\multirow{2}{*}{\\bf Category} & \\multicolumn{3}{c}{\\bf SemEval-2016 data} & \\multirow{2}{*}{\\bf Test-2017}\\\\\n\\cline{2-4}\n & \\bf Train & \\bf Dev & \\bf Test & \\\\\n\\hline\n\\hline\n\\bf Questions & \\bf 1,031 & \\bf 250 & \\bf 250 & \\bf 1,400\\\\\n\\bf QA Pairs & \\bf 30,411 & \\bf 7,384 & \\bf 7,369 & \\bf 12,600\\\\\n -- Direct & 917 & 70 & 65 & 891\\\\\n -- Related & 17,412 & 1,446 & 1,353 & 4,054\\\\\n -- Irrelevant & 12,082 & 5,868 & 5,951 & 7,655\\\\\n\\hline\n\\end{tabular}\n\\caption{Statistics about the CQA-MD corpus.}\n\\label{table:statistics:arabic}\n\\end{center}\n\\end{table}\n\n\n\\label{arabic-data}\n\n\n\n\n\\paragraph{For subtask D} we had to annotate new test data.\nIn 2016, we used data from three Arabic medical websites, which we downloaded and indexed locally using Solr.\\footnote{\\url{https:\/\/lucene.apache.org\/solr\/}} Then, we performed 21 different query\/document formulations, and we merged the retrieved results, ranking them according to the reciprocal rank fusion algorithm~\\cite{cormack2009reciprocal}. Finally, we truncated the result list to the 30 top-ranked question--answer pairs.\n\nThis year, we used only one of these websites, namely \\url{Altibbi.com}.\\footnote{\\url{http:\/\/www.altibbi.com\/}\\<\u0637\u0628\u064a\u0629>-\\<\u0627\u0633\u0626\u0644\u0629>}\nFirst, we selected some questions from that website to be used as original questions, and then we used Google to retrieve potentially related questions using the \\url{site:*} filter.\n\n\nWe turned the question into a query as follows: We first queried Google using the first thirty words from the original question. 
If this did not return ten results, we reduced the query to the first ten non-stopwords\\footnote{We used the following Arabic stopword list: \\url{https:\/\/sites.google.com\/site\/kevinbouge\/stopwords-lists}} from the question, and if needed we further tried using the first five non-stopwords only. If we did not manage to obtain ten results, we discarded that original question. \n\nIf we managed to obtain ten results, we followed the resulting links and we parsed the target page \nto extract the question and the answer, which is given by a physician, as well as some metadata such as date, question classification, doctor's name and country, etc.\n\n\n\n\n\n\nIn many cases, Google returned our original question as one of the search results, in which case we had to exclude it, thus reducing the results to nine. In the remaining cases, we excluded the 10th result in order to have the same number of candidate question--answer pairs for each original question, namely nine.\nOverall, we collected 1,400 original questions, with exactly nine potentially related question--answer pairs for each of them, i.e., a total of 12,600 pairs.\n\n\n\\noindent We created an annotation job on CrowdFlower to obtain judgments about the relevance of the question--answer pairs with respect to the original question.\nWe controlled the quality of annotation using a hidden set of 50 test questions. We had three judgments per example, which we combined using the CrowdFlower mechanism. \nThe average agreement was 81\\%. Table~\\ref{table:statistics:arabic} shows statistics about the resulting dataset, together with statistics about the datasets from 2016, which could be used for training and development.\n\n\n\n\n\n\n\\subsubsection{Evaluation Measures for A--D}\n\\label{sec:scoring}\nThe official evaluation measure\nwe used to rank the participating systems is Mean Average Precision (``MAP''), calculated over the top-10 comments as ranked by a participating system. 
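As an illustration of this measure, MAP over the top-10 ranked comments can be sketched as follows. This is a minimal sketch, not the official scorer released for the task; in particular, the normalization by min(number of relevant comments, 10) is one common convention for cutoff-based average precision, and the official scorer may differ in detail.

```python
def average_precision_at_10(ranking, relevant):
    """Average precision over the top-10 items of one ranking.

    ranking: comment ids, best first; relevant: set of ids judged
    Good/Relevant. Normalizing by min(|relevant|, 10) is an assumed
    convention; the official scorer may normalize differently.
    """
    if not relevant:
        return 0.0  # queries without relevant comments contribute 0
    hits, precision_sum = 0, 0.0
    for rank, item in enumerate(ranking[:10], start=1):
        if item in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / min(len(relevant), 10)


def mean_average_precision(queries):
    """Mean of per-question average precisions.

    queries: list of (ranking, relevant_set) pairs, one per question.
    """
    return sum(average_precision_at_10(r, rel) for r, rel in queries) / len(queries)
```

Under this convention, a query question with no relevant comments contributes 0 to the mean.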
We further report the results for two unofficial ranking measures, which we also calculated over the top-10 results only: Mean Reciprocal Rank (``MRR'') and Average Recall (``AvgRec'').\nAdditionally, we report the results for four standard classification measures, which we calculate over the full list of results: Precision, Recall and F$_1$ (with respect to the Good\/Relevant class), and Accuracy.\n\nWe released a specialized scorer that calculates and returns all the above-mentioned scores.\n\n\\subsection{The New Subtask E}\n\\label{taskE}\n\nSubtask E is a duplicate question detection task, similar to Subtask B. \nParticipants were asked to rerank 50 candidate questions according to their relevance with respect to each query question. The subtask included several elements that distinguish it from Subtask B:\n\n\\begin{itemize}\n \\setlength\\itemsep{1em}\n\t\\item Several meta-data fields were added, including the tags that are associated with each question, the number of times a question has been viewed, and the score of each question, answer and comment (the number of upvotes it has received from the community, minus the number of downvotes), as well as user statistics, containing information such as user reputation and user badges.\\footnote{The complete list of available meta-data fields can be found on the Task website.}\n \\item At test time, two extra test sets containing data from two surprise subforums were provided, to test the participants' systems' cross-domain performance.\n \\item The participants were asked to truncate their result list in such a way that only ``PerfectMatch'' questions appeared in it. 
The evaluation metrics were adjusted to be able to handle empty result lists (see \\secref{sec:e-evaluation}).\n \\item The data was taken from StackExchange instead of the Qatar Living forums, and reflected the real-world distribution of duplicate questions in having many query questions with zero relevant results.\n\\end{itemize}\n\nThe cross-domain aspect was of particular interest, as it has not received much attention in earlier duplicate question detection research.\n\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{lccc}\n \\toprule\n \\bf Subforums & \\bf Train\t\t& \\bf Development & \\bf Test \t\\\\\n \\midrule\t\n Android\t\t& 10,360\t\t& 3,197\t\t& 3,531\t\t\\\\\n English \t\t& 20,701\t\t& 6,596\t\t& 6,383\t\t\\\\\n Gaming \t\t& 14,951\t\t& 4,964\t\t& 4,675\t\t\\\\\n Wordpress \t& 13,733\t\t& 5,007\t\t& 3,816\t\t\\\\\n \\midrule\n Surprise 1\t& ---\t\t& ---\t\t& 5,123\t\t\\\\\n Surprise 2\t& ---\t\t& ---\t\t& 4,039\t\t\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Statistics on the data for Subtask E. Shown is the number of query questions; for each of them, 50 candidate questions were provided.}\n \\label{tab:stackexchange}\n\\end{table}\n\n\n\\subsubsection{Data Description for E}\n\nThe data consisted of questions from the following four StackExchange subforums: \\emph{Android}, \\emph{English}, \\emph{Gaming}, and \\emph{Wordpress}, derived from a data set known as CQADupStack \\cite{hoogeveen2015cqadupstack}. Data size statistics can be found in \\tabref{tab:stackexchange}. These subforums were chosen due to their size, and to reflect a variety of domains.\n\n\nThe data was provided in the same format as for the other subtasks. Each original question had 50 candidate questions, and these related questions each had a number of comments. On top of that, they had a number of answers, and each answer potentially had individual comments. 
The difference between answers and comments is that answers should contain a well-formed answer to the question, while comments contain things such as requests for clarification, remarks, and small additions to someone else's answer. Since the content of StackExchange is provided by the community, the precise delineation between comments and the main body of a post can vary across forums.\n\n\\noindent The relevance labels in the development and in the training data were sourced directly from the users of the StackExchange sites, who can vote for questions to be closed as duplicates: these are the questions we labeled as \\emph{PerfectMatch}. \n\nThe questions labeled as \\emph{Related} are questions that are not duplicates, but that are somehow similar to the original question, also as judged by the StackExchange community. It is possible that some duplicate labels are missing, due to the voluntary nature of the duplicate labeling on StackExchange. The development and training data should therefore be considered a silver standard \\cite{Hoogeveen+:2016a}. \n\nFor the test data, we started an annotation project together with StackExchange.\\footnote{A post made by StackExchange about the project can be found here: {\\small\\url{http:\/\/meta.stackexchange.com\/questions\/286329\/project-reduplication-of-deduplication-has-begun}}} The goal was to obtain multiple annotations per question pair in the test set, from the same community that provided the labels in the development and in the training data. We expected the community to react enthusiastically, because the data would be used to build systems that can improve duplicate question detection on the site, ultimately saving the users manual effort. 
Unfortunately, only a handful of people were willing to annotate a sizeable set of question pairs, thus making their annotations unusable for the purpose of this shared task.\n\nAn example that includes a query question from the English subforum, a duplicate of that question, and a non-duplicate question (with respect to the query) is shown below:\n\\begin{itemize}\n\t\\item Query: \\textit{Why do bread companies add sugar to bread?}\n\t\\item Duplicate: \\textit{What is the purpose of sugar in baking plain bread?}\n\t\\item Non-duplicate: \\textit{Is it safe to eat potatoes that have sprouted?}\n\\end{itemize}\n\n\\subsubsection{Evaluation Measure for E}\n\\label{sec:e-evaluation}\n\nIn CQA archives, the majority of new questions do not have a duplicate in the archive. We maintained this characteristic in the training, in the development, and in the test data, to stay as close to a real world setting as possible. This means that for most query questions, the correct result is an empty list. \n\n\\noindent This has two consequences: (1) a system that always returns an empty list is a challenging baseline to beat, and (2) standard IR evaluation metrics like MAP, which is used in the other subtasks, cannot be used, because they break down when the result list is empty or there are no relevant documents for a given query.\n\nTo solve this problem we used a modified version of MAP, as proposed by \\newcite{liu2016}. To make sure standard IR evaluation metrics do not break down on empty result list queries, \\newcite{liu2016} add a nominal terminal document to the end of the ranking returned by a system, to indicate where the number of relevant documents ended. 
This terminal document has a corresponding gain value of:\n\\begin{eqnarray*}\n r_{t} = \\begin{cases}\n 1 & \\textup{if}\\: R = 0 \\\\\n \\sum_{i=1}^{d} r_{i}\/R & \\textup{if}\\: R > 0 \n \\end{cases}\n\\end{eqnarray*}\nThe result of this adjustment is that queries without relevant documents in the index receive a MAP score of 1.0 for an empty result ranking. This is desired, because in such cases, the empty ranking is the correct result.\n\n\n\\section{Participants and Results}\n\\label{sec:results}\n\nThe list of all participating teams can be found in \\tabref{table:teams}. The results for subtasks A, B, C, and D are shown in Tables \\ref{table:results:subtaskA}, \\ref{table:results:subtaskB}, \\ref{table:results:subtaskC}, and \\ref{table:results:subtaskD}, respectively. Unfortunately, there were no official participants in Subtask E, and thus we present baseline results in \\tabref{tab:baseline-e}. In all tables, the systems are ranked by the official MAP scores for their primary runs\\footnote{Participants could submit one primary run, to be used for the official ranking, and up to two contrastive runs, which are scored, but they have unofficial status.} (shown in the third column). The following columns show the scores based on the other six unofficial measures; the rankings with respect to these additional measures are marked with a subindex (for the primary runs).\n\nTwenty-two teams participated in the challenge, presenting a variety of approaches and features to address the different subtasks. They submitted a total of 85 runs (36 primary and 49 contrastive), which breaks down by subtask as follows: The English subtasks A, B, and C attracted 14, 13, and 6 systems and 31, 34, and 14 runs, respectively. The Arabic subtask D attracted 3 systems and 6 runs, and there were no participants for subtask E.\n\n\\noindent The best MAP scores had large variability depending on the subtask, going from 15.46 (best result for subtask C) to 88.43 (best result for subtask A). 
The best systems for subtasks A, B, and C were able to beat the baselines we provided by sizeable margins. In subtask D, only the best system was above the IR baseline. \n\n \\begin{table*}[tbh]\n \\small\n \\begin{center}\n \\begin{tabular}{@{}l@{ }@{ }l@{}}\n\\toprule\n \\bf Team ID & \\bf Team Affiliation\\\\\n \\midrule\n Beihang-MSRA & Beihang University, Beijing, China; Microsoft Research, Beijing, China\\\\\n & \\cite{SemEval-2017:task3:BEIHANG-MSRA}\\\\\n bunji & Hitachi Ltd., Japan\\\\\n & \\cite{SemEval-2017:task3:BUNJI}\\\\\n ECNU & East China Normal University, P.R. China;\n Shanghai Key Laboratory of Multidimensional \\\\\n & Information Processing, P.R. China \\cite{SemEval-2017:task3:ECNU}\\\\\n EICA & East China Normal University, Shanghai, P.R.China\\\\\n & \\cite{SemEval-2017:task3:EICA}\\\\\n FuRongWang & National University of Defense Technology, P.R. China\\\\\n & \\cite{SemEval-2017:task3:FURONGWANG}\\\\\n FA3L & University of Pisa, Italy \\\\\n & \\cite{SemEval-2017:task3:FA3L}\\\\\n GW\\_QA & The George Washington University, D.C. USA\\\\\n & \\cite{SemEval-2017:task3:GW-QA}\\\\\n IIT-UHH & Indian Institute of Technology Patna, India; University of Hamburg, Germany\\\\\n & \\cite{SemEval-2017:task3:IIT-UHH}\\\\\n KeLP & University of Roma, Tor Vergata, Italy; Qatar Computing Research Institute,\\\\\n & HBKU, Qatar \\cite{SemEval-2017:task3:KELP}\\\\\n MoRS & Universidade de Lisboa, Portugal\\\\\n & \\cite{SemEval-2017:task3:LASIGUE}\\\\\n LearningToQuestion & Georgia Institute of Technology, Atlanta, GA, USA\\\\\n & \\cite{SemEval-2017:task3:LEARNINGTOQUESTION}\\\\\n LS2N & LS2N\\\\\n & [no paper submitted] \\\\\n NLM\\_NIH & U.S. National Library of Medicine,\n Bethesda, MD, USA\\\\\n & \\cite{SemEval-2017:task3:NLM-NIH}\\\\\n QU-BIGIR & Qatar University, Qatar\\\\\n & \\cite{SemEval-2017:task3:QU-BIGIR}\\\\\n SCIR-QA & Harbin Institute of Technology, P.R. 
China\\\\\n & \\cite{SemEval-2017:task3:SCIR-QA}\\\\\n SimBow & Orange Labs, France\\\\\n & \\cite{SemEval-2017:task3:SIMBOW}\\\\\n SnowMan & Harbin Institute of Technology, P.R. China\\\\\n & [no paper submitted] \\\\\n SwissAlps & Zurich University of Applied Sciences, Switzerland\\\\\n & \\cite{SemEval-2017:task3:SWISSALPS}\\\\\n TakeLab-QA & University of Zagreb, Croatia\\\\\n & \\cite{SemEval-2017:task3:TAKELAB-QA}\\\\\n Talla & Talla, Boston, MA, USA \\\\\n & \\cite{SemEval-2017:task3:TALLA}\\\\\n TrentoTeam & University of Trento, Italy\\\\\n & \\cite{SemEval-2017:task3:TRENTOTEAM}\\\\\n UINSUSKA-TiTech & UIN Sultan Syarif Kasim Riau, Indonesia; Tokyo Institute of Technology, Japan \\\\\n & \\cite{SemEval-2017:task3:UINSUSKA-TITECH}\\\\\n UPC-USMBA & Universitat Polit\\`{e}cnica de Catalunya, Spain; Sidi Mohamed Ben Abdellah University, Morocco \\\\\n & \\cite{SemEval2017:task3:UPC-USMBA}\\\\\n \\bottomrule\n \\end{tabular}\n \\caption{The participating teams and their affiliations.}\n \\label{table:teams}\n \\end{center}\n \\end{table*}\n\n\\subsection{Subtask A, English (Question-Comment Similarity)}\n\n\\begin{table*}[tbh]\n\\begin{center}\n\\begin{tabular}{clrrrrrrr}\n\\toprule\n& \\bf Submission & \\bf MAP & \\bf \\scriptsize AvgRec & \\bf \\scriptsize MRR & \\bf \\scriptsize P & \\bf \\scriptsize R & \\bf \\scriptsize F1 & \\bf \\scriptsize Acc\\\\\n\\hline\n\\bf 1 & \\bf KeLP-primary & \\bf 88.43$_{1}$ & \\bf \\scriptsize 93.79$_{2}$ & \\bf \\scriptsize 92.82$_{1}$ & \\bf \\scriptsize 87.30$_{3}$ & \\bf \\scriptsize 58.24$_{9}$ & \\bf \\scriptsize 69.87$_{5}$ & \\bf \\scriptsize 73.89$_{3}$ \\\\\n\\bf 2 & \\bf Beihang-MSRA-primary & \\bf 88.24$_{2}$ & \\bf \\scriptsize 93.87$_{1}$ & \\bf \\scriptsize 92.34$_{2}$ & \\bf \\scriptsize 51.98$_{14}$ & \\bf \\scriptsize 100.00$_{1}$ & \\bf \\scriptsize 68.40$_{6}$ & \\bf \\scriptsize 51.98$_{13}$ \\\\\n& Beihang-MSRA-contrastive2 & 88.18 & \\scriptsize 93.91 & \\scriptsize 92.45 & \\scriptsize 51.98 & 
\\scriptsize 100.00 & \\scriptsize 68.40 & \\scriptsize 51.98 \\\\\n& Beihang-MSRA-contrastive1 & 88.17 & \\scriptsize 93.82 & \\scriptsize 92.17 & \\scriptsize 51.98 & \\scriptsize 100.00 & \\scriptsize 68.40 & \\scriptsize 51.98 \\\\\n\\bf 3 & \\bf IIT-UHH-primary & \\bf 86.88$_{3}$ & \\bf \\scriptsize 92.04$_{7}$ & \\bf \\scriptsize 91.20$_{5}$ & \\bf \\scriptsize 73.37$_{11}$ & \\bf \\scriptsize 74.52$_{3}$ & \\bf \\scriptsize 73.94$_{2}$ & \\bf \\scriptsize 72.70$_{4}$ \\\\\n& ECNU-contrastive1 & 86.78 & \\scriptsize 92.41 & \\scriptsize 92.65 & \\scriptsize 83.05 & \\scriptsize 66.91 & \\scriptsize 74.11 & \\scriptsize 75.70 \\\\\n\\bf 4 & \\bf ECNU-primary & \\bf 86.72$_{4}$ & \\bf \\scriptsize 92.62$_{4}$ & \\bf \\scriptsize 91.45$_{3}$ & \\bf \\scriptsize 84.09$_{6}$ & \\bf \\scriptsize 72.16$_{4}$ & \\bf \\scriptsize 77.67$_{1}$ & \\bf \\scriptsize 78.43$_{1}$ \\\\\n& EICA-contrastive2 & 86.60 & \\scriptsize 92.25 & \\scriptsize 90.67 & \\scriptsize 88.50 & \\scriptsize 31.32 & \\scriptsize 46.27 & \\scriptsize 62.18 \\\\\n\\bf 5 & \\bf bunji-primary & \\bf 86.58$_{5}$ & \\bf \\scriptsize 92.71$_{3}$ & \\bf \\scriptsize 91.37$_{4}$ & \\bf \\scriptsize 84.59$_{4}$ & \\bf \\scriptsize 63.43$_{5}$ & \\bf \\scriptsize 72.50$_{3}$ & \\bf \\scriptsize 74.98$_{2}$ \\\\\n\\bf 6 & \\bf EICA-primary & \\bf 86.53$_{6}$ & \\bf \\scriptsize 92.50$_{5}$ & \\bf \\scriptsize 89.57$_{8}$ & \\bf \\scriptsize 88.29$_{2}$ & \\bf \\scriptsize 30.20$_{12}$ & \\bf \\scriptsize 45.01$_{12}$ & \\bf \\scriptsize 61.64$_{11}$ \\\\\n& EICA-contrastive1 & 86.48 & \\scriptsize 92.18 & \\scriptsize 90.69 & \\scriptsize 88.43 & \\scriptsize 29.61 & \\scriptsize 44.37 & \\scriptsize 61.40 \\\\\n& IIT-UHH-contrastive1 & 86.35 & \\scriptsize 91.74 & \\scriptsize 91.40 & \\scriptsize 79.42 & \\scriptsize 51.94 & \\scriptsize 62.80 & \\scriptsize 68.02 \\\\\n\\bf 7 & \\bf SwissAlps-primary & \\bf 86.24$_{7}$ & \\bf \\scriptsize 92.28$_{6}$ & \\bf \\scriptsize 90.89$_{6}$ & \\bf \\scriptsize 
90.78$_{1}$ & \\bf \\scriptsize 28.43$_{13}$ & \\bf \\scriptsize 43.30$_{13}$ & \\bf \\scriptsize 61.30$_{12}$ \\\\\n& SwissAlps-contrastive1 & 85.53 & \\scriptsize 91.98 & \\scriptsize 90.52 & \\scriptsize 90.37 & \\scriptsize 24.03 & \\scriptsize 37.97 & \\scriptsize 59.18 \\\\\n& bunji-contrastive1 & 85.29 & \\scriptsize 91.77 & \\scriptsize 91.48 & \\scriptsize 83.14 & \\scriptsize 56.34 & \\scriptsize 67.16 & \\scriptsize 71.37 \\\\\n& IIT-UHH-contrastive2 & 85.24 & \\scriptsize 91.37 & \\scriptsize 90.38 & \\scriptsize 81.22 & \\scriptsize 57.65 & \\scriptsize 67.43 & \\scriptsize 71.06 \\\\\n\\bf 8 & \\bf $^{\\star}$FuRongWang-primary & \\bf 84.26$_{8}$ & \\bf \\scriptsize 90.79$_{8}$ & \\bf \\scriptsize 89.40$_{9}$ & \\bf \\scriptsize 84.58$_{5}$ & \\bf \\scriptsize 48.98$_{10}$ & \\bf \\scriptsize 62.04$_{10}$ & \\bf \\scriptsize 68.84$_{7}$ \\\\\n& bunji-contrastive2 & 84.01 & \\scriptsize 90.45 & \\scriptsize 89.17 & \\scriptsize 81.88 & \\scriptsize 59.03 & \\scriptsize 68.60 & \\scriptsize 71.91 \\\\\n\\bf 9 & \\bf FA3L-primary & \\bf 83.42$_{9}$ & \\bf \\scriptsize 89.90$_{9}$ & \\bf \\scriptsize 90.32$_{7}$ & \\bf \\scriptsize 73.82$_{10}$ & \\bf \\scriptsize 59.62$_{6}$ & \\bf \\scriptsize 65.96$_{9}$ & \\bf \\scriptsize 68.02$_{8}$ \\\\\n& ECNU-contrastive2 & 83.15 & \\scriptsize 90.01 & \\scriptsize 89.46 & \\scriptsize 75.06 & \\scriptsize 78.86 & \\scriptsize 76.91 & \\scriptsize 75.39 \\\\\n& LS2N-contrastive2 & 82.91 & \\scriptsize 89.70 & \\scriptsize 89.58 & \\scriptsize 72.19 & \\scriptsize 71.77 & \\scriptsize 71.98 & \\scriptsize 70.96 \\\\\n& FA3L-contrastive1 & 82.87 & \\scriptsize 89.64 & \\scriptsize 89.98 & \\scriptsize 77.28 & \\scriptsize 56.27 & \\scriptsize 65.12 & \\scriptsize 68.67 \\\\\n& SnowMan-contrastive1 & 82.01 & \\scriptsize 89.36 & \\scriptsize 88.56 & \\scriptsize 75.92 & \\scriptsize 73.47 & \\scriptsize 74.67 & \\scriptsize 74.10 \\\\\n\\bf 10 & \\bf SnowMan-primary & \\bf 81.84$_{10}$ & \\bf \\scriptsize 
88.67$_{10}$ & \\bf \\scriptsize 87.21$_{12}$ & \\bf \\scriptsize 79.54$_{8}$ & \\bf \\scriptsize 58.44$_{7}$ & \\bf \\scriptsize 67.37$_{7}$ & \\bf \\scriptsize 70.58$_{5}$ \\\\\n\\bf 11 & \\bf TakeLab-QA-primary & \\bf 81.14$_{11}$ & \\bf \\scriptsize 88.48$_{12}$ & \\bf \\scriptsize 87.51$_{11}$ & \\bf \\scriptsize 78.72$_{9}$ & \\bf \\scriptsize 58.31$_{8}$ & \\bf \\scriptsize 66.99$_{8}$ & \\bf \\scriptsize 70.14$_{6}$ \\\\\n\\bf 12 & \\bf LS2N-primary & \\bf 80.99$_{12}$ & \\bf \\scriptsize 88.55$_{11}$ & \\bf \\scriptsize 87.92$_{10}$ & \\bf \\scriptsize 80.07$_{7}$ & \\bf \\scriptsize 43.27$_{11}$ & \\bf \\scriptsize 56.18$_{11}$ & \\bf \\scriptsize 64.91$_{10}$ \\\\\n& TakeLab-QA-contrastive1 & 79.71 & \\scriptsize 87.31 & \\scriptsize 87.03 & \\scriptsize 73.88 & \\scriptsize 62.77 & \\scriptsize 67.87 & \\scriptsize 69.11 \\\\\n& TakeLab-QA-contrastive2 & 78.98 & \\scriptsize 86.33 & \\scriptsize 87.13 & \\scriptsize 80.06 & \\scriptsize 56.66 & \\scriptsize 66.36 & \\scriptsize 70.14 \\\\\n\\bf 13 & \\bf TrentoTeam-primary & \\bf 78.56$_{13}$ & \\bf \\scriptsize 86.66$_{13}$ & \\bf \\scriptsize 85.76$_{13}$ & \\bf \\scriptsize 65.59$_{12}$ & \\bf \\scriptsize 75.71$_{2}$ & \\bf \\scriptsize 70.28$_{4}$ & \\bf \\scriptsize 66.72$_{9}$ \\\\\n& LS2N-contrastive1 & 74.08 & \\scriptsize 81.88 & \\scriptsize 81.66 & \\scriptsize 70.66 & \\scriptsize 28.30 & \\scriptsize 40.41 & \\scriptsize 56.62 \\\\\n\\bf 14 & \\bf MoRS-primary & \\bf 63.32$_{14}$ & \\bf \\scriptsize 71.67$_{14}$ & \\bf \\scriptsize 71.99$_{14}$ & \\bf \\scriptsize 59.23$_{13}$ & \\bf \\scriptsize 5.06$_{14}$ & \\bf \\scriptsize 9.32$_{14}$ & \\bf \\scriptsize 48.84$_{14}$ \\\\\n\\midrule\n& Baseline 1 (chronological) & \\bf 72.61 & \\scriptsize \\bf 79.32 & \\scriptsize \\bf 82.37 & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- \\\\\n& Baseline 2 (random) & 62.30 & \\scriptsize 70.56 & \\scriptsize 68.74 & \\scriptsize 53.15 & \\scriptsize 75.97 & \\scriptsize 
62.54 & \\scriptsize \\bf 52.70 \\\\\n& Baseline 3 (all `true') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize 51.98 & \\scriptsize 100.00 & \\scriptsize \\bf 68.40 & \\scriptsize 51.98 \\\\\n& Baseline 4 (all `false') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize 48.02 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{Subtask A, English (Question-Comment Similarity)}: results for all submissions. The first column shows the rank of the primary runs with respect to the official MAP score. The second column contains the team's name and its submission type (primary vs. contrastive).\nThe following columns show the results for the primary, and then for other, unofficial evaluation measures. The subindices show the rank of the primary runs with respect to the evaluation measure in the respective column. All results are presented as percentages.\nThe system marked with a $^{\\star}$ was a late submission.}\n\\label{table:results:subtaskA}\n\\end{center}\n\\end{table*}\n\n\nTable~\\ref{table:results:subtaskA} shows the results for subtask A, English, which attracted 14 teams (two more than in the 2016 edition). In total 31 runs were submitted: 14 primary and 17 contrastive. \nThe last four rows of the table show the performance of four baselines.\nThe first one is the chronological ranking, where the comments are ordered by their time of posting; we can see that all submissions but one outperform this baseline on all three ranking measures.\nThe second baseline is a random baseline, which is 10 MAP points below the chronological ranking. \nBaseline 3 classifies all comments as Good, and it outperforms all but three of the primary systems in terms of F$_1$ and one system in terms of Accuracy. 
However, it should be noted that the systems were not optimized for such measures.\nFinally, baseline 4 classifies all comments as Bad; it is outperformed by all primary systems in terms of Accuracy. \n\nThe winner of Subtask A is \\emph{KeLP} with a MAP of 88.43, closely followed by \\emph{Beihang-MSRA}, scoring 88.24. Relatively far from the first two, we find five systems, \\emph{IIT-UHH}, \\emph{ECNU}, \\emph{bunji}, \\emph{EICA} and \\emph{SwissAlps}, which all obtained a MAP of around 86.5.\n\n\\subsection{Subtask B, English (Question-Question Similarity)}\nTable~\\ref{table:results:subtaskB} shows the results for subtask B, English, which attracted 13 teams (three more than in last year's edition) and 34 runs:~13 primary and 21 contrastive. This is known to be a hard task. In contrast to the 2016 results, in which only 6 out of 11 teams beat the strong IR baseline (i.e., ordering the related questions in the order provided by the search engine), this year 10 of the 13 systems outperformed this baseline in terms of MAP, AvgRec and MRR. Moreover, the improvements for the best systems over the IR baseline are larger (reaching $>7$ MAP points absolute). This is a remarkable improvement over last year's results.\n\n\n\\noindent The random baseline outperforms two systems in terms of Accuracy.\nThe ``all-good'' baseline is below almost all systems on F$_1$, but the ``all-false'' baseline yields the best Accuracy results. This is partly because the label distribution in the dataset is biased (81.5\\% of negative cases), but also because the systems were optimized for MAP rather than for classification accuracy (or precision\/recall). \n\nThe winner of the task is \\emph{SimBow} with a MAP of 47.22, followed by \\emph{LearningToQuestion} with 46.93, \\emph{KeLP} with 46.66, and \\emph{Talla} with 45.70. The other nine systems scored noticeably lower than them, ranging from about 41 to 45. 
\nNote that the contrastive1 run of \\emph{KeLP}, which corresponds to the \\emph{KeLP} system from last year \\cite{SemEval2016:task3:KeLP}, achieved an even higher MAP of 49.00. \n\n\n\\begin{table*}[tbh]\n\\begin{center}\n\\begin{tabular}{clrrrrrrr}\n\\toprule\n& \\bf Submission & \\bf MAP & \\bf \\scriptsize AvgRec & \\bf \\scriptsize MRR & \\bf \\scriptsize P & \\bf \\scriptsize R & \\bf \\scriptsize F1 & \\bf \\scriptsize Acc\\\\\n\\midrule\n& KeLP-contrastive1 & 49.00 & \\scriptsize 83.92 & \\scriptsize 52.41 & \\scriptsize 36.18 & \\scriptsize 88.34 & \\scriptsize 51.34 & \\scriptsize 68.98 \\\\\n& SimBow-contrastive2 & 47.87 & \\scriptsize 82.77 & \\scriptsize 50.97 & \\scriptsize 27.03 & \\scriptsize 93.87 & \\scriptsize 41.98 & \\scriptsize 51.93 \\\\\n\\bf 1 & \\bf SimBow-primary & \\bf 47.22$_{1}$ & \\bf \\scriptsize 82.60$_{1}$ & \\bf \\scriptsize 50.07$_{3}$ & \\bf \\scriptsize 27.30$_{10}$ & \\bf \\scriptsize 94.48$_{3}$ & \\bf \\scriptsize 42.37$_{9}$ & \\bf \\scriptsize 52.39$_{11}$ \\\\\n& LearningToQuestion-contrastive2 & 47.20 & \\scriptsize 81.73 & \\scriptsize 53.22 & \\scriptsize 18.52 & \\scriptsize 100.00 & \\scriptsize 31.26 & \\scriptsize 18.52 \\\\\n& LearningToQuestion-contrastive1 & 47.03 & \\scriptsize 81.45 & \\scriptsize 52.47 & \\scriptsize 18.52 & \\scriptsize 100.00 & \\scriptsize 31.26 & \\scriptsize 18.52 \\\\\n\\bf 2 & \\bf LearningToQuestion-primary & \\bf 46.93$_{2}$ & \\bf \\scriptsize 81.29$_{4}$ & \\bf \\scriptsize 53.01$_{1}$ & \\bf \\scriptsize 18.52$_{12}$ & \\bf \\scriptsize 100.00$_{1}$ & \\bf \\scriptsize 31.26$_{12}$ & \\bf \\scriptsize 18.52$_{12}$ \\\\\n& SimBow-contrastive1 & 46.84 & \\scriptsize 82.73 & \\scriptsize 50.43 & \\scriptsize 27.80 & \\scriptsize 94.48 & \\scriptsize 42.96 & \\scriptsize 53.52 \\\\\n\\bf 3 & \\bf KeLP-primary & \\bf 46.66$_{3}$ & \\bf \\scriptsize 81.36$_{3}$ & \\bf \\scriptsize 50.85$_{2}$ & \\bf \\scriptsize 36.01$_{3}$ & \\bf \\scriptsize 85.28$_{5}$ & \\bf \\scriptsize 
50.64$_{1}$ & \\bf \\scriptsize 69.20$_{5}$ \\\\\n& Talla-contrastive1 & 46.54 & \\scriptsize 82.15 & \\scriptsize 49.61 & \\scriptsize 30.39 & \\scriptsize 76.07 & \\scriptsize 43.43 & \\scriptsize 63.30 \\\\\n& Talla-contrastive2 & 46.31 & \\scriptsize 81.81 & \\scriptsize 49.14 & \\scriptsize 29.88 & \\scriptsize 74.23 & \\scriptsize 42.61 & \\scriptsize 62.95 \\\\\n\\bf 4 & \\bf Talla-primary & \\bf 45.70$_{4}$ & \\bf \\scriptsize 81.48$_{2}$ & \\bf \\scriptsize 49.55$_{5}$ & \\bf \\scriptsize 29.59$_{9}$ & \\bf \\scriptsize 76.07$_{8}$ & \\bf \\scriptsize 42.61$_{8}$ & \\bf \\scriptsize 62.05$_{8}$ \\\\\n& Beihang-MSRA-contrastive2 & 44.79 & \\scriptsize 79.13 & \\scriptsize 49.89 & \\scriptsize 18.52 & \\scriptsize 100.00 & \\scriptsize 31.26 & \\scriptsize 18.52 \\\\\n\\bf 5 & \\bf Beihang-MSRA-primary & \\bf 44.78$_{5}$ & \\bf \\scriptsize 79.13$_{7}$ & \\bf \\scriptsize 49.88$_{4}$ & \\bf \\scriptsize 18.52$_{13}$ & \\bf \\scriptsize 100.00$_{2}$ & \\bf \\scriptsize 31.26$_{13}$ & \\bf \\scriptsize 18.52$_{13}$ \\\\\n& NLM\\_NIH-contrastive1 & 44.66 & \\scriptsize 79.66 & \\scriptsize 48.08 & \\scriptsize 33.68 & \\scriptsize 79.14 & \\scriptsize 47.25 & \\scriptsize 67.27 \\\\\n\\bf 6 & \\bf NLM\\_NIH-primary & \\bf 44.62$_{6}$ & \\bf \\scriptsize 79.59$_{5}$ & \\bf \\scriptsize 47.74$_{6}$ & \\bf \\scriptsize 33.68$_{5}$ & \\bf \\scriptsize 79.14$_{6}$ & \\bf \\scriptsize 47.25$_{3}$ & \\bf \\scriptsize 67.27$_{6}$ \\\\\n& UINSUSKA-TiTech-contrastive1 & 44.29 & \\scriptsize 78.59 & \\scriptsize 48.97 & \\scriptsize 34.47 & \\scriptsize 68.10 & \\scriptsize 45.77 & \\scriptsize 70.11 \\\\\n& NLM\\_NIH-contrastive2 & 44.29 & \\scriptsize 79.05 & \\scriptsize 47.45 & \\scriptsize 33.68 & \\scriptsize 79.14 & \\scriptsize 47.25 & \\scriptsize 67.27 \\\\\n& Beihang-MSRA-contrastive1 & 43.89 & \\scriptsize 79.48 & \\scriptsize 48.18 & \\scriptsize 18.52 & \\scriptsize 100.00 & \\scriptsize 31.26 & \\scriptsize 18.52 \\\\\n\\bf 7 & \\bf UINSUSKA-TiTech-primary 
& \\bf 43.44$_{7}$ & \\bf \\scriptsize 77.50$_{11}$ & \\bf \\scriptsize 47.03$_{9}$ & \\bf \\scriptsize 35.71$_{4}$ & \\bf \\scriptsize 67.48$_{11}$ & \\bf \\scriptsize 46.71$_{4}$ & \\bf \\scriptsize 71.48$_{4}$ \\\\\n\\bf 8 & \\bf IIT-UHH-primary & \\bf 43.12$_{8}$ & \\bf \\scriptsize 79.23$_{6}$ & \\bf \\scriptsize 47.25$_{7}$ & \\bf \\scriptsize 26.85$_{11}$ & \\bf \\scriptsize 71.17$_{10}$ & \\bf \\scriptsize 38.99$_{10}$ & \\bf \\scriptsize 58.75$_{10}$ \\\\\n& UINSUSKA-TiTech-contrastive2 & 43.06 & \\scriptsize 76.45 & \\scriptsize 46.22 & \\scriptsize 35.71 & \\scriptsize 67.48 & \\scriptsize 46.71 & \\scriptsize 71.48 \\\\\n\\bf 9 & \\bf SCIR-QA-primary & \\bf 42.72$_{9}$ & \\bf \\scriptsize 78.24$_{9}$ & \\bf \\scriptsize 46.65$_{10}$ & \\bf \\scriptsize 31.26$_{8}$ & \\bf \\scriptsize 89.57$_{4}$ & \\bf \\scriptsize 46.35$_{5}$ & \\bf \\scriptsize 61.59$_{9}$ \\\\\n& SCIR-QA-contrastive1 & 42.72 & \\scriptsize 78.24 & \\scriptsize 46.65 & \\scriptsize 32.69 & \\scriptsize 83.44 & \\scriptsize 46.98 & \\scriptsize 65.11 \\\\\n& ECNU-contrastive2 & 42.48 & \\scriptsize 79.44 & \\scriptsize 45.09 & \\scriptsize 36.47 & \\scriptsize 78.53 & \\scriptsize 49.81 & \\scriptsize 70.68 \\\\\n& IIT-UHH-contrastive2 & 42.38 & \\scriptsize 78.59 & \\scriptsize 46.82 & \\scriptsize 32.99 & \\scriptsize 59.51 & \\scriptsize 42.45 & \\scriptsize 70.11 \\\\\n& ECNU-contrastive1 & 42.37 & \\scriptsize 78.41 & \\scriptsize 45.04 & \\scriptsize 34.34 & \\scriptsize 83.44 & \\scriptsize 48.66 & \\scriptsize 67.39 \\\\\n& IIT-UHH-contrastive1 & 42.29 & \\scriptsize 78.41 & \\scriptsize 46.40 & \\scriptsize 32.66 & \\scriptsize 59.51 & \\scriptsize 42.17 & \\scriptsize 69.77 \\\\\n\\bf 10 & \\bf FA3L-primary & \\bf 42.24$_{10}$ & \\bf \\scriptsize 77.71$_{10}$ & \\bf \\scriptsize 47.05$_{8}$ & \\bf \\scriptsize 33.17$_{6}$ & \\bf \\scriptsize 40.49$_{13}$ & \\bf \\scriptsize 36.46$_{11}$ & \\bf \\scriptsize 73.86$_{2}$ \\\\\n& LS2N-contrastive1 & 42.06 & \\scriptsize 77.36 & 
\\scriptsize 47.13 & \\scriptsize 32.01 & \\scriptsize 59.51 & \\scriptsize 41.63 & \\scriptsize 69.09 \\\\\n\\bf 11 & \\bf ECNU-primary & \\bf 41.37$_{11}$ & \\bf \\scriptsize 78.71$_{8}$ & \\bf \\scriptsize 44.52$_{13}$ & \\bf \\scriptsize 37.43$_{1}$ & \\bf \\scriptsize 76.69$_{7}$ & \\bf \\scriptsize 50.30$_{2}$ & \\bf \\scriptsize 71.93$_{3}$ \\\\\n\\bf 12 & \\bf EICA-primary & \\bf 41.11$_{12}$ & \\bf \\scriptsize 77.45$_{12}$ & \\bf \\scriptsize 45.57$_{12}$ & \\bf \\scriptsize 32.60$_{7}$ & \\bf \\scriptsize 72.39$_{9}$ & \\bf \\scriptsize 44.95$_{6}$ & \\bf \\scriptsize 67.16$_{7}$ \\\\\n& EICA-contrastive1 & 41.07 & \\scriptsize 77.70 & \\scriptsize 46.38 & \\scriptsize 32.30 & \\scriptsize 70.55 & \\scriptsize 44.32 & \\scriptsize 67.16 \\\\\n\\bf 13 & \\bf LS2N-primary & \\bf 40.56$_{13}$ & \\bf \\scriptsize 76.67$_{13}$ & \\bf \\scriptsize 46.33$_{11}$ & \\bf \\scriptsize 36.55$_{2}$ & \\bf \\scriptsize 53.37$_{12}$ & \\bf \\scriptsize 43.39$_{7}$ & \\bf \\scriptsize 74.20$_{1}$ \\\\\n& EICA-contrastive2 & 40.04 & \\scriptsize 76.98 & \\scriptsize 44.00 & \\scriptsize 31.69 & \\scriptsize 71.17 & \\scriptsize 43.86 & \\scriptsize 66.25 \\\\\n\\midrule\n& Baseline 1 (IR) & \\bf 41.85 & \\scriptsize \\bf 77.59 & \\scriptsize \\bf 46.42 & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- \\\\\n& Baseline 2 (random) & 29.81 & \\scriptsize 62.65 & \\scriptsize 33.02 & \\scriptsize 18.72 & \\scriptsize 75.46 & \\scriptsize 30.00 & \\scriptsize 34.77 \\\\\n& Baseline 3 (all `true') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize 18.52 & \\scriptsize 100.00 & \\scriptsize \\bf 31.26 & \\scriptsize 18.52 \\\\\n& Baseline 4 (all `false') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize \\bf 81.48 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{Subtask B, English (Question-Question Similarity):} results for all submissions. 
The first column shows the rank of the primary runs with respect to the official MAP score. The second column contains the team's name and its submission type (primary vs. contrastive).\nThe following columns show the results for the primary, and then for other, unofficial evaluation measures. The subindices show the rank of the primary runs with respect to the evaluation measure in the respective column. All results are presented as percentages.}\n\\label{table:results:subtaskB}\n\\end{center}\n\\end{table*}\n\n\\subsection{Subtask C, English (Question-External Comment Similarity)}\n\nThe results for subtask C, English are shown in Table~\\ref{table:results:subtaskC}. This subtask attracted 6 teams (a sizable decrease compared to last year's 10 teams), and 14 runs: 6 primary and 8 contrastive. \nThe 2017 test set had a much more skewed label distribution, with only 2.8\\% positive instances, compared to $\\sim$10\\% for the 2016 test set. This makes the overall MAP scores look much lower, as the number of examples without a single positive comment increased significantly, and such examples contribute 0 to the average, due to the definition of the measure. Consequently, the results cannot be compared directly to last year's. \n\nAll primary systems managed to outperform all baselines with respect to the ranking measures. Moreover, all but one system outperformed the ``all true'' baseline on F$_1$, and all of them were below the accuracy of the ``all false'' baseline, due to the extreme class imbalance. \n\nThe best-performing team for subtask C is \\emph{IIT-UHH}, with a MAP of 15.46, followed by \\emph{bunji} with 14.71, and \\emph{KeLP} with 14.35. The contrastive2 run of \\emph{bunji}, which used a neural network, obtained the highest MAP, 16.57, two points higher than their primary run, which additionally uses the comment plausibility features. Thus, the difference seems to be due to the plausibility features, which hurt performance.
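The zero contribution of queries without a single positive comment follows directly from the definition of average precision. A minimal Python sketch (an illustrative toy scorer, not the official evaluation script) makes this explicit:

```python
def average_precision(ranking):
    """AP of one ranked list of gold labels (True = relevant comment)."""
    hits, total = 0, 0.0
    for i, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            total += hits / i
    # A ranked list with no relevant items contributes 0 to the mean
    return total / hits if hits else 0.0

def mean_average_precision(rankings):
    return sum(average_precision(r) for r in rankings) / len(rankings)

# One query answered perfectly, one with no positive comments at all:
print(mean_average_precision([[True, False], [False, False]]))  # 0.5
```

As the test set gains more all-negative queries, the mean is dragged down even if the systems rank the remaining queries perfectly.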
In their SemEval system paper, \\newcite{SemEval-2017:task3:BUNJI} explain \nthat the similarity features are more important for Subtask C than plausibility features.\n\n\\noindent Indeed, Subtask C contains many comments that are not related to the original question, while candidate comments for subtask A are almost always on the same topic. Another explanation may be the overfitting to the development set since the authors manually designed plausibility features using that set. As a result, such features perform much worse on the 2017 test set.\n\n\\begin{table*}[tbh]\n\\begin{center}\n\\begin{tabular}{clrrrrrrr}\n\\toprule\n& \\bf Submission & \\bf MAP & \\bf \\scriptsize AvgRec & \\bf \\scriptsize MRR & \\bf \\scriptsize P & \\bf \\scriptsize R & \\bf \\scriptsize F1 & \\bf \\scriptsize Acc\\\\\n\\hline\n& bunji-contrastive2 & 16.57 & \\scriptsize 30.98 & \\scriptsize 17.04 & \\scriptsize 19.83 & \\scriptsize 19.11 & \\scriptsize 19.46 & \\scriptsize 95.58 \\\\\n\\bf 1 & \\bf IIT-UHH-primary & \\bf 15.46$_{1}$ & \\bf \\scriptsize 33.42$_{1}$ & \\bf \\scriptsize 18.14$_{1}$ & \\bf \\scriptsize 8.41$_{3}$ & \\bf \\scriptsize 51.22$_{3}$ & \\bf \\scriptsize 14.44$_{2}$ & \\bf \\scriptsize 83.03$_{4}$ \\\\\n& IIT-UHH-contrastive1 & 15.43 & \\scriptsize 33.78 & \\scriptsize 17.52 & \\scriptsize 9.45 & \\scriptsize 54.07 & \\scriptsize 16.08 & \\scriptsize 84.23 \\\\\n\\bf 2 & \\bf bunji-primary & \\bf 14.71$_{2}$ & \\bf \\scriptsize 29.47$_{4}$ & \\bf \\scriptsize 16.48$_{2}$ & \\bf \\scriptsize 20.26$_{1}$ & \\bf \\scriptsize 19.11$_{4}$ & \\bf \\scriptsize 19.67$_{1}$ & \\bf \\scriptsize 95.64$_{2}$ \\\\\n& EICA-contrastive1 & 14.60 & \\scriptsize 32.71 & \\scriptsize 16.14 & \\scriptsize 10.80 & \\scriptsize 9.35 & \\scriptsize 10.02 & \\scriptsize 95.31 \\\\\n\\bf 3 & \\bf KeLP-primary & \\bf 14.35$_{3}$ & \\bf \\scriptsize 30.74$_{2}$ & \\bf \\scriptsize 16.07$_{3}$ & \\bf \\scriptsize 6.48$_{5}$ & \\bf \\scriptsize 89.02$_{2}$ & \\bf \\scriptsize 12.07$_{4}$ & 
\\bf \\scriptsize 63.75$_{5}$ \\\\\n& IIT-UHH-contrastive2 & 14.00 & \\scriptsize 30.53 & \\scriptsize 14.65 & \\scriptsize 5.98 & \\scriptsize 85.37 & \\scriptsize 11.17 & \\scriptsize 62.06 \\\\\n\\bf 4 & \\bf EICA-primary & \\bf 13.48$_{4}$ & \\bf \\scriptsize 24.44$_{6}$ & \\bf \\scriptsize 16.04$_{4}$ & \\bf \\scriptsize 7.69$_{4}$ & \\bf \\scriptsize 0.41$_{6}$ & \\bf \\scriptsize 0.77$_{6}$ & \\bf \\scriptsize 97.08$_{1}$ \\\\\n& ECNU-contrastive2 & 13.29 & \\scriptsize 30.15 & \\scriptsize 14.95 & \\scriptsize 13.86 & \\scriptsize 26.42 & \\scriptsize 18.18 & \\scriptsize 93.35 \\\\\n\\bf 5 & \\bf $^{\\star}$FuRongWang-primary & \\bf 13.23$_{5}$ & \\bf \\scriptsize 29.51$_{3}$ & \\bf \\scriptsize 14.27$_{5}$ & \\bf \\scriptsize 2.80$_{6}$ & \\bf \\scriptsize 100.00$_{1}$ & \\bf \\scriptsize 5.44$_{5}$ & \\bf \\scriptsize 2.80$_{6}$ \\\\\n& EICA-contrastive2 & 13.18 & \\scriptsize 25.16 & \\scriptsize 15.05 & \\scriptsize 10.00 & \\scriptsize 0.81 & \\scriptsize 1.50 & \\scriptsize 97.02 \\\\\n\\bf 6 & \\bf ECNU-primary & \\bf 10.54$_{6}$ & \\bf \\scriptsize 25.56$_{5}$ & \\bf \\scriptsize 11.09$_{6}$ & \\bf \\scriptsize 13.44$_{2}$ & \\bf \\scriptsize 13.82$_{5}$ & \\bf \\scriptsize 13.63$_{3}$ & \\bf \\scriptsize 95.10$_{3}$ \\\\\n& ECNU-contrastive1 & 10.54 & \\scriptsize 25.56 & \\scriptsize 11.09 & \\scriptsize 13.83 & \\scriptsize 14.23 & \\scriptsize 14.03 & \\scriptsize 95.13 \\\\\n& bunji-contrastive1 & 8.19 & \\scriptsize 15.12 & \\scriptsize 9.25 & \\scriptsize 0.00 & \\scriptsize 0.00 & \\scriptsize 0.00 & \\scriptsize 97.20 \\\\\n\\midrule\n& Baseline 1 (IR) & \\bf 9.18 & \\scriptsize \\bf 21.72 & \\scriptsize \\bf 10.11 & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- \\\\\n& Baseline 2 (random) & 5.77 & \\scriptsize 7.69 & \\scriptsize 5.70 & \\scriptsize 2.76 & \\scriptsize 73.98 & \\scriptsize 5.32 & \\scriptsize 26.37 \\\\\n& Baseline 3 (all `true') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize 2.80 & 
\\scriptsize 100.00 & \\scriptsize \\bf 5.44 & \\scriptsize 2.80 \\\\\n& Baseline 4 (all `false') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize \\bf 97.20 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{Subtask C, English (Question-External Comment Similarity):} results for all submissions. The first column shows the rank of the primary runs with respect to the official MAP score. The second column contains the team's name and its submission type (primary vs. contrastive). The following columns show the results for the primary, and then for other, unofficial evaluation measures. The subindices show the rank of the primary runs with respect to the evaluation measure in the respective column. All results are presented as percentages.\nThe system marked with a $^{\\star}$ was a late submission.}\n\\label{table:results:subtaskC}\n\\end{center}\n\\end{table*}\n\n\\subsection{Subtask D, Arabic (Reranking the Correct Answers for a New Question)}\n\nFinally, the results for subtask D, Arabic are shown in Table~\\ref{table:results:subtaskD}. This year, subtask D attracted only 3 teams, which submitted 6 runs: 3 primary and 3 contrastive. \nCompared to last year, the 2017 test set contains a significantly larger number of positive question--answer pairs ($\\sim$40\\% in 2017, compared to $\\sim$20\\% in 2016), and thus the MAP scores are higher this year. Moreover, this year, the IR baseline comes from Google and is thus very strong and difficult to beat. Indeed, only the best system was able to improve on it (marginally) in terms of MAP, MRR and AvgRec. \n\nAs in some of the other tasks, the participants in Subtask D did not concentrate on optimizing for precision\/recall\/F$_1$\/accuracy, and in most cases they did not produce sensible class predictions. \n\nThe best-performing system is \\emph{GW\\_QA} with a MAP score of 61.16, which barely improves over the IR baseline of 60.55.
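The trivial-baseline rows in the result tables follow directly from the positive rate of the test set. For subtask D, with roughly 39.23\\% positive question--answer pairs, a short calculation (an illustrative sketch, not the official scorer) reproduces the ``all true'' and ``all false'' rows:

```python
def constant_baseline(p_pos, predict_all_true):
    """Metrics of a classifier that outputs the same label for every instance."""
    tp = p_pos if predict_all_true else 0.0
    fp = (1.0 - p_pos) if predict_all_true else 0.0
    fn = p_pos - tp
    tn = (1.0 - p_pos) - fp
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    rec = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
    acc = tp + tn
    return prec, rec, f1, acc

# Positive rate of the 2017 subtask D test set (~39.23%)
print([round(100 * m, 2) for m in constant_baseline(0.3923, True)])
print(round(100 * constant_baseline(0.3923, False)[3], 2))
```

The ``all true'' accuracy equals the positive rate itself, while the ``all false'' accuracy equals its complement, which is why constant predictors look deceptively strong on accuracy under imbalance.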
The other two systems \\emph{UPC-USMBA} and \\emph{QU\\_BIGIR} are about 3-4 points behind.\n\n\\begin{table*}[tbh]\n\\begin{center}\n\\begin{tabular}{clrrrrrrr}\n\\toprule\n& \\bf Submission & \\bf MAP & \\bf \\scriptsize AvgRec & \\bf \\scriptsize MRR & \\bf \\scriptsize P & \\bf \\scriptsize R & \\bf \\scriptsize F1 & \\bf \\scriptsize Acc\\\\\n\\midrule\n\\bf 1 & \\bf GW\\_QA-primary & \\bf 61.16$_{1}$ & \\bf \\scriptsize 85.43$_{1}$ & \\bf \\scriptsize 66.85$_{1}$ & \\bf \\scriptsize 0.00$_{3}$ & \\bf \\scriptsize 0.00$_{3}$ & \\bf \\scriptsize 0.00$_{3}$ & \\bf \\scriptsize 60.77$_{2}$ \\\\\n& QU\\_BIGIR-contrastive2 & 59.48 & \\scriptsize 83.83 & \\scriptsize 64.56 & \\scriptsize 55.35 & \\scriptsize 70.95 & \\scriptsize 62.19 & \\scriptsize 66.15 \\\\\n& QU\\_BIGIR-contrastive1 & 59.13 & \\scriptsize 83.56 & \\scriptsize 64.68 & \\scriptsize 49.37 & \\scriptsize 85.41 & \\scriptsize 62.57 & \\scriptsize 59.91 \\\\\n\\bf 2 & \\bf UPC-USMBA-primary & \\bf 57.73$_{2}$ & \\bf \\scriptsize 81.76$_{3}$ & \\bf \\scriptsize 62.88$_{2}$ & \\bf \\scriptsize 63.41$_{1}$ & \\bf \\scriptsize 33.00$_{2}$ & \\bf \\scriptsize 43.41$_{2}$ & \\bf \\scriptsize 66.24$_{1}$ \\\\\n\\bf 3 & \\bf QU\\_BIGIR-primary & \\bf 56.69$_{3}$ & \\bf \\scriptsize 81.89$_{2}$ & \\bf \\scriptsize 61.83$_{3}$ & \\bf \\scriptsize 41.59$_{2}$ & \\bf \\scriptsize 70.16$_{1}$ & \\bf \\scriptsize 52.22$_{1}$ & \\bf \\scriptsize 49.64$_{3}$ \\\\\n& UPC-USMBA-contrastive1 & 56.66 & \\scriptsize 81.16 & \\scriptsize 62.87 & \\scriptsize 45.00 & \\scriptsize 64.04 & \\scriptsize 52.86 & \\scriptsize 55.18 \\\\\n\\midrule\n& Baseline 1 (IR) & \\bf 60.55 & \\scriptsize \\bf 85.06 & \\scriptsize \\bf 66.80 & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- \\\\\n& Baseline 2 (random) & 48.48 & \\scriptsize 73.89 & \\scriptsize 53.27 & \\scriptsize 39.04 & \\scriptsize 66.43 & \\scriptsize 49.18 & \\scriptsize 46.13 \\\\\n& Baseline 3 (all `true') & --- & \\scriptsize --- & 
\\scriptsize --- & \\scriptsize 39.23 & \\scriptsize 100.00 & \\scriptsize \\bf 56.36 & \\scriptsize 39.23 \\\\\n& Baseline 4 (all `false') & --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize --- & \\scriptsize \\bf 60.77 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{Subtask D, Arabic (Reranking the correct answers for a new question):} results for all submissions. The first column shows the rank of the primary runs with respect to the official MAP score. The second column contains the team's name and its submission type (primary vs. contrastive). The following columns show the results for the primary, and then for other, unofficial evaluation measures. The subindices show the rank of the primary runs with respect to the evaluation measure in the respective column. All results are presented as percentages.}\n\\label{table:results:subtaskD}\n\\end{center}\n\\end{table*}\n\n\\subsection{Subtask E, English (Multi-Domain Question Duplicate Detection)}\n\nThe baselines for Subtask E can be found in \\tabref{tab:baseline-e}. The IR baseline is BM25 with perfect truncation after the final relevant document for a given query (equating to an empty result list if there are no relevant documents). The zero-results baseline is the score for a system that returns an empty result list for every single query. This is a high number for each subforum because for many queries there are no duplicate questions in the archive.\n\nAs previously stated, there are no results submitted by participants to be discussed for this subtask.
Eight teams signed up to participate, but unfortunately none of them submitted test results.\n\n\\begin{table*}[t]\n\\centering\n\\renewcommand{\\arraystretch}{1.1}\n\\begin{tabular}{l c}\n\\toprule\n\\bf Baseline & \\bf TMAP \\\\\n\\midrule\nAndroid Baseline 1 (IR oracle) & 99.00 \\\\\nAndroid Baseline 2 (all empty results) & 98.56 \\\\[0.5ex]\nEnglish Baseline 1 (IR oracle) & 98.05 \\\\\nEnglish Baseline 2 (all empty results) & 97.65 \\\\[0.5ex]\nGaming Baseline 1 (IR oracle) & 99.18 \\\\\nGaming Baseline 2 (all empty results) & 98.73 \\\\[0.5ex]\nWordpress Baseline 1 (IR oracle) & 99.21 \\\\\nWordpress Baseline 2 (all empty results) & 98.98 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption{\\textbf{Subtask E, English (Multi-Domain Duplicate Detection):} Baseline results on the test dataset. The empty result baseline has an empty result list for all queries. The IR baselines are the results of applying BM25 with perfect truncation. All results are presented as percentages.}\n\\label{tab:baseline-e}\n\\end{table*}\n\n\n\n\\section{Discussion and Conclusions}\n\\label{sec:discussion}\n\nIn this section, we first describe features that are common across the different subtasks. 
Then, we discuss the characteristics of the best systems for each subtask, with a focus on the machine learning algorithms and the instance representations used.\n\n\\subsection{Feature Types}\n\nThe features the participants used across the subtasks can be organized into the following groups:\n\n(\\emph{i})~\\emph{similarity features} between questions and comments from their threads or between original questions and related questions, e.g., cosine similarity applied to lexical, syntactic and semantic representations, including distributed representations, often derived using neural networks;\n\n(\\emph{ii})~\\emph{content features}, which are special signals that can clearly indicate a bad comment, e.g., when a comment contains ``thanks'';\n\n(\\emph{iii})~\\emph{thread level\/meta features}, e.g., user ID, comment rank in the thread;\n\n(\\emph{iv})~\\emph{automatically generated features} from syntactic structures using tree kernels.\n\nGenerally, similarity features were developed for the subtasks as follows:\n\n\\paragraph{Subtask A.} Similarities between question subject vs.~comment, question body vs.~comment, and question subject+body vs.~comment.\n\n\\paragraph{Subtask B.} Similarities between the original and the related question at different levels: subject vs.~subject, body vs.~body, and subject+body vs.~subject+body.\n\n\\paragraph{Subtask C.} The same as above, plus the similarities of the original question, subject and body at all levels with the comments from the thread of the related question.\n\n\\paragraph{Subtask D.} The same as above, without information about the thread, as there is no thread.\\\\\n\nThe similarity scores to be used as features were computed in various ways, e.g., most teams used the dot product calculated over word $n$-grams ($n$=1,2,3), character $n$-grams, or with TF-IDF weighting. Simple word overlap, i.e., the number of common words between two texts, was also considered, often normalized, e.g., by question\/comment length.
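As an illustration, a unigram cosine similarity and a length-normalized word overlap of the kind described above can be sketched as follows (a toy example; the participants' actual implementations varied):

```python
import math
from collections import Counter

def ngram_counts(text, n=1):
    """Word n-gram counts of a whitespace-tokenized, lowercased text."""
    toks = text.lower().split()
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

def cosine(c1, c2):
    dot = sum(v * c2[k] for k, v in c1.items())
    norm = math.sqrt(sum(v * v for v in c1.values())) \
         * math.sqrt(sum(v * v for v in c2.values()))
    return dot / norm if norm else 0.0

def normalized_overlap(a, b):
    """Common words, normalized by the combined number of distinct words."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / (len(sa) + len(sb))

q = "how can I renew my residence permit"
c = "you can renew the residence permit online"
print(cosine(ngram_counts(q), ngram_counts(c)))
print(normalized_overlap(q, c))
```

The same `cosine` works unchanged over character n-grams or TF-IDF-weighted counts by swapping the counting function.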
Overlap in terms of nouns or named entities was also explored.\n\n\n\\subsection{Learning Methods}\n\n\nThis year, we saw a variety of machine learning approaches, ranging from SVMs to deep learning.\n\nThe \\emph{KeLP} system, which performed best on Subtask A, \nwas SVM-based and used syntactic tree kernels with relational links between questions and comments, together with some standard text similarity measures linearly combined with the tree kernel. Variants of this approach were successfully used in related research\n\\cite{DBLP:conf\/cikm\/TymoshenkoBM16,DaSanMartino:CIKM:2016}, as well as in last year's \\emph{KeLP} system \\cite{SemEval2016:task3:KeLP}.\n\nThe best-performing system on Subtask C, \\emph{IIT-UHH}, was also SVM-based, and it used textual, domain-specific, word-embedding and topic-modeling features. The most interesting aspect of this system is their method for dialogue chain identification in the comment threads, which yielded substantial improvements.\n\n\nThe best-performing system on Subtask B was \\emph{SimBow}. They used logistic regression on a rich combination of different unsupervised textual similarities, built using a relation matrix based on standard cosine similarity between bag-of-words and other semantic or lexical relations.\n\nThis year, we also saw a jump in the popularity of deep learning and neural networks.\nFor example, the \\emph{Beihang-MSRA} system was ranked second with a result very close to that of \\emph{KeLP} for Subtask A.
They used gradient boosted regression trees, i.e., XGBoost, as a ranking model to combine (\\emph{i})~TF$\\times$IDF, word sequence overlap, translation probability, (\\emph{ii})~three different types of tree kernels, (\\emph{iii})~subtask-specific features, e.g., whether a comment is written by the author of the question, the length of a comment or whether a comment contains URLs or email addresses, and (\\emph{iv})~neural word embeddings, and the similarity score from Bi-LSTM and 2D matching neural networks.\n\n\\emph{LearningToQuestion} achieved the second-best result for Subtask B using SVM and Logistic Regression as integrators of rich feature representations, mainly embeddings generated by the following neural networks: (\\emph{i})~siamese networks to learn similarity measures using GloVe vectors \\cite{Pennington:2014}, (\\emph{ii})~bidirectional LSTMs, (\\emph{iii})~gated recurrent unit (GRU) used as another network to generate the neural embeddings trained by a siamese network similar to Bi-LSTM, and (\\emph{iv})~convolutional neural networks to generate embeddings inside the siamese network.\n\n\\noindent The \\emph{bunji} system, second on Subtask C, produced features using neural networks that capture the semantic similarities between two sentences as well as comment plausibility. The neural similarity features were extracted using a decomposable attention model \\cite{parikh-EtAl:2016:EMNLP2016}, which can model alignment between two sequences of text, allowing the system to identify possibly related regions of a question and of a comment, which then helps it predict whether the comment is relevant with respect to the question.\nThe model compares each pair of question and comment tokens, associating it with an attention weight.\nEach question-comment pair is mapped to a real-valued score using a neural network with shared weights, and the prediction loss is calculated list-wise.
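The token-pair comparison at the core of such attention-based models can be illustrated schematically (random toy embeddings stand in for the learned ones; this is only a sketch, not the \\emph{bunji} implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))    # toy question-token embeddings
C = rng.normal(size=(7, 8))    # toy comment-token embeddings

E = Q @ C.T                    # unnormalized score for every question-comment token pair
A = np.exp(E) / np.exp(E).sum(axis=1, keepdims=True)   # attention weights (rows sum to 1)
aligned = A @ C                # soft-aligned comment representation per question token

print(A.shape, aligned.shape)
```

Each question token thus receives a soft summary of the comment regions it aligns with, which downstream layers compare against the token itself.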
\nThe plausibility features are task-specific, e.g., is the person giving the answer actually trying to answer the question or is s\/he making remarks or asking for more information. Other features capture the presence of keywords such as \\emph{what}, \\emph{which}, \\emph{who}, \\emph{where} within the question. There are also features based on the question and comment lengths. All these features were combined in a CRF.\n\n\nAnother interesting system is that of \\emph{Talla}, which consists of an ensemble of syntactic, semantic, and IR-based features, i.e., semantic word alignment, term frequency Kullback-Leibler divergence, and tree kernels. These were integrated in a pairwise-preference learning setup handled with a random forest classifier with 2,000 weak estimators. This system achieved very good performance on Subtask B.\n\nRegarding Arabic, \\emph{GW\\_QA}, the best-performing system for Subtask D, used features based on latent semantic models, namely, weighted textual matrix factorization models (WTMF), as well as a set of lexical features based on string lengths and surface-level matching. \nWTMF builds a latent model, which is appropriate for semantic profiling of a short text. Its main goal is to address the sparseness of short texts using both observed and missing words to explicitly capture what the text is and is not about. The missing words are defined as those of the entire training data vocabulary minus those of the target document.\nThe model was trained on text data from the Arabic Gigaword as well as on Arabic data that we provided on the task website as part of the task. For Arabic text processing, the MADAMIRA toolkit was used.\n\n\\noindent The second-best team for Arabic, \\emph{QU-BIGIR}, used SVM-rank with two similarity feature sets. The first set captured similarity between pairs of text, i.e., synonym overlap, language model score, cosine similarity, Jaccard similarity, etc.
The second set used word2vec to build average word-embedding and covariance word-embedding similarities for the text representation.\n\nThe third-best team for Arabic, \\emph{UPC-USMBA}, combined several classifiers, drawing on (\\emph{i})~lexical string similarities in vector representations and (\\emph{ii})~rule-based features. A core component of their approach was the use of medical terminology covering both Arabic and English terms, which was organized into the following three categories: body parts, drugs, and diseases. In particular, they translated the Arabic dataset into English using the Google Translate service. The linguistic processing was carried out with Stanford CoreNLP for English and MADAMIRA for Arabic. Finally, WordNet synsets both for Arabic and English were added to the representation without performing word sense disambiguation. \n\n\n\n\n\\section{Conclusions}\n\\label{sec:conclusion}\n\nWe have described SemEval-2017 Task 3 on Community Question Answering, which extended the four subtasks at SemEval-2016 Task 3 \\cite{nakov-EtAl:2016:SemEval} with a new subtask on multi-domain question duplicate detection.\nOverall, the task attracted 23 teams, which submitted 85 runs; this is comparable to 2016, when 18 teams submitted 95 runs. The participants built on the lessons learned from the 2016 edition of the task, and further experimented with new features and learning frameworks. The top systems used neural networks with distributed representations or SVMs with syntactic kernels for linguistic analysis.
A number of new features have been tried as well.\n\nApart from the new lessons learned from this year's edition, we believe that the task has another important contribution: the datasets we have created as part of the task, and which we have released for use by the research community, should be useful for follow-up research beyond SemEval.\n\n\n\nFinally, while the new subtask E did not get any submissions, mainly because of the need to work with a large amount of data, \nwe believe that it addresses an important problem and that it will attract the interest of many researchers in the field.\n\n\n\n\n\\section*{Acknowledgements} \nThis research was performed in part by the Arabic Language Technologies (ALT) group at the Qatar Computing Research Institute (QCRI), HBKU, part of Qatar Foundation. It is part of the Interactive sYstems for Answer Search ({\\sc Iyas}) project, which is developed in collaboration with MIT-CSAIL. This research received funding in part from the Australian Research Council.\n\n\n\n\n\\bibliographystyle{acl_natbib}\n\n\\section{Introduction}\n\\label{intro}\n\n\n\n\nParity-time ($\\cal{PT}$) symmetry has recently emerged as a promising design principle for extending Hermitian to non-Hermitian optics and has given rise to a rich variety of physical phenomena based on the appearance of exceptional points and phase transitions in the eigenvalues of the associated non-Hermitian Hamiltonian \\cite{bender1,bender2}.\nIn classical wave systems, the real part of the optical potential corresponds to the refractive index and gain\/loss is analogous to its imaginary part, so that $\\cal{PT}$ symmetry demands $n(r) = n^{*}(-r)$; one can thus envisage various structures obtained by combining index and gain\/loss modulations with the required symmetries, which represent classical analogues of quantum systems described by $\\cal{PT}$-symmetric potentials.\n The recent experimental realizations of 
$\\cal{PT}$-symmetric optical systems have attracted widespread interest, in particular due to their promise for achieving tunable components with extreme sensitivity and very unconventional wave behavior\\cite{iop}-\\cite{guo}. These include loss-induced invisibility\\cite{lin}, Bloch oscillations\\cite{longhi1}, laser generation by reversing the effect of loss at threshold \\cite{peng}-\\cite{phang1}, unidirectional propagation\\cite{regen}-\\cite{feng1}, and optical solitons in $\\cal{PT}$ periodic systems \\cite{mussli}-\\cite{wimmer}, to name a few of the numerous new concepts proposed.\n\n\nThe key feature of $\\cal{PT}$-symmetric photonic structures stems from the fact that they may have real eigenvalues despite containing gain and loss which break the space symmetry. For a certain amount of gain\/loss, there exists a threshold at which the system undergoes spontaneous $\\cal{PT}$-symmetry breaking, above which the eigenfrequencies become complex and the power grows exponentially. Based on the $\\cal{PT}$ concept, several types of extended systems, including $\\cal{PT}$ gratings, $\\cal{PT}$ lattices and $\\cal{PT}$-symmetric resonant structures characterized by complex-valued periodic functions, have been studied both theoretically and experimentally\\cite{feng2}-\\cite{phang2}. The periodic systems also provide an asymmetric response, offering for example unidirectional invisibility \\cite{lin}, unidirectional coupling \\cite{kestas1} and other peculiar effects. Due to their periodicity such systems are resonant, i.e. they provide asymmetric responses in the vicinity of the resonant wavelength $\\lambda \\sim 2a$, where $a$ is the period of the structure.\n\n\nMost of the $\\cal{PT}$ studies focus on one-dimensional systems. A naive extension to 2D is possible by considering parity symmetry in one space direction, which however can hardly lead to principally novel effects.
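The defining condition $n(r) = n^{*}(-r)$ is straightforward to verify numerically for a candidate profile. The snippet below checks it for an illustrative one-dimensional example of our own choosing, with an even index modulation and an odd gain\/loss modulation:

```python
import numpy as np

# Illustrative 1D profile (our assumption): even real index, odd gain/loss, period a = 1
x = np.linspace(-1.0, 1.0, 201)          # symmetric grid, so reversing it maps x -> -x
q = 2.0 * np.pi
n = 3.0 + 0.02 * np.cos(q * x) + 1j * 0.02 * np.sin(q * x)

# PT symmetry: n(x) must equal the complex conjugate of n(-x)
print(np.allclose(n, np.conj(n[::-1])))  # True
```

Any even real-index modulation paired with an odd gain\/loss modulation passes this check, while shifting either modulation generally breaks it.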
An exception is perhaps Ref.~\\cite{kestas1}, where the nontrivial chiral-$\\cal{PT}$ concept has been introduced, which is possible only in 2D or 3D systems. The majority of the studies also consider global $\\cal{PT}$-symmetry. This means that the bulk is uniformly filled by a continuous $\\cal{PT}$-medium (i.e. by $\\cal{PT}$-lattices). Perhaps a single exception within this context is the work \\cite{kestas2}, where the local-$\\cal{PT}$ concept has been introduced for the first time, providing the $\\cal{PT}$ effect in different directions. This results in $\\cal{PT}$-systems with nontrivial flows, for instance axisymmetric flows toward some focus, as suggested in the original proposal \\cite{kestas2}. Moreover, it could be extended to build systems with arbitrary $\\cal{PT}$-flows, like closed loops of currents, chiral objects, and any other flow configuration on demand \\cite{kestas3}.\n\nThe most natural way of looking into the physical properties of such complicated $\\cal{PT}$-objects is to consider them as structures built from microscopic $\\cal{PT}$ objects, i.e. so-called $\\cal{PT}$ molecules represented by $\\cal{PT}$-dipoles, which can be described as a generalized form of {\\color{black}the} conventional ones -- see Fig. \\ref{fig_dipole}. The main idea behind our paper is to identify the $\\cal{PT}$-dipole as the minimum building block which possesses the $\\cal{PT}$ properties. We demonstrate that such a minimum object consists of two scatterers with different complex scattering coefficients. Realization of $\\cal{PT}$-symmetry in optics requires considerable amounts of gain\/loss, which can be provided only by semiconductors and polymers.
For example, $\\cal{PT}$-symmetry breaking was observed experimentally in a passive $\\cal{PT}$-symmetric ridge optical waveguide consisting of a multilayer Al$_x$Ga$_{1-x}$As heterostructure with varying concentration, where the loss is introduced through deposition of a thin layer of chromium on one of the coupler arms\\cite{guo}. Another configuration employing the $\\cal{PT}$ symmetry concept was demonstrated in the $\\cal{PT}$-synthetic microring resonator with InGaAsP multiple quantum wells deposited on an InP substrate, where balanced gain\/loss modulation is achieved by periodically formed Cr\/Ge structures on top of the InGaAsP\\cite{feng2}. A possible design of the $\\cal{PT}$-dipole that could be implemented and measured in microphotonic devices was proposed in the context of a 2D $\\cal{PT}$-symmetric complex structure\\cite{kestas1}. The configuration shown in Fig. 4(a) in Ref.~\\onlinecite{kestas1} consists of a dielectric slab with holes filled by p\/n and n\/p semiconductor junctions which provide gain or loss depending on the orientation {\\color{black}of} each component.\n\nOur paper is organized as follows. In Sec. II we develop a scattering theory of a $\\cal{PT}$-dipole within the first Born approximation and define the difference in the intensity of the scattered field between the configurations with $\\cal{PT}$ dipoles parallel ($\\vec{p}$) and antiparallel ($-\\vec{p}$) to the direction of the incident wave. We consider two specific configurations of the $\\cal{PT}$ dipole, aligned parallel and perpendicular to the incident wave. In Sec. III we numerically investigate the scattering properties of the $\\cal{PT}$-dipole represented by a system consisting of infinitely long, parallel cylinders with opposite signs of the imaginary component of the refractive index. In Sec. IV we present the numerical results for both configurations of the $\\cal{PT}$-dipole considered.
The validity of the theoretical model based on the first Born approximation and the deviations identified by the numerical approach are discussed in Sec. V.\n\n\n\n\\section{Theoretical model}\n\\label{theory}\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.3\\textwidth]{SMK-fig1.png}\n\\caption{(Color online) A single ${\\cal{PT}}$-dipole.}\n\\label{fig_dipole}\n\\end{figure}\n\n\\defS_{\\textrm{\\footnotesize{Re}}}{S_{\\textrm{\\footnotesize{Re}}}}\n\\defS_{\\textrm{\\footnotesize{Im}}}{S_{\\textrm{\\footnotesize{Im}}}}\n\n\\subsection{The model}\n\nThe ${\\cal{PT}}$ dipole consists of two point scatterers centered at the positions $\\vec{r}_1$ and $\\vec{r}_2$ in the $xy$ plane with different complex scattering coefficients $s_1$ and $s_2$ -- see Fig.~\\ref{fig_dipole}. The distance between the two scatterers, $a = |\\vec{r}_1-\\vec{r}_2|$, defines the length scale of the model.\nThe scattering coefficients, which represent effective polarizabilities, are in general complex,\n\\begin{equation} s_{1,2} = S_{\\textrm{\\footnotesize{Re}}}\\pm i~S_{\\textrm{\\footnotesize{Im}}}.\n\\end{equation}\nThe real component corresponds to elastic scattering and the imaginary one accounts for emission\/absorption.\nThe incident field is a plane wave propagating in the $xy$ plane with unit amplitude and normalized frequency $f = a\/\\lambda$.\n\n\\subsection{Electromagnetic response of\nthe ${\\cal{PT}}$ dipole}\n\nThe total electric field $E$ at a point $\\vec{r}$, assumed to be parallel to the $z$ axis, can be written\nas a superposition of the incident plane wave with unit amplitude and the field $E_S$ scattered by the two scatterers:\n\\begin{equation}\nE(\\vec{r}) = e^{i\\vec{k}_0\\vec{r}} + E_S(\\vec{r}).\n\\end{equation}\nHere, $k_0= 2\\pi\/\\lambda$ is the wave number of the incident wave.\n\nIn the limit of weak scattering one can use the first Born approximation, which allows one to calculate the field far away from the scattering center.
To describe the behavior of a single dipole consisting of two point scatterers we focus on the scattered field, which can be written as\n\\begin{equation}\nE_{\\rm S}(\\vec{r}) = \\sum_{j=1}^{2}\n\\frac{ i s_j e^{i\\vec{k}_0\\vec{r}_j}e^{i |\\vec{k}_0||\\vec{r}-\\vec{r}_j|}}\n{|\\vec{r} - \\vec{r}_j|^{1\/2}}.\n\\label{E_scatt_Born}\n\\end{equation}\nIn order to obtain analytical insight we first simplify the general expression for the scattered field into an asymptotic form in the far-field limit, assuming $|\\vec{r}| \\gg |\\vec{r}_{1,2}|$,\n\\begin{equation}\nE_{\\rm S}(\\vec{r}) = \\frac{ ie^{i |\\vec{k}_0||\\vec{r}|}}\n{ |\\vec{r}|^{1\/2} }\n\\left(\\sum_{j=1}^{2} s_j e^{ i |\\vec{k}_0|(\\vec{e_k}-\\vec{e_r})\\vec{r_j}} + O(|\\vec{r}|^{-1})\\right)\n\\label{E_scatt_far-field}\n\\end{equation}\nwhere unit vectors $\\vec{e_k} = \\vec{k}_0\/|\\vec{k}_0|$ and $\\vec{e_r} = \\vec{r}\/|\\vec{r}|$ indicate the directions of the incident wave and of the observer at the point $\\vec{r}$, respectively.\n\n\\medskip\n\nTo understand how the ${\\cal{PT}}$ symmetry influences the electromagnetic response of the dipole, we first consider the limit of a ``small'' dipole, $k_0a \\ll 1$, in which the latter equation can be simplified into the form\n\\begin{equation}\nE_{\\rm S}(\\vec{r}) = \\frac{ ie^{i |\\vec{k}_0||\\vec{r}|}}\n{ |\\vec{r}|^{1\/2} }\n\\left( s + i (\\vec{e_k}-\\vec{e_r})\\vec{p} + O(|\\vec{r}|^{-1})\\right).\n\\label{E_scatt_small-dipole}\n\\end{equation}\nThe first term in the brackets in the latter equation represents the total scattering defined as the sum of the scattering coefficients associated with each of the scatterers,\n$s = \\sum_{j} s_j$. 
This term is parity-invariant and thus provides symmetric scattering, while the second term, in which\n\\begin{equation}\\vec{p} = |\\vec{k}_0|\\sum_{j}{\\color{black}s_j}\\cdot\\vec{r_j}\\end{equation}\ndefines the $\\cal{PT}$ dipole, gives rise to an asymmetry of the scattering which depends on both its strength and orientation.\n\nThe parity asymmetry of the scattering can be determined by calculating the scattered field for the dipoles with opposite orientations $\\vec{p}$ and $-\\vec{p}$.\nSpecifically, in the forward direction $\\vec{e_k} = \\vec{e_r}$, when the orientation of the dipole coincides with the direction of the incident wave, the second term in the brackets of Eq.~(\\ref{E_scatt_small-dipole}) vanishes and the scattered field does not depend on the $\\cal{PT}$ dipole $\\vec{p}$. In the case of backward scattering, when $\\vec{e_k} = -\\vec{e_r}$, the scattered field given by Eq. (\\ref{E_scatt_small-dipole}) is proportional to $s + 2i\\vec{e_k}\\vec{p}$. This means that for a real-valued scalar scattering $s = \\sum_{j} s_j$ and for a real-valued $\\cal{PT}$ dipole $\\vec{p}$ {\\color{black}the} parity symmetry is maintained, i.e., $|s + 2i\\vec{e_k}\\vec{p}| = |s - 2i\\vec{e_k}\\vec{p}|$. On the other hand, when the scattering is described in terms of the complex coefficients\n${\\color{black}s_j,}$ the parity symmetry is broken.\nThus, asymmetric scattering requires a dipole characterized by nonzero elastic scattering and a nonzero gain\/loss balance.\n\n\\medskip\n\nThe preceding discussion remains valid without the assumption of ``small'' dipoles. 
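The requirement of both nonzero elastic scattering and a nonzero gain\/loss balance can be made explicit by a short calculation (in our notation, $q \equiv \vec{e}_k\cdot\vec{p}$ for the backscattering direction). Expanding the two backscattered intensities gives

```latex
% Difference of the backscattered intensities for the two orientations
% of the dipole, with s the total scattering coefficient and q = e_k . p:
\left| s + 2iq \right|^2 - \left| s - 2iq \right|^2
  = 4i\,(q\bar{s} - s\bar{q})
  = -8\,\mathrm{Im}(q\bar{s}).
% For the PT dipole, s = 2 S_Re is real while q is purely imaginary,
% q = i k_0 S_Im (e_k . \Delta r), so the difference equals
% -16 k_0 S_Re S_Im (e_k . \Delta r),
% which is nonzero only if both S_Re and S_Im are nonzero.
```

The asymmetry is thus proportional to the product $S_{\rm Re}S_{\rm Im}$, in agreement with the statement that both nonzero elastic scattering and a nonzero gain\/loss balance are required.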
In what follows the condition $k_0a\\ll 1$ is lifted.\nBy using the notation $\\vec{r}_{1,2} = \\pm \\Delta \\vec{r}\/2$, Eq.~(\\ref{E_scatt_far-field}) can be rewritten in the form\n\\begin{equation}\n\\label{E_scatt_dipole}\n\\begin{array}{lcl}\n\\displaystyle{E_{\\rm S}(\\vec{r},\\vec{p}) = \\frac{ 2ie^{i |k_0||\\vec{r}|}}\n{ |\\vec{r}|^{{1}\/{2}} } }\n\\Big[ &&S_{\\textrm{\\footnotesize{Re}}}\\cos(\\frac{|\\vec{k}_0|(\\vec{e_k}-\\vec{e_r})\\cdot\\Delta\\vec{r}}{2}) \\\\\n&+&\nS_{\\textrm{\\footnotesize{Im}}}\\sin(\\frac{|\\vec{k}_0|(\\vec{e}_k-\\vec{e}_r)\\cdot\\Delta\\vec{r}}{2})\\Big],\n\\end{array}\n\\end{equation}\naccording to which $E_S(\\vec{r},\\vec{p})$ depends on the orientation of the dipole.\nTo characterize the asymmetry of the scattered field associated with the opposite orientations of the\n$\\cal{PT}$-dipole, we calculate the difference in the intensity of the scattered field between the configurations with $\\cal{PT}$ dipoles aligned parallel and antiparallel to the direction of the incident wave:\n\\begin{equation}\n\\label{deltaP}\n\\Delta P_{\\rm S}(\\vec{r}) = \\left| | E_{\\rm S}(\\vec{r},\\vec{p})|^2 - |E_S(\\vec{r},-\\vec{p})|^2 \\right|.\n\\end{equation}\nWe apply expressions (\\ref{E_scatt_dipole},\\ref{deltaP}) to the following two specific orientations of the dipole:\n\nWhen the\n$\\cal{PT}$-dipole\n is oriented along the direction of the incident wave, $\\vec{e_k}\\parallel\\Delta\\vec{r}$, the scattered field can be expressed as a function of the observation angle $\\theta$\n\\begin{equation}\n\\label{E_scatt_dipole_theta}\n\\begin{array}{lcl}\n\\displaystyle{E_{\\rm S}(\\vec{r}) = \\frac{ 2ie^{i |k_0||\\vec{r}|}}\n{ |\\vec{r}|^{{1}\/{2}} } }\n\\Big[ && S_{\\textrm{\\footnotesize{Re}}}\\cos(\\frac{k_0a(1-\\cos\\theta)}{2})\\\\\n&+&\nS_{\\textrm{\\footnotesize{Im}}}\\sin(\\frac{k_0a(1-\\cos\\theta)}{2})\\Big]\n\\end{array}\n\\end{equation}\nwhere $\\cos\\theta = \\vec{e}_k\\cdot\\vec{r}\/|\\vec{r}|$, 
and\n\\begin{equation}\n\\label{I_scatt_dipole_theta}\n\\Delta P_{\\rm S}(\\vec{r}) = \\frac{ 4S_{\\textrm{\\footnotesize{Re}}}S_{\\textrm{\\footnotesize{Im}}}}\n{|\\vec{r}|} \\sin\\left[k_0a(1-\\cos\\theta)\\right].\n\\end{equation}\nThe behavior of the angular dependence of $\\Delta P_{S}(\\vec{r})$, given by Eq.~(\\ref{I_scatt_dipole_theta}), is demonstrated in Fig. \\ref{fig_1}. The forward scattering does not depend on the orientation of the dipole ($\\Delta P_{\\rm S}(\\theta=0)=0$) while significant asymmetry is observed in the backward scattering ($\\theta=\\pi$).\nInterestingly, the backward scattering is symmetric in special cases when $2ak_0 = \\pi\\times N$, where $N$ is an integer, i.e., an ``accidental'' symmetric backscattering occurs at\nthe wavelengths $\\lambda_N = 4a\/N$.\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.4\\textwidth]{SMK-fig2.pdf} \n\\caption{Angular diagrams of the $\\cal{PT}$ asymmetry $\\Delta P_{S}(\\vec{r})$ in the case of the parallel orientation of the $\\cal{PT}$-dipole for normalized frequencies $f=a\/\\lambda = k_0a\/(2 \\pi)$ in the range $0.03 < f < 1.6$. The incident wave propagates along the $y$ axis. The orientation of the axes is shown in the upper left panel. 
Since all panels have the same scale, the diagrams also give an estimate of how the total scattered energy depends on the frequency of the incident wave.}\n\\label{fig_1}\n\\end{figure}\n\nIn the case when the $\\cal{PT}$-dipole is oriented {\\color{black}perpendicularly} to the direction of the incident wave,\n$\\vec{e}_k\\perp\\Delta\\vec{r}$, the scattered field can be expressed as a function of the observation angle $\\theta$\n\\begin{equation}\n\\begin{array}{lcl}\n\\label{E_scatt_dipole_theta_perp}\n\\displaystyle{\nE_{\\rm S}(\\vec{r}) = \\frac{ 2ie^{i |k_0||\\vec{r}|}}\n{ |\\vec{r}|^{{1}\/{2}} } }\n\\Big[ && S_{\\textrm{\\footnotesize{Re}}}\\cos(\\frac{k_0a\\sin\\theta}{2}) \\\\\n&-& S_{\\textrm{\\footnotesize{Im}}}\\sin(\\frac{k_0a\\sin\\theta}{2})\\Big]\n\\end{array}\n\\end{equation}\nand the asymmetry in the scattered field between the opposite orientations of the $\\cal{PT}$ dipole reads\n\\begin{equation}\n\\label{I_scatt_dipole_theta_perp}\n\\Delta P_{\\rm S}(\\vec{r}) = \\frac{ 4S_{\\textrm{\\footnotesize{Re}}}S_{\\textrm{\\footnotesize{Im}}}}\n{|\\vec{r}|} \\sin\\left(k_0a\\sin\\theta\\right).\n\\end{equation}\n\nOne can see in this case that no asymmetry between forward ({\\color{black}$\\theta = 0$}) and backward ({\\color{black}$\\theta = \\pi$}) scattering occurs. In addition, Eq.~(\\ref{I_scatt_dipole_theta_perp}) allows one to determine the critical observation angles $\\theta$ at which the asymmetry $\\Delta P_{\\rm S}(\\vec{r})$ vanishes for a given frequency $f$, $\\theta = \\arcsin(N\/(2f))$, and yields the number of critical observation angles $N_{\\theta}$ that appear in one quadrant for frequencies $f > N_{\\theta}\/2$. These features arising from Eq.~(\\ref{I_scatt_dipole_theta_perp}) can be observed in the angular dependence of $\\Delta P_{S}(\\vec{r})$ shown in Fig. 
\\ref{fig_1a}.\n\nEquations\n(\\ref{E_scatt_dipole},\\ref{deltaP})\nallow us to evaluate both the frequency and angular dependence of the asymmetric scattering of the $\\cal{PT}$ dipole and could be generalized to an arbitrary orientation of the dipole.\nAs expected, the intensity of the scattered field decreases as $r^{-1}$ at large distances.\nThe only parameter which determines the angular dependence of the scattered field is $k_0a=2\\pi a\/\\lambda$.\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.4\\textwidth]{SMK-fig3.pdf} \n\\caption{The same as in Fig. 2 but for the\n perpendicular orientation of the $\\cal{PT}$-dipole.\n}\n\\label{fig_1a}\n\\end{figure}\n\n\n\n\n\\section{Numerical method}\n\\label{numerics}\n\nIn the numerical calculations,\nthe ${\\cal{PT}}$-dipole is represented by two infinitely long cylinders\nparallel to the $z$ axis.\nThe distance between the centers of the cylinders is $a$.\nThe radius of the cylinders is $R_0$ and their refractive indices are $n_j = n_R \\pm in_I$, $j = 1,2$.\nThe incident electromagnetic plane wave with normalized frequency $f = a\/\\lambda$ propagating in the $xy$ plane\n\\begin{equation}\nE(x, y |\\omega)_{\\rm inc} = \\exp[i (k_x x+ k_y y) - i\\omega t]\n\\label{eq.P1}\n\\end{equation}\nis polarized parallel to the axes of the cylinders.\n\nThe total electric field can be expressed as the sum of the incident field $E(x,y|\\omega)_{\\rm inc}$ and a scattered field $E_{\\rm S}(x,y|\\omega)$\n\\begin{equation}\nE(x,y|\\omega) = E(x, y |\\omega)_{\\rm inc} + E_{\\rm S}(x,y|\\omega)\n\\label{eq.P2}\n\\end{equation}\nTo study the scattering properties of EM waves for a single ${\\cal{PT}}$-dipole we evaluate the radial component of the Poynting vector\n\\begin{equation}\nP_{\\rm S}(R,\\phi) = E_{\\rm S}(R,\\phi)[H_{\\rm S}^\\phi(R,\\phi)]^{*}\n\\label{eq.P3}\n\\end{equation}\nalong the circumference of a circle with radius $R$ centered at the focus of the system.\nIn Eq. 
(\\ref{eq.P3}),\n\\begin{equation} H^\\phi_{\\rm S} = \\frac{i}{\\omega\\mu }\\frac{\\partial E_{\\rm S}}{\\partial r}\\Bigg|_{r=R}\n\\end{equation}\nis the tangential component of the magnetic field.\n\n\nIn analogy to Eq.~(\\ref{deltaP}) we characterize the asymmetry of the scattered field associated with the opposite orientations of the\n$\\cal{PT}$-dipole in terms of the difference $\\Delta P_{\\rm S}(R,\\phi)$ defined as\n\\begin{equation}\n\\label{deltaPnum}\n\\Delta P_{\\rm S}(R,\\phi) = \\left| P_{\\rm S}(R,\\phi,\\vec{p}) - P_S(R,\\phi,-\\vec{p}) \\right|.\n\\end{equation}\n\nTo compute $\\Delta P_{\\rm S}(R,\\phi)$ we apply a numerical algorithm based on the expansion of the electromagnetic field into cylinder functions \\cite{Hulst}.\nThe scattered electric field can be\nexpressed in cylindrical coordinates $r$ and $\\phi$ as\n\\begin{equation}\nE_{S}(\\vec{r}) = \\sum_{j=1}^{2}\\sum_{m}\\beta_m^{j}H_m(k_0|\\vec{r}-\\vec{r}_j|) e^{im\\phi_j},\n\\label{eq.1b}\n\\end{equation}\nwhere $H_m$ are the Hankel functions of the first kind and $r_j$, $\\phi_j$ are cylindrical coordinates centered at the center of the $j$th cylinder.\n The coefficients $\\beta_m^{j}$ can be calculated from the continuity condition of the tangential components of the electric and magnetic\n field at the boundaries of the cylinders.\nOur approach is described in detail in Refs.~\\onlinecite{PM3,PM4}.\n\n\n\\section{Results}\n\\label{results}\n\n\n\\subsection{${\\cal{PT}}$-dipole - parallel configuration}\n\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig4a.pdf}\n~~\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig4b.pdf}\n\\caption{(Color online) The intensity of the scattered field for two opposite orientations of the dipole parallel to the propagation of the incident wave (dashed black and solid red lines)\nand their difference $\\Delta P_{S}(R,\\phi)$ (shaded area) for the ${\\cal{PT}}$-dipole with gain\/loss characterized by $n_I = \\pm 0.5$.\n(a) in 
the near field $(R = 2a)$, and (b) in the far field $(R = 20a)$. The frequency of the incident wave is $f = 0.16$.}\n\\label{fig_2}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig5a.pdf}\n~~\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig5b.pdf}\n\\caption{(Color online) Left: the difference in the scattered intensity for the two antiparallel orientations of the ${\\cal{PT}}$-dipole, $\\Delta P_{S}(R,\\phi)$, characterized by $n_I = \\pm 0.5$ in the near field $(R = 2a)$. The right panel shows a detail of the scattered field.\n The frequency of the incident wave is $f = 1.0$.}\n\\label{fig_3}\n\\end{figure}\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig6.pdf}\n\\caption{(Color online) The intensity of the scattered field $P_{S}(R,\\phi)$ along the $y$-axis as a function of the normalized radius $R\/a$.\nSolid lines represent backward scattering for the two orientations of the dipole. Similarly,\nblack dashed and red dotted lines show the forward scattering for the two antiparallel orientations of the dipole.\nThe inset shows the product $P_S R\/a$ to demonstrate that the intensity of the scattered field decreases $\\sim 1\/R$ in the far field.\n}\n\\label{fig_4}\n\\end{figure}\n\n\n\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.43\\textwidth]{SMK-fig7.pdf}\n\\caption{(Color online) The difference in the scattered intensity for antiparallel orientations of the ${\\cal{PT}}$-dipole, $\\Delta P_{S}(R,\\phi)$, in the far field ($R = 20a$) vs the gain\/loss parameter $n_I = \\pm 0.005$ (left), $n_I = \\pm 0.05$ (middle) and $n_I = \\pm 0.5$ (right), when $f = 1.0$. 
The red line in the middle (right) panel represents $\\Delta P_S$\nfrom the left (middle) panel, respectively, multiplied by a factor of 10, to display the linear dependence of the scattered intensity\non the gain\/loss parameter $n_I$ for small values of $n_I$ and its breaking when $n_I$ increases.}\n\\label{fig_5}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.43\\textwidth]{SMK-fig8.pdf}\n\\caption{(Color online) The difference in {\\color{black}the scattered intensity} for antiparallel orientations of the ${\\cal{PT}}$-dipole, $\\Delta P_{S}(R,\\phi)$, in the far field ($R = 20a$) vs the radius of the cylinder for $n_I = \\pm 0.05$: $R_0 = 0.01a$ (left), $R_0 = 0.1a$ (middle), and $R_0 = 0.4a$ (right). The frequency is $f = 1.0$. Note that the Born approximation predicts symmetric backscattering for this frequency. Numerically, this is observed only for very tiny cylinders (the left panel). For stronger scatterers, the Born approximation is not valid.}\n\\label{fig_6}\n\\end{figure}\n\nWe first consider the case when the ${\\cal{PT}}$-dipole is parallel to the propagation direction of the incident wave. The system which represents a $\\cal{PT}$-dipole consists of two cylinders of radius $R_0 = 0.1a$ characterized by\nthe refractive indices $n_j = n_R \\pm in_I$, $j = 1,2$, where the real part $n_R = 3.5$ is kept constant while the imaginary part is varied in the range $0.005 < n_I < 0.5$.\nWe have chosen the radius of the cylinders to be sufficiently small, $R_0 = 0.1a$, to allow comparison with analytic results based on the point scatterer approximation.\nIn Fig.~\\ref{fig_2} we display the scattering diagrams obtained for the frequency $f = 0.16$ for the two opposite orientations of the dipole $\\pm\\vec{p}$ in the near-field ($R=2a$) and far-field ($R = 20a$) limits.\nThe grey shaded area in Fig. 
\\ref{fig_2}, which shows the absolute value of the difference between the scattered power $\\Delta P_{S}(R,\\phi)$ for both orientations $\\vec{p}$ and $-\\vec{p}$, represents the key feature associated with the scattering properties of the $\\cal{PT}$ dipole.\nPrimarily, in the case of the parallel orientation of the $\\cal{PT}$ dipole the scattering diagrams reveal a strong asymmetry along the direction of propagation of the incident wave. Such a behavior is consistent with the analytical results given by Eq.~(\\ref{I_scatt_dipole_theta}) and displayed in Fig. \\ref{fig_1}.\nIn addition, one can observe that in the $\\it{far}$-$\\it{field}$ the power scattered along the $y$-axis for the two antiparallel orientations of the dipole shown in Fig. \\ref{fig_2}(b) coincides and yields a vanishing difference $\\Delta P_{S}(R,\\phi=0)$. Simultaneously, this behavior confirms the theoretical prediction given by {\\color{black}Eq.~(\\ref{E_scatt_small-dipole})}.\nBy inspecting the scattering of the ${\\cal{PT}}$-dipole in the ${\\it near}$-${\\it field}$ limit we found an\nasymmetry of the transmitted power along the $y$-axis, as demonstrated in Fig. \\ref{fig_2}(a).\nWe explored the existence of this phenomenon also at larger frequencies, where, according to the theoretical model, the system reveals richer scattering patterns (Fig. \\ref{fig_1}). As an example we display\nin Fig.~\\ref{fig_3}\nthe scattering diagram for the frequency $f=1$.\n\nTo quantify the transition between the near-field and far-field limits we depict in Fig. \\ref{fig_4} the dependence of the field scattered along the $y$-axis on the normalized radius $R\/a$ for both orientations of the dipole. One can see that the asymmetry in the forward scattering vanishes at $R \\simeq 12.5a$, which {\\color{black}suggests} that the non-vanishing difference in the forward scattering appears solely in the near-field regime. 
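The angular structure predicted by Eq.~(\ref{I_scatt_dipole_theta}) can also be checked directly. The following minimal sketch (the function name and parameter defaults are our own illustrative choices) verifies that the forward scattering is orientation independent for any frequency, that the backscattering is asymmetric in general, and that it becomes accidentally symmetric at $f = N\/4$:

```python
import math

def delta_P_parallel(theta, f, S_re=1.0, S_im=0.5, r=20.0):
    # PT asymmetry of the parallel dipole, Eq. (I_scatt_dipole_theta):
    # |Delta P_S| ~ (4 S_Re S_Im / r) * |sin(k0 a (1 - cos(theta)))|,
    # with the normalized frequency f = a/lambda, i.e. k0*a = 2*pi*f.
    k0a = 2.0 * math.pi * f
    return abs(4.0 * S_re * S_im / r * math.sin(k0a * (1.0 - math.cos(theta))))

# Forward scattering (theta = 0) does not depend on the dipole orientation.
assert delta_P_parallel(0.0, f=0.16) < 1e-12
# Backscattering (theta = pi) is asymmetric in general ...
assert delta_P_parallel(math.pi, f=0.16) > 1e-3
# ... but accidentally symmetric when 2*k0*a = pi*N, i.e. f = N/4
# (equivalently, at the wavelengths lambda_N = 4a/N).
for N in (1, 2, 3, 4):
    assert delta_P_parallel(math.pi, f=N / 4.0) < 1e-9
```

The check only probes the zeros and the sign structure of the formula, so it is insensitive to the overall prefactor.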
Since the scattered fields decrease as $\\sim 1\/R$ in the far field, the normalized product $P_S R\/a$ remains constant for the backward scattering, as shown in the inset of Fig. \\ref{fig_4}.\n\nBesides the effect associated with the near-field asymmetry described above, we found yet another interesting difference between the theoretical prediction and the numerical results.\nNamely, we observed that the scattering pattern associated with {\\color{black}a ${\\cal{PT}}$-dipole} in the far-field limit obtained numerically reveals a strong dependence on the strength of the imaginary part $n_I$. This is demonstrated in Fig. \\ref{fig_5}, where the scattering diagrams for three values of the gain\/loss parameter are depicted. We note that, according to the theoretical model, the size of the imaginary component $n_I$ {\\color{black}does not affect} the shape of $\\Delta P_{S}(R,\\phi)$, the difference between the intensities for antiparallel orientations of the ${\\cal{PT}}$-dipole given by Eq.~(\\ref{I_scatt_dipole_theta}). The theoretical approach in principle cannot account for the features described above.\n\nTo quantify the range of the gain\/loss parameter beyond which the numerical results indicate the limits of the validity of the Born approximation, we compare in the middle panel of Fig. \\ref{fig_5} the scattering intensities for\ntwo values of $n_I$: 0.005 (red line, multiplied by 10) and $n_I=0.05$. Clearly, the scattering increases linearly with $n_I$ for small values of $n_I$.\nThe same procedure applied to the results for $n_I=0.05$ and 0.5 (right panel of Fig. \\ref{fig_5})\n{\\color{black}unveils} the breaking of the linear behavior for higher gain\/loss parameters.\n\nFor completeness, we also studied how the scattering pattern is affected when the radius of the cylinder is varied in the range $0.01a < R_0 < 0.4a$ -- see Fig. 
\\ref{fig_5}, which display the dependence of $\\Delta P_{S}(R,\\phi)$ on $n_I$, the variation of the radius gives rise to a significantly wider range of $\\Delta P_{S}(R,\\phi)$ and to a strong variation of the shape of the scattering pattern. We do not expect any simple $R_0$ dependence of the scattering pattern since the latter is strongly affected by Fano resonances \\cite{rr,PM4}.\n\nThe result shown in the left panel of Fig. \\ref{fig_6} confirms the accidental symmetry in the {\\color{black} backscattering} in the far-field limit along the $y$-axis which occurs\n\tfor small $\\cal{PT}$ dipoles. The effect observed in the numerical calculations occurs at the integer-valued frequencies, at which the difference $\\Delta P_{\\rm S}(R,\\phi)$ vanishes, and {\\color{black}is in accord with} the theoretical model in the far-field limit given by Eq.~(\\ref{I_scatt_dipole_theta}). This result also indicates that the Born approximation {\\color{black} is} not sufficient for thicker cylinders, as is shown in the middle and right panels of {\\color {black} Fig.}~\\ref{fig_6}, where the backscattering is not symmetric.\n\n\n\n\\subsection{${\\cal{PT}}$-dipole - perpendicular configuration}\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig9a.pdf}\n~~\n\\noindent\\includegraphics[width=0.23\\textwidth]{SMK-fig9b.pdf}\n\\caption{The scattered intensity for the two antiparallel orientations of the ${\\cal{PT}}$-dipole lying perpendicular to the incident wave\n(black dashed and red solid lines).\nThe shaded area is the difference $\\Delta P_S$.\nThe gain\/loss parameter is $n_I = \\pm 0.5$. 
Left: $f = 0.16$; right: $f = 1.0$.\n}\n\\label{fig_7}\n\\end{figure}\n\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.43\\textwidth]{SMK-fig10.pdf}\n\\caption{Dependence of the difference in the scattered intensity for antiparallel orientations of the ${\\cal{PT}}$-dipole, $\\Delta P_{S}(R,\\phi)$, in the far field ($R = 20a$) on the gain\/loss parameter: $n_I = \\pm 0.005$ (left), $n_I = \\pm 0.05$ (middle) and $n_I = \\pm 0.5$ (right), when $f = 1.0$.}\n\\label{fig_8}\n\\end{figure}\n\nIn Fig. \\ref{fig_7} we show the scattering patterns in the near-field limit ($R = 2a$) for the perpendicular orientation of the ${\\cal{PT}}$-dipole for two different frequencies. The scattered intensity for the antiparallel orientations $\\pm\\vec{p}$, indicated by dashed black and solid red lines, possesses features which significantly deviate from those associated with the parallel orientation of the $\\cal{PT}$-dipole, while the asymmetric scattering indicated by the shaded areas is maintained. It is interesting to note that for small frequencies $\\Delta P_{S}(R,\\phi)$ is symmetric with respect to both the $x$- and $y$-axes, while with increasing frequency it becomes strongly asymmetric along the $y$-axis. One can also observe that, in contrast to the parallel orientation of the ${\\cal{PT}}$-dipole, asymmetric scattering does not occur along the $y$-axis, in accord with the theoretical model -- see {\\color {black} Eq.~(\\ref{I_scatt_dipole_theta_perp}).}\n\nIn Fig. \\ref{fig_8} we demonstrate the dependence of the scattering diagrams in the far field for three values of the gain\/loss parameter $n_I$. The scattering patterns display qualitatively\nsimilar behavior to those associated with the parallel orientation shown in Fig. 
\\ref{fig_5}; in particular, they confirm a linear dependence on the gain\/loss parameter $n_I$ in the range\n$0.005 < n_I < 0.05$, within the range of validity of the first Born approximation.\n\n\\section{Discussion and Conclusions}\n\n\n\\begin{figure}[t]\n\\noindent\\includegraphics[width=0.4\\textwidth]{SMK-fig11.pdf}\n\\caption{(Color online) Angular diagrams of the $\\cal{PT}$ asymmetry for both the parallel and perpendicular orientations of the $\\cal{PT}$-dipole, given by $\\Delta P_{S}(R,\\phi)$, at frequency $f=0.8$. The analytical prediction, given by Eq.~(\\ref{I_scatt_dipole_theta}) (black line),\nis compared with numerical data for $R_0=0.01a$, $n_I=0.005$ and $R=20a$ (red line, re-scaled in absolute value).\n}\n\\label{KS-fig5}\n\\end{figure}\n\n\nThe numerical results presented in the previous Section confirm the asymmetric scattering of the ${\\cal{PT}}$-dipole for both the parallel and perpendicular dipole orientations and simultaneously offer the possibility of examining the limits of the theoretical model associated with the approximations implemented.\n\nFirst of all, we numerically explored differences in the scattering patterns arising in the near field, which are demonstrated in Figs. \\ref{fig_3} and \\ref{fig_4}; in particular, in the near field the scattering {\\color{black}in} the forward direction ceases to be symmetric. This feature clearly arises due to the modified properties of the scattering pattern when the transition between the near-field and far-field limits takes place, as has been shown in Fig. \\ref{fig_4}, and yields the threshold between both regimes.\n\nTo check the validity of the Born approximation we compare the results obtained analytically and numerically for the case of a small $\\cal{PT}$-dipole\nwith the radius $R_0 = 0.01a$ and gain\/loss parameter $n_I = 0.005$ -- see Fig. \\ref{KS-fig5}. 
The analytical and numerical results coincide for both orientations of the $\\cal{PT}$-dipole.\n\nIn addition, the limit of the validity of the first Born approximation {\\color{black}has} been determined by exploring the dependence of the scattering of the ${\\cal{PT}}$-dipole on the size of the gain\/loss parameter $n_I$. We have shown that when the size of the imaginary component $n_I$ is small ($\\lesssim 0.05$) {\\color{black}it} does not qualitatively affect the shape of $\\Delta P_{\\rm S}(R,\\phi)$, which follows a linear dependence on $n_I$ in accord with the first Born approximation. When $n_I$ is increased, the linear scaling of the scattering with $n_I$ does not apply and the system cannot be described in terms of the first Born approximation.\nFinally, we note that the results shown in Fig. \\ref{fig_6}{\\color{black},} which demonstrate the dependence of $\\Delta P_{S}(R,\\phi)$ on the radius of the cylinder $R_0$, display a strong dependence; however, one cannot anticipate any trivial scaling since the behavior may be strongly affected at frequencies in the vicinity of the Fano resonances.\n\nIn conclusion, we analyzed, both analytically and numerically, the electromagnetic response of the ${\\cal{PT}}$ dipole\nand found that the Born approximation is valid in the far-field limit and for small scattering parameters of the dipole. In the general case, a\nrich variety of scattering patterns is observed for both the parallel and perpendicular orientations of the dipole.\nOur results indicate that structures composed of a large number of ${\\cal{PT}}$ dipoles might possess interesting new transmission properties, worth analyzing in the future.\n\n\n\n\n\\section*{Acknowledgements}\nWe acknowledge financial support by the Spanish Ministerio de Ciencia e Innovaci{\\'o}n and the European Union FEDER through project FIS2011-29731-C02-01. 
The research of P.~Marko\\v s was supported by the Slovak Research and Development Agency under the contract No. APVV-15-0496\nand by the Agency VEGA under the contract No. 1\/0108\/17. The research of V. Kuzmiak was supported by\nGrant 16-00329S of the Czech Science Foundation (CSF).\n\n\n\n\n\\section{Introduction}\n\\label{Introduction}\nIn general, humans observe data as a sequence and seldom observe the samples twice; nevertheless, they can learn and accumulate knowledge of new data throughout their whole lives.\nUnlike humans, artificial neural networks (ANNs), which are inspired by biological neural systems, suffer from catastrophic forgetting \\cite{McClelland1995Why,Mccloskey1989CatastrophicII,Ratcliff1990Connectionnist}, whereby learned knowledge is disrupted when a new task is being learned.\n\nContinual learning \\cite{Hassabis2017Neuroscience,Ring1994Continual,Thrun1995Lifelong} aims to alleviate catastrophic forgetting in ANNs. The key to continual learning is that the model handles the data sequentially and preserves the knowledge of previous tasks without storing all the data from previous tasks. With continual learning, the model has the potential to learn novel tasks quickly if it can consolidate the previously acquired knowledge. Unfortunately, in the\nmost common approaches, in order to alleviate catastrophic forgetting,\nthe model cannot acquire new knowledge about previous tasks while learning new tasks. For example, EWC \\cite{Kirkpatrick2017OvercomingCF}, PI \\cite{Zenke2017ContinualLT}, RWALK \\cite{Chaudhry2018RiemannianWF} and MAS \\cite{Aljundi2018MemoryAS}, which use regularization to slow down learning on weights that correlate with previously acquired knowledge, resist decreases in performance on previous tasks but cannot acquire new knowledge quickly. 
Gradient episodic memory (GEM) \\cite{LopezPaz2017GradientEM} and average gradient episodic memory (A-GEM) \\cite{Chaudhry2018EfficientLL} guarantee that\nthe loss over episodic memory does not increase\nwhen the model updates the gradient; they share the same shortcoming.\n\nCatastrophic forgetting can be alleviated if the model can acquire novel knowledge about previous tasks when learning new tasks.\nBuilding on GEM and A-GEM, we assume that the model not only maintains the loss over episodic memory, preventing it from increasing, but actually decreases the loss to acquire novel knowledge of experiences that are representative of the previous tasks.\nTo achieve this goal, the optimizer of the model should guarantee that the angle between the gradient of samples from episodic memory and the updated gradient is less than $90^{\\circ}$.\nBased on the idea above, we introduce a soft constraint $\\epsilon \\in [0, 1]$, which balances forgetting old tasks (the loss over previous tasks, represented by episodic memory) against learning new tasks (the loss over new tasks), and propose a variant of A-GEM with the soft constraint $\\epsilon$, called $\\epsilon$-SOFT-GEM, which is a combination of episodic memory and optimization constraints.\nAdditionally, we introduce an intuitive idea, average A-GEM (A-A-GEM), in which the updated gradient is the average of the gradient of samples from episodic memory and the gradient of new samples from the learning task, so that the angle between the gradient of the samples from episodic memory and the updated gradient is no more than $90^{\\circ}$.\n\nWe evaluate $\\epsilon$-SOFT-GEM, A-A-GEM and several representative baselines on a variety of sequential learning tasks using metrics of the stability and plasticity of the model. 
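The gradient rules discussed above can be sketched in a few lines of plain Python with toy two-dimensional gradients. This is only an illustration, not a reference implementation: the function names are ours, the projection form follows the A-GEM paper, and the unit-normalization in the averaging rule is our assumption to make the $90^{\circ}$ bound hold:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def agem_update(g, g_ref):
    # A-GEM: if the new-task gradient g would increase the loss over
    # episodic memory (dot(g, g_ref) < 0), project g onto the half-space
    # where its inner product with the memory gradient g_ref is non-negative.
    inner = dot(g, g_ref)
    if inner >= 0.0:
        return list(g)
    coef = inner / dot(g_ref, g_ref)
    return [gi - coef * ri for gi, ri in zip(g, g_ref)]

def a_a_gem_update(g, g_ref):
    # A-A-GEM: average the unit-normalized new-task and memory gradients;
    # the angle between the result and g_ref is then at most 90 degrees.
    def unit(v):
        n = math.sqrt(dot(v, v))
        return [x / n for x in v]
    gn, rn = unit(g), unit(g_ref)
    return [0.5 * (a + b) for a, b in zip(gn, rn)]

g, g_ref = [1.0, -2.0], [0.0, 1.0]   # conflicting gradient directions
assert dot(agem_update(g, g_ref), g_ref) >= -1e-12
assert dot(a_a_gem_update(g, g_ref), g_ref) >= -1e-12
```

A soft variant in the spirit of $\epsilon$-SOFT-GEM would interpolate between these behaviors with the constraint $\epsilon$; the exact update is given in Sec.~\ref{softgem}.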
Our experiments demonstrate that $\\epsilon$-SOFT-GEM achieves better performance than A-GEM with almost the same efficiency in terms of computation and memory; meanwhile, $\\epsilon$-SOFT-GEM outperforms other common continual learning benchmarks in a single training epoch.\n\n\\section{Related Work}\n\\label{Related Works}\nThe term catastrophic forgetting was first introduced by \\cite{Mccloskey1989CatastrophicII}, who claimed that catastrophic forgetting is a fundamental limitation of neural networks and a downside of their high generalization ability.\nThe cause of catastrophic forgetting is that ANNs are based on concurrent learning, where the whole population of training samples is presented and trained as a single, complete entity; therefore, alterations to the parameters of ANNs using back-propagation lead to catastrophic forgetting when training on new samples.\n\nSeveral works have described\nthe destructive consequences of catastrophic forgetting in\nsequential learning and provided a few primitive solutions, such as employing experience replay with all previous data or subsets of previous data \\cite{Robins1993Catastrophic,Robins1995Catastrophic}.\n\nOther works focus on training individual models or sharing structures to alleviate catastrophic forgetting.\nA progressive neural network (PROGNN) \\cite{Rusu2016ProgressiveNN} has been proposed to train individual models on each task, retain a pool of pretrained models and learn lateral connections from these to extract useful features for new tasks; this eliminates forgetting altogether but requires growing the network after each task and can cause the architecture complexity to increase with the number of tasks.\nDEN \\cite{Yoon2018Lifelong} can learn a compact overlapping knowledge-sharing structure among tasks.\nPROGNN and DEN require the number of parameters to be constantly increased and thus lead to a huge and complex model.\n\nMany works focus on optimizing network parameters on new tasks 
while minimizing alterations to the consolidated weights on previous tasks.
It has been suggested that regularization methods, such as dropout \cite{Hinton2012Improving}, L2 regularization and suitable activation functions \cite{Glorot2011Deep,Ian2013Maxout}, help to reduce forgetting of previous tasks \cite{Goodfellow2014AnEI}.
Furthermore, EWC \cite{Kirkpatrick2017OvercomingCF} uses a Fisher-information-matrix-based regularizer to slow down learning on network weights that correlate with previously acquired knowledge.
PI \cite{Zenke2017ContinualLT} employs path integrals of loss derivatives to slow down learning on weights that are important for previous tasks.
MAS \cite{Aljundi2018MemoryAS} accumulates an importance measure for each network parameter and penalizes changes to the important ones.
RWALK \cite{Chaudhry2018RiemannianWF} introduces a distance in the Riemannian manifold as a means of regularization.
Regularization methods thus resist performance degradation on previous tasks but learn new tasks slowly.

Episodic memory can store previously seen samples and replay them: iCaRL \cite{Rebuffi2017iCaRLIC} replays the samples in episodic memory, while GEM \cite{LopezPaz2017GradientEM} and A-GEM \cite{Chaudhry2018EfficientLL} use episodic memory to constrain future gradient updates.
However, choosing samples from previous tasks is challenging, since it requires knowing how many samples to store and how well the samples represent the tasks.
The experience replay strategies surveyed in \cite{Chaudhry2019Continual} include reservoir sampling \cite{Riemer2019Learning}, a ring buffer \cite{LopezPaz2017GradientEM}, k-means, and the mean of features \cite{Rebuffi2017iCaRLIC}.

\section{Gradient Episodic Memory with a Soft Constraint}
\label{softgem}
For $\epsilon$-SOFT-GEM, the learning protocol is that described in \cite{Chaudhry2018EfficientLL}, and the sequential learning task is divided into two ordered sequential streams $D^{CV} = \{D_{1}, ..., D_{T^{CV}}\}$ 
and $D^{EV}=\\{D_{T^{CV}+1}, ..., D_{T}\\}$, where $D_{k}=\\{(\\mathbf{x}_{i}^{k}, t_{i}^{k}, y_{i}^{k})_{i=1}^{n_{k}}\\}$ is the dataset of the $k$-th task, $T^{CV} t$ when the model is learning task $t$. A positive $FWT$ shows that the model can perform ``zero-shot'' learning.\n\\end{enumerate}\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[Permuted MNIST($A_{T}$)]{\\label{fig:fig_agem_pbt_split_cifar_avg_acc_mnist}\\includegraphics[width=0.48\\linewidth]{results\/agem_pbt_mnist_avg_acc_ax.pdf}}\n\t\\subfloat[Permuted MNIST($F_{T}$)]{\\label{fig:fig_agem_pbt_split_cifar_fgt_mnist}\\includegraphics[width=0.48\\linewidth]{results\/agem_pbt_mnist_fgt_ax.pdf}} \\\\\n\t\\caption{$\\epsilon$-SOFT-GEM on Permuted MNIST in 5 training repeats, where the models are trained over 5 runs in a single training repeat.}\n\t\\label{fig:fig_agem_pbt_permuted_mnist_split_cifar_avg_acc_fgt_mnist}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\subfloat[Split CIFAR($A_{T}$)]{\\label{fig:fig_agem_pbt_split_cifar_avg_acc_cifar}\\includegraphics[width=0.48\\linewidth]{results\/agem_pbt_split_cifar_avg_acc_ax.pdf}}\n\t\\subfloat[Split CIFAR($F_{T}$)]{\\label{fig:fig_agem_pbt_split_cifar_fgt_cifar}\\includegraphics[width=0.48\\linewidth]{results\/agem_pbt_split_cifar_fgt_ax.pdf}}\n\t\\caption{$\\epsilon$-SOFT-GEM on Split CIFAR in 5 training repeats, where the models are trained over 5 runs in a single training repeat.}\n\t\\label{fig:fig_agem_pbt_permuted_mnist_split_cifar_avg_acc_fgt_cifar}\n\\end{figure}\n\n\\subsection{Comparison with baselines}\nIn this section, we show the applicability of $\\epsilon$-SOFT-GEM and A-A-GEM on sequential learning tasks. 
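The comparisons that follow are phrased in terms of the accuracy-matrix metrics $A_{T}$, $F_{T}$, $BWT$ and $FWT$. Their definitions are only partially retained in this excerpt, so the sketch below follows the standard GEM-style forms from the literature (naming ours), assuming $R[i, j]$ is the test accuracy on task $j$ after training on task $i$ and $b[j]$ is the accuracy of a randomly initialized model on task $j$:

```python
import numpy as np

def continual_metrics(R, b=None):
    """Standard GEM-style continual learning metrics from an accuracy
    matrix R (0-indexed): R[i, j] = test accuracy on task j after
    training on task i.  `b` (random-init accuracies) is only needed
    for forward transfer."""
    T = R.shape[0]
    A_T = R[-1].mean()                                   # final average accuracy
    BWT = np.mean(R[-1, :-1] - np.diag(R)[:-1])          # backward transfer
    F_T = np.mean(R[:-1, :-1].max(axis=0) - R[-1, :-1])  # forgetting
    out = {"A_T": A_T, "BWT": BWT, "F_T": F_T}
    if b is not None:
        # accuracy on task j just before it is trained, minus random baseline
        out["FWT"] = np.mean(R[np.arange(T - 1), np.arange(1, T)] - b[1:])
    return out
```

A positive $BWT$ under these definitions corresponds exactly to the behavior claimed below for $\epsilon$-SOFT-GEM and A-A-GEM: accuracy on old tasks improves while new tasks are learned.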
The details of the results are shown in \textbf{Tables} \ref{tab:dataset_mnist_cifar_statistics_permuted_mnist}, \ref{tab:dataset_mnist_cifar_statistics_cifar}, \ref{tab:dataset_cub_statistics_tab_ohot}, \ref{tab:dataset_cub_statistics_tab_je}, \ref{tab:dataset_awa_statistics_tab_ohot} and \ref{tab:dataset_awa_statistics_tab_je} in \textbf{Appendix} \ref{RESULTS}.

First, \textbf{Figure} \ref{fig:fig_methods_avg_acc_forget_permuted_mnist_split_cifar_split_cub_split_cub_je}
shows that $\epsilon$-SOFT-GEM outperforms the other models on Permuted MNIST, Split CIFAR, Split CUB, Split CUB(-JE), Split AWA and Split AWA(-JE), except for PROGNN, which achieves slightly better performance than $\epsilon$-SOFT-GEM on Permuted MNIST.
The reason is that PROGNN trains an individual model on each task before starting a new training stage on the next task, so it can preserve all the information it learned on previous tasks.
Moreover, as the snapshot of the dataset statistics in \textbf{Table} \ref{dataset_statistics_tab} in \textbf{Appendix} \ref{data_static} suggests, PROGNN achieves better performance on datasets with large-scale training samples and lower performance on smaller ones.
However, PROGNN has the worst memory footprint: the number of parameters grows superlinearly with the number of tasks, and the model runs out of memory during training.
PROGNN therefore cannot be trained on Split CUB and Split AWA, which use the standard ResNet18 listed in \textbf{Table} \ref{tab:setting_architecture_datasets} in \textbf{Appendix} \ref{architecture}, and it is consequently absent from \textbf{Figures}
\ref{fig:fig_methods_avg_acc_forget_split_cub}, \ref{fig:fig_methods_avg_acc_forget_split_cub_je}, \ref{fig:fig_methods_avg_acc_forget_split_awa} and \ref{fig:fig_methods_avg_acc_forget_split_awa_je}.

Second, $\epsilon$-SOFT-GEM achieves better $a_{1}$, $a_{t}$, $A_{T}$ and $F_{T}$ values 
than the other baselines shown in \textbf{Tables} \ref{tab:dataset_mnist_cifar_statistics_permuted_mnist}, \ref{tab:dataset_mnist_cifar_statistics_cifar}, \ref{tab:dataset_cub_statistics_tab_ohot}, \ref{tab:dataset_cub_statistics_tab_je}, \ref{tab:dataset_awa_statistics_tab_ohot} and \ref{tab:dataset_awa_statistics_tab_je},
which means that $\epsilon$-SOFT-GEM maintains its performance on previous tasks when learning new tasks.
$LCA_{10}$ in \textbf{Figure} \ref{fig:fig_methods_avg_acc_forget_permuted_mnist_split_cifar_split_cub_split_cub_je}
shows that $\epsilon$-SOFT-GEM has a competitive capacity to learn new knowledge quickly, even compared with A-GEM.

Third, the $BWT$ values of $\epsilon$-SOFT-GEM and A-A-GEM are positive on Split CUB and Split AWA; \textbf{Figure} \ref{fig:fig_methods_avg_acc_forget_permuted_mnist_split_cifar_split_cub_split_cub_je} shows that $\epsilon$-SOFT-GEM and A-A-GEM can learn new knowledge of previous tasks and thereby increase the model's performance on them, whereas the other baselines have a negative $BWT$ on the four sequential learning tasks. The $FWT$ values indicate that $\epsilon$-SOFT-GEM has a competitive ability to perform ``zero-shot'' learning.

Fourth, \textbf{Figure} \ref{fig:fig_methods_avg_acc_forget_permuted_mnist_split_cifar_split_cub_split_cub_je}
shows that A-A-GEM performs better than all models except $\epsilon$-SOFT-GEM while being much simpler, since it requires no $\epsilon$ to be specified.

Finally, we conclude that $\epsilon$-SOFT-GEM balances preserving the knowledge of old tasks, via the soft constraint $\epsilon$, against learning new tasks with a fast learning curve.

\subsection{Exploration of $\epsilon$}

In this work, we use a simple heuristic algorithm to explore $\epsilon$. 
Each row in \textbf{Figures}
\ref{fig:fig_agem_pbt_permuted_mnist_split_cifar_avg_acc_fgt_mnist} and \ref{fig:fig_agem_pbt_permuted_mnist_split_cifar_avg_acc_fgt_cifar}
is a population trained with a specific set of $\epsilon$ values, and there are 5 training repeats in the whole experiment.

In the first repeat, we divide $\epsilon \in [0, 1]$ into $N$ equal parts with an interval $\delta$; for example, $N=11, \delta=0.1$ for Permuted MNIST, and $N=7, \delta=0.16667$ for Split CIFAR.

After the $j$-th training repeat, we choose the $\epsilon_{j,1}$ that yields the best $A_{j,T}$, the $\epsilon_{j,2}$ that yields the second-best $A_{j,T}$ and an interval $\delta_{j}$, where $A_{j, T}$ is $A_{T}$ after the $j$-th training repeat and $\delta_{j}$ is the grid interval of the $j$-th training repeat.
Meanwhile, we define $\epsilon_{j}[1]$ and $\epsilon_{j}[N]$ as the smallest and largest values of the $j$-th grid, respectively. We assume $\epsilon_{j, 1} \le \epsilon_{j, 2}$, and the update rule for $\epsilon_{j+1}$ is:
\setlength{\arraycolsep}{0.0em}
\begin{equation}
\footnotesize
\epsilon_{j+1} \in
\left\{
\begin{array}{l}
\left[0, 1\right], \quad j=0; \\
\mathrm{stop}, \quad \left(\epsilon_{j, 1}=\epsilon_{j}[1] \ \mathrm{and} \ \epsilon_{j, 2} = \epsilon_{j}[N] \ \mathrm{and} \ j > 0\right) \ \mathrm{or} \ j > M; \\
\left[\epsilon_{j,1}, \epsilon_{j,2}+\delta_{j}\right], \quad \epsilon_{j,1}=\epsilon_{j}[1] \ \mathrm{and} \ 0 < j \le M; \\
\left[\epsilon_{j, 1} - \delta_{j}, \epsilon_{j,2}\right], \quad \epsilon_{j,2} = \epsilon_{j}[N] \ \mathrm{and} \ 0 < j \le M; \\
\left[\epsilon_{j, 1} - \delta_{j}, \epsilon_{j, 2} + \delta_{j}\right], \quad \mathrm{otherwise};
\end{array}
\right.
\end{equation}
where $M$ is the number of training repeats. 
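The interval-narrowing rule above can be transcribed almost line by line; a sketch of one step (function name ours; tie-breaking among equal scores and the construction of the next uniform grid are left implicit, and a uniform grid spacing is assumed):

```python
def next_interval(eps_grid, scores, j, M):
    """One step of the heuristic epsilon search described in the text:
    keep the two best-scoring epsilon values (by final accuracy A_{j,T})
    and widen the bracket by one grid interval delta_j on any side that
    did not touch the grid boundary.  Returns the next (lo, hi) bracket
    or None when the stop condition is met."""
    if j == 0:
        return (0.0, 1.0)                       # first repeat: full range
    delta = eps_grid[1] - eps_grid[0]           # assumes a uniform grid
    order = sorted(range(len(eps_grid)), key=lambda i: scores[i], reverse=True)
    e1, e2 = sorted((eps_grid[order[0]], eps_grid[order[1]]))
    lo, hi = eps_grid[0], eps_grid[-1]
    if j > M or (e1 == lo and e2 == hi):
        return None                             # stop
    if e1 == lo:
        return (e1, e2 + delta)                 # extend only upward
    if e2 == hi:
        return (e1 - delta, e2)                 # extend only downward
    return (e1 - delta, e2 + delta)             # extend both ways
```

The bracket never leaves $[0, 1]$, because a best value on the grid boundary suppresses extension past that boundary.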
$\epsilon$-SOFT-GEM with $\epsilon=0.0$ is equivalent to A-GEM with the original $g$ and $g_{ref}$, not $\hat{g}$ and $\hat{g}_{ref}$.

The soft constraint $\epsilon$ adjusts the trade-off between the model's capacity to learn new tasks and its ability to preserve old ones.
Following the update rule above, we repeat the process 5 times to explore $\epsilon$.
First, the top row in \textbf{Figures} \ref{fig:fig_agem_pbt_permuted_mnist_split_cifar_avg_acc_fgt_mnist} and \ref{fig:fig_agem_pbt_permuted_mnist_split_cifar_avg_acc_fgt_cifar}
shows that $\epsilon$-SOFT-GEM with a suitably specified $\epsilon$ outperforms A-GEM, e.g., $\epsilon=0.1$ on Permuted MNIST and all training populations with a specified $\epsilon$ on Split CIFAR. Second, $A_{T}$ in every repeat is approximately a parabolic function of $\epsilon$; therefore, the heuristic optimization algorithm for exploring the best $\epsilon$ is effective. Finally, after 5 training repeats we find $\epsilon=0.07185$ for Permuted MNIST and $\epsilon=0.5$ for Split CIFAR.

\begin{figure}
	\centering
	\subfloat[Permuted MNIST]{\includegraphics[width=0.48\linewidth]{results/memory_permuted_mnist_avg_acc_forgetting_ax.pdf}}
	\subfloat[Split CIFAR]{\includegraphics[width=0.48\linewidth]{results/memory_split_cifar_avg_acc_forgetting_ax.pdf}} \\
	\subfloat{\includegraphics[width=0.48\linewidth]{results/memory_split_cifar_avg_acc_multi_legend.pdf}} \\
	\caption{$A_{T}$, $a_{1}$, $a_{t}$ and $F_{T}$ on Permuted MNIST and Split CIFAR with varying episodic memory size in a single training epoch; the models are trained over 5 runs. 
The details are shown in \textbf{Tables} \ref{tab:dataset_permuted_mnist_episodic_memory_statistics_mnist} and \ref{tab:dataset_permuted_mnist_episodic_memory_statistics_cifar} in \textbf{Appendix} \ref{RESULTS}.}
	\label{fig:fig_avg_acc_mem_permuted_mnist_split_cifar}
\end{figure}

\subsection{Episodic memory}
The conventional solution to catastrophic forgetting is to learn a new task alongside preserved old samples; episodic memory, which stores these old samples, therefore plays a significant role in A-GEM, A-A-GEM and $\epsilon$-SOFT-GEM.

We thus run experiments on A-GEM, A-A-GEM and $\epsilon$-SOFT-GEM with varying episodic memory sizes.
$A_{T}$, $a_{1}$, $a_{t}$ and $F_{T}$ on Permuted MNIST and Split CIFAR are shown in \textbf{Figure} \ref{fig:fig_avg_acc_mem_permuted_mnist_split_cifar}; $\epsilon$-SOFT-GEM outperforms A-GEM and A-A-GEM in $A_{T}$ and $F_{T}$. The reasons are as follows: (\romannumeral1) the larger the episodic memory, the more old information can be preserved; $g_{ref}$ then represents the actual gradient of the old tasks more accurately, and $\epsilon$-SOFT-GEM preserves more of the old information, as illustrated by the increasing $a_{1}$; (\romannumeral2) A-GEM, A-A-GEM and $\epsilon$-SOFT-GEM learn new tasks with competitive accuracy within a single training epoch but with a relatively slow learning curve, as illustrated by the decreasing $a_{t}$.

\subsection{Efficiency}
From the gradient update rule, $\epsilon$-SOFT-GEM and A-A-GEM require only one more gradient normalization operation than A-GEM, so we can deduce that $\epsilon$-SOFT-GEM and A-A-GEM incur almost the same computation and memory costs as A-GEM.

\section{CONCLUSION}
In the real world, humans learn and accumulate knowledge throughout their whole lives, but ANNs that learn sequential tasks suffer from catastrophic forgetting, in which previously learned knowledge is disrupted while a new task is being learned.
To 
alleviate catastrophic forgetting, we propose a variant of A-GEM with a soft constraint $\epsilon$, called $\epsilon$-SOFT-GEM, as well as A-A-GEM. The experiments demonstrate that $\epsilon$-SOFT-GEM achieves competitive performance against state-of-the-art models.
First, compared to regularization-based approaches, $\epsilon$-SOFT-GEM achieves significantly higher average accuracy and lower forgetting; additionally, it maintains a fast learning curve and can acquire new knowledge of the previous tasks represented by episodic memory while learning new tasks.
Second, $\epsilon$-SOFT-GEM has almost the same computation and memory cost as A-GEM. Third, A-A-GEM performs better than all other models except $\epsilon$-SOFT-GEM while being much simpler, since it requires no $\epsilon$ to be specified.

\section*{Acknowledgement}
This work is supported by the National Natural Science Foundation of China (No. 91630206), the Shanghai Science and Technology Committee (No. 16DZ2293600) and the Program of Shanghai Municipal Education Commission (No. 2019-01-07-00-09-E00018).

\bibliographystyle{unsrt}

\section{\label{sec:results}Results}
The above methodology to prepare and anneal model CSAs indicates that aged NiCoCr alloys may exhibit varying degrees of chemical ordering under different annealing temperatures.
Using robust composition-based metrics in Sec.~\ref{sec:sro_temp}, we further confirm the formation of SROs and their strong dependence on thermal processing.
Section~\ref{sec:LatticeDistortions} presents the misfit-volume analysis, indicating meaningful correlations with the degree of short-range ordering. 
\nIn Sec.~\\ref{sec:sro_disl}, we analyze dynamics of partial dislocations in the presence of SROs and discuss potential implications in terms of the SRO-induced CSA strengthening.\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/scatter3d_2.png}\n %\n \\LabelFig{6}{82}{$a)~r_c=5~$\\r{A}}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/scatter3d_1.png}\n \\LabelFig{6}{82}{$b)~r_c=10~$\\r{A}}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/scatter3d_0.png}\n \\LabelFig{6}{82}{$c)~r_c=18~$\\r{A}}\n \\end{overpic}\n %\n \\caption{Local concentration fluctuations $p_b$ with $b$ being Ni, Co, and Cr in an annealed CSA (based on the Li-Sheng-Ma~ potential) at $T_a=400$ K and multiple lengthscales \\textbf{a)} $r_c=5~$\\r{A} \\textbf{b)} $r_c=10~$\\r{A} \\textbf{c)} $r_c=18~$\\r{A}. \\add[KK]{We note that at low distance scales $r_c=5$ and $10$ \\r{A}, the segregation of Ni and Co\/Cr domains is very strong.} The black scatter points represent the two dimensional projections. The red dot denotes the equimolar concentration. \\note{PS: comparing fluctuations for large radii -- much larger than the size of segregation does not give much physical information. decrease of the fluctuations with increasing radius is pure statistics. Consider three panels with $r_c$ equal to 5 and 10 and 18 \\AA} \\note[KK]{done.} \\note[PS]{Great. 
But I would add a comment in the caption signalling the meaining: that at low distance scales $r_c$ of 10, 5 \\AA the segregation of Ni and Co\/Cr domains is very strong.}} \n \\label{fig:concentrationFluctuations}\n\\end{figure*}\n\n\n\n\\begin{figure}[b]\n \\begin{overpic}[width=0.23\\textwidth]{Figs\/varSro.png}\n %\n \\LabelFig{17}{15}{$a)$\\scriptsize annealed}\n \\Labelxy{50}{-3}{0}{$r_c$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$p^\\text{rms}_b$}\n \n \\begin{tikzpicture}\n \\coordinate (a) at (0,0);\n \\node[white] at (a) {\\tiny.}; \n \\drawSlope{1.3}{1.5}{0.35}{50}{red}{\\frac{3}{2}}\n \t\t\\end{tikzpicture}\n \t\\end{overpic}\n %\n \\begin{overpic}[width=0.23\\textwidth]{Figs\/varSroRescaled.png}\n %\n \\Labelxy{-6}{40}{90}{$r_c^{\\frac{3}{2}}p^\\text{rms}_b$}\n \\Labelxy{50}{-3}{0}{$r_c$(\\r{A})}\n \\LabelFig{17}{15}{$c)$}\n %\n \t\\end{overpic}\n %\n \\begin{overpic}[width=0.23\\textwidth]{Figs\/varRsa.png}\n \\LabelFig{17}{15}{$b)$\\scriptsize RSA}\n \\Labelxy{-6}{40}{90}{$p^\\text{rms}_b$}\n \\Labelxy{50}{-3}{0}{$r_c$(\\r{A})}\n \\begin{tikzpicture}\n \\coordinate (a) at (0,0);\n \\node[white] at (a) {\\tiny.}; \n \\drawSlope{1.3}{1.5}{0.35}{50}{red}{\\frac{3}{2}}\n \t\t\\end{tikzpicture} \n \t\\end{overpic}\n %\n \\begin{overpic}[width=0.23\\textwidth]{Figs\/varRssRescaled.png}\n \\Labelxy{-6}{40}{90}{$r_c^{\\frac{3}{2}}p^\\text{rms}_b$}\n \\Labelxy{50}{-3}{0}{$r_c$(\\r{A})}\n \\LabelFig{17}{15}{$d)$}\n %\n \t\\end{overpic}\n %\n \\caption{Root-mean-squared (rms) fluctuations $p^\\text{rms}_b$ of local Ni, Co, and Cr concentrations as a function of window size $r_c$ in \\textbf{a)} annealed NiCoCr CSAs at $T_a=400$ K \\textbf{b)} NiCoCr RSAs at $T=400$ K based on the Li-Sheng-Ma~ potential. The panels in \\textbf{c)} and \\textbf{d)} are the same as \\textbf{a)} and \\textbf{b)} but with the $y$-axis rescaled by $r_c^{-\\frac{3}{2}}$. 
The curves are shifted for the sake of clarity.}
	\label{fig:std_concentrationFluctuations}
\end{figure}

\subsection{Temperature dependence of SROs}\label{sec:sro_temp}
Figure~\ref{fig:thermo} illustrates that the formation of SROs within the solid solutions may strongly depend on the annealing temperature $T_a$.
The color map of atoms in the inset of Fig.~\ref{fig:thermo}(a) indicates a clear segregation of Ni-rich phases (gray circles) in the NiCoCr CSA modeled via the Li-Sheng-Ma~ potential at $T_a=800$ K.
Overall, the observed clustering features tend to become less pronounced at higher annealing temperatures, as shown in Fig.~\ref{fig:Annealed}.

The presence of SROs appears to bear directly on the (constant-pressure) heat capacity $C_p=\partial_T H$ in Fig.~\ref{fig:thermo}(a), which features a maximum around $T_a\simeq 800$~K.
Here $H$ denotes the enthalpy \footnote{We only report the (excess) configurational heat capacity (and thermal expansion coefficient), neglecting (ideal) kinetic contributions.}.
The data presented in Fig.~\ref{fig:thermo} correspond to a sample equilibrated at $T=300$ K and subsequently aged at multiple annealing temperatures.
The emerging peak in $C_p$ may suggest a dominant role of enthalpic interactions over entropic effects, ruling out the formation of an ideal random solid solution \cite{Li2019}. 
\nWe note that the (ideal) heat capacity associated with the latter rises monotonically within the temperature range of interest.\nSuch deviations seem to be less pronounced in terms of the thermal expansion coefficient $\\alpha_p=\\frac{1}{V}\\partial_T V$, with $V$ being the system volume, as shown in Fig.~\\ref{fig:thermo}(b).\n\n\n\nThe notion of SROs typically refers to coherent compositional deviations from (statistically) random distributions of atoms within the solution matrix.\nAlong these lines, we investigated the Warren\u2013Cowley SRO parameters $p_{ab}(r)$ \\cite{wolverton2000short}\nprobing the concentration variations of type-$b$ atoms at a distance $r$ from a center type-$a$ element. \nFor an equi-molar random NiCoCr~ solid solution, one obtains $p^{\\text{rsa}}_{ab}=0.33$ (on average) at any $r$ ---more precisely, between the successive valleys of the pair correlation function $g(r)$ as in Fig.~\\ref{fig:sroSheng}(g).\nThe SRO parameters could be also determined locally for individual atoms which will presumably show strong fluctuations in the presence of SROs.\nNevertheless, the ``averaged\" Warren\u2013Cowley parameters should be relevant as the system tends to be statistically homogeneous beyond the mean SRO size.\n\n\n\nFigure~\\ref{fig:sroSheng}(a-f) illustrates deviations of $p_{ab}$ associated with the annealed CSAs from $p_{ab}^{\\text{rsa}}$ including the six (distinct) elemental pairs at $T_a=400$ K.\nThe order parameters reveal several interesting features describing the SRO microstructure.\nThe abundance of the Ni-Ni pairs beyond random concentrations is remarkable and appears to persist up to $r\\simeq 5$~\\r{A} in Fig.~\\ref{fig:sroSheng}(a).\nIt should be noted that twice this lengthscale is in a rough agreement with the visual impression one gets from the segregation map, the inset of Fig.~\\ref{fig:thermo}(a), regarding the mean SRO size.\nBelow the base line, the dip in $p_{ab}$ corresponding to Ni-Ni pairs should indicate their 
scarcity above the mean size.
The order parameter re-crosses the horizontal line, beyond which it features a fairly broad peak at $r\simeq 15$~\r{A} before decaying asymptotically to $p_{ab}^{\text{rsa}}$.
We remark that the inferred lengthscale could potentially relate to the average spacing between adjacent SROs.
Figures~\ref{fig:sroSheng}(b-c), associated with $p_\text{NiCo}$ and $p_\text{NiCr}$, feature essentially the same behavior but with trends opposite to those of $p_\text{NiNi}$, since the three must add up to unity.

As opposed to Ni-Ni ordering, we observe less coherent patterns for the identical (same-element) pairs Co-Co and Cr-Cr in Fig.~\ref{fig:sroSheng}(d) and (f).
In particular, $p_\text{CoCo}$ and $p_\text{CrCr}$ seem to indicate ordering as well as anti-ordering (potentially due to repulsion) at the first and next-nearest-neighbor distances.
The strong Co-Cr bonding at the first nearest neighbor in Fig.~\ref{fig:sroSheng}(e) is also remarkable (see also \cite{Li2019,jian2020effects,yang2022chemical}).
We further note that, unlike $p_{ab}$, the pair correlation function $g(r)$ does not suggest any \emph{structural} differences between annealed and random solid solutions, as shown in Fig.~\ref{fig:sroSheng}(g).

\begin{figure}[t]
    \centering
    \begin{overpic}[width=0.32\textwidth]{Figs/pdf_pNi_T.png}
    %
    \Labelxy{68}{71}{0}{$\scriptstyle T_a$\scriptsize (K)}
    \Labelxy{50}{-3}{0}{$p_\text{Ni}$}
    \Labelxy{-6}{40}{90}{PDF}
    \end{overpic}
    %
    \caption{Probability distribution functions of the local concentrations $p_\text{Ni}$ in an annealed CSA based on the Li-Sheng-Ma~ potential at the window size $r_c=10$ \r{A} and various $T_a$.}
    \label{fig:pdf_concentrationFluctuations}
\end{figure}

Figure~\ref{fig:sroTemp} quantifies the abundance of Ni-Ni elemental pairs upon annealing at multiple temperatures between $T_a=400-1400$ K.
As shown in Fig.~\ref{fig:sroTemp}(a), the curves show quite complex nonmonotonic features with certain characteristic lengthscales that seem to scale non-trivially with temperature.
Nevertheless, the SRO-related features in $p_\text{NiNi}-p^\text{rsa}_\text{NiNi}$ become less and less pronounced with increasing $T_a$, continually approaching their asymptote at the zero-valued base line.
This is quantified by the metric $p^\text{rms}_\text{NiNi}=\langle (p_\text{NiNi}-p^\text{rsa}_\text{NiNi})^2\rangle^{\frac{1}{2}}$ as a root-mean-squared (rms) measure of deviations from RSAs.
Figure~\ref{fig:sroTemp}(b) features a monotonic growth of $p^\text{rms}_\text{NiNi}$ upon decreasing $T_a$.
We also note the tendency of $p^\text{rms}_\text{NiNi}$ to saturate at $T_a\simeq 1400$ K or above, potentially due to residual SRO distributions at atomistic levels that 
preclude an ideal RSA formation.
The above analysis was repeated for NiCoCr alloys simulated with the Farkas-Caro~ potential.
Interestingly, we found no clear signature of clustering in these samples, as opposed to those generated via the Li-Sheng-Ma~ potential (see Fig.~\ref{fig:sroFarkas}).

We probed fluctuations in the \emph{local} concentrations of the constituent elements in annealed NiCoCr alloys, which, in the presence of SROs, should deviate from those of random solid solution alloys.
In this context, the entire space was partitioned into sub-volumes of varying size $r_c$, and the local molar compositions $p_\text{Ni}$, $p_\text{Co}$, and $p_\text{Cr}$ were determined within each cube.
As illustrated in the scatter plots of Fig.~\ref{fig:concentrationFluctuations}, associated with NiCoCr CSAs annealed at $T_a=400$ K, the fluctuations tend to self-average at larger $r_c$, which can also be understood in terms of counting statistics.

Figure~\ref{fig:std_concentrationFluctuations} quantifies the rms fluctuations of the local concentrations $p_\text{Ni}$, $p_\text{Co}$, and $p_\text{Cr}$ and their decay with window size $r_c$; in the presence of SROs, these show marked deviations from those of RSAs.
In Fig.~\ref{fig:std_concentrationFluctuations}(a) and (c), the rms fluctuations in annealed NiCoCr CSAs seem to self-average, but only above a certain lengthscale, beyond which the decay is well predicted by the expected $r_c^{-3/2}$ power law.
The latter is the relevant scaling in purely random atomic configurations, as illustrated in Fig.~\ref{fig:std_concentrationFluctuations}(b) and (d).
We take the characteristic peak in Fig.~\ref{fig:std_concentrationFluctuations}(c) as a signature of spatial correlations, whose position may be interpreted as the average SRO size $\xi^\text{sro}\simeq 10$ \r{A}. 
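The $r_c^{-3/2}$ baseline invoked above is pure counting statistics: a window of linear size $r_c$ contains $N \propto r_c^{3}$ atoms, and the standard error of a concentration estimated from $N$ independent atoms scales as $N^{-1/2}$. A quick numerical check of this statement (the number density used here is an assumed, illustrative value, not taken from the simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 1.0 / 3.0    # equimolar Ni fraction
rho = 0.09       # assumed number density (atoms per cubic Angstrom), illustrative

def rms_fluctuation(r_c, n_windows=20000):
    """rms fluctuation of the local Ni fraction in windows of linear size
    r_c for a purely random (RSA-like) occupation: each window holds
    ~ rho * r_c^3 atoms, each Ni with probability p, independently."""
    n_atoms = max(1, int(rho * r_c**3))                 # atoms per window
    counts = rng.binomial(n_atoms, p, size=n_windows)   # Ni atoms per window
    return (counts / n_atoms).std()

# r_c^{3/2} * p_rms should be roughly constant across window sizes,
# which is the flattening seen in the rescaled panels of the figure
vals = [r**1.5 * rms_fluctuation(r) for r in (5.0, 10.0, 18.0)]
```

For an annealed sample, excess fluctuations above this baseline below $\xi^\text{sro}$ are precisely the SRO signature discussed in the text.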
\nFurthermore, the inferred lengthscale closely agrees with the one extracted from the SRO order parameters in the preceding paragraphs which is within the typical range of SRO size ($0.5-1.5$ nm) seen experimentally \\cite{zhang2020short,wang2022chemical}.\n\\add[KK]{Figure~\\ref{fig:pdf_concentrationFluctuations} illustrates the probability distribution functions (PDFs) of the local Ni concentrations $p_\\text{Ni}$ at $r_c=10$ \\r{A} for different annealing temperatures. \nWe note the marked deviation of the low-$T_a$ PDFs from a standard Gaussian distribution which, otherwise, seems to be the asymptotic limit for the $p_\\text{Ni}$ distributions at higher annealing temperatures.\nOne could also observe a marked abundance of low and high (local) Ni concentrations away from the average at $p_\\text{Ni}=0.33$ for $T_a=400$ and $600$ K which is indicative of the strong segregation of Ni phases.}\n\n\\begin{figure}[b]\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/hist_vorVolumeTemp600.png}\n \\LabelFig{19}{89}{$a)$ annealed}\n \\Labelxy{50}{-4}{0}{$V_b$(\\r{A}$^3$)}\n \\Labelxy{-6}{40}{90}{PDF}\n \\end{overpic}\n \n \\begin{overpic}[width=0.24\\textwidth]{Figs\/hist_vorVolumeTemp5Rss.png}\n \\LabelFig{19}{89}{$b)$ RSA}\n \\Labelxy{50}{-4}{0}{$V_b$(\\r{A}$^3$)}\n \\Labelxy{-6}{40}{90}{PDF}\n \\end{overpic}\n \\caption{Probability distributions of alloy atomic (Voronoi) volumes $V_b$ with $b$ denoting the Ni, Co, and Cr atoms in \\textbf{a)} the NiCoCr~ alloy aged at $T_a=600$ K and \\textbf{b)} NiCoCr~ RSA. The volume measurements were carried out at $5$ K.\n \\note{PS: graphics: please make the figure graphically consistent, all Y axis descriptions on one side. 
}\\note[KK]{done.}} \n \\label{fig:voronoi}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/voronoiAverageTempSro.png}\n \\LabelFig{19}{76}{$a)$}\n \\Labelxy{50}{-4}{0}{$T_a$(K)}\n \\Labelxy{83}{4}{0}{\\scriptsize RSA}\n \\Labelxy{-8}{40}{90}{$\\langle V_b \\rangle$}\n \\end{overpic}\n \n \\begin{overpic}[width=0.24\\textwidth]{Figs\/voronoiStdTempSro.png}\n \\LabelFig{19}{76}{$b)$}\n \\Labelxy{50}{-4}{0}{$T_a$(K)}\n \\Labelxy{83}{4}{0}{\\scriptsize RSA}\n \\Labelxy{-12}{40}{90}{$\\text{var}^{\\frac{1}{2}}(V_b)\/\\langle V_b \\rangle$}\n \\end{overpic}\n \\caption{Dependence of \\textbf{a)} average Voronoi volume $\\langle V_b \\rangle$ and \\textbf{b)} rms volume fluctuations scaled by the average value $\\text{var}^{\\frac{1}{2}}(V_b)\/\\langle V_b \\rangle$ on the annealing temperature $T_a$. The empty symbols in correspond to a NiCoCr~ RSA.} \n \\label{fig:voronoiRMS}\n\\end{figure}\n\n\\begin{figure}[b]\n \\centering\n \\begin{overpic}[width=0.5\\textwidth]{Figs\/DislAgingT600K.png}\n %\n \\LabelFig{16}{8}{$a)~y=90$ \\scriptsize \\r{A}}\n \\Labelxy{50}{-1}{0}{$x$(\\r{A})}\n \\Labelxy{4}{20}{90}{$z$(\\r{A})}\n \\end{overpic}\n\n \\begin{overpic}[width=.5\\textwidth]{Figs\/DislAgingT600Kystack.png}\n \\LabelFig{16}{8}{$b)~y=10$ \\scriptsize \\r{A}}\n \\Labelxy{50}{-1}{0}{$x$(\\r{A})}\n \\Labelxy{4}{20}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/wc_aging_dislocated_index0.png}\n \\Labelxy{46}{-5}{0}{$r$(\\r{A})}\n \\Labelxy{-5}{36}{90}{$p_\\text{NiNi}$}\n \\LabelFig{16}{14}{$c)$}\n \\LabelFigg{52}{60}{\\tiny Stacking fault}\n %\n \\end{overpic}\n\n \\caption{SRO microstructure in the presence of (partial) dislocations in aging NiCoCr at $T_a=600$ K. \\textbf{a}) cross section containing the stacking fault \\textbf{b}) cross section at $y=10$~\\r{A} \\textbf{c}) the SRO parameter $p_\\text{NiNi}$ as a function of pairwise distance $r$. 
The different curves in \textbf{c}) correspond to the entire configuration as well as the two-dimensional stacks depicted in \textbf{a}) and \textbf{b}). The black segments denote dislocation lines. The dash-dotted (red) line indicates RSAs. The stacking fault region in \textbf{a}) lies between $x\simeq 25$ and $220$ \r{A}.}
	\label{fig:stackingFault}
\end{figure}

\begin{figure*}
    \centering
    \begin{overpic}[width=\textwidth]{Figs/DislAgedT600K.png}
    %
    \LabelFig{4}{16}{$a)$ \scriptsize annealed}
    \Labelxy{-2}{6}{90}{$z$(\r{A})}
    \end{overpic}
    %
    \begin{overpic}[width=\textwidth]{Figs/DislRss.png}
    \LabelFig{4}{16}{$b)$ \scriptsize RSA}
    \Labelxy{50}{-2}{0}{$x$(\r{A})}
    \Labelxy{-2}{6}{90}{$z$(\r{A})}
    \end{overpic}
    %
    \caption{Realizations of partial dislocations in \textbf{a)} NiCoCr annealed at $T_a=600$ K and \textbf{b)} an RSA.}
    \label{fig:dislStackingFault}
\end{figure*}

\subsection{Lattice distortions}\label{sec:LatticeDistortions}
To characterize local distortion properties, we analyzed fluctuations in atomic Voronoi cell volumes and their dependence on the annealing temperature.
The aged solid solutions were equilibrated at $5$ K after annealing in order to suppress thermal fluctuations.
Figure~\ref{fig:voronoi}(a) and (b) shows the alloy atomic volumes and associated PDFs for the Ni, Co, and Cr atoms in annealed and random solid solutions. 
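The per-species statistics reported in this subsection (mean volumes $\langle V_b \rangle$, misfit volumes $\Delta V_b=\langle V_b\rangle-\langle V\rangle$, and the scaled rms fluctuation $\text{var}^{\frac{1}{2}}(V_b)/\langle V_b \rangle$) can be assembled from the output of any Voronoi tessellation tool; a minimal sketch (function name ours), assuming per-atom volumes and species labels are available as plain arrays:

```python
import numpy as np

def volume_stats(volumes, species):
    """Per-species Voronoi-volume statistics: mean volume <V_b>, misfit
    volume Delta V_b = <V_b> - <V>, and the scaled rms fluctuation
    var^{1/2}(V_b)/<V_b>, from per-atom volumes and species labels."""
    volumes = np.asarray(volumes, dtype=float)
    species = np.asarray(species)
    v_mean = volumes.mean()                     # global mean atomic volume <V>
    stats = {}
    for b in np.unique(species):
        v_b = volumes[species == b]
        stats[b] = {
            "mean": v_b.mean(),                 # <V_b>
            "misfit": v_b.mean() - v_mean,      # Delta V_b
            "scaled_rms": v_b.std() / v_b.mean(),
        }
    return stats
```

By construction, the concentration-weighted misfit volumes sum to zero, which is consistent with the values quoted for Ni, Co, and Cr below.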
\nBoth alloys feature quite narrow distributions with well-defined mean values $\\langle V_\\text{Ni} \\rangle$, $\\langle V_\\text{Co} \\rangle$, and $\\langle V_\\text{Cr} \\rangle$ that show slight variations with $T_a$, as in Fig.~\\ref{fig:voronoiRMS}(a).\nThe measured mean atomic volume in annealed NiCoCr~ samples is $\\langle V\\rangle \\simeq 11.3$ \\r{A}$^3$ (equivalent to an average lattice constant of $a=3.56$ \\r{A}), in very close agreement with experimental observations reported in \\cite{Yin2020}.\nFigure~\\ref{fig:voronoiRMS}(a) indicates features near a characteristic annealing temperature $T_a\\simeq 800$ K below which the variation of the mean atomic volumes with $T_a$ steepens, potentially a signature of pronounced enthalpy-driven ordering \\cite{Li2019}. \nThis observation is in accordance with the heat capacity $C_p$ developing a characteristic peak in Fig.~\\ref{fig:thermo}(a).\n\n\\begin{figure*}\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id1_load500.png}\n %\n \\LabelFig{19}{76}{$a)\\scriptscriptstyle\\sigma=500$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id0_load600.png}\n \\LabelFig{19}{76}{$b)\\scriptscriptstyle\\sigma=600$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id1_load650.png}\n \\LabelFig{19}{76}{$c)\\scriptscriptstyle\\sigma=650$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id0_load750.png}\n \\LabelFig{19}{76}{$d)\\scriptscriptstyle\\sigma=750$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\caption{Realizations of (immobile) partial edge dislocations in an annealed 
NiCoCr subject to the shear stress \\textbf{a)} $\\sigma=500$, \\textbf{b)} $600$, \\textbf{c)} $650$, and \\textbf{d)} $750$ MPa. Here the two-dimensional stack denotes the $(111)$ glide plane.} \n \\label{fig:dislSnapshots}\n\\end{figure*}\n\n\n\\begin{figure*}\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id1_load400Rss.png}\n %\n \\LabelFig{19}{76}{$a)\\scriptscriptstyle\\sigma=400$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id1_load500Rss.png}\n \\LabelFig{19}{76}{$b)\\scriptscriptstyle\\sigma=500$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id1_load550Rss.png}\n \\LabelFig{19}{76}{$c)\\scriptscriptstyle\\sigma=550$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislItime45000_id1_load600Rss.png}\n \\LabelFig{19}{76}{$d)\\scriptscriptstyle\\sigma=600$\\tiny(MPa)}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\caption{Realizations of (immobile) partial dislocations in a NiCoCr RSA subject to the shear stress \\textbf{a)} $\\sigma=400$, \\textbf{b)} $500$, \\textbf{c)} $550$, and \\textbf{d)} $600$ MPa.} \n \\label{fig:dislSnapshotsRsa}\n\\end{figure*}\n\n\n\n\nThe misfit volumes $\\Delta V_b=\\langle V_b\\rangle-\\langle V\\rangle$ of Ni, Co, and Cr are determined as $+0.04$, $-0.03$, and $-0.01$ \\r{A}$^3$, respectively.\nThe estimated atomic misfits appear to deviate by at least one order of magnitude from precise experimental measurements \\cite{Yin2020} but are, otherwise, reasonably close to ab initio-based estimates in \\cite{oh2019engineering}.\nWe further explored the scaled fluctuation $\\text{var}^{\\frac{1}{2}}(V_b)\/\\langle V_b 
\\rangle$ as a relevant measure of atomic distortions in Fig.~\\ref{fig:voronoiRMS}(b)\nwith $b$ denoting Ni, Co, and Cr.\n$\\text{var}^{\\frac{1}{2}}(V_b)\/\\langle V_b \\rangle$ shows a (fairly) monotonic increase for Ni as a function of $T_a$ until it saturates at a limiting value that appears to be lower than the one set by the RSA. \nThis is in line with our SRO analysis, which indicates an abundance of Ni-Ni pairs within the first-nearest-neighbor distance; the resulting ordering tends to curtail local atomic misfits, or randomness, in aged systems.\nAs for Co and Cr, we observe a non-monotonic evolution with a pronounced peak at $T_a\\simeq 600$ K that, in the case of the former, even exceeds the associated RSA limit.\nApart from the observed peaks, the relative variance for aged Co\/Cr seems to consistently fall short of that of RSAs, with a more dramatic decrease associated with Cr (the green diamonds). \nThis, we conjecture, might be attributed to the favored formation of Cr-Co regions and a weaker tendency for Cr-Ni as well as Cr-Cr bonding, as evidenced by the behavior of the first-nearest-neighbor order parameters in Fig.~\\ref{fig:sroSheng}.\nWe conclude this subsection by stating that short range order has a strong bearing on the misfit volumes of NiCoCr, as our data suggest direct correlations between the latter and the order parameters presented in Sec.~\\ref{sec:sro_temp}.\n{One may infer a characteristic scale based on the rms fluctuations analysis presented in Fig.~\\ref{fig:voronoiRMS}(b) which we interpret as the (mean) misfit size $\\xi^\\text{misfit}\\simeq 1$ \\r{A}. \nTogether with nanoscopic SROs ($\\xi^\\text{sro}\\simeq 10$ \\r{A}), atomic-level misfit fluctuations will determine the dislocation glide resistance as discussed in Sec.~\\ref{sec:sro_disl}.}\n\n\n\n\n\\subsection{Interplay between SROs and dislocations}\\label{sec:sro_disl}\nWe follow two different approaches to address the dislocation-SRO interplay in NiCoCr CSAs: \\romn{1}) study of dislocation effects on the nucleation of SROs in \\emph{aging} alloys, and \\romn{2}) investigations of strengthening mechanisms at play in \\emph{as-aged} SRO-rich alloys driven out of equilibrium.\nIn \\romn{1}), we aged samples with a dislocation allowing for both the {dislocation dissociation process} and spatio-temporal evolution of SROs while annealing.\nIn \\romn{2}), on the other hand, we embedded partial dislocations in as-annealed alloys and performed shear depinning tests, with no appreciable change in the SRO microstructure. \n\n\n\nIn line with \\romn{1}), Fig.~\\ref{fig:stackingFault} compares the structure of SROs within and outside the dislocation dissociation bounds in aging NiCoCr at $T_a=600$ K. \nIn Fig.~\\ref{fig:stackingFault}(a), the denser population of SROs within the stacking fault is visually apparent in comparison with a dislocation-free two-dimensional stack at $y=10$~\\r{A} illustrated in Fig.~\\ref{fig:stackingFault}(b). \nWe quantified the observed trend in Fig.~\\ref{fig:stackingFault}(c) where the SRO parameter $p_\\text{NiNi}$ associated with the former reveals a shallower decay relative to that of atoms outside the fault plane. 
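A radial profile like $p_\text{NiNi}(r)$ can be estimated by binning pairwise distances and counting the fraction of Ni-Ni pairs per bin. The sketch below is a simplified stand-in for the descriptor used here (its exact definition, periodic boundaries, and neighbor-list optimizations are omitted); `positions` and `is_ni` are hypothetical inputs, and for an RSA the profile tends to the uncorrelated baseline $c_\text{Ni}^2$ at every $r$.

```python
import numpy as np

def p_nini(positions, is_ni, r_max=12.0, dr=2.0):
    """Fraction of Ni-Ni pairs among all atom pairs, binned by the
    pairwise distance r (brute force, no periodic boundaries)."""
    i, j = np.triu_indices(len(positions), k=1)
    r = np.linalg.norm(positions[i] - positions[j], axis=1)
    nini = is_ni[i] & is_ni[j]
    edges = np.arange(0.0, r_max + dr, dr)
    idx = np.digitize(r, edges) - 1
    p = np.full(len(edges) - 1, np.nan)
    for k in range(len(edges) - 1):
        mask = idx == k
        if mask.any():
            p[k] = nini[mask].mean()
    # bin centers and the Ni-Ni pair fraction per bin
    return 0.5 * (edges[:-1] + edges[1:]), p
```

An elevated value of this fraction over the $c_\text{Ni}^2$ baseline at short $r$ is the signature of Ni-Ni ordering discussed above.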
\nTo our knowledge, the drastic increase in chemical ordering within the stacking fault region has not been previously reported in the literature.\nAs one possible mechanism at play, we speculate that the long-range stress field and mutual interactions between the two partials \\cite{hull2001introduction} might favor SRO nucleation and growth within the dissociation zone.\nFigure~\\ref{fig:dislStackingFault}(a) and (b) illustrates that such interactions at $T_a=600$ K lead to a notable reduction in the stacking fault width, which implies an enhanced fault energy due to SROs \\cite{Li2019,ding2018tunable}.\n{It is expected that the fault dimension, and therefore the associated formation energy, will be strongly controlled by the annealing temperature as well.\nWe note that a detailed description of the SRO kinetics and the dynamics of the dislocation dissociation, as well as their (dynamical) interplay during the aging process, is outside the scope of our current study.}\n\n\n\nFollowing approach \\romn{2}), the notion of ``plastic flow'' in solute strengthening theories directly links to the existence of the intrinsic friction stress $\\sigma_c$ beyond which dislocations tend to glide rather smoothly at a non-negligible (mean) velocity $\\langle v \\rangle$.\nBelow this critical stress, by contrast, the migration of dislocations within CSAs (with a severely distorted energy landscape) typically occurs in a very intermittent manner with long periods of quiescent states (i.e. 
$\\langle v \\rangle\\simeq 0$) interrupted by bursts of displacements \\cite{osetsky2019two}.\nIn the absence of thermal activation, this depinning transition is phenomenologically described as \\cite{zaiser2006scale} \n\\begin{equation}\\label{eq:depinning}\n\\langle v \\rangle \\propto (\\sigma-\\sigma_c)^{1\/\\beta},\n\\end{equation}\nat $\\sigma \\ge \\sigma_c$ and $\\langle v \\rangle=0$ otherwise, with $\\beta \\ge 1$ reflecting a marked discontinuity at $\\sigma_c$ \\cite{esfandiarpour2022edge}.\nHere $\\sigma$ is the applied shear stress resolved in the glide plane.\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/h0_0.png}\n %\n \\Labelxy{44}{-3}{0}{$\\hat{h}_x(z)$}\n \\Labelxy{-6}{35}{90}{$z$(\\r{A})}\n %\n \\LabelFig{17}{90}{$a)$ \\scriptsize annealed}\n \n \\Labelxy{17}{16}{90}{$\\scriptstyle 500$\\tiny (MPa)}\n \\Labelxy{36}{16}{90}{$\\scriptstyle 600$}\n \\Labelxy{50}{16}{90}{$\\scriptstyle 650$}\n \\Labelxy{62}{16}{90}{$\\scriptstyle 700$}\n \\Labelxy{83}{16}{90}{$\\scriptstyle 750$}\n \\end{overpic}\n \n\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/h0_crltn_0.png}\n %\n \\Labelxy{44}{-3}{0}{$|z-z^\\prime|$}\n \\Labelxy{70}{80}{0}{$\\sigma$\\tiny (MPa)}\n \\Labelxy{-6}{35}{90}{$c_h(|z-z^\\prime|)$}\n %\n \\LabelFig{17}{15}{$b)$}\n \\end{overpic}\n \n \\caption{\\textbf{a}) Configurations of (immobile) partial edge dislocations \\textbf{b}) associated correlations $c_h(|z-z^\\prime|)=\\langle ~\\hat{h}_x(z)\\cdot\\hat{h}_x(z^\\prime)~\\rangle$ as a function of distance $|z-z^\\prime|$ in an annealed NiCoCr under different applied stresses (below the depinning transition) at $T_a=600$ K. This includes $\\sigma=500$, $600$, $650$, $700$, and $750$ MPa. The shear tests were carried out at $5$ K. Here, the two-dimensional $x-z$ glide plane denotes a $(111)$ cross section. 
The dislocation configurations are shifted vertically for clarity.}\n \\label{fig:sroBelowStrsAnnealed}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/h0_1.png}\n %\n \\Labelxy{44}{-3}{0}{$\\hat{h}_x(z)$}\n \\Labelxy{-6}{35}{90}{$z$(\\r{A})}\n %\n \\LabelFig{17}{90}{$a)$ \\scriptsize RSA}\n \n \\Labelxy{19}{16}{90}{$\\scriptstyle 400$\\tiny (MPa)}\n \\Labelxy{40}{16}{90}{$\\scriptstyle 500$}\n \\Labelxy{49}{16}{90}{$\\scriptstyle 550$}\n \\Labelxy{68}{16}{90}{$\\scriptstyle 600$}\n \\Labelxy{81}{16}{90}{$\\scriptstyle 650$}\n \\end{overpic}\n \n \n \\begin{overpic}[width=0.27\\textwidth]{Figs\/h0_crltn_1.png}\n %\n \\Labelxy{44}{-3}{0}{$|z-z^\\prime|$}\n \\Labelxy{38}{80}{0}{$\\sigma$\\tiny (MPa)}\n \\Labelxy{-6}{35}{90}{$c_h(|z-z^\\prime|)$}\n %\n \\LabelFig{17}{15}{$b)$}\n \\end{overpic}\n \\caption{\\textbf{a}) Configurations of (immobile) partial edge dislocations \\textbf{b}) associated correlations $c_h(|z-z^\\prime|)=\\langle ~\\hat{h}_x(z)\\cdot\\hat{h}_x(z^\\prime)~\\rangle$ as a function of distance $|z-z^\\prime|$ in a NiCoCr RSA under different applied stresses below the depinning transition. 
This includes $\\sigma=400$, $500$, $550$, $600$, and $650$ MPa.}\n \\label{fig:sroBelowStrsRsa}\n\\end{figure}\n\n\\begin{figure*}\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislVelocItime20000_id1_load1200.png}\n %\n \\LabelFig{19}{76}{$a)$}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislVelocItime30000_id1_load1200.png}\n \\LabelFig{19}{76}{$b)$}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislVelocItime35000_id1_load1200.png}\n \\LabelFig{19}{76}{$c)$}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/DislVelocItime40000_id1_load1200.png}\n \\LabelFig{19}{76}{$d)$}\n \\Labelxy{50}{-6}{0}{$x$(\\r{A})}\n \\Labelxy{-6}{40}{90}{$z$(\\r{A})}\n \\end{overpic}\n %\n \\caption{($z$-scored) velocity of partial edge dislocations $\\hat{v}_x(z)$, illustrated by the black arrows, in annealed NiCoCr at $T_a=600$ K and under the applied stress $\\sigma =1200$ MPa (above $\\sigma_c$). The shear tests were carried out at $5$ K. Here, the panels indicate different realizations associated with the gliding dislocations and the two-dimensional plane denotes a $(111)$ cross section. 
The arrows denote the velocity field.}\n \\label{fig:dislVelSnapshots}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/mobility.png}\n %\n \\Labelxy{44}{-3}{0}{$\\sigma$~\\scriptsize(MPa)}\n \\Labelxy{-6}{35}{90}{$\\langle v_x \\rangle(\\text{ms}^{-1})$}\n %\n \\LabelFig{17}{15}{$a)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/v0_crltn_0_multipleLoads.png}\n %\n \\Labelxy{44}{-3}{0}{$|z-z^\\prime|$}\n \\Labelxy{-6}{35}{90}{$\\langle c_v(|z-z^\\prime|) \\rangle_\\text{ens}$}\n \n \\Labelxy{68}{80}{0}{$\\sigma$\\tiny (MPa)}\n %\n \\LabelFig{17}{15}{$b)$ \\scriptsize annealed}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/v0_crltn_1_multipleLoads.png}\n %\n \\Labelxy{44}{-3}{0}{$|z-z^\\prime|$}\n \\Labelxy{-6}{35}{90}{$\\langle c_v(|z-z^\\prime|) \\rangle_\\text{ens}$}\n \n \\Labelxy{68}{80}{0}{$\\sigma$\\tiny(MPa)}\n %\n \\LabelFig{17}{15}{$c)$ \\scriptsize RSA}\n \\end{overpic}\n \\caption{Stress-dependence of the mean dislocation velocity and associated fluctuations. \\textbf{a}) Mobility rule describing the mean dislocation velocity $\\langle v_x \\rangle$ as a function of applied stress $\\sigma$. Mean velocity auto-correlations $\\langle c_v(|z-z^\\prime|) \\rangle_\\text{ens}$ as a function of distance $|z-z^\\prime|$ in \\textbf{b}) annealed NiCoCr \\textbf{c}) NiCoCr RSA subject to multiple applied stresses $\\sigma$ above $\\sigma_c$. 
}\n \\label{fig:velCrltn}\n\\end{figure*}\n\nTo estimate $\\sigma_c$, we performed atomistic simulations of dislocation properties in fcc-based NiCoCr CSAs by studying the dynamics of $\\frac{1}{2}\\langle 110\\rangle\\{111\\}$ edge dislocations which, under an external drive, dissociate into two mixed partials and a stacking fault in between \\cite{hull2001introduction}.\nTo measure the dislocation velocity and its spatio-temporal evolution, we first identified all dislocation line defects in the atomistic crystal, along with their Burgers vectors, and output a line representation of the dislocations by using OVITO \\cite{stukowski2012automated}.\nDue to inherent lattice distortions, dislocation lines are not straight but show local fluctuations with respect to the average line direction along $z$.\nWe describe line fluctuations projected along the glide direction $x$ by the function $h_x(z)$ discretized via a fine grid of size $2$~\\r{A} across the glide plane parallel to the $z$ direction. \nWe obtain the dislocation velocity $v_x(z) = \\delta h_x(z)\/\\delta t$ by considering successive dislocation snapshots separated by a time window $\\delta t\\simeq 4$ ps.\nThe latter is chosen to be at least three orders of magnitude longer than the discretization time $\\Delta t$ yet short enough to resolve displacements down to atomistic scales.\nThe subsequent correlation analysis was performed on CSAs initially annealed at $T_a=600$ K and sheared, along with the RSAs, at $5$ K.\n\nFigures~\\ref{fig:dislSnapshots} and \\ref{fig:dislSnapshotsRsa} illustrate configurations of (frozen) dislocations in an annealed NiCoCr as well as a NiCoCr RSA under different loads well below the depinning transition ($\\sigma < \\sigma_c$).\n{The local curvatures associated with the dislocation segments in Fig.~\\ref{fig:dislSnapshots} indicate fairly coherent pinning effects that somewhat correlate with the spatial locations of SROs. 
\nSuch features might also be present in RSAs, as in Fig.~\\ref{fig:dislSnapshotsRsa}, but to a very limited extent in space.} \nThe local line curvature, and its positive sign with respect to the glide direction, should potentially indicate how effectively dislocations are pinned near SROs and\/or due to local atomic misfits.\nIn this context, line fluctuations associated with the aged alloy in Fig.~\\ref{fig:sroBelowStrsAnnealed}(a) appear to be correlated over larger lengthscales than those of the random alloy in Fig.~\\ref{fig:sroBelowStrsRsa}(a).\nSimilar trends could also be inferred from the associated correlation functions\n\\begin{equation}\nc_h(|z-z^\\prime|)=\\langle~\\hat{h}_x(z)\\cdot\\hat{h}_x(z^\\prime)~\\rangle,\n\\end{equation}\nwith the $z$-scored fluctuations $\\hat{h}_x=(h_x-\\langle h_x \\rangle)\/\\text{var}^{\\frac{1}{2}}(h_x)$.\nHere the angular brackets $\\langle \\cdot \\rangle$ denote averaging in space.\nOverall, the slower decay of correlations $c_h(|z-z^\\prime|)$ in Fig.~\\ref{fig:sroBelowStrsAnnealed}(b), in comparison with Fig.~\\ref{fig:sroBelowStrsRsa}(b), may indicate additional SRO-induced pinning effects in annealed alloys.\nWe note that the absence of SROs does not necessarily rule out long-range fluctuation patterns in RSAs, as in Fig.~\\ref{fig:sroBelowStrsRsa}(b), and, therefore, coherent pinning patterns due to atomic-scale distortions \\cite{zhang4102468data,zaiser2021pinning}. \n\n\n\n\\begin{figure}[t]\n \\centering\n \\begin{overpic}[width=0.5\\textwidth]{Figs\/Elastic.png}\n \\Labelxy{91}{7.5}{0}{\\scriptsize RSA}\n \\end{overpic}\n \\caption{Elastic constants and their dependence on the annealing temperature $T_a$. Measurements were carried out at $T=5$ K. The rightmost data point represents elastic properties of a random solid solution. Samples were generated based on the Li-Sheng-Ma~ potential. 
} \n \\label{fig:elasticConstants}\n\\end{figure}\n\n\\begin{figure}[b]\n \\centering\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/v0_0.png}\n %\n \\Labelxy{44}{-3}{0}{$\\hat{v}_x(z)$}\n \\Labelxy{-6}{35}{90}{$z$(\\r{A})}\n %\n \\LabelFig{17}{90}{$a)$ \\scriptsize annealed}\n \n \\Labelxy{18}{16}{90}{$\\scriptstyle (1)$}\n \\Labelxy{34}{16}{90}{$\\scriptstyle (2)$}\n \\Labelxy{48}{16}{90}{$\\scriptstyle (3)$}\n \\Labelxy{62}{16}{90}{$\\scriptstyle (4)$}\n \\Labelxy{76}{16}{90}{$\\scriptstyle (5)$}\n \\end{overpic}\n \n\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/v0_crltn_0.png}\n %\n \\Labelxy{44}{-3}{0}{$|z-z^\\prime|$}\n \\Labelxy{-6}{35}{90}{$c_v(|z-z^\\prime|)$}\n %\n \\LabelFig{17}{15}{$b)$}\n \\end{overpic}\n \\caption{ \\textbf{a}) (Scaled) velocity of partial edge dislocations $\\hat{v}_x(z)$ \\textbf{b}) associated correlations $c_v(|z-z^\\prime|)=\\langle ~\\hat{v}_x(z)\\cdot\\hat{v}_x(z^\\prime)~\\rangle$ as a function of distance $|z-z^\\prime|$ in an annealed NiCoCr at $T_a=600$ K under the applied stress $\\sigma =1200$ MPa (above $\\sigma_c$). The shear tests were carried out at $5$ K. Here, the numbers indicate different realizations associated with gliding dislocations and the two-dimensional plane denotes a $(111)$ cross section. 
The velocity profiles are shifted vertically for clarity.}\n \\label{fig:sroAboveStrsAnnealed}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \n \\begin{overpic}[width=0.27\\textwidth]{Figs\/v0_1.png}\n %\n \\Labelxy{44}{-3}{0}{$\\hat{v}_x(z)$}\n \\Labelxy{-6}{35}{90}{$z$(\\r{A})}\n %\n \\LabelFig{17}{90}{$a)$ \\scriptsize RSA}\n \n \\Labelxy{18}{16}{90}{$\\scriptstyle (1)$}\n \\Labelxy{34}{16}{90}{$\\scriptstyle (2)$}\n \\Labelxy{48}{16}{90}{$\\scriptstyle (3)$}\n \\Labelxy{62}{16}{90}{$\\scriptstyle (4)$}\n \\Labelxy{76}{16}{90}{$\\scriptstyle (5)$}\n \\end{overpic}\n \n %\n \\begin{overpic}[width=0.27\\textwidth]{Figs\/v0_crltn_1.png}\n %\n \\Labelxy{44}{-3}{0}{$|z-z^\\prime|$}\n \\Labelxy{-6}{35}{90}{$c_v(|z-z^\\prime|)$}\n %\n \\LabelFig{17}{15}{$b)$}\n \\end{overpic}\n \\caption{ \\textbf{a}) (Scaled) velocity of partial edge dislocations $\\hat{v}_x(z)$ \\textbf{b}) associated correlations $c_v(|z-z^\\prime|)=\\langle ~\\hat{v}_x(z)\\cdot\\hat{v}_x(z^\\prime)~\\rangle$ as a function of distance $|z-z^\\prime|$ in a NiCoCr RSA under the applied stress $\\sigma =1200$ MPa (above $\\sigma_c$).}\n \\label{fig:sroAboveStrsRsa}\n\\end{figure}\n\n\n\nWe repeated the above analysis by probing velocity fluctuations $v_x(z)$ associated with the gliding dislocations (at $\\sigma>\\sigma_c$) in annealed as well as random alloys (see Fig.~\\ref{fig:dislVelSnapshots}).\nFigure~\\ref{fig:velCrltn} illustrates the shear stress dependence of the mean dislocation velocity as well as (mean) velocity correlations (averaged over different configurations) for the aged and random alloys.\nAs shown in Fig.~\\ref{fig:velCrltn}(a), the behavior of $\\langle v_x\\rangle$ versus $\\sigma$ marks the dislocation pinning-to-depinning transition at $\\sigma_c$, in agreement with the generic dependence of Eq.~(\\ref{eq:depinning}) around the transition. 
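The height and velocity correlation measures used here follow one recipe: $z$-score the discretized profile and average products over pairs of grid points at fixed separation; a pinning length can then be read off as the first zero crossing of the height correlation. A schematic numpy implementation, assuming a hypothetical array `h` holding a profile $h_x(z)$ sampled on a uniform grid of spacing `dz` (boundary effects of the non-periodic estimator are ignored):

```python
import numpy as np

def zscore(x):
    """Z-scored profile, e.g. h_hat = (h - <h>) / var^(1/2)(h)."""
    return (x - x.mean()) / x.std()

def correlation(x):
    """c(s) = < x_hat(z) * x_hat(z + s) >, averaged over z
    (non-periodic estimate; c(0) = 1 by construction)."""
    xh = zscore(x)
    n = len(xh)
    return np.array([np.mean(xh[: n - s] * xh[s:]) for s in range(n)])

def pinning_length(h, dz=2.0):
    """Shortest distance at which the height correlation crosses
    zero, interpreted as the pinning length xi_p."""
    c = correlation(h)
    below = np.nonzero(c <= 0.0)[0]
    return below[0] * dz if below.size else np.nan
```

The same `correlation` routine applies verbatim to velocity profiles $v_x(z)$, with ensemble averaging over realizations performed afterwards.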
The depinning behavior appears fairly insensitive to annealing except for a meaningful shift of $\\sigma_c$ to larger strengths.\nThe estimated critical shear stresses are $\\sigma_c\\simeq 950$ and $650$ MPa associated with aged and random samples, respectively. \nThe (mean) velocity auto-correlations, averaged over different realizations, $\\langle c_v(|z-z^\\prime|) \\rangle_\\text{ens}$ (see the definition of $c_v(|z-z^\\prime|)$ below) are shown at different stress levels beyond $\\sigma_c$ in Fig.~\\ref{fig:velCrltn}(b) and (c), both indicating a finite correlation length. \n\n{The marked increase of $\\sigma_c$ occurs despite (relatively) insignificant variations of elastic properties (Fig.~\\ref{fig:elasticConstants}); the improved yield properties relative to RSAs therefore cannot be naively attributed to enhanced elasticity of aged alloys (see Sec.~\\ref{sec:discussions}).\nThe elastic constants we probed in this study include $C_{11}$, $C_{12}$, and $C_{44}$ (in the Voigt notation) as well as the bulk modulus $B$ and Poisson's ratio, all determined by using the Li-Sheng-Ma~ interatomic potential.\nHere the $x$, $y$, and $z$ dimensions are parallel to the $[100]$, $[010]$, and $[001]$ crystal directions, respectively.\nThe overall trend we observe in Fig.~\\ref{fig:elasticConstants} is consistent with the study of Li et al. 
\\cite{Li2019} which reported the change of elastic properties with increasing SROs (upon decreasing $T_a$).\nThe elastic constants seem to develop features near $T_a\\simeq 800$ K where the dominant presence of chemical ordering is expected.}\n\n\nWe further investigate individual dislocation configurations and associated fluctuations of local velocities in Fig.~\\ref{fig:sroAboveStrsAnnealed} and \\ref{fig:sroAboveStrsRsa} where the dislocations move at an average speed $\\langle v_x\\rangle \\simeq 1000 ~\\text{ms}^{-1}$ in both systems subject to the applied shear stress of $\\sigma=1200$ MPa, well above the corresponding depinning thresholds.\nThe ($z$-scored) velocity profiles $\\hat{v}_x(z)$ in Fig.~\\ref{fig:sroAboveStrsAnnealed}(a) and \\ref{fig:sroAboveStrsRsa}(a) correspond to five different snapshots of gliding dislocations that are shifted for a better view of variations across the (average) dislocation line direction $z$.\nWe remark that the regions to the left of the dash-dotted lines indicate local velocities below the average speed $\\langle v_x\\rangle$, {as depicted by the left-headed arrows in Fig.~\\ref{fig:dislVelSnapshots} }.\nStatistically speaking, the segments with $v_x(z)<\\langle v_x\\rangle$ somewhat correlate with the positively-bent segments of dislocation lines, which are mostly influenced by the existence of SROs and\/or atomic misfits.\nNevertheless, velocity fluctuations quantified by the velocity auto-correlations\n\\begin{equation}\nc_v(|z-z^\\prime|)=\\langle~\\hat{v}_x(z)\\cdot\\hat{v}_x(z^\\prime)~\\rangle,\n\\end{equation}\ndo not appear to be statistically different in annealed and random alloys in Fig.~\\ref{fig:sroAboveStrsAnnealed}(b) and \\ref{fig:sroAboveStrsRsa}(b).\n\n\n\n\n\n\n\n\\section{Discussion \\& Conclusions}\\label{sec:discussions}\nOur atomistic simulations of NiCoCr~ CSAs under special thermal treatments have revealed the formation of nanostructural local chemical ordering and enhanced dislocation 
glide resistance, in close agreement with recent SRO-based studies on simulated and \\emph{real} NiCoCr samples \\cite{Li2019,zhang2020short}.\nOn the ordering effects, we made use of the Li-Sheng-Ma~ potential function that has been validated in terms of detailed and accurate modelling of Ni, Co, and Cr interatomic interactions \\cite{Li2019}.\nOur direct measurements of local lattice strains agree very closely with a recent ab initio study \\cite{oh2019engineering} but failed to fully reproduce experimental findings \\cite{Yin2020}.\nUsing the Farkas-Caro~ potential, by contrast, we find very limited relevance to real annealed NiCoCr~ alloys. \nThe simulated alloys, in this context, exhibit no ordering (beyond statistical fluctuations), but also no notable improvement in yield strengths or elastic properties, as reported in Table~\\ref{tab:EC_Mo}.\nWe have interpreted the physical origin of such differences by using robust (experimentally-relevant) SRO descriptors in various thermal annealing scenarios. \nWe find that the Li-Sheng-Ma potential, under the proper aging process, leads to an exceptional dislocation depinning strength together with a stacking fault width that falls short of that of RSAs.\n{The latter is associated with the enhanced stacking fault energy, which might, in part, relate to improved elasticity as a result of SROs; compared to the yield strength, however, the ordering effects appear to be less pronounced.} \nThe intrinsic strengthening mechanism is mostly dominated by coherent SRO-induced pinning effects, but random spatial distributions of misfit volumes and the resulting roughening also seem to be at play.\n\nOur correlation analyses of the dislocation structure and its spatio-temporal evolution allow for inferring a characteristic pinning length $\\xi_p$ and optimal displacement $w_p$ \\cite{VARVENNE2016164}.\nWe interpret the latter as being the rms fluctuations in the dislocation height (with respect to the mean), i.e. 
$w_p=(\\langle h^2_x\\rangle-\\langle h_x\\rangle^2)^\\frac{1}{2}$, whereas the former is determined as the shortest distance where the height correlations cross zero, i.e. $c_h(|z-z^\\prime|)=0$ at $|z-z^\\prime|=\\xi_p$.\nAt $T_a=600$ K and $\\sigma < \\sigma_y$, it follows that $w_p=5.6-11.2$ \\r{A} and $\\xi_p=25-31$ \\r{A} for the annealed alloy. \n{We conjecture that these two quantities should both correlate with the observed increase in the depinning stress. Based on our present data, however, we are not able to quantify such (anti-)correlations numerically.}\nBoth observables $w_p$ and $\\xi_p$ are also expected to show meaningful associations with the average SRO size $\\xi^\\text{sro}$ as well as the amplitude of misfit fluctuations (characterized by $\\xi^\\text{misfit}$) and are relevant ingredients in \\emph{mean-field} solute models that make yield strength predictions based on dislocation line properties (e.g. line tension $\\Gamma$, length $L$, and Burgers vector $b$).\n{In this mean-field picture, SROs introduce the characteristic scale $\\xi^\\text{sro}$ that \\emph{effectively} decreases the pinning length $\\xi_p$, leading to a reduction of the optimal displacement $w_p$ and, therefore, an extra strengthening.}\n{Within these mean-field frameworks, the depinning stress should scale with the line tension $\\Gamma$, which itself is proportional to the shear modulus; based on our findings, annealing is not expected to boost $\\sigma_y$ simply through such elasticity-based contributions. Instead, variations in the disorder strength and the pinning field are the key factors.}\nTo validate such theories in simulations, one must be vigilant to use appropriate mesoscopic lengths (beyond atomistic scales) where continuum-like concepts such as line tension and local curvature are well-defined \\cite{VARVENNE2016164}.\nTo explore the full dislocation waviness in MD, it is also necessary that the dislocation length $L \\gg 
\\xi_p$ and associated deformation $w_p\\ll \\xi_p$ \\cite{Priv_Comm_curtin}. \nHowever, the above separation of scales typically lies beyond the assumptions of atomistic modeling, including the present study.\n\n\nComplications might also arise in the application of strengthening theories (e.g. the Varvenne-Curtin (VC) model \\cite{VARVENNE2016164}) due to SRO-induced correlations. \nThe latter are at odds with the ``randomness'' hypothesis taken for granted for solute arrangements in RSAs.\nGiven that the VC theory is constructed exclusively on misfit information, an \\emph{effective} treatment in the presence of spatial correlations is to assume that SROs dramatically alter local misfit distributions; in that case, an alternative length $\\xi^\\text{misfit}_\\text{eff}$ may be used to describe the strength of distortions. \nAlternatively, Zaiser and Wu (ZW) \\cite{zaiser2021pinning, vaid2021pinning} formulated a more relevant approach based on the fact that pinning forces caused by obstacles are not fully random but rather correlated over a certain length $a$ \\cite{geslin2021microelasticity} that we tend to interpret as the effective SRO size ($a\\simeq\\xi^\\text{sro}$). \nIn their formulation of dislocation dynamics, ZW introduced an order\/disorder length that, along with the strength of misfit fluctuations, may describe the SRO- and misfit-induced noise field in a more accurate way than the VC methodology.\nAlong these lines, Zhang et al. \\cite{zhang2019effect} developed a stochastic Peierls-Nabarro model to incorporate the role of both CSA randomness and short range ordering effects on the glide dynamics of roughened dislocations.\n\n\n\n\n{The existing literature reports the emergence of (varying degrees of) SROs as a rather generic feature across a broad range of high-entropy alloys (see \\cite{wu2021short} and references therein). 
\nNevertheless, the focus has been placed on different variants of NiCoCr-based alloys (including the well-studied Cantor alloy) and, owing to similar atomic sizes and electronegativities, such compositions might tend to favor SRO nucleation \\cite{ding2019tuning}.\nIn terms of mechanical properties, the mechanisms underlying the SRO-induced enhancement in dislocation glide resistance, i.e. coherent pinning and enhanced roughening, may be fairly universal rather than specific to particular chemical compositions; their robustness over a broader range of compositionally complex solid solutions, however, has yet to be fully explored.\n}\n\nThis work presents a multitude of results, some of which, including the dislocation roughening and SRO emergence, could potentially be validated experimentally through in\/ex-situ electron microscopy analysis or other image-based characterization techniques.\n{Furthermore, our findings will have important implications for Discrete Dislocation Dynamics (DDD) models and associated mobility rules that additionally consider spatial correlations within the rough potential energy landscape \\cite{salmenjoki2020plastic}.\nThis is conceptually similar to intrinsic Peierls stresses that are locally distributed in space but also correlated over certain microstructural scales.\nIncorporating and tuning SROs' structural features as model ingredients will potentially lead to further improvements in DDD predictive capabilities and design-level hardening features in the context of NiCoCr-based CSAs with dense and complex networks of interacting dislocations.}\n\n\n\n\n\n\n\n\n\n\\section{\\label{sec:introduction}Introduction}\n\nMetallurgy of alloys is at the core of technological progress. \nConcentrated solid solution alloys (CSAs) have recently emerged as major candidates for novel alloys for applications under extreme conditions \\cite{li2019mechanical,shang2021mechanical}. 
\nAmong these, the equiatomic NiCoCr CSA represents a comparatively simple composition that has been consistently reported to show exceptional mechanical properties \\cite{li2019mechanical}. \nThese include (among others) a relatively high (tensile) strength and ductility, fracture toughness, as well as (micro-)hardness that often exceed those of the \"Cantor\" alloy \\cite{Gludovatz2016}, yet with fewer principal components.\nThis is most likely rooted in the chemical composition and underlying sub-structure.\nHowever, the microstructural origin of the exceptional mechanical properties has been heavily debated, with a possible explanation being the presence of nm-level (chemical\/structural) short-range order (SRO) \\cite{zhang2020short,wu2021short,zhou2022atomic,chen2021direct} that {arises from particular thermal processing and} influences dislocation pinning and stacking fault widths. In addition, lattice distortions and local crystalline misfits, due to atomic size differences~\\cite{Yin2020,noehring2019correlation}, have been shown to be correlated with the exceptional mechanical behavior of this alloy. Given the apparent importance of local misfit volumes~\\cite{Yin2020}, it remains a challenge to identify the role of SRO in these exceptional mechanical properties.\nIn this paper, we rely on extensive molecular simulations to understand how SRO influences dislocation plasticity and how it depends on processing parameters (i.e. annealing temperature) and the properties of the ordered phase, as well as on the role of the interatomic potentials in the ordering process. \n{\nIn this framework, we investigated two case studies involving commonly-used NiCoCr interatomic potentials: i) the Li-Sheng-Ma and ii) the Farkas-Caro potential. 
\nWhile i) leads to the formation of short range ordering upon aging, ii) displays chemical\/structural features, under the exact same thermal treatment, that are almost indistinguishable from random solid solutions.\nOur multi-scale characterization of local ordering was based on novel descriptors that exhibit distinct structural\/chemical signatures owing to the presence of nanoscopic SROs.\n}\nWe study the effects of SRO on mechanical properties, showing that it significantly influences them via the interplay of dislocations and SRO structures.\n{Such an interplay was quantified via a detailed analysis of the dislocation substructure, indicating enhanced roughening properties due to combined SRO-misfit effects.}\n\nShort range order has been at the core of studies in CSAs across the board \\cite{liu2021nanoprecipitate,he2021understanding,wolverton2000short}. \nThermodynamically speaking, SROs' ubiquity in low-temperature alloys has been mainly attributed to dominant enthalpic effects that, in the absence of entropy-driven mechanisms, do not favor ideal perfect mixtures of equimolar elements \\cite{Li2019}.\nIn this context, SROs typically refer to coherent compositional deviations from the (statistically) random distributions of atoms within the solution matrix found in random solid solution alloys (RSAs).\nMore importantly, (thermal) processing parameters associated with annealing and homogenization procedures (i.e. 
temperature and time) or irradiation may have a drastic effect on the nucleation of SROs and associated substructural features \\cite{Yin2020,walsh2021magnetically,zhang2017local}.\nOwing to their nanoscopic scales, laboratory-based observations of SROs are quite nontrivial, involving intensive use of advanced characterization techniques such as high-resolution electron microscopy and atomic-resolution energy dispersive spectrometry mapping \\cite{wang2022chemical,chen2021direct}.\nSROs are, in turn, strongly tied to underlying physical mechanisms that govern fundamental alloy properties. \nFor instance, the formation of Ni-rich nano-precipitates and associated inhomogeneities within the annealed NiCoCr matrix has recently been suggested to tune the stacking fault width, with evident consequences for alloy strengthening \\cite{ding2018tunable,Li2019}. \nSimilar conclusions were drawn experimentally by Ritchie et al. \\cite{zhang2020short}, who reported on the emergence of SROs in aged NiCoCr CSAs with significant impacts on the dislocation activation energy and hardness. 
\nIn studies of NiCoCr-based alloys, stacking fault energy, hardness, and fracture toughness, as bulk properties, were recently shown to strongly correlate with the degree of Ni-rich SROs and corresponding structural features \\cite{yang2022chemical,jian2020effects,liu2022exceptional,miao2021ordering}.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{overpic}[width=0.45\\textwidth]{Figs\/ChemPotenNiCo.png}\n \\LabelFig{83.5}{12}{$a)$}\n \\end{overpic}\n \\begin{overpic}[width=0.45\\textwidth]{Figs\/ChemPotenNiCr.png}\n \\LabelFig{83.0}{12}{$b)$}\n \\end{overpic}\n \\caption{Chemical potential differences for \\textbf{a)} Ni-Co and \\textbf{b)} Ni-Cr pairs used for the variance-constrained semi-grand canonical (VCSGC) ensemble.} \n \\label{fig:Chemical Poyentials}\n\\end{figure*}\n\n\n\nIn contrast, local misfit properties have been conventionally viewed as a key solid solution strengthening mechanism \\cite{sohn2019ultrastrong}.\n{In this framework, the inherent yield strength of alloys (or Peierls stress) corresponds to the stress threshold required for dislocation depinning; in the context of multi-component high-entropy alloys, intrinsic heterogeneity and randomness give rise to largely uncorrelated perturbations of the local thresholds. }\nThis concept was theoretically put forward in the seminal work by Labusch \\cite{Labusch}, who hypothesized that the motion of dislocations within a random set of solute obstacles leads to significant hardening effects in dilute solutions. \nIn a series of relevant papers, Varvenne and Curtin (VC) \\cite{VARVENNE2016164,VARVENNE2017660, VARVENNE201892} further investigated the RSA context in terms of elastic-type long-range interactions between dislocation lines and residual strain fields resulting from atomic size misfits. 
\nAlong these lines, a mean-field theoretical framework was proposed to make fairly accurate predictions of yield strengths solely based on the effective medium elastic properties and, more importantly, local misfit fluctuations. \n{The proposed theory accounts for local compositional fluctuations described by spatial distributions of misfit volumes, which are accessible through atomistic simulations and experimentation.\nThe VC framework was further generalized to additionally account for thermal and strain-rate effects on the alloys' strength and was validated in the context of random alloys \\cite{VARVENNE2017660, Yin2020}.\nIn the case of high-entropy alloys (HEAs), however, there may often be a considerable degree of short range ordering and, therefore, HEAs cannot simply be treated as fully random.}\n\nShort-range order and local misfits do not act independently: they combine and interplay in ways that are difficult to predict in advance and hard to separate \\cite{jian2020effects}.\nMoreover, almost all crucial properties related to alloy strength are strongly dependent on the underlying microstructure and the processing methods used to synthesize CSAs.\n{In an aged multicomponent alloy, the effective Peierls stress will be influenced both by randomness in the local composition distribution (giving rise to misfit fluctuations) and by short-ranged (but still finite) spatial correlations introduced by SROs.\nNaturally, this combined effect leads to an effective yield stress that typically exceeds that of a random solid solution, which lacks this finite-range ordering component. \nA naive picture in this context is that dislocations move by locally bending between pinning sites to overcome locally-fluctuating Peierls stresses, leading to extra strengthening \\cite{utt2022origin}.\nNevertheless, a systematic study of dislocation glide resistance and substructure that discerns the separate roles of SROs and misfits seems necessary. 
\n}\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{overpic}[width=0.75\\textwidth]{Figs\/Figure1.png}\n \\LabelFig{8}{27}{$a)$}\n \\LabelFig{58}{27}{$b)$}\n %\n \\Labelxy{0.5}{13}{90}{$~~~~C_p(J$K$^{-1}$)}\n \\Labelxy{51.2}{10}{90}{$\\alpha_p(\\times 10^{-6}$K$^{-1})$}\n \n \\put(22,13) {\\includegraphics[width=.2\\textwidth]{Figs\/Snapshot.jpg}}\n \n \\begin{tikzpicture} \n \\coordinate (a) at (0,0);\n \\node[white] at (a) {\\tiny.};\n \\scaleBarr{4.3}{1.2}{0.35}{0.15}{$\\tiny 0$}{$\\tiny 15$}{$\\tiny 30$}{\\footnotesize\\r{A}}\n \\end{tikzpicture}\n %\n \\end{overpic}\n \\caption{Annealing temperature effects on NiCoCr based on the Li-Sheng-Ma~ potential. \\textbf{a)} Heat capacity $C_p$ and \\textbf{b)} thermal expansion coefficient $\\alpha_p$ versus annealing temperature $T_a$. The inset represents a $(111)$ cross-section of Ni (grey), Co (blue), and Cr (red) atoms at $T_a=800$ K.} \n \\label{fig:thermo}\n\\end{figure*}\n\\begin{figure*}[t]\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/Equilibrated_400_hq.jpg}\n \\LabelFig{12}{12}{$a)$}\n \\end{overpic}\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/Annealed_400_hq.jpg}\n \\LabelFig{13}{10}{$b)$}\n \\end{overpic}\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/Annealed_800_hq.jpg}\n \\LabelFig{13}{10.5}{$c)$}\n \\end{overpic}\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/Annealed_1400_hq.jpg}\n \\LabelFig{12}{10}{$d)$}\n \\end{overpic}\n \\caption{Snapshots of NiCoCr samples \\textbf{a)} RSA equilibrated at $T=400$ K, \\textbf{b)} annealed at $T_a=400$ K, \\textbf{c)} annealed at $T_a=800$ K, and \\textbf{d)} annealed at $T_a=1400$ K.}\n \\label{fig:Annealed}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_11_sheng.png}\n %\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiNi}$}\n %\n \\LabelFig{19}{76}{$a)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_12_sheng.png}\n 
\\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiCo}$}\n %\n \\LabelFig{19}{76}{$b)$}\n \n \\begin{tikzpicture}\n \\legCirc{1.6}{3}{black}{\\footnotesize annealed}{0.9}\n \\legSq{1.6}{2.6}{red}{\\footnotesize random CSA}{1.2}\n \\end{tikzpicture}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_13_sheng.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiCr}$}\n %\n \\LabelFig{19}{76}{$c)$}\n %\n \\put(53,50) {\\includegraphics[width=.15\\textwidth]{Figs\/gr_sheng.png}}\n %\n \\Labelxy{78}{48}{0}{$\\scriptstyle r$\\scriptsize (\\r{A})}\n \\Labelxy{116}{80}{90}{$\\scriptstyle g(r)$}\n \\LabelFig{104}{105}{$g)$}\n %\n \\end{overpic}\n %\n \n \n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_22_sheng.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{CoCo}$}\n %\n \\LabelFig{19}{76}{$d)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_23_sheng.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{CoCr}$}\n %\n \\LabelFig{19}{76}{$e)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_33_sheng.png}\n %\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{CrCr}$}\n \\LabelFig{19}{76}{$f)$}\n %\n \\end{overpic}\n \\caption{Short range ordering in annealed NiCoCr CSAs based on the Li-Sheng-Ma~ potential. Warren\u2013Cowley SRO parameters including \\textbf{a)} $p_\\text{NiNi}$ \\textbf{b)} $p_\\text{NiCo}$ \\textbf{c)} $p_\\text{NiCr}$ \\textbf{d)} $p_\\text{CoCo}$ \\textbf{e)} $p_\\text{CoCr}$ \\textbf{f)} $p_\\text{CrCr}$ plotted against distance $r$ at $T_a=400$ K. \\textbf{g)} Pair correlation function $g(r)$ at $T_a=400$ K. 
The base (red) dashdotted line indicates the random concentration.} \n \\label{fig:sroSheng}\n\\end{figure*}\n\n\\begin{figure}[b]\n \\begin{overpic}[width=0.28\\textwidth]{Figs\/wc_diff_sheng_index0.png}\n %\n \\LabelFig{17}{15}{$a)$}\n \\Labelxy{44}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{62}{71}{0}{$\\scriptstyle T_a$\\scriptsize (K)}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiNi}-p^\\text{rsa}_\\text{NiNi}$}\n %\n \\end{overpic}\n \\begin{overpic}[width=0.28\\textwidth]{Figs\/rms_sro_temperature.png}\n %\n \\Labelxy{44}{-3}{0}{$T_a$\\scriptsize (K)}\n \\Labelxy{-6}{35}{90}{$p^\\text{rms}_\\text{NiNi}$}\n %\n \\LabelFig{17}{15}{$b)$}\n\n \\end{overpic}\n %\n \\caption{SRO variations with annealing temperature $T_a$. \\textbf{a)} $p_\\text{NiNi}-p^\\text{rsa}_\\text{NiNi}$ as a function of pair distance $r$ and \\textbf{b)} root-mean-squared fluctuations $p^\\text{rms}_\\text{NiNi}$ plotted against $T_a$. The (red) base line indicates zero correlations associated with RSAs. The results are based on the Li-Sheng-Ma~ interatomic potential.} \n \\label{fig:sroTemp}\n\\end{figure}\n\nTo this end, we use a versatile approach to investigate the micro-structural\/chemical origin of strengthening in NiCoCr, including enthalpy-driven ordering effects and local distortions. \nWe perform hybrid Monte Carlo\/Molecular Dynamics (MC\/MD) simulations over a range of annealing temperatures based on two commonly-used NiCoCr interatomic potentials. \nWe find that the emergence of SROs is not a robust feature of annealed model NiCoCr CSAs but, to a great extent, depends on the chosen interatomic potential.\nMore specifically, the two models generate microstructurally different alloys (with\/without SROs) under the exact same thermal processing.\nFollowing the numerical framework in \\cite{Li2019}, we probed the effects of SROs in terms of local concentration fluctuations, stacking fault widths, dislocation glide resistance, and misfit volumes of NiCoCr, as well as thermodynamic properties such as the specific heat and thermal expansion coefficient. \nOur analysis indicates meaningful correlations of the above observables with varying degrees of SROs in aged alloys, making them easily distinguishable from RSAs. \n{We further observe a marked growth in the population of SROs inside the stacking fault region and remarkable strengthening against dislocation glide, the latter rooted in the interplay between short range ordering and local misfit properties.}\n\nThe organization of this paper is as follows.\nIn Sec.~\\ref{sec:methods}, we describe the numerical setup, sample preparation (including aging\/annealing), loading protocols, and relevant simulation details {including the hybrid MD\/MC model, interatomic forces, and shear test description}. 
\nSection~\\ref{sec:results} presents our simulation results relevant to the chemical\/microstructural characterization of SROs and their potential effects on the dynamics of dislocations.\nIn this context, Sec.~\\ref{sec:sro_temp} introduces robust structural\/compositional metrics {associated with local elemental fluctuations } to characterize the temperature-dependence of SROs and the {distribution of misfit volumes}.\nLattice distortions in the presence of short range ordering are discussed in Sec.~\\ref{sec:LatticeDistortions}.\nIn Sec.~\\ref{sec:sro_disl}, we provide an in-depth analysis of partial dislocations and their depinning mechanism in the presence of SROs.\n{This includes auto-correlation analyses associated with the dislocation line fluctuations as well as local variations of the dislocation velocity.}\nSection~\\ref{sec:discussions} presents relevant discussions and conclusions.\n\n\n\\section{Conclusions}\\label{sec:conclusions}\n\n\\begin{acknowledgments}\nThis research was funded by the European Union Horizon 2020 research and innovation program under grant agreement no. 857470 and by the European Regional Development Fund via the Foundation for Polish Science International Research Agenda PLUS program, grant no. 
MAB PLUS\/2018\/8.\n\\end{acknowledgments}\n\\section{\\label{sec:methods}Methods and Materials}\nMolecular dynamics simulations were carried out in LAMMPS \\cite{LAMMPS} using atomistic samples of $N=500,000$ and $1,700,000$ atoms within a three-dimensional periodic cell.\nIn order to study SRO properties (in the absence of dislocations), we prepared cubic samples with length $10$~nm along\nthe $x[100]$, $y[010]$, and $z[001]$ directions.\nThe NPT ensemble was implemented via a Nose-Hoover thermostat and barostat with relaxation time scales $\\tau_d^\\text{therm}=10$ fs and $\\tau_d^\\text{bar}=100$ fs.\nWe also set the discretization time to $\\Delta t\\simeq 1.0$ fs.\nSamples were initially prepared via an energy minimization at $T=0$ K (at a fixed volume) and subsequently thermalized at different temperatures and constant pressure $P=0$ bar for $100$ ps prior to annealing. \n\nThe interatomic forces are based on two commonly-used embedded-atom method (EAM) potentials in the context of NiCoCr~ solid solution alloys: \\romn{1}) the Li-Sheng-Ma potential proposed in \\cite{Li2019}, which has been utilized in recent SRO studies, modeling dislocation nucleation and glide dynamics \\cite{cao2020novel,jian2020effects}, and nanoindentation tests \\cite{yang2022chemical}; and\n\\romn{2}) the EAM Farkas\u2013Caro potential \\cite{farkas2018model}, originally developed to model equimolar high-entropy FeNiCoCrCu~ alloys but used here to test the robustness of SRO formation against the choice of potential.\n\nAnnealed configurations were obtained by performing hybrid MC\/MD simulations based on the variance constrained semi-grand canonical (VCSGC) ensemble \\cite{PhysRevB.85.184203} within the annealing temperature range $T_a=400-1300$ K. 
\nIn order to determine the values of $\\Delta\\mu_{X_1X_2} = \\mu_{X_1} - \\mu_{X_2}$ that minimize the composition errors, we performed a set of semi-grand canonical simulations varying the chemical composition at $T = 1500$ K and fitted the MC data using the equation \\(\\Delta\\mu(X_1,P,T)=T\\ln(X_1\/[1-X_1])+\\sum_{i=0}^{n}A_iX_{1}^{i}\\) (Fig.~\\ref{fig:Chemical Poyentials}), where $X_1$ is the reference element (Ni in our work) and $A_i$ are the fitting parameters \\cite{Becker}. This allows us to perform hybrid MD\/VCSGC-MC simulations with a fixed target composition. During the annealing process, we performed one MC cycle, consisting of $N\/2$ transmutation attempts, every $20$ MD steps. \nWe carried out a total of $800,000$ MC cycles at all annealing temperatures $T_a$ to ensure that the configurations reach thermal equilibrium and that the structure of the SROs is statistically stationary.\n\n\n\nWe also studied the dynamics of a $\\frac{1}{2}[\\bar{1}10](111)$ edge dislocation which, under an external perturbation, dissociates into two separate partials bounding a stacking fault in face-centered cubic (fcc) crystals.\nTo this end, we constructed a simulation cell with dimensions $L_x\\simeq80$~nm, $L_y\\simeq 20$~nm, and $L_z\\simeq 15$~nm (see Fig.~\\ref{fig:sketchLoadSetup}) and performed annealing at a desired temperature $T_a$.\nWe subsequently equilibrated the annealed alloy at a low temperature $T=5$~K and pressure $P=0$ for a duration of $100$ ps.\nTo create an edge dislocation within the aged sample, we used the periodic array of dislocations (PAD) model proposed in \\cite{osetsky2003} to ensure a periodic setup along the Burgers vector ($x[\\bar{1}10]$) and dislocation line ($z[\\bar{1}\\bar{1}2]$). 
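The chemical-potential calibration described above can be illustrated with a minimal sketch (not the authors' code; the coefficients and composition data below are synthetic and purely hypothetical, and reduced units with $k_B=1$ are assumed):

```python
# Illustrative sketch of fitting Delta-mu(X1) = T*ln(X1/(1-X1)) + sum_i A_i X1^i
# to (synthetic) semi-grand-canonical composition data; hypothetical values only.
import numpy as np

def delta_mu(X1, T, A):
    """Chemical-potential difference model: ideal-mixing term plus polynomial."""
    poly = sum(a * X1**i for i, a in enumerate(A))
    return T * np.log(X1 / (1.0 - X1)) + poly

def fit_polynomial_part(X1, dmu, T, order=3):
    """Least-squares fit of A_0..A_n after subtracting the ideal-mixing term."""
    residual = dmu - T * np.log(X1 / (1.0 - X1))
    return np.polyfit(X1, residual, order)[::-1]  # reorder to A_0..A_n

T = 1500.0                        # fitting temperature used in the text
A_true = [0.5, -1.2, 0.8, 0.3]    # hypothetical coefficients
X1 = np.linspace(0.2, 0.5, 30)    # sampled reference-element concentrations
dmu = delta_mu(X1, T, A_true)     # synthetic "MC data"
A_fit = fit_polynomial_part(X1, dmu, T)
```

With exact cubic input the least-squares fit recovers the coefficients, mirroring how the fitted $A_i$ fix the target composition in the hybrid MD/VCSGC-MC runs.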
\nThe dislocated sample was further relaxed using the NPT framework with $P_{xx}=P_{zz}=0$ and $T=5$ K (for $100$ ps), leading to dissociation of the dislocation into two Shockley partials.\nA stress-controlled framework was employed within the NVT ensemble at $T=5$ K by applying additional forces on the top plane (normal to $y$), with the bottom layer (thickness $\\simeq 4$ \\r{A}) held fixed during the course of the shear simulations. \nThe applied stress was gradually increased (in a quasi-static fashion) from $\\sigma=50$~MPa to above the critical (depinning) stress in order to move the partial dislocations.\n\n\n\n\n\n\n\n\n\\section*{Supplementary Materials}\nIn this Supplementary Material, we present further discussion of local concentration fluctuations of model NiCoCr CSAs using the Farkas-Caro~ potential function.\nFigure~\\ref{fig:sroFarkas}(a-f) illustrates that the Warren\u2013Cowley SRO parameters $p_{ab}$ associated with the annealed CSAs are statistically indistinguishable from $p_{ab}^{\\text{rsa}}$ for all six (distinct) elemental pairs at $T_a=400$ K.\nTherefore, the former can essentially be treated as RSAs, with random distributions of the chemical compositions and associated fluctuations that exhibit the expected scale-dependence due to randomness (see Fig.~\\ref{fig:std_concentrationFluctuations} (b,d)).\n\n\n\\begin{figure*}\n \\centering\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_11_farkas.png}\n %\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiNi}$}\n %\n \\LabelFig{19}{76}{$a)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_12_farkas.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiCo}$}\n %\n \\LabelFig{19}{76}{$b)$}\n \n \\begin{tikzpicture}\n \\legCirc{0.8}{1.1}{black}{\\footnotesize annealed}{0.9}\n \\legSq{0.8}{0.7}{red}{\\footnotesize random 
CSA}{1.2}\n \\end{tikzpicture}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_13_farkas.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{NiCr}$}\n %\n \\LabelFig{19}{76}{$c)$}\n %\n %\n \\end{overpic}\n %\n \n \n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_22_farkas.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{CoCo}$}\n %\n \\LabelFig{19}{76}{$d)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_23_farkas.png}\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{CoCr}$}\n %\n \\LabelFig{19}{76}{$e)$}\n \\end{overpic}\n %\n \\begin{overpic}[width=0.24\\textwidth]{Figs\/wc_33_farkas.png}\n %\n \\Labelxy{50}{-3}{0}{$r$(\\r{A})}\n \\Labelxy{-6}{35}{90}{$p_\\text{CrCr}$}\n \\LabelFig{19}{76}{$f)$}\n %\n \\end{overpic}\n \\caption{Short range ordering in annealed NiCoCr CSAs based on the Farkas-Caro~ potential. Warren\u2013Cowley SRO parameters including \\textbf{a)} $p_\\text{NiNi}$ \\textbf{b)} $p_\\text{NiCo}$ \\textbf{c)} $p_\\text{NiCr}$ \\textbf{d)} $p_\\text{CoCo}$ \\textbf{e)} $p_\\text{CoCr}$ \\textbf{f)} $p_\\text{CrCr}$ plotted against distance $r$ at $T_a=400$ K. The base (red) dashdotted line indicates the random concentration.} \n \\label{fig:sroFarkas}\n\\end{figure*}","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
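The Warren-Cowley-type conditional pair probabilities $p_{ab}$ analyzed above can be illustrated with a toy sketch (a hypothetical helper on a periodic 1D two-species chain, not the 3D fcc neighbor analysis used in this work; for a random arrangement $p_{ab}$ approaches the concentration $c_b$, and deviations signal ordering or anti-ordering):

```python
# Toy illustration (not the authors' analysis code): conditional pair
# probabilities p_ab = P(neighbor is species b | center is species a)
# on a periodic 1D chain; deviations from c_b indicate (anti-)ordering.
import numpy as np

def pair_probability(types, a, b, shell=1):
    """Fraction of `shell`-th neighbors of species-a atoms that are species b."""
    types = np.asarray(types)
    centers = np.flatnonzero(types == a)
    left = types[(centers - shell) % types.size]    # periodic boundaries
    right = types[(centers + shell) % types.size]
    return float(np.mean(np.concatenate([left, right]) == b))

# Perfectly "ordered" alternating chain of species 0 and 1:
chain = np.array([0, 1] * 50)
p_unlike = pair_probability(chain, 0, 1, shell=1)  # nearest neighbors always unlike
p_like = pair_probability(chain, 0, 0, shell=2)    # second neighbors always like
```

In this extreme case both probabilities equal one, far from the random baseline $c_b=0.5$; the $p_{ab}(r)$ curves in the figures generalize this measure to radial shells in the 3D alloy.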