diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzecnm" "b/data_all_eng_slimpj/shuffled/split2/finalzzecnm" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzecnm" @@ -0,0 +1,5 @@ +{"text":"\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nTo the best of our knowledge, our {\\small GRETA}\\ approach is the first to aggregate event trends that are matched by nested Kleene patterns without constructing these trends. We achieve this goal by compactly encoding all event trends into the {\\small GRETA}\\ graph and dynamically propagating the aggregates along the edges of the graph during graph construction. We prove that our approach has optimal time complexity. Our experiments demonstrate that {\\small GRETA}\\ achieves up to four orders of magnitude speed-up and requires up to 50--fold less memory than the state-of-the-art solutions.\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\nIn this section, we sketch how our {\\small GRETA}\\ approach can be extended to support additional language features.\n\n\\textbf{Other Aggregation Functions}.\nWhile Theorem~\\ref{theorem:count} defines event trend count computation, i.e., \\textsf{COUNT}(*), we now sketch how the principles of incremental event trend aggregation proposed by our {\\small GRETA}\\ approach apply to other aggregation functions (Definition~\\ref{def:query}).\n\n\\vspace{-2mm}\n\\begin{theorem}[\\textbf{Event Trend Aggregation Computation}]\nLet $G$ be the {\\small GRETA}\\ graph for a query $q$ and a stream $I$,\n$e,e' \\in I$ be events in $G$ such that \n$e.type=E$, \n$e'.type \\neq E$, \n$attr$ is an attribute of $e$, \n$Pr$ and $Pr'$ be the predecessor events of $e$ and $e'$ respectively in $G$, and\n$End$ be the \\textsf{END} events in $G$.\n\\[\n\\begin{array}{l}\ne.count_E = e.count + \\sum_{p \\in Pr} p.count_E\\\\\ne'.count_E = \\sum_{p' \\in Pr'} p'.count_E\\\\\n\\textsf{COUNT}(E) = \\sum_{end \\in End} end.count_E\n\\vspace*{2mm}\\\\\n\ne.min = \\textsf{min}_{p \\in Pr}(e.attr, p.min)\\\\\ne'.min = \\textsf{min}_{p' \\in Pr'}(p'.min)\\\\\n\\textsf{MIN}(E.attr) = \\textsf{min}_{end \\in End}(end.min)\n\\vspace*{2mm}\\\\\n\ne.max = \\textsf{max}_{p \\in Pr}(e.attr, p.max)\\\\\ne'.max = \\textsf{max}_{p' \\in Pr'}(p'.max)\\\\\n\\textsf{MAX}(E.attr) = \\textsf{max}_{end \\in End}(end.max)\n\\vspace*{2mm}\\\\\n\ne.sum = e.attr*e.count + \\sum_{p \\in Pr} p.sum\\\\\ne'.sum = \\sum_{p' \\in Pr'} p'.sum\\\\\n\\textsf{SUM}(E.attr) = \\sum_{end \\in End} end.sum\n\\vspace*{2mm}\\\\\n\n\\textsf{AVG}(E.attr) = \\textsf{SUM}(E.attr) \/ \\textsf{COUNT}(E)\n\\end{array}\n\\]\n\\label{theorem:aggregate}\n\\end{theorem}\n\\vspace{-4mm}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[\\small \\textsf{COUNT}(*)=11]{\n\\includegraphics[width=0.22\\columnwidth]{figures\/count_star.png} \n\\label{fig:count_star}\n}\n\\hspace*{2mm}\n\\subfigure[\\small \\textsf{COUNT}(A)=20]{\n\\includegraphics[width=0.22\\columnwidth]{figures\/count_e.png} \n\\label{fig:count_e}\n}\n\\hspace*{2mm}\n\\subfigure[\\small \\textsf{MIN}(A.attr)=4]{\n\\includegraphics[width=0.22\\columnwidth]{figures\/min.png} \n\\label{fig:min}\n}\n\\hspace*{2mm}\n\\subfigure[\\small \\textsf{SUM}(A.attr)=100]{\n\\includegraphics[width=0.22\\columnwidth]{figures\/sum.png} \n\\label{fig:sum}\n}\n\\caption{Aggregation of trends matched by the pattern $P=(\\textsf{SEQ}(A+,B))+$ in the stream \\textit{I = \\{a1, b2, a3, a4, b7\\}} where \\textit{a1.attr=5, a3.attr=6,} and \\textit{a4.attr=4}}\n\\label{fig:aggregates}\n\\end{figure} 
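\n\nTo illustrate how the propagation rules of Theorem~\ref{theorem:aggregate} can be applied at runtime, the following minimal Java sketch folds the aggregates of the predecessor events of a newly inserted event into that event. The sketch is illustrative only: the class and field names are hypothetical, the trend count of Theorem~\ref{theorem:count} is assumed to be already computed, and windows, predicates, and grouping are omitted.\n\begin{lstlisting}[]\n\/\/ Illustrative sketch; class and field names are hypothetical.\nfinal class Vertex {\n  final double attr;      \/\/ aggregated attribute value of this event\n  final boolean isTypeE;  \/\/ true if this event is of the aggregated type E\n  long count;             \/\/ trend count of this event, assumed already computed\n  long countE = 0;\n  double min = Double.POSITIVE_INFINITY;\n  double max = Double.NEGATIVE_INFINITY;\n  double sum = 0;\n  Vertex(double attr, boolean isTypeE, long count) {\n    this.attr = attr; this.isTypeE = isTypeE; this.count = count;\n  }\n  \/\/ Fold the aggregates of all predecessor events into this event.\n  void propagateFrom(java.util.List<Vertex> predecessors) {\n    for (Vertex p : predecessors) {\n      countE += p.countE;\n      min = Math.min(min, p.min);\n      max = Math.max(max, p.max);\n      sum += p.sum;\n    }\n    if (isTypeE) {           \/\/ only events of type E contribute their own attribute\n      countE += count;       \/\/ e.countE = e.count + sum over p.countE\n      min = Math.min(min, attr);\n      max = Math.max(max, attr);\n      sum += attr * count;   \/\/ e.sum = e.attr * e.count + sum over p.sum\n    }\n  }\n}\n\/\/ The final aggregates are still combined over the END events of the graph.\n\end{lstlisting}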
\n\nAnalogously to Theorem~\\ref{theorem:count}, Theorem~\\ref{theorem:aggregate} can be proven by induction on the number of events in the graph $G$.\n\n\\begin{example}\nIn Figure~\\ref{fig:aggregates}, we compute \\textsf{COUNT}$(*)$, \\textsf{COUNT}$(A)$, \\textsf{MIN}$(A.attr)$, and \\textsf{SUM}$(A.attr)$ based on the {\\small GRETA}\\ graph for the pattern $P$ and the stream $I$. Compare the aggregation results with Example~\\ref{ex:aggregates}. \n\\textsf{MAX}$(A.attr)=6$ is computed analogously to \\textsf{MIN}$(A.attr)$. \n\\textsf{AVG}$(A.attr)$ is computed based on \\textsf{SUM}$(A.attr)$ and \\textsf{COUNT} $(A)$.\n\\end{example}\n\n\\textbf{Disjunction} and \\textbf{Conjunction} can be supported by our {\\small GRETA}\\ approach without changing its complexity because the count for a disjunctive or a conjunctive pattern $P$ can be computed based on the counts for the sub-patterns of $P$ as defined below.\nLet $P_i$ and $P_j$ be patterns (Definition~\\ref{def:pattern}).\nLet $P_{ij}$ be the pattern that detects trends matched by both $P_i$ and $P_j$.\n$P_{ij}$ can be obtained from its DFA representation that corresponds to the intersection of the DFAs for $P_i$ and $P_j$~\\cite{theoretical-info}.\nLet $\\textsf{COUNT}(P)$ denote the number of trends matched by a pattern $P$.\nLet \n$C_{ij} = \\textsf{COUNT}(P_{ij})$,\n$C_i = \\textsf{COUNT}(P_i) - C_{ij}$, and \n$C_j = \\textsf{COUNT}(P_j) - C_{ij}$.\n\nIn contrast to event sequence and Kleene plus (Definition~\\ref{def:pattern}),\ndisjunctive and conjunctive patterns do not impose a time order constraint upon trends matched by their sub-patterns.\n\n\\textbf{\\textit{Disjunction}} $(P_i \\vee P_j)$ matches a trend that is a match of $P_i$ or $P_j$. \n$\\textsf{COUNT}(P_i \\vee P_j) = C_i + C_j - C_{ij}$.\n$C_{ij}$ is subtracted to avoid counting trends matched by $P_{ij}$ twice.\n\n\\textbf{\\textit{Conjunction}} $(P_i \\wedge P_j)$ matches a pair of trends $tr_i$ and $tr_j$ where $tr_i$ is a match of $P_i$ and $tr_j$ is a match of $P_j$. \n$\\textsf{COUNT}(P_i \\wedge P_j) = \nC_i * C_j + \nC_i * C_{ij} +\nC_j * C_{ij} +\n\\binom{C_{ij}}{2}$\nsince each trend detected only by $P_i$ (not by $P_j$) is combined with each trend detected only by $P_j$ (not by $P_i$). In addition, each trend detected by $P_{ij}$ is combined with each other trend detected only by $P_i$, only by $P_j$, or by $P_{ij}$.\n\n\\textbf{Kleene Star} and \\textbf{Optional Sub-patterns} can also be supported without changing the complexity because they are syntactic sugar operators. Indeed,\n$\\textsf{SEQ}(P_i*,P_j) = \\textsf{SEQ}(P_i+,$ $P_j) \\vee P_j$ and \n$\\textsf{SEQ}(P_i?,P_j) = \\textsf{SEQ}(P_i,P_j) \\vee P_j$.\n\n\\textbf{Constraints on Minimal Trend Length}.\nWhile our language does not have an explicit constraint on the minimal length of a trend, one way to model this constraint in {\\small GRETA}\\ is to unroll a pattern to its minimal length. For example, assume we want to detect trends matched by the pattern $A+$ and with minimal length 3. Then, we unroll the pattern $A+$ to length 3 as follows: $\\textsf{SEQ}(A,A,A+)$. \n\nAny correct trend processing strategy must keep all current trends, including those which did not reach the minimal length yet. Thus, these constraints do not change the complexity of trend detection. They could be added to our language as syntactic sugar. 
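\n\nTo make the unrolling above concrete, the following small Java sketch rewrites a Kleene plus pattern over a single event type into its unrolled form for a given minimal trend length. It operates on a hypothetical string representation of patterns rather than on our actual query data structures.\n\begin{lstlisting}[]\nfinal class PatternUnrolling {\n  \/\/ unroll(\"A\", 3) returns \"SEQ(A, A, A+)\".\n  static String unroll(String eventType, int minLength) {\n    StringBuilder pattern = new StringBuilder(\"SEQ(\");\n    for (int i = 0; i < minLength - 1; i++) {\n      pattern.append(eventType).append(\", \");\n    }\n    return pattern.append(eventType).append(\"+)\").toString();\n  }\n}\n\end{lstlisting}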
\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l|c|c|}\n\\hline\nSemantics & \\textbf{Skipped events} & \\textbf{\\# of trends} \\\\\n\\hline\n\\hline\n\\textbf{Skip-till-any-match}\n& Any\n& Exponential \\\\\n\\hline\n\\textbf{Skip-till-next-match}\n& Irrelevant\n& \\multirow{2}{*}{Polynomial} \\\\\n\\cline{1-2}\n\\textbf{Contiguous}\n& None\n& \\\\\n\\hline\n\\end{tabular}\n\\vspace*{2mm}\n\\caption{Event selection semantics}\n\\label{tab:ess}\n\\end{table}\n\n\\textbf{Event Selection Semantics} are summarized in Table~\\ref{tab:ess}. As explained in Section~\\ref{sec:model}, we focus on Kleene patterns evaluated under the most flexible semantics returning all matches, called \\textit{skip-till-any-match} in the literature~\\cite{ADGI08, WDR06, ZDI14}. \nOther semantics return certain subsets of matches~\\cite{ADGI08, WDR06, ZDI14}. \\textit{Skip-till-next-match} skips only those \\textit{events that cannot be matched}, while \\textit{contiguous} semantics skips \\textit{no} event. \nTo support these semantics, Definition~\\ref{def:pattern} of adjacent events in a trend must be adjusted. Then, fewer edges would be established in the {\\small GRETA}\\ graph than for skip-till-any-match resulting in fewer trends. Based on this modified graph, Theorem~\\ref{theorem:count} defines the event trend count computation.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.5\\columnwidth]{figures\/kleene5.png} \n\\vspace{-2mm} \n\\caption{Count of trends matched by the pattern $P$ in the stream $I=\\{a1,b2,a3,a4,b5\\}$}\n\\label{fig:kleene5}\n\\end{figure}\n\n\\textbf{Multiple Event Type Occurrences in a Pattern}.\nWhile in Section~\\ref{sec:model} we assumed for simplicity that an event type may occur in a pattern at most once, we now sketch a few modifications of our {\\small GRETA}\\ approach allowing to drop this assumption. \nFirst, we assign a unique identifier to each event type. For example, \n\\textsf{SEQ}(A+,B,A,A+,B+)\nis translated into \nP=\\textsf{SEQ}(A1+,B2,A3,A4+,B5+). \nThen, each state in a GRE\\-TA template has a unique label (Figure~\\ref{fig:kleene5}).\nOur {\\small GRETA}\\ approach still applies with the following modifications. \n(1) Events in the first sub-graph are \\textsf{START} events, while events in the last sub-graph are \\textsf{END} events.\n(2)~An event $e$ may not be its own predecessor event since an event may occur at most once in a trend.\n(3)~An event $e$ may be inserted into several sub-graphs. Namely, $e$ is inserted into a sub-graph for $e.type$ if $e$ is a \\textsf{START} event or $e$ has predecessor events. \nFor example, a4 is inserted into the sub-graphs for A1, A3, and A4 in Figure~\\ref{fig:kleene5}. a4 is a \\textsf{START} event in the sub-graph for A1. b5 is inserted into the sub-graphs for B2 and B5. b5 is an \\textsf{END} event in the sub-graph for B5.\n\nSince an event is compared to each previous event in the graph in the worst case, our {\\small GRETA}\\ approach still has quadratic time complexity $O(n^2 k)$ where $n$ is the number of events per window and $k$ is the number of windows into which an event falls (Theorem~\\ref{theorem:optimality}).\nLet $t$ be the number of occurrences of an event type in a pattern. Then, each event is inserted into $t$ sub-graphs in the worst case. 
Thus, the space complexity increases by the multiplicative factor $t$, i.e., $O(t n k)$, where $n$ remains the dominating cost factor for high-rate streams and meaningful patterns (Theorem~\\ref{theorem:complexity}).\n\n\n\n\n\\section{Performance Evaluation}\n\\label{sec:evaluation}\n\n\\subsection{Experimental Setup}\n\\label{sec:exp_setup}\n\n\\textbf{Infrastructure}. \nWe have implemented our {\\small GRETA}\\ approach in Java with JRE 1.7.0\\_25 running on Ubuntu 14.04 with 16-core 3.4GHz CPU and 128GB of RAM. We execute each experiment three times and report their average.\n\n\\textbf{Data Sets}. \nWe evaluate the performance of our {\\small GRETA}\\ approach using the following data sets.\n\n$\\bullet$~\\textbf{\\textit{Stock Real Data Set}}. \nWe use the real NYSE data set~\\cite{stockStream} with 225k transaction records of 10 companies. Each event carries volume, price, time stamp in seconds, type (sell or buy), company, sector, and transaction identifiers. We replicate this data set 10 times. \n\n$\\bullet$~\\textbf{\\textit{Linear Road Benchmark Data Set}}. \nWe use the traffic simulator of the Linear Road benchmark~\\cite{linear_road} for streaming systems to generate a stream of position reports from vehicles for 3 hours. Each position report carries a time stamp in seconds, a vehicle identifier, its current position, and speed. Event rate gradually increases during 3 hours until it reaches 4k events per second. \n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{|l||l|l|}\n\\hline\n\\textbf{Attribute} & \\textbf{Distribution} & \\textbf{min--max} \\\\\n\\hline\n\\hline\nMapper id, job id & Uniform & 0--10 \\\\\n\\hline\nCPU, memory & Uniform & 0--1k \\\\\n\\hline\nLoad & Poisson with $\\lambda=100$ & 0--10k \\\\\n\\hline\n\\end{tabular}\n\\vspace{2mm}\n\\caption{Attribute values}\n\\label{tab:parameters}\n\\end{table}\n\n$\\bullet$~\\textbf{\\textit{Cluster Monitoring Data Set}}. \nOur stream generator creates cluster performance measurements for 3 hours. Each event carries a time stamp in seconds, mapper and job identifiers, CPU, memory, and load measurements. The distribution of attribute values is summarized in Table~\\ref{tab:parameters}. The stream rate is 3k events per second.\n\n\\textbf{Event Queries}. \nUnless stated otherwise, we evaluate query $Q_1$ (Section~\\ref{sec:introduction}) and its nine variations against the stock data set. These query variations differ by the predicate $S.price * X < \\textsf{NEXT}(S).price$ that requires the price to increase (or decrease with $>$) by $X \\in \\{1, 1.05, 1.1, 1.15, 1.2\\}$ percent from one event to the next in a trend. \nSimilarly, we evaluate query $Q_2$ and its nine variations against the cluster data set, and query $Q_3$ and its nine variations against the Linear Road data set. \nWe have chosen these queries because they contain all clauses (Definition~\\ref{def:query}) and allow us to measure the effect of each clause on the number of matched trends. The number of matched trends ranges from few hundreds to trillions.\nIn particular, we vary the number of events per window, presence of negative sub-patterns, predicate selectivity, and number of event trend groups.\n\n\n\\textbf{Methodology}. \nWe compare {\\small GRETA}\\ to CET~\\cite{PLAR17}, SA\\-SE~\\cite{ZDI14}, and Flink~\\cite{flink}.\nTo achieve a fair comparison, we have implemented CET and SASE on top of our platform. We execute Flink on the same hardware as our platform. 
While Section~\\ref{sec:related_work} is devoted to a detailed discussion of these approaches, we briefly sketch their main ideas below.\n\n$\\bullet$~\\textbf{\\textit{CET}}~\\cite{PLAR17} is the state-of-the-art approach to event trend detection. It stores and reuses partial event trends while constructing the final event trends. Thus, it avoids the re-computation of common sub-trends. While CET does not explicitly support aggregation, we extended this approach to aggregate event trends upon their construction.\n\n$\\bullet$~\\textbf{\\textit{SASE}}~\\cite{ZDI14} supports aggregation, nested Kleene patterns, predicates, and windows. It implements the two-step approach as follows. \n(1)~Each event $e$ is stored in a stack and pointers to $e$'s previous events in a trend are stored. For each window, a DFS-based algorithm traverses these pointers to construct all trends. \n(2)~These trends are aggregated.\n\n$\\bullet$~\\textbf{\\textit{Flink}}~\\cite{flink} is an open-source streaming platform that supports event pattern matching. We express our queries using Flink operators. Like other industrial systems~\\cite{esper, dataflow, streaminsight}, Flink does not explicitly support Kleene closure. Thus, we flatten our queries, i.e., for each Kleene query $q$ we determine the length $l$ of the longest match of $q$. We specify a set of fixed-length event sequence queries that cover all possible lengths from 1 to $l$. Flink is a two-step approach.\n\n\\textbf{Metrics}. \nWe measure common metrics for streaming systems, namely, \\textit{latency, throughput}, and \\textit{memory}. \n\\textit{Latency} measured in milliseconds corresponds to the peak time difference between the time of the aggregation result output and the arrival time of the latest event that contributes to the respective result.\n\\textit{Throughput} corresponds to the average number of events processed by all queries per second.\n\\textit{Memory} consumption measured in bytes is the peak memory for storing\nthe {\\small GRETA}\\ graph for {\\small GRETA},\nthe CET graph and trends for CET,\nevents in stacks, pointers between them, and trends for SASE, and \ntrends for Flink.\n\n\\begin{figure*}[t]\n\t\\centering\n \\subfigure[Latency]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/events\/bars\/latency\/events-latency.png}\n \t \\label{fig:events-latency}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Memory]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/events\/bars\/memory\/events-memory.png}\n \t\\label{fig:events-memory}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Throughput]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/events\/bars\/throughput\/events-throughput.png}\n \t \\label{fig:events-throughput}\n\t}\n\t\\vspace{-3mm}\n\t\\caption{Positive patterns (Stock real data set)}\n\t\\label{fig:exp_positive}\n\t\\subfigure[Latency]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/negation\/bars\/latency\/negation-latency.png}\n \t \\label{fig:negation-latency}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Memory]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/negation\/bars\/memory\/negation-memory.png}\n \t\\label{fig:negation-memory}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Throughput]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/negation\/bars\/throughput\/negation-throughput.png}\n \t \\label{fig:negation-throughput}\n\t}\n\t\\vspace{-3mm}\n\t\\caption{Patterns with negative sub-patterns (Stock real data 
set)}\n\t\\label{fig:exp_negative}\n\\end{figure*}\n\n\\subsection{Number of Events per Window} \n\\label{exp:window}\n\n\\textbf{Positive Patterns}.\nIn Figure~\\ref{fig:exp_positive}, we evaluate positive patterns against the stock real data set while varying the number of events per window. \n\n\\textit{\\textbf{Flink}} does not terminate within several hours if the number of events exceeds 100k because Flink is a two-step approach that evaluates a set of event sequence queries for each Kleene query. Both the unnecessary event sequence construction and the increased query workload degrade the performance of Flink. For 100k events per window, Flink requires 82 minutes to terminate, while its memory requirement for storing all event sequences is close to 1GB. Thus, Flink is neither real time nor lightweight.\n\n\\textit{\\textbf{SASE}}. The latency of SASE grows exponentially in the number of events until it fails to terminate for more than 500k events. Its throughput degrades exponentially. Delayed responsiveness of SASE is explained by the DFS-based stack traversal which re-computes each sub-trend $tr$ for each longer trend containing $tr$. The memory requirement of SASE exceeds the memory consumption of {\\small GRETA}\\ 50--fold because DFS stores the trend that is currently being constructed. Since the length of a trend is unbounded, the peak memory consumption of SASE is significant.\n\n\\textit{\\textbf{CET}}. Similarly to SASE, the latency of CET grows exponentially in the number of events until it fails to terminate for more than 700k events. Its throughput degrades exponentially until it becomes negligible for over 500k events. In contrast to SASE, CET utilizes the available memory to store and reuse common sub-trends instead of recomputing them. To achieve almost double speed-up compared to SASE, CET requires 3 orders of magnitude more memory than SASE for 500k events.\n\n\\textit{\\textbf{{\\small GRETA}}} consistently outperforms all above two-step approaches regarding all three metrics because it does not waste computational resources to construct and store exponentially many event trends. Instead, {\\small GRETA}\\ incrementally computes event trend aggregation. Thus, it achieves 4 orders of magnitude speed-up compared to all above approaches. \n{\\small GRETA}\\ also requires 4 orders of magnitude less memory than Flink and CET since these approaches store event trends. The memory requirement of {\\small GRETA}\\ is comparable to SASE because SASE stores only one trend at a time. Nevertheless, {\\small GRETA}\\ requires 50--fold less memory than SASE for 500k events.\n\n\\textbf{Patterns with Negative Sub-Patterns}.\nIn Figure~\\ref{fig:exp_negative}, we evaluate the same patterns as in Figure~\\ref{fig:exp_positive} but with negative sub-patterns against the stock real data set while varying the number of events. \nCompared to Figure~\\ref{fig:exp_positive}, the latency and memory consumption of all approaches except Flink significantly decreased, while their throughput increased. \nNegative sub-patterns have no significant effect on the performance of Flink because Flink evaluates multiple event sequence queries instead of one Kleene query and constructs all matched event sequences. \nIn contrast, negation reduces the {\\small GRETA}\\ graph, the CET graph, and the SASE stacks \\textit{before} event trends are constructed and aggregated based on these data structures. Thus, both CPU and memory costs reduce. 
Despite this reduction, SASE and CET fail to terminate for over 700k events.\n\n\\begin{figure*}[t]\n\t\\centering\n \\subfigure[Latency]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/predicates\/latency\/predicates-latency.png}\n \t \\label{fig:predicates-latency}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Memory]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/predicates\/memory\/predicates-memory.png}\n \t\\label{fig:predicates-memory}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Throughput]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/predicates\/throughput\/predicates-throughput.png}\n \t \\label{fig:predicates-throughput}\n\t}\n\t\\vspace{-3mm}\n\t\\caption{Selectivity of edge predicates (Linear Road benchmark data set)}\n\t\\label{fig:exp_predicates}\n\\end{figure*}\n\\begin{figure*}[t]\n\t\\centering\n \\subfigure[Latency]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/grouping\/latency\/grouping-latency.png}\n \t \\label{fig:grouping-latency}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Memory]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/grouping\/memory\/grouping-memory.png}\n \t\\label{fig:grouping-memory}\n\t}\n\t\\hspace*{5mm}\n\t\\subfigure[Throughput]{\n \t\\includegraphics[width=0.25\\columnwidth]{experiments-sources\/grouping\/throughput\/grouping-throughput.png}\n \t \\label{fig:grouping-throughput}\n\t}\n\t\\vspace{-3mm}\n\t\\caption{Number of event trend groups (Cluster monitoring data set)}\n\t\\label{fig:exp_grouping}\n\\end{figure*}\n\n\\subsection{Selectivity of Edge Predicates} \n\\label{exp:predicate}\n\nIn Figure~\\ref{fig:exp_predicates}, we evaluate positive patterns against the Linear Road benchmark data set while varying the selectivity of edge predicates. We focus on the selectivity of edge predicates because vertex predicates determine the number of trend groups (Section~\\ref{sec:filtering}) that is varied in Section~\\ref{exp:grouping}. To ensure that the two-step approaches terminate in most cases, we set the number of events per window to 100k.\n\nThe latency of Flink, SASE, and CET grows exponentially with the increasing predicate selectivity until they fail to terminate when the predicate selectivity exceeds 50\\%. In contrast, the performance of {\\small GRETA}\\ remains fairly stable regardless of the predicate selectivity. {\\small GRETA}\\ achieves 2 orders of magnitude speed-up and throughput improvement compared to CET for 50\\% predicate selectivity. \n\nThe memory requirement of Flink and CET grows exponentially (these lines coincide in Figure~\\ref{fig:predicates-memory}). The memory requirement of SASE remains fairly stable but almost 22--fold higher than for {\\small GRETA}\\ for 50\\% predicate selectivity. \n\n\\subsection{Number of Event Trend Groups} \n\\label{exp:grouping}\n\nIn Figure~\\ref{fig:exp_grouping}, we evaluate positive patterns against the cluster monitoring data set while varying the number of trend groups. The number of events per window is 100k. \n\nThe latency and memory consumption of Flink, SASE, and CET decrease exponentially with the increasing number of event trend groups, while their throughput increases exponentially. Since trends are constructed per group, their number and length decrease with the growing number of groups. Thus, both CPU and memory costs reduce.\nIn contrast, {\\small GRETA}\\ performs equally well independently from the number of groups since event trends are never constructed. 
Thus, {\small GRETA}\ achieves 4 orders of magnitude speed-up compared to Flink for 10 groups and 2 orders of magnitude speed-up compared to CET and SASE for 5 groups.\n\n\section{Other Language Clauses}\n\label{sec:filtering}\n\nWe now expand our {\small GRETA}\ approach to handle sliding windows, predicates, and grouping. \n\n\begin{figure*}[t]\n\begin{minipage}{0.78\textwidth}\n\centering\n\subfigure[\small {\small GRETA}\ sub-graph replication]{\n\t\includegraphics[width=0.58\columnwidth]{figures\/window-baseline.png}\n\t\label{fig:window-baseline}\t\n}\n\subfigure[\small {\small GRETA}\ sub-graph sharing]{\n\t\includegraphics[width=0.33\columnwidth]{figures\/window.png}\n\t\hspace*{0.2cm}\n\t\label{fig:window}\t\n}\n\vspace{-3mm}\n\caption{Sliding window \textsf{WITHIN} 10 seconds \textsf{SLIDE} 3 seconds}\n\label{fig:rest}\n\end{minipage}\n\begin{minipage}{0.2\textwidth}\n\centering\n\t\includegraphics[width=0.92\columnwidth]{figures\/predicates.png}\n\t\caption{Edge predicate $A.attr<\textsf{NEXT}(A).attr$}\n\t\label{fig:predicates}\n\end{minipage}\n\end{figure*}\n\n\textbf{Sliding Windows}.\nDue to the continuous nature of streaming, an event may contribute to the aggregation results in several overlapping windows. Furthermore, events may expire in some windows but remain valid in other windows. \n\n$\bullet$ \textbf{\textit{{\small GRETA}\ Sub-Graph Replication}}.\nA naive solution would build a {\small GRETA}\ graph for each window independently from other windows. Thus, it would replicate an event $e$ across all windows that $e$ falls into. Worse yet, this solution introduces repeated computations, since an event $p$ may be a predecessor event of $e$ in multiple windows. \n\n\vspace{-2mm}\n\begin{example}\nIn Figure~\ref{fig:window-baseline}, we count the number of trends matched by the pattern $(\textsf{SEQ}(A+,B))+$ within a 10-second-long window that slides every 3 seconds. The events $a1$--$b9$ fall into window $W_1$, while the events $a4$--$b9$ fall into window $W_2$. If a {\small GRETA}\ graph is constructed for each window, the events $a4$--$b9$ are replicated in both windows and their predecessor events are recomputed for each window.\n\label{ex:window-baseline}\n\end{example}\n\vspace{-2mm}\n\n$\bullet$ \textbf{\textit{{\small GRETA}\ Sub-Graph Sharing}}.\nTo avoid these drawbacks, we share a sub-graph $G$ across all windows to which $G$ belongs. Let $e$ be an event that falls into $k$ windows. The event $e$ is stored once and its predecessor events are computed once across all $k$ windows. The event $e$ maintains a count for each window. To differentiate between $k$ counts maintained by $e$, each window is assigned an identifier $wid$~\cite{LMTPT05-2}. The count with identifier $wid$ of $e$ ($e.count_{wid}$) is computed based on the counts with identifier $wid$ of $e$'s predecessor events (Line~10 in Algorithm~\ref{lst:ETA_algorithm}). The final count for a window $wid$ ($final\_count_{wid}$) is computed based on the counts with identifier $wid$ of the \textsf{END} events in the graph (Line~12). \nIn Example~\ref{ex:window-baseline}, the events $a4$--$b9$ fall into two windows and thus maintain two counts in Figure~\ref{fig:window}. The first count is for $W_1$, the second one for $W_2$. \n\n\textbf{Predicates} on vertices and edges of the {\small GRETA}\ graph are handled differently by the {\small GRETA}\ runtime. 
\n\n$\bullet$ \textbf{\textit{Vertex Predicates}} restrict the vertices in the {\small GRETA}\ graph. They are evaluated on single events to either filter or partition the stream~\cite{QCRR14}. \n\n\textit{Local predicates} restrict the attribute values of events, for example, \textit{companyID=IBM}. They purge irrelevant events early. We associate each local predicate with its respective state in the \n{\small GRETA}\ template.\n\n\textit{Equivalence predicates} require all events in a trend to have the same attribute values, for example, \textit{[company, sector]} in query $Q_1$. They partition the stream by these attribute values. Thereafter, {\small GRETA}\ queries are evaluated against each sub-stream in a divide-and-conquer fashion. \n\n$\bullet$ \textbf{\textit{Edge Predicates}} restrict the edges in the graph (Line~4 of Algorithm~\ref{lst:ETA_algorithm}). Events connected by an edge must satisfy these predicates. Therefore, edge predicates are evaluated during graph construction. We associate each edge predicate with its respective transition in the\n{\small GRETA}\ template.\n\n\vspace{-2mm}\n\begin{example}\nThe edge predicate $A.attr < \textsf{NEXT}(A).attr$ in Figure~\ref{fig:predicates} requires the value of attribute \textit{attr} of events of type $A$ to increase from one event to the next in a trend. The attribute value is shown in the bottom left corner of a vertex. Only two dotted edges satisfy this predicate. \n\label{ex:predicates}\n\end{example}\n\vspace{-2mm}\n\n\textbf{Event Trend Grouping}.\nAs illustrated by our motivating examples in Section~\ref{sec:introduction}, event trend aggregation often requires event trend grouping. Analogously to A-Seq~\cite{QCRR14}, our {\small GRETA}\ runtime first partitions the event stream into sub-streams by the values of grouping attributes. A {\small GRETA}\ graph is then maintained separately for each sub-stream. Final aggregates are output per sub-stream.\n\n\section{GRETA Framework}\n\label{sec:implementation}\n\nPutting Sections~\ref{sec:positive}--\ref{sec:filtering} together, we now describe the {\small GRETA}\ runtime data structures and parallel processing.\n\n\textbf{Data Structure for a Single {\small GRETA}\ Graph}.\nEdges logically capture the paths for aggregation propagation in the graph. Each edge is traversed \textit{exactly once} to compute the aggregate of the event to which the edge connects (Lines~8--10 in Algorithm~\ref{lst:ETA_algorithm}). Hence, edges are not stored. \n\nVertices must be stored in such a way that the predecessor events of a new event can be efficiently determined (Line~4). To this end, we leverage the following data structures.\nTo quickly locate \textit{previous} events, we divide the stream into non-overlapping consecutive time intervals, called \textit{\textbf{Time Pa\-nes}}~\cite{LMTPT05}. Each pane contains all vertices that fall into it based on their time stamps. These panes are stored in a time-stamped array in increasing order by time (Figure~\ref{fig:data-structure}). The size of a pane depends on the window specifications and stream rate such that each query window is composed of several panes -- allowing panes to be shared between overlapping windows~\cite{AW04, LMTPT05}.\nTo efficiently find vertices of \textit{predecessor event types}, each pane contains an \textit{\textbf{Event Type Hash Table}} that maps event types to vertices of this type. 
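\n\nThe following minimal Java sketch illustrates this layered organization under the assumption of in-order event arrival. The names (\textsf{VertexStore}, \textsf{Pane}) are illustrative, vertices of a type are kept in plain lists here, and the tree index, pane purging, and per-window aggregates discussed next are omitted.\n\begin{lstlisting}[]\nimport java.util.*;\n\nfinal class VertexStore {\n  static final class Vertex {\n    final long time; final String type;\n    Vertex(long time, String type) { this.time = time; this.type = type; }\n  }\n  static final class Pane {\n    final long start, end;  \/\/ pane covers [start, end)\n    final Map<String, List<Vertex>> byType = new HashMap<>();\n    Pane(long start, long end) { this.start = start; this.end = end; }\n  }\n\n  private final long paneSize;\n  private final ArrayList<Pane> panes = new ArrayList<>();  \/\/ increasing by time\n\n  VertexStore(long paneSize) { this.paneSize = paneSize; }\n\n  \/\/ Insert a vertex into the pane covering its time stamp (in-order arrival).\n  void insert(Vertex e) {\n    long paneStart = (e.time \/ paneSize) * paneSize;\n    Pane last = panes.isEmpty() ? null : panes.get(panes.size() - 1);\n    if (last == null || last.start != paneStart) {\n      last = new Pane(paneStart, paneStart + paneSize);\n      panes.add(last);\n    }\n    last.byType.computeIfAbsent(e.type, t -> new ArrayList<>()).add(e);\n  }\n\n  \/\/ All vertices of a predecessor type in panes overlapping [from, to].\n  List<Vertex> verticesOfType(String predType, long from, long to) {\n    List<Vertex> result = new ArrayList<>();\n    for (Pane p : panes) {\n      if (p.end <= from) continue;  \/\/ expired panes are purged in batches in practice\n      if (p.start > to) break;      \/\/ panes are stored in increasing time order\n      result.addAll(p.byType.getOrDefault(predType, Collections.emptyList()));\n    }\n    return result;\n  }\n}\n\end{lstlisting}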
\n\nTo support \\textit{edge predicates}, we utilize a tree index that enables efficient range queries. The overhead of maintaining \\textit{\\textbf{Vertex Trees}} is reduced by event sorting and a pane purge mechanism. \nAn event is inserted into the Vertex Tree for its respective pane and event type. This sorting by time and event type reduces the number of events in each tree.\nFurthermore, instead of removing single expired events from the Vertex Trees, a whole pane with its associated data structures is deleted after the pane has contributed to all windows to which it belongs. \nTo support \\textit{sliding windows}, each vertex $e$ maintains a \\textit{\\textbf{Window Hash Table}} storing an aggregate per window that $e$ falls into.\nSimilarly, we store final aggregates per window in the \\textit{\\textbf{Results Hash Table}}.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.6\\columnwidth]{figures\/data-structure.png} \n\\vspace{-1mm} \n\\caption{Data structure for a single {\\small GRETA}\\ graph}\n\\label{fig:data-structure}\n\\end{figure}\n\n\\textbf{Data Structure for {\\small GRETA}\\ Graph Dependencies}. \nTo support negative sub-patterns, we maintain a \\textbf{\\textit{Graph Dependencies Hash Table}} that maps a graph identifier $G$ to the identifiers of graphs upon which $G$ depends. \n\n\\textbf{Parallel Processing}.\nThe grouping clause partitions the stream into sub-streams that are processed in parallel \\textit{independently} from each other. Such stream partitioning enables a highly scalable execution as demonstrated in Section~\\ref{exp:grouping}. \n\nIn contrast, negative sub-patterns require concurrent maintenance of \\textit{inter-dependent} {\\small GRETA}\\ graphs. To avoid race conditions, we deploy the time-based transaction model~\\cite{sstore}. \nA \\textit{stream transaction} is a sequence of operations triggered by all events with the same time stamp on the same {\\small GRETA}\\ graph. The application time stamp of a transaction (and all its operations) coincides with the application time stamp of the triggering events. \nFor each time stamp $t$ and each {\\small GRETA}\\ graph $G$, our time-driven scheduler waits till the processing of all transactions with time stamps smaller than $t$ on the graph $G$ and other graphs that $G$ depends upon is completed. Then, the scheduler extracts all events with the time stamp $t$, wraps their processing into transactions, and submits them for execution.\n\n\n\\section{Introduction}\n\\label{sec:introduction}\n\nComplex Event Processing (CEP) is a technology for supporting streaming applications from algorithmic trading to traffic management. CEP systems continuously evaluate event queries against high-rate streams composed of primitive events to detect event trends such as stock market down-trends and aggressive driving. In contrast to traditional event sequences of \\textit{fixed} length~\\cite{LRGGWAM11}, event trends have \\textit{arbitrary} length~\\cite{PLAR17}. They are expressed by Kleene closure. \nAggregation functions are applied to these trends to provide valuable summarized insights about the current situation. 
CEP applications typically must react to critical changes of these aggregates in real time~\\cite{ADGI08, WDR06, ZDI14}.\n\n\\textbf{Motivating Examples}.\nWe now describe three application scenarios of time-critical event trend aggregation.\n\n$\\bullet$ \\textit{\\textbf{Algorithmic Trading}}.\nStock market analytics platforms evaluate expressive event queries against high-rate streams of financial transactions. They deploy event trend aggregation to identify and then exploit profit opportunities in real time. For example, query $Q_1$ computes the \\textit{count} of down-trends per industrial sector. Since stock trends of companies that belong to the same sector tend to move as a group~\\cite{K02}, the number of down-trends across different companies in the same sector is a strong indicator of an upcoming down trend for the sector. When this indicator exceeds a certain threshold, a sell signal is triggered for the whole sector including companies without down-trends. These aggregation-based insights must be available to an algorithmic trading system in \\textit{near real time} to exploit short-term profit opportunities or avoid pitfalls.\n\nQuery $Q_1$ computes the number of down-trends per sector during a time window of 10 minutes that slides every 10 seconds. These stock trends are expressed by the \nKleene plus operator $S+$. All events in a trend carry the same company and sector identifier as required by the predicate $[company,sector]$. The predicate $S.price>\\textsf{NEXT}(S).price$ expresses that the price continually decreases from one event to the next in a trend. \nThe query ignores local price fluctuations by skipping over increasing price records. \n\n\n\\vspace*{-1mm}\n\\begin{lstlisting}[]\n$Q_1$: RETURN$\\;\\text{sector},\\;\\textsf{COUNT}(*)\\ $PATTERN$\\;\\text{Stock}\\ S+$\n WHERE$\\;\\text{[company,sector]}\\;$AND$\\;S.\\text{price}>\\textsf{NEXT}(S).\\text{price}$ \n GROUP$\\text{-}$BY$\\;\\text{sector}\\;$WITHIN$\\;\\text{10 minutes}\\;$SLIDE$\\;\\text{10 seconds}$\n\\end{lstlisting}\n\\vspace*{-1mm}\n\n$\\bullet$ \\textit{\\textbf{Hadoop Cluster Monitoring}}.\nModern computer cluster monitoring tools gather system measurements regarding CPU and memory utilization at runtime. These measurements combined with workflow-specific logs (such as start, progress, and end of Hadoop jobs) form load distribution trends per job over time. These load trends are aggregated to dynamically detect and then tackle cluster bottlenecks, unbalanced load distributions, and data queuing issues~\\cite{ZDI14}. For example, when a mapper experiences increasing load trends on a cluster, we might measure the \\textit{total CPU cycles} per job of such a mapper. These aggregated measurements over load distribution trends are leveraged in \\textit{near real time} to enable automatic tuning of cluster performance.\n\nQuery $Q_2$ computes the total CPU cycles per job of each mapper experiencing increasing load trends on a cluster during a time window of 1 minute that slides every 30 seconds. A trend matched by the pattern of $Q_2$ is a sequence of a job-start event $S$, any number of mapper performance measurements $M+$, and a job-end event $E$. All events in a trend must carry the same job and mapper identifiers expressed by the predicate $[job, mapper]$. The predicate M.load $<$ \\textsf{NEXT}(M).load requires the load measurements to increase from one event to the next in a load distribution trend. 
\nThe query may ignore any event to detect all load trends of interest for accurate cluster monitoring.\n\n\n\\vspace*{-1mm}\n\\begin{lstlisting}[]\n$Q_2:\\ \\textsf{RETURN}\\ mapper,\\ \\textsf{SUM}(M.cpu)$\n $\\textsf{PATTERN SEQ}(Start\\ S,\\ Measurement\\ M+,\\ End\\ E)$\n $\\textsf{WHERE}\\ [job,mapper]\\ \\textsf{AND}\\ M.load<\\textsf{NEXT}(M).load$\n $\\textsf{GROUP-BY}\\ mapper\\ \\textsf{WITHIN}\\ 1\\ minute\\ \\textsf{SLIDE}\\ 30\\ seconds$\n\\end{lstlisting}\n\\vspace*{-1mm}\n\n$\\bullet$ \\textbf{\\textit{Traffic Management}} is based on the insights gained during continuous traffic monitoring. For example, leveraging the \\textit{maximal} speed per vehicle that follows certain trajectories on a road, a traffic control system recognizes congestion, speeding, and aggressive driving. Based on this knowledge, the system predicts the traffic flow and computes fast and safe routes in real time to reduce travel time, costs, noise, and environmental pollution.\n\nQuery $Q_3$ detects traffic jams which are not caused by accidents. To this end, the query computes the number and the average speed of cars continually slowing down in a road segment without accidents during 5 minutes time window that slides every minute. A trend matched by $Q_3$ is a sequence of any number of position reports $P+$ without an accident event $A$ preceding them. All events in a trend must carry the same vehicle and road segment identifiers expressed by the predicate $[P.vehicle, segment]$. The speed of each car decreases from one position report to the next in a trend, expressed by the predicate $P.\\text{speed}>\\textsf{NEXT}(P).\\text{speed}$. The query may skip any event to detect all relevant car trajectories for precise traffic statistics.\n\n\\vspace*{-1mm}\n\\begin{lstlisting}[]\n$Q_3$: RETURN $\\text{segment},\\ \\textsf{COUNT}(*),\\ \\textsf{AVG}(P.\\text{speed})$\n PATTERN SEQ(NOT $\\text{Accident A, Position P+)}$\n WHERE $\\text{[\\textit{P}.vehicle,segment]}$ AND $P.\\text{speed}>\\textsf{NEXT}(P).\\text{speed}$\n GROUP$\\text{-}$BY $\\text{segment}$ WITHIN $\\text{5 minutes}$ SLIDE $\\text{1 minute}$ \n\\end{lstlisting}\n\\vspace*{-1mm}\n \n\\textbf{State-of-the-Art Systems} do not support incremental aggregation of event trends. They can be divided into:\n\n$\\bullet$ \\textit{\\textbf{CEP Approaches}} including SASE~\\cite{ADGI08,ZDI14}, Cayuga~\\cite{DGPRSW07}, and ZStream~\\cite{MM09} support Kleene closure to express event trends. While their query languages support aggregation, these approaches do not provide any details on how they handle aggregation on top of nested Kleene patterns. Given no special optimization techniques, these approaches construct all trends prior to their aggregation (Figure~\\ref{fig:overview}). These two-step approaches suffer from high computation costs caused by the exponential number of arbitrarily-long trends. Our experiments in Section~\\ref{sec:evaluation} confirm that such two-step approaches take over two hours to compute event trend aggregation even for moderate stream rates of 500k events per window. Thus, they fail to meet the low-latency requirement of time-critical applications. \nA-Seq~\\cite{QCRR14} is the only system we are aware of that targets incremental aggregation of event sequences. However, it is restricted to the simple case of \\textit{fixed-length} sequences such as \\textsf{SEQ}$(A,B,C)$. It supports neither Kleene closure nor expressive predicates. 
Therefore, A-Seq does not tackle the exponential complexity of event trends -- which now is the focus of our work.\n\n$\\bullet$ \\textit{\\textbf{Streaming Systems}} support aggregation computation over streams~\\cite{AW04, GHMAE07, KWF06, LMTPT05, THSW15}. However, these approaches evaluate simple Select-Project-Join queries with windows, i.e., their execution paradigm is set-based. They support neither event sequence nor Kleene closure as query operators. Typically, these approaches require the construction of join-results prior to their aggregation. Thus, they define incremental aggregation of \\textit{single raw events} but focus on multi-query optimization techniques~\\cite{KWF06} and sharing aggregation computation between sliding windows~\\cite{LMTPT05}.\n\n\\textbf{Challenges}. We tackle the following open problems:\n\n$\\bullet$ \\textbf{\\textit{Correct Online Event Trend Aggregation}}. \nKleene closure matches an exponential number of arbitrarily-long event trends in the number of events in the worst case~\\cite{ZDI14}. Thus, any practical solution must aim to aggregate event trends without first constructing them to enable real-time in-memory execution. At the same time, correctness must be guaranteed. That is, the same aggregation results must be returned as by the two-step approach.\n\n$\\bullet$ \\textbf{\\textit{Nested Kleene Patterns}}.\nKleene closure detects event trends of arbitrary, statically unknown length. Worse yet, Kleene closure, event sequence, and negation may be arbitra\\-rily-nested in an event pattern, introducing complex inter-dependencies between events in an event trend. Incremental aggregation of such arbitrarily-long and complex event trends is an open problem.\n\n$\\bullet$ \\textbf{\\textit{Expressive Event Trend Filtering}}. \nExpressive predicates may determine event relevance depending on other events in a trend. Since a new event may have to be compared to \\textit{any} previously matched event, all events must be kept. The need to store all matched events conflicts with the instantaneous aggregation requirement. \nFurthermore, due to the continuous nature of streaming, events expire over time -- triggering an update of all affected aggregates. However, recomputing aggregates for each expired event would put \\textit{real-time} system responsiveness at risk.\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.7\\columnwidth]{figures\/overview.png} \n\\vspace{-2mm} \n\\caption{State-of-the-art versus our {\\small GRETA}\\ approach}\n\\label{fig:overview}\n\\end{figure} \n\n\\textbf{Our Proposed {\\small GRETA}\\ Approach}.\nWe are the first to tackle these challenges in our Graph-based Real-time Event Trend Aggregation ({\\small GRETA}) approach (Figure~\\ref{fig:overview}). Given an event trend aggregation query $q$ and a stream $I$, the {\\small GRETA}\\ runtime compactly encodes all event trends matched by the query $q$ in the stream $I$ into a {\\small GRETA}\\ graph. During graph construction, aggregates are propagated from previous events to newly arrived events along the edges of the graph following the dynamic programming principle. This propagation is proven to assure incremental aggregation computation without first constructing the trends. The final aggregate is also computed incrementally such that it can be instantaneously returned at the end of each window of $q$.\n\n\\textbf{Contributions}. \nOur key innovations include:\n\n1)~We translate a nested Kleene pattern $P$ into a \n{\\small GRETA}\\ template. 
\nBased on this template, we construct the \textit{{\small GRETA}\ graph} that compactly captures all trends matched by pattern $P$ in the stream. During graph construction, the aggregates are dynamically propagated along the edges of the graph. We prove the correctness of the {\small GRETA}\ graph and the graph-based aggregation computation.\n\n2)~To handle nested patterns with negative sub-patterns, we split the pattern into positive and negative sub-patterns. We maintain a separate {\small GRETA}\ graph for each resulting sub-pattern and invalidate certain events if a match of a negative sub-pattern is found. \n\n3)~To avoid sub-graph replication between overlapping sliding windows, we share one {\small GRETA}\ graph between all windows. Each event that falls into $k$ windows maintains $k$ aggregates. The final aggregate is computed per window.\n\n4)~To ensure low-latency lightweight query processing, we design the \textit{{\small GRETA}\ runtime data structure} to support dynamic insertion of newly arriving events, batch-deletion of expired events, incremental propagation of aggregates, and efficient evaluation of expressive predicates.\n\n5)~We prove that our {\small GRETA}\ approach reduces the time complexity from exponential to quadratic in the number of events compared to the two-step approach and in fact achieves \textit{optimal time complexity}. We also prove that the space complexity is reduced from exponential to linear. \n\n6)~Our experiments using synthetic and real data sets demonstrate that {\small GRETA}\ achieves up to four orders of magnitude speed-up and consumes up to 50--fold less memory compared to the state-of-the-art strategies~\cite{flink, PLAR17, ZDI14}.\n\n\textbf{Outline}. \nWe start with preliminaries in Section~\ref{sec:model}.\nWe overview our {\small GRETA}\ approach in Section~\ref{sec:overview}. \nSection~\ref{sec:positive} covers positive patterns,\nwhile negation is tackled in Section~\ref{sec:negative}.\nWe consider other language clauses in Section~\ref{sec:filtering}. \nWe describe our data structure in Section~\ref{sec:implementation} and analyze complexity in Section~\ref{sec:complexity}. \nSection~\ref{sec:discussion} discusses how our {\small GRETA}\ approach can support additional language features.\nSection~\ref{sec:evaluation} describes the experiments. \nRelated work is discussed in Section~\ref{sec:related_work}. \nSection~\ref{sec:conclusions} concludes the paper. \n\section{GRETA Data and Query Model}\n\label{sec:model}\n\n\textbf{Time}.\nTime is represented by a linearly ordered \textit{set of time points} $(\mathbb{T},\leq)$, where $\mathbb{T} \subseteq \mathbb{Q^+}$ and $\mathbb{Q^+}$ denotes the set of non-negative rational numbers. \n\n\textbf{Event Stream}.\nAn \textit{event} is a message indicating that something of interest happens in the real world. An event $e$ has an \textit{occurrence time} $e.time \in \mathbb{T}$ assigned by the event source. For simplicity, we assume that events arrive in order by time stamps. Otherwise, an existing approach to handle out-of-order events can be employed~\cite{LTSPJM08, LLGRC09}.\n\nAn event $e$ belongs to a particular \textit{event type} $E$, denoted $e.type=E$ and described by a \textit{schema} which specifies the set of \textit{event attributes} and the domains of their values. \n\nEvents are sent by event producers (e.g., brokers) on an \textit{event stream I}. 
An event consumer (e.g., algorithmic trading system) monitors the stream with \textit{event queries}. We borrow the query syntax and semantics from SASE~\cite{ADGI08, ZDI14}. \n\n\vspace*{-2mm}\n\begin{definition}(\textbf{Kleene Pattern}.) \nLet $I$ be an event stream. A \textbf{\textit{pattern}} $P$ is recursively defined as follows:\n\n$\bullet$ An \textit{\textbf{event type}} $E$ matches an event $e \in I$, denoted $e \in matches(E)$, if $e.type = E$.\n\n$\bullet$ An \textit{\textbf{event sequence operator}} \textsf{SEQ}$(P_i,P_j)$ matches an \textbf{\textit{event sequence}} $s=(e_1,\dots,e_k)$,\ndenoted $s \in matches(\textsf{SEQ}(P_i,P_j))$,\nif $\exists m \in \mathbb{N}$, $1 \leq m \leq k$, such that\n$(e_1,\dots,e_m) \in matches(P_i)$,\n$(e_{m+1},\dots,e_k) \in matches(P_j)$, and\n$\forall l \in \mathbb{N},$ $1 \leq l < k,$ $e_l.time < e_{l+1}.time$.\nTwo events $e_l$ and $e_{l+1}$ are called \textit{\textbf{adjacent}} in the sequence $s$.\nFor an event sequence $s$, we define $s.start = e_1$ and $s.end = e_k$.\n\n$\bullet$ A \textit{\textbf{Kleene plus operator}} $P_i+$ matches an \textit{\textbf{event trend}} $tr=(s_1,\dots,s_k)$,\ndenoted $tr \in matches(P_i+)$,\nif \n$\forall l \in \mathbb{N},$ $1 \leq l \leq k,$ \n$s_l \in matches(P_i)$ and \n$s_l.end.time < s_{l+1}.start.time$.\nTwo events $s_l.end$ and $s_{l+1}.start$ are called \textit{\textbf{adjacent}} in the trend $tr$.\nFor an event trend $tr$, we define $tr.start = s_1.start$ and $tr.end = s_k.end$.\n\n$\bullet$ A \textit{\textbf{negation operator}} \textsf{NOT} $N$ appears within an event sequence operator \textsf{SEQ}$(P_i, \textsf{NOT}\; N, P_j)$ (see below).\nThe pattern \textsf{SEQ}$(P_i,\textsf{NOT}\; N, P_j)$ matches an \textbf{\textit{event sequence}} $s=(s_i,s_j)$,\ndenoted $s \in matches(\textsf{SEQ}(P_i, \textsf{NOT}\; N, P_j))$,\nif \n$s_i \in matches(P_i)$,\n$s_j \in matches(P_j)$, and\n$\nexists s_n \in matches(N)$ such that\n$s_i.end.time < s_n.start.time$ and \n$s_n.end.time < s_j.start.time$.\nTwo events $s_i.end$ and $s_j.start$ are called \textit{\textbf{adjacent}} in the sequence $s$.\n\nA \textit{\textbf{Kleene pattern}} is a pattern with at least one Kleene plus operator.\nA pattern is \textit{\textbf{positive}} if it contains no negation.\nIf an operator in $P$ is applied to the result of another operator, $P$ is \textit{\textbf{nested}}. Otherwise, $P$ is \textit{\textbf{flat}}. The \textit{\textbf{size}} of $P$ is the number of event types and operators in it. \n\label{def:pattern}\n\end{definition}\n\vspace{-2mm}\n\nAll queries in Section~\ref{sec:introduction} have Kleene patterns. The patterns of $Q_1$ and $Q_2$ are positive. The pattern of $Q_3$ contains a negative sub-pattern \textsf{NOT} \textit{Accident A}. The pattern of $Q_1$ is flat, while the patterns of $Q_2$ and $Q_3$ are nested. \n\nWhile Definition~\ref{def:pattern} enables construction of arbitrarily-nest\-ed patterns, nesting a Kleene plus into a negation and vice versa is not useful. Indeed, the patterns \textsf{NOT} $(P+)$ and (\textsf{NOT} $P)+$ are both equivalent to \textsf{NOT} $P$. Thus, we assume that a negation operator appears within an event sequence operator and is applied to an event sequence operator or an event type. 
\nFurthermore, an event sequence operator applied to consecutive negative sub-patterns \\textsf{SEQ}(\\textsf{NOT} $P_i$, \\textsf{NOT} $P_j$) is equivalent to the pattern \\textsf{NOT SEQ}($P_i, P_j$). Thus, we assume that only a positive sub-pattern may precede and follow a negative sub-pattern. Lastly, negation may not be the outer most operator in a meaningful pattern. \nFor simplicity, we assume that an event type appears at most once in a pattern. In Section~\\ref{sec:discussion}, we describe a straightforward extension of our {\\small GRETA}\\ approach allowing to drop this assumption.\n\n\\begin{figure}[t]\n\\[\n\\begin{array}{lll}\nq & := & $\\textsf{\\small RETURN }$ Attributes\\ \\langle A \\rangle\\ $\\textsf{\\small PATTERN}$\\ \\langle P \\rangle\\\\\n&& $(\\textsf{\\small WHERE}$\\ \\langle \\theta \\rangle)?\\\n$(\\textsf{\\small GROUP-BY}$\\ Attributes)?\\\\\n&& $\\textsf{\\small WITHIN }$ Duration\\ $\\textsf{\\small SLIDE }$ Duration \\\\ \n\nA & := & $\\textsf{\\small COUNT}$(* | EventType)\\ |\\\\\n&& ($\\textsf{\\small MIN}$ | $\\textsf{\\small MAX}$ | $\\textsf{\\small SUM}$ | $\\textsf{\\small AVG}$)\n(EventType.Attribute)\\\\\nP & := & EventType\\ |\\ \\langle P \\rangle $\\textsf{+}$\\ |\\ $\\textsf{\\small NOT}$ \\langle P \\rangle\\ |\\ $\\textsf{\\small SEQ}$ ( \\langle P \\rangle , \\langle P \\rangle ) \\\\\n\\theta & := & Constant\\ |\\ EventType . Attribute\\ |\\\\\n&& $\\textsf{\\small NEXT}($EventType$)$ . Attribute\\ |\\\n\\langle \\theta \\rangle\\ \\langle O \\rangle\\ \\langle \\theta \\rangle \\\\\nO & := & +|-|\/|*|\\%|=|\\neq|>|\\geq|<|\\leq|\\wedge|\\vee \\\\\n\\end{array}\n\\]\n\\vspace{-4mm}\n\\caption{Language grammar}\n\\label{fig:language_grammar}\n\\end{figure}\n\n\\begin{definition}(\\textbf{Event Trend Aggregation Query}.) \nAn \\textit{\\textbf{event trend aggregation query}} $q$ consists of five clauses:\n\n$\\bullet$ Aggregation result specification (\\textsf{RETURN} clause),\n\n$\\bullet$ Kleene pattern $P$ (\\textsf{PATTERN} clause),\n\n$\\bullet$ Predicates $\\theta$ (optional \\textsf{WHERE} clause),\n\n$\\bullet$ Grouping $G$ (optional \\textsf{GROUP-BY} clause), and\n\n$\\bullet$ Window $w$ (\\textsf{WITHIN\/SLIDE} clause).\n\nThe query $q$ requires each event in a trend matched by its pattern $P$ (Definition~\\ref{def:pattern}) to be within the same window $w$, satisfy the predicates $\\theta$, and carry the same values of the grouping attributes $G$. \nThese trends are grouped by the values of $G$. An aggregate is computed per group. We focus on distributive (such as \\textsf{COUNT, MIN, MAX, SUM}) and algebraic aggregation functions (such as \\textsf{AVG}) since they can be computed incrementally~\\cite{Gray97}. 
\n\nLet $e$ be an event of type $E$ and $attr$ be an attribute of $e$.\n\\textsf{COUNT}$(*)$ returns the number of all trends per group, while\n\\textsf{COUNT}$(E)$ computes the number of all events $e$ in all trends per group.\n\\textsf{MIN}$(E.attr)$ (\\textsf{MAX}$(E.attr)$) computes the minimal (maximal) value of $attr$ for all events $e$ in all trends per group.\n\\textsf{SUM}$(E.attr)$ calculates the summation of the value of $attr$ of all events $e$ in all trends per group.\nLastly, \\textsf{AVG}$(E.attr)$ is computed as \\textsf{SUM}$(E.attr)$ divided by \\textsf{COUNT}$(E)$ per group.\n\\label{def:query}\n\\end{definition}\n\\vspace{-3mm}\n\n\\textbf{Skip-Till-Any-Match Semantics}.\nWe focus on Kleene patterns evaluated under the most flexible semantics, called \\textit{skip-till-any-match} in the literature~\\cite{ADGI08, WDR06, ZDI14}. Skip-till-any-match detects \\textit{all possible trends} by allowing to skip \\textit{any} event in the stream as follows. When an event $e$ arrives, it extends each existing trend $tr$ that can be extended by $e$. In addition, the unchanged trend $tr$ is kept to preserve opportunities for alternative matches. Thus, an event doubles the number of trends in the worst case and the number of trends grows exponentially in the number of events~\\cite{QCRR14, ZDI14}. While the number of all trends is exponential, an application selects a subset of trends of interest using predicates, windows, grouping, and negation (Definition~\\ref{def:query}).\n\nDetecting all trends is necessary in some applications such as algorithmic trading (Section~\\ref{sec:introduction}). For example, given the stream of price records $I=\\{10,2,9,8,7,1,6,5,4,3\\}$, skip-till-any-match is the only semantics that detects the down-trend $(10,9,8,7,6,5,4,3)$ by ignoring local fluctuations 2 and 1. Since longer stock trends are considered to be more reliable~\\cite{K02}, this long trend%\n\\footnote{We sketch how constraints on minimal trend length can be supported by {\\small GRETA}\\ in Section~\\ref{sec:discussion}.}\ncan be more valuable to the algorithmic trading system than three shorter trends $(10,2)$, $(9,8,7,1)$, and $(6,5,4,3)$ detected under the skip-till-next-match semantics that does not skip events that can be matched (Section~\\ref{sec:discussion}).\nOther use cases of skip-till-any-match include financial fraud, health care, logistics, network security, cluster monitoring, and e-commerce~\\cite{ADGI08, WDR06, ZDI14}. \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.5\\columnwidth]{figures\/aggregates.png} \n\\vspace{-2mm} \n\\caption{Event trends matched by the pattern $P=(\\textsf{SEQ}(A+,$ $B))+$ in the stream $I=\\{a1,b2,a3,a4,b7\\}$ where a1.attr=5, a3.attr=6, and a4.attr=4}\n\\label{fig:aggregates}\n\\end{figure} \n\n\\vspace{-2mm}\n\\begin{example}\nIn Figure~\\ref{fig:aggregates}, the pattern $P$ detects \n\\textsf{COUNT}(*)=11 event trends in the stream $I$ with five events under the skip-till-any-match semantics. \nThere are \\textsf{COUNT}(A)=20 occurrences of $a$'s in these trends.\nThe minimal value of attribute $attr$ in these trends is \\textsf{MIN}(A.attr)=4, while the maximal value of $attr$ is \\textsf{MAX}(A.attr)=6. 
\\textsf{MAX}$(A.attr)$ is computed analogously to \\textsf{MIN}$(A.attr)$.\nThe summation of all values of $attr$ in all trends is \\textsf{SUM}(A.attr)=100.\nLastly, the average value of $attr$ in all trends is \\textsf{AVG}(A.attr)=\\textsf{SUM}(A.attr)\/\\textsf{COUNT}(A)=5.\n\\label{ex:aggregates}\n\\end{example}\n\\vspace{-2mm}\n\n\n\\section{Patterns with Nested Negation}\n\\label{sec:negative}\n\nTo handle nested patterns with negation, we split the pattern into positive and negative sub-patterns at compile time (Section~\\ref{sec:split}). At runtime, we then maintain the {\\small GRETA}\\ graph for each of these sub-patterns (Section~\\ref{sec:negative-algorithm}).\n\n\\subsection{Static GRETA Template}\n\\label{sec:split}\n\nAccording to Section~\\ref{sec:model}, negation appears within a sequence preceded and followed by positive sub-patterns.\nFurthermore, negation is always applied to an event sequence or a single event type. Thus, we classify patterns containing a negative sub-pattern $N$ into the following groups:\n\nCase 1.~\\textbf{\\textit{A negative sub-pattern is preceded and followed by positive sub-patterns}}. A pattern of the form $P_1=\\textsf{SEQ}(P_i,\\textsf{NOT} N,P_j)$ means that no trends of $N$ may occur between the trends of $P_i$ and $P_j$. A trend of $N$ disqualifies sub-trends of $P_i$ from contributing to a trend detected by $P_1$. A trend of $N$ marks all events in the graph of the \\textit{previous} event type $end(P_i)$ as \\textit{invalid} to connect to any future event of the \\textit{following} event type $start(P_j)$. Only valid events of type $end(P_i)$ connect to events of type $start(P_j)$.\n\n\\vspace{-2mm}\n\\begin{example}\nThe pattern \\textit{(\\textsf{SEQ}(A+,\\textsf{NOT SEQ}(C,\\textsf{NOT} E, D),B))+} is split into one positive sub-pattern $(\\textsf{SEQ}(A+,B))+$ and two negative sub-patterns $\\textsf{SEQ}(C,D)$ and \\textit{E}. Figure~\\ref{fig:dependencies} illustrates the previous and following connections between the template for the negative sub-pattern and the event types in the template for its parent pattern. \n\\label{ex:pattern_split}\n\\end{example}\n\\vspace{-2mm}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[\\small \\textit{(\\textsf{SEQ}(A+, \\textsf{NOT SEQ}(C, \\textsf{NOT} E, D), B))+}]{\n\\hspace*{2cm}\n\\includegraphics[width=0.205\\columnwidth]{figures\/dependencies.png} \n\\hspace*{2cm} \n\\label{fig:dependencies}\n}\n\\subfigure[\\small \\textit{\\textsf{SEQ}(A+, \\textsf{NOT} E)}]{\n\\hspace*{0.8cm}\n\\includegraphics[width=0.1\\columnwidth]{figures\/dependencies2.png} \n\\hspace*{0.8cm}\n\\label{fig:dependencies2}\n}\n\\subfigure[\\small \\textit{\\textsf{SEQ}(\\textsf{NOT} E, A+)}]{\n\\hspace*{0.8cm}\n\\includegraphics[width=0.1\\columnwidth]{figures\/dependencies3.png} \n\\hspace*{0.8cm}\n\\label{fig:dependencies3}\n}\n\\vspace*{-2mm}\n\\caption{{\\small GRETA}\\ graph templates}\n\\end{figure} \n\nCase 2.~\\textbf{\\textit{A negative sub-pattern is preceded by a positive sub-pattern}}. A pattern of the form $P_2=\\textsf{SEQ}(P_i,\\textsf{NOT}$ $N)$ means that no trends of $N$ may occur after the trends of $P_i$ till the end of the window (Section~\\ref{sec:filtering}). A trend of $N$ marks all previous events in the graph of $P_i$ as \\textit{invalid}.\n\nCase 3.~\\textbf{\\textit{A negative sub-pattern is followed by a positive sub-pattern}}.
A pattern of the form $P_3=\\textsf{SEQ}(\\textsf{NOT} N,$ $P_j)$ means that no trends of $N$ may occur after the start of the window and before the trends of $P_j$. A trend of $N$ marks all future events in the graph of $P_j$ as \\textit{invalid}. The pattern of $Q_3$ in Section~\\ref{sec:introduction} has this form.\n\n\\vspace{-2mm}\n\\begin{example}\nFigures~\\ref{fig:dependencies2} and \\ref{fig:dependencies3} illustrate the templates for the patterns \\textit{\\textsf{SEQ}(A+,\\textsf{NOT} E)} and \\textit{\\textsf{SEQ}(\\textsf{NOT} E,A+)} respectively. The first template has only a previous connection, while the second template has only a following connection between the template for the negative sub-pattern $E$ and the event type $A$. \n\\label{ex:pattern_split_special}\n\\end{example}\n\\vspace{-2mm}\n\n\n\\begin{algorithm}[t]\n\\caption{Pattern split algorithm}\n\\label{lst:split_algorithm}\n\\begin{algorithmic}[1]\n\\Require Pattern $P$ with negative sub-patterns\n\\Ensure Set $S$ of sub-patterns of $P$\n\n\\State $S \\leftarrow \\{P\\}$\n\\State $split(P)\\ \\{$ \n\\Switch {$P$}\n\\Case {$P_i+:$}\n\t$S \\leftarrow S \\cup split(P_i)$ \\EndCase\n\\Case {$SEQ(P_i,P_j):$} \n\t$S \\leftarrow S \\cup split(P_i) \\cup split(P_j)$ \\EndCase\n\\Case {$NOT\\ P_i:$} \n\t\\State $Parent \\leftarrow S.getPatternContaining(P)$\n\t\\State $P_i.previous \\leftarrow Parent.getPrevious(P)$\n\t\\State $P_i.following \\leftarrow Parent.getFollowing(P)$\n\t\\State $S.replace(Parent, Parent-P)$\n\t\\State $S \\leftarrow S \\cup \\{P_i\\} \\cup split(P_i)$ \\EndCase\n\\EndSwitch\n\\State\\Return $S \\ \\}$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\textbf{Pattern Split Algorithm}.\nAlgorithm~\\ref{lst:split_algorithm} consumes a pattern $P$, splits it into positive and negative sub-patterns, and returns the set $S$ of these sub-patterns. At the beginning, $S$ contains the pattern $P$ (Line~1). The algorithm traverses $P$ top-down. If it encounters a negative sub-pattern $P=\\textsf{NOT}\\ P_i$, it finds the sub-pattern containing $P$, called $Parent$ pattern, computes the previous and following event types of $P_i$, and removes $P$ from $Parent$ (Lines~7--10). The pattern $P_i$ is added to $S$ and the algorithm is called recursively on $P_i$ (Line~11).\nSince the algorithm traverses the pattern $P$ top-down once, the time and space complexity are linear in the size of the pattern $s$, i.e., $\\Theta(s)$.\n\n\\vspace{-2mm}\n\\begin{definition}(\\textbf{Dependent {\\small GRETA}\\ Graph}.)\nLet $G_N$ and $G_P$ be {\\small GRETA}\\ graphs that are constructed according to templates $\\mathcal{T}_N$ and $\\mathcal{T}_P$ respectively.
The {\\small GRETA}\\ graph $G_P$ is \\textit{dependent} on the graph $G_N$ if there is a previous or following connection from $\\mathcal{T}_N$ to an event type in $\\mathcal{T}_P$.\n\\label{def:dependent}\n\\end{definition}\n\\vspace{-2mm}\n\n\\subsection{Runtime GRETA Graphs}\n\\label{sec:negative-algorithm}\n\nIn this section, we describe how patterns with nested negation are processed according to the template.\n\n\\vspace{-2mm}\n\\begin{definition}(\\textbf{Invalid Event}.)\nLet $G_P$ and $G_N$ be {\\small GRETA}\\ graphs such that $G_P$ is dependent on $G_N$.\nLet $tr=(e_1, \\dots, e_n)$ be a \\textit{finished} trend captured by $G_N$, i.e., $e_n$ is an \\textsf{END} event.\nThe trend $tr$ marks all events of the previous event type that arrived before $e_1.time$ as \\textit{invalid} to connect to any event of the following event type that will arrive after $e_n.time$.\n\\label{def:invalidation}\n\\end{definition}\n\\vspace{-4mm}\n\n\\begin{example}\nFigure~\\ref{fig:negation} depicts the graphs for the sub-patterns from Example~\\ref{ex:pattern_split}.\nThe match $e3$ of the negative sub-pattern $E$ marks $c2$ as invalid to connect to any future $d$. Invalid events are highlighted by a darker background. \nAnalogously, the match $(c5,d6)$ of the negative sub-pattern \\textsf{SEQ}$(C,D)$ marks all $a$'s before $c5$ ($a1, a3, a4$) as invalid to connect to any $b$ after $d6$. \n$b7$ has no valid predecessor events and thus cannot be inserted. $a8$ is inserted and all previous $a$'s are connected to it. The marked $a$'s are valid to connect to new $a$'s. $b9$ is inserted and its valid predecessor event $a8$ is connected to it. The marked $a$'s may not connect to $b9$. \n\nFigures~\\ref{fig:negation2} and \\ref{fig:negation3} depict the graphs for the patterns from Example~\\ref{ex:pattern_split_special}. The trend $e3$ of the negative sub-pattern $E$ marks all previous events of type $A$ as invalid in Figure~\\ref{fig:negation2}. In contrast, in Figure~\\ref{fig:negation3} $e3$ invalidates all following $a$'s.\n\\label{ex:invalidation}\n\\end{example}\n\\vspace{-2mm}\n\n\\begin{figure}[t]\n\\centering\n\\subfigure[\\small \\textit{\\textsf{SEQ}(A+,\\textsf{NOT} E)}]{\n\\includegraphics[width=0.22\\columnwidth]{figures\/negation2.png} \n\\label{fig:negation2}\n}\n\\hspace*{2mm}\n\\subfigure[\\small \\textit{\\textsf{SEQ}(\\textsf{NOT} E,A+)}]{\n\\includegraphics[width=0.2\\columnwidth]{figures\/negation3.png} \n\\label{fig:negation3}\n}\n\\vspace*{-2mm}\n\\caption{Count of trends matched by the pattern $P$ in the stream $I=\\{a1,b2,c2,a3,e3,a4,c5,d6,b7,a8,b9\\}$}\n\\end{figure} \n\n\\textbf{Event Pruning}.\nNegation allows us to purge events from the graph to speed-up insertion of new events and aggregation propagation. The following events can be deleted:\n\n$\\bullet$ \\textbf{\\textit{Finished Trend Pruning}}. A finished trend that is matched by a negative sub-pattern can be deleted once it has invalidated all respective events. \n\n$\\bullet$ \\textbf{\\textit{Invalid Event Pruning}}. An invalid event of type $end(P_i)$ will never connect to any new event if events of type $end(P_i)$ may precede only events of type $start(P_j)$. The aggregates of such invalid events will not be propagated. Thus, such events may be safely purged from the graph.\n\n\\vspace{-2mm}\n\\begin{example}\nContinuing Example~\\ref{ex:invalidation} in Figure~\\ref{fig:negation}, the invalid $c2$ will not connect to any new event since $c$'s may connect only to $d$'s. Thus, $c2$ is purged. 
$e3$ is also deleted.\nOnce $a$'s before $c5$ are marked, $c5$ and $d6$ are purged.\nIn contrast, the marked events $a1,a3,$ and $a4$ may not be removed since they are valid to connect to future $a$'s.\nIn Figures~\\ref{fig:negation2} and \\ref{fig:negation3}, $e3$ and all marked $a$'s are deleted.\n\\label{ex:pruning}\n\\end{example}\n\\vspace{-4mm}\n\n\\begin{theorem}(\\textbf{Correctness of Event Pruning}.)\nLet $G_P$ and $G_N$ be {\\small GRETA}\\ graphs such that $G_P$ is dependent on $G_N$.\nLet $G'_P$ be the same as $G_P$ but without invalid events of type $end(P_i)$ if \n$P.pred(start(P_j)) = \\{ end(P_i) \\}$.\nLet $G'_N$ be the same as $G_N$ but without finished event trends.\nThen, $G'_P$ returns the same aggregation results as $G_P$.\n\\label{theo:pruning}\n\\end{theorem}\n\\vspace{-3mm}\n\n\\begin{proof}\nWe first prove that all invalid events are marked in $G_P$ despite finished trend pruning in $G'_N$. We then prove that $G_P$ and $G'_P$ return the same aggregation result despite invalid event pruning. \n\n\\textbf{\\textit{All invalid events are marked in $G_P$}}.\nLet $tr=(e_1,$ $\\dots, e_n)$ be a finished trend in $G_N$. Let $Inv$ be the set of events that are invalidated by $tr$ in $G_P$. By Definition~\\ref{def:invalidation}, all events in $Inv$ arrive before $e_1.time$. According to Section~\\ref{sec:model}, events arrive in-order by time stamps. Thus, no event with time stamp less than $e_1.time$ will arrive after $e_1$. Hence, even if an event $e_i \\in \\{e_1, \\dots, e_{n-1}\\}$ connects to future events in $G_N$, no event $e \\not\\in Inv$ in $G_P$ can be marked as invalid.\n\n\\textbf{\\textit{$G_P$ and $G'_P$ return the same aggregates}}.\nLet $e$ be an event of type $end(P_i)$ that is marked as invalid to connect to events of type $start(P_j)$ that arrive after $e_n.time$. \nBefore $e_n.time$, $e$ is valid and its count is correct by Theorem~\\ref{theorem:count}. Since events arrive in-order by time stamps, no event with time stamp less than $e_n.time$ will arrive after $e_n$. \nAfter $e_n.time$, $e$ will not connect to any event and the count of $e$ will not be propagated if \n$P.pred(start(P_j))=\\{end(P_i)\\}$.\nHence, deletion of $e$ does not affect the final aggregate of $G_P$.\n\\end{proof}\n\\vspace{-2mm}\n\n\\textbf{{\\small GRETA}\\ Algorithm for Patterns with Negation}.\nAlgorithm~\\ref{lst:ETA_algorithm} is called on each event sub-pattern with the following modifications.\nFirst, only valid predecessor events are returned in Line~4. \nSecond, if the algorithm is called on a negative sub-pattern $N$ and a match is found in Line~12, then all previous events of the previous event type of $N$ are either deleted or marked as incompatible with any future event of the following event type of $N$. Afterwards, the match of $N$ is purged from the graph. {\\small GRETA}\\ concurrency control is described in Section~\\ref{sec:implementation}. \n \n\\section{Optimality of GRETA Approach}\n\\label{sec:complexity}\n\n\nWe now analyze the complexity of {\\small GRETA}. 
Since a negative sub-pattern is processed analogously to a positive sub-pattern (Section~\\ref{sec:negative}), we focus on positive patterns below.\n\n\\vspace{-2mm}\n\\begin{theorem}[\\textbf{Complexity}]\nLet $q$ be a query with edge predicates,\n$I$ be a stream, \n$G$ be the {\\small GRETA}\\ graph for $q$ and $I$,\n$n$ be the number of events per window, and \n$k$ be the number of windows into which an event falls.\nThe time complexity of {\\small GRETA}\\ is $O(n^2k)$, while its space complexity is $O(nk)$.\n\\label{theorem:complexity}\n\\end{theorem}\n\\vspace{-3mm}\n\n\\begin{proof}\\let\\qed\\relax\n\\textbf{Time Complexity}. Let $e$ be an event of type $E$. The following steps are taken to process $e$.\nSince events arrive in-order by time stamps (Section~\\ref{sec:model}), the Time Pane to which $e$ belongs is always the latest one. It is accessed in constant time.\nThe Vertex Tree in which $e$ will be inserted is found in the Event Type Hash Table mapping the event type $E$ to the tree in constant time.\nDepending on the attribute values of $e$, $e$ is inserted into its Vertex Tree in logarithmic time $O(log_b m)$ where $b$ is the order of the tree and $m$ is the number of elements in the tree, $m \\leq n$.\n\nThe event $e$ has $n$ predecessor events in the worst case, since each vertex connects to each following vertex under the skip-till-any-match semantics. Let $x$ be the number of Vertex Trees storing previous vertices that are of predecessor event types of $E$ and fall into a sliding window $wid \\in windows(e)$, $x \\leq n$. Then, the predecessor events of $e$ are found in $O(log_b m + m)$ time by a range query in one Vertex Tree with $m$ elements. The time complexity of range queries in $x$ Vertex Trees is computed as follows:\n\\[\n\\sum_{i=1}^x O(log_b m_i + m_i) =\n\\sum_{i=1}^x O(m_i) =\nO(n).\n\\]\n\nIf $e$ falls into $k$ windows, a predecessor event of $e$ updates at most $k$ aggregates of $e$.\nIf $e$ is an \\textsf{END} event, it also updates $k$ final aggregates.\nSince these aggregates are maintained in hash tables, updating one aggregate takes constant time.\n{\\small GRETA}\\ concurrency control ensures that all graphs upon which this graph $G$ depends finish processing all events with time stamps less than $t$ before $G$ may process events with time stamp $t$. Therefore, all invalid events are marked or purged before aggregates are updated in $G$ at time $t$. Consequently, an aggregate is updated at most once by the same event.\nPutting it all together, the time complexity is:\n\\[\nO(n (log_b m + nk)) = O(n^2k).\n\\] \n\n\\textbf{Space Complexity}. The space complexity is determined by $x$ Vertex Trees and $k$ counts maintained by each vertex.\n\\begin{flalign*} \n\\sum_{i=1}^x O(m_ik) = O(nk). \\rlap{$\\qquad \\Box$}\n\\end{flalign*} \n\\end{proof} \n\n\\vspace{-3mm}\n\\begin{theorem}[\\textbf{Time Optimality}]\nLet $n$ be the number of events per window and $k$ be the number of windows into which an event falls.\nThen, {\\small GRETA}\\ has optimal worst-case time complexity $O(n^2k)$.\n\\label{theorem:optimality}\n\\end{theorem}\n\\vspace{-3mm}\n\n\\begin{proof} \n\\textit{Any} event trend aggregation algorithm must process $n$ events to guarantee correctness of aggregation results. \nSince \\textit{any} previous event may be compatible with a new event $e$ under the skip-till-any-match semantics~\\cite{WDR06}, the edge predicates of the query $q$ must be evaluated to decide the compatibility of $e$ with $n$ previous events in the worst case.
While we utilize a tree-based index to sort events by the most selective predicate, other predicates may have to be evaluated in addition. Thus, each new event must be compared to each event in the graph in the worst case.\nLastly, a final aggregate must be computed for each window of $q$. An event that falls into $k$ windows contributes to $k$ aggregates.\nIn summary, the time complexity $O(n^2k)$ is optimal.\n\\end{proof} \n\\vspace{-2mm}\n\\section{GRETA Approach In A Nutshell}\n\\label{sec:overview}\n\nOur \\textbf{\\textit{Event Trend Aggregation Problem}} is to compute event trend aggregation results of a query $q$ against an event stream $I$ with \\textit{minimal latency}. \n\nFigure~\\ref{fig:system} provides an overview of our {\\small GRETA}\\ framework. The \\textbf{\\textit{{\\small GRETA}\\ Query Analyzer}} statically encodes the query into a {\\small GRETA}\\ configuration. In particular, the pattern is split into positive and negative sub-patterns (Section~\\ref{sec:split}). Each sub-pattern is translated into a \n{\\small GRETA}\\ template\n(Section~\\ref{sec:template}). Predicates are classified into vertex and edge predicates (Section~\\ref{sec:filtering}). \nGuided by the {\\small GRETA}\\ configuration, the \\textit{\\textbf{{\\small GRETA}\\ Runtime}} first filters and partitions the stream based on the vertex predicates and grouping attributes of the query. Then, the {\\small GRETA}\\ runtime encodes matched event trends into a {\\small GRETA}\\ graph. During the graph construction, aggregates are propagated along the edges of the graph in a dynamic programming fashion. The final aggregate is updated incrementally, and thus is returned immediately at the end of each window (Sections~\\ref{sec:positive-algorithm}, \\ref{sec:negative-algorithm}, \\ref{sec:filtering}). \n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.6\\columnwidth]{figures\/system.png} \n\\caption{{\\small GRETA}\\ framework}\n\\label{fig:system}\n\\end{figure}\n\n\\section{Positive Nested Patterns}\n\\label{sec:positive}\n\nWe statically translate a positive pattern into a \n{\\small GRETA}\\ template (Section~\\ref{sec:template}).\nAs events arrive at runtime, the {\\small GRETA}\\ graph is maintained according to this template (Section~\\ref{sec:positive-algorithm}).\n\n\\subsection{Static GRETA Template}\n\\label{sec:template}\n\nThe {\\small GRETA}\\ query analyzer translates a Kleene pattern $P$ into a Finite State Automaton that is then used as a template during {\\small GRETA}\\ graph construction at runtime. \nFor example, the pattern \\textit{P=(\\textsf{SEQ}(A+,B))+} is translated into the \n{\\small GRETA}\\ template\nin Figure~\\ref{fig:automaton}.\n\n\\textbf{\\textit{States}} correspond to event types in $P$. \nThe initial state is labeled by the first type in $P$, denoted $start(P)$. Events of type $start(P)$ are called \\textsf{START} events. The final state has label $end(P)$, i.e., the last type in $P$. Events of type $end(P)$ are called \\textsf{END} events. All other states are labeled by types $mid(P)$. Events of type $E \\in mid(P)$ are called \\textsf{MID} events. \nIn Figure~\\ref{fig:automaton}, $start(P) = A$, $end(P) = B$, and $mid(P) = \\emptyset$.\n\nSince an event type may appear in a pattern at most once (Section~\\ref{sec:model}), state labels are distinct.\nThere is one $start(P)$ and one $end(P)$ event type per pattern $P$ (Theorem~\\ref{theorem:start-and-end}). There can be any number of event types in the set $mid(P)$.
$start(P) \\not\\in mid(P)$ and $end(P) \\not\\in mid(P)$. An event type may be both $start(P)$ and $end(P)$, for example, in the pattern $A+$.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=0.2\\columnwidth]{figures\/automaton.png} \n\\vspace{-2mm} \n\\caption{{\\small GRETA}\\ template for (\\textsf{SEQ}(A+,B))+}\n\\label{fig:automaton}\n\\end{figure} \n\n\\textbf{Start and End Event Types of a Pattern}.\nLet $trends(P)$ denote the set of event trends matched by a positive event pattern $P$ over any input event stream.\n\n\\vspace*{-2mm}\n\\begin{lemma}\nFor any positive event pattern $P$,\n$trends(P)$ does not contain the empty string.\n\\label{lemma:non-empty}\n\\end{lemma}\n\\vspace*{-2mm}\n\nLet $tr \\in trends(P)$ be a trend and $start(tr)$ and $end(tr)$ be the types of the first and last events in $tr$ respectively.\n\n\\vspace*{-2mm}\n\\begin{theorem}\nFor all $tr_1, tr_2 \\in trends(P)$, $start(tr_1) = start(tr_2)$ and $end(tr_1) = end(tr_2)$. \n\\label{theorem:start-and-end}\n\\end{theorem}\n\\vspace*{-4mm}\n\n\\begin{proof}\nDepending on the structure of $P$ (Definition~\\ref{def:pattern}), the following cases are possible.\n\n\\textbf{\\textit{Case}} $E$. For any $tr \\in trends(E)$, $start(tr) = end(tr) = E$.\n\n\\textbf{\\textit{Case}} $P_i+$. Let $tr_1 = (t_1 t_2 \\dots t_m), tr_2 = (t'_1 t'_2 \\dots t'_n) \\in trends(P_i+)$ where $t_x,t'_y \\in trends(P_i), 1 \\leq x \\leq m, 1 \\leq y \\leq n$. \nBy Lemma~\\ref{lemma:non-empty}, $start(t_1)$ and $start(t'_1)$ are not empty.\nAccording to Algorithm~\\ref{lst:preprocessing_algorithm} Lines 10--14, $start(tr_1) = start(t_1)$ and $start(tr_2) = start(t'_1)$.\nSince $P_i$ contains neither disjunction nor star-Kleene, $start(t_1) = start(t'_1)$. Thus, $start(tr_1) = start(tr_2)$. \nThe proof for $end(P_i+)$ is analogous.\n\n\\textbf{\\textit{Case}} \\textsf{SEQ}$(P_i, P_j)$. Let $tr_1 = (t_1 t_2), tr_2 = (t'_1 t'_2) \\in trends($ \\textsf{SEQ}$(P_i, P_j))$ where $t_1, t'_1 \\in trends(P_i)$ and $t_2, t'_2 \\in trends(P_j)$. \nBy Lemma~\\ref{lemma:non-empty}, $start(t_1)$ and $start(t'_1)$ are not empty.\nAccording to Algorithm~\\ref{lst:preprocessing_algorithm} Lines~10--14, $start(tr_1) = start(t_1)$ and $start(tr_2) = start(t'_1)$. \nSince $P_i$ contains neither disjunction nor star-Kleene, $start(t_1) = start(t'_1)$. Thus, $start(tr_1) = start(tr_2)$. \nThe proof for $end(\\textsf{SEQ}(P_i, P_j))$ is analogous.\n\\end{proof}\n\\vspace*{-2mm}\n\n\\textbf{\\textit{Transitions}} correspond to operators in $P$. They connect types of events that may be adjacent in a trend matched by $P$. 
\nIf a transition connects an event type $E_i$ with an event type $E_j$, then $E_i$ is a \\textit{predecessor event type} of $E_j$, denoted $E_i \\in P.predTypes(E_j)$.\nIn Figure~\\ref{fig:automaton}, $P.predTypes(A) = \\{A,B\\}$ and $P.predTypes(B) = \\{A\\}$.\n\n\\begin{algorithm}[t]\n\\caption{{\\small GRETA}\\ template construction algorithm}\n\\label{lst:preprocessing_algorithm}\n\\begin{algorithmic}[1]\n\\Require Positive pattern $P$\n\\Ensure GRETA template $\\mathcal{T}$\n\n\\State $generate(P)\\ \\{$\n\\State $S \\leftarrow \\text{event types in } P,\\ T \\leftarrow \\emptyset,\\ \\mathcal{T}=(S,T)$\n\n\\ForAll {$\\textsf{SEQ}(P_i,P_j)$ in $P$}\n\t\\State $t \\leftarrow (end(P_i),start(P_j)),\\ t.label \\leftarrow ``\\textsf{SEQ}\"$\n\t\\State $T \\leftarrow T \\cup \\{t\\}$\n\\EndFor\n\n\\ForAll {$P_i+$ in $P$}\n\t\\State $t \\leftarrow (end(P_i),start(P_i)),\\ t.label \\leftarrow ``+\"$\n\t\\State $T \\leftarrow T \\cup \\{t\\}$\n\\EndFor\n\\State\\Return $\\mathcal{T}\\ \\}$\n\n\\State $start(P)\\ \\{$ \n\\Switch {$P$}\n\\Case {$E$} \\Return $E$ \\EndCase\n\\Case {$P_i+$} \\Return $start(P_i)$ \\EndCase\n\\Case {$\\textsf{SEQ}(P_i,P_j)$} \\Return $start(P_i)\\ \\}$ \\EndCase\n\\EndSwitch\n\n\\State $end(P)\\ \\{$\n\\Switch {$P$}\n\\Case {$E$} \\Return $E$ \\EndCase\n\\Case {$P_i+$} \\Return $end(P_i)$ \\EndCase\n\\Case {$\\textsf{SEQ}(P_i,P_j)$} \\Return $end(P_j)\\ \\}$ \\EndCase\n\\EndSwitch\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\textbf{{\\small GRETA}\\ Template Construction Algorithm}.\nAlgorithm~\\ref{lst:preprocessing_algorithm} consumes a positive pattern $P$ and returns the automaton-based representation of $P$, called \\textit{GRETA template} $\\mathcal{T}=(S,T)$. The states $S$ correspond to the event types in $P$ (Line~2), while the transitions $T$ correspond to the operators in $P$. Initially, the set $T$ is empty (Line~2). \nFor each event sequence \\textsf{SEQ}$(P_i,P_j)$ in $P$, there is a transition from $end(P_i)$ to $start(P_j)$ with label ``\\textsf{SEQ}\" (Lines~3--5). \nAnalogously, for each Kleene plus $P_i+$ in $P$, there is a transition from $end(P_i)$ to $start(P_i)$ with label ``+\" (Lines~6--8). Start and end event types of a pattern are computed by the auxiliary methods in Lines~10--19. \n\n\\textbf{Complexity Analysis}.\nLet $P$ be a pattern of size $s$ (Definition~\\ref{def:pattern}). To extract all event types and operators from $P$, $P$ is parsed once in $\\Theta(s)$ time. For each operator, we determine its start and end event types in $O(s)$ time. Thus, the time complexity is quadratic $O(s^2)$.
\nThe space complexity is linear in the size of the template $\\Theta(|S|+|T|)=\\Theta(s)$.\n\n\n\\begin{figure*}[t]\n\\centering\n\\subfigure[\\small $A+$]{\n\t\\includegraphics[width=0.105\\columnwidth]{figures\/kleene1.png}\n\t\\label{fig:kleene1}\n}\n\\hspace*{2mm}\n\\subfigure[\\small $\\textsf{SEQ}(A+,B)$]{\n\t\\includegraphics[width=0.17\\columnwidth]{figures\/kleene2.png}\n\t\\label{fig:kleene2}\t\n}\n\\hspace*{2mm}\n\\subfigure[\\small $(\\textsf{SEQ}(A+,B))+$]{\n\t\\includegraphics[width=0.17\\columnwidth]{figures\/kleene3.png}\n\t\\label{fig:kleene3}\n}\n\\hspace*{2mm}\n\\subfigure[\\small $(\\textsf{SEQ}(A+, \\textsf{NOT SEQ}(C, \\textsf{NOT} E, D), B))+$]{\n\t\\includegraphics[width=0.41\\columnwidth]{figures\/negation.png}\n\t\\label{fig:negation}\n}\n\\vspace{-3mm}\n\\caption{Count of trends matched by the pattern $P$ in the stream $I=\\{a1,b2,c2,a3,e3,a4,c5,d6,b7,a8,b9\\}$}\n\\label{fig:pattern}\n\\end{figure*}\n\n\\subsection{Runtime GRETA Graph}\n\\label{sec:positive-algorithm}\n\nThe {\\small GRETA}\\ graph is a runtime instantiation of the \n{\\small GRETA}\\ template. \nThe graph is constructed on-the-fly as events arrive \n(Algorithm~\\ref{lst:ETA_algorithm}).\nThe graph compactly captures all matched trends and enables their incremental aggregation.\n\n\\textbf{Compact Event Trend Encoding}.\nThe graph encodes all trends and thus avoids their construction. \n\n\\textbf{\\textit{Vertices}} represent events in the stream $I$ matched by the pattern $P$. Each state with label $E$ in the template is associated with the sub-graph of events of type $E$ in the graph. We highlight each sub-graph by a rectangle frame. If $E$ is an end state, the frame is depicted as a double rectangle. Otherwise, the frame is a single rectangle. An event is labeled by its event type, time stamp, and intermediate aggregate (see below). Each event is stored once.\nFigure~\\ref{fig:kleene3} illustrates the template and the graph for the stream $I$. \n\n\\textbf{\\textit{Edges}} connect adjacent events in a trend matched by the pattern $P$ in a stream $I$ (Definition~\\ref{def:pattern}).\nWhile transitions in the template express predecessor relationships between event types in the pattern, edges in the graph capture predecessor relationships between events in a trend. \nIn Figure~\\ref{fig:kleene3}, we depict a transition in the template and its respective edges in the graph in the same way.\nA path from a \\textsf{START} to an \\textsf{END} event in the graph corresponds to a trend. The length of these trends ranges from the shortest $(a1,b2)$ to the longest $(a1,b2,a3,a4,b7,a8,b9)$. \n\nIn summary, the {\\small GRETA}\\ graph in Figure~\\ref{fig:kleene3} compactly captures all 43 event trends matched by the pattern $P$ in the stream $I$. In contrast to the two-step approach, the graph avoids repeated computations and replicated storage of common sub-trends such as $(a1,b2)$.\n\n\\textbf{Dynamic Aggregation Propagation}.\nIntermediate aggregates are propagated through the graph from previous events to new events in dynamic programming fashion. Final aggregate is incrementally computed based on intermediate aggregates. In the examples below, we compute event trend count \\textsf{COUNT}(*) as defined in Section~\\ref{sec:model}. 
Same principles apply to other aggregation functions (Section~\\ref{sec:discussion}).\n\n\\textbf{\\textit{Intermediate Count}} $e.count$ of an event $e$ corresponds to the number of (sub-)trends in $G$ that begin with a \\textsf{START} event in $G$ and end at $e$.\nWhen $e$ is inserted into the graph, all predecessor events of $e$ connect to $e$. That is, $e$ extends all trends that ended at a predecessor event of $e$. To accumulate the number of trends extended by $e$, $e.count$ is set to the sum of counts of the predecessor events of $e$. In addition, if $e$ is a \\textsf{START} event, it starts a new trend. Thus, $e.count$ is incremented by 1. \nIn Figure~\\ref{fig:kleene3}, the count of the \\textsf{START} event $a4$ is set to 1 plus the sum of the counts of its predecessor events $a1,b2,$ and $a3$. \n\\[\n\\begin{array}{l}\na4.count=1+(a1.count+b2.count+a3.count)=6 \\\\\n\\end{array}\n\\]\n\n$a4.count$ is computed once, stored, and reused to compute the counts of $b7,a8,$ and $b9$ that $a4$ connects to. For example, the count of $b7$ is set to the sum of the counts of all predecessor events of $b7$.\n\\[\n\\begin{array}{l}\nb7.count=a1.count+a3.count+a4.count=10 \\\\\n\\end{array}\n\\]\n\n\\textbf{\\textit{Final Count}} corresponds to the sum of the counts of all \\textsf{END} events in the graph. \n\\[\n\\begin{array}{l}\nfinal\\_count=b2.count+b7.count+b9.count=43 \\\\\n\\end{array}\n\\]\n\nIn summary, the count of a new event is computed based on the counts of previous events in the graph following the dynamic programming principle. Each intermediate count is computed once. The final count is incrementally updated by each \\textsf{END} event and instantaneously returned at the end of each window.\n\n\\vspace{-2mm}\n\\begin{definition}(\\textbf{{\\small GRETA}\\ Graph}.)\nThe \\textit{{\\small GRETA}\\ graph} $G = (V,E,fi\\-nal\\_count)$ for a query $q$ and a stream $I$ is a directed acyclic graph with a set of vertices $V$, a set of edges $E$, and a $final\\_count$.\nEach vertex $v \\in V$ corresponds to an event $e \\in I$ matched by $q$. A vertex $v$ has the label $(e.type\\ e.time : e.count)$ (Theorem~\\ref{theorem:count}). \nFor two vertices $v_i, v_j \\in V$, there is an edge $(v_i,v_j) \\in E$ if their respective events $e_i$ and $e_j$ are adjacent in a trend matched by $q$. Event $v_i$ is called a \\textit{predecessor event} of $v_j$. \n\\label{def:graph}\n\\end{definition}\n\\vspace{-2mm} \n\nThe {\\small GRETA}\\ graph has different shapes depending on the pattern and the stream.\nFigure~\\ref{fig:kleene1} shows the graph for the pattern $A+$. Events of type $B$ are not relevant for it. Events of type $A$ are both \\textsf{START} and \\textsf{END} events.\nFigure~\\ref{fig:kleene2} depicts the {\\small GRETA}\\ graph for the pattern \\textsf{SEQ}$(A+,B)$. There are no dashed edges since $b$'s may not precede $a$'s. \n\nTheorems~\\ref{theorem:correctness-graph} and \\ref{theorem:count} prove the correctness of the event trend count computation based on the {\\small GRETA}\\ graph.\n\n\\vspace{-2mm}\n\\begin{theorem}[\\textbf{Correctness of the GRETA Graph}]\nLet $G$ be the {\\small GRETA}\\ graph for a query $q$ and a stream $I$. \nLet $\\mathcal{P}$ be the set of paths from a \\textsf{START} to an \\textsf{END} event in $G$.\nLet $\\mathcal{T}$ be the set of trends detected by $q$ in $I$.\nThen, the set of paths $\\mathcal{P}$ and the set of trends $\\mathcal{T}$ are equivalent. 
That is, for each path $p \\in \\mathcal{P}$ there is a trend $tr \\in \\mathcal{T}$ with same events in the same order and vice versa.\n\\label{theorem:correctness-graph}\n\\end{theorem}\n\\vspace{-4mm}\n\n\\begin{proof}\n\\textbf{\\textit{Correctness}}. For each path $p \\in \\mathcal{P}$, there is a trend $tr \\in \\mathcal{T}$ with same events in the same order, i.e., $\\mathcal{P} \\subseteq \\mathcal{T}$.\nLet $p \\in \\mathcal{P}$ be a path. By definition, $p$ has one \\textsf{START}, one \\textsf{END}, and any number of \\textsf{MID} events. Edges between these events are determined by the query $q$ such that a pair of \\textit{adjacent events in a trend} is connected by an edge. Thus, the path $p$ corresponds to a trend $tr \\in \\mathcal{T}$ matched by the query $q$ in the stream $I$.\n\n\\textbf{\\textit{Completeness}}. For each trend $tr \\in \\mathcal{T}$, there is a path $p \\in \\mathcal{P}$ with same events in the same order, i.e., $\\mathcal{T} \\subseteq \\mathcal{P}$.\nLet $tr \\in \\mathcal{T}$ be a trend. We first prove that all events in $tr$ are inserted into the graph $G$. Then we prove that these events are connected by directed edges such that there is a path $p$ that visits these events in the order in which they appear in the trend $tr$. \nA \\textsf{START} event is always inserted, while a \\textsf{MID} or an \\textsf{END} event is inserted if it has predecessor events since otherwise there is no trend to extend. Thus, all events of the trend $tr$ are inserted into the graph $G$. The first statement is proven.\n\\textit{All} previous events that satisfy the query $q$ connect to a new event. Since events are processed in order by time stamps, edges connect previous events with more recent events. The second statement is proven.\n\\end{proof}\n\\vspace{-4mm}\n\n\\begin{theorem}[\\textbf{Event Trend Count Computation}]\nLet $G$ be the {\\small GRETA}\\ graph for a query $q$ and a stream $I$ and \n$e \\in I$ be an event with predecessor events $Pr$ in $G$. \n(1)~The intermediate count $e.count$ is the number of (sub) trends in $G$ that start at a \\textsf{START} event and end at $e$. \n$e.count = \\sum_{p \\in Pr} p.count$. \nIf $e$ is a \\textsf{START} event, $e.count$ is incremented by one.\nLet $End$ be the \\textsf{END} events in $G$.\n(2)~The final count is the number of trends captured by $G$. \n$final\\_count = \\sum_{end \\in End} end.count$.\n\\label{theorem:count}\n\\end{theorem}\n\\vspace*{-2mm}\n\n\\begin{proof}\n(1)~We prove the first statement by induction on the number of events in $G$.\n\n\\textbf{\\textit{Induction Basis}}: $n=1$. If there is only one event $e$ in the graph $G$, $e$ is the only (sub-)trend captured by $G$. Since $e$ is the only event in $G$, $e$ has no predecessor events. The event $e$ can be inserted into the graph only if $e$ is a \\textsf{START} event. Thus, $e.count=1$.\n\n\\textbf{\\textit{Induction Assumption}}: The statement is true for $n$ events in the graph $G$.\n\n\\textbf{\\textit{Induction Step}}: $n \\rightarrow n+1$. Assume a new event $e$ is inserted into the graph $G$ with $n$ events and the predecessor events $Pred$ of $e$ are connected to $e$. According to the induction assumption, each of the predecessor events $p \\in Pred$ has a count that corresponds to the number of sub-trends in $G$ that end at the event $p$. The new event $e$ continues \\textit{all} these trends. Thus, the number of these trends is the sum of counts of all $p \\in Pred$. In addition, each \\textsf{START} event initiates a new trend. 
Thus, 1 is added to the count of $e$ if $e$ is a \\textsf{START} event. The first statement is proven.\n\n(2)~By definition only \\textsf{END} events may finish trends. We are interested in the number of finished trends only. Since the count of an \\textsf{END} event $end$ corresponds to the number of trends that finish at the event $end$, the total number of trends captured by the graph $G$ is the sum of counts of all \\textsf{END} events in $G$. The second statement is proven.\n\\end{proof}\n\\vspace{-2mm}\n\n\\begin{algorithm}[t]\n\\caption{{\\small GRETA}\\ algorithm for positive patterns}\n\\label{lst:ETA_algorithm}\n\\begin{algorithmic}[1]\n\\Require Positive pattern $P$, stream $I$\n\\Ensure Count of trends matched by $P$ in $I$\n\\State $process\\_pos\\_pattern(P,I)\\ \\{$\n\\State $V \\leftarrow \\emptyset,\\ final\\_count \\leftarrow 0$\n\\ForAll {$e \\in I$ of type $E$} \n\t\t\\State $Pr \\leftarrow V.predEvents(e)$\n \\If {$E = start(P)$ or $Pr \\neq \\emptyset$}\t\t\n \t\t\t\\State $V \\leftarrow V \\cup e,\\;e.count \\leftarrow (E = start(P))\\;?\\;1\\;:\\;0$\n \t\t\t\\ForAll {$p \\in Pr$} $\\switch e.count += p.count$ \\EndFor \t\t\t\n \t\t\t\\If {$E = end(P)$} $\\switch final\\_count += e.count$ \\EndIf \t\t\t\n\t\\EndIf\n\\EndFor\n\\State\\Return $final\\_count\\ \\}$ \n\\end{algorithmic}\n\\end{algorithm}\n\n\\textbf{{\\small GRETA}\\ Algorithm for Positive Patterns} computes the number of trends matched by the pattern $P$ in the stream $I$. The set of vertices $V$ in the {\\small GRETA}\\ graph is initially empty (Line~2 of Algorithm~\\ref{lst:ETA_algorithm}). Since each edge is traversed exactly once, edges are not stored.\nWhen an event $e$ of type $E$ arrives, the method $V.predEvents(e)$ returns the predecessor events of $e$ in the graph (Line~4).\nA \\textsf{START} event is always inserted into the graph since it always starts a new trend, while a \\textsf{MID} or an \\textsf{END} event is inserted only if it has predecessor events (Lines~5--6).\nThe count of $e$ is increased by the counts of its predecessor events (Line~7). \nIf $e$ is a \\textsf{START} event, its count is incremented by 1 (Line~6).\nIf $e$ is an \\textsf{END} event, the final count is increased by the count of $e$ (Line~8). This final count is returned (Line~9).\n\n\\vspace*{-2mm}\n\\begin{theorem}[\\textbf{Correctness of the {\\small GRETA}\\ Algorithm}]\nGiven a positive event pattern $P$ and a stream $I$, Algorithm~\\ref{lst:ETA_algorithm} constructs the {\\small GRETA}\\ graph for $P$ and $I$ (Definition~\\ref{def:graph}) and computes the intermediate and final counts (Theorem~\\ref{theorem:count}).\n\\label{theorem:correctness-algo}\n\\end{theorem}\n\\vspace*{-4mm}\n\n\\begin{proof}\n\\textbf{\\textit{Graph Construction}}.\nEach event $e \\in I$ is processed (Line~3). A \\textsf{START} event is always inserted, while a \\textsf{MID} or an \\textsf{END} event is inserted if it has predecessor events (Lines~4--6). Thus, the set of vertices $V$ of the graph corresponds to events in the stream $I$ that are matched by the pattern $P$. \nEach predecessor event $p$ of a new event $e$ is connected to $e$ (Lines~8--9). Therefore, the edges $E$ of the graph capture adjacency relationships between events in trends matched by the pattern $P$. \n\n\\textbf{\\textit{Count Computation}}.\nInitially, the intermediate count $e.count$ of an event $e$ is either 1 if $e$ is a \\textsf{START} event or 0 otherwise (Line~7). 
$e.count$ is then incremented by $p.count$ of each predecessor event $p$ of $e$ (Lines~8, 10). Thus, $e.count$ is correct.\nInitially, the final count is 0 (Line~2). Then, it is incremented by $e.count$ of each \\textsf{END} event $e$ in the graph. Since $e.count$ is correct, the final count is correct too.\n\\end{proof}\n\\vspace*{-2mm}\n\nWe analyze complexity of Algorithm~\\ref{lst:ETA_algorithm} in Section~\\ref{sec:complexity}.\n\n\n\n\\section{Related Work}\n\\label{sec:related_work}\n\n\\textbf{Complex Event Processing}.\nCEP approaches like SASE \\cite{ADGI08,ZDI14}, Cayuga~\\cite{DGPRSW07}, ZStream~\\cite{MM09}, and E-Cube~\\cite{LRGGWAM11} support aggregation computation over event streams. \nSASE and Cayuga deploy a Finite State Automaton (FSA)-based query execution paradigm, meaning that each query is translated into an FSA. Each run of an FSA corresponds to an event trend. \nZStream translates an event query into an operator tree that is optimized based on the rewrite rules and the cost model.\nE-Cube employs hierarchical event stacks to share events across different event queries.\n\nHowever, the expressive power of all these approaches is limited. E-Cube does not support Kleene closure, while Cayuga and ZStream do not support the skip-till-any-match semantics nor the \\textsf{GROUP-BY} clause in their event query languages.\nFurthermore, these approaches define no optimization techniques for event trend aggregation. Instead, they handle aggregation as a post-processing step that follows trend construction. This trend construction step delays the system responsiveness as demonstrated in Section~\\ref{sec:evaluation}.\n\nIn contrast to the above approaches, A-Seq~\\cite{QCRR14} proposes \\textit{online} aggregation of \\textit{fixed-length} event sequences. The expressiveness of this approach is rather limited, namely, it supports neither Kleene closure, nor arbitrarily-nested event patterns, nor edge predicates. Therefore, it does not tackle the exponential complexity of event trends.\n\nThe CET approach~\\cite{PLAR17} focuses on optimizing the \\textit{construction of event trends}. It does not support aggregation, grouping, nor negation. In contrast, our {\\small GRETA}\\ approach focuses on \\textit{aggregation of event trends} without trend construction. Due to the exponential time and space complexity of trend construction, the CET approach is neither real-time nor lightweight as confirmed by our experiments. \n\n\\textbf{Data Streaming}.\nStreaming approaches~\\cite{AW04, GHMAE07, KWF06, LMTPT05, LMTPT05-2, THSW15, ZKOS05, ZKOSZ10} support aggregation computation over data streams. Some approaches incrementally aggregate \\textit{raw input events for single-stream queries}~\\cite{LMTPT05, LMTPT05-2}. Others share aggregation results between overlapping sliding windows~\\cite{AW04, LMTPT05}, which is also leveraged in our {\\small GRETA}\\ approach (Section~\\ref{sec:positive-algorithm}). Other approaches share intermediate aggregation results between multiple queries~\\cite{KWF06, ZKOS05, ZKOSZ10}.\nHowever, these approaches evaluate simple Select-Project-Join queries with window semantics. Their execution paradigm is set-based. They do not support CEP-specific operators such as event sequence and Kleene closure that treat the order of events as first-class citizens. Typically, these approaches require the \\textit{construction of join results} prior to their aggregation. 
Thus, they define incremental aggregation of \\textit{single raw events} but implement a two-step approach for join results.\n\nIndustrial streaming systems including Flink~\\cite{flink}, Esper~\\cite{esper}, Google Dataflow~\\cite{dataflow}, and Microsoft StreamInsight~\\cite{streaminsight} do not explicitly support Kleene closure nor aggregation of Kleene matches. However, Kleene closure computation can be simulated by a set of event sequence queries covering all possible lengths of a trend. This approach is possible only if the maximal length of a trend is known apriori -- which is rarely the case in practice. Furthermore, this approach is highly inefficient for two reasons. First, it runs a set of queries for each Kleene query. This increased workload drastically degrades the system performance. Second, since this approach requires event trend construction prior to their aggregation, it has exponential time complexity and thus fails to compute results within a few seconds.\n\n\\textbf{Static Sequence Databases}.\nThese approaches extend traditional SQL queries by order-aware join operations and support aggregation of their results~\\cite{LS03, LKHLCC08}. However, they do not support Kleene closure. Instead, \\textit{single data items} are aggregated~\\cite{LS03, MZ97, SZZA04, SLR96}. \nFurthermore, these approaches assume that the data is statically stored and indexed prior to processing. Hence, these approaches do not tackle challenges that arise due to dynamically streaming data such as event expiration and real-time execution. \n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nHow should social scientists understand and communicate the uncertainty of statistically estimated causal effects? They usually use a specific decision threshold of a \\textit{p}-value or confidence\/credible interval and conclude whether a causal effect is significant or not, meaning it is not (practically) null \\autocite{Gross2015, Kruschke2018a}. While convenient to make categorical decisions, the significance-vs.-insignificance dichotomy leads us to overlook the full nuance of the statistical measure of uncertainty. This is because uncertainty is the degree of confidence and, therefore, a continuous scale. The dichotomy is also associated with problems such as \\textit{p}-hacking, publication bias, and seeing statistical insignificance as evidence for the null hypothesis \\autocite[e.g., see][]{Amrhein2019, Esarey2016, Gerber2008, McShane2016, McShane2017, Simonsohn2014}. While these cited articles discuss these problems stemming from the Frequentist Null Hypothesis Significance Testing, Bayesian inference is equally susceptible to the same issues if a specific decision threshold is used to interpret a posterior distribution.\n\nBehavioral research suggests both researchers and decision makers understand uncertainty more appropriately if it is presented as a numerical, continuous scale rather than as a verbal, discrete scale \\autocite{Budescu2014, Friedman2018, Jenkins2018, McShane2016, McShane2017, Mislavsky2021}. If researchers are interested in estimating a causal effect, the natural quantity of uncertainty is the probability of an effect, i.e., the probability that a causal factor affects the outcome of interest. Probability is an intuitive quantity and used in everyday life, for example, in a weather forecast for the chance of rain. 
The probability of an effect can be computed by a posterior distribution estimated by Bayesian statistics \\autocite[or, if appropriate, by a pseudo-Bayesian, confidence distribution; see][]{Wood2019}.\n\nThe standard ways to summarize a posterior is the plot of a probability density function, the probability of a one-sided hypothesis, and a credible interval. These standard ways have drawbacks. A probability density plot is not the best for readers to accurately compute a probability mass for a specific range of parameter values \\autocite{Kay2016}. The probability of a one-sided hypothesis needs a decision threshold for the effect size from which on the probability is computed (typically, the probability of an effect being greater\/smaller than zero). A credible interval also needs a decision threshold for the level of probability based on which the interval is computed (typically, the 95\\% level). From a decision-theoretic perspective, the use of a decision threshold demands a justification based on a context-specific utility function \\autocites{Berger1985}[276--78]{Kruschke2018a}[170]{Lakens2018}{Mudge2012}. Yet, social scientific research often deals with too heterogeneous cases (e.g., all democracies) to find a single utility function, and may want to be agnostic about it.\n\nMotivated by these backgrounds, I propose presenting the uncertainty of statistically estimated causal effects, as the probabilities of different effect sizes. More specifically, it is the plot of a complementary cumulative distribution, where the probability is presented for an effect being greater than different effect sizes (here, ``greater'' is meant in absolute terms: a greater positive value than zero or some positive value, or a greater negative value than zero or some negative value). In this way, it is unnecessary for researchers to use any decision threshold for the ``significance,'' ``confidence,'' or ``credible'' level of uncertainty or for the effect size beyond which the effect is considered practically relevant. This means researchers can be agnostic about a decision threshold and a justification for that. In my approach, researchers play a role of an information provider and present different effect sizes and their associated probabilities as such. My approach applies regardless of the types of causal effect estimate (population average, sample average, individual, etc).\n\nThe positive implications of my approach can be summarized as follows. First, my approach could help social scientists avoid the dichotomy of significance vs. insignificance and present statistical uncertainty regarding causal effects as such: a step also recommended by \\textcite{Gelman2017}. This could enable social scientists to better understand and communicate the continuous nature of the uncertainty of statistically estimated causal effects. I demonstrate this point by applying my approach to a previous social scientific study.\n\nSecond, as a result of social scientists using my approach, decision makers could evaluate whether to use a treatment or not, in light of their own utility functions. The conventional thresholds such as $p<5\\%$ could produce statistical \\textit{in}significance for a treatment effect because of a small sample or effect size, even if the true effect were non-zero. Then, decision makers might believe it is evidence for no effect and therefore decide not to use the treatment. 
This would be a lost opportunity, however, if the probability of the beneficial treatment effect were actually high enough (if not as high as 95\\%) for these decision makers to use the treatment and accept the risk of failure.\n\nFinally, my approach could help mitigate \\textit{p}-hacking and the publication bias, as researchers could feel less need to report only a particular model out of many that produces an uncertainty measure below a certain threshold. It might be argued that my approach will also allow theoretically or methodologically questionable research to claim credibility based on not so high but decent probability (e.g., 70\\%) that some factor affects an outcome. Yet, the threshold of $p<5\\%$ has not prevented questionable research from being published, as the replication crisis suggests. A stricter threshold of a \\textit{p}-value \\autocite[e.g.,][]{Benjamin2018} may reduce, but does not eliminate, the chance of questionable research satisfying the threshold. A stricter threshold also has a side effect: it would increase false negatives and, as a result, the publication bias. In short, the important thing is to evaluate the credibility of research not only based on an uncertainty measure such as a p-value and probability but also on the entire research design and theoretical arguments: ``No single index should substitute for scientific reasoning'' \\autocite[132]{Wasserstein2016a}. While further consideration and research are necessary to understand what the best practice to present and interpret statistical uncertainty is to maximize scientific integrity, I hope my approach contributes to this discussion. In this article, I assume statistically estimated causal effects are not based on questionable research and the model assumptions are plausible.\n\nThe rest of the article further explains the motivation for, and the detail of, my approach, and then applies it to a previous social scientific study. The accompanying R package makes my approach easy to implement (see the section ``Supplemental Materials''). All statistical analyses for this article were done on RStudio \\autocite{RStudioTeam2020} running R version 4.1.0 \\autocite{RCoreTeam2021}. The data visualization was done by the ggplot2 package \\autocite{Wickham2016}.\n\n\n\\section{Motivation}\nBehavioral research suggests the use of probability as a numerical, continuous scale of uncertainty has merits for both decision makers and researchers, compared to a verbal, discrete scale. In communicating the uncertainty of predictions, a numerical scale more accurately conveys the degree of uncertainty researchers intend to communicate \\autocite{Budescu2014, Jenkins2018, Mandel2021}; improves decision makers' forecasting \\autocite{Fernandes2018, Friedman2018}; mitigates the public perception that science is unreliable in case the prediction of unlikely events fails \\autocite{Jenkins2019}; and helps people aggregate the different sources of uncertainty information in a mathematically consistent way \\autocite{Mislavsky2021}. Even researchers familiar with quantitative methods often misinterpret statistical insignificance as evidence for the null effect, while presenting a numerical probability corrects such a dichotomous thinking \\autocite{McShane2016}. 
My approach builds on these insights into understanding and communicating uncertainty, adding to a toolkit for researchers.\n\nSocial scientists usually turn the continuous measures of the uncertainty of statistically estimated causal effects, such as a \\textit{p}-value and a posterior, into the dichotomy of significance and insignificance using a decision threshold. Such a dichotomy results in the misunderstandings and misuses of uncertainty measures \\autocite[e.g., see][]{Amrhein2019, Esarey2016, Gerber2008, McShane2016, McShane2017, Simonsohn2014}. \\textit{p}-hacking is a practice to search for a model that produces a statistically significant effect to increase the chance of publication, and results in a greater likelihood of false positives being published \\autocite{Simonsohn2014}. If only statistically significant effects are to be published, our knowledge based on published studies is biased -- the publication bias \\autocite[e.g.,][]{Esarey2016, Gerber2008, Simonsohn2014}. Meanwhile, statistical \\textit{in}significance is often mistaken as evidence for no effect \\autocite{McShane2016, McShane2017}. For example, given a decision threshold of 95\\%, both 94\\% probability and 1\\% probability are categorized as statistically insignificant, although the former presents much smaller uncertainty than the latter. If a study failed to find statistical significance for a beneficial causal effect because of the lack of access to a large sample or because of the true effect size being small, it could be a lost opportunity for decision makers. It might be argued that a small effect size is practically irrelevant, but this is not by definition so but depends on contexts. For example, if a treatment had a small but beneficial effect and also were cheap, it could be useful for decision makers.\n\nA decision threshold is not always a bad idea. For example, it may sometimes be necessary for experts to suggest a way for non-experts to make a yes\/no decision \\autocite[271]{Kruschke2018a}. In such a case, a decision threshold may be justified by a utility function tailored to a particular case \\autocites[276]{Kruschke2018a}[170]{Lakens2018}{Mudge2012}.\n\nHowever, social scientific research often examines too heterogeneous cases to identify a single utility function that applies to every case, and may want to be agnostic about it. This may be one of the reasons why researchers usually resort to the conventional threshold such as whether a point estimate has a \\textit{p}-value of less than 5\\%, or whether a 95\\% confidence\/credible interval does not include zero. Yet, the conventional threshold (or any other threshold) is unlikely to be universally optimal for decision making, exactly because how small the uncertainty of a causal effect should be, depends on a utility function, which varies across decision-making contexts \\autocites{Berger1985}[278]{Kruschke2018a}[170]{Lakens2018}{Mudge2012}.\n\nEven if one adopted the view that research should be free from context-specific utility and should use an ``objective'' criterion universally to detect a causal effect, $p<5\\%$ would not be such a criterion. This is because universally requiring a fixed \\textit{p}-value shifts subjectivity from the choice of a decision threshold for a \\textit{p}-value to that for a sample size, meaning that statistical significance can be obtained by choosing a large enough sample size subjectively \\autocite{Mudge2012}. 
If ``anything that plausibly could have an effect will not have an effect that is exactly zero'' in social science \\autocite[961]{Gelman2011}, a non-zero effect is by definition ``detectable'' as a sample size increases.\n\nWhile some propose, as an alternative to the conventional threshold, that we evaluate whether an interval estimate excludes not only zero but also practically null values \\autocite{Gross2015, Kruschke2018a}, this requires researchers to justify why a certain range of values should be considered practically null, which still depends on a utility function \\autocite[276--78]{Kruschke2018a}. My approach avoids this problem, because it does not require the use of any decision threshold either for an uncertainty measure or for an effect size. Using a posterior distribution, it simply presents the probabilities of different effect sizes as such. My approach allows researchers to play a role of an information provider for the uncertainty of statistically estimated causal effects, and let decision makers use this information in light of their own utility functions.\n\nThe standard ways to summarize a posterior distribution are a probability density plot, the probability of a one-sided hypothesis, and a credible interval. My approach is different from these in the following respects. A probability density plot shows the full distribution of parameter values. Yet, it is difficult for readers to accurately compute a probability mass for a specific range of parameter values, just by looking at a probability density plot \\autocite{Kay2016}.\n\nPerhaps for this reason, researchers sometimes report together with a density plot either (1) the probability mass for parameter values greater\/smaller than a particular value (usually zero), i.e., the probability of a one-sided hypothesis, or (2) a credible interval. However, these two approaches also have drawbacks. In the case of the probability of a one-sided hypothesis, a specific value needs to be defined as the decision threshold for the effect size from which on the probability is computed. Therefore, it brings the aforementioned problem, i.e., demanding a justification for that particular threshold based on a utility function. In the case of a credible interval, we must decide what level of probability is used to define an interval, and what are practically null values \\autocite{Kruschke2018a}. This also brings the problem of demanding a justification for particular thresholds, one for the level of probability and the other for practically null values, based on a utility function. In addition, when a credible interval includes conflicting values (e.g., both significant positive and significant negative values), it can only imply inconclusive information \\autocite{Kruschke2018a}, although the degree of uncertainty actually differs depending on how much portion of the interval includes these conflicting values (e.g., 5\\% of the interval vs. 50\\% of the interval).\n\nExperimental studies by \\textcites{Allen2014, Edwards2012, Fernandes2018} suggest the plot of a complementary cumulative distribution, such as my approach as explained in the next section, is one of the best ways to present uncertainty (although their findings are about the uncertainty of predictions rather than that of causal effects). The plot of a complementary cumulative distribution presents the probability of a random variable taking a value greater (in absolute terms) than some specific value. 
Formally: $P(X>x)$, where $X$ is a random variable and $x$ is a particular value of $X$. \\textcites{Allen2014, Edwards2012} find that the plot of a complementary cumulative distribution is effective both for probability estimation accuracy and for making a correct decision based on the probability estimation, even under time pressure \\autocite{Edwards2012} or cognitive load \\autocite{Allen2014}. Note that \\textcites{Allen2014, Edwards2012} indicate that a complementary cumulative distribution is unsuitable for accurately estimating the mean of a distribution. If the quantities of interest included the mean of a posterior distribution, one could report it alongside the plot. \\textcite{Fernandes2018} find that the plot of a complementary cumulative distribution is one of the best formats for optimal decision making over repeated decision-making contexts, while a probability density plot, a one-sided hypothesis, and a credible interval are not.\n\nI am not arguing that my proposed approach should be used in every case or that it is the ideal way to report the uncertainty of statistically estimated causal effects. For example, if one were interested in the Frequentist coverage of a fixed parameter value, she would rather use a confidence interval over repeated sampling. Note that repeated sampling for the same study is usually uncommon in social science; uncertainty is usually specific to a single study.\n\nIt is possible that there is no universally best way to present uncertainty \\autocite{Visschers2009}. It is also possible that ``the presentation format hardly has an impact on people in situations where they have time, motivation, and cognitive capacity to process information systematically'' \\autocites[284]{Visschers2009}[but also see][]{Suzuki2021}. Yet, even those people would be unable to evaluate uncertainty properly if the presentation provided only limited information because of its focus on a specific decision threshold without giving any justification for that threshold. For example, if only a 95\\% credible interval were presented, it would be difficult to see what the range of the effect sizes would be if a decision maker were interested in a different probability level (e.g., 85\\%). My approach conveys the uncertainty of statistically estimated causal effects without any decision threshold. \n\n\n\\section{The Proposed Approach}\nMy approach utilizes a posterior distribution estimated by Bayesian statistics \\autocite[for Bayesian statistics, see for example][]{Gelman2013BDA, Gill2015, Kruschke2015, McElreath2016}. A posterior, denoted as $p(\\theta|D,\\ M)$, is the probability distribution of a parameter, $\\theta$, given data, $D$, and model assumptions, $M$, such as a functional form, an identification strategy, and a prior distribution of $\\theta$. Here, let us assume $\\theta$ is a parameter for the effect of a causal factor.\n\nUsing a posterior distribution, we can compute the probabilities of different effect sizes. More specifically, we can compute the probability that a causal factor has an effect greater (in absolute terms) than some effect size. Formally: $P(\\theta>\\tilde\\theta^{+}| D, M)$ for the positive values of $\\theta$, where $\\tilde\\theta^{+}$ is zero or some positive value; $P(\\theta<\\tilde\\theta^{-}| D, M)$ for the negative values of $\\theta$, where $\\tilde\\theta^{-}$ is zero or some negative value. 
If we compute this probability while changing $\\tilde\\theta$ up to theoretical or practical limits (e.g., up to the theoretical bounds of a posterior or up to the minimum\/maximum in posterior samples), it results in a complementary cumulative distribution.\n\nAs is often the case, a posterior distribution might include both positive and negative values. In such a case, we should compute two complementary distribution functions, one for the positive values of $\\theta$, i.e., $P(\\theta>\\tilde\\theta^{+}| D, M)$ and the other for the negative values of $\\theta$, i.e., $P(\\theta<\\tilde\\theta^{-}| D, M)$. Whether it is plausible to suspect that the effect can be either positive or negative depends on the theory. If only one direction of the effect is theoretically plausible \\autocite[e.g., see][]{Vanderweele2010}, this should be reflected in the prior of $\\theta$, e.g., by putting a bound on the range of the parameter.\n\nFigure \\ref{ccdfExa} is an example of my approach. I use a distribution of 10,000 draws from a normal distribution with a mean of 1 and a standard deviation of 1. Let us assume these draws are those from a posterior distribution, or ``posterior samples,'' for a coefficient in a linear regression, so that the values represent a change in an outcome variable. The x-axis shows different effect sizes, or different values of the minimum predicted change in the outcome; the y-axis shows the probability of an effect being greater than a given minimum predicted change. The y-axis has the qualifier ``near'' on 0\\% and 100\\%, because the normal distribution is unbounded and, therefore, the plot of the posterior samples cannot represent the exact 0\\% and 100\\% probability.\n\n\\begin{figure}[t]\n \\includegraphics[scale=0.15]{ccdfExa.png}\n \\centering\n \\caption{Example of presenting a posterior as a complementary cumulative distribution plot.}\n \\label{ccdfExa}\n\\end{figure}\n\nFrom the figure, it is possible to see the probability of an effect being greater (in absolute terms) than a certain effect size. For example, we can say: ``The effect is expected to increase the outcome by more than zero points with a probability of approximately 84\\%.'' It is also clear that the positive effect is much more probable than the negative effect: 16\\% probability for $\\theta<0$ while 84\\% probability for $\\theta>0$. Finally, with a little more attention, we can also read the probability of a range of effect sizes. For example, we can compute the probability that the effect increases the outcome by greater than one point and up to three points, as $P(\\theta>1) - P(\\theta>3) \\approx .49 - .02 = .47$. This computation is useful if there is a point at which an effect becomes ``too much'' and a treatment becomes counterproductive (e.g., the effect of a diet on weight loss).\n\n
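As a concrete illustration of these computations, the probabilities and the plot above can be reproduced from posterior draws with a few lines of code. The following is a minimal sketch in Python (the analysis in this article uses R, and the accompanying R package implements the approach itself); the random seed, grid, and variable names are arbitrary illustrative choices.\n\\begin{verbatim}\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# 10,000 draws from Normal(1, 1) stand in for posterior samples.\nrng = np.random.default_rng(seed=1)\ndraws = rng.normal(loc=1, scale=1, size=10000)\n\ndef prob_greater(samples, value):\n    # Probability that the effect exceeds a given size.\n    return np.mean(samples > value)\n\nprint(prob_greater(draws, 0))   # approx. 0.84\nprint(np.mean(draws < 0))       # approx. 0.16\nprint(prob_greater(draws, 1) - prob_greater(draws, 3))   # approx. 0.47\n\n# Complementary cumulative distribution for positive effect sizes.\ngrid = np.linspace(0, draws.max(), 200)\nplt.plot(grid, [prob_greater(draws, g) for g in grid])\nplt.xlabel('Minimum predicted change in the outcome')\nplt.ylabel('Probability of a larger effect')\nplt.show()\n\\end{verbatim}\n\n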
For comparison, I also present the standard ways to summarize a posterior, using the same 10,000 draws as in Figure \\ref{ccdfExa}: a probability density plot (Figure \\ref{pdfExa}), a credible interval, and one-sided hypotheses (both in Table \\ref{tableExa}). The probability density plot gives an overall impression that positive values are more probable than negative values. However, it is difficult to compute exactly the probability that an effect is greater than, say, 1. Unlike my approach, the y-axis is density rather than probability, and density is not an intuitive quantity.\n\nThe credible interval is computed based on the conventional decision threshold of the 95\\% credible level. It includes both negative and positive values and, therefore, the conventional approach leads either to the conclusion that the effect is not statistically significant or to the conclusion that there is no conclusive evidence for the effect \\autocite{Gross2015, Kruschke2018a}. The one-sided hypotheses are computed based on the decision threshold of the null effect, i.e., $P(\\theta>0)$ and $P(\\theta<0)$, as commonly done. Because of this threshold, the information about the uncertainty is much more limited than what my approach presents in Figure \\ref{ccdfExa}. Most importantly, the use of these decision thresholds requires researchers to justify why these thresholds should be used, rather than, say, the 94\\% credible interval or $P(\\theta>0.1)$ and $P(\\theta<-0.1)$. This problem does not apply to my approach. My approach can be considered a generalized way to use one-sided hypotheses. It graphically summarizes all one-sided hypotheses (up to a practically computable point), using different effect sizes as the thresholds beyond which the probability of an effect is computed.\n\n\\begin{figure}[t]\n \\includegraphics[scale=0.15]{pdfExa.png}\n \\centering\n \\caption{Example of presenting a posterior as a probability density plot.}\n \\label{pdfExa}\n\\end{figure}\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{c c c c}\n\\hline\nMean & 95\\% Credible Interval & $P(\\theta>0)$ & $P(\\theta<0)$\\\\\n\\hline\n$0.98$ & [$-1.01$, $2.92$] & $0.84$ & $0.16$\\\\\n\\hline\n\\end{tabular}\n\\caption{Example of presenting a posterior as a credible interval or as one-sided hypotheses.}\n\\label{tableExa}\n\\end{table}\n\nI note three caveats about my approach. First, it takes up more space than presenting regression tables, the common presentation style in social science. Therefore, if we wanted to present all regression coefficients using my approach, it would be too cumbersome. Yet, the purpose of quantitative causal research in social science is usually to estimate the effect size of a causal factor or two, and the remaining regressors are controls to enable the identification of the causal effect(s) of interest. Indeed, it is often hard to interpret all regression coefficients causally because of complex causal mechanisms \\autocite{Keele2020}. When researchers use matching instead of regression, they typically report only the effect of a treatment variable \\autocite[1--2]{Keele2020}, although the identification strategy is the same -- selection on observables \\autocite[321--22]{Keele2015}. Thus, if researchers are interested in estimating causal effects, they usually need to report only one or two causal factors. If so, it is not a problem even if my proposed approach takes more space than regression tables.\n\nSecond, if the effect size is scaled in a nonintuitive measure (such as a log odds ratio in logistic regression), researchers will need to convert it to an intuitive scale to make my approach work best \\autocite[for detail, see][]{Sarma2020}. For example, in the case of logistic regression, researchers can express an effect size as a difference in the predicted probability of an outcome variable.\n\nThird, as all models are wrong, Bayesian models are also wrong; they are simplifications of reality. A posterior distribution is conditional on data used and model assumptions. 
It cannot be a reliable estimate if the data used are inadequate for the purpose of the research (e.g., a sample being unrepresentative of the target population or collected with measurement errors), and\/or if one or more of the model assumptions are implausible (which is usually the case). Moreover, in practice a posterior usually needs to be computed by a Markov chain Monte Carlo (MCMC) method, and there is no guarantee that the resulting posterior samples precisely mirror the true posterior. Therefore, the estimated probability of an effect should not be considered the ``perfect'' measure of uncertainty. For example, even if the estimated probability of an effect being greater than zero is 100\\% \\textit{given the model and the computational method}, it should NOT be interpreted as the certainty of the effect \\textit{in practice}.\n\nGiven these limitations, the same principle applies to my proposed approach as to the \\textit{p}-value: ``No single index should substitute for scientific reasoning'' \\autocite[132]{Wasserstein2016a}. What matters is not the ``trueness'' of a model but the ``usefulness'' of a model. My approach makes a model more useful than the conventional approaches to evaluating and communicating the uncertainty of statistically estimated causal effects do, in the following respects. First, it uses the probability of an effect as an intuitive quantity of uncertainty, for better understanding and communication. Second, it does not require any decision thresholds for uncertainty measures or effect sizes. Therefore, it allows researchers to be agnostic about a utility function required to justify such decision thresholds, and to be an information provider presenting the probabilities of different effect sizes as such.\n\n\n\\section{Application}\nI exemplify my proposed approach by applying it to a previous social scientific study \\autocite{Huff2016} and using its dataset \\autocite{Huff2015a}. \\textcite{Huff2016} collected a nationally representative sample of 2,000 Polish adults and conducted an experiment to examine whether more violent methods of protest by an opposition group increase or decrease public support for the government negotiating with the group. Specifically, I focus on the two analyses in Figure 4 of \\textcite{Huff2016}, which present (1) the effect of an opposition group using bombing in comparison to occupation, and (2) the effect of an opposition group using occupation in comparison to demonstrations, on the attitude of the experiment participants towards tax policy in favor of the opposition group. The two treatment variables are measured dichotomously, while the attitude towards tax policy as the dependent variable is measured on a 100-point scale. \\textcite{Huff2016} use linear regression per treatment variable to estimate its average effect. 
The model is $Y=\\beta_0+\\beta_{1}D+\\epsilon$, where $Y$ is the dependent variable, $D$ is the treatment variable, $\\beta_{0}$ is the constant, $\\beta_{1}$ captures the size of the average causal effect, and $\\epsilon$ is the error term.\n\nIn the application, I convert the model to the following equivalent Bayesian linear regression model:\n\n\\begin{align*}\ny_{i} & \\sim Normal(\\mu_{i}, \\sigma),\\\\\n\\mu_{i} & = \\beta_0 + \\beta_1 d_{i},\\\\\n\\beta_{0} & \\sim Normal(\\mu_{\\beta_0}=50,\\sigma_{\\beta_0}=20),\\\\\n\\beta_{1} & \\sim Normal(\\mu_{\\beta_1}=0,\\sigma_{\\beta_1}=5),\\\\\n\\sigma & \\sim Exponential(rate=0.5),\n\\end{align*}\n\n\\noindent\nwhere $y_{i}$ and $d_{i}$ are respectively the outcome $Y$ and the treatment $D$ for an individual $i$; $Normal(\\cdot)$ denotes a normal distribution; $\\mu_{i}$ is the mean for $i$ and $\\sigma$ is the standard deviation in the normal distribution likelihood. For the quantity of interest $\\beta_{1}$, I use a weakly informative prior of $Normal(\\mu_{\\beta_1}=0,\\sigma_{\\beta_1}=5)$. This prior reflects the point that the original study presents no prior belief in favor of the negative or positive average effect of a more violent method by an opposition group on public support, as it hypothesizes that both effects are plausible. For $\\beta_{0}$, the constant term, I use a weakly informative prior of $Normal(\\mu_{\\beta_0}=50,\\sigma_{\\beta_0}=20)$; this prior means that, without the treatment, respondents are expected to have a neutral attitude on average, but the baseline attitude of each individual may well vary. For $\\sigma$ in the likelihood, I put a weakly informative prior of the exponential distribution $Exponential(rate=0.5)$, implying that any stochastic factor is unlikely to change the predicted outcome value by more than 10 points. I use four MCMC chains; 10,000 iterations are run per chain, the first 1,000 of which are discarded as warm-up. The MCMC sampling is done with Stan \\autocite{StanDevelopmentTeam2019b}, implemented via the rstanarm package version 2.21.1 \\autocite{Goodrich2020a}. The $\\hat{R}$ was approximately 1.00 for every estimated parameter, suggesting the models did not fail to converge. The effective sample size exceeded 15,000 for every estimated parameter.\n\n
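The estimation itself is run in R with rstanarm, as just described. For readers who prefer Python, the same likelihood and priors could be written, for example, in PyMC; the sketch below is only a rough illustration, the treatment and outcome arrays are synthetic placeholders rather than the replication data, and all results reported in this article come from the rstanarm fit.\n\\begin{verbatim}\nimport numpy as np\nimport pymc as pm\n\n# Placeholder data standing in for one treatment comparison.\nrng = np.random.default_rng(0)\nd = rng.integers(0, 2, size=996)            # treatment indicator\ny = 50 - 2.5 * d + rng.normal(0, 20, 996)   # attitude (0-100 scale)\n\nwith pm.Model() as model:\n    beta0 = pm.Normal('beta0', mu=50, sigma=20)   # constant\n    beta1 = pm.Normal('beta1', mu=0, sigma=5)     # average causal effect\n    sigma = pm.Exponential('sigma', lam=0.5)      # residual scale\n    pm.Normal('y_obs', mu=beta0 + beta1 * d, sigma=sigma, observed=y)\n    idata = pm.sample(draws=9000, tune=1000, chains=4)\n\nbeta1_draws = idata.posterior['beta1'].values.ravel()\nprint((beta1_draws < 0).mean())   # posterior probability of a negative effect\n\\end{verbatim}\n\n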
Table \\ref{tableHK} presents the results in a conventional way: the mean in the posterior of $\\beta_{1}$ and the 95\\% credible interval for the model examining the effect of bombing in comparison to occupation, and those for the model examining the effect of occupation in comparison to demonstrations. While the mean is a negative value in both models, the 95\\% credible interval includes not only negative values but also zero and positive values. This means the typical interval approach \\autocite[e.g.,][]{Gross2015, Kruschke2018a} would lead us simply to conclude that there is no conclusive evidence for the average effect.\n\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{c c c c}\n\\hline\n& Mean $\\beta_{1}$ & 95\\% Credible Interval & N\\\\\n\\hline\nbombing vs. occupation & $-2.49$ & [$-5.48$, $0.49$] & 996\\\\\noccupation vs. demonstration & $-0.53$ & [$-3.31$, $2.32$] & 985\\\\\n\\hline\n\\end{tabular}\n\\caption{Results using the 95\\% credible interval. $\\hat{R}\\cong1.00$ for all parameters.}\n\\label{tableHK}\n\\end{table}\n\nFigure \\ref{ccdfBom} uses my approach for the effect of bombing in comparison to occupation. It enables richer inference than the above conventional approach. If we focus on the probability of $\\beta_{1}<0$, for example, the model expects that if an opposition group uses bombing instead of occupation, it should reduce public support for tax policy in favor of the group by more than 0 points, with a probability of approximately 95\\%. Thus, the negative effect of bombing is much more likely (95\\% probability) than the positive effect (5\\% probability).\n\n\\begin{figure}[t]\n \\includegraphics[scale=0.15]{ccdfBom.png}\n \\centering\n \\caption{Effect of bombing in comparison to occupation.}\n \\label{ccdfBom}\n\\end{figure}\n \nThe original conclusion was that the effect was not statistically significant at $p<5\\%$, the threshold set at the pre-registration stage \\autocite[1794--1795]{Huff2016}. However, the authors added a \\textit{post hoc} caveat that if the threshold of statistical significance had been set at 10\\%, the effect would have been regarded as statistically significant \\autocite[1795]{Huff2016}. This interpretation is inconsistent with both the Fisherian paradigm of \\textit{p}-values and the Neyman-Pearson paradigm of \\textit{p}-values \\autocite{Lew2012}. According to the Fisherian paradigm, the preset threshold of statistical significance is unnecessary, because a \\textit{p}-value in this paradigm is a local measure and not a global false positive rate -- the rate of false positives over repeated sampling from the same data distribution \\autocite[1562--63]{Lew2012}. An exact \\textit{p}-value should be interpreted as such -- although it is difficult to make intuitive sense of, because it is not the probability of an effect but the probability of obtaining data as extreme as, or more extreme than, those that are observed, given the null hypothesis being true \\autocites[1560]{Lew2012}[131]{Wasserstein2016a}. According to the Neyman-Pearson paradigm, no \\textit{post hoc} adjustment to the preset threshold of statistical significance should be made, because a \\textit{p}-value in this paradigm is used as a global false positive rate and not as a local measure \\autocite[1562--63]{Lew2012}. The estimates from my approach are more straightforward to interpret. We need no \\textit{a priori} decision threshold on a posterior to determine significance, and can \\textit{post hoc} evaluate the probability of an effect \\autocite[328]{Kruschke2015}.\n\nFigure \\ref{ccdfOcc} uses my approach for the effect of occupation in comparison to demonstrations. The model expects that if an opposition group uses occupation instead of demonstrations, it should reduce public support for tax policy in favor of the group by more than 0 points, with a probability of approximately 64\\%. This suggests weak evidence, rather than no evidence, for the negative effect of occupation, meaning that the negative effect is more likely (64\\% probability) than the positive effect (36\\% probability). Meanwhile, the original conclusion was simply that the effect was not statistically significant at $p<5\\%$ \\autocite[1777, 1795]{Huff2016}.\n\n\\begin{figure}[t]\n \\includegraphics[scale=0.15]{ccdfOcc.png}\n \\centering\n \\caption{Effect of occupation in comparison to demonstration.}\n \\label{ccdfOcc}\n\\end{figure}\n\nIn short, my approach presents richer information about the uncertainty of the statistically estimated causal effects than the significance-vs.-insignificance approach taken by the original study. 
It aids understanding and communication of the uncertainty of the average causal effects of violent protest methods.\n\n \n\\section{Conclusion}\nI have proposed an alternative approach for social scientists to present the uncertainty of statistically estimated causal effects: the probabilities of different effect sizes via the plot of a complementary cumulative distribution function. Unlike the conventional significance-vs.-insignificance approach and the standard ways to summarize a posterior distribution, my approach does not require any preset decision threshold for the ``significance,'' ``confidence,'' or ``credible'' level of uncertainty or for the effect size beyond which the effect is considered practically relevant. It therefore allows researchers to be agnostic about a decision threshold and the justification for it. In my approach, researchers play the role of an information provider and present different effect sizes and their associated probabilities as such. I have shown through the application to the previous study that my approach presents richer information about the uncertainty of statistically estimated causal effects than the conventional significance-vs.-insignificance approach, aiding understanding and communication of the uncertainty.\n\nMy approach has implications for problems in current (social) scientific practice, such as \\textit{p}-hacking, the publication bias, and seeing statistical insignificance as evidence for the null effect \\autocite{Amrhein2019, Esarey2016, Gerber2008, McShane2016, McShane2017, Simonsohn2014}. First, if the uncertainty of statistically estimated causal effects were reported using my approach, both researchers and non-experts would be able to understand it intuitively, as probability is used as a continuous measure of uncertainty in everyday life (e.g., a weather forecast for the chance of rain). This could help mitigate the dichotomous thinking of there being an effect or no effect, commonly associated with the significance-vs.-insignificance approach. Second, if research outlets such as journals accepted my approach as a way to present the uncertainty of statistically estimated causal effects, researchers could feel less need to report only a model that produces an uncertainty measure below a certain threshold, such as $p<5\\%$. This could help address the problem of \\textit{p}-hacking and the publication bias.\n\nPresenting the uncertainty of statistically estimated causal effects as such has implications for decision makers as well. If decision makers had access to the probabilities of different effect sizes, they could use this information in light of their own utility functions and decide whether to use a treatment or not. It is possible that even when the conventional threshold of the 95\\% credible level produced statistical \\textit{in}significance for a treatment effect, decision makers could have a utility function that leads them to prefer using the treatment over doing nothing, given a level of probability that does not reach the conventional threshold but that they see as ``high enough.'' It is also possible that even when a causal effect reaches the conventional threshold of the 95\\% credible level, decision makers could have a utility function under which the probability is not high enough (e.g., it must be 99\\% rather than 95\\%), leading them to prefer doing nothing over using the treatment.\n\nI acknowledge that my proposed approach may not suit everyone's needs. 
Yet, I hope this article provides a useful insight into how to understand and communicate the uncertainty of statistically estimated causal effects, and that the accompanying R package helps other researchers to implement my approach without difficulty.\n\n\n\\section*{Acknowledgments}\nI would like to thank Johan A. Elkink, Jeff Gill, Zbigniew Truchlewski, Alexandru Moise, and participants in the 2019 PSA Political Methodology Group Annual Conference, the 2019 EPSA Annual Conference, and seminars at Dublin City University and University College Dublin, for their helpful comments. I would like to acknowledge the receipt of funding from the Irish Research Council (the grant number: GOIPD\/2018\/328) for the development of this work. The views expressed are my own unless otherwise stated, and do not necessarily represent those of the institutes\/organizations to which I am\/have been related.\n\n\n\\section*{Supplemental Materials}\nThe R code to reproduce the results in this article and the R package to implement my approach (``ccdfpost'') is available on my website at \\url{https:\/\/akisatosuzuki.github.io}.\n\n\n\\printbibliography\n \n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{INTRODUCTION}\n\\label{sec:intro}\n\nAmorphous carbon (a-C) has been intensively investigated over the\nyears, but its properties are not fully understood. \nSome of them are strongly debated. Tetrahedral a-C (ta-C), containing a \nhigh fraction of $sp^3$ hybrids, is the form of a-C which\nhas drawn most attention because of its\ndiamondlike properties \\cite{Robertson02,Silva02}, including high\nhardness for mechanical purposes, a wide band gap for optical \napplications, and biocompatibility for biomedical coatings. \nTa-C also has promising applications in micro-electromechanical \ndevices (MEMS).\n\nRecently, nanostructured amorphous carbon (na-C) has attracted \nattention \\cite{Sabra98,Banhart99}. It is a hybrid form of carbon in which \nnanocrystallites are embedded in the a-C matrix. These range from \nnanodiamonds \\cite{Lifshitz}, to open graphene structures \nwith negative curvature (schwarzites) \\cite{Vanderbilt,Donadio},\nto carbyne films (composed of $sp^1$ chainlike structures) \n\\cite{Barborini,Ravagnan}. The variety of embedded nanostructures, \nand the great number of possible configurations in the matrix, open\nnew ways to explore the physics, tailor the mechanical and electronic \nproperties, and extend the applications of a-C.\n\nFrom a fundamental point of view, the most important characteristic\nof na-C is its inhomogeneous nature, characterized by large gradients of \ndensity and coordination through the system. The challenge for the theorist,\ntherefore, is to provide a global description of the composite\nmaterial, extending and\/or generalizing concepts and trends which\napply to the better understood single-phase system. \n\nHere, we present recent work \\cite{Fyta03,Mathiou04} aiming at such\na theoretical description. It is based on tight-binding molecular\ndynamics (TBMD) and Monte Carlo (MC) simulations. We first review\nwork on single-phase a-C pertaining to its structure, stress\nstate, and to physical trends followed by the $sp^3$ fraction and\nelastic moduli as a function of density and mean coordination. We then\ndiscuss na-C, focusing on diamond nanocomposite films. Emphasis is placed \non their structure, stability, stress state, and hardness. 
\nOne of the important findings is that the hardness of nanocomposite \nfilms is considerably higher than that of single-phase a-C films.\n\n\\section{METHODOLOGY}\n\\label{sec:method}\n\nIn much of the work presented here, we treat the interatomic \ninteractions in a-C networks using the tight-binding (TB) method. \nThis bridges the gap between classical and first-principles \ncalculations. It is more accurate and transferable\nthan empirical schemes, providing a quantum-mechanical \ndescription of the interactions, and it yields greater\nstatistical precision than {\\it ab initio} methods, allowing the use \nof larger cells.\n\nWe use the environment-dependent tight-binding\n(EDTB) model of Tang, Wang, Chan, and Ho \\cite{TWCH}. This model\ngoes beyond the traditional two-center approximation and allows the\nTB parameters to change according to the bonding environment. In this\nrespect, it is a considerable improvement over the\nprevious two-center model of Xu, Wang, Chan, and Ho \\cite{XWCH}. Both\naccuracy and transferability are improved, as shown from recent\nsuccessful applications \\cite{Galli1}.\n\nThe TBMD simulations are carried out in the canonical ($N,V,T$) ensemble.\n$T$ is controlled {\\it via} a stochastic temperature control \nalgorithm. The a-C networks are generated by quenching from\nthe melt. Although not directly related to the kinetics of the growth \nprocess of a-C films, this procedure produces generic structures associated with \nthe equilibrium state of the films. (See relevant discussion \nin Ref. \\cite{Kel00}.)\nCubic computational cells of 216 and 512 atoms with periodic boundary \nconditions are used. Quenching at different durations and rates \nwas performed to check the effect on the properties. \nThe longest run was for 52 ps and the rate was 226 K\/ps. \nTwo other runs lasted for 26 and 12 ps, at 226 K\/ps and 500 K\/ps, \nrespectively. No significant changes were found.\nDuring quenching, the volume\/density of the \ncells was kept constant. After quenching, the density was\nallowed to relax by changing homogeneously the dimensions of the cells\nwithin small increments and seeking energy minimization. The minimum \nenergy and the corresponding density and bulk modulus of each cell were \ndetermined by fitting the energy-versus-volume data to Murnaghan's \nequation of state \\cite{Murnaghan}.\n\nDiamond nanocomposite cells are generated by melting and subsequently\nquenching a diamond structure, while keeping a certain number of atoms\nin the central portion of the cell frozen in their ideal crystal \npositions. After quenching, which produces amorphization of the\nsurrounding matrix, the cells are thoroughly relaxed with respect to\natom positions and density. Cells with varying coordination (density) \nof the amorphous matrix can be formed by changing the initial starting \ndensity (volume) of the diamond structure. The size (radius) of the \nnanocrystals is controlled by the choice of the number of the shells kept \nfrozen during quenching. Their shape is spherical. The nanocomposite\ncells produced by TBMD contain 512 atoms.\n\nThe TBMD simulations are supplemented by MC simulations using larger\ncells of 4096 atoms to make certain investigations, such as the\nstability of nanocomposite films as a function of the density\nof the amorphous matrix and the stress analysis, tractable. The\nTersoff potential is used \\cite{Terspot}. \n\n
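As an illustration of this fitting step, the following is a minimal sketch in Python; the energy-versus-volume values are placeholder numbers rather than data from our simulations, and the fitted function is the standard energy form of Murnaghan's equation of state.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef murnaghan(V, E0, V0, B0, B0p):\n    # Standard energy form of the Murnaghan equation of state.\n    return (E0 + B0 * V / B0p * ((V0 / V)**B0p / (B0p - 1.0) + 1.0)\n            - B0 * V0 / (B0p - 1.0))\n\n# Placeholder energy-versus-volume data (per atom).\nV = np.array([10.8, 11.0, 11.2, 11.4, 11.6])\nE = np.array([-7.278, -7.295, -7.300, -7.294, -7.282])\n\npopt, _ = curve_fit(murnaghan, V, E, p0=[E.min(), V[np.argmin(E)], 2.0, 4.0])\nE0, V0, B0, B0p = popt\nprint(V0, B0)   # equilibrium volume and bulk modulus (in the units of E and V)\n\\end{verbatim}\nThe equilibrium density follows from the fitted volume per atom, and the bulk modulus is converted to GPa from the energy and volume units used.\n\n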
In addition, we examine the properties of the WWW generic\nmodel \\cite{Wooten,Djord1}. This is a hypothetical model of \n``amorphous diamond'', completely tetrahedral,\nconstructed from the diamond lattice by a bond-switching mechanism.\nWe relaxed its topology and density with the\nEDTB model. The WWW model, although hypothetical, is very useful \nbecause it provides an upper bound to the density, $sp^3$ fraction, \nand bulk modulus of single-phase a-C. Its properties can then \nbe compared to the respective ones from diamond nanocomposites.\n\n\\section{RESULTS AND DISCUSSION}\n\n\\subsection{Single-phase a-C}\n\nWe first discuss the pure a-C phase, reviewing past and recent\nwork. To examine its microstructure, let us look at two \nrepresentative examples among the sequence of structures characterized\nby different density and $sp^3$ fraction: one for the dense ta-C phase \nand the other for a low-density phase. Their networks are portrayed in \nFig. 1. \n\nThe ta-C network in panel (a) has a density of 2.99\ngcm$^{-3}$; it shows a clear predominance of $sp^3$ bonding (79\\%), \nand it reveals that the $sp^2$ sites are largely \nclustered \\cite{Frau93,Drabold,Marks96}. Clustering is present in the\nform of olefinic, chainlike geometries. The $sp^2$ chains are isolated\nand do not link (percolate) to a single spanning cluster, in agreement\nwith {\\it ab initio} work \\cite{Marks96}. The driving force behind \nthe clustering effect is stress relief. Earlier work \\cite{Kel00},\naddressing the issue of local rigidity in ta-C, showed \nthat clustering contributes stress relief and rigidity to the \nnetwork. This depends on the degree of clustering. The larger the \ncluster, the higher the stress relaxation and the contribution to \nrigidity in the network.\n\nThe low-density network in panel (b) has a density of 1.20 gcm$^{-3}$\nand contains only 1\\% of $sp^3$ sites. It has an open structure with \nlong chains and large rings, and with numerous $sp^1$ sites (33\\%). \nThis network should be typical of cluster-assembled carbon \nfilms \\cite{Barborini,Ravagnan} with an amorphous $sp^2$ character \nand a sizeable carbyne ($sp^1$ chains) component. Such films have\nattracted attention for various applications, including field emission, \ncatalysis, and gas absorption. \n\nAnalysis of the ring statistics in the ta-C networks reveals the\nexistence of three- and four-membered rings. (The shortest-path criterion \nof Franzblau \\cite{Franzblau} was used to define the ring sizes.)\nThis is the first tight-binding model which predicts three-membered\nrings in ta-C, in agreement with {\\it ab initio} MD \nsimulations \\cite{Marks96} using the Car-Parrinello method.\nThe five-membered rings are slightly more numerous than the six-membered\nrings and significantly more numerous than the seven-membered ones.\n\nThe issue of intrinsic stress and its association with $sp^3$ bonding\nin ta-C films has been strongly debated over the\nyears. We have now reached a rather clear picture of this issue.\nWe summarize here the important points. Work by McKenzie and \nco-workers \\cite{McKenz1} proposed that the compressive stress in ta-C\nis produced by the energy of ion bombardment in the deposition process, \nwhich gives rise to local compression, accompanied by the shallow \nimplantation of incoming atoms. 
Their model considers the compressive \nstress as the causative factor for the formation of sp$^{3}$ sites \nand supports the idea of a transition from an sp$^{2}$-rich to an \nsp$^{3}$-rich phase at a critical value of the average compressive stress \n(about 4-5 GPa) which stabilizes the sp$^{3}$ bonding.\n\nWhile this scenario can not be ruled out for as-grown films, it fails to\ndescribe the stress state of post-growth annealed\/relaxed films. It has\nnow become apparent, after a series of theoretical works by \nKelires \\cite{Kel00,Kel94,Kel01Phy} and thermal annealing experiments\n\\cite{Friedmann,Ferrari,Kalish,Alam}, that the intrinsic stress is\nnot a crucial factor for the stabilization of sp$^{3}$ bonding.\nA critical {\\it average} compressive stress necessary to sustain a high\nfraction of sp$^{3}$ sites, as required by the McKenzie model,\ndoes not exist. This conclusion is borne out of the {\\it local atomic stress}\nmodel of Kelires \\cite{Kel94}, which proposes that the average intrinsic\nstress of relaxed ta-C films can be zero, while stress at the atomic level\ncan be finite and substantial. It further says that the favored stress \nstate of sp$^{3}$ sites is compression, while that of sp$^{2}$ sites\nis tension, the latter playing the role of relieving stress in the\nnetwork. \n\nAccording to the local stress model, the as-grown, highly strained and\nsp$^{3}$-rich ta-C films are in a metastable state with respect to the \nrelaxed, stress-free and still sp$^{3}$-rich ``quasi-equilibrium'' \nta-C structures. (True equilibrium structures are the \ngraphite-like sp$^{2}$-rich films.) The as-grown films\npossess high intrinsic stress because the stressed non-equilibrium local\nstructures are frozen-in during deposition, but the network at the\nlow deposition temperatures does not acquire enough energy, or it is\nvery slow at typical times in the laboratory, in order to overcome\nthe potential barrier between the two states and relax the excessive\nstress. Post-growth thermal annealing at moderate $T$'s has proved to \nbe a very efficient mechanism for providing the necessary\nenergy in ta-C films to reach their quasi-equilibrium, stress-free state.\nThe stress relief can be achieved with minimal structural \nmodifications \\cite{Ferrari,Sullivan}, without reducing the sp$^{3}$\nfraction. However, further annealing above $\\sim$ 1200 K transforms \nta-C into the graphite-like sp$^{2}$-rich phase.\n\nAnother issue which is still unclear regards the variation of $sp^3$ \nfraction or, equivalently, of mean coordination $\\bar{z}$, \nwith density. The basic question underlying this issue is whether there \nis a linear relationship between these two quantities.\nWe have recently carried out an extensive investigation \\cite{Mathiou04}\nof this issue through the entire range of densities relevant to a-C,\nusing the TBMD method. Several networks have been generated, at\nvarious quenching rates, providing sufficient statistics to reach\na definite conclusion. We also compare to the WWW network relaxed with \nthe EDTB model.\n\nThe variation of $sp^3$ fraction with density is shown in Fig. 2. \n(Hybrid fractions are extracted by counting neighbors\nwithin and up to the first minimum of the pair distribution \nfunctions, not shown.) Without any doubt, the variation\nis linear through the entire range of possible densities.\nA linear fit to the points gives\n\\begin{equation}\n\\rho (\\rm{g\/cm}^{3}) = 1.27 + 2.08\\ (sp^{3} \\rm{fraction}).\n\\label{den-sp3}\n\\end{equation}\nEq. 
(\\ref{den-sp3}) predicts the minimum density required to\nsustain $sp^3$ bonding in a-C to be $\\sim$ 1.3 gcm$^{-3}$. \nThe $sp^3$ sites are needed in such low-density networks as linking \ngeometries between the main $sp^2$ and $sp^1$ components. For 100\\%\n$sp^3$ bonding, the corresponding density is 3.35 gcm$^{-3}$.\nThis is slightly higher than the density of the WWW network, but still \nless than diamond's by $\\sim$ 3\\%. We conclude that this is the upper \nlimit in the possible densities of ta-C. The highest densities for ta-C \nreported until now by experiment are less than 3.3 gcm$^{-3}$.\n\nUnfortunately, experimental results do not provide a\nclear picture of this issue. Different growth and characterization\ntechniques give sets of data which show a linear variation within\nthe respective set, but not when viewed all together. (A thorough discussion\nof this point is given in Ref.\\ \\cite{Mathiou04}.) Very good agreement \nbetween theory and experiment holds for the ta-C region, i.e., for\ndensities higher than $\\sim$ 2.8 gcm$^{-3}$. At lower densities,\nexperimental points scatter from method to method. For example,\ndata extracted from samples prepared by filtered\ncathodic vacuum arc (FCVA) deposition \\cite{Fallon,FerrPRB00}\ndiffer from data extracted from samples prepared by\nmagnetron sputtering (MS) \\cite{Schwan97}. \nA linear fit over the FCVA data was carried out \nby Ferrari {\\it et al.} \\cite{FerrPRB00}, yielding\n$\\rho$ (g\/cm$^{3})$ = 1.92 + 1.37 ($sp^{3}$ fraction). This gives a\ndensity of $\\sim$ 3.3 gcm$^{-3}$ for 100\\% $sp^{3}$ content, in good\nagreement with our upper limiting value, but the lower limit at \n1.92 gcm$^{-3}$ is higher than ours, suggesting that $sp^{3}$ hybrids \nare absent in networks with lower densities. This can not explain reports \nof $sp^{3}$ sites in low-density carbyne films \\cite{Barborini,Ravagnan}.\nNote, however, that there are uncertainties in the measurements, usually \nby EELS, of the $sp^{3}$ content in such films.\n\nAn equally interesting physical trend in a-C is the\nvariation of elastic moduli as a function of mean coordination.\nWe seek to find simple formulas able to predict the \nhardness and related properties for any given network, over the entire\nrange of densities.\n\nThorpe and collaborators \\cite{Thorpe85,Djord2} suggested that\nthe elastic moduli of bond-depleted crystalline diamond lattices\nand of bond-depleted ``amorphous diamond'' networks (WWW model)\nfollow a power-law behavior $c \\sim (\\bar{z} - \\bar{z}_{f})^{\\nu}$,\nwith the exponent taking the value 1.5 $\\pm$ 0.2. This mean-field\nequation is characteristic of percolation theory, and describes\nthe contributions to rigidity from the local components of the\nsystem as they connect to each other. The critical\ncoordination $\\bar{z}_f$ = 2.4, denotes the transition from rigid to\nfloppy behavior, and comes out of the constraint-counting model of\nPhillips \\cite{Philips79} and Thorpe \\cite{Thorpe83}.\n\nWe examined whether more realistic a-C networks can be described \nby the constraint-counting model, and if their moduli exhibit a power-law\nbehavior. For this, we used the cells generated by TBMD simulations\nand the EDTB model. As a representative quantity, we calculated\nthe equilibrium bulk modulus $B_{eq}$. The results for $B_{eq}$ for\nseveral networks as a function of $\\bar{z}$ are given in Fig. 3.\nAlso included in this figure is the computed $B_{eq}$ for diamond (428 GPa) \nand for the WWW model (361 GPa). 
The latter value coincides with\nthat calculated with the Tersoff potential for \nWWW \\cite{Kel00,Kel94,Kel01Diam}. \nThe computed data can be fitted to the power-law relation\n\\begin{equation}\nB_{eq} = B_{0}\\ \\left(\\frac{\\bar{z} - \\bar{z}_{f}}{\\bar{z}_{0} - \n\\bar{z}_{f}}\\right)^{\\nu},\n\\label{modul1}\n\\end{equation}\nwhere $B_{0}$ is the bulk modulus of the fully tetrahedral \namorphous network, for which $\\bar{z}_{0}$ = 4.0. Letting all \nfitting parameters in Eq. (\\ref{modul1}) free, we obtain $B_{0}$ = 361 GPa, \nwhich is exactly the computed value for WWW, $\\bar{z}_f$ = 2.25, \nand $\\nu$ = 1.6. (For a measure of the quality of the fit: \n$R^2$ = 0.9907). If we fix $\\nu$ to be 1.5 ($R^2$ = 0.9906), \nwe get 2.33 for $\\bar{z}_f$, and if we fix $\\bar{z}_f$ to be 2.4 \n($R^2$ = 0.9904), we get 1.4 for $\\nu$. We thus conclude that the \nvariation confirms the constraint-counting theory of Phillips and Thorpe, \nwith a critical coordination close to 2.4, and it has a power-law behavior \nwith a scaling exponent $\\nu = 1.5 \\pm 0.1$. For convenience, let us use\n$\\nu = 1.5$, so the modulus obeys the relation\n\\begin{equation}\nB_{eq} = 167.3\\ (\\bar{z} - 2.33)^{1.5}.\n\\label{modul2}\n\\end{equation}\nOur theory also predicts that ``amorphous diamond'' is softer\nthan diamond by $\\sim$ 10\\%.\n\nComparison of these results with experimental moduli derived from \nsurface acoustic waves \\cite{Schultrich} (SAW) and surface Brillouin \nscattering \\cite{FerrAPL99} (SBS) measurements is very good,\nespecially in the ta-C region. For example, the computed modulus for \n$\\bar{z} \\simeq$ 3.9 equals $\\sim$ 330 GPa and nearly\ncoincides with the SBS data. The agreement is less good at lower \ncoordinations, where a fit to experimental \npoints \\cite{Robertson02,Mathiou04} extrapolates to $\\bar{z}_f$ = 2.6, \nhigher than the constraint-counting prediction.\n\n\\subsection{Diamond nanocomposite films}\n\nDiamond nanocomposites consist of diamond nanocrystals embedded in\nan a-C matrix \\cite{Lifshitz}. They are produced by chemical\nvapor deposition (CVD) via a multistage process \\cite{Lifshitz},\nand they differ from pure nanodiamond films with no a-C\ncomponent \\cite{Gruen}. All diamond nanocomposite films reported\nuntil now contain a hydrogenated a-C matrix. Recently, nanodiamonds\nin pure a-C have been successfully grown \\cite{Shay}.\nTheir structure, either with or without H, is rather well known\nexperimentally, but their stability and most of their properties,\nincluding mechanical, are not yet understood.\n\nA first step towards a theoretical description of these films\nwas done recently in our group \\cite{Fyta03}. We summarize here the\nmost important findings of this investigation, based on MC \nsimulations with the Tersoff potential, and we also provide\nsupplementary new results from TBMD simulations using the EDTB model.\n\nA representative diamond nanocomposite network, generated by TBMD, is\nportrayed in Fig. 4. It shows a spherical diamond nanocrystal, \nwhose diameter is 12.5 \\AA, positioned in the\nmiddle of the cell and surrounded by the a-C matrix. Part of the\nimage cells, due to the periodic boundary conditions, are also shown.\nThis would correspond to an ideal case with a homogeneous dense dispersion\nof crystallites of equal size in the matrix, at regularly ordered\npositions. The nanodiamond volume fraction is 31\\%. The density of the\na-C matrix $\\rho_{am}$ is 3 gcm$^{-3}$ and its mean coordination \n$\\bar{z}_{am}$ is 3.8. 
The size of the diamond crystallite is smaller \nthan seen experimentally, but the overall structure captures the \nessential features of CVD grown nanocomposite films, especially the\nnon-hydrogenated ones.\n\nA crucial issue is the stability of the diamonds as a function of\nthe coordination\/density of the embedding medium. This extensive \ninvestigation required the analysis of many composite structures\nand it was done through MC simulations using larger\ncells (4096 atoms) \\cite{Fyta03}. The quantity of interest is the\nformation energy of a nanocrystal $E_{form}$, which can describe\nthe interaction of the embedded configuration with the host. It is\ndefined as \n\\begin{equation}\nE_{form} = E_{total} - N_{a}E_{a} - N_{c}E_{c},\n\\label{form}\n\\end{equation}\nwhere $E_{total}$ is the total cohesive energy of the composite system\n(amorphous matrix plus nanocrystal), calculated directly from the\nsimulation, $E_{c}$ is the cohesive energy per atom of the respective \ncrystalline phase, $N_{c}$ is the number\nof atoms in the nanocrystal, $N_{a}$ is the number of atoms in the\namorphous matrix, and $E_{a}$ is the cohesive energy per atom of the\npure, undistorted amorphous phase (without the nanocrystal) with\ncoordination $\\bar{z}_{am}$. A negative value of $E_{form}$ denotes stability\nof the nanostructure, a positive value indicates metastability or\ninstability.\n\nThe variation of $E_{form}$ as a function of $\\bar{z}_{am}$ for a diamond \nwith a fixed size embedded in several matrices is shown in Fig. 5. \n$E_{a}$ was computed from a series of calculations on pure a-C cells.\n(For details see Ref.\\ \\cite{Fyta03}.) The most striking result of this\nanalysis is that diamonds are stable in matrices with $\\bar{z}_{am}$ higher\nthan 3.6 ($\\rho_{am} \\simeq$ 2.6 gcm$^{-3}$), and unstable, or metastable\ndepending on temperature, in matrices with lower densities.\nThis nicely explains experimental results from different laboratories\nindicating that diamond nanocrystals precipitate in a dense a-C\nmatrix \\cite{Lifshitz,Shay}.\n\nOne way of checking the stability of nanocrystals is to subject them to \nthermal annealing. A stable structure should be sustained in the \namorphous matrix, while a metastable structure should shrink in favor of \nthe host. Indeed, analysis of the structure of diamonds annealed at high\n$T$ (1500 - 2000 K) reveals \\cite{Fyta03} that metastable nanocrystals\nbecome heavily deformed in the outer regions near the interface \nwith the amorphous matrix. Since only a small core remains intact, this\nmeans that the diamonds extensively shrink. On the other hand,\nthe stable nanodiamonds are only slightly deformed and retain their \ntetrahedral geometry.\n\nIn addition, a stable nanodiamond has, in principle, the potential to \nexpand against the surrounding matrix, provided that the barriers for \nthis transformation can be overcome, possibly by further annealing\nor ion irradiation. This means that nucleation\nof diamond cores in a dense matrix might lead, under the appropriate \nexperimental conditions, to a fully developed nanostructured\nmaterial with large grains.\n\nThe observation that diamonds are stable only in dense matrices\nsuggests a quantitative definition of ta-C, vaguely referred to \nas the form of a-C with a high fraction of sp$^3$ bonding. \nWe can define ta-C as the form of a-C with a fraction of sp$^3$ sites \nabove 60\\%, in which diamond nanocrystals are stable (see Fig. 5). 
\nIn other words, the predominantly tetrahedral amorphous network of ta-C \nis able to sustain crystalline inclusions. Networks with sp$^3$ fractions\nbelow 60\\% do not belong to the class of ta-C materials, because they\ncannot be transformed into a stable nanocrystalline state.\n\nThe intrinsic stress of the diamond nanocrystals and of the whole\ncomposite material is a crucial quantity. As for pure ta-C, the\naverage stress influences the adhesion properties of the films.\nThe stress within the nanodiamonds is indicative of their stability \nin a-C matrices. To examine these issues, we calculated\nthe stress fields in the nanocomposite cells, using as a probe the\ntool of atomic level stresses, as in the case of single-phase amorphous\ncarbon \\cite{Kel00,Kel94,Kel01Diam}. This gives us the ability to\nextract the stress built up in the nanodiamond and separate it from\nthe stress in the matrix, by summing up the atomic stresses over the\ndesired region. \n\nThe first important aspect of this analysis is that,\nin all cases studied, the average intrinsic stress\nin the fully relaxed composite material is less than 1 GPa, practically\nzero, even in the highly tetrahedral cases. This means, as in the case of\npure ta-C, that diamond nanocomposite films are able to eliminate any \ncompressive stress generated during growth, when brought into their \nequilibrium state, perhaps by moderate thermal annealing. \nThe stress in as-grown films has not yet been experimentally \nreported.\n\nThe other important finding is that the stress in the\nnanocrystal is always found to be tensile, while it is compressive\nin the matrix, yielding a net zero stress. This contrast can be\nexplained by noting that the density of the embedding medium is lower\nthan that of the diamond inclusions. As a result, atoms in the latter\nare forced to stretch their bonds in order to conform with the lower\ndensity of the environment. \n\nA typical example of the stress state and its variation in a nanodiamond \nis shown in Fig. 6. The nanocrystal has a diameter of 17 \\AA, and is \nembedded in a ta-C matrix with $\\bar{z}_{am}$ = 3.9. \nThe atomic stresses are averaged over spherical shells starting from \nthe center and moving towards the interface. Negative values denote \ntensile stresses \\cite{Kel94}. The stress in the core\nof the diamond is very small, since the effect of the medium is weak, \nbut it rises as we move outwards, especially near the interface.\nThis is logical. Atoms near the interface strongly feel the\ninfluence of the medium. However, the average tensile stress is low, \n$\\sim$ -1.5 GPa\/atom, because the density gradient between the nanodiamond \nand the matrix is small.\n\nObviously, the larger the density gradient, the higher the tension \nfelt by the inclusion. \nFor example, when $\\bar{z}_{am}$ = 3.84, the nanodiamond stress is \n-6.3 GPa\/atom, and when $\\bar{z}_{am}$ = 3.75, it becomes -9 GPa\/atom. \nThis trend explains the lowering of the relative \nstability of diamonds as $\\bar{z}_{am}$ gets \nsmaller (Fig. 5). Tension substantially increases at the outer regions \nof the nanodiamond, leading to deformation and eventually to amorphization \nand shrinking. This is remarkably evident for nanodiamonds in the region \nof metastability, where the intrinsic tensile stress becomes huge. \nFor example, for a matrix with $\\bar{z}_{am}$ = 3.3, the average stress \nin a typical nanodiamond is -30 GPa\/atom.\n\nFinally, we briefly comment on the hardness of these nanocomposite\nmaterials. 
Their mechanical properties are not yet measured \nexperimentally. We have preliminary results of calculations of the\nbulk modulus of diamond nanocomposites, generated by TBMD\/EDTB\nsimulations, a typical example of which is shown in Fig. 4.\nWe find moduli which are considerably higher than moduli of single-phase \nfilms of the same density, as calculated using Eq. (\\ref{modul2}).\nFor example, for a nanocomposite with a total $\\bar{z} \\simeq$ 3.75 \nand a density of $\\sim$ 2.85 gcm$^{-3}$, the modulus approaches 350 GPa.\nEq. (\\ref{modul2}) predicts $\\sim$ 280 GPa for a pure a-C network\nhaving the same $\\bar{z}$. This represents a drastic 25\\% increase\nin strength, and opens up the possibility for even harder ta-C\nmaterials for coatings and MEMS applications. A comprehensive account \nof the elastic properties of diamond nanocomposites will be given elsewhere.\n\n\\section{CONCLUSIONS}\n\nResults from TBMD and MC simulations of pure a-C and diamond nanocomposite \nnetworks are presented. Definite trends in a-C regarding\nthe variation of the $sp^3$ fraction and the bulk moduli as a function \nof coordination\/density are shown to firmly hold. Nanodiamonds are\nstable only in dense ta-C matrices. The nanocomposite films are harder \nthan pure a-C films of the same density. They possess zero intrinsic\nstress when they are fully relaxed.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\n\nHumans are connected in numerous ways, and our many types of interactions with each other influence what we believe and how we act. \nTo model how opinions spread between people or other agents, researchers across many disciplines have developed a variety of models of opinion dynamics \\cite{castellano2009, sune-yy2018, noorazar-et-al2020, noorazar2020, peralta2022, galesic2021}.\nHowever, in part because of the difficulty of gathering empirical data on opinions, much of the work on opinion dynamics has focused on theory and model development, with little empirical validation \\cite{castellano2009, galesic2021, peralta2022} \n\\footnote{Chacoma and Zanette \\cite{chacoma2015} and Vande Kerckhove et al.~\\cite{vandeKerckhove2016} conducted multiple-round experiments in which they asked participants for their opinions and confidence levels on various quantitative questions. They examined the evolution of their answers and compared those to the results of opinion-dynamics models.\nThese experiments have several limitations, including the potential sensitivity of the models to measurement errors \\cite{carpentras2021}.}.\nEven with these difficulties, mechanistic modeling is valuable; it forces researchers to clearly define relationships and assumptions when developing a model, \nand it provides a framework to explore and generate testable hypotheses about complex social phenomena \\cite{holme2015}. \n\nIn an agent-based model (ABM) of opinion dynamics, each agent is endowed with an opinion and an underlying network structure governs which agents can interact with each other. \nWe assume that all interactions are dyadic (i.e., between exactly two agents). We suppose that the opinions take continuous values in a closed interval on the real line.\nThis interval represents a continuous spectrum of agreement for a single belief, such as the strength of support for a political candidate or ideology. 
\nAt each discrete time step of an ABM, a selection procedure determines which agents interact and then an update rule determines if and how their opinions change. \nBounded-confidence models (BCMs) are a popular class of continuous-opinion models \\cite{noorazar-et-al2020}.\nIn a BCM, interacting agents influence each other only when their opinions are sufficiently similar. This mechanism comes from the psychological idea of selective exposure, which asserts that people tend to seek out information or conversations that support their existing views and avoid those that challenge their views~\\cite{selective_exposure_def}. \nUnder this assumption, an agent's views are influenced directly only by agents with sufficiently similar views. For example, online platforms include polarizing posts, but individuals can choose whether or not to engage with such content and do not adopt the views of everything in their social-media feeds.\n\nThe two most popular BCMs are the Hegselmann--Krause (HK) model \\cite{hegselmann_krause2002} and the Deffuant--Weisbuch (DW) model \\cite{deffuant2000}. \nIn each time step, the HK model has synchronous updates of node opinions, whereas the DW model has asynchronous opinion updates, with a single pair of agents (i.e., a dyad) interacting and potentially updating their opinions at each time. \nAn asynchronous mechanism is consistent with empirical studies, which suggest that individuals in social networks have different times and frequencies of activity \\cite{alizadeh2015}. \nIn the present paper, we generalize the DW model to incorporate different activity levels and sociability of nodes. \nAlthough many heterogeneities and modifications have been incorporated into DW models \\cite{noorazar2020}, to the best of our knowledge, few studies have modified the agent-selection procedure.\nThe ones that have done so (see, e.g., Refs.~\\cite{alizadeh2015, zhang2018, sirbu2019, pansanella2022}) focused on specific scenarios, rather than on investigating baseline effects of heterogeneity in agent-selection probabilities.\nBefore describing previous research on heterogeneous selection of agents in the DW model, we first discuss other generalizations of the model.\nSome studies have drawn the initial opinions of nodes from nonuniform distributions \\cite{jacobmeier2006, carro2013, sobkowicz2015} and have thereby considered differential initial conditions from those in the standard DW model. \nOther investigations have incorporated heterogeneous confidence radii \\cite{dw2002, deffuant-amblard2002, lorenz2008, kou2012, sobkowicz2015, chen2020} and heterogeneous compromises \\cite{dw2002, deffuant-amblard2002, zhang2014, huang2018}. Such generalizations affect the opinion updates of interacting agents.\nOther studies of the effect of network structure on opinion dynamics have examined DW models on time-independent graphs \\cite{meng2018}, hypergraphs \\cite{hickok2022}, and coevolving networks \\cite{unchitta2021}.\nThe standard DW model selects pairs of agents to interact uniformly at random, but social interactions are not uniform in real life.\nHowever, few studies of the DW model have modified the selection procedure that determine which agents interact with each other.\nExamples of such studies include Refs.~\\cite{alizadeh2015, zhang2018, sirbu2019, pansanella2022}.\n\nOne can think of agents that are selected not uniformly at random as having different activity levels that encode the number of interactions in a given time interval. 
Such ideas have also been employed in activity-driven models of temporal networks \\cite{perra2012}.\nThere have also been studies of activity-driven models of opinion dynamics.\nLi et al.~\\cite{li2017} developed an activity-driven model of opinion dynamics using networks with fixed nodes with assigned activity rates (i.e., assigned activation probabilities).\nAt each time step of their model, all existing edges are removed and activated agents randomly form a fixed number of connections. All agents then evaluate the \nmean opinion of their neighbors to determine if and how to update their own opinion \\cite{li2017}.\nBaronchelli et al.~\\cite{baronchelli2011} studied a voter model with edge activity.\nZhang et al. \\cite{zhang2018} incorporated heterogeneous node activities into a DW model to study social-media networks. As we will discuss shortly, our inspiration is similar to theirs, but we make fundamentally different choices in how we generalize the DW model.\n\nIn social networks, some individuals share their ideas and opinions more frequently than others.\nAlizadeh and Cioffi-Revilla \\cite{alizadeh2015} studied a modified DW model that incorporates a repulsion mechanism, which was proposed initially by Huet et al.~\\cite{huet2008}, in which interacting agents with opinions that differ by more than a cognitive-dissonance threshold move farther away from each other in the space of opinions when they interact.\nThey used 2-dimensional (2D) vector-valued opinions and placed their nodes on complete graphs.\nTo model agents with different activity levels, Alizadeh and Cioffi-Revilla \\cite{alizadeh2015} implemented a Poisson node-selection probability, which one can interpret as independent internal ``clocks'' that determine agent activation.\nIn comparison to selecting agent pairs uniformly at random (as in the standard DW model) the Poisson node-selection probability can either lessen or promote the spread of extremist opinions, depending on which opinions are more prevalent in more-active agents.\n\nZhang et al.~\\cite{zhang2018} studied a modified DW model with asymmetric updates on activity-driven networks. In their model, each node has a fixed activity potential, which they assign uniformly at random from a distribution of activity potentials. The activity potential of an agent is its probability of activating. At each discrete time step, each active agent $i$ randomly either (1) creates a message (e.g., a social-media post) or (2) forwards a message that was created by a neighboring agent $j$. If agent $i$ forwards a message from agent $j$, then $i$ updates its opinion using the standard DW update mechanism.\nZhang et al.~\\cite{zhang2018} simulated their model on a social network from Tencent Weibo\n(\\begin{CJK*}{UTF8}{gbsn}\u817e\u8baf\u5fae\u535a\\end{CJK*})\nand found that the distribution of activity potentials influences the location of the transition between opinion consensus and fragmentation.\nThe node-weights in our BCM are similar in spirit to the the activity potentials of Zhang et al.~\\cite{zhang2018}, and both can encode the social activity levels of individuals such as frequency of posting or commenting on social media. \nHowever, the way we incorporate our node weights in our BCM fundamentally differs from Ref.~\\cite{zhang2018}.\nWe consider a time-independent network $G$ and at each time step select a single pair of neighboring agents for interaction. 
We randomly select a first agent and then a second neighboring agent with probabilities proportional to their node weights. Both selected agents update their opinions using the DW update mechanism.\n\nIn addition to individuals having different activity levels in social networks, some pairwise interactions are also more likely than others. \nSocial-media feeds tend to curate content that is based on the concept of homophily, which is the idea that people have a tendency to connect with people who are similar to themselves or have similar ideas or beliefs \\cite{mcpherson2001}.\nFor example, social-media feeds tend to show content to a user that closely matches their profile and past activity \\cite{spohr2017}. \nTo examine the effect of such algorithmic bias on opinion dynamics, S\\^{i}rbu et al.~\\cite{sirbu2019} studied a modified DW model that includes a homophily-promoting activation mechanism. \nAt each time step, a first agent is selected uniformly at random, and then one of its neighbors is selected with a probability that depends on the magnitude of the opinion difference between that neighbor and the first agent.\nThe simulations by S\\^{i}rbu et al. of this model on complete graphs suggest that more algorithmic bias yields slower convergence times and more opinion fragmentation \\cite{sirbu2019}. \nPansanella et al.~\\cite{pansanella2022} applied the same algorithmic-bias model to a variety of network topologies (specifically Erd\\H{o}s--R\\'{e}nyi, Barab\\'{a}si--Albert, and Lancichinetti--Fortunato--Radicchi (LFR) graphs), and they found similar trends as S\\^{i}rbu et al. did on complete graphs.\n\nFrom the investigations in Refs.~\\cite{alizadeh2015, zhang2018, sirbu2019, pansanella2022}, \nwe know that incorporating heterogeneous node-selection probabilities into a DW model can influence opinion dynamics.\nEach of these papers examined a specific implementation of heterogeneous agent selection; we are not aware of any systematic investigations of the effects of heterogeneous agent selection.\nIn the present paper, we propose a novel BCM with heterogeneous agent-selection probabilities, which we implement using node weights.\nIn general terms, we are studying a dynamical process on node-weighted networks.\nWe use node weights to model agents with different probabilities of interacting. These probabilities can encode heterogeneities in individual behavior, such as in sociability and activity levels.\nWe conduct a methodical investigation of the effects of incorporating heterogeneous node weights, which we draw from various distributions, into our generalization of the DW model. We compare these effects on a variety of types of networks. \nIn our study, we consider fixed node weights that we assign in a way that disregards network structure and node opinions. However, one can readily can adapt the node weights in our BCM to capture a variety of sociological scenarios in which nodes have heterogeneous selection probabilities.\nWe find that introducing heterogeneous node weights into the DW model results in longer convergence times and more opinion fragmentation than selecting nodes uniformly at random.\nOur results illustrate that it is important to consider the baseline influence of assigning node weights uniformly at random in implementations of heterogeneous node-selection patterns before drawing conclusions about more specific mechanisms such as algorithmic bias \\cite{sirbu2019}.\n\nOur paper proceeds as follows. 
In Sec.~\\ref{sec:model}, we describe the standard DW model and present our generalized DW model with node weights to incorporate heterogeneous agent-selection probabilities.\nIn Sec.~\\ref{sec:methods}, we discuss our implementation of our BCM, the networks and node-weight distributions that we examine, and the quantities that we compute to characterize the behavior of our model.\nIn Sec.~\\ref{sec:results}, we discuss the results from our numerical simulations of our BCM. \nIn Sec.~\\ref{sec:discussion}, we summarize our results and discuss their implications, present some ideas for future work, and discuss the importance of studying networks with node weights.\nOur code is available at \\url{https:\/\/gitlab.com\/gracejli1\/NodeWeightDW}.\n\n\n\n\\section{Model} \\label{sec:model} \n\nIn this section, we first discuss the Deffuant--Weisbuch (DW) \\cite{deffuant2000} bounded-confidence model (BCM) of opinion dynamics, and we then introduce our BCM with heterogeneous node-selection probabilities.\n\n\n\\subsection{The Standard Deffuant--Weisbuch (DW) BCM} \\label{sec:DW}\n\nThe DW model was introduced over two decades ago \\cite{deffuant2000}, and it and its generalizations have been studied extensively since then \\cite{noorazar-et-al2020, noorazar2020}. It was examined originally on complete graphs and encoded node opinions as scalar values in a closed interval of the real line. \nDeffuant et al.~\\cite{deffuant2000} let each node have an opinion in $[0,1]$, and we follow this convention. The standard DW model has two parameters. The ``confidence radius'' $c \\in [0,1]$ is a thresholding parameter; if the opinions of two agents (i.e., nodes) differ by more than $c$, then they do not interact. \nThe ``compromise parameter'' $m \\in (0, 0.5]$ (which is also sometimes called a convergence parameter \\cite{deffuant2000} or a cautiousness parameter \\cite{meng2018}) parametrizes the amount that an agent changes its opinion to compromise with the opinion of an agent with whom it interacts.\n\n\nIn the standard DW model, the opinions of the agents update in an asynchronous fashion. We endow each agent with an initial opinion. At each discrete time, one selects a pair of agents uniformly at random.\nAt time $t$, suppose that we pick agents $i$ and $j$, whose associated opinions are $x_i$ and $x_j$, respectively. Agents $i$ and $j$ update their opinions through the following equations:\n\\begin{align} \\label{eq:DW}\n\\begin{split}\n x_i(t+1) &= \n \\begin{cases}\n x_i(t) + m \\Delta_{ij}(t)\\,, & \\text{if } |\\Delta_{ij}(t)| < c \\\\\n x_i(t)\\,, & \\text{otherwise} \\,,\n \\end{cases} \\\\\n x_j(t+1) &= \n \\begin{cases}\n x_j(t) + m \\Delta_{ji}(t)\\,, & \\text{if } |\\Delta_{ij}(t)| < c \\\\\n x_j(t)\\,, & \\text{otherwise}\\,,\n \\end{cases}\n\\end{split}\n\\end{align}\nwhere $\\Delta_{ij}(t) = x_j(t) - x_i(t)$. With this sign convention, two agents that influence each other move toward each other's opinions. \n\n\nWhen one extends the DW model to consider an underlying network of agents \\cite{weisbuch2001}, only adjacent agents are allowed to interact. Each node in a network represents an agent, and each edge between two agents encodes a social or communication tie between them. \nAt each discrete time, one selects an edge of a given network uniformly at random and the two agents that are attached to the edge interact as in Eq.~\\eqref{eq:DW}. \nFor the DW model, which updates opinions asynchronously, an alternative to an edge-based approach of randomly selecting an interacting edge is to take a node-based approach to selecting an interacting pair.
(See {Ref.}~\\cite{kureh2020} for a discussion of node-based updates versus edge-based updates in the context of voter models.)\nIn a node-based approach, one randomly selects a first node and then randomly selects a second node from its neighbors. \nTo capture the effect of some agents having more frequent interactions (such as from greater sociability or a stronger desire to share their opinions), we implement such a node-based agent-selection procedure in our study. \nThe choice between edge-based and node-based agent selection can have substantial effects on the dynamics of voter models of opinion dynamics \\cite{kureh2020}, and we expect that this is also true for other types of opinion models. \nWe are not aware of a comparison of edge-based and node-based agent selection in asynchronous BCMs (and, in particular, in DW models), and it seems both interesting and relevant to explore this issue.\nMost past work on the DW model has considered edge-based selection \\cite{noorazar2020}. \nHowever, Refs.~\\cite{alizadeh2015, sirbu2019, pansanella2022} used a node-based selection procedure to model heterogeneous activities of agents.\n\n\n\\subsection{A BCM with Heterogeneous Node-Selection Probabilities} \\label{sec:BCM} \n\nWe now introduce our BCM with heterogeneous \nnode-selection probabilities.\nConsider an undirected network $G = (V, E)$, where $V$ is the set of nodes and $E$ is the set of edges between them. Suppose that $V$ has $N$ agents and that each agent $i$ holds a time-dependent opinion $x_i(t)$.\nEach agent also has a fixed node weight $w_i$ that encodes sociability, how frequently it engages in conversations, or simply the desire to share its opinions. \nOne can think of a node's weight as a quantification of how frequently it talks to its friends or posts on social media.\nThrough the incorporation of network structure, the standard DW model can capture agents with different numbers of friends (or other social connections).\nHowever, selecting interacting node pairs uniformly at random is unable to capture heterogeneous interaction frequencies of individuals.\nBy introducing node weights, we encode this heterogeneity\nand then examine how it affects opinion dynamics in a BCM.\nAlthough we employ fixed node weights, one can adapt our model to include time-dependent node weights, such as through purposeful strategies (e.g., posting on social media more frequently as one's opinions become more extreme).\n\n\nIn our node-weighted DW model, at each discrete time, we first select an agent $i$ with a probability that is proportional to its weight. Agent $i$ then interacts with a neighbor $j$, which we select with a probability that is equal to its weight divided by the sum of the weights of $i$'s neighbors.\nThat is, the probabilities of first selecting agent $i$ and then selecting agent $j$ are\n\\begin{align} \\label{eq:node-probablity}\n P_1(i) = \\frac{w_i}{\\sum\\limits_{k = 1}^N w_k} \\,, \\quad\n P_2(j|i) = \\frac{w_j}{\\sum\\limits_{k \\in \\mathcal{N}(i)} w_k}\\,, \n\\end{align}\nwhere $\\mathcal{N}(i)$ denotes the neighborhood (i.e., the set of neighbors) of node $i$. Once we select the pair of interacting agents, we update their opinions following the DW opinion update rule in Eq.~\\eqref{eq:DW}.\n\n\nOur BCM incorporates heterogeneous node-selection probabilities\nwith node weights that model phenomena such as the heterogeneous sociability of individuals. We give a brief computational sketch of a single interaction step of this model below.
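The following minimal sketch illustrates one interaction step of our node-weighted BCM. It is an illustration rather than the implementation in our code repository; it assumes Python with the NumPy and NetworkX libraries, and the function name \\texttt{dw\\_step} and its arguments are ours.

\\begin{verbatim}
import numpy as np
import networkx as nx

def dw_step(G, x, w, c, m, rng):
    """One interaction step of the node-weighted DW model.
    G: NetworkX graph with no isolated nodes; x: dict of opinions in [0, 1];
    w: dict of positive node weights; c: confidence radius; m: compromise
    parameter; rng: numpy.random.Generator."""
    nodes = list(G.nodes())
    # Select the first agent i with probability proportional to its weight.
    p1 = np.array([w[k] for k in nodes], dtype=float)
    i = nodes[rng.choice(len(nodes), p=p1 / p1.sum())]
    # Select a neighbor j of i with probability proportional to its weight
    # among the weights of i's neighbors.
    nbrs = list(G.neighbors(i))
    p2 = np.array([w[k] for k in nbrs], dtype=float)
    j = nbrs[rng.choice(len(nbrs), p=p2 / p2.sum())]
    # DW update: the agents compromise only if their opinions differ by
    # less than the confidence radius c.
    if abs(x[i] - x[j]) < c:
        xi, xj = x[i], x[j]
        x[i] = xi + m * (xj - xi)
        x[j] = xj + m * (xi - xj)

# Example usage on a small complete graph with constant node weights,
# which corresponds to our baseline DW model.
rng = np.random.default_rng(seed=0)
G = nx.complete_graph(100)
x = {i: rng.uniform(0.0, 1.0) for i in G.nodes()}
w = {i: 1.0 for i in G.nodes()}
for _ in range(10**5):
    dw_step(G, x, w, c=0.3, m=0.5, rng=rng)
\\end{verbatim}

Replacing the constant weights in this example by weights that one draws from a heterogeneous distribution yields the node-weighted dynamics that we study in the remainder of the paper.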
One can also use edge weights to model heterogeneous agent selection.\nThis variant can encode heterogeneous selection probabilities\nof pairwise (i.e., dyadic) interactions, instead of focusing on the selection of individuals.\nFor instance, in the dyadic interactions of a given individual, that individual may discuss their ideological views with a close friend more frequently than with a work colleague.\nOne can use edge weights to determine the probabilities of selecting each dyadic interaction in a BCM.\nAt each discrete time, one can select an edge with a probability that is proportional to its weight. We do not examine edge-based heterogeneous selection probabilities in the present paper, but it is worth exploring in BCMs.\n\n\\section{Methods and Simulation Details} \\label{sec:methods} \n\nIn this section, we discuss the network structures and node-weight distributions that we consider, the specifications of our numerical simulations, and the quantities that we compute to characterize the results of our simulations.\n\n\n\\subsection{Network Structures} \\label{sec:nets}\n\nWe now describe the details of the networks on which we simulate our node-weighted BCM. We summarize these networks in Table~\\ref{tab:networks}. \n\nWe first simulate our BCM on complete graphs as a baseline scenario that will allow us to examine how incorporating heterogeneous node-selection probabilities affects the opinion dynamics.\nAlthough DW models were introduced more than 20 years ago, it is still the case that complete graphs are the most common type of network on which to study them \\cite{noorazar-et-al2020}. \nTo examine finite-size effects from the networks, we consider complete graphs with 100--1000 nodes in increments of 100. For all other synthetic networks, we consider networks of size $N = 500$ nodes. \n\n\n\\newcommand{1.5in}{1.5in}\n\\newcommand{3.3in}{3.3in}\n\\newcommand{1.5in}{1.5in}\n\n\\begin{table*}\n\\centering\n\\caption{\\label{tab:networks} Summary of the networks on which we simulate our node-weighted BCM.}\n\\begin{ruledtabular}\n\\def\\arraystretch{1.1}\n\\begin{tabular}{m{1.5in} m{3.3in} m{1.5in}}\nNetwork & Description & Parameters \\\\\\hline\n$C(N)$ \n& \\begin{tabular}{m{3.3in}} \nComplete graph with $N$ nodes \\end{tabular}\n& \\begin{tabular}[c]{m{1.5in}}\n$N \\in \\{100, 200, \\ldots, 1000\\}$ \\end{tabular} \n\\\\\\hline\n$G(N,p)$ \n& \\begin{tabular}{m{3.3in}} Erd\\H{o}s--R\\'{e}nyi (ER) random-graph model with $N$ nodes and homogeneous, independent edge probability $p$ \\end{tabular}\n& \\begin{tabular}[c]{m{1.5in}} \n$\\, p \\in \\{0.1, 0.3, 0.5, 0.7\\}$ \\end{tabular} \n\\\\\\hline\nTwo-Community SBM\\footnotemark[1] \n& \\begin{tabular}{m{3.3in}} Stochastic block model with 2 $\\times$ 2 blocks. There is a larger probability of edges within the sets A and B than between the two sets; the block probabilities satisfy $P_{BB} > P_{AA} > P_{AB}$. \\end{tabular}\n& \\begin{tabular}[c]{m{1.5in}}$P_{AA} = 49.9\/374$ \\\\ \n$P_{BB} = 49.9\/124$ \\\\ $P_{AB} = 1\/500$ \\end{tabular}\n\\\\\\hline\nCore--Periphery SBM\\footnotemark[1] \n& \\begin{tabular}{m{3.3in}} Stochastic block model with 2 $\\times$ 2 blocks. Set A is a set of core nodes and set B is a set of peripheral nodes. The block probabilities satisfy $P_{AA} > P_{AB} > P_{BB}$. 
\\end{tabular} \n& \\begin{tabular}[c]{m{1.5in}}$P_{AA} = 147.9\/374$ \\\\ \n$P_{BB} = 1\/174$ \\\\ $P_{AB} = 1\/25$ \\end{tabular} \n\\\\\\hline\nCaltech Network\n& \\begin{tabular}{m{3.3in}} The largest connected component of the Facebook friendship network at Caltech on one day in fall 2005. This network, which is part of the {\\sc Facebook100} data set \\cite{red2011, traud2012}, has 762 nodes and 16,651 edges. \\end{tabular}\n& \n\\end{tabular}\n\\end{ruledtabular}\n\\footnotetext[1]{In our SBM networks, there are $N=500$ nodes. We partition the network into two sets of nodes; set A has 75\\% of the nodes, and set B has 25\\% of the nodes.}\n\\end{table*}\n\n\nWe then consider synthetic networks that we generate using\nthe $G(N, p)$ Erd\\H{o}s--R\\'{e}nyi (ER) random-graph model, where $p$ is the homogeneous, independent probability of an edge between each pair of nodes \\cite{newman2018}. When $p=1$, this yields a complete graph. We examine $G(500, p)$ graphs with $p \\in \\{0.1, 0.3, 0.5, 0.7\\}$.\n\n\nTo determine how a network with an underlying block structure affects the dynamics of our node-weighted BCM, we consider stochastic-block-model (SBM) networks \\cite{newman2018} with $2 \\times 2$ blocks, where each block consists of an ER graph.\nInspired by the choices of Kureh and Porter \\cite{kureh2020}, we consider two types of SBM networks. The first has a two-community structure, in which there is a larger probability of edges within a community than between communities. \nThe second SBM has a core--periphery structure, in which there is a set of core nodes with a large probability of connections within the set, a set of peripheral nodes with a small probability of connections within the set, and an intermediate probability of connections between core nodes and peripheral nodes. \nTo construct our $2 \\times 2$ SBMs, we partition a network into two sets of nodes; set A has 375 nodes (i.e., 75\\% of the network) and set B has 125 nodes (i.e., 25\\% of the network). We define a symmetric edge-probability matrix\n\\begin{equation}\n P = \\begin{bmatrix}\n P_{AA} & P_{AB} \\\\ P_{AB} & P_{BB}\n \\end{bmatrix} \\,,\n\\end{equation}\nwhere $P_{AA}$ and $P_{BB}$ are the probabilities of an edge between two nodes within set A and set B, respectively, and $P_{AB}$ is the probability of an edge between a node in set A and a node in set B.\n\n\nIn a two-community SBM, the probabilities $P_{AA}$ and $P_{BB}$ are larger than $P_{AB}$, so there is a larger probability of edges within a community than between communities.\nFor our two-community SBM, we choose $P_{AA}$ and $P_{BB}$ so that the expected mean degree matches that of the $G(500, 0.1)$ ER model if we only consider edges within set A or edges within set B. A network from the $G(N, p)$ model has an expected mean degree of $p(N-1)$ \\cite{newman2018}, so we want the two communities in these SBM networks to have an expected mean degree of $49.9 = 0.1 \\times 499$. We thus use edge probabilities $P_{AA} = 49.9\/374$ and $P_{BB} = 49.9\/124$. To ensure that there are few edges between the sets A and B, we choose $P_{AB} = 1\/500$.\n\n\nWe want our core--periphery SBM with core set A and periphery set B to satisfy $P_{AA} > P_{AB} > P_{BB}$.\nWe chose $P_{AA}$ so that the expected mean degree matches that of the $G(500, 0.3)$ model (i.e., it is 147.9) if we only consider edges within the set A. We thus choose the edge probability $P_{AA} = 147.9\/374$. 
To satisfy $P_{AA} > P_{AB} > P_{BB}$, we choose $P_{AB} = 1\/25$ and $P_{BB} = 1\/174$.\n\n\nFinally, we investigate our node-weighted BCM on a real social network from Facebook friendship data. We use the Caltech network from the {\\sc Facebook100} data set; its nodes encode individuals at Caltech, and its edges encode Facebook ``friendships'' on one day in fall 2005 \\cite{red2011, traud2012}. We only consider the network's largest connected component, which has 762 nodes and 16,651 edges.\n\n\n\\subsection{Node-Weight Distributions} \\label{sec:weights}\n\nIn Table~\\ref{tab:distributions}, we give the parameters and probability density functions of the node-weight distributions that we examine in our BCM. In this subsection, we discuss our choices of distributions.\n\n\n\\begin{table*}\n\\caption{\\label{tab:distributions} Names and specifications of our distributions of node weights. We show both the general mathematical expressions for the means and the specific values of the means for our parameter values.}\n\\def\\arraystretch{1.2}\n\\begin{ruledtabular}\n\\begin{tabular}{lclccl}\n\\textbf{Distribution} & \n\\textbf{\\begin{tabular}[c]{@{}c@{}}Probability Density \\\\ Function\\end{tabular}} & \n\\textbf{Parameter values} & \\textbf{Domain} & \\multicolumn{2}{c}{\\textbf{Mean}} \\\\ \\colrule\nConstant & $\\delta (x - 1)$ & N\/A & $\\{1\\}$ & 1 & 1 \\\\ \\colrule\nPareto-80-10 &\n \\multirow{3}{*}{$\\dfrac{\\alpha}{x^{\\alpha+1}}$} &\n $\\alpha = \\log_{4.5}(10)$ &\n \\multirow{3}{*}{$[1,\\infty)$} &\n \\multirow{3}{*}{$\\dfrac{\\alpha}{\\alpha-1}$} &\n 2.8836 \\\\\nPareto-80-20 & & $\\alpha = \\log_{4}(5)$ & & & 7.2126 \\\\\nPareto-90-10 & & $\\alpha = \\log_9 (10)$ & & & 21.8543 \\\\ \\colrule\nExp-80-10 &\n \\multirow{3}{*}{$\\frac{1}{\\beta} \\exp\\left(\\frac{-(x-1)}{\\beta}\\right)$} &\n $\\beta = 1.8836$ &\n \\multirow{3}{*}{$[1,\\infty)$} &\n \\multirow{3}{*}{$\\beta + 1$} &\n 2.8836 \\\\\nExp-80-20 & & $\\beta = 6.2125$ & & & 7.2125 \\\\\nExp-90-10 & & $\\beta = 20.8543$ & & & 21.8543 \\\\ \\colrule\nUnif-80-10 & \n \\multirow{3}{*}{$\\dfrac{1}{b-1}$} & \n $b = 4.7672$ & \n \\multirow{3}{*}{$[1, b]$} & \n \\multirow{3}{*}{$\\dfrac{1}{2} (1+b)$} & \n 2.8836 \\\\\nUnif-80-20 & & $b = 13.425$ & & & 7.2125 \\\\\nUnif-90-10 & & $b = 42.7086$ & & & 21.8543 \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\n\nTo study the effects of incorporating node weights \nin our BCM, we compare our model to a baseline DW model.\nTo ensure a fair comparison, we implement a baseline DW model that selects interacting agents uniformly at random using a node-based selection process as in our BCM. \nAs we discussed in Sec.~\\ref{sec:intro}, it is much more common to employ an edge-based selection process.\nWe refer to the case in which all node weights are equal to $1$ (that is, $w_i = 1$ for all nodes $i$) as the ``constant distribution''. The constant distribution (and any other situation in which all node weights equal the same positive number) results in a uniformly random selection of nodes for interaction. \nThis is what we call the ``baseline DW model''; we compare our DW models with heterogeneous node weights to this baseline model.\nWe reserve the term ``standard DW model'' for the DW model with uniformly random edge-based selection of agents.\n\n\nWe use the node weights in our BCM to model heterogeneities in interaction frequencies, such as when posting content online.\nThe majority of online content arises from a minority of user accounts \\cite{90-9-1}.
\nThe ``90-9-1 rule'' has been proposed for such participation inequality; \nin this rule of thumb, about 1\\% of the individuals in online discussions (e.g., on social-media platforms) account for most \ncontributions, about 9\\% of the individuals contribute on occasion,\nand the remaining 90\\% of the individuals are present online (e.g., they consume content) but do not contribute to it \\cite{90-9-1}. \nSimilar participation inequality has been documented in the numbers of posts on digital-health social networks \\cite{vanMierlo2014}, \nposts on internet support groups \\cite{carron2014}, and contributions to open-source software-development platforms \\cite{gasparini2020}.\nAdditionally, the number of tweets on particular topics has been modeled as following a power-law distribution \\cite{xiong2014}. \nIn 2019, a Pew Research Center survey found that about 10\\% of the accounts generate about 80\\% of the tweets on Twitter from the United States \\cite{twitter_stats}.\n\n\nOne can interpret the node weights in our BCM as encoding the participation of individuals in the form of contributing content to an online social network.\nWe model online participation inequality by using a Pareto distribution for the node weights. This choice of distribution is convenient because of its simple power-law form. \nIt has also been used to model inequality in a variety of other contexts, including distributions of wealth, word frequencies, website visits, and numbers of paper citations \\cite{newman2005}.\nWe are interested in modeling individuals who are active in a finite time interval. For example, when representing social-media interactions, we only care about accounts that make posts or comments; we ignore inactive accounts. Therefore, we impose a minimum node weight in our model.\nWe use the Pareto Type-I distribution, which is defined on $[1, \\infty)$, so each node has a minimum weight of $1$. This positive minimum weight yields a reasonable convergence time for the simulations of our BCM. \nIf nodes had weights close to $0$, they would have a very small probability of interacting, and this would prolong simulations.\n\n\nLet Pareto-X-Y denote the continuous Pareto distribution in which (in theory) X\\% of the total node weight is distributed among Y\\% of the nodes. \nIn practice, once we determine the $N$ node weights for our simulations from a Pareto node-weight distribution, it is not true that precisely X\\% of the total weight is held by Y\\% of the $N$ nodes. \nInspired by the results of the aforementioned Pew Research Center survey of Twitter users \\cite{twitter_stats}, we first consider a Pareto-80-10 distribution, in which we expect 80\\% of the total weight to be distributed among 10\\% of nodes.\nThe Pareto principle (which is also known as the 80-20 rule) is a popular rule of thumb that suggests that 20\\% of individuals have 80\\% of available wealth \\cite{newman2005}. Accordingly, we also consider a Pareto-80-20 distribution. \nFinally, as an example of a node-weight distribution with a more extreme inequality, we also consider a Pareto-90-10 distribution. \n\n\nWe also examine uniform and exponential distributions of node weights. To match the parameters of our Pareto distributions, we shift the uniform and exponential distributions so that their minimum node weight is also $1$. \nWe also choose their parameters to match the means of our Pareto distributions. Below, we give a short computational sketch of how one can sample node weights from these families.
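The following minimal sketch, which assumes Python with the NumPy library and is an illustration rather than the implementation in our code repository, samples $N$ node weights from a Pareto-80-10 distribution and from exponential and uniform distributions with the same mean and the same minimum weight of $1$. The parameter values follow Table~\\ref{tab:distributions}; one can substitute the other exponents from that table to obtain the corresponding 80-20 and 90-10 families.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)
N = 500  # number of nodes

# Pareto-80-10 weights: Pareto Type-I on [1, infinity) with exponent
# alpha = log_{4.5}(10).  NumPy's `pareto` samples the Lomax (shifted
# Pareto) distribution on [0, infinity), so we add 1 to each sample.
alpha = np.log(10.0) / np.log(4.5)
w_pareto = 1.0 + rng.pareto(alpha, size=N)

# Exponential weights with minimum 1 and the same mean: the mean of the
# shifted exponential distribution is beta + 1.
mean = alpha / (alpha - 1.0)          # approximately 2.8836
w_exp = 1.0 + rng.exponential(scale=mean - 1.0, size=N)

# Uniform weights on [1, b] with the same mean: the mean is (1 + b) / 2.
b = 2.0 * mean - 1.0                  # approximately 4.7672
w_unif = rng.uniform(1.0, b, size=N)
\\end{verbatim}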
We use Exp-X-Y and Unif-X-Y to denote the exponential and uniform distributions, respectively, that have the same mean as the Pareto-X-Y distribution. \nIn total, we examine three different families of distributions (Pareto, exponential, and uniform) with different heaviness in their tails.\nIn Table~\\ref{tab:distributions}, we show the details of the probability density functions and the parameters of our node-weight distributions.\n\n\n\n\\subsection{Simulation Specifications} \\label{sec:sims} \n\nIn our node-weighted BCM, agents have opinions in the 1-dimensional (1D) opinion space $[0,1]$. Accordingly, we examine values of the confidence radius $c \\in (0, 1)$\n\\footnote{The extreme case $c=0$ is degenerate (because no agents update their opinions), and the case $c=1$ allows all agents to interact with each other. We are not interested in examining these cases.}.\nWe examine values of the compromise parameter $m \\in (0, 0.5]$, which is the typically studied range for the DW model \\cite{noorazar-et-al2020, meng2018}. \nWhen $m = 0.5$, two interacting agents who influence each other fully compromise and thus average their opinions. \nWhen $m < 0.5$, the two agents move towards each other's opinions, but they do not change their opinions to the mean (i.e., they do not fully compromise).\n\n\nIt is computationally intensive to conduct numerical simulations of a DW model. Additionally, as we will show in Sec.~\\ref{sec:results}, our node-weighted DW model with heterogeneous node weights can converge even more slowly than the baseline DW model to a steady state.\nIn our node-weighted BCM, the generation of graphs in a random-graph ensemble, the node-weight profiles, the initial-opinion profiles, and the selection of pairs of agents to interact at each time step are all stochastic. We use Monte Carlo simulations to reduce these sources of noise in our simulation results.\nFor each of our random-graph models (i.e., the ER and SBM graphs), we generate 5 graphs.\nFor each graph and each node-weight distribution, we randomly generate 10 sets of node weights. For each set of node weights, we generate 10 sets of initial opinions that are distributed uniformly at random. \nIn total, we have 100 distinct sets of initial opinions and node weights for the Monte Carlo simulations of each individual graph.\nWhen we compare simulations from\ndifferent distributions of node weights in the same individual graph, we reuse the same 100 sets of initial opinions.\n\n\nIn theory, the standard DW model and our node-weighted DW model can take infinitely long to approach a steady state.\nWe define an ``opinion cluster'' $S_r$ to be a maximal connected set of agents in which the pairwise differences in opinions are all strictly less than the confidence radius $c$; adding any other agent to $S_r$ will cause a violation in the condition on the opinion differences.\nEquivalently, we are defining an effective-receptivity network $G_{\\mathrm{eff}}(t) = (V, E_{\\mathrm{eff}}(t))$ as the time-dependent network that retains only the edges from the original network in which the associated pair of nodes are receptive to each others' opinions. 
That is,\n\\begin{equation}\n E_{\\mathrm{eff}}(t) = \\{ (i,j) \\in E : |x_i(t) - x_j(t)| < c \\} \\,.\n\\end{equation}\nThe opinion clusters are the connected components of $G_{\\mathrm{eff}}(t)$.\nIf two opinion clusters $S_1$ and $S_2$ are separated by a distance of at least $c$ (i.e.,\n$|x_i - x_j| \\geq c$ for all $i \\in S_1$ and $j \\in S_2$) at some time $\\tilde{T}$, \nthen (because $c$ is fixed) no agents from $S_1$ can influence the opinion of an agent in $S_2$ (and vice versa) for all $t \\geq \\tilde{T}$. \nTherefore, in finite time, we observe the formation of steady-state clusters of distinct opinions.\nIn practice, we specify that one of our simulations has ``converged'' if all opinion clusters are separated by a distance of at least $c$ and each opinion cluster has an opinion spread that is less than a tolerance of $0.02$. That is, for each cluster $S$, we have that\n$\\max_{i, j \\in S} |x_i - x_j| < 0.02$.\n{We use $T$ to denote the convergence time in our simulations; the connected components of $G_{\\mathrm{eff}}(T)$ are the steady-state opinion clusters.} \n\n\nTo reduce the computational burden of checking for convergence, we do not check at each time step and we compute the convergence time to three significant figures. \nTo guarantee that each simulation stops in a reasonable amount of time, we set a bailout time of $10^9$ iterations. \nIn our simulations, the convergence time is always shorter than the bailout time. We thus report the results of these simulations as steady-state results.\n\n\n\n\\subsection{Quantifying Opinion Consensus and Fragmentation} \\label{sec:quantities}\n\nIn our numerical simulations, we investigate which situations yield consensus (specifically, situations that result in one ``major'' opinion cluster, which we will discuss shortly) at steady state and which situations yield opinion fragmentation (when there are at least 2 distinct major clusters) at steady state.\n\\footnote{Some researchers use the term ``polarization'' to refer to the presence of exactly two opinion clusters (or to refer to exactly 2 major opinion clusters) and ``fragmentation'' to refer to the presence of 3 or more opinion clusters (or 3 or more major opinion clusters) \\cite{hegselmann_krause2002, bramson2016}. However, because we are interested in distinguishing between consensus states and any state that is not consensus, we use the term ``fragmentation'' for any state with at least 2 major opinion clusters. We then quantify the extent of opinion fragmentation.}\nWe are also interested both in how long it takes to determine the steady-state behavior of a simulation and in quantifying the extent of opinion fragmentation when it occurs. \nTo investigate these model behaviors, we compute the convergence time and the number of steady-state opinion clusters.\nIt is common to study these quantities in investigations of BCMs \\cite{noorazar-et-al2020, meng2018, peralta2022}.\n\n\nThere are situations when an opinion cluster has very few agents.\nConsider a 500-node network in which 499 agents eventually have the same opinion, but the remaining agent \n(say, Agent 86, despite repeated attempts by Agent 99 and other agents to convince them) retains a distinct opinion at steady state.\nIn applications, it is not appropriate to think of this situation as opinion fragmentation. \nTo handle this, we use the notions of ``major clusters'' and ``minor clusters'' \\cite{laguna2004, lorenz2008}.
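Before describing how we distinguish major clusters from minor clusters, we give a minimal computational sketch of the convergence check from Sec.~\\ref{sec:sims}: it constructs the effective-receptivity network $G_{\\mathrm{eff}}(t)$, extracts the opinion clusters as its connected components, and tests whether the clusters are mutually separated by at least $c$ and each have an opinion spread of less than $0.02$. As with the earlier sketches, it assumes Python with the NetworkX library and is an illustration rather than the implementation in our code repository.

\\begin{verbatim}
import itertools
import networkx as nx

def opinion_clusters(G, x, c):
    """Opinion clusters: the connected components of G_eff(t)."""
    G_eff = nx.Graph()
    G_eff.add_nodes_from(G.nodes())
    G_eff.add_edges_from((i, j) for i, j in G.edges()
                         if abs(x[i] - x[j]) < c)
    return [list(S) for S in nx.connected_components(G_eff)]

def has_converged(clusters, x, c, tol=0.02):
    """True if every cluster has opinion spread < tol and all pairs of
    clusters are separated by an opinion distance of at least c."""
    for S in clusters:
        opinions = [x[i] for i in S]
        if max(opinions) - min(opinions) >= tol:
            return False
    for S1, S2 in itertools.combinations(clusters, 2):
        if any(abs(x[i] - x[j]) < c for i in S1 for j in S2):
            return False
    return True
\\end{verbatim}

In a simulation, one can call these functions only occasionally (rather than at every time step) to reduce the computational burden of checking for convergence.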
We characterize major and minor clusters in an ad hoc way.\nWe define a ``minor'' opinion cluster in a network as an opinion cluster with at most 2\\% of the agents. Any opinion cluster that is not a minor cluster is a ``major'' cluster. \nIn our simulations, we calculate the numbers of major and minor opinion clusters at steady state. We only account for the number of major clusters when determining if a simulation reaches consensus (i.e., one major cluster) or fragmentation (i.e., more than one major cluster).\nWe still track the number of minor clusters and use the minor clusters when quantifying opinion fragmentation.\n\n\nQuantifying opinion fragmentation is much less straightforward than determining whether or not there is fragmentation. \nResearchers have proposed a variety of notions of fragmentation and polarization, and they have also proposed several ways to quantify them~\\cite{bramson2016}.\nIn principle, a larger number of opinion clusters is one indication of more opinion fragmentation. However, as we show in Fig.~\\ref{fig:cluster_size}, there can be considerable variation in the sizes (i.e., the number of nodes) of the opinion clusters. \nFor example, suppose that there are two opinion clusters. If the two opinion clusters have the same size, then one can view the opinions in the system as more polarized \nthan if one opinion cluster has a large majority of the nodes and the other opinion cluster has a small minority. \nAdditionally, although we only use major clusters to determine if a system reaches consensus or opinion fragmentation, we seek to distinguish quantitatively between the scenarios of opinion clusters (major or minor) with similar sizes from ones with opinion clusters with a large range of sizes. \nFollowing Han et al.~\\cite{han2020}, we do this by calculating Shannon entropy.\n\n\n\\begin{figure}[b]\n\\includegraphics[width=0.45\\textwidth]{images\/constant_weight_trajectory-d0.1-mu0.1-weight8-op0.png}\n\\caption{\\label{fig:cluster_size} Sample trajectories of {agent opinions versus time in a single simulation of} our node-weighted BCM on a complete graph with $N = 500$ nodes and a constant weight distribution. Therefore, this situation corresponds to our baseline DW model. We color the trajectory of each node by its final opinion cluster. Observe that the final opinion clusters have different sizes. There is a minor cluster (in black); it consists of a single node whose final opinion is about $0.4$. The opinion cluster that converges to the largest opinion value has about twice as many nodes as the other major clusters.\n}\n\\end{figure}\n\n\nSuppose that we have $K$ opinion clusters, which we denote by $S_r$ for $r \\in \\{1, \\ldots, K\\}$.\nWe call the set $\\{S_r\\}_{r=1}^K$ an ``opinion-cluster profile''; such a profile is a partition of a network.\nThe fraction of agents in opinion cluster $S_r$ is $|S_r|\/N$. The Shannon entropy $H$ of the opinion-cluster profile is\n\\begin{equation}\n H = - \\sum_{r = 1}^{K} \\frac{|S_r|}{N} \\ln \\left( \\frac{|S_r|}{N} \\right) \\,.\n\\end{equation}\nThe quantity $H$ describes the information gain of a particular opinion-cluster profile from knowing the cluster membership of a single agent in comparison to knowing no information.\nFor a fixed $K$, the entropy $H$ is larger if the cluster sizes are closer in magnitude than if there is more heterogeneity in the cluster sizes. 
For cluster profiles with similar cluster sizes, $H$ is larger if there are more clusters.\nAs in Han et al.~\\cite{han2020}, we use $H$ to quantify opinion fragmentation, with larger $H$ corresponding to greater opinion fragmentation. \nWe calculate $H$ using all steady-state opinion clusters (i.e., both major and minor clusters).\n\n\nAnother way to quantify opinion fragmentation is to look at a local level and consider individual agents. As Musco et al.~\\cite{musco2021} pointed out, if an individual agent has many neighbors with a similar opinion to it, then it may be ``unaware'' of other opinions in the network. \nFor example, an agent can observe that a majority of its neighbors hold an opinion that is uncommon in the global network. This phenomenon is sometimes called a ``majority illusion''~\\cite{lerman2016}.\nIf a set of adjacent agents tend to have neighbors with opinions that are similar to theirs, they may be in an ``echo chamber'' \\cite{flaxman2016}, as it seems that they are largely exposed only to conforming opinions.\nTo quantify the local observations of an agent, Musco et al.~\\cite{musco2021} calculated a notion of local agreement that measures the fraction of an agent's neighbors with opinions on the same side of the mean opinion of the agents in a network.\nIn our simulations, we often observe opinion fragmentation with three or more opinion clusters. \nTherefore, we need to look beyond the mean opinion of an entire network. To do this, we introduce the \\emph{local receptiveness} of an agent.\nAt time $t$, a node $i$ with neighborhood $\\mathcal{N}(i)$ has a local receptiveness of\n\\begin{equation}\n L_i(t) = \n \\frac{ \\left| \\{j \\in \\mathcal{N}(i) : |x_i(t) - x_j(t)| < c \\} \\right|}\n {|\\mathcal{N}(i)|} \\, .\n\\end{equation}\nThat is, $L_i(t)$ is the fraction of the neighbors of agent $i$ at time $t$ with which it is receptive to interacting (and can thereby have its opinion influenced). \nIn our paper, we only consider networks with no isolated nodes, so each agent $i$ has $|\\mathcal{N}(i)| \\geq 1$ neighbors. If one wants to consider isolated nodes, one can assign them a local receptiveness of $0$ or $1$.\nIn our numerical simulations, we calculate the local receptiveness of each agent at the convergence time $T$. \nWe then calculate the mean $\\langle L_i(T) \\rangle$ of all agents in a network. This is the steady-state mean local receptiveness, as it is based on edges in the steady-state effective-receptivity network $G_{\\mathrm{eff}}(T)$.\nWhen consensus is not reached, a smaller mean local receptiveness is an indication of greater opinion fragmentation. \nAs we will discuss in Sec.~\\ref{sec:results}, in concert with the number of opinion clusters, both the Shannon entropy and the mean local receptiveness can provide insight into the extent of opinion fragmentation.\n\n\n\n\\section{Numerical Simulations and Results} \\label{sec:results}\n\nIn this section, we present results from our numerical simulations of our node-weighted BCM. \nIn our numerical experiments, the compromise parameter takes the values $m \\in \\{0.1, 0.3, 0.5\\}$. For the confidence radius, we first consider the values $c \\in \\{0.1, 0.3, 0.5, 0.7, 0.9\\}$, and we then examine additional values of $c$ near regions with interesting results.\nAs we discussed in Sec.~\\ref{sec:sims}, we simulate a total of 100 distinct sets of initial opinions and node weights in Monte Carlo simulations of our BCM on each individual graph.
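Before presenting these results, we give a minimal computational sketch of the two fragmentation diagnostics from Sec.~\\ref{sec:quantities}, which we evaluate at the convergence time $T$. As with the earlier sketches, it assumes Python with the NumPy library, the graph is a NetworkX graph, and the function names are illustrative rather than those in our code repository.

\\begin{verbatim}
import numpy as np

def shannon_entropy(clusters, N):
    """Shannon entropy H of an opinion-cluster profile {S_r} in a
    network with N nodes; clusters is a list of lists of nodes."""
    fractions = np.array([len(S) / N for S in clusters])
    return float(-np.sum(fractions * np.log(fractions)))

def mean_local_receptiveness(G, x, c):
    """Mean over all nodes of L_i(t): the fraction of a node's neighbors
    whose opinions lie within the confidence radius c of its own opinion.
    Assumes that the network has no isolated nodes."""
    values = []
    for i in G.nodes():
        nbrs = list(G.neighbors(i))
        receptive = sum(1 for j in nbrs if abs(x[i] - x[j]) < c)
        values.append(receptive / len(nbrs))
    return float(np.mean(values))
\\end{verbatim}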
For each of our random-graph models (i.e., ER and SBM graphs), we generate 5 graphs. \nFor the 500-node complete graphs, we simulate the 10 weight distributions in Table~\\ref{tab:distributions}. \nBecause of computation time, we simulate the distributions with Pareto-90-10 mean only on 500-node complete graphs. For the other networks in Table~\\ref{tab:networks}, we consider the Pareto, exponential, and uniform families of distributions with the Pareto-80-10 and Pareto-80-20 means.\nSee our repository at \\url{https:\/\/gitlab.com\/gracejli1\/NodeWeightDW} for our code and additional plots.\nIn Table~\\ref{tab:trends}, we summarize the trends that we observe in the examined networks. In the following subsections, we discuss details of our results for each type of network. \nThe numbers of major and minor clusters, Shannon entropies, and values of mean local receptiveness are all steady-state values.\n\n\n\\newcommand{0.5in}{0.5in}\n\\newcommand{2.4in}{2.4in}\n\\newcommand{\\trendvspace}{10pt} \n\n\n\\begin{table}\n\\caption{\\label{tab:trends} Summary of the trends in our simulations of our node-weighted BCM. Unless we note otherwise, we observe these trends for each of the networks that we examine (complete graphs, ER and SBM random graphs, and the Caltech network).}\n\\begin{ruledtabular}\n\\begin{tabular}{m{0.5in} m{2.4in}}\nQuantity & Trends \\\\ \\hline\n\\begin{tabular}{m{0.5in}} Convergence Time \\end{tabular}\n& \\begin{tabular}{m{2.4in}}\n- The heterogeneous weight distributions have longer convergence times than the constant-weight distribution. \n\\end{tabular} \\\\ \\hline\n\\begin{tabular}{m{0.5in}}\nOpinion \\\\ Fragmentation\\footnotemark[1]\n\\end{tabular} &\n\\begin{tabular}{m{2.4in}}\n - For confidence radii $c \\in [0.1, 0.4]$, the heterogeneous weight distributions usually have more opinion fragmentation than the constant-weight distribution. \\\\ \\vspace{\\trendvspace}\n - For a fixed distribution mean, there is more opinion fragmentation as the tail of a distribution becomes heavier. \\\\ \\vspace{\\trendvspace}\n - Within a family of distributions, there is more opinion fragmentation when a distribution has a larger mean.\\footnotemark[2]\n \\end{tabular}\n\\\\ \\hline\n\\begin{tabular}{m{0.5in}}\nNumber of Major Clusters \\end{tabular} \n& \\begin{tabular}{m{2.4in}}\n - In comparison to the constant-weight distribution, the heterogeneous weight distributions need to be above a larger threshold $c$ to consistently reach consensus. \\\\ \\vspace{\\trendvspace}\n - For a fixed distribution mean, there are more major clusters as the tail of a distribution becomes heavier. \\\\ \\vspace{\\trendvspace}\n - Within a family of distributions, there are more major clusters when a distribution has a larger mean.\n \\end{tabular} \\\\ \\hline\n\\begin{tabular}{m{0.5in}}\nNumber of Minor Clusters \\end{tabular}\n& \\begin{tabular}{m{2.4in}}\n- For the constant-weight distribution, there tends to be more minor clusters when the compromise parameter $m \\in \\{0.3, 0.5\\}$ than when $m = 0.1$. The heterogeneous weight distributions do not follow this trend.\\footnotemark[3] \\end{tabular}\n\\end{tabular}\n\\end{ruledtabular}\n\\footnotetext[1]{We quantify opinion fragmentation using Shannon entropy and mean local receptiveness. 
We observe clearer trends for Shannon entropy than for the mean local receptiveness.}\n\\footnotetext[2]{In contrast to this trend, the associated results are inconclusive for 500-node complete graphs for the uniform and exponential distribution families.\n}\n\\footnotetext[3]{For the Caltech network, we usually observe more minor clusters when $m \\in \\{0.3, 0.5\\}$ than when $m = 0.1$ for each of our heterogeneous weight distributions.}\n\\end{table}\n\n\n\n\\subsection{Complete Graphs} \\label{sec:results-complete} \n\nThe simplest underlying network structure on which we run our node-weighted BCM is a complete graph.\nComplete graphs provide a baseline setting for examining how heterogeneous node-selection probabilities affect opinion dynamics.\nIn our numerical simulations on complete graphs, we consider all three means (which we denote by 80-10, 80-20, and 90-10 in Table~\\ref{tab:distributions}) for each of the uniform, exponential, and Pareto node-weight distribution families.\n\nFor the standard DW model on a complete graph with agents with opinions in the interval $[0,1]$, one eventually reaches consensus if the confidence radius $c \\geq 0.5$.\nAs one decreases $c$ from $0.5$, there are progressively more {steady-state} clusters (both minor and major) \\cite{lorenz2008, benaim2003}.\nLorenz \\cite{lorenz2008} showed using numerical simulations that the number of major clusters is approximately\n$\\lfloor \\frac{1}{2c} \\rfloor$ for the standard DW model. \nTherefore, a transition between consensus and opinion fragmentation occurs for $c \\in [0.25, 0.3]$. We zoom in on these confidence radii in our simulations to explore this transition.\nThe location of the transition is different for the Pareto distributions, so we instead simulate additional values of $c \\in [0.3, 0.4]$ for this situation.\nWe also simulate these additional values of $c$ for the constant-weight distribution, which is our baseline DW model with uniformly random node-based selection of agents.\n\n\n\\begin{figure}[b]\n\\includegraphics[width=0.48\\textwidth]{images\/complete\/T_avg--labeled--with_std.png}\n\\caption{\\label{fig:complete_T} Convergence time (in terms of the number of time steps) in simulations of our node-weighted BCM on a 500-node complete graph. If we only consider the time steps in which interacting nodes actually change their opinions, the times are smaller; however, the trends are the same.\nFor this heat map and all subsequent heat maps, the depicted values are the means of our simulations plus and minus one standard deviation.}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:complete_T}, we show the convergence times (which we measure in terms of the numbers of time steps) of our simulations for various node-weight distributions. In comparison to the constant-weight distribution, all heterogeneous weight distributions have longer convergence times. \nWithin the same family of distributions (uniform, exponential, or Pareto), the convergence time is progressively larger for distributions with progressively larger means. 
\nFor a given heterogeneous weight distribution, the convergence time is also progressively larger for progressively smaller values of the compromise parameter $m$.\nWhen calculating convergence time, we include the time steps in which two nodes interact but do not change their opinions.\nTo see if the heterogeneous weight distributions have inflated convergence times as a result of more of these futile interactions, we also calculate the number of time steps to convergence when we exclude such time steps. \nThat is, we count the total number of opinion changes to reach convergence. On a logarithmic scale, there is little difference between the total number of opinion changes and the total number of time steps to reach convergence. We include the associated plot and values of the numbers of opinion changes in our code repository.\n\n\n\\begin{figure}[ht!]\n\\includegraphics[width=0.48\\textwidth]{images\/complete\/major_clusters_avg--labeled--with_std.png}\n\\caption{\\label{fig:complete_clusters} Numbers of major opinion clusters {at steady state}\nin simulations of our node-weighted BCM on a 500-node complete graph with various distributions of node weights. We consider a cluster to be a major cluster if it has more than 2\\% of the nodes in the network. (In this case, a major cluster must have at least 11 nodes.)}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:complete_clusters}, we show the numbers of major opinion clusters at steady state in our simulations for various node-weight distributions. For all distributions, consensus occurs consistently (i.e., in all of our simulations) when the confidence radius $c \\geq 0.5$. \nFor the constant-weight distribution, consensus occurs consistently when $c \\geq 0.35$. When $c \\in [0.1, 0.4]$, the heterogeneous weight distributions have more steady-state major clusters than the constant-weight distribution.\nWhen we introduce heterogeneous node weights into our BCM, we need a larger threshold confidence radius $c$ to consistently reach consensus. \nIt appears that our BCM with heterogeneous node weights\ntends to have more opinion fragmentation than the baseline DW model.\nBy comparing the columns in Fig.~\\ref{fig:complete_clusters}, we observe for each distribution family (uniform, exponential, and Pareto) that there are more steady-state major clusters\n(when proceeding left to right in the plot from 80-10 to 80-20 and then to 90-10) when the distribution mean is larger.\nAdditionally, for a fixed mean weight, there are more steady-state major clusters as we proceed from a uniform distribution to an exponential distribution and then to a Pareto distribution.\n\n\n\\begin{figure}[ht!]\n\\includegraphics[width=0.48\\textwidth]{images\/complete\/entropy_avg--labeled--with_std.png}\n\\caption{\\label{fig:complete_entropy} Shannon entropies of the steady-state opinion-cluster profiles in simulations of our node-weighted BCM on a 500-node complete graph with various node-weight distributions.}\n\\end{figure}\n\n\nTo investigate how the model parameters affect the amount of opinion fragmentation, we calculate steady-state Shannon entropy and mean local receptiveness (see Sec.~\\ref{sec:quantities}).
In Fig.~\\ref{fig:complete_entropy}, we show the steady-state entropy values of our simulations for various node-weight distributions.\nFor all distributions, when there is opinion fragmentation instead of consensus, a progressively smaller confidence radius $c$ yields progressively larger {steady-state} entropies.\nIn line with our observations in Fig.~\\ref{fig:complete_clusters}, when $c \\in [0.1, 0.4]$, simulations of heterogeneous weight distributions usually yield larger entropies than the constant-weight distribution.\nFor a fixed mean weight, we also observe a slightly larger entropy as we proceed from a uniform distribution to an exponential distribution and then to a Pareto distribution. \nFor the Pareto distributions, there is progressively larger entropy for progressively larger distribution means (from left to right in Fig.~\\ref{fig:complete_entropy}). \nFor the exponential and uniform distributions, although we do not conclusively observe the same trend, we do obtain progressively more major clusters for progressively larger distribution means (see Fig.~\\ref{fig:complete_clusters}). \nFor these distributions, a larger mean weight results in more major clusters, but these clusters are smaller, so the Shannon entropy is similar. \nTherefore, for these two distribution families, if we quantify fragmentation using Shannon entropy, we conclude that increasing the mean weight of a distribution has little effect on the amount of opinion fragmentation.\nBecause the entropy depends on the sizes of the opinion clusters, it provides more information about the opinion fragmentation than only tracking the number of major clusters.\nOur plot of the mean local receptiveness illustrates the same trends as the entropy. (See our code repository for the relevant figure.) This suggests that both the Shannon entropy and the mean local receptiveness are useful for quantifying opinion fragmentation.\n\n\nWe now discuss the numbers of steady-state minor clusters\nthat we obtain in our numerical experiments on complete graphs. (See our code repository for a plot.)\nFor all of the examined parameter values, once we average over our 100 simulations for a given node-weight distribution and specified values of $c$ and $m$, we obtain at most two minor clusters at steady state. \nWe observe the most minor clusters when $c \\in \\{0.1, 0.2\\}$, which are the smallest confidence radii that we examine.\nFor the constant-weight distribution, there tend to be more minor clusters when $m \\in \\{0.3, 0.5\\}$ than when $m = 0.1$. However, we do not observe this trend for the heterogeneous weight distributions.\nFor example, for the Pareto-80-10 distribution, when $c \\in [0.34, 0.4]$, \nsmaller values of $m$ result in more minor clusters. For the Pareto distribution, for smaller values of $m$, we also observe that minor clusters tend to appear at smaller confidence radii. \nSmaller values of $m$ entail smaller opinion compromises for interacting agents; this may give more time for agents to interact before they settle into their final opinion clusters. \nFor the constant-weight distribution, this may reduce the number of minor clusters by giving more opportunities for agents to be assimilated into a major cluster.\nHowever, for our heterogeneous weight distributions, nodes with larger weights have a larger probability of being involved in interactions, and we no longer observe fewer minor clusters for smaller values of $m$.
\n\n\n\\begin{figure}[h!]\n\\includegraphics[width=0.45\\textwidth]{images\/pareto_trajectory-d0.2-mu0.1-weight4-op0.png}\n\\caption{\\label{fig:pareto_trajectory} Sample trajectories of {agent opinions versus time in a single simulation of} our node-weighted BCM on a complete graph with $N = 500$ nodes and node weights that are distributed according to a Pareto-80-10 distribution. We color the trajectory of each agent by its node weight. The nodes in the two minor clusters are all small-weight nodes; their weights are close to 0 (and are hence in purple).}\n\\end{figure}\n\n\nWe now propose a possible mechanism by which our node-weighted BCM may promote the trends in Table~\\ref{tab:trends}.\nIn Fig.~\\ref{fig:pareto_trajectory}, we show the trajectories of opinions versus time for a single simulation with node weights that we draw from a Pareto-80-10 distribution.\nTo qualitatively describe our observations, we examine the large-weight and small-weight nodes (i.e., the nodes that are near and at the extremes of a set of node weights in a given simulation). Because our node-selection probabilities are proportional to node weights, to compare the weights in a simulation, we normalize them to sum to $1$. \nIn Fig.~\\ref{fig:pareto_trajectory}, the large-weight nodes in a simulation appear to quickly stabilize into an associated major opinion cluster, and some small-weight nodes are left behind to form the two minor clusters.\nIn our numerical simulations on complete graphs, we observed that introducing heterogeneous node weights results in large-weight nodes interacting more frequently and quickly settling into their respective steady-state major opinion clusters.\nSmall-weight nodes that are not selected early in a simulation are left behind to form the smallest clusters in a steady-state opinion-cluster profile; this increases the amount of opinion fragmentation.\nIn comparison to the constant-weight distribution, when we increase the mean node weight or increase the relative proportion of large-weight nodes in the Pareto-80-10 distribution (thereby increasing the heaviness of the tail of a distribution) or decrease the value of the compromise parameter $m$, \nsmall-weight nodes take longer to settle into their respective opinion clusters; this may promote both opinion fragmentation and the formation of minor clusters.\n\n\n\\subsection{Erd\\H{o}s--R\\'{e}nyi (ER) Graphs} \\label{sec:results-ER} \n\n\nWe now examine random graphs that we generate using $G(N,p)$ ER random-graph models, where $p$ is the homogeneous, independent probability of an edge between any pair of nodes \\cite{newman2018}. For $p=1$, these ER graphs are complete graphs (see Sec.~\\ref{sec:results-complete}). In this subsection, we use the probabilities $p \\in \\{0.1, 0.3, 0.5, 0.7\\}$. \n\n\n\\begin{figure*}[th!]\n\\includegraphics[width=0.95\\textwidth]{images\/ER\/All--entropy_avg--labeled--with_std.png}\n\\caption{\\label{fig:er_entropy} Shannon entropies of the steady-state opinion-cluster profiles in simulations of our node-weighted BCM on $G(500, p)$ ER random graphs with various node-weight distributions.}\n\\end{figure*}\n\n\nFor each value of $p$, we observe the trends in Table~\\ref{tab:trends}. 
We include the plots of our simulation results for convergence times, \nthe steady-state numbers of major and minor clusters, and the steady-state values of mean local receptiveness in our code repository.\nIn Fig.~\\ref{fig:er_entropy}, we show the steady-state Shannon entropies of our simulations for various node-weight distributions and values of $p$. \nThe entropies are comparable to those that we obtained in our simulations on 500-node complete graphs.\nWhen $c \\in [0.1, 0.4]$, for each of the three distribution families that we examine,\nthe larger-mean distribution (with a Pareto-80-20 mean of 7.2126) has larger entropy than the smaller-mean distribution (with a Pareto-80-10 mean of 2.8836). In our 500-node complete-graph simulations, this trend was inconclusive for the uniform and exponential distributions.\n\n\nFor larger $p$, we expect the results of our simulations on $G(500, p)$ networks to be similar to those of our simulations on a 500-node complete graph.\nFor $p \\in \\{ 0.3, 0.5, 0.7\\}$ and $N=500$, the number of major clusters and the mean local receptiveness are comparable to the corresponding results for a 500-node complete graph. \nWhen $p = 0.1$ and there is opinion fragmentation, we observe fewer major opinion clusters than for larger values of $p$. \nFor $p = 0.1$, when $c\\in [0.1, 0.4]$, we also observe that the mean local receptiveness tends to be larger than the corresponding values for larger $p$. One possible contributing factor for this observation may be that smaller values of $p$ yield $G(N,p)$ graphs with more small-degree nodes, which have fewer possible values of local receptiveness.\nFor example, a node with degree $2$ can only have a local receptiveness in the set $\\{0, 0.5, 1\\}$. \nUnless a small-degree node is an isolated node in the steady-state effective-receptivity network $G_{\\mathrm{eff}}(T)$, it may help inflate the value of the steady-state mean local receptiveness. \n\n\nFor progressively smaller values of $p$, the steady-state number of minor clusters becomes progressively larger. \nFor $p \\in \\{0.5, 0.7\\}$, the steady-state numbers of minor clusters are comparable to the numbers that we obtained for a 500-node complete graph. \nIn our mean of our 100 simulations for each distribution and each combination of $c$ and $m$, we obtain at most 2--3 minor clusters at steady state; this occurs when $c \\in \\{0.1, 0.2\\}$.\nHowever, for $p = 0.1$, we observe up to 9 minor clusters at steady state; this occurs when $c \\in \\{0.35, 0.4\\}$.\nIt seems sensible that smaller values of $p$ yield more minor clusters. For small $p$, there are more small-degree nodes than for larger values of $p$. \nSmall-degree nodes have fewer edges $(i,j)$ than large-degree nodes that need to satisfy the inequality $|x_i - x_j| < c$ to become part of a minor cluster in the steady-state effective-receptivity network.\n\n\n\\subsection{Stochastic-Block-Model (SBM) Graphs} \\label{sec:results-SBM} \n\nWe now examine SBM random graphs that we generate using the parameters in Table~\\ref{tab:networks}. For both the two-community and core--periphery SBM graphs, we observe the trends in Table~\\ref{tab:trends}. \nWe include the plots of our simulation results at steady state for convergence times, numbers of major and minor clusters, values of Shannon entropy, and values of mean local receptiveness in our code repository. 
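For reference, SBM networks of the two types that we study in this subsection can be generated with NetworkX as in the following hedged sketch. The block sizes and edge probabilities below are placeholders; the values that we actually use are those listed in Table~\\ref{tab:networks}.

\\begin{verbatim}
import networkx as nx

N = 500

# Two-community SBM: two equally sized blocks. The paper chooses the edge
# probabilities so that each community's expected mean degree matches that
# of a G(500, 0.1) graph; the numbers below are placeholders.
sizes_two = [N // 2, N // 2]
probs_two = [[0.20, 0.01],
             [0.01, 0.20]]
G_two_community = nx.stochastic_block_model(sizes_two, probs_two, seed=1)

# Core--periphery SBM: a densely connected core and a sparsely connected
# periphery (placeholder sizes and probabilities).
sizes_cp = [100, 400]
probs_cp = [[0.50, 0.05],
            [0.05, 0.01]]
G_core_periphery = nx.stochastic_block_model(sizes_cp, probs_cp, seed=2)
\\end{verbatim}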
\n\n\nFor the two-community SBM graphs, the steady-state Shannon entropies and numbers of major clusters are comparable to those in our simulations on a complete graph. \nWhen there is opinion fragmentation, the steady-state values of mean local receptiveness tend to be similar to the corresponding values for $G(500, 0.1)$ graphs and larger than the values for a complete graph.\nThe steady-state numbers of minor clusters are similar to those for the $G(500, 0.1)$ random graphs. We observe up to 9 minor clusters when $c \\in \\{0.35, 0.4\\}$, which is near the transition between consensus and fragmentation in the standard DW model \\cite{lorenz2008}.\nRecall that we selected the edge probabilities of the two-community SBM so that each of the two communities has an expected mean degree that matches that of $G(500, 0.1)$ graphs.\nTherefore, it is reasonable that we obtain similar results for the two-community SBM and the $G(500, 0.1)$ random graphs. \nIn our numerical experiments, we assign the node weights randomly, without consideration of the positions of the nodes in a network. It may therefore be that the sparsity of a graph is more important than its community structure because we do not use community structure to influence the assignment of weights (e.g., which specific nodes have large weights) in our networks.\n\n\nFor the core--periphery SBM graphs, both the steady-state Shannon entropy and the steady-state mean local receptiveness tend to be larger than the corresponding values for a complete graph. Larger entropy and smaller local receptiveness are both indications of more opinion fragmentation. \nWhen we consider the opinion-cluster profile of an entire network, Shannon entropy reveals that there is more opinion fragmentation in core--periphery SBM graphs than in a complete graph. \nHowever, the steady-state mean local receptiveness indicates that the nodes in a core--periphery SBM graph tend to be receptive to a larger fraction of their neighbors than the nodes in a complete graph.\n\n\nWe believe that Shannon entropy provides a more useful quantification of opinion fragmentation in a network than the mean local receptiveness. For networks with a large range of degrees, small-degree nodes can inflate the mean value of local receptiveness. \nA similar trend has been observed for clustering coefficients; the mean local clustering coefficient places more weight on small-degree nodes than the global clustering coefficient of a network \\cite{newman2018}. \nIn the context of our node-weighted BCM, consider a node with degree 2 and a node with degree 100, and suppose that both of them have a local receptiveness of 0.5. The larger-degree node having a local receptiveness of 0.5 gives a better indication that there may be opinion fragmentation in a network than the smaller-degree node having the same local receptiveness. \nHowever, we treat both nodes equally when calculating the mean local receptiveness. \nWe believe that local receptiveness is a useful quantity to calculate for individual nodes to determine how they perceive the opinions of their neighbors. However, the mean local receptiveness appears to be less useful than Shannon entropy for quantifying opinion fragmentation in a network.\n\n\nThe steady-state numbers of major clusters that we obtain in the core--periphery SBM graphs are comparable to the corresponding numbers for a complete graph.
\nThe steady-state numbers of minor clusters tend to be larger for core--periphery SBM graphs than for two-community SBM graphs (which have more minor clusters than a complete graph). \nWe observe up to 11 minor clusters at steady state; this occurs when $c = 0.1$. One possibility is that the core--periphery structure makes it easier to disconnect peripheral nodes in the effective-receptivity network, causing these nodes to form minor clusters. \nFor the core--periphery SBM graphs, it also seems interesting to investigate the effect of using network structure to assign which nodes have large weights.\nFor example, if we assign all of the large weights to nodes in the core, will that pull more of the peripheral nodes into opinion clusters with core nodes? If we place a large-weight node in the periphery, will it be able to pull core nodes into its opinion cluster?\n\n\n\n\\subsection{Caltech Network} \\label{sec:results-Caltech} \n\n\\begin{figure*}[th!]\n\\includegraphics[width=0.93\\textwidth]{images\/Caltech\/All--minor_clusters_avg--labeled--with_std.png}\n\\caption{\\label{fig:caltech_minor} The steady-state numbers of minor opinion clusters in simulations of our node-weighted BCM on the Caltech Facebook network with various distributions of node weights. We consider a cluster to be minor cluster if it has at most 2\\% of the nodes (i.e., 15 or fewer nodes) in the network.}\n\\end{figure*}\n\n\\begin{figure*}[th!]\n\\includegraphics[width=0.93\\textwidth]{images\/Caltech\/All--entropy_avg--labeled--with_std.png}\n\\caption{\\label{fig:caltech_entropy} Shannon entropies of the steady-state opinion-cluster profile in simulations of our node-weighted BCM on the Caltech Facebook network with various node-weight distributions.}\n\\end{figure*}\n\n\nWe now discuss the Caltech Facebook network, which is an empirical data set in which the nodes are individuals with Caltech e-mail addresses and the edges represent ``friendships'' on Facebook on one day in fall 2005 \\cite{red2011, traud2012}. We consider the network's largest connected component, which has 762 nodes and 16,651 edges. \nThe Caltech network has all but one of the trends that we reported in Table~\\ref{tab:trends}; the only exception is the trend in the number of minor clusters.\nWhen there is opinion fragmentation, the Caltech network has more steady-state minor clusters and larger steady-state Shannon entropies than in our synthetic networks.\n\n\nIn Fig.~\\ref{fig:caltech_minor}, we show the steady-state numbers of minor clusters in simulations of our BCM on the Caltech network. We obtain the most minor clusters when $c = 0.1$, which is the smallest value of $c$ that we examine. \nOnce we average over our 100 simulations on the Caltech network for each distribution and each pair of values of $c$ and $m$, we obtain up to 78 minor clusters, which is much more than the single-digit numbers that we usually observe for our synthetic networks.\nAdditionally, unlike in our synthetic networks, for all distributions (not just the constant-weight distribution), the Caltech network tends to have more minor clusters for $m \\in \\{0.3, 0.5\\}$ than for $m = 0.1$. \nWe include our plot of the steady-state number of major clusters in our code repository. 
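The classification of steady-state opinion clusters into major and minor clusters that we use throughout this section can be implemented as in the following minimal sketch. The 2\\% threshold follows the convention stated in the caption of Fig.~\\ref{fig:caltech_minor}, and we assume that the cluster sizes come from the connected components of the steady-state effective-receptivity network; the function name and the example numbers are ours.

\\begin{verbatim}
def classify_clusters(cluster_sizes, num_nodes, minor_fraction=0.02):
    """Split opinion-cluster sizes into major and minor clusters.

    A cluster is 'minor' if it contains at most `minor_fraction` of the
    nodes (15 or fewer nodes for the 762-node Caltech network); otherwise
    it is 'major'.
    """
    threshold = minor_fraction * num_nodes
    major = [s for s in cluster_sizes if s > threshold]
    minor = [s for s in cluster_sizes if s <= threshold]
    return major, minor

# Example with a hypothetical steady-state opinion-cluster profile:
sizes = [430, 300, 12, 11, 5, 3, 1]
major, minor = classify_clusters(sizes, num_nodes=762)
print(len(major), "major clusters;", len(minor), "minor clusters")
\\end{verbatim}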
The Caltech network tends to have fewer major clusters than the $G(500, 0.1)$ random graphs, which in turn tend to have fewer major clusters than our other synthetic networks.\n\n\nIn Fig.~\\ref{fig:caltech_entropy}, we show the steady-state Shannon entropies for the Caltech network.\nWhen there is opinion fragmentation, the Caltech network has a larger entropy than the corresponding entropies for our synthetic networks. This aligns with our observation that the Caltech network has many more minor clusters than our synthetic networks. \nWe show a plot of the steady-state values of mean local receptiveness for the Caltech network in our code repository. The mean local receptiveness tends to be larger for the Caltech network than for the 500-node complete graph. We suspect that this arises from the presence of many small-degree nodes in the Caltech network. We discussed the impact of small-degree nodes on the mean local receptiveness in Sec.~\\ref{sec:results-SBM}. \n\n\nThe degree histogram of the Caltech network in Fig.~\\ref{fig:caltech_degree} differs dramatically from those of our synthetic networks.\nUnlike in our synthetic networks, the most common degrees in the Caltech network are among the smallest degrees. In Fig.~\\ref{fig:caltech_degree}, the tallest bar in the histogram is for nodes of degrees 0--9. These abundant small-degree nodes are likely to disconnect from the largest connected component(s) in the effective-receptivity network and form minor clusters. \nBecause we select the initial opinions uniformly at random from $[0,1]$, for $c = 0.1$, it is possible that small-degree nodes are initially isolated nodes in the effective-receptivity network because of their initial opinions. The abundance of small-degree nodes in the Caltech network helps explain its larger steady-state numbers of minor clusters and its correspondingly larger entropies in comparison to our synthetic networks.\nDespite the fact that the Caltech network is structurally very different from our synthetic networks, it follows all of the trends in Table~\\ref{tab:trends} other than the one for the number of minor clusters. \nTherefore, it appears that the trends that we observe in our node-weighted BCM when we assign node weights uniformly at random (and hence in a way that is independent of network structure) are fairly robust to the underlying network structure. \n\n\n\\begin{figure}[th!]\n\\includegraphics[width=0.35\\textwidth]{images\/Caltech\/Caltech_degree_histogram.png}\n\\caption{\\label{fig:caltech_degree} Degree histogram for the Caltech Facebook network. The bins have width 10 and originate at the left end point (i.e., the bins indicate degrees 0--9, 10--19, and so on).}\n\\end{figure}\n\n\n\n\\subsection{Finite-Size Effects} \\label{sec:results-finite_size} \n\nWe now investigate finite-size effects in our BCM results for our simulations on a complete graph. \nTo ensure reasonable computation times, we examined synthetic networks with 500 nodes.\nHowever, it is useful to get a sense of whether the trends in Table~\\ref{tab:trends} hold for networks of different sizes. \nTo start to investigate this, we simulate our BCM on complete graphs of sizes $100, 200, \\ldots, 1000$. We examine $m \\in \\{0.3, 0.5\\}$ and $c \\in \\{0.1, 0.3, 0.5\\}$; these choices give regimes of opinion fragmentation, of transition between fragmentation and consensus for the constant-weight distribution, and of opinion consensus.
\nWe examine the constant-weight distribution and the uniform, exponential, and Pareto distributions with the same `80-10' mean (i.e., with a mean node weight of 2.8836) because the smaller-mean distributions have shorter computation times. \n\n\nIn Fig.~\\ref{fig:size_T}, we show the convergence times of our simulations of our BCM on complete graphs of various sizes. For all distributions, as the graph size becomes progressively larger, the convergence times also become progressively longer. \nFor each graph size, the convergence times for the heterogeneous weight distributions are longer than those for the constant-weight distribution. \nThe convergence times for the different heterogeneous distributions in Fig.~\\ref{fig:size_T} do not follow a clear trend.\n\n\n\\begin{figure*}[th!]\n\\includegraphics[width=0.95\\textwidth]{images\/finite-size\/T.png}\n\\caption{\\label{fig:size_T} Convergence times (in terms of the number of time steps) in simulations of our node-weighted BCM on complete graphs of various sizes. The plots give results for different choices of $c$ and $m$; the marker shape and color indicates the node-weight distribution. The points are means of 100 simulations, and the error bars represent one standard deviation from the mean. The vertical axes of the plots have different scales.}\n\\end{figure*}\n \n \nIn Fig.~\\ref{fig:size_entropy}, we show the steady-state Shannon entropies from our simulations of our BCM on complete graphs of various sizes. For a given value of $c$, we observe similar results for $m = 0.3$ and $m = 0.5$. \nFor $c = 0.5$, we see that for each distribution, we reach consensus (i.e., the steady-state entropy is $0$) fairly consistently for $N \\geq 300$ nodes. As the size of the network becomes progressively larger, the error bars (which indicate one standard deviation from the mean) also become progressively smaller. \nFor $c = 0.3$, the mean steady-state entropies appear to have settled with respect to $N$ for $N \\geq 400$. For $c = 0.1$, the graph size appears to have little effect on the mean entropy. \n\n\n\\begin{figure*}[th!]\n\\includegraphics[width=0.95\\textwidth]{images\/finite-size\/entropy.png}\n\\caption{\\label{fig:size_entropy} Shannon entropies of the steady-state opinion-cluster profiles in simulations of our node-weighted BCM on complete graphs of various sizes. The plots give results for different choices of $c$ and $m$; the marker shape and color indicates the node-weight distribution. The points are means of 100 simulations, and the error bars represent one standard deviation from the mean. The vertical axes of the plots have different scales.}\n\\end{figure*}\n\n\nWhen there is opinion fragmentation, the heterogeneous distributions yield larger steady-state Shannon entropies (and hence more opinion fragmentation, if measuring it using entropy) than the constant-weight distribution for each graph size. \nAdditionally, for a given distribution mean, we obtain larger entropies (and thus more opinion fragmentation) as we increase the heaviness of the tail of a distribution.\nWe have not explored the effect of graph size on the trend of increasing the distribution mean within the same distribution family.\nIn our code repository, we include a plot of the the steady-state mean local receptiveness for complete graphs of various sizes. 
In that plot, we also observe the trend of more opinion fragmentation (specifically, in the sense of a smaller mean local receptiveness) for heterogeneous distributions with increasingly heavy tails.\n\n\nWe also examine plots of the steady-state numbers of major and minor clusters in simulations of our BCM on complete graphs of various sizes; we include these plots in our code repository.\nThere is not a clear trend in the numbers of major clusters as the graph size becomes progressively larger. \nFor each graph size, when $c = 0.3$, there are more major clusters as we increase the heaviness of the tail of a distribution. For each graph size, when $c = 0.1$, the Unif-80-10 and constant-weight distributions have similar numbers of major clusters. \nFor $c = 0.1$, for each distribution, there are usually \nprogressively more minor clusters as the complete graph becomes progressively larger.\n(See the associated plot in our code repository.)\n\n\nOverall, for graphs with $N = 500$ or more nodes, the mean steady-state entropies for each distribution appear to have settled with respect to $N$; the mean entropies fluctuate less for $N \\geq 500$ than for smaller values of $N$.\nFor each of the distributions that we consider in this discussion and for each of the graph sizes, the heterogeneous distributions have longer convergence times than the constant-weight distribution. Additionally, in all of these cases, there is more opinion fragmentation as we increase the heaviness of the tail of a distribution.\nBecause of computation time, we have not examined distributions with different means. However, because the mean entropies settle with respect to $N$ for graph sizes of $N \\geq 500$, we hypothesize that the trends in opinion fragmentation and convergence time in Table~\\ref{tab:trends} continue to hold for our synthetic networks when there are more than 500 nodes.\n\n\n\n\\section{Conclusions and Discussion} \\label{sec:discussion} \n\nWe developed a novel bounded-confidence model (BCM) with heterogeneous node-selection probabilities, which we modeled by introducing node weights. One can interpret these node weights as encoding phenomena such as heterogeneous agent sociabilities or activity levels.\nWe studied our node-weighted BCM with fixed node weights that we assign in a way that disregards network structure and node opinions.\nWe also demonstrated that our BCM yields longer convergence times and more opinion fragmentation than our baseline Deffuant--Weisbuch (DW) BCM, in which we uniformly randomly select nodes for interaction.\nIt is straightforward to adapt our BCM to assign node weights in a way that depends on network structure and\/or node opinions. See Sec.~\\ref{sec:discussion-networks} and Sec.~\\ref{sec:discussion-opinion} for discussions.\n\n\n\\subsection{Summary of our Results} \\label{sec:discussion-summary} \n\nWe simulated our node-weighted BCM with a variety of node-weight distributions (see Table~\\ref{tab:distributions}) on several random and deterministic networks (see Table~\\ref{tab:networks}). \nFor each of these distributions and networks, we systematically investigated the convergence time and opinion fragmentation as we varied the confidence radius $c$ and the compromise parameter $m$. \nTo determine if the nodes in a network reach consensus or if there is opinion fragmentation, we calculated the steady-state number of major clusters in our simulations. 
To quantify the amount of opinion fragmentation, we calculated the steady-state Shannon entropy and mean local receptiveness. \nFor a given network, we found that entropy and mean local receptiveness show the same trends for which distributions have more opinion fragmentation (see Table~\\ref{tab:trends}).\nBased on our results, we believe that a network's Shannon entropy is more useful than its mean local receptiveness for quantifying opinion fragmentation in the network.\nHowever, calculating local receptiveness is relevant for examining the opinion dynamics of individual nodes.\n\n\nIn our simulations of our node-weighted BCM, we observed a variety of typical trends (see Table~\\ref{tab:trends}). \nIn particular, we found that introducing heterogeneous distributions of node weights results in longer convergence times and more opinion fragmentation in comparison to the baseline DW model (which we obtain by using a constant-weight distribution).\nOpinion fragmentation further increases if either (1) for a fixed distribution mean, we make the tail of the distribution heavier or (2) for a given distribution family, we increase the mean of the distribution. \nFor a set of heterogeneous node weights, we propose that large-weight nodes are selected early in a simulation with large probabilities and quickly associate with their respective steady-state major opinion cluster.\nSmall-weight nodes that are not selected early in a simulation are left behind to form small opinion clusters, resulting in more opinion fragmentation than in the baseline DW model.\n\n\n\n\\subsection{Relating Node Weights to Network Structure} \\label{sec:discussion-networks} \n\nWe examined deterministic and random graphs with various structures, and we observed that the trends in Table~\\ref{tab:trends} hold for each of our networks. \nFor each of our simulations, we determined node weights using a specified distribution and then assigned these weights to nodes uniformly at random.\nTherefore, our investigation conveys what trends to expect with fixed, heterogeneous node weights that are assigned to nodes without regard for network structure. \nHowever, our model provides a flexible framework to study the effects of node weights when they are correlated to network structure. \nFor example, one can investigate assigning weights to nodes based on measures of centrality (such as degree).\n{For a given set of node weights,} larger-weight nodes have larger probabilities of being selected for interaction; their position in a network likely affects the dynamics of BCMs and other models of opinion dynamics.\nOne can also investigate the effects of homophily in the assignment of node weights. \nFor example, in social-media platforms, very active accounts may engage with each other more frequently by sharing or commenting on one anothers' posts. \nWe can incorporate such features into our BCM by incorporating a positive node-weight assortativity (such that large-weight nodes have an increased likelihood of being adjacent to each other).\n\n\nIn line with the standard DW model, we assign the initial opinions uniformly at random in our BCM. However, in a real social network with community structure, this choice may not be realistic. \nOne can investigate a social network with communities with different mean opinion values and investigate the effect of placing large-weight nodes in different communities. 
For example, how does placing all large-weight nodes in the same community affect opinion dynamics and steady-state opinion-cluster profiles?\nHow does the presence of a small community of ``outspoken'' (i.e., large-weight) nodes influence the final opinions of nodes in other communities in a network?\nWill the small community quickly engender an echo chamber \\cite{flaxman2016}, will it pull other nodes into its final opinion cluster, or will something else occur? \n\n\n\n\\subsection{Relating Node Weights to Node Opinions} \\label{sec:discussion-opinion} \n\nIn the present paper, we explored fixed node weights that are independent of node opinions. One can readily adapt our BCM to incorporate time-dependent node weights, such as ones that depend on node opinions.\nOne can allow the probability of selecting a node for interaction to depend on how extreme its opinion is \\cite{alizadeh2015} or on the similarity of its opinion to that of another node \\cite{sirbu2019}).\n\n\nS\\^{i}rbu et al.~\\cite{sirbu2019} studied a modified DW model with heterogeneous node-selection probabilities that model algorithmic bias on social media.\nIn their model, one first selects an agent uniformly at random. One then calculates the magnitude of the opinion difference between that agent and each of its neighbors and then selects a neighbor with a probability that is proportional to this difference. \nIn the context of our BCM, one can represent their mechanism using time-dependent node weights.\nTo do this, at each time $t$, one first assigns the same constant weight to all nodes when selecting a first node $i$.\nWhen selecting a second node $j$ to interact with $i$, one then assign weights to neighbors of $i$ that are a function of the opinion difference $|x_i(t) - x_j(t)|$.\nWe assign a weight of $0$ to nodes that are not adjacent to $i$.\nThe simulations by S\\^{i}rbu et al. on complete graphs suggest that greater algorithmic bias results in longer convergence times and more opinion clusters \\cite{sirbu2019}.\nVery recently, Pansanella et al.~\\cite{pansanella2022} observed similar trends in a study of the algorithmic-bias model of S\\^{i}rbu et al. using various random-graph models.\n\n\nWe also observed similar trends of longer convergence times and more opinion clusters (and opinion fragmentation) than the baseline DW model in our simulations of our BCM with heterogeneous node-selection probabilities.\nOur results show that it is important to consider the baseline effect of assigning node weights uniformly at random in the study of BCMs with heterogenous node-selection probabilities before attributing trends such as longer convergence times and more opinion fragmentation to specific mechanisms such as algorithmic bias.\n\n\n\\subsection{Edge-Based Heterogeneous Activities} \\label{sec:discussion-edges} \n\nIn the standard DW model on a network, at each time, one selects an edge of a network uniformly at random and the two agents that are attached to the edge interact with each other \\cite{weisbuch2001}.\nMost past work on the DW model and its generalizations has focused on this edge-based selection mechanism \\cite{noorazar2020}.\nIn our BCM, to incorporate node weights (e.g., to encode heterogeneous sociabilities or activity levels of individuals), we instead used a node-based selection mechanism.\nFor voter models of opinion dynamics, it is known that the choice between edge-based and node-based agent selection can substantially affect a model's dynamics \\cite{kureh2020}. 
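To make the distinction between the two selection mechanisms concrete, the following hedged sketch contrasts edge-based selection (as in the standard DW model) with node-based selection that uses node weights, together with a DW-type compromise step. The rule for choosing the second agent in the node-based variant (a neighbor chosen with probability proportional to its weight) is one natural choice and is included only for illustration; it is not meant as a restatement of our exact model definition.

\\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def pick_pair_edge_based(edges):
    """Standard DW selection: choose an edge uniformly at random."""
    return edges[rng.integers(len(edges))]

def pick_pair_node_based(G, weights):
    """Choose the first agent with probability proportional to its node
    weight, then choose one of its neighbors (here, also proportionally to
    weight; this pairing rule is an illustrative assumption). Assumes that
    the chosen node has at least one neighbor."""
    nodes = list(G.nodes)
    w = np.array([weights[i] for i in nodes], dtype=float)
    i = nodes[rng.choice(len(nodes), p=w / w.sum())]
    nbrs = list(G.neighbors(i))
    w_nbrs = np.array([weights[j] for j in nbrs], dtype=float)
    j = nbrs[rng.choice(len(nbrs), p=w_nbrs / w_nbrs.sum())]
    return i, j

def dw_update(opinions, i, j, c, m):
    """One DW-type compromise step with confidence radius c and
    compromise parameter m."""
    if abs(opinions[i] - opinions[j]) < c:
        xi, xj = opinions[i], opinions[j]
        opinions[i] = xi + m * (xj - xi)
        opinions[j] = xj + m * (xi - xj)
    return opinions
\\end{verbatim}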
\nWe are not aware of a comparison of edge-based and node-based agent selection in asynchronous BCMs (and, in particular, in DW models), and it seems interesting to investigate this issue.\n\n\nWe developed our BCM to incorporate node weights that encode \nheterogeneous activity levels of individuals.\nOne can also examine heterogeneous dyadic-activity levels to account for the fact that individuals do not interact with each of their social contacts with the same probability.\nTo encode such heterogeneity, one can construct a variant of our BCM that incorporates edge weights.\nAt each time step, one can select a pair of agents to interact with a probability that is proportional to weight of the edge between them. \nAs we discuss in Sec.~\\ref{sec:discussion-weights}, it is more common in network science to study edge weights than node weights. \nWe have not yet examined edge-based heterogeneous activity levels in a BCM, and we expect that it will be interesting to investigate them.\n\n\n\n\\subsection{Importance of Node Weights} \\label{sec:discussion-weights} \n\n\nThe key novelty of our BCM is our incorporation of node weights into opinion dynamics. \nNode weights have been used in activity-driven models of temporal networks \\cite{perra2012}. Activity-driven frameworks has been used to model what agent interactions are allowed in models of opinion dynamics \\cite{li2017,zhang2018}.\nIn our BCM, the node weights determine the probabilities of selecting agents for interaction in a time-independent network.\nAlizadeh and Cioffi-Revilla \\cite{alizadeh2015}, S\\^{i}rbu et al. \\cite{sirbu2019}, and Pansanella et al. \\cite{pansanella2022} each examined specific scenarios of heterogeneous node-selection probabilities in DW models. \nOur node-weighted BCM provides a general framework to study node weights in an asynchronous BCM. One can use our model to study node weights that are assigned uniformly at random to nodes and fixed (i.e., as we investigated in this paper), assigned according to some other probability distribution and fixed, or assigned in a time-dependent way (e.g., as we discussed in Sec.~\\ref{sec:discussion-opinion}).\n\n\nNode weights have been studied far less than edge weights in network science, and even the term ``weighted network'' usually refers specifically to edge-weighted networks by default. \nFor example, it is very common to study centralities in edge-weighted networks \\cite{opsahl2010}, but studies of centralities in node-weighted networks (e.g., see Refs.~\\cite{heitzig2012, singh2020}) are much less common.\nHeitzig et al. 
\\cite{heitzig2012} developed generalizations of common network statistics that use node weights that represent the ``sizes'' of nodes in a network.\nThey used their framework to study brain networks with node weights that encode the areas of regions of interest, international trade networks with node weights that encode the gross domestic products (GDPs) of countries, and climate networks with node weights that encode areas in a regular grid on the Earth's surface.\nSingh et al.~\\cite{singh2020} developed centrality measures that incorporate both edge weights and node weights and used them to study service-coverage problems and the spread of contagions.\nThese studies demonstrate the usefulness of node weights for incorporating salient information in network analysis in a variety of applications.\n\n\nIn our BCM, we are interested in determining which nodes in a network are (in some sense) more influential than others and thereby exert larger effects on a steady-state opinion-cluster profile. \nRecently, Brooks and Porter \\cite{brooks2020} quantified the influence of media nodes in their BCM by examining how their ideologies influence other nodes in a network.\nAn interesting area of future work is to develop ways to quantify the influence of specific nodes in models of opinion dynamics with node weights.\nFor example, can one determine which weighted nodes to seed with extreme opinions to try and best spread such opinions? Are there nodes that make it particularly easy for communities to reach consensus and remain\nconnected in the steady-state effective-receptivity network $G_{\\mathrm{eff}}(T)$?\nOne can adapt the node weights in our BCM to capture a variety of sociological scenarios in which nodes have heterogeneous activity levels and interaction frequencies.\nMore generally, our model illustrates the importance of incorporating node weights into network analysis, and we encourage researchers to spend more time studying the effects of node weights on network structure and dynamics. \n\n\n\n\n\n\\begin{acknowledgements}\n\nWe thank Andrea Bertozzi, Deanna Needell, Jacob Foster, Jerry Luo, and the participants of UCLA's Networks Journal Club for helpful comments and discussions. We acknowledge financial support from the National Science Foundation (grant number 1922952) through the Algorithms for Threat Detection (ATD) program. GJL was also supported by NSF grant number 1829071.\n\n\\end{acknowledgements}\n\n\n\n\n\\providecommand{\\noopsort}[1]{}\\providecommand{\\singleletter}[1]{#1}%\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGroups and clusters of galaxies represent important ingredients in the\nUniverse for many purposes, for example, to test the large-scale structure or\nthe underlying cosmological model. The cluster catalogues by Abell\n(\\cite{abell}) and Abell et al.\\ (\\cite{aco}) were constructed by visual\ninspection of Palomar plates. The catalogues of the new generation of galaxy\ngroups were the Las Campanas catalogue of groups by Tucker et al.\n(\\cite{Tucker00}), the catalogues based on SDSS (Sloan Digital Sky Survey) data\nreleases (EDR, DR1, DR2, DR3, DR4, DR5) and the 2dFGRS (2 degree Field Galaxy\nRedshift Survey) data releases (100K, final, Colless et al.\n\\cite{col01}, \\cite{col03}). 
This inspired numerous research teams to investigate \nmore refined cluster finding algorithms and to compile catalogues of galaxy\nsystems (de Propris et al.\\ \\cite{dep02a}, Merchan \\& Zandivarez \\cite{mer02}, \n\\cite{mer05}, Bahcall et al.\\ \\cite{bac03}, Lee et al.\\ \\cite{lee04}, Eke et al.\n\\cite{eke04}, Yang et al.\\ \\cite{yang05}, Einasto et al.\\ \\cite{ein05}, Goto\n\\cite{goto02}, Weinmann et al.\\ \\cite{wein06}, Tago et al.\\ \\cite{tago06}, Berlind\net al.\\ \\cite{ber06}). \n\nIn our previous paper Tago et al.\\ (\\cite{tago06}, hereafter Paper~1) we have extracted 2dFGRS \ngroups, and we have\ngiven an extensive review of papers dedicated to group search methods and to\npublished group catalogues. In this introduction we present a short review of\nstudies of galaxy groups. \n\nIn recent years a number of new group finding algorithms and modified well\nknown methods have been applied (Goto et al.\\ \\cite{goto02}, Kim et al.\n\\cite{kim02}, Bahcall et al.\\ \\cite{bac03}, review by Nichol \\cite{nic04}, \nKoester et al.\\ \\cite{koe07}). However, the friends-of-friends method (FoF, \nsometimes called percolation method) remains the most frequently applied for\nredshift surveys. \n\n\n\\begin{table*}\n \\caption[]{The SDSS DR5 Main samples used, and the FoF parameters\n for the group catalogue (DR4 is for comparison but not studied)}\n \\label{Tab1}\n \\begin{tabular}{cccccccccc}\n \\hline\\hline\n \\noalign{\\smallskip}\n Sample & $RA, \\lambda$ & $DEC, \\eta$ & $N_{gal}$ &\n $N_{groups}$&$N_{single}$&$\\Delta V_0$ & $\\Delta R_0$ &\n $ z_* $ & $ a $ \\\\\n & deg & deg & & & & km\/s & Mpc\/h & & \\\\\n \\noalign{\\smallskip}\n \\hline\n 1 & 2 & 3 & 4 & 5 &\n 6 & 7 & 8 & 9 & 10 \\\\\n \\hline\n \\noalign{\\smallskip}\n\nSDSS DR4 E & 120... 255 & -1... 16 & 116471 & 16244 & 65016 &\n250 & 0.25 & 0.138 & 1.46 \\\\\n\nSDSS DR4 N & -63... +63 & 6... 39 & 197481 & 25987 & 115488 &\n250 & 0.25 & 0.138 & 1.46 \\\\\n\n\\\\\nSDSS DR5 E & 120... 255 & -1... 16 & 129985 & 17143 & 75788 &\n250 & 0.25 & 0.055 & 0.83 \\\\\n\nSDSS DR5 N & -63... +63 & 6... 39 & 257078 & 33219 & 152234 &\n250 & 0.25 & 0.055 & 0.83 \\\\\n\n\n \\noalign{\\smallskip}\n \\hline\n \\end{tabular}\\\\\n\n\\small\\rm\\noindent Columns:\n\\begin{itemize}\n\\item[1:] the subsample of the SDSS redshift catalogue used, \n\\item[2:] right ascension limits for the equatorial (E) sample, $\\lambda$\n coordinate limits for the northern (N) sample (degrees), \n\\item[3:] declination limits for the E sample, $\\eta$ coordinate limits for\n the N sample (degrees), \n\\item[4:] number of galaxies in a subsample, \n\\item[5:] number of groups in a subsample, \n\\item[6:] number of single galaxies, \n\\item[7:] the FoF linking length in radial velocity, for $z=0$, \n\\item[8:] the FoF linking length in projected distance in the sky\n , for $z=0$, \n\\item[9:] the characteristic scaling distance for the linking length\n , see Eq.~\\ref{lz}, Sec.~5, \n\\item[10:] the scaling amplitude for the linking length, see\n Eq.~\\ref{lz}, Sec.~5. \n\\end{itemize}\n \\end{table*}\n\nRecently several authors have compiled group catalogues using the 2dF\nGalaxy Redshift Survey. One of the largest sample of groups has been compiled\nby Eke et al.\\ (\\cite{eke04}), who compared the real group samples with\nsamples found for simulated 2dF redshift survey galaxies. 
\nYang et al.\\ (\\cite{yang05}) applied more strict\ncriteria in group selection, and as a result have obtained a 2dF group\ncatalogue that contains mainly compact groups and a larger fraction of\nsingle galaxies. In Paper~1 we applied criteria yielding groups\nof galaxies with statistical properties between these two catalogues. \n\nUsing earlier releases of the SDSS Lee et al.\\ (\\cite{lee04}, EDR), \nMerchan and Zandivarez (\\cite{mer05}, DR3), Goto (\\cite{goto05}, DR2), Weinmann\net al.\\ (\\cite{wein06}, DR2, see for details Yang et al.\\ \\cite{yang05}), \nZandivarez et al.\\ (\\cite{zan06}, DR4), Berlind et al.\\ (\\cite{ber06}, DR3) have\nobtained catalogues of groups (and clusters) of galaxies with rather different\nproperties. In the present paper we have applied a FoF group search method\nfor the recent public release (DR5) of the SDSS. All these group\ncatalogues are constructed on the basis of spectroscopic data of galaxy\ncatalogues using certain selection criteria. The most important data and\nproperties for these catalogues (if available) are presented in\nTable~\\ref{Tab2}. \n\nApart from the other authors Berlind et al.\\ (\\cite{ber06}) have used\nvolume-limited samples of the SDSS. This yielded one of the most detailed\nsearch method and reliable group catalogue(s). Recently Paz et al.\\ \n(\\cite{paz06}) studied shapes and masses of the 2dFGRS groups (2PIGG), Sloan\nSurvey Data Release 3 groups and numerical simulations, and founda strong\ndependence on richness. \n\nPapers dedicated to group and cluster search show a wide range of both sample\nselection as well as cluster search methods and parameters. The choice of\nthese parameters depends on the goals of the group catalogues obtained. In\nPaper~1 we drew a conclusion that in previous group catalogues the\nluminosity\/density relation in groups have not been applied. In this paper we\napply this property of the observed groups to create a group catalogue for an\nextended sample of the SDSS DR5. \n\nSelection effects in data are important factors in choosing galaxy selection\nmethods and understanding group properties. In the present paper we\ninvestigate various selection effects in SDSS (described in details in\nPaper~1) which influence compilation of group catalogues. We applied for the\nSDSS DR5 (the last published data release) the well-known friends-of-friends\n(FoF) algorithm. Considering earlier experiences we selected a series of\nprocedures discussed below. \n\n\nThe data used are described in Section 2. Sect. 3 discusses the\ngroup-finding algorithm. Selection effects, which influence the\nchoice of parameters for the FoF procedure are discussed in Sect. 4. \nTo select an appropriate cluster-finding algorithm we analyse in Sect. 5\nhow the properties of groups change, if they are observed at various\ndistances. Section 6 describes the final procedure\nused to select the groups, and the group catalogue. We also estimate\nluminosities of groups; this is described in Section 7. \nIn the last Section we compare our groups with groups found by\nother investigators, and present our conclusions. \nAs in Paper~1 we use for simplicity the term ``group'' for all objects in our\ncatalogue including also rich clusters of galaxies. \n\n\n\\section{The Data}\n\nIn this paper we have used the data release 5 (DR5) of the SDSS\n(Adelman-McCarthy et al. \\cite{ade07}; see also \\cite{ade06},DR4) \nthat contains overall 674749 galaxies\nwith observed spectra. 
The spectroscopic survey is complete from \n${\\rm r} =14.5$ up to ${\\rm r} =17.77$ magnitude.\n\nWe have restricted our study with the main galaxy sample obtained from the\nSDSS Data Archive Server (DAS) which reduced our sample down to 488725\ngalaxies. In present status the survey consists of two main contiguous areas\n(northern and equatorial, hereafter N and E samples, respectively), and 3\nnarrow stripes in the southern sky and a short stripe at high declination. We\nhave excluded smaller areas from our group search. For the two areas the\ncoordinate ranges are given in Table~\\ref{Tab1}.\n\nWe put a lower redshift limit $z=0.009$ to our sample with the aim to exclude\ngalaxies of the Local Supercluster. As the SDSS sample becomes very diluted\nat large distances, we restrict our sample by a upper redshift limit $z=0.2$.\nLater we see that for our purposes this SDSS main sample is more or less\nhomogeneous up to $z=0.12$.\n \nWe have found duplicate galaxies due to repeated spectroscopy for a number of\ngalaxies in the DAS Main galaxy sample. We have excluded from our sample those\n duplicate entries which have spectra of lower accuracy. There were two\ntypes of duplicate galaxies. In one case duplicates had exactly identical ID\nnumbers, coordinates and magnitudes; they were simple to find out and to exclude. \nAnother kind of duplicates had slightly different values of coordinates and\nmagnitudes. This kind of duplicates cannot be seen in the sky distribution of\ngalaxies but were discovered as an enhanced number density of galaxy pairs\nafter the FoF procedure. The majority of the second kind of duplicates have\nbeen found at the common boundary of the data releases DR1 and DR2 (at DEC\n$-1.25$ and $+1.25$). We have excluded them as duplicate galaxies due to features\nseen in Figure~\\ref{fig:duplicate} and Figure~\\ref{fig:rvirduplicate}. In\ntotal we have excluded from both samples 6439 identical galaxies and 1480\ngalaxies with slightly different data. \n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.5\\textwidth}{!}{\\includegraphics*{duplicate.eps}}\n\\caption{Duplicate galaxies in the sample E appearing as an increased density\nof groups at the boundaries of the data releases 1 and 2. \n}\n\\label{fig:duplicate}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.5\\textwidth}{!}{\\includegraphics*{rvirduplicate.eps}}\n\\caption{Duplicate galaxies in the sample E appearing as a separated\nmode (due to false pairs at very low value of virial radius) in the \nvirial radius - distance relation of groups. \n}\n\\label{fig:rvirduplicate}\n\\end{figure}\n\nThe total number of galaxies has reduced to 129985 galaxies in the equatorial\nsample and to 257078 galaxies in the northern sample. Resulting data on the\nsamples are presented in Table~\\ref{Tab1}. In the present paper we have studied\nonly the SDSS DR5 release. The redshifts were corrected for the motion\nrelative to the CMB. 
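The co-moving distances used below for all linear dimensions can be computed by a short numerical integration (the adopted cosmological parameters are stated immediately below), as in the following hedged sketch; standard cosmology libraries give equivalent results.

\\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_LIGHT = 299792.458          # speed of light in km/s
OMEGA_M, OMEGA_L = 0.3, 0.7   # flat LCDM
H0 = 100.0                    # km/s/Mpc, so distances come out in h^-1 Mpc

def comoving_distance(z):
    """Line-of-sight co-moving distance (in h^-1 Mpc) for a flat LCDM model."""
    integrand = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1.0 + zp)**3 + OMEGA_L)
    integral, _ = quad(integrand, 0.0, z)
    return (C_LIGHT / H0) * integral

# Example: the redshift limits of our sample
for z in (0.009, 0.12, 0.2):
    print(z, round(comoving_distance(z), 1))
\\end{verbatim}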
For linear dimensions we use co-moving distances (see, e.g., Mart{\\`\\i}nez \\& Saar \\cite{mar03}), computed with the standard cosmological parameters: the Hubble parameter $H_0=100 h$, the matter density $\\Omega_m = 0.3$, and the dark energy density $\\Omega_{\\Lambda} = 0.7$.\n\n\n\n\\section{Friends-of-friends algorithm}\n\nOne of the most conventional methods to search for groups of galaxies is cluster analysis, which was introduced in cosmology by Turner and Gott (\\cite{tg76}) and successfully nicknamed the \"friends-of-friends\" algorithm by Press and Davis (\\cite{pd82}). This algorithm, along with the percolation method, came into world-wide use after suggestions by Zeldovich et al.\\ (\\cite{zes82}) and by Huchra \\& Geller (\\cite{hg82}). In Paper 1 we have explained the FoF method and the role of the linking length (or neighbourhood radius) in detail. In short, galaxies are attributed to systems using the FoF algorithm with a certain linking length.\n\nOur experience and analysis show that the choice of the FoF parameters depends on the goals of the authors. For example, Weinmann et al.\\ \\cite{wein06} searched for compact groups in a SDSS DR2 sample. They applied strict criteria in the FoF method and obtained, as one of the results, a lower fraction of galaxies in groups.\n\nBerlind et al.\\ (\\cite{ber06}) applied the FoF method to volume-limited samples of the SDSS (see Table~\\ref{Tab2}). Their goal was to measure the group multiplicity function and to constrain dark halos. The applied uniform group selection has reduced the incompleteness of the sample, but it also led to a lower number density of galaxies and of groups.\n\nIn this paper our goal is to obtain DR5 groups for a further determination of the luminosity density field and to derive properties of the network of the galaxy distribution. Groups are mostly density enhancements within filaments, and rich clusters are high-density peaks of the galaxy distribution in superclusters (Einasto et al. \\cite{einm03c}, \\cite{einm03d}, \\cite{ein07a}, \\cite{ein07b}). Hence, our goal is to find as many groups as possible to track all of the supercluster network. We realize that the differences in the purposes of the different papers give rise to a fairly wide range of group properties. \n\nAlternative methods, such as a virialisation condition or a fixed density contrast, do not work universally for all density ranges of the galaxy distribution. However, a similar problem arises in the case of the FoF method. As shown by Einasto et al. (\\cite{e84}), it is not easy to find a suitable linking length even for a volume-limited sample of galaxies. The same conclusion has recently been reached by Berlind et al.\\ (\\cite{ber06}), based on a much larger sample and a more detailed analysis. The problem arises due to the variable mean density of galaxies in different regions of space. Additional difficulties arise in the case of flux-limited samples of galaxies, for which the linking length depends also on the distance from the observer. In the original analysis by Huchra \\& Geller the linking length was chosen as $l\\sim f^{-1\/3}$, where $f$ is the selection function of galaxies. This scaling corresponds to the hypothesis that with increasing distance the galaxy field, and the groups, are randomly diluted. A recent summary of various methods to find clusters in galaxy samples is given by Eke et al.
(\\cite{eke04}).\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{dist_vs_lumgr.eps}}\n\\caption{The total estimated luminosities for groups \n as a function of distance from the observer. \n}\n\\label{fig:2}\n\\end{figure}\n\nThere exists a close correlation between luminosities of galaxies in groups\nand their positions within groups: bright galaxies are concentrated close to\nthe center, and companions lie in the outskirts (for an early analysis of this\nrelationship see Einasto et al. \\cite{eskc74}, for a recent discussion see\nPaper~1). In Paper~1 we have found that while constructing group catalogues\nin the 2dFGRS a slightly growing linking length with distance has to be used.\n\nA similar problem arises in the SDSS. As selection effects were\nanalyzed in detail in Paper~1, then we shall discuss only shortly the selection\neffects in the SDSS survey. We perform tests to find an optimal set of\nparameters for the FoF method in this study.\n\n\n\\section{Selection effects}\n\n\\subsection{Selection effects in group catalogues}\n\nMain selection effects in group catalogues are caused by the fixed interval of\napparent magnitudes in galaxy surveys (see for details in Paper~1). This\neffect is shown for SDSS DR5 groups in Fig.~\\ref{fig:2}.\n\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{spadr5b.eps}}\n\\caption{The number density of the SDSS DR5 MAIN E and N samples of \ngroups in log scale as a function of distance from the observer . \n}\n\\label{fig:3}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{dr562emult.eps}}\n\\caption{The multiplicity of groups of the sample E \n as a function of distance from the observer. \n}\n\\label{fig:Nrich}\n\\end{figure}\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{dr563nmult.eps}}\\\\\n\\caption{The multiplicity of groups of the sample N\n as a function of distance from the observer. \n}\n\\label{fig:NrichN}\n\\end{figure}\n\nThe main consequence of this selection effect is the inhomogeneous spatial\ndistribution of groups: the decrease of the volume density of groups with\nincreasing distance. The mean volume density of groups as a function of\ndistance is plotted in Fig.~\\ref{fig:3}, separately for the northern and the\nequatorial area.\n\nA consequence of this effect is richness (multiplicity) of groups as a\nfunction of redshift. In Figs.~\\ref{fig:Nrich} and \\ref{fig:NrichN} we show the\nmultiplicity of groups (the number of member galaxies) as a function of\ndistance from the observer for the E and N samples, respectively. We see that\nrich groups are seen only up to a distance of about 300~$h^{-1}$\\thinspace Mpc, thereafter the\nmean multiplicity decreases considerably with distance. This selection effect\nmust be accounted for in the multiplicity analysis. \n\n\\subsection{Selection effects in group sizes}\n\nSizes of groups depend directly on the choice of the linking length, or more\ngenerally on its scaling law. Strong selection effects can be observed\nhere, also. As an example, the median sizes of the distant 2PIGG groups (Eke\net al.\\ \\cite{eke04}) are 7 times larger than those for the nearby groups.\n\nUsually the ratio of radial and transversal linking lengths $ \\Delta V_0 \/\n\\Delta R_0$ is a constant in the FoF process of search of groups. 
As noted by Einasto et al.\\ (\\cite{e84}) and Berlind et al.\\ (\\cite{ber06}), it is impossible to fulfill all requirements with any combination of these linking lengths. We try to find the ratio $ \\Delta V_0 \/ \\Delta R_0$ that best reproduces the size ratio of observed groups, as determined by other studies. \nFigure~\\ref{fig:VRmeanratio} demonstrates how the mean group size ratio depends on the initial linking length (LL) for three different $\\Delta V \/ \\Delta R $ ratios: 6, 10, and 12. If we accept from other considerations the initial $\\Delta R_0 = 0.25$ $h^{-1}$\\thinspace Mpc, then we find the best ratio $ \\Delta V_0 \/ \\Delta R_0$ to be 10 (at $\\Delta R_0=0.25$ the curve for the ratio 10 is the closest to the same value of the mean size ratio). \n\nOn the other hand, if we accept the size ratio 10 (for example, from a detailed study of cluster shapes in redshift space), we conclude that the best $\\Delta R_0$ is 0.25~$h^{-1}$\\thinspace Mpc, where the corresponding curve reaches the size ratio $\\Delta V \/ \\Delta R= 10$ in Figure~\\ref{fig:VRmeanratio}. \n\nIt is difficult to reliably model the galaxy populations in DM-haloes. Here we summarize in short our solution of this problem.\n\nAt large distances from the observer, only the brightest cluster members are visible, and these brightest members form compact cores of clusters, with sizes much smaller than the true sizes of the clusters. This effect works in the opposite direction to the increase of the linking length, and the two effects might cancel each other out. Next we describe the empirical scaling of the linking length, obtained by shifting the observed groups to growing distances.\n\n\\section{Scaling of linking length}\n\nIn the majority of papers dedicated to group search, the group finders are tuned using mock $N$-body catalogues (e.g. Eke et al.\\ \\cite{eke04}; Yang et al.\\ \\cite{yang05}). The mock group catalogues are homogeneous, and all parameters of the mock groups can be easily found and applied for the search of real groups. Still, mock groups are only an approximation of real groups, since they use model galaxies in dark matter haloes. As we have noted, it is difficult to properly model the luminosity-density correlation found in real groups.\n\nStarting from these considerations, we have used observed groups to study the scaling of group properties with distance. The group shifting procedure is described in detail in Paper~1. As this is an important part of our search method, we present here the method in short and give the results for the SDSS DR5 groups.\n \nWe created test group catalogues for the sample SDSS DR5 E with constant and variable linking lengths, and selected in the nearby volume $d < 100$~$h^{-1}$\\thinspace Mpc all rich groups (with multiplicity $N_{gal}\\ge 20$; in total 222 groups). Assuming that the group members are all at the mean distance of the group, we determined their absolute magnitudes and peculiar radial velocities. Then we shifted the groups step by step to larger distances (using a $z=0.001$ step in redshift), and calculated new $k$-corrections and apparent magnitudes for the group members. With increasing distance, more and more of the fainter members of groups fall outside the observational window of apparent magnitudes, and the group membership changes. \nWe found new properties of the groups -- their multiplicities, characteristic sizes, velocity dispersions and densities.
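The membership change during this shifting procedure can be illustrated with the following minimal sketch. It assumes the simple relation $m = M + 5\\log_{10}(d_L/10\\,\\mathrm{pc}) + K$ between absolute and apparent magnitudes, ignores the $k$-corrections (set to zero in the example) and the details of the magnitude system, and uses the apparent-magnitude window of the survey ($14.5 \\leq r \\leq 17.77$); it is meant only to show why groups lose members as they are shifted outward.

\\begin{verbatim}
import numpy as np

M_BRIGHT, M_FAINT = 14.5, 17.77   # apparent r-magnitude window of the sample

def apparent_magnitude(M_abs, d_lum_mpc, kcorr=0.0):
    """m = M + 5 log10(d_L / 10 pc) + K, with d_L in Mpc (10 pc = 1e-5 Mpc)."""
    return M_abs + 5.0 * np.log10(d_lum_mpc * 1.0e5) + kcorr

def members_still_visible(M_abs_members, d_lum_mpc):
    """Absolute magnitudes of the group members that remain inside the
    apparent-magnitude window when the group is placed at luminosity
    distance d_lum_mpc (k-corrections ignored in this sketch)."""
    M_abs_members = np.asarray(M_abs_members, dtype=float)
    m_app = apparent_magnitude(M_abs_members, d_lum_mpc)
    keep = (m_app >= M_BRIGHT) & (m_app <= M_FAINT)
    return M_abs_members[keep]

# Toy example: a group shifted from 100 Mpc/h to 400 Mpc/h changes its membership.
M_toy = [-22.0, -21.0, -20.0, -19.0, -18.0]
print(len(members_still_visible(M_toy, 100.0)),
      len(members_still_visible(M_toy, 400.0)))
\\end{verbatim}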
\nWe also calculated the minimum FoF linking length necessary to keep the group together at this distance. \n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{VRmeanratio.eps}}\n\\caption{Mean ratio of radial and perpendicular sizes of groups in the sample E as a function of the starting value of the linking length, for three values of the linking-length ratio. \n}\n\\label{fig:VRmeanratio}\n\\end{figure}\n\nTo determine this minimum length, we built the minimal spanning tree (MST) for the group (see, e.g., Martinez and Saar \\cite{mar03}) and found the maximum length of the MST links.\n\nAs the original groups had different sizes and initial redshifts, we found the relative changes of their properties with respect to the redshift change. The individual linking length scaling paths have a large scatter. Therefore, we found the average scaling path from the individual paths. In Figure~\\ref{fig:LLLawdr5} we present the main result of the group shifting for our determination of the linking length scaling law.\n\n\\begin{figure*}[ht]\n\\centering\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics*{LLLawdr5e.eps}}\n\\hspace*{2mm}\n\\resizebox{0.48\\textwidth}{!}{\\includegraphics*{LLLawdr5n.eps}}\n\\caption{The scaling of the group FoF linking length with redshift\n for the samples DR5 E (left panel) and DR5 N (right panel). The ordinate is\n the ratio of the minimal linking length $LL$ at a redshift $z$, necessary to\n keep the group together, to the original linking length $LL_0$ that defined\n the group at its initial redshift $z_0$; the abscissa is the redshift\n difference $\\Delta z=z-z_0$. \n}\n\\label{fig:LLLawdr5}\n\\end{figure*}\n\n\nWe fit the mean values of the linking lengths in $\\Delta z=0.001$ redshift bins (the step that we used for shifting the groups). We find our scaling law for the case $n\\ge 20$. The fitted law is not sensitive to the richness of the groups involved in the determination of the LL scaling law. The scaling law is moderately different from the scaling law found for the 2dFGRS groups in Paper~1, but it can still be approximated by a slowly increasing arctan law. Due to the narrow magnitude window of the SDSS, at higher values of $z$ the FoF method finds only the compact cores of groups or binary galaxies. The deviation from the scaling law corresponds to the redshift limit above which most of the groups found are only the compact cores of nearby groups. Therefore, the determination of the scaling law is a test for the redshift limit of homogeneity of the group catalogue. A good parametrization of the scaling law is\n\\begin{equation}\n\\label{lz}\nLL\/LL_0=1+a\\, \\mbox{arctan}(z\/z_{\\star}),\n\\end{equation}\nwhere $a=0.83$ and $z_{\\star}=0.055$. \n\nThe main difference between the scaling laws of the DR5 and 2dF groups is in their validity range. This is due to the different magnitude limits of these flux-limited samples. We consider this difference in more detail below. \nThe selection of the initial groups should not influence much the scaling of their properties with distance. We tested the group search with three different initial scaling laws for group selection: two with constant linking lengths and one with a linking length that varied with distance. The final scaling relation practically does not depend on the initial group selection (i.e., on the initial scaling law). \n \n\n\\section{Group catalogue}\n\n\n\\subsection{The group finder}\n\nWe adopt the scaling of the linking length found above, but we still have to select the initial values of the linking length.
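A minimal sketch of a group finder with the adopted linking-length scaling (Eq.~\\ref{lz}) is given below, using the initial linking lengths that we adopt in Table~\\ref{Tab1}. Two galaxies are `friends' if their plane-of-sky separation and their radial velocity difference are both smaller than the redshift-scaled linking lengths; groups are the resulting friends-of-friends components. The quadratic pair loop, the small-angle plane-of-sky separation, and the simple velocity difference $c\\,|z_i - z_j|$ are simplifications for illustration and differ from the implementation that we actually use.

\\begin{verbatim}
import numpy as np

C_LIGHT = 299792.458             # km/s
DV0, DR0 = 250.0, 0.25           # initial linking lengths (km/s, Mpc/h)
A_SCALE, Z_STAR = 0.83, 0.055    # parameters of the scaling law (Eq. lz)

def scaled_linking_lengths(z):
    """Linking lengths at redshift z: LL/LL_0 = 1 + a*arctan(z/z_*)."""
    factor = 1.0 + A_SCALE * np.arctan(z / Z_STAR)
    return DV0 * factor, DR0 * factor

def fof_groups(ra_deg, dec_deg, z, d_com):
    """Friends-of-friends grouping (union-find over all galaxy pairs)."""
    n = len(z)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    for i in range(n):
        for j in range(i + 1, n):
            dv_link, dr_link = scaled_linking_lengths(0.5 * (z[i] + z[j]))
            # small-angle separation on the sky times the mean distance
            dtheta = np.hypot((ra[i] - ra[j]) * np.cos(dec[i]), dec[i] - dec[j])
            r_perp = 0.5 * (d_com[i] + d_com[j]) * dtheta
            dv = C_LIGHT * abs(z[i] - z[j])
            if r_perp <= dr_link and dv <= dv_link:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [members for members in groups.values() if len(members) >= 2]
\\end{verbatim}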
In practice, only groups with the observed membership $N_{gal} \\geq 2$ are\nincluded in group catalogues.\n\n\\begin{figure*}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{Size_maxe.eps}}\n\\hspace*{2mm}\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{sigVe.eps}}\\\\\n\\caption{Left panel: the (maximum projected) sizes of our SDSS DR5 groups\n in the E sample as a function of distance.\n Right panel: the velocity dispersions of the groups as a function of\n distance in the sample E. The FoF parameters are given in Table~\\ref{Tab1}.\n}\n\\label{fig:10}\n\\end{figure*}\n\n\\begin{figure*}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{Size_maxn.eps}}\n\\hspace*{2mm}\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{sigVn.eps}}\\\\\n\\caption{Left panel: the (maximum projected) sizes of our SDSS DR5 groups\n in the N sample as a function of distance.\n Right panel: the velocity dispersions of the groups as a function of\n distance in the sample N.\n The FoF parameters are given in Table~\\ref{Tab1}.\n}\n\\label{fig:sizesig}\n\\end{figure*}\n\n\nIn order to find the best initial linking lengths, we tried a number of\ndifferent parameter values, $\\Delta V = 100-700$ km\/s and\n$\\Delta R = 0.16 - 0.70$ $h^{-1}$\\thinspace Mpc, and finally chose the values discussed\nabove and presented in Table~\\ref{Tab1}. Higher values of $\\Delta R$ lead to\nthe inclusion of galaxies from neighbouring groups and filaments. Lower\nvalues of $\\Delta V$ exclude the fastest members in groups of intermediate\nrichness.\n\nHowever, closer inspection shows that one group has a much larger richness\n($N=569$) than the rest. This is the well-known nearby ($d=27$ $h^{-1}$\\thinspace Mpc)\nbinary Abell cluster A2197\/2199. We consider this cluster an exception and do\nnot use lower LLs; at a slightly lower value of the LL this cluster falls\napart and becomes a cluster with usual properties.\n\nIn Fig.~\\ref{fig:10} we show the sizes of the groups of the final catalogue.\nWe define the size of a group as its maximum projected diameter, the largest\nprojected galaxy pair distance within the group. We see that the sizes of the\nlargest groups slightly increase with distance up to $d = 250$~$h^{-1}$\\thinspace Mpc and\nthereafter slowly decrease. This decrease is expected, since in more distant\ngroups only bright galaxies are seen, and they form the compact cores of the\ngroups.\nThe numbers of the groups and the FoF parameters (separately\nfor both SDSS DR5 regions) are given in Table~\\ref{Tab1}.\n\n\n\\subsection{The final catalogue}\n\nOur final catalogue (Table~\\ref{Tab1}) includes 17143 groups in the equatorial\narea and 33219 groups in the high-declination area with richness $\\geq 2$.
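\n\nFor reference, the maximum projected size used above as the group size (and\nlisted for each group in the catalogue table below) can be computed from the\nmember coordinates; one simple way, assuming the small-angle approximation at\nthe mean group distance (the member coordinates in the example are purely\nillustrative), is sketched here in Python:\n\n\\begin{verbatim}\nimport numpy as np\n\ndef max_projected_size(ra_deg, dec_deg, dist_mpc):\n    # maximum projected pair separation (same units as dist_mpc)\n    ra, dec = np.radians(ra_deg), np.radians(dec_deg)\n    x = np.cos(dec) * np.cos(ra)\n    y = np.cos(dec) * np.sin(ra)\n    z = np.sin(dec)\n    xyz = np.vstack((x, y, z)).T\n    cosang = np.clip(xyz @ xyz.T, -1.0, 1.0)\n    ang = np.arccos(cosang)            # pairwise angular separations\n    return np.mean(dist_mpc) * ang.max()\n\n# toy four-member group at about 195 h^-1 Mpc\nprint(max_projected_size(np.array([146.57, 146.60, 146.55, 146.58]),\n                         np.array([-0.83, -0.85, -0.80, -0.82]),\n                         np.array([194.9, 195.2, 195.0, 195.1])))\n\\end{verbatim}\n\n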
As an\nexample we present here the first lines of our group table (Table~\\ref{Tab3}),\nwhich include the following columns for each group:\n\n\\begin{itemize}\n \\item[1)] group identification number;\n \\item[2)] group richness (number of member galaxies);\n \\item[3)] RA (J2000.0) in degrees (mean of member galaxies);\n \\item[4)] DEC (J2000.0) in degrees (mean of member galaxies);\n \\item[5)] group distance in $h^{-1}$\\thinspace Mpc\\ (mean comoving distance for member galaxies corrected\nfor CMB);\n \\item[6)] the maximum projected size (in $h^{-1}$\\thinspace Mpc);\n \\item[7)] the rms radial velocity ($\\sigma_V$, in km\/s);\n \\item[8)] the virial radius in $h^{-1}$\\thinspace Mpc\\ (the projected harmonic mean);\n \\item[9)] the luminosity of the cluster main galaxy (in units of $10^{10} h^{-2}\n L_{\\sun}$);\n \\item[10)] the total observed luminosity of visible galaxies ($10^{10} h^{-2}\n L_{\\sun}$);\n \\item[11)] the estimated total luminosity of the group ($10^{10} h^{-2} L_{\\sun}$). \n\\end{itemize}\n\n\\begin{table*}\n \\caption[]{First rows as an example of groups in the SDSS DR5 main\n galaxy catalogue \n described in the present paper}\n \\label{Tab3}\n \\begin{tabular}{ccccccccccc}\n \\hline\\hline\n \\noalign{\\smallskip}\n $ ID_{gr}$ & $N_{g}$ & $RA$ & $DEC$ & Dist &\n $Size_{sky}$&$\\sigma_{V}$&$R_{vir} $ & $ L_{main}$ &\n $ L_{obs} $ & $L_{est}$ \\\\\n & & [deg] & [deg] & [Mpc\/h] & [Mpc\/h] & [km\/s] & [Mpc\/h] &\n [$10^{10} h^{-2} \nL_{\\sun}]$& [$10^{10} h^{-2} L_{\\sun} $] & [$ 10^{10} h^{-2} L_{\\sun} $ ] \\\\\n \\noalign{\\smallskip}\n \\hline\n 1 & 2 & 3 & 4 & 5 &\n 6 & 7 & 8 & 9 & 10 & 11 \\\\\n \\hline\n \\noalign{\\smallskip}\n\n 1 & 4 & 146.57633972 & -0.83209175 & 195.056 & 0.6823 & 53.7783 &\n 0.33341 & 0.17353E+01 & 0.40818E+01 & 0.52815E+01 \\\\\n 2 & 2 & 146.91120911 & -0.31007549 & 385.390 & 0.1291 & 25.2219 &\n 0.12908 & 0.21835E+01& 0.41985E+01 & 0.10160E+02 \\\\\n 3 & 3 & 146.88099670 & -0.49802899 & 249.334 & 0.1522 & 101.6915 &\n 0.09505 & 0.27161E+01& 0.36896E+01 & 0.53377E+01 \\\\\n 4 & 2 & 146.78494263 & 0.02115750 & 368.779 & 0.3185 & 173.4426 & \n 0.31840 & 0.37278E+01& 0.56619E+01 & 0.13310E+02 \\\\\n 5 & 4 & 146.74797058 & -0.25555125 & 383.818 & 0.3404 & 191.9961 &\n 0.15149 & 0.37084E+01& 0.99677E+01 & 0.24499E+02 \\\\\n\n \\noalign{\\smallskip}\n \\hline\n \\end{tabular}\\\\\n \\end{table*}\n\n\nThe identification number is attached to groups by the group finder in\nthe order the groups are found. The calculation of luminosities is\ndescribed in the next section. \n\nWe also give (in an electronic form) a catalogue of all individual\ngalaxies along with their group identification number and the group richness, \nordered by the group identification number, to facilitate search. The\ntables of galaxies end with a list of isolated galaxies (small\ngroups with only one bright galaxy within the observational window of\nmagnitudes); their group identification number is 0 and group richness\nis 1. All tables can be found at\n\\texttt{http:\/\/www.obs.ee\/$\\sim$erik\/index.html}. \n\n\\section{Luminosities of groups}\n\nThe limiting apparent magnitude of the complete sample of the SDSS catalog in\n${\\rm r}$ band is 17.77. The faint limit actually fluctuates from field to\nfield, but in the present context we shall ignore that; we shall take these\nfluctuations into account in our paper on the group luminosity function, based\non our 2dFGRS group catalogue (Einasto et al. 
\\cite{ets07}).\n\nWe regard every galaxy as a visible member of a group or cluster within the\nvisible range of absolute magnitudes, $M_1$ and $M_2$, corresponding to the\nobservational window of apparent magnitudes at the distance of the galaxy. To\ncalculate the total luminosities of groups, we have to find for all galaxies\nof the sample the estimated total luminosity per visible galaxy, taking into\naccount the galaxies outside the visibility window. This estimated total\nluminosity was calculated as follows (Einasto et al.\\ \\cite{e03b}):\n\\begin{equation}\nL_{tot} = L_{obs} W_L, \n\\label{eq:ldens}\n\\end{equation}\nwhere $L_{obs}=L_{\\odot }10^{0.4\\times (M_{\\odot }-M)}$ is the\nluminosity of a visible galaxy of an absolute magnitude $M$, and\n\\begin{equation}\nW_L = {\\frac{\\int_0^\\infty L \\phi\n(L)dL}{\\int_{L_1}^{L_2} L \\phi (L)dL}}\n\\label{eq:weight2}\n\\end{equation}\nis the luminous-density weight (the ratio of the expected total luminosity to\nthe expected luminosity in the visibility window). In the last equation\n$L_i=L_{\\odot} 10^{0.4\\times (M_{\\odot }-M_i)}$ are the luminosity limits of\nthe observational window, corresponding to the absolute magnitude limits of\nthe window $M_i$, and $M_{\\odot }$ is the absolute magnitude of the Sun. In\nthe calculation of the weights we assumed that galaxy luminosities are\ndistributed according to the two-power-law function used by Christensen\n(\\cite{chr75}), Kiang (\\cite{kiang76}), Abell (\\cite{abell77}) and Mottmann \\&\nAbell (\\cite{ma77}):\n\\begin{equation}\n \\phi(L)dL \\propto (L\/L^*)^\\alpha(1+(L\/L^*)^\\gamma)^{(\\delta \/ \\gamma)}d(L\/L^*) ,\n\\label{eq:twoplaw}\n\\end{equation}\nwhere $\\alpha $, $\\gamma$, $\\delta$ and $L^{*}$ are parameters. We use the\ntwo-power-law form rather than the Schechter function because it has more\nfreedom and gives a better fit to the galaxy luminosity function.\n\nWe used the two-power-law function with the parameters $\\alpha = -1.123$,\n$\\gamma= 1.062$, $\\delta = -17.37$, $L^{*} = 19.61$. We used all galaxies\n(galaxies in groups and isolated galaxies) to determine the luminosity\nfunction. A more detailed explanation of the two-power-law function and of\nhow we derived its parameters is given in our paper on the 2dFGRS luminosity\nfunction (Einasto et al. \\cite{ets07}).\n\nWe derived the $k$-corrections for the SDSS galaxies using the KCORRECT\nalgorithm (Blanton \\& Roweis \\cite{bla06}). We also adopted\n$M_{\\odot} = 4.52$ in the ${\\rm r}$ photometric system.\n\nWe calculated for each group the total observed and corrected luminosities,\nas well as the mean weight\n\\begin{equation}\nW_m = \\frac{\\sum L_{tot, i}}{\\sum L_{obs, i}}, \n\\label{eq:sum}\n\\end{equation}\nwhere the subscript $i$ denotes values for individual observed galaxies in\nthe group, and the sum includes all member galaxies of the system.\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}\n{\\includegraphics*{dist_vs_weight.eps}}\n\\caption{The mean weights of the groups of the SDSS DR5\n versus the distance from the observer.\n}\n\\label{fig:11}\n\\end{figure}\n\nThe mean weights for the groups of the SDSS DR5 are plotted as a function of\nthe distance $d$ from the observer in Fig.~\\ref{fig:11}. We see that the mean\nweight is slightly higher than unity at a distance $d\\sim 175$~$h^{-1}$\\thinspace Mpc, and\nincreases both toward smaller and larger distances.
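\n\nFor reference, the weight of Eq.~(\\ref{eq:weight2}) can be evaluated\nnumerically as in the following sketch (Python with SciPy; the window limits\nin the example and the luminosity units are arbitrary and purely\nillustrative, and the integration from exactly zero is replaced by a small\npositive lower limit):\n\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\n# quoted parameters of the two-power-law luminosity function\nALPHA, GAMMA, DELTA, LSTAR = -1.123, 1.062, -17.37, 19.61\nEPS = 1.0e-6 * LSTAR     # avoids the integrable singularity at L = 0\n\ndef phi(L):\n    # two-power-law luminosity function, up to an arbitrary normalisation\n    x = L \/ LSTAR\n    return x**ALPHA * (1.0 + x**GAMMA)**(DELTA \/ GAMMA)\n\ndef weight(L1, L2):\n    # W_L: expected total luminosity over that in the visibility window\n    num, _ = quad(lambda L: L * phi(L), EPS, np.inf)\n    den, _ = quad(lambda L: L * phi(L), L1, L2)\n    return num \/ den\n\nprint(weight(5.0, 500.0))    # illustrative window limits L1 < L2\n\\end{verbatim}\n\n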
The increase at small distances is due to the absence of very bright members\nof groups, which lie outside the observational window, and at large distances\nthe increase is caused by the absence of faint galaxies. The weights grow\nfast for very close groups and for groups farther away than about\n400~$h^{-1}$\\thinspace Mpc. At these distances the correction factors start to dominate\nand the luminosities of groups become uncertain.\n\nIn Fig.~\\ref{fig:2} we show the estimated total luminosities of groups as a\nfunction of distance. We have also produced colour figures that visualise the\nluminosities of groups; these are too detailed to be presented here and can be\nfound on our web page. These figures show that the brightest groups have\ncorrected total luminosities which are, in the mean, independent of distance.\nThis shows that our calculation of total luminosities is correct.\n\n\n{\\scriptsize\n\\begin{table*}\n \\caption[]{Data for group catalogues based on the SDSS}\n \\label{Tab2}\n\\begin{center}\n \\[\n \\begin{tabular}{llcccccccc}\n \\hline\\hline\n \\noalign{\\smallskip}\n Authors & Release, \n Sample & $N_{gal} $ &\n $N_{gr}(n \\geq 2)$ & $N_{gr}(n \\geq 4)$&$z_{lim}$ & \n $\\Delta V_0 $ & $\\Delta R_0$ & \\% ($\\geq$ 2) & \\% ($\\geq 4$)\\\\\n & & & & & & km\/s & Mpc\/h & & \\\\\n \\noalign{\\smallskip}\n \\hline\n 1 & 2 & 3 & 4 & 5 &\n 6 & 7 & 8 & 9 & 10 \\\\\n \\noalign{\\smallskip}\n \\hline\n\n\nMerchan 2005 & DR3 Main & 300000 & & 10864 & 0 - 0.3 & 200 & & & 22 \\\\\n\nGoto 2005 & DR2 SQL & 259497 & 335 & & 0.03- & 1000 & 1.5 & & 6 ($n\n\\geq$ 20) \\\\ \n\nWeinmann 2006 & DR2 Main VAGC & 184425 & 16012 & 3720 & 0.01 - 0.2 &\n0.3$^1$ & 0.05$^1$ & 30 & 15 \\\\\n\nBerlind 2006 & DR3 sam14 VAGC & 298729 & & & & & & & \\\\\n & vol.lim. Mr20 & 57332 & & 4119$^3$ & 0.015-0.1 & 0.75 &\n 0.14 & 56.3 & 37.2$^3$ \\\\\n & vol.lim. Mr19 & 37938 & & 2696$^3$ & 0.015-0.068 & 0.75 &\n 0.14 & 58.9 & 40.7$^3$ \\\\\n & vol.lim. Mr18 & 18959 & & 1362$^3$ & 0.015-0.045 & 0.75 &\n 0.14 & 60.0 & 42.2$^3$ \\\\\n\n\nTago 2007 & DR5 Main DAS & 387063 & 50362 & 9454 & 0.009 - 0.2 & 250 & 0.25\n& 41.1 & 23.4 \\\\\n \\noalign{\\smallskip}\n \\hline\n \\end{tabular}\n \\]\n\\end{center}\n\n\\small\\rm\\noindent Columns:\n\n\\begin{itemize}\n\\item[1:] authors of the group catalogue,\n\\item[2:] sample and release number, \n\\item[3:] number of galaxies, \n\\item[4:] number of groups ($n \\geq$ 2), \n\\item[5:] number of groups ($n \\geq$ 4), \n\\item[6:] redshift limits for sample galaxies, \n\\item[7:] the FoF linking length in radial velocity, for $z=0$, \n\\item[8:] the FoF linking length in projected distance in the sky, for $z=0$, \n\\item[9:] fraction of galaxies in groups ($n \\geq$ 2), \n\\item[10:] fraction of galaxies in groups ($n \\geq$ 4).
\n\\end{itemize}\n\n\\small\\rm\\noindent Notes:\n\n$^1$ for the Weinmann et al.\\ groups the linking lengths are given in units of\nthe mean galaxy separation; \n \n$^3$ for the Berlind et al.\\ groups the richness is $n \\geq 3$\n \n* for Berlind et al.\\ the apparent magnitude limit was $r \\leq 17.5$; for the\n rest it was $r \\leq 17.77$ \n\n* group-finders: \n \n Merchan: FoF + mock catalog + iterative group re-centering + Schechter LF\n for LL scaling \n\n Goto: FoF + group re-centering \n\n Weinmann: FoF + DM halo mock catalog + group re-centering \n\n Berlind: FoF + DM halo mock catalog\n\n Tago: FoF + DM halo mock + Dens\/Lum relation in groups for LL scaling\n\n \\end{table*}\n}\n\n\n\n\\section{Discussion and conclusions}\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.45\\textwidth}{!}{\\includegraphics*{NSgal.eps}}\n\\caption{The number density of galaxies in the 2dF N and S samples and in the\n SDSS DR5 E and N samples as a function of distance from the observer. The\n histograms for the 2dF are arbitrarily shifted along the ordinate axis for\n clarity. }\n\\label{fig:galden}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering \\resizebox{0.5\\textwidth}{!} {\\includegraphics*{dr-comp2.eps}}\n\\caption{The number of sample galaxies, groups and isolated galaxies\n involved in the FoF procedure versus the total number of galaxies in the\n releases of the SDSS and 2dF surveys. Note the well-defined proportional\n growth with the SDSS releases and the higher \"yield\" for the 2dF. These\n relations suggest that the FoF method has been applied homogeneously to the\n different releases. \n}\n\\label{fig:dr-comp}\n\\end{figure}\n\n\\subsection{Some issues related to the poor de-blending}\n\nVarious potential caveats related to the automatic pipeline data reduction in\nthe SDSS have been discussed and flagged in the NYU-VAGC, which is based on\nthe SDSS DR2 (Blanton et al.\\ \\cite{bla05}). Most of these issues are related\nto the poor de-blending of large and\/or LSB galaxies with complicated\nmorphology (e.g. star-forming regions, dust features etc.). At low redshifts\na number of SDSS galaxies have been found to be shredded, i.e. a nearby large\ngalaxy image is split by the target selection algorithm into several\nsub-images (e.g. Panter et al.\\ \\cite{panter07}). Therefore, the treatment of\nnearby galaxies requires special care. This potential bias is largely reduced\nin our new catalogue by setting reasonably high magnitude ($r > 14.5$) and\nredshift ($z > 0.009$) limits, which exclude most of the luminous and\/or\nnearby galaxies of the Local Supercluster. \n\n\nWe have performed eyeball quality checks of a number of groups in the new\ncatalogue using the SDSS Sky Server Visual Tools. We have inspected a) the\nmembers of the 139 nearest ($z < 0.012$) groups -- 42 groups in the equatorial\n(E) sample and 97 groups in the northern (N) sample; b) conspicuously dense\ngroups, as evident in the bottom sections of Figure~\\ref{fig:rvirduplicate}\nand of Figures~\\ref{fig:10} and \\ref{fig:sizesig}.\n The results of these checks can be summarized as follows:\n \n1) {\\it De-blending errors.} In the nearest 139 groups, with initially 525\nmember galaxies, poor de-blending has been noted for 21 (4\\%) galaxies\ndistributed in 9 (6.5\\%) groups. Poor de-blending means either that the bright\ngalaxy is represented in the DR5 spectroscopic sample by a single off-center\nsource of typically reduced brightness, or that the primary galaxy is shredded\ninto multiple (faint) \\ion{H}{ii} regions.
\n\nAs an example of poor de-blending we refer to group number 30644.\nIts luminous member NGC 3995 ($B_T = 12.7$), with knotty morphology, is\nrepresented in the DR5 by 3 entries, i.e. by 3 distinct spectra of its\n\\ion{H}{ii} knots of magnitudes $r$ = 12.6, 15.13, and 17.64, respectively.\nThree other luminous group members, NGC 3966 ($m_B$ = 13.60), NGC 3994\n($B_T$ = 13.30), and NGC 3991 ($m_B$ = 13.50), are each represented in the\nDR5 by two knots with magnitudes $r$ = 12.49, 16.88, and $r$ = 12.63, 16.60,\nand $r$ = 14.81, 17.89, respectively. After excluding the knots with\n$r < 14.5$, these intrinsically luminous galaxies are represented in our\ncatalogue by their fainter knots, and their true total magnitudes are\nunderestimated by 1.5 - 3.5 magnitudes. This appears to be one of the most\nseverely biased nearby groups.\n\n2) All the 25 very dense E groups with $R_{vir} < 1~ h^{-1}$ kpc, distributed\nin the bottom section of Figure~\\ref{fig:rvirduplicate}, result from\nduplicates. Among them there are 14 ``pairs'' (i.e. actually a single galaxy\nwith two records in the DR5 spectroscopic sample), 7 ``triplets'' and 4\n``quartets''. Among the N groups there are only two duplicates in the given\n$R_{vir}$ range. \n \n3) Considering Figures~\\ref{fig:10} and \\ref{fig:sizesig} (left panels): \\\\\n-- all 13 groups with $Size < 1 h^{-1}$ kpc are among those with\n$R_{vir} < 1~ h^{-1}$ kpc in Figure~\\ref{fig:rvirduplicate}, i.e. they are\nduplicates. \\\\\n-- The conspicuous lower boundary of the tightly populated region (which\nvaries nearly proportionally to distance) is probably determined by the fiber\ncollision distance $\\sim 55''$ of the survey. The groups distributed in the\nrange between this lower boundary and that of $Size = 10~ h^{-1}$ kpc are\nmostly real pairs, i.e. not duplicates. Pairs with $Size < 10 ~h^{-1}$ kpc\nare likely mergers, or advanced mergers (with $1 < Size < 5 h^{-1}$ kpc).\\\\\n-- The upper boundary of the tightly populated region likely results from the\nlinking-length scaling relation~(\\ref{lz}), since there is not a single pair\nabove this boundary. This means that our sample could be biased against wide\n(i.e. mostly optical) pairs. \n \n{\\it To summarize:} As a result of our cursory checks we have found relatively\nfew bad de-blends, either in the form of mismatches between spectral targets\nand optical centers, or of more severe shredding of large and\/or LSB galaxies.\nAlthough the redshifts are fine, the photometric and structural measurements\nare often erroneous in such cases. The fraction of groups checked so far is\nsmall; however, it comprises the nearest, i.e. the potentially most affected,\npart of the full sample. We estimate that the net effect of de-blending\nerrors will be minor when working with large (sub)samples of groups. \n \n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{\\includegraphics*{fig15.eps}}\n\\caption{The eight nearby ($z < 0.04$) groups ($n \\geq 2$) identified in this\n work in a relatively sparse filament. The group members are shown with\n circles, and four individual groups are encompassed by large circles.\n The field galaxies in the same redshift range are marked with small circles.\n For comparison, the members of the corresponding Merchan et al. (\\cite{mer05})\n groups ($n \\geq 4$) are marked with tilted crosses ($\\times$), and those of\n the Berlind et al. (\\cite{ber06}) groups (Mr18 sample, $n \\geq 3$) are shown\n with crosses.
Note that in Merchan et al. (\\cite{mer05}) the rich, elongated group is\n divided into two (NE and SW) subgroups, which nearly project onto each other\n along the line-of-sight. }\n\\label{fig:gr8344a}\n\\end{figure}\n\nIn Fig.~\\ref{fig:gr8344a} we give an example of how the group-finder\nalgorithm works. The comparison with the groups of Merchan et al.\n(\\cite{mer05}) and Berlind et al. (\\cite{ber06}) shows that all three slightly\ndifferent FoF algorithms identify quite similar groups. The criteria used in\nMerchan et al. (\\cite{mer05}) tend to split the groups along the line-of-sight\nand\/or to exclude the galaxies in the outskirts of groups more easily.\n\n\n\\begin{figure}[ht]\n\\centering\n\\resizebox{0.50\\textwidth}{!}{\\includegraphics*{compgr.eps}}\n\\caption{Groups of the Berlind et al. (\\cite{ber06}) Mr18 sample (crosses)\ncompared to our groups in the same redshift ($0.015 < z < 0.045$) and \nrichness ($N_{gal} \\geq 3$) range (large circles). The pairs\nof galaxies ($N_{gal} = 2$) in our catalogue are shown with small circles. \n}\n\\label{fig:Berlind_dr5} \n\\end{figure}\n\nIn Fig.~\\ref{fig:Berlind_dr5} we compare the groups in the volume-limited Mr18\nsample of Berlind et al. (\\cite{ber06}) to our groups in a similar redshift\nrange. We conclude that we detect more groups (121 of our groups versus 88\ngroups in Mr18) and slightly richer groups (6.1 galaxies per group in our\ncatalogue versus 5.5 galaxies per Mr18 group), mainly due to the inclusion of\nfainter ($Mr > -18$) galaxies.\n\n\n\\subsection{Comparison to other studies}\n\nEarlier catalogues of SDSS groups of galaxies, based on the first SDSS\nreleases, were obtained by Lee et al.\\ (\\cite{lee04}) and Einasto et al.\\ (\\cite{e03b}).\n\nAt present there are five extensive catalogues of groups of galaxies\navailable to us that have been obtained on the basis of the SDSS. Although\nthey are based on different SDSS releases, the releases have been obtained by\nincremental addition of new data to the previous ones, and the observational\nmethod and parameters are the same, so we can reasonably compare these group\ncatalogues. The group catalogues differ due to different group-search\nparameters, not due to the underlying samples of galaxies. An important\nexception are the three volume-limited samples of Berlind et al. At the price\nof a smaller galaxy sample, they have the advantage that the most serious\nincompleteness effect of magnitude-limited samples, the absence of faint\ngalaxies in the distant parts of the survey, is avoided. Some characteristics\nof the catalogues are presented in Table~\\ref{Tab2}. An important\ncharacteristic for comparing the catalogues is the fraction of single\n(isolated) galaxies or, equivalently, the fraction of galaxies in groups.\nSingle galaxies can be considered as belonging to small groups or to haloes\nrepresented by only one observed galaxy in the visibility window. \n\nTherefore, we face the problem of how to compare the catalogues, because\ndifferent group-finder criteria have been applied: the richness and size of\ngroups, the linking lengths, the ratio of the los\/perpendicular linking\nlengths, etc. These criteria depend on the goals of a particular study. The\nlast two columns in the table give the fraction of galaxies in groups of\nrichness $n \\geq 2$ and $n \\geq 4$. These are 30 and 42 \\% for the groups of\nWeinmann et al.\\ and for our groups of richness $\\geq 2$, and 22 and 18.3 \\%\nfor the groups of Merchan et al.\\ and for our groups of richness $\\geq 4$,\nrespectively.
In fact, these values represent the low-richness end of the multiplicity\nfunction. \n\nWe note that the fraction of galaxies in our 2dFGRS groups is very similar --\n43 \\% (Paper~1). This suggests that the multiplicity distribution is a robust\ncharacteristic, independent of the survey used and of the small differences\nin the chosen initial FoF parameters. We see that Weinmann's groups, which\nare intended to include only compact groups, have a remarkably lower fraction\nof galaxies in groups (30 \\%) than ours. Comparing these fractions for\nMerchan's groups and ours (for richness $n \\geq$ 4), the results are much\ncloser.\n\nSeveral studies have shown (see, e.g., Kim et al.\\ \\cite{kim02}) that\ndifferent methods give rather different groups for the SDSS sample. The same\nis true for the 2dFGRS groups (Paper~1). Although the catalogues cited in\nTable~\\ref{Tab2} are FoF-based, Goto et al.\\ (\\cite{goto05}) created their\ncluster catalogue applying very strict criteria for system search, with the\npurpose of studying cluster galaxy evolution. It is not very useful to\ncompare their catalogue with ours due to the different purposes and numbers\nof clusters; however, for completeness we also present its properties in\nTable~\\ref{Tab2}. Weinmann et al.\\ (\\cite{wein06}) applied stricter criteria\nin group selection, based on the idea that galaxies in a common dark matter\nhalo belong to one group. As a result, they obtained a group catalogue that\ncontains mainly compact groups and a large fraction of single galaxies.\n\nThe most detailed search method and the most reliable group catalogue(s) have\nbeen obtained by Berlind et al.\\ (\\cite{ber06}; SDSS collaboration). Their\npurpose was to construct groups of galaxies to test the dark matter halo\noccupation distribution. To obtain highly reliable groups they chose a\ndifferent approach: volume-limited samples of the SDSS. This approach has the\nunwanted result of a much smaller sample, but we also see (Table~\\ref{Tab2})\nthe advantage: fewer incompleteness problems and a higher fraction of\ngalaxies in groups than in the other catalogues. Berlind et al.\\ (\\cite{ber06})\ndemonstrated that there exists no combination of radial and perpendicular\nlinking lengths satisfying all three important properties of groups (in the\nmock catalogue): the multiplicity function, the projected size and the\nvelocity dispersion.\n\nThis could explain why the properties of the group catalogues presented in\nTable~\\ref{Tab2} are so different. We consider this as one of the\njustifications for using observed groups to determine the linking length\nscaling law.\n \n\n\\subsection{Conclusions}\n\nWe have used the Sloan Digital Sky Survey Data Release 5 to create a new\ncatalogue of groups of galaxies. Our main results are the following:\n\n\\begin{itemize}\n \n\\item[1)] We have taken into account the selection effects caused by\n magnitude-limited galaxy samples. The two most important effects are the\n decrease of the group volume density and the decrease of the group richness\n with increasing distance from the observer. We show that at large distances\n from the observer the population of more massive, more luminous and larger\n groups\/clusters dominates. This increase of the mean size of groups is\n almost compensated by the absence of faint galaxies in the observed groups\n at large distances.
The remaining bright galaxies form a compact core of the group; this\n compensates for the increase of group sizes caused by the domination of the\n population of more massive groups. This confirms the similar\n luminosity\/density relation found earlier for the 2dFGRS groups.\n \n\\item[2)] We find the scaling of the group properties and that of the FoF\n linking length empirically, by shifting the observed groups to larger\n redshifts. As the SDSS Main and 2dFGRS galaxies have similar redshift\n distributions and luminosity functions, we find that the linking length\n scaling laws for these catalogues are very close, growing only slowly\n following an arctan law, but only up to the redshift $z=0.12$. Beyond this\n redshift the scaling law decreases sharply. At higher redshifts we detect\n mainly the compact cores of the groups, due to the narrower magnitude range\n (visibility window) of the SDSS. This scaling-law method can be considered\n a test of the redshift limit up to which the group finder can be applied.\n \n\\item[3)] We present a catalogue of groups of galaxies for the SDSS Data\n Release 5. We applied the FoF method with a slightly increasing linking\n length; the catalogue is available at the web page\n (\\texttt{http:\/\/www.obs.ee\/$\\sim$erik\/index.html}).\n \n\\item[4)] The wide variety of properties of the published group catalogues\n results from the different purposes of the catalogues, which involve\n different parameters for the group search algorithms and different samples.\n Other authors tried to establish the parameters of the halo model of the\n galaxy distribution. We provide a catalogue that is intended to be as\n complete and representative of the survey volume as possible; thereby we can\n best measure the large-scale galaxy network over the survey volume.\n\n\\end{itemize}\n\n\\begin{acknowledgements}\n \n Funding for the Sloan Digital Sky Survey (SDSS) and SDSS-II has been\n provided by the Alfred P. Sloan Foundation, the Participating Institutions, \n the National Science Foundation, the U.S. Department of Energy, the National\n Aeronautics and Space Administration, the Japanese Monbukagakusho, \n the Max Planck Society, and the Higher Education Funding Council for England. \n The SDSS Web site is http:\/\/www.sdss.org\/. \n \n The SDSS is managed by the Astrophysical Research Consortium (ARC) for the\n Participating Institutions. The Participating Institutions are the American\n Museum of Natural History, Astrophysical Institute Potsdam, University of\n Basel, University of Cambridge, Case Western Reserve University, The\n University of Chicago, Drexel University, Fermilab, the Institute for\n Advanced Study, the Japan Participation Group, The Johns Hopkins University, \n the Joint Institute for Nuclear Astrophysics, the Kavli Institute for\n Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese\n Academy of Sciences (LAMOST), Los Alamos National Laboratory, the\n Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for\n Astrophysics (MPA), New Mexico State University, Ohio State University, \n University of Pittsburgh, University of Portsmouth, Princeton University, \n the United States Naval Observatory, and the University of Washington. \n \n We are pleased to thank the SDSS collaboration for the DAS version of the\n fifth data release; special thanks go to James Annis. We acknowledge the\n Estonian Science Foundation for support under grants No. 6104, 6106 and\n 7146, and the Estonian Ministry for Education and Science for support by\n grant SF0062465s03.
This work has also been supported by the University of\n Valencia through a visiting professorship for Enn Saar and by the Spanish\n MCyT project AYA2003-08739-C02-01 (including FEDER). J.E. thanks\n Astrophysikalisches Institut Potsdam (using DFG-grant 436 EST 17\/2\/06), and\n the Aspen Center for Physics for hospitality, where part of this study was\n performed. \n\n\\end{acknowledgements}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}