diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjcvj" "b/data_all_eng_slimpj/shuffled/split2/finalzzjcvj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjcvj" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nLogical formalisms have been widely used in many areas of computer\nscience to provide high levels of abstraction, thus offering\nuser-friendliness while increasing the ability to perform\nverification. In the field of databases, first-order logic\nconstitutes the basis of relational query languages, which allow to\nwrite queries in a declarative manner, independently of the physical\nimplementation. In this paper, we propose to use logical formalisms\nto express properties of the topology of communication networks,\nthat can be verified in a distributed fashion over the networks\nthemselves.\n\nWe focus on first-order logic over graphs. First-order logic has\nbeen shown to have limited expressive power over finite structures.\nIn particular, it enjoys the locality property, which states that\nall first-order formulae are local \\cite{Gaifman82}, in the sense\nthat local areas of the graphs are sufficient to evaluate them.\n\nFirst-order properties have been shown to be computable with very\nlow complexity in both sequential and parallel models of\ncomputation. It was shown that first-order properties can be\nevaluated in linear time over classes of bounded degree graphs\n\\cite{Seese95} and over classes of locally tree-decomposable\ngraphs\\footnote{Locally tree-decomposable graphs generalize bounded\ndegree graphs, planar graphs, and graphs of bounded genus.}\n\\cite{FG01}. These results follow from the locality of the logic. It\nwas also shown that they can be evaluated in constant time over\nBoolean circuits with unbounded fan-in (AC$^0$) \\cite{Immerman89}.\nThese bounds lead us to be optimistic on the complexity of the\ndistributed evaluation of first-order properties.\n\nWe consider communication networks based on the message passing\nmodel \\cite{AttiyaW04}, where nodes exchange messages with their\nneighbors. The properties to be evaluated concern the graph which\nforms the topology of the network, and whose knowledge is\ndistributed over the nodes, who are only aware of their $1$-hop\nneighbors. We thus focus on connected graphs.\n\nIn distributed computing, the ability to solve problems locally has\nattracted a strong interest since the seminal paper of Linial\n\\cite{Linial92}. The ability to solve global problems in\ndistributed systems, while performing as much as possible local\ncomputations, is of great interest in particular to ensure\nscalability. Moreover relying as much as possible on local\ninformation improves fault-tolerance. Finally, restricting the\ncomputation to local areas allows to optimize time and communication\ncomplexity.\n\nNaor and Stockmeyer \\cite{NaorS95} showed that there were\nnon-trivial locally checkable labelings that are locally computable,\nwhile on the other hand lower-bounds have been exhibited, thus\nresulting in non-local computability results\n\\cite{KuhnMW04,KuhnMW06}.\n\nDifferent notions of local computation have been considered. The\nmost widely accepted restricts the time of the computation to be\nconstant, that is independent of the size of the network\n\\cite{NaorS95}, while allowing messages of size $O(\\log n)$, where\n$n$ is the size of the network. 
This condition is rather stringent.\nNaor and Stockmeyer \\cite{NaorS95} show their result for a\nrestricted class of graphs (eg bounded odd degree). Godard et al.\nused graph relabeling systems as the distributed computational\nmodel, defined local computations as graph relabeling systems with\nlocally-generated local relabeling rules, and characterized the\nclasses of graphs that are locally computable \\cite{GodardMM04}.\n\nOur initial motivation is to understand the impact of the logical\nlocality on the distributed computation, and its relationship with\nlocal distributed computation. It is easy to verify though that\nthere are simple properties (expressible in first-order logic) that\ncannot be computed locally. Consider for instance the property\n``There exist at least two distinct triangles'', which requires\nnon-local communication to check the distinctness of the two\ntriangles which may be far away from each other. Nevertheless,\nfirst-order properties do admit simple distributed computations.\n\nWe thus introduce frugal distributed computations. A distributed\nalgorithm is \\emph{frugal} if during its computation only a bounded\nnumber of messages of size $O(\\log n)$ are sent over each link. If\nwe restrict our attention to bounded degree networks, this implies\nthat each node is only receiving a bounded number of messages.\nFrugal computations resemble local computations over bounded degree\nnetworks, since the nodes are receiving only a bounded number of\nmessages, although these messages can come from remote nodes through\nmulti-hop paths.\n\nWe prove that first-order properties can be frugally evaluated over\nbounded degree networks and planar networks\n(Theorem~\\ref{thm:FO-bounded-degree} and\nTheorem~\\ref{thm:FO-planar}). The proofs are obtained by\ntransforming the centralized linear time evaluation algorithms\n\\cite{Seese95,FG01} into distributed ones satisfying the restriction\nthat only a bounded number of messages are sent over each link.\nMoreover, we show that the results carry over to the extension of\nfirst-order logic with unary counting. While the transformation of\nthe centralized linear time algorithm is simple for first-order\nproperties over bounded degree networks, it is quite intricate for\nfirst-order properties over planar networks. The most intricate part\nis the distributed construction of an ordered tree decomposition for\nsome subgraphs of the planar network, inspired by the distributed\nalgorithm to construct an ordered tree decomposition for planar\nnetworks with bounded diameter in \\cite{GW09}.\n\nIntuitively, since in the centralized linear time computation each\nobject is involved only a bounded number of times, in the\ndistributed computation, a bounded number of messages sent over each\nlink could be sufficient to evaluate first-order properties. So it\nmight seem trivial to design frugal distributed algorithms for\nfirst-order properties over bounded degree networks and planar\nnetworks. Nevertheless, this is not the case, because in the\ncentralized computation, after visiting one object, any other object\ncan be visited, but in the distributed computation, only the\n\\emph{adjacent} objects (nodes, links) can be visited.\n\nThe paper is organized as follows. In the next section, we recall\nclassical graph theory concepts, as well as Gaifman's locality\ntheorem. In Section~\\ref{sec-dist-FO}, we consider the distributed\nevaluation of first-order properties over respectively bounded\ndegree and planar networks. 
Finally, in Section~\\ref{sec-beyond-FO},\nwe consider the distributed evaluation of first-order logic with\nunary counting. Proofs can be found in the appendix.\n\n\\vspace*{-1em}\n\n\\section{Graphs, first-order logic and locality}\\label{sec-FO}\n\nIn this paper, our interest is focused to a restricted class of\nstructures, namely finite graphs. Let $G=(V,E)$, be a finite graph.\nWe use the following notations. If $v \\in V$, then $deg(v)$ denotes\nthe {\\it degree} of $v$. For two nodes $u,v \\in V$, the {\\it\ndistance} between $u$ and $v$, denoted $dist_G(u,v)$, is the length\nof the shortest path between $u$ and $v$. For $k \\in \\mathds{N}$,\nthe {\\it $k$-neighborhood} of a node $v$, denoted $N_k(v)$, is\ndefined as $\\{w \\in V | dist_G(v,w) \\le k\\}$. If $\\bar{v}=v_1...v_p$\nis a collection of nodes in $V$, then the $k$-neighborhood of\n$\\bar{v}$, denoted $N_k(\\bar{v})$, is defined by $\\bigcup_{1 \\le i\n\\le p} N_k(v_i)$.\nFor $X \\subseteq V$, let $\\langle X \\rangle^G$ denote the subgraph\ninduced by $X$.\n\nLet $G=(V,E)$ be a connected graph, a \\emph{tree decomposition} of\n$G$ is a rooted labeled tree $\\mathcal{T}=(T,F, r, B)$, where $T$ is\nthe set of vertices of the tree, $F \\subseteq T \\times T$ is the\nchild-parent relation of the tree, $r \\in T$ is the root of the\ntree, and $B$ is a labeling function from $T$ to $2^V$, mapping\nvertices $t$ of $T$ to sets $B(t)\\subseteq V$, called \\emph{bags},\nsuch that\n\\vspace*{-2mm}\n\\begin{enumerate}\n\\item For each edge $(v,w) \\in E$, there is a $t \\in T$, such that $\\{v,w\\} \\subseteq B(t)$.\n\\item For each $v \\in V$, $B^{-1}(v)=\\{t \\in T | v \\in B(t)\\}$ is connected in\n$T$.\n\\end{enumerate}\n\\vspace*{-2mm}\nThe {\\it width} of $\\mathcal{T}$, $width(\\mathcal{T})$, is defined\nas $\\max\\{|B(t)|-1 | t \\in T\\}$. The tree-width of $G$, denoted\n$tw(G)$, is the minimum width over all tree decompositions of $G$.\nAn \\emph{ordered tree decomposition} of width $k$ of a graph $G$ is\na rooted labeled tree $\\mathcal{T}=(T,F,r,L)$ such that:\n\\vspace*{-2mm}\n\\begin{itemize}\n\\item $(T,F,r)$ is defined as above,\n\n\\item $L$ assigns each vertex $t \\in T$ to a $(k+1)$-tuple\n$\\overline{b^t}=(b^t_1,\\cdots,b^t_{k+1})$ of vertices of $G$ (note\nthat in the tuple $\\overline{b^t}$, vertices of $G$ may occur\nrepeatedly),\n\n\\item If $L^\\prime(t):=\\{b^t_j| L(t)= (b^t_1,\\cdots,b^t_{k+1}), 1 \\le j \\le k+1\\}$, then\n$(T,F,r,L^\\prime)$ is a tree decomposition.\n\\end{itemize}\n\\vspace*{-2mm}\nThe \\emph{rank} of an (ordered) tree decomposition is the rank of\nthe rooted tree, i.e. the maximal number of children of its\nvertices.\n\nWe consider first-order logic (FO) over the signature $E$, where $E$\nis a binary relation symbol. The syntax and semantics of first-order\nformulae are defined as usual \\cite{EbbinghausFlum99}. The\n\\emph{quantifier rank} of a formula $\\varphi$ is the maximal number\nof nestings of existential and universal quantifiers in $\\varphi$.\n\nA \\emph{graph property} is a class of graphs closed under\nisomorphisms. Let $\\varphi$ be a first-order sentence, the graph\nproperty defined by $\\varphi$, denoted $\\mathcal{P}_\\varphi$, is the\nclass of graphs satisfying $\\varphi$.\n\n\n\nThe distance between nodes can be defined by first-order formulae\n$dist(x,y) \\le k$ stating that the distance between $x$ and $y$ is\nno larger than $k$, and $dist(x,y) > k$ is an abbreviation of $\\neg\ndist(x,y) \\le k$. 
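For instance, $dist(x,y) \\le 2$ can be written as the first-order formula\n$x=y \\vee E(x,y) \\vee \\exists z \\left(E(x,z) \\wedge E(z,y)\\right)$, and similarly for any fixed $k$. 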
In addition, let $\\bar{x}=x_1...x_p$ be a list of\nvariables, then $dist(\\bar{x},y) \\le k$ is used to denote\n$\\mathop{\\vee} \\limits_{1 \\le i \\le p} dist(x_i,y) \\le k$.\n\nLet $\\varphi$ be a first-order formula, $k \\in \\mathds{N}$, and\n$\\bar{x}$ be a list of variables not occurring in $\\varphi$, then\nthe formula bounding the quantifiers of $\\varphi$ to the\n$k$-neighborhood of $\\bar{x}$, denoted $\\varphi^{(k)}(\\bar{x})$, can\nbe defined easily in first-order logic by using formulae\n$dist(\\bar{x},y) \\le k$. For instance, if $\\varphi:= \\exists y\n\\psi(y)$, then\n\\[\\varphi^{(k)}(\\bar{x}) := \\exists y \\left(dist(\\bar{x},y) \\le k \\wedge\n\\left(\\psi(y)\\right)^{(k)}(\\bar{x})\\right).\\]\n\nWe can now recall the notion of logical locality introduced by\nGaifman \\cite{Gaifman82,EbbinghausFlum99}.\n\\vspace*{-.5em}\n\\begin{theorem}\\label{the-gaifman}\n\\cite{Gaifman82} Let $\\varphi$ be a first-order formula with free\nvariables $u_1,...,u_p$, then $\\varphi$ can be written in {\\it\nGaifman Normal Form}, that is into a Boolean combination of (i)\nsentences of the form:\n\\begin{equation}\\label{eqn:gaifman-theorem}\n\\exists x_1 ... \\exists x_s \\left(\\bigwedge \\limits_{1 \\le i < j \\le s} dist(x_i,x_j) > 2r \\wedge \\bigwedge \\limits_{i} \\psi^{(r)}(x_i)\\right)\n\\end{equation}\nand (ii) formulae of the form $\\psi^{(t)}(\\overline{y})$, where\n$\\overline{y} = y_1...y_q$ such that $y_i \\in \\{u_1,...,u_p\\}$ for\nall $1 \\le i \\le q$, $r \\le 7^{k-1}$, $s \\le p + k$, $t \\le\n\\left(7^k-1\\right)\/2$ ($k$ is the quantifier rank of\n$\\varphi$)\\footnote{The bound on $r$ has been improved to $4^k-1$ in\n\\cite{KeislerL04}}. \\\\\nMoreover, if $\\varphi$ is a sentence, then the\nBoolean combination contains only sentences of the form\n(\\ref{eqn:gaifman-theorem}).\n\\end{theorem}\n\nThe locality of first-order logic is a powerful tool to demonstrate\nnon-definability results \\cite{Libkin97}. It can be used in\nparticular to prove that counting properties, such as the parity of\nthe number of vertices, or recursive properties, such as the\nconnectivity of a graph, are not first-order.\n\n\n\\vspace*{-1em}\n\\section{Distributed evaluation of FO}\\label{sec-dist-FO}\nWe consider a message passing model of distributed computation\n\\cite{AttiyaW04}, based on a communication network whose topology is\ngiven by a graph $G=(V,E)$ of diameter $\\Delta$, where $E$ denotes\nthe set of bidirectional \\emph{communication links} between nodes.\nFrom now on, we restrict our attention to \\emph{finite connected\ngraphs}.\n\nWe assume that the distributed system is asynchronous and has no\nfailure. The nodes have a unique \\emph {identifier} taken from\n$1,2,\\cdots,n$, where $n$ is the number of nodes. Each node has\ndistinct local ports for distinct links incident to it. The nodes\nhave {\\it states}, including final accepting or rejecting states.\n\nFor simplicity, we assume that there is only one query fired in the\nnetwork by a {\\it requesting node}. We assume also that a {\\it\nbreadth-first-search (BFS) tree} rooted on the requesting node has\nbeen pre-computed in the network, such that each node stores locally\nthe identifier of its parent in the BFS-tree, and the states of the\nports with respect to the BFS-tree, which are either ``parent'' or\n``child'', denoting the ports corresponding to the tree edges, or\n``horizon'', ``upward'', ``downward'', denoting the ports\ncorresponding to the non-tree edges to some node with the same,\nsmaller, or larger depth in the BFS-tree. 
The computation\nterminates, when the requesting node reaches a final state.\n\nLet $\\mathcal{C}$ be a class of graphs. A distributed algorithm is\nsaid to be \\emph{frugal} over $\\mathcal{C}$ if there is a $k \\in\n\\mathds{N}$ such that for any network $G \\in \\mathcal{C}$ of $n$\nnodes and any requesting node in $G$, the distributed computation\nterminates, with only at most $k$ messages of size $O(\\log n)$ sent\nover each link. If we restrict our attention to bounded degree\nnetworks, frugal distributed algorithms implies that each node only\nreceives a bounded number of messages. Frugal computations resemble\nlocal computations over bounded degree networks, since the nodes\nreceive only a bounded number of messages, although these messages\ncan come from remote nodes through multi-hop paths.\n\nLet $\\mathcal{C}$ be a class of graphs, and $\\varphi$ an FO\nsentence, we say that $\\varphi$ can be distributively evaluated over\n$\\mathcal{C}$ if there exists a distributed algorithm such that for\nany network $G \\in \\mathcal{C}$ and any requesting node in $G$, the\ncomputation of the distributed algorithm on $G$ terminates with the\nrequesting node in the accepting state if and only if $G \\models\n\\varphi$. Moreover, if there is a frugal distributed algorithm to do\nthis, then we say that $\\varphi$ can be frugally evaluated over\n$\\mathcal{C}$.\n\n\n\nFor centralized computations, it has been shown that Gaifman's\nlocality of FO entails linear time evaluation of FO properties over\nclasses of bounded degree graphs and classes of locally\ntree-decomposable graphs \\cite{Seese95,FG01}. In the following, we\nshow that it is possible to design frugal distributed evaluation\nalgorithms for FO properties over bounded degree and planar\nnetworks, by carefully transforming the centralized linear time\nevaluation algorithms into distributed ones with computations on\neach node well balanced.\n\n\\vspace*{-.5em}\n\n\\subsection{Bounded degree networks}\\label{sec-FO-bounded-degree}\n\nWe first consider the evaluation of FO properties over bounded\ndegree networks. We assume that each node stores the degree bound\n$k$ locally.\n\n\\vspace*{-.5em}\n\n\\begin{theorem}\\label{thm:FO-bounded-degree}\nFO properties can be frugally evaluated over bounded degree\nnetworks.\n\\end{theorem}\n\n\\vspace*{-.5em}\n\nTheorem~\\ref{thm:FO-bounded-degree} can be shown by using Hanf's\ntechnique \\cite{FSV95}, in a way similar to the proof of Seese's\nseminal result \\cite{Seese95}.\n\nLet $r \\in \\mathds{N}$, $G=(V,E)$, and $v \\in V$, then the\n\\emph{$r$-type} of $v$ in $G$ is the isomorphism type of $\\left(\n\\langle N_r(v) \\rangle^G,v \\right)$. Let $r, m \\in \\mathds{N}$,\n$G_1$ and $G_2$ be two graphs, then $G_1$ and $G_2$ are said to be\n$(r,m)$-equivalent if and only if for every $r$-type $\\tau$, either\n$G_1$ and $G_2$ have the same number of vertices with $r$-type\n$\\tau$ or else both have at least $m$ vertices with $r$-type $\\tau$.\n$G_1$ and $G_2$ are said to be $k$-equivalent, denoted $G_1\n\\equiv_{k} G_2$, if $G_1$ and $G_2$ satisfy the same FO sentences of\nquantifier rank at most $k$. It has been shown that:\n\n\\vspace*{-.5em}\n\n\\begin{theorem}\\label{thm:hanf-theorem}\n\\cite{FSV95} Let $k,d \\in \\mathds{N}$. There exist $r,m \\in\n\\mathds{N}$ such that $r$ (resp. $m$) depends on $k$ (resp. 
both $k$\nand $d$), and for any graphs $G_1$ and $G_2$ with maximal degree no\nmore than $d$, if $G_1$ and $G_2$ are $(r,m)$-equivalent, then $G_1\n\\equiv_k G_2$.\n\\end{theorem}\n\n\\vspace*{-.5em}\n\nLet us now sketch the proof of Theorem~\\ref{thm:FO-bounded-degree},\nwhich relies on a distributed algorithm consisting of three phases.\nSuppose the requesting node requests the evaluation of some FO\nsentence with quantifier rank $k$. Let $r,m$ be the natural numbers\ndepending on $k,d$ specified in Theorem~\\ref{thm:hanf-theorem}.\n\n\\vspace*{-.5em}\n\n\\begin{description}\n\\item[Phase I] The requesting node broadcasts messages along the BFS-tree to ask each node to collect the topology information in its $r$-neighborhood;\n\\vspace*{-.5em}\n\\item[Phase II] Each node collects the topology information in its\n$r$-neighborhood;\n\\vspace*{-.5em}\n\\item[Phase III] The $r$-types of the nodes in the network are aggregated\nthrough the BFS-tree to the requesting node up to the threshold $m$\nfor each $r$-type. Finally the requesting node decides whether the\nnetwork satisfies the FO sentence or not by using the information\nabout the $r$-types.\n\\end{description}\n\n\\vspace*{-.5em}\n\nIt is easy to see that only a bounded number of messages are sent\nover each link in Phase I and II. Since the total number of distinct\n$r$-types with degree bound $d$ depends only upon $r$ and $d$ and\neach $r$-type is only counted up to a threshold $m$, it turns out\nthat over each link, only a bounded number of messages are sent in\nPhase III as well. So the above distributed evaluation algorithm is\nfrugal over bounded degree networks.\n\n\\vspace*{-.5em}\n\n\\subsection{Planar networks}\\label{sec-FO-planar}\n\nWe now consider the distributed evaluation of FO properties over\nplanar networks.\n\nA \\emph{combinatorial embedding} of a planar graph $G=(V,E)$ is an\nassignment of a cyclic ordering of the set of incident edges to each\nvertex $v$ such that two edges $(u,v)$ and $(v,w)$ are in the same\nface iff $(v,w)$ is immediately before $(v,u)$ in the cyclic\nordering of $v$. Combinatorial embeddings, which encode the\ninformation about boundaries of the faces in usual embeddings of\nplanar graphs into the planes, are useful for computing on planar\ngraphs. Given a combinatorial embedding, the boundaries of all the\nfaces can be discovered by traversing the edges according to the\nabove condition.\n\nWe assume in this subsection that a combinatorial embedding of the\nplanar network is distributively stored in the network, i.e. a\ncyclic ordering of the set of the incident links is stored in each\nnode of the network.\n\\vspace*{-.5em}\n\n\\begin{theorem}\\label{thm:FO-planar}\nFO properties can be frugally evaluated over planar networks.\n\\end{theorem}\n\n\\vspace*{-.5em}\n\n\nFor the proof of Theorem~\\ref{thm:FO-planar}, we first recall the\ncentralized linear time algorithm to evaluate FO properties over\nplanar graphs in \\cite{FG01}\\footnote{In fact, in \\cite{FG01}, it\nwas shown that FO is linear-time computable over classes of locally\ntree-decomposable graphs.}.\n\nLet $G=(V,E)$ be a planar graph and $\\varphi$ be an FO sentence.\nFrom Theorem~\\ref{the-gaifman}, we know that $\\varphi$ can be\nwritten into Boolean combinations of sentences of the form\n(\\ref{eqn:gaifman-theorem}),\n\n\\vspace*{-.5em}\n\n\n\\begin{equation*}\n\\exists x_1 ... 
\\exists x_s \\left(\\bigwedge \\limits_{1 \\le i < j \\le s} dist(x_i,x_j) > 2r \\wedge \\bigwedge \\limits_{i} \\psi^{(r)}(x_i)\\right).\n\\end{equation*}\n\n\\vspace*{-.5em}\n\nIt is sufficient to show that sentences of the form\n(\\ref{eqn:gaifman-theorem}) are linear-time computable over $G$. The\ncentralized algorithm to evaluate FO sentences of the form\n(\\ref{eqn:gaifman-theorem}) over planar graphs consists of the\nfollowing four phases:\n\n\\vspace*{-.5em}\n\n\\begin{enumerate}\n\\item Select some $v_0 \\in V$, let $\\mathcal{H}=\\{G[i,i+2r] | i \\ge 0\\}$, where $G[i,j]=\\{v \\in V |\ni \\le dist_G(v_0,v) \\le j\\}$;\n\n\\item For each $H \\in \\mathcal{H}$, compute $K_r(H)$, where $K_r(H):=\\{v \\in H | N_r(v) \\subseteq H\\}$;\n\n\\item For each $H \\in \\mathcal{H}$, compute $P_H:=\\{ v \\in K_r(H) | \\langle H \\rangle^G \\models\n\\psi^{(r)}(v)\\}$;\n\n\\item Let $P:=\\cup_H P_H$, determine whether there are $s$ distinct nodes in $P$ such\nthat their pairwise distance is greater than $2r$.\n\\end{enumerate}\n\n\\vspace*{-.5em}\n\nIn the computation of the 3rd and 4th phase above, an\nautomata-theoretical technique to evaluate Monadic-Second-Order\n(MSO) formulae in linear time over classes of graphs with bounded\ntree-width \\cite{Courcelle90,FlumG06,FFG02} is used. In the\nfollowing, we recall this centralized evaluation algorithm.\n\nMSO is obtained by adding set variables and set quantifiers into FO,\nsuch as $\\exists X \\varphi(X)$ (where $X$ is a set variable). MSO\nhas been widely studied in the context of graphs for its expressive\npower. For instance, $3$-colorability, transitive closure or\nconnectivity can be defined in MSO \\cite{Courcelle08}.\n\nThe centralized linear time evaluation of MSO formulae over classes\nof bounded tree-width graphs goes as follows. First an ordered tree\ndecomposition $\\mathcal{T}$ of the given graph is constructed. Then\nfrom the given MSO formula, a tree automaton $\\mathcal{A}$ is\nobtained. Afterwards, $\\mathcal{T}$ is transformed into a labeled\ntree $\\mathcal{T}^\\prime$, finally $\\mathcal{A}$ is run over\n$\\mathcal{T}^\\prime$ (maybe several times for formulae containing\nfree variables) to get the evaluation result.\n\n\\medskip\n\nIn the rest of this section, we design a frugal distributed\nalgorithm to evaluate FO sentences over planar networks by adapting\nthe above centralized algorithm. The main difficulty is to\ndistribute the computation among the nodes such that only a bounded\nnumber of messages are sent over each link during the computation.\n\n\\vspace*{-.5em}\n\n\\begin{description}\n\\item[Phase I] The requesting node broadcasts the FO sentence of the form (\\ref{eqn:gaifman-theorem}) to all the\nnodes in the network through the BFS tree;\n\\vspace*{-.5em}\n\n\\item[Phase II] For each $v \\in V$, compute $C(v):=\\{i \\ge 0 | v \\in G[i,i+2r]\\}$;\n\\vspace*{-.5em}\n\n\\item[Phase III] For each $v \\in V$, compute $D(v):=\\{i \\ge 0 | N_r(v) \\subseteq G[i,i+2r]\\}$;\n\\vspace*{-.5em}\n\n\\item[Phase IV] For each $i \\ge 0$, compute $P_i:=\\{ v \\in V | i \\in D(v), \\langle G[i,i+2r] \\rangle^G \\models \\psi^{(r)}(v)\\}$;\n\\vspace*{-.5em}\n\n\\item[Phase V] Let $P:=\\bigcup_i P_i$, determine whether there are $s$ distinct nodes labeled by $P$ such that\ntheir pairwise distance is greater than $2r$.\n\\end{description}\n\n\\vspace*{-.5em}\n\n\nPhase I is trivial. Phase II is easy. 
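(Indeed, with $v_0$ taken to be the requesting node, $v \\in G[i,i+2r]$ if and only if $\\max(0,d(v)-2r) \\le i \\le d(v)$, where $d(v)$ is the depth of $v$ in the BFS-tree, so $C(v)$ is simply an interval determined by the depth of $v$.) 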
In the following, we\nillustrate the computation of Phase III, IV, and V one by one.\n\nWe first introduce a lemma for the computation of Phase III.\n\nFor $W \\subseteq V$, let $K_i(W):=\\{v \\in W | N_i(v) \\subseteq W\\}$.\nLet $D_i(v):=\\{j \\ge 0 | v \\in K_i(G[j,j+2r])\\}$.\n\n\\vspace*{-.5em}\n\n\\begin{lemma}\\label{lem-kernel}\nFor each $v \\in V$ and $i > 0$, $D_i(v)=C(v) \\cap \\bigcap\n\\limits_{w: (v,w) \\in E} D_{i-1}(w)$.\n\\end{lemma}\n\n\\vspace*{-.5em}\n\nWith Lemma~\\ref{lem-kernel}, $D(v)=D_r(v)$ can be computed in an\ninductive way to finish Phase III: Each node $v$ obtains the\ninformation $D_{i-1}(w)$ from all its neighbors $w$, and performs\nthe in-node computation to compute $D_i(v)$.\n\n\\smallskip\n\nNow we consider Phase IV.\n\nBecause $\\psi^{(r)}(x)$ is a local formula, $\\psi^{(r)}(x)$ can be\nevaluated separately over each connected component of $G[i,i+2r]$\nand the results are stored distributively.\n\nLet $C_i$ be a connected component of $G[i,i+2r]$, and\n$w^i_1,\\cdots,w^i_l$ be all the nodes contained in $C_i$ with\ndistance $i$ from the requesting node. Now we consider the\nevaluation of $\\psi^{(r)}(x)$ over $C_i$.\n\nLet $C^\\prime_i$ be the graph obtained from $C_i$ by including all\nancestors of $w^i_1,\\cdots,w^i_l$ in the BFS-tree, and $C^\\ast_i$ be\nthe graph obtained from $C^\\prime_i$ by contracting all the\nancestors of $w^i_1,\\cdots,w^i_l$ into one vertex, i.e. $C^\\ast_i$\nhas one more vertex, called the virtual vertex, than $C_i$, and this\nvertex is connected to $w^i_1,\\cdots,w^i_l$. It is easy to see that\n$C^\\ast_i$ is a planar graph with a BFS-tree rooted on $v^\\ast$ and\nof depth at most $2r+1$. So $C^\\ast_i$ is a planar graph with\nbounded diameter.\n\nAn ordered tree decomposition for planar networks with bounded\ndiameter can be distributively constructed with only a bounded\nnumber of messages sent over each link as follows \\cite{GW09}:\n\\vspace*{-.5em}\n\n\\begin{itemize}\n\\item Do a depth-first-search to decompose the network into blocks,\ni.e. 
biconnected components;\n\\vspace*{-.5em}\n\n\n\\item Construct an ordered tree decomposition for each\nnontrivial block: Traverse every face of the block according to the\ncyclic ordering at each node, triangulate all those faces, and\nconnect the triangles into a tree decomposition by utilizing the\npre-computed BFS tree;\n\\vspace*{-.5em}\n\n\\item Finally the tree decompositions for the blocks are connected\ntogether into a complete tree decomposition for the whole network.\n\\end{itemize}\n\\vspace*{-.5em}\n\nBy using the distributed algorithm for the tree decomposition of\nplanar networks with bounded diameter, we can construct\ndistributively an ordered tree decomposition for $C^\\ast_i$, while\nkeeping the virtual vertex in mind, and get an ordered tree\ndecomposition for $C_i$.\n\nWith the ordered tree decomposition for $C_i$, we can evaluate\n$\\psi^{(r)}(x)$ over $C_i$ by using the automata-theoretical\ntechnique, and store the result distributively in the network (each\nnode stores a Boolean value indicating whether it belongs to the\nresult or not).\n\nA distributed post-order traversal over the BFS tree can be done to\nfind out all the connected components of $G[i,i+2r]$'s and construct\nthe tree decompositions for these connected components one by one.\n\nFinally we consider Phase V.\n\nLabel nodes in $\\bigcup_i P_i$ with $P$.\n\nThen consider the evaluation of FO sentence $\\varphi^\\prime$ over\nthe vocabulary $\\{E,P\\}$,\n\\vspace*{-.5em}\n\n\\begin{equation*}\n\\exists x_1 ... \\exists x_s \\left(\\bigwedge \\limits_{1 \\le i < j \\le s} dist(x_i,x_j) > 2r \\wedge \\bigwedge \\limits_{i} P(x_i)\\right).\n\\end{equation*}\n\\vspace*{-.5em}\n\nStarting from some node $w_1$ with label $P$, mark the vertices in\n$N_{2r}(w_1)$ as $Q$, then select some node $w_2$ outside $Q$, and\nmark those nodes in $N_{2r}(w_2)$ by $Q$ again, continue like this,\nuntil $w_l$ such that either $l = s$ or all the nodes with label $P$\nhave already been labeled by $Q$.\n\nIf $l < s$, then label the nodes in $\\bigcup \\limits_{1 \\le i \\le l}\nN_{4r}(w_i)$ as $I$. Each connected component of $\\langle I\n\\rangle^G$ has diameter no more than $4lr < 4sr$. We can construct\ndistributively a tree decomposition for each connected component of\n$\\langle I \\rangle^G$, and connect these tree decompositions\ntogether to get a complete tree-decomposition of $\\langle I\n\\rangle^G$, then evaluate the sentence $\\varphi^\\prime$ by using\nthis complete tree decomposition.\n\nThe details of the frugal distributed evaluation algorithm can be\nfound in the appendix.\n\\vspace*{-1em}\n\n\\section{Beyond FO properties}\\label{sec-beyond-FO}\n\\vspace*{-.5em}\nWe have shown that FO properties can be frugally evaluated over\nrespectively bounded degree and planar networks. In this section, we\nextend these results to FO unary queries and some counting extension\nof FO.\n\nFrom Theorem~\\ref{the-gaifman}, FO formula $\\varphi(x)$ containing\nexactly one free variable $x$ can be written into the Boolean\ncombinations of sentences of the form (1) and the local formulae\n$\\psi^{(t)}(x)$. 
Then it is not hard to prove the following result.\n\\vspace*{-.5em}\n\\begin{theorem}\\label{thm:FO-Unary}\nFO formulae $\\varphi(x)$ with exactly one free variable $x$ can be\nfrugally evaluated over respectively bounded degree and planar\nnetworks, with the results distributively stored on the nodes of the\nnetwork.\n\\end{theorem}\n\\vspace*{-.5em}\n\nCounting is one of the ability that is lacking to first-order logic,\nand has been added in commercial relational query languages (e.g.\nSQL). Its expressive power has been widely studied\n\\cite{GradelO92,GrumbachT95,Otto96} in the literature. Libkin\n\\cite{Libkin97} proved that first-order logic with counting still\nenjoys Gaifman locality property. We prove that\nTheorem~\\ref{thm:FO-bounded-degree} and Theorem~\\ref{thm:FO-planar}\ncarry over as well for first-order logic with unary counting.\n\nLet FO($\\#$) be the extension of first-order logic with unary\ncounting. FO($\\#$) is a two-sorted logic, the first sort ranges over\nthe set of nodes $V$, while the second sort ranges over the natural\nnumbers $\\mathds{N}$. The terms of the second sort are defined by:\n$t:= \\# x.\\varphi(x) \\ | \\ t_1 + t_2 \\ | \\ t_1 \\times t_2$, where\n$\\varphi$ is a formula over the first sort with one free variable\n$x$. Second sort terms of the form $\\# x. \\varphi(x)$ are called\n\\emph{basic} second sort terms.\n\nThe atoms of FO($\\#$) extend standard FO atoms with the following\ntwo unary counting atoms: $t_1 = t_2 \\ | \\ t_1 < t_2,$ where $ t_1,\nt_2$ are second sort terms. Let $t$ be a second sort term of\nFO($\\#$), $G=(V,E)$ be a graph, then the interpretation of $t$ in\n$G$, denoted $t^G$, is defined as follows:\n\\vspace*{-.5em}\n\\begin{itemize}\n\\item $(\\# x.\\varphi(x))^G$ is the cardinality of $\\{v \\in V | G\\models\\varphi(v) \\}$;\n\\vspace*{-.5em}\n\\item $\\left(t_1 + t_2\\right)^G$ is the sum of $t_1^G$ and $t_2^G$;\n\\vspace*{-.5em}\n\\item $\\left(t_1 \\times t_2\\right)^G$ is the product of $t_1^G$ and $t_2^G$.\n\\end{itemize}\n\\vspace*{-.5em}\nThe interpretation of FO($\\#$) formulae is defined in a standard\nway.\n\n\\vspace*{-.5em}\n\\begin{theorem}\\label{thm:FO-UCNT-frugal}\nFO($\\#$) properties can be frugally evaluated over respectively\nbounded degree and planar networks.\n\\end{theorem}\n\\vspace*{-.5em}\nThe proof of the theorem relies on a normal form of FO($\\#$)\nformulae. A sketch can be found in the appendix.\n\n\n\\vspace*{-1em}\n\\section{Conclusion}\\label{sec-conclusion}\n\nThe logical locality has been shown to entail efficient computation\nof first-order logic over several classes of structures. We show\nthat if the logical formulae are used to express properties of the\ngraphs, which constitute the topology of communication networks,\nthen these formulae can be evaluated very efficiently over these\nnetworks. Their distributed computation, although not local\n\\cite{Linial92,NaorS95,Peleg00}, can be done {\\it frugally}, that is\nwith a bounded number of messages of logarithmic size exchanged over\neach link. The frugal computation, introduced in this paper,\ngeneralizes local computation and offers a large spectrum of\napplications. We proved that first-order properties can be evaluated\nfrugally over respectively bounded degree and planar networks.\nMoreover the results carry over to the extension of first-order\nlogic with unary counting. 
The distributed time used in the frugal\nevaluation of FO properties over bounded degree networks is\n$O(\\Delta)$, while that over planar networks is $O(n)$.\n\nWe assumed that some pre-computations had been done on the networks.\nIf no BFS-tree has been pre-computed, the construction of a BFS-tree\ncan be done in $O(\\Delta)$ time and with $O(\\Delta)$ messages sent\nover each link \\cite{BDLP08}.\n\nBeyond its interest for logical properties, the frugality of\ndistributed algorithms, which ensures an extremely good scalability\nof their computation, raises fundamental questions, such as deciding\nwhat can be frugally computed. Can a Hamiltonian path for instance\nbe computed frugally?\n\n\n\\vspace*{-1em}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:intro}Introduction}\nRandom matrix theory is used to analyse chaotic quantum systems. The statistics of the discrete energies of the system can yield information on the completeness of the data or the presence of intruder states. The standard procedure is straightforward. After preparing the data by rescaling it so that the level density is unity across its whole range (a process known as ``unfolding\") the spectral statistics of the system are compared with the RMT results for the appropriate ensemble. We would like to see what the effect coupling a chaotic system to the continuum might have on the RMT statistics. The canonical example of this process is the analysis of neutron resonance data. A free neutron is incident on a target nucleus, and they combine to make an excited compound nucleus. The incident channel is but one configuration of many. The initial wave function is simple and consists solely of this component. Through a series of random collisions of the nucleons, the initial wave function ``melts\" and de-excites, emitting gamma rays on the way to the ground state. The initial configuration of a free neutron and a ground state target corresponds to one of the discrete excited states of the compound nucleus. We have come to the picture of a discrete state buried in the continuum. The system is an open quantum system. The states of the compound nucleus have a width. It is the effect of the openness of the system on the level statistics that is the main question addressed in this paper.\n\nThere is a well developed method for dealing with open quantum systems. The basic structure of the model is a Hermitian Hamiltonian with coupling to the continuum modeled by the addition of an imaginary part or doorway state, see \\cite{soze88,soze92,ZeVo2003}. The energies of the original Hamiltonian acquire widths. A common feature of these open quantum systems is the appearance of a super-radiant state. The SR state appears as the coupling to the continuum increases. There is a restructuring of the states and one special (SR) state acquires all the width.\n\nWe will make a very simple model of an open quantum system consisting of an $N\\times N$ GOE matrix with an imaginary part $=\\imath \\kappa \\sqrt{N}$ added to the diagonal. Commonly used RMT statistics are calculated and their behavior as a function of $\\kappa$ is explored. The biggest effects occur around $\\kappa=1$ which is when the SR transition happens. A plot of the energies of the opened GOE matrix vs $\\kappa$ consistently show the migration of a few levels to the center of the spectrum. 
A plot of the level density reveals a deviation from the RMT semicircle in the middle of the energy range consistent with there being more energies close to zero as $\\kappa$ grows. The entropy of the states evolves also, with the SR state clearly emerging at $\\kappa=1$. Next the $\\Delta_3(L)$ statistic or spectral rigidity is examined. The effect of opening the system was to increase the value of the spectral rigidity. The increase was maximum at $\\kappa=1$ and then decreased, but not to zero. The shape of the $\\Delta_3(L)$ curves for individual opened spectra looked like those of incomplete spectra or ones with intruder levels. In \\cite{mulhall11} it was seen that the effect on $\\Delta_3(L)$ of intruder levels or missed levels was the same. The problem of spurious levels is addressed in \\cite{shriner07} with both $\\Delta_3(L)$ and the thermodynamic internal energy. We performed a search for missed levels on opened spectra using RMT methods, keeping in mind that a search for intruders would give the same results. The spectra are complete, but the tests suggested that there was a fraction of the levels missed. This fraction was biggest at $\\kappa=1$ where it reached a value of about 3\\%. The distribution of widths also undergoes a transformation at $\\kappa=1$. At large values of $\\kappa$ the SR state accounts for all the width and the remaining levels have widths consistent with the Porter-Thomas distribution. We note here that the full probability density of widths for symmetric complex random matrices has been derived in \\cite{fyodorov99a}. The exact solution for the case when the original matrix is from the Gaussian Unitary Ensemble is treated in \\cite{fyodorov96,fyodorov99b}, where the authors derive the distribution of complex eigenvalues.\n\nIn the next section we describe the system, how it is opened and the effect this has on the energies and entropies. The SR state is seen already at this stage. In Sect.~\\ref{sec:dos} we look at the density of states and address issues of unfolding the spectra in anticipation of RMT analysis. This is followed in Sect.~\\ref{sec:width} by an analysis of the width distribution and a look at the SR transition. The spectral rigidity is introduced in Sect.~\\ref{sec:d3} and the effect of opening the system is seen. In Sect.~\\ref{sec:rmt} we perform an RMT analysis on the ensemble of open spectra using three tests for missed levels and see how open systems give false positives for missed levels. We end with concluding remarks in Sect.~\\ref{sec:conc}.\n\n\\section{Opening the system, energies and entropy}\n\\label{sec:opening}\nOpen quantum systems have been treated very successfully with an effective Hamiltonian approach. The main ingredient is the Hamiltonian of a loosely bound system connected to continuum channels via a factorizable non-Hermitian term. The details are worked out in \\cite{soze88}, \\cite{soze92} and \\cite{AuZe2011}. This method provides a general framework applicable to a broad range of systems from loosely bound nuclei \\cite{AuZe2011} to electron transport in nanosystems \\cite{ceka09}. The approach taken here is to make the most minimal adjustment to the GOE that would mimic openness and see how the RMT results are affected. We take a GOE matrix, $H^0$, and add an imaginary part to the diagonal elements, $H_{ii}\\rightarrow H^0_{ii}-\\imath\\frac{\\kappa}{\\sqrt{N}}$, where\n$\\kappa$ is the strength of the coupling. 
Because the matrix is random, it is sufficient to just make the replacement $H_{11}\\rightarrow H^0_{11}-\\imath \\kappa \\sqrt{N}$ and leave other matrix elements unchanged. The resulting spectrum will be a set of complex energies $\\varepsilon_n=E_n+\\imath \\Gamma_n$.\n\nThe evolution of the energies with $\\kappa$ shows robust and interesting features. There are a small number of energies that migrate, then settle down for $\\kappa$ in the range $ 0.5 \\rightarrow 1.5$. In Fig.~\\ref{fig:evsk} we see a specific example of this generic behavior. If we look at the entropy of the corresponding wave functions, one state in particular emerges. Starting with a wave function $\\psi=\\sum _i c_i |i\\rangle$ we define the entropy as $S=-\\sum _i |c_i|^2 \\ln(|c_i|^2)$. We can calculate $S$ in the original basis in which we wrote out $H$, or in the energy basis, where $H$ is diagonal and $H^0_{ii}=E_i$. In Fig.~\\ref{fig:entvsk} we see the results for an $N=50$ system. The SR state emerges with a very simple structure in the original basis (blue lines), having a very low entropy. The other states stay at the GOE predicted average value $S=\\ln (0.48 N)$ \\cite{ZeVo2003}, which in this case is 3.2. We see that in the energy basis (black lines), where the Hamiltonian is diagonal for $\\kappa=0$, we have the complementary situation with entropy. Now the SR state is a complicated mixture of energy eigenstates with an entropy of around 3, and the other states have lower entropy. Indeed for many states in the energy basis the entropy stays close to zero.\n\n\\begin{figure}\n\\includegraphics[width=0.8\\textheight]{fig1.eps}\n\\caption{\\label{fig:evsk} The evolution of the energy levels with\n$\\kappa$. We just take the 300 unfolded energies of one particular\nmatrix, and plot $E_n$ vs $\\kappa$ for levels 130 to 195 of the\n$N=300$ unfolded spectrum. The curves, or ``trajectories\" around\n$\\kappa=1$ vary from matrix to matrix.}\n\\end{figure}\n\n\\section{The density of states}\\label{sec:dos}\nThe migration of a few energies to the middle of the spectrum as $\\kappa$ increases is reflected in the density of states. In Fig.~\\ref{fig:rho} we see in the empirical DOS a clear deviation from the semicircle for small $E$. Note we use level density and density of states interchangeably here as there is no degeneracy in the eigenvalues of random matrices.\n\n\n\\begin{figure}\n\\includegraphics[width=0.8\\textheight]{fig2.eps}\n\\caption{\\label{fig:entvsk} (Color online) The evolution of the energy and entropy as $\\kappa$ increases for an $N=50$ system. The blue lines correspond to the basis in which the original Hamiltonian $H^0$ is written. The black lines correspond to the basis in which $H^0$ is diagonal. In both cases, the superradiant state is obvious. Notice that one blue line drops down to low entropy as $\\kappa$ increases. One can think of Fig.~\\ref{fig:evsk} as a bird's-eye view of this plot, where we just see the energy and $\\kappa$ values. Note that here the GOE average value of the entropy is $S=3.2$, which is the level most of the blue lines stay at. Conversely the states in the energy basis grow in complexity. Notice how some of the black lines rise to a higher entropy as $\\kappa$ increases. }\n\\end{figure}\n\n\n\nThis change in $\\rho(E)$ raises an important practical question for how to do an RMT analysis, mainly how do we unfold the spectrum. First a comment on unfolding spectra. 
The semicircle level density of the GOE bears no relation to the exponential level density of realistic nuclear systems. To remove the system specific (secular) features of the level density we need to rescale the energies so that the level density is unity across the whole energy range. This process is called unfolding \\cite{guhr,brody}. Note that all the fluctuations are preserved even though the unfolded spectrum has a level density of unity. To go from a set of energies $\\{E\\}$, with density $\\rho(E)$ to an unfolded spectrum $\\{\\xi\\}$ with\ndensity $\\rho(\\xi)= 1$, we need to integrate $\\rho(E)$ to get a smooth cumulative level number ${\\mathcal N}(E)$:\n\\begin{equation}\n{\\mathcal N}(E)= \\int_{-\\infty}^{E}\\rho(E') dE'.\\nonumber\\\\\n\\end{equation}\nThe $i^{th}$ unfolded energy is simply $\\xi_i = {\\mathcal N}(E_i)$. In this analysis we unfolded the spectra using the semicircular level density for our convenience. The results were the same as when we used a numerical fit for the level density.\n\n\n\n\\begin{figure}\n\\includegraphics[width=.8\\textheight]{fig3.eps}\n\\caption{\\label{fig:rho} (Color online) Here we have the density of an ensemble of\n2000 matrices with $N=100$ and $\\kappa=0,0.4,1.2$ and 2.8. The level density is close to the semicircle of the GOE even for $\\kappa=0.4$. The deviations are consistent with Fig.~\\ref{fig:evsk} where levels migrate to the center of the energy range.}\n\\end{figure}\n\n\n\\section{The width of the energies.}\n\\label{sec:width}\nThe addition of an imaginary part to $H_{11}$ gives a width to all the levels. These widths can be treated as random variables, and their distribution examined. An ensemble of 200 matrices was prepared and opened as in Sect.~\\ref{sec:opening}. Each random matrix was the start of a sequence of 301 opened matrices with $0 \\leq \\kappa \\leq 3.0$ in steps of 0.01. The complex energies $\\varepsilon_n(\\kappa)$ were calculated. The widths $\\Gamma(\\kappa)$ of the levels were sorted and their size as a function of $\\kappa$ was examined, see Fig.~\\ref{fig:gvsk}. Immediately we see the emergence of the SR state that absorbs all the width. If this state is excluded from the plot we get a completely different behaviour, with the remaining widths having an exponential dependence on $\\kappa$. If we plot $\\bar{\\Gamma}$, the average of all but the biggest widths, vs $\\kappa$ on a log-log plot we get a very simple picture shown in Fig.~\\ref{fig:lglggam}. The straight line sections are roughly $\\ln \\bar{\\Gamma}=\\pm \\ln \\kappa -2.5$. Here the range of $\\kappa$ was from $10^{-3}$ to $10^6$. A qualitatively identical SR transition is seen in a different context in \\cite{ceka12} where the interplay of disorder and SR was examined in the context of the Anderson model.\n\n\\begin{figure}\n\\includegraphics[width=.6\\textheight] {fig4.eps}\n\\caption{\\label{fig:gvsk}When $\\kappa$ is turned on, the levels\nacquire a width, $\\Gamma$. Here we sorted the set of 300 $\\Gamma$\nfor each spectrum in an ensemble of 200. For example, $\\Gamma_3$ is\nthe 3rd largest width. The 9 lines in this plot are the ensemble\naverage of $\\Gamma_i$ vs $\\kappa$, with $i=2\\dots 10$. The inset includes the largest width, $\\Gamma_1$, which eventually becomes linear in $\\kappa$.}\n\\end{figure}\n\nThis picture of the emergence of the SR state is further reinforced by an analysis of the reduced widths. When the special state is excluded from the analysis we recover the Porter-Thomas distribution (PTD). 
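This kind of check is straightforward to reproduce numerically; a rough sketch (with an arbitrary small ensemble, $\\kappa=1.5$, and the widths taken simply as the magnitudes of the imaginary parts of the eigenvalues) is:\n\\begin{verbatim}\nimport numpy as np\nN, kappa, n_mat = 100, 1.5, 200          # arbitrary choices\nreduced = []\nfor _ in range(n_mat):\n    A = np.random.randn(N, N)\n    H = ((A + A.T) \/ 2.0).astype(complex)  # GOE matrix (up to normalisation)\n    H[0, 0] -= 1j * kappa * np.sqrt(N)     # open the system via H_11\n    eps = np.linalg.eigvals(H)             # complex energies\n    g = np.sort(np.abs(eps.imag))          # widths, smallest first\n    reduced.extend(g[:-1])                 # drop the (super-radiant) largest width\nx = np.array(reduced)\nx \/= x.mean()\n# a histogram of x is then compared with exp(-x\/2)\/sqrt(2*pi*x)\n\\end{verbatim}\n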
In Fig.~\\ref{fig:pgamma} we see a log-log plot for the\ndistribution of $\\gamma \/ \\bar{\\gamma}$ with and without the 2 largest widths for an ensemble with $\\kappa=1.5$. We stress that this is as deep as we went in analyzing the distribution of widths. There could well be deviations from the PTD and for a derivation of alternative width distribution see \\cite{ShZe12}.\n\n\\begin{figure}\n\\includegraphics[width=.6\\textheight]{fig5.eps}\n\\caption{\\label{fig:lglggam} (Color online) This is a log-log plot of the average of the lines in\nFig.~\\ref{fig:gvsk}.}\n\\end{figure}\n\n\\begin{figure}\n\\includegraphics[width=.8\\textheight]{fig6.eps} \\caption{\\label{fig:pgamma}(Color online) The ensemble result for the distribution of reduced widths $p(\\gamma \/ \\bar{\\gamma})$. In red we see the full set of 2.5 million widths which are a superposition of 250000 matrices with $N=100$ and $\\kappa=1.5$. The mean of this set is $\\bar{\\gamma} = 0.15$. In black we see the results for the smallest 100 widths of 88646 matrices with $N=102$. The mean of this subset is $\\bar{\\gamma} = 0.0549$. In blue we have a plot of the function $P(x)=\\frac{1}{\\sqrt{2 \\pi x}} \\exp(-\\frac{x}{2})$. }\n\\end{figure}\n\n\\section{The spectral rigidity, $\\Delta_3(L)$.}\n\\label{sec:d3}\nThe spectral rigidity or $\\Delta_3(L)$ statistic is a common diagnostic for statistical analysis of data based on RMT. It is a robust statistic and can be used to gauge the purity of a spectrum, giving an estimate of the fraction of missed or spurious levels. It can also be used to gauge the degree to which the system is chaotic \\cite{shriner07,dyson,brody,me1}. $\\Delta_3(L)$ is defined in terms of fluctuations in the cumulative level number, ${\\mathcal N}(E)$, the number of levels with energy $\\leq E$. ${\\mathcal N}(E)$ is a staircase with each step being one unit high, and its slope is the level density $\\rho (E)$. A harmonic oscillator will have a regular staircase, with each step being one unit wide. On the other hand the quantum equivalent of a classically regular system has a random but uncorrelated spectrum. In that case ${\\mathcal N}(E)$ will have steps whose width have a Poissonian distribution. $\\Delta_3(L)$ is a measure of the spectral average deviation of ${\\mathcal N}(E)$ from a regular (constant slope) staircase, within an energy interval of length $L$. The spectral average means that the deviation is averaged over the location in the spectrum of the window. The definition is\n\\begin{eqnarray}\n\\nonumber \\Delta_{3}(L) &=&\\left \\langle {\\rm min}_{A,B}\\; \\frac{1}{L}\\;\\int^{E_i+L}_{E_i}dE'\\,[\\;{\\mathcal N}(E')-AE'-B]^{2}\\; \\;\\right\\rangle \\\\\n&=&\\langle \\delta^i_3(L) \\rangle.\\label{eq:d3}\n\\end{eqnarray}\n$A$ and $B$ are calculated for each $i$ to minimize $\\delta_3^i(L)$. The details of the exact calculation of $A$ and $B$ in terms of the energies $\\{E_i\\}$ are in \\cite{me1}. The harmonic oscillator has $\\Delta_3(L)=1\/12$. At the other extreme, a classically regular system will lead to a quantum mechanical spectrum with no level repulsion. The fluctuations will be far greater because there is no long range correlation giving $\\Delta_3(L)=L\/15$. The angle brackets mean the average is to be taken over all starting positions $E_i$ of the window of length $L$. This is a spectral average. 
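For completeness, a rough numerical sketch of this statistic (approximating the integral and the spectral average on discrete grids, with arbitrary grid sizes, and taking an already unfolded spectrum as input) is:\n\\begin{verbatim}\nimport numpy as np\ndef delta3(xi, L, n_start=200, n_grid=500):\n    # xi: unfolded spectrum, L: window length\n    xi = np.sort(xi)\n    starts = np.linspace(xi[0], xi[-1] - L, n_start)\n    vals = []\n    for s in starts:\n        E = np.linspace(s, s + L, n_grid)\n        stair = np.searchsorted(xi, E, side='right')   # staircase N(E)\n        A, B = np.polyfit(E, stair, 1)                 # least-squares line\n        vals.append(np.mean((stair - (A * E + B)) ** 2))\n    return np.mean(vals)\n\\end{verbatim}\n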
It is an amazing fact of RMT that the spectra of the GOE have huge long range correlations, indeed the GOE result is:\n\\begin{eqnarray}\n\\Delta_3(L)&=&\\frac{1}{\\pi^2}\\,\\left[\\log(2\\pi L)+\\gamma-\\frac{5}{4}-\\frac{\\pi^2}{8}\\right]\\\\\n &=&(\\log L-0.0678)\/\\pi^2\n\\label{eq:d3th},\n\\end{eqnarray}\nwith $\\gamma$ being Euler's constant. We stress that this is the RMT value for the {\\sl ensemble average} of $\\Delta_3(L)$. The graph of $\\Delta_3(L)$ will vary from matrix to matrix but the average of many such lines (the ensemble average) will rapidly converge onto Eq. \\ref{eq:d3th}. In our opened ensemble, $\\Delta_3(L)$ deviates from the GOE result. In Fig.~\\ref{fig:d3vskens} we take various fixed values of $L$ and we see how the value of $\\Delta_3(L)$ evolves with $\\kappa$. Deviations from the GOE result ($\\kappa=0$) increase slowly as $\\kappa$ changes from 0 to 1, then the deviations start to decrease. The effect is similar for all $L$ in the range we looked at, so in Fig.~\\ref{fig:d3vskensnrm} we plot all the lines of Fig.~\\ref{fig:d3vskens} but divide them by their value at $\\kappa=0$. Now we see that $\\Delta_3(L)$ increases with $\\kappa$ by a similar factor for a broad range of $L$, and furthermore this factor has a maximum value of around 1.15 which happens when $\\kappa=1$.\n\n\\begin{figure}\n\\includegraphics[width=0.6\\textheight]{fig7.eps} \\caption{\\label{fig:d3vskens}Here we see ensemble average of $\\Delta_3(L)$ vs $\\kappa$. The 23 lines are for the 23 values of $L$, with $5 \\leq L \\leq 50$ in steps of 2. There are 200 spectra in the ensemble, with $N=300$.}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\includegraphics[width=0.6\\textheight]{fig8.eps}\n\\caption{\\label{fig:d3vskensnrm} (Color online) Here we see ensemble average of\n$\\Delta_3(L,\\kappa)$ vs $\\kappa$ for $20 \\leq L \\leq 50$, but now\neach line is divided by $\\Delta_3(L,0)$. The lower $L$ values are not as\nsensitive to $\\kappa$ as the window of $L$ levels is too narrow to\nprobe long range correlations. So it is best to not include the\nrange $L<20$. The legend refers to the 3 bold lines in the plot. The lines for various $L$ become closer as $L$ increases.}\n\\end{figure}\n\n\n\n\\section{RMT tests for missed levels}\n\\label{sec:rmt}\nThe increase in the value of $\\Delta_3(L)$ due to opening the system could be misconstrued as evidence of spurious or missed levels. There are RMT tests for missed levels and we will apply these tests to the open spectra and see if there is any consistent picture that emerges. We will concentrate on 3 RMT tests, two of them based on $\\Delta_3(L)$ and another based on the nearest neighbor distribution (nnd).\n\nIn \\cite{mulhall11} a maximum likelihood method based on $\\Delta_3(L)$ was developed. The $\\Delta_3(L)$ statistic is the spectral average of the set of random numbers $\\delta_3^i(L)$. The distribution of these numbers $p(\\delta)$ was used as the basis of a likelihood function. The basic idea is that ${\\mathcal N}(\\delta)$, the cdf for $p(\\delta)$, is a simple function of $\\log \\delta$. This led to the following parameterization:\n\\begin{equation}\\label{eq:cdf}\n{\\mathcal N}(\\delta) = \\frac{1}{2}(1 - \\textrm{Erf}[a + b \\log \\delta + c\n(\\log \\delta)^2]).\n\\end{equation}\n An ensemble of depleted spectra was made and the parameters for ${\\mathcal N}(\\delta)$ were empirically found for a range of $L$, and $x$ the fraction of levels missed, and fitted to smooth functions $a_L(x), b_L(x)$ and $c_L(x)$. 
Differentiation of ${\\mathcal N}(\\delta)$ gives probability density for $\\delta_3^i(L)$ with $x$ as a continuous parameter:\n\\begin{equation}\\label{eq:plx}\np(\\delta,x)=-\\frac{1}{\\sqrt{\\pi }} \\exp{[-\\big(a_L(x)+b_L(x)\\,\\log \\delta+c_L(x)\n\\log \\delta^2\\big) }^2]\\,\\big(\\frac{b_L(x)}{\\delta}+\\frac{2\\ c_L(x) \\log \\delta}{\\delta}\\big).\n\\end{equation}\nThis is then used as the basis for a maximum likelihood method for determining $x$.\n\nIn \\cite{bohigas2004} Bohigas and Pato gave an expression is given for $\\Delta_3(L)$ for incomplete spectra. The fraction of missed levels $x$ is both a scaling factor and a weighting factor and $\\Delta_3(L,x)$ is the sum of the GOE and Poissonian result:\n\\begin{equation}\\label{eq:bohigas}\n \\Delta_3(L,x)=x^2\\frac{L\/x}{15}+(1-x)^2\\Delta_3^{\\textrm{GOE}}(L\/(1-x)).\n\\end{equation}\nThe $\\Delta_3(L)$ statistic of an open spectrum can be compared with this expression and the best $x$ found.\n\nThe nearest neighbor distribution (nnd) is another commonly used statistic. The nnd for a pure spectrum follows the Wigner distribution,\n\\begin{equation}\n P(s)=\\frac{\\pi}{2}se^{-\\pi s^2\/4},\n\\end{equation}\nwhere $s=S\/D$, $S$ being the spacing between adjacent levels, and $D$ is the average spacing ($D=1$ for an unfolded spectrum). The nnd of a spectrum incomplete by a fraction $x$ is given by\n\\begin{equation}\nP(s)=\\sum_{k=0}^{\\infty} (1-x) x^k P(k;s);\\label{eq:pofsx}\n\\end{equation}\nwhere $P(k;s)$ is the $k^{th}$ nearest neighbor spacing, $E_{k+i}-E_i$. This was first introduced as an ansatz in \\cite{watson81}, and rederived in \\cite{agv} and \\cite{bohigas2004}. Eq.~\\ref{eq:pofsx} was used by Agvaanluvsan et al as the basis for a maximum likelihood method (MLM) to determine $x$ for incomplete spectra \\cite{agv}.\n\n\\begin{figure}\n\\includegraphics[width=.6\\textheight]{fig9.eps}\n\\caption{\\label{fig:mlm}(Color online) Here we see the results of tests to determine the fraction of levels missed in incomplete spectra vs $\\kappa$ for open but complete GOE spectra. $N$ is 300 in all cases.}\n\\end{figure}\n\nThese three tests for missed levels were applied to complete opened GOE spectra of dimension $N=300$. The value of $\\kappa$ went from 0 to 3 in steps of 0.01. The values $x$ for the fraction depleted vs $\\kappa$ are shown in Fig.~\\ref{fig:mlm}. It appears that the spectra look incomplete when the system is opened, and the effect is strongest to the tune of about 3\\% when $\\kappa=1$.\n\n\\section{Conclusion}\n\\label{sec:conc}\n A simple model for an open quantum system was realized by adding an imaginary number $\\imath \\sqrt{N} \\kappa$ to the trace of an $N\\times N$ GOE matrix. The level density deviated from the Wigner semicircle and the deviation grew with $\\kappa$. There was a drifting of some levels to the center of the spectrum at around $\\kappa=1$. A very low entropy state emerged which we identifed with a super-radiant state. The widths $\\Gamma(\\kappa)$ of the levels were sorted and a graph of the biggest 10 showed the emergence of this SR state that absorbs all the width. When the largest width is excluded the remaining widths were consistent with a Porter-Thomas distribution. Their average value had a simple exponential dependance on $\\kappa$. A plot of $\\bar{\\Gamma}$, the average of all but the biggest widths vs $\\kappa$ on a log-log plot reveals a SR transition. 
The $\\Delta_3(L)$ statistic deviated from the GOE value and looked like that of an incomplete spectra, or an impure spectra with intruder levels. Three separate tests for missed levels based on RMT consistently showed a that at $\\kappa \\approx 1$ the spectra appeared incomplete or contaminated to the tune of about 3\\%.\n\nIt is interesting that such a simple system can capture so much of the systematics of super-radiance in open quantum systems.\n\n\\begin{acknowledgments}\nWe wish to acknowledge the support of the Office of Research Services of the University of Scranton, Vladimir Zelevinsky and Alexander Volya for the original idea and for valuable discussions throughout. Yan V Fyodorov made useful comments regarding the work done on the distribution of complex energies in chaotic quantum systems. The anonymous referee significantly improved the clarity of the presentation with constructive comments.\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\nTraditionally, detectors used in general object detection have been applied in a discrete multi-scale sliding-window manner. This enables global search of the optimal warp parameters (object scale and position within the source image), at the expense of only being able to handle these simple transformations. Gradient-based approaches such as Lucas Kanade (LK)~\\cite{baker_IJCV_2004}, on the other hand, can entertain more complex warp parametrizations such as rotations and changes in aspect ratio, but impose the constraint that the image function be smooth and differentiable (analytically or efficiently numerically).\n\nThis constraint is generally satisfied for pixel-based representations that follow natural image statistics~\\cite{simoncelli_NEUROSCIENCE_2001}, especially on constrained domains such as faces, which are known to exhibit low-frequency gradients~\\cite{cootes_report_2004}. For broader object categories that exhibit large intra-class variation and discriminative gradient information in the higher-frequencies (\\emph{i.e}\\onedot} \\def\\Ie{\\emph{I.e}\\onedot the interaction of the object with the background) however, non-linear feature transforms that introduce tolerance to contrast and geometry are required. These transforms violate the smoothness requirement of gradient-based methods.\n\nAs a result, the huge wealth of research into gradient-based methods for facial image alignment has largely been ignored by the broader vision community. In this paper, we show that the LK objective can be modified to handle non-linear feature transforms. Specifically, we show,\n\\begin{itemize}\n\\item descent directions on feature images can be computed via linear regression to avoid any assumptions about their statistics,\n\\item for least-squares regression, the formulation can be interpreted as an efficient convolution operation,\n\\item localization results on images from ImageNet using higher-order warp parame-trizations than scale and translation,\n\\item an extension to unsupervised joint alignment of a corpus of images.\n\\end{itemize}\n\nBy showing that gradient-based methods can be applied to non-linear image transforms more generally, the huge body of research in image alignment can be leveraged for general object alignment.\n\n\\section{Image Alignment}\nImage alignment is the problem of registering two images, or parts of images, so that their appearance similarity is maximized. 
It is a difficult problem in general, because (i) the deformation model used to parametrize the alignment can be high-dimensional, (ii) the appearance variation between instances of the object category can be large due to differences in lighting, pose, non-rigid geometry and background material, and (iii) search space is highly non-convex.\n\n\\subsection{Global Search}\nFor localization of general object categories, the solution has largely been to parametrize the warp by a low-dimensional set of parameters -- $x,y$-translation and scale -- and exhaustively search across the support of the image for the best set of parameters using a classifier trained to tolerate lighting variation and changes in pose and geometry. Though not usually framed in these terms, this is exactly the role of multi-scale sliding-window detection.\n\nHigher-dimensional warps have typically not been used, due to the exponential explosion in the size of the search space. This is evident in graphical models, where it is only possible to entertain a restrictive set of higher-dimensional warps: those that are amenable to optimization by dynamic programming~\\cite{felzenszwalb_PAMI_2010}. A consequence of this limitation is that sometimes underlying physical constraints cannot be well modelled:~\\cite{zhu_CVPR_2012} use a tree to model parts of a face, resulting in floating branches and leaf nodes that do not respect or approximate the elastic relationship of muscles.\n\nA related limitation of global search is the speed with which warp parametrizations can be explored. Searching over translation can be computed efficiently via convolution, however there is no equivalent operator for searching affine warps or projections onto linear subspaces.\n\n\\cite{lankinen_BMVC_2011} introduced a global method for gaining correspondence between images from general object categories -- evaluated on Pascal VOC -- based on homography consensus of local non-linear feature descriptors. They claim performance improvements over state-of-the-art congealing methods, but their only qualitative assessment is on rigid objects, so it is difficult to gauge how well their method generalizes to non-rigid object classes.\n\nA related problem is that of co-segmentation~\\cite{dai_ICCV_2013}, which aims to learn coherent segmentations across a corpus of images by exploiting similarities between the foreground and background regions in these images. Such global methods are slow, but could be used as an effective initializer for local image alignment (in the same way that face detection is almost universally used to initialize facial landmark localization).\n\n\\subsection{Local Search}\nLocal search methods perform alignment by taking constrained steps on the image function directly. The family of Lucas Kanade algorithms consider a first-order Taylor series approximation to the image function and locally approximate its curvature with a quadratic. Convergence to a minima follows if the Jacobian of the linearization is well-conditioned and the function is smooth and differentiable. Popular non-linear features such as Dense SIFT~\\cite{liu_PAMI_2011}, HOG~\\cite{dalal_triggs_CVPR_2005} and LBP~\\cite{ojala_PAMI_2002} are non-differentiable image operators. 
Unlike pixel representations whose $\\frac{1}{f}$ frequency spectra relates the domain of the optimization basin to the amount of blur introduced, these non-linear operators do not have well-understood statistical structure.\n\nCurrent state-of-the art local search methods that employ non-linear features for face alignment instead use a cascade of regression functions, in a similar manner to Iterative Error Bound Minimization (IEBM)~\\cite{saragih_ICPR_2006}. A common theme of these methods~\\cite{kazemi_CVPR_2014,ren_CVPR_2014,xiong_CVPR_2013} is that they directly regress to positional updates. This sidesteps issues with differentiating image functions, or inverting Hessians. The drawback, however, is that they require vast amounts of training data to produce well-conditioned regressors. This approach is feasible for facial domain data that can be synthesized and trained offline in batch to produce fast runtime performance, but becomes impractical when performing alignment on arbitrary object classes, which have traditionally only had weakly labelled data.\n\nThe least squares congealing alignment algorithm~\\cite{cox_ICCV_2009}, for example, has no prior knowledge of image landmarks, and learning positional update regressors for each pixel in each image is not only costly, their performance is poor when using only the surrounding image context as training data.\n\n\\cite{huang_ICCV_2007} first proposed the use of non-linear transforms (SIFT descriptors in their case) for the congealing image alignment problem, noting like us, that pixel-based representations do not work on sets of images that exhibit high contrast variance. Their entropy-based algorithm treats SIFT descriptors as stemming from a multi-modal Gaussian distribution, and clusters the regions, at each iteration finding the transform that minimizes the cluster entropy. As~\\cite{cox_ICCV_2009} pointed out, however, employing entropy for congealing is problematic due to its poor optimization characteristics. As a result, the method of~\\cite{huang_ICCV_2007} is slow to converge.\n\nThe related field of medical imaging has a large focus on image registration for measuring brain development, maturation and ageing, amongst others. \\cite{jia_CVPR_2010,ying_NEURO_2014} present methods for improving the robustness of unsupervised alignment by embedding the dataset in a graph, with edges representing similarity of images. Registration then proceeds by minimizing the total edge length of the graph. This improves the capture of images which are far from the dataset mean, but which can be found by traversing through intermediate images. Their application domain -- brain scans -- is still highly constrained, permitting the estimation of geodesic distances between images in pixel space. Nonetheless, this type of embedding is beyond what generic congealing algorithms have achieved.\n\nFor general image categories, we instead propose to compute descent directions via appearance regression. 
The advantage of this approach is that the size of the regression formulation is independent of the dimensionality of the feature transform, so can be inverted with a small amount of training data.\n\n\\section{Problem Formulation}\nThe Inverse Compositional Lucas Kanade problem can be formulated as,\n\\begin{alignat}{4}\n& \\arg\\min_{\\boldsymbol{\\Delta}\\p} &&\\;\\; || \\mathbf{T}(\\mathbf{W}(\\mathbf{x}; \\mathbf{p})) - \\mathbf{I}(\\mathbf{W}(\\mathbf{x}; \\boldsymbol{\\Delta}\\p)) ||^2_2\n\\end{alignat}\nwhere $\\mathbf{T}$ is the reference template image, $\\mathbf{I}$ is the image we wish to warp to the template and $\\mathbf{W}$ is the warp parametrization that depends on the image coordinates $\\mathbf{x}$ and the warp parameters $\\mathbf{p}$. This is a nonlinear least squares (NLS) problem since the image function is highly non-convex.\nTo solve it, the role of the template and the image is reversed and the expression is linearized by taking a first-order Taylor expansion about $\\mathbf{T}(\\mathbf{W}(\\mathbf{x}; \\mathbf{p}))$ to yield,\n\\begin{alignat}{4}\n& \\arg\\min_{\\boldsymbol{\\Delta}\\p} && \\;\\; || \\mathbf{T}(\\mathbf{W}(\\mathbf{x}; \\mathbf{p})) + \\nabla\\mathbf{T} \\frac{\\partial \\W}{\\partial \\p} \\boldsymbol{\\Delta}\\p - \\mathbf{I}(\\mathbf{x}) ||^2_2\n\\end{alignat}\n\n$\\nabla \\mathbf{T} = (\\frac{\\partial T}{\\partial x}, \\frac{\\partial T}{\\partial y})$ is the gradient of the template evaluated at $\\mathbf{W}(\\mathbf{x}; \\mathbf{p})$. $\\frac{\\partial \\W}{\\partial \\p}$ is the Jacobian of the template. The update $\\boldsymbol{\\Delta}\\p$ describes the optimal alignment of $\\mathbf{T}$ to $\\mathbf{I}$. The inverse of $\\boldsymbol{\\Delta}\\p$ is then composed with the current estimate of the parameters,\n\\begin{align}\n\\mathbf{p}_{k+1} = \\mathbf{p}_{k} \\circ \\boldsymbol{\\Delta}\\p^{-1}\n\\end{align}\n and applied to $\\mathbf{I}$.\n\nThe implication is that we always linearize the expression about the template $\\mathbf{T}$, but apply the (inverse of the) motion update $\\boldsymbol{\\Delta}\\p$ to the image $\\mathbf{I}$. The consequence of this subtle detail is that $\\mathbf{T}$ is always fixed, and thus the gradient operator $\\nabla \\mathbf{T}$ only ever needs to be computed once~\\cite{baker_CVPR_2001}. This property extends to our regression framework, where the potentially expensive regressor training step can also happen just once, before alignment.\n\nFor non-linear multi-channel image operators, we can replace the gradient operator $\\nabla \\mathbf{T}$ with a general matrix $\\mathbf{R}$,\n\\begin{alignat}{4}\n& \\arg\\min_{\\boldsymbol{\\Delta}\\p} && \\;\\; || \\mathbf{T}(\\mathbf{W}(\\mathbf{x}; \\mathbf{p})) + \\mathbf{R} \\frac{\\partial \\W}{\\partial \\p} \\boldsymbol{\\Delta}\\p - \\mathbf{I}(\\mathbf{x}) ||^2_2 \\label{eqn:regressor}\n\\end{alignat}\n\nThe role of this matrix is to predict a descent direction for each pixel given context from other pixels and channels. The structure of the matrix determines the types of interactions that are exploited to compute the descent directions. If the Jacobian is constant across all iterates -- as is the case with affine transforms -- it can be pre-multiplied with the regressor so that solving each linearization involves only a single matrix multiplication.\n\n\\subsection{Fast Regression}\nWe now discuss a simple least squares strategy for learning $\\mathbf{R}$. 
If we consider only a translational warp, the expression of \\eqn{regressor} reduces to,\n\\begin{alignat}{4}\n& \\arg\\min_{\\boldsymbol{\\Delta}\\x} && \\;\\; || \\mathbf{T}(\\mathbf{x}) + \\mathbf{R} \\boldsymbol{\\Delta}\\x - \\mathbf{I}(\\mathbf{x}) ||^2_2 \\label{eqn:translation}\n\\end{alignat}\nwhere $\\boldsymbol{\\Delta}\\x = \\boldsymbol{\\Delta}\\p = (\\Delta x, \\Delta y)$. That is, we want to find the step size along the descent direction that minimizes the appearance difference between the template and the image. If we instead fix the $\\boldsymbol{\\Delta}\\x$, we can solve for the $\\mathbf{R}$ that minimizes the appearance difference,\n\\begin{alignat}{4}\n& \\arg\\min_{\\mathbf{R}} && \\;\\; \\sum_{\\boldsymbol{\\Delta}\\x \\in \\mathcal{D}} || \\mathbf{T}(\\mathbf{x}) + \\mathbf{R} \\boldsymbol{\\Delta}\\x - \\mathbf{T}(\\mathbf{x}+\\boldsymbol{\\Delta}\\x) ||^2_2 \\label{eqn:regressors}\n\\end{alignat}\nHere we have replaced $\\mathbf{I}(\\mathbf{x})$ with the template at the known displacement, $\\mathbf{T}(\\mathbf{x} + \\boldsymbol{\\Delta}\\x)$. The domain of displacements $\\mathcal{D}$ that we draw from for training balances small-displacement accuracy and large-displacement stability. Of course, least-squares regression is not the only possible approach. One could, for example, use support vector regression (SVR) when outliers are particularly problematic with a commensurate increase in computational complexity.\n\nEach regressor involves solving the system of equations:\n\\begin{alignat}{4}\n& \\arg\\min_{\\mathbf{R}_i} && \\;\\; \\sum_{\\boldsymbol{\\Delta}\\x \\in \\mathcal{D}_i} || \\mathbf{T}(\\mathbf{x}_i) + \\mathbf{R}_i \\boldsymbol{\\Delta}\\x - \\mathbf{T}(\\mathbf{x}_i+\\boldsymbol{\\Delta}\\x) ||^2_2\n\\end{alignat}\nwhere $i$ represents the $i$-th pixel location in the image. If the same domain of displacements is used for each pixel, the solution to this objective can be computed in closed form as\n\\begin{alignat}{4}\n& \\mathbf{R}^*_i &=& \\left( \\boldsymbol{\\Delta}\\x \\boldsymbol{\\Delta}\\x^T + \\rho\\mathbf{I}\\right)^{-1} \\left(\\boldsymbol{\\Delta}\\x^T \\left[\\mathbf{T}(\\mathbf{x}_i+\\boldsymbol{\\Delta}\\x) - \\mathbf{T}(\\mathbf{x}_i)\\right]\\right)\n\\end{alignat}\n\nThe first thing to note is that $(\\boldsymbol{\\Delta}\\x^T \\boldsymbol{\\Delta}\\x + \\rho\\mathbf{I})^{-1}$ is a $2 \\times 2$ matrix dependent only on the domain size chosen, and not on pixel location, and can thus be inverted once and for all.\n\nThe $\\boldsymbol{\\Delta}\\x^T \\left[\\mathbf{T}(\\mathbf{x}_i+\\boldsymbol{\\Delta}\\x) - \\mathbf{T}(\\mathbf{x}_i)\\right]$ term within the expression is just a sum of weighted differences between a displaced pixel, and the reference pixel. \\emph{i.e}\\onedot} \\def\\Ie{\\emph{I.e}\\onedot,\n\\begin{align}\n\\left[ \\begin{array}{c}\n\\sum_{\\Delta x} \\sum_{\\Delta y} \\Delta x ( \\mathbf{T}(x + \\Delta x, y + \\Delta y) - \\mathbf{T}(x, y) \\\\\n\\sum_{\\Delta x} \\sum_{\\Delta y} \\Delta y ( \\mathbf{T}(x + \\Delta x, y + \\Delta y) - \\mathbf{T}(x, y) \n\\end{array} \\right] \\label{eqn:single-regressor-exploded}\n\\end{align}\n\nOther regression-based methods of alignment such as~\\cite{xiong_CVPR_2013} leverage tens of thousands of warped training examples during offline batch learning to produce fast runtime performance on a single object category (faces). We cannot afford such complexity if we're going to perform regression and alignment on arbitrary object categories without a dedicated training time. 
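\n\nFor concreteness, the per-pixel closed-form solution above amounts to a few lines of NumPy. The following is a minimal sketch of our own (boundary handling, the value of $\\rho$ and the variable names are illustrative only, assuming the template is a two-dimensional grayscale array):\n\\begin{verbatim}\nimport numpy as np\n\ndef pixel_regressor(T, x, y, offsets, rho=1e-3):\n    # R*_i = (sum dX dX^T + rho I)^{-1} (sum dX [T(x_i + dX) - T(x_i)])\n    # `offsets` is the training domain D of integer displacements (dx, dy).\n    A = np.zeros((2, 2))\n    b = np.zeros(2)\n    t0 = T[y, x]\n    for dx, dy in offsets:\n        d = np.array([dx, dy], dtype=float)\n        A += np.outer(d, d)                 # accumulate dX dX^T\n        b += d * (T[y + dy, x + dx] - t0)   # accumulate dX * appearance difference\n    return np.linalg.solve(A + rho * np.eye(2), b)  # two descent weights for this pixel\n\\end{verbatim}\nApplying this at every pixel (and every feature channel) yields the rows of $\\mathbf{R}$; the next paragraph shows how the same computation collapses into two convolutions when the displacements lie on a regular grid.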
\n\nIf we sample $\\boldsymbol{\\Delta}\\x$ on a regular grid that coincides with pixel locations, then \\eqn{single-regressor-exploded} can be cast as two filters -- one each for horizontal weights $\\Delta x$, and vertical weights $\\Delta y$,\n\n\\begin{align}\nf_x = \\left[ \\begin{array}{ccc}\nx_{-n} & \\dots & x_n \\\\\n\\vdots &&\\\\\nx_{-n} & \\dots & x_n\n\\end{array} \\right]\n\\;\\;\\;\\;\\;\\;\nf_y = \\left[ \\begin{array}{ccc}\ny_{-n} & \\dots & y_{-n} \\\\\n\\vdots &&\\\\\ny_n & \\dots & y_n\n\\end{array} \\right]\n\\end{align}\n\nIf the $x$ and $y$ domains are both equal and odd, the contribution of $\\mathbf{T}(x,y)$ is cancelled out. This is clearly a generalization of the central difference operator, which considers a domain of $[-1, 1]$, and forms the filters,\n\\begin{align}\nf_x = \\left[ \\begin{array}{ccc}\n-1 & 0 & 1\n\\end{array} \\right]\n\\;\\;\\;\\;\\;\\;\nf_y = \\left[ \\begin{array}{c}\n-1 \\\\ \\phantom{+}0 \\\\ 1\n\\end{array} \\right]\n\\end{align}\n\nThus, an efficient realization for learning a regressor at every pixel in the image is,\n\\begin{alignat}{4}\n\\mathbf{R} = (\\boldsymbol{\\Delta}\\x^T \\boldsymbol{\\Delta}\\x + \\rho\\mathbf{I})^{-1} \\left[ f_x \\ast \\mathbf{T}(\\mathbf{x}) \\;\\;\\; f_y \\ast \\mathbf{T}(\\mathbf{x}) \\right]\n\\end{alignat}\nwhere $\\ast$ is the convolution operator. For an image with $N$ pixels, $K$ channels and a warp with $P$ motion parameters, the complexity of our image alignment can be stated as a single $O(KN \\log KN + KNP)$ pre-computation of the regressor, followed by an $O(KNP)$ matrix-vector multiply and image warp per iteration, with an overall linear rate of convergence.\n\n\\subsection{Regressors on Feature Images}\nDense non-linear feature transforms can be viewed as mapping each scalar pixel in a (grayscale) image to a vector $\\mathbf{R} \\to \\mathbf{R}^K$. The added redundancy is required to decorrelate the various lighting transforms affecting the appearance of objects. Some feature transforms such as HOG~\\cite{dalal_triggs_CVPR_2005} also introduce a degree of spatial insensitivity for matching dis-similar objects, though we find in practice that alignment performance is more sensitive to lighting than geometric effects \\mbox{(\\fig{ground-truth}).} \n\nDuring alignment, spatial operations are applied across each channel independently. In particular, our regression formulation does not consider correlations between channels, so separate regressors can be learned on each feature plane of the image, then concatenated. This admits a highly efficient representation in the Fourier domain -- the filters $f_x$ and $f_y$ only need to be transformed to the Fourier domain once per image, rather than once per channel.\n\n\\begin{figure}\n\\includegraphics[trim=0 30 0 60,clip,width=\\columnwidth]{basin}\n\\begin{minipage}[t]{0.5\\textwidth}\n\\includegraphics[width=\\columnwidth]{zebra1.jpg}\n\\end{minipage}\n\\begin{minipage}[t]{0.5\\textwidth}\n\\includegraphics[width=\\columnwidth]{zebra2.jpg}\n\\end{minipage}\n\\caption{Pairwise (LK) alignment performance of different methods for increasing initialization error. The number after Dense SIFT indicates the spatial aggregation (cell) size of each SIFT descriptor. The domain is the limit of displacement magnitude from which training examples are gathered for the regressors, or the blur kernel size in the case of central differences. There is a progressive degradation in performance from SVR to least-squares regression to central differences on Dense SIFT. 
The pixel-based methods fail to converge even when close to the ground truth on challenging images such as the zebra.\n\\label{fig:ground-truth}\n}\n\\end{figure}\n\nTo illustrate the benefit of applying non-linear transforms, we performed an alignment experiment between pairs of images with ground-truth registration, and progressively increased the initialization error, measuring the overall number of trials that converged back to ground-truth (within $\\epsilon$ tolerance). Faces with labelled landmarks constitute a poor evaluation metric because of the proven capacity for pixel representations to perform well. Instead, we adopted the following strategy for defining ground-truth image pairs for general object classes: we manually sampled similar images from ImageNet and visually aligned them w.r.t\\onedot} \\def\\dof{d.o.f\\onedot an affine warp, then ran both LK and SIFT Flow at the ``ground-truth'' and asserted they did not diverge from the initialization (refining the estimate and iterating where necessary).\n\nFor each value of the initialization error, we ran $1000$ trials. \\fig{ground-truth} presents the results, with a representative pair of ground-truth images. There is a progressive degradation in performance from SVR to least-squares regression to central differences on all of the Dense SIFT trials.\n\nImportantly, the pixel-based trials fail to converge even close to the ground-truth -- the background distractors and differences between the zebras dominate the appearance, which results in incoherent descent predictions. At the other end of the spectrum, SVR consistently outperforms least-squares regression by a large margin, indicative that a large number of sample outliers exist over both small and large domain sizes. This highlights the benefit of treating alignment as a regression problem rather than computing numeric approximations to the gradient (\\emph{i.e}\\onedot} \\def\\Ie{\\emph{I.e}\\onedot central differences), and suggests that excellent performance can be achieved with commensurate increase in computational complexity. \n\n\\section{Experiments}\nIn all of our alignment experiments, we extract densely sampled SIFT features~\\cite{liu_PAMI_2011} on a regular grid with a stride of $1$ pixel. We cross-validated the spatial aggregation (cell) size, and found $4 \\times 4$ regions to work best for least-squares regression, and $8 \\times 8$ regions to work best for SVR. Whilst the method is certainly applicable to HOG and other feature transforms, we consider here only Dense SIFT. In the visualizations that follow, results are presented using least-squares regression. \n\n\\begin{figure}\n\\includegraphics[width=\\columnwidth]{gallery}\n\\caption{Representative pairwise alignments. Column-wise from left to right: (i) The template region of interest. (ii) The image we wish to align to the template. The bounding box initialization covers $\\approx 50\\%$ of the image area, to reflect the fact that objects of interest rarely fill the entire area of an image. (iii) The predicted region that best aligns the image to the template. The four examples exhibit robustness to changes in pose, rotation, scale and translation, respectively.\n\\label{fig:gallery}}\n\\end{figure}\n\n\\subsection{Pairwise Image Alignment}\nWe test the performance of our algorithm on a range of animal object categories drawn from ImageNet. In \\fig{gallery}, the first column is the template image. If no bounding box is shown, the whole image is used as the template. 
The second column shows the image we wish to align, with the initialization bounding the middle $50\\%$ of pixels -- owing to the fact that photographers rarely frame the object of interest to consume the entire image area. The third column shows the converged solutions. In all of the cases shown, pixel-based representations failed to converge.\n\n\\subsection{Unsupervised Localization}\nThe task of unsupervised localization is to discover the bounding boxes of objects of interest in a corpus of images with only their object class labelled. In approaches such as Object Centric Pooling~\\cite{russakovsky_ECCV_2012}, a detector is optimized jointly with the estimated locations of bounding boxes. Importantly, bounding box candidates are sampled in a multi-scale sliding-window manner, perhaps across a fixed number of aspect ratios. Exhaustive search cannot handle more complex search spaces, such as rotations.\n\nGradient-based methods derived from the Lucas Kanade algorithm such as least squares congealing~\\cite{cox_ICCV_2009} and RASL~\\cite{peng_PAMI_2012} have performed well on constrained domains (\\emph{e.g}\\onedot} \\def\\Eg{\\emph{E.g}\\onedot faces, digits, building fa\u00e7ades), but not on general object categories. Here we show that our feature regression framework can be applied to perform unsupervised localization.\n\nThe RASL algorithm performs alignment by attempting to minimize the rank of the overall stack. This only applies to linearly correlated images, however. General object categories that exhibit large appearance variation and articulated deformations are unlikely to form a low-rank basis even when aligned. The introduction of feature transforms also explodes the dimensionality of the problem, making SVD computation infeasible. Finally, RASL has a narrow basin of convergence, requiring that the misalignment can be modelled by the error term so that the low rank term is not simply an average of images in the stack (which is known to result in poor convergence properties~\\cite{cox_ICCV_2009}).\n\nWe therefore present results using the least squares congealing algorithm. It scales to large numbers of feature images, shares the same favourable inverse compositional properties as Lucas Kanade, and is robust to changes in illumination via dense SIFT features.\n\n\\fig{congealing} shows the results of aligning a set of elephants. Recall that there is no oracle or ground truth -- the elephants are ``discovered'' merely as the region the aligns most consistently across the entire image stack. \\fig{mean} illustrates the stack mean before and after congealing. Even though individual elephants appear in different poses, the aligned mean clearly elicits an elephant silhouette.\n\n\\begin{figure}\n\\includegraphics[width=\\textwidth]{congealed}\n\\caption{The results of unsupervised ensemble alignment (congealing) on a set of 170 elephants taken from ImageNet. The objective is to jointly minimize the appearance difference between all of the images in a least-squares sense -- no prior information or training is required. The first 6 rows present exemplar images from the set that converged. 
The final row presents a number of failure cases.\n\\label{fig:congealing}}\n\\end{figure}\n\n\\begin{figure}\n\\begin{minipage}[t]{0.5\\textwidth}\n\\includegraphics[width=\\columnwidth]{mean_before_alignment}\n\\end{minipage}\n\\begin{minipage}[t]{0.5\\textwidth}\n\\includegraphics[width=\\columnwidth]{mean_after_alignment}\n\\end{minipage}\n\\caption{The mean image (i) before alignment and, (ii) after alignment w.r.t\\onedot} \\def\\dof{d.o.f\\onedot an affine warp. Although individual elephants undergo different non-rigid deformations, one can make out an elephant silhouette in the aligned mean.\n\\label{fig:mean}}\n\\end{figure}\n\n\\newpage\n\\section{Conclusion}\nImage alignment is a fundamental problem for many computer vision tasks, however a large portion of the research that has focussed on alignment in the facial domain has not generalized well to broader image categories. As a result, exhaustive search strategies have dominated general image alignment. In this paper, we showed that regression over image features could be used within a Lucas Kanade framework to robustly align instances of objects differing in pose, illumination, size and position, and presented a range of results from ImageNet categories. We also demonstrated an example of unsupervised image alignment, whereby the appearance of an elephant was automatically discovered in a large number of images. Our future work aims to parametrize more complex warps so that objects can be matched across greater pose and viewpoint variation.\n\n\\bibliographystyle{ieee}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn 1992, Iwaniec and Sbordone \\cite{Iw} introduced grand Lebesgue spaces \nL^{p)}\\left( \\Omega \\right) $, ($10$, equipped with the\nLuxemburg nor\n\\begin{equation*}\n\\left\\Vert f\\right\\Vert _{p(.)}=\\inf \\left\\{ \\lambda >0:\\varrho _{p\\left(\n.\\right) }\\left( \\frac{f}{\\lambda }\\right) \\leq 1\\right\\} \\text{,}\n\\end{equation*\nwhere $\\varrho _{p(.)}(f)=\\dint\\limits_{\\Omega }\\left\\vert f(x)\\right\\vert\n^{p(x)}d\\mu \\left( x\\right) .$ The space $L^{p(.)}(\\Omega )$ is a Banach\nspace with respect to $\\left\\Vert .\\right\\Vert _{p(.)}$. Moreover, the norm \n\\left\\Vert .\\right\\Vert _{p(.)}$ coincides with the usual Lebesgue norm \n\\left\\Vert .\\right\\Vert _{p}$ whenever $p(.)=p$ is a constant function. Let \np^{+}<\\infty $. Then $f\\in L^{p(.)}(\\Omega )$ if and only if $\\varrho\n_{p(.)}(f)<\\infty $.\n\\end{definition}\n\n\\begin{definition}\nLet $\\theta >0.$ The grand variable exponent Lebesgue spaces $L^{p(.),\\theta\n}\\left( \\Omega \\right) $ is the class of all measurable functions for whic\n\\begin{equation*}\n\\left\\Vert f\\right\\Vert _{p(.),\\theta }=\\sup_{0<\\varepsilon\n0$ there is a $g\\in C_{0}^{\\infty }(\\Omega )$ such tha\n\\begin{equation}\n\\left\\Vert f-g\\right\\Vert _{p(.),\\theta }<\\eta . \\label{3.5}\n\\end{equation\nBy the previous step, there is an $n_{0}$ such tha\n\\begin{equation}\n\\left\\Vert g_{av}-\\frac{1}{n}\\tsum\\limits_{j=0}^{n-1}g\\circ T^{j}\\right\\Vert\n_{p(.)-\\varepsilon }<\\eta \\label{3.6}\n\\end{equation\nfor $n\\geq n_{0}$ and $\\varepsilon \\in \\left( 0,p^{-}-1\\right) $. Hence, we\nhave \n\\begin{equation}\n\\left\\Vert g_{av}-\\frac{1}{n}\\tsum\\limits_{j=0}^{n-1}g\\circ T^{j}\\right\\Vert\n_{p(.),\\theta }<\\eta \\label{3.7}\n\\end{equation\nby (\\ref{3.6}) and the definition of the norm $\\left\\Vert .\\right\\Vert\n_{p(.),\\theta }$. 
This follows from (\\ref{3.4}), (\\ref{3.5}) and (\\ref{3.7})\ntha\n\\begin{eqnarray*}\n\\left\\Vert f_{av}-\\frac{1}{n}\\tsum\\limits_{j=0}^{n-1}f\\circ T^{j}\\right\\Vert\n_{p(.),\\theta } &\\leq &\\left\\Vert f_{av}-g_{av}\\right\\Vert _{p(.),\\theta\n}+\\left\\Vert g_{av}-\\frac{1}{n}\\tsum\\limits_{j=0}^{n-1}g\\circ\nT^{j}\\right\\Vert _{p(.),\\theta } \\\\\n&&+\\left\\Vert \\frac{1}{n}\\tsum\\limits_{j=0}^{n-1}\\left( f-g\\right) \\circ\nT^{j}\\right\\Vert _{p(.),\\theta } \\\\\n&\\leq &2\\left\\Vert f-g\\right\\Vert _{p(.),\\theta }+\\left\\Vert g_{av}-\\frac{1}\nn}\\tsum\\limits_{j=0}^{n-1}g\\circ T^{j}\\right\\Vert _{p(.),\\theta } \\\\\n&<&\\frac{\\eta }{2}+\\frac{\\eta }{2}=\\eta .\n\\end{eqnarray*\nThat is the desired result.\n\\end{proof}\n\n\\bigskip\n\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\n\\section{Introduction}\n\\input{sections\/introduction.tex}\n\\section{Related Work}\n\\input{sections\/related.tex}\n\n\n\\section{DAHash}\n\\input{sections\/differentiatedCost.tex}\n\n\\section{Stackelberg Game}\\label{sec: Game}\n\\input{sections\/stackelberg.tex}\n\n\n\\section{Attacker and Defender Strategies} \\label{sec:Analysis}\n\\input{sections\/analysis.tex}\n\n\n\\section{Empirical Analysis} \\label{sec:empirical}\n\\input{sections\/simulation.tex}\n\n\\section{Conclusions}\n\\input{sections\/conclusion.tex}\n\n\\section*{Acknowledgment}\n\\input{sections\/ack.tex}\n\\bibliographystyle{splncs04}\n\n\n\\section{Password Distribution}\n\n\nWe let $\\mathcal{P} = \\{pw_1,pw_2,\\ldots,\\}$ be the set of all possible user-chosen passwords. We will assume that passwords are sorted so that $pw_i$ represents the $i$'th most popular password. Let $\\Pr[pw_i]$ denote the probability that a random user selects password $pw_i$ we have a distribution over $\\mathcal{P}$ with $\\Pr[pw_1] \\geq \\Pr[pw_2] \\geq \\ldots $ and $\\sum_i \\Pr[pw_i] = 1$. Given a user $u$ we will also use $pw_u$ to denote the password selected by this user. We assume that the attacker knows the distribution over $\\mathcal{P}$ from which the password is selected, but does not know the particular password $pw_u$. In particular, we assume that the attacker knows $pw_i$ and $\\Pr[pw_i]$ for each $i \\geq 1$. We argue that it is a reasonable assumption because the attacker will have access to a massive amount of training data from prior password breaches and can also use sophisticated tools such as Probabilistic Context Free Grammars~\\cite{SP:WAMG09,SP:KKMSVB12,NDSS:VerColTho14}, Markov Chain Models~\\cite{NDSS:CasDurPer12,Castelluccia2013,SP:MYLL14,USENIX:USBCCKKMMS15} and even neural networks~\\cite{USENIX:MUSKBCC16} to generate likely password guesses (approximately accurate). Furthermore, an attacker who is trying to crack all passwords in the dataset $D$ can simply keep track of the passwords that s\/he has already cracked along with their frequency. Any password that has already been observed in the dataset is likely to be repeated. For example, the breached LinkedIn dataset contained $170+$ million passwords, but only $12.29\\%$ of those passwords were unique (chosen by only one user).\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{The Password Distribution}\n\\revision{\nOne of the challenges in evaluating DAHash is that the exact distribution over user passwords is unkown. However, there are many empirical password datasets available due to password breaches. We describe two methods for deriving password distributions from password datasets. 
\n\n\\subsubsection{Empirical Password Datasets} We consider nine empirical password datasets (along with their size $N$): Bfield ($0.54$ million), Brazzers ($0.93$ million), Clixsense ($2.2$ million), CSDN ($6.4$ million), LinkedIn ($174$ million), Neopets ($68.3$ million), RockYou ($32.6$ million), 000webhost ($153$ million) and Yahoo! ($69.3$ million). Plaintext passwords are available for all datasets except for the differentially private LinkedIn~\\cite{CS:HMBSD20} and Yahoo!~\\cite{SP:Bonneau12,NDSS:BloDatBon16} frequency corpuses, which intentionally omit passwords. With the exception of the Yahoo! frequency corpus, all of the datasets are derived from password breaches. The differentially private LinkedIn dataset is derived from cracked LinkedIn passwords \\footnote{The LinkedIn dataset is derived from 174 million (out of 177.5 million) password hashes which were cracked by KoreLogic~\\cite{CS:HMBSD20}. Thus, the dataset omits the roughly $2\\%$ of passwords that were never cracked. Another caveat is that the LinkedIn dataset only contains $164.6$ million unique e-mail addresses so there are some e-mail addresses with multiple associated password hashes.}. Formally, given $N$ user accounts $u_1,\\ldots, u_N$, a dataset of passwords is a list $D = pw_{u_1},\\ldots,pw_{u_N} \\in \\mathcal{P}$ of the passwords each user selected. We can view each of these passwords $pw_{u_i}$ as being sampled from some unknown distribution $D_{real}$. \n\n\n\\vspace{-0.5cm}\n\\subsubsection{Empirical Distribution.} Given a dataset of $N$ user passwords, the corresponding password frequency list is simply a list of numbers $f_1 \\geq f_2 \\geq \\ldots $ where $f_i$ is the number of users who selected the $i$th most popular password in the dataset --- note that $\\sum_{i} f_i = N$. In the empirical password distribution we define the probability of the $i$th most likely password to be $\\hat{p}_i= f_i\/N$. In our experiments using the empirical password distribution we will set $D_{train}=D_{eval}$ i.e., we assume that the empirical password distribution is the real password distribution and that the defender knows this distribution. \n\nIn our experiments we implement \\hard{} by partitioning the password dataset $D_{train}$ into $\\tau$ groups $G_1,\\ldots, G_\\tau$ using $\\tau-1$ frequency thresholds $t_1 > \\ldots > t_{\\tau-1}$ i.e., $G_1 = \\{i:f_i \\geq t_1\\}$, $G_{j} = \\{i: t_{j-1} > f_i \\geq t_j \\} $ for $1< j < \\tau$ and $G_\\tau = \\{i: f_i < t_{\\tau-1}\\}$. Fixing a hash cost vector $\\vec{k}=(k_1,\\ldots, k_{\\tau})$ we will assign passwords in group $G_j$ to have cost $k_j$ i.e., \\hard{pw}$=k_j$ for $pw \\in G_j$. We pick the thresholds to ensure that the probability mass $Pr[G_j] = \\sum_{i \\in G_j} f_i\/N$ of each group is approximately balanced (without separating passwords in an equivalence set). While there are certainly other ways that \\hard{} could be implemented (e.g., balancing the number of passwords\/equivalence sets in each group) we found that balancing the probability mass was most effective. }\n \n\\revision{ \\noindent {\\bf Good-Turing Frequency Estimation.} \nOne disadvantage of using the empirical distribution is that it can often overestimate the success rate of an adversary. For example, let $\\hat{\\lambda}_B:= \\sum_{i=1}^B \\hat{p}_i$ and let $N^{\\prime} \\leq N$ denote the number of distinct passwords in our dataset; then we will {\\em always } have $\\hat{\\lambda}_{N^{\\prime}}:=\\sum_{i\\leq N^{\\prime}} \\hat{p}_i = 1$, which is inaccurate whenever $N \\leq \\left|\\mathcal{P}\\right|$. 
However, when $B \\ll N$ we will have $\\hat{\\lambda}_B \\approx \\lambda_B$ i.e., the empirical distribution will closely match the real distribution. Thus, we will use the empirical distribution to evaluate the performance of DAHash when the value to cost ratio $v\/C_{max}$ is smaller (e.g, $v\/C_{max} \\ll 10^8$) and we will highlight uncertain regions of the curve using Good-Turing frequency estimation. \n\nLet $N_f = \\vert\\{i : f_i=f \\}\\vert$ denote number of distinct passwords in our dataset that occur exactly $f$ times and let $B_f= \\sum_{i > f} N_i$ denote the number of distinct passwords that occur more than $f$ times. Finally, let $E_f := |\\lambda_{B_f} - \\hat{\\lambda}_{N_{B_f}}|$ denote the error of our estimate for $\\lambda_{B_{ f}}$, the total probability of the top $B_{f}$ passwords in the real distribution. If our dataset consists of $N$ independent samples from an unknown distribution then Good-Turing frequency estimation tells us that the total probability mass of all passwords that appear exactly $f$ times is approximately $U_f:=(f+1)N_{f+1}\/N$ e.g., the total probability mass of unseen passwords is $U_0 = N_1\/N$. This would imply that ${\\lambda}_{B_f} \\geq 1 - \\sum_{j=0}^f U_j = 1-\\sum_{j=0}^i \\frac{(j+1)N_{j+1}}{N}$ and $E_f \\leq U_f$.\n\nThe following table plots our error upper bound $U_f$ for $0 \\leq f \\leq 10$ for 9 datasets. Fixing a target error threshold $\\epsilon$ we define $f_{\\epsilon} = \\min\\{i: U_i \\leq \\epsilon\\}$ i.e., the minimum index such that the error is smaller than $\\epsilon$. In our experiments we focus on error thresholds $\\epsilon \\in \\{0.1, 0.01\\}$. For example, for the Yahoo! (resp. Bfield) dataset we have $f_{0.1} = 1$ (resp. $j_{0.1}=2$) and $j_{0.01}=6$ (resp. $j_{0.01}=5$). As soon as we see passwords with frequency {\\em at most} $j_{0.1}$ (resp. $j_{0.01}$) start to get cracked we highlight the points on our plots with a red (resp. yellow). }\n\\vspace{-0.5cm}\n\\begin{table}[htb]\n\\caption{Error Upper Bounds: $U_i$ for Different Password Datasets}\n\\begin{center}\n\\begin{tabular}{cccccccccc}\n\\hline\n & Bfield & Brazzers & Clixsense & CSDN & Linkedin & Neopets & Rockyou & 000webhost & Yahoo! 
\\\\ \\hline\n$U_0$ & \\red{0.69} & \\red{0.531} & \\red{0.655} & \\red{0.557} & \\red{0.123} & \\red{0.315} & \\red{0.365} & \\red{0.59} & \\red{0.425} \\\\ \\hline\n$U_1$ & \\red{0.101} & \\red{0.126} & \\red{0.095} & \\red{0.092} &\\red{0.321} & \\red{0.093} & \\red{0.081} & \\red{0.124} & \\red{0.065} \\\\ \\hline\n$U_2$ & \\red{0.036} & \\red{0.054} &\\yellow{0.038} & \\yellow{0.034} & \\red{0.043} & \\yellow{0.051} & \\yellow{0.036} & \\red{0.055} & \\yellow{0.031} \\\\ \\hline\n$U_3$ & \\yellow{0.02} & \\yellow{0.03} & \\yellow{0.023} & \\yellow{0.018} & \\yellow{0.055} &\\yellow{0.034} & \\yellow{0.022} & \\yellow{0.034} & \\yellow{0.021} \\\\ \\hline\n$U_4$ & \\yellow{0.014} & \\yellow{0.02} & \\yellow{0.016} & \\yellow{0.012} & \\yellow{0.018} & \\yellow{0.025} & \\yellow{0.017} & \\yellow{0.022} & \\yellow{0.015} \\\\ \\hline\n$U_5$ & \\yellow{0.01} & \\yellow{0.014} & \\yellow{0.011} & \\yellow{0.008} & \\yellow{0.021} & \\yellow{0.02} & \\yellow{0.013} & \\yellow{0.016} & \\yellow{0.012} \\\\ \\hline\n$U_6$ & 0.008 & \\yellow{0.011} &\\yellow{0.009} & 0.006 & \\yellow{0.011} & \\yellow{0.016} & \\yellow{0.011} & \\yellow{0.012} & \\yellow{0.01} \\\\ \\hline\n$U_7$ & 0.007 & \\yellow{0.01} & 0.007 & 0.005 & \\yellow{0.011} & \\yellow{0.013} & \\yellow{0.01} & \\yellow{0.009} & 0.009 \\\\ \\hline\n$U_8$ & 0.006 & 0.008 & 0.006 & 0.004 & \\yellow{0.008} & \\yellow{0.011} & 0.009 & 0.008 & 0.008 \\\\ \\hline\n$U_9$ & 0.005 & 0.007 & 0.005 & 0.004 & 0.007 & \\yellow{0.01} & 0.008 & 0.006 & 0.007 \\\\ \\hline\n$U_{10}$ & 0.004 & 0.007 & 0.004 & 0.003 & 0.006 & 0.009 & 0.007 & 0.005 & 0.006 \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\label{default}\n\\end{table}\n\\vspace{-1cm}\n\\revision{ \n\\subsubsection{Monte Carlo Distribution} As we observed previously the empirical password distribution can be highly inaccurate when $v\/C_{max}$ is large. Thus, we use a different approach to evaluate the performance of DAHash when $v\/C_{max}$ is large. In particular, we subsample passwords, obtain gussing numbers for each of these passwords and fit our distribution to the corresponding guessing curve. We follow the following procedure to derive a distribution: (1) subsample $s$ passwords $D_s$ from dataset $D$ with replacement; (2) for each subsampled passwords $pw \\in D_s$ we use the Password Guessing Service~\\cite{USENIX:USBCCKKMMS15} to obtain a guessing number $\\guessing$ which uses Monte Carlo methods~\\cite{CCS:DelFil15} to estimate how many guesses an attacker would need to crack $pw$ \\footnote{The Password Guessing Service~\\cite{USENIX:USBCCKKMMS15} gives multiple different guessing numbers for each password based on different sophisticated cracking models e.g., Markov, PCFG, Neural Networks. We follow the suggestion of the authors ~\\cite{USENIX:USBCCKKMMS15} and use the minimum guessing number (over all autmated approached) as our final estimate.}. (3) For each $i \\leq 199$ we fix guessing thresholds $t_0 < t_1 < \\ldots < t_{199}$ with $t_0:=0$, $t_1:=15$, $t_i - t_{i-1} = 1.15^{i+25}$, and $t_{199} = \\max_{pw \\in D_s} \\{\\guessing\\}$. (4) For each $i \\leq 199$ we compute $g_i$, the number of samples $pw \\in D_s$ with $\\guessing \\in[t_{i-1},t_i)$. (5) We output a compressed distribution with $200$ equivalences sets using histogram density i.e., the $i$th equivalence set contains $t_{i}-t_{i-1}$ passwords each with probability $\\frac{g_i}{s \\times (t_i-t_{i-1})}$. 
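\n\nAs a point of reference, steps (3)--(5) of this procedure amount to a short histogram computation. The sketch below is our own illustration: it takes the guessing numbers from step (2) as given and glosses over the exact bookkeeping between the thresholds and the reported $200$ equivalence sets.\n\\begin{verbatim}\nimport numpy as np\n\ndef monte_carlo_distribution(guess_numbers):\n    g = np.asarray(guess_numbers, dtype=float)\n    s = len(g)                              # number of subsampled passwords\n    t = [0.0, 15.0]                         # thresholds t_0 and t_1\n    for i in range(2, 199):\n        t.append(t[-1] + 1.15 ** (i + 25))  # t_i - t_{i-1} = 1.15^(i+25)\n    t.append(max(g.max(), t[-1] + 1.0))     # t_199 = max guessing number (guarded)\n    counts, _ = np.histogram(g, bins=t)     # g_i = #samples in [t_{i-1}, t_i)\n    # equivalence set i: (t_i - t_{i-1}) passwords, each with the same probability\n    return [(t[i] - t[i - 1], counts[i - 1] / (s * (t[i] - t[i - 1])))\n            for i in range(1, len(t))]\n\\end{verbatim}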
\n\nIn our experiments we repeat this process twice with $s=12,500$ subsamples to obtain two password distributions $D_{train}$ and $D_{eval}$. One advantage of this approach is that it allows us to evaluate the performance of DAHash against a state of the art password cracker when the ratio $v\/C_{max}$ is large. The disadvantage is that the distributions $D_{train}$ and $D_{eval}$ we extract are based on {\\em current} state of the art password cracking models. It is possible that we optimized our DAHash parameters with respect to the wrong distribution if an attacker develops an improved password cracking model in the future.\n}\n\n\\revision{\n{\\noindent \\bf Implementing \\hard{} for Monte Carlo Distributions.} For Monte Carlo distribution \\hard{pw} depends on the guessing number $\\guessing$. In particular, we fix thresholds points $x_{1} > \\ldots > x_{\\tau-1}$ and (implicitly) partition passwords into $\\tau$ groups $G_1,\\ldots, G_t$ using these thresholds i.e., $G_i = \\{ pw~:~ x_{i-1} \\geq \\guessing > x_{i}\\}$. Thus, \\hard{pw} would compute $\\guessing$ and assign hash cost $k_i$ if $pw \\in G_i$. As before the thresholds $x_1,\\ldots, x_{\\tau-1}$ are selected to (approximately) balance the probability mass in each group. }\n\n\n\n\n\n\\input{.\/empirical.tex}\n\\input{.\/montecarlo.tex}\n\\input{.\/figRockYou.tex}\n\n\n\n\\subsection{Experiment Results}\n\\revision{Figure \\ref{fig:empirical} evalutes the performance of DAHash on the empirical distributions empirical datasets. To generate each point on the plot we first fix $v\/C_{max} \\in \\{ i \\times 10^{2+j}: 1 \\leq i \\leq 9, 0 \\leq j \\leq 5\\}$, use $\\mathsf{OptHashCostVec}()$ to tune our DAHash parameters $\\vec{k}^*$ and then compute the corresponding success rate for the attacker. The experiment is repeated for the empirical distributions derived from our $9$ different datasets. }\n In each experiment we group password equivalence sets into $\\tau$ groups ($\\tau \\in \\{1,3,5\\}$) $G_1,\\ldots,G_\\tau$ of (approximately) equal probability mass. In addition, we set $k_{min} = 0.1 C_{max}$ and iteration of BITEOPT to be 10000. \\revision{The yellow (resp. red) regions correspond to unconfident zones where we expect that the our results for empirical distribution might differ from reality by $1\\%$ (resp. $10\\%$). }\n\n\\revision{ Figure \\ref{fig:monte} evaluates the performance of DAHash for for Monte Carlo distributions we extract using the Password Guessing Service. For each dataset we extract two distributions $D_{train}$ and $D_{eval}$. For each $v\/C_{max} \\in \\{ j \\times 10^i : ~ 3 \\leq i \\leq 11, j \\in \\{2,4,6,8\\}\\}$ we obtain the corresponding optimal hash cost $\\vec{k}^*$ using $\\mathsf{OptHashCostVec}()$ with the distribution $D_{train}$ as input. Then we compute success rate of attacker on $D_{eval}$ with the same cost vector $\\vec{k}^*$. We repeated this for 6 plaintext datasets: Bfield, Brazzers, Clixsense, CSDN, Neopets and 000webhost for which we obtained guessing numbers from the Password Guessing Service.}\n\n\n\\revision{Figure \\ref{fig:empirical} and Figures \\ref{fig:monte} plot $P_{ADV}$ vs $v\/C_{max}$ for each different dataset under empirical distribution and Monte Carlo distribution. Each sub-figure contains three separate lines corresponding to $\\tau\\in \\{1,3,5\\}$ respectively}. We first remark that $\\tau=1$ corresponds to the status quo when all passwords are assigned the same cost parameter i.e., $\\mathsf{getHardness}(pw_u) = C_{max}$. 
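\n\nA minimal sketch of the threshold-based \\hard{} rule described above is given below (for illustration only; the thresholds $x_i$ are chosen to balance probability mass and the costs $k_i$ come from the tuning procedure):\n\\begin{verbatim}\ndef get_hardness(guess_number, x, k):\n    # x = [x_1, ..., x_{tau-1}] with x_1 > ... > x_{tau-1};  k = [k_1, ..., k_tau].\n    # Group G_i = {pw : x_{i-1} >= Guess(pw) > x_i} receives hash cost k_i,\n    # with x_0 and x_tau implicitly +infinity and 0.\n    for i, x_i in enumerate(x):\n        if guess_number > x_i:\n            return k[i]\n    return k[-1]          # the remaining group G_tau\n\\end{verbatim}\nThe same bucketing logic applies in the empirical setting, with password frequencies in place of guessing numbers.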
When $\\tau=3$ we can interpret our mechanism as classifying all passwords into three groups (e.g., weak, medium and strong) based on their strength. The fine grained case $\\tau=5$ has more strength levels into which passwords can be placed. \n\n\\revision{{\\bf \\noindent DAHash Advantage:} For empirical distributions the improvement peaks in the uncertain region of the plot. Ignoring the uncertain region the improvement is still as large as 15\\%. For Monte Carlo distributions we find a 20\\% improvement e.g., $20\\%$ of user passwords could be saved with the DAHash mechanism.}\n\n\n \n\\revision{Figure \\ref{fig:r1} explores how the hash cost vector $\\vec{k}$ is allocated between weak\/medium\/strong passwords as $v\/C_{max}$ varies (using the RockYou empirical distribution with $\\tau=3$). Similarly, Figure \\ref{fig:r2} plots the fraction of weak\/medium\/strong passwords being cracked as adversary value increases. We discuss these each of these figures in more detail below.}\n\n\n\n\\vspace{-0.3cm}\n\\subsubsection{How Many Groups ($\\tau$)?}\nWe explore the impact of $\\tau$ on the percentage of passwords that a rational adversary will crack. Since the untargeted adversary attacks all user accounts in the very same way, the percentage of passwords the adversary will crack is the probability that the adversary succeeds in cracking a random user's account, namely, $P_{ADV}^*$. Intuitively, a partition resulting in more groups can grant a better protection for passwords, since by doing so the authentication server can deal with passwords with more precision and can better tune the fitness of protection level to password strength. We observe in Figure \\ref{fig:empirical} and Figures \\ref{fig:monte} for most of time the success rate reduction when $\\tau = 5$ is larger compared to $\\tau = 3$. However, the marginal benefit plummets, changing $\\tau$ from 3 to 5 does not bring much performance improvement. A positive interpretation of this observation is that we can glean most of the benefits of our differentiated hash cost mechanism without making the $\\mathsf{getHardness}()$ procedure too complicated e.g., we only need to partition passwords into three groups weak, medium and strong. \n\nOur hashing mechanism does not overprotect passwords that are too weak to withstand offline attack when adversary value is sufficiently high, nor passwords that are strong enough so that a rational offline attacker loses interest in cracking. The effort previously spent in protecting passwords that are too weak\/strong can be reallocated into protecting ``savable'' passwords at some $v\/C_{max}$. Thus, our DAHash algorithm beats traditional hashing algorithm without increasing the server's expected workload i.e., the cost parameters $\\vec{k}$ are tuned such that expected workload is always $C_{max}$ whether $\\tau=1$ (no differentiated costs), $\\tau=3$ (differentiated costs) or $\\tau=5$ (finer grained differentiated costs). We find that the defender can reduce the percentage of cracked passwords $P_{ADV}^*$ without increasing the workload $C_{max}$. \n\n\n\n\\vspace{-0.3cm}\n\\subsubsection{Understanding the Optimal Allocation $\\vec{k}^*$}\nWe next discuss how our mechanism re-allocates the cost parameters across $\\tau=3$ different groups as $v\/C_{max}$ increases --- see Figures \\ref{fig:r1}. 
At the very beginning $v\/C_{max}$ is small enough that a rational password gives up without cracking any password even if the authentication server assigns equal hash costs to different groups of password, e.g., $k_1=k_2=k_3=C_{max}$. \n\nAs the adversary value increases the Algorithm $\\mathsf{OptHashCostVec}()$ starts to reallocate $\\vec{k}$ so that most of the authentication server's effort is used to protect the weakest passwords in group $G_1$ while minimal key-stretching effort is used to protect the stronger passwords in groups $G_2$ and $G_3$\n In particular, we have $k_1 \\approx 3 C_{max}$ for much of the interval $v\/C_{max} \\in [4*10^3, 10^5]$ while $k_2,k_3$ are pretty small in this interval e.g., $k_2, k_3 \\approx 0.1 \\times C_{max}$. However, as the ratio $v\/C_{max}$ continues to increase from $10^6$ to $10^7$ Algorithm $\\mathsf{OptHashCostVec}()$ once again begins to reallocate $\\vec{k}$ to place most of the weight on $k_2$ as it is now necessary to protect passwords in group $G_2$. Over the same interval the value of $k_1$ decreases sharply as it is no longer possible to protect all of the weakest passwords group $G_1$. \n\nAs $v\/C_{max}$ continues to increase Algorithm $\\mathsf{OptHashCostVec}()$ once again reallocates $\\vec{k}$ to place most of the weight on $k_3$ as it is now necessary to protect the strongest passwords in group $G_3$ (and no longer possible to protect all of the medium strength passwords in group $G_2$). Finally, $v\/C_{max}$ gets too large it is no longer possible to protect passwords in any group so Algorithm $\\mathsf{OptHashCostVec}()$ reverse back to equal hash costs , i.e., $k_1=k_2=k_3=C_{max}$. \n\n\\ignore{\nNotice that for Yahoo dataset (Figure \\ref{fig:3a}) the zenith of $k_1$ is above $3C_{max}$. We remark that this is not a violation of server cost constraint $\\sum_{i=1}^3\\left(\\sum_{pw_j\\in G_i} \\Pr[pw_j]\\right) \\cdot k_i\\leq C_{max}$, since the password probability mass in $G_1$ is strictly less than $1\/3$ (even though our group partition approach is trying to bring it to 1\/3 as closely as possible). Similarly, the zenith of $k_3$ is below $3C_{max}$ that is because passwords in $G_3$ are quantitatively dominant in dataset.}\n\nFigures \\ref{fig:r1} and \\ref{fig:r2} tell a complementary story. Weak passwords are cracked first as $v\/C_{max}$ increases, then follows the passwords with medium strength and the strong passwords stand until $v\/C_{max}$ finally becomes sufficiently high. For example, in Figure \\ref{fig:r2} we see that initially the mechanism is able to protect all passwords, weak, medium and strong. However, as $v\/C_{max}$ increases from $10^5$ to $10^6$ it is no longer possible to protect the weakest passwords in group $G_1$. Up until $v\/C_{max}=10^6$ the mechanism is able to protect all medium strength passwords in group $G_2$, but as the $v\/C_{max}$ crosses the $10^7$ threshold it is not feasible to protect passwords in group $G_2$. The strongest passwords in group $G_3$ are completely projected until $v\/C_{max}$ reaches $2\\times 10^7$ at which point it is no longer possible to protect any passwords because the adversary value is too high. \n\n Viewing together with Figure \\ref{fig:r1}, we observe that it is only when weak passwords are about to be cracked completely (when $v\/C_{max}$ is around $7 \\times 10^5$) that the authentication server begin to shift effort to protect medium passwords. 
The shift of protection effort continues as the adversary value increases until medium strength passwords are about to be massively cracked. The same observation applies to medium passwords and strong password. \nWhile we used the plots from the RockYou dataset for discussion, the same trends also hold for other datasets (concrete thresholds may differ).\n\n{\\bf Robustness} \nWe remark that in Figure \\ref{fig:empirical} and Figure \\ref{fig:monte} the actual hash cost vector $\\vec{k}$ we chose is not highly sensitive to small changes of the adversary value $v$ (only in semilog x axis fluctuation of $\\vec{k}$ became obvious). Therefore, DAHash may still be useful even when it is not possible to obtain a precise estimate of $v$ \\revision{or when the attacker's value $v$ varies slightly over time.}\n\\ignore{\n \\subsubsection{Imperfect Knowledge} In real world settings the defender would not have perfect knowledge of the password distribution when partitioning passwords into $\\tau$ groups $G_i$. Thus, to evaluate the performance of DAHash under more realistic settings we trained used a count min sketch data-structure to partition passwords into groups. The count min sketch is trained on the empirical password dataset e.g., RockYou or LinkedIn and provides a noisy estimate of the number of users who selected each password. See the appendix for additional details about how we used the count min sketch data structure.}\n \n{\\bf Incentive Compatibility} One potential concern in assigning different hash cost parameters to different passwords is that we might inadvertently provide incentive for a user to select weaker passwords. In particular, the user might prefer a weaker password $pw_i$ to $pw_j$ ($\\Pr[pw_i] > \\Pr[pw_j]$) if s\/he believes that the attacker will guess $pw_j$ before $pw_i$ e.g., the hash cost parameter $k(pw_j)$ is so small that makes $r_j > r_i$. We could directly encode incentive compatibility into our constraints for the feasible range of defender strategies $\\mathcal{F}_{C_{max}}$ i.e., we could explicitly add a constraints that $r_j \\leq r_i$ whenever $\\Pr[pw_i] \\leq \\Pr[pw_j]$. However, Figures \\ref{fig:r2} suggest that this is not necessary. Observe that the attacker does not crack any medium\/high strength passwords until {\\em all} weak passwords have been cracked. Similarly, the attacker does not crack any high strength passwords until {\\em all} medium strength passwords have been cracked. \n\n\n\\subsection{Action Space of Defender}\\label{subsection:defender}\nThe defender's action is to implement the function $\\mathsf{GetHardness}()$. The implementation must be efficiently computable, and the function must be chosen subject to maximum workload constraints on the authentication server. Otherwise, the optimal solution would simply be to set the cost parameter $k$ for each password to be as large as possible. In addition, the server should guarantee that each password is granted with at least some level of protection so that it will not make weak passwords weaker.\n\nIn an idealized setting where the defender knows the user password distribution we can implement the function $\\mathsf{GetHardness}(pw_u)$ as follows: the authentication server first partitions all passwords into $\\tau$ mutually exclusive groups $G_i$ with $i\\in\\{1,\\cdots,\\tau\\}$ such that $\\mathcal{P} = \\bigcup_{i = 1}^\\tau G_i$ and $\\Pr[pw] > \\Pr[pw']$ for every $pw \\in G_i$ and $pw' \\in G_{i+1}$. 
\n\n\ignore{\n In practice, it may be overly optimistic to assume that the defender has perfect knowledge of the password distribution. Instead, we can partition passwords into groups $G_i$ based on estimates of the password strength. To estimate $\Pr[pw]$ we could use password strength meters or a differentially private count sketch which can provide a noisy estimate of the number of users who have selected the password $pw$. }\n\nThe cost of authenticating a password that is from $G_i$ is simply $k_i$. Therefore, the amortized server cost for verifying a correct password is: \n\begin{equation}\small\nC_{SRV}=\sum_{i=1}^{\tau } k_i \cdot \Pr[pw \in G_i],\n\end{equation}\nwhere $\Pr[pw\in G_i] = \sum_{pw\in G_i}\Pr[pw]$ is the total probability mass of passwords in group $G_i$. In general, we will assume that the server has a maximum amortized cost $C_{max}$ that it is willing\/able to incur for user authentication. Thus, the authentication server must pick the hash cost vector $\vec{k}=\{k_1,k_2,\cdots,k_{\tau}\}$ subject to the cost constraint $C_{SRV}\leq C_{max}$. \revision{Additionally, we require that $k(pw_i) \geq k_{min}$ to ensure a minimum acceptable level of protection for all accounts.} Because each password hash is salted, the attacker will need to repeat the cracking process independently for each user $u$. Thus, in our analysis we can focus on an individual user's account that the attacker is trying to crack. \n\vspace{-0.3cm}\n\n\ignore{\n\n\subsubsection{Implementing $\mathsf{GetHardness}()$ in Practice}{\color{blue} \nAn idealized implementation of $\mathsf{GetHardness}$ would require the defender to work with all passwords whenever an account creation request occurs. However, in practice the authentication server does not have the entire password corpus at hand from the very beginning; it has to process users' passwords in the order in which users register their accounts. Thus, $\mathsf{GetHardness}()$ should work like a streaming algorithm in a practical setting. $\mathsf{GetHardness}()$ consists of two steps: classify the input password into a strength group, and assign the corresponding hash cost. We discuss their practical implementations separately.\n\nFor the classification step of $\mathsf{GetHardness}$, since the server is not able to partition all passwords ex ante in practice, let us consider other ways of classifying passwords. Using prevalent strength meters such as $\mathsf{zxcvbn}$ (we use this password strength estimation library in our experiments) and $\mathsf{KeePass}$, we can classify passwords into different strength groups based on predefined criteria with respect to some measurement (e.g., estimated entropy, crack time). In particular,\nwe would implicitly classify passwords into groups based on an estimate $x = \mathsf{EstimatedEntropy}(pwd)$ or $x=\mathsf{EstimatedCrackTime}(pwd)$. Fixing some thresholds $x_0 = 0 \leq x_1 \leq \ldots \leq x_{\tau} = \infty$ we could define $G_i = \{ pwd~: x_{i-1} < x \leq x_i\}$ to be the set of all passwords whose estimated entropy\/crack time lies in the interval $(x_{i-1},x_i]$. 
\n\nApart from $\mathsf{zxcvbn}$ and $\mathsf{KeePass}$, the authentication server could also maintain a private count sketch data structure~\cite{schechter2010popularity,C:WCDJR17} to help classify each new password. The authentication server could use the existing password data to train a count-min (or count-median) sketch. The sketch is later used to estimate password probability (strictly speaking, it is the frequency that is being estimated) when a new user registers his\/her account. Based on the estimate the server can classify the new user's password into a certain group. When creating a count sketch, Laplace noise could be added to ensure differential privacy. Moreover, maintaining a count sketch is beneficial in that it can remind a user who has picked a frequently used password of the risks of her\/his poor password choice. \n\nThere are other sophisticated tools that an adversary could use to estimate password strength, e.g., Probabilistic Context Free Grammars~\cite{SP:WAMG09,SP:KKMSVB12,NDSS:VerColTho14}, Markov Chain Models~\cite{NDSS:CasDurPer12,Castelluccia2013,SP:MYLL14,USENIX:USBCCKKMMS15} and even neural networks~\cite{USENIX:MUSKBCC16}. \n\nAs for the practical hash cost assignment step, when the whole password dataset is not available to derive the optimal hash costs, the authentication server could use the password data already present on the server to derive costs $k_i$ that are optimal for those data, and use these $k_i$ for future users. When a returning user logs into his\/her account, the server could update the hash cost.}\n\n}\n \n\subsection{Action Space of Attacker}\nAfter breaching the authentication server the attacker may run an offline dictionary attack. The attacker must fix an ordering $\pi$ over passwords $\mathcal{P}$ and a maximum number of guesses $B$ to check, i.e., the attacker will check the first $B$ passwords in the ordering given by $\pi$. If $B=0$ then the attacker gives up immediately without checking any passwords, and if $B= \infty$ then the attacker will continue guessing until the password is cracked. The permutation $\pi$ specifies the order in which the attacker will guess passwords, i.e., the attacker will check password $pw_{\pi(1)}$ first, then $pw_{\pi(2)}$ second, etc. Thus, the tuple $(\pi,B)$ forms a \emph{strategy} of the adversary. Following that strategy, \nthe probability that the adversary succeeds in cracking a random user's password is simply the sum of the probabilities of all passwords to be checked:\n\begin{equation}\small\nP_{ADV}=\lambda(\pi,B)=\sum_{i=1}^B p_{\pi(i)}\ .\n\end{equation}\nHere, we use the shorthand notation $p_{\pi(i)} = \Pr[pw_{\pi(i)}]$ which denotes the probability of the $i$th password in the ordering $\pi$. \n\n\subsection{Attacker's Utility}\nGiven the estimated average value $v$ of a single cracked password, the expected gain of the attacker is simply $v \times \lambda(\pi,B)$, i.e., the probability that the password is cracked times the value $v$. Similarly, given a hash cost parameter vector $\vec{k}$ the expected cost of the attacker is \n$\sum^B_{i=1} k(pw_{\pi(i)})\cdot \left(1-\lambda(\pi,i-1)\right).$\nWe use the shorthand $k(pw) = k_i = \mathsf{GetHardness}(pw)$ for a password $pw \in G_i$. \nIntuitively, the probability that the first $i-1$ guesses are incorrect is $\left(1-\lambda(\pi,i-1)\right)$ and we incur cost $k(pw_{\pi(i)})$ for the $i$'th guess if and only if the first $i-1$ guesses are incorrect. Note that $\lambda(\pi,0)=0$ so the attacker always pays cost $k(pw_{\pi(1)})$ for the first guess.\nThe adversary's expected utility is the difference between the expected gain and the expected cost:\n\begin{equation}\small\n\begin{aligned}\n&U_{ADV}\left(v,\vec{k},(\pi,B)\right)=v\cdot \lambda(\pi,B)-\sum^B_{i=1} k(pw_{\pi(i)})\cdot \left(1-\lambda(\pi,i-1)\right).\n\end{aligned}\n\end{equation}\n\vspace{-0.5cm}\n
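\nTo make the bookkeeping above concrete, the following short Python sketch evaluates $\lambda(\pi,B)$ and $U_{ADV}$ for a given strategy; the variable names are illustrative and the snippet simply mirrors the two formulas above.\n\begin{verbatim}
# Success rate lambda(pi, B) and attacker utility U_ADV for a strategy (pi, B).
# probs[pw] = Pr[pw]; cost[pw] = k(pw); pi is a list of passwords in guessing order.

def success_rate(pi, B, probs):
    return sum(probs[pw] for pw in pi[:B])

def attacker_utility(v, pi, B, probs, cost):
    gain = v * success_rate(pi, B, probs)
    expected_cost, survive = 0.0, 1.0        # survive = 1 - lambda(pi, i-1)
    for pw in pi[:B]:
        expected_cost += cost[pw] * survive  # pay k(pw) only if still uncracked
        survive -= probs[pw]
    return gain - expected_cost
\end{verbatim}\n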
Note that $\\lambda(\\pi,0)=0$ so the attacker always pays cost $k(pw_{\\pi(1)})$ for the first guess.\nThe adversary's expected utility is the difference of expected gain and expected cost:\n\\begin{equation}\\small\n\\begin{aligned}\n&U_{ADV}\\left(v,\\vec{k},(\\pi,B)\\right)=v\\cdot \\lambda(\\pi,B)-\\sum^B_{i=1} k(pw_{\\pi(i)})\\cdot \\left(1-\\lambda(\\pi,i-1)\\right).\n\\end{aligned}\n\\end{equation}\n\\vspace{-0.5cm}\n\n\\subsection{Defender's Utility} After the defender (leader) moves the offline attacker (follower) will respond with his\/her utility optimizing strategy. We let $P_{ADV}^* $ denote the probability that the attacker cracks a random user's password when playing his\/her optimal strategy. \n\\begin{equation}\\small\nP_{ADV}^* = \\lambda(\\pi^*,B^*)\\ , ~~~\\mbox{where~~~} (\\pi^*,B^*)=\\arg \\max_{\\pi, B} U_{ADV}\\left(v,\\vec{k},(\\pi,B)\\right).\n\\end{equation}\n$P_{ADV}^*$ will depend on the attacker's utility optimizing strategy which will in turn depend on value $v$ for a cracked password, the chosen cost parameters $k_i$ for each group $G_i$, and the user password distribution. Thus, we can define the authentication server's utility as\n\\begin{equation}\\small\nU_{SRV}(\\vec{k},v)=-P_{ADV}^* \\ .\n\\end{equation} \n\nThe objective of the authentication is to minimize the success rate $P_{ADV}^*(v,\\vec{k})$ of the attacker by finding the optimal action i.e., a good way of partitioning passwords into groups and selecting the optimal hash cost vector $\\vec{k}$. \nSince the parameter $\\vec{k}$ controls the cost of the hash function in passwords storage and authentication, we should increase $k_i$ for a specific group $G_i$ of passwords only if this is necessary to help deter the attacker from cracking passwords in this group $G_i$. The defender may not want to waste too much resource in protecting the weakest group $G_1$ of passwords when password value is high because they will be cracked easily regardless of the hash cost $k_1$.\n\\vspace{-0.3cm}\n\\subsection{Stackelberg Game Stages}\nSince adversary's utility depends on $(\\pi,B)$ and $\\vec{k}$, wherein $(\\pi,B)$ is the responses to server's predetermined hash cost vector $\\vec{k}$. On the other hand, when server selects different hash cost parameter for different groups of password, it has to take the reaction of potential attackers into account.\nTherefore, the interaction between the authentication server and the adversary can be modeled as a two stage Stackelberg Game. Then the problem of finding the optimal hash cost vector is reduced to the problem of computing the equilibrium of Stackelberg game.\n\n\\ignore{\nWe will assume that the value $v$ is available to the defender before the game begins. For example, the defender might estimate $v$ from black market reports~\\cite{CCS:Allodi17,goldForSilver,stockley2016} \\footnote{We discuss the issue of robustness to inexact estimations of $v$ in \\ref{sec:empirical}. 
Briefly, we find that the optimal vector $\vec{k}$ is (usually) not sensitive to moderate changes in the value $v$ which means that our mechanism can be expected to work well even when we only have loose estimates of $v$.} Similarly, we assume that the attacker knows $\Pr[pwd_i]$ for each password $pwd_i\in \mathcal{P}$ and that the attacker will learn the vector $\vec{k}$ of hash cost parameters after the server is breached as well as the particular groups $G_1,\ldots, G_\tau$ (typically represented implicitly).\n}\n\nIn the Stackelberg game, the authentication server (leader) moves first (stage I); then the adversary follows (stage II). In stage I, the authentication server commits to a hash cost vector $\vec{k}=\{k_1,\cdots, k_{\tau}\}$ for all groups of passwords; in stage II, the adversary responds with the optimal strategy $(\pi,B)$ for cracking a random user's password. \nThrough the interaction between the legitimate authentication server and the untargeted adversary who runs an offline attack, there will emerge an equilibrium in which no player in the game has the incentive to unilaterally change its strategy. Thus, an equilibrium strategy profile $\left\{\vec{k}^*,(\pi^*,B^*)\right\}$ must satisfy\n\begin{equation}\small\n\begin{cases}\n U_{SRV}\left(\vec{k}^*,v\right)\geq U_{SRV}\left(\vec{k},v\right),& \forall \vec{k} \in \mathcal{F}_{C_{max}} ,\\\n U_{ADV}\left(v,\vec{k}^*,(\pi^*,B^*)\right)\geq U_{ADV}\left(v,\vec{k}^*,(\pi,B)\right), &\forall(\pi,B).\n\end{cases}\n\end{equation}\nAssuming that the grouping $G_1,\ldots,G_\tau$ of passwords is fixed, the computation of the equilibrium strategy profile can be transformed into the following optimization problem, where $\Pr[pw_i]$, $G_1,\n\cdots, G_{\tau}$, and $C_{max}$ are input parameters and $(\pi^*,B^*)$ and $\vec{k}^*$ \nare the variables.\n\begin{equation}\small\n\label{eq:utility}\n\begin{aligned}\n\min_{\vec{k}^*, \pi^*, B^*}& \lambda(\pi^*,B^*)\\\n \textrm{s.t.} \quad\n& U_{ADV}\left(v,\vec{k},(\pi^*,B^*)\right) \geq U_{ADV}\left(v,\vec{k},(\pi,B)\right),~~\forall (\pi,B), \\\n&\sum_{i=1}^{\tau } k_i \cdot \Pr[pw \in G_i] \leq C_{max},\\\n& k_i \geq k_{min}, \mbox{~$\forall i \leq \tau$}.\n\end{aligned}\n\end{equation}\n\nThe solution of the above optimization problem is the equilibrium of our Stackelberg game. The first constraint implies that the adversary will play his\/her utility optimizing strategy, i.e., given that the defender's action $\vec{k}^*$ is fixed, the utility of the strategy $(\pi^*,B^*)$ is at least as large as that of any other strategy the attacker might follow. Thus, a rational attacker will check the first $B^*$ passwords in the order indicated by $\pi^*$ and then stop cracking passwords. The second constraint is due to the resource limitations of the authentication server. The third constraint sets a lower bound on the protection level. In order to tackle the first constraint, we need to specify the optimal checking sequence and the optimal number of passwords to be checked. \n
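\nAs a small illustration of the feasible region $\mathcal{F}_{C_{max}}$ described by the last two constraints, the following Python sketch checks whether a candidate hash cost vector is admissible; the group probability masses $\Pr[pw\in G_i]$ are assumed to be precomputed and all names are illustrative.\n\begin{verbatim}
# Defender-side constraints of the optimization problem:
#   sum_i k[i] * mass[i] <= C_max   (amortized server cost)
#   k[i] >= k_min for all i         (minimum protection level)
# mass[i] = Pr[pw in G_{i+1}] is the probability mass of the i-th group.

def is_feasible(k, mass, C_max, k_min):
    server_cost = sum(ki * mi for ki, mi in zip(k, mass))
    return server_cost <= C_max and all(ki >= k_min for ki in k)

# e.g., k_1 = 3*C_max can be feasible when group G_1 holds less than 1/3
# of the probability mass:
# is_feasible([3.0, 0.1, 0.1], [0.30, 0.30, 0.40], C_max=1.0, k_min=0.05) -> True
\end{verbatim}\n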
\vspace{-0.3cm}\n\n\subsection{Adversary's Best Response (Greedy)}\nIn this section we show that the attacker's optimal ordering $\pi^*$ can be obtained by sorting passwords by their ``bang-for-buck'' ratio. \revision{In particular, fixing an ordering $\pi$} we define the ratio $r_{\pi(i)}=\frac{p_{\pi(i)}}{k(pw_{\pi(i)})}$ which can be viewed as the priority of checking password $pw_{\pi(i)}$, i.e., the cost of the guess is $k(pw_{\pi(i)})$ and the probability that the password is correct is $p_{\pi(i)}$. \revision{Intuitively, the attacker's optimal strategy is to order passwords by their ``bang-for-buck'' ratio, guessing passwords with higher checking priority first. Theorem \ref{thm:noinversions} formalizes this intuition by proving that the optimal checking sequence $\pi^*$ has no inversions. }\n\nWe say a checking sequence $\pi$ has an \emph{inversion} with respect to $\vec{k}$ \revision{ if for some pair $a > b$ we have $r_{\pi(a)}>r_{\pi(b)}$, i.e., $pw_{\pi(b)}$ is scheduled to be checked before $pw_{\pi(a)}$ even though password $pw_{\pi(a)}$ has a higher ``bang-for-buck'' ratio. Recall that $pw_{\pi(b)}$ is the $b$'th password checked in the ordering $\pi$. } \revision{The proof of Theorem \ref{thm:noinversions} can be found in Appendix \ref{app:proof}. Intuitively, we argue that consecutive inversions can always be swapped without decreasing the attacker's utility.}\n\n\begin{theorem}\n\label{thm:noinversions}\n\revision{Let $(\pi^*,B^*)$ denote the attacker's optimal strategy with respect to hash cost parameters $\vec{k}$ and let $\pi$ be an ordering with no inversions relative to $\vec{k}$ then \[ U_{ADV}\left(v,\vec{k},(\pi,B^*)\right) \geq U_{ADV}\left(v,\vec{k},(\pi^*,B^*)\right) \ . \]}\n\end{theorem}\n\vspace{-0.4cm}\nTheorem \ref{thm:noinversions} gives us {\em an easy way to compute} the attacker's optimal ordering $\pi^*$ over passwords \revision{i.e., by sorting passwords according to their ``bang-for-buck'' ratio.} \revision{It remains to find the attacker's optimal guessing budget $B^*$}. \revision{As we previously mentioned, the password distributions we consider can be compressed by grouping passwords with equal probability into equivalence sets.} \revision{Once we have our cost vector $\vec{k}$ and have implemented \hard{} we can further partition password equivalence sets such that passwords in each set additionally have the same bang-for-buck ratio.\n Theorem \ref{corollary} tells us that the optimal attacker strategy will either guess {\em all} of the passwords in such an equivalence set $es_j$ or {\em none} of them. Thus, when we search for $B^*$ we only need to consider $n^{\prime}+1$ possible values of this parameter. } We will use this observation to improve the efficiency of our algorithm to compute the optimal attacker strategy. \n
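\nThe resulting best-response computation is summarized as Algorithm \ref{alg:response} in Appendix \ref{app:algorithm}. The Python sketch below shows the same greedy idea with illustrative names: sort the equivalence sets by bang-for-buck ratio and evaluate only the $n^{\prime}+1$ candidate budgets given by prefix sums (the closed-form marginal utility of a whole set is a straightforward summation written out for this sketch).\n\begin{verbatim}
# Greedy best response over equivalence sets.
# Each set is a tuple (p, c, k): per-password probability p, number of
# passwords c, and hash cost k.

def best_response(v, sets):
    # guess sets in decreasing bang-for-buck order p/k
    sets = sorted(sets, key=lambda s: s[0] / s[2], reverse=True)
    B, U, cracked = 0, 0.0, 0.0
    best_B, best_U, best_P = 0, 0.0, 0.0          # B* = 0: give up immediately
    for (p, c, k) in sets:
        # marginal utility of guessing the entire set, in closed form:
        # sum_{j=0}^{c-1} [v*p - k*(1 - cracked - j*p)]
        U += v * p * c - k * c * (1.0 - cracked) + k * p * c * (c - 1) / 2
        cracked += p * c
        B += c
        if U > best_U:                            # only prefix sums can be B*
            best_B, best_U, best_P = B, U, cracked
    return sets, best_B, best_P                   # ordering, budget B*, lambda(pi*, B*)
\end{verbatim}\n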
Then\n\\begin{equation}\n\\label{eq:theorem2}\n\\mathsf{Inv}_{\\pi^*}(i)\\leq B^* \\Leftrightarrow \\mathsf{Inv}_{\\pi^*}(j)\\leq B^* \\ . \n\\end{equation}}\n\n\\newcommand{Let $(\\pi^*,B^*)$ denote the attacker's optimal strategy with respect to hash cost parameters $\\vec{k}$. Suppose that passwords can be partitioned into $n$ equivalence sets $es_1,\\ldots, es_{n^{\\prime}}$ such that passwords $pw_a, pw_b \\in es_i$ have the same probability and hash cost i.e., $p_a=p_b = p^i$ and $k(pw_a) = k(pw_b)= k^i$. Let $r^i = p^i\/k^i$ denote the bang-for-buck ratio of equivalence set $es_i$ and assume that $r^1 \\geq r^2 \\geq \\ldots \\geq r_{n^{\\prime}}$ then $B^* \\in \\left\\{0, |es_1| ,|es_1|+|es_2|,\\cdots ,\\sum_{i=1}^{n^{\\prime}}|es_i|\\right\\}$.}{Let $(\\pi^*,B^*)$ denote the attacker's optimal strategy with respect to hash cost parameters $\\vec{k}$. Suppose that passwords can be partitioned into $n$ equivalence sets $es_1,\\ldots, es_{n^{\\prime}}$ such that passwords $pw_a, pw_b \\in es_i$ have the same probability and hash cost i.e., $p_a=p_b = p^i$ and $k(pw_a) = k(pw_b)= k^i$. Let $r^i = p^i\/k^i$ denote the bang-for-buck ratio of equivalence set $es_i$ and assume that $r^1 \\geq r^2 \\geq \\ldots \\geq r_{n^{\\prime}}$ then $B^* \\in \\left\\{0, |es_1| ,|es_1|+|es_2|,\\cdots ,\\sum_{i=1}^{n^{\\prime}}|es_i|\\right\\}$.}\n\\begin{theorem}\\label{corollary}\n\\revision{Let $(\\pi^*,B^*)$ denote the attacker's optimal strategy with respect to hash cost parameters $\\vec{k}$. Suppose that passwords can be partitioned into $n$ equivalence sets $es_1,\\ldots, es_{n^{\\prime}}$ such that passwords $pw_a, pw_b \\in es_i$ have the same probability and hash cost i.e., $p_a=p_b = p^i$ and $k(pw_a) = k(pw_b)= k^i$. Let $r^i = p^i\/k^i$ denote the bang-for-buck ratio of equivalence set $es_i$ and assume that $r^1 \\geq r^2 \\geq \\ldots \\geq r_{n^{\\prime}}$ then $B^* \\in \\left\\{0, |es_1| ,|es_1|+|es_2|,\\cdots ,\\sum_{i=1}^{n^{\\prime}}|es_i|\\right\\}$. }\n\\end{theorem}\nThe proof of both theorems can be found in Appendix \\ref{app:proof}. Theorem \\ref{corollary} implies that when cracking users' accounts the adversary increases number of guesses $B$ by the size of the next equivalence set (if there is net profit by doing so). Therefore, the attacker finds the optimal strategy $(\\pi^*, B^*)$ with Algorithm $\\mathsf{BestRes}(v, \\vec{k}, D)$ in time $\\mathcal{O}(n^{\\prime}\\log n^{\\prime})$ \\revision{ --- see Algorithm \\ref{alg:response} in Appendix \\ref{app:algorithm}. The running time is dominated by the cost of sorting our $n^{\\prime}$ equivalence sets. }\n\\vspace{-0.3cm}\n\n\\ignore{\n\\subsubsection{Compact Representation of Probability Distribution}\nAs we remarked previously when we use an empirical password frequency corpus to model the password distribution we will typically obtain a compact representation i.e., the tuple $(p_j, c_j)$ indicates that there are $c_j$ different passwords which all occur with probability $p_j$ and corresponds to to an equivalence class $E_j \\subseteq \\mathcal{P}$. Suppose that we ensure that our partition of the passwords $\\mathcal{P}$ into groups $G_1,\\ldots,G_{\\tau}$ maintains the invariant that passwords from the same equivalence class remain in the same group i.e., $E_j \\subseteq G_i$ for some $i\\leq \\tau$. 
Assuming that this is the case we remark that any pair of elements in an equivalence class will have the bang-for-buck ratio and Theorem \\ref{thm:compact} tells us that an optimal attacker will either guess {\\em all} of the password in an equivalence class $E_j$ or {\\em none} of them. This helps to reduce the search space for the attacker's optimal strategy (from $O(N)$ to $O(n')$) in our empirical experiments. \n\nTheorem \\ref{thm:compact} tells us that the optimal attacker strategy will either guess {\\em all} of the passwords in an equivalence class $E_j$ or {\\em none} of them. In particular, let $\\mathsf{Inv}_{\\pi}$ is the inverse map of $\\pi$ (e.g., $\\mathsf{Inv}_{\\pi}(\\pi(i))=i=\\pi\\left(\\mathsf{Inv}_{\\pi}(i) \\right)$) and note that an attacker who plays strategy $(\\pi,B)$ will check password $pwd_i$ if and only if $\\mathsf{Inv}_{\\pi}(i) \\leq B$. Theorem \\ref{thm:compact} states that for the optimal strategy $(\\pi^*,B^*)$ we have $\\mathsf{Inv}_{\\pi^*}(i)\\leq B^* \\Leftrightarrow \\mathsf{Inv}_{\\pi^*}(j)\\leq B^*$. We will use this observation to improve the efficiency of our greedy algorithm to compute the optimal attacker strategy. \n\n\\begin{corollary}\\label{corollary}\nThe optimal passwords checking number $B^*$ lies in the list $S=\\{0, c_1,c_1+c_2,\\cdots ,\\sum_{i=1}^{n'}c_i\\}$.\n\\end{corollary}\nWhen cracking uses' accounts the adversary increases number of guesses $B$ by the size of the next equivalence class (if there is net profit by doing so). In short, the adversary checks passwords class by class instead of one by one.\n\n\n\\begin{theorem} \\label{thm:compact}\nLet $(\\pi^*,B^*)$ be the optimal strategy of adversary and given two passwords $pwd_i$ and $pwd_j$ in the same equivalence class. Then\n\\begin{equation}\\label{eq:theorem2}\n\\mathsf{Inv}_{\\pi^*}(i)\\leq B^* \\Leftrightarrow \\mathsf{Inv}_{\\pi^*}(j)\\leq B^* \\ . \n\\end{equation}\n\\end{theorem}\nThis theorem implies that passwords in the same equivalence class are essentially identical from the perspective of the adversary. If the optimal strategy indicates to check a password in an equivalence class, then checking all the passwords in that equivalence class is also optimal. Immediately follows Corollary \\ref{corollary}.\n\n\\begin{corollary}\\label{corollary}\nThe optimal passwords checking number $B^*$ lies in the list $S=\\{0, c_1,c_1+c_2,\\cdots ,\\sum_{i=1}^{n'}c_i\\}$.\n\\end{corollary}\nWhen cracking users' accounts the adversary increases number of guesses $B$ by the size of the next equivalence class (if there is net profit by doing so). In short, the adversary checks passwords class by class instead of one by one.\n\nBase on the discussion of last subsections, we develop a greedy algorithm to compute the adversary's success rate given a particular hash cost vector to which the authentication serve committed to in stage I. We sort checking priority of each equivalence class $\\frac{p_i}{k_i}$ for $i \\leq n'$ in descending order and reindex them such that $\\frac{p_1}{k_1}\\geq\\cdots\\geq\\frac{p_{n'}}{k_{n'}}$. Then we iterate through the list $S$ and find the element leading to maximum utility as the checking number in adversary's best response. The details of the greedy algorithm can be found as in Algorithm \\ref{alg:response} $\\mathsf{BestRes}(\\vec{k}, v)$. 
\n}\n\n\\subsection{The Optimal Strategy of Selecting Hash Cost Vector} \\label{subsec:OptimizingCostVector}\n\\revision{In the previous section we showed that there is an efficient greedy algorithm $\\mathsf{BestRes}(v, \\vec{k}, D)$ which takes as input a cost vector $\\vec{k}$, a value $v$ and a (compressed) description of the password distribution $D$ computes the the attacker's best response $(\\pi^*, B^*)$ and outputs $\\lambda(\\pi^*, B^*)$ --- the fraction of cracked passwords. Using this algorithm $\\mathsf{BestRes}(v, \\vec{k}, D)$ as a blackbox we can apply derivative-free optimization to the optimization problem in equation (\\ref{eq:utility}) to find a good hash cost vector $\\vec{k}$ which minimizes the objective $\\lambda(\\pi^*, B^*)$} There are many derivative-free optimization solvers available in the literature \\cite{rios2013derivative}, generally they fall into two categorizes, deterministic algorithms (such as Nelder-Mead) and evolutionary algorithm (such as BITEOPT\\cite{biteopt} and CMA-EA). \\revision{ We refer our solver to as $\\mathsf{OptHashCostVec}(v,C_{max}, k_{min}, D)$. The algorithm takes as input the parameters of the optimization problem (i.e., password value $v$, $C_{max}$, $k_{min}$, and a (compressed) description of the password distribution $D$) and outputs an optimized hash cost vector $\\vec{k}$. }\n\n During each iteration of $\\mathsf{OptHashCostVec}(\\cdot)$, some candidates $\\{\\vec{k}_{c_i}\\}$ are proposed, together they are referred as \\emph{population}. For each candidate solution $\\vec{k}_{c_i}$ we use our greedy algorithm $\\mathsf{BestRes}(v, \\vec{k}_{c_i}, D)$ to compute the attacker's best response $(\\pi^*, B^*)$ \\revision{ i.e., fixing any feasible cost vector $\\vec{k}_{c_i}$ we can compute the corresponding value of the objective function $P_{adv,\\vec{k}_{c_i}} := \\sum_{i=1}^{B^*}p_{\\pi^{*}(i)}$.} We record the corresponding success rate $P_{adv,\\vec{k}_{c_i}}$ of the attacker as ``fitness''. At the end of each iteration, the population is updated according to fitness of its' members, the update could be either through deterministic transformation (Nelder-Mead) or randomized evolution (BITEOPT, CMA-EA). When the iteration number reaches a pre-defined value $ite$, the best fit member $\\vec{k}^*$ and its fitness $P_{adv}^*$ are returned.\n\\vspace{-0.3cm}\n\n\n\\ignore{\n\\subsection{Algorithms}\nAlgorithm \\ref{alg:response} shows how to efficiently compute the adversary's best response given cost parameters $\\vec{k}$. 
Algorithm \\ref{alg:response} must be quick since it is called multiple times in our brute-force search to find the defender's optimal strategy $\\vec{k}^*$.\n\n\\ignore{\n\\begin{algorithm}[t]\n\\caption{The adversary's best response given $\\vec{k}$}\\label{alg:euclid}\n\\begin{algorithmic}[1]\n\\Require{$\\vec{k}$, $\\{(p_i,c_i)\\}$, $v$}\n\\Ensure{$B_{max}$, $P_{max}$, $U_{max}$}\n\\State $P_{ADV}\\leftarrow0$;\n\\State $P_{max}\\leftarrow0$;\n\\State $B\\leftarrow 0$;\n\\State $B_{max}\\leftarrow 0$;\n\\State $U_{ADV}\\leftarrow 0$;\n\\State $U_{max}\\leftarrow 0$;\n\n\\State sort $\\{\\frac{p_i}{k_i}\\}$ and reindex $\\{(p_i,c_i)\\}$ such that $\\frac{p_1}{k_1}\\geq\\cdots\\geq\\frac{p_{n'}}{k_{n'}}$;\n\\For{$i=1$ to $n'$}\n\\State $P_{ADV}\\leftarrow P_{ADV}+p_i \\cdot c_i$;\n\\State \n$\\Delta \\leftarrow v p_i c_i+k_i{c_i-1\\choose2}p_i-k_ic_i\\cdot \\left(1-\\lambda(\\pi, B)\\right)$;\n\\State $U_{ADV}\\leftarrow U_{ADV}+\\Delta$;\n\\State $B\\leftarrow B+c_i$;\n\\If{$U_{ADV}>U_{max}$};\n\\State $B_{max}\\leftarrow B$;\n\\State $P_{max}\\leftarrow P_{ADV}$;\n\\State $U_{max}\\leftarrow U_{ADV}$;\n\\EndIf\n\\EndFor\n\\State \\textbf{return} $B_{max}$, $P_{max}$, $U_{max}$;\n\n\\end{algorithmic}\n\\label{alg:response}\n\\end{algorithm}\n}\n Algorithm \\ref{alg:findk} uses algorithm \\ref{alg:response} as a subroutine to find a good strategy $\\vec{k}$ for the defender.\n \n \\ignore{\n\\begin{algorithm}[t]\n\\caption{Find $\\vec{k}^*$ and $P_{ADV}^*$}\n\\begin{algorithmic}[1]\n\\Require{$\\mathcal{K}^0$, $\\{(p_i,c_i)\\}$}, $v$, $C_{max}$\n\\Ensure{$\\vec{k}^*$}, $B^*$, $P_{ADV}^*$\n\n\\State $P_{max}^*\\leftarrow 1$;\n\\State $B^*\\leftarrow \\infty$;\n\\ForAll{$\\vec{k}^0\\in \\mathcal{K}^0$}\n\\State $c\\leftarrow C_{max}\/\\left( \\sum_{i=1}^{\\tau}\\left(\\sum_{pwd_j\\in G_i} p_j\\right) \\cdot k_i^0\\right) $;\n\\State $\\vec{k}\\leftarrow c\\cdot \\vec{k}^0$;\n\\State $(B_{max},P_{max},U_{max})\\leftarrow \\mathsf{Algorithm 4}\\left(\\vec{k},\\{p_i,c_i\\},v\\right)$;\n\\If{$P_{max} \\Pr[pwd]\/k_1$, it means the position of $pwd$ in adversary's checking sequence is before the position when no misclassification was involved. Then users who picked $pwd$ would be more vulnerable with respect to no misclassification. Meanwhile, notice $pwd$ going to the ``front end'' of a checking queue implies some password(s) ($pwd^{\\prime},\\cdots,$) are pushed to ``'back end'' of the checking queue, users who have chosen those passwords become more resistant to the attack. Since the adversary has no way of recognizing uses who picked $pwd$, he cannot obtain a better utility from aiming for a specific group of users. More importantly (if we care the total users as a whole), the overall percentage of cracked password is reduced even if there might be some misclassification. Since misclassification happens randomly and $v\/C_{max}$ depends on different dataset there is no discrimination against a certain group of user. Even so, some readers might still consider it to be some what ethically problematic. To avoid the case as much as possible, we recommend to use count-min or count-median sketch to estimate frequency and to further classify passwords.\n\n\\subsubsection{Would our mechanism be vulnerable to side channel attack and timing attack?}\n\nWe remark that hash function computation time is not the only parameter that the server can tune to impose differentiated hash costs. 
Given that memory hard functions are used, other parameters that directly affect memory usage independently of the computation time, such as the degree of parallelism $p$ in Argon2 and the difficulty parameter $d$ (the depth of a stack of subprocedures) in SCRYPT, can be increased in exchange for a short authentication time. By tuning these parameters while fixing the authentication time we can ensure that 1) the threats of side channel and timing attacks are mitigated; and 2) the login time for a legitimate user is acceptable.}\n}\n\n\n\section{Algorithms}\label{app:algorithm}\n\begin{algorithm}[h]\n\begin{algorithmic}[1]\n\Require{$u$, $pw_u$, $L$}\n\State $s_u \overset{\$}{\leftarrow} \{0,1\}^L$;\n\State $k\leftarrow \mathsf{GetHardness}(pw_u)$;\n\State $h\leftarrow H(pw_u,s_u;~k)$;\n\State $\mathsf{StoreRecord}$ $(u,s_u,h)$\n\end{algorithmic}\n\caption{Account creation} \label{alg:createaccount} \n\end{algorithm}\n\n\begin{algorithm}[h]\n\begin{algorithmic}[1]\n\Require{$u$, $pw_u'$}\n\State $(u,s_u,h) \leftarrow \mathsf{FindRecord}(u)$;\n\State $k' \leftarrow \mathsf{GetHardness}(pw_u')$;\n\State $h'\leftarrow H(pw_u',s_u;~k')$;\n\State \textbf{Return} $h == h'$\n\end{algorithmic}\n\caption{Password authentication} \label{alg:authenticate}\n\end{algorithm}\n\vspace{-0.3cm}\n\ignore{\n\begin{algorithm}[h]\n\begin{algorithmic}[1]\n\State \textbf{Preprocessing:}\n\State Partition all passwords into $\tau$ groups $G_i,i\in\{1,\cdots,\tau\}$;\n\State Associate $k_i$ with $G_i$;\n\Require{$pw_u$}\n\Ensure{$k(pw_u)$}\n\For{$i=1$ to $\tau$}\n\If{$pw_u\in G_i$}\n\State $k(pw_u)=k_i$; \n\EndIf\n\EndFor\n\State \textbf{return} $k(pw_u)$;\n\end{algorithmic}\n \caption{$\mathsf{GetHardness}(pw_u)$} \label{alg:getHardness}\n \label{alg:hardness}\n\end{algorithm}\n}\n\vspace{-0.3cm}\n\begin{algorithm}[h]\n\caption{The adversary's best response $\mathsf{BestRes}(v, \vec{k}, D)$}\n\begin{algorithmic}[1]\n\Require{$\vec{k}$, $v$, $D$}\n\Ensure{$(\pi^*, B^*)$}\n\State sort $\{\frac{p_i}{k_i}\}$ and reindex such that $\frac{p_1}{k_1}\geq\cdots\geq\frac{p_{n'}}{k_{n'}}$ to get $\pi^*$;\n\State $B^* = \arg \max_{B} U_{ADV}\left(v,\vec{k}, (\pi^*,B)\right)$;\n\State \textbf{return} $(\pi^*,B^*)$;\n\end{algorithmic}\n\label{alg:response}\n\end{algorithm}\n
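\nFor concreteness, the following Python sketch mirrors Algorithms \ref{alg:createaccount} and \ref{alg:authenticate}. Instantiating $H$ with scrypt and interpreting the hash cost parameter $k$ as a log-scale work factor are illustrative assumptions for this sketch only; a real deployment would use a dedicated password hashing library and a careful parameter mapping.\n\begin{verbatim}
# Account creation and authentication with a per-password hash cost.
# get_hardness(pw) returns an integer k; we map it to scrypt's work
# parameter N = 2**k (k around 14 uses roughly 16 MiB with r = 8).
import os, hmac, hashlib

records = {}                                  # u -> (salt, hash)

def H(pw, salt, k):
    return hashlib.scrypt(pw.encode(), salt=salt, n=2**k, r=8, p=1, dklen=32)

def create_account(u, pw, get_hardness, L=16):
    salt = os.urandom(L)                      # s_u sampled from {0,1}^L
    k = get_hardness(pw)                      # group-dependent hash cost
    records[u] = (salt, H(pw, salt, k))       # StoreRecord(u, s_u, h)

def authenticate(u, pw_attempt, get_hardness):
    salt, h = records[u]
    k = get_hardness(pw_attempt)              # recompute cost from the attempt
    return hmac.compare_digest(h, H(pw_attempt, salt, k))
\end{verbatim}\nNote that, exactly as in Algorithm \ref{alg:authenticate}, the cost parameter is recomputed from the submitted password rather than stored, so a correct login always reproduces the original cost.\n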
\n\n\section{Missing Proofs}\label{app:proof}\n\n\subsection*{Proof of Theorem~\ref{thm:noinversions}}\n\revision{\n\begin{remindertheorem}{Theorem \ref{thm:noinversions}}\n\revision{Let $(\pi^*,B^*)$ denote the attacker's optimal strategy with respect to hash cost parameters $\vec{k}$ and let $\pi$ be an ordering with no inversions relative to $\vec{k}$ then \[ U_{ADV}\left(v,\vec{k},(\pi,B^*)\right) \geq U_{ADV}\left(v,\vec{k},(\pi^*,B^*)\right) \ . \]}\n\end{remindertheorem}}\n\begin{proofof}{Theorem~\ref{thm:noinversions}}\n\revision{Fixing $B,v,\vec{k}$ we let $\pi$ be the optimal ordering of passwords. If there are multiple optimal orderings we take the ordering $\pi$ with the fewest number of inversions. Recall that an inversion is a pair $b< a$ such that $r_{\pi(a)}>r_{\pi(b)}$, i.e., $pw_{\pi(b)}$ is scheduled to be checked before $pw_{\pi(a)}$ but password $pw_{\pi(a)}$ has a higher ``bang-for-buck'' ratio. We say that we have a consecutive inversion if $a=b+1$. Suppose for contradiction that $\pi$ has an inversion.}\n\begin{itemize}\n\item \revision{If $\pi$ has an inversion then $\pi$ also has a consecutive inversion. Let $(a,b)$ be the closest inversion, i.e., one minimizing $|a-b|$. The claim is that $(a,b)$ is a consecutive inversion. If not, there is some $c$ such that $b < c < a$. Now either $r_{\pi(c)} < r_{\pi(a)}$ (in which case the pair $(c,a)$ forms a closer inversion) or $r_{\pi(c)} \geq r_{\pi(a)} > r_{\pi(b)}$ (in which case the pair $(b,c)$ forms a closer inversion). In either case we contradict our assumption that $(a,b)$ is the closest inversion. } \n\n\item \revision{Let $b$, $b+1$ be a consecutive inversion.} We now define $\pi'$ to be the same ordering as $\pi$ except that the order of \revision{$b$ and $b+1$} is flipped \revision{i.e., $\pi'(b) = \pi(b+1)$ and $\pi'(b+1)=\pi(b)$ so that we now check password $pw_{\pi(b+1)}$ before password $pw_{\pi(b)}$}. Note that $\pi'$ has one fewer inversion than $\pi$. \n\n\item We will prove that $$U_{ADV}\left(v,\vec{k},(\pi',B)\right) \geq U_{ADV}\left(v,\vec{k},(\pi,B)\right)$$ contradicting the choice of $\pi$ as the optimal ordering with the fewest number of inversions. By the definition of $U_{ADV}$ we have \n\begin{equation*}\small\n\begin{aligned}\n&U_{ADV}\left(v,\vec{k},(\pi,B)\right) =v\cdot \lambda(\pi,B)-\sum^B_{i=1} k(pw_{\pi(i)})\cdot \left(1-\lambda(\pi,i-1)\right),\n\end{aligned}\n\end{equation*}\nand \n\begin{equation*}\small\n\begin{aligned}\n&U_{ADV}\left(v,\vec{k},(\pi',B)\right) =v\cdot \lambda(\pi',B)-\sum^B_{i=1} k(pw_{\pi'(i)})\cdot \left(1-\lambda(\pi',i-1)\right).\n\end{aligned}\n\end{equation*}\nNote that $\pi$ and $\pi'$ only differ at \revision{guesses $b$ and $b+1$ and coincide on the remaining passwords.} Thus, we have \n$\lambda(\pi,i)=\lambda(\pi',i)$ when $0\leq i\leq b-1$ or when $i \geq b+1$.\n\revision{For convenience, set} $\lambda=\lambda(\pi,b-1)$. \n\n\revision{Assuming that $b+1 \leq B$ and taking the difference of the above two equations, we obtain}\n\begin{equation}\label{eq:diff}\n\begin{aligned}\n&U_{ADV}\left(v,\vec{k},(\pi,B)\right) -U_{ADV}\left(v,\vec{k},(\pi',B)\right)\\\n&=\left[k(pw_{\pi(b)})\lambda+k(pw_{\pi(b+1)})(\lambda+p_{\pi(b)})\right]\\\n&\quad-\left[k(pw_{\pi(b+1)})\lambda+k(pw_{\pi(b)})(\lambda+p_{\pi(b+1)})\right]\\\n&=p_{\pi(b)}\cdot k(pw_{\pi(b+1)})- p_{\pi(b+1)}\cdot k(pw_{\pi(b)}) \leq 0.\n\end{aligned}\n\end{equation}\n\n\revision{The last inequality holds since $0> (r_{\pi(b)} - r_{\pi(b+1)}) = \frac{p_{\pi(b)}}{k(pw_{\pi(b)})} - \frac{p_{\pi(b+1)}}{k(pw_{\pi(b+1)})}$ (we multiply both sides of the inequality by $k(pw_{\pi(b+1)})\, k(pw_{\pi(b)})$ to obtain the result). From equation \eqref{eq:diff} we see that the new swapped strategy $\pi'$ has a utility at least as large as that of $\pi$. Contradiction! }\n\n\revision{ If $b> B$ then swapping has no impact on utility as neither password $pw_{\pi(b)}$ nor $pw_{\pi(b+1)}$ will be checked. \n\nFinally, if $B=b$ then, by the optimality of $B$, checking the last password in $\pi$ provides non-negative utility, i.e.,\n\begin{equation}\nv\cdot p_{\pi(B)} - k(pw_{\pi(B)})(1-\lambda(\pi,B-1)) \geq 0,\n\end{equation}\nwhereas continuing to check $pw_{\pi(B+1)}$ after executing strategy $(\pi,B)$ would reduce utility, i.e.,\n\begin{equation}\nv\cdot p_{\pi(B+1)} - k(pw_{\pi(B+1)})(1-\lambda(\pi,B)) < 0.\n\end{equation}\nFrom the above two equations, we have\n\begin{equation}\nr_{\pi(B)}=\frac{p_{\pi(B)}}{k(pw_{\pi(B)})} \geq \frac{1-\lambda(\pi,B-1)}{v} > \frac{1-\lambda(\pi,B)}{v} > \frac{p_{\pi(B+1)}}{k(pw_{\pi(B+1)})} = r_{\pi(B+1)}.\n\end{equation}\nAgain, we have a contradiction. Therefore, an optimal checking sequence does not contain inversions.\n}\n\end{itemize} \n\end{proofof}\n
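\nAs a quick numerical sanity check of Theorem \ref{thm:noinversions}, the following self-contained Python snippet compares the bang-for-buck ordering against a brute-force search over all orderings and budgets on a toy distribution; the passwords, probabilities, costs, and value below are arbitrary and purely illustrative.\n\begin{verbatim}
# Brute-force check: a no-inversion (bang-for-buck) ordering achieves the
# optimal attacker utility on a small made-up instance.
from itertools import permutations

probs = {"123456": 0.05, "password": 0.03, "p@ssw0rd": 0.01, "Xk3!q": 0.001}
cost  = {"123456": 3.0,  "password": 3.0,  "p@ssw0rd": 0.5,  "Xk3!q": 0.5}
v = 40.0

def utility(order, B):
    gain = v * sum(probs[pw] for pw in order[:B])
    expected_cost, survive = 0.0, 1.0
    for pw in order[:B]:
        expected_cost += cost[pw] * survive
        survive -= probs[pw]
    return gain - expected_cost

brute = max(utility(list(o), B)
            for o in permutations(probs) for B in range(len(probs) + 1))
greedy_order = sorted(probs, key=lambda pw: probs[pw] / cost[pw], reverse=True)
greedy = max(utility(greedy_order, B) for B in range(len(probs) + 1))
assert abs(brute - greedy) < 1e-9    # the greedy ordering is optimal here
\end{verbatim}\n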
\n\n\subsection*{Proof of Theorem \ref{corollary}}\n\revision{\n\begin{remindertheorem}{Theorem \ref{corollary}}\nLet $(\pi^*,B^*)$ denote the attacker's optimal strategy with respect to hash cost parameters $\vec{k}$. Suppose that passwords can be partitioned into $n^{\prime}$ equivalence sets $es_1,\ldots, es_{n^{\prime}}$ such that passwords $pw_a, pw_b \in es_i$ have the same probability and hash cost i.e., $p_a=p_b = p^i$ and $k(pw_a) = k(pw_b)= k^i$. Let $r^i = p^i\/k^i$ denote the bang-for-buck ratio of equivalence set $es_i$ and assume that $r^1 \geq r^2 \geq \ldots \geq r^{n^{\prime}}$ then $B^* \in \left\{0, |es_1| ,|es_1|+|es_2|,\cdots ,\sum_{i=1}^{n^{\prime}}|es_i|\right\}$.\n\end{remindertheorem}\n}\n\n\begin{proofof}{Theorem \ref{corollary}}\nThe proof of Theorem \ref{corollary} follows from the following lemma which states that whenever $pw_i$ and $pw_j$ are in the same equivalence set the optimal attack strategy will either check both of these passwords or neither.\n\begin{lemma}\n\thmcompact\n\end{lemma}\n\n\begin{proof}\nSuppose for contradiction that the optimal strategy checks $pw_i$ but not $pw_j$. Then WLOG we can assume that $\mathsf{Inv}_{\pi^*}(i)= B^*$ is the last password to be checked and that $\mathsf{Inv}_{\pi^*}(j) = B^*+1$ is the next password to be checked (otherwise, we can swap $pw_j$ with the password in the equivalence set that will be checked next). Since $pw_i$ and $pw_j$ are in the same equivalence set, we have $\Pr[pw_i]=\Pr[pw_j]$ and $k(pw_i)=k(pw_j)$. The marginal utility of checking $pw_i$, i.e., $\Delta_i = U_{ADV}\left(v,\vec{k},(\pi^*,B^*)\right) -U_{ADV}\left(v,\vec{k},(\pi^*,B^*-1)\right)$, is\n$$\Delta_i=v\Pr[pw_i]-k(pw_i)(1-\lambda(\pi^*,B^*-1)).$$\nBecause checking $pw_i$ is part of the optimal strategy, it must be the case that $\Delta_i\geq 0$. Otherwise, we would immediately derive a contradiction since the strategy $(\pi^*,B^*-1)$ would have greater utility than $(\pi^*,B^*)$. Now the marginal utility $\Delta_j = U_{ADV}\left(v,\vec{k},(\pi^*,B^*+1)\right) -U_{ADV}\left(v,\vec{k},(\pi^*,B^*)\right)$ of checking $pw_j$ as well is\n$$\Delta_j=v\Pr[pw_j]-k(pw_j)(1-\lambda(\pi^*,B^*))=v\Pr[pw_j]-k(pw_j)\left(1-\lambda(\pi^*,B^*-1)-\Pr[pw_i]\right)>\Delta_i \geq 0 \ . $$\nSince $\Delta_j>0$ we have $U_{ADV}\left(v,\vec{k},(\pi^*,B^*+1)\right) > U_{ADV}\left(v,\vec{k},(\pi^*,B^*)\right) $ contradicting the optimality of $(\pi^*,B^*)$. \hfill $\square$\n\end{proof}\n\nFrom Theorem \ref{thm:noinversions} it follows that we will check the equivalence sets in the order of their bang-for-buck ratios. Thus, $B^*$ must lie in the set $\{0,|es_1|,|es_1|+|es_2|,\ldots, \sum_{i=1}^{n^{\prime}} |es_i|\}$. \n\end{proofof}\n\n\n\section{FAQ}\n\n\subsection*{Could this mechanism harm users who pick weak passwords?} \revision{\nWe understand the concern that our mechanism might provide weaker protection for weak passwords than a uniform hash cost for all passwords would. If our estimation of the value $v$ of a cracked password is way too high then it is indeed possible that the DAHash parameters would be misconfigured in a way that harms users with weak passwords. However, even in this case we ensure that every password receives a minimum level of acceptable protection by setting a minimum hash cost parameter $k_{min}$ for every password. 
We note that if our estimation of $v$ is accurate and it is feasible to deter an attacker from cracking weaker passwords then DAHash will actually tend to provide stronger protection for these passwords. On the other hand if the password is sufficiently weak that we cannot deter an attacker then these weak passwords will always be cracked no matter what actions we take. Thus, DAHash will reallocate effort to focus on protecting stronger passwords. }\n\n\\ignore{\n\\subsection*{What about side-channel attacks?}\n\\revision{\nThe concern is that an eavesdropping attacker who observe the time it takes for a user to login successfully might be able to draw inferences about the strength of the user's password. This is a valid concern if the hash cost parameter, which is tied to password strength, is correlated with the authentication time. However, modern memory hard password hash functions have two relevant cost parameters: running time and memory. Space-time cost is measured as the product of these two parameters. Thus, space-time costs can often be adjusted without altering the running time e.g., by increasing memory usage. Another way to mitigate the risk of side-channel leakage would be to fix a delay (e.g., $1$ second) which would be the same for everyone e.g., even if the hash of some user passwords can be computed much faster. }\n}\n\n\n\n\n \n \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}