\section{Introduction}
\label{sect-intro}

We take the view that sequential programs are in essence sequences of
instructions.
Although finite state programs with direct and indirect jump
instructions are as expressive as finite state programs with direct jump
instructions only, indirect jump instructions are widely used.
For example, return instructions, in common use to implement recursive
method calls in programming languages such as Java~\cite{GJSB00a} and
C\#~\cite{HWG03a}, are indirect jump instructions.
Therefore, we consider a theoretical understanding of both direct jump
instructions and indirect jump instructions highly relevant to
programming.
In~\cite{BL02a}, sequential programs that are instruction sequences with
direct jump instructions are studied.
In this paper, we study sequential programs that are instruction
sequences with both direct jump instructions and indirect jump
instructions.

We believe that interaction with components of an execution environment,
in particular memory devices, is inherent in the behaviour of programs
under execution.
Intuitively, indirect jump instructions are jump instructions where the
position of the instruction to jump to is the content of some memory
cell.
In this paper, we consider several kinds of indirect jump instructions,
including return instructions.
For each kind, we define the meaning of programs with indirect jump
instructions of that kind by means of a translation into programs
without indirect jump instructions.
For each kind, the intended behaviour of a program with indirect jump
instructions of that kind under execution is the behaviour of the
translated program under execution on interaction with some memory
device.
We also describe the memory devices concerned, to wit register files and
stacks.

The approach to define the meaning of programs mentioned above is
introduced under the name projection semantics in~\cite{BL02a}.
Projection semantics explains the meaning of programs in terms of known
programs instead of more or less sophisticated mathematical objects that
represent behaviours.
The main advantage of projection semantics is that it does not require a
lot of mathematical background.
In the present case, another advantage of projection semantics is that
it follows immediately that indirect jump instructions of the kinds
considered can be eliminated from programs in the presence of an
appropriate memory device.
We will study sequential programs that are instruction sequences with
direct and indirect jump instructions in the setting in which projection
semantics has been developed so far: the setting of program algebra and
basic thread algebra.%
\footnote
{In~\cite{BL02a}, basic thread algebra is introduced under the
 name basic polarized process algebra.
 Prompted by the development of thread algebra~\cite{BM04c}, which is a
 design on top of it, basic polarized process algebra has been renamed
 to basic thread algebra.
}

Program algebra is an algebra of deterministic sequential programs based
on the idea that such programs are in essence sequences of instructions.
Basic thread algebra is a form of process algebra which is tailored to
the description of the behaviour of deterministic sequential programs
under
execution.\nA hierarchy of program notations rooted in program algebra is introduced\nin~\\cite{BL02a}.\nIn this paper, we embroider on two program notations that belong to this\nhierarchy.\nThe program notations in question, called \\PGLC\\ and \\PGLD, are close to\nexisting assembly languages.\nThe main difference between them is that \\PGLC\\ has relative jump\ninstructions and \\PGLD\\ has absolute jump instructions.\n\nA thread proceeds by doing steps in a sequential fashion.\nA thread may do certain steps only for the sake of having itself\naffected by some service.\nIn~\\cite{BP02a}, the use mechanism is introduced to allow for such\ninteraction between threads and services.\nThe interaction between behaviours of programs under execution and some\nmemory device referred to above is an interaction of this kind.\nIn this paper, we will use a slightly adapted form of the use mechanism,\ncalled thread-service composition, to have behaviours of programs under\nexecution affected by services.\n\nThis paper is organized as follows.\nFirst, we review basic thread algebra, program algebra, and the program\nnotations \\PGLC\\ and \\PGLD\\\n(Sections~\\ref{sect-BTA}, \\ref{sect-PGA}, and~\\ref{sect-PGLC-PGLD}).\nNext, we extend basic thread algebra with thread-service composition and\nintroduce a state-based approach to describe services\n(Sections~\\ref{sect-TAtsc} and~\\ref{sect-service-descr}).\nFollowing this, we give a state-based description of register file\nservices and introduce variants of the program notations \\PGLC\\ and\n\\PGLD\\ with indirect jump instructions\n(Sections~\\ref{sect-reg-file}, \\ref{sect-PGLDij}, and~\\ref{sect-PGLCij}).\nWe also introduce a variant of one of those program notations with\ndouble indirect jump instructions (Section~\\ref{sect-PGLDdij}).\nAfter that, we give a state-based description of stack services and\nintroduce a variant of the program notation \\PGLD\\ with returning jump\ninstructions and return instructions\n(Sections~\\ref{sect-stack} and~\\ref{sect-PGLDrj}).\nFinally, we make some concluding remarks (Section~\\ref{sect-concl}).\n\n\n\\section{Basic Thread Algebra}\n\\label{sect-BTA}\n\nIn this section, we review \\BTA\\ (Basic Thread Algebra), a form of\nprocess algebra which is tailored to the description of the behaviour of\ndeterministic sequential programs under execution.\nThe behaviours concerned are called \\emph{threads}.\n\nIn \\BTA, it is assumed that there is a fixed but arbitrary finite set of\n\\emph{basic actions} $\\BAct$.\nThe intuition is that each basic action performed by a thread is taken\nas a command to be processed by a service provided by the execution\nenvironment of the thread.\nThe processing of a command may involve a change of state of the service\nconcerned.\nAt completion of the processing of the command, the service produces a\nreply value.\nThis reply is either $\\True$ or $\\False$ and is returned to the thread\nconcerned.\n\nAlthough \\BTA\\ is one-sorted, we make this sort explicit.\nThe reason for this is that we will extend \\BTA\\ with an additional sort\nin Section~\\ref{sect-TAtsc}.\n\nThe algebraic theory \\BTA\\ has one sort: the sort $\\Thr$ of\n\\emph{threads}.\nTo build terms of sort $\\Thr$, \\BTA\\ has the following constants and\noperators:\n\\begin{iteml}\n\\item\nthe \\emph{deadlock} constant $\\const{\\DeadEnd}{\\Thr}$;\n\\item\nthe \\emph{termination} constant $\\const{\\Stop}{\\Thr}$;\n\\item\nfor each $a \\in \\BAct$, the binary \\emph{postconditional composition}\noperator\\linebreak 
$\funct{\pcc{\ph}{a}{\ph}}{\Thr \x \Thr}{\Thr}$.
\end{iteml}
Terms of sort $\Thr$ are built as usual (see e.g.~\cite{ST99a,Wir90a}).
Throughout the paper, we assume that there are infinitely many variables
of sort $\Thr$, including $x,y,z$.

We use infix notation for postconditional composition.
We introduce \emph{action prefixing} as an abbreviation: $a \bapf p$,
where $p$ is a term of sort $\Thr$, abbreviates $\pcc{p}{a}{p}$.

Let $p$ and $q$ be closed terms of sort $\Thr$ and $a \in \BAct$.
Then $\pcc{p}{a}{q}$ will perform action $a$, and after that proceed as
$p$ if the processing of $a$ leads to the reply $\True$ (called a
positive reply), and proceed as $q$ if the processing of $a$ leads to
the reply $\False$ (called a negative reply).

Each closed \BTA\ term of sort $\Thr$ denotes a finite thread, i.e.\ a
thread of which the length of the sequences of actions that it can
perform is bounded.
Guarded recursive specifications give rise to infinite threads.

A \emph{guarded recursive specification} over \BTA\ is a set of
recursion equations $E = \set{X = t_X \where X \in V}$, where $V$ is a
set of variables of sort $\Thr$ and each $t_X$ is a term of the form
$\DeadEnd$, $\Stop$ or $\pcc{t}{a}{t'}$ with $t$ and $t'$ \BTA\ terms of
sort $\Thr$ that contain only variables from $V$.
We write $\vars(E)$ for the set of all variables that occur on the
left-hand side of an equation in $E$.
We are only interested in models of \BTA\ in which guarded recursive
specifications have unique solutions, such as the projective limit model
of \BTA\ presented in~\cite{BB03a}.
A thread that is the solution of a finite guarded recursive
specification over \BTA\ is called a \emph{finite-state} thread.

We extend \BTA\ with guarded recursion by adding constants for solutions
of guarded recursive specifications and axioms concerning these
additional constants.
For each guarded recursive specification $E$ and each $X \in \vars(E)$,
we add a constant of sort $\Thr$ standing for the unique solution of $E$
for $X$ to the constants of \BTA.
The constant standing for the unique solution of $E$ for $X$ is denoted
by $\rec{X}{E}$.
Moreover, we add the axioms for guarded recursion given in
Table~\ref{axioms-rec} to \BTA,%
\begin{table}[!t]
\caption{Axioms for guarded recursion}
\label{axioms-rec}
\begin{eqntbl}
\begin{saxcol}
\rec{X}{E} = \rec{t_X}{E} & \mif X \!=\! t_X \in E & \axiom{RDP}
\\
E \Implies X = \rec{X}{E} & \mif X \in \vars(E) & \axiom{RSP}
\end{saxcol}
\end{eqntbl}
\end{table}
where we write $\rec{t_X}{E}$ for $t_X$ with, for all $Y \in \vars(E)$,
all occurrences of $Y$ in $t_X$ replaced by $\rec{Y}{E}$.
In this table, $X$, $t_X$ and $E$ stand for an arbitrary variable of
sort $\Thr$, an arbitrary \BTA\ term of sort $\Thr$ and an arbitrary
guarded recursive specification over \BTA, respectively.
Side conditions are added to restrict the variables, terms and guarded
recursive specifications for which $X$, $t_X$ and $E$ stand.
The equations $\rec{X}{E} = \rec{t_X}{E}$ for a fixed $E$ express that
the constants $\rec{X}{E}$ make up a solution of $E$.
The conditional equations $E \Implies X = \rec{X}{E}$ express that this
solution is the only one.
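By way of illustration, consider the guarded recursive specification
$E = \set{X = a \bapf X}$.
Axiom RDP yields
\begin{ldispl}
\rec{X}{E} = a \bapf \rec{X}{E} = a \bapf (a \bapf \rec{X}{E})\;,
\end{ldispl}%
i.e.\ $\rec{X}{E}$ is the infinite thread that performs $a$ over and
over again and never terminates; by RSP, it is the only thread that
satisfies the equation of $E$.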
We will write \BTA+\REC\ for \BTA\ extended with the constants for
solutions of guarded recursive specifications and axioms RDP and RSP.

In~\cite{BM05c}, we show that the threads considered in \BTA+\REC\ can
be viewed as processes that are definable over \ACP~\cite{Fok00}.

\section{Program Algebra}
\label{sect-PGA}

In this section, we review \PGA\ (ProGram Algebra), an algebra of
sequential programs based on the idea that sequential programs are in
essence sequences of instructions.
\PGA\ provides a program notation for finite-state threads.

In \PGA, it is assumed that there is a fixed but arbitrary finite set
$\BInstr$ of \emph{basic instructions}.
\PGA\ has the following \emph{primitive instructions}:
\begin{iteml}
\item
for each $a \in \BInstr$, a \emph{plain basic instruction} $a$;
\item
for each $a \in \BInstr$, a \emph{positive test instruction} $\ptst{a}$;
\item
for each $a \in \BInstr$, a \emph{negative test instruction} $\ntst{a}$;
\item
for each $l \in \Nat$, a \emph{forward jump instruction} $\fjmp{l}$;
\item
a \emph{termination instruction} $\halt$.
\end{iteml}
We write $\PInstr$ for the set of all primitive instructions.

The intuition is that the execution of a basic instruction $a$ may
modify a state and produce $\True$ or $\False$ at its completion.
In the case of a positive test instruction $\ptst{a}$, basic instruction
$a$ is executed and execution proceeds with the next primitive
instruction if $\True$ is produced and otherwise the next primitive
instruction is skipped and execution proceeds with the primitive
instruction following the skipped one.
In the case where $\True$ is produced and there is not at least one
subsequent primitive instruction and in the case where $\False$ is
produced and there are not at least two subsequent primitive
instructions, deadlock occurs.
In the case of a negative test instruction $\ntst{a}$, the role of the
value produced is reversed.
In the case of a plain basic instruction $a$, the value produced is
disregarded: execution always proceeds as if $\True$ is produced.
The effect of a forward jump instruction $\fjmp{l}$ is that execution
proceeds with the $l$-th next instruction of the program concerned.
If $l$ equals $0$ or the $l$-th next instruction does not exist, then
$\fjmp{l}$ results in deadlock.
The effect of the termination instruction $\halt$ is that execution
terminates.

\PGA\ has the following constants and operators:
\begin{iteml}
\item
for each $u \in \PInstr$, an \emph{instruction} constant $u$\,;
\item
the binary \emph{concatenation} operator $\ph \conc
\\ph$\\,;\n\\item\nthe unary \\emph{repetition} operator $\\ph\\rep$\\,.\n\\end{iteml}\nTerms are built as usual.\nThroughout the paper, we assume that there are infinitely many\nvariables, including $x,y,z$.\n\nWe use infix notation for concatenation and postfix notation for\nrepetition.\n\nClosed \\PGA\\ terms are considered to denote programs.\nThe intuition is that a program is in essence a non-empty, finite or\ninfinite sequence of primitive instructions.\nThese sequences are called \\emph{single pass instruction sequences}\nbecause \\PGA\\ has been designed to enable single pass execution of\ninstruction sequences: each instruction can be dropped after it has been\nexecuted.\nPrograms are considered to be equal if they represent the same single\npass instruction sequence.\nThe axioms for instruction sequence equivalence are given in\nTable~\\ref{axioms-PGA}.%\n\\begin{table}[!t]\n\\caption{Axioms of \\PGA}\n\\label{axioms-PGA}\n\\begin{eqntbl}\n\\begin{axcol}\n(x \\conc y) \\conc z = x \\conc (y \\conc z) & \\axiom{PGA1} \\\\\n(x^n)\\rep = x\\rep & \\axiom{PGA2} \\\\\nx\\rep \\conc y = x\\rep & \\axiom{PGA3} \\\\\n(x \\conc y)\\rep = x \\conc (y \\conc x)\\rep & \\axiom{PGA4}\n\\end{axcol}\n\\end{eqntbl}\n\\end{table}\nIn this table, $n$ stands for an arbitrary natural number greater than\n$0$.\nFor each $n > 0$, the term $x^n$ is defined by induction on $n$ as\nfollows: $x^1 = x$ and $x^{n+1} = x \\conc x^n$.\nThe \\emph{unfolding} equation $x\\rep = x \\conc x\\rep$ is\nderivable.\nEach closed \\PGA\\ term is derivably equal to a term in\n\\emph{canonical form}, i.e.\\ a term of the form $P$ or $P \\conc Q\\rep$,\nwhere $P$ and $Q$ are closed \\PGA\\ terms that do not contain the\nrepetition operator.\n\nEach closed \\PGA\\ term is considered to denote a program of which the\nbehaviour is a finite-state thread, taking the set $\\BInstr$ of basic\ninstructions for the set $\\BAct$ of actions.\nThe \\emph{thread extraction} operator $\\extr{\\ph}$ assigns a thread to\neach program.\nThe thread extraction operator is defined by the equations given in\nTable~\\ref{axioms-thread-extr} (for $a \\in \\BInstr$, $l \\in \\Nat$ and\n$u \\in \\PInstr$)%\n\\begin{table}[!t]\n\\caption{Defining equations for thread extraction operator}\n\\label{axioms-thread-extr}\n\\begin{eqntbl}\n\\begin{eqncol}\n\\extr{a} = a \\bapf \\DeadEnd \\\\\n\\extr{a \\conc x} = a \\bapf \\extr{x} \\\\\n\\extr{\\ptst{a}} = a \\bapf \\DeadEnd \\\\\n\\extr{\\ptst{a} \\conc x} =\n\\pcc{\\extr{x}}{a}{\\extr{\\fjmp{2} \\conc x}} \\\\\n\\extr{\\ntst{a}} = a \\bapf \\DeadEnd \\\\\n\\extr{\\ntst{a} \\conc x} =\n\\pcc{\\extr{\\fjmp{2} \\conc x}}{a}{\\extr{x}}\n\\end{eqncol}\n\\qquad\n\\begin{eqncol}\n\\extr{\\fjmp{l}} = \\DeadEnd \\\\\n\\extr{\\fjmp{0} \\conc x} = \\DeadEnd \\\\\n\\extr{\\fjmp{1} \\conc x} = \\extr{x} \\\\\n\\extr{\\fjmp{l+2} \\conc u} = \\DeadEnd \\\\\n\\extr{\\fjmp{l+2} \\conc u \\conc x} = \\extr{\\fjmp{l+1} \\conc x} \\\\\n\\extr{\\halt} = \\Stop \\\\\n\\extr{\\halt \\conc x} = \\Stop\n\\end{eqncol}\n\\end{eqntbl}\n\\end{table}\nand the rule given in Table~\\ref{rule-thread-extr}.%\n\\begin{table}[!t]\n\\caption{Rule for cyclic jump chains}\n\\label{rule-thread-extr}\n\\begin{eqntbl}\n\\begin{eqncol}\nx \\scongr \\fjmp{0} \\conc y \\Implies \\extr{x} = \\DeadEnd\n\\end{eqncol}\n\\end{eqntbl}\n\\end{table}\nThis rule is expressed in terms of the \\emph{structural congruence}\npredicate $\\ph \\scongr \\ph$, which is defined by the formulas given in\nTable~\\ref{axioms-scongr} (for $n,m,l \\in \\Nat$ 
and
$u_1,\ldots,u_n,v_1,\ldots,v_{m+1} \in \PInstr$).%
\begin{table}[!t]
\caption{Defining formulas for structural congruence predicate}
\label{axioms-scongr}
\begin{eqntbl}
\begin{eqncol}
\fjmp{n+1} \conc u_1 \conc \ldots \conc u_n \conc \fjmp{0}
\scongr
\fjmp{0} \conc u_1 \conc \ldots \conc u_n \conc \fjmp{0}
\\
\fjmp{n+1} \conc u_1 \conc \ldots \conc u_n \conc \fjmp{m}
\scongr
\fjmp{m+n+1} \conc u_1 \conc \ldots \conc u_n \conc \fjmp{m}
\\
(\fjmp{n+l+1} \conc u_1 \conc \ldots \conc u_n)\rep \scongr
(\fjmp{l} \conc u_1 \conc \ldots \conc u_n)\rep
\\
\fjmp{m+n+l+2} \conc u_1 \conc \ldots \conc u_n \conc
(v_1 \conc \ldots \conc v_{m+1})\rep \scongr {} \\ \hfill
\fjmp{n+l+1} \conc u_1 \conc \ldots \conc u_n \conc
(v_1 \conc \ldots \conc v_{m+1})\rep
\\
x \scongr x
\\
x_1 \scongr y_1 \And x_2 \scongr y_2 \Implies
x_1 \conc x_2 \scongr y_1 \conc y_2 \And
{x_1}\rep \scongr {y_1}\rep
\end{eqncol}
\end{eqntbl}
\end{table}

The equations given in Table~\ref{axioms-thread-extr} do not cover the
case where there is a cyclic chain of forward jumps.
Programs are structurally congruent if they are the same after removing
all chains of forward jumps in favour of single jumps.
Because a cyclic chain of forward jumps corresponds to $\fjmp{0}$,
the rule from Table~\ref{rule-thread-extr} can be read as follows:
if $x$ starts with a cyclic chain of forward jumps, then $\extr{x}$
equals $\DeadEnd$.
It is easy to see that the thread extraction operator assigns the same
thread to structurally congruent programs.
Therefore, the rule from Table~\ref{rule-thread-extr} can be replaced by
the following generalization:
$x \scongr y \Implies \extr{x} = \extr{y}$.
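By way of example, for $a,b \in \BInstr$, the defining equations for the
thread extraction operator yield
\begin{ldispl}
\extr{\ptst{a} \conc \fjmp{2} \conc b \conc \halt} =
\pcc{\extr{\fjmp{2} \conc b \conc \halt}}
    {a}
    {\extr{\fjmp{2} \conc \fjmp{2} \conc b \conc \halt}} =
\pcc{\Stop}{a}{(b \bapf \Stop)}\;.
\end{ldispl}%
A positive reply to $a$ leads to the jump $\fjmp{2}$, which skips $b$
and reaches $\halt$, whereas a negative reply leads to the execution of
$b$ followed by $\halt$.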
\emph{basic instructions} $\BInstr$.
Again, the intuition is that the execution of a basic instruction $a$
may modify a state and produce $\True$ or $\False$ at its completion.

\PGLC\ has the following primitive instructions:
\begin{iteml}
\item
for each $a \in \BInstr$, a \emph{plain basic instruction} $a$;
\item
for each $a \in \BInstr$, a \emph{positive test instruction} $\ptst{a}$;
\item
for each $a \in \BInstr$, a \emph{negative test instruction} $\ntst{a}$;
\item
for each $l \in \Nat$, a \emph{direct forward jump instruction}
$\fjmp{l}$;
\item
for each $l \in \Nat$, a \emph{direct backward jump instruction}
$\bjmp{l}$.
\end{iteml}
\PGLC\ programs have the form $u_1 \conc \ldots \conc u_k$, where
$u_1,\ldots,u_k$ are primitive instructions of \PGLC.

The plain basic instructions, the positive test instructions, and the
negative test instructions are as in \PGA, except that termination
instead of deadlock occurs in the case where there are insufficient
subsequent primitive instructions.
The effect of a direct forward jump instruction $\fjmp{l}$ is that
execution proceeds with the $l$-th next instruction of the program
concerned.
If $l$ equals $0$, then deadlock occurs.
If the $l$-th next instruction does not exist, then termination occurs.
The effect of a direct backward jump instruction $\bjmp{l}$ is that
execution proceeds with the $l$-th previous instruction of the program
concerned.
If $l$ equals $0$, then deadlock occurs.
If the $l$-th previous instruction does not exist, then termination
occurs.

We define the meaning of \PGLC\ programs by means of a function
$\pglcpga$ from the set of all \PGLC\ programs to the set of all \PGA\
programs.
This function is defined by
\begin{ldispl}
\pglcpga(u_1 \conc \ldots \conc u_k) =
(\psi_1(u_1) \conc \ldots \conc \psi_k(u_k) \conc
 \halt \conc \halt)\rep\;,
\end{ldispl}%
where the auxiliary functions $\psi_j$ from the set of all primitive
instructions of \PGLC\ to the set of all primitive instructions of \PGA\
are defined as follows ($1 \leq j \leq k$):
\pagebreak[2]
\begin{ldispl}
\begin{aceqns}
\psi_j(\fjmp{l}) & = & \fjmp{l} & \mif j + l \leq k\;, \\
\psi_j(\fjmp{l}) & = & \halt & \mif j + l > k\;, \\
\psi_j(\bjmp{l}) & = & \fjmp{k+2-l} & \mif l < j\;, \\
\psi_j(\bjmp{l}) & = & \halt & \mif l \geq j\;, \\
\psi_j(u) & = & u
 & \mif u\; \mathrm{is\;not\;a\;jump\;instruction}\;.
\end{aceqns}
\end{ldispl}%
The idea is that each backward jump can be replaced by a forward jump if
the entire program is repeated.
To enforce termination of the program after execution of its last
instruction if the last instruction is a plain basic instruction, a
positive test instruction or a negative test instruction,
$\halt \conc \halt$ is appended to
$\psi_1(u_1) \conc \ldots \conc \psi_k(u_k)$.

Let $P$ be a \PGLC\ program.
Then $\pglcpga(P)$ represents the meaning of $P$ as a \PGA\ program.
The intended behaviour of $P$ is the behaviour of $\pglcpga(P)$.
That is, the \emph{behaviour} of $P$, written $\extr{P}_\sPGLC$, is
$\extr{\pglcpga(P)}$.

\PGLD\ has the following primitive instructions:
\begin{iteml}
\item
for each $a \in \BInstr$, a \emph{plain basic instruction} $a$;
\item
for each $a \in \BInstr$, a \emph{positive test instruction} $\ptst{a}$;
\item
for each $a \in \BInstr$, a \emph{negative test instruction}
$\ntst{a}$;
\item
for each $l \in \Nat$, a \emph{direct absolute jump instruction}
$\ajmp{l}$.
\end{iteml}
\PGLD\ programs have the form $u_1 \conc \ldots \conc u_k$, where
$u_1,\ldots,u_k$ are primitive instructions of \PGLD.

The plain basic instructions, the positive test instructions, and the
negative test instructions are as in \PGLC.
The effect of a direct absolute jump instruction $\ajmp{l}$ is that
execution proceeds with the $l$-th instruction of the program concerned.
If $\ajmp{l}$ is itself the $l$-th instruction, then deadlock occurs.
If $l$ equals $0$ or $l$ is greater than the length of the program, then
termination occurs.

We define the meaning of \PGLD\ programs by means of a function
$\pgldpglc$ from the set of all \PGLD\ programs to the set of all \PGLC\
programs.
This function is defined by
\begin{ldispl}
\pgldpglc(u_1 \conc \ldots \conc u_k) =
\psi_1(u_1) \conc \ldots \conc \psi_k(u_k)\;,
\end{ldispl}%
where the auxiliary functions $\psi_j$ from the set of all primitive
instructions of \PGLD\ to the set of all primitive instructions of \PGLC\
are defined as follows ($1 \leq j \leq k$):
\begin{ldispl}
\begin{aceqns}
\psi_j(\ajmp{l}) & = & \fjmp{l-j} & \mif l \geq j\;, \\
\psi_j(\ajmp{l}) & = & \bjmp{j-l} & \mif l < j\;, \\
\psi_j(u) & = & u
 & \mif u\; \mathrm{is\;not\;a\;jump\;instruction}\;.
\end{aceqns}
\end{ldispl}%
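For example, for the \PGLD\ program
$\ptst{a} \conc \ajmp{4} \conc b \conc \ajmp{1}$, where
$a,b \in \BInstr$, we have
\begin{ldispl}
\pgldpglc(\ptst{a} \conc \ajmp{4} \conc b \conc \ajmp{1}) =
\ptst{a} \conc \fjmp{2} \conc b \conc \bjmp{3}\;:
\end{ldispl}%
the jump to position $4$ occurs at position $2$ and becomes the forward
jump $\fjmp{2}$, and the jump to position $1$ occurs at position $4$ and
becomes the backward jump $\bjmp{3}$.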
\sloppy
Let $P$ be a \PGLD\ program.
Then $\pgldpglc(P)$ represents the meaning of $P$ as a \PGLC\ program.
The intended behaviour of $P$ is the behaviour of $\pgldpglc(P)$.
That is, the \emph{behaviour} of $P$, written $\extr{P}_\sPGLD$, is
$\extr{\pgldpglc(P)}_\sPGLC$.

We use the phrase \emph{projection semantics} to refer to the approach
to semantics followed in this section.
The meaning functions $\pglcpga$ and $\pgldpglc$ are called
\emph{projections}.

\PGLC\ and \PGLD\ are very simple program notations.
The hierarchy of program notations introduced in~\cite{BL02a} also
includes a program notation, called \PGLS, that supports structured
programming by offering conditional and loop constructs instead of
(unstructured) jumps.
Each \PGLS\ program can be translated into a semantically equivalent
\PGLD\ program by means of a number of projections.

\section{Interaction of Threads with Services}
\label{sect-TAtsc}

A thread may perform certain actions only for the sake of getting reply
values returned by a service and that way having itself affected by that
service.
In this section, we introduce thread-service composition, which allows
for threads to be affected by services in this way.
We will only use thread-service composition to have program behaviours
affected by a service.
Thread-service composition is a slightly adapted form of the use
mechanism introduced in~\cite{BP02a}.

We consider only deterministic services.
This will do in the case that we address: services that keep private
data for a program.
The services concerned are para-target services by the classification
given in~\cite{BM07a}.

It is assumed that there is a fixed but arbitrary finite set of
\emph{foci} $\Foci$ and a fixed but arbitrary finite set of
\emph{methods} $\Meth$.
Each focus plays the role of a name of a service provided by the
execution environment that can be requested to process a command.
Each method plays the role of a command proper.
For the set $\BAct$ of actions, we take the set
$\set{f.m \where f \in \Foci, m \in \Meth}$.
Performing an action $f.m$ is taken as making a request to the
service named $f$ to process command $m$.

We introduce yet another sort: the sort $\Serv$ of \emph{services}.
However, we will not introduce constants and operators to build terms
of this sort.
$\Serv$ is a parameter of theories with thread-service composition.
$\Serv$ is considered to stand for the set of all services.
It is assumed that each service can be represented by a function
$\funct{H}{\neseqof{\Meth}}{\set{\True,\False,\Blocked}}$ with the
property that
$H(\alpha) = \Blocked \Implies H(\alpha \concat \seq{m}) = \Blocked$ for
all $\alpha \in \neseqof{\Meth}$ and $m \in \Meth$.
This function is called the \emph{reply} function of the service.
Given a reply function $H$ and a method $m \in \Meth$, the
\emph{derived} reply function of $H$ after processing $m$, written
$\derive{m}H$, is defined by
$\derive{m}H(\alpha) = H(\seq{m} \concat \alpha)$.

The connection between a reply function $H$ and the service represented
by it can be understood as follows:
\begin{iteml}
\item
if $H(\seq{m}) = \True$, the request to process command $m$ is accepted
by the service, the reply is positive and the service proceeds as
$\derive{m}H$;
\item
if $H(\seq{m}) = \False$, the request to process command $m$ is accepted
by the service, the reply is negative and the service proceeds as
$\derive{m}H$;
\item
if $H(\seq{m}) = \Blocked$, the request to process command $m$ is
not accepted by the service.
\end{iteml}
Henceforth, we will identify a reply function with the service
represented by it.

For each $f \in \Foci$, we introduce the binary \emph{thread-service
composition} operator $\funct{\use{\ph}{f}{\ph}}{\Thr \x \Serv}{\Thr}$.
Intuitively, $\use{p}{f}{H}$ is the thread that results from processing
all actions performed by thread $p$ that are of the form $f.m$ by
service $H$.
Service $H$ affects thread $p$ by means of the reply values produced at
completion of the processing of the actions performed by $p$.
The actions processed by $H$ are no longer observable.

The axioms for the thread-service composition operator are given in
Table~\ref{axioms-tsc}.%
\begin{table}[!t]
\caption{Axioms for thread-service composition}
\label{axioms-tsc}
\begin{eqntbl}
\begin{saxcol}
\use{\Stop}{f}{H} = \Stop & & \axiom{TSC1} \\
\use{\DeadEnd}{f}{H} = \DeadEnd & & \axiom{TSC2} \\
\use{(\pcc{x}{g.m}{y})}{f}{H} =
\pcc{(\use{x}{f}{H})}{g.m}{(\use{y}{f}{H})}
 & \mif f \neq g & \axiom{TSC3} \\
\use{(\pcc{x}{f.m}{y})}{f}{H} = \use{x}{f}{\derive{m}H}
 & \mif H(\seq{m}) = \True & \axiom{TSC4} \\
\use{(\pcc{x}{f.m}{y})}{f}{H} = \use{y}{f}{\derive{m}H}
 & \mif H(\seq{m}) = \False & \axiom{TSC5} \\
\use{(\pcc{x}{f.m}{y})}{f}{H} = \DeadEnd
 & \mif H(\seq{m}) = \Blocked & \axiom{TSC6}
\end{saxcol}
\end{eqntbl}
\end{table}
In this table, $f$ stands for an arbitrary focus from $\Foci$ and $m$
stands for an arbitrary method from $\Meth$.
Axiom TSC3 expresses that actions of the form $g.m$, where $f \neq g$,
are not processed.
Axioms TSC4 and TSC5 express that a thread is affected by a service as
described above when an action of the form $f.m$ performed by the thread
is processed by the service.
Axiom TSC6 expresses that deadlock takes place when an action to be
processed is not accepted.
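For instance, if $H(\seq{m}) = \True$, then axioms TSC4 and TSC1 yield
\begin{ldispl}
\use{(\pcc{\Stop}{f.m}{\DeadEnd})}{f}{H} =
\use{\Stop}{f}{\derive{m}H} = \Stop\;:
\end{ldispl}%
the action $f.m$ is processed by the service and is no longer observable
in the resulting thread.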
Let $T$ stand for either \BTA\ or \BTA+\REC.
Then we will write $T+\TSC$ for $T$, taking the set
$\set{f.m \where f \in \Foci, m \in \Meth}$ for $\BAct$, extended with
the thread-service composition operators and the axioms from
Table~\ref{axioms-tsc}.

In~\cite{BM05c}, we show that the services considered here can be viewed
as processes that are definable over an extension of \ACP\ with
conditionals introduced in~\cite{BM05a}.

\section{State-Based Description of Services}
\label{sect-service-descr}

In this section, we introduce the state-based approach to describe
families of services that will be used later on.
This approach is similar to the approach to describe state machines
introduced in~\cite{BP02a}.

In this approach, a family of services is described by
\begin{itemize}
\item
a set of states $S$;
\item
an effect function $\funct{\eff}{\Meth \x S}{S}$;
\item
a yield function
$\funct{\yld}{\Meth \x S}{\set{\True,\False,\Blocked}}$;
\end{itemize}
satisfying the following condition:
\begin{ldispl}
\Exists{s \in S}
 {\Forall{m \in \Meth}
  {{} \\ \quad
   (\yld(m,s) = \Blocked \And
    \Forall{s' \in S}
     {(\yld(m,s') = \Blocked \Implies \eff(m,s') = s)})}}\;.
\end{ldispl}%
The set $S$ contains the states in which the services may be; and the
functions $\eff$ and $\yld$ give, for each method $m$ and state $s$, the
state and reply, respectively, that result from processing $m$ in state
$s$.

We define, for each $s \in S$, a cumulative effect function
$\funct{\ceff_s}{\seqof{\Meth}}{S}$ in terms of $s$ and $\eff$ as follows:
\begin{ldispl}
\ceff_s(\emptyseq) = s\;,
\\
\ceff_s(\alpha \concat \seq{m}) = \eff(m,\ceff_s(\alpha))\;.
\end{ldispl}%
We define, for each $s \in S$, a service
$\funct{H_s}{\neseqof{\Meth}}{\set{\True,\False,\Blocked}}$
in terms of $\ceff_s$ and $\yld$ as follows:
\begin{ldispl}
H_s(\alpha \concat \seq{m}) = \yld(m,\ceff_s(\alpha))\;.
\end{ldispl}%
$H_s$ is called the service with \emph{initial state} $s$ described by
$S$, $\eff$ and $\yld$.
We say that $\set{H_s \where s \in S}$ is the \emph{family of services}
described by $S$, $\eff$ and $\yld$.

For each $s \in S$, $H_s$ is a service indeed: the condition imposed on
$S$, $\eff$ and $\yld$ implies that $H_s(\alpha) = \Blocked \Implies
H_s(\alpha \concat \seq{m}) = \Blocked$ for all
$\alpha \in \neseqof{\Meth}$ and $m \in \Meth$.
It is worth mentioning that $H_s(\seq{m}) = \yld(m,s)$ and
$\derive{m} H_s = H_{\eff(m,s)}$.
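For instance, for all $m_1,m_2 \in \Meth$ and $s \in S$,
\begin{ldispl}
H_s(\seq{m_1} \concat \seq{m_2}) =
\yld(m_2,\ceff_s(\seq{m_1})) = \yld(m_2,\eff(m_1,s))\;,
\end{ldispl}%
in accordance with the fact that $\derive{m_1} H_s = H_{\eff(m_1,s)}$.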
\section{Register File Services}
\label{sect-reg-file}

In this section, we give a state-based description of the very simple
family of services that constitute a register file of which the
registers can contain natural numbers up to some bound.
This register file will be used in
Sections~\ref{sect-PGLDij}--\ref{sect-PGLDdij} to describe the behaviour
of programs in variants of \PGLC\ and \PGLD\ with indirect jump
instructions.

It is assumed that a fixed but arbitrary number $\maxr$ has been given,
which is considered the number of registers available.
It is also assumed that a fixed but arbitrary number $\maxn$ has been
given, which is considered the greatest natural number that can be
contained in a register.

The register file services accept the following methods:
\begin{itemize}
\item
for each $i \in [1,\maxr]$ and $n \in [0,\maxn]$,
a \emph{register set method} $\setr{:}i{:}n$;
\item
for each $i \in [1,\maxr]$ and $n \in [0,\maxn]$,
a \emph{register test method} $\eqr{:}i{:}n$.
\end{itemize}
We write $\Meth_\rf$ for the set
$\set{\setr{:}i{:}n,\eqr{:}i{:}n \where
 i \in [1,\maxr] \And n \in [0,\maxn]}$.
It is assumed that $\Meth_\rf \subseteq \Meth$.

The methods accepted by register file services can be explained as
follows:
\begin{itemize}
\item
$\setr{:}i{:}n$\,:
the contents of register $i$ becomes $n$ and the reply is $\True$;
\item
$\eqr{:}i{:}n$\,:
if the contents of register $i$ equals $n$, then nothing changes and the
reply is $\True$; otherwise nothing changes and the reply is $\False$.
\end{itemize}

Let $\funct{s}{[1,\maxr]}{[0,\maxn]}$.
Then we write $\RF_s$ for the service with initial state $s$ described
by $S = (\mapof{[1,\maxr]}{[0,\maxn]}) \union \set{\undef}$, where
$\undef \not\in \mapof{[1,\maxr]}{[0,\maxn]}$,\pagebreak[2] and the
functions $\eff$ and $\yld$ defined as follows ($i \in [1,\maxr]$,
$n \in [0,\maxn]$, $\funct{\rho}{[1,\maxr]}{[0,\maxn]}$):%
\footnote
{We use the following notation for functions:
 $f \owr g$ for the function $h$ with $\dom(h) = \dom(f) \union \dom(g)$
 such that for all $d \in \dom(h)$, $h(d) = f(d)$ if $d \not\in \dom(g)$
 and $h(d) = g(d)$ otherwise; and
 $\maplet{d}{r}$ for the function $f$ with $\dom(f) = \set{d}$ such that
 $f(d) = r$.}%
\begin{ldispl}
\begin{gceqns}
\eff(\setr{:}i{:}n,\rho) = \rho \owr \maplet{i}{n}\;,
\\
\eff(\eqr{:}i{:}n,\rho) = \rho\;,
\\
\eff(m,\rho) = \undef & \mif m \not\in \Meth_\rf\;,
\\
\eff(m,\undef) = \undef\;,
\eqnsep
\yld(\setr{:}i{:}n,\rho) = \True\;,
\\
\yld(\eqr{:}i{:}n,\rho) = \True & \mif \rho(i) = n\;,
\\
\yld(\eqr{:}i{:}n,\rho) = \False & \mif \rho(i) \neq n\;,
\\
\yld(m,\rho) = \Blocked & \mif m \not\in \Meth_\rf\;,
\\
\yld(m,\undef) = \Blocked\;.
\end{gceqns}
\end{ldispl}%
We write $\RF_\mathrm{init}$ for
$\RF_{\maplet{1}{0} \owr \ldots \owr \maplet{\maxr}{0}}$.
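For example, since each register of $\RF_\mathrm{init}$ contains $0$, we
have
\begin{ldispl}
\RF_\mathrm{init}(\seq{\eqr{:}1{:}0}) = \True\;, \qquad
\derive{\eqr{:}1{:}0}\RF_\mathrm{init} = \RF_\mathrm{init}\;,
\end{ldispl}%
and, provided that $\maxn \geq 5$,
$\RF_\mathrm{init}(\seq{\setr{:}1{:}5} \concat \seq{\eqr{:}1{:}5}) =
\True$.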
\section{\PGLD\ with Indirect Jumps}
\label{sect-PGLDij}

In this section, we introduce a variant of \PGLD\ with indirect jump
instructions.
This variant is called \PGLDij.

In \PGLDij, it is assumed that there is a fixed but arbitrary finite set
of \emph{foci} $\Foci$ with $\rf \in \Foci$ and a fixed but arbitrary
finite set of \emph{methods} $\Meth$.
Moreover, we adopt the assumptions made about register file services in
Section~\ref{sect-reg-file}.
The set $\set{f.m \where f \in \Foci, m \in \Meth}$ is taken as the set
$\BInstr$ of basic instructions.

\PGLDij\ has the following primitive instructions:
\begin{iteml}
\item
for each $a \in \BInstr$, a \emph{plain basic instruction} $a$;
\item
for each $a \in \BInstr$, a \emph{positive test instruction} $\ptst{a}$;
\item
for each $a \in \BInstr$, a \emph{negative test instruction} $\ntst{a}$;
\item
for each $l \in \Nat$, a \emph{direct absolute jump instruction}
$\ajmp{l}$;
\item
for each $i \in [1,\maxr]$, an \emph{indirect absolute jump instruction}
$\iajmp{i}$.
\end{iteml}
\PGLDij\ programs have the form $u_1 \conc \ldots \conc u_k$, where
$u_1,\ldots,u_k$ are primitive instructions of \PGLDij.
\pagebreak[2]

The plain basic instructions, the positive test instructions, the
negative test instructions, and the direct absolute jump instructions
are as in \PGLD.
The effect of an indirect absolute jump instruction $\iajmp{i}$ is that
execution proceeds with the $l$-th instruction of the program concerned,
where $l$ is the content of register $i$.
If $\iajmp{i}$ is itself the $l$-th instruction, then deadlock occurs.
If $l$ equals $0$ or $l$ is greater than the length of the program, then
termination occurs.

Recall that the content of register $i$ can be set to $l$ by means of
the basic instruction $\rf.\setr{:}i{:}l$.
Initially, its content is $0$.

Like before, we define the meaning of \PGLDij\ programs by means of a
function $\pgldijpgld$ from the set of all \PGLDij\ programs to the set
of all \PGLD\ programs.
This function is defined by
\begin{ldispl}
\pgldijpgld(u_1 \conc \ldots \conc u_k) = \\ \quad
\psi(u_1) \conc \ldots \conc \psi(u_k) \conc
\ajmp{0} \conc \ajmp{0} \conc {} \\ \quad
\ptst{\rf.\eqr{:}1{:}1} \conc \ajmp{1} \conc \ldots \conc
\ptst{\rf.\eqr{:}1{:}n} \conc \ajmp{n} \conc \ajmp{0} \conc {} \\
\qquad \vdots \\ \quad
\ptst{\rf.\eqr{:}\maxr{:}1} \conc \ajmp{1} \conc \ldots \conc
\ptst{\rf.\eqr{:}\maxr{:}n} \conc \ajmp{n} \conc \ajmp{0}\;,
\end{ldispl}%
where $n = \min(k,\maxn)$ and the auxiliary function $\psi$ from the set
of all primitive instructions of \PGLDij\ to the set of all primitive
instructions of \PGLD\ is defined as follows:
\begin{ldispl}
\begin{aceqns}
\psi(\ajmp{l}) & = & \ajmp{l} & \mif l \leq k\;, \\
\psi(\ajmp{l}) & = & \ajmp{0} & \mif l > k\;, \\
\psi(\iajmp{i}) & = & \ajmp{l_i}\;, \\
\psi(u) & = & u & \mif u\; \mathrm{is\;not\;a\;jump\;instruction}\;,
\end{aceqns}
\end{ldispl}%
and for each $i \in [1,\maxr]$:
\begin{ldispl}
\begin{aeqns}
l_i & = & k + 3 + (2 \mul \min(k,\maxn) + 1) \mul (i - 1)\;.
\end{aeqns}
\end{ldispl}%
The idea is that each indirect absolute jump can be replaced by a direct
absolute jump to the beginning of the instruction sequence
\begin{ldispl}
\begin{aeqns}
\ptst{\rf.\eqr{:}i{:}1} \conc \ajmp{1} \conc \ldots \conc
\ptst{\rf.\eqr{:}i{:}n} \conc \ajmp{n} \conc \ajmp{0}\;,
\end{aeqns}
\end{ldispl}%
where $i$ is the register concerned and $n = \min(k,\maxn)$.
The execution of this instruction sequence leads to the intended jump
after the content of the register concerned has been found by a linear
search.
To enforce termination of the program after execution of its last
instruction if the last instruction is a plain basic instruction, a
positive test instruction or a negative test instruction,
$\ajmp{0} \conc \ajmp{0}$ is appended to
$\psi(u_1) \conc \ldots \conc \psi(u_k)$.
Because the length of the translated program is greater than $k$, care
is taken that there are no direct absolute jumps to instructions with a
position greater than $k$.
Obviously, the linear search for the content of a register can be
replaced by a binary search.

Let $P$ be a \PGLDij\ program.
Then $\pgldijpgld(P)$ represents the meaning of $P$ as a \PGLD\ program.
The intended behaviour of $P$ is the behaviour of $\pgldijpgld(P)$ on
interaction with a register file.
That is, the \emph{behaviour} of $P$, written $\extr{P}_\sPGLDij$, is
$\use{\extr{\pgldijpgld(P)}_\sPGLD}{\rf}{\RF_\mathrm{init}}$.
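By way of illustration, assume that $\maxn \geq 3$ and let $a$ be a
basic instruction whose focus differs from $\rf$.
For the \PGLDij\ program $\rf.\setr{:}1{:}3 \conc \iajmp{1} \conc a$, we
have $k = 3$, $n = 3$ and $l_1 = 6$, so
\begin{ldispl}
\pgldijpgld(\rf.\setr{:}1{:}3 \conc \iajmp{1} \conc a) = \\ \quad
\rf.\setr{:}1{:}3 \conc \ajmp{6} \conc a \conc
\ajmp{0} \conc \ajmp{0} \conc {} \\ \quad
\ptst{\rf.\eqr{:}1{:}1} \conc \ajmp{1} \conc
\ptst{\rf.\eqr{:}1{:}2} \conc \ajmp{2} \conc
\ptst{\rf.\eqr{:}1{:}3} \conc \ajmp{3} \conc \ajmp{0} \conc \ldots\;,
\end{ldispl}%
where the instruction sequences for the registers $2,\ldots,\maxr$ have
been left out.
On interaction with the register file, the linear search finds the
content $3$ of register $1$, the jump $\ajmp{3}$ leads to $a$, and
thereafter $\ajmp{0}$ leads to termination.
Hence
$\extr{\rf.\setr{:}1{:}3 \conc \iajmp{1} \conc a}_\sPGLDij =
a \bapf \Stop$.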
More than one instruction is needed in \PGLD\ to obtain the effect of a
single indirect absolute jump instruction.
The projection $\pgldijpgld$ deals with that in such a way that there is
no need for the unit instruction operator introduced in~\cite{Pon02a} or
the distinction between first-level instructions and second-level
instructions introduced in~\cite{BB06a}.

\section{\PGLC\ with Indirect Jumps}
\label{sect-PGLCij}

In this section, we introduce a variant of \PGLC\ with indirect jump
instructions.
This variant is called \PGLCij.

In \PGLCij, the same assumptions are made as in \PGLDij.
Like in \PGLDij, the set $\set{f.m \where f \in \Foci, m \in \Meth}$ is
taken as the set $\BInstr$ of basic instructions.

\PGLCij\ has the following primitive instructions:
\begin{iteml}
\item
for each $a \in \BInstr$, a \emph{plain basic instruction} $a$;
\item
for each $a \in \BInstr$, a \emph{positive test instruction} $\ptst{a}$;
\item
for each $a \in \BInstr$, a \emph{negative test instruction} $\ntst{a}$;
\item
for each $l \in \Nat$, a \emph{direct forward jump instruction}
$\fjmp{l}$;
\item
for each $l \in \Nat$, a \emph{direct backward jump instruction}
$\bjmp{l}$;
\item
for each $i \in [1,\maxr]$, an \emph{indirect forward jump instruction}
$\ifjmp{i}$;
\item
for each $i \in [1,\maxr]$, an \emph{indirect backward jump instruction}
$\ibjmp{i}$.
\end{iteml}
\PGLCij\ programs have the form $u_1 \conc \ldots \conc u_k$, where
$u_1,\ldots,u_k$ are primitive instructions of \PGLCij.

The plain basic instructions, the positive test instructions, the
negative test instructions, the direct forward jump instructions, and
the direct backward jump instructions are as in \PGLC.
The effect of an indirect forward jump instruction $\ifjmp{i}$ is that
execution proceeds with the $l$-th next instruction of the program
concerned, where $l$ is the content of register $i$.
If $l$ equals $0$, then deadlock occurs.
If the $l$-th next instruction does not exist, then termination occurs.
The effect of an indirect backward jump instruction $\ibjmp{i}$ is that
execution proceeds with the $l$-th previous instruction of the program
concerned, where $l$ is the content of register $i$.
If $l$ equals $0$, then deadlock occurs.
If the $l$-th previous instruction does not exist, then termination
occurs.

We define the meaning of \PGLCij\ programs by means of a function
$\pglcijpglc$ from the set of all \PGLCij\ programs to the set of all
\PGLC\ programs.
This function is defined by
\begin{ldispl}
\pglcijpglc(u_1 \conc \ldots \conc u_k) = {} \\ \quad
\psi_1(u_1) \conc \ldots \conc \psi_k(u_k) \conc
\bjmp{k+1} \conc \bjmp{k+2} \conc {} \\ \quad
\ptst{\rf.\eqr{:}1{:}0} \conc \bjmp{l'_{1,1,0}} \conc
\ldots \conc
\ptst{\rf.\eqr{:}1{:}\maxn} \conc \bjmp{l'_{1,1,\maxn}} \conc {} \\
\qquad \vdots \\ \quad
\ptst{\rf.\eqr{:}1{:}0} \conc \bjmp{l'_{1,k,0}} \conc
\ldots \conc
\ptst{\rf.\eqr{:}1{:}\maxn} \conc \bjmp{l'_{1,k,\maxn}} \conc {}
\eqnsep
\qquad \quad \vdots \eqnsep \quad
\ptst{\rf.\eqr{:}\maxr{:}0} \conc \bjmp{l'_{\maxr,1,0}} \conc
\ldots \conc
\ptst{\rf.\eqr{:}\maxr{:}\maxn} \conc \bjmp{l'_{\maxr,1,\maxn}}
 \conc {} \\
\qquad \vdots \\ \quad
\ptst{\rf.\eqr{:}\maxr{:}0} \conc \bjmp{l'_{\maxr,k,0}} \conc
\ldots \conc
\ptst{\rf.\eqr{:}\maxr{:}\maxn} \conc \bjmp{l'_{\maxr,k,\maxn}}
 \conc {}
\end{ldispl}
\begin{ldispl}
 \quad
\ptst{\rf.\eqr{:}1{:}0} \conc \bjmp{\ul{l}'_{1,1,0}} \conc
\ldots \conc
\ptst{\rf.\eqr{:}1{:}\maxn} \conc
\\bjmp{\\ul{l}'_{1,1,\\maxn}}\n \\conc {} \\\\\n\\qquad \\vdots \\\\ \\quad\n\\ptst{\\rf.\\eqr{:}1{:}0} \\conc \\bjmp{\\ul{l}'_{1,k,0}} \\conc\n\\ldots \\conc\n\\ptst{\\rf.\\eqr{:}1{:}\\maxn} \\conc \\bjmp{\\ul{l}'_{1,k,\\maxn}} \\conc {}\n\\eqnsep\n\\qquad \\quad \\vdots \\eqnsep \\quad\n\\ptst{\\rf.\\eqr{:}\\maxr{:}0} \\conc \\bjmp{\\ul{l}'_{\\maxr,1,0}} \\conc\n\\ldots \\conc\n\\ptst{\\rf.\\eqr{:}\\maxr{:}\\maxn} \\conc \\bjmp{\\ul{l}'_{\\maxr,1,\\maxn}}\n \\conc {} \\\\\n\\qquad \\vdots \\\\ \\quad\n\\ptst{\\rf.\\eqr{:}\\maxr{:}0} \\conc \\bjmp{\\ul{l}'_{\\maxr,k,0}} \\conc\n\\ldots \\conc\n\\ptst{\\rf.\\eqr{:}\\maxr{:}\\maxn} \\conc\n\\bjmp{\\ul{l}'_{\\maxr,k,\\maxn}}\\;,\n\\end{ldispl}%\nwhere the auxiliary functions $\\psi_j$ from the set of all primitive\ninstructions of \\PGLCij\\ to the set of all primitive instructions of\n\\PGLC\\ is defined as follows ($1 \\leq j \\leq k$):\n\\begin{ldispl}\n\\begin{aceqns}\n\\psi_j(\\fjmp{l}) & = & \\fjmp{l} & \\mif j + l \\leq k\\;, \\\\\n\\psi_j(\\fjmp{l}) & = & \\bjmp{j} & \\mif j + l > k\\;, \\\\\n\\psi_j(\\bjmp{l}) & = & \\bjmp{l}\\;, \\\\\n\\psi_j(\\ifjmp{i}) & = & \\fjmp{l_{i,j}}\\;, \\\\\n\\psi_j(\\ibjmp{i}) & = & \\fjmp{\\ul{l}_{i,j}}\\;, \\\\\n\\psi_j(u) & = & u\\; & \\mif u\\; \\mathrm{is\\;not\\;a\\;jump\\;instruction}\\;,\n\\end{aceqns}\n\\end{ldispl}%\nand for each $i \\in [1,\\maxr]$, $j \\in [1,k]$, and $h \\in [0,\\maxn]$:\n\\begin{ldispl}\n\\begin{aceqns}\nl_{i,j} & = &\nk+3 + 2 \\mul (\\maxn+1) \\mul (k \\mul (i-1) + (j-1))\\;, \\\\\n\\ul{l}_{i,j} & = &\nk+3 + 2 \\mul (\\maxn+1) \\mul (k \\mul (\\maxr + i-1) + (j-1))\\;,\n\\eqnsep\nl'_{i,j,h} & = &\nl_{i,j} + 2 \\mul h + 1 - (j + h) & \\mif j + h \\leq k\\;, \\\\\nl'_{i,j,h} & = &\nk+3 + 2 \\mul (\\maxn+1) \\mul k \\mul \\maxr & \\mif j + h > k\\;,\n\\eqnsep\n\\ul{l}'_{i,j,h} & = &\n\\ul{l}_{i,j} + 2 \\mul h + 1 - (j - h) & \\mif j - h \\geq 0\\;, \\\\\n\\ul{l}'_{i,j,h} & = &\nk+3 + 4 \\mul (\\maxn+1) \\mul k \\mul \\maxr & \\mif j - h < 0\\;.\n\\end{aceqns}\n\\end{ldispl}%\nLike in the case of indirect absolute jumps, the idea is that each\nindirect forward jump and each indirect backward jump can be replaced by\na direct forward jump to the beginning of an instruction sequence whose\nexecution leads to the intended jump after the content of the register\nconcerned has been found by a linear search.\nHowever, the direct backward jump instructions occurring in that\ninstruction sequence now depend upon the position of the indirect jump\nconcerned in $u_1 \\conc \\ldots \\conc u_k$.\nTo enforce termination of the program after execution of its last\ninstruction if the last instruction is a plain basic instruction, a\npositive test instruction or a negative test instruction,\n$\\bjmp{k+1} \\conc \\bjmp{k+2}$ is appended to\n$\\psi_1(u_1) \\conc \\ldots \\conc \\psi_k(u_k)$.\nBecause the length of the translated program is greater than $k$, care\nis taken that there are no direct forward jumps to instructions with a\nposition greater than $k$.\n\nLet $P$ be a \\PGLCij\\ program.\nThen $\\pglcijpglc(P)$ represents the meaning of $P$ as a \\PGLC\\ program.\nThe intended behaviour of $P$ is the behaviour of $\\pglcijpglc(P)$ on\ninteraction with a register file.\nThat is, the \\emph{behaviour} of $P$, written $\\extr{P}_\\sPGLCij$, is\n$\\use{\\extr{\\pglcijpglc(P)}_\\sPGLC}{\\rf}{\\RF_\\mathrm{init}}$.\n\nThe projection $\\pglcijpglc$ yields needlessly long \\PGLC\\ programs\nbecause it does not take into account the fact that there is at most one\nindirect jump instruction at 
each position in a \PGLCij\ program being projected.
Taking this fact into account would lead to a projection with a much
more complicated definition.

\section{\PGLD\ with Double Indirect Jumps}
\label{sect-PGLDdij}

In this section, we introduce a variant of \PGLDij\ with double indirect
jump instructions.
This variant is called \PGLDdij.

In \PGLDdij, the same assumptions are made as in \PGLDij.
Like in \PGLDij, the set $\set{f.m \where f \in \Foci, m \in \Meth}$ is
taken as the set $\BInstr$ of basic instructions.

\PGLDdij\ has the following primitive instructions:
\begin{iteml}
\item
for each $a \in \BInstr$, a \emph{plain basic instruction} $a$;
\item
for each $a \in \BInstr$, a \emph{positive test instruction} $\ptst{a}$;
\item
for each $a \in \BInstr$, a \emph{negative test instruction} $\ntst{a}$;
\item
for each $l \in \Nat$, a \emph{direct absolute jump instruction}
$\ajmp{l}$;
\item
for each $i \in [1,\maxr]$, an \emph{indirect absolute jump instruction}
$\iajmp{i}$;
\item
for each $i \in [1,\maxr]$,
a \emph{double indirect absolute jump instruction} $\diajmp{i}$.
\end{iteml}
\PGLDdij\ programs have the form $u_1 \conc \ldots \conc u_k$, where
$u_1,\ldots,u_k$ are primitive instructions of \PGLDdij.

The plain basic instructions, the positive test instructions, the
negative test instructions, the direct absolute jump instructions, and
the indirect absolute jump instructions are as in \PGLDij.
The effect of a double indirect absolute jump instruction $\diajmp{i}$
is that execution proceeds with the $l$-th instruction of the program
concerned, where $l$ is the content of register $i'$, where $i'$
is the content of register $i$.
If $\diajmp{i}$ is itself the $l$-th instruction, then deadlock occurs.
If $l$ equals $0$ or $l$ is greater than the length of the program, then
termination occurs.

Like before, we define the meaning of \PGLDdij\ programs by means of a
function $\pglddijpgldij$ from the set of all \PGLDdij\ programs to the
set of all \PGLDij\ programs.
This function is defined by
\pagebreak[2]
\begin{ldispl}
\pglddijpgldij(u_1 \conc \ldots \conc u_k) = \\ \quad
\psi(u_1) \conc \ldots \conc \psi(u_k) \conc
\ajmp{0} \conc \ajmp{0} \conc
\smash{\overbrace{\ajmp{0}
  \conc \ldots \conc
  \ajmp{0}}^{\max(k+2,\maxn)-(k+2)}}
 \conc {} \\ \quad
\ptst{\rf.\eqr{:}1{:}1} \conc \iajmp{1} \conc \ldots \conc
\ptst{\rf.\eqr{:}1{:}n} \conc \iajmp{n} \conc \ajmp{0}
 \conc {} \\ \qquad
 \vdots \\ \quad
\ptst{\rf.\eqr{:}\maxr{:}1} \conc \iajmp{1} \conc \ldots \conc
\ptst{\rf.\eqr{:}\maxr{:}n} \conc \iajmp{n} \conc \ajmp{0}\;,
\end{ldispl}%
where $n = \min(\maxr,\maxn)$ and the auxiliary function $\psi$ from the
set of all primitive instructions of \PGLDdij\ to the set of all
primitive instructions of \PGLDij\ is defined as follows:
\begin{ldispl}
\begin{aceqns}
\psi(\ajmp{l}) & = & \ajmp{l} & \mif l \leq k\;, \\
\psi(\ajmp{l}) & = & \ajmp{0} & \mif l > k\;, \\
\psi(\iajmp{i}) & = & \iajmp{i}\;, \\
\psi(\diajmp{i}) & = & \ajmp{l_i}\;, \\
\psi(u) & = & u & \mif u\; \mathrm{is\;not\;a\;jump\;instruction}\;,
\end{aceqns}
\end{ldispl}%
and for each $i \in [1,\maxr]$:
\begin{ldispl}
\begin{aeqns}
l_i & = & \max(k+2,\maxn) + 1 +
(2 \mul \min(\maxr,\maxn) + 1) \mul (i - 1)\;.
\end{aeqns}
\end{ldispl}%
The idea is that each double
indirect absolute jump can be replaced by\nan indirect absolute jump to the beginning of the instruction sequence\n\\begin{ldispl}\n\\begin{aeqns}\n\\ptst{\\rf.\\eqr{:}i{:}1} \\conc \\iajmp{1} \\conc \\ldots \\conc\n\\ptst{\\rf.\\eqr{:}i{:}n} \\conc \\iajmp{n} \\conc \\ajmp{0}\\;,\n\\end{aeqns}\n\\end{ldispl}%\nwhere $i$ is the register concerned and $n = \\min(\\maxr,\\maxn)$.\nThe execution of this instruction sequence leads to the intended jump\nafter the content of the register concerned has been found by a linear\nsearch.\nTo enforce termination of the program after execution of its last\ninstruction if the last instruction is a plain basic instruction, a\npositive test instruction or a negative test instruction,\n$\\ajmp{0} \\conc \\ajmp{0}$ is appended to\n$\\psi(u_1) \\conc \\ldots \\conc \\psi(u_k)$.\nBecause the length of the translated program is greater than $k$, care\nis taken that there are no direct absolute jumps to instructions with a\nposition greater than $k$.\nTo deal properly with indirect absolute jumps to instructions with a\nposition greater than $k$, the instruction $\\ajmp{0}$ is appended to\n$\\psi(u_1) \\conc \\ldots \\conc \\psi(u_k) \\conc \\ajmp{0} \\conc \\ajmp{0}$ a\nsufficient number of times.\n\nLet $P$ be a \\PGLDdij\\ program.\nThen $\\pglddijpgldij(P)$ represents the meaning of $P$ as a \\PGLDij\\\nprogram.\nThe intended behaviour of program $P$ is the behaviour of $\\pglddijpgldij(P)$.\nThat is, the \\emph{behaviour} of $P$, written $\\extr{P}_\\sPGLDdij$,\nis $\\extr{\\pglddijpgldij(P)}_\\sPGLDij$.\n\nThe projection $\\pglddijpgldij$ uses indirect absolute jumps to obtain\nthe effect of a double indirect absolute jump in the same way as the\nprojection $\\pgldijpgld$ uses direct absolute jumps to obtain the effect\nof an indirect absolute jump.\nLikewise, indirect relative jumps can be used in that way to obtain the\neffect of a double indirect relative jump.\nMoreover, double indirect jumps can be used in that way to obtain the\neffect of a triple indirect jump, and so on.\n\n\\section{Stack Services}\n\\label{sect-stack}\n\nIn this section, we give a state-based description of the very simple\nfamily of services that constitute a bounded stack of which the elements\nare natural numbers up to some bound.\nThis stack will be used in Section~\\ref{sect-PGLDrj} to describe the\nbehaviour of programs in a variant of \\PGLD\\ with returning jump\ninstructions and return instructions.\n\nIt is assumed that a fixed but arbitrary number $\\maxs$ has been given,\nwhich is considered the greatest length of the stack.\nIt is also assumed that a fixed but arbitrary number $\\maxn$ has been\ngiven, which is considered the greatest natural number that can be an\nelement of the stack.\n\nThe stack services accept the following methods:\n\\begin{itemize}\n\\item\nfor each $n \\in [0,\\maxn]$, a \\emph{stack push method} $\\push{:}n$;\n\\item\nfor each $n \\in [0,\\maxn]$, a \\emph{stack top test method} $\\topeq{:}n$;\n\\item\na \\emph{stack pop method} $\\pop$.\n\\end{itemize}\nWe write $\\Meth_\\st$ for the set\n$\\set{\\push{:}n,\\topeq{:}n \\where n \\in [0,\\maxn]} \\union \\set{\\pop}$.\nIt is assumed that $\\Meth_\\st \\subseteq \\Meth$.\n\nThe methods of stack services can be explained as follows:\n\\begin{itemize}\n\\item\n$\\push{:}n$\\,:\nif the length of the stack is less than $\\maxs$, then the number $n$ is\nput on top of the stack and the reply is $\\True$; otherwise nothing\nchanges and the reply is $\\False$;\n\\item\n$\\topeq{:}n$\\,:\nif the 
stack is not empty and the number on top of the stack is $n$,\nthen nothing changes and the reply is $\\True$; otherwise nothing changes\nand the reply is $\\False$;\n\\item\n$\\pop$\\,:\nif the stack is not empty, then the number on top of the stack is\nremoved from the stack and the reply is $\\True$; otherwise nothing\nchanges and the reply is $\\False$.\n\\end{itemize}\n\nLet $s \\in \\seqof{[0,\\maxn]}$ be such that $\\len(s) \\leq \\maxs$.\nThen we write $\\St_s$ for the service with initial state $s$ described\nby\n$S =\n \\set{\\sigma \\in \\seqof{[0,\\maxn]} \\where \\len(\\sigma) \\leq \\maxs} \\union\n \\set{\\undef}$,\nwhere\n$\\undef \\not\\in\n \\set{\\sigma \\in \\seqof{[0,\\maxn]} \\where \\len(\\sigma) \\leq \\maxs}$,\nand the functions $\\eff$ and $\\yld$ defined as follows\n($n,n' \\in [0,\\maxn]$, $\\sigma \\in \\seqof{[0,\\maxn]}$):%\n\\footnote\n{We write $\\seqof{D}$ for the set of all finite sequences with elements\n from set $D$.\n We use the following notation for finite sequences:\n $\\emptyseq$ for the empty sequence,\n $\\seq{d}$ for the sequence having $d$ as sole element,\n $\\sigma \\concat \\sigma'$ for the concatenation of finite sequences\n $\\sigma$ and $\\sigma'$, and\n $\\len(\\sigma)$ for the length of finite sequence $\\sigma$.}%\n\\begin{ldispl}\n\\begin{gceqns}\n\\eff(\\push{:}n,\\sigma) = \\seq{n} \\concat \\sigma\n & \\mif \\len(\\sigma) < \\maxs\\;,\n\\\\\n\\eff(\\push{:}n,\\sigma) = \\sigma & \\mif \\len(\\sigma) \\geq \\maxs\\;,\n\\\\\n\\eff(\\topeq{:}n,\\sigma) = \\sigma\\;,\n\\\\\n\\eff(\\pop,\\seq{n} \\concat \\sigma) = \\sigma\\;,\n\\\\\n\\eff(\\pop,\\emptyseq) = \\emptyseq\\;,\n\\\\\n\\eff(m,\\sigma) = \\undef & \\mif m \\not\\in \\Meth_\\st\\;,\n\\\\\n\\eff(m,\\undef) = \\undef\\;,\n\\end{gceqns}\n\\end{ldispl}\n\\begin{ldispl}\n\\begin{gceqns}\n\\yld(\\push{:}n,\\sigma) = \\True & \\mif \\len(\\sigma) < \\maxs\\;,\n\\\\\n\\yld(\\push{:}n,\\sigma) = \\False & \\mif \\len(\\sigma) \\geq \\maxs\\;,\n\\\\\n\\yld(\\topeq{:}n,\\seq{n'} \\concat \\sigma) = \\True & \\mif n = n'\\;,\n\\\\\n\\yld(\\topeq{:}n,\\seq{n'} \\concat \\sigma) = \\False & \\mif n \\neq n'\\;,\n\\\\\n\\yld(\\topeq{:}n,\\emptyseq) = \\False\\;,\n\\\\\n\\yld(\\pop,\\seq{n} \\concat \\sigma) = \\True\\;,\n\\\\\n\\yld(\\pop,\\emptyseq) = \\False\\;,\n\\\\\n\\yld(m,\\sigma) = \\Blocked & \\mif m \\not\\in \\Meth_\\st\\;,\n\\\\\n\\yld(m,\\undef) = \\Blocked\\;.\n\\end{gceqns}\n\\end{ldispl}%\nWe write $\\St_\\mathrm{init}$ for $\\St_\\emptyseq$.\n\n\\section{\\PGLD\\ with Returning Jumps and Returns}\n\\label{sect-PGLDrj}\n\nIn this section, we introduce a variant of \\PGLD\\ with returning jump\ninstructions and return instructions.\nThis variant is called \\PGLDrj.\n\nIn \\PGLDrj, like in \\PGLDij, it is assumed that there is a fixed but\narbitrary finite set of \\emph{foci} $\\Foci$ with $\\st \\in \\Foci$ and a\nfixed but arbitrary finite set of \\emph{methods} $\\Meth$.\nMoreover, we adopt the assumptions made about stack services in\nSection~\\ref{sect-stack}.\nThe set $\\set{f.m \\where f \\in \\Foci \\diff \\set{\\st}, m \\in \\Meth}$ is\ntaken as the set $\\BInstr$ of basic instructions.\n\n\\PGLDrj\\ has the following primitive instructions:\n\\begin{iteml}\n\\item\nfor each $a \\in \\BInstr$, a \\emph{plain basic instruction} $a$;\n\\item\nfor each $a \\in \\BInstr$, a \\emph{positive test instruction} $\\ptst{a}$;\n\\item\nfor each $a \\in \\BInstr$, a \\emph{negative test instruction} $\\ntst{a}$;\n\\item\nfor each $l \\in \\Nat$, an \\emph{absolute jump 
\\section{\\PGLD\\ with Returning Jumps and Returns}\n\\label{sect-PGLDrj}\n\nIn this section, we introduce a variant of \\PGLD\\ with returning jump\ninstructions and return instructions.\nThis variant is called \\PGLDrj.\n\nIn \\PGLDrj, like in \\PGLDij, it is assumed that there is a fixed but\narbitrary finite set of \\emph{foci} $\\Foci$ with $\\st \\in \\Foci$ and a\nfixed but arbitrary finite set of \\emph{methods} $\\Meth$.\nMoreover, we adopt the assumptions made about stack services in\nSection~\\ref{sect-stack}.\nThe set $\\set{f.m \\where f \\in \\Foci \\diff \\set{\\st}, m \\in \\Meth}$ is\ntaken as the set $\\BInstr$ of basic instructions.\n\n\\PGLDrj\\ has the following primitive instructions:\n\\begin{iteml}\n\\item\nfor each $a \\in \\BInstr$, a \\emph{plain basic instruction} $a$;\n\\item\nfor each $a \\in \\BInstr$, a \\emph{positive test instruction} $\\ptst{a}$;\n\\item\nfor each $a \\in \\BInstr$, a \\emph{negative test instruction} $\\ntst{a}$;\n\\item\nfor each $l \\in \\Nat$, an \\emph{absolute jump instruction} $\\ajmp{l}$;\n\\item\nfor each $l \\in \\Nat$,\na \\emph{returning absolute jump instruction} $\\arjmp{l}$;\n\\item\nan \\emph{absolute return instruction} $\\return$.\n\\end{iteml}\n\\PGLDrj\\ programs have the form $u_1 \\conc \\ldots \\conc u_k$, where\n$u_1,\\ldots,u_k$ are primitive instructions of \\PGLDrj.\n\nThe plain basic instructions, the positive test instructions, the\nnegative test instructions, and the absolute jump instructions are as in\n\\PGLD.\nThe effect of a returning absolute jump instruction $\\arjmp{l}$ is that\nexecution proceeds with the $l$-th instruction of the program concerned,\nbut execution returns to the next primitive instruction on encountering\na return instruction.\nIf $\\arjmp{l}$ is itself the $l$-th instruction, then deadlock occurs.\nIf $l$ equals $0$ or $l$ is greater than the length of the program,\ntermination occurs.\nThe effect of a return instruction $\\return$ is that execution proceeds\nwith the instruction immediately following the last executed returning\nabsolute jump instruction to which a return has not yet taken place.\n\nLike before, we define the meaning of \\PGLDrj\\ programs by means of a\nfunction $\\pgldrjpgld$ from the set of all \\PGLDrj\\ programs to the set\nof all \\PGLD\\ programs.\nThis function is defined by\n\\begin{ldispl}\n\\pgldrjpgld(u_1 \\conc \\ldots \\conc u_k) = \\\\ \\quad\n\\psi_1(u_1) \\conc \\ldots \\conc \\psi_k(u_k) \\conc \\ajmp{0} \\conc \\ajmp{0}\n \\conc {} \\\\ \\quad\n\\ptst{\\st.\\push{:}1} \\conc \\ajmp{1} \\conc \\ajmp{l''}\n \\conc \\ldots \\conc\n\\ptst{\\st.\\push{:}1} \\conc \\ajmp{k} \\conc \\ajmp{l''} \\conc {} \\\\\n\\qquad \\vdots \\\\ \\quad\n\\ptst{\\st.\\push{:}n} \\conc \\ajmp{1} \\conc \\ajmp{l''}\n\\conc \\ldots \\conc\n\\ptst{\\st.\\push{:}n} \\conc \\ajmp{k} \\conc \\ajmp{l''}\n \\conc {} \\\\ \\quad\n\\ntst{\\st.\\topeq{:}1} \\conc \\ajmp{l''_1} \\conc \\st.\\pop \\conc \\ajmp{1}\n \\conc {} \\\\\n\\qquad \\vdots \\\\ \\quad\n\\ntst{\\st.\\topeq{:}n} \\conc \\ajmp{l''_n} \\conc \\st.\\pop \\conc \\ajmp{n}\n \\conc {} \\\\ \\quad\n\\ajmp{l''}\\;,\n\\end{ldispl}%\nwhere $n = \\min(k,\\maxn)$ and the auxiliary functions $\\psi_j$ from the\nset of all primitive instructions of \\PGLDrj\\ to the set of all\nprimitive instructions of \\PGLD\\ are defined as follows\n($1 \\leq j \\leq k$):\n\\begin{ldispl}\n\\begin{aceqns}\n\\psi_j(\\ajmp{l}) & = & \\ajmp{l} & \\mif l \\leq k\\;, \\\\\n\\psi_j(\\ajmp{l}) & = & \\ajmp{0} & \\mif l > k\\;, \\\\\n\\psi_j(\\arjmp{l}) & = & \\ajmp{l_{j,l}}\\;, \\\\\n\\psi_j(\\return) & = & \\ajmp{l'}\\;, \\\\\n\\psi_j(u) & = & u & \\mif u\\; \\mathrm{is\\;not\\;a\\;jump\\;instruction}\\;,\n\\end{aceqns}\n\\end{ldispl}%\nand for each $j \\in [1,k]$, $l \\in \\Nat$, and $h \\in [1,\\min(k,\\maxn)]$:\n\\begin{ldispl}\n\\begin{aceqns}\nl_{j,l} & = & k + 3 + 3 \\mul (k \\mul (j - 1) + (l - 1))\n & \\mif l \\leq k \\And j \\leq \\maxn\\;, \\\\\nl_{j,l} & = & j & \\mif l \\leq k \\And j > \\maxn\\;, \\\\\nl_{j,l} & = & 0 & \\mif l > k\\;,\n\\eqnsep\nl' & = & k + 3 + 3 \\mul k \\mul \\min(k,\\maxn)\\;,\n\\eqnsep\nl'' & = & l' + 4 \\mul \\min(k,\\maxn)\\;,\n\\eqnsep\nl''_h & = & l' + 4 \\mul h\\;.\n\\end{aceqns}\n\\end{ldispl}%\nThe first idea is that each returning absolute jump can be replaced by\nan absolute jump to the beginning of the instruction sequence\n\\begin{ldispl}\n\\begin{aeqns}\n\\ptst{\\st.\\push{:}j} \\conc \\ajmp{l} \\conc \\ajmp{l''}\\;,\n\\end{aeqns}\n\\end{ldispl}%\nwhere $j$ is the position of the returning absolute jump 
instruction\nconcerned and $l$ is the position of the instruction to jump to.\nThe execution of this instruction sequence leads to the intended jump\nafter the return position has been put on the stack.\nIn the case of stack overflow, deadlock occurs.\nThe second idea is that each return can be replaced by an absolute jump\nto the beginning of the instruction sequence\n\\begin{ldispl}\n\\begin{aeqns}\n\\ntst{\\st.\\topeq{:}1} \\conc \\ajmp{l''_1} \\conc \\st.\\pop \\conc \\ajmp{1}\n \\conc {} \\\\\n\\quad \\vdots \\\\\n\\ntst{\\st.\\topeq{:}n} \\conc \\ajmp{l''_n} \\conc \\st.\\pop \\conc \\ajmp{n}\n \\conc {} \\\\\n\\ajmp{l''}\\;,\n\\end{aeqns}\n\\end{ldispl}%\nwhere $n = \\min(k,\\maxn)$.\nThe execution of this instruction sequence leads to the intended jump\nafter the position on the top of the stack has been found by a linear\nsearch and has been removed from the stack.\nIn the case of an empty stack, deadlock occurs.\nTo enforce termination of the program after execution of its last\ninstruction if the last instruction is a plain basic instruction, a\npositive test instruction or a negative test instruction,\n$\\ajmp{0} \\conc \\ajmp{0}$ is appended to\n$\\psi_1(u_1) \\conc \\ldots \\conc \\psi_k(u_k)$.\nBecause the length of the translated program is greater than $k$, care\nis taken that there are no non-returning or returning absolute jumps to\ninstructions with a position greater than $k$.\n\nLet $P$ be a \\PGLDrj\\ program.\nThen $\\pgldrjpgld(P)$ represents the meaning of $P$ as a \\PGLD\\ program.\nThe intended behaviour of $P$ is the behaviour of $\\pgldrjpgld(P)$ on\ninteraction with a stack.\nThat is, the \\emph{behaviour} of $P$, written $\\extr{P}_\\sPGLDrj$, is\n$\\use{\\extr{\\pgldrjpgld(P)}_\\sPGLD}{\\st}{\\St_\\mathrm{init}}$.\n\nAccording to the definition of the behaviour of \\PGLDrj\\ programs given\nabove, the execution of a returning jump instruction leads to deadlock\nin the case where its position cannot be pushed onto the stack and the\nexecution of a return instruction leads to deadlock in the case where\nthere is no position to be popped from the stack.\nIn the latter case, the return instruction is wrongly used.\nIn the former case, however, the returning jump instruction is not\nwrongly used, but the finiteness of the stack comes into play.\nThis shows that the definition of the behaviour of \\PGLDrj\\ programs\ngiven here takes into account the finiteness of the execution\nenvironment of programs.
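\nThe same linear-search pattern underlies the treatment of indirect\njumps in the preceding sections.\nThe following Python sketch (an informal illustration only, with\nhypothetical names; it is not a rendering of the projection\n$\\pgldrjpgld$ itself) shows the intended behaviour of returning jumps\nand returns, simulated with an explicit bounded stack of positions:\n\\begin{verbatim}\ndef run(program, maxs):\n    """Interpret (op, arg) instructions with 1-based positions.\n\n    A jump of an instruction to its own position diverges,\n    modelling deadlock; test instructions are not covered."""\n    stack, pc = [], 1  # return positions; program counter\n    while 1 <= pc <= len(program):\n        op, arg = program[pc - 1]\n        if op == "ajmp":          # absolute jump; ajmp 0 terminates\n            pc = arg\n        elif op == "arjmp":       # returning absolute jump\n            if len(stack) >= maxs:\n                return "deadlock"  # stack overflow\n            stack.append(pc)      # remember the jump position\n            pc = arg\n        elif op == "return":\n            if not stack:\n                return "deadlock"  # no position to return to\n            pc = stack.pop() + 1  # instruction after the jump\n        else:\n            pc += 1               # plain basic instruction\n    return "termination"\n\\end{verbatim}\nNote that the sketch returns to the remembered position directly,\nwhereas the projection above finds the position on the top of the stack\nby a linear search over the possible stack tops.\n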
\\section{Conclusions}\n\\label{sect-concl}\n\nWe have studied sequential programs that are instruction sequences with\ndirect and indirect jump instructions.\nWe have considered several kinds of indirect jumps, including return\ninstructions.\nFor each kind, we have defined the meaning of programs with indirect\njump instructions of that kind by means of a translation into programs\nwithout indirect jump instructions.\nEach translation determines, together with some memory device\n(a register file or a stack), the behaviour of the programs concerned\nunder execution.\n\nThe increase in the length of a program as a result of translation can\nbe reduced by taking into account which indirect jump instructions\nactually occur in the program.\nThe increase in the number of steps needed by a program as a result of\ntranslation can be reduced by replacing linear searching by binary\nsearching or another more efficient kind of searching.\nOne option for future work is to look for bounds on the increase in\nlength and the increase in number of steps.\n\nIn~\\cite{BM06b}, we have modelled and analysed micro-architectures with\npipe\\-lined instruction processing in the setting of program algebra,\nbasic thread algebra, and Maurer computers~\\cite{Mau66a,Mau06a}.\nIn that work, which we consider a preparatory step in the development of\na formal approach to design new micro-architectures, indirect jump\ninstructions were not taken into account.\nAnother option for future work is to look at the effect of indirect jump\ninstructions on pipelined instruction processing.\n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nAction anticipation refers to detection (\\emph{i.e.}, anticipation) of an action before it happens. Many real world applications are related to this predictive capability; for example, a surveillance system can raise an alarm before an accident happens and allow for intervention, and robots can use anticipation of human actions to make better plans and interactions \\cite{koppula2013anticipating}. Note that online action detection \\cite{de2016online} can be viewed as a special case of action anticipation, where the anticipation time is $0$. \n\n\nAction anticipation is challenging for many reasons. First, it needs to overcome all the difficulties of action detection, which requires strong discriminative representations of video clips and the ability to separate action instances from the wide variety of irrelevant background data. Then, for anticipation, the representation needs to capture sufficient historical and contextual information to make future predictions that are seconds ahead.\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[scale=0.45]{red_problem.pdf}\n\\caption{Anticipating future actions by inferring from history information: the normal images represent past frames and the transparent images represent future frames. }\n\\end{figure*}\n\nState-of-the-art methods on online action detection \\cite{Ma_2016_CVPR, de2016online, yeung2015every} learn LSTM networks to encode history information and predict actions based on the hidden state of the LSTM. For action anticipation, early work \\cite{lan2014hierarchical, pei2011parsing} was based on traditional hand-crafted features. Recently, Vondrick \\emph{et al.} \\cite{vondrick2016anticipating} proposed to use deep neural networks to first anticipate visual representations of future frames and then categorize the anticipated representations to actions.\nHowever, the future representation is anticipated based on a single past frame's representation, while actions are better modeled over a clip, \\emph{i.e.}, multiple frames. Besides, their model only anticipates a single fixed time ahead, whereas it is desirable to be able to anticipate a sequence of continuous future representations. \n\nTo address the anticipation challenges, we propose a Reinforced Encoder-Decoder (RED) network. The encoder-decoder network takes continuous steps of history visual representations as input and outputs a sequence of anticipated future representations. These anticipated representations are processed by a classification network for action classification. Squared loss is used for the representation anticipation and cross-entropy loss is used for action category anticipation (classification) during training. One drawback of the traditional cross-entropy loss is that it only optimizes the encoder-decoder networks greedily at each time step, and lacks sequence-level optimization \\cite{ranzato2015sequence}. 
We propose to use reinforcement learning to train the encoder-decoder networks at the sequence level. The reward function is designed to encourage the model to make correct anticipations as early as possible. We test RED on TVSeries \\cite{de2016online}, THUMOS-14 and TV-Human-Interaction \\cite{patron2010high} for action anticipation and online action detection, and achieve state-of-the-art performance.\n\n\n\\section{Related Work}\n\nIn this section, we introduce work on related topics, including online action detection, offline action detection, action anticipation and reinforcement learning in vision. \n\n\\textbf{Early and Online Action Detection}\nHoai \\emph{et al.} \\cite{hoai2012max, hoai2014max} first proposed the problem of early event detection. They designed a max-margin framework based on structured output SVMs. Ma \\emph{et al.} \\cite{Ma_2016_CVPR} addressed the problem of early action detection. They proposed to train an LSTM network with a ranking loss and to merge the detection spans based on the frame-wise prediction scores generated by the LSTM. Recently, Geest \\emph{et al.} \\cite{de2016online} published a new dataset for online action detection, which consists of 16 hours (27 episodes) of TV series with temporal annotation for 30 action categories. \n\n\\textbf{Offline Action Detection}\nIn the setting of offline action detection, the whole video is given and the task is to detect whether given actions occur in the video and when they occur. S-CNN \\cite{Shou_2016_CVPR} presented a two-stage action localization framework: first using a proposal network to generate temporal proposals and then scoring the proposals with a localization network. TURN \\cite{gao2017turn} proposed to use temporal coordinate regression to refine action boundaries for temporal proposal generation, which proved effective and generalizes to different action domains. TALL \\cite{gao2017tall} used natural language as a query to localize actions in long videos and designed a cross-modal regression model for this task.\n\n\n\n\\textbf{Action Anticipation}\nThere has been some promising work on anticipating future action categories. Lan \\emph{et al.} \\cite{lan2014hierarchical} designed a hierarchical representation, which describes human movements at multiple levels of granularity, to predict future actions in the wild. Pei \\emph{et al.} \\cite{pei2011parsing} proposed an event parsing algorithm using a Stochastic Context Sensitive Grammar (SCSG) for inferring the goals of agents and predicting their intended actions. Xie \\emph{et al.} \\cite{xie2013inferring} proposed to infer people's intention of performing actions, which is a good cue for predicting future actions. Vondrick \\emph{et al.} \\cite{vondrick2016anticipating} proposed to anticipate visual representations by training CNNs on large-scale unlabelled video data.\n\n\\textbf{Reinforcement Learning in Vision}\nWe draw inspiration from recent approaches that used REINFORCE \\cite{williams1992simple} to learn task-specific policies. Yeung \\emph{et al.} \\cite{Yeung_2016_CVPR} proposed to learn policies with LSTM networks to predict the next observation location for the action detection task. Mnih \\emph{et al.} \\cite{mnih2014recurrent} proposed to adaptively select a sequence of regions in images and to process only the selected regions at high resolution for the image classification task. 
Ranzato \\emph{et al.} \\cite{ranzato2015sequence} proposed a sequence-level training algorithm for image captioning that directly optimizes the metric used at test time by policy gradient methods.\n\n\n\\section{Reinforced Encoder-Decoder Network}\nRED contains three modules: a video representation extractor, an encoder-decoder network to encode history information and anticipate future video representations, and a classification network to anticipate action categories; in addition, a reinforcement module calculates rewards and is incorporated in the training phase using a policy gradient algorithm \\cite{williams1992simple}. The architecture is shown in Figure \\ref{model}.\n\n\n\n\n\\subsection{Video Processing}\nA video is segmented into small chunks; each chunk contains $f=6$ consecutive frames. The video chunks are processed by a feature extractor $E_v$. It takes the video chunks $u_i$ as input and outputs a chunk representation $V_i=E_v(u_i)$. More details on video pre-processing and feature extractors can be found in Section 4.1.\n\n\n\n\\subsection{Encoder-Decoder Network}\nThe encoder-decoder network uses an LSTM network as the basic cell. The input to this network is a vector sequence $S_{in}=\\{V_i\\}, i \\in [t-T_{enc},t)$, where vector $V_i$ is a chunk visual representation, $T_{enc}$ is the length of the input sequence and $t$ is the time point in the video. After the last input vector has been read, the decoder LSTM takes over the last hidden state of the encoder LSTM and outputs a prediction for the target sequence $S_{out}=\\{\\hat{V}_j\\}, j \\in [t,t+T_{dec})$, where $T_{dec}$ is the length of the output sequence, \\emph{i.e.}, the anticipation steps. The target sequence consists of the representations of the video chunks that come after the input sequence.\n\nThe goal of the decoder LSTM is to regress future visual representations, based on the last hidden state of the encoder networks. The loss function for training the encoder-decoder networks is the squared loss,\n\\begin{equation}\nL_{reg}=\\frac{1}{N}\\sum_{k=1}^{N}\\sum_{j=1}^{T_{dec}}||\\hat{V}_j^k-V_j^k||^2\n\\end{equation}\nwhere $N$ is the batch size, $\\hat{V}_j^k$ is the anticipated representation and $V_j^k$ is the ground truth representation.\n\\begin{figure*}[]\n\\centering\n\\includegraphics[scale=0.47]{red_model.pdf}\n\\caption{Reinforced Encoder-Decoder (RED) networks architecture for action anticipation. }\n\\label{model}\n\\end{figure*}\n\n\n\\subsection{Classification Network}\nThe output vector sequence, $S_{out}$, of the encoder-decoder networks is processed by the classification network, which has two fully connected layers, to output a classification distribution over action categories. The loss function of classification is the cross-entropy loss: $L_{cls}=-\\frac{1}{N}\\sum_{k=1}^{N}\\sum_{t=1}^{T_{dec}} \\log(p(y_t^k|y_{1:t-1}^k))$, \nwhere $p(y_t^k|y_{1:t-1}^k)$ is the probability score of the ground truth category.
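\nTo make the data flow of these modules concrete, the following is a\nminimal PyTorch-style sketch of the encoder-decoder and classification\nnetworks. The layer sizes, the single-layer LSTMs, and the feeding of\neach anticipated representation back into the decoder are our own\nillustrative assumptions, not a specification of the actual RED\nimplementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass RED(nn.Module):\n    def __init__(self, feat_dim, hid_dim, n_classes, dec_steps):\n        super().__init__()\n        self.encoder = nn.LSTM(feat_dim, hid_dim, batch_first=True)\n        self.decoder = nn.LSTMCell(feat_dim, hid_dim)\n        self.project = nn.Linear(hid_dim, feat_dim)  # anticipated V_j\n        self.classify = nn.Sequential(                # two FC layers\n            nn.Linear(feat_dim, hid_dim), nn.ReLU(),\n            nn.Linear(hid_dim, n_classes))\n        self.dec_steps = dec_steps\n\n    def forward(self, s_in):\n        # s_in: (batch, T_enc, feat_dim), the past chunk features V_i\n        _, (h, c) = self.encoder(s_in)\n        h, c = h[0], c[0]            # decoder takes over hidden state\n        v = s_in[:, -1]              # last observed representation\n        feats, logits = [], []\n        for _ in range(self.dec_steps):  # anticipate T_dec steps\n            h, c = self.decoder(v, (h, c))\n            v = self.project(h)\n            feats.append(v)\n            logits.append(self.classify(v))\n        return torch.stack(feats, 1), torch.stack(logits, 1)\n\\end{verbatim}\nA call such as RED(feat_dim=4096, hid_dim=512, n_classes=31,\ndec_steps=8) applied to a batch of encoder sequences returns the\nanticipated representations $\\hat{V}_j$ together with the corresponding\ncategory distributions (here as logits); the concrete dimensions are\nagain only placeholders.\n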
\\subsection{Reinforcement Module}\nA natural expectation of action anticipation is to make the correct anticipation as early as possible. For example, consider two anticipation sequences "000111" and "001110", and assume the ground truth sequence is "011111", where "1" represents that an action category is happening and "0" represents that no action is happening (\\emph{i.e.}, background). "001110" gives the correct anticipation earlier than "000111", so we consider it the better anticipation at the sequence level. However, cross-entropy loss would not capture such sequence-level distinctions, as it is calculated at each step to output higher confidence scores on the ground truth category, and no sequence-level information is involved.\n\nTo consider sequence-level reward, we incorporate reinforcement learning into our system. The anticipation module (the encoder-decoder networks) and the classification module (the FC layers) together can be viewed as an agent, which interacts with the external environment (the feature vector taken as input at every time step). The parameters of this agent define a policy, whose execution results in the agent making a \\emph{prediction}. In the action detection and anticipation setting, a \\emph{prediction} refers to predicting the action category in the sequence at each time step. After making a \\emph{prediction}, the agent updates its internal state (the hidden state of the LSTM), and observes a reward. \n\nWe design a reward function to encourage the agent to make the correct anticipation as early as possible. Assume the agent outputs an anticipation sequence $\\{\\hat{y}_i\\}$ and the corresponding ground truth labels are $\\{y_i\\}$. In the ground truth label sequence, we denote the time position $t_f$ at which the label changes from background to some action class as the \\emph{transferring time}; for example, in "001111", $t_f=2$. At each step $t$ of the anticipation, the reward $r_t$ is calculated as \n\\begin{equation}\nr_t= \\frac{\\alpha}{t+1-t_f}, \\text{ if } t\\geq t_f \\text{ and } \\hat{y}_t=y_t; ~~~ r_t=0, \\text{ otherwise}\n\\end{equation}\nwhere $\\alpha$ is a constant parameter. If the agent makes the correct prediction at the transferring time $t_f$, it receives the largest reward. The reward for making a correct anticipation decays with time. The cumulative reward of the sequence is calculated as $R=\\sum_{t=1}^{T_{dec}}r_t$.\n\nThe goal is to maximize the expected cumulative reward $R$ when interacting with the environment, that is, to encourage the agent to output correct anticipations as early as possible. More formally, the policy of the agent induces a distribution over possible anticipation sequences $p(y_{1:T})$, and we want to maximize the reward under this distribution:\n\n\\begin{equation}\nJ(\\theta)=E_{p(y_{1:T};\\theta)}[\\sum_{t=1}^{T_{dec}} r_t]=E_{p(y_{1:T};\\theta)}[R]\n\\end{equation}\nwhere $y_{1:T}$ is the sequence of action categories predicted by our model. To maximize $J(\\theta)$, as shown in REINFORCE~\\cite{williams1992simple}, an approximation to the gradient is given by \n\\begin{equation}\n\\nabla_\\theta J\\approx\\frac{1}{N}\\sum_{k=1}^N\\sum_{t=1}^{T_{dec}}\\nabla_\\theta \\log\\pi(a_t^k|h_{1:t}^k,a_{1:t-1}^k)R_t^k\n\\end{equation}\nwhere $\\pi$ is the agent's policy. In our case, the policy $\\pi$ is the probability distribution over action categories at each time step. Thus, the gradient can be written as\n\\begin{equation}\n\\nabla_\\theta J\\approx\\frac{1}{N}\\sum_{k=1}^N\\sum_{t=1}^{T_{dec}}\\nabla_\\theta \\log p(y_t^k|y_{1:t-1}^k)(R_t^k-b_t^k)\n\\end{equation}\nwhere $b_t^k$ is a baseline reward, which is estimated by a separate network. The network consists of two fully connected layers and takes the last hidden state of the encoder network as input. 
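\nThe reward and the policy-gradient update above can be sketched in a\nfew lines of PyTorch. The code is illustrative only; in particular, the\ninterpretation of $R_t$ as the cumulative future reward from step $t$\nonwards is our assumption, as is the single-switch structure of the\nground truth sequence:\n\\begin{verbatim}\nimport torch\n\ndef anticipation_rewards(pred, gt, alpha=1.0):\n    # pred, gt: 1-D integer label tensors; 0 denotes background.\n    # Assumes gt switches from background to an action exactly once.\n    t_f = int((gt != 0).nonzero()[0])      # transferring time\n    t = torch.arange(len(gt))\n    correct = (pred == gt) & (t >= t_f)\n    return torch.where(correct, alpha / (t - t_f + 1.0),\n                       torch.zeros(len(gt)))\n\ndef reinforce_loss(log_probs, rewards, baselines):\n    # Negative REINFORCE objective with a baseline: minimizing this\n    # loss follows the approximate gradient of J given above.\n    returns = rewards.flip(0).cumsum(0).flip(0)  # R_t, our assumption\n    advantage = (returns - baselines).detach()\n    return -(log_probs * advantage).sum()\n\\end{verbatim}\nFor the example sequences above, anticipation_rewards confirms that\n"001110" collects more (and earlier) reward than "000111" against the\nground truth "011111", which is exactly the sequence-level preference\nthat the step-wise cross-entropy loss misses.\n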
\\subsection{Training Procedure}\n\nThe goal of the encoder-decoder networks is to anticipate future representations, so unlabelled video segments (no matter whether they contain actions) could be used as training samples. As the positive segments (\\emph{i.e.}, segments in which some action is happening) make up only a small part of the videos in the datasets, RED is trained by a two-stage process. In the first stage, the encoder-decoder networks for representation anticipation are trained by the regression loss $L_{reg}$ on all training videos in the dataset as an initialization. In the second stage, the encoder-decoder networks are optimized by the overall loss function $L$, which combines the regression loss $L_{reg}$, the cross-entropy loss $L_{cls}$ introduced by the classification network and the reward expectation $J$ introduced by the reinforcement module, on the positive samples in the videos: \n\\begin{equation}\nL=L_{reg}+L_{cls}-J\n\\end{equation}\nThe classification network is only trained in the second stage by $L_{cls}$ and $J$.\n\nThe training samples for the first stage do not require any annotations, so they can be collected at any position in the videos. Specifically, at a time point $t$, the input sequence for the encoder networks is $[V_{t-T_{enc}},V_{t})$, and the ground truth output sequence for the decoder networks is $[V_{t},V_{t+T_{dec}})$, where $V_t$ is the visual representation at $t$. For the second stage, the training samples are collected around positive action intervals. Specifically, given a positive action interval $[t_s, t_e]$, the central time point $t$ could be selected from $t>t_s-T_{enc}$ to $t<t_e$.
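\nAs a compact illustration of the two stages, the following sketch\n(assuming precomputed chunk features and the RED module sketched\nearlier; all names and the unweighted sum of the loss terms are our own\nassumptions) shows how stage-one samples and the second-stage objective\ncould be assembled:\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef stage_one_pair(features, t, T_enc, T_dec):\n    # features: (num_chunks, feat_dim) chunk representations V_i of\n    # one video; no annotation is needed for this stage.\n    s_in = features[t - T_enc:t]    # encoder input  [V_{t-T_enc}, V_t)\n    s_gt = features[t:t + T_dec]    # decoder target [V_t, V_{t+T_dec})\n    return s_in, s_gt\n\ndef overall_loss(v_hat, v_gt, logits, labels, J):\n    # Second-stage objective L = L_reg + L_cls - J.\n    l_reg = ((v_hat - v_gt) ** 2).sum(-1).mean()\n    l_cls = F.cross_entropy(logits.flatten(0, 1), labels.flatten())\n    return l_reg + l_cls - J\n\\end{verbatim}\nIn the first stage only l_reg would be minimized, on pairs drawn from\narbitrary positions; in the second stage the full overall_loss is\nminimized on samples drawn around the positive intervals.\n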