diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhjvw" "b/data_all_eng_slimpj/shuffled/split2/finalzzhjvw" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhjvw" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\\input{intro}\n\n\\section{Background}\n\\label{sec:bg}\n\\input{background}\n\n\\subsection{Motivation}\n\\label{sec:motiv}\n\\input{motivation}\n\n\\section{Gossip Service}\n\\label{sec:protocol}\n\\input{protocol}\n\n\\section{Performance Evaluation}\n\\label{sec:impl}\n\\input{implementation}\n\n\\section{Related Work}\n\\label{sec:related}\n\\input{related}\n\n\\section{Conclusion}\n\\label{sec:concl}\n\\input{conclusion}\n\n\n\\section{Availability of Code}\n\nThe source code developed and used for the performance evaluation comprised in this paper is available as open source, allowing the experiments to be reproduced.\nIn detail, our implementation of WS-Gossip on the Web Services for Devices (WS4D) Java Multi Edition DPWS Stack (JMEDS) is available at \\url{https:\/\/github.com\/filipecampos\/ws_gossip}.\nThe code used in the WS-Eventing scenarios, as well as for setting up and controlling all the experiments, is available at \\url{https:\/\/github.com\/filipecampos\/ws_gossip_tests}.\n\n\n\\section*{Acknowledgments}\n\\small{\nThis work has been partially supported by the Portuguese National Science Foundation FCT - Funda\u00e7\u00e3o da Ci\u00eancia e Tecnologia, through grant SFRH\/BD\/66242\/2009.\n}\n\n\\bibliographystyle{abbrv}\n\n\\subsection{Service Oriented Standards}\n\n\\subsubsection{Eventing and messaging}\n\nThe WS-Eventing specification supports simple\npublisher-subscriber interaction, by defining how Web Services can subscribe to\nor accept subscriptions for event notification messages\\,\\cite{ws_eventing}.\n\nThe specification defines the four following roles:\n\\begin{description}\n \\item [Event Source]Sends notifications on triggered events,\nand accepts requests for creating subscriptions.\n \\item [Subscription Manager]Manages event subscriptions. \nIt notifies \\textbf{Subscribers} when their subscriptions are terminated unexpectedly,\n and replies to their subscription management enquiries, such as subscription's\nstatus retrieval, renewal or deletion.\n \\item [Event Sink]Receives event notification messages.\n \\item [Subscriber]Contacts an \\textbf{Event Source} to create a subscription to\nmanifest the interest of its associated \\textbf{Event Sink} to be notified on the\noccurrence of some event.\nnotifications. It is also responsible for issuing subscription management\nrequests to the \\textbf{Subscription Manager}.\n\\end{description}\n\nIn the simplest scenario, with only two intervening Web Services, a \\textbf{Publisher}\nwill comprise both the \\textbf{Event Source} and the \\textbf{Subscription \nManager} roles, whereas the entity that accumulates both the\n\\textbf{Subscriber} and \\textbf{Event Sink} roles will be referred as a \\textbf{Subscriber}.\n\n\\begin{figure}[htbp]\n \\centering\n\\includegraphics[width=.8\\textwidth]{ws_eventing}\n\\caption{WS-Eventing components.}\n\\label{fig:ws_eventing}\n\\end{figure}\n\nWS-Eventing defines the concept of delivery mode, in order to better adapt the\ngeneral publish\/subscribe pattern to scenarios with different event delivery\nrequirements. The default delivery mode of this specification is the single\ndelivery asynchronous \\textit{push} mode. 
But, for instance, situations with slow event\nconsumers where they poll for event messages may be preferred, in order to\ncontrol the rate of message arrival and avoid overwhelming them if the rate of\ngeneration and transmission of event messages is far superior to their\nprocessing rate.\n The subscription request message defines the specific delivery mode to be used in\nnotifying the identified \\textbf{Event Sink}, and new delivery modes can be freely used,\nif both event sources and consumers support them. In the event that a \\textbf{Subscriber}\nrequests a delivery mode that is not supported by the \\textbf{Event Source}, it will\nrespond signaling this situation, and it may convey a list of the supported\ndelivery modes.\nThis specification proposes, as an example, that notifications can be wrapped in\na standard message instead of the default unwrapped mode where each notification\nis transmitted as a message typed according to the event's action.\n\nAlthough it lacks explicit support for brokered dissemination, it embodies a\nflexible filtering mechanism in the base specification, favoring lightweight\nimplementations and many-to-one dissemination scenarios. And since it was backed\nby major vendors, such as IBM, Microsoft or TIBCO, it has therefore been the\npreferred choice for connected devices, namely, within WS-Management\\,\\cite{ws_man} and DPWS.\n\nThe alternative family of standards OASIS WS-Notification\\,\\cite{ws_bn,ws_brn,ws_t} also provides, besides simple notification and subscription mechanisms, extensible topic definition and brokered dissemination.\n\n\nSince the early 1990s, Reliable Messaging has been seen as a solution for such\nscenarios by the IT community, and so, several message queueing technologies\nhave been used, such as IBM's WebSphereMQ and Microsoft's MSMQ, in addition to\nreliable publish\/subscribe technologies, such as Tibco Rendezvous. In an effort\nto bridge all these different technologies, the Java Message Service (JMS) API\nwas developed by the Java Community Process. Some of these technologies were\nadapted to Web Services. However, due to the exploitation of proprietary\nprotocols, interoperability can only be achieved recurring to gateways that\ntranslate between specific pairs of environments.\n\nWith the emergence of Web Services as the preferred integration solution for\ndistributed systems, WS-Reliable\\-Mes\\-sag\\-ing (WS-RM)\\,\\cite{wsrm} is the\ncurrently adopted standard for achieving reliable message exchange between\ndistributed applications in the presence of software component, system and\nnetwork failures. Due to the interoperable nature of Web Services, WS-RM allows\nto bridge two different infrastructures, such as different operating or\nmiddleware systems, into an end-to-end model where messages are exchanged\nreliably\\,\\cite{ibm_secure}. So, this standard ensures the interoperability of\nservices in what comes to\nReliable Messaging, which also simplifies the development of services,\nsince they must implement the protocols, minimizing the number of errors in\nbusiness logic\\,\\cite{ibm_secure}.\n\nWS-RM distinguishes all the entities\ninvolved in an interaction, as well as the various meanings of the terms\n\\textit{send}, \\textit{transmit}, \\textit{receive} and \\textit{deliver}, as they\nrelate to different components. 
In that sense, the basic model of\nWS-RM is described in Figure\\,\\ref{rm_fig} and it\nincludes four distinct entities:\n\\begin{description}\n\\item [Application Source]Service or application logic that \\textit{sends}\nthe message to the \\textbf{RM Source};\n\\item [RM Source]Physical processor or node that performs the actual wire\ntransmission;\n\\item [RM Destination]Target processor or node that \\textit{receives} the\nmessage and then \\textit{delivers} it to the \\textit{application destination};\n\\item [Application Destination]Target service of the message.\n\\end{description}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.6\\textwidth]{ws_rm}\n\\caption{WS-ReliableMessaging basic interaction.}\n\\label{rm_fig}\n\\end{figure}\n\nThese nodes are endpoints, which according to the WS-RM\nstandard, represent addressable entities that send and receive Web Services\nmessages.\n\nThe basic mechanism of the standard works, in a simplified way, as follows: the source\nnode sends a Web Service message containing a WS-RM\nheader, which is received by the destination node that then replies by sending\nan acknowledgment message to the source node.\n\nThere are several types of assurances defined in WS-RM, in terms of message\ndelivery:\n\\begin{description}\n\\item [AtMostOnce]A message is delivered at most once, but it might not be\ndelivered at all.\n\\item [AtLeastOnce]A message is delivered at least once, but it could be\ndelivered more times.\n\\item [ExactlyOnce]This type is a combination of the previous two. A message is\ndelivered only once.\n\\item [InOrder]When there are several ordered messages, they are delivered in\nthe same order as they were sent.\n\\end{description}\nEach reliable message exchanging sequence will enforce the message delivery guarantees according to the type defined in the sequence's creation.\n\n\n\\subsubsection{Transactional Coordination}\n\nThe term coordination sometimes refers to a type of orchestration that is\ndefined in the WS-Coordination specification\\,\\cite{wscoor}. It\nspecifies an extensible framework for \\textit{context} management, that provides\ncoordination for the actions of distributed applications. This coordination is\nachieved through provided protocols that support distributed applications, for\ninstance, those that need to reach consistent agreement on the outcome of\ndistributed transactions.\n\nAn application service can create a context needed to propagate coordination\ninformation to other services involved in some activity. These services will\nthen need to register as participants of that activity. For this purpose, the\napplication must include the created \\textit{coordination context} in the\nmessages that it sends to the referred services.\n\nA \\textit{coordination context} can be transmitted using application-specific\nmechanisms, such as the header element of a SOAP application message. This kind\nof conveyance is commonly referred to as flowing the \\textit{context}.\n\nThe structure of a \\textit{context} and the requirements to propagate it between\ncooperating services are also defined in WS-Coordination, and can depend on the\ntype of coordination that is used. 
A \\textit{coordination context} contains\ninformation on:\n\\begin{itemize}\n\\item how to access a coordination registration service;\n\\item the coordination type;\n\\item relevant extensions.\n\\end{itemize}\n\nThis framework also enables existing transaction processing, workflow, and other\nsystems for coordination to hide their proprietary protocols and to operate in\nan heterogeneous environment.\n\nThis specification is not sufficient by itself to coordinate Web Services, since\nit provides only a coordination framework, leaving undefined the concrete\nprotocol and targeted coordination type. The standards WS-Atomic{\\-}Transaction (WS-AT)\\,\\cite{wsat} and\nWS-BusinessActivity (WS-BA)\\,\\cite{wsba} implement the WS-Coordination framework and also extend it \nby defining their own coordination type: short-term atomic transactions, and\nlong-running business activities, respectively.\n\nThe WS-AT specification defines a\nprotocol that can be plugged into WS-Coordination to provide an adaptation for\nWeb Services of the classic \\textit{2PC}\\,\\cite{understanding_soa} mechanism, making the changes,\nresulting from the activity of some service, persistent. It\nis often said that this protocol does not adapt well to Web Services.\nNonetheless, it is adequate for interoperability across short-lived, co-located\nservices that need to ensure consistent, all-or-nothing results for a\ntransaction.\n\nA \\textit{2PC} process\nconsists on a poll conducted by the \\textit{coordinator} that will lead it to\nsend two alternative directives to all the participants in the transaction:\n\t\\begin{description}\n\t \\item [commit]If all of the registered services have responded\nindicating that the changes were successful.\n\t \\item [rollback]If at least one of the registered services\nfails to respond or responds indicating a failure. \n\t\\end{description}\n\nA service that participates in an atomic transaction can register for more than\none of the different types of coordination protocols, as defined in the WS-AT\nspecification:\n\n\t\\begin{description}\n\t\\item [Two-Phase Commit]Coordinates registered participants to\nreach a commit or abort decision, and ensures that all participants are informed\nof the final result. It has two variants:\n\t\t\\begin{description}\n\t\t\\item [Volatile 2PC]Participants manage volatile\nresources such as a cache register or a window manager.\n\t\t \\item [Durable 2PC]Participants manage durable\nresources such as a da{\\-}ta{\\-}base register or a file.\n\t\t\\end{description}\n\t \\item [Completion]Initiates commit processing when an\napplication tells the \\textit{coordinator} to either try to commit or abort an\natomic transaction. Based on the registered participants for each protocol, the\n\\textit{coordinator} begins with \\textit{Volatile 2PC} and then proceeds through\n\\textit{Durable 2PC}. 
After the transaction has completed, a status is returned\nto the application and the final result to the service that initiates the\ntransaction (\\textit{initiator}), if it has registered for this protocol.\n\t\\end{description}\n\n\\subsubsection{Combining services}\n\nOne of the main benefits of using standards lies in the ability to combine them due\nto their well known interfaces and behavior, in order to extract the most suitable features for each scenario.\n\nFor instance, both of the aforementioned event-driven specifications, WS-Eventing and WS-Notification, can\nbe combined with a coordination protocol in\norder to guarantee the atomicity of an event\\,\\cite{4279671}. Although this\ncomposition ensures that notifications reach all the relevant targets, it proves\nto be a heavy process for resource constrained devices, specially in scenarios\nwith a large number of targets, or with a large amount of communication errors,\nwhere WS-RM could help mitigating them, but also at the cost of increasing\nresource consumption.\n\nWS-RM can be combined with several WS-* standards. On the one hand, it can\nimprove its features by leveraging:\n\\begin{itemize}\n \\item WS-Addressing, which enables the identification of messages and addresses\nof\nendpoints. This specification was modified to accommodate some needs of the WS-RM\nspecification, like the reuse of a message ID when retransmitting identical\nmessages to counter communication errors\\,\\cite{soa_concepts_tech_design};\n \\item WS-Security, to protect the integrity and confidentiality of the\nexchanged messages;\n \\item WS-Policy, to specify the delivery assurance, among other requirements,\nfor a\nsequence\\,\\cite{soa_concepts_tech_design,soa_field_guide,soa_approach}.\n\\end{itemize}\nOn the other hand, due to its ability to ensure reliable communication between\ntwo endpoints, WS-RM can be leveraged by other standards, such as\nWS-Eventing, WS-Notification, WS-AT, WS-BA and WS-Coordination to\nachieve reliable communication among the intervening parties.\n\nAlthough the WS-RM specification allows to condition\nservice activities, it is different from WS-AT or\nWS-BA, in the sense that a coordinating entity is not\nneeded to inspect the progress of the activities, being the reliability rules\nconveyed as SOAP headers in the exchanged\nmessages\\,\\cite{soa_concepts_tech_design}.\n\nRegarding the questions posed in the beginning of this section, WS-RM would be a \nsuitable standard to ensure point-to-point reliable message delivery. However, it\nwould be very inefficient and poise a heavy weight on the message sender in terms \nof processing power, if there are lots of message recipients or if lots of errors \noccur. In order for WS-RM to guarantee atomic delivery to all targets, it would have to \nrely on WS-AT, or a similar protocol based in WS-Coordination, which would, once again, increase the\nconsumption of the sender's processing and communication resources, due to the additional message traffic.\n\n\n\\subsection{Gossip}\n\nIn computer networking, gossiping describes the process where a participant that\nintends to disseminate some information chooses a small random subset of other\nparticipants and forwards the information to them. Each of these destinations,\nupon receiving the information, repeats the same procedure, hence, the gossip\nmoniker. 
This procedure also mimics how epidemics spread in populations and,\ntherefore, such protocols are also known as epidemic protocols\\,\\cite{Eugster:2004p10747}.\n\n\\subsubsection{Reliability and Scale}\n\n\\begin{figure}[t]\n \\centering\n \\begin{tikzpicture}\n \\begin{axis}[xlabel=Fanout (f),ylabel=(\\%),ymax=130]\n \\addplot table[header=false,x index=0,y index=1]{gossip.dat};\n \\addplot table[header=false,x index=0,y index=2]{gossip.dat};\n \\legend{Average Receivers,Atomic Runs}\n \\end{axis}\n \\end{tikzpicture}\n\\caption{Reliability of gossip (250 participants, 10 dissemination runs,\nvariable fanout).}\n\\label{fig:gosreliability}\n\\end{figure}\n\nMost interestingly, gossip protocols don't need a reactive mechanism to deal\nwith failures, namely, buffering, acknowledgement, retransmission, and garbage\ncollection, which account for most of the complexity in common communication\nprotocols. Instead, reliability is proactively achieved by the protocol's\ninherent redundancy and randomization, which cope with both process and network\nlink failures.\n\nThe expected probability for a message being delivered to each destination and\nto all destinations as a whole can be derived directly from the protocol parameters\n$f$, the number of targets that are locally selected by each process for\ngossiping, and $r$, the maximum number of times a message is relayed before being\nignored. Figure~\\ref{fig:gosreliability} illustrates the impact of these parameters\nby showing simulation results of disseminating 10 messages to 250 receivers,\nwith $r=5$ and a variable $f$. Notice that with $f>4$ each destination gets each\nmessage with a very high probability. With $f>7$, each message is atomically\nreceived by all destinations also with a very high probability.\n\nBy adjusting the $r$ and $f$ parameters according to system size and expected\nfaults, gossip can be configured such that any desired average number of\nreceivers successfully gets the message. Better yet, parameters can be set such\nthat the message is atomically delivered to all the receivers with high\nprobability, leading to guaranteed atomic delivery\\,\\cite{1297243}. The key to\nscalability is that the required fanout configuration is at worst\nlogarithmically proportional to system size.\n\n\\subsubsection{Variants}\n\nThere are two main variants of gossip protocols\\,\\cite{892324}, which provide\ndifferent message exchange patterns and performance trade-offs. In \\emph{push\ngossip}, a node that becomes aware of new information conveys it immediately to\ntarget nodes. This variant is adequate for one-to-many dissemination of small\nmessages and events. With \\emph{pull gossip}, instead of gossiping upon arrival\nof new information, a node periodically selects a number of peers and asks them\nfor new information. It has been shown that combining \\emph{push} and \\emph{pull\ngossip} results in dissemination being achieved in a lower number of\nsteps\\,\\cite{892324} and provides a generic framework for gossiping that can be\ntailored for multiple purposes by parameterizing it with different aggregation\nfunctions\\,\\cite{newscast03}.\n\nIn addition, lazily deferring the transmission of the payload improves performance\nin heterogeneous networks, allowing gossip protocols to approximate ideal\nresource usage efficiency\\,\\cite{Pereira06}. 
Such \\emph{lazy} variants are most\nuseful when the data payload is very large, but also when it is very likely that\nthe data is already known throughout the network.\n\nFinally, there are two options regarding relaying duplicate messages. In the\n\\textit{infect-and-die} model, a participant that receives the message (i.e. is\ninfected), sends the received message to other nodes, and then never sends it\nagain, becoming dead in the analogy with epidemics. In the\n\\textit{infect{\\--}forever} model, also known as\n\\textit{balls-and-bins}\\,\\cite{koldehofe02simple}, a participant might relay\nreceived message multiple times, possibly until $r$ rounds are reached. This\nlast alternative has the advantage of requiring no state at participants to\nrecall recently relayed messages. On the other hand, it usually requires more\nnetwork resources as the relay limit has to be set conservatively.\n\n\\subsubsection{Membership Management}\n\nA key component of a gossip protocol is the ability to obtain random subsets of\nparticipants to direct messages at in each gossip operation. This component has\nto provide an uniform random sample and, as much as possible, drawn from a\ncurrent view of operational participants\\,\\cite{1045666}. The first option is to\nshare the full list of participants, allowing each of them to locally draw\nsubsets as desired\\,\\cite{312207}. This is adequate when the list does not\nchange frequently, to avoid taxing the network with constant updates, and is\nsmall enough to fit each participants memory.\n\nIf these conditions are not met, it has also been shown that sufficiently good\nrandom samples can be obtained by having each participant keep a small partial\nview of the system, which is itself maintained using a gossip\nprotocol\\,\\cite{945507,Eugster:2004p10747}. A particularly simple but effective\napproach\\,\\cite{Voulgaris:2005p6235} is allowing a node to exchange some\nelements in its local list with the same number of elements from some other\nnode. This progressively shuffles the list of each participant and leads to the\ndesired uniform random sample. By adding a time-based lease and renewal\nmechanism, it also deals with participants entering and leaving the system.\n\n\n\\subsection{Experimental setting}\n\nExperimental evaluation is done using the Minha middleware test platform\\,\\cite{Carvalho:2011:EED:2093185.2093188,minha_url}, which virtualizes multiple devices within a single JVM while simulating the performance characteristics of a real system.\nIt also allowed us to inject network faults to better assess the reliability of WS-Gossip.\n\n\nEach test corresponds to the simulation of the runtime of a given number of devices collocated in the same LAN, in a single host with the following configuration: 64-bit Ubuntu Server 10.04.4 Linux, two 12-core AMD Opteron\\textsuperscript{TM} Processor 6172, 2.1GHz, 128 GB RAM, 64-bit Sun Microsystems Java SE 1.6.0\\_26.\n\nThe evaluation consists in executing a periodic event dissemination, for the mentioned scenarios, where a new value is propagated from a single producer device to a given number of consumer devices.\nA centralized managing device was used to control peer management\\footnote{Discovery proxy is not yet implemented in JMEDS 2.0 beta 3a. 
Instead we used a custom registry service.} and the execution of the test.\n\nThe following scenarios were analyzed:\n\\begin{description}\n\\item [WS-Eventing]A publish\/subscribe communication protocol was selected as it is one of the most used event dissemination patterns. Hence, the WS-Eventing standard, as provided by JMEDS, was evaluated using HTTP\/TCP communication.\n\\item [WS-Gossip]The \\textit{push} variant of WS-Gossip was selected to be evaluated in conjunction with SOAP-over-UDP.\nTwo different scenarios were evaluated for WS-Gossip in terms of communication errors to compare the achieved reliability and latency degradation.\nEach of these scenarios designation is then suffixed with (0\\% Loss), when there are no message losses, and with (10\\% Loss), when 10\\% of communication losses are introduced by the Minha simulator.\n\\end{description}\n\nThe execution procedure of each test comprised the following steps:\n\\begin{enumerate}\n \\item The manager and the producer devices are started.\n \\item The consumer devices are then started. In WS-Eventing,\nthey subscribe with the producer as soon as they are started. In WS-Gossip, the manager, informs each consumer of its neighbors according with the configured fanout value, so they can convey new messages to them.\nFor both scenarios, the manager verifies if all the devices started correctly before signaling the producer to start the dissemination.\n \\item The producer begins disseminating events periodically, which are propagated across the network.\n \\item The producer terminates and notifies the manager.\n \\item The manager informs, sequentially, all the devices about the file they should write their run statistics to.\n\\end{enumerate}\nThe tests for each scenario consisted in 5 runs for each given number of devices, where 120 events were periodically emitted with an interval of 5 seconds.\n\nThe interval between the initial emission of a message and its reception by a consumer was measured in nanoseconds since Minha enables the execution of all the intervening devices inside a single JVM on a single host.\nThe sampling of the instant of emission was performed right before the producer sends a message, and the reception time measurement was done in the first operation of the method invoked to deal with a new message at a consumer.\n\nIn WS-Gossip, the used values for the fanout parameter were computed according to\\,\\cite{amk-from-epidemics-to-dc}, taking into account the number of devices, as well as an expected error rate (e) of 5\\% and a delivery assurance (p) of 99\\%, ranging from a value of 8 for 10 devices to 11 for 250 devices.\nIn these very same scenarios, the publisher is randomly selected from all the nodes, contrarily to the WS-Eventing scenario where the publisher is the first device.\n\n\\subsection{Results and discussion}\n\n\\begin{figure}[htbp]\n \\centering\n \\begin{tikzpicture}\n \\begin{axis}[xlabel=devices,ylabel=ms,ymax=130, legend columns=1, legend style={\n \tat={(1.03,0.48)},\n\tanchor=west}]\n \\addplot table[header=false,x index=0,y index=1]{pushudp-0.dat};\n \\addplot table[header=false,x index=0,y index=1]{pushudp-10.dat};\n \\addplot table[header=false,x index=0,y index=1]{notif.dat};\n \\legend{WS-Gossip(0\\% Loss),WS-Gossip(10\\% Loss),WS-Eventing}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{WS-Eventing vs. 
WS-Gossip (latency).}\n \\label{fig:wse_wsg}\n \\end{figure}\n \n\\begin{figure}[htbp]\n \\centering\n \\begin{tikzpicture}\n \\begin{axis}[xlabel=devices,ylabel=hops,ymin=0, legend columns=1, legend style={\n \tat={(1.03,0.48)},\n\tanchor=west}]\n \\addplot table[header=false,x index=0,y index=3]{pushudp-0.dat};\n \\addplot table[header=false,x index=0,y index=3]{pushudp-10.dat};\n \\legend{WS-Gossip(0\\% Loss),WS-Gossip(10\\% Loss)}\n \\end{axis}\n \\end{tikzpicture}\n \\caption{Average hops to delivery in WS-Gossip.}\n \\label{fig:hops}\n\\end{figure}\n\nResults presented in Figures~\\ref{fig:wse_wsg} and \\ref{fig:hops} are the average of all 5 runs for each scenario.\nFor latency measurements, the first and the last 10 iterations were discarded in order to minimize the effect of Java JIT compilation, although it also masks the delay of TCP connection establishment in WS-Eventing.\n \nIn Figure\\,\\ref{fig:wse_wsg}, the message delivery latency of WS-Eventing grows linearly with the number of targets, from 10 to 126 milliseconds, whereas that of WS-Gossip(0\\% Loss) is very small and grows very slowly, from 2.8 to 10.5 milliseconds.\nThis can be explained by the load of propagating a message being scattered throughout the entire network, among the devices on that network, instead of overloading a single device, such as the publisher in WS-Eventing.\nThe message delivery latency of WS-Gossip(10\\% Loss) is very close to that of WS-Gossip(0\\% Loss), suffering a small increase of around 0.1 milliseconds.\n \nFigure\\,\\ref{fig:hops} presents the logarithmic growth of the average number of hops a message goes through from emission to reception in WS-Gossip, from 1.2 to 2.64 hops, confirming that the gossip protocol scales logarithmically with system size. This figure also shows that the introduction of communication losses has little effect on the number of hops a message goes through, with an average increase of around 0.6 hops in WS-Gossip(10\\% Loss) compared to the baseline scenario, where no messages are lost in the network.\n \nMessage delivery rate is not presented graphically since it is 100\\% both in WS-Eventing and in WS-Gossip(0\\% Loss) and it is always greater than 99.9\\% in WS-Gossip(10\\% Loss), and most frequently 100\\%, even with a rate of communication losses that is double the expected 5\\%.\n\nTo conclude the analysis of the results, consider an environment with \\textit{n} devices: the WS-Eventing producer always has to send \\textit{n} messages for each event, whereas each gossip peer sends a number of messages equal to its fanout \\textit{f}, thus spreading the load throughout the network, which results in savings in the consumption of resources by the producer whenever \\textit{n} $ > $ \\textit{f}.\n\n\n\n\n\\subsection{Header information}\n\nAs previously stated, the unit of information being gossiped is the SOAP envelope. Messages in a gossip interaction contain an entry in the SOAP header section of the SOAP envelope describing how to relay such messages. These are initialized by the initiator device, either within a shadow service or by a gossip-aware client. Moreover, there is also the assumption of WS-Addressing\\,\\cite{ws_a} providing a unique identifier for each message and support for asynchronous replies. Briefly, it contains the following information:\n\\begin{description}\n\\item [Scope\/Type]As defined by WS-Discovery, this field implicitly describes the set of targets. 
Devices can be configured to relay messages only within a specific scope and type.\n\\item [Fanout]The number of peers to target in each interaction.\n\\item [Hops]The remaining number of hops. This must be decremented by each device that relays the message. The message is discarded when it reaches zero.\n\\item [IdTTL]The time that each device should buffer the message identifier for duplicate detection. If this is set to zero, the protocol degenerates to the \\textit{balls-and-bins} variant\\,\\cite{ballsandbins}.\n\\item [DataTTL]The time that each device should buffer the message itself for retransmission in lazy gossip variants. If this is set to zero, the protocol will never issue advertisements and will always use an \\textit{eager} variant.\n\\item [Filter]An optional item, specifying a rule to filter replies, which must be specified using XSLT. Valid rules are configured by the deployer and advertised as policies by the shadow service.\n\\end{description}\n\n\\subsection{Operation styles}\n\nSOAP and WSDL support several operation styles\\,\\cite{soap,wsdl_11,wsdl_12}. Besides a typical client-server interaction (i.e. \\textit{request-response}), it is also possible to have input-only operations (i.e. \\textit{one-way}), output-only operations (i.e. \\textit{notification}), and call-back operations (i.e. \\textit{solicit-response}). It is also possible that a \\textit{two-way} operation leads to multiple replies. These different operation styles allow WS-Gossip to support different gossip variants in addition to the previously described eager \\textit{push-style}, such as the \\textit{lazy} and the \\textit{pull} variants.\n\nGossiping in \\textit{one-way} and \\textit{notification} operations is handled as described previously: Upon reception of a message, it is propagated and no reply is expected. In \\textit{request-reply} and \\textit{solicit-response} operation, the message is propagated and then all replies received are propagated back to the initiator. This requires the initiator's address to be stored alongside with the message identifier used for duplicate detection during the specified \\textbf{IdTTL}. Consider the following example: A \\textit{request-response} to query available disk space of servers in a data center. A client invokes the operation on the shadow service, which eventually reaches all targets. All responses then travel back along the same tree implicitly created by the request message and will eventually reach the initiator.\n\nAn alternative is to make use of a filter. This can omit or aggregate replies according to a rule specified when gossip is initiated. \nConsider the following example: The same \\textit{request-response} operation is used to determine which server has the most available disk space in a data center. This requires that upon deployment, devices are configured to support the maximum filter on the disk space query operation. A client invokes the operation on the shadow service, which eventually reaches all targets. Responses then travel back along the same tree implicitly created by request message, but they are buffered and filtered such that only the maximum discovered downstream is returned by each peer. Each peer's reply is sent as soon as all its targets have replied, with a value or with a fault, or when a timeout expires.\n\n\\subsection{Gossip styles}\n\nIn addition to eager \\textit{push-style} gossip described so far, \\textit{lazy} and \\textit{pull} variants are supported as follows. 
Besides offering the same port type as the hosted service, the shadow gossip service provides a gossip port with the following operations:\n\\begin{description}\n\\item [Push]Alternative to directly using the interface. This allows a set of messages to be submitted in a single interaction.\n\\item [PushIds]Informs the target that a number of messages are locally available. These should then be requested using \\textbf{Fetch}.\n\\item [Pull]Returns currently buffered messages during a time interval specified as a parameter.\n\\item [PullIds]Variant of the previous operation, which requests identifiers instead of the actual messages. These can then be requested using \\textbf{Fetch}.\n\\item [Fetch]Returns currently buffered messages, as specified by a list of identifiers provided as a parameter.\n\\end{description}\nGossip variants can be achieved through the composition of the previous operations. Namely, \\textit{lazy push} is obtained by using \\textbf{PushIds} instead of \\textbf{Push} and then waiting for \\textbf{Fetch} to be used later on selected identifiers. \\textit{Eager pull} is obtained by periodically invoking \\textbf{Pull}. Finally, \\textit{lazy pull} is obtained by periodically invoking \\textbf{PullIds} and then using \\textbf{Fetch} on the resulting identifiers that are unknown.\n\nThe gossip variant chosen for each operation depends on configuration by the service deployer. In particular, the optimum configuration for \\textit{push} gossip is to use the \\textit{eager} variant for early rounds and then \\textit{lazy}. For \\textit{pull} gossip, the \\textit{lazy} variant is interesting for very large payloads. The combination of both \\textit{push} and \\textit{pull} is known to ensure rapid and robust dissemination of information\\,\\cite{892324,312207}.\n\n\\subsection{Peer service}\n\n\\begin{figure}[htbp]\n\\centering\n\\includegraphics[width=.75\\textwidth]{peers-crop}\n\\caption{Overview of peer management.}\n\\label{fig:memb}\n\\end{figure}\n\nBy default, WS-Gossip does not need an explicit peer management service. Instead, each gossip interaction can be configured with a scope or a service type that is then used to discover the full set of reachable peers through WS-Discovery. This is most useful in scenarios where a discovery proxy device exists, since a set of peers can be obtained efficiently by querying the proxy. This leads to a configuration with centralized peer information while information dissemination is distributed, which is adequate for scenarios with low churn and relatively high messaging rate.\n\nIf a proxy is not available, the usage of the Ad-Hoc mode of WS-Discovery would lead to a large number of multicast messages that would most likely defeat the purpose of gossip. Instead, our proposal allows that peers discovered to be cached locally and exchanged with other peers to implicitly create an overlay network using the Newscast protocol\\,\\cite{newscast03}.\nThe structure of the stored peer information comprises a list where each service instance is represented by an entry that contains the following elements:\n\\begin{description}\n\\item [Address]Corresponds to the service endpoint address.\n\\item [Type]Identifies the type of the service.\n\\item [DeviceId]Identifies the device where the service is hosted. 
This field is not applicable to services that are not associated with any device.\n\\item [Heartbeat]Counter that is incremented as messages, such as the invocation of the \\textbf{Exchange} operation, are issued by other peer.\n\\end{description}\nThis information is exchanged among different devices and also updated through the examination of WS-Dis\\-cov\\-er\\-y multicast messages issued by target services entering or leaving the network.\nPeriodically, if the instance has not received a request for exchanging its membership information during a certain time frame, it selects another instance of the peer service to which it sends such a request containing the list of the known endpoints. Upon reception of such a message, the contacted instance returns to the requester its own list of known endpoints, and merges it with the received one.\n\nThe heartbeat counter of a service instance that never sends a new message, or eventually sends but without reaching a peer service instance, stays unchanged, implying that it will move towards the end of the membership list as the counter of other services is being updated and new services are discovered. That service instance will eventually be discarded when the cache of the Peer Service reaches the configured maximum size.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:introduction}\n\nIndependence in graph products has been studied by many authors but almost always in the context of the independence number, commonly denoted by $\\alpha$.\nWe mention just samples of papers concerning the independence number of a Cartesian product $\\alpha(G \\,\\square\\, H)$ (see~\\cite{f-2011, hk-1996, js-1994, k-2005, nr-1996})\nand of a direct product $\\alpha(G \\times H)$ (see~\\cite{jk-1998, kr-2022, nr-1996}). In addition, for both of these two products some investigation has also been done\non the so-called ultimate independence ratios, $\\lim_{m \\to \\infty}\\frac{\\alpha(\\,\\square\\,_{i=1}^m G)}{n(G)^m}$ and $\\lim_{m \\to \\infty}\\frac{\\alpha(\\times_{i=1}^m G)}{n(G)^m}$.\nSee for example~\\cite{al-2007, bnr-1996, hyz-1994, t-2014}.\n\n\nNowakowski and Rall~\\cite{nr-1996} studied the behavior of a number of domination, independence and coloring type invariants on nine associative graph products whose edge sets depend on the edge sets of both factors. In particular, they proved some lower and upper bounds for the\ncardinality of a smallest maximal independent set, the independent domination number, of these products. For an excellent survey of independent domination\nsee the paper~\\cite{gh-2013} by Goddard and Henning.\nIn this work we will focus on the independent domination number of the direct product of two graphs. In particular, we are interested\nin how the independent domination number of a direct product relates to the independent domination numbers of the two factors.\nIn the process we give a counterexample to the following conjecture of Nowakowski and Rall.\n\\begin{conj} {\\rm \\cite[Section 2.4]{nr-1996}} \\label{conj:lowerbound}\nFor all graphs $G$ and $H$, $i(G \\times H) \\ge i(G)i(H)$.\n\\end{conj}\nIn fact, we prove a stronger result; namely\n\\begin{thm} \\label{thm:extremecounterexample}\nFor any positive integer $n$ such that $n>10$, there exists a pair of graphs $G$ and $H$ such that\n$\\min\\{i(G),i(H)\\}=n+2$ and $i(G \\times H) \\leq 12$.\n\\end{thm}\n\nThe organization of the paper is as follows. 
In the next section we provide necessary definitions and several previous results.\nIn Section~\\ref{sec:productwithcomplete} we restrict our attention to direct products in which one of the factors is a complete\ngraph, and introduce a method for calculating the independent domination number of $G \\times K_n$ in terms of minimizing\na certain kind of labelling of $V(G)$. Using this scheme we find the values of $i(P_m \\times K_n)$ and $i(C_m \\times K_n)$.\nLower bounds for $i(G \\times H)$, in terms of other domination-type invariants of $G$ and $H$, are given in Section~\\ref{sec:lowerbounds}.\nThe main result of the paper is in Section~\\ref{sec:counterexamples} where we give an infinite collection of counterexamples to\nConjecture~\\ref{conj:lowerbound} and prove Theorem~\\ref{thm:extremecounterexample}.\n\n\n\\section{Definitions and preliminary results} \\label{sec:defns}\n\nWe denote the order of a finite graph $G=(V(G),E(G))$ by $n(G)$. For a positive integer $n$ we let $[n]=\\{1,\\ldots,n\\}$; the vertex set of the complete graph $K_n$\nwill be $[n]$ throughout. A subset $D\\subseteq V(G)$ \\emph{dominates} a subset $S \\subseteq V(G)$ if $S \\subseteq N[D]$. If $D$ dominates~$V(G)$, then we will also say that $D$ dominates the graph $G$ and that $D$ is a \\emph{dominating set} of $G$. If $D$, in addition to being a dominating set\nof $G$, has the property that every vertex in $D$ is adjacent to at least one other vertex of $D$, then $D$ is a \\emph{total dominating set} of $G$. The\n\\emph{total domination number} of $G$ is the minimum cardinality among all total dominating sets of $G$; it is denoted $\\gamma_t(G)$. The \\emph{$2$-packing number} of $G$, denoted $\\rho(G)$, is the largest cardinality of a vertex subset $A$ such that the distance in $G$ between $a_1$ and $a_2$ is at least $3$ for every pair $a_1,a_2$\nof distinct vertices in $A$.\nA set $I \\subseteq V(G)$ is an \\emph{independent dominating} set if $I$ is simultaneously independent and dominating. This is equivalent to $I$ being a maximal independent set with respect to set inclusion. The \\emph{independence number} of $G$ is the cardinality, $\\alpha(G)$, of a largest independent set in $G$. We denote by $i(G)$ the smallest cardinality of a maximal independent set in $G$; this invariant is called the \\emph{independent domination number} of $G$.\n\nThe \\emph{direct product}, $G\\times H$, of graphs $G$ and $H$ is defined as follows:\n\\begin{itemize}\n\\item $V(G \\times H)=V(G) \\times V(H)$;\n\\item $E(G \\times H)= \\{ (g_1,h_1)(g_2,h_2) \\, \\colon\\, g_1g_2 \\in E(G) \\,\\,\\text{and}\\,\\,h_1h_2 \\in E(H) \\}$\n\\end{itemize}\nThe direct product is both commutative and associative. For a vertex $g$ of $G$, the \\emph{$H$-layer over $g$} of $G\\times H$ is the set $\\{ \\, (g,h) \\mid h\\in V(H) \\,\\}$, and it is denoted by $\\LSs g H$. Similarly, for $h \\in V(H)$, the \\emph{$G$-layer over $h$}, $G^h$, is the set $\\{ \\, (g,h) \\mid g\\in V(G) \\,\\}$. Note that each $G$-layer and each $H$-layer is an independent set in $G\\times H$. The \\emph{projection to $G$} is the map $p_G: V(G\\times H) \\to V(G)$ defined by $p_G(g,h)=g$. Similarly, the \\emph{projection to $H$} is the map $p_H: V(G\\times H) \\to V(H)$ defined by $p_H(g,h)=h$. If $A \\subseteq V(G\\times H)$ and $g \\in V(G)$, then we employ\n$\\LSs g A$ to denote $A \\cap\\, \\LSs g H$. 
Similarly, $A^h=A \\cap G^h$ for a vertex $h$ of $H$.\n\nThe following result of Topp and Volkmann will be useful in establishing our main results.\n\\begin{lem} {\\rm \\cite[Proposition 11]{tv-1992}} \\label{lem:inverseimage}\nLet $H$ be a graph with no isolates. If $I$ is a maximal independent set of any graph $G$, then $I \\times V(H)$ is a maximal independent set of $G \\times H$.\n\\end{lem}\n\n\\medskip\nAs an immediate consequence of Lemma~\\ref{lem:inverseimage} we get a lower bound for $\\alpha(G \\times H)$, which is well-known, and an upper bound for $i(G \\times H)$.\nBoth were established earlier by Nowakowski and Rall~\\cite{nr-1996}.\n\\begin{cor} {\\rm \\cite[Table 3]{nr-1996}} \\label{cor:triviallower}\nIf both $G$ and $H$ have no isolated vertices, then\n\\begin{itemize}\n\\item $\\alpha(G \\times H) \\ge \\max \\{\\alpha(G) n(H), \\alpha(H) n(G)\\}$;\n\\item $i(G \\times H) \\le \\min \\{i(G) n(H), i(H) n(G)\\}$.\n\\end{itemize}\n\\end{cor}\n\n\\section{Independent domination in $G \\times K_n$} \\label{sec:productwithcomplete}\n\nIn this section we focus on direct products in which one of the factors is a complete graph, and we will use notation introduced in our paper~\\cite{kr-2022}.\n\nLet $I$ be a maximal independent set of $G \\times H$. Suppose $g$ is a vertex of $G$ such that $\\LSs{g}{I} \\not=\\emptyset$ but $\\LSs{g}{I} \\not= {\\LSs{g}{H}}$. Let $(g,h)\\in {\\LSs{g}{H}- \\,\\LSs{g}{I}}$. Since $I$ is a dominating set of $G \\times H$, it follows that there exists $g' \\in N_G(g)$ and $h'\\in N_H(h)$ such that $(g',h') \\in I$. Note that such a vertex $h'$ does not belong to $N_H(p_H(\\LSs{g}{I}))$. For if $h'x \\in E(H)$ for some $(g,x) \\in I$, then $(g',h')$ and $(g,x)$ are adjacent vertices of $I$, which\nis a contradiction. However, it is possible that $h' \\in p_H(\\LSs{g}{I})$.\n\nConsider now the special case $G \\times K_n$ for $n \\ge 2$. The following lemma is from~\\cite{kr-2022}. For the sake of completeness we give its short proof.\n\n\\begin{lem} {\\rm \\cite[Lemma 9]{kr-2022}}\\label{lem:sizeoflayers}\nLet $n \\ge 2$ and let $G$ be any graph. If $I$ is any maximal independent set of $G \\times K_n$, then $\\left| I \\cap {\\LSs{g}{K_n}} \\right | \\in \\{0,1,n\\}$, for any $g \\in V(G)$.\n\\end{lem}\n\\begin{proof} If $n=2$, then the conclusion is obvious. Assume $n\\ge 3$ and suppose for the sake of contradiction that $\\left| I \\cap {\\LSs{g}{K_n}} \\right |=m$ for some $2 \\le m 10$, there exists a pair of graphs $G$ and $H$ such that\n$\\min\\{i(G),i(H)\\}=n+2$ and $i(G \\times H) \\leq 12$.\n}\n\\begin{proof}\nFor each positive integer $n$ such\nthat $n>10$, we now define a pair of graphs $G_n$ and $H_n$.\n\nLet ${\\cal A}$ be the collection of subsets of $[6]$ defined by\n\\[{\\cal A}= \\{\\{3, 4, 5, 6\\}, \\{2, 5, 6\\}, \\{1, 2, 3, 4\\}, \\{1, 3, 4, 6\\}, \\{1, 2, 5\\}\\}\\,,\\]\nand let $A_s=\\{u_s,v_s\\}$, for each $s \\in [6]$. 
For each $J \\in {\\cal A}$, we let $A_J$ be an independent set of $n$ vertices.\nThe graph $G_n$ has vertex set\n\\[V(G_n) = \\left(\\bigcup_{s=1}^6 A_s\\right) \\cup \\left(\\bigcup_{J\\in{\\cal A}} A_J\\right )\\,.\\]\nThe only edges of $G_n$ are given by the following three conditions.\n\\begin{itemize}\n\\item For each $s \\in [6]$, the vertex $u_s$ is adjacent to $v_s$.\n\\item For each $J \\in {\\cal A}$ and for every $s \\in J$, each of the $n$ vertices of $A_J$ is adjacent to both vertices of $A_s$.\n\\item Each of the sets $A_1 \\cup A_5$, $A_1 \\cup A_6$, $A_2 \\cup A_3$, $A_2 \\cup A_4$, $A_2 \\cup A_6$, $A_3 \\cup A_5$, and $A_4 \\cup A_5$ induces a clique in $G_n$.\n\\end{itemize}\n\nWe claim that $i(G_n) = n+2$. To see this, observe first that $\\{u_1, u_2\\} \\cup A_{\\{3, 4, 5, 6\\}}$ is an independent dominating set of $G_n$.\nTo see that $i(G_n) \\ge n+2$, let $X = \\cup_{i=1}^6 A_i$. It is easy to see that the only maximal independent sets in $G_n[X]$ are the following:\n\\begin{enumerate}\n\\item[(a)] $\\{x, y\\}$ where $x \\in A_1$ and $y \\in A_2$,\n\\item[(b)] $\\{x, y, z\\}$ where $x \\in A_1$, $y\\in A_3$, and $z \\in A_4$\n\\item[(c)] $\\{x, y\\}$ where $x \\in A_2$ and $y \\in A_5$\n\\item[(d)] $\\{x, y, z\\}$ where $x \\in A_3$, $y\\in A_4$, and $z \\in A_6$\n\\item[(e)] $\\{x, y\\}$ where $x \\in A_5$ and $y \\in A_6$\n\\end{enumerate}\n\nMoreover, for each maximal independent set $I$ of $G_n[X]$ listed above, there exists a $J \\in {\\cal A}$ such that\nno vertex of $A_J$ is adjacent to a vertex of $I$. Thus, $i(G_n) \\ge n+2$.\n\\vskip5mm\nThe graph $H_n$ is defined in a similar way.\nLet ${\\cal B}$ be the collection of subsets of $[6]$ defined by\n\\[ {\\cal B}= \\{\\{2, 3, 4, 6\\}, \\{2, 3, 4, 5\\}, \\{1, 3, 5, 6\\}, \\{1, 2, 4, 6\\}, \\{1, 3, 4, 5\\}, \\{1, 2, 3, 6\\}, \\{1, 4, 5, 6\\}\\}\\,.\\]\nFor each $K \\in {\\cal B}$ we let $B_K$ be an independent set of $n$ vertices.\nThe vertex set of $H_n$ is given by\n\\[V(H_n) = \\{y_1, y_2, y_3, y_4, y_5, y_6\\} \\cup \\left(\\bigcup_{K\\in{\\cal B}} B_K\\right )\\,,\\]\nand the edge set of $H_n$ is given by the following two conditions.\n\n\\begin{itemize}\n\\item $\\{y_1y_2, y_1y_3, y_1y_4, y_2y_5, y_3y_6, y_3y_4, y_4y_6, y_5y_6\\} \\subset E(H_n)$\n\\item For every $K \\in {\\cal B}$, the vertex $y_k$ is adjacent to each vertex of $B_K$ if and only if $k \\in K$.\n\\end{itemize}\n\nWe claim that $i(H_n) = n+2$. One can easily verify that the only maximal independent sets in the induced subgraph\n$H_n[\\{y_1, y_2, y_3, y_4, y_5, y_6\\}]$ are the following: $\\{y_1, y_5\\}$, $\\{y_1, y_6\\}$, $\\{y_2, y_4\\}$, $\\{y_3, y_5\\}$, $\\{y_2, y_6\\}$,\n$\\{y_2, y_3\\}$, and $\\{y_4, y_5\\}$.\n\n\nMoreover, for each maximal independent set $I$ of $H_n[\\{y_1, y_2, y_3, y_4, y_5, y_6\\}]$ listed above, there exists a set $B_K$ such that no vertex of $B_K$\nis adjacent either vertex of $I$. Thus, $i(H_n) \\ge n+2$. On the other hand, $\\{y_1, y_5\\} \\cup B_{\\{2, 3, 4, 6\\}}$ is an independent dominating set of $H_n$.\n\nTherefore, we have shown that $i(G_n)=i(H_n)=n+2$. We claim that the set $D$ defined by $D = \\cup_{s=1}^6 \\{(u_s, y_s), (v_s,y_s)\\}$\nis an independent dominating set of $G_n \\times H_n$.\nIt is clear that $\\{(u_s, y_s), (v_s,y_s)\\}$ is independent for each $s \\in [6]$. Now suppose $(a, y_j)$ and $(b, y_k)$ are adjacent where $a \\in A_j$ and\n$b \\in A_k$ for $1 \\le j\\le k \\le 6$. It follows that $ab \\in E(G_n)$ and $y_jy_k \\in E(H_n)$. 
However, by construction, each vertex of $A_j$ is adjacent to each vertex of $A_k$ only if\n$\\{y_j, y_k\\}$ is an independent set in $H_n$, which is a contradiction. Hence, $D$ is independent in $G_n \\times H_n$.\n\n Now we verify that $D$ dominates $G_n\\times H_n$. First, we show that all vertices of $X\\times \\{y_1, y_2, y_3, y_4, y_5, y_6\\}$ are dominated.\n\\begin{itemize}\n\\item $A_1 \\times \\{y_1\\}$ dominates $A_1 \\times \\{y_1, y_2, y_3, y_4\\}$, $A_5 \\times \\{y_2, y_3, y_4\\}$ and $A_6 \\times \\{y_2, y_3, y_4\\}$.\n\\item $A_2 \\times \\{y_2\\}$ dominates $A_2 \\times \\{y_1, y_2, y_5\\}$, $A_3 \\times \\{y_1, y_5\\}$, $A_4\\times \\{y_1, y_5\\}$, and $A_6\\times \\{y_1, y_5\\}$.\n\\item $A_3 \\times \\{y_3\\}$ dominates $A_3 \\times \\{y_1, y_3, y_4, y_6\\}$, $A_2\\times \\{y_1, y_4, y_6\\}$, and $A_5\\times\\{y_1, y_4, y_6\\}$.\n\\item $A_4 \\times \\{y_4\\}$ dominates $A_4\\times \\{y_1, y_3, y_4, y_6\\}$ and $A_2 \\times \\{y_3\\}$.\n\\item $A_5 \\times \\{y_5\\}$ dominates $A_1 \\times \\{y_6\\}$, $A_3 \\times \\{y_2\\}$, and $A_4\\times \\{y_2\\}$.\n\\item $A_6 \\times \\{y_6\\}$ dominates $A_1\\times \\{y_5\\}$.\n\\end{itemize}\n\nNext, let $J \\in {\\cal A}$ and let $g \\in A_J$. It is easy to see that $\\{y_j: j \\in J\\}$ is a total dominating set of $H_n$. This implies that\n$\\cup_{j \\in J} \\{(u_j,y_j),(v_j,y_j)\\}$ dominates $\\LSs {g}{H_n}$.\nFinally, let $K \\in {\\cal B}$ and let $h \\in B_K$. Again it is straightforward to verify that $\\cup_{k \\in K}A_k$ totally dominates $G_n$.\nIt follows that $\\cup_{k \\in K} \\{(u_k,y_k),(v_k,y_k)\\}$ dominates $G_n^h$.\n\nTherefore, $D$ is an independent dominating set of $G_n \\times H_n$ and\n\\[i(G_n \\times H_n) \\le |D|=12 < n+2=\\min\\{i(G_n),i(H_n)\\}\\,.\\]\n\\end{proof}\n\n\\section{Conclusion}\nNowakowski and Rall posited the following list of conjectures involving a direct or Cartesian product in \\cite{nr-1996}.\n\\begin{conj} {\\rm \\cite[Section 2.4]{nr-1996}} For all graphs $G$ and $H$\n\\begin{enumerate}\n\\item[1.] $ir(G\\Box H) \\ge ir(G)ir(H)$\n\\item[2.] $i(G\\times H) \\ge i(G)i(H)$\n\\item[3.] $\\gamma(G\\Box H) \\ge \\gamma(G)\\gamma(H)$ (Vizing's conjecture)\n\\item[4.] $\\Gamma(G\\times H) \\ge \\Gamma(G)\\Gamma(H)$; $\\Gamma(G\\Box H) \\ge \\Gamma(G)\\Gamma(H)$\n\\end{enumerate}\n\\end{conj}\n\nBre\\v{s}ar proved that $\\Gamma(G\\Box H) \\ge \\Gamma(G)\\Gamma(H)$ in \\cite{b-2005} and Bre\\v{s}ar, Klav\\v{z}ar, and Rall proved that $\\Gamma(G\\times H) \\ge \\Gamma(G)\\Gamma(H)$ in \\cite{bkr-2007}. It is still unknown whether $ir(G\\Box H) \\ge ir(G)ir(H)$ ($ir$ denotes the lower irredundance number), and Vizing's conjecture remains unsettled. In this paper, we proved that there exist pairs of graphs for which $i(G\\times H) < \\min\\{i(G), i(H)\\}$. 
We also studied the behavior of $i(G\\times K_n)$ for a general graph $G$ and were able to provide the exact values for $i(G\\times K_n)$ when $G \\in \\{P_m, C_m\\}$.\n\nConsider the following computational problem.\n\\begin{center}\n\\fbox{\\parbox{0.85\\linewidth}{\\noindent\n{\\sc Independent Domination of Direct Products}\\\\[.8ex]\n\\begin{tabular*}{.93\\textwidth}{rl}\n{\\em Input:} & A graph $G$, a positive integer $n \\geq 3$ and an integer $k$.\\\\\n{\\em Question:} & Is $i(G \\times K_n) \\leq k$?\n\\end{tabular*}\n}}\n\\end{center}\n\nAs presented in Section~\\ref{sec:productwithcomplete}, showing that $i(G \\times K_n) \\leq k$ is equivalent to finding a weak partition $V_0,V_1,\\ldots,V_n,V_{[n]}$ of $V(G)$\nthat satisfies the four conditions necessary to construct an independent dominating set such that the weight is at most $k$. We pose the following problem.\n\n\\begin{prob}\nDetermine the complexity of {\\sc Independent Domination of Direct Products}\n\\end{prob}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:Intro}Introduction}\nThere has been a renewed interest in the iron-based superconductors since the discovery of superconductivity (SC) at about 30\\,K in the $A_{0.8+\\delta}$Fe$_{1.6+\\beta}$Se$_{2}$ ($A$ = K, Rb, Cs or Tl\/K) \\cite{KFeSe-prb,RbFeSe32K,CsFeSe-MSC,TlFeSe-Mott} compounds due to their unprecedented physical properties, such as the coexistence of high temperature SC with strong antiferromagnetic (AFM) order \\cite{CsFeSe-MSC,KMSC-neutron,CsMSC-neutron,Magn-MSC,Magnon-MSC}. However, whether SC and AFM order coexist microscopically or SC only occurs in the non-magnetic phase is still highly debated since some reports support the coexistence picture \\cite{CsFeSe-MSC,KMSC-neutron,CsMSC-neutron,Magn-MSC,Magnon-MSC} and others favor the phase separation scenario \\cite{PS-XRD,Ricci-XRD,PS-Moss,PS-ARPES}. Local probe techniques such as muon-spin relaxation\/rotation ($\\mu$SR) \\cite{CsFeSe-MSC} and M\\\"ossbauer spectroscopy (MS) \\cite{PS-Moss,Moss-Ryan,Moss-Nowik,Moss-Li} have shown that a two-component picture is inescapable to describe the system correctly, namely, all samples are phase-separated into a major AFM phase and a minor paramagnetic (PM) phase. Nuclear magnetic resonance (NMR) \\cite{NMR-prl} and $\\mu$SR \\cite{uSR-PM} experiments reveal that the PM phase becomes superconducting below $T_c$. However, the scenario that only the PM phase becomes superconducting alone can not explain all the above mentioned experiments. Thus further studies on these compounds are desired to settle the debate. In this case, an investigation on the local magnetic property is helpful in understanding the correlation between SC and magnetic ordering of these systems.\n\nMS has been proved to be a very useful tool to probe local specific information of the iron-based superconductors \\cite{PS-Moss,Moss-Ryan,Moss-Nowik}. Especially, when possible coexistence of magnetic order and SC presents in the same sample, a MS study might reveal rich information. So far, only a few work using MS to study these materials \\cite{PS-Moss,Moss-Ryan,Moss-Nowik} have been reported. A detailed study focusing on the temperature dependence of the local magnetic field at the iron site near the superconducting transition temperature is still missing, which might hold the key to understand the interesting interplay between AFM order and SC. 
Therefore, in the present work, MS was used to study the magnetic structure and temperature dependence of the hyperfine magnetic field (HMF) at the iron nucleus of K$_{0.84}$Fe$_{1.99}$Se$_2$ single crystals. The results provide evidence that a spin excitation gap opens up before entering the SC state. Using a simple spin model, we show that the four ferromagnetically coupled (FMC) spins can be viewed as a net spin, and these net spins couple antiferromagnetically with each other to form the checkerboard-like AFM structure.\n\n\n\\section{\\label{sec:Experiment}Experiments}\nSingle crystals of potassium intercalated iron-selenides of nominal composition K$_{0.8}$Fe$_2$Se$_2$ were grown by a self-melting method similar to previous reports \\cite{KFeSe-prb,CsFeSe-MSC}. Stoichiometric amounts of high purity K pieces, Fe and Se powders were mixed and put in a sealed quartz tube. The samples were heated to 1273\\,K slowly, kept for 2\\,h, cooled down to 973\\,K at the rate of 5\\,K\/h and then furnace cooled to room temperature by shutting down the furnace. The resulting plate-like crystals with a shiny surface are of a size up to $6\\times4\\times2$\\,mm$^3$. The actual composition is determined to be K$_{0.84}$Fe$_{1.99}$Se$_2$ by energy dispersive X-ray spectroscopy (EDXS).\n\nSingle crystal x-ray diffraction (XRD) measurements were performed on a Philips X'pert diffractometer with Cu K$_{\\alpha}$ radiation. AC susceptibility measurements were carried out with a commercial (Quantum Design) superconducting quantum interference device (SQUID) magnetometer. Transmission M\\\"ossbauer spectra were recorded using a conventional constant acceleration spectrometer with a $\\gamma$-ray source of 25\\,mCi $^{57}$Co in a palladium matrix moving at room temperature. The absorber was kept static in a temperature-controllable cryostat filled with helium gas. The velocity of the spectrometer was calibrated with $\\alpha$-Fe at room temperature and all the isomer shifts quoted in this work are relative to that of the $\\alpha$-Fe.\n\n\n\\section{\\label{sec:Results}Results and Discussion}\nThe single crystal x-ray diffraction pattern of K$_{0.84}$Fe$_{1.99}$Se$_2$ is shown in the inset of Fig. \\ref{Fig1}. As can be seen, only ($00l$) diffraction peaks are observed, indicating the crystallographic $c$-axis is perpendicular to the plane of the plate-like single crystal. Interestingly, two sets of ($00l$) reflections corresponding to $c_1$=14.098\\,{\\AA} and $c_2$=14.272\\,{\\AA} are observed, which are attributed to the inhomogeneous distribution of the intercalated K atoms \\cite{AFeSe-TwoSet}. The temperature dependence of the AC susceptibility of the K$_{0.84}$Fe$_{1.99}$Se$_2$ single crystal measured along the $ab$-plane with $H_{ac}$=1\\,Oe and $f$=300\\,Hz is shown in Fig. \\ref{Fig1}. The onset superconducting transition temperature, $T_c$, is determined to be 28\\,K from the real part of the susceptibility. The superconducting volume fraction is estimated to be $\\sim$80\\% at 2\\,K.\n\n\\begin{figure}[htp]\n\\includegraphics[width=8 cm]{Fig1-ACXRD}\n\\caption{\\label{Fig1} Temperature dependence of AC susceptibility of the K$_{0.84}$Fe$_{1.99}$Se$_2$ crystal measured along the $ab$-plane with $H_{ac}$=1\\,Oe and $f$=300\\,Hz. 
Inset is the single crystal X-ray diffraction pattern of K$_{0.84}$Fe$_{1.99}$Se$_2$ crystal.}\n\\end{figure}\n\n\nM\\\"ossbauer spectra of a mosaic of single crystal flakes, oriented on a thin paper underlayer so that the $c$-axis is perpendicular to the plane of the M\\\"ossbauer absorber, recorded at temperatures below room temperature are shown in Fig. \\ref{Moss}. All spectra share similar spectral shapes and are fitted with two components: a dominant magnetic sextet and a nonmagnetic quadrupole doublet, using the \\textsc{MossWinn 4.0} program \\cite{MossWinn}.\n\n\\begin{figure}[htp]\n\\includegraphics[width=8 cm]{Fig2-Moss}\n\\caption{\\label{Moss} M\\\"ossbauer spectra taken at indicated temperatures of the K$_{0.84}$Fe$_{1.99}$Se$_2$ single crystal. The M\\\"ossbauer absorber was prepared with well-cleaved flake-like single crystals, which were put together with the $c$-axis aligned perpendicular to the plane of the M\\\"ossbauer absorber.}\n\\end{figure}\n\nExamining the doublet closely, one finds that the two peaks are strongly polarized with an intensity ratio close to 3:1. This means that the orientation of the main axis of the electric field gradient (EFG) of the PM phase is parallel to the $c$ axis of the crystal, which agrees well with the fitted angle of $\\theta_{pm}$=8(3)$^{\\circ}$ by V. Ksenofontov \\textit{et al} \\cite{PS-Moss}. Due to the limited statistics of the data and to simplify the fitting procedure, we fitted the doublet assuming that $\\theta_{pm}$=0$^{\\circ}$. The derived isomer shift and quadrupole splitting at 15\\,K are found to be $\\delta$=0.631\\,mm\/s and $eQV_{zz}\/2$=-0.272\\,mm\/s, respectively. These hyperfine parameters are close to the reported values of $\\beta$-FeSe \\cite{FeSe-PseudoPhase,MossFeSe}, while the quadrupole splitting is a little bit smaller than that of the corresponding doublet in the Rb$_{0.8}$Fe$_{1.6}$Se$_2$ compound \\cite{PS-Moss}, which might be due to the different stoichiometries of these samples. Therefore, we may attribute the doublet to the FeSe phase (pseudo-FeSe phase), which corresponds to the FeSe$_4$ tetrahedrons that have K vacancy neighbors in the crystal structure. The coexistence of the nonmagnetic pseudo-FeSe phase with the main AFM phase can be naturally understood in the phase separation scenario, which is supported by scanning nanofocused x-ray diffraction \\cite{PS-XRD,Ricci-XRD}, NMR \\cite{NMR-prl}, and previous M\\\"ossbauer \\cite{PS-Moss} studies.\n\nIn order to get a better fit of the spectra, special care should be taken in adjusting the sextet. In our previous manuscript \\cite{Moss-Li}, we fitted the spectra with a relatively small EFG value, assuming that the axis of the main component of the EFG coincides with the crystallographic $c$-axis and the direction of the magnetic moments of the Fe atoms. A close inspection of the spectra reveals that this procedure cannot account for the slight asymmetries of the line pairs (1,6) and (3,4) and the positions of the line pairs (2,5) and (3,4), as pointed out by V. Ksenofontov \\textit{et al} \\cite{PS-Moss}. Therefore, in the present paper we refitted the spectra according to the procedures given by V. Ksenofontov \\textit{et al}, which solve the static Hamiltonian for mixed magnetic and quadrupole interactions with arbitrary relative orientation. The asymmetry parameter $\\eta=(V_{xx}-V_{yy})\/V_{zz}$ is assumed to be zero to further simplify the problem, since the fitted value for Rb$_{0.8}$Fe$_{1.6}$Se$_2$ is rather small, $\\sim0.1$ \\cite{PS-Moss}. 
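Returning briefly to the 3:1 polarization of the doublet noted above, this ratio can be checked against the textbook angular dependence of the two quadrupole lines for an axially symmetric EFG in the thin-absorber limit, $I_{\\pm3\/2}\/I_{\\pm1\/2}=3(1+\\cos^2\\theta)\/(5-3\\cos^2\\theta)$, where $\\theta$ is the angle between $V_{zz}$ and the $\\gamma$-ray direction. The short numerical check below is an illustrative sketch only (it is not part of the fitting procedure described in this work); it confirms that a ratio close to 3:1 is obtained only for $\\theta\\approx0$, i.e.\\ for $V_{zz}$ along the $\\gamma$-ray direction and hence along the crystallographic $c$-axis:
\\begin{verbatim}
import numpy as np

def doublet_ratio(theta_deg):
    # Intensity ratio of the two quadrupole lines for an axially
    # symmetric EFG in the thin-absorber limit; theta_deg is the angle
    # between V_zz and the gamma-ray direction.
    c2 = np.cos(np.radians(theta_deg)) ** 2
    return 3.0 * (1.0 + c2) \/ (5.0 - 3.0 * c2)

for theta in (0.0, 54.7, 90.0):
    print(theta, round(doublet_ratio(theta), 2))
# 0.0  -> 3.0  (polarized 3:1 doublet, V_zz parallel to the gamma ray)
# 54.7 -> 1.0  (magic angle, powder-like symmetric doublet)
# 90.0 -> 0.6  (3:5 ratio, V_zz lying in the absorber plane)
\\end{verbatim}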
Fitting the spectra yields an averaged relative intensity of 75\\% for the AFM phase, which is significantly smaller than the reported value of 88\\% for the Rb$_{0.8}$Fe$_{1.6}$Se$_2$ \\cite{PS-Moss} and K$_{0.8}$Fe$_{1.76}$Se$_2$ \\cite{Moss-Ryan} compounds. This may be caused by the high iron content in our K$_{0.84}$Fe$_{1.99}$Se$_2$ crystal, which may favor the pseudo-FeSe phase. The derived hyperfine parameters for the AFM phase at 15\\,K are $\\delta$=0.654\\,mm\/s and $eQV_{zz}\/2$=1.172\\,mm\/s, with an angle $\\theta_{afm}$=44(1)$^{\\circ}$ between the axis of $V_{zz}$ and the HMF $B_{hf}$=28.32\\,T; these values compare well with previously reported values for similar compounds \\cite{PS-Moss,Moss-Nowik}.\n\n\nIn order to get a better understanding of the magnetic properties of these materials, we investigated the temperature dependence of the AFM order parameter. The temperature dependence of the HMF, $B_{hf}(T)$, at the Fe site in K$_{0.84}$Fe$_{1.99}$Se$_2$ is depicted in Fig. \\ref{HF}, together with different fitting results. Similar to the behavior of the neutron powder diffraction (NPD) (101) magnetic Bragg peak intensity profile \\cite{KMSC-neutron}, the HMF shows a plateau below $\\sim50\\,K$ and then decreases gradually with increasing temperature. A simple Brillouin function was used to fit the HMF data in a previous work \\cite{Moss-Ryan} and a rough agreement was found in the temperature range 10-530\\,K. However, as can be seen from Fig. \\ref{HF}, the Brillouin function (dotted line) together with the power law (dash-dotted line) fails to describe the low temperature behavior of the HMF. As is well known, in the temperature range $T\\ll T_N$, the decrease in HMF with increasing temperature can be well explained by spin excitations \\cite{SpinWave-book} within the spin wave theory. For a three dimensional antiferromagnet, the temperature dependence of the HMF at low temperatures follows \\cite{SpinWave-book2},\n\\begin{eqnarray}\nB_{hf}(T) = B_{hf}(0)(1 - C T^{2}e^{-\\Delta E\/k_BT})\n\\label{MSEG}\n\\end{eqnarray}\nwhere $C$ is a constant that contains the spin wave stiffness and $\\Delta E$ is the spin excitation gap (SEG), which is necessary to reproduce the plateau of the HMF at low temperatures. Applying equation (\\ref{MSEG}) to the data yields the following results: $B_{hf}(0)$=28.47\\,T and $\\Delta E$=63\\,K ($\\sim$5.5\\,meV). Interestingly, the fitted $\\Delta E\\sim$5.5\\,meV has a nonzero value, suggesting that a substantial SEG due to spin anisotropy opens up above the superconducting transition temperature. Actually, the SEG has been predicted theoretically \\cite{SpinWave-Theory} and was recently observed by neutron scattering studies in the Rb$_{0.89}$Fe$_{1.58}$Se$_2$ compound \\cite{SEG-RbFeSe}. A SEG was also observed in the YBa$_2$Cu$_3$O$_{7-\\delta}$ (YBCO) and La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) \\cite{SEG-Cuprates,Kofu-SEG,Millis-SEG1993,Millis-SEG1994,Bourges-SEG,Anderson-SEG} cuprate superconductors, and whether the SEG is related to superconductivity is still an open question. There is experimental evidence that a well-defined SEG ($\\sim$6\\,meV) in the incommensurate spin fluctuations is observed in the superconducting state only for LSCO samples close to optimal doping \\cite{optimaldoping}. A rough proportionality between T$_C$ and the SEG was also observed for YBCO superconductors: in the weakly doped region, $E_G\\approx k_BT_C$, while in the heavily doped region, $E_G\\approx 3.8k_BT_C$ \\cite{SEGPropTC,Dai2001,Regnault1994}. 
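As an aside, the fit of equation (\\ref{MSEG}) quoted above can be reproduced with a few lines of standard least-squares code. The sketch below is illustrative only: the data points are synthetic (generated from the quoted $B_{hf}(0)$ and $\\Delta E$ with an arbitrary choice of $C$ and added noise), so it demonstrates the functional form and the fitting procedure rather than the actual analysis of our spectra.
\\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-2                      # Boltzmann constant in meV per K

def bhf_gapped(T, B0, C, gap):
    # Gapped 3D spin-wave form of the hyperfine field used in the text:
    # B_hf(T) = B_hf(0) * (1 - C * T**2 * exp(-gap \/ (K_B * T))).
    return B0 * (1.0 - C * T**2 * np.exp(-gap \/ (K_B * T)))

# Synthetic data for illustration only: B0 and the gap follow the fitted
# values quoted in the text, C is arbitrary, and Gaussian noise is added.
T = np.linspace(15.0, 250.0, 25)
rng = np.random.default_rng(0)
B = bhf_gapped(T, 28.47, 2.0e-6, 5.5) + rng.normal(0.0, 0.05, T.size)

popt, pcov = curve_fit(bhf_gapped, T, B, p0=(28.0, 1.0e-6, 4.0))
print('B_hf(0) = %.2f T, gap = %.2f meV' % (popt[0], popt[2]))
\\end{verbatim}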
The above comparisons with the cuprates suggest that the SEG and SC might be closely related to each other, and a thorough investigation of the evolution of the SEG and $T_C$ with different carrier-doping levels is highly desired. In this respect, in-depth neutron scattering studies of the AFM spin excitation spectrum may yield fruitful information on this issue.\n\n\\begin{figure}[htp]\n\\includegraphics[width=8 cm]{Fig3-Bhf}\n\\caption{\\label{HF} Temperature dependence of the hyperfine field, $M(T)$, extracted from least-squares fits of the M\\\"ossbauer spectra. Fits to the $M(T)$ data with different theories are also compared: the power law (dash-dotted line), the Brillouin function (dotted line) and the gapped spin wave theory (solid line). As can be seen, the 3D spin wave theory with an energy gap of $\\Delta E \\sim$5.5\\,meV can better reproduce the low-temperature plateau of the hyperfine field (see text).}\n\\end{figure}\n\nTo understand the temperature dependence of the HMF and estimate the magnetic exchange interactions of our sample, we turn to the novel magnetic ordering structure of this compound below $T_N$. The magnetic moments of the four irons in each $\\sqrt{5}\\times\\sqrt{5}$ unit cell align ferromagnetically along the crystalline $c$-axis \\cite{KMSC-neutron,PS-Moss}, and the ferromagnetic blocks interact with each other antiferromagnetically to form a block-checkerboard AFM pattern. Though consensus values of the exchange interactions in this system have not been reached, strong interactions within the FMC blocks have been predicted theoretically and observed experimentally \\cite{SpinWave-Theory,SEG-RbFeSe}. A recent neutron scattering experiment showed that the acoustic spin waves between $\\sim$9\\,meV and $\\sim$70\\,meV arise mostly from AFM interactions between the FMC blocks, while the optical spin waves associated with the exchange interactions of iron spins within the FM blocks are above $\\sim$80\\,meV \\cite{SEG-RbFeSe}.\n\nConsidering the energy scales of the acoustic and optical spin waves and the large energy separation between them, it is reasonable to assume that, at low temperatures, the decrease in HMF with increasing temperature below room temperature is controlled by the AFM interactions of the FMC blocks. Thus, by fitting the temperature dependence of the HMF data we can deduce the effective interaction, $J_{eff}$, between two nearest FMC blocks, as illustrated in Fig. \\ref{Heisenberg} (a). To simplify the calculation, we make the assumption that the iron spins in the FMC block fluctuate coherently. This is reasonable at least at temperatures below room temperature due to the strong ferromagnetic interactions within the FMC block. In this case, an FMC block can be regarded as a super-spin with $S_{eff}=8$.\n\n\\begin{figure}[htp]\n\\includegraphics[width=8 cm]{Fig4-Heisenberg}\n\\caption{\\label{Heisenberg} (a) Schematic representation of the effective interaction model used in the text. $J_0$ represents the effective interaction constant within each FMC block and is strong enough with respect to the inter-block interaction constant $J_{eff}$ to force the four spins to fluctuate coherently at finite temperatures. (b) Temperature dependence of the HMF together with fitting results using equation (\\ref{BhfT}) (see text).}\n\\end{figure}\n\nIn the simplest case of two interacting spins, the energy levels are $E(S)=J_{eff}S(S+1)$, where $\\vec{S}=\\vec{S_1}+\\vec{S_2}$ and $\\vec{S_1}$ and $\\vec{S_2}$ are the angular momenta of the two coupled spins. 
The HMF probed by M\\\"ossbauer measurements will be $B_{hf}=C\\langle S_Z\\rangle$, where $\\langle S_Z\\rangle$ is the expectation value of the $z$ component of $\\vec{S_i}$, and it reaches its maximum value in the ground state (zero temperature). At elevated temperatures, states with higher $S$ become accessible, which results in the decrease of the HMF as observed above. If we assume that the relaxation between the electronic states is fast with respect to the Larmor precession time, we can express the finite temperature HMF as $B_{hf}(T)=B_{hf}(0)(1-\\sum_Sh_Sn_S)$, with\n\\begin{eqnarray}\n\\small\n\\sum_Sh_Sn_S= \\frac{\\sum\\limits_{S=0}^{16}\\sum\\limits_{S_z=0}^S h(S,S_Z)e^{-J_{eff}S(S+1)\/k_BT}}{\\sum\\limits_{S=0}^{16}\\sum\\limits_{S_z=-S}^S e^{-J_{eff}S(S+1)\/k_BT}},\n\\label{BhfT}\n\\end{eqnarray}\nwhere $n_S$ are the populations and $h(S,S_Z)$ is the decrease in HMF corresponding to state $|S,S_Z\\rangle$. If we assume that $h(S,S_Z)$ is proportional to $S_Z$ with the same proportionality constant for all $S$ states, $h(S,S_Z)=h_0S_Z$, then we can fit the experimental data with equation (\\ref{BhfT}). As can be seen from Fig. \\ref{Heisenberg} (b), a good agreement between the theoretical curve and the experimental data can be obtained, and the fitted parameters are $J_{eff}$=22.8\\,meV, $B_{hf}(0)$=28.44\\,T and $h_0$=0.697\\,T.\n\nTo see the efficiency of our simple model in describing the low-energy spin excitations, we compare our results with those deduced from the effective spin Hamiltonian model, which has been widely used to describe the ground state and spin excitations for this type of compound. Usually, the Hamiltonian involves the intra-block nearest and second nearest neighbor interactions $J_1$, $J_2$ and the inter-block nearest and second nearest neighbor interactions $J_1'$, $J_2'$. Even the third nearest neighbor interactions $J_3$, $J_3'$ have been adopted to fit the spin wave spectra by Miaoyin Wang \\textit{et al} \\cite{SEG-RbFeSe}. In terms of the $J_1$-$J_1'$-$J_2$-$J_2'$-$J_3$-$J_3'$ model, the low-energy spin waves can be approximately described by $(J_1'+2J_2'+2J_3)S\/4\\sim$17\\,meV. Obviously, our result for $J_{eff}$ agrees reasonably well with the neutron scattering results, which supports the validity of our above assumption.\n\n\n\n\n\\section{\\label{sec:Conclusion}Concluding remarks}\nHigh quality single crystals of K$_{0.84}$Fe$_{1.99}$Se$_2$ have been prepared and studied by M\\\"ossbauer spectroscopy. The temperature dependence of the hyperfine magnetic field is well explained within the gapped spin wave theory. Fitting the experimental data yields a spin excitation gap of about 5.5\\,meV\/63\\,K. Assuming the block spins fluctuate coherently, the effective exchange interaction between these coupled spin blocks is estimated to be $J_{eff}$=22.8\\,meV, which agrees reasonably well with the previous estimate from neutron scattering data ($J_{eff}\\sim$17\\,meV).\n\n\n\n\\begin{acknowledgments}\nWe thank T. Xiang, P. C. Dai and Y. Z. You for useful discussions.\nThis work was supported by the National Natural Science Foundation of China under Grants No. 
10975066.\n\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nSelf-excited dynamo action refers to the instability of a plasma in\na non-magnetic equilibrium state to amplify magnetic fields within the\nframework of resistive magnetohydrodynamics (MHD).\nCharge separation effects are assumed absent, i.e.\\ no battery-type\neffects are explicitly involved, although they do play a role in producing\na weak initial seed magnetic field that is needed to provide a perturbation\nto the otherwise field-free initial state.\nA dynamo instability may occur when the magnetic Reynolds number is large\nenough, i.e.\\ the fluid motions and the scale of the domain are large enough.\nThis instability is normally a linear one, but some dynamos\nare subcritical and require then a finite-amplitude initial field.\nIn the linear case one speaks about slow and fast dynamos depending on\nwhether or not the growth rate of the dynamo scales with resistivity.\nFor fast dynamos the growth rate scales with the rms velocity of the\nflow, which is turbulent in most cases.\nDynamos saturate when the magnetic energy becomes comparable with the\nkinetic energy.\nSome of the kinetic energy is then channelled through the magnetic\nenergy reservoir and is eventually dissipated via Joule heating.\n\nThere are several applications where one considers non self-excited\ndynamo action.\nExamples can be found in magnetospheric physics and in plasma physics\nwhere one is interested in the electromotive force induced by a flow\npassing through a given magnetic field.\nLater in this paper we will discuss the reversed field pinch (RFP)\nbecause of its connection with the $\\alpha$ effect that plays\nan important role in astrophysical dynamos.\nThe physics of the RFP has been reviewed by Ortolani \\& Schnack (1993)\nand, in a broader context with reference to astrophysical large-scale\ndynamos, by Ji \\& Prager (2002) and Blackman \\& Ji (2006).\n\nThe $\\alpha$ effect is one of a few known mechanisms able to generate\nlarge-scale magnetic fields, i.e.\\ fields whose typical length scale\nis larger than the scale of the energy carrying motions.\nIt was also one of the first discussed mechanisms able to produce\nself-excited dynamo action at all.\nIndeed, Parker (1955) showed that the swirl of a convecting flow under\nthe influence of the Coriolis force can be responsible for producing\na systematically oriented poloidal magnetic field from a toroidal field.\nThe toroidal field in turn is produced by the shear from the differential\nrotation acting on the poloidal field.\nParker's paper was before Herzenberg (1958) produced the first existence\nproof of dynamos.\nUntil that time there was a serious worry that Cowling's (1933) anti-dynamo\ntheorem might carry over from two-dimensional fields to three-dimensional\nfields, as is evident from sentimental remarks made by Larmor (1934).\n\nLarmor (1919) proposed the idea of dynamo action in the\nastrophysical context nearly 100 years ago.\nNowadays, with the help of computers, it is quite easy to solve the\ninduction equation in three dimensions in simple geometries and obtain\nself-excited dynamo solutions with as little as $16^3$ mesh points\nusing, for example, the sample ``\\texttt{kin-dynamo}'' that comes with\nthe {\\sc Pencil Code} (\\url{http:\/\/pencil-code.googlecode.com}).\n\nIn addition to dynamos in helical flows, which can generate large-scale\nfields, there are also dynamos in non-helical flows that produce 
only\nsmall-scale fields.\nThis possibility was first addressed by Batchelor (1950) based on\nthe analogy between the induction equation and the vorticity equation.\nAgain, this was not yet very convincing at the time.\nThe now accepted theory for small-scale dynamos was first proposed by\nKazantsev (1968) and the first simulations were produced by\nMeneguzzi et al.\\ (1981).\nSuch simulations are computationally somewhat more demanding and require\nat least $64^3$ mesh points (or collocation points in spectral schemes).\nIn the past few years this work has intensified (Cho et al.\\ 2002,\nSchekochihin et al.\\ 2002, Haugen 2003).\nWe will not discuss these dynamos in the rest of this paper.\nInstead, we will focus on large-scale dynamos.\nMore specifically, we focus here on a special class of large-scale\ndynamos, namely those where kinetic helicity plays a decisive role\n(the so-called $\\alpha$ effect dynamos).\nNevertheless, we mention at this point two other mechanisms that\ncould produce large-scale magnetic field without net helicity.\nOne is the incoherent $\\alpha$--shear effect that was originally proposed\nby Vishniac \\& Brandenburg (1997) to explain the occurrence of large-scale\nmagnetic fields in accretion discs, and later also for other astrophysical\napplications (e.g., Proctor 2007).\nIt requires the presence of shear, because otherwise only small-scale\nmagnetic fields would be generated (Kraichnan 1976, Moffatt 1978).\nThe other mechanism is the shear--current effect of\nRogachevskii \\& Kleeorin (2003), which can operate if the turbulent\nmagnetic diffusion tensor is anisotropic, so the mean electromotive\nforce from the turbulence is given by $-\\eta_{ij}\\overline{J}_j$ such that the\nsign of $\\eta_{ij}\\overline{U}_{i,j}$ is positive, and that this quantity is\nbig enough to overcome resistive effects.\nHere, a comma denotes partial differentiation, $\\overline{\\bm{U}}$ is the mean flow,\n$\\overline{\\mbox{\\boldmath $J$}}{}}{$ is the mean current density,\nand summation over repeated indices is assumed.\nThis effect too requires shear, because otherwise $\\eta_{ij}\\overline{U}_{i,j}$\nwould be zero.\nSimulations show large-scale dynamo action in the presence\nof just turbulence and shear, and without net helicity, but there\nare indications that this process may also be just the result of incoherent\n$\\alpha$--shear dynamo action (Brandenburg et al.\\ 2008a).\n\nThroughout this paper, overbars denote suitable spatial averages\nover one or two coordinate directions.\nFurthermore, we always assume that the scale of the energy-carrying eddies\nis at least three times smaller than the scale of the domain.\nWe refer to this property as ``scale separation''.\nIn this sense, scale separation is a natural requirement, because we\nwant to explain the occurrence of fields on scales large compared with\nthe scale of the energy-carrying motions.\nScale separation has therefore nothing to do with a gap in the kinetic\nenergy spectrum, as is sometimes suggested.\n\nMuch of the work on $\\alpha$ effect dynamos has been done in the\nframework of analytic approximations.\nHowever, this is only a technical aspect that is unimportant for the\nactual occurrence of large-scale fields under suitable conditions.\nThis has been demonstrated by numerical simulations, as will be discussed\nbelow.\n\n\\section{Helical large-scale dynamos}\n\nA possible way of motivating the physics behind helical dynamo action\nis the relation to the concept of an inverse turbulent cascade.\nThis 
particular idea was first proposed by Frisch et al.\\ (1975) and is\nbased on the conservation of magnetic helicity,\n\\begin{equation}\nH_{\\rm M}=\\int_V\\mbox{\\boldmath $A$} {}\\cdot\\mbox{\\boldmath $B$} {}\\,{\\rm d} {} V,\n\\end{equation}\nwhere $\\mbox{\\boldmath $A$} {}$ is the magnetic vector potential and $\\mbox{\\boldmath $B$} {}=\\mbox{\\boldmath $\\nabla$} {}\\times\\mbox{\\boldmath $A$} {}$\nis the magnetic field in a volume $V$.\n\nIt is convenient to define spectra of magnetic energy and magnetic\nhelicity, $E_{\\rm M}(k)$ and $H_{\\rm M}(k)$, respectively.\nAs usual, these spectra are obtained by calculating the three-dimensional\nFourier transforms of magnetic vector potential and magnetic field,\n$\\hat{\\bm{A}}_{\\bm{k}}$ and $\\hat{\\bm{B}}_{\\bm{k}}$, respectively,\nand integrating $|\\hat{\\bm{B}}_{\\bm{k}}|^2$ and\nthe real part of $\\hat{\\bm{A}}_{\\bm{k}}\\cdot\\hat{\\bm{B}}_{\\bm{k}}^*$ over shells of constant\n$k=|\\bm{k}|$ to obtained $E_{\\rm M}(k)$ and $H_{\\rm M}(k)$, respectively.\n(Here, an asterisk denotes complex conjugation.)\nThese spectra are normalized such that\n$\\int E_{\\rm M}(k)\\,{\\rm d} {} k=\\bra{\\mbox{\\boldmath $B$} {}^2}\/2\\mu_0$ and\n$\\int H_{\\rm M}(k)\\,{\\rm d} {} k=\\bra{\\mbox{\\boldmath $A$} {}\\cdot\\mbox{\\boldmath $B$} {}}$ for $k$ from 0 to $\\infty$,\nwhere angular brackets denote volume averages over a periodic domain\nand $\\mu_0$ is the vacuum permeability.\nUsing the Schwartz inequality one can then derive the so-called realizability\ncondition,\n\\begin{equation}\nk|H_{\\rm M}(k)|\/2\\mu_0\\leq E_{\\rm M}(k).\n\\end{equation}\nFor fully helical magnetic fields with (say) positive helicity,\ni.e.\\ $H_{\\rm M}=2\\mu_0 E_{\\rm M}(k)\/k$, one can show that energy\nand magnetic helicity cannot cascade directly, i.e.\\ the interaction\nof modes with wavenumbers $\\bm{p}$ and $\\bm{q}$ can only produce fields whose\nwavevector $\\bm{k}=\\bm{p}+\\bm{q}$ has a length that is equal or smaller than the maximum\nof either $|\\bm{p}|$ or $|\\bm{q}|$ (Frisch et al.\\ 1975), i.e.\\\n\\begin{equation}\n|\\bm{k}|\\leq\\max(|\\bm{p}|,|\\bm{q}|).\n\\label{kpq}\n\\end{equation}\nThis means that magnetic helicity and magnetic energy are transformed to\nprogressively larger length scales.\nA clear illustration of this can be seen in decaying helical turbulence.\n\\FFig{InvCasc} shows magnetic energy spectra from a simulation of\nChristensson et al.\\ (2001) at different times for a\ncase where the initial magnetic field was fully helical and had a spectrum\nproportional to $k^4$ with a resolution cutoff near the largest possible\nwavenumber.\nNote that the entire spectrum appears to shift to the left, i.e.\\ toward\nlarger length scales, in an approximately self-similar fashion.\nThe details of the argument that led to \\Eq{kpq} are due to\nFrisch et al.\\ (1975), and can also be found in the reviews by\nBrandenburg et al.\\ (2002) and Brandenburg \\& Subramanian (2005a).\n\n\\begin{figure}[t!]\\begin{center}\n\\includegraphics[width=.6\\columnwidth]{InvCasc}\n\\end{center}\\caption[]{\nMagnetic energy spectra at different times\n(increasing roughly by a factor of 2).\nThe curve with the right-most location of the peak corresponds to\nthe initial time, while the other lines refer to later times (increasing\nfrom right to left). 
Note the\npropagation of spectral energy to successively smaller wavenumbers $k$,\ni.e.\\ to successively larger scales.\nAdapted from Christensson et al.\\ (2001).\n}\\label{InvCasc}\\end{figure}\n\nIn the non-decaying case, when the flow is driven by energy input at some\nforcing wavenumber $k_{\\rm f}$, the inverse cascade is clearly seen if there is\nsufficient scale separation, i.e.\\ if $k_{\\rm f}$ is large compared with the\nsmallest wavenumber $k_1$ that fits into a domain of size $L=2\\pi\/k_1$.\nAn example is shown in \\Fig{FpMkt2}, where kinetic energy is injected\nat the wavenumber $k_{\\rm f}=30k_1$.\nIt is evident that there are two local maxima of spectral magnetic\nenergy, one at the forcing wavenumber $k_{\\rm f}$, and another one at a smaller\nwavenumber that we call $k_{\\rm m}$, which is near $7k_1$ in \\Fig{FpMkt2}.\nDuring the kinematic stage the entire spectrum moves upward, with the\nspectral energy increasing at the same rate at all wavenumbers.\nEventually, when the field has reached a certain level, the spectrum\nbegins to change its shape and the second local maximum at $k_{\\rm m}$\nmoves toward smaller values, suggestive of an inverse cascade.\nHowever, a more detailed analysis (Brandenburg 2001) shows that the\nenergy transfer is nonlocal, i.e.\\ most of the energy is transferred\ndirectly from the forcing wavenumber to the wavenumber where most of\nthe mean field resides.\nThis suggests that we have merely a nonlocal inverse {\\it transfer} rather than\na proper inverse {\\it cascade}, where the energy transfer would be local in\nspectral space.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=.5\\textwidth]{pMkt2}\\caption{\nMagnetic energy spectra for a run with forcing at $k=30$. The times,\nin units of $(c_{\\rm s} k_1)^{-1}$,\nrange from 0 (dotted line) to 10, 30, ..., 290 (solid lines). The thick\nsolid line gives the final state at $c_{\\rm s} k_1 t=1000$,\ncorresponding to $\\urmsk_{\\rm f} t\\approx2000$ turnover times.\nHere, $c_{\\rm s}$ is the sound speed and $u_{\\rm rms}$ is the turbulent rms velocity.\nNote that at early times\nthe spectra peak at $k_{\\max}\\approx7k_1$. 
The $k^{-1}$ and $k^{+3\/2}$\nslopes are given for orientation as dash-dotted lines.\nAdapted from Brandenburg (2001).\n}\\label{FpMkt2}\\end{figure}\n\nThe position of the local maximum can readily be explained by mean-field\ndynamo theory with an $\\alpha$ effect.\nThe evolution equation of such a dynamo is\n\\begin{equation}\n{\\partial\\overline{\\mbox{\\boldmath $B$}}{}}{\\over\\partial t}=\\mbox{\\boldmath $\\nabla$} {}\\times\\alpha\\overline{\\mbox{\\boldmath $B$}}{}}{+\\eta_{\\rm T}\\nabla^2\\overline{\\mbox{\\boldmath $B$}}{}}{,\n\\label{MFD}\n\\end{equation}\nwhere $\\alpha$ is a pseudo-scalar, $\\eta_{\\rm T}=\\eta+\\eta_{\\rm t}$ is the sum of\nmicroscopic Spitzer resistivity and turbulent resistivity\\footnote{Note\nthat resistivity and magnetic diffusivity differ by a $\\mu_0$ factor.\nHere, we always mean the magnetic diffusivity, although we use the two\nnames sometimes interchangeably.}, and an overbar denotes a suitably\ndefined spatial average (e.g.\\ planar average).\nAssuming $\\overline{\\mbox{\\boldmath $B$}}{}}{=\\hat{\\bm{B}}_{\\bm{k}}\\exp(\\lambda t+{\\rm i}\\bm{k}\\cdot\\bm{x})$ with\neigenfunction $\\hat{\\bm{B}}_{\\bm{k}}$, one finds the dispersion relation to be\n(Moffatt 1978)\n\\begin{equation}\n\\lambda(k)=|\\alpha|k-\\eta_{\\rm T} k^2,\n\\end{equation}\nwhere $k=|\\bm{k}|$.\nThe maximum growth rate is attained for a value of $k$ where\n${\\rm d} {}\\lambda\/{\\rm d} {} k=0$, i.e.\\ for $k=k_{\\rm m}=\\alpha\/2\\eta_{\\rm T}$.\nThe migration of the spectral maximum to smaller $k$ can then be\nexplained as the result of a suppression of $\\alpha$.\nIn other words, as the dynamo saturates, $\\alpha$ decreases, and so does\n$k_{\\rm m}$, which corresponds to the spectral maximum moving to the left.\n\nIt should be noted that there is also the possibility of a suppression\nof $\\eta_{\\rm t}$, which would work the other way, so that this interpretation\nmight not work.\nIndeed, there are arguments for a suppression of $\\eta_{\\rm t}$ that would be as\nstrong as that of $\\alpha$, but this applies only to the two-dimensional\ncase and has to do with the conservation of the mean-squared vector\npotential in that case (Gruzinov \\& Diamond 1994).\nSimulations also find evidence that in three dimensions the suppression\nof $\\alpha$ is stronger than that of $\\eta_{\\rm t}$ (Brandenburg et al.\\ 2008b).\n\nIn the following we discuss in detail the role played by magnetic helicity.\nThis has been reviewed extensively in the last few years\n(Ji \\& Prager 2002, Brandenburg \\& Subramanian 2005, Blackman \\& Ji 2006).\n\n\\section{Slow saturation}\n\nIn a closed or periodic domain, the saturation of a helical large-scale\ndynamo is found to be\nresistively slow and the final field strength is reached with a time\nbehavior of the form (Brandenburg 2001)\n\\begin{equation}\n\\bra{\\overline{\\mbox{\\boldmath $B$}}{}}{^2}(t)=B_{\\rm eq}^2{k_{\\rm m}\\overk_{\\rm f}}\n\\left[1-e^{-2\\eta k_{\\rm m}^2(t-t_{\\rm s})}\\right]\n\\quad\\mbox{for $t>t_{\\rm s}$},\n\\label{SlowSat}\n\\end{equation}\nwhere $B_{\\rm eq}$ is the equipartition field strength and $t_{\\rm s}$ is\nthe time when the slow saturation phase begins.\nWe emphasize that it is the microscopic $\\eta$ that enters \\Eq{SlowSat},\nand that the relevant length scale, $2\\pi\/k_{\\rm m}$, is that of the\nlarge-scale field, so the saturation behavior is truly very slow.\n\nThe reason for this slow saturation behavior is related to the conservation\nof magnetic helicity, which obeys the evolution equation\n(e.g., 
Ji et al.\\ 1995, Ji 1999)\n\\begin{equation}\n{{\\rm d} {}\\over{\\rm d} {} t}\\bra{\\mbox{\\boldmath $A$} {}\\cdot\\mbox{\\boldmath $B$} {}}=-2\\eta\\mu_0\\bra{\\mbox{\\boldmath $J$} {}\\cdot\\mbox{\\boldmath $B$} {}}\n-\\bra{\\mbox{\\boldmath $\\nabla$} {}\\cdot\\mbox{\\boldmath $F$} {}_{\\rm H}}.\n\\label{dABdt}\n\\end{equation}\nwhere $\\mbox{\\boldmath $F$} {}_{\\rm H}$ is the magnetic helicity flux, but for the periodic\ndomain under consideration we have $\\mbox{\\boldmath $\\nabla$} {}\\cdot\\mbox{\\boldmath $F$} {}_{\\rm H}=0$.\nClearly, in the final state we have then $\\bra{\\mbox{\\boldmath $J$} {}\\cdot\\mbox{\\boldmath $B$} {}}=0$.\nThis can only be satisfied for nontrivial helical fields if\nsmall-scale and large-scale fields have values of opposite sign,\nbut equal magnitude, i.e.\\ $\\bra{\\mbox{\\boldmath $j$} {}\\cdot\\mbox{\\boldmath $b$} {}}=-\\bra{\\overline{\\mbox{\\boldmath $J$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{}$,\nwhere $\\mbox{\\boldmath $B$} {}=\\overline{\\mbox{\\boldmath $B$}}{}}{+\\mbox{\\boldmath $b$} {}$ and $\\mbox{\\boldmath $J$} {}=\\overline{\\mbox{\\boldmath $J$}}{}}{+\\mbox{\\boldmath $j$} {}$ are the decompositions of\nmagnetic field and current density into mean and fluctuating parts.\nHere we choose to define mean fields as one- or two-dimensional coordinate\naverages.\nExamples include planar averages such as $xy$, $yz$, or $xz$ averages\nin a periodic Cartesian domain, as well as one-dimensional averages\nsuch as $y$ or $\\phi$ averages in Cartesian or spherical domains,\n$(r,\\theta,\\phi)$, where the other two directions are non-periodic.\n\n\\EEq{SlowSat} can be derived under the assumption that large-scale and\nsmall-scale fields are fully helical with\n$\\overline{\\mbox{\\boldmath $J$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{=k_{\\rm m}^2\\overline{\\mbox{\\boldmath $A$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{=\\mp k_{\\rm m}\\overline{\\mbox{\\boldmath $B$}}{}}{^2$\nand $\\bra{\\mbox{\\boldmath $j$} {}\\cdot\\mbox{\\boldmath $b$} {}}=\\pmk_{\\rm f}\\bra{\\mbox{\\boldmath $b$} {}^2}\\approx\\pmk_{\\rm f}B_{\\rm eq}^2$,\nwhere upper and lower signs refer to positive and negative\nhelicity of the small-scale turbulence and we have assumed that\nthe small-scale field has already reached saturation, i.e.\\\n$\\bra{\\mbox{\\boldmath $b$} {}^2}\\approxB_{\\rm eq}^2\\equiv\\mu_0\\bra{\\rho\\mbox{\\boldmath $u$} {}^2}$, where $\\rho$ is\nthe density and $\\mbox{\\boldmath $u$} {}=\\mbox{\\boldmath $U$} {}-\\overline{\\bm{U}}$ is the fluctuating velocity.\n\nA mean-field theory that obeys magnetic helicity conservation was\noriginally developed by Kleeorin \\& Ruzmaikin (1982) and has recently\nbeen applied to explaining slow saturation (Field \\& Blackman 2002,\nBlackman \\& Brandenburg 2002, Subramanian 2002).\nThe main idea is that the $\\alpha$ effect has two contributions\n(Pouquet et al.\\ 1976),\n\\begin{equation}\n\\alpha=\\alpha_{\\it K}+\\alpha_{\\it M},\n\\label{alphaKM}\n\\end{equation}\nwhere $\\alpha_{\\it K}=-{\\textstyle{1\\over3}}\\tau\\overline{\\mbox{\\boldmath $\\omega$} {}\\cdot\\mbox{\\boldmath $u$} {}}$ is the usual kinetic\n$\\alpha$ effect related to the kinetic helicity,\nwith $\\mbox{\\boldmath $\\omega$} {}=\\mbox{\\boldmath $\\nabla$} {}\\times\\mbox{\\boldmath $u$} {}$ being the vorticity, and\n$\\alpha_{\\it M}={\\textstyle{1\\over3}}\\tau\\overline{\\mbox{\\boldmath $j$} {}\\cdot\\mbox{\\boldmath $b$} {}}\/\\rho_0$ is a magnetic\n$\\alpha$ effect that can, for example, be produced by 
the growing\nmagnetic field in an attempt to conserve magnetic helicity.\n(Here, $\\rho_0$ is an average density, but we note that there is at present\nno adequate theory for compressible systems with nonuniform density.)\n\nNote that $\\alpha_{\\it M}$ is related to the small-scale current helicity\nand hence to the small-scale magnetic helicity which, in turn, obeys\nan evolution equation similar to \\Eq{dABdt}, but with an additional\nproduction term, $2\\overline{\\mbox{\\boldmath ${\\cal E}$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{$, that arises from mean-field\ntheory via $\\overline{\\mbox{\\boldmath ${\\cal E}$}}{}}{=\\alpha\\overline{\\mbox{\\boldmath $B$}}{}}{-\\eta_{\\rm t}\\mu_0\\overline{\\mbox{\\boldmath $J$}}{}}{$, and thus from\nthe $\\alpha$ effect itself.\nThe equation for $\\alpha_{\\it M}$ has then the form\n\\begin{equation}\n{\\partial\\alpha_{\\it M}\\over\\partial t}=-2\\etatk_{\\rm f}^2\n\\left({\\overline{\\mbox{\\boldmath ${\\cal E}$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{\\overB_{\\rm eq}^2}+{\\alpha_{\\it M}\\overR_{\\rm m}}\\right)\n-\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_\\alpha,\n\\label{dalphaMdt}\n\\end{equation}\nwhere $R_{\\rm m}=\\eta_{\\rm t}\/\\eta$ is a measure of the ratio of turbulent to\nmicroscopic magnetic diffusivity.\nWith $\\eta_{\\rm t}=u_{\\rm rms}\/3k_{\\rm f}$ (Sur et al.\\ 2008) we can relate this to\nthe more usual definition for the magnetic Reynolds number,\n$\\tildeR_{\\rm m}=u_{\\rm rms}\/\\etak_{\\rm f}$, via $\\tildeR_{\\rm m}=3R_{\\rm m}$.\nFurthermore, we have allowed for the possibility of fluxes of magnetic and\ncurrent helicities that also lead to a flux of $\\alpha_{\\it M}$.\nSuch fluxes are primarily important in inhomogeneous domains and especially\nin open domains where one can have an outward helicity flux (Ji 1999).\n\nThe use of \\Eq{alphaKM} is sometimes criticized because it is based on\na closure assumption.\nIndeed, there are questions regarding the meaning of the term\n$\\overline{\\mbox{\\boldmath $j$} {}\\cdot\\mbox{\\boldmath $b$} {}}$ and whether it really applies to the actual\nfield, or the field in the unquenched case.\nThis has been discussed in detail in a critical paper by\nR\\\"adler \\& Rheinhardt (2007).\nPart of this ambiguity can already be clarified in the low conductivity limit.\nSur et al.\\ (2007) have shown that one can express $\\alpha$ either\ncompletely in terms of the helical properties of the velocity field or,\nalternatively, as the sum of two terms, a so-called kinetic $\\alpha$\neffect and an oppositely signed term proportional to the helical part\nof the small scale magnetic field.\nHowever, it is fair to say that the problem is not yet completely understood.\nThe strongest argument in favor of \\Eqs{alphaKM}{dalphaMdt} is that\nthey reproduce catastrophic (i.e.\\ $R_{\\rm m}$-independent) quenching of $\\alpha$\nand that this approach has led to the prediction that such quenching can be\nalleviated by magnetic helicity fluxes.\nThis prediction has subsequently been tested successfully on various\noccasions (Brandenburg 2005, K\\\"apyl\\\"a et al.\\ 2008).\nFinally, it should be noted that \\Eq{alphaKM} has also been confirmed\ndirectly using turbulence simulations (Brandenburg et al.\\ 2005c, 2007).\n\nThe idea to model dynamo saturation and suppression of $\\alpha$ by solving\na dynamical equation for $\\alpha_{\\it M}$ is called dynamical quenching.\nIn addition to the resistively slow saturation behavior described 
by\n\\Eq{SlowSat}, this approach has also been applied to decaying turbulence\nwith helicity (Yousef et al.\\ 2003; Blackman \\& Field 2004), where the\nconservation of magnetic helicity results in a slow-down of the decay.\nThis can be modeled by an $\\alpha$ effect that offsets the turbulent\ndecay proportional to $\\eta_{\\rm t} k^2$ such that the decay rate becomes nearly\nequal to the resistive value, $\\eta k^2$.\nThis is explained in detail in the following section.\n\n\\section{Decay in a Cartesian domain}\n\\label{Sdynquench}\n\nIn the context of driven turbulence, the properties of solutions of a\ndecaying helical magnetic field were studied earlier by\nYousef et al.\\ (2003), who found that for fields with\n$\\overline{\\mbox{\\boldmath $B$}}{}}{^2\/B_{\\rm eq}^2\\ga R_{\\rm m}^{-1}$ the decay of $\\overline{\\mbox{\\boldmath $B$}}{}}{$ is slowed\ndown and can quantitatively be described by the dynamical quenching model.\nThis model applies even to the case where the turbulence is nonhelical and\nwhere there is initially no $\\alpha$ effect in the usual sense.\nHowever, the magnetic contribution to $\\alpha$ is still non-vanishing,\nbecause the $\\alpha_{\\it M}$ term is driven by the helicity of the\nlarge-scale field.\n\nTo demonstrate this quantitatively, Yousef et al.\\ (2003) have adopted\na one-mode approximation with $\\overline{\\mbox{\\boldmath $B$}}{}}{=\\hat{\\bm{B}}(t)\\exp({\\rm i} k_1z)$,\nand used the mean-field induction equation together with the dynamical\n$\\alpha$-quenching formula \\eq{dalphaMdt},\n\\begin{equation}\n{{\\rm d} {}\\hat{\\bm{B}}\\over{\\rm d} {} t}={\\rm i}\\bm{k}_1\\times\\hat{\\mbox{\\boldmath ${\\cal E}$} {}}\n-\\eta k_1^2\\hat{\\bm{B}},\n\\label{ODE1}\n\\end{equation}\n\\begin{equation}\n{{\\rm d} {}\\alpha\\over{\\rm d} {} t}=-2\\eta_{\\rm t} k_{\\rm f}^2\n\\left({{\\rm Re}(\\hat{\\mbox{\\boldmath ${\\cal E}$} {}}^*\\cdot\\hat{\\bm{B}}) \\over B_{\\rm eq}^2}\n+{\\alpha\\overR_{\\rm m}}\\right),\n\\label{ODE2}\n\\end{equation}\nwhere the flux term is neglected,\n$\\hat{\\mbox{\\boldmath ${\\cal E}$} {}}=\\alpha\\hat{\\bm{B}}-\\eta_{\\rm t}{\\rm i}\\bm{k}_1\\times\\hat{\\bm{B}}$\nis the electromotive force, and $\\bm{k}_1=(0,0,k_1)$.\n\n\\begin{figure}[t!]\n\\centering\\includegraphics[width=0.5\\textwidth]{pselected_decay}\\caption{\nDynamical quenching model with helical and nonhelical initial fields,\nand an additional $\\eta_{\\rm t}$ quenching function,\n$\\eta_{\\rm t}=\\eta_{\\rm t0}\/(1+\\tilde{g}|\\overline{\\mbox{\\boldmath $B$}}{}}{|\/B_{\\rm eq})$.\nThe quenching parameters are $\\tilde{g}=0$ (solid line) and 3 (dotted line).\nThe graph for the nonhelical cases has been shifted in $t$ so that one\nsees that the decay rates are asymptotically equal at late times.\nThe value of $\\eta_{\\rm T}$ used to normalize the abscissa is based\non the unquenched value.\nAdapted from Yousef et al.\\ (2003).\n}\\label{Fpselected_decay}\\end{figure}\n\n\\FFig{Fpselected_decay} compares the evolution of $\\overline{\\mbox{\\boldmath $B$}}{}}{\/B_{\\rm eq}$\nfor helical and nonhelical initial conditions, $\\hat{\\bm{B}}\\propto(1,{\\rm i},0)$\nand $\\hat{\\bm{B}}\\propto(1,0,0)$, respectively.\nIn the case of a nonhelical field, the decay rate is not quenched at all,\nbut in the helical case quenching sets in for\n$\\overline{\\mbox{\\boldmath $B$}}{}}{^2\/B_{\\rm eq}^2\\ga R_{\\rm m}^{-1}$.\nThe onset of quenching at $\\overline{\\mbox{\\boldmath $B$}}{}}{^2\/B_{\\rm eq}^2\\approx R_{\\rm m}^{-1}$\nis well reproduced by the simulation.\nIn the 
nonhelical case, however, some weaker form of quenching sets in\nwhen $\\overline{\\mbox{\\boldmath $B$}}{}}{^2\/B_{\\rm eq}^2\\approx1$.\nWe refer to this as standard quenching (e.g.\\ Kitchatinov et al.\\ 1994)\nwhich is known to be always present; see \\Eq{SlowSat}.\nBlackman \\& Brandenburg (2002) found that, for a range of different\nvalues of $R_{\\rm m}$, $\\tilde{g}=3$ results in a good description of\nthe simulations of cyclic $\\alpha\\Omega$-type dynamos that were reported\nby Brandenburg et al.\\ (2002.).\n\n\\section{Relevance to the reversed field pinch}\n\nThe dynamical quenching approach has been applied to modeling\nthe dynamics of the reversed field pinch, where one has an initially\nhelical magnetic field of the form\n\\begin{equation}\n\\overline{\\mbox{\\boldmath $B$}}{}}{=\\hat{\\bm{B}}\\pmatrix{0\\cr J_1(kr)\\cr J_0(kr)},\n\\end{equation}\nwhere we have adopted cylindrical coordinates $(r,\\theta,z)$.\nIn a cylinder of radius $R$ such an initial field becomes kink unstable\nwhen $kR\\ga\\pi$.\nBoth laboratory measurements (e.g.\\ Caramana \\& Baker 1984) and numerical\nsimulations (Ho et al.\\ 1989) confirm the idea that the field-aligned\ncurrent leads to kink instability, and hence to small-scale turbulence\nand thereby to the emergence of $\\eta_{\\rm t}$ and $\\alpha$.\n\n\\begin{figure}[t]\\begin{center}\n\\includegraphics[width=.6\\columnwidth]{ZT40a}\n\\end{center}\\caption[]{\nTime evolution of toroidal flux in the RFP experiment of\nCaramana \\& Baker (1984) compared with a calculation with no dynamo effect.\nThe decay around $t=18$ms is due to termination of the applied electric field.\nCourtesy of E. J. Caramana and D. A. Baker.\n}\\label{toroidal_flux}\\end{figure}\n\nJust as in the case discussed in \\Sec{Sdynquench}, the emergence of $\\alpha$\nslows down the decay in such a way that the toroidal field remains nearly\nconstant and is maintained against resistive decay; see \\Fig{toroidal_flux}.\nThe details of this mechanism have been discussed by Ji \\& Prager (2002).\nIn particular, they show that the parallel electric field cannot be\nbalanced by the resistive term alone, and that there must be an additional\ncomponent resulting from small-scale correlations of velocity and magnetic\nfield, $\\overline{\\mbox{\\boldmath $u$} {}\\times\\mbox{\\boldmath $b$} {}}$, that explain the observed profiles of\nmean electric field and mean current density.\nFurthermore, the parallel electric field reverses sign near the edge\nof the device, while the parallel current density does not.\nAgain, this can only be explained by additional contributions from\nsmall-scale correlations of velocity and magnetic field.\nThe RFP experiment also shows that magnetic helicity evolves on time scales\nfaster than the resistive scale, which is only compatible with the presence\nof a finite magnetic helicity flux divergence (Ji et al.\\ 1995, Ji 1999).\n\n\\section{Magnetic helicity in realistic dynamos}\n\nAstrophysical dynamos saturate and evolve on dynamical time scales and\nare thus not resistively slow.\nCurrent research shows that this can be achieved by expelling\nmagnetic helicity from the domain through helicity fluxes.\nThis is why we have allowed for the $\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_\\alpha$ term\nin \\Eq{dalphaMdt}.\nSince the magnetic $\\alpha$ effect is proportional to the current\nhelicity of the fluctuating field, the $\\overline{\\mbox{\\boldmath $F$}}{}}{_\\alpha$ flux should\nbe proportional to the current 
helicity flux of the\nfluctuating field.\n\nThe presence of the flux term generally lowers the value of\n$|\\alpha_{\\it M}|$, and since the $\\alpha_{\\it M}$ term quenches the total\nvalue of $\\alpha$ ($=\\alpha_{\\it K}+\\alpha_{\\it M}$), the effect of this helicity flux\nis to alleviate an otherwise catastrophic quenching.\nIndeed, in an open domain and without a flux divergence, the $\\alpha_{\\it M}\/R_{\\rm m}$\nterm in \\Eq{dalphaMdt} can result in ``catastrophically low'' saturation field\nstrengths that are by a factor $R_{\\rm m}^{1\/2}$ smaller than the equipartition\nfield strength (Gruzinov \\& Diamond 1994; Brandenburg \\& Subramanian 2005b).\n\nMagnetic helicity obeys a conservation law and is therefore\nconceptually easier to tackle than current helicity.\nHowever, there is the difficulty of gauge dependence of magnetic\nhelicity density and its flux.\nIt is therefore safer to work with the current helicity, which is\nalso the quantity that enters in \\Eqs{alphaKM}{dalphaMdt}.\nHowever, more work needs to be done to establish the connection between\nthe two approaches.\n\nOver the past 10 years there has been mounting evidence that the Sun\nsheds magnetic helicity (and hence current helicity) through coronal\nmass ejections and other events.\nUnderstanding the functional form of such fluxes is very much a matter\nof ongoing research (Subramanian \\& Brandenburg 2004, 2006,\nBrandenburg et al.\\ 2009).\nIn the following section we present a simple calculation that allows\nus to estimate the amount of magnetic helicity losses required for the\nsolar dynamo to work.\n\n\\section{Estimating the required magnetic helicity losses}\n\nIn order to estimate the magnetic helicity losses required to alleviate\ncatastrophic quenching we make use of the relation between the $\\alpha$ effect\nand the divergence of the current helicity flux\n(Brandenburg \\& Subramanian 2005a),\n\\begin{equation}\n\\alpha={\\alpha_{\\it K}+R_{\\rm m}\\left[\n\\eta_{\\rm t}\\mu_0\\overline{\\mbox{\\boldmath $J$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{\/B_{\\rm eq}^2\n-\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}\/(2 k_{\\rm f}^2B_{\\rm eq}^2)\n-\\dot\\alpha\/(2\\eta_{\\rm t} k_{\\rm f}^2)\\right]\n\\over1+R_{\\rm m}\\overline{\\mbox{\\boldmath $B$}}{}}{^2\/B_{\\rm eq}^2},\n\\label{QuenchExtra2}\n\\end{equation}\nwhere $\\dot\\alpha=\\partial\\alpha\/\\partial t$ and $\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}$\nis the mean flux of current helicity from the small-scale field,\n$\\overline{(\\mbox{\\boldmath $\\nabla$} {}\\times\\mbox{\\boldmath $e$} {})\\times(\\mbox{\\boldmath $\\nabla$} {}\\times\\mbox{\\boldmath $b$} {})}$, where $\\mbox{\\boldmath $e$} {}$ is\nthe fluctuating component of the electric field; see also\nSubramanian \\& Brandenburg (2004).\nIn the steady-state limit and at large $R_{\\rm m}$ we have\n\\begin{equation}\n\\alpha\\approx\n\\eta_{\\rm t}{\\mu_0\\overline{\\mbox{\\boldmath $J$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{\\over\\overline{\\mbox{\\boldmath $B$}}{}}{^2}\n-{\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}\\over2 k_{\\rm f}^2\\overline{\\mbox{\\boldmath $B$}}{}}{^2}.\n\\label{QuenchExtra3}\n\\end{equation}\nWe neglect the $\\overline{\\mbox{\\boldmath $J$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{$ term, because the catastrophic\nquenching in dynamos with boundaries has never been seen to be alleviated\nby this term (Brandenburg \\& Subramanian 2005b).\nThus, we 
use \\Eq{QuenchExtra3} to estimate $\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}$ as\n\\begin{equation}\n\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}=2\\alpha k_{\\rm f}^2\\overline{\\mbox{\\boldmath $B$}}{}}{^2.\n\\end{equation}\nNext, we take the volume integral over one hemisphere, i.e.\\\n\\begin{equation}\n{\\cal L}_{\\rm C}\\equiv\\oint_{2\\pi}\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}\\cdot{\\rm d} {}\\mbox{\\boldmath $S$} {}\n=\\int\\mbox{\\boldmath $\\nabla$} {}\\cdot\\overline{\\mbox{\\boldmath $F$}}{}}{_{\\rm C}\\,{\\rm d} {} V\n={2\\pi\\over3}R^3\\bra{2\\alpha k_{\\rm f}^2\\overline{\\mbox{\\boldmath $B$}}{}}{^2},\n\\end{equation}\nwhere ${\\cal L}_{\\rm C}$ is the ``luminosity'' or ``power'' of current helicity.\nWe estimate $\\bra{\\alpha}$ using $\\alpha\\Omega$ dynamo theory\nwhich predicts that (e.g., Robinson \\& Durney 1982, see also\nBrandenburg \\& Subramanian 2005a)\n\\begin{equation}\n\\alpha k_1\\Delta\\Omega\\approx\\omega_{\\rm cyc}^2,\n\\end{equation}\nwhere $\\omega_{\\rm cyc}=2\\pi\/T_{\\rm cyc}$ is the cycle frequency\nof the dynamo, $T_{\\rm cyc}$ is the 22 year cycle period of the Sun,\n$\\alpha$ is assumed constant over each hemisphere, and $\\Delta\\Omega$\nis the total latitudinal shear, i.e.\\ about $0.3\\Omega$ for the Sun.\nThere is obviously an uncertainty in relating local values of\n$\\alpha$ to volume averages.\nHowever, if we do set the two equal, we obtain at least an upper limit\nfor ${\\cal L}_{\\rm C}$.\nWe may then relate this to the luminosity of {\\it magnetic} helicity,\n${\\cal L}_{\\rm H}$, that we assume to be proportional to ${\\cal L}_{\\rm C}$\nvia\n\\begin{equation}\n{\\cal L}_{\\rm C}=k_{\\rm f}^2{\\cal L}_{\\rm H}.\n\\label{LCLH}\n\\end{equation}\nWith this we find for the total magnetic helicity loss over half a\ncycle (one 11 year cycle)\n\\begin{equation}\n{\\textstyle{1\\over2}}{\\cal L}_{\\rm H}T_{\\rm cyc}\\leq{4\\pi\\over3}R^3\n{\\omega_{\\rm cyc}^2\\over k_1\\Delta\\Omega}T_{\\rm cyc}\\bra{\\overline{\\mbox{\\boldmath $B$}}{}}{^2}\n={4\\pi\\over3}LR^3{\\omega_{\\rm cyc}\\over\\Delta\\Omega}\\bra{\\overline{\\mbox{\\boldmath $B$}}{}}{^2},\n\\end{equation}\nwhere we have estimated $k_1=2\\pi\/L$ for the relevant wavenumber of the\ndynamo in terms of the thickness of the convection zone $L$.\nInserting now values relevant for the Sun, $L=200\\,{\\rm Mm}$, $R=700\\,{\\rm Mm}$,\n$\\omega_{\\rm cyc}\/\\Delta\\Omega=10^{-2}$, and $\\overline{\\mbox{\\boldmath $B$}}{}}{\\sim300\\,{\\rm G}$,\nwe obtain\n\\begin{equation}\n{\\textstyle{1\\over2}}{\\cal L}_{\\rm H}T_{\\rm cyc}\\leq10^{46}\\,{\\rm Mx}^2\/\\mbox{cycle}.\n\\label{estimate}\n\\end{equation}\nThis is comparable to earlier estimates based partly on observations\n(Berger \\& Ruzmaikin 2000; DeVore 2000) and partly on turbulence simulations\n(Brandenburg \\& Sandin 2004), but we recall that \\Eq{estimate} is only\nan upper limit.\nFurthermore, the connection between current helicity and magnetic helicity\nassumed in relation \\eq{LCLH} is quite rough and has only been seriously\nconfirmed under isotropic conditions.\nIn addition, it is not clear that the gauge-invariant magnetic helicity\nflux defined by Berger \\& Field (1984) and Finn \\& Antonsen (1985)\nis actually the quantity of interest.\nFollowing earlier work of Subramanian \\& Brandenburg (2006), the\ngauge-invariant magnetic helicity is not in any obvious way related\nto the current helicity which, in turn, is related to the 
density\nof the flux linking number and is at least approximately equal to\nthe magnetic helicity in the Coulomb gauge.\n\nIn summary, although magnetic helicity is conceptually advantageous\nin that it obeys a conservation equation, the difficulty in dealing\nwith a gauge-dependent quantity can be quite serious.\nMoreover, as emphasized before, it is really the current helicity that\nis primarily of interest, so it would be useful to shift attention from\nmagnetic helicity fluxes to current helicity fluxes.\n\n\\section{Conclusions}\n\nIn this review we have attempted to highlight the importance of\nmagnetic helicity in modern nonlinear dynamo theory.\nMuch of the early work on the reversed field pinch since the\nmid 1980s now proves to be extremely relevant in view of the\npossibility of resistively slow saturation by a self-inflicted\nbuild-up of small-scale current helicity, $\\overline{\\mbox{\\boldmath $j$} {}\\cdot\\mbox{\\boldmath $b$} {}}$,\nas the dynamo produces large-scale current helicity, $\\overline{\\mbox{\\boldmath $J$}}{}}{\\cdot\\overline{\\mbox{\\boldmath $B$}}{}}{$.\nSince this concept is not yet universally accepted, the additional evidence\nfrom the reversed field pinch experiment can be quite useful.\n\nThe interpretation of resistively slow saturation has led to the\nproposed solution that magnetic helicity fluxes (or current helicity\nfluxes) are responsible for removing excess small-scale magnetic\nhelicity from the system.\nThis allows the dynamo to reach saturation levels that can otherwise\nbe $R_{\\rm m}^{1\/2}$ times smaller than the equipartition value given\nby the kinetic energy density of the turbulent motions.\nThe consequences of this prediction have been tested in direct simulations\n(Fig.~3 of Brandenburg 2005 and Fig.~17 of K\\\"apyl\\\"a et al.\\ 2008),\nconfirming thereby ultimately basic aspects of incorporating the\nmagnetic helicity equation into mean-field models.\nHowever, more progress is needed in addressing questions regarding the\nrelative importance of magnetic and current helicity fluxes through the\nsurface compared to diffusive fluxes across the equator\n(Brandenburg et al.\\ 2009).\n\nWe reiterate that the reversed field pinch experiment is not\n{\\it directly} relevant to the self-excited dynamo, but rather\nits nonlinear saturation mechanism.\nSo far, successful self-excited dynamo experiments have only been\nperformed with liquid sodium (Gailitis et al.\\ 2000;\nStieglitz \\& M\\\"uller 2001; Monchaux et al.\\ 2007).\nThis may change in future given that one can usually achieve much\nhigher magnetic Reynolds numbers in plasmas than in liquid metals\n(Spence et al.\\ 2009).\nIt might then, for the first time, be possible to address experimentally\nquestions regarding the relative importance of magnetic and current\nhelicity fluxes for dynamos.\n\n\\subsection*{Acknowledgments}\n\nI thank the referees for detailed and constructive comments\nthat have helped to improve the presentation.\nThis work was supported in part by the Swedish Research Council,\ngrant 621-2007-4064, and the European Research Council under the\nAstroDyn Research Project 227952.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nFinancial options are securities that\nconvey rights to conduct specific trades in the future.\nFor example, an S\\&P\\,500\\xspace \\emph{call option} with \\emph{strike price} 4500 and \\emph{expiration date} December 31, 2021, provides the right to buy one share of 
S\\&P\\,500\\xspace at $\\$4500$ on that day.%\n\\footnote{For simplicity, examples are given without the typical 100x contract multiplier.}\nA standard financial option is a derivative instrument of a \\emph{single} underlying asset or index, and its payoff is a function of the underlying variable. \nThe example option contract pays $\\max\\{S-4500,0\\}$, where $S$ is the value of the S\\&P\\,500\\xspace index on the expiration date.%\n\\footnote{\n\tThroughout this paper, we restrict to \\emph{European options} which can be exercised only at expiration.\n\tThe settlement value is calculated as the opening value of the index on the expiration date or the last business day (usually a Friday) before the expiration date.\n\tIn cash-settled markets, instead of actual physical delivery of the underlying asset, the option holder gets a cash payment that is equivalent to the value of the asset.\n} \nInvestors trade options to hedge risks and achieve certain return patterns, or to speculate about the movement of the underlying asset price, buying an option when its price falls below their estimate of its expected value. \nThus, option prices reveal the collective risk-neutral belief distribution of the underlying asset's future value.\n\nDespite the significant volume of trade and interest in option contracts, financial exchanges that support trading of standard options have two limitations that compromise its expressiveness and efficiency.\n\\textit{First}, the exchange offers only a selective set of markets for options written on a specific underlying asset (i.e., option chain), in which each market features a predetermined strike price and expiration date.\nFor example, as of this writing, the Chicago Board Options Exchange (CBOE) offers 120 distinct strike prices, ranging from 1000 to 6000 at intervals of 25, 50, or 100, for S\\&P\\,500\\xspace options expiring by the end of 2021. \nWhile traders can engineer custom contracts (e.g., a S\\&P\\,500\\xspace call option with a strike price of \\$4520) by simultaneously purchasing multiple available options in appropriate proportions, they must monitor several markets to ensure that a bundle can be constructed at a desired price. \nOften, execution risk and transaction costs prevent individual traders from carrying out such strategies. 
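To make the replication argument concrete, the sketch below (the 4520 target strike is the example used above; the neighbouring listed strikes and the weights are chosen for illustration) compares the payoff of a hypothetical call at the unlisted strike 4520 with a static bundle of the listed 4500 and 4550 strikes. The bundle matches the target payoff outside the interval between the two listed strikes but only super-replicates it in between, so a trader who wants the exact payoff still bears an approximation error on top of the execution risk and transaction costs mentioned above.
\\begin{verbatim}
import numpy as np

def call_payoff(S, K):
    # Terminal payoff of a European call with strike K.
    return np.maximum(S - K, 0.0)

S = np.array([4400.0, 4510.0, 4540.0, 4600.0])   # terminal index values

# Target: a custom call at the (unlisted) strike 4520.
target = call_payoff(S, 4520.0)

# Static bundle of the neighbouring listed strikes 4500 and 4550,
# weighted 0.6 and 0.4 so that the slopes agree beyond 4550.
bundle = 0.6 * call_payoff(S, 4500.0) + 0.4 * call_payoff(S, 4550.0)

print(target)    # [ 0.  0. 20. 80.]
print(bundle)    # [ 0.  6. 24. 80.]  the bundle only super-replicates
\\end{verbatim}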
\nAs the prescriptive markets prevent traders from conveniently expressing custom strike values, the exchange may fail to aggregate more fine-grained supply and demand requests, leading to a loss of economic efficiency.\n\n\\textit{Second}, each market within an option chain independently aggregates and matches orders in regard to a single contract with a specified option type, strike price, and expiration date, despite its interconnectedness to other option markets and their common dependency on the underlying asset.\nSuch an independent market design fails to notify traders of strictly better, cheaper options\nand generically allows arbitrage opportunities.\nMoreover, investments get diluted across independent markets even when participants are interested in the same underlying asset.\nThis can lead to the problem of thin markets, where few trades happen and bid-ask spreads become wide.\nEven for some of the most actively traded option families, empirical evidence has shown that liquidity can vary widely across option types and strikes.\n\\citeauthor{Cao2010} study eight years of options trading data and find consistently lower liquidity in puts and deep in-the-money options~\\cite{Cao2010}.\n\n\\subsection{An Exchange for Standard Financial Options}\nThe paper first focuses on the design of an exchange for standard financial options to address the limitations discussed above.\nWe propose a mechanism to aggregate and match orders on options across different strike prices that are logically related to the same underlying asset.\nAs a result, the exchange operates a consolidated market for each underlying security and expiration, where traders can specify custom options of arbitrary strike value.\nThe mechanism is an exchange that generalizes the double auction; it is not a market maker or any individual arbitrageur. 
\nIt works by finding a match that maximizes net profit subject to zero loss even in the worst case, and thus poses no risk to the exchange regardless of the value of the underlying security at expiration.\nWe show that the mechanism is computationally efficient, and key market operations (i.e., match and price quotes) can be computed in time polynomial in the number of orders.\nWe conduct experiments on real-market options data, using our mechanism to consolidate outstanding orders from independently-traded options markets.\nEmpirical results demonstrate that the proposed mechanism can match options that the current exchange cannot and provide more competitive bid and ask prices.\nThe improved efficiency and expressiveness may help to aggregate more fine-grained information and recover a complete and fully general probability distribution of the underlying asset's future value at expiration.\n\n\\subsection{An Exchange for Combinatorial Financial Options}\nIn the second part of the work, we generalize standard financial options to define \\textit{combinatorial financial options}---derivatives that give contract holders the right to buy or sell \\emph{any linear combination} of underlying assets at any strike price.\nFor example, a call option written on ``$1$\\texttt{AAPL}\\xspace$+1$\\texttt{MSFT}\\xspace'' with strike price 300 specifies the right to buy one share of Apple \\emph{and} one share of Microsoft at $\\$300$ total price on the expiration date, whereas a call on ``$2$\\texttt{AAPL}\\xspace$-1$\\texttt{MSFT}\\xspace'' with strike price 50 confers the ability to buy two shares of Apple and sell one share of Microsoft at expiration for a net cost of \\$50.\nCombinatorial options allow traders to conveniently and precisely hedge their exact portfolio, replicating any payoff functions that standard options can achieve and exponentially more. \nSome of the standard options on mutual funds and stock indices, like the S\\&P\\,500\\xspace and Dow Jones Industrial Average (DJI), are indeed combinatorial options written on the respectively predefined portfolios. 
\nThe CBOE recently launched options on eleven Select Sector Indices,%\n\\footnote{\\url{https:\/\/markets.cboe.com\/tradable_products\/sp_500\/cboe_select_sectors_index_options\/}}\neach of which can be considered as a combinatorial option of a pre-specified linear combination of stocks representing one economic sector.\nThe goal of this work is to allow traders to create options on their own custom indices, representing any linear combination of stocks and sectors that they want.\n\nWe design an exchange for combinatorial financial options.\nSuch a market enables the elicitation and recovery of future correlations (e.g., complimentary or substitution effects) among assets.\nHowever, as traders are offered the expressiveness to specify weights (or shares) for multiple underlying assets, new challenges arise and the thin market problem exacerbates.\nOpening a new, separate exchange for each new combination of stocks and weights would rapidly grow intractable.\nNaively matching buy and sell orders for only the exact same portfolio may yield few or no trades, despite plenty of acceptable trades among orders.\n\nExtending the mechanism that consolidates standard options on a single underlying security, we propose an optimization formulation that can match combinatorial options written on different linear combinations of underlying assets.\nThe mechanism maximizes net profit subject to no risk to the exchange, regardless of (any combination of) values of all assets at expiration. \nWe show that the proposed matching mechanism with increased expressiveness, however, comes at the cost of higher computational complexity: the optimal clearing of a combinatorial options market is coNP-hard. \n\nWe demonstrate that the proposed mechanism can be solved relatively efficiently by exploiting constraint generation techniques.\nWe propose an algorithm that finds the exact optimal-matching solution by satisfying an increasing set of constraints iteratively generated from different future values of the underlying assets.\nIn experiments on synthetic combinatorial orders generated from real-market standard options prices, we show that the matching algorithm terminates quickly, with its running time growing linearly with the number of orders and the size of underlying assets. \n\n\\subsection{Roadmap}\nSection~\\ref{sec:related} discusses related work on option pricing and combinatorial markets.\nSection~\\ref{sec:notations} introduces notations and background on the current options market.\nWe first focus on the market design for standard financial options. \nSection~\\ref{sec:standard_option} introduces the formal setting of a consolidated options market with limit orders, and proposes a matching mechanism for options across types and strikes (\\Mech{single_match}).\nSection~\\ref{sec:combo_options}, our main contribution, defines a combinatorial financial options exchange (Mechanism \\ref{mech:combo_match}), analyzes the computational complexity of optimally clearing such a market (Theorem~\\ref{thm:NP-complete} and Theorem~\\ref{thm:coNP-hard}), and proposes a constraint generation algorithm to find the exact optimal match (Algorithm~\\ref{algo:comb_match}). 
\nSection~\\ref{sec:exp_standard_option} and Section~\\ref{sec:exp_combo_option} evaluate the two proposed mechanisms on real-market standard options data and synthetically generated combinatorial options data, respectively.\nSection~\\ref{sec:conclusion} concludes and discusses possible future directions.\n\n\\section{Related Work}\n\\label{sec:related}\n\\subsection{Rational Option Pricing}\nOur proposed market designs relate to arbitrage conditions that have been studied extensively in financial economics ~\\cite{Modigliani1958,Varian1987}.\nIn short, an arbitrage describes the scenario of ``free lunches''---configurations of prices such that one can get something for nothing.\nThe matching operation of an exchange (i.e., the auctioneer function) can be considered as arbitrage elimination: matching orders in effect works by identifying combinations of orders that reflect a risk-free surplus (i.e., gains from trade). \nIn the classic double auction, a match accepting the highest buy order at, say, \\$12 and the lowest sell order at \\$10 in effect yields an arbitrage surplus of \\$2 that can go to the buyer, the seller, the exchange, or split among them in any way. \n\n\\citeauthor{Merton1973} first investigated the no-arbitrage pricing for options, stating the necessity of convexity in option prices~\\cite{Merton1973}.\nOther relevant works examine no-arbitrage conditions for options under different scenarios, such as modeling the stochastic behavior of the underlying asset (e.g., the Black-Scholes model)~\\cite{bs_model,pricing_discrete_time} and considering the presence of other types of securities (e.g., bonds and futures)~\\cite{Garman1976, Ritchken1985}.\nThe most relevant work to our proposed approach is by \\citeauthor{Herzel2005}, who makes little assumption on the underlying process of the asset price and other financial instruments~\\cite{Herzel2005}.\nThe paper proposes a linear program to check the convexity between every strike-price pair; it finds arbitrage opportunities that yield positive cash flows now and no liabilities in the future on European options written on the same underlying security.\n\nOur first contribution on a market design that consolidates standard options generalizes Herzel's by adding an extra degree of freedom to incorporate potential future payoff gain or loss into the matching.\nTo our knowledge, no prior work has defined general combinatorial financial options or investigated the matching mechanism design and complexity for such a market.\n\\subsection{Combinatorial Market Design}\nMuch prior work examines the design of combinatorial markets, both exchanges and market makers \\cite{Chen2008b,Hanson03}, for different applications including prediction markets with Boolean combinations \\cite{Fortnow2005}, permutations \\cite{Chen2007}, hierarchical structures \\cite{Guo2009}, tournaments \\cite{Chen2008a}, and electronic sourcing \\cite{Sandholm2007}. \n\\citeauthor{DudikEtAl13} show how to employ constraint generation in linear programming to keep complex related prices consistent~\\cite{DudikEtAl13}. 
\n\\citeauthor{KroerDudik16} generalize this approach using integer programming~\\cite{KroerDudik16}.\n\\citeauthor{Rothschild14} investigate misaligned prices for logically related binary-payoff contracts in prediction market, and uncover persistent arbitrage opportunities for risk-neutral investors across different exchanges~\\cite{Rothschild14}.\n\nDesigning combinatorial markets faces the tradeoff between expressiveness and computational complexity: \ngiving participants\ngreater flexibility to express preferences can help to elicit better information and increase economic efficiency, but in the meantime leads to a more intricate mechanism that is computationally harder.\nSeveral works have formally described and quantified such tradeoff \\cite{Benisch2008,Golovin2007}, and studied how to balance it by exploiting the outcome space structure and limiting expressivity \\cite{Chen2008b,XiaPe11,LaskeyEtAl18,Dudik2020}.\nOur work here contributes to this rich literature by designing a vastly more expressive version of the popular financial options market. \nThe state space and payoff function for combinatorial financial options differ from the previously studied combinatorial structures in both meaning and computational complexity.\nThe associated matching problem is novel (with its new form of payoff function), and is not a specialization of previously studied combinatorial prediction market.\n\n\\section{Background and Notations}\n\\label{sec:notations}\nThere are two types of options, referred to as \\emph{call} and \\emph{put} options.\nWe denote a call option as $C(S, K, T)$ and a put option as $P(S, K, T)$, which respectively gives the option buyer the right to \\emph{buy} and \\emph{sell} an underlying asset $S$ at a specified \\emph{strike price} $K$ on the \\emph{expiration date} $T$.\nIn the rest of this paper, we omit $T$ from the tuples for simplicity, as the mechanism aggregates options within the same expiration.\n\nThe option buyer decides whether to exercise an option.\nSuppose that a buyer spends \\$8 and purchases a call option, $C(S\\&P\\,500\\xspace, 4500, 20211231)$.\nIf the S\\&P\\,500\\xspace index is \\$4700 at expiration, the buyer will pay the agreed strike \\$4500, receive the index, and get a \\textit{payoff} of \\$200 and a \\textit{net profit} of \\$192 (assuming no time value). 
\nIf the S\\&P\\,500\\xspace price is \\$4200, the buyer will walk away without exercising the option.\nTherefore, the payoff of a purchased option is \n\\[\\Psi := \\max\\{\\chi(S-K), 0\\},\\]\nwhere $S$ is the value of underlying asset at expiration and $\\chi \\in \\{-1, 1\\}$ equals 1 for calls and $-1$ for puts.\nAs the payoff for a buyer is always non-negative, the seller receives a \\emph{premium} now (e.g., \\$8) to compensate for future obligations.\n\nOption contracts written on the same underlying asset, type, strike, and expiration are referred to as an \\emph{option series}.\nConsider options of a single security offering both calls and puts, ten expiration dates and fifty strike prices.\nAll option series render a total of a thousand markets, with each maintaining a separate \\emph{limit order book}.\nIn such a market, deciding the existence of a transaction takes $\\mathcal{O}(1)$ time by comparing the best bid and ask prices, and matching an incoming order can take up to $\\mathcal{O}(n_\\text{o})$ time depending on its quantity, where $n_\\text{o}$ is the number of orders on the opposite side of the order book.\n\n\\section{Consolidating Standard Financial Options}\n\\label{sec:standard_option}\nThis section proposes a mechanism to consolidate buy and sell orders on standard financial options written on the same underlying asset across different types and strike prices.\nThe model does not make any assumption on the option's pricing model or the stochastic behavior of the underlying security. \n\\subsection{Matching Orders on Standard Financial Options}\nConsider an option market in regard to a single underlying asset $S$ with an expiration date $T$\\@.\nTraders can specify orders on such options with any positive strike value.\nThe market has a set of buy orders, indexed by $m \\in \\{1, 2, ..., M\\}$, and a set of sell orders, indexed by $n \\in \\{1, 2, ..., N\\}$. 
\nBuy orders are represented by a type vector $\\boldsymbol{\\phi} \\in \\{-1, 1\\}^M$ with each entry specifying a put or a call option, a strike price vector $\\boldsymbol{p} \\in \\mathbb{R}_{+}^M$, and bid prices $\\boldsymbol{b} \\in \\mathbb{R}_{+}^M$.\nSell orders are denoted by a separate type vector $\\boldsymbol{\\psi} \\in \\{-1, 1\\}^N$, a strike vector ${\\boldsymbol{q}} \\in \\mathbb{R}_{+}^N$, and ask prices $\\boldsymbol{a} \\in \\mathbb{R}_{+}^N$.\n\nThe exchange aims to match buy and sell orders submitted by traders.\nSpecifically, it decides the fraction $\\boldsymbol{\\gamma} \\in [0, 1]^M$ to sell to buy orders and the fraction $\\boldsymbol{\\delta} \\in [0, 1]^{N}$ to buy from sell orders.\nWe start with a formulation that matches orders by finding a common form of arbitrage in which the exchange maximizes net profit at the time of order transaction, subject to \\textit{zero} payoff loss in the future for all possible states of $\\S$:\n\\begin{align}\n\t\\max \\limits_{\\boldsymbol{\\gamma}, \\boldsymbol{\\delta}} & \\displaystyle \\quad \\boldsymbol{b}^\\top \\boldsymbol{\\gamma} - \\boldsymbol{a}^\\top \\boldsymbol{\\delta} \\notag\\\\\n\t\\text{s.t.} & \\displaystyle \\quad \\,\\,\\underbrace{\\!\\!\\sum_{m} \\gamma_m \\max\\{\\phi_m(S - p_m), 0\\}\\!\\!}_{\\Psi_{\\Gamma}}\\,\\, - \\,\\,\\underbrace{\\!\\!\\sum_{n} \\delta_n \\max\\{\\psi_n(S - q_n), 0\\}\\!\\!}_{\\Psi_{\\Delta}}\\,\\,\\leq 0, \\quad \\forall S \\in [0, \\infty) \\notag\n\\end{align}\nHere, the first term $\\Psi_{\\Gamma}:=\\sum_{m} \\gamma_m \\max\\{\\phi_m(S - p_m), 0\\}$ in the constraint calculates the total payoff of sold options as a function of $S$, which is the obligation or liability of the exchange at the time of option expiration.\nThe second term $\\Psi_{\\Delta} := \\sum_{n} \\delta_n \\max\\{\\psi_n(S - q_n), 0\\}$ computes the total payoff of bought options, by which the exchange has the right to exercise.\nThe constraint guarantees that the liability of the exchange does not exceed its payoff, regardless of the value of $S$ on the expiration date.\nWe denote options bought by the exchange as Portfolio $\\Delta$ and options sold as Portfolio $\\Gamma$, and the constraint enforces that Portfolio $\\Delta$ \\emph{(weakly) dominates} Portfolio $\\Gamma$.\n\nWe further extend the above formulation.\nInstead of restricting the exchange to zero loss at expiration, we allow it to take into account the potential (worst-case) deficit or gain in the future.\nWe denote this extra degree of freedom as a decision variable $L \\in \\mathbb{R}$, and have the following matching mechanism:\n\\begin{align}\\tag{M.1}\n\\label{mech:single_match}\n\\max \\limits_{\\boldsymbol{\\gamma}, \\boldsymbol{\\delta}, L} & \\displaystyle \\quad \\boldsymbol{b}^\\top \\boldsymbol{\\gamma} - \\boldsymbol{a}^\\top \\boldsymbol{\\delta} - L\\\\\n\\text{s.t.} & \\displaystyle \\quad \\,\\,\\underbrace{\\!\\!\\sum_{m} \\gamma_m \\max\\{\\phi_m(S - p_m), 0\\}\\!\\!}_{\\Psi_{\\Gamma}}\\,\\, - \\,\\,\\underbrace{\\!\\!\\sum_{n} \\delta_n \\max\\{\\psi_n(S - q_n), 0\\}\\!\\!}_{\\Psi_{\\Delta}}\\,\\,\\leq L, \\quad \\forall S \\in [0, \\infty) \\label{eq:single_constraint}\n\\end{align}\n\nThe constraint now guarantees that the difference between the liability and the payoff of the exchange will be bounded by $L$, regardless of the value of $S$ on the expiration date. 
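\n\nTo make the matching step concrete, the following minimal sketch casts \\ref{mech:single_match} as a linear program and hands it to an off-the-shelf solver; it is illustrative only (the function and variable names are ours) and assumes SciPy's \\texttt{linprog} is available. Anticipating the complexity discussion below, the payoff constraint is piecewise linear in $S$, so the sketch enforces it only at the strike breakpoints, with a large constant standing in for $S \\to \\infty$.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import linprog\n\ndef match_single_asset(phi, p, b, psi, q, a, big_s=1e7):\n    # Sketch of (M.1): maximize b'gamma - a'delta - L subject to the\n    # payoff constraint, enforced at the strike breakpoints only.\n    # big_s is a crude surrogate for S -> infinity.\n    phi, p, b = map(np.asarray, (phi, p, b))\n    psi, q, a = map(np.asarray, (psi, q, a))\n    M, N = len(b), len(a)\n    breakpoints = np.unique(np.concatenate(([0.0], p, q, [big_s])))\n    # Decision vector x = [gamma (M entries), delta (N entries), L];\n    # linprog minimizes, so the objective is negated.\n    c = np.concatenate((-b, a, [1.0]))\n    rows, rhs = [], []\n    for S in breakpoints:\n        pay_sold = np.maximum(phi * (S - p), 0.0)    # liability to buyers\n        pay_bought = np.maximum(psi * (S - q), 0.0)  # payoff from sellers\n        rows.append(np.concatenate((pay_sold, -pay_bought, [-1.0])))\n        rhs.append(0.0)\n    bounds = [(0.0, 1.0)] * (M + N) + [(None, None)]\n    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),\n                  bounds=bounds, method='highs')\n    gamma, delta, L = res.x[:M], res.x[M:M + N], res.x[-1]\n    return gamma, delta, L, -res.fun  # -res.fun is the net profit\n\\end{verbatim}\nSince $\\boldsymbol{\\gamma} = \\boldsymbol{\\delta} = \\boldsymbol{0}$ and $L = 0$ are always feasible, the returned objective is non-negative, and a strictly positive value indicates a profitable match.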
\nFollowing the definition below, we say that the constraint enforces Portfolio $\\Delta$ to \\emph{(weakly) dominate} Portfolio $\\Gamma$ with some constant offset $L$.\n\\begin{definition} [Payoff dominance with an offset. Adapted from~\\citeauthor{Merton1973}~\\cite{Merton1973}]\n\t\\label{def:opt_dominate}\n\n\tPortfolio $\\Delta$ \\emph{(weakly) dominates} Portfolio $\\Gamma$ with an offset $L$, if the payoff of Portfolio $\\Delta$ plus a constant $L$ is\n\tgreater than or equal to that of Portfolio $\\Gamma$ for all possible states of the underlying variable at expiration. \n\tPortfolio $\\Gamma$ is said to be \\emph{(weakly) dominated} by Portfolio $\\Delta$ with an offset $L$.\n\\end{definition}\nSince this potential gain or loss is further incorporated in the objective at the time of matching, the mechanism \\ref{mech:single_match} guarantees no overall loss for each match, and enjoys an extra degree of freedom to trade.\nWe give two motivating examples based on real-market options data to illustrate the economic meaning and the usefulness of a flexible $L$.\nEach of the matches shown below would not have been found if we restrict a zero payoff loss for the exchange.\n\\Ex{nonneg_L} showcases the scenario where $L$ allows the exchange to take a (worst-case) deficit at the time of option expiration, if it is preemptively covered by a surplus (i.e., revenue) at the time of order transaction.\n\\begin{example} [A match with a positive $L$]\n\t\\label{ex:nonneg_L}\n\n\tWe use \\ref{mech:single_match} to consolidate options of Walt Disney Co. (\\texttt{DIS}\\xspace) that are priced on January 23, 2019 and expire on June 21, 2019.\n\tWe find the following match, where each order would not transact in its corresponding independent market. \n\tThe exchange can\n\t\\begin{itemize}\n\t\t\\setlength{\\itemsep}{2pt}\n\t\t\\item sell to the buy order on $C(\\texttt{DIS}\\xspace, 110)$ at bid \\$7.2,\n\t\t\\item sell to the buy order on $P(\\texttt{DIS}\\xspace, 150)$ at bid \\$38.75,\n\t\t\\item buy from the sell order on $C(\\texttt{DIS}\\xspace, 150)$ at ask \\$0.05,\n\t\t\\item buy from the sell order on $P(\\texttt{DIS}\\xspace, 110)$ at ask \\$5.1.\n\t\\end{itemize}\n\tTherefore, the exchange gets an immediate gain of \\$40.8 ($7.2+38.75-5.1-0.05$).\n\n\tFigure~\\ref{subfig:lgtzero} plots the payoffs of bought and sold options as a function of DIS, showing that the exchange will have a net liability of $\\$40$ (i.e., $L=40$), regardless of the DIS value at expiration.\n\tThe exchange makes a net profit of \\$0.80 from the match at no risk.\n\t\\qed\n\\end{example}\n\\Ex{neg_L} below demonstrates a different scenario where the decision variable $L$ allows the exchange to have a temporary deficit (i.e., expense) at the time of order transaction, if it is guaranteed to earn it back later at the time of option expiration. \n\\begin{example} [A match with a negative $L$]\n\t\\label{ex:neg_L}\n\tWe consolidate options of Apple Inc. 
(\\texttt{AAPL}\\xspace) that are priced on January 23, 2019 and expire on January 17, 2020.\n\tWe find the following match, where the exchange can\n\t\\begin{itemize}\n\t\t\\setlength{\\itemsep}{2pt}\n\t\t\\item sell to the buy order on $C(\\texttt{AAPL}\\xspace, 160)$ at bid \\$14.1,\n\t\t\\item sell to the buy order on $P(\\texttt{AAPL}\\xspace, 80)$ at bid \\$0.62,\n\t\t\\item buy from the sell order on $C(\\texttt{AAPL}\\xspace, 80)$ at ask \\$74.2,\n\t\t\\item buy from the sell order on $P(\\texttt{AAPL}\\xspace, 160)$ at ask \\$19.1.\n\t\\end{itemize}\n\tThe match incurs an expense of \\$78.58 now ($14.1+0.62-74.2-19.1$), and yields a guaranteed payoff of \\$80 (i.e., $L=-80$) at expiration. \n\tFigure~\\ref{subfig:lstzero} depicts the respective payoffs of bought and sold options.\n\tFrom the match, we can infer an interest rate of 1.82\\%, calculated by $78.58 e^{r \\Delta t} = 80$.\n\t\\qed\n\\end{example}\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{subfigure}{0.47\\columnwidth}\t\n\t\n\t\t\\includegraphics[width=0.9\\columnwidth]{opt_img\/DIS_payoff.pdf}\n\t\t\\caption{Payoffs of standard options bought and sold in \\Ex{nonneg_L} as a function of DIS value.}\n\t\t\\label{subfig:lgtzero}\n\t\\end{subfigure}\n\t\\hspace{0.04\\columnwidth}\n\t\\begin{subfigure}{0.47\\columnwidth}\t\n\t\n\t\t\\includegraphics[width=0.9\\columnwidth]{opt_img\/AAPL_payoff.pdf}\n\t\t\\caption{Payoffs of standard options bought and sold in \\Ex{neg_L} as a function of AAPL value.}\n\t\t\\label{subfig:lstzero}\n\t\\end{subfigure}\n\n\n\n\n\n\t\\caption[Payoffs of the matched options as a function of the value of the underlying asset at expiration.]{Payoffs of the matched options as a function of the value of the underlying asset at expiration. Fig.~\\ref{subfig:lgtzero} shows the case of $L>0$, and Fig.~\\ref{subfig:lstzero} the case of $L<0$.}\n\t\\label{fig:arb_example}\n\\end{figure}\n\nWe now analyze the complexity of running \\Mech{single_match}.\nThe left-hand side of constraint \\eqref{eq:single_constraint} is a linear combination of max functions, and thus is a piecewise linear function of $S$.\nTherefore, it suffices to solve \\ref{mech:single_match} by satisfying constraints defined by $S$ at each breakpoint.\nIn our case, breakpoints of the constraint \\eqref{eq:single_constraint}\nare the defined strike values in the market, plus two endpoints: $\\boldsymbol{p} \\cup {\\boldsymbol{q}} \\cup \\{0, \\infty\\}$.\nLet $n_K$ denotes the number of distinct strike values in the market, which is bounded above by\n$n_{\\text{orders}} = M+N$, the total number of orders in the market.\nTherefore, \\ref{mech:single_match} is a linear program that has $n_K+2$ payoff constraints, and requires time polynomial in the size of the problem instance to solve.\nWe defer the complete proof to Appendix A.1.\n\\begin{theorem}\n\t\\label{thm:consolidate_standard_options}\n\t\\Mech{single_match} matches financial options written on the same underlying asset and expiration date across all types and strike prices in time polynomial in the number of orders.\n\\end{theorem}\n\n\n\\paragraph{Remarks.}\nSeveral extensions can be directly applied to the mechanism:\n\\begin{enumerate}\n \\setlength{\\itemsep}{0pt}\n \\item We can incorporate the time value of investments by multiplying $L$ by a (discount) rate in the objective of \\ref{mech:single_match}.\n \\item We note that matching solutions returned by \\ref{mech:single_match} may involve fractional shares of stocks. 
Ideally, an exchange (e.g., cash-settled markets) allows it, and if an exchange (e.g., physical-settled markets) does not, we would need to normalize or round.\n \\item Mechanism~\\ref{mech:single_match} identifies a maximal bundle of orders that can be accepted without risk to the exchange, but does not specify what to do with the surplus if any. The surplus can be split arbitrarily among the involved traders and the exchange. We note that while the surplus is guaranteed to be nonnegative, it may have an uncontingent (cash) component and a state-contingent component.\n\\end{enumerate}\n\n\\subsection{Quoting Prices for Standard Financial Options}\nA standard exchange maintains the best quotes (i.e., the highest bid and lowest ask) for each independent options market.\nThis section extends \\Mech{single_match} to quote the most competitive prices for any \\emph{custom} option of a specified type and strike by considering all other options related to the same underlying security.\nWe describe the price quote procedure in an arbitrage-free options market\\footnote{That is, no match will be returned by \\Mech{single_match}.} in regard to a single underlying asset $S$ with an expiration date $T$, represented by $(\\boldsymbol{\\phi}, \\boldsymbol{p}, \\boldsymbol{b}, \\boldsymbol{\\psi}, {\\boldsymbol{q}}, \\boldsymbol{a})$. \nWe defer the detailed proof of correctness to Appendix A.2.\n\n\\begin{enumerate}\n\t\\setlength{\\itemsep}{8pt}\n\t\\item The best bid $b^*$ for an option $(\\chi, S, K)$ is the maximum gain of selling a portfolio of options that is \\emph{weakly dominated} by $(\\chi, S, K)$ with some constant offset $L$.\n\t\n\tWe derive $b^*$ by adding the option $(\\chi, S, K)$ to the sell side of the market indexed $N+1$ (as the exchange buys from sell orders), initializing its price $a_{N+1}$ to 0, and solving for \\ref{mech:single_match}. \n\tThe best bid $b^*$ is then the returned objective.\n\t\n\t\\item The best ask $a^*$ for an option $(\\chi, S, K)$ is the minimum cost of buying a portfolio of options that \\emph{weakly dominates} $(\\chi, S, K)$ with some constant offset $L$.\n\t\n\tWe derive $a^*$ by adding the option $(\\chi, S, K)$ to the buy side of the market indexed $M+1$, initializing its price $b_{M+1}$ to a large number (e.g., $10^6$), and solving for \\ref{mech:single_match}. 
\n\tThe best ask $a^*$ is then $b_{M+1}$ minus the returned objective.\n\\end{enumerate}\nIn the case of matching orders with multiple units, it is necessary to consider all orders in the market.\nFor deciding the existence of a match or quoting (instantaneous) prices, however, we only need to consider a set of orders that have the most competitive prices.\nWe define a set with such orders as a frontier set $\\mathcal{F}$.\n\\begin{definition} [A Frontier Set of Options Orders]\n\t\\label{def:opt_frontier}\n\tAn option order is in the \\emph{frontier set} $\\mathcal{F}$ if its bid price is greater than or equal to the maximum gain of selling a weakly dominated portfolio of options with some offset $L$, or if its ask price is less than or equal to the minimum cost of buying a weakly dominant portfolio of options for some offset $L$.\n\\end{definition}\n\n\\begin{corollary}\n\t\\label{coro:frontier_set}\n\t\\Mech{single_match} determines the existence of a match and returns the instantaneous price quote in time polynomial in $\\card{\\mathcal{F}}$.\n\\end{corollary}\n\nThe proof of Corollary~\\ref{coro:frontier_set} is deferred to Appendix A.3,\nwhich shows that in order to determine the existence of a match or quote the most competitive prices (i.e., the highest bid and the lowest ask) for any target option $(\\chi, S, K)$, it suffices to consider options orders in $\\mathcal{F}$ and run \\Mech{single_match}.\nThe runtime complexity follows immediately from \\Thm{consolidate_standard_options}.\n\n\\section{Combinatorial Financial Options}\n\\label{sec:combo_options}\nThis section introduces \\emph{combinatorial financial options} and designs a market to trade such options. \nCombinatorial options extend standard options to more general derivative contracts that can be written on any linear combination of $U$ underlying assets.\nWe formally define a combinatorial financial option and its specifications.\n\\vspace{1ex}\n\\begin{definition} [Combinatorial Financial Options]\n\tConsider a set of $~U$ underlying assets.\n\tCombinatorial financial options are derivative contracts that specify the right to buy or sell a linear combination of the $U$ underlying assets at a strike price on an expiration date.\n\tEach contract specifies a call or put type $\\chi \\in \\{1, -1\\}$, a weight vector $\\boldsymbol{\\omega} \\in \\mathbb{R}^{U}$, a strike price $K \\geq 0$, and an expiration date $T$.\n\tIt has a payoff of $\\max\\{\\chi(\\boldsymbol{\\omega}^\\top \\S-K), 0\\}$, where $\\S \\in \\mathbb{R}^U_{\\geq 0}$ is a vector of the underlying assets' values at $T$.\n\n\n\n\\end{definition}\n\\vspace{2ex}\nConsider a combinatorial option $C(\\texttt{MSFT}\\xspace-\\texttt{AAPL}\\xspace, 0)$ that has weight $1$ for \\texttt{MSFT}\\xspace, weight $-1$ for \\texttt{AAPL}\\xspace, and a strike price of zero.\nAn investor who buys the option bets on the event that Microsoft outperforms Apple Inc., and will exercise it if $S_\\texttt{MSFT}\\xspace > S_\\texttt{AAPL}\\xspace$.\nThus, unlike standard options that will pay off due to price changes of a single security, combinatorial options bet on relative movements between assets or groups of assets, thus enabling the expression of future correlations among different underlying assets.\n\nWe note several distinctions and interpretations in regard to the definition of combinatorial financial options:\n\\begin{enumerate}\n \\setlength{\\itemsep}{6pt}\n \n \\item Distinction between a combinatorial option and a combination of standard options. 
\n \n Consider the above combinatorial option $C(\\texttt{MSFT}\\xspace-\\texttt{AAPL}\\xspace, 0)$ and a combination of two standard options $C(\\texttt{MSFT}\\xspace, K)$ and $P(\\texttt{AAPL}\\xspace, K)$. One would exercise the combinatorial option and possess the portfolio, i.e., long a share of \\texttt{MSFT}\\xspace and short sell a share of \\texttt{AAPL}\\xspace, as long as $S_\\texttt{MSFT}\\xspace \\geq S_\\texttt{AAPL}\\xspace$, whereas in the combination of standard options, one would exercise both and possess the same portfolio, if $S_\\texttt{MSFT}\\xspace\\geq K$ \\emph{and} $S_\\texttt{AAPL}\\xspace \\leq K$.%\n \\footnote{In terms of possessing the portfolio \\texttt{MSFT}\\xspace-\\texttt{AAPL}\\xspace, $C(\\texttt{MSFT}\\xspace-\\texttt{AAPL}\\xspace, 0)$ captures all combination of two standard options of the form, $C(\\texttt{MSFT}\\xspace, K)$ and $P(\\texttt{AAPL}\\xspace, K)$ for all positive $K$.}\n Thus, the exercise of a combinatorial option does not imply a simultaneous exercise of all standard options in the combination, and vice versa, the simultaneous buy and sell of several standard financial options with different underlying securities and strike prices (e.g., multi-leg options orders) cannot replicate the payoff of a combinatorial option.\n\n \\item Distinction between a call and a put combinatorial option.\n \n We note that the distinction between a call and a put for combinatorial options depends on the strike price and coefficients that one specifies in a contract.\n For instance, in the above example, $C(\\texttt{MSFT}\\xspace-\\texttt{AAPL}\\xspace, 0)$ is identical to $P(\\texttt{AAPL}\\xspace-\\texttt{MSFT}\\xspace, 0)$, as they have the same payoff function $\\max\\{S_\\texttt{MSFT}\\xspace-S_\\texttt{AAPL}\\xspace, 0\\}$.\n Despite the different interpretations and expressions, we follow the convention of standard financial options, and have the strike price always be non-negative.\n\\end{enumerate}\n\n\n\\subsection{Matching Orders on Combinatorial Financial Options}\nThe increased expressiveness in combinatorial options brings new challenges in market design: \nonly matching buy and sell orders on options related to the same assets and weights may yield few or no trades, despite plenty of profitable trades among options written on different portfolios.\nWe start by giving the following motivating examples to illustrate such scenarios.\n\\begin{example}[Matching combinatorial option orders]\n\t\\label{ex:combo_options}\n\tConsider a combinatorial options market with four orders\n\t\\begin{itemize}\n\t\t\\setlength{\\itemsep}{2pt}\n\t\t\\item $o_1$: buy one $C(1\\texttt{AAPL}\\xspace+2\\texttt{MSFT}\\xspace, 300)$ at bid \\$110;\n\t\t\\item $o_2$: buy one $C(1\\texttt{AAPL}\\xspace+1\\texttt{MSFT}\\xspace, 300)$ at bid \\$70;\n\t\t\\item $o_3$: sell one $C(1\\texttt{AAPL}\\xspace+3\\texttt{MSFT}\\xspace, 300)$ at ask \\$160;\n\t\t\\item $o_4$: sell one $C(1\\texttt{AAPL}\\xspace, 250)$ at ask \\$5.\n\t\\end{itemize}\n\tThe exchange returns no match if it only considers options related to the same combination of assets.\n\tHowever, a profitable match does exist.\n\tThe exchange can sell to $o_1$ and $o_2$ and simultaneously buy from $o_3$ and $o_4$ to get an immediate gain of \\$15 (110+70-160-5).\n\tFigure~\\ref{fig:combo_match_example} plots the overall payoff (which is always non-negative), as a function of $S_\\texttt{AAPL}\\xspace$ and $S_\\texttt{MSFT}\\xspace$ on the expiration date:\n\t%\n\t{\n\t\t\\begin{align}\n\t\t\\Psi := 
&\\max\\{S_\\texttt{AAPL}\\xspace+3S_\\texttt{MSFT}\\xspace-300, 0\\} + \\max\\{S_\\texttt{AAPL}\\xspace-250, 0\\} - \\notag\\\\\n\t\t&\\max\\{S_\\texttt{AAPL}\\xspace+2S_\\texttt{MSFT}\\xspace-300, 0\\} - \\max\\{S_\\texttt{AAPL}\\xspace+S_\\texttt{MSFT}\\xspace-300, 0\\},\\notag\n\t\t\\end{align}\n\t}\n\tTherefore, the exercises of options cannot subtract from the \\$15 immediate gain, but could add to it, depending on the future prices of the two stocks.\n\n\n\t\\qed\n\\end{example}\n\n\\begin{figure}[h]\n\t\\centering\n\t\\includegraphics[width=0.45\\columnwidth]{opt_img\/combo_example}\n\t\\caption[Payoff of combinatorial options matched in \\Ex{combo_options} as a function of $S_\\texttt{AAPL}\\xspace$ and $S_\\texttt{MSFT}\\xspace$.]{Payoff of combinatorial options matched in \\Ex{combo_options} as a function of $S_\\texttt{AAPL}\\xspace$ and $S_\\texttt{MSFT}\\xspace$. The example demonstrates the case of $L=0$.}\n\t\\label{fig:combo_match_example}\n\\end{figure}\n\nIn the above example, the exchange can consider matching each individual buy order sequentially to some combination of sell orders.\nFor example, the exchange can first sell to buy order $o_1$ and buy from sell orders $\\frac{2}{3}o_3$ and $\\frac{1}{3}o_4$, and then sell to buy order $o_2$ and buy from sell orders $\\frac{1}{3}o_3$ and $\\frac{2}{3}o_4$.\nBoth are profitable trades subject to no loss, leading to the same ultimate match as in Example~\\ref{ex:combo_options}.\nThe next example shows that such sequential matching of each individual buy order to a combination of sell orders may fail to find valid trades.\n\\begin{example}[Matching combinatorial option orders in batch]\n\t\\label{ex:combo_options_2}\n\tConsider the following four combinatorial options orders\n\t\\begin{itemize}\n\t\t\\setlength{\\itemsep}{2pt}\n\t\t\\item $o_1$: buy one $C(\\text{A}+\\text{B}, 10)$ at bid \\$6;\n\t\t\\item $o_2$: buy one $C(\\text{B}+\\text{C}, 7)$ at bid \\$6;\n\t\t\\item $o_3$: sell one $C(\\text{A}+\\text{B}+\\text{C}, 7)$ at ask \\$10;\n\t\t\\item $o_4$: sell one $C(\\text{B}, 3)$ at ask \\$2.\n\t\\end{itemize}\n\tNo match will be found if we consider each buy order individually: covering a sold combinatorial option in $o_1$ or $o_2$ requires buying the same fraction of $o_3$, which is at a higher price and will incur a net loss.\n\tHowever, a valid match does exist by selling to $o_1$ and $o_2$ and buying from $o_3$ and $o_4$.\n\tIt costs the exchange \\$0, and can yield a positive payoff in the future. \n\tThe exchange has\n\t%\n\t{\n\t\t\\begin{align*}\n\t\t\\max\\{S_A+S_B-10, 0\\} + \\max\\{S_B+S_C-7, 0\\}\n\t\t\\leq \\max\\{S_A+S_B+S_C-7, 0\\} + \\max\\{S_B-3, 0\\},\n\t\t\\end{align*}\n\t}\n\t%\n\tmeaning the liability will always be no larger than the payoff of bought options, for all non-negative $S_A, S_B, S_C$.%\n\t\\footnote{To see this, we can add $\\max\\{S_A-7, 0\\}$ on both sides of the inequality, and have $\\max\\{S_A-7, 0\\} + \\max\\{S_B+S_C-7, 0\\} \\leq \\max\\{S_A+S_B+S_C-7, 0\\}$ and $\\max\\{S_A+S_B-10, 0\\} \\leq \\max\\{S_A-7, 0\\} + \\max\\{S_B-3, 0\\}$ by Jensen's inequality.}\n\t\\qed\n\\end{example}\n\nWe now introduce the formal setting of a combinatorial financial options market.\nA combinatorial options market is a two-sided market with a set of buy limit orders, indexed by $m \\in \\{1, 2, ..., M\\}$, and a set of sell limit orders, indexed by $n \\in \\{1, 2, ..., N\\}$. 
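\nEach order carries a side, an option type, a weight vector over the $U$ underlying assets, a strike, and a limit price. Before stacking these quantities into vectors and matrices, the following illustrative sketch (class and field names are ours) shows how a single order and its payoff $\\max\\{\\chi(\\boldsymbol{\\omega}^\\top \\S-K), 0\\}$ can be evaluated at a terminal value vector $\\S$, using two of the orders from \\Ex{combo_options}.\n\\begin{verbatim}\nimport numpy as np\n\nclass ComboOrder:\n    # One combinatorial option order: side ('buy' or 'sell'),\n    # chi (+1 call / -1 put), weights over the U assets, strike, limit price.\n    def __init__(self, side, chi, omega, strike, price):\n        self.side, self.chi, self.price = side, chi, price\n        self.omega = np.asarray(omega, dtype=float)\n        self.strike = float(strike)\n\n    def payoff(self, S):\n        # max{chi * (omega' S - K), 0} at terminal asset values S\n        value = self.omega @ np.asarray(S, dtype=float) - self.strike\n        return max(self.chi * value, 0.0)\n\n# Two of the orders above, with assets ordered as (AAPL, MSFT):\no1 = ComboOrder('buy', 1, [1, 2], 300, 110)   # buy  C(1 AAPL + 2 MSFT, 300)\no3 = ComboOrder('sell', 1, [1, 3], 300, 160)  # sell C(1 AAPL + 3 MSFT, 300)\nprint(o1.payoff([150, 100]), o3.payoff([150, 100]))  # 50.0 150.0\n\\end{verbatim}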
\nBuy orders are represented by a type vector $\\boldsymbol{\\phi} \\in \\{1, -1\\}^M$ with each entry specifying a put or a call combinatorial option, a weight matrix $\\boldsymbol{\\alpha} \\in \\mathbb{R}^{U \\times M}$ specifying the linear combinations, a strike vector $\\boldsymbol{p} \\in \\mathbb{R}_{\\geq 0}^M$, and a bid-price vector $\\boldsymbol{b} \\in \\mathbb{R}_{+}^M$.\nSell orders are defined by a separate type vector $\\boldsymbol{\\psi} \\in \\{1, -1\\}^N$, a weight matrix $\\boldsymbol{\\beta} \\in \\mathbb{R}^{U \\times N}$, a strike vector ${\\boldsymbol{q}} \\in \\mathbb{R}_{\\geq 0}^N$, and an ask-price vector $\\boldsymbol{a} \\in \\mathbb{R}_{+}^N$.\nSimilar to a standard options market, the exchange decides the fraction $\\boldsymbol{\\gamma} \\in [0, 1]^M$ to sell to buy orders and the fraction $\\boldsymbol{\\delta} \\in [0, 1]^{N}$ to buy from sell orders to maximize net profit.\nWe generalize the proposed mechanism \\ref{mech:single_match} for standard options to facilitate trading combinatorial options:\n\\begin{align}\\tag{M.2}\n\\label{mech:combo_match}\n\\max \\limits_{\\boldsymbol{\\gamma}, \\boldsymbol{\\delta}, L} & \\displaystyle \\quad \\boldsymbol{b}^\\top \\boldsymbol{\\gamma} - \\boldsymbol{a}^\\top \\boldsymbol{\\delta} - L\\\\\n\\text{s.t.} & \\displaystyle \\quad \\displaystyle \\sum_{m} \\gamma_m \\max\\{\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m), 0\\} - \\sum_{n} \\delta_n \\max\\{\\psi_n(\\boldsymbol{\\beta}_n^\\top \\S - q_n), 0\\} \\leq L \\quad \\forall \\S \\in \\mathbb{R}_{\\geq 0}^U \\label{eq:constraint}\n\\end{align}\n\nHowever, unlike \\ref{mech:single_match}, it is no longer feasible to solve the optimization problem \\ref{mech:combo_match} by enumerating the constraint at each breakpoint, which requires iterating over every combination of underlying asset values.\nThe number of constraints can grow exponentially as $\\mathcal{O}(2^{M+N})$ or $\\mathcal{O}((M+N)^U)$ by Sauer's Lemma.\\footnote{In fact, we can write \\ref{mech:combo_match} as an exponential-sized linear program.}\n\nWe analyze the complexity of finding the optimal match in a combinatorial options market, i.e., solving for \\ref{mech:combo_match}.\nWe first show in the following theorem that given a market instance, it is NP-complete to decide if a certain matching assignment, $\\boldsymbol{\\gamma}$ and $\\boldsymbol{\\delta}$, violates the constraint \\eqref{eq:constraint} for a fixed $L$. \n\\begin{theorem}\n\t\\label{thm:NP-complete}\n\tConsider all combinatorial options in the market $(\\boldsymbol{\\phi}, \\boldsymbol{\\alpha}, \\boldsymbol{p}, \\boldsymbol{\\psi}, \\boldsymbol{\\beta}, {\\boldsymbol{q}})$. 
\n\tFor any fixed $L$, it is NP-complete to decide \n\t\\begin{itemize}\n\t\t\\setlength{\\itemsep}{5pt}\n\t\t\\item Yes: $\\boldsymbol{\\gamma} = \\boldsymbol{\\delta} = \\mathbf{1}$ violates the constraint in \\ref{mech:combo_match} for some $\\S$,\n\t\t\\item No: $\\boldsymbol{\\gamma} = \\boldsymbol{\\delta} = \\mathbf{1}$ satisfies the constraint in \\ref{mech:combo_match} for all $\\S$,\n\t\\end{itemize}\n\teven assuming that each combinatorial option is written on at most two underlying assets.\n\\end{theorem}\n\\begin{proof}\n\tThe decision problem is in NP.\n\tGiven a certificate which is a value vector $\\S \\in \\mathbb{R}^U_{+}$, we plug $\\S$ into the constraint of \\ref{mech:combo_match} to compute the payoff and check whether it is less than $L$.\n\tThis takes time $\\mathcal{O}(U(M+N))$.\n\t\n\tFor the NP-hardness, we prove by reducing from the Vertex Cover problem --- given an undirected graph $G=(V,E)$ and an integer $k$, decide if there is a subset of vertices $V' \\subseteq V$ of size $k$ such that each edge has at least one vertex in $V'$.\t\n\t\n\t\tGiven a Vertex Cover instance $(G, k)$, we construct an instance of the combinatorial options matching problem.\n\t\tLet the set of underlying assets correspond to vertices in $G$, i.e., $U=|V|$.\n\t\tFor each vertex indexed $i$, we associate four options with it, one on the buy side and three on the sell side. They have payoff functions as follows:\n\t\t\\begin{align*}\n\t\tf_i &=\\max\\{2K_1S_i-K_1, 0\\}, &g^{(1)}_i &=\\max\\{K_1S_i, 0\\},\\\\\n\t\tg^{(2)}_i &=\\max\\{K_2S_i-K_2, 0\\}, &g^{(3)}_i &=\\max\\{S_i, 0\\},\n\t\t\\end{align*}\n\t\twhere we choose $K_1$ and $K_2$ for some large numbers with $K_2 \\gg K_1$.\n\t\tFor example, we choose $K_1=10|E|$ and $K_2=100|E|$.\n\t\tFor each edge $e = (i, j)$, we define two options that involve its two end-points $i$ and $j$, one on the buy side and one on the sell side.\n\t\tThey have payoff functions:\n\t\t\\begin{align*}\n\t\tf_e &= \\max\\{S_i+S_j, 0\\}, &g_e = \\max\\{S_i+S_j-1, 0\\}.\n\t\t\\end{align*}\n\t\tFinally, we include one sell order on an option with payoff $g^\\star = \\max\\{|E|-k-L-0.5,0\\}$. 
%\n\t\n\t\tSince $L$ is fixed in advance, we assume $|E|-k-L-0.5>0$ without loss of generality.\n\t\tThus, we have $M=|V|+|E|$ buy orders and $N=3|V|+|E|+1$ sell orders.\n\t\tThe construction takes time polynomial in the size of the Vertex Cover instance.\n\t\t\n\t\t\\emph{Suppose the Vertex Cover instance is a Yes instance.}\n\t\tWe assume that $\\{v_1, v_2, ..., v_k\\}$ is a vertex cover.\n\t\tWe show that assigning $S_1,S_2,...,S_k$ to 1 for the selected underlying assets (i.e., vertices) and 0 for the rest unselected ones gives an $\\S$ that violates the constraint \\eqref{eq:constraint}.\n\t\tThe left-hand side of the constraint \\eqref{eq:constraint} is\n\t\t\\begin{align}\n\t\t\\label{eq:LHS}\n\t\tz := & \\,\\,\\underbrace{\\!\\!\\sum_{i \\in V}\\left(f_i-g^{(1)}_i-g^{(2)}_i-g^{(3)}_i\\right)\\!\\!}_{z_v}\\,\\,+\\,\\,\\underbrace{\\!\\!\\sum_{e \\in E} (f_e-g_e)\\!\\!}_{z_e}\\,\\,-g^\\star.\n\t\t\\end{align}\n\t\tFor $S_i\\in\\{0,1\\}$, it is easy to see that $f_i-g_i^{(1)}-g_i^{(2)}=0$.\n\t\tThus, we have $z_v = -\\sum_{i \\in V}g^{(3)} = -k$ by our assignment.\n\t\tSince at least one of $S_i,S_j$ is $1$ for any edge $(i,j)\\in E$, we have $f_e - g_e = 1$ and $z_e = \\card{E}$.\n\t\tTherefore, we have \n\t\t\\begin{align*}\n\t\tz = & -k + |E| - (|E| - k - L - 0.5) = L + 0.5 > L.\n\t\t\\end{align*}\n\t\t\n\t\t\\emph{Suppose the Vertex Cover instance is a No instance.}\n\t\tWe aim to show that for the given $\\boldsymbol{\\gamma}$ and $\\boldsymbol{\\delta}$, there does not exist an $\\S$ that violates the constraint.\n\t\tWe prove by maximizing $z$ and demonstrating $z \\leq L$.\t\n\t\tWe start by proving the following claim.\n\t\t\\begin{claim}\n\t\t\t\\label{thm:claim1}\n\t\t\tFor an optimal $z$, we have $S_i \\in \\{0,1\\}$.\n\t\t\\end{claim}\n\t\tWe prove by contradiction. 
First assume $S_j > 1$ for some $j$.\n\t\tWe rewrite Eq.~\\eqref{eq:LHS} and have\n\t\t\\begin{align*}\n\t\tz := & \\,\\,\\underbrace{\\!\\!f_j-g^{(1)}_j-g^{(2)}_j-g^{(3)}_j\\!\\!}_{z_j}\\,\\, + \\,\\,\\underbrace{\\!\\!\\sum_{i\\in V\\setminus j}\\left(f_i-g^{(1)}_i-g^{(2)}_i-g^{(3)}_i\\right)\\!\\!}_{z_i}\\,\\, + \\,\\,\\underbrace{\\!\\!\\sum_{e \\in E} (f_e-g_e)\\!\\!}_{z_e}\\,\\,-g^\\star.\n\t\t\\end{align*}\n\t\tWe first analyze $z_j$ and have\n\t\t\\begin{align*}\n\t\tz_j = & \\max\\{2K_1 S_j-K_1, 0\\} - \\max\\{K_1S_j, 0\\} - \\max\\{K_2 S_j-K_2, 0\\} - \\max\\{S_j, 0\\}\\\\\n\t\t= & 2K_1 S_j-K_1 - K_1S_j - (K_2S_j-K_2) - S_j \\tag{by assumption of $S_j > 1$}\\\\\n\t\t= & K_2 - K_1 - (K_2-K_1+1)S_j.\n\t\t\\end{align*}\n\t\tRecall that we choose $K_2 \\gg K_1$, e.g., $K_1=10|E|$ and $K_2=100|E|$.\n\t\tThus, we have $z_j$ increase with rate $K_2-K_1+1$, as $S_j$ decreases uniformly.\n\t\tSince $z_e$ decreases at most $|E|$ and the rest two terms, $z_i$ and $g^\\star$, do not depend on $S_j$, decreasing $S_j$ increases $z$.\n\t\tIt is sub-optimal to have $S_j > 1$ for some $j$.\n\t\t\n\t\tNext, we assume $0 \\leq S_j \\leq 1$ for some $j$ and have\n\t\t\\begin{align*}\n\t\tz_j\n\t\n\t\t&= \\begin{cases}\\displaystyle\n\t\t-K_1S_j-S_j &\\quad 0 \\leq S_j \\leq 0.5,\\\\\n\t\t\\displaystyle\n\t\tK_1(S_j-1)-S_j &\\quad 0.5 < S_j \\leq 1.\\\\\n\t\t\\end{cases}\n\t\t\\end{align*}\n\t\tAs $K_1$ is large, by a similar argument analyzing the growth rate of each term, we show that $z_j$ (and also $z$) increases by assigning $S_j$ to $0$ if $0 \\leq S_j \\leq 0.5$ and by assigning $S_j$ to $1$ if $0.5 < S_j \\leq 1$.\n\t\t\n\t\tWe finish proving Claim~\\ref{thm:claim1}, and now have $S_i \\in \\{0,1\\}$.\n\t\tFollowing Eq.~\\eqref{eq:LHS}, we have\n\t\t\\[z = -\\sum_{i \\in V}S_i + \\sum_{e \\in E} (f_e-g_e) - (|E|-k)+L+0.5.\\]\n\t\tThe goal is to maximize $z$ and show $z\\leq L$.\n\t\tAs $S_i$ is an integer, to show $z\\leq L$, it suffices to show that $\\sum_{e \\in E} (f_e-g_e)-\\sum_{i \\in V}S_i<|E|-k$.\n\t\tWe prove by contradiction, assuming $\\sum_{e \\in E} (f_e-g_e)-\\sum_{i \\in V}S_i \\geq |E|-k$.\n\t\tRecall that to have $f_e-g_e = 1$ for $e=(i,j)$, we need at least one of $S_i,S_j$ to be $1$.\n\t\tWe consider the following two possible cases, and aim to refute them:\n\t\t\\begin{enumerate}[(a)]\n\t\t\t\\setlength{\\itemsep}{5pt}\n\t\t\t\\item $\\sum_{e \\in E} (f_e-g_e)=|E|$ and $\\sum_{i \\in V}S_i \\leq k$.\n\t\t\t\n\t\t\tThis means we cover all edges with at most $k$ vertices assigned to 1.\n\t\t\t\n\t\t\t\\item $\\sum_{e \\in E} (f_e-g_e)<|E|$ and $\\sum_{i \\in V}S_i < k$.\n\t\t\t\n\t\t\tFor any $e$ with $f_e-g_e=0$, we can assign 1 to one of its end-points, and have $\\sum_{e \\in E} (f_e-g_e)=|E|$ without changing $\\sum_{e \\in E} (f_e-g_e)-\\sum_{i \\in V}S_i$. 
This leads back to (a).\n\t\t\\end{enumerate}\n\t\tBoth cases contradict the fact that the Vertex Cover instance is a No instance.\n\t\tWe have $\\sum_{e \\in E} (f_e-g_e)-\\sum_{i \\in V}S_i < |E|-k$ as desired, and thus $z \\leq L$ for all $\\S$.\n\\end{proof}\n\\vspace{1ex}\n\nUsing a slightly stronger version of \\Thm{NP-complete} (see Theorem 5 in Appendix B.1),\nwe show that optimal clearing of a combinatorial options market is coNP-hard.\nWe defer the proof of \\Thm{coNP-hard} to Appendix B.1.\n\\begin{theorem}\n\t\\label{thm:coNP-hard}\n\tOptimal clearing of a combinatorial options market $(\\boldsymbol{\\phi}, \\boldsymbol{\\alpha}, \\boldsymbol{p}, \\boldsymbol{b}, \\boldsymbol{\\psi}, \\boldsymbol{\\beta}, {\\boldsymbol{q}}, \\boldsymbol{a})$ is coNP-hard, even assuming that each combinatorial option is written on at most two underlying assets.\n\\end{theorem}\n\\subsection{A Constraint Generation Algorithm to Match Combinatorial Financial Options}\nSince it is no longer practical to solve \\ref{mech:combo_match} by identifying all constraints defined by different combinations of underlying asset values, we propose an algorithm (\\Algo{comb_match}) that finds the exact optimal match through iterative constraint generation.\nWe first explain how \\Algo{comb_match} works, and then prove that it is an equivalent formulation of \\ref{mech:combo_match}.\n\nAt the core of \\Algo{comb_match} is a constraint generation process, where a new optimization problem is defined per iteration to find a violated constraint.\nThe constraint set $\\mathcal{C}$ includes the $\\S \\in \\mathbb{R}^U_{\\geq 0}$ value vectors that define different constraints. We can plug each $\\S$ back into the constraint to define $\\boldsymbol{f}$ and $\\boldsymbol{g}$, which are the realized payoffs of bought and sold options with respect to that $\\S$.\nWe start with the constraint set $\\mathcal{C}$ containing only the zero vector, meaning that all underlying assets have price zero at expiration.\nIn each iteration, the upper-level optimization problem (\\ref{eq:combo_upper}) computes the optimal match (i.e., $\\boldsymbol{\\gamma}^*$, $\\boldsymbol{\\delta}^*$, and $L^*$) that satisfies this restricted set of generated constraints.\nThus, it is a linear program with $\\card{\\mathcal{C}}$ constraints.\nWe then use the lower-level optimization problem (\\ref{eq:comb_milp}) to find the $\\S^*$ value vector that violates the returned matching solution the most.\nThat is, given $\\boldsymbol{\\gamma}^*$, $\\boldsymbol{\\delta}^*$, and $L^*$, it generates the most adversarial realization of $\\S$ that yields the worst-case payoff loss for the exchange at expiration.\nWe show that this lower-level optimization can be formulated as a mixed-integer linear program.\nWe then include this generated $\\S^*$ vector into the constraint set.\n\nThe exact optimal match is returned when the lower-level \\ref{eq:comb_milp} gives an objective value of zero, meaning that no $\\S$ violates the upper-level matching assignment.\nTherefore, the final number of constraints in $\\mathcal{C}$ is the number of iterations that \\Algo{comb_match} takes to terminate.\nThe algorithm trivially terminates finitely, but similar to the simplex method, it has no guarantee on the rate of convergence.\nWe later evaluate its performance on synthetic combinatorial options markets of different scales, and demonstrate that \\Algo{comb_match} converges relatively quickly, with the number of iterations increasing linearly in the size of a market 
instance.\n\nWe next prove \\Algo{comb_match} returns the same optimal clearing solution as \\ref{mech:combo_match}.\nWe first show in the Lemma that \\ref{eq:comb_milp} finds the value vector $\\S$ that violates the constraint \\eqref{eq:constraint} in \\ref{mech:combo_match} the most.\n\\begin{algorithm}[t]\n\t\\caption{Match orders in a combinatorial options market.}\n\t\\label{algo:comb_match}\n\t\\begin{algorithmic}[1]\n\t\t\\Statex \\textbf{Input:}~%\n\t\tA combinatorial options market defined by\n\t\n\t\t$(\\boldsymbol{\\phi}, \\boldsymbol{\\alpha}, \\boldsymbol{p}, \\boldsymbol{b}, \\boldsymbol{\\psi}, \\boldsymbol{\\beta}, {\\boldsymbol{q}}, \\boldsymbol{a})$.\n\t\t\\Statex \\textbf{Output:}~%\n\t\tAn optimal clearing that matches $\\boldsymbol{\\gamma}^*$ buy orders to $\\boldsymbol{\\delta}^*$ \n\t\n\t\tsell orders.\n\t\t\\medskip\n\t\t\\State Initialize $z \\gets \\infty$, $\\S \\gets \\boldsymbol{0}$, \n\t\t\\Statex ~\\hphantom{Initializ} $\\boldsymbol{f} \\gets \\max\\{\\boldsymbol{\\phi}(\\boldsymbol{\\alpha}^\\top \\S-\\boldsymbol{p}), \\boldsymbol{0}\\}$, $\\boldsymbol{g} \\gets \\max\\{\\boldsymbol{\\psi}(\\boldsymbol{\\beta}^\\top \\S-{\\boldsymbol{q}}), \\boldsymbol{0}\\}$, $\\mathcal{C} \\gets \\{(\\boldsymbol{f}, \\boldsymbol{g})\\}$.\n\t\n\t\n\t\t\\While {$z > 0$}\n\t\t\\State Solve the following upper level LP and get the optimal $(\\boldsymbol{\\gamma}^*, \\boldsymbol{\\delta}^*, L^*)$\n\t\n\t\t\\begin{align*}\n\t\t\\label{eq:combo_upper}\n\t\t&\\max \\limits_{\\boldsymbol{\\gamma}, \\boldsymbol{\\delta}, L} \\quad \\boldsymbol{b}^\\top \\boldsymbol{\\gamma} - \\boldsymbol{a}^\\top \\boldsymbol{\\delta} - L \\tag{M.3U}\\\\\n\t\t&\\quad\\text{s.t.} \\quad \\boldsymbol{\\gamma}^\\top \\boldsymbol{f} - \\boldsymbol{\\delta}^\\top \\boldsymbol{g} \\leq L \\quad &\\forall (\\boldsymbol{f}, \\boldsymbol{g}) \\in \\mathcal{C}\n\t\t\\end{align*}\n\t\t\\State Given $(\\boldsymbol{\\gamma}^*, \\boldsymbol{\\delta}^*, L^*)$, solve the following lower level MILP and get the optimal\n\t\t\\Statex ~\\hphantom{w } $(\\S^*, \\boldsymbol{f}^*, \\boldsymbol{g}^*, \\boldsymbol{I}^*, z^*)$\n\t\t\\begin{align}\n\t\t\t\\label{eq:comb_milp}\n\t\t\n\t\t\n\t\t\n\t\t\t& \\max \\limits_{\\S, \\boldsymbol{f}, \\boldsymbol{g}, \\boldsymbol{I}} \\quad z := \\boldsymbol{\\gamma}^\\top \\boldsymbol{f} - \\boldsymbol{\\delta}^\\top \\boldsymbol{g} - L \\tag{M.3L}\\\\\n\t\t\t& \\text{ s.t.} \\quad \\quad \\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\geq \\mathcal{M}(\\mathcal{I}_m-1) \\notag\\\\\n\t\t\t& \\hphantom{\\text{ s.t.}} \\quad \\quad \\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\leq \\mathcal{M} \\mathcal{I}_m \\notag\\\\\n\t\t\t& \\hphantom{\\text{ s.t.}} \\quad \\quad f_m \\leq \\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) - \\mathcal{M}(\\mathcal{I}_m-1) \\notag\\\\\n\t\t\t& \\hphantom{\\text{ s.t.}} \\quad \\quad f_m \\leq \\mathcal{M} \\mathcal{I}_m \\notag\\\\\n\t\t\t& \\hphantom{\\text{ s.t.}} \\quad \\quad \\mathcal{I}_m \\in \\{0, 1\\} \\qquad \\qquad \\forall m \\in \\{1,...,M\\} \\notag\\\\\n\t\t\t& \\hphantom{\\text{ s.t.}} \\quad \\quad g_n \\geq \\psi_n(\\boldsymbol{\\beta}_n^\\top \\S - q_n) \\notag\\\\\n\t\t\t& \\hphantom{\\text{ s.t.}} \\quad \\quad g_n \\geq 0 \\qquad \\qquad \\qquad \\forall n \\in \\{1,...,N\\} \\notag\n \t\t\\end{align}\n\t\t\\State $\\mathcal{C} \\gets \\mathcal{C} \\cup (\\boldsymbol{f}^*, \\boldsymbol{g}^*)$, $z\\gets z^*$\n\t\t\\EndWhile\n\t\t\\State \\Return $\\boldsymbol{\\gamma}^*$ and 
$\\boldsymbol{\\delta}^*$\n\t\n\t\\end{algorithmic}\n\\end{algorithm}\n\\begin{lemma}\n\tGiven fixed $\\boldsymbol{\\gamma}$, $\\boldsymbol{\\delta}$, and $L$ for a combinatorial options market $(\\boldsymbol{\\phi}, \\boldsymbol{\\alpha}, \\boldsymbol{p}, \\boldsymbol{b}, \\boldsymbol{\\psi}, \\boldsymbol{\\beta}, {\\boldsymbol{q}}, \\boldsymbol{a})$, \\ref{eq:comb_milp} returns the value of underlying assets $\\S$ that violates the constraint of \\ref{mech:combo_match} the most.\n\\end{lemma}\n\\begin{proof}\n\tFirst, it is easy to see that the formulation below returns the $\\S$ that violates constraint \\eqref{eq:constraint} the most, since we will have the largest feasible $\\boldsymbol{f}$ and the smallest feasible $\\boldsymbol{g}$ at the optimum.\n\tThat is, $ f_m = \\max\\{\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m), 0\\}$ and $g_n = \\max\\{\\psi_n(\\boldsymbol{\\beta}_n^\\top \\S - q_n), 0\\}$.\n\t\t\\begin{align}\n\t\t& \\max \\limits_{\\S, \\boldsymbol{f}, \\boldsymbol{g}} \\quad \\boldsymbol{\\gamma}^\\top \\boldsymbol{f} - \\boldsymbol{\\delta}^\\top \\boldsymbol{g} - L \\notag\\\\\n\t\t& \\text{ s.t.} \\quad \\quad f_m \\leq \\max\\{\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m), 0\\} &\\forall m \\in \\{1,...,M\\} \\notag\\\\\n\t\t& \\phantom{\\text{ s.t.}} \\quad \\quad g_n \\geq \\psi_n(\\boldsymbol{\\beta}_n^\\top \\S - q_n) \\notag\\\\\n\t\t& \\phantom{\\text{ s.t.}} \\quad \\quad g_n \\geq 0 &\\forall n \\in \\{1,...,N\\} \\notag\t\n\t\t\\end{align}\n\t\n\tIt remains to show the set of constraints related to any buy order $m$ in \\ref{eq:comb_milp} is equivalent to $f_m \\leq \\max\\{\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m), 0\\}$.\n\tIn short, the set of constraints in \\ref{eq:comb_milp} linearize the max functions by using the big-$\\mathcal{M}$ trick on each binary decision variable $\\mathcal{I}_m$, where $\\mathcal{M}$ is a large constant, say $10^6$.\n\t\n\tConsider each case of $\\mathcal{I}_m \\in \\{0, 1\\}$.\n\tThe first two constraints, $\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\geq \\mathcal{M}(\\mathcal{I}_m-1)$ and $\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\leq \\mathcal{M} \\mathcal{I}_m$, guarantee that $\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\geq 0 \\iff \\mathcal{I}_m=1$.\n\tThen, following the third and fourth constraints, i.e., $f_m \\leq \\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) - \\mathcal{M}(\\mathcal{I}_m-1)$ and $f_m \\leq \\mathcal{M} \\mathcal{I}_m$, we have \n\t\\[f_m \\leq 0 \\quad \\text{if $\\mathcal{I}_m = 0$}; \\quad f_m \\leq \\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\quad \\text{if $\\mathcal{I}_m = 1$.}\\]\n\tTherefore, at the optimum, we have \n\t\\[f_m = 0 \\quad \\text{if $\\mathcal{I}_m = 0$}; \\quad f_m = \\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m) \\quad \\text{if $\\mathcal{I}_m = 1$,}\\]\n\twhich is equivalent to $f_m = \\max\\{\\phi_m(\\boldsymbol{\\alpha}_m^\\top \\S - p_m), 0\\}$.\n\n\\end{proof}\nTherefore, when \\ref{eq:comb_milp} returns an objective value of zero, the constraint \\eqref{eq:constraint} in \\ref{mech:combo_match} is satisfied for all $\\S$, and \\Algo{comb_match} returns a valid match that optimizes for overall profit.\n\\begin{theorem}\n\t\\label{thm:equivalent_form}\n\tGiven a combinatorial options market instance $(\\boldsymbol{\\phi}, \\boldsymbol{\\alpha}, \\boldsymbol{p}, \\boldsymbol{b}, \\boldsymbol{\\psi}, \\boldsymbol{\\beta}, {\\boldsymbol{q}}, \\boldsymbol{a})$, \\Algo{comb_match} returns the optimal clearing defined in 
\n\\section{Experiments: Real-Market Standard Financial Options}\n\\label{sec:exp_standard_option}\nWe first evaluate the proposed mechanism~\\ref{mech:single_match} that matches orders on standard options across types and strike values.\nWe conduct empirical analysis on the OptionMetrics dataset provided by the Wharton Research Data Services (WRDS), which contains real-market option prices, i.e., the best bid and ask prices, for each market defined by an underlying asset, an option type, a strike price, and an expiration date.%\n\\footnote{Our data includes American options that allow exercise before expiration. \n\tIn practice, American options are\n\talmost always more profitable to sell than to exercise early \\cite{Singh2019}.\n\tIn experiments, we ignore early exercise and treat them as European options.}\nWe choose options data on 30 stocks that compose the DJI, as these stocks have actively traded options that cover a wide range of moneyness and maturity levels.\nThere are a total of 25,502 distinct options markets for the 30 stocks in the DJI on January 23, 2019,\ncovering around 12 expiration dates for each stock.\n\nWe use \\ref{mech:single_match} to consolidate the outstanding buy orders and sell orders from independently traded options markets that are associated with the same security and expiration date.\nThis reduces the original 25,502 distinct markets to a total of 366 markets, each of which contains standard options across different types and strikes.\n
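\nThe sketch below illustrates this consolidation step on per-series quote records: the best bid and ask of each independently traded series become outstanding buy and sell orders of a single consolidated market keyed by underlying security and expiration date.\nThe field names are illustrative placeholders rather than the exact OptionMetrics column names.\n\\begin{verbatim}\nfrom collections import defaultdict\n\ndef consolidate(quotes):\n    # quotes: per-series records with fields ticker, expiration,\n    # cp ('C' for call, 'P' for put), strike, best_bid, best_ask\n    markets = defaultdict(lambda: {'buys': [], 'sells': []})\n    for q in quotes:\n        key = (q['ticker'], q['expiration'])  # one market per security and expiry\n        opt = (q['cp'], q['strike'])          # payoff-defining parameters\n        if q['best_bid'] > 0:\n            markets[key]['buys'].append((opt, q['best_bid']))   # outstanding buy order\n        if q['best_ask'] > 0:\n            markets[key]['sells'].append((opt, q['best_ask']))  # outstanding sell order\n    return markets\n\\end{verbatim}\n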
We run \\ref{mech:single_match} on these consolidated markets to\n\t\\begin{enumerate}[(1)]\n\t\t\\setlength{\\itemsep}{2pt}\n\t\t\\item find trades that the current independent-market design cannot match,\n\t\t\\item compute new price quotes implied by this consolidated market design, and\n\t\t\\item compare matches and price quotes to those of the case when we restrict $L$ to 0.\n\t\\end{enumerate}\n\nOut of the 366 consolidated options markets, we find profitable matches in 150 markets; these trades failed to transact under the independent market design.\nAmong those trades, 94 have non-negative $L$ (i.e., the case of \\Ex{nonneg_L}), making an average net profit of \\$1.03, with a maximum of \\$9.64; \nthe remaining 56 have negative $L$ (i.e., the case of \\Ex{neg_L}), implying an average interest rate of 0.7\\%, with a maximum of 2.02\\%.\nWhen we restrict $L$ to 0 (i.e., $L$ is no longer a decision variable), we are able to find matches in 74 markets.\nDetailed statistics for the options of each stock are available in Appendix C.\n\nFor the arbitrage-free consolidated markets (i.e., markets with no match returned by \\ref{mech:single_match}), we find that approximately 49\\% of the orders belong to the frontier set.\nUsing these orders to derive the most competitive bids and asks, we find that the bid-ask spreads can be reduced by 73\\%, from an average of 80 cents for each option series in the independent markets to 21 cents in consolidated options markets.\nFor the case of $L$ set to 0, the bid-ask spreads can be reduced by 52\\%.\n\nThese results show that the exchange, by consolidating options markets across types and strike prices, can potentially achieve higher economic efficiency, matching orders that the current independent design cannot and providing more competitive bid and ask prices.\n\n\\section{Experiments: Synthetic Combinatorial Options Market}\n\\label{sec:exp_combo_option}\nSince combinatorial options are not currently traded in financial markets, we evaluate the proposed Algorithm~\\ref{algo:comb_match} on synthetic combinatorial options, with prices calibrated using real-market standard options written on each related underlying security.%\n\\footnote{We adopt the same dataset as Section~\\ref{sec:exp_standard_option}, and use standard options that expire on February 1, 2019, to calibrate order prices for generated combinatorial options.}\nWe implement Algorithm~\\ref{algo:comb_match} using Gurobi 9.0~\\cite{gurobi}.\nWe are interested in quantifying the performance of \\Algo{comb_match} in parametrically different markets that vary in the likelihood of matching, the number of orders, and the number of underlying assets.\nWe first describe our synthetic dataset.\n\n\\subsection{Generate Synthetic Orders}\nWe generate combinatorial options markets of $U$ underlying assets.\nEach combinatorial option is written on a combination of two stocks, $S_i$ and $S_j$, randomly selected from the $U$ underlying assets.\nThis gives a total of ${U \\choose 2}$ asset pairs.\nWeights for the selected assets, $w_i$ and $w_j$, are picked uniformly at random from $\\{\\pm1, \\pm2, \\dotsc, \\pm9\\}$ and are preprocessed to be relatively prime.\n\nWe generate strikes and bid and ask prices using real-market standard options data related to each individual asset to realistically capture the value of the synthetic portfolio.\nLet $\\mathcal{K}_i$ and $\\mathcal{K}_j$ denote the sets of strike prices offered by standard options on the two selected assets, respectively.\nWe generate the strike $K$ by first sampling two strike values, $k_i \\sim \\mathcal{K}_i$ and $k_j \\sim \\mathcal{K}_j$, and scaling them by the associated weights to get $K = w_ik_i+w_jk_j$.\nIf $K$ is positive, we have a call option.\nIf instead $K$ is negative, we generate a put option, and update the strike to $-K$ and the weights to $-w_i$ and $-w_j$ to comply with the representation and facilitate payoff computations.\n\nWe randomly assign each option to the buy or sell side of the market with equal probability, and generate a bid or ask price accordingly.\nSimilar to the price quote procedure in Section~\\ref{sec:standard_option}, we derive the bid $b$ by calculating the maximum gain from selling a set of standard options whose payoff is dominated by the combinatorial option of interest, and compute the ask $a$ by calculating the minimum cost of buying a set of standard options whose payoff dominates the generated combinatorial option.\nWe add noise to the derived prices to control the likelihood of matching in a market, and set the final prices to $b(1+\\zeta)$ or $a(1-\\zeta)$, where $\\zeta$ is drawn uniformly from $[0, \\eta]$ and $\\eta$ is a noise parameter.\n
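\nThe generation procedure can be summarized by the short sketch below; the helper \\texttt{ref\\_price} stands in for the dominating-portfolio price derivation described above, and all names are illustrative.\n\\begin{verbatim}\nimport math\nimport random\n\ndef gen_order(strike_sets, ref_price, eta, rng=random):\n    # strike_sets: dict mapping each asset index to its listed strike prices\n    # ref_price: callable standing in for the bid or ask derivation above\n    i, j = rng.sample(list(strike_sets), 2)        # two distinct underlying assets\n    wi = rng.choice([-1, 1]) * rng.randint(1, 9)   # weights from {+-1, ..., +-9}\n    wj = rng.choice([-1, 1]) * rng.randint(1, 9)\n    g = math.gcd(abs(wi), abs(wj))\n    wi, wj = wi // g, wj // g                      # make the weights relatively prime\n    K = wi * rng.choice(strike_sets[i]) + wj * rng.choice(strike_sets[j])\n    kind = 'call'\n    if K < 0:                                      # negative strike: switch to a put\n        kind, K, wi, wj = 'put', -K, -wi, -wj\n    side = rng.choice(['buy', 'sell'])             # each side with equal probability\n    zeta = rng.uniform(0, eta)                     # noise controls matching likelihood\n    price = ref_price(i, j, wi, wj, K, kind, side)\n    price *= (1 + zeta) if side == 'buy' else (1 - zeta)\n    return {'assets': (i, j), 'weights': (wi, wj), 'type': kind,\n            'strike': K, 'side': side, 'price': price}\n\\end{verbatim}\n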
\n\\subsection{Evaluation}\n\\begin{figure}[t]\n\t\\centering\n\t\\begin{subfigure}{0.48\\columnwidth}\t\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\columnwidth]{opt_img\/combo_vary_noise_level.pdf}\n\t\t\\caption{Vary noise $\\eta$ added to order prices in markets with $U=4$ and $n_\\text{orders}=150$.}\n\t\t\\label{subfig:combo_plots_a}\n\t\t\\vspace{2ex}\n\t\\end{subfigure}\n\t\\hspace{0.02\\columnwidth}\n\t\\begin{subfigure}{0.48\\columnwidth}\t\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\columnwidth]{opt_img\/combo_vary_book_size.pdf}\n\t\t\\caption{Vary number of orders $n_\\text{orders}$ in markets with $U=4$ and $\\eta = 2^{-4}$.}\n\t\t\\label{subfig:combo_plots_b}\n\t\t\\vspace{2ex}\n\t\\end{subfigure}\n\t\\hspace{0.02\\columnwidth}\n\t\\begin{subfigure}{0.48\\columnwidth}\t\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\columnwidth]{opt_img\/combo_vary_stock_size.pdf}\n\t\t\\caption{Vary the number of underlying assets $U$ in markets with $n_\\text{orders} = 150$ and $\\eta = 2^{-4}$.}\n\t\t\\label{subfig:combo_plots_c}\n\t\\end{subfigure}\n\t\\caption[Results of using \\Mech{combo_match} to match orders in synthetic combinatorial options markets.]{Results of using \\Mech{combo_match} to match orders in synthetic combinatorial options markets. The number of generated constraints (solid lines) and the net profit (dashed lines) as the markets vary in price noise, the number of orders, and the number of underlying assets. Red lines represent markets that offer a prescriptive set of asset pairs that covers all $U$ underlying assets.}\n\t\\label{fig:combo_plots}\n\\end{figure}\nWe explore a wide range of markets that vary in price noise $\\eta$, the number of orders $n_\\text{orders}$, and the number of underlying assets $U$.\nFor each market, we measure (1)~the number of iterations (i.e., the total number of constraints generated) that \\Algo{comb_match} takes to find an exact optimal clearing and (2)~the net profit made from the trade.\nFor all experiments, we show results averaged over 40 simulated markets, with error bars denoting one standard error around the mean.\n\nWe first validate that as larger noise is added to the derived bids and asks, the likelihood of matching in our simulated combinatorial options markets becomes higher.\nWe generate markets with four underlying assets (arbitrarily selected from the 30 stocks in the DJI) and 150 synthetic combinatorial options orders, and vary the noise level $\\eta \\in \\{2^{-7}, 2^{-6}, 2^{-5}, 2^{-4}, 2^{-3}\\}$. \nFigure~\\ref{subfig:combo_plots_a} plots the averaged results.\nAs expected, the net profit made from the optimal match increases as $\\eta$ increases.\nMoreover, we find that as the matching probability increases, the number of iterations needed to find the optimal solution consistently decreases.\nThis makes sense intuitively: in thin markets where few trades are likely to occur, the lower-level optimization program keeps producing candidate $\\S$ values that refute a large number of matching proposals before convergence.\n\nFigure~\\ref{subfig:combo_plots_b} further quantifies the change in iteration numbers and net profits as we vary the number of combinatorial options orders.\nWe fix these markets to have four underlying assets with a price noise of $2^{-4}$. 
\nAs we see from Figure~\\ref{subfig:combo_plots_b}, as a market aggregates more orders, transactions are more likely to happen, leading to larger net profits.\nWe also find that the number of generated constraints grows (sub)linearly in the number of orders.\nSince different $\\S$ values are generated to define payoffs of \\emph{distinct} options and the number of distinct options increases sublinearly in the total number of orders, the rate of increase in the number of iterations tends to decrease as a market aggregates more orders.\n\nFinally, we evaluate how \\Algo{comb_match} scales to markets with increasingly larger numbers of underlying assets.\nIn this case, as the dimension of $\\S$ becomes large, the number of asset-value combinations grows exponentially.\nFigure~\\ref{subfig:combo_plots_c} (black lines) demonstrates a much faster increase in the number of iterations and a steady decrease in the net profit.%\n\\footnote{We report average runtimes to quantify the impact of increasing constraints.\n\tThe average times (in seconds) for \\Algo{comb_match} to compute the optimal match are 4, 31, 47, 59, 68, and 72 for the respective markets with $U \\in \\{2, 4, 8, 12, 16, 20\\}$.\n}\nThis suggests that as the market provides a large set of underlying assets (e.g., all 30 stocks in the DJI), the thin market problem may still arise even when the mechanism facilitates matching options written on different combinations of underlying assets.\nIn this set of experiments, we assume that each of the ${U \\choose 2}$ asset pairs is equally likely to be traded.\nIn real markets, investors may be more interested in certain asset pairs, trading them more frequently than the others.\nBased on such observations, a market can specify a prescriptive set of asset pairs, $\\mathcal{P}$, to alleviate the thin market problem.\nFor the experiments, we choose $\\card{\\mathcal{P}} = U$ and require the underlying securities in these asset pairs to cover all $U$ assets.\nTraders can choose from any asset pairs within this prescriptive set and specify custom weights for each security.\nFigure~\\ref{subfig:combo_plots_c} (red lines) shows that such a prescriptive design may indeed attenuate the thin market problem, leading to faster convergence and higher profits.\n\\section{Discussion}\n\\label{sec:conclusion}\nThis paper proposes market designs to facilitate trading standard options and the more general \\emph{combinatorial financial options}.\nWe start by examining standard financial exchanges that operate separate options markets and have each independently aggregate and match orders on options of a designated type, strike price, and expiration.\nWhen logically related financial markets run independently, traders may remove arbitrage and close bid-ask spreads themselves. \nOur OptionMetrics experiments show that they do so suboptimally.\nExecution risk and transaction costs often prevent arbitrage opportunities from being fully removed.\nMoreover, profits\ncan flow to agents with better computational power and little information. \nOur consolidated market design, by putting computational power into the exchange, supports trading related options across different strikes and reduces bid-ask spreads, thus reducing the arms race among agents and rewarding informed traders only.\n\nWe extend standard options to define combinatorial financial options.\nWe are interested in designing a fully expressive combinatorial options market that allows options to be specified on all linear combinations of assets. 
\nIn such a market, traders are granted the convenience and expressiveness to speculate on correlated movements among stocks and to hedge their risks more precisely (e.g., in one transaction, purchase a put option on their exact investment portfolio).\nIncreasing expressiveness may also benefit the exchange: the market is able to incorporate more detailed information to optimize the outcome, improve economic efficiency, and obtain a surplus from trade to split between the exchange and traders.\nLike many other combinatorial markets, the combinatorial options market comes at the cost of higher computational complexity: optimal clearing of such a market is coNP-hard.\nWe propose a constraint generation algorithm to solve for the exact optimal match, and demonstrate its viability on synthetically generated combinatorial options markets of different scales. \n\nTwo immediate questions arise from our work.\n\\emph{First}, can we design computationally efficient matching algorithms by limiting expressivity within a combinatorial options market or imposing certain assumptions on the underlying assets?\nOne next step is to explore naturally structured markets where combinations are limited to components in a graph of underlying assets. \nOne special case is a hierarchical graph \\cite{Guo2009}, for example, the S\\&P\\,500\\xspace, sectors like travel and technology, subsectors like airlines and internet within those, etc.\nAnother direction is to relax our current constraint that guarantees no loss for all possible states, assuming or learning a probability distribution over the security's future price (e.g., recovering probabilities from option prices~\\cite{recover_prob_options}).\n\n\\emph{Second}, when operating such a combinatorial options market, how should an exchange set the market clearing rule?\nWhile the proposed mechanism (\\ref{mech:combo_match}) and matching algorithm (\\Algo{comb_match}) allow flexibility in the timing of market-clearing operations, and so can work in either continuous or periodic (batch) form, adopting an appropriate clearing rule is important and can affect both computational and economic efficiency.\nOur experiments on synthetic combinatorial options markets (Section~\\ref{sec:exp_combo_option}) provide some preliminary understanding.\nResults on markets with low noise suggest potential limitations of continuous clearing: as there is no match among existing orders and the matching probability is low with only one incoming order, the lower-level program may take a long time to refute a large number of matching proposals until convergence. \nSurplus can also be low in a continuous clearing market that looks for immediate trades.%\n\\footnote{We conduct preliminary experiments on synthetic data and find that the average net profit achieved in a market with continuous clearing is about 30\\% lower than that of a market with batch clearing of the same 100 orders.}\nBatch clearing might be suitable for this combinatorial market, achieving a more practical convergence speed and higher surplus.\nThe next question becomes: how frequently should the market clear? 
\nAs discussed in prior work studying proposals to move financial markets from continuous to batch clearing \\cite{budish2014,budish2015,brinkman2017empirical}, choosing the appropriate clearing interval is an art.\nIn our case, it depends on the market scale (e.g., the number of underlying assets and orders) and liquidity (e.g., matching probability), and a designer needs to balance price discovery and matching surplus (longer batches improve trading surplus, but can significantly slow down price discovery).\nAnalysis based on empirical mechanism design~\\cite{emd,brinkman2017empirical} may be useful to help a market designer set the optimal clearing interval and maximize efficiency.\n\nWe leave these questions open for future research.\n\n\\begin{acks}\nWe thank Miroslav Dud\\'ik and Haoming Shen for helpful discussions. \nWe are also grateful to the anonymous reviewers for constructive feedback. \n\\end{acks}\n\n\\bibliographystyle{ACM-Reference-Format}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}