\\section{Introduction}\n\nCommunity (wireless) networks~\\cite{Bruno_2005,Frangoudis_2011} provide independent, community-owned network infrastructure for user communication and data exchange.\nThey are mostly built using standard wireless (IEEE802.11) infrastructure~\\cite{Akyildiz_2005}, by laying their own optical fiber and, more recently, by the use of free-space optical systems~\\cite{Mustafa_2013}.\nThey use, reuse and repurpose existing communication technologies, like inexpensive off-the-shelf WiFi routers, to form a widespread network.\nThese networks can cover everything from local neighbourhoods~\\cite{RedHook_2013}, to cities~\\cite{AWMN} and countries~\\cite{wlanslovenija_2009, guifi_2003, Funkfeuer_2003, Freifunk_2003}.\nTheir common aim is to empower people with new ways of communication and access to wider public networks like the Internet.\nMotivations for such networks are diverse and varied, but often networks are formed out of necessity~\\cite{WNDW_2013}.\nEven in developed countries, many rural areas are underserved by Internet connectivity infrastructure built and offered by traditional Internet service providers, as the population density is too low for investments to be profitable, leaving people to build the needed infrastructure to connect to the Internet themselves.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{figures\/community.pdf}\n \\caption{Illustration of the conceptual difference between proprietary networks (a) and community networks (b).}\n \\label{fig:community}\n\\end{figure}\n\nThe common property of all community networks is that they grow organically~-- there is no central planning body that would decide how and where the network is built, as is usually the case 
with proprietary networks.\nInstead, the network grows in a bottom-up fashion as more people express interest in participating in the community and connect with their neighbours, as illustrated in Figure~\\ref{fig:community}.\nWhen a new person or a new group decides to expand the community network, they can do this themselves by deploying new access points, routers, antennas, and other devices, connecting them to the rest of the network, effectively growing it.\nOften only very local coordination, with direct neighbours, is needed.\nThrough this process, both the community and the network grow further, enabling even more people to participate in the network in the future.\nBecause of this bottom-up growth pattern and community involvement, management of such networks poses some unique challenges:\n\n\\begin{itemize}\n\\item In most community networks, people who maintain the network infrastructure are volunteers with limited free time that they can spend on network management.\nThis makes efficient management very important for network growth.\nBesides efficiency, an important factor in increasing growth is also the accessibility of network management functions to users who do not have deep technical knowledge of computer networks, especially mesh networks.\n\n\\item There are many repetitive tasks in community network operation, mostly related to configuration, deployment and monitoring of network equipment.\nWithout a suitable overall management system, all these steps (flashing\\footnote{Flashing is the process of reprogramming the wireless network device with an unofficial firmware image, commonly Linux based.} and configuring devices, allocating resources, diagnosing problems) need to be done manually, which is both time-consuming and prone to errors.\n\n\\item In addition to technical issues related to device deployment, there is also the need for community coordination so that people know what is going on in the network and can familiarize themselves 
better with its operation and what others are doing.\nIn general, it is not possible to anticipate how, when, and where a new participant will start participating by deploying one of the network's nodes\\footnote{The meaning of a ``network node'' is understood differently in different communities. For the purposes of this paper, it denotes one routing network device, serving as the basic unit of participation.}.\n\n\\item For volunteers, it is important to have feedback on how they themselves are contributing to the network.\nIs the node they maintain heavily used and crucial for its part of the network and its users?\nVolunteers are often motivated by the value they are contributing to the network as a whole, but even when their node is not highly used at the moment, having insight into the operation of the node is important for them to understand that their contribution is something tangible and real, especially for less technically skilled volunteers who might otherwise perceive the node as a black box.\n\\end{itemize}\n\nVarious solutions have appeared in an attempt to address these challenges. 
Each community has developed its own model of operation with its own accompanying set of tools.\nThe problem with this approach is that work is being duplicated between communities and that these various solutions are mostly not interoperable with each other.\nSo why would new communities not reuse existing tools?\nThe problem lies in the fact that these tools have not been built to be customized to the needs and operation of individual communities.\nEach community differs slightly in its vision and operating philosophy, the technological stack it uses and the technical knowledge it has, or the local environment it is in, and this requires customization on several fronts, to name only a few:\n\\begin{enumerate*}[label=\\itshape\\alph*\\upshape)]\n\\item different routing protocols may be used;\n\\item the WiFi equipment used and its operating systems can vary widely between networks;\n\\item some communities use VPN tunnels to establish certain long-range links using different VPN protocols;\n\\item network topologies may differ among communities: some use central clusters of nodes as gateways to peer with other networks, while others use a more distributed topology;\n\\item some communities attach various sensors to nodes and would like to monitor their outputs through time;\n\\item networks can decide to use a captive portal and\/or choose to require users to pay to use the network;\n\\item there may be differences in operation due to local regulations.\n\\end{enumerate*}\n\nIn order to address this, there are at least two approaches that can be taken.\nThe first one is \\textit{the large common base approach}, which tries to create a common system that addresses the needs of as many communities as possible by providing a large feature set and a large configuration schema (database model) encompassing all possible scenarios.\nThis approach has been tried as part of an interoperability effort~\\cite{interop_2010, cnml_2007} established between community 
networks, to come up with a common schema upon which node databases\\footnote{``Node databases'' is a generic term communities use for their network management needs. They can be everything from a simple wiki page with a list of nodes to fully-fledged and specialized software solutions.} could be built.\nThe problem is that it is hard to come up with a one-size-fits-all solution, and large monolithic schemas can quickly become unmanageable.\nMoreover, it requires sustained effort from participating community networks, first to establish the standard and then to keep it updated as technologies and practices evolve.\nBecause community networks are mostly volunteer-run, such sustained participation is unattainable for many community networks.\nAdditionally, it is practically impossible to involve all community networks in this process and there are always new networks with new differences.\nThis process favours large and established network communities.\n\nThe other way is \\textit{the extensible core approach}, where the aim is to create a minimal core with a highly modular and extensible design so that community networks can tailor it to their needs, whatever they might be.\nThis is the approach that we are taking with the \\textit{nodewatcher}{} v3 platform.\nWe make the following novel contributions:\n\\begin{itemize}\n\\item A modular open platform that may be easily tailored to the needs of any community network.\n\\item An extensible per-node firmware image generation system that enables generation of pre-configured images for specific nodes in order to eliminate any manual configuration requirement on the nodes.\n\\item An extensible monitoring system with a scalable time-series data storage backend enabling large-scale collection of status and other telemetry data while supporting interactive visualizations.\n\\item A user interface designed and structured so that it is suitable for both novice and expert users, tailored for collaboration and coordination of 
volunteers.\n\\end{itemize}\n\nThe rest of this paper is organized as follows.\nSection~\\ref{sec:related-work} presents related work done in the area of community network management tools.\nSection~\\ref{sec:platform} presents the design and functioning of the \\textit{nodewatcher}{} platform.\nSection~\\ref{sec:evaluation} shows the results of platform evaluation in the \\textit{wlan slovenija}{} community wireless network.\nSection~\\ref{sec:conclusion} presents conclusions and ideas for future work.\n\n\\section{Related Work}\n\\label{sec:related-work}\n\nA lot of research has been done into individual wireless mesh network building blocks. Among them are routing~\\cite{Murray_2010,Neumann_2012,Neumann_2013}, security~\\cite{Siddiqui_2007}, and analyses of topologies, performance, and mobility~\\cite{Vega_2012,Zakrzewska_2008,Braem_2014,Cerda_2013,Maccari_2015}.\nBut research into community network management solutions and best practices still remains scarce.\nThis is why most of the related work in this area comes from individual community networks, each of which has developed its own solutions, practices and philosophy.\n\nIn this section we survey the most visible network management solutions, both traditional ones and specialized solutions developed by community networks worldwide.\nWe compare them to the approach taken by the \\textit{nodewatcher}{} platform.\n\n\\subsection{Traditional Network Management}\n\nThere are many existing network management systems suited for more traditional (non-community) networks~\\cite{Cacti_2004,Nagios_1999,SmokePing_2001,Zabbix_2004,Puppet_2005,Salt_2011}.\nBy their management function, they can be segmented into two major classes as follows:\n\\begin{description}\n\\item[Monitoring systems] enable the operators to remotely monitor a set of devices to see whether they are reachable and to get some insights into their operation.\nSome of these systems can generate events and notify the operators when errors are detected, while others 
can only visualize the data without interpreting it.\n\n\\item[Configuration management systems] enable the operators to maintain a central repository of device configurations that can be used to provision devices.\nIn these systems, configuration is mostly input using a domain-specific or a scripting language and deployment is done using remote agents that interpret configuration scripts and apply configurations.\n\\end{description}\n\nWhile these systems can be used in community networks, they are not tailored to their specific needs.\nFirst, some of the monitoring and configuration management systems require agents which consume too many resources to run on the simple off-the-shelf network equipment commonly in use in community networks.\nThey might be suitable to monitor some better-equipped network nodes, but would require different systems for managing other parts of the network, which increases the workload of volunteer participants.\nAdditionally, these systems are independent solutions, requiring manual integration which needs to be performed by each community.\nThis increases the chance that each community will pick different solutions and end up with systems which are not interoperable with each other.\nConfiguration management systems are not suitable for embedded systems, or if they are, they target devices of a particular manufacturer, and not the diverse range of often customized devices found in community networks.\n\nBut most importantly, since they have not been designed with community networks in mind, none of these systems provide community coordination capabilities.\nThey are highly technical to set up and use, which makes them hard to configure properly for communities without highly skilled members.\nThey are designed for general networks and do not encode the experiences and particularities of community networks in their source code, which would help new community networks to start with reasonable and tested defaults and configurations.\nReading 
information from monitoring systems is designed for trained network operators, and entering configuration into management systems requires a good understanding of computer networking terminology and operation.\nThey are often developed as an independent codebase, which makes it much harder to integrate them with other open source solutions used by a community network, like community management solutions, and they lack one unified interface, which further confuses novice users.\n\n\\subsection{Node Databases}\n\n\\begin{table*}[t!]\n\\caption{\\label{tbl:comparision}Feature comparison of existing node database systems as of May 2015. In the table we show the developer group of a project, whether it is actively developed (A) and when it was first officially mentioned and deployed (Y). In terms of project functionalities we list network monitoring (NM), federation support (FDR), configuration generation (CF), firmware generator (FW), application program interface (API), resource allocation (RA), link planning (LP), node authentication (NA), topology visualization (TP), and map visualization (MP). We also show which projects can be easily extended in terms of user interface (UI), application program interface (API), network monitoring (NM), support of platforms (PL), schema (SC), and routing protocols (RP). 
In terms of developer accessibility we highlight the documentation status (DOC), programming language (LAN), web framework (WF), and code license (LIC).}\n\n\\bgroup\n\\def1.5{1.5}\n\n\\centering\n\\renewcommand{\\tabcolsep}{2pt}\n\\scriptsize{\n\\begin{tabular}{|p{9.5em}|c|p{3em}|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|p{4em}|c|c|}\n\\hline\n\\multirow{2}{*}{Project (Developers)} & \\multicolumn{2}{c|}{General} & \\multicolumn{10}{c|}{Features} & \\multicolumn{6}{c|}{Modularity\/Extensibility} & \\multicolumn{4}{c|}{Developer Accessibility} \\\\ \\cline{2-23} \n & A & Y & NM & FDR & CF & FW & API & RA & LP & NA & TP & MP & UI & API & NM & PL & SC & RP & DOC & LAN & WF & LIC \\\\ \\hline\nGuifi.net frw. \\newline (Guifi.net) & $\\checkmark$ & 2004 \\newline 2004 & $\\circ$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & S & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & PHP & Drupal & GPLv2 \\\\ \\hline\nAWMN WiND \\newline (AWMN) & $\\checkmark$ & 2005 \\newline 2005 & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & S & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & PHP & custom & AGPLv3 \\\\ \\hline\nFFM\/CNDB \\newline (FunkFeuer) & $\\checkmark$ & 2012 \\newline 2015 & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & PY & custom & BSD \\\\ \\hline\nNodeshot \\newline (Ninux) & $\\checkmark$ & 2011 \\newline 2011 & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & D & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\circ$ & $\\checkmark$ & $\\checkmark$ & PY & Django & GPLv3 \\\\ 
\\hline\nNetmon \\newline (Freifunk) & $\\checkmark$ & 2009 \\newline 2009 & $\\uparrow$, $\\downarrow$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\times$ & D & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & PHP & custom & GPLv3 \\\\ \\hline\nLibreMap \\newline (Altermundi) & $\\checkmark$ & 2013 \\newline 2013 & $\\uparrow$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & D & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & JS & custom & GPLv3 \\\\ \\hline\nkalua \\newline (weimarnetz.de) & $\\checkmark$ & 2001 \\newline 2001 & $\\uparrow$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & D & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & Sh, PHP & custom & GPLv2 \\\\ \\hline\nmeshviewer \\newline (Freifunk L\\\"{u}beck) & $\\checkmark$ & 2015 \\newline 2015 & $\\uparrow$, $\\downarrow$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & D & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & JS & custom & AGPLv3 \\\\ \\hline\nK-Net \\newline (DTU Students) & $\\checkmark$ & 1996 \\newline 1996 & $\\downarrow$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\circ$ & $\\checkmark$ & $\\times$ & D & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & C, C++, PY, HS & Django & none \\\\ \\hline\nGeronimo \\newline (Opennet Initiative) & $\\checkmark$ & 2011 \\newline 2011 & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & D & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\circ$ & $\\times$ & $\\checkmark$ & PHP, PY & Django & 
none \\\\ \\hline\nOndataservice \\newline (Opennet Initiative) & $\\times$ & 2009 \\newline 2009 & $\\uparrow$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & D & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & C, PHP & custom & BSD \\\\ \\hline\nNCD \\newline (Routek\/qMp\/Guifi) & $\\checkmark$ & 2014 \\newline 2015 & $\\downarrow$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & D & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & Lua, JS & D3.js & GPLv3 \\\\ \\hline\nmanman \\newline (Funkfeuer Graz) & $\\times$ & 2006 \\newline 2006 & $\\downarrow$ & $\\times$ & $\\times$ & $\\times$ & $\\circ$ & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & RB, PL & Rails & none \\\\ \\hline\nnodewatcher v2 \\newline (wlan slovenija) & $\\times$ & 2009 \\newline 2009 & $\\downarrow$ & $\\times$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & $\\times$ & $\\times$ & D & $\\checkmark$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & $\\times$ & PY & Django & AGPLv3 \\\\ \\hline\nnodewatcher v3 \\newline (wlan slovenija) & $\\checkmark$ & 2012 \\newline 2015 & $\\uparrow$, $\\downarrow$ & $\\circ$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\times$ & $\\checkmark$ & D & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & $\\checkmark$ & PY, C & Django & AGPLv3 \\\\ \\hline\n\\end{tabular}\n}\n\n\\egroup\n\\end{table*}\n\nMany wireless mesh network communities have quickly recognized the need for having a central system that would be able to manage the growing number of wireless mesh nodes.\nWe present a comprehensive overview of properties of 
existing solutions in Table~\\ref{tbl:comparision}. The data was obtained by studying the public information available about the projects (e.g. source code) and verified by reaching out to the members of the corresponding communities.\nWe list several properties of community network management solutions, starting with the name of the project and the community that forms its core development group.\nWe show whether the project is being actively developed (A) and when it was first officially mentioned and deployed in an actual network (Y).\nProperties are then organized into three major areas, namely features (functionality that the system has), modularity\/extensibility (whether the individual functionalities can be extended without modifying the core) and developer accessibility.\nFor most of the features, we list whether the described system implements the feature fully ($\\checkmark$), only partially ($\\circ$) or not at all ($\\times$).\nIn case of network monitoring (NM), we also list whether the system supports pulling data from nodes ($\\downarrow$) and if nodes can push data to the system ($\\uparrow$).\nFor topology visualization (TP), we mark whether the system renders the topology just based on configuration (S) or it can also display live topology as it changes based on some routing protocol (D).\n\nTwo of the oldest and largest mesh networks are Guifi.net~\\cite{Guifinode_2003,Vega_2012} and the Athens Wireless Metropolitan Network (AWMN)~\\cite{AWMN_WIND_2002}.\nAs their node database solutions evolved with their respective networks, both are tailored to the specific structures of their networks and their management.\nTheir codebases are monolithic, making them hard to extend with new features or customizations.\nThis applies to the schema (which lacks any kind of object-relational mapping~\\cite{ONeil_2008} and is therefore hard to manage when developing extensions), the frontend interface and core functionalities.\nAWMN WiND is built on a custom web framework, 
further limiting the ease of adoption by a new community network.\nAdditionally, advanced network monitoring functionality is missing, requiring the use of external utilities, which causes duplication of configuration and is an error-prone process (see Section~\\ref{sec:network-monitoring}).\nThe Guifi.net framework does support including network monitoring graphs, but it requires the use of external monitoring services to perform the actual monitoring.\n\nTwo more recent representatives of community network node databases are Nodeshot~\\cite{Nodeshot_2012} and FFM\/CNDB~\\cite{Funkfeuer_2012}, developed by Ninux and FunkFeuer community networks, respectively.\nNodeshot is an extensible web application for management of community geographic data.\nIt focuses on mapping features with a more modular approach where various functionalities are extracted into modules.\nThe modularity makes extending the application easier, but in contrast to \\textit{nodewatcher}{}, there is no common approach to managing schema extensions which makes the database models hard to extend in a modular fashion.\nThere is some monitoring support, but it is mostly limited to device discovery in an existing network and is not meant for long-term monitoring and diagnostics.\n\nFFM\/CNDB features an extensive database schema which models everything from devices to companies and people.\nWith its very detailed modelling of individual objects it is an example of the large common base approach.\nSimilar to Nodeshot, there is no common approach to managing schema extensions and there is also no monitoring support.\nIt uses a custom web framework with almost nonexistent documentation, having a steep learning curve before extending and adapting the system to another community network is possible.\nWhile the Guifi.net framework does support generating device configuration, none of the existing solutions support generating functional and pre-configured firmware images that can be directly flashed to 
devices.\nThis would reduce the required administration burden and make the whole process easier for novices.\nAs a consequence, these solutions also do not support comparing the running configuration with the provisioned one and detecting misconfigurations and unwanted changes.\nThis makes it impossible to detect possible issues in advance, and slows down debugging once issues occur.\n\nIn this section we have presented an overview of existing community network management solutions.\nFrom these, we summarize a list of open problems:\n(a) existing solutions mostly lack modularity and easy extensibility in several areas including user interface, network monitoring, schema, configuration\/firmware generation and routing protocols;\n(b) there is limited support for generating configuration, and no support for generating pre-configured firmware images;\n(c) there is limited support for interactive display of time-series monitoring data.\nThe next section presents possible solutions to these problems.\n\n\\section{Platform}\n\\label{sec:platform}\n\nThe design and development of the \\textit{nodewatcher}{} platform comes from the needs and evolution of the \\textit{wlan slovenija}{} community wireless network.\nIn \\textit{wlan slovenija}{} we had similar needs and issues as other community networks, and to address them we iteratively developed our own node database system, tailored to our needs and practices.\nThese were versions v1 and v2 of the \\textit{nodewatcher}{} platform.\nBut as we started to collaborate with other community networks, we soon discovered many of the broader issues outlined in Section~\\ref{sec:related-work}.\nWe learned that if we want our development efforts to also benefit other community networks and help grow new ones, a different approach is needed: one where we do not focus development only on the needs of our own network, but think about other networks and their use of the platform from the very beginning.\nAs a consequence, the 
\\textit{nodewatcher}{} v3 platform has been designed to be maximally extensible and reusable among different communities.\nThis section describes all the components of the platform, beginning with a quick high-level overview of the architecture.\n\n\\subsection{Overview}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{figures\/device-mgmt-cycle.pdf}\n \\caption{Illustration of the device management cycle in a \\textit{nodewatcher}{}-supported community network. Traditionally the configuration (a), generation (b) and monitoring (e) steps are performed manually, while \\textit{nodewatcher}{} enables automation of these tasks, freeing up resources inside the community. On the other hand, the deployment step (c) is still community-driven, allowing the network to automatically grow and adapt with it.}\n \\label{fig:device-mgmt-cycle}\n\\end{figure}\n\nThe core idea behind \\textit{nodewatcher}{} is the \\textit{device management cycle}, illustrated by Figure~\\ref{fig:device-mgmt-cycle}.\nThe management cycle is a model of how devices are managed in a community network.\nFirst, members of the community (usually the node maintainer) decide that a node should be deployed or reconfigured.\nThis is handled in the \\textit{configuration stage} (Figure~\\ref{fig:device-mgmt-cycle}, step a), where devices are configured to be used at a specific location in the network.\nAt this stage, the device configuration does not depend on the hardware that will be later used to deploy the device, making it platform-independent (Section~\\ref{sec:platform-independent-configuration}).\nConfiguration can be adjusted by members of the community using a dynamically generated web interface (Section~\\ref{sec:form-generation}), reflecting the schema that is used for describing configuration.\nAdditionally in the course of configuration, \\textit{nodewatcher}{} may automatically allocate the resources (e.g. 
IP addresses) needed for proper functioning of a device in the network, taking into account any defined allocation policies (Section~\\ref{sec:resource-allocation}).\nThe next stage is the \\textit{generation stage} (Figure~\\ref{fig:device-mgmt-cycle}, step b), where platform-independent configuration is transformed into device- and platform-specific configuration. \nIf the transformation is successful, a firmware image, suitable for the target device, may also be generated (Section~\\ref{sec:firmware-generator}) in order to ease deployment.\nIn case of a successful outcome of the generation process (Figure~\\ref{fig:device-mgmt-cycle}, step c), the next stage is the \\textit{deployment stage} (Figure~\\ref{fig:device-mgmt-cycle}, step d), in which members of the community apply the firmware to a suitable target device and deploy the device at the desired location.\nAs soon as the device is deployed and is able to join the network, it is monitored by the platform (Section~\\ref{sec:network-monitoring}).\nIn the \\textit{monitoring stage} (Figure~\\ref{fig:device-mgmt-cycle}, step e), the device is actively validated for correct operation in the context of the whole network.\nIf errors in configuration or other problems are detected, the device's maintainer is notified (Figure~\\ref{fig:device-mgmt-cycle}, step f), so the device may be fixed and\/or reconfigured, repeating the cycle.\n\nThe \\textit{nodewatcher}{} platform aims to provide components for all stages of the described device management cycle, automating repetitive tasks, freeing the community's human resources and lowering the technical skill level required for active participation in the community network. 
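As an illustration, one pass through the device management cycle can be sketched as a simple Python loop over stages; all class and method names below are hypothetical stand-ins for the components described above, not the actual \textit{nodewatcher}{} API:

```python
from enum import Enum, auto

class Stage(Enum):
    """Stages of the device management cycle (steps a, b, c/d, e)."""
    CONFIGURE = auto()
    GENERATE = auto()
    DEPLOY = auto()
    MONITOR = auto()

def management_cycle(node, platform):
    """One pass through the cycle; returns the stage the cycle settles in.

    `node` carries platform-independent configuration and a maintainer;
    `platform` bundles the generator/monitor backends (both hypothetical).
    """
    config = platform.configure(node)       # step (a): platform-independent config
    image = platform.generate(config)       # step (b): device-specific firmware image
    if image is None:                       # validation failed: back to configuration
        return Stage.CONFIGURE
    platform.deploy(node, image)            # steps (c/d): flash and deploy the device
    problems = platform.monitor(node)       # step (e): validate live operation
    if problems:                            # step (f): notify the maintainer
        platform.notify(node.maintainer, problems)
        return Stage.CONFIGURE              # device is fixed/reconfigured, cycle repeats
    return Stage.MONITOR                    # healthy device stays under monitoring
```

The point of the sketch is that every failure path feeds back into the configuration stage, which is what keeps the cycle closed.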
\nEach part is designed to be easily extensible to networks with various topologies, routing protocols, operating systems and hardware devices.\n\n\\subsection{Platform-independent Configuration}\n\\label{sec:platform-independent-configuration}\n\n\\begin{figure}\n\\centering\n\\begin{lstlisting}[language=Python,frame=single,basicstyle=\\ttfamily\\footnotesize,breaklines=true]\n# Module A\nclass InfoCfg(RegistryItem):\n name = CharField()\n\n# Register schema item into the schema which makes\n# it available to any other module.\nregistration.point('node.config').register_item(\n InfoCfg\n)\n\n# Module B\nclass ExtendedCfg(module_a.InfoCfg):\n device = ChoiceField('core.general#device')\n version = IntegerField()\n\nregistration.point('node.config').register_item(\n ExtendedCfg\n)\n\\end{lstlisting}\n\\caption{A simplified example of a schema definition for a node's name and used device hardware, split over two modules to show extension capabilities.}\n\\label{fig:schema-node-general}\n\\end{figure}\n\nCommunity networks are built using a wide range of devices, containing everything from off-the-shelf home routers to specialized devices used for backbone links and regular servers.\nThe \\textit{nodewatcher}{} uses an extensible platform-independent schema to describe configuration for all these types of nodes, regardless of their hardware and\/or operating system.\nOne of the motivations behind this choice is that platform-independent configuration enables replacement of devices without the need to do re-configuration even when a replacement device is of a different model or even manufacturer.\nIt is a frequent occurrence that deployed devices need to be replaced when they stop functioning properly due to various hardware failures.\nInstead of taking down the hardware device, spending time on the roof or in the laboratory to fix it, and then taking it back to install it, with extended downtime during that period, it is better that the device is immediately replaced 
with a new one carrying exactly the same configuration as the previous one.\nThe broken device can then be fixed without hurry and reused at some other location.\nBecause community networks are mostly operated by volunteers, urgent repairs place extra demands on their scarce time.\nAdditionally, replacement of a broken hardware device can be done easily by one volunteer, while fixing a device can be done at a later time by another volunteer.\nObtaining exactly the same device for a replacement is hard in community networks, which utilize diverse hardware, so a way to apply the same configuration to a new and different hardware device is needed.\n\n\begin{figure}\n \centering\n \includegraphics[scale=0.47]{figures\/registry-example-models.pdf}\n \caption{Schema item definition from Figure~\ref{fig:schema-node-general}, displayed using an entity relationship diagram.\n The \textit{core} module only contains a minimal \texttt{Node} while any attributes are provided by attached schema items in other modules.}\n \label{fig:registry-schema-example}\n\end{figure}\n\nHowever, while having a platform-independent configuration is a noble goal, some configuration properties depend on features which are inherently device-dependent (for example the number of Ethernet ports, available wireless radios, supported protocols, etc.).\nIn such cases, the user editing the platform-independent configuration may create a configuration which will fail to work when applied to the target device.\nThis can further delay problem discovery until the \textit{deployment} stage, when it is already too late and costly to fix problems, especially in wireless networks where nodes may be deployed in hard-to-reach locations.\nThis clearly shows the need to have instant validation and feedback (step c in Figure~\ref{fig:device-mgmt-cycle}) when updating platform-independent configuration.\nSuch validation must be based on the selected target
device with all its hardware and software properties.\nThe \textit{nodewatcher}{} enables instant validation, which is handled by the firmware generator component (see Section~\ref{sec:firmware-generator}).\n\nIn the introduction, we mentioned the problem of attempting to design an all-encompassing schema or a single node database application that would cover every possible deployment of community networks.\nCommunities will usually have some specifics regarding their operations~-- for example, because of different local regulations, hardware availability, or differences in philosophy.\nHaving a single unified schema can quickly become a limiting factor that prohibits straightforward adaptation of the system for the local community.\n\n\textit{nodewatcher}{} avoids this problem by making the platform-independent schema itself completely extensible.\nIndividual modules may register schema items and the final schema is the union of all these items.\nAn example of a schema item definition is shown in Figure~\ref{fig:schema-node-general}, where several properties of the schema extension mechanism can be seen:\n\begin{itemize}\n \item The schema items are class-based, which means they can be extended later on by other modules (in the example, \texttt{ExtendedCfg} from module B augments a simpler \texttt{InfoCfg} provided by module A to insert additional fields).\n \item Fields that represent enumerations (in the example, \texttt{device} is such a field) do not hard-code the possible options, but only provide an \textit{extension point} where additional choices can later be registered by other modules.\n Each such extension point is attached to a unique name (e.g. \texttt{core.general\#device}) that may be referenced later when extending it.\n \item Schema items may also reference other items in order to build hierarchical configuration trees.
This functionality may be used to define relationships, for example when a single wireless radio, defined by a schema item, may contain multiple virtual network interfaces with their own ESSID configurations, defined by other schema items (not shown in the example due to limited space).\n\end{itemize}\n\nThe system that supports such schema item registrations is called the \textit{registry}.\nIt is essentially a lightweight extension of the standard object-relational mapper (ORM) concept~\cite{Bernstein_2007,ONeil_2008}, as the mentioned schema items are actually database models.\nIn the platform, it is implemented using the ORM of the popular Django web framework~\cite{django_2005}.\nThe problem it aims to solve is that of simplified model discovery.\n\nIn an extensible platform like \textit{nodewatcher}{}, modules may want to query on fields defined by other modules somewhere in the schema.\nReusing the previous example, there may be a schema item called \texttt{InfoCfg} in the base schema, enabling users to configure a \texttt{name} for a node (an entity relationship diagram corresponding to the schema definition from Figure~\ref{fig:schema-node-general} is shown in Figure~\ref{fig:registry-schema-example}).\nThis base item does not provide any other fields besides the node name, leaving potential extensions to other modules.\nSuppose that we would like to add two more fields which would specify, using an extensible choice field, what device is in use on a specific node, together with some version information (in reality, version specification is more complex than a single integer, but we simplify this for our example).\nWe may do that in another module by defining \texttt{ExtendedCfg} as shown in Figure~\ref{fig:schema-node-general}.\nNow suppose another module would like to perform a query, listing only nodes that use a device called \texttt{tp-wr741ndv4}.\nAs mentioned, these two classes actually represent ORM model definitions, so a traditional query traversing these
two relations could be written using the SQL-like relational query notation introduced in~\cite{ONeil_2008}:\n\begin{lstlisting}[language=sql,frame=single,basicstyle=\ttfamily\footnotesize,breaklines=true]\nSELECT n FROM Node n\nWHERE n.infocfg.extendedcfg.device = 'tp-wr741ndv4'\n\end{lstlisting}\n\nHere we assume that there is a one-to-one relation between a \texttt{Node} and \texttt{InfoCfg}.\nWhile one-to-many relations are also supported (so each node can have multiple instances of a model, for example multiple network interfaces), we limit ourselves to this simple case for ease of exposition.\nNote how even in this very simple example we had to explicitly specify the hierarchical path spanning the two relations to reach the desired field.\nUnder the requirement of extensibility, and with even more complex schemas, such a method quickly becomes developer-unfriendly.\nOne of the features that the registry enables is simplification of the same query to:\n\begin{lstlisting}[language=sql,frame=single,basicstyle=\ttfamily\footnotesize,breaklines=true]\nSELECT n FROM Node n\nWHERE REGISTRY info.device = 'tp-wr741ndv4'\n\end{lstlisting}\n\nIn this case, \texttt{info} is the \textit{registry identifier} that we attached to the base model using its meta \textit{Registry ID} attribute (see Figure~\ref{fig:registry-schema-example}); it represents the base model or any of its subclasses.\nIn the background, the proper relations are automatically deduced and the query executed.\nThis abstracts away the subclass relationships, enabling easier refactoring and improving code readability.\nSimilar extensions are also provided for simplified fetching of fields deeply nested in the schema by deducing and performing the required table join operations.\n\nIn order to bootstrap module development, \textit{nodewatcher}{} already provides a minimal base schema for platform-independent node configuration.\nTo construct it, we have surveyed
all the different solutions mentioned in Section~\ref{sec:related-work} and included a small set of items common to most of the communities.\nBut even these base schema items are just module registrations, which makes them removable in case some community really needs to change the baseline of how configuration is organized.\nUsing such an approach means that communities are not forced to adapt to a specific platform.\nInstead, it empowers them to adapt the platform for their own use cases.\nShould such modifications prove to benefit other communities as well, their modular nature also makes them easy to reuse.\n\n\subsubsection{Extensible User Interface}\n\label{sec:form-generation}\n\n\begin{figure}[t]\n \centering\n \includegraphics[scale=0.5]{figures\/defaults-rules-tree.pdf}\n \caption{Example showing a compacted expression tree for specification of context-sensitive defaults for a wireless ESSID configuration. Hexagons are rule condition expressions while squares are action expressions.}\n \label{fig:defaults-rules-example}\n\end{figure}\n\nHaving an extensible configuration schema is a good step towards reusable modules shared between community networks.\nBut a schema is only useful for module developers that need a place to store various configuration values.\nFor platform users, the frontend (web interface) is even more important.\n\nThere are two issues regarding user interaction with the configuration schema:\n(\textit{a}) there must be a way for the users to enter configuration values conforming to the specified schema;\n(\textit{b}) since the schema may be complex, there needs to be a way for project maintainers to specify context-sensitive configuration defaults.\nIssue (\textit{a}) is addressed in \textit{nodewatcher}{} by the registry API's ability to automatically generate a user interface (forms) conforming to the schema.\nAutomatic form generation simplifies the module development process and
reduces code duplication.\nThe automatically generated forms may be customized by the module developers where needed, but even the defaults are immediately usable for simple schema items.\nAddressing issue (\textit{b}) is especially important in order for community networks to be more accessible to people who do not have deep technical knowledge of how to configure devices.\nHaving the ability to define sensible defaults for such users is a feature that enables the community to grow by also including them.\nA possible solution would be for network maintainers to provide pre-defined templates of configuration defaults.\nThe problem with static templates is that defaults may differ when applied to devices with different capabilities or a different project; in other words, the defaults may be context-sensitive.\nOne would then be required to create static templates for all combinations, which quickly becomes unmanageable due to a combinatorial explosion in the number of required templates.\n\n\textit{nodewatcher}{} takes a different approach, enabling specification of defaults in the form of simple declarative rules.\nThe example in Figure~\ref{fig:defaults-rules-example} shows context-sensitive default wireless ESSID configuration.\nRules in this example only apply to a specific project and configure two virtual interfaces (VIFs), one in mesh mode and the other in access point mode, in case the radio supports them.\nIf the radio does not support configuring virtual interfaces, only one network may be set, so a single mesh mode interface is configured.\nThe benefit of using a declarative approach for specifying rules instead of an imperative one is that the rules may be evaluated only when needed~-- in the above example, the inner rules will be evaluated only when the project changes and not when any other fields in the schema change.\nThis is an important detail as defaults should not overwrite configuration when changing an unrelated
setting.\nEvaluation of these rules is achieved by first generating an expression tree and then lazily evaluating only those sub-trees which contain expressions that match the current configuration value change.\nIn order to detect which rules have already been evaluated, forms generated by the registry contain internal state.\n\nAutomatic form generation combined with context-sensitive defaults also makes it possible to generate different forms for the same schema, for example, depending on user settings or permissions.\nNovice users may otherwise get confused by the sheer amount of things to configure, so simplified configuration forms may be shown to them while defaults are configured in the background.\n\nIn addition to configuration forms, \textit{nodewatcher}{} also includes interfaces which allow developers to build upon this modularity when reading, displaying and\/or visualizing data for the user.\nWe extended the Django templating engine to allow modules to override or extend existing templates in a cascading way~\cite{Overextend_2013}.\nBy structuring templates appropriately, this enables developers to augment or change any server-side rendered content.\nThey can modify and define new menus, add new partials, or wrap or change existing ones.\nJust as we can automatically render forms based on the registry ORM, we can provide automatically generated REST endpoints\/APIs for all the data that is stored in these schemas.\nAs a consequence of all this, the community can quickly prototype new solutions over the existing base, or just use the exposed REST APIs in their own custom solutions or frameworks.\n\n\begin{figure*}\n \centering\n \includegraphics[scale=0.5]{figures\/firmware-buildsystem.pdf}\n \caption{An overview of the firmware build system, progressing from platform-independent configuration in the first stage to the fully configured firmware image that may be flashed directly onto the target device in the final
stage.}\n \label{fig:firmware-build-system}\n\end{figure*}\n\n\subsubsection{Resource Allocation}\n\label{sec:resource-allocation}\n\nAs in any network, there is also a need to perform IP address resource allocation in community networks.\nThis is especially the case in IPv4-based networks, where it is hard to automatically generate node addresses without collisions due to the small available address space.\n\nTo support this, \textit{nodewatcher}{} implements a hierarchical buddy allocation scheme~\cite{Peterson_1977} extended with support for hold-down timers.\nAt the top level, the IP space is split into multiple pools from which other objects (for example nodes) may request specific allocations.\nHold-down timers are necessary to avoid collisions with nodes that have been recently removed from the database.\nWhen an allocation is freed, it is still marked as \textit{reserved} until the hold-down timer expires.\n\nThis is especially necessary in community networks where there is limited coordination between volunteers.\nNodes may be removed from the database, but not yet permanently removed from the network, and may actually reappear at a later time, causing routing conflicts.\n\n\subsection{Firmware Generator}\n\label{sec:firmware-generator}\n\nTraditionally, in the generation stage of the device management cycle, devices are configured manually before they are deployed.\nThis is usually done either through a command-line interface via telnet or secure shell (SSH) or through a web-based user interface running on the device, depending on the device's firmware.\nIn both cases, however, this is an error-prone process due to manual user input.\nMistakes can easily happen and sometimes they might propagate to the deployment stage, where they are hard and costly to fix.\nIn community wireless networks, devices are sometimes deployed in hard-to-reach locations like rooftops or high towers, and fixing certain problems requires physical access to the
device.\n\n\subsubsection{Transforming Configuration}\n\nAs described in Section~\ref{sec:platform-independent-configuration}, \textit{nodewatcher}{} can be used to store device configurations in a platform-independent way using the schema items exposed by the registry.\nBut this configuration cannot be used on devices directly.\nDifferent operating systems like the open source OpenWrt~\cite{OpenWrt_2004} or the proprietary RouterOS~\cite{RouterOS_1995}, which are frequently used on devices in community networks, have completely different ways of being configured.\nThis is why an additional platform-dependent transformation step is needed.\n\nHowever, such a step introduces some additional challenges that prevent a straightforward transformation of any configuration based on the platform-independent schema to a device-specific one.\nThere are differences between operating systems which may prevent certain configurations from being properly instantiated on one operating system even when those same configurations work without issues on another one.\nAdditionally, even on the same operating system, devices have different capabilities due to differences in their hardware.\nConfiguration which was platform-independent and unaware of the target device in the first stage can produce problems when applied to a specific device and operating system.\nThis conflict may result in some unpleasant, but realistic, scenarios:\n\begin{enumerate}[label=\roman*)]\n\item Devices have different default network switch layouts, VLAN tags and interface names.\nAs nodes usually use the WAN-designated port for the internet uplink and the LAN-designated port for routing to nodes in the same location, such a misconfiguration will cause connectivity issues.\n\n\item Configuration of some wireless authentication mechanisms requires the installation of specialized packages on some operating systems.\nWithout them even an otherwise valid configuration
will not work.\n\n\item Different devices have different radio capabilities.\nFor example, some devices only support IEEE802.11a channels, and if the configuration system is not aware of this, blind configuration transformation to the target platform results in a failure to bring up the wireless device.\n\end{enumerate}\n\nThese scenarios show that making informed decisions in the configuration transformation process requires the use of a {\em device database}.\n\textit{nodewatcher}{} takes a declarative approach to device descriptors, which enumerate all the hardware and software properties of a given device.\nA declarative descriptor is composed of multiple fields as follows:\n\begin{itemize}\n\item A unique model identifier in the form of \texttt{tp-wr741ndv1} which identifies a specific version of a device model in the database. The version is part of the identifier because there may be multiple hardware revisions of the same model, often with substantial differences.\n\n\item A name used for representation of the device in user interfaces, together with the name of the manufacturer and a URL to its website.\n\n\item The hardware architecture (e.g. \texttt{ar71xx}) used by the CPU of the device.\n\n\item A list of wireless radios embedded on the device.\nEach radio contains a unique identifier, a name and a list of supported IEEE802.11 protocols and features.\nIt also contains a list of physical antenna connectors so that the configuration system may know to which port a specific antenna is attached in case there are multiple antenna ports available on the device.\n\n\item A list of Ethernet switches connected to the CPU, together with VLAN tag information and designation of the ports connected directly to the CPU.\n\n\item A list of Ethernet interfaces and their connections to declared switches together with their VLAN tag definitions.
\nDifferent devices have different switch and Ethernet port configurations, and these two options abstract all of this into simpler identifiers like \texttt{lan0} and \texttt{wan0}.\n\n\item A list of antennas that are included in the device package.\nEach antenna entry contains the radio propagation properties which characterize the antenna.\n\end{itemize}\n\nDevice descriptors may be subclassed in order to simplify definitions of new devices with only slight variations.\nThis feature greatly reduces the time needed to fully support new devices, which is important in quickly evolving community networks.\n\nUsing device descriptors and the platform-independent configuration provided in the first stage, \textit{nodewatcher}{} is able to generate device-specific configuration using a transformation step (see Figure~\ref{fig:firmware-build-system} for an overview of the whole process).\nThe transformation step is built from a pipeline of modules, each of which gets the platform-independent configuration as input and may produce modifications to the device-specific configuration as its output.\nSuch a modular transformation step ensures that the pipeline can be adapted to a wide range of transformations (for example, supporting various routing protocols, sensor inputs, network configurations, etc.), so everything that a target device and operating system support may be used.\nThe transformation module pipeline may also raise errors when parts of the input configuration cannot be properly transformed.
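\nTo make the pipeline structure concrete, the following minimal sketch shows one possible shape of such a transformation pipeline; all names in it are illustrative assumptions and do not correspond to \textit{nodewatcher}{}'s actual API.\n\begin{lstlisting}[language=Python,frame=single,basicstyle=\ttfamily\footnotesize,breaklines=true]\n# Illustrative sketch only; these names are not\n# nodewatcher's actual API.\nclass TransformationError(Exception):\n    pass\n\ndef wireless_module(config, descriptor, output):\n    # One pipeline module: map platform-independent\n    # radio configuration onto a device-specific\n    # wireless section.\n    for radio in config.get('radios', []):\n        caps = descriptor['radios'][radio['id']]['protocols']\n        if radio['protocol'] not in caps:\n            raise TransformationError(\n                'radio %s does not support %s'\n                % (radio['id'], radio['protocol']))\n        output.setdefault('wireless', []).append(\n            {'device': radio['id'], 'ssid': radio['essid']})\n\ndef run_pipeline(modules, config, descriptor):\n    # Each module reads the platform-independent input\n    # and appends to the shared device-specific output;\n    # errors are collected instead of aborting.\n    output, errors = {}, []\n    for module in modules:\n        try:\n            module(config, descriptor, output)\n        except TransformationError as e:\n            errors.append(str(e))\n    return output, errors\n\end{lstlisting}\nStructuring modules around a shared output object is what allows independently developed modules to extend the pipeline without further coordination.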
These errors are immediately visible to the user who is entering configuration via \textit{nodewatcher}{}'s web interface (as a part of step c in Figure~\ref{fig:device-mgmt-cycle}).\nThe validation system will not save a configuration which has outstanding errors, preventing invalid configurations from being used to deploy devices.\n\nTo further ease deployment, the resulting device-specific configuration can be automatically packaged together with a firmware image which can then be flashed directly onto the target device.\nPre-generated firmware images further reduce the room for errors as no configuration needs to be transferred separately or entered manually.\nAfter flashing, devices boot directly into the configured and known state, ready to be used and deployed.\nBesides reducing errors, packaging software (firmware) together with its configuration is also beneficial for ensuring that the configuration really is applicable to the used software versions, as both can be tested together.\nOtherwise, using a stale configuration on a newer operating system or newer versions of some packages may result in failing devices.\n\nBundling firmware together with node-specific configuration is a novel way of provisioning devices in community networks that is beneficial to existing and emerging networks alike.\nIt makes the whole device preparation process fast, which allows volunteers to focus their time on deployment of the device on location.\nWhile experienced volunteers, who often help maintain multiple nodes, benefit from this streamlined node preparation process that minimizes time spent and possible errors, the important advantage is that the whole process also becomes accessible to novice volunteers who are preparing their device for the first time.\nThey can register a new node, use the provided defaults, select the hardware target, generate the image, flash it onto the device, reboot, and the device is ready to be used.\n\nMoreover,
the process of node preparation is repeatable and can be easily duplicated, which helps in debugging any potential issues.\nIf issues with a deployed node are observed, instead of having to retrieve the device and debug it, a new device with exactly the same configuration and firmware can be recreated in a laboratory environment.\n\n\begin{figure*}\n \centering\n \includegraphics[scale=0.5]{figures\/monitoring-pipeline.pdf}\n \caption{An overview of the \textit{nodewatcher}{} monitoring components. Telemetry data is collected on the devices by \texttt{nodewatcher-agent} modules and is then transported over HTTP(S) to the monitoring pipeline. This can be done either by pulling from the nodes or by the nodes pushing data to nodewatcher. One of the modules in the pipeline is the datastream processor which stores all the historical data as time series, supporting later interactive visualization through time.}\n \label{fig:monitoring-pipeline}\n\end{figure*}\n\n\subsubsection{Supporting Hardware Diversity}\n\nDifferent community networks use various hardware devices, and new hardware is being developed all the time.\nA network management platform is therefore only really useful if it enables easy inclusion of new devices and even new operating systems.\nTo enable this, \textit{nodewatcher}{} splits hardware support into different components, which may be provided by independent modules:\n\n\begin{description}\n \item[Runtime platform.] The runtime platform is dependent upon the operating system that runs on the target device.\n OpenWrt and RouterOS are examples of runtime platforms.\n But there is no hardcoded concept of how a platform should behave.\n What defines the runtime platform are the transformation modules, which contain the logic of converting the platform-independent configuration into something that can be understood by the target device.\n\n \item[Firmware builders.]
Separate from the runtime platform, the system includes one or more firmware builders.\n Each contains a set of tools (also called a {\em toolchain}) which are able to generate firmware images that may be copied directly onto the target device.\n For devices running proprietary operating systems, such builders may not even exist.\n Decoupling the builders from the runtime platform means that proprietary runtime platforms can also be supported.\n In such cases, configuration will still need to be applied manually.\n\n \item[Hardware device descriptors.] These are the device descriptors that we have already defined.\n They provide the transformation modules with the knowledge required to correctly adapt the platform-dependent configuration to the target device.\n Since a device may in theory support different runtime platforms (e.g. some Mikrotik devices may use either OpenWrt or RouterOS), the same device descriptor can be reused by multiple runtime platforms.\n In this case, the platform-specific properties are specified for each platform, while the common ones are specified only once for each device.\n\end{description}\n\nAs seen in Figure~\ref{fig:firmware-build-system}, firmware builders are kept separate from the runtime platform, so that proprietary systems may be supported.\nThe link between the runtime platform and the firmware builders is the platform-specific configuration.\nThis configuration is the output of the transformation modules defined for the target runtime platform.\n\nThe tools used to build firmware images for embedded devices can be complex and may vary wildly between runtime platforms.\nThis is why the firmware build system in \textit{nodewatcher}{} has been designed in such a way that it can be used with completely different sets of tools in a modular fashion.\nThe build system is structured into multiple Docker containers~\cite{Docker_2013}.
Docker containers are a lightweight wrapper around the Linux namespacing API and filesystem layers, with the goal of providing an interface for packaging applications in a reusable and extensible way. Namespaces provide container isolation (the containers still share the host kernel), so that adjacent containers running on the same host are unable to see or influence each other's processes, network configuration, etc. Each container can be thought of as a very lightweight virtual instance, but without the overhead of running a full virtual machine with its own kernel.\n\textit{nodewatcher}{} uses these Docker container features to generate and run firmware image builders for multiple runtime platforms.\n\nAfter the platform-specific configuration has been generated by the transformation step of the given runtime platform (see the node \textit{Device and OS Specific Configuration} in Figure~\ref{fig:firmware-build-system}), it is directed to the suitable builder, which is selected based on the hardware architecture specified in the device descriptor.\nOnce the firmware images are built, they are made available to the user via the \textit{nodewatcher}{} frontend.
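\nAs an illustration of this dispatch step, the following minimal sketch shows how a builder container could be selected based on the architecture declared in the device descriptor; the image names and the \texttt{BUILDERS} registry are hypothetical assumptions, not actual \textit{nodewatcher}{} code.\n\begin{lstlisting}[language=Python,frame=single,basicstyle=\ttfamily\footnotesize,breaklines=true]\n# Hypothetical sketch; image names and the BUILDERS\n# registry are illustrative, not nodewatcher code.\nBUILDERS = {\n    'ar71xx': 'openwrt-ar71xx-builder',\n    'ramips': 'openwrt-ramips-builder',\n}\n\ndef select_builder(descriptor):\n    # Pick the builder image matching the hardware\n    # architecture declared in the device descriptor.\n    arch = descriptor['architecture']\n    if arch not in BUILDERS:\n        raise KeyError('no builder for %s' % arch)\n    return BUILDERS[arch]\n\ndef build_command(descriptor):\n    # Compose the container invocation; in a real\n    # deployment the platform-specific configuration\n    # would also be mounted into the container.\n    return ['docker', 'run', '--rm',\n            select_builder(descriptor)]\n\end{lstlisting}\nKeeping the architecture-to-builder mapping as data makes adding a new architecture a matter of registering one more builder image.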
This decoupling of platform-specific configuration and firmware builders enables the builder containers to be distributed and deployed on a cluster of machines to better handle load, resource availability and utilization.\nThe described system is also extensible~-- adding support for new architectures simply requires a new builder container to be prepared, while adding support for new runtime platforms also requires an extension of the configuration transformation modules.\n\nBut the main advantage of using separate containers for individual firmware builders is that it enables simple reuse and sharing of ready-made builders between community networks, which is one of the core principles of community networks.\nCompiling and preparing a range of firmware builder images can be a lengthy and resource-intensive process, requiring manual configuration and testing.\nContainerized organization enables communities to simply download pre-built builder images and use them in their \textit{nodewatcher}{} installations or \textit{nodewatcher}{}-compatible systems without needing to compile anything.\nThis further simplifies the bootstrapping of new community networks, which can simply reuse firmware builders from existing community networks.\n\n\subsection{Network Monitoring}\n\label{sec:network-monitoring}\n\nAfter the firmware images are prepared and the devices are deployed in the field, we enter the monitoring stage of the device management cycle.\nIn this stage we constantly monitor the devices for status, performance and compliance.\nIn the same way as the configuration transformation step performs static validation of device configuration before it is deployed, the role of the monitoring component is to perform dynamic validation of device configuration while the device is running.\nWhile validation is common to both, the scope of the latter is much broader~-- when deployed in a large network, the functioning of one node may also affect other nodes in its vicinity or
sometimes even in completely different parts of the network (for example, when considering network announce conflicts in routing protocols).\nBesides performing configuration compliance validation, monitoring may also be used to collect various sensor data over time.\nThis is useful for diagnostics under changing network conditions and can also be used to collect sensor data coming from external sources like temperature, humidity and lightning strike detection sensors.\n\nOne may ask why monitoring should be integrated into the provisioning platform at all.\nIt is true that existing network monitoring tools could easily be used instead, but an integrated solution enables the monitoring modules to easily validate current device state and behaviour against the static platform-independent configuration that is stored in the provisioning database.\nThis enables the system to quickly detect configuration errors (for example, after someone manually edits configuration on a device) or failure modes (loss of a redundant VPN link, but only when such a redundant link has been previously configured).\nSuch capabilities could be replicated using existing systems, but this would require either manual duplication of configuration (an error-prone process) or specific import scripts for the target monitoring system (which is usually not very portable among different communities).\nThus, an integrated monitoring component is key to ensuring ease of deployment and transfer of good practices between community networks in the form of modules implementing the validation procedures.\n\n\begin{figure}[t]\n\centering\n\begin{lstlisting}[frame=single,basicstyle=\ttfamily\footnotesize,breaklines=true]\n{\n \"core.general\": {\n \"_meta\": { \"version\": 4 },\n \n \"uuid\": \"64840ad9-aac1-4494-b4d1-9de5d8cbedd9\",\n \"hostname\": \"test-4\"\n },\n \"core.resources\": {\n \"_meta\": { \"version\": 2 },\n \n \"memory\": {\n \"total\": 32768,\n \"free\": 24611\n }\n 
}\n}\n\\end{lstlisting}\n\\caption{Example part of the JSON node state compiled by the \\textit{nodewatcher}{} monitoring agent, showing sample output for two monitoring modules.}\n\\label{fig:monitoring-json-schema}\n\\end{figure}\n\nIn traditional computer networks, especially at such scale, provisioning of network devices would be done automatically and centrally.\nThere would be little reason to assume misconfiguration of nodes.\nMost issues with operation would be because of software or hardware bugs, hardware failure, or human error from operations control.\nIn any case, the misconfiguration would be easy to address: simply provision the node again.\nOn the other hand, most community networks see the decentralized nature of their networks as an important aspect and do not want centralized, automatic control of nodes.\nNodes are often maintained by individuals who might use tools like \\textit{nodewatcher}{} to streamline the process of maintenance, but they still want to retain control of and access to the device.\nThis can lead to potential issues, from having devices with old and potentially obsolete firmware versions in the network, to simply misconfigured nodes.\nThis nature of community networks has to be taken into account, and \\textit{nodewatcher}{} helps detect any issues and guides maintainers towards resolving them.\nThey can use the provided firmware image, or can be guided through detected issues and suggested steps to resolve them manually.\nFor example, in the \\textit{wlan slovenija}{} network we had cases where maintainers connected devices with vanilla OpenWrt installed to the network.\n\\textit{nodewatcher}{} then detected invalid or missing configuration, provided instructions to resolve it, and maintainers manually vetted and followed them one by one until the node was brought into compliance with the rest of the network.\nSuch an approach is of course time-consuming, but it is an important part of the community network spirit to 
be able to retain complete control over the node and all aspects of its operation and configuration.\n\n\\subsubsection{Obtaining Telemetry Data}\n\nAn overview of the monitoring system is given in Figure~\\ref{fig:monitoring-pipeline} which shows the data flowing from sources on the devices towards the time series data store and current network state as the sinks.\nData collection starts on a node and is implemented by the \\texttt{nodewatcher-agent} process.\nIt is a small C application with a minimal core that is able to periodically request the loaded modules to provide their state updates which are then compiled into the current node status and exported in a JSON form.\nThere are then two ways for the agent to transfer the data to the nodewatcher backend:\n\\begin{itemize}\n \\item \\textbf{Pull.} The JSON data may be served over HTTP(S) and the nodewatcher monitoring backend will periodically request new data from the nodes.\n\n \\item \\textbf{Push.} The agent on the node will periodically push its monitoring data to the backend using HTTP(S) POST requests. 
This requires that the push URL and interval be configured on the node.\n\\end{itemize}\n\nThis behaviour of the agent may be configured per-node.\nSome nodes may push data while data is pulled from others.\nSupporting both modes of operation is beneficial for situations where the \\textit{nodewatcher}{} backend installation does not directly see every node in the mesh network, but is instead located somewhere in the public Internet, without VPN access to the network itself.\nWith push support, nodes may provide telemetry data even in this case, by pushing data to a public URL.\nAs we have already mentioned, the agent is composed of multiple modules.\nEach module is a shared library which is loaded when the agent process starts.\nHaving modules as shared libraries enables simple extension of the monitoring agent by third-party packages.\nModules may independently fetch data from external sources providing state like current resource usage reported by the Linux kernel, status of various interfaces, wireless configuration and site survey, connected clients, external sensor input, topology information obtained from routing daemons, etc.\n\nAs can be seen from its description, the monitoring agent follows a modular design similar to other parts of \\textit{nodewatcher}{}.\nAn important feature of a monitoring agent that runs on remote devices in a community network is the ability for modules to independently evolve their schema.\nIn order to add features to existing modules, there needs to be a way to version the state schema which is reported back to the monitoring pipeline.\nThis is especially the case in community networks where there are many different devices with different firmware versions and also with different versions of the monitoring modules.\nA single version for all modules is not enough, as modules may be developed by different developers, possibly from different community networks, so independent schema evolution is required.\n\nNode state compiled by 
the agent is a structured JSON document where the top level contains one dictionary element for each module with the element's key being the module identifier.\nA partial example of such a state is shown in Figure~\\ref{fig:monitoring-json-schema}.\nEach module element may provide whatever elements it wants to report for the current state.\nThe agent will automatically create a special \\texttt{\\_meta} element containing the module metadata~-- currently a module version number.\nBy inspecting the metadata, the processing pipeline is able to handle multiple versions of the schema for different modules.\n\nThe implemented agent that uses JSON over HTTP(S) connections is just one of the possible monitoring data source implementations.\nThe architecture enables other data collection protocols to be used side-by-side.\nOne possible such protocol, that many existing device operating systems already support, is SNMP.\nWhile our custom protocol enables easier schema evolution through per-module versions, SNMP may be used in cases where custom monitoring agents cannot be installed on target devices.\nThis co-existence of data sources is enabled by the modular design of the monitoring backend.\n\n\\begin{algorithm}[t]\n\\begin{algorithmic}\n\\Procedure{MonitoringRun}{$P$}\n \\State $W \\gets \\emptyset$\\Comment{Initialize the working set.}\n \\State $C \\gets \\emptyset$\\Comment{Initialize the context.}\n \\For{$p \\in P$}\\Comment{Iterate over the pipeline.}\n \\If{$p \\in P_n$}\\Comment{Network processor.}\n \\State $\\langle W, C \\rangle \\gets p.\\mathrm{process}(W, C)$\n \\ElsIf{$p \\in P_m$}\\Comment{Node processor.}\n \\For{$n \\in W$}\n \\State $C \\gets p.\\mathrm{process}(n, C)$\n \\EndFor\n \\EndIf\n \\EndFor\n\\EndProcedure\n\\end{algorithmic}\n\\caption{A single monitoring run.}\n\\label{alg:monitoring-pipeline}\n\\end{algorithm}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.4]{figures\/storage-relationships.pdf}\n \\caption{Overview of 
relationship between the configuration and monitoring data schemas and the flow of monitoring data from the pipeline to the historical time-series data storage.}\n \\label{fig:storage-relationships}\n\\end{figure}\n\n\\subsubsection{Monitoring Pipelines}\nOn the backend, the processing of all operations related to monitoring is handled by the monitoring pipelines.\nConforming to the modular philosophy, the pipeline consists of processors which are implemented by modules.\nThroughout the execution of the pipeline, two pieces of state are maintained: a working set of node instances and a context.\nThe context can contain arbitrary structured data which is communicated between the different processors.\nThe working set of nodes represents the instances that subsequent processors will operate on.\nWhen execution of the pipeline begins, the working set is empty.\nThere are two basic types of processors:\n\\begin{description}\n\\item[Network processors $P_n$] are executed once with all nodes in the working set and context as arguments. They may modify both the working set and the context to change the flow of downstream processors.\n\n\\item[Node processors $P_m$] are executed for each node in the working set with context as an argument. 
They may modify the context but not the working set.\n\\end{description}\n\nAlgorithm~\\ref{alg:monitoring-pipeline} shows a simplified version of the processing run execution.\nAs can be seen from the algorithm, the pipeline is completely generic and its content depends entirely on the processor implementations which are provided by modules.\nIn order to increase performance, the pipeline implementation actually performs some optimizations that cannot be observed in the above pseudocode.\nNetwork processors must be executed sequentially as each may modify the working set which is part of the state for downstream processors.\nBut there is no reason why execution of node processors cannot be done in parallel for each node.\nAdditionally, if there are multiple consecutive node processors which will run on the same working set (note that only network processors may change the working set), they can all be executed inside the same thread, one after the other.\nThis greatly reduces the amount of context transfer between processes and speeds up the monitoring run execution.\n\nThe modular nature of the monitoring pipeline enables different communities to completely adapt it to their own network.\nSince the base framework is common to all, modules from different community networks may interoperate inside the same pipeline, passing data through the context in a loosely coupled manner and increasing code reuse possibilities.\n\n\\subsubsection{Storing Monitored Data}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.4]{figures\/datastream-storage.pdf}\n \\caption{Datastream stream storage organization. 
Tag-based metadata stores compact indices into the stream data, which is downsampled at different granularities.}\n \\label{fig:datastream-storage}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=0.4]{figures\/datastream-counter-reset.pdf}\n \\caption{Correct rate computation example using the counter derivative operator, automatically derived from two streams.}\n \\label{fig:datastream-counter-reset}\n\\end{figure}\n\nIn Section~\\ref{sec:platform-independent-configuration} we described how \\textit{nodewatcher}{} uses the registry to store nodes' platform-independent configuration, described by the configuration schema.\nFor storing monitoring data we use two different approaches that serve slightly different goals:\n\\begin{itemize}\n\\item Latest data obtained by the monitoring pipeline, describing the current network state, is stored in a similar form as the configuration for each node.\nIn addition to the configuration schema, a monitoring schema is also defined, and data is stored in a relational database in the form of schema items.\nReusing the same functionality automatically gives us the same schema item extensibility.\nData stored in this way may be queried by value and is thus used for performing configuration validation against observed data.\n\n\\item In addition to the above, we also want to keep historical data of how the network operated over time.\nThe storage and query requirements for historical data are completely different, as these data are represented as time series, indexed by time, not value.\nAs \\textit{nodewatcher}{} is a modular platform, historical data storage is implemented as a module which may be added or removed as needed by the community.\n\\end{itemize}\n\nAn overview of the relationships between these storage components is shown in Figure~\\ref{fig:storage-relationships}.\nLatest data are sampled on every monitoring run and stored into the time-series data store.\nThe monitoring pipeline may generate large amounts of 
time-series data during its operation (for example \\textit{wlan slovenija}{} has accumulated over 200 GiB of data in the last five years, storing everything from network diagnostics to external sensor data).\nAs shown in Figure~\\ref{fig:monitoring-pipeline}, one of the processors in the monitoring pipeline can be a \\textit{datastream} processor, storing time-series data.\nDatastream is a system we developed, enabling storage, processing and retrieval of time-series data.\nIn contrast to round-robin databases~\\cite{Oetiker_1999}, where the database size is fixed in advance and old data is simply discarded, we take a different approach and store all the data that we can for possible later analysis, enabling future research on improving automatic network diagnostics.\n\n\\begin{figure*}\n \\centering\n \\includegraphics[scale=0.45]{figures\/implementation-interactive-visualization.png}\n \\caption{The interactive visualization frontend module in \\textit{nodewatcher}{} allows various combinations of metrics collected in datastream storage to be visualized, enabling introspection into the functioning of the network.\n The visualization shows a plot of link quality, averaged over all links to neighbouring nodes, for a node in the mesh network as reported by the routing protocol over the course of five days.\n It displays the mean value together with the minimum and maximum in the reported time interval, which are all readily available due to downsampling.}\n \\label{fig:interactive-visualization}\n\\end{figure*}\n\nThe database is built around the concept of append-only streams, each stream being an independent time series.\nIn order to organize the streams, each stream may be tagged using arbitrary key-value pairs (see Figure~\\ref{fig:datastream-storage}), which are indexed and can be used for fast lookup of streams matching specific tags.\nFor stream storage to be efficient, data points are stored in separate collections, one for each time 
granularity, and connected with their metadata entries using compact indices.\nThe default storage backend used by datastream is TokuMX~\\cite{TokuMX_2007}, a scalable document database based on MongoDB~\\cite{MongoDB_2007}, which implements data compression and uses the fractal tree index~\\cite{Brodal_2003,Bender_2007} data structure for indices, both of which greatly improve performance.\nThe design of datastream is modular, supporting implementations of alternative storage backends while continuing to expose the same API.\nLike other \\textit{nodewatcher}{} components, it is available as an open source project~\\cite{Datastream_2012}.\n\nEach stream has some special tags which define its base operation.\nStreams are typed, meaning that they may be used for storage of different data point types.\nSupported types currently include numeric values (integers, floats, arbitrary precision numbers) and graph values (nodes and edges for storing how topologies evolve over time).\nStreams also define the highest granularity at which data points will be inserted.\nThis setting is an optimization which allows the datastream storage backend to reduce the number of granularities when downsampling data points.\nData points are inserted only at the highest granularity and are then automatically downsampled in the background, making them ready for efficient querying and visualization.\n\nDownsampling is a crucial component for ensuring that the system maintains good performance as the number of data points grows.\nThe motivation for downsampling comes from interactive time-series data visualizations which enable one to see the data at various zoom levels and for different time intervals.\nFor example, there is usually no need to display data points for every minute when looking at data over an interval of several months.\nEven fetching this many data points from the database and sending them to the user's web browser may be overwhelming.\nDownsampling aggregates higher-frequency data 
into buckets covering larger time intervals.\nBy design, any downsampling will cause loss of certain information in lower granularities.\nIn order to reduce the effect of information loss, datastream supports multiple aggregation functions that determine what data is preserved in downsampled versions of data points.\nAggregation functions include the point count, sum, sum of squares, mean, minimum, maximum and standard deviation, all computed over the data points allocated to the same time bucket.\nStoring different statistical moments enables a better understanding of aggregated data and also enables improved visualizations where not only the average is shown (see Figure~\\ref{fig:interactive-visualization}).\n\nStreams may be automatically computed from other streams using different operators.\nThe most prominent use of this feature in monitoring is to support correct rate computations under the possibility of counter wraps and resets, which could otherwise cause apparent rate spikes when the counter is reset due to a reboot but the system incorrectly classifies it as a counter wrap.\nTo illustrate why this can be a problem, imagine a simple 8-bit unsigned integer counter (its size is known to the monitoring platform), sampled every 10 seconds (see \\textit{Counter Stream} in Figure~\\ref{fig:datastream-counter-reset}).\nFocusing on the instance where the counter value decreases from $212$ to $37$, this can be treated either as a counter wrap (as its maximum value is $255$) or as a counter reset due to a device reboot.\nWithout additional information, such events must be classified uniformly: if they are all classified as wraps, rates may incorrectly spike (in our example the rate would be computed as $(255 - 212 + 37) \/ 10s = 8.0s^{-1}$); if they are always classified as resets, data points may be lost.\n\nUsing datastream, a \\textit{counter derivative} operator accepts two streams, one containing raw counter data (for example the number of bytes transferred on a 
network interface) and one containing discrete events at which a device was rebooted (for example derived from its uptime by the \\textit{reset operator}).\nUsing both pieces of information, the rate is computed as a derivative, but only when there was no reset event during the last time interval.\nIn case of a reset, a \\texttt{null} value is inserted instead.\nHowever, if there was no reset and the counter value decreased, the event is classified as a counter wrap and the rate is computed using the known maximum value of the counter.\nComputing streams from other streams may be chained as shown in the example in Figure~\\ref{fig:datastream-counter-reset}.\nDevice uptime, counting seconds from the device's boot time, is received as raw measurements which the \\textit{reset} operator then transforms into a stream suitable for use in the counter derivative operator.\nSuch chaining naturally describes data transformations which happen while the data is streamed.\nAdditional operators include computing sums of multiple streams, which is useful for compiling aggregate traffic rates.\n\nThe datastream module also provides a REST API as a way to access current and historical time-series data with all the benefits of datastream data storage: quick access to data points at various granularities, diverse data types (numeric and graphs), and easy navigation and searching for streams using custom tags, which can also serve to guide visualization.\nTogether with the general \\textit{nodewatcher}{} modularity, this allows user interfaces to the data to be provided in various ways, from server-side rendered templates to dynamic JavaScript-based visualizations like the one seen in Figure~\\ref{fig:interactive-visualization}.\nBecause all data is also available through the REST API, it can easily be exported and shared with others.\n\n\\section{Evaluation and Discussion}\n\\label{sec:evaluation}\n\n\\begin{figure}\n \\centering\n 
\\includegraphics[scale=0.45]{figures\/wlansi-nodes-up.pdf}\n \\caption{Number of online nodes in the \\textit{wlan slovenija}{} community wireless network as reported by the monitoring system for the past few years.}\n \\label{fig:wlansi-nodes-up}\n\\end{figure}\n\nIn this section we evaluate \\textit{nodewatcher}{} in the context of the \\textit{wlan slovenija}{} wireless community network.\n\\textit{wlan slovenija}{} has been an active network since 2009.\nIt is a medium-sized network with around $400$ online nodes at the moment (the growth of the number of online nodes since 2010 may be seen in Figure~\\ref{fig:wlansi-nodes-up}).\n\nPrimary management of the network is currently still based on the older version v2 of \\textit{nodewatcher}{}, while the new version presented in this paper has been run in parallel since the beginning of 2015, in order to enable a smooth transition.\nThe old version is designed similarly to other existing community network platforms that we surveyed in Section~\\ref{sec:related-work} and shares many of the same problems.\nThis gives us a unique opportunity to qualitatively compare the workings of both solutions and show how the new version of the platform substantially improves network management by addressing the exposed problems.\n\n\\subsection{Device Support}\n\nIt is much easier to support new devices and keep up with the pace of requirements from other communities in the new platform.\nCurrently, our platform is also being used by the neighbouring community network in Croatia, where they use some unique devices due to their cheap local availability.\nIn the old version of the platform, properly supporting every new device took considerable effort, as it required substantial changes to the provisioning system code.\nAdditionally, \\textit{nodewatcher}{} did not have advanced validation capabilities, which several times resulted in erroneous configurations that required re-flashing to fix.\nIn 
contrast, the new platform enabled us to quickly support the new device, simply by writing a new device descriptor for it (see Section~\\ref{sec:firmware-generator}).\n\n\\subsection{Monitoring}\n\nThe new modular monitoring pipeline brings multiple improvements over the old monolithic version.\nMeasurements are performed faster and are easily parallelized over multiple cores due to node\/network processor separation, as described in Section~\\ref{sec:network-monitoring}.\nA side effect of enabling parallel execution of processors is also support for performing monitoring runs at different intervals.\nFor example, topology information may be updated more quickly as it is readily and locally available from the OLSR routing protocol and does not require polling of all the nodes.\nOn the other hand, device telemetry measurements require data requests across a wide network, which may consume more time.\nEven slower are the measurements of node reachability and packet loss using ICMP ECHO requests under varying packet sizes to check for MTU issues.\nPreviously, the slowest measurement affected the execution time of the whole monitoring run.\nThe new modular design enables such runs to be isolated, so some may execute entirely in parallel and with higher frequency than other, slower, measurements.\n\nDue to our new time-series data storage system, we are able to store a complete set of monitoring data, forever.\nNot being limited by the fixed size of round-robin databases enables later analysis of network events over long periods of time.\nSuch analysis may enable new insights into how certain network problems are correlated~\\cite{Steinder_2004}.\nAs we store both configuration and latest monitoring data, we are able to perform comparisons and validate that the monitored operation matches exactly what is configured for specific nodes.\n\n\\subsection{Interactivity}\n\nThe old version used a round-robin database for time-series data storage together with its 
visualization module which generated static images.\nStatic visualizations are limiting when it comes to having a good overview over multiple measurements.\nCombining measurements over arbitrary time spans at arbitrary resolution greatly enhances the ability to diagnose problems.\nAdditionally, generating a large set of static images is very resource-intensive for the server.\nInteractive visualizations, like the one seen in Figure~\\ref{fig:interactive-visualization}, transfer only data points to the client machine and the web browser then performs all the rendering.\nThis naturally distributes the required computations and reduces strain on the central server.\n\n\\subsection{Modularity and Interoperability}\n\nThe Trac Project~\\cite{Trac_2003} is an example of a modular system for managing open source projects.\nVery early on in its development, a plugin system was developed, and hundreds of community-made plugins have been created since then, many available through the Trac Hacks~\\cite{TracHacks_2004} repository.\nWe were inspired by it and decided to do something similar for managing community networks.\nWe built \\textit{nodewatcher}{} using the Django framework~\\cite{django_2005} rather than a custom framework in order to leverage standard Django packages and modularity.\nWe can reuse package repositories and tools that the Django ecosystem provides, for example the Django Packages~\\cite{DjangoPackages_2010} repository with more than 2600 packages available.\n\nIn 2014, interoperability efforts between community networks were revived~\\cite{interop_2010}.\nSome very active communities started discussing and comparing data schemas used for configuration and monitoring.\nDuring this ongoing process we have been re-evaluating schemas used by other communities and the resulting compromises stemming from the discussions.\nTime and again we have been reassured that we achieved our design goal, as we keep discovering that \\textit{nodewatcher}{} can support 
all of the proposed schemas with little effort and are actually able to quickly provide working implementations of the proposed ideas.\nThis strengthens the case for its use in any of the participating community networks, or even for migrating between data schemas.\n\nAdditionally, at the beginning of 2015, the \\textit{nodewatcher}{} platform was chosen as the basis for the Commotion Wireless platform~\\cite{Commotion_2015}.\nIn order to adapt to the specifics of the Commotion network, \\textit{nodewatcher}{} has been extended to support security features, such as node-server mutual authentication and encryption of monitoring data, and support for data push in addition to data pull.\nThe platform's modular architecture has shown itself to be particularly suitable for customizing solutions for specific communities.\n\n\\subsection{Security, Availability and Federation}\n\nCommunity networks are by their nature decentralized networks which grow in an ad-hoc fashion.\nSome community networks may be concerned that having a centralized management system presents a single point of failure for the network, compromises its security or centralizes the community too much.\nIn this section we analyse these concerns and argue that this need not be the case.\n\nIn order to support high availability scenarios, standard approaches, like using multiple redundant servers and performing database replication, should be considered.\nBut even if all redundancy fails, this will not affect the functioning of the actual community network, as operation of the nodes and routing protocols does not in any way rely on there being a \\textit{nodewatcher}{} server.\nTherefore, in the worst case, the only processes that will be interrupted are network monitoring and support for managing nodes.\n\nThe issue of centralization can be addressed by federation. 
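To make the federation idea more concrete, the following is a minimal, hypothetical sketch of the aggregation step a top-level instance could perform over per-node state reported by subcommunity installations. The function name, the `_reported_at` and `_source` fields, and the in-memory inputs are all invented for illustration; an actual module would pull each installation's JSON node state over HTTP(S) rather than read local dictionaries.

```python
# Hypothetical federation aggregation sketch: merge per-node state
# documents from several subcommunity installations into one view.
# All names here (aggregate_node_state, _reported_at, _source) are
# invented for illustration, not part of nodewatcher itself.

def aggregate_node_state(installations):
    """Merge {installation: {node_uuid: state}} into {node_uuid: state}.

    On UUID collisions the most recent report (largest `_reported_at`
    timestamp) wins; each merged state is annotated with its source.
    """
    merged = {}
    for name, nodes in installations.items():
        for uuid, state in nodes.items():
            candidate = dict(state, _source=name)
            current = merged.get(uuid)
            if current is None or candidate["_reported_at"] > current["_reported_at"]:
                merged[uuid] = candidate
    return merged

subcommunities = {
    "valley": {"64840ad9": {"hostname": "test-4", "_reported_at": 100}},
    "hills": {
        "64840ad9": {"hostname": "test-4", "_reported_at": 120},
        "a1b2c3d4": {"hostname": "gw-1", "_reported_at": 90},
    },
}
merged = aggregate_node_state(subcommunities)
print(len(merged), merged["64840ad9"]["_source"])  # 2 hills
```

Merge and split scenarios then reduce to replaying such aggregation over a different set of source installations.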
\nWhile \\textit{nodewatcher}{} core does not explicitly support federated deployments, its modular nature enables communities to easily implement it as a module.\nAssuming that there are multiple independent subcommunities within one larger community, there are two basic approaches in making \\textit{nodewatcher}{} federated.\n\n\\begin{description}\n \\item[Independent installations.] Each independent subcommunity or routing domain would have its own \\textit{nodewatcher}{} installation which performs registration and monitoring for its own nodes.\n In this way, it would be completely independent from the centralized instance.\n If the community then wants to have an aggregated picture of the whole network, another top-level \\textit{nodewatcher}{} instance may be deployed which will use pull and\/or push from all the other subcommunity \\textit{nodewatcher}{} installations.\n In this manner, the top-level installation would not support registration of nodes and would not monitor the nodes directly.\n Since in \\textit{nodewatcher}{} these are all modules, they can be easily removed.\n Instead, the top-level instance would just get the data from subcommunity installations and use it as is.\n Because of the modular design, one would only need to develop a module that knows how to aggregate this data from multiple subcommunities and store it using the existing schema.\n\n \\item[Single installation.] 
The problem with having multiple installations is that it may be hard to handle merge\/split scenarios.\n So instead of having multiple installations, one could also use just a single \\textit{nodewatcher}{} installation and just structure the nodes and permissions in such a way that each subcommunity has their space.\n This is similar to how the Guifi.net~\\cite{Guifinode_2003,Vega_2012} dashboard splits nodes into zones.\n In this case, the server infrastructure would still be shared, but control would be distributed over multiple communities.\n Since there is no need for all the nodes to see each other (only \\textit{nodewatcher}{} needs to be able to communicate with them), this is already possible using the current implementation.\n Currently, there is a module that supports \\textit{projects}, but these have a completely flat structure.\n A flat structure does not scale to a large number of subcommunities.\n A better way would be to develop a module that would enable nicer visual grouping of nodes, using a parent-child concept similar to the Guifi.net zones.\n As far as topology and map visualizations go, the existing implementation already supports disconnected islands of nodes.\n Note that supporting large single installations (several thousand nodes and more) would most likely require performance optimizations in the monitoring modules, but since the whole monitoring pipeline is modular and designed to be easy to distribute over multiple servers, it can be improved upon by communities of such size.\n\\end{description}\n\nThe last concern regards security.\nA centralized network management installation might be an attractive target for attackers.\nSince \\textit{nodewatcher}{} holds node configurations, those might contain sensitive information like passwords.\nIn order to minimize this exposure, public key authentication is supported and should be used instead of passwords whenever possible.\nIn this case only the public keys are stored by 
\\textit{nodewatcher}{} and access to them does not grant access to the nodes themselves.\nAn additional security concern is that nodes could misreport data of other nodes, confusing the monitoring system into displaying incorrect data.\nThis is why \\textit{nodewatcher}{} also supports secure authentication of node data by using public keys mutually verified via the TLS~\\cite{RFC_5246} protocol.\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\n\nIn this paper we have presented \\textit{nodewatcher}{}, a community network management platform, which is built around the core principle of modularity and extensibility, making it suitable for reuse by different community networks.\nDevices are configured using platform-independent configuration which \\textit{nodewatcher}{} can transform into deployable firmware images, eliminating any manual device configuration, reducing errors, and enabling participation of novice maintainers.\nAn embedded monitoring system enables live overview and validation of the whole community network.\nWe have shown how the system successfully operates in an actual community wireless network, \\textit{wlan slovenija}{}, while it is also starting to be used by other network communities.\n\nThere are many possible improvements that could make various aspects of community networks easier to manage, e.g. help with radio signal and propagation planning using realistic models and geographic data, support for more routing protocols, and further optimizations for really large networks. Instead of developing all these new features ourselves, we envision that the next step is to engage other network communities to develop features that are specific to their networks. 
We have already established partnerships with other community networks that are starting to work in this direction.\n\nWe also expect to see completely new features added to the platform by third-party developers.\nFor example, a warehouse module to help organize the inventory of equipment in community networks, a store module helping users order preconfigured devices, and local community and services modules to help people using the community networks form better communities.\nA strong and modular base enables further innovation and progress.\n\nData collected through \\textit{nodewatcher}{} can be used in future research~\\cite{Braem_2013}.\nResearchers could analyze how community, wireless, or mesh networks operate over larger time spans, both on a technical level and on community and societal levels.\nResearchers are already using the platform to monitor KORUZA~\\cite{Mustafa_2013} deployments around the world and use it to further guide development and research in DIY free-space optics connectivity.\n\n\\section*{Acknowledgement}\n\nThe authors have been supported by the following institutions: Jernej Kos by the Slovenian Research Agency (Grant 1000-11-310153), by the Shuttleworth Foundation Flash Grant and by the NLnet Foundation (Grant 2014-05-015).\nWe would like to thank everyone participating in the \\textit{wlan slovenija}{} network and all the community networks that responded to the survey for their support.\nWithout this global community none of this work would be possible.\nWe would like to thank everyone who read early drafts of this paper, especially the anonymous reviewers, and provided us with invaluable feedback.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\newcommand{\\rmnum}[1]{\\romannumeral #1}\n\\newcommand{\\Rmnum}[1]{\\expandafter\\@slowromancap\\romannumeral #1@}\n\\makeatother\nReal-world systems can be represented as networks, with nodes 
representing the components and links representing the connections between them~\\cite{newman2003structure, zhang2016dynamics}. The study of complex networks pervades different fields~\\cite{costa2011analyzing}. For example, with biological or chemical networks, scientists study interactions between proteins or chemicals to discover new drugs~\\cite{qi2006evaluation,girvan2002community}. With social networks, researchers tend to classify or cluster users into groups or communities, which is useful for many tasks, such as advertising, search and recommendation~\\cite{jacob2014learning,traud2012social}. With communication networks, learning the network structure can help understand how information spreads over the networks~\\cite{zhang2016dynamics}. These are only a few examples of the important role of analyzing networks. For all these examples, the data may be incomplete. If so, it could be important to be able to predict the link most likely to be missing. If the network is evolving, it could be crucial to forecast the next link to be added. For both of these applications one needs link prediction~\\cite{liben2007link, lu2011link, lu2012recommender, martinez2017survey,liu2019computational}.\n\nIn link prediction, one estimates the likelihood that two nodes are adjacent to each other based on the observed\nnetwork structure~\\cite{getoor2005link}. Similarity-based metrics, maximum likelihood algorithms, and probabilistic models form the major families of link prediction methods~\\cite{cui2018survey}.\nRecently, network embedding, which embeds nodes into a low-dimensional vector space, has attracted much attention in solving the link prediction problem~\\cite{cui2018survey, wang2017community}. The similarity between the embedding vectors of two nodes is used to evaluate whether they would be connected or not.\nDifferent algorithms have been proposed to obtain network embedding vectors. 
The simplest embedding method is to take the row or column vector of the adjacency matrix, called the adjacency vector of the corresponding node, as the embedding vector. Then, the representation space is $N$-dimensional, where $N$ is the number of nodes. As real-world networks are mostly large and sparse, the adjacency vector of a node is sparse and high-dimensional. In addition, the adjacency matrix only contains the first-order neighborhood information, and therefore the adjacency vector neglects the high-order structure of the network such as paths longer than an edge. These factors limit the precision of network embedding based on the adjacency vector in link prediction tasks. Work in the early 2000s attempted to embed nodes into a low-dimensional space using dimension reduction techniques~\\cite{tenenbaum2000global, roweis2000nonlinear, belkin2002laplacian}. Isomap~\\cite{tenenbaum2000global}, locally linear embedding (LLE)~\\cite{roweis2000nonlinear} and Laplacian eigenmap~\\cite{belkin2002laplacian} are algorithms based on the $k$-nearest graph, in which node $i$ is connected by a link to node $j$ if $j$ is among the $k$ nodes closest to $i$ in terms of shortest-path length.\n Matrix factorization algorithms decompose the adjacency matrix into the product of two low-dimensional rectangular matrices. The columns of the rectangular matrices are the embedding vectors for nodes. Singular value decomposition (SVD)~\\cite{golub1971singular} is one commonly used and simple matrix factorization. However, the computational complexity of most of the aforementioned algorithms is at least quadratic in terms of $N$, limiting their applicability to large networks with millions of nodes.\n\nRandom-walk-based network embedding is a promising family of computationally efficient algorithms. 
These algorithms exploit truncated random walks to capture the proximity between nodes~\\cite{perozzi2014deepwalk,tang2015line,mikolov2013distributed} generally via the following three steps~\\cite{grover2016node2vec, cao2018link, zhang2019degree}: (1) Sample the network by running random walks to generate trajectory paths. (2) Generate a node pair set from the trajectory paths: each node on the trajectory path is viewed as a center node, and the nearby nodes within a given distance are considered its neighboring nodes. A node pair in the node pair set is formed by a center node and each of its neighboring nodes.\n(3) Apply a word embedding model such as Skip-Gram to learn the embedding vector for each node by using the node pair set as input. Skip-Gram assumes that nodes that are similar in topology or content tend to have similar representations~\\cite{mikolov2013distributed}. Algorithms have been designed using different random walks to capture high-order structure on networks. For example, DeepWalk~\\cite{perozzi2014deepwalk} and Node2Vec~\\cite{grover2016node2vec} adopted uniform and biased random walks, respectively, to sample the network structure. In addition, random-walk-based embedding methods have also been developed for temporal networks, signed networks and multilayer networks~\\cite{nguyen2018continuous,yuan2017sne, bagavathi2018multi, qu2019temporal}.\n\n\nIn contrast to random-walk-based embedding, here we propose SI-spreading-based network embedding algorithms for static and temporal networks. We deploy the susceptible-infected (SI) spreading process on the given network, either static or temporal, and use the corresponding spreading trajectories to generate the node pair set, which is fed to the Skip-Gram to derive the embedding vectors. 
The trajectories of an SI spreading process capture the tree-like sub-network centered at the seed node, whereas a random walk explores long walks that may revisit the same node.\n We evaluate our static network embedding algorithm, which we refer to as \\textit{SINE}, and our temporal network embedding, \\textit{TSINE}, via a missing link prediction task in six real-world social networks. We compare our algorithms with state-of-the-art static and temporal network embedding methods. We show that both \\textit{SINE} and \\textit{TSINE} outperform other static and temporal network embedding algorithms, respectively. In most cases, the static network embedding, \\textit{SINE}, performs better than \\textit{TSINE}, which additionally uses temporal network information. In addition, we evaluate the efficiency of SI-spreading-based network embedding via exploring the sampling size for the Skip-Gram, quantified as the sum of the length of the trajectory paths, in relation to its performance on the link prediction task. We show that high performance of SI-spreading-based network embedding algorithms requires a significantly smaller sampling size compared to random-walk-based embeddings. We further explore what kinds of links can be better predicted, to explain why our proposed algorithms show better performance than the baselines.\n\n\n\nThe rest of the paper is organized as follows. We propose our method in Section~\\ref{Proposed Method}. In Section~\\ref{SI based static network sampling}, we propose our SI-spreading-based sampling method for static networks and the generation of the node pair set from the trajectory paths. The Skip-Gram model is introduced in Section~\\ref{Skip-Gram model}. We introduce an SI-spreading-based sampling method for temporal networks in Section~\\ref{SI based temporal network sampling}. 
In Section~\\ref{Evaluation Method}, our embedding algorithms are evaluated on a missing link prediction task on real-world static and temporal social networks.\nThe paper is concluded in Section~\\ref{Conclusions}.\n\n\n\\section{SI-spreading-based Embedding}\n\\label{Proposed Method}\n\n\nThis section introduces SI-spreading-based network embedding methods. Firstly, we illustrate our SI-spreading-based network embedding method for static networks in Sections~\\ref{SI based static network sampling} and~\\ref{Skip-Gram model}. Section~\\ref{SI based temporal network sampling} generalizes the method to temporal network embedding.\n\nBecause we propose the network embedding methods for both static and temporal networks, we start with the notations for temporal networks, of which the static networks are special cases.\nA temporal network is represented as $\\mathcal{G} = (\\mathcal{N}, \\mathcal{L})$, where $\\mathcal{N}$ is the node set and $\\mathcal{L}=\\{l(i, j, t), t \\in[0, T], i, j\\in \\mathcal{N}\\}$ is the set of time-stamped contacts. The element $l(i, j, t)$ in $\\mathcal{L}$ represents a bidirectional contact between nodes $i$ and $j$ at time $t$. We consider discrete time and assume that all contacts have a duration of one discrete time step. We use $[0, T]$ to represent the observation time window, and $N = |\\mathcal{N}|$ is the number of nodes. The aggregated static network $G=(\\mathcal{N}, E)$ is derived from a temporal network $\\mathcal{G}$. Two nodes are connected in $G$ if there is at least one contact between them in $\\mathcal{G}$. $E$ is the edge set of $G$. The network embedding problem is formulated as follows:\n\n\n\n Given a network $G=(\\mathcal{N}, E)$, static network embedding aims to learn a low-dimensional representation for each node $i \\in\\mathcal{N}$. The node embedding matrix for all the nodes is given by $\\textbf{U}\\in R^{d\\times N}$, where $d$ is the dimension of the embedding vector ($d < N$). 
The $i$-th column of $\\textbf{U}$, i.e., $\\overrightarrow{u_{i}}\\in R^{d\\times 1}$, represents the embedding vector of node $i$.\n\n\\subsection{SI-spreading-based static network sampling}\n\\label{SI based static network sampling}\n\n\nThe SI spreading process on a static network is defined as follows: each node is in one of the two states at any time step, i.e., susceptible (S) or infected (I); initially, one seed node is infected; an infected node independently infects each of its susceptible neighbors with an infection probability $\\beta$ at each time step; the process stops when no node can be infected further. To derive the node pair set as the input for Skip-Gram, we carry out the following steps:\n\\begin{algorithm}[!ht]\n \\caption{Generation of trajectory paths from SI spreading}\\label{alg:walkgenerator}\n \n\\begin{flushleft}\n\\textbf{Input:} {$G = (\\mathcal{N}, E)$, $B$, $L_{\\rm max}$, $\\beta$, $m_{i}$ } \\\\\n\n\\textbf{Output:} {node trajectory path set $D$}\n\\end{flushleft}\n \\begin{algorithmic}[1]\n \\State Initialize the sampling-size counter $C=0$\n \\State Initialize node trajectory path set $D = \\varnothing$\n\\While{$B - C > 0$}\n \\State Randomly choose node $i$ as the seed to start the SI spreading\n \\State Generate spreading trajectory tree $\\mathcal{T}_{i}(\\beta)$\n \\State Randomly choose $m_{i}$ trajectory paths $D_{g_{i}} (g_{i}=1, \\ldots, m_{i})$ from $\\mathcal{T}_{i}(\\beta)$\n \\For {$g_{i}=1, \\ldots, m_{i}$}\n \\If{$|D_{g_{i}}| > L_{\\rm max}$}\n \\State Choose the first $L_{\\rm max}$ nodes from $D_{g_{i}}$ to form $D_{g_{i}}^{*}$\n \\State Add the trajectory $D_{g_{i}}^{*}$ to $D$\n \\State $C = C + |D_{g_{i}}^{*}|$\n \\Else{}\n \\State Add the trajectory $D_{g_{i}}$ to $D$\n \\State $C = C + |D_{g_{i}}|$\n \\EndIf\n \\EndFor\n\\EndWhile\\label{euclidendwhile}\n\\State \\textbf{return} 
$D$\n\\end{algorithmic}\n\\end{algorithm}\n\\begin{figure}\n\\centering\n\\includegraphics[width=18cm]{gram}\n\\caption{\\label{Fig:gram}Generating node pairs from a trajectory path $1,3,6,8,9,10,7,5$. The window size is $\\omega=2$, and only the first four nodes 1, 3, 6 and 8 as the center node are illustrated as examples.}\n\\end{figure}\n\\subsubsection{Construction of spreading trajectory paths.} In each run of the SI spreading process, a node $i$ is selected uniformly at random as the seed. The SI spreading process starting from $i$ is performed. The spreading trajectory $\\mathcal{T}_{i}(\\beta)$ is the union of all the nodes that finally get infected, together with all the links that have transmitted the infection between node pairs.\n\n\nFrom each spreading trajectory $\\mathcal{T}_{i}(\\beta)$, we construct $m_{i}$ trajectory paths, each of which is the path between the root node $i$ and a randomly selected leaf node in $\\mathcal{T}_{i}(\\beta)$. The number $m_{i}$ of trajectory paths to be extracted from $\\mathcal{T}_{i}(\\beta)$ is assumed to be given by \\[m_{i}=\\max\\left\\{1, \\frac{\\mathcal{K}(i)}{\\sum_{j\\in\\mathcal{N}}\\mathcal{K}(j)}m_{\\max}\\right\\},\\] where $m_{\\max}$ is a control parameter and $\\mathcal{K}(i)$ is the degree of the root node $i$ in the static network (or aggregated network).\n\nThe trajectory paths may have different lengths (i.e., number of nodes in the path). For a trajectory path whose length is larger than $L_{\\max}=20$, we only take the first $L_{\\max}$ nodes on the path. For a randomly chosen seed node $i$, we can generate $m_{i}$ trajectory paths from $\\mathcal{T}_{i}(\\beta)$. We keep running SI spreading processes until the sum of the length of the trajectory paths reaches the sampling size $B=NX$, where $X$ is a control parameter. We consider $X \\in \\{1,2, 5, 10, 25, 50, 100, 150, 200, 250, 300, 350\\}$. 
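The sampling procedure of Algorithm~\ref{alg:walkgenerator} can be sketched in Python as follows. This is a minimal sketch under the description above, not an implementation from the paper; the network is assumed to be an adjacency-list dictionary, and all function and variable names are our own.

```python
import random

def si_trajectory_paths(adj, beta, m_max, L_max=20, X=10, seed=0):
    """Sketch of Algorithm 1: sample SI-spreading trajectory paths.

    adj: dict mapping each node to a list of its neighbors.
    Returns the trajectory path set D once the total number of
    sampled nodes reaches the budget B = N * X.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    deg_sum = sum(len(adj[u]) for u in nodes)
    B = len(nodes) * X
    D, C = [], 0
    while B - C > 0:
        i = rng.choice(nodes)                     # random seed node
        # SI spreading; parent[] encodes the trajectory tree T_i(beta)
        parent, frontier = {i: None}, [i]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in parent and rng.random() < beta:
                        parent[v] = u
                        nxt.append(v)
            frontier = nxt
        # leaves of the trajectory tree (infected nodes that infected no one)
        internal = {p for p in parent.values() if p is not None}
        leaves = [v for v in parent if v not in internal]
        # number of paths drawn from this tree, proportional to deg(i)
        m_i = max(1, round(len(adj[i]) / deg_sum * m_max))
        for _ in range(m_i):
            node, path = rng.choice(leaves), []
            while node is not None:               # walk leaf -> root
                path.append(node)
                node = parent[node]
            path = path[::-1][:L_max]             # root -> leaf, truncated
            D.append(path)
            C += len(path)
    return D
```

Each returned path runs from the seed toward a leaf of the spreading tree, so consecutive nodes on a path are always adjacent in the network.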
We compare different algorithms using the same $B$ for fair comparison~\\cite{nguyen2018continuous} to understand the influence of the sampling size. We show how to sample the trajectory paths in Algorithm~\\ref{alg:walkgenerator}.\n\n\\subsubsection{Node pair set generation.} We illustrate how to generate the node pairs, the input of the Skip-Gram, from a trajectory path in Figure~\\ref{Fig:gram}. Consider a trajectory path, $1,3,6,8,9,10,7,5$, starting from node 1 and ending at node 5. We set each node, e.g., node 3, as the center node, and the neighboring nodes of the center node are defined as nodes within $\\omega=2$ hops. The neighboring nodes of node 3 are 1, 6, and 8. We thus obtain ordered node pairs $(3, 1)$, $(3, 6)$, and $(3, 8)$. Thus, we use the union of node pairs centered at each node in each trajectory path as the input to the Skip-Gram model.\n\n\n\\subsection{Skip-Gram model}\n\\label{Skip-Gram model}\n\n\nWe illustrate how the Skip-Gram derives the embedding vector for each node based on the input node pair set. We denote by $N_{SI}(i)$ the neighboring set for a node $i$ derived from the SI spreading process. A neighboring node $j$ of $i$ may appear multiple times in $N_{SI}(i)$ if $(i, j)$ appears multiple times in the node pair set.\n\nLet $p(j|i)$ be the probability of observing neighboring node $j$ given node $i$. We model the conditional probability $p(j|i)$ as the softmax unit parametrized by the product of the embedding vectors, i.e., $\\overrightarrow{u_{i}}$ and $\\overrightarrow{u_{j}}$, as follows:\n \\begin{eqnarray}\\label{likelihood function}\n p(j|i)\n&=& \\frac{\\exp\n(\\overrightarrow{u_{i}}\\cdot \\overrightarrow{u_{j}}^{T})}{\\sum_{k\\in\\mathcal{N}}\\exp\n(\\overrightarrow{u_{i}}\\cdot \\overrightarrow{u_{k}}^{T})}\n\\end{eqnarray} The goal of Skip-Gram is to derive the set of the $N$ embedding vectors that maximizes the log probability of observing every neighboring node from $N_{SI}(i)$ for each $i$. 
Therefore, one maximizes\n\\begin{eqnarray}\\label{log objective function}\n\\max\\quad \\mathcal{O}&=&\\sum_{i\\in\\mathcal{N}}\\sum_{j\\in N_{SI}(i)}\\log p(j|i).\n\\end{eqnarray}\n\n\nEquation~(\\ref{log objective function}) can be further simplified to\n\\begin{eqnarray}\\label{log objective function1}\n\\max\\quad \\mathcal{O} &=&\\sum_{i\\in\\mathcal{N}}\\left(-\\log Z_{i} + \\sum_{j\\in N_{SI}(i)}\\overrightarrow{u_{i}}\\cdot \\overrightarrow{u_{j}}^{T}\\right),\n\\end{eqnarray}\nwhere\n\\begin{equation}\nZ_{i}=\\sum_{k\\in\\mathcal{N}} \\exp(\\overrightarrow{u_{i}}\\cdot \\overrightarrow{u_{k}}^{T}).\n\\end{equation}\nTo compute $Z_{i}$ for a given $i$, we need to traverse the entire node set $\\mathcal{N}$, which is computationally costly. To solve this problem, we introduce negative sampling~\\cite{mikolov2013distributed}, which randomly selects a certain number of nodes from $\\mathcal{N}$ to approximate $Z_{i}$. To get the embedding vectors for each node, we use stochastic gradient ascent to optimize Eq.~(\\ref{log objective function1}).\n\n\nThe static network embedding algorithm proposed above from the SI-spreading-based static network sampling and Skip-Gram model is named \\textit{SINE}.\n\\subsection{SI-spreading-based temporal network sampling}\n\\label{SI based temporal network sampling}\nWe generalize \\textit{SINE} to the SI-spreading-based temporal network embedding by deploying SI spreading processes on the given temporal network, namely, \\textit{TSINE}.\nFor a temporal network $\\mathcal{G} = (\\mathcal{N}, \\mathcal{L})$, SI spreading follows the time step of the contacts in $\\mathcal{G}$. Initially, node $i$ is chosen as the seed of the spreading process. At every time step $t\\in [0, T]$, an infected node infects each of its susceptible neighbors in the snapshot through the contact between them with probability $\\beta$. 
The process stops at time $T$.\nWe construct the spreading trajectory starting from node $i$ as $\\mathcal{T}_{i}(\\beta)$, which records the union of nodes that get infected together with the contacts through which these nodes get infected. We propose two protocols to select the seed node of the SI spreading. In the first protocol, we start by selecting uniformly at random a node $i$ as the seed. Then, we select uniformly at random a time step from all the times of contacts made by node $i$ as the starting point of the spreading process, i.e., the time when $i$ gets initially infected. We refer to this protocol as \\textit{TSINE1}. In the second protocol, we choose a node $i$ uniformly at random as the seed and start the spreading at the time when node $i$ has the first contact. We refer to this protocol as \\textit{TSINE2}.\n\nBoth \\textit{TSINE1} and \\textit{TSINE2} generate\nthe node pair set from the spreading trajectory $\\mathcal{T}_{i}(\\beta)$ in the same way as described in Section~\\ref{SI based static network sampling}. The node pairs from the node pair set are the input of Skip-Gram for calculating the embedding vector for each node. The SI-spreading-based temporal network embedding uses the information on the time stamps of contacts in addition to the information used by the static network embedding.\n\n\n\n\n\n\\section{Results}\n\\label{Evaluation Method}\nFor the link prediction task in a static network, we remove a certain fraction of links from the given network and predict these missing links based on the remaining links. We apply our static network embedding algorithm to the remaining static network to\nderive the embedding vectors for the nodes, which are used for link prediction. For a temporal network, we select a fraction of node pairs that have at least one contact. We remove all the contacts between the selected node pairs from the given temporal network. 
Then, we attempt to predict whether the selected node pairs have at least one contact or not based on the remaining temporal network. We use the area under the curve (AUC) score to evaluate the performance of the algorithms on the link prediction task. The AUC quantifies the probability of ranking a random node pair that is connected or has at least one contact higher than a random node pair that is not connected or has no contact.\n\n\n\n\n\n\\subsection{Empirical Networks}\n\\label{Empirical Networks}\nWe consider temporal networks, each of which records the contacts and their corresponding time stamps between every node pair. For each temporal network $\\mathcal{G}$, one can obtain the corresponding static network $G$ by aggregating the contacts between each node pair over time. In other words, two nodes are connected in the static network $G$ if there is at least one contact between them in $\\mathcal{G}$. The static network $G$ derived from $\\mathcal{G}$ is unweighted by definition. We consider the following temporal social network data sets.\n\\begin{itemize}\n \\item \\textit{HT2009}~\\cite{isella2011s} is a network of face-to-face contacts between the attendees of the ACM Hypertext 2009 conference.\n \\item \\textit{Manufacturing Email (ME)}~\\cite{michalski2011matching} is an email contact network between employees in a mid-sized manufacturing company.\n \\item \\textit{Haggle}~\\cite{chaintreau2007impact} records the physical contacts between individuals via wireless devices.\n \\item \\textit{Fb-forum}~\\cite{opsahl2013triadic} captures the contacts between students at the University of California, Irvine, in a Facebook-like online forum.\n \\item \\textit{DNC}~\\cite{konect:2017:dnc-temporalGraph} is an email contact network in the 2016 Democratic National Committee email leak.\n \\item \\textit{CollegeMsg}~\\cite{opsahl2009clustering} records messages between the users of an online community of students from the University of California, 
Irvine.\n\\end{itemize}\nTable~\\ref{TB:1} provides some properties of the empirical temporal networks. In the first three columns we show the properties of the temporal networks, i.e., the number of nodes ($N$), timestamps ($T$) and contacts ($|\\mathcal{L}|$). In the remaining columns, we show the properties of the corresponding aggregated static networks, including the number of links ($|E|$), link density, average degree, and clustering coefficient. The temporal networks are considerably different in size, which ranges from hundreds to thousands of nodes, as well as in the network density and clustering coefficient. Choosing networks with different properties allows us to investigate whether the performance of our algorithms can be consistent across networks.\n\n\\begin{table*}[!ht]\n\\centering\n\\caption{\\label{TB:1}Properties of the empirical temporal networks. The number of nodes ($N$), timestamps ($T$), and contacts ($|\\mathcal{L}|$) are shown. In addition, the number of links ($|E|$), link density, average degree, and clustering coefficient of the corresponding static network are shown. }\n\n\\begin{tabular}{cccccccccc}\n\\hline\n&Dataset &$N$ &$T$ &$|\\mathcal{L}|$ &$|E|$ &Link Density &Average Degree &Clustering Coefficient\\\\ \\hline\n&HT2009 & 113 & 5,246 & 20,818 & 2,196 & 0.3470 & 38.87 &0.5348\\\\\n&ME & 167 & 57,842 & 82,927 & 3,251 & 0.2345 & 38.93 &0.5919\\\\\n&Haggle &274 &15,662 &28,244 &2,124 &0.0568 & 15.5 &0.6327\\\\\n&Fb-forum & 899 & 33,515 & 33,720 & 7,046 & 0.0175 & 15.68 &0.0637\\\\\n&DNC & 1,891 & 19,383 & 39,264 & 4,465 & 0.0025 &4.72 &0.2091\\\\\n&CollegeMsg &1,899 &58,911 &59,835 &13,838 & 0.0077 &14.57 &0.1094\\\\\n\\hline\n\\end{tabular}\n\\end{table*}\n\n\n\\subsection{Baseline algorithms}\n\\label{Baseline algorithms}\nWe consider three state-of-the-art network embedding algorithms based on Skip-Gram. 
These baseline algorithms and the algorithms that we proposed differ only in the method to sample trajectory paths, from which the node pair set, i.e., the input to the Skip-Gram, is derived.\n \\textit{DeepWalk}~\\cite{perozzi2014deepwalk} and \\textit{Node2Vec}~\\cite{grover2016node2vec} are static network embedding algorithms based on random walks. \\textit{CTDNE}~\\cite{nguyen2018continuous} is a temporal network embedding algorithm based on random walks.\n\\begin{itemize}\n\\item \\textit{DeepWalk}~\\cite{perozzi2014deepwalk} deploys classic random walks on a given static network.\n\n\\item \\textit{Node2Vec}~\\cite{grover2016node2vec} deploys biased random walks on a given static network. The biased random walk gives a trade-off between breadth-first-like sampling and depth-first-like sampling of the neighborhood, which is controlled via two hyper-parameters $p$ and $q$. We use a grid search over $p, q \\in \\{0.01, 0.25, 0.5, 1, 2, 4\\}$ to obtain embeddings that achieve the largest AUC value for link prediction.\n\n\\item \\textit{CTDNE}~\\cite{nguyen2018continuous} is a temporal network embedding algorithm based on temporal random walks. The main idea is that the timestamp of the next temporal contact on the walk should be larger than the timestamps of previously traversed contacts.\nGiven a temporal network $\\mathcal{G} = (\\mathcal{N}, \\mathcal{L})$, the starting contact for the temporal random walk is selected uniformly at random. Thus, every contact has probability $1\/|\\mathcal{L}|$ to be selected as the starting contact. Assume that a random walker visits node $i$ at time step $t$. We define $\\Gamma_{t}(i)$ as the multiset of nodes that have contacted node $i$ after time $t$ (allowing duplicated elements). A node may appear multiple times in $\\Gamma_{t}(i)$ because it may have multiple contacts with node $i$ over the course of time. 
The next node to walk to is uniformly selected from $\\Gamma_{t}(i)$, i.e., every node in $\\Gamma_{t}(i)$ is chosen with probability $1\/|\\Gamma_{t}(i)|$. Nguyen et al.~\\cite{nguyen2018continuous} generalized the starting contact and the successor node of a temporal walk to other distributions beyond the uniform distribution illustrated here.\n When we compare the performance of the algorithms on link prediction, for \\textit{CTDNE} we use the embeddings that give the largest AUC value, taking into account all possible generalizations proposed by Nguyen et al.\n \\end{itemize}\n\n\nIn our SI-spreading-based algorithms for both static and temporal networks, we set $\\beta \\in \\{0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7,0.8, 0.9, 1.0\\}$. We use $\\omega=10$ and embedding dimension $d=128$ for our algorithms and the baseline algorithms.\n\n\n\\subsection{Performance Evaluation}\n\\label{Performance Evaluation}\n\n\\subsubsection{Training and test sets}\nIn this section, we illustrate how to generate the training and test sets in the link prediction task in temporal and static networks.\nWe run the network embedding algorithms on the corresponding training set to obtain an embedding vector for each node,\nand use the AUC to evaluate the link prediction performance in the test set.\n\n Given a temporal network $\\mathcal{G}$, we select uniformly at random $75\\%$ of the node pairs that have at least one contact between them in $\\mathcal{G}$ as the training set for temporal embedding algorithms, including all the contacts and their timestamps. The training set for static network embedding algorithms is the aggregation of the training set for temporal embedding algorithms. 
In other words, for every node pair, there is a link between the two nodes in the training set for static network embedding if and only if they have at least one contact in the training set for temporal embedding algorithms.\n\n We use the remaining $25\\%$ of the node pairs that have at least one contact in $\\mathcal{G}$ as the positive links in the test set. We label these node pairs 1. Then, we sample uniformly at random an equal number of node pairs in $\\mathcal{G}$ which have no contact between them. These node pairs are used as negative links in the test set, which we label 0. The same test set is used for the link prediction task in both temporal and static networks.\n\nFor each temporal network data set, we randomly split the data into training and test sets five times according to the procedures given above. Both random walks and SI spreading processes are stochastic. For each data split, we run each algorithm on the training set and perform the link prediction on the test set for ten realizations. Therefore, we obtain ten AUC scores for each splitting of the data into the training and test sets, averaging out the randomness stemming from the stochasticity of the random walk or SI spreading processes. We obtain the AUC score for each algorithm with a given parameter set as an average over 50 realizations in total.\n\n\n\n\\subsubsection{Evaluation Results}\n\n\\begin{table}[!ht]\n\\centering\n\\caption{\\label{TB:AUC}AUC scores for link prediction. All the results shown are the average over 50 realizations. Bold indicates the optimal AUC among the embedding algorithms, $^{*}$ indicates the optimal AUC among all the algorithms. 
L2, L3, and L4 are short for the link prediction metrics that count the number of $l=2,3,4$ paths, respectively.}\n\\resizebox{\\textwidth}{18mm}{\n\\begin{tabular}{ccccccccccc}\n \\hline\n &Dataset &DeepWalk &Node2Vec &CTDNE &TSINE1 &TSINE2 & SINE & L2 & L3 & L4 \\\\ \\hline\n &HT2009 & 0.5209 & 0.5572 & 0.6038 & 0.6740 &\\textbf{0.6819} &0.6726 & $0.7069^{*}$ & 0.7066 & 0.7055 \\\\\n &ME & 0.6439 & 0.6619 & 0.6575 & 0.7329 &0.7462 &\\textbf{0.7744} & 0.7855 & $0.7878^{*}$ & 0.7790 \\\\\n &Haggle & 0.3823 & 0.7807 & 0.7796 & 0.8051 &0.8151 &$\\textbf{0.8267}^{*}$ & 0.8167 & 0.8255 &0.8226 \\\\\n &Fb-forum &0.5392 &0.6882 & 0.6942 & 0.7104 &0.7195 &$\\textbf{0.7302}^{*}$ & 0.5606 & 0.7179 &0.7203 \\\\\n &DNC &0.5822 &0.5933 &0.7274 & 0.7539 &0.7529 &\\textbf{0.7642} &$0.7704^{*}$ &0.7627 &0.7193\\\\\n &CollegeMsg &0.5356 &0.5454 &0.7872 & 0.8257 &0.8321 &\\textbf{0.8368} &0.7176 &$0.8609^{*}$ &0.8203\\\\\n \\hline\n\\end{tabular}}\n\\end{table}\n\\begin{figure*}[!ht]\n\\centering\n \n\t\\includegraphics[width=7.5cm]{haggle-DW.png}\n\t\\includegraphics[width=7.5cm]{haggle-Node2Vec.png}\n\t\\includegraphics[width=7.5cm]{haggle-CTDNE.png}\n\t\\includegraphics[width=7.5cm]{haggle-TSINE2.png}\n\t\\includegraphics[width=7.5cm]{haggle-SINE.png}\n \\caption{The dot product distribution of the two end nodes' embedding vectors of the positive and negative links in the test set. We show the results for the \\textit{Haggle} data set. For each algorithm, we use the same parameter settings as in Table~\\ref{TB:AUC} to obtain the embeddings. Dot products of positive links are shown in grey. Negative links are shown in pink. 
The results are shown for algorithms (a) \\textit{DeepWalk}; (b) \\textit{Node2Vec}; (c) \\textit{CTDNE}; (d) \\textit{TSINE2} and (e) \\textit{SINE}.}\n \\label{fig:dot_product_hist_Haggle}\n\\end{figure*}\nWe summarize the overall performance of the algorithms on missing link prediction in Table~\\ref{TB:AUC}.\nFor each algorithm, we tune the parameters and show the optimal average AUC score. Among the static network embedding algorithms, \\textit{SINE} significantly outperforms \\textit{DeepWalk} and \\textit{Node2Vec}. The improvement in the AUC score is up to 30\\% on the \\textit{CollegeMsg} dataset. The embedding algorithms \\textit{CTDNE}, \\textit{TSINE1} and \\textit{TSINE2} are for temporal networks. The SI-spreading-based algorithms (i.e., \\textit{TSINE1} and \\textit{TSINE2}) also show better performance than the random-walk-based one (\\textit{CTDNE}). Additionally, \\textit{TSINE2} is slightly better than \\textit{TSINE1} on all data sets. Therefore, we will focus on \\textit{TSINE2} in the following analysis. In fact, \\textit{SINE} shows better performance than the temporal network embedding methods, including \\textit{TSINE2}, on all data sets except for \\textit{HT2009}.\nIt has been shown that temporal information is important for learning embeddings~\\cite{nguyen2018continuous, zuo2018embedding, zhou2018dynamic}. However, within the scope of our numerical experiments, \\textit{SINE} outperforms the temporal network algorithms although \\textit{SINE} deliberately neglects temporal information.\n\n\n\n\\begin{figure*}[!ht]\n\\centering\n \n\t\\includegraphics[width=16cm]{optimal_degree_distri}\n \\caption{Cumulative degree distribution of the static network derived from the training set and that of the sampled networks $G_{S}$ from different algorithms. 
We show the results for (a)\textit{HT2009}; (b)\textit{ME}; (c)\textit{Haggle}; (d)\textit{Fb-forum}; (e)\textit{DNC}; (f)\textit{CollegeMsg}.}\n \label{fig:degree distri}\n\end{figure*}\n\nTo get insights into the different performance among the embedding algorithms, we further investigate the distribution of the dot product of node embedding vectors. Given a link $(i, j)$ in the test set, we compute the dot product of the two end nodes' embedding vectors, i.e., $\overrightarrow{u_{i}}\cdot \overrightarrow{u_{j}}^{T}$. We show the dot product distribution for the positive links and negative links in the test set separately. For each embedding algorithm, we consider only the parameter set that maximizes the AUC, i.e., the parameter values with which the results are shown in Table~\ref{TB:AUC}. We show the distribution of the dot product for \textit{Haggle} in Figure~\ref{fig:dot_product_hist_Haggle} and for the other data sets in Figures~S1--S5 in the Appendix.\nCompared to the random-walk-based algorithms, \textit{TSINE2} and \textit{SINE} yield more distinguishable distributions between the positive (grey) and the negative (pink) links. This result supports the better performance of SI-spreading-based embeddings compared to random-walk-based ones.\n\n\n\n\nThe embedding algorithms differ only in the sampling method used to generate the node pair set. These algorithms use the same Skip-Gram architecture, which takes the node pair set as input, to deduce the embedding vector for each node. We explore further how the algorithms differ in the node pair sets that they sample. The objective is to discover the relation between the properties of the sampled node pairs and the performance of an embedding method.\nWe represent the node pair set generated by an embedding method as a network $G_{S}=(\mathcal{N}, E_{S})$, the so-called sampled network. Two nodes are connected in $G_{S}$ if they form a node pair in the node pair set. 
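As a concrete aid, the construction of the sampled network $G_{S}$ and its cumulative degree distribution can be sketched in Python. This is a minimal illustration, not the authors' code; `node_pairs` stands for a hypothetical list of pairs emitted by any of the sampling strategies.

```python
from collections import defaultdict

def sampled_network(node_pairs):
    """Build the unweighted sampled network G_S from a node-pair set.

    Repeated pairs collapse to a single edge, matching the fact that
    G_S is unweighted. Returns a dict: node -> set of neighbours.
    """
    adj = defaultdict(set)
    for i, j in node_pairs:
        if i != j:
            adj[i].add(j)
            adj[j].add(i)
    return adj

def cumulative_degree_distribution(adj):
    """Return {k: fraction of nodes with degree >= k}."""
    degrees = [len(nbrs) for nbrs in adj.values()]
    n = len(degrees)
    return {k: sum(d >= k for d in degrees) / n
            for k in range(1, max(degrees) + 1)}
```

Plotting `cumulative_degree_distribution` for each algorithm's pair set against that of the training network reproduces the kind of comparison shown in the degree-distribution figure.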
It should be noted that $G_{S}$ is an unweighted network. For each algorithm, with the parameter set that maximizes the AUC, we show the cumulative degree distribution of its sampled network $G_{S}$ in Figure~\ref{fig:degree distri}. The cumulative degree distribution of the static network derived from the training set is also given. Compared to the cumulative degree distribution of the training set, the sampled networks tend to have a higher node degree. Zhang et al.\ and Gao et al.~\cite{zhang2019degree,gao2018bine} have shown that when the degree distribution of $G_{S}$ is closer to that of the training set, the prediction performance of a random-walk-based algorithm tends to be better. Even though SI-spreading-based algorithms perform the best across the data sets, we have not found a direct relation between the performance of the embedding algorithm and the similarity between the degree distribution of the sampled network and that of the training set.\n\begin{figure*}[!ht]\n\centering\n \n\t\includegraphics[width=18cm]{l-path_fig}\n \caption{Illustration of $l$-paths between a pair of nodes $i$ and $j$.\n Here we show $l=2,3,4$.}\n \label{fig:l-path}\n\end{figure*}\n\n\nSimilarity-based methods such as the number of $l=2,3,4$ paths have been used for the link prediction problem~\cite{lu2011link}. An $l$-path between two nodes refers to a path that contains $l$ links. We show examples of $l=2, 3, 4$ paths between a node pair $i$ and $j$ in Figure~\ref{fig:l-path}.\nKov\'{a}cs et al.~\cite{kovacs2019network} have shown that $l$-paths ($l=3, 4$) outperform existing link prediction methods in predicting protein interactions. Cao et al.~\cite{cao2019network} found that network embedding algorithms based on random walks sometimes perform worse in link prediction than the number of $l=2$ paths, or equivalently the number of common neighbors. 
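Counts of this kind are cheap to obtain from the adjacency matrix: the $(i,j)$ entry of $A^l$ gives the number of length-$l$ walks between $i$ and $j$. For $l=2$ on a simple graph this equals the number of common neighbours (i.e., $l=2$ paths); for $l=3,4$ the walk count can exceed the simple-path count, since walks may revisit nodes, but it is a standard proxy. A minimal sketch (ours, not the papers' code):

```python
import numpy as np

def walk_counts(A, l):
    """Number of length-l walks between every node pair, via A^l.

    A: adjacency matrix (list of lists or ndarray) of a simple graph.
    For l=2 the off-diagonal entries equal the common-neighbour counts;
    for l=3,4 they upper-bound the simple l-path counts.
    """
    return np.linalg.matrix_power(np.asarray(A), l)
```

For instance, in the path graph $0{-}1{-}2$, nodes $0$ and $2$ share exactly one common neighbour, which `walk_counts(A, 2)[0, 2]` recovers.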
This result suggests a limit of random-walk-based embedding in identifying the links between node pairs that have many common neighbors.\nTherefore, we explore further whether our SI-spreading-based algorithms can overcome this limitation, thus possibly explaining their superior performance.\n\n We investigate what kind of network structure surrounding links makes them easier to predict.\n For every positive link in the test set, we study its two end nodes' topological properties (i.e., the number of $l=2$, $l=3$ and $l=4$ paths) and the dot product of the embedding vectors of its two end nodes. Given a network, the parameters of each embedding algorithm are tuned to maximize the AUC, as given in Table~\ref{TB:AUC}. We take the data set \textit{Haggle} as an example. Figure~\ref{fig:Haggle-path-len2-dot-product} shows the relation between the dot product of the embedding vectors and the number of $l=2,3,4$ paths of the two end nodes of a positive link in the test set for all the embedding methods. The Pearson correlation coefficient (PCC) between the two variables for all the networks and algorithms is given in Table~S1 in the Appendix. Figure~\ref{fig:Haggle-path-len2-dot-product} and Table~S1 together show that the dot product of the embedding vectors constructed from \textit{TSINE2} and \textit{SINE} is more strongly correlated with the number of $l$-paths, where $l=2$, 3 or 4, than that from the random-walk-based embeddings. This result suggests that SI-spreading-based algorithms may better predict the links whose two end nodes have many $l$-paths, thus overcoming the limit of random-walk-based embedding algorithms.\n \begin{figure*}[!ht]\n\centering\n \n\t\includegraphics[width=18cm]{haggle-l234-path}\n \caption{Relation between the dot product of the two end nodes' embedding vectors and the number of $l=2,3,4$ paths between the two end nodes of the positive links in the test set for the \textit{Haggle} data set. 
(a1--a5), (b1--b5) and (c1--c5) are the results for the number of $l=2,3,4$ paths, respectively. }\n \label{fig:Haggle-path-len2-dot-product}\n\end{figure*}\n\n\n\n\n The number of $l=2,3$ paths has been used to predict links in~\cite{lu2011link,kovacs2019network,cao2019network}. This observation and the limit of random-walk-based embedding algorithms motivate us to use the number of $l=2,3,4$ paths between a node pair to predict the missing links.\n Take $l=2$ paths as an example. For every link in the test set, the number of $l=2$ paths between the two end nodes in the training set is used to estimate the likelihood of connection between them. In the networks we considered, two end nodes of a link tend to be connected by $l=2$, $l=3$ and $l=4$ paths (see Figure~\ref{fig:Haggle-path-len2-dot-product}). Table~\ref{TB:AUC} ($L2,L3,L4$ in the table correspond to the method of using the number of $l=2,3,4$ paths for link prediction) shows that in such networks, the similarity-based methods do not clearly outperform the SI-spreading-based embedding. Actually, the SI-spreading-based embedding performs better in two out of six networks.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\begin{figure}[!ht]\n\centering\n\includegraphics[width=18cm]{AUC-model}\n \caption{\label{Fig:X} Influence of the sampling size $B=NX$ on the link prediction performance, i.e., the AUC score. The error bar shows the standard deviation of the AUC score calculated on the basis of 50 realizations.\n We show the results for (a)\textit{HT2009}; (b)\textit{ME}; (c)\textit{Haggle}; (d)\textit{Fb-forum}; (e)\textit{DNC}; (f)\textit{CollegeMsg}.}\n\end{figure}\n\nNext, we study the effect of the sampling size, $B$, on the performance of each algorithm. The sampling size is quantified as the total length of the trajectory paths as defined in Section~\ref{SI based static network sampling}. 
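The $L2$ similarity predictor and the AUC measure used throughout can be sketched as follows. This is our own minimal illustration under simplified assumptions: `adj` is a training network as a node-to-neighbour-set dict, and the AUC is computed by exhaustive pairwise comparison of positive and negative test links, with ties counted as one half.

```python
def l2_scores(adj, test_pairs):
    """Score each candidate link (i, j) by its number of common
    neighbours in the training network, i.e. the number of l=2 paths."""
    return [len(adj.get(i, set()) & adj.get(j, set()))
            for i, j in test_pairs]

def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive (missing) link outscores
    a sampled negative (non-existent) one, counting ties as 1/2."""
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos_scores for q in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

The same `auc` routine applies to embedding-based predictors by feeding it dot products of embedding vectors instead of common-neighbour counts.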
Given a network, we set $B=NX$, where $N$ is the size of the network and $X \in \{1,2, 5, 10, 25, 50, 100, 150\}$. We evaluate our SI-spreading-based embedding algorithms \textit{SINE} and \textit{TSINE2}, and one random-walk-based embedding algorithm, \textit{CTDNE}, because \textit{CTDNE} generally performs the best among the random-walk-based algorithms. The result is shown in Figure~\ref{Fig:X}. For each $X$, we tune the other parameters to show the optimal AUC in the figure. Both \textit{SINE} and \textit{TSINE2} perform better than \textit{CTDNE} and are relatively insensitive to the sampling size. This means that they achieve good performance even when the sampling size is small, e.g., with $X=1$. The random-walk-based algorithm, \textit{CTDNE}, however, requires a relatively large sampling size to achieve a performance comparable to \textit{SINE} and \textit{TSINE2}.\n\nFinally, the AUC as a function of the infection probability, $\beta$, is shown in Figure~\ref{Fig:AUC-beta-opt}. For each $\beta$, we tune the other parameters to show the optimal AUC. The SI-spreading-based algorithms achieve high performance with a small infection probability ($0.001 \leq \beta \leq 0.1$) for all the data sets. The high performance of the SI-spreading-based embedding algorithms with small values of $X$ and $\beta$ across different networks motivates further study of whether one can optimize the performance by searching a smaller range of parameter values.\n\n\n\begin{figure}\n\centering\n\includegraphics[width=18cm]{AUC-beta-opt}\n \caption{\label{Fig:AUC-beta-opt} AUC as a function of $\beta$. 
We show the results for (a)\textit{HT2009}; (b)\textit{ME}; (c)\textit{Haggle}; (d)\textit{Fb-forum}; (e)\textit{DNC}; (f)\textit{CollegeMsg}.}\n\end{figure}\n\n\n\section{Conclusions}\n\label{Conclusions}\nIn this paper, we proposed network embedding algorithms based on SI spreading processes, in contrast to previously proposed embedding algorithms based on random walks~\cite{zhan2019information, zhan2018coupling}. We further evaluated the embedding algorithms on the missing link prediction task.\n The key point of an embedding algorithm is how to design a strategy for sampling trajectories to obtain embedding vectors for nodes. We used the SI model to this end. The algorithms that we proposed are \textit{SINE} and \textit{TSINE}, which use static and temporal networks, respectively.\n\n\nOn six empirical data sets, the SI-spreading-based network embedding algorithm on the static network, i.e., \textit{SINE}, achieves a substantial improvement over state-of-the-art random-walk-based network embedding algorithms across all the data sets. The SI-spreading-based network embedding algorithms on the temporal network, \textit{TSINE1} and \textit{TSINE2}, also show better performance than the temporal random-walk-based algorithm. Temporal information provides additional information that may be useful for constructing embedding vectors~\cite{nguyen2018continuous, zuo2018embedding, zhou2018dynamic}. However, we find that \textit{SINE} outperforms \textit{TSINE}, which uses the timestamps of the contacts. This result suggests that temporal information does not necessarily improve the embedding for missing link prediction. Moreover, when the sampling size of the Skip-Gram is small, the performance of the SI-spreading-based embedding algorithms is still high. Sampling trajectory paths takes time, especially for large-scale networks. 
Therefore, our observation that the SI-spreading-based algorithms require fewer samples than other algorithms suggests that the SI-spreading-based algorithms are applicable to larger networks than the random-walk-based algorithms. Finally, we provide insight into why the SI-spreading-based embedding algorithms perform best by investigating what kinds of links are likely to be predicted.\n\nWe consider the following directions for future work important. We have already applied the susceptible-infected-susceptible (SIS) model and evaluated the SIS-spreading-based embedding. However, this generalization has not improved the performance in the link prediction task. Therefore, one may explore whether or not sampling the network information via other spreading processes, such as the susceptible-infected-recovered (SIR) model, further improves the embedding. It is also interesting to explore further the performance of the SI-spreading-based algorithms in other tasks such as classification and visualization. Moreover, the SI-spreading-based sampling strategies can also be generalized to other types of networks, e.g., directed networks, signed networks, and multilayer networks.\n\n\n\n\n\n\n\n\section{Competing interests}\n The authors declare that they have no competing interests.\n\n\section{Author's contributions}\nAll authors planned the study; X.Z. and Z.L. performed the experiments, analyzed the data and prepared\nthe figures. All authors wrote the manuscript.\n\n\section{Acknowledgements}\nWe thank the SocioPatterns collaboration (http:\/\/www.sociopatterns.org) for providing the data sets. This work has been partially supported by the China Scholarship Council (CSC).\n\clearpage\n\n\bibliographystyle{naturemag}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\section{Introduction}\n\label{sec:intro}\n\n\subsection{The boundary manifold} \n\nLet $\mathcal{A}$ be an arrangement of hyperplanes in the complex \nprojective space ${\mathbb{CP}}^m$, $m>1$. 
\nDenote by $V= \\bigcup_{H\\in\\mathcal{A}} H$ the corresponding \nhypersurface, and by $X={\\mathbb{CP}}^m \\setminus V$ its complement. \nAmong the origins of the topological study of arrangements \nare seminal results of Arnol'd \\cite{Ar69} and Cohen \\cite{FC}, \nwho independently computed the cohomology of the \nconfiguration space of $n$ ordered points in $\\C$, the \ncomplement of the braid arrangement. The cohomology \nring of the complement of an arbitrary arrangement $\\mathcal{A}$ \nis by now well known. It is isomorphic to the Orlik--Solomon \nalgebra of $\\mathcal{A}$, see Orlik and Terao \\cite{OT1} as a general \nreference.\n\nIn this paper, we study a related topological space, namely \nthe \\emph{boundary manifold} of $\\mathcal{A}$. By definition, this \nis the boundary $M=\\partial N$ of a regular neighborhood of \nthe variety $V$ in ${\\mathbb{CP}}^m$. Unlike the complement $X$, \nan open manifold with the homotopy type of a CW--complex \nof dimension at most $m$, the boundary manifold $M$ is a \ncompact (orientable) manifold of dimension $2m-1$. \n\nIn previous work \\cite{CS06}, we have shown that \nthe cohomology ring of $M$ is functorially determined \nby that of $X$ and the ambient dimension. In particular, \n$H_*(M;\\Z)$ is torsion-free, and the respective Betti numbers \n are related by $b_k(M)=b_k(X)+b_{2m-k-1}(X)$. \nSo we turn our attention here to another topological invariant, \nthe fundamental group. The inclusion map \n$M \\to X$ is an $(m-1)$--equivalence, see Dimca \\cite{Dimca}. \nConsequently, \nfor an arrangement $\\mathcal{A}$ in ${\\mathbb{CP}}^m$ with $m \\ge 3$, \nthe fundamental group of the boundary is isomorphic \nto that of the complement. In light of this, we focus \non arrangements of lines in~${\\mathbb{CP}}^2$.\n\n\\subsection{Fundamental group} \n\nLet $\\mathcal{A}=\\{\\ell_0, \\dots, \\ell_n\\}$ be a line arrangement in ${\\mathbb{CP}}^2$. 
\nThe boundary manifold $M$ is a graph manifold in the sense \nof Waldhausen \\cite{Wa1,Wa2}, modeled on a certain weighted graph \n${\\Gamma_{\\!\\!\\mathcal{A}}}$. This structure, which \nwe review in \\fullref{sec:bdry}, has been used by a number \nof authors to study the manifold $M$. For instance, \nJiang and Yau \\cite{JY93,JY98} investigate the relationship \nbetween the topology of $M$ and the combinatorics of $\\mathcal{A}$, \nand Hironaka \\cite{Hi} analyzes the relationship between \nthe fundamental groups of $M$ and $X$. \n\nIf $\\mathcal{A}$ is a pencil of lines, then $M$ is a connected sum of \n$n$ copies of $S^1 \\times S^2$. Otherwise, $M$ is aspherical, \nand so the homotopy type of $M$ is encoded in its fundamental group.\nUsing the graph manifold structure, and a method due \nto Hirzebruch \\cite{Hir}, Westlund finds a presentation \nfor the group $G=\\pi_1(M)$ in \\cite{We}. In \\fullref{sec:pi1}, we \nbuild on this work to find a {\\em minimal} presentation \nfor the fundamental group, of the form \n\\begin{equation}\n\\label{eq:pi1 intro}\nG=\\langle x_j, \\gamma_{i,k} \\mid R_j, R_{i,k}\\rangle, \n\\end{equation}\nwhere $x_j$ corresponds to a meridian loop around line\n$\\ell_j$, for $1\\le j \\le n=b_1(X)$, and $\\gamma_{i,k}$ corresponds\nto a loop in the graph $\\Gamma_{\\mathcal{A}}$, indexed by a pair\n$(i,k)\\in \\nbc_2({\\mathsf{d}\\mathcal{A}})$, where $\\abs{\\nbc_2({\\mathsf{d}\\mathcal{A}})}=b_2(X)$. The\nrelators $R_j$, $R_{i,k}$ (indexed in the same way) are certain \nproducts of commutators in the generators. In other words, \n$G$ is a commutator-relators group, with both \ngenerators and relators equal in number to $b_1(M)$. \n\n\n\\subsection{Twisted Alexander polynomial and related invariants}\n\nSince $M$ is a graph manifold, the group $G=\\pi_1(M)$ \nmay be realized as the fundamental group of a graph of groups. 
\nIn \\fullref{sec:AlexPolys} and \\fullref{sec:alex poly arr}, this structure \nis used to calculate the twisted Alexander polynomial $\\Delta^\\phi(G)$ \nassociated to $G$ and an arbitrary complex representation \n$\\phi\\colon G \\to \\GL_k(\\C)$. In particular, we show that the \nclassical multivariable Alexander polynomial, arising from \nthe trivial representation of $G$, is given by\n\\begin{equation}\n\\label{eq:delta intro}\n\\Delta(G) = \\prod_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})} (t_v-1)^{m_v-2}, \n\\end{equation}\nwhere $\\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})$ is the vertex set of ${\\Gamma_{\\!\\!\\mathcal{A}}}$, \n$m_v$ denotes the multiplicity or degree of the vertex $v$, \nand $t_v=\\prod_{i\\in v} t_i$. \n\nTwisted Alexander polynomials \ninform on invariants such as the Alexander and Thurston norms, \nand Bieri--Neumann--Strebel (BNS) invariants. As such, they are \na subject of current interest in $3$--manifold theory. In the case \nwhere $G$ is a link group, a number of authors, including \nDunfield \\cite{Dun} and Friedl and Kim \\cite{FK}, have used \ntwisted Alexander polynomials to distinguish between the \nThurston and Alexander norms. This is not possible for \n(complex representations of) the fundamental group of the \nboundary manifold of a line arrangement. In \\fullref{sec:alex balls}, \nwe show that the unit balls in the norms on $H^1(G;\\R)$ \ncorresponding to any two twisted Alexander polynomials \nare equivalent polytopes. Analysis of the structure of these \npolytopes also enables us to calculate the number of \ncomponents of the BNS invariant of $G$ and the Alexander \ninvariant of $G$.\n\n\\subsection{Cohomology ring and graded Lie algebras} \n\nIn \\fullref{sect:coho}, we revisit the cohomology ring of \nthe boundary manifold $M$, in our $3$--dimensional context. \n{F}rom \\cite{CS06}, we know that $H^*(M;\\Z)$ is isomorphic \nto $\\db{A}$, the ``graded double\" of $A=H^*(X;\\Z)$. 
In particular, \n$\\db{A}^1=A^1\\oplus \\bar{A}^2$, where $\\bar{A}^k=\\Hom(A^k,\\Z)$. \nThis information allows us to identify the $3$--form \n$\\eta_M$ which encodes all the cup-product structure in \nthe Poincar\\'{e} duality algebra $H^*(M;\\Z)$. If $\\{e_j\\}$ \nand $\\{f_{i,k}\\}$ denote the standard bases \nfor $A^1$ and $A^2$, then \n\\begin{equation}\n\\label{eq:eta intro}\n\\eta_M =\\sum_{(i,k)\\in\\nbc_2({\\mathsf{d}\\mathcal{A}})}\ne_{I(i,k)} \\wedge e_k \\wedge \\bar{f}_{i,k}, \n\\end{equation}\nwhere \n$I(i,k)=\\set{j \\mid \\ell_j \\supset \\ell_i \\cap \\ell_k,\\ 1\\le j \\le n}$ \nand $e_J=\\sum_{j\\in J} e_j$. \n\nThe explicit computations described in \\eqref{eq:pi1 intro} \nand \\eqref{eq:eta intro} facilitate analysis of two Lie algebras \nattached to our space $M$: the graded Lie algebra $\\gr(G)$ \nassociated to the lower central series of $G$, and the holonomy \nLie algebra ${\\mathfrak{h}}(\\db{A})$ arising from the multiplication map \n$\\db{A}^1\\otimes \\db{A}^1\\to \\db{A}^2$. For the complement \n$X$, the corresponding Lie algebras are isomorphic \nover the rationals, as shown by Kohno \\cite{K}. For the \nboundary manifold, though, such an isomorphism no longer holds, \nas we illustrate by a concrete example in \\fullref{sect:formal}. \nThis indicates that the manifold $M$, unlike the complement \n$X$, need not be formal, in the sense of Sullivan \\cite{Su77}. \n\n\\subsection{Jumping loci and formality}\n\nThe non-formality phenomenon identified above is fully \ninvestigated in \\fullref{sect:cjl} and \\fullref{sect:formal} by \nmeans of two types of varieties attached to $M$: the \ncharacteristic varieties $V^1_d(M)$ and the resonance varieties \n$\\mathcal{R}^1_d(M)$. 
Our calculation of $\\Delta(G)$ recorded in \n\\eqref{eq:delta intro} enables us to give a complete description \nof the first characteristic variety of $M$, the set of all characters \n$\\phi \\in \\Hom(G,\\C^*)$ for which the corresponding local system \ncohomology group $H^1(M;\\C_\\phi)$ is non-trivial:\n\\begin{equation}\n\\label{eq:v1 intro}\nV^1_1(M) = \\bigcup_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}}),m_v \\ge 3} \\{t_v-1=0\\}.\n\\end{equation}\nThe resonance varieties of $M$ are the analogous jumping loci \nfor the cohomology ring $H^*(M;\\C)$. Unlike the resonance \nvarieties of the complement $X$, the varieties $\\mathcal{R}^1_d(M)$, \nfor $d$ sufficiently large, may have non-linear components. \nNevertheless, the first resonance variety $\\mathcal{R}^1_1(M)$ is \nvery simple to describe: with a few exceptions, it is equal to the \nambient space, $H^1(M;\\C)$. Comparing the tangent cone \nto $V^1_1(M)$ at the identity to $\\mathcal{R}^1_1(M)$, and making use \nof a recent result of Dimca, Papadima, and Suciu \\cite{DPS}, \nwe conclude that the boundary manifold of a line arrangement \n$\\mathcal{A}$ is formal precisely when $\\mathcal{A}$ is a pencil or a near-pencil. \n\n\\section{Boundary manifolds of line arrangements}\n\\label{sec:bdry}\nLet $\\mathcal{A}=\\set{\\ell_0,\\dots,\\ell_n}$ be an arrangement of lines \nin ${\\mathbb{CP}}^2$. The boundary manifold of $\\mathcal{A}$ may be realized \nas the boundary of a regular neighborhood of the curve \n$C=\\bigcup_{i=0}^n \\ell_i$ in ${\\mathbb{CP}}^2$. In this section, \nwe record a number of known results regarding this manifold.\n\n\\subsection{The boundary manifold}\n\\label{subsec:bdry nbhd}\n\nChoose homogeneous coordinates $\\b{x}=(x_0 \\colon x_1 \n\\colon x_2)$ on ${\\mathbb{CP}}^2$. For each $i$, $0\\le i \\le n$, \nlet $f_i = f_i(x_0,x_1,x_2)$ be a linear form which \nvanishes on the line $\\ell_i$ of $\\mathcal{A}$. 
Then $Q=Q(\\mathcal{A})=\n\\prod_{i=0}^n f_i$ is a homogeneous polynomial of degree $n+1$, \nwith zero locus $C$. The {\\em complement} of $\\mathcal{A}$ is the open \nmanifold $X=X(\\mathcal{A})={\\mathbb{CP}}^2 \\setminus C$.\n\nA closed, regular neighborhood $N$ \nof $C$ may be constructed as follows. Define \n$\\phi\\colon{\\mathbb{CP}}^2 \\to \\R$ by $\\phi(\\b{x}) = \n\\abs{Q(\\b{x})}^2\\!\/\\, \\norm{\\b{x}}^{2(n+1)}$, and \nlet $N = \\phi^{-1}([0,\\delta])$ for $\\delta>0$ \nsufficiently small. Alternatively, triangulate \n${\\mathbb{CP}}^2$ with $C$ as a subcomplex, and take $N$ \nto be the closed star of $C$ in the second barycentric \nsubdivision. As shown by Durfee \\cite{Durfee} in greater \ngenerality, these approaches yield isotopic neighborhoods, \nindependent of the choices made in the respective constructions.\nThe \\emph{boundary manifold} of $\\mathcal{A}$ is the \nboundary of such a regular neighborhood:\n\\begin{equation}\n\\label{eq:bdry reg nbhd}\nM=M(\\mathcal{A})=\\partial N.\n\\end{equation}\nThis compact, connected, orientable $3$--manifold will \nbe our main object of study. We start with a couple of simple \nexamples.\n\n\\begin{example} \n\\label{ex:boundary pencil}\nLet $\\mathcal{A}$ be a pencil of $n+1$ lines in ${\\mathbb{CP}}^{2}$, \ndefined by the polynomial $Q=x_1^{n+1}-x_2^{n+1}$. \nThe complement $X$ of $\\mathcal{A}$ is diffeomorphic to \n$(\\C \\setminus \\set{n\\ \\text{points}})\\times \\C$, so \nhas the homotopy type of a bouquet of $n$ circles. \nOn the other hand, ${\\mathbb{CP}}^{2}\\setminus N=\n(D^2\\setminus \\{\\text{$n$ disks}\\})\\times D^{2}$; \nhence $M$ is diffeomorphic to the $n$--fold connected \nsum $\\sharp^{n} S^{1}\\times S^{2}$. \n\\end{example} \n\n\\begin{example} \n\\label{ex:boundary near-pencil}\nLet $\\mathcal{A}$ be a near-pencil of $n+1$ lines in ${\\mathbb{CP}}^{2}$, \ndefined by the polynomial $Q=x_0(x_1^n-x_2^n)$. 
\nIn this case, $M=S^1\\times\\Sigma_{n-1}$, \nwhere $\\Sigma_g=\\sharp^{g} S^1\\times S^1$ \ndenotes the orientable surface of genus $g$, \nsee \\cite{CS06} and \\fullref{example:near-pencil pres}. \n\\end{example} \n\n\\subsection{Blowing up dense edges}\n\\label{subsec:blow up}\n\nA third construction, which sheds light on the structure \nof $M$ as a $3$--manifold, may also be used to obtain the \ntopological type of the boundary manifold. This involves \nblowing up (certain) singular points of $C$. Before \ndescribing it, we establish some notation.\n\nAn edge of $\\mathcal{A}$ is a non-empty intersection of lines of $\\mathcal{A}$. \nAn edge $F$ is said to be {\\em dense} if the subarrangement \n$\\mathcal{A}_F=\\{\\ell_j \\in \\mathcal{A} \\mid F \\subseteq \\ell_j\\}$ \nof lines containing $F$ is not a product arrangement. \nHence, the dense edges are the lines of $\\mathcal{A}$, and the \nintersection points $\\ell_{j_1} \\cap \\ldots \\cap \\ell_{j_k}$ \nof multiplicity $k \\ge 3$. Denote the set of dense edges \nof $\\mathcal{A}$ by ${\\sf D}(\\mathcal{A})$, and let $F_1,\\dots,F_r$ be the \n$0$--dimensional dense edges. We will occasionally \ndenote the dense edge \n$\\bigcap_{j \\in J} \\ell_j$ by $F_J$.\n\nBlowing up ${\\mathbb{CP}}^2$ at each $0$--dimensional dense edge \nof $\\mathcal{A}$, we obtain an arrangement \n$\\tilde\\mathcal{A}=\\{L_i\\}_{i=0}^{n+r}$ in $\\widetilde{{\\mathbb{CP}}}{}^2$ \nconsisting of the proper transforms $L_i=\\tilde\\ell_i$, \n$0\\le i \\le n$, of the lines of $\\mathcal{A}$, and exceptional \nlines $L_{n+j}=\\tilde{F_j}$, $1\\le j\\le r$, arising \nfrom the blow-ups. \n\nBy construction, the curve \n$\\tilde{C}=\\bigcup_{i=0}^{n+r} L_i$ in $\\widetilde{{\\mathbb{CP}}}{}^2$ \nis a divisor with normal crossings. Let $U_i$ be a tubular \nneighborhood of $L_i$ in $\\widetilde{{\\mathbb{CP}}}{}^2$. For \nsufficiently small neighborhoods, we have \n$U_i \\cap U_j=\\emptyset$ if $L_i \\cap L_j=\\emptyset$. 
\nThen, rounding corners, $N(\\tilde C) = \\bigcup_{i=0}^{n+r} U_i$ \nis a regular neighborhood of $\\tilde C$ in $\\widetilde{{\\mathbb{CP}}}{}^2$. \nContracting the exceptional lines of $\\tilde\\mathcal{A}$ gives rise to \na homeomorphism $M \\cong \\partial{N}(\\tilde C)$.\n\n\\subsection{Graph manifold structure}\n\\label{subsec:graph manifold}\n\nThis last construction realizes the boundary manifold $M$ \nof $\\mathcal{A}$ as a {\\em graph manifold}, in the sense of \nWaldhausen \\cite{Wa1,Wa2}. \nThe underlying graph ${\\Gamma_{\\!\\!\\mathcal{A}}}$ may be described as follows. \nThe vertex set $\\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})$ is in one-to-one correspondence \nwith the dense edges of $\\mathcal{A}$ (that is, the lines of $\\tilde\\mathcal{A}$). \nLabel the vertices of \n${\\Gamma_{\\!\\!\\mathcal{A}}}$ by the relevant subsets of $\\{0,1,\\dots,n\\}$: \nthe vertex corresponding to $\\ell_i$ is labeled $v_i$, \nand, if $F_J$ is a $0$--dimensional dense edge (that is, \nan exceptional line in $\\tilde{\\mathcal{A}}$), label the \ncorresponding vertex $v_J$. \nIf $\\ell_i$ and $\\ell_j$ meet in a double point of $\\mathcal{A}$, we \nsay that $\\ell_i$ and $\\ell_j$ are transverse, and (sometimes) \nwrite $\\ell_i\\pitchfork\\ell_j$. \nThe graph ${\\Gamma_{\\!\\!\\mathcal{A}}}$ has an \nedge $e_{i,j}$ from $v_i$ to $v_j$, $i:<0mm,14mm>::}\n[]*D(3){v_{123}}*-{\\blacklozenge}\n(\n-@{--}[dl]*R(2){v_1}*{\\text{\\:\\:\\circle*{0.35}}}(-@{--}[dr]*U(3){v_0}*{\\text{\\:\\:\\:\\circle*{0.35}}})\n,-^(0.6){}[d]*R(2){v_2}*{\\text{\\:\\:\\:\\circle*{0.35}}}(-@{--}[d])\n,-^{}[dr]*L(2){v_3}*{\\text{\\:\\:\\circle*{0.35}}}(-@{--}[dl])\n)\n}\n\\end{picture}\n\\end{minipage}\n}\n\\caption{A near-pencil of $4$ lines and \nits associated graph $\\Gamma$ (with maximal \ntree $\\mathcal{T}$ in dashed lines)}\n\\label{fig:nearpencil}\n\\end{figure}\n\nLet $m_v$ denote the multiplicity (that is, the degree) of the vertex $v$ \nof ${\\Gamma_{\\!\\!\\mathcal{A}}}$. 
Note that, if $v$ corresponds to the line $L_i$ of \n$\\tilde\\mathcal{A}$, then $m_v$ is given by the number of lines \n$L_j \\in \\tilde\\mathcal{A} \\setminus\\{L_i\\}$ which intersect $L_i$. \nThe graph manifold structure of the boundary manifold \n$M=\\partial{N}(\\tilde C)$ may be described as follows. \nIf $v\\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})$ corresponds to $L_i \\in \\tilde\\mathcal{A}$, \nthen the vertex manifold, $M_v$, is given by\n\\begin{equation} \n\\label{eq:vertex manifold}\nM_v =\\partial U_i \\setminus \\{\\Int(U_j \\cap \\partial U_i) \n\\mid L_j \\cap L_i \\neq\\emptyset \\}\n\\cong S^1 \\times \\Bigl( {\\mathbb{CP}}^1 \\setminus \n\\bigcup_{j=1}^{m_v} B_j\\Bigr), \n\\end{equation}\nwhere $\\Int(X)$ denotes the interior of $X$, and the \n$B_j$ are open, disjoint disks. Note that the boundary \nof $M_v$ is a disjoint union of $m_v$ copies of the torus \n$S^1 \\times S^1$. \nThe boundary manifold $M$ is obtained by gluing together \nthese vertex manifolds along their common boundaries by \nstandard longitude-to-meridian orientation-preserving \nattaching maps.\n\nGraph manifolds are often aspherical. As noted in \n\\fullref{ex:boundary pencil}, if $\\mathcal{A}$ is a pencil, \nthen the boundary manifold of $\\mathcal{A}$ is a connected sum \nof $S^1\\times S^2$'s, hence fails to be a $K(\\pi,1)$--space. \nPencils are the only line arrangements for which this \nfailure occurs.\n\n\\begin{prop}[Cohen and Suciu \\cite{CS06}]\n\\label{prop:aspherical} \nLet $\\mathcal{A}$ be a line arrangement in ${\\mathbb{CP}}^2$. The boundary manifold \n$M=M(\\mathcal{A})$ is aspherical if and only if $\\mathcal{A}$ is essential, that is, not a pencil. 
\n\\end{prop}\n\n\\section{Fundamental group of the boundary}\n\\label{sec:pi1}\n\nUsing the graph manifold structure described in the previous \nsection, and a method due to Hirzebruch \\cite{Hir}, \nWestlund \\cite{We} obtained a presentation for the fundamental \ngroup of the boundary manifold of a projective line arrangement. \nIn this section, we recall this presentation, and use it to obtain \na minimal presentation.\n\n\\subsection{The group of a weighted graph}\n\\label{subsec:group weighted graph}\nLet $\\Gamma$ be a loopless graph with $N+1$ vertices. \nIdentify the vertex set of $\\Gamma$ with $ \\set{0,1,\\dots,N}$, \nand assume that there is a weight $w_i\\in\\Z$ given for each \nvertex. Identify the edge set $\\mathcal{E}$ of $\\Gamma$ with a \nsubset of $\\set{(i,j)\\mid 0\\le ij=\\min J_l$. Note that if\n$J_l \\neq J$, then the word $x_{J_l}$ does not involve the generator\n$x_0$. Additionally, we have the relator $R_{j_p}$, \nwhich may be expressed as\n\\[\nR_{j_p}=x_{J} \\cdot \\prod_{J_l\\neq J} x_{J_l}^{u_{j_p,J_l}} \\cdot \n\\prod_{i:<0mm,13mm>::}\n[]*D(2.5){v_1}*{\\,\\text{\\:\\:\\circle*{0.35}}}\n(\n-|{\\gamma_{1,2}}[ddr]*L(1.8){v_2}*{\\text{\\:\\circle*{0.35}}}(-|{\\gamma_{2,3}}[ll]),\n-|{\\gamma_{1,3}}[ddl]*R(2){v_3}*{\\text{\\:\\:\\circle*{0.35}}},\n-@{--}[d]*U(3.3){v_0}*{\\,\\text{\\:\\:\\circle*{0.35}}}\n(-@{--}[dr],-@{--}[dl])\n)\n}\n\\end{picture}\n\\end{minipage}\n}\n\\caption{A general position arrangement and \nits associated graph}\n\\label{fig:generic}\n\\end{figure}\n\nUsing the maximal tree $\\mathcal{T}=\\set{e_{0,i}\\mid 1\\le i\\le n}$ \n(indicated by dashed edges in \\fullref{fig:generic}),\n\\fullref{prop:THEpres} yields a presentation for $G(\\mathcal{A})$\nwith generators $x_i$ ($1\\le i \\le n$) and $\\gamma_{i,j}$\n($1\\le i1$. Since $G$ is \nabelian, the automorphisms $\\phi(t_i) \\in \\GL_k(\\C)$, \n$1\\le i \\le n$, all commute. Consequently, they have \na common eigenvector, say $v$. 
Let $\\lambda_i$ be the \neigenvalue of $\\phi(t_i)$ with eigenvector $v$, and let \n$\\{w_1,\\dots,w_{k-1}\\}$ be a basis for $\\langle v \\rangle^\\perp$. \nWith respect to the basis $\\{v,w_1,\\dots,w_{k-1}\\}$ for $\\C^k$, \nthe matrix $A_i$ of $\\phi(t_i)$ is of the form\n\\[\nA_i = \\begin{pmatrix} \\lambda_i & * \\\\ 0 & \\bar{A}_i \\end{pmatrix}\n\\]\nwhere $\\bar{A}_i$ is an invertible $(k-1)\\times (k-1)$ matrix. \nDefine representations $\\phi'\\colon G \\to \\C^*$ and \n$\\phi''\\colon G \\to \\GL_{k-1}(\\C)$ by $\\phi'(t_i) = \n\\lambda_i$ and $\\phi''(t_i) = \\bar{A}_i$.\nThen we have a short exact sequence of $G$--modules\n\\[\n\\disablesubscriptcorrection\\xysavmatrix{\n0 \\ar[r] & \\Lambda^1_{\\phi'} \\ar[r] & \\Lambda^k_\\phi \\ar[r] \n& \\Lambda^{k-1}_{\\phi''} \\ar[r] & 0\n},\n\\]\nand a corresponding long exact sequence in homology\n\\[\n\\disablesubscriptcorrection\\xysavmatrix{\n\\dots\\ar[r] & H_i(G;\\Lambda^1_{\\phi'}) \\ar[r] &H_i(G;\\Lambda^k_{\\phi}) \n\\ar[r] & H_i(G;\\Lambda^{k-1}_{\\phi''})\\ar[r] & \\dots\n}.\n\\]\nUsing this sequence, the case $k=1$, and the inductive hypothesis, \nwe conclude that $H_i(G;\\Lambda^k_\\phi)=0$ for $i \\ge 1$, and \nthat $\\ord H_0(G;\\Lambda^k_\\phi) = 1$.\n\\end{proof}\n\nLet $\\Gamma$ be a connected, directed graph, \nand let $\\mathcal{V}=\\mathcal{V}(\\Gamma)$ and $\\mathcal{E}=\\mathcal{E}(\\Gamma)$ \ndenote the vertex and edge sets of $\\Gamma$. A graph of groups is \nsuch a graph, together with vertex groups $\\{G_v \\mid v \\in \\mathcal{V}\\}$, \nedge groups $\\{G_e \\mid e \\in \\mathcal{E}\\}$, and monomorphisms \n$\\theta_0\\colon G_e \\to G_v$ and $\\theta_1\\colon G_e \\to G_w$ \nfor each directed edge $e=(v,w)$. Choose a maximal tree $T$ \nfor $\\Gamma$. 
The fundamental group $G=G(\\Gamma)$ \n(relative to $T$) is the group generated by the vertex groups \n$G_v$ and the edges $e$ of $\\Gamma$ not in $T$, with the \nadditional relations\n$e\\cdot \\theta_1(x) = \\theta_0(x) \\cdot e$, for $x\\in G_e$ if \n$e \\in \\Gamma\\setminus T$, and \n$\\theta_1(y) = \\theta_0(y)$, for $y\\in G_e$ if $e \\in T$.\n\n\\begin{thm} \n\\label{thm:graph of groups}\nLet $(\\Gamma, \\{G_e\\}_{e\\in\\mathcal{E}(\\Gamma)}, \n\\{G_v\\}_{v\\in\\mathcal{V}(\\Gamma)})$ be a graph of groups, \nwith fundamental group $G$, vertex groups of type \n$FL$, and free abelian edge groups. Assume that the inclusions \n$G_e \\hookrightarrow G$ induce monomorphisms in homology. \nIf $\\phi \\colon G \\to \\GL_k(\\C)$ is a representation, then \n\\begin{romenum}\n\\item \\label{gg1}\n$H_i(G;\\Lambda^k_\\phi) = \\bigoplus_{v\\in \\mathcal{V}} \nH_i(G_v;\\Lambda^k_\\phi)$ for $i \\ge 2$, and\n\\item \\label{gg2}\n$\\ord\\bigl( H_1(G;\\Lambda^k_\\phi) \\bigr) = \n\\ord\\bigl( \\bigoplus_{v\\in \\mathcal{V}} H_1(G_v;\\Lambda^k_\\phi)\\bigr)$.\n\\end{romenum}\n\\end{thm}\n\n\\begin{proof}\nFor simplicity, we will suppress the coefficient module \n$\\Lambda^k_\\phi$ for the duration of the proof. \nGiven a graph of groups, there is a Mayer--Vietoris sequence\n\\begin{equation*}\n\\xymatrixcolsep{14pt}\n\\disablesubscriptcorrection\\xysavmatrix{\n\\dots\\! \\ar[r]& \\bigoplus_{e \\in \\mathcal{E}} H_i(G_e) \n\\ar[r]& \\bigoplus_{v \\in \\mathcal{V}} H_i(G_v)\n\\ar[r]& H_i(G) \n\\ar[r]^(.34){\\partial}& \\bigoplus_{e \\in \\mathcal{E}} \nH_{i-1}(G_e)\n\\ar[r]& \\!\\dots\n}\n\\end{equation*}\nsee Brown \\cite[Section VII.9]{brown}. Since the edge groups are free \nabelian and the inclusions $G_e \\hookrightarrow G$ induce \nmonomorphisms in homology, we may apply \n\\fullref{lem:free abelian} to conclude that \n$H_i(G_e)=0$ for all $i \\ge 1$. \nAssertion \\eqref{gg1} follows. 
\n\n\\fullref{lem:free abelian} also implies that\n$\\ord\\bigl(H_0(G_e)\\bigr)=1$, \nfor each $e \\in \\mathcal{E}$. Consequently, \n$\\ord\\bigl(\\bigoplus_{e\\in\\mathcal{E}}H_0(G_e)\\bigr)=1$. \nThe above Mayer--Vietoris sequence reduces to\n\\[\n\\disablesubscriptcorrection\\xysavmatrix{\n0 \\ar[r] &\\bigoplus_{v \\in \\mathcal{V}} H_1(G_v)\n\\ar[r] & H_1(G) \\ar[r]^(.38){\\partial} & \n\\bigoplus_{e \\in \\mathcal{E}} H_{0}(G_e) \\ar[r] & \\dots\n}.\n\\]\n{F}rom this, we obtain a short exact sequence\n\\[\n\\disablesubscriptcorrection\\xysavmatrix{\n0 \\ar[r] & \\bigoplus_{v \\in \\mathcal{V}} H_1(G_v)\n\\ar[r] & H_1(G) \\ar[r]^(.5){\\partial} &\n\\Image(\\partial) \\ar[r] & 0\n}.\n\\]\nSince $\\Image(\\partial)$ is a submodule of $\\bigoplus_{e \\in \\mathcal{E}} \nH_{0}(G_e)$, and the latter has order $1$, we have \n$\\ord\\bigl(\\Image(\\partial)\\bigr)=1$ as well. \nAssertion \\eqref{gg2} follows. \n\\end{proof}\n\n\\section{Alexander polynomials of line arrangements} \n\\label{sec:alex poly arr}\nLet $\\mathcal{A}=\\{\\ell_0,\\dots ,\\ell_n\\}$ be an arrangement of \n$n+1$ lines in ${\\mathbb{CP}}^2$, with boundary manifold $M$. \nSince $M$ is a graph manifold, the fundamental group \n$G=\\pi_1(M)$ is the fundamental group of a graph of \ngroups. Recall from \\fullref{sec:bdry} that, in the \ngraph manifold structure, the vertex manifolds are of \nthe form $M_v \\cong S^1 \\times \\bigl( {\\mathbb{CP}}^1 \\setminus \n\\bigcup_{j=1}^{m} B_j\\bigr)$, where the $B_j$ are disjoint \ndisks and $m$ is the multiplicity (degree) of the vertex \n$v$ of ${\\Gamma_{\\!\\!\\mathcal{A}}}$, and these vertex manifolds are glued \ntogether along tori. Consequently, the vertex groups \nare of the form $\\Z \\times F_{m-1}$, and the edge groups \nare free abelian of rank $2$. \n\nThe edge groups are generated by meridian loops \nabout the lines $L_i$ of $\\tilde\\mathcal{A}$ in $\\widetilde{{\\mathbb{CP}}}{}^2$. 
\nIn terms of the generators $x_i$ of $G$, these generators \nare of the form $x_i^y$ or $x_{i_1}^{y_1}\\ldots x_{i_k}^{y_k}$ \nif $L_i$ is the proper transform of $\\ell_i\\in\\mathcal{A}$ or $L_i$ is \nthe exceptional line arising from blowing up the dense edge \n$F_I$ of $\\mathcal{A}$, where $I=\\set{i_1,\\dots,i_k}$.\nBy \\eqref{eq:meridian product}, $x_0 x_1 \\ldots x_n=1$ \nin $G$. This fact may be used to check that the inclusions \nof the edge groups in $G$ induce monomorphisms in homology. \nTherefore, \\fullref{thm:graph of groups} may be applied \nto calculate twisted Alexander polynomials of $G$. We first \nrecord a number of preliminary facts.\n\n\\begin{lem} \n\\label{lem:hopf alex}\nLet $G = \\Z \\times F_{m-1}$, and let $\\phi\\colon G \\to \\GL_k(\\C)$ \nbe a representation. Then the twisted Alexander polynomial \n$\\Delta^\\phi_1(G)$ is given by\n\\[\n\\Delta^\\phi_1(G) = \\bigl[ p(A,t) \\bigr]^{m-2},\n\\]\nwhere $t$ is the image of a generator $z$ of the center $\\Z$ \nof $G$ under the abelianization map, and $p(A,t)$ is the \ncharacteristic polynomial of the automorphism $A=\\phi(z)$ \nin the variable $t$. In particular, the classical Alexander \npolynomial is $\\Delta(G)=(t-1)^{m-2}$.\n\\end{lem}\n\\begin{proof}\nWrite $G=\\Z\\times F_{m-1} = \\langle z,y_1,\\dots,y_{m-1} \\mid \n[z,y_1], \\dots, [z,y_{m-1}]\\rangle$. 
Applying the Fox calculus \nto this presentation yields a free $\\Z{G}$--resolution of $\\Z$, \n\\[\n\\disablesubscriptcorrection\\xysavmatrix{\n(\\Z{G})^{m-1} \\ar^{\\partial_2}[r]& (\\Z{G})^m \\ar^(.55){\\partial_1}[r] \n& \\Z{G} \\ar^{\\epsilon}[r] & \\Z \\ar[r] & 0\n},\n\\]\nwhere $\\epsilon\\colon \\Z{G} \\to \\Z$ is the augmentation map, and \nthe matrices of $\\partial_1$ and $\\partial_2$ are given by \n$[\\partial_1]=\\begin{pmatrix} z-1 & y_1-1 & \\cdots \n& y_{m-1}-1\\end{pmatrix}^\\top$ and\n\\[\n[\\partial_2] = \\begin{pmatrix}\n1-y_1 & z-1 & 0 & \\cdots & 0 \\\\\n1-y_2 & 0 & z-1 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n1-y_{m-1} & 0 & 0 & \\cdots & z-1\n\\end{pmatrix}.\n\\]\nA calculation with this resolution yields the result.\n\\end{proof}\n\nLet ${\\Gamma_{\\!\\!\\mathcal{A}}}$ denote the graph underlying the graph manifold \nstructure of the boundary manifold $M$ of the line arrangement \n$\\mathcal{A}=\\{\\ell_i\\}_{i=0}^n$ in ${\\mathbb{CP}}^2$ and the graph \nof groups structure of the fundamental group $G=\\pi_1(M)$. \nFor a vertex $v$ of ${\\Gamma_{\\!\\!\\mathcal{A}}}$ with multiplicity $m_v$, in the \nidentification $G_v = \\Z \\times F_{m_v-1}$ of the vertex \ngroups of $G$, the center $\\Z$ of $G_v$ is generated by \n$z_v$, a meridian loop about the corresponding line \n$L_i$ of $\\tilde\\mathcal{A}$. 
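A quick symbolic check of \\fullref{lem:hopf alex} is possible for small $m$: abelianizing the Fox matrix $[\\partial_2]$ above and taking the gcd of the maximal minors (the generators of the elementary ideal $E_1$) recovers $\\Delta(G)=(t-1)^{m-2}$. The following sketch carries this out (assuming Python with sympy; the function name is ours, not from the text):

```python
# Symbolic check of the Alexander polynomial of G = Z x F_{m-1}:
# abelianize the Fox Jacobian [d_2] of the presentation
# <z, y_1, ..., y_{m-1} | [z, y_i]> and take the gcd of its
# maximal minors (generators of the elementary ideal E_1).
from functools import reduce
import sympy as sp

def classical_alexander(m):
    """Return (Delta, t) for Z x F_{m-1}; expect Delta = (t - 1)**(m - 2)."""
    t = sp.Symbol('t')                    # abelianized central generator z
    s = sp.symbols(f's1:{m}')             # abelianized generators y_1, ..., y_{m-1}
    M = sp.zeros(m - 1, m)
    for i in range(m - 1):
        M[i, 0] = 1 - s[i]                # Fox derivative of [z, y_i] w.r.t. z, abelianized
        M[i, i + 1] = t - 1               # Fox derivative of [z, y_i] w.r.t. y_i, abelianized
    minors = [sp.Matrix.hstack(*[M[:, j] for j in range(m) if j != k]).det()
              for k in range(m)]
    return sp.factor(reduce(sp.gcd, minors)), t

for m in (3, 4, 5):
    delta, t = classical_alexander(m)
    assert sp.expand(delta - (t - 1)**(m - 2)) == 0   # Delta(G) = (t-1)^(m-2)
```

Deleting the first column gives $(t-1)^{m-1}$, and deleting the $k$th gives $\\pm(1-s_k)(t-1)^{m-2}$, so the gcd is $(t-1)^{m-2}$, as in the lemma.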
Denoting the images of the \ngenerators $x_i$ of $G$ under the abelianization \n$\\alpha\\colon G \\to G\/G'$ by $t_i$, there is a \nchoice of generator $z_v$ so that \n\\[\n\\alpha(z_v)=t_v=\\begin{cases}\nt_i & \\text{if $v=v_i$, $0 \\le i \\le n$,}\\\\\nt_{i_1} \\ldots t_{i_k} & \\text{if $v=v_I$, where \n$I=\\set{i_1,\\dots,i_k}$ and $F_I\\in{\\sf D}(\\mathcal{A})$.}\n\\end{cases}\n\\]\nIf $I=\\set{i_1,\\dots,i_k}$, we subsequently write \n$t_I= t_{i_1} \\ldots t_{i_k}$.\n\n\\fullref{thm:graph of groups} and \\fullref{lem:hopf alex} \nyield the following result.\n\n\\begin{thm} \n\\label{thm:alex poly arr}\nLet $\\mathcal{A}$ be an essential line arrangement in ${\\mathbb{CP}}^2$, let ${\\Gamma_{\\!\\!\\mathcal{A}}}$ \nbe the associated graph, and let $G$ be the fundamental \ngroup of the boundary manifold $M$ of $\\mathcal{A}$. If \n$\\phi\\colon G \\to \\GL_k(\\C)$ is a representation, \nthen the twisted Alexander polynomial $\\Delta^\\phi_1(G)$ \nis given~by\n\\[\n\\Delta^\\phi_1(G) = \\prod_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})} \\bigl[ p(A_v,t_v) \\bigr]^{m_v-2}, \n\\]\nwhere $t_v$ is the image of a generator of the center $\\Z$ \nof $G_v$ under the abelianization map, and $p(A_v,t_v)$ is \nthe characteristic polynomial of the automorphism $A_v=\\phi(z_v)$ \nin the variable $t_v$. In particular, the classical Alexander \npolynomial of $G$ is \n\\[\n\\Delta(G) = \\prod_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})} (t_v-1)^{m_v-2}.\n\\]\n\\end{thm}\n\n\\begin{remark} \nBy gluing formulas of Meng and Taubes \\cite{MT} and Turaev \\cite{Tu}, \nwith appropriate identifications, Milnor torsion is multiplicative \nwhen gluing along tori. 
Since Milnor torsion coincides with the \nAlexander polynomial for a $3$--manifold $M$ with $b_1(M)>1$, the\ncalculation of $\\Delta(G)$ in \\fullref{thm:alex poly arr} \nabove may alternatively be obtained using these gluing formulas, \nsee Vidussi \\cite[Lemma~7.4]{Vi}.\n\nThe above formula for $\\Delta(G)$ is also reminiscent of the \nEisenbud--Neumann formula for the Alexander polynomial $\\Delta_L(t)$ \nof a graph (multi)-link $L$, see Eisenbud and Neumann\n\\cite[Theorem~12.1]{EN}. For example, if $L$ is the $n$--component\nHopf link (that is, the singularity link of a pencil of $n\\ge 2$ lines),\nthen $\\Delta_L(t)=(t_1\\ldots t_n-1)^{n-2}$.\n\\end{remark}\n\nRecall from \\eqref{eq:meridian product} that the meridian generators \n$x_i$ of $G$ corresponding to the lines $\\ell_i$ of $\\mathcal{A}$, $0 \\le i \\le n$, \nsatisfy the relation $x_0 x_1 \\ldots x_n=1$. Consequently, \n$t_0 t_1 \\ldots t_n=1$ and the twisted Alexander polynomial \n$\\Delta_1^\\phi(G)$ may be viewed as an element of \n$\\Lambda=\\C[t_1^{\\pm 1},\\dots,t_n^{\\pm 1}]$. In particular, in the \nclassical Alexander polynomial, if $I = \\{0\\} \\cup J$, then \n$t_I - 1 \\doteq t_{[n]\\setminus J} - 1$, since Alexander polynomials \nare defined up to multiplication by units. In what follows, we make \nsubstitutions such as these without comment.\n\nIn light of \\fullref{thm:alex poly arr}, we focus on the classical \nAlexander polynomial for the remainder of this section.\n\n\\begin{example} \n\\label{ex:falk}\nIn \\cite{Fa}, Falk considered a pair of arrangements whose \ncomplements are homotopy equivalent, even though the two \nintersection lattices are not isomorphic. In this example, \nwe analyze the respective boundary manifolds. 
\n\nThe Falk arrangements $\\mathcal{F}_1$ and $\\mathcal{F}_2$ have defining polynomials\n\\begin{align*}\nQ(\\mathcal{F}_1)&=x_0(x_1+x_0)(x_1-x_0)(x_1+x_2)x_2(x_1-x_2) \\\\\n\\text{and}\\qquad\nQ(\\mathcal{F}_2)&=x_0(x_1+x_0)(x_1-x_0)(x_2+x_0)(x_2-x_0)(x_2+x_1-x_0).\n\\end{align*}\nThese arrangements, and the associated graphs, are depicted in \nFigures \\ref{fig:falk1} and~\\ref{fig:falk2}.\n\n\\begin{figure}%\n\\subfigure{%\n\\label{fig:f1}%\n\\begin{minipage}[t]{0.4\\textwidth}\n\\setlength{\\unitlength}{15pt}\n\\begin{picture}(5,5.8)(-3.6,-1.5)\n\\put(2,2){\\oval(6.5,6)[t]}\n\\put(2,2){\\oval(6.5,7)[b]}\n\\put(0,0){\\line(1,1){4}}\n\\put(-1,2){\\line(1,0){6}}\n\\put(0,4){\\line(1,-1){4}}\n\\put(0.5,-0.5){\\line(0,1){5}}\n\\put(3.5,-0.5){\\line(0,1){5}}\n\\put(-1.9,3){\\makebox(0,0){$\\ell_0$}}\n\\put(0.5,-1){\\makebox(0,0){$\\ell_1$}}\n\\put(3.5,-1){\\makebox(0,0){$\\ell_2$}}\n\\put(4.5,-0.4){\\makebox(0,0){$\\ell_3$}}\n\\put(4.6,1.5){\\makebox(0,0){$\\ell_4$}}\n\\put(4.5,4.3){\\makebox(0,0){$\\ell_5$}}\n\\end{picture}\n\\end{minipage}\n}\n\\subfigure{%\n\\label{fig:f1graph}%\n\\begin{minipage}[t]{0.41\\textwidth}\n\\setlength{\\unitlength}{18pt}\n\\begin{picture}(5,5.3)(-0.5,-2.8)\n\\disablesubscriptcorrection\\xysavgraph{!{0;<15mm,0mm>:<0mm,14mm>::}\n!~:{@{-}|@{~}}\n[]*D(3){v_0}*{\\text{\\:\\:\\circle*{0.35}}}\n(\n-@{--}[dr]\n,-@{--}[dl]*R(1.8){v_{012}}*-{\\blacklozenge}\n(-@{--}[dr]*U(3){v_2}*{\\text{\\:\\:\\circle*{0.35}}}(-[r],-[uur],-[ur])\n,-@{--}[r])\n,[d]*D(3){v_1}*{\\text{\\:\\:\\circle*{0.35}}}(-[ur],-[dr])\n,[dr]*D(3){v_4}*-{\\text{\\:\\:\\circle*{0.35}}}\n(\n-@\/^-7pt\/[l]\n,[r]*L(1.8){v_{345}}*-{\\blacklozenge}\n(-@{--}[ul],-[l])\n)\n,-@{--}[r]*D(3){v_3}*-{\\text{\\:\\:\\circle*{0.35}}}\n,-@{--}[ddr]*U(3){v_5}*{\\text{\\:\\:\\circle*{0.35}}}(-[l],-[ur])\n)\n}\n\\end{picture}\n\\end{minipage}\n}\n\\caption{The Falk arrangement $\\mathcal{F}_1$ and \nits associated 
graph}\n\\label{fig:falk1}\n\\end{figure}\n\n\\begin{figure}\n\\subfigure{%\n\\label{fig:f2}%\n\\begin{minipage}[t]{0.4\\textwidth}\n\\setlength{\\unitlength}{15pt}\n\\begin{picture}(5,5.8)(-3.6,-1.5)\n\\put(2,2){\\oval(6.5,6.3)[t]}\n\\put(2,2){\\oval(6.5,7)[b]}\n\\put(-1,0.5){\\line(1,0){6}}\n\\put(-1,3.5){\\line(1,0){6}}\n\\put(0.5,-0.5){\\line(0,1){5.3}}\n\\put(3.5,-0.5){\\line(0,1){5.3}}\n\\put(-0.95,-0.25){\\line(1,1){5}}\n\\put(-1.9,3){\\makebox(0,0){$\\ell_0$}}\n\\put(0.5,-1){\\makebox(0,0){$\\ell_1$}}\n\\put(3.5,-1){\\makebox(0,0){$\\ell_2$}}\n\\put(4.5,1){\\makebox(0,0){$\\ell_3$}}\n\\put(4.5,4){\\makebox(0,0){$\\ell_4$}}\n\\put(2.1,2){\\makebox(0,0){$\\ell_5$}}\n\\end{picture}\n\\end{minipage}\n}\n\\subfigure{%\n\\label{fig:f2graph}%\n\\begin{minipage}[t]{0.41\\textwidth}\n\\setlength{\\unitlength}{18pt}\n\\begin{picture}(5,6)(-0.3,-3.9)\n\\disablesubscriptcorrection\\xysavgraph{!{0;<12mm,0mm>:<0mm,7mm>::}\n!~:{@{-}|@{~}}\n[]*D(3){v_0}*-{\\text{\\:\\:\\circle*{0.35}}}\n(\n-@\/^-1.3pc\/@{--}[ddll]*R(1.8){v_{012}}*-{\\blacklozenge}\n(\n-@{--}[ur]\n,-@{--}[dr]\n)\n,-@\/^1.3pc\/@{--}[ddrr]*L(1.8){v_{034}}*-{\\blacklozenge}\n(\n-@{--}[ul]\n,-@{--}[dl]\n)\n,-@{--}[dddd]*U(3){v_5}*{\\text{\\:\\:\\circle*{0.35}}}\n(\n-[uuul],-[ul],-[uuur],-[ur]\n)\n,[dl]*D(3){v_1}*-{\\text{\\:\\:\\circle*{0.35}}}\n(\n-[rr]\n)\n,[dr]*D(3){v_3}*{\\text{\\:\\:\\circle*{0.35}}}\n(\n-[ddll]\n)\n,[dddl]*U(3){v_2}*-{\\text{\\:\\:\\circle*{0.35}}}\n(\n-[rr]\n)\n,[dddr]*U(3){v_4}*{\\text{\\:\\:\\circle*{0.35}}}\n(\n-[uull]\n)\n)\n}\n\\end{picture}\n\\end{minipage}\n}\n\\caption{The Falk arrangement $\\mathcal{F}_2$ and \nits associated graph}\n\\label{fig:falk2}\n\\end{figure}\n\nBy \\fullref{thm:alex poly arr}, the fundamental groups, \n$G_i=\\pi_1(M(\\mathcal{F}_i))$, of the boundary manifolds of these \narrangements have Alexander polynomials\n\\begin{align}\n\\label{eq:falk alex polys}\n\\Delta_1&=
\n[(t_1{-}1)(t_2{-}1)(t_3{-}1)(t_4{-}1)(t_5{-}1)(t_{[5]}{-}1)(t_{345}{-}1)]^2\n\\\\\n\\qquad\\text{and}\\quad\\Delta_2&=\n[(t_1{-}1)(t_2{-}1)(t_3{-}1)(t_4{-}1)]^2(t_5{-}1)^3(t_{[5]}{-}1)(t_{345}{-}1)\n(t_{125}-1),\\notag\n\\end{align}\nwhere $\\Delta_i=\\Delta(G_i)$. \nSince these polynomials have different numbers of distinct factors, \nthere is no monomial isomorphism of \n$\\Lambda=\\C[t_1^{\\pm 1},\\dots ,t_5^{\\pm 1}]$ taking $\\Delta_1$ \nto $\\Delta_2$. Hence, the groups $G_1$ and $G_2$ are not isomorphic, \nand the boundary manifolds $M(\\mathcal{F}_1)$ and $M(\\mathcal{F}_2)$ are not \nhomotopy equivalent. It follows that the complements of the two \nFalk arrangements are not homeomorphic---a result obtained \npreviously by Jiang and Yau \\cite{JY98} by invoking the \nclassification of Waldhausen graph manifolds.\n\\end{example}\n\nNote that the number of distinct factors in the Alexander polynomial \n$\\Delta(G_2)$ above is equal to the number of vertices in the graph \n$\\Gamma_{\\!{\\mathcal{F}_2}}$, while $\\Delta(G_1)$ has fewer factors than \n$|\\mathcal{V}(\\Gamma_{\\!{\\mathcal{F}_1}})|$. In general, the cardinality of $\\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})$ \nis equal to $|{\\sf D}(\\mathcal{A})|$, the number of dense edges of $\\mathcal{A}$. We \nrecord several families of arrangements for which the Alexander \npolynomial $\\Delta(G)$ is ``degenerate'', that is, the number of distinct factors \nis less than the number of dense edges.\n\n\\begin{example} \n\\label{ex:degenerate} \nLet $\\mathcal{A}$ be a line arrangement in ${\\mathbb{CP}}^2$, with boundary \nmanifold $M$, and let $G=\\pi_1(M)$. If $I=\\{i_1,\\dots,i_k\\}$, \nrecall that $t_I = t_{i_1}\\ldots t_{i_k}$. In particular, write \n$t_{[k]}=t_1\\ldots t_k$ and $t_{[i,j]}=t_i t_{i+1} \\ldots t_{j-1} t_j$. 
\nIf $Q$ is a defining polynomial for $\\mathcal{A}$, \norder the lines of $\\mathcal{A}$ (starting with $0$) as indicated in $Q$.\n\n\\begin{enumerate}\n\\item If $Q = x_1^{n+1}-x_2^{n+1}$, then $\\mathcal{A}$ is a pencil with \n$\\abs{{\\sf D}(\\mathcal{A})}=n+1$ dense edges, and $G=F_{n}$ is a free group \nof rank $n$. Thus, $\\Delta(G)=0$ if $n\\ne 1$, and $\\Delta(G)=1$ if $n=1$. \n\\item If $Q = x_0(x_1^n-x_2^n)$, where $n \\ge 3$, then $\\mathcal{A}$ is a near-pencil with \n$\\abs{{\\sf D}(\\mathcal{A})}=n+2$, while \n$\\Delta(G)= (t_{[n]} - 1)^{n-2}$ has a single (distinct) factor.\n\\item \\label{item:3}\nIf $Q=x_0(x_0^m-x_1^m)(x_0^n-x_2^n)$, where $m,n\\ge 3$, then $\\abs{{\\sf D}(\\mathcal{A})}=m+n+3$.\nWriting $J=[m+1,m+n]$, \n$\\Delta(G)$ is given by\n\\[\n[(t_1-1)\\ldots (t_m-1)(t_{[m]}-1)]^{n-1}\n[(t_{m+1}-1)\\ldots (t_{m+n}-1)(t_{J}-1)]^{m-1}. \n\\]\n\\item \\label{item:4}\nIf $Q=x_0(x_0^m-x_2^m)(x_1^n-x_2^n)$, where $m,n\\ge 3$, then $\\abs{{\\sf D}(\\mathcal{A})}=m+n+3$. \nWriting $J=[m+1,m+n]$ and $k=m+n-3$, \n$\\Delta(G)$ is given by\n\\[\n[(t_1-1)\\ldots (t_m-1)(t_{[m+n]}-1)]^{n-1}[(t_{m+1}-1)\\ldots \n(t_{m+n}-1)]^m (t_{J}-1)^{k}.\n\\]\nNote that, after a change of coordinates, the Falk arrangement \n$\\mathcal{F}_1$ is of this form.\n\\end{enumerate}\n\\end{example}\n\nThe arrangements recorded in \\fullref{ex:degenerate} \\eqref{item:3} \nand \\eqref{item:4} have the property that there are two $0$--dimensional \ndense edges which exhaust the lines of the arrangement. That is, there \nare edges $F=\\bigcap_{i\\in I} \\ell_i$ and $F'=\\bigcap_{i\\in I'} \\ell_i$ \nso that $\\mathcal{A}=\\{\\ell_i \\mid i \\in I \\cup I'\\}$. We say $F$ and $F'$ cover $\\mathcal{A}$. \nThis condition insures that the Alexander polynomial is degenerate.\n\n\\begin{prop} \n\\label{prop:degenerate}\nLet $\\mathcal{A}$ be an arrangement of $n+1$ lines in ${\\mathbb{CP}}^2$ \nthat is not a pencil or a near-pencil. 
If $\\mathcal{A}$ has two \n$0$--dimensional dense edges which cover $\\mathcal{A}$, then \nthe number of distinct factors in the Alexander polynomial \nof the boundary manifold of $\\mathcal{A}$ is $\\abs{{\\sf D}(\\mathcal{A})}-1$. Otherwise, \nthe number of distinct factors is $\\abs{{\\sf D}(\\mathcal{A})}$.\n\\end{prop}\n\n\\begin{proof}\nIf $\\mathcal{A}$ satisfies the hypotheses of the proposition, it is \nreadily checked that, up to a coordinate change, $\\mathcal{A}$ \nis one of the arrangements recorded in \n\\fullref{ex:degenerate} \\eqref{item:3} and \\eqref{item:4}. \nSo assume that these hypotheses do not hold.\n\nIf $\\mathcal{A}$ has no $0$--dimensional dense edges, then $\\mathcal{A}$ \nis a general position arrangement. Since $\\mathcal{A}$ is, by \nassumption, not a near-pencil, the cardinality of $\\mathcal{A}$ \nis at least $4$, that is, $n \\ge 3$. In this instance, the \nAlexander polynomial of the boundary manifold,\n\\[\n\\Delta(G)=[(t_1-1) \\ldots (t_n-1) (t_{[n]}-1)]^{n-2},\n\\]\nhas $n+1=\\abs{{\\sf D}(\\mathcal{A})}$ factors.\n\nSuppose $\\mathcal{A}$ has one $0$--dimensional dense edge. \nSince $\\mathcal{A}$ is not a pencil or near pencil, there are at \nleast two lines of $\\mathcal{A}$ which do not contain the dense \nedge. Write $\\mathcal{A}=\\{\\ell_0,\\ell_1,\\dots,\\ell_n\\}$, where \n$\\bigcap_{i=1}^k \\ell_i$, $k \\ge 3$, is the unique \n$0$--dimensional dense edge. Since $\\mathcal{A}$ has a \nsingle $0$--dimensional dense edge, the subarrangement \n$\\{\\ell_0,\\ell_{k+1},\\dots,\\ell_n\\}$ is in general position. \nBy \\fullref{thm:alex poly arr}, the Alexander \npolynomial of the boundary of $\\mathcal{A}$ is\n\\[\n\\Delta(G)=\\prod_{i=1}^n (t_i-1)^{m_i-2}\\cdot (t_{[n]}-1)^{m_0-2} \n\\cdot (t_{[k]}-1)^{k-2},\n\\]\nand one can check that $m_i \\ge 3$ for each $i$, $0 \\le i \\le n$.\n\nNow consider the case where $\\mathcal{A}$ has two $0$--dimensional \ndense edges, but they do not cover $\\mathcal{A}$. 
Either there is a \nline of $\\mathcal{A}$ containing both dense edges, or not. Assume \nfirst there is no such line. Write $\\mathcal{A}=\\{\\ell_i\\}_{i=0}^n$, and \nassume without loss that the two dense edges are \n$\\bigcap_{i=0}^k \\ell_i$ and $\\bigcap_{i=k+1}^m \\ell_i$, \nwhere $k\\ge 2$, $m-k \\ge 3$, and $m0$ and $t_v=t_1^{q_1}\\ldots t_n^{q_n}$, \nthe Newton polytope $\\mathcal{N}\\bigl[(t_v-1)^{d_v}\\bigr]$ is the convex \nhull of $\\b{0}=(0,\\dots,0)$ and $(d_v q_1,\\dots,d_v q_n)$ in \n$\\R^n$, a line segment. Thus, the Newton polytope $\\mathcal{N}(\\Delta)$ \nis a Minkowski sum of line segments, that is, a zonotope. As such, \nit is determined by the matrix\n\\begin{equation} \n\\label{eq:Zmatrix}\nZ=\\begin{pmatrix} \\b{q}_1 & \\cdots & \\b{q}_j\\end{pmatrix},\n\\end{equation}\nwhere $j$ is the number of vertices $v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})$ for which \n$d_v>0$, and\n$$\\b{q}_i = \\begin{pmatrix} d_v q_1&\\cdots\n &d_v q_n\\end{pmatrix}^\\top$$\nif $t_v=t_1^{q_1}\\ldots t_n^{q_n}$.\n\nNow consider the Newton polytope of the twisted Alexander polynomial $\\Delta^\\phi$, \n\\[\n\\mathcal{N}(\\Delta^\\phi)=\\sum_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})} \n\\mathcal{N}\\bigl[p(A_v,t_v)^{d_v}\\bigr].\n\\] \nSince the characteristic polynomial \n$p(A_v,t_v)$ is monic of degree $k$, the Newton polytope \n$ \\mathcal{N}\\bigl[p(A_v,t_v)^{d_v}\\bigr]$ is the convex hull of $\\b{0}$ and \n$k\\cdot (d_v q_1,\\dots,d_v q_n)$ if $t_v=t_1^{q_1}\\ldots t_n^{q_n}$. \nHence, the Newton polytope $\\mathcal{N}(\\Delta^\\phi)$ is the zonotope determined \nby the matrix $k\\cdot Z$, which is clearly equivalent to $\\mathcal{N}(\\Delta)$.\n\\end{proof}\n\nThe Alexander and Thurston norm balls arise in the context of \nBieri--Neumann--Strebel (BNS) invariants of the group $G=\\pi_1(M)$. 
\nLet \n\\[\n\\mathbb{S}(G)=\\bigl(H^1(G;\\R)\\setminus\\set{\\b{0}}\\bigr)\/\\R^+,\n\\] \nwhere \n$\\R^+$ acts by scalar multiplication, and view points $[\\xi]$ as \nequivalence classes of homomorphisms $G\\to\\R$. For $[\\xi] \\in \\mathbb{S}(G)$, \ndefine a submonoid $G_\\xi$ of $G$ by $G_\\xi=\\set{g\\in G \\mid \\xi(g)\\ge 0}$. \nIf $K$ is a group upon which $G$ acts, with the commutator subgroup $G'$ \nacting by inner automorphisms, the BNS invariant of $G$ and $K$ is the \nset $\\varSigma_{G,K}$ of all elements $[\\xi]\\in\\mathbb{S}(G)$ for which $K$ \nis finitely generated over a finitely generated submonoid of $G_\\xi$. \nThe set $\\varSigma_{G,K}$ is an open subset of the sphere $\\mathbb{S}(G)$.\n\nLet $K=G'$, with $G$ acting by conjugation. When $G=\\pi_1(M)$, \nwhere $M$ is a compact, irreducible, orientable $3$--manifold, \nBieri, Neumann, and Strebel \\cite{BNS} show that the BNS invariant \n$\\varSigma_{G,G'}$ is equal to the projection to $\\mathbb{S}(G)$ of \nthe interiors of the fibered faces of the Thurston norm ball $\\mathbb{B}_T^{\\,}$.\n\nAssume that $H_1(M)$ is torsion-free, and consider the maximal abelian \ncover $M'$ of $M$, with fundamental group $\\pi_1(M')=G'$. The first \nhomology of $M'$, $B=H_1(M') = G'\/G''$, admits the structure of a \nmodule over $\\Z[H]$, where $H=G\/G'$, and is known as the \nAlexander invariant of $M$. Note that the Alexander polynomial \n$\\Delta(G)=\\Delta(M)$ is the order of the Alexander invariant. \nAs shown by Dunfield \\cite{Dun}, the BNS invariant \n$\\varSigma_{G,B}$ is closely related to the Alexander polynomial.\n\n\\begin{thm} \n\\label{thm:BNS & alex poly}\nLet $\\mathcal{A}$ be an essential line arrangement in ${\\mathbb{CP}}^2$, with boundary manifold $M$. \nLet $G$ be the fundamental group of $M$, $B=G'\/G''$ the Alexander \ninvariant, and $\\Delta=\\ord(B)$ the Alexander polynomial. 
\nThen the BNS invariant $\\varSigma_{G,B}$ is equal to the projection \nto $\\mathbb{S}(G)$ of the interiors of the top-dimensional faces of the \nAlexander ball $\\mathbb{B}_A$.\n\\end{thm}\n\n\\begin{proof}\nWrite $\\Delta = \\sum c_i g_i$, where $c_i \\neq 0$ and $g_i \\in H=G\/G'$. \nThe Newton polytope $\\mathcal{N}(\\Delta)$ is the convex hull of the $g_i$ in \n$H_1(M;\\R)$. Call a vertex $g_i$ of $\\mathcal{N}(\\Delta)$ a ``$\\pm 1$ vertex'' \nif the corresponding coefficient $c_i$ is equal to $\\pm 1$. \nFor an arbitrary compact, orientable $3$--manifold $M$ whose boundary, \nif any, is a union of tori, Dunfield \\cite{Dun} proves that the BNS \ninvariant $\\varSigma_{G,B}$ is given by the projection to \n$\\mathbb{S}(G)$ of the interiors of the top-dimensional faces of $\\mathbb{B}_A$ \nwhich correspond to $\\pm 1$ vertices of $\\mathcal{N}(\\Delta)$. \n\nIf $M$ is the boundary manifold of a line arrangement \n$\\mathcal{A}\\subset {\\mathbb{CP}}^2$, then, as shown in the proof of \n\\fullref{thm:same alex ball}, the Newton polytope \n$\\mathcal{N}(\\Delta)$ of the Alexander polynomial is a zonotope. \nSince the factors $(t_v-1)^{m_v-2}$ of the Alexander polynomial $\\Delta$ \nhave leading coefficients and constant terms equal to $\\pm 1$, \n\\emph{every} vertex of the associated zonotope $\\mathcal{N}(\\Delta)$ is\na $\\pm 1$ vertex. The result follows.\n\\end{proof}\n\nLet $\\Delta$ be the Alexander polynomial of the boundary \nmanifold of a line arrangement $\\mathcal{A}\\subset{\\mathbb{CP}}^2$. Recall \nthat the Newton polytope $\\mathcal{N}(\\Delta)$ is determined by the \n$n \\times j$ integer matrix $Z$ given in \\eqref{eq:Zmatrix}, \nwhere $\\abs{\\mathcal{A}}=n+1$ and $j$ is the number of distinct \nfactors in $\\Delta$. The matrix $Z$ also determines a \n``secondary'' arrangement $\\mathcal{S}=\\set{H_i}_{i=1}^j$ of $j$ \nhyperplanes in $\\R^n$, where $H_i$ is the orthogonal \ncomplement of the $i$th column of $Z$. 
The complement \n$\\R^n \\setminus\\bigcup_{i=1}^j H_i$ of the real arrangement \n$\\mathcal{S}$ is a disjoint union of connected open sets known as \nchambers. Let $\\cham(\\mathcal{S})$ be the set of chambers. \nThe number of chambers may be calculated by a well known result \nof Zaslavsky \\cite{Zas}. If $P(\\mathcal{S},t)$ is the Poincar\\'{e} \npolynomial of (the lattice of) $\\mathcal{S}$, then\n\\[\n\\abs{\\cham(\\mathcal{S})} = P(\\mathcal{S},1).\n\\] \nThe number of chambers of the arrangement $\\mathcal{S}$ \ndetermined by the matrix $Z$ is also known to be equal to the \nnumber of vertices of the zonotope $\\mathcal{N}(\\Delta)$ determined by $Z$, \nsee Bj\\\"orner, Las Vergnas, Sturmfels, White and Ziegler \\cite{BLSWZ}.\nHence, we have the following corollary to \\fullref{thm:BNS & alex poly}.\n\n\\begin{cor} \n\\label{cor:BNS count} \nThe BNS invariant $\\varSigma_{G,B}$ has $P(\\mathcal{S},1)$ connected components.\n\\end{cor}\n\n\\begin{example} \n\\label{ex:falk2}\nRecall the Falk arrangements $\\mathcal{F}_1$ and $\\mathcal{F}_2$ from \n\\fullref{ex:falk}. Let $G_i$ be the fundamental \ngroup of the boundary manifold of $\\mathcal{F}_i$, \n$B_i$ the corresponding Alexander invariant, etc. \nThe Alexander polynomials $\\Delta_i=\\Delta(G_i)$ are \nrecorded in \\eqref{eq:falk alex polys}. 
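As a computational check of \\fullref{cor:BNS count} for the two secondary arrangements arising here, one can count chambers directly from the matrices $Z_1$ and $Z_2$ displayed just below. Combining Zaslavsky's theorem with Whitney's formula for the characteristic polynomial, a central arrangement whose hyperplane normals are the columns of $Z$ has $\\sum_{S} (-1)^{\\abs{S}+\\operatorname{rank}(S)}$ chambers, the sum running over all subsets $S$ of columns. A minimal sketch (assuming Python with sympy; not part of the original text):

```python
# Chamber counts for the secondary arrangements S_1 and S_2 via
# Whitney's theorem: for a central arrangement with hyperplane
# normals the columns of Z, the number of chambers equals
# the sum over subsets S of columns of (-1)^(|S| + rank S).
from itertools import combinations
import sympy as sp

def chambers(Z):
    Z = sp.Matrix(Z)
    cols = [Z[:, k] for k in range(Z.cols)]
    total = 0
    for r in range(len(cols) + 1):
        for S in combinations(cols, r):
            rank = sp.Matrix.hstack(*S).rank() if S else 0
            total += (-1) ** (r + rank)
    return total

# The matrices Z_1 and Z_2, copied from the display below.
Z1 = [[2, 0, 0, 0, 0, 2, 0],
      [0, 2, 0, 0, 0, 2, 0],
      [0, 0, 2, 0, 0, 2, 2],
      [0, 0, 0, 2, 0, 2, 2],
      [0, 0, 0, 0, 2, 2, 2]]
Z2 = [[2, 0, 0, 0, 0, 1, 0, 1],
      [0, 2, 0, 0, 0, 1, 0, 1],
      [0, 0, 2, 0, 0, 1, 1, 0],
      [0, 0, 0, 2, 0, 1, 1, 0],
      [0, 0, 0, 0, 3, 1, 1, 1]]

assert chambers(Z1) == 98    # = P(S_1, 1)
assert chambers(Z2) == 152   # = P(S_2, 1)
```

The brute-force sum over subsets is exact (ranks are computed over the rationals) and, for matrices of this size, involves at most $2^8$ terms.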
The zonotopes \n$\\mathcal{N}(\\Delta_1)$ and $\\mathcal{N}(\\Delta_2)$ are determined by the matrices\n\\[\nZ_1=\\begin{pmatrix}\n2 & 0 & 0 & 0 & 0 & 2 & 0\\\\\n0 & 2 & 0 & 0 & 0 & 2 & 0\\\\\n0 & 0 & 2 & 0 & 0 & 2 & 2\\\\\n0 & 0 & 0 & 2 & 0 & 2 & 2\\\\\n0 & 0 & 0 & 0 & 2 & 2 & 2\n\\end{pmatrix}\n,\\quad \nZ_2=\\begin{pmatrix}\n2 & 0 & 0 & 0 & 0 & 1 & 0 & 1\\\\\n0 & 2 & 0 & 0 & 0 & 1 & 0 & 1\\\\\n0 & 0 & 2 & 0 & 0 & 1 & 1 & 0\\\\\n0 & 0 & 0 & 2 & 0 & 1 & 1 & 0\\\\\n0 & 0 & 0 & 0 & 3 & 1 & 1 & 1\n\\end{pmatrix}.\n\\]\nThe Poincar\\'e polynomials of the associated secondary \narrangements $\\mathcal{S}_1$ and $\\mathcal{S}_2$ are\n\\begin{align*}\nP(\\mathcal{S}_1,t)&=1+7t+21t^2+33t^3+27t^4+9t^5 \\\\\n\\text{and}\\qquad P(\\mathcal{S}_2,t)&=1+8t+28t^2+51t^3+47t^4+17t^5.\n\\end{align*}\nConsequently, the BNS invariant $\\varSigma_{G_1,B_1}$ \nhas $P(\\mathcal{S}_1,1)=98$ connected components, while \n$\\varSigma_{G_2,B_2}$ has $P(\\mathcal{S}_2,1)=152$ \nconnected components.\n\\end{example}\n\n\\section{Cohomology ring and holonomy Lie algebra}\n\\label{sect:coho}\n\nAs shown in \\cite{CS06}, the cohomology ring of the boundary \nmanifold $M$ of a hyperplane arrangement has a very \nspecial structure: it is the ``double\" of the cohomology \nring of the complement. For a line arrangement, this \nstructure leads to purely combinatorial descriptions of the \nskew $3$--form encapsulating $H^*(M;\\Z)$, and of the \nholonomy Lie algebra of $M$.\n\n\n\\subsection{The doubling construction}\n\\label{subsec:double}\n\nLet $R$ be a coefficient ring; we will assume either \n$R=\\Z$ or $R=\\mathbb{F}$, a field of characteristic $0$. \nLet $A=\\bigoplus_{k=0}^{m} A^k$ be a graded, \nfinite-dimensional algebra over $R$. Assume \nthat $A$ is graded-commutative, of finite type (that is, each graded \npiece $A^k$ is a finitely generated $R$--module), and connected \n(that is, $A^0=R$). 
Let $b_k=b_k(A)$ denote the rank of $A^k$.\n\nLet $\\bar{A} = \\Hom_{R}(A,R)$ be the dual of the $R$--module \n$A$, with graded pieces $\\bar{A}^{k} = \\Hom_{R}(A^k,R)$. \nThen $\\bar{A}$ is an $A$--bimodule, with left and right \nmultiplication given by $(a\\cdot f) (b)= f(ba)$ and \n$(f \\cdot a) (b) =f(ab)$, respectively. Note that, if \n$a\\in A^k$ and $f\\in \\bar{A}^{j}$, then \n$af, fa\\in \\bar{A}^{j-k}$. \n\nFollowing \\cite{CS06}, we define the \n{\\em (graded) double} of $A$ to be the graded \n$R$--algebra $\\db{A}$ with underlying $R$--module \nstructure the direct sum $A\\oplus \\bar{A}$, \nmultiplication \n\\begin{equation}\n\\label{eq:double mult}\n(a,f)\\cdot (b,g) = (ab,ag+fb),\n\\end{equation}\nfor $a,b\\in A$ and $f,g\\in \\bar{A}$, and \ngrading \n\\begin{equation}\n\\label{eq:double grading}\n\\db{A}^{k}=A^{k} \\oplus \\bar{A}^{2m-1-k}.\n\\end{equation}\n\n\\subsection{Poincar\\'{e} duality}\n\\label{subsec:pd}\n\nLet $A=\\bigoplus_{k=0}^{m} A^k$ be a graded algebra as \nabove. We say $A$ is a {\\em Poincar\\'{e} duality} algebra \n(of formal dimension $m$) if the $R$--module $A^{m}$ is \nfree of rank $1$ and, for each $k$, the pairing \n$A^{k} \\otimes A^{m-k} \\to A^{m}$ given by multiplication \nis non-singular. In particular, each graded piece $A^{k}$ \nmust be a free $R$--module. \n\nGiven a $\\PD_{m}$ algebra $A$, fix a generator $\\omega$ \nfor $A^{m}$. We then have an alternating $m$--form, \n$\\eta_A\\colon A^1 \\wedge \\ldots \\wedge A^1 \\to R$, \ndefined by \n\\begin{equation}\n\\label{eq:eta}\na_1\\ldots a_{m}= \\eta_A(a_1 ,\\dots, a_{m}) \\cdot \\omega.\n\\end{equation}\nIf $A$ is $3$--dimensional, the full multiplicative structure \nof $A$ can be recovered from the form $\\eta_A$ \n(and the generator $\\omega\\in A^{3}$). \n\nThe classical example of a Poincar\\'{e} duality algebra \nis the rational cohomology ring, $H^*(M;\\Q)$, of an \n$m$--dimensional closed, orientable manifold $M$. 
\nAs shown by Sullivan \\cite{Su75}, any rational, alternating \n$3$--form $\\eta$ can be realized as $\\eta=\\eta_{H^*(M;\\Q)}$, \nfor some $3$--manifold $M$. \n\n\\begin{lem}\n\\label{lem:pddouble}\nLet $A=\\bigoplus_{k=0}^{m} A^k$ be a graded, graded \ncommutative, connected, finite-type algebra over $R=\\Z$ or $\\mathbb{F}$. \nAssume $A$ is a free $R$--module, and $m>1$. If $\\db{A}$ is \nthe graded double of $A$, then:\n\\begin{enumerate}\n\\item $\\db{A}$ is a Poincar\\'{e} duality algebra over $R$, of \nformal dimension $2m-1$. \n\\item If $m> 2$, then $\\eta_{\\db{A}}=0$. \n\\item If $m=2$, then for every $a,b,c\\in A^1$ and $f,g,h\\in \\bar{A}^2$,\n\\[\n\\eta_{\\db{A}}( (a,f), (b,g) , (c,h) ) =f(bc)+ g(ca)+h(ab). \n\\] \n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n(1)\\qua The $R$--module $\\db{A}^{2m-1}=\\bar{A}^0$ is isomorphic \nto $R$ via the map $f \\mapsto f(1)$. Take $\\omega=\\bar{1}$ \nas generator of $\\db{A}^{2m-1}$. The pairing \n$\\db{A}^{k} \\otimes \\db{A}^{2m-1-k} \n\\to \\db{A}^{2m-1}$ is non-singular: its adjoint, \n\\[\n\\db{A}^{k} \\to \\Hom_R( \\db{A}^{2m-1-k}, \\db{A}^{2m-1}), \\quad\n(a,f) \\mapsto ((b,g) \\mapsto ag+fb),\n\\]\nis readily seen to be an isomorphism.\n\n(2)\\qua If $m>2$, then $\\db{A}^1=A^1$, and $\\eta_{\\db{A}}$ vanishes, \nsince $A^{2m-1}=0$. \n\n(3)\\qua If $m=2$, then $\\db{A}^1=A^1 \\oplus \\bar{A}^2$, and the expression \nfor $\\eta_{\\db{A}}$ follows immediately from \\eqref{eq:double mult}.\n\\end{proof}\n\n\\subsection{The double of a $2$--dimensional algebra}\n\\label{subsec:2dim double}\n\nIn view of the above Lemma, the most interesting case \nis when $m=2$, so let us analyze it in a bit more detail. \nWrite $A=A^0 \\oplus A^1 \\oplus A^2$, and fix ordered bases, \n$\\{\\alpha_1,\\dots ,\\alpha_{b_1}\\}$ for $A^1$ and \n$\\{\\beta_1,\\dots , \\beta_{b_2}\\}$ for $A^2$. 
\nThe multiplication map, \n$\\mu\\colon A^1 \\otimes A^1 \\to A^2$, is then given by \n\\begin{equation}\n\\label{eq:multiplication}\n\\mu(\\alpha_i , \\alpha_j) =\n\\sum_{k=1}^{b_2}\\mu_{i,j,k}\\, \\beta_k, \n\\end{equation}\nfor some integer coefficients $\\mu_{i,j,k}$ \nsatisfying $\\mu_{j,i,k}=-\\mu_{i,j,k}$. \n\nNow consider the double \n\\[\n\\db{A}= \\db{A}^0 \\oplus \\db{A}^1 \\oplus \n\\db{A}^2 \\oplus \\db{A}^3 =A^0 \\oplus (A^1 \\oplus \\bar A^2) \\oplus \n(A^2 \\oplus \\bar A^1) \\oplus \\bar A^0.\n\\] \nPick dual bases \n$\\{\\bar\\alpha_j\\}_{1\\le j \\le b_1}$ for $\\bar{A}^{1}$ and \n$\\{\\bar\\beta_k\\}_{1\\le k \\le b_2}$ for $\\bar{A}^{2}$. \nThe multiplication map \n$\\dbl{\\mu}\\colon \\db{A}^1 \\otimes \\db{A}^1 \\to \\db{A}^2$ \nrestricts to $\\mu$ on $A^1 \\otimes A^1$, vanishes on \n$\\bar{A}^2\\otimes \\bar{A}^2$, while on $A^1\\otimes \\bar{A}^2$, \nit is given by\n\\begin{equation}\n\\label{eq:multi}\n\\dbl{\\mu}(\\alpha_j ,\\bar \\beta_k) =\n\\sum_{i=1}^{b_1} \\mu_{i,j,k}\\, \\bar \\alpha_i. \n\\end{equation}\nAs a consequence, we see that the multiplication maps \n$\\mu$ and $\\hat{\\mu}$ determine one another. \n\nIn the chosen basis for $\\db{A}^1=A^1 \\oplus \\bar{A}^2$, \nthe form $\\eta_{\\db{A}}\\in \\bigwedge^3 \\db{A}^1$ \ncan be expressed~as\n\\begin{equation}\n\\label{eq:eatmu}\n\\eta_{\\db{A}} =\\sum_{1\\le i< j\\le b_1} \\sum_{k=1}^{b_2} \n\\mu_{i,j,k} \\, \\alpha_i\\wedge \\alpha_j \\wedge \\bar{\\beta}_k.\n\\end{equation}\nThis shows again that the multiplication map $\\dbl{\\mu}$ \ndetermines, and is determined by the $3$--form $\\eta_{\\db{A}}$. \n\n\n\\subsection{The cohomology ring of the boundary}\n\\label{subsec:coho ring bdry}\n\nNow let $\\mathcal{A}=\\{\\ell_i\\}_{i=0}^n$ be a line arrangement in ${\\mathbb{CP}}^2$, \nwith complement $X$, and let $A=H^*(X;\\Z)$ be the integral \nOrlik--Solomon algebra of $\\mathcal{A}$. 
As is well known, \n$A=\\bigoplus_{k=0}^{2} A^k$ is torsion-free, and \ngenerated in degree $1$ by classes $e_1,\\dots ,e_n$ dual \nto the meridians $x_1,\\dots , x_n$ of the decone ${\\mathsf{d}\\mathcal{A}}$. \nChoosing a suitable basis $\\{f_{i,k}\\mid (i,k)\\in \\nbc_2({\\mathsf{d}\\mathcal{A}}) \\}$ \nfor $A^2$, the multiplication map $\\mu\\colon A^1\\wedge A^1 \\to A^2$ \nis given on basis elements $e_i, e_j$ with $ii} [x_j, y_{(i,j)}],\n&&1\\le i \\le n.\n\\end{align*}\n\\end{example}\n\n\n\\section{Cohomology jumping loci}\n\\label{sect:cjl}\n\nIn this section, we discuss the characteristic varieties \nand the resonance varieties of the boundary manifold \nof a line arrangement. \n\n\\subsection{Characteristic varieties}\n\\label{subsect:char var}\n\nLet $X$ be a space having the homotopy type of \na connected, finite-type CW--complex. For simplicity, we will \nassume that the fundamental group $G=\\pi_{1}(X)$ has \ntorsion-free abelianization $H_{1}(G)=\\mathbb{Z}^n$. Consider \nthe character torus $\\Hom(G,\\C^*)\\cong (\\C^*)^{n}$.\nThe {\\em characteristic varieties} of $X$ are the \njumping loci for the cohomology of $X$, \nwith coefficients in rank~$1$ local systems over $\\C$:\n\\begin{equation} \n\\label{eq:charvar}\nV^{k}_{d}(X)=\\{ \\phi \\in \\Hom(G,\\C^*) \\mid \n\\dim H^{k}(X; \\C_{\\phi})\\ge d\\}, \n\\end{equation}\nwhere $\\C_{\\phi}$ denotes the abelian group \n$\\C$, with $\\pi_{1}(X)$--module structure given by the \nrepresentation $\\phi\\colon \\pi_{1}(X)\\to \\C^*$. \nThese loci are sub\\-varieties of the algebraic torus \n$(\\C^*)^{n}$; they depend only on the homotopy \ntype of $X$, up to a monomial isomorphism of the \ncharacter torus. \n\nFor a finitely presented group $G$ (with torsion-free \nabelianization), set $V^{k}_{d}(G):=V^{k}_{d}(K(G,1))$. \nWe will be only interested here in the degree $1$ characteristic \nvarieties. If $G=\\pi_1(X)$ with $X$ a space as above, \nthen clearly $V^{1}_{d}(G)=V^{1}_{d}(X)$. 
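Concretely, these degree-$1$ loci can be computed from a finite presentation via Fox calculus, as recalled in the next paragraph. The following self-contained Python sketch (our own illustration) computes abelianized Fox derivatives, using the trefoil knot group rather than an arrangement group as a toy example; Laurent polynomials are stored as dictionaries mapping exponent vectors to integer coefficients.

```python
from collections import defaultdict

def lp_mul(p, q):
    """Multiply Laurent polynomials stored as {exponent-tuple: coeff}."""
    out = defaultdict(int)
    for ea, ca in p.items():
        for eb, cb in q.items():
            out[tuple(x + y for x, y in zip(ea, eb))] += ca * cb
    return {e: c for e, c in out.items() if c}

def lp_add(p, q, sign=1):
    out = defaultdict(int, p)
    for e, c in q.items():
        out[e] += sign * c
    return {e: c for e, c in out.items() if c}

def gen(i, n, e=1):
    """The monomial t_i^e in n variables."""
    exp = [0] * n
    exp[i] = e
    return {tuple(exp): 1}

def fox_derivative_ab(word, j, n):
    """Abelianized Fox derivative d(word)/d(x_j), using the rules
    d(uv) = du + ab(u) dv and d(x_i^{-1})/d(x_i) = -x_i^{-1}."""
    prefix, total = {tuple([0] * n): 1}, {}
    for i, e in word:
        if e == 1:
            if i == j:
                total = lp_add(total, prefix)
            prefix = lp_mul(prefix, gen(i, n))
        else:
            prefix = lp_mul(prefix, gen(i, n, -1))
            if i == j:
                total = lp_add(total, prefix, sign=-1)
    return total

def specialize(p):
    """Send every t_i to one variable t, returning {total degree: coeff}."""
    out = defaultdict(int)
    for e, c in p.items():
        out[sum(e)] += c
    return {k: v for k, v in out.items() if v}

# Toy example: the trefoil group <x0, x1 | x0 x1 x0 x1^-1 x0^-1 x1^-1>.
relator = [(0, 1), (1, 1), (0, 1), (1, -1), (0, -1), (1, -1)]
alexander_row = [fox_derivative_ab(relator, j, 2) for j in (0, 1)]
```

Specializing both variables to a single $t$ gives $1-t+t^2$ in the first entry of this $1\\times 2$ Alexander matrix, recovering the Alexander polynomial of the trefoil.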
\n\nThe varieties $V^{1}_{d}(G)$ can be computed \nalgorithmically from a finite presentation of the group. \nIf $G$ has generators $x_1,\dots,x_n$ and relations $r_1,\dots,r_s$, let \n$J_G=\begin{pmatrix} \partial r_i \/\partial x_j\end{pmatrix}$ \nbe the corresponding Jacobian matrix of Fox derivatives. \nThe abelianization $J_G^{\ab}$ is the {\em Alexander matrix} \nof $G$, with entries in $\Lambda=\C[t_1^{\pm 1},\dots, t_n^{\pm 1}]$, \nthe coordinate ring of $(\C^*)^n$. Then:\n\begin{equation} \n\label{eq:alex matrix}\nV^{1}_{d}(G) \setminus\{1\}=V (E_d(J_G^{\ab})) \setminus\{1\}.\n\end{equation}\nIn other words, $V^{1}_{d}(G)$ consists of all those characters \n$\phi\in \Hom(G,\C^*)\cong (\C^*)^n$ for which the evaluation \nof $J_G^{\ab}$ at $\phi$ has rank less than $n-d$ (plus, possibly, \nthe identity $1$). \n\n\subsection{Characteristic varieties of line arrangements}\n\label{subsect:char var arr}\n\nLet $\mathcal{A}=\{\ell_0,\dots,\ell_n\}$ be a line arrangement in \n${\mathbb{CP}}^2$. The characteristic varieties of the complement \n$X$ are fairly well understood. It follows from foundational work \nof Arapura \cite{Ar} that $V^1_d(X)$ is a union of subtori \nof the character torus $\Hom(\pi_1(X), \C^*)=(\C^*)^n$, \npossibly translated by roots of unity. Moreover, \ncomponents passing through $1$ admit a \ncompletely combinatorial description. See \cite{CS99} and Libgober and\nYuzvinsky \cite{LY00}. \n\nTurning to the characteristic varieties of the boundary manifold \n$M$, we have the following complete description of \n$V^1_1(M)$. \n\n\begin{thm}\n\label{thm:cv bdry} Let $\mathcal{A}$ be an essential line arrangement \nin ${\mathbb{CP}}^2$, and let $G$ be the fundamental group of \nthe boundary manifold $M$.
\nThen \n\\[\nV^1_1(G) = \\bigcup_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}}),m_v \\ge 3} \\{t_v-1=0\\}.\n\\]\n\\end{thm}\n\n\\begin{proof}\nBy \\fullref{prop:THEpres}, the group $G$ \nadmits a commutator-relators presentation, with equal \nnumber of generators and relations. So the Alexander \nmatrix $J_G^{\\ab}$ is a square matrix, which augments to zero. \nIt follows that the characteristic variety $V_1^1(G)$ is the variety defined by \nthe vanishing of the codimension $1$ minors of $J_G^{\\ab}$. \nThe ideal $I(G)=E_1(J_G^{\\ab})$ of codimension $1$ minors, \nthe Alexander ideal, is given by $I(G)=\\mathfrak{m}^2 \\cdot (\\Delta(G))$, \nwhere $\\mathfrak{m}$ is the maximal ideal of $\\Z H_1(G)$, \nsee McMullen \\cite{Mc}. Consequently, \n\\begin{equation}\n\\label{eq:cv bdry}\nV^1_1(G)=\\{ \\Delta(G) = 0\\}. \n\\end{equation}\nOn the other hand, we know from \\fullref{thm:alex poly arr} \nthat the Alexander polynomial of $G$ is given by \n$\\Delta(G) = \\prod_{v \\in \\mathcal{V}({\\Gamma_{\\!\\!\\mathcal{A}}})} (t_v-1)^{m_v-2}$. \nThe conclusion follows.\n\\end{proof}\n\nBy \\fullref{thm:cv bdry}, $V^1_1(G)$ is the union of an arrangement\nof codimension $1$ subtori in $\\Hom(G,\\C^*)=(\\C^*)^n$, indexed by the\nvertices of the graph ${\\Gamma_{\\!\\!\\mathcal{A}}}$. We do not have an explicit description of\nthe varieties $V^1_d(G)$, for $d>1$.\n\n\\subsection{Resonance varieties}\n\\label{subsec: res var}\n\nLet $A$ be a graded, graded-com\\-mutative, connected, \nfinite-type algebra over $\\C$. 
\nSince $a\\cdot a=0$ for each $a\\in A^1$, \nmultiplication by $a$ defines a \ncochain complex\n\\begin{equation}\n\\label{eq:aomoto}\n\\xymatrixcolsep{22pt}\n(A,a)\\colon \\:\\:\n\\disablesubscriptcorrection\\xysavmatrix{\n0 \\ar[r] &A^0 \\ar[r]^{a} & A^1\n\\ar[r]^{a} & A^2 \\ar[r]^(.46){a}&\\, \\cdots }.\n\\end{equation}\nThe {\\em resonance varieties} of $A$ are the \njumping loci for the cohomology of these complexes:\n\\begin{equation} \n\\label{eq:res var}\n\\mathcal{R}^{k}_{d}(A)=\\{ a\\in A^1 \\mid \\dim H^k(A,a) \\ge d\\},\n\\end{equation}\nfor $k\\ge 1$ and $1\\le d \\le b_k(A)$. \nThe sets $\\mathcal{R}^{k}_{d}(A)$ are homogeneous algebraic \nsubvarieties of the complex vector space $A^1=\\C^{b_1}$. \n\nWe will only be interested here in the degree $1$ resonance \nvarieties, $\\mathcal{R}^1_{d}(A)$. Let $S=\\Sym(A_1)$ be the symmetric \nalgebra on the dual of $A^1$. If $\\{x_1,\\dots ,x_{b_1}\\}$ \nis the basis for $A_1$ dual to the basis \n$\\{\\alpha_1,\\dots,\\alpha_{b_1}\\}$ for $A^1$, then \n$S$ becomes identified with the polynomial ring \n$\\C[x_1,\\dots ,x_{b_1}]$. Also, let \n$\\mu\\colon A^1\\otimes A^1 \\to A^2$ is the \nmultiplication map, given by \\eqref{eq:multiplication}. \nThen, as shown by Matei and Suciu \\cite{MS00} (generalizing a result \nfrom \\cite{CS99}): \n\\begin{equation}\n\\label{eq:res from matrix}\n\\mathcal{R}^1_{d}(A)=V(E_{d}(\\Theta)), \n\\end{equation}\nwhere $\\Theta=\\Theta_A$ is the $b_1 \\times b_2$ matrix of \nlinear forms over $S$, with entries\n\\begin{equation}\n\\label{eq:delta matrix}\n\\Theta_{j,k}=\\sum_{i=1}^{b_1} \\mu_{i,j,k} x_i. \n\\end{equation}\nIf $X$ is a space having the homotopy type of \na connected, finite-type CW--complex, define the \nresonance varieties of $X$ to be those of $A=H^*(X;\\C)$. \nSimilarly, if $G$ is a finitely presented group, define the \nresonance varieties of $G$ to be those of a $K(G,1)$ space. \nIf $G=\\pi_1(X)$, then $R^1_d(G)=R^1_d(X)$. 
\nFurthermore, \nif $G$ is a commutator-relators group, then \nthe matrix $\\Theta$ above is (equivalent to) the ``linearization\" \nof the (transposed) Alexander matrix $J_G^{\\ab}$, \nsee \\cite{MS00}. This suggests a relationship between \n$V^1_d(G)$ and $R^1_d(G)$. For more on this, \nsee~\\fullref{subsec:tcone}. \n\n\n\\subsection{Resonance of line arrangements}\n\\label{subsec:res arr}\n\nLet $\\mathcal{A}=\\set{\\ell_i}_{i=0}^n$ be an arrangement of lines in ${\\mathbb{CP}}^2$,\nwith complement $X$. The resonance varieties of the Orlik--Solomon\nalgebra $A=H^*(X;\\C)$, first studied by Falk \\cite{Fa97}, are by now\nwell understood. It follows from \\cite{CS99} and from Libgober and\nYuzvinsky \\cite{LY00} that $R^1_d(A)$ is the union of linear subspaces\nof $A^1=\\C^n$; these subspaces (completely determined by the underlying\ncombinatorics) have dimension at least $2$; and intersect only at $0$.\n\nNow let $M$ be the boundary manifold, and \n$\\db{A}=H^*(M;\\C)$ its cohomology ring. \nRecall that $\\db{A}^1=A^1 \\oplus \\bar{A}^2$, with basis \n$\\{\\alpha_i,\\bar{\\beta}_k\\}$, where $1\\le i\\le b_1=n$ and \n$1\\le k \\le b_2=\\abs{\\nbc_2({\\mathsf{d}\\mathcal{A}})}$. Identify the ring \n$\\db{S}=\\Sym(\\db{A}^1)$ with the polynomial ring in \nvariables $\\{x_i, y_k\\}$. It follows from \\eqref{eq:multi} that \nthe matrix $\\db{\\Theta}=\\Theta_{\\db{A}}$ has the form \n\\begin{equation}\n\\label{eq:bmat}\n\\db{\\Theta} = \\begin{pmatrix} \n\\Phi & \\Theta \\\\ \n-\\Theta^\\top & 0 \\end{pmatrix}, \n\\end{equation}\nwhere $\\Phi$ is the $b_1 \\times b_1$ skew-symmetric \nmatrix with entries \n$\\Phi_{i,j} = \\sum_{k=1}^{b_2} \\mu_{i,j,k} y_k$. \nUsing this fact, one can derive the following information \nabout the resonance varieties of $M$. \nWrite $\\beta = 1-b_1(A) + b_2(A)$ and $\\mathcal{R}_d(\\Phi)= V(E_d(\\Phi))$. 
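The block shape \\eqref{eq:bmat} is straightforward to reproduce in code. The following Python sketch (a toy illustration with our own structure constants, $b_1=2$, $b_2=1$ and a single nonzero $\\mu_{i,j,k}$, rather than the data of an actual arrangement) assembles $\\Theta$, $\\Phi$ and $\\db{\\Theta}$ as matrices of linear forms, and checks that $\\db{\\Theta}$ is skew-symmetric.

```python
# Toy structure constants mu_{i,j,k} (0-indexed), with b1 = 2, b2 = 1:
# the exterior algebra on two generators, not tied to a specific arrangement.
b1, b2 = 2, 1
mu = {(0, 1, 0): 1, (1, 0, 0): -1}  # mu_{j,i,k} = -mu_{i,j,k}

def lin(pairs):
    """A linear form, stored as {variable name: coefficient}."""
    return {v: c for v, c in pairs if c}

def neg(f):
    return {v: -c for v, c in f.items()}

# Theta_{j,k} = sum_i mu_{i,j,k} x_{i+1}
Theta = [[lin(('x%d' % (i + 1), mu.get((i, j, k), 0)) for i in range(b1))
          for k in range(b2)] for j in range(b1)]
# Phi_{i,j} = sum_k mu_{i,j,k} y_{k+1}
Phi = [[lin(('y%d' % (k + 1), mu.get((i, j, k), 0)) for k in range(b2))
        for j in range(b1)] for i in range(b1)]

# The doubled matrix [[Phi, Theta], [-Theta^T, 0]].
Theta_hat = ([Phi[i] + Theta[i] for i in range(b1)]
             + [[neg(Theta[j][k]) for j in range(b1)] + [{}] * b2
                for k in range(b2)])
```

Evaluated at a generic point, this $3\\times 3$ matrix is skew-symmetric of odd size, so its determinant vanishes identically while a $2\\times 2$ minor survives.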
\n \n\\begin{prop}[Cohen and Suciu \\cite{CS06}]\n\\label{prop:res var double}\nThe resonance varieties of the doubled algebra \n$\\db{A}=H^*(M;\\C)$ satisfy:\n\\begin{enumerate}\n\\item \\label{rr1}\n$\\mathcal{R}^1_d(\\db{A}) = \\db{A}^1$ for $d \\le \\beta$.\n\\item \\label{rr2}\n$\\mathcal{R}^1_{d}(A) \\times \\bar{A}^2\n\\subseteq \\mathcal{R}^1_{d+\\beta}(\\db{A})$.\n\\item \\label{rr3}\n$\\mathcal{R}_{d}(\\Phi) \\times \\{0\\}\n\\subseteq \\mathcal{R}^1_{d+b_2}(\\db{A})$. \n\\end{enumerate}\n\\end{prop}\n\nThis allows us to give a complete characterization of \nthe resonance variety $\\mathcal{R}^1_1(G)$, for $G$ a boundary \nmanifold group. \n\n\\begin{cor} \n\\label{cor:pen-nearpen}\nLet $\\mathcal{A}=\\{\\ell_0,\\dots ,\\ell_n\\}$ be a line arrangement in ${\\mathbb{CP}}^2$, \n$n\\ge 2$, and $G=\\pi_1(M)$. Then:\n\\[\n\\mathcal{R}^1_1(G)=\\begin{cases}\n\\C^n &\\text{if $\\mathcal{A}$ is a pencil,}\\\\\n\\C^{2(n-1)} &\\text{if $\\mathcal{A}$ is a near-pencil,}\\\\ \n\\C^{b_1+b_2} &\\text{otherwise.}\n\\end{cases}\n\\]\n\\end{cor}\n\n\\begin{proof} \nIf $\\mathcal{A}$ is a pencil, then $G=F_n$, and so $\\mathcal{R}^1_1(G)=\\C^{n}$. \n \nIf $\\mathcal{A}$ is a near-pencil, then $G= \\Z\\times \\pi_1(\\Sigma_{n-1})$ \nand a calculation yields $\\mathcal{R}^1_1(G)=\\C^{2(n-1)}$.\n\nIf $\\mathcal{A}$ is neither a pencil, nor a near-pencil, then $n \\ge 3$, and \na straightforward inductive argument shows that \n$\\beta\\ge 1$. 
Consequently, $\\mathcal{R}^1_1(G) =H^1(G;\\C)$ \nby \\fullref{prop:res var double}.\n\\end{proof}\n\n\\subsection{A pair of arrangements}\n\\label{subsec:braid prod}\n\n\\begin{figure}%\n\\subfigure{%\n\\label{fig:p23}%\n\\begin{minipage}[t]{0.45\\textwidth}\n\\setlength{\\unitlength}{14pt}\n\\begin{picture}(4,6.0)(-5,-1.6)\n\\put(2,2){\\oval(6.5,6)[t]}\n\\put(2,2){\\oval(6.5,7)[b]}\n\\put(-0.8,0.2){\\line(1,0){5.6}}\n\\put(-0.8,2){\\line(1,0){5.6}}\n\\put(-0.8,3.8){\\line(1,0){5.6}}\n\\put(0.8,-0,5){\\line(0,1){5}}\n\\put(3.2,-0,5){\\line(0,1){5}}\n\\put(-1.9,3){\\makebox(0,0){$\\ell_0$}}\n\\put(0.8,-1){\\makebox(0,0){$\\ell_1$}}\n\\put(3.2,-1){\\makebox(0,0){$\\ell_2$}}\n\\put(4.5,-0.3){\\makebox(0,0){$\\ell_3$}}\n\\put(4.6,1.5){\\makebox(0,0){$\\ell_4$}}\n\\put(4.5,3.2){\\makebox(0,0){$\\ell_5$}}\n\\end{picture}\n\\end{minipage}\n}\n\\subfigure{%\n\\label{fig:braidpict}%\n\\begin{minipage}[t]{0.45\\textwidth}\n\\setlength{\\unitlength}{14pt}\n\\begin{picture}(4,6.0)(-4.5,-1.6)\n\\put(2,2){\\oval(6.5,6.3)[t]}\n\\put(2,2){\\oval(6.5,7)[b]}\n\\put(-1,0.5){\\line(1,0){6}}\n\\put(-1,3.5){\\line(1,0){6}}\n\\put(0.5,-0,5){\\line(0,1){5.3}}\n\\put(3.5,-0,5){\\line(0,1){5.3}}\n\\put(-0.5,-0.5){\\line(1,1){4.8}}\n\\put(-1.9,3){\\makebox(0,0){$\\ell_0$}}\n\\put(0.5,-1){\\makebox(0,0){$\\ell_1$}}\n\\put(3.5,-1){\\makebox(0,0){$\\ell_2$}}\n\\put(4.5,1){\\makebox(0,0){$\\ell_3$}}\n\\put(4.5,3.1){\\makebox(0,0){$\\ell_4$}}\n\\put(1.75,2.5){\\makebox(0,0){$\\ell_5$}}\n\\end{picture}\n\\end{minipage}\n}\n\\caption{The product arrangement \n$\\mathcal{A}$ and the braid arrangement $\\mathcal{A}'$}\n\\label{fig:prodbraid}\n\\end{figure}\n\nThe arrangements $\\mathcal{A}$ and $\\mathcal{A}'$ depicted in \\fullref{fig:prodbraid}\nhave defining polynomials\n\\begin{align*}\nQ(\\mathcal{A})&=x_0(x_1+x_0)(x_1-x_0)(x_2+x_0)x_2(x_2-x_0)\\\\\n\\text{and}\\qquad Q(\\mathcal{A}')&=x_0(x_1+x_0)(x_1-x_0)(x_2+x_0)(x_2-x_0)(x_2-x_1).\n\\end{align*}\nThe respective boundary manifolds, $M$ and 
$M'$, \nshare the same Poincar\\'{e} polynomial, namely \n$P(t)=(1+t)(1+10t+t^2)$. \nYet their cohomology rings, $\\db{A}$ and $\\db{A}'$, are \nnot isomorphic---they \nare distinguished by their resonance varieties. Indeed, \na computation with Macaulay~2 \\cite{M2} reveals that\n$$\\mathcal{R}^1_7(\\db{A})=V(x_1,x_2,x_3,x_4,x_5,\\, \ny_3y_5-y_2y_6, \\, y_3y_4-y_1y_6, \\, y_2y_4-y_1y_5),$$\nwhich is a variety of dimension $4$, whereas\n\\begin{multline*}\n\\mathcal{R}^1_7(\\db{A}')=V(x_1,x_2,x_3,x_4,x_5,\ny_2 y_4{-}y_1 y_6,\\, y_2 y_5{-}y_3 y_6,\ny_3 y_4{-}y_4 y_5{-}y_3 y_6+y_4 y_6,\\\\\n y_1 y_5{-}y_4 y_5{-}y_3 y_6+y_4 y_6,\ny_1 y_3{-}y_2 y_3{-}y_4 y_5+y_1 y_6{-}y_3 y_6+y_4 y_6),\n\\end{multline*}\nwhich is a variety of dimension $3$.\n\n\\section{Formality}\n\\label{sect:formal}\nIn this section, we characterize those arrangements $\\mathcal{A}$ \nfor which the boundary manifold $M$ is formal, in the \nsense of Sullivan \\cite{Su77}. It turns out that, with the \nexception of pencils and near-pencils, $M$ is never formal. \n\n\\subsection{Formal spaces and $1$--formal groups}\n\\label{subsect:formal spaces}\n\nLet $X$ be a space having the homotopy type of a connected, \nfinite-type CW--complex. Roughly speaking, $X$ is {\\em formal}, \nif its rational homotopy type is completely determined \nby its rational cohomology ring. More precisely, $X$ is \nformal if there is a zig-zag sequence of morphisms of \ncommutative differential graded algebras connecting \nSullivan's algebra of polynomial forms, $(A_{PL} (X,\\Q),d)$, \nto $(H^*(X;\\Q),0)$, and inducing isomorphisms in cohomology. \nWell known examples of formal spaces include spheres; \nsimply-connected Eilenberg--Mac\\,Lane spaces; \ncompact, connected Lie groups and their classifying spaces; and \ncompact K\\\"{a}hler manifolds. 
The formality property \nis preserved under wedges and products of spaces, and \nconnected sums of manifolds.\n\nA finitely presented group $G$ is said to be {\\em $1$--formal}, in\nthe sense of Quillen \\cite{Q}, if its Malcev Lie algebra (that is,\nthe Lie algebra of the prounipotent completion of $G$) is quadratic;\nsee Papadima and Suciu \\cite{PS} for details. If $X$ is a formal space,\nthen $G=\\pi_1(X)$ is a $1$--formal group, as shown by Sullivan \\cite{Su77}\nand Morgan \\cite{Mo}. Complements of complex projective hypersurfaces are\nnot necessarily formal, see \\cite{Mo}. Nevertheless, their fundamental\ngroups are $1$--formal, as shown by Kohno \\cite{K}.\n\nIf $X$ is the complement of a complex hyperplane arrangement, Brieskorn's\ncalculation of the integral cohomology ring of $X$ (see Orlik and\nTerao \\cite{OT1}) implies that $X$ is (rationally) formal. However, the\nanalogous property of $\\Z_p$--formality does not necessarily hold, due to\nthe presence of non-vanishing triple Massey products in $H^*(X;\\Z_p)$,\nsee Matei \\cite{Ma}.\n\nAs mentioned above, our goal in this section is to decide, \nfor a given line arrangement $\\mathcal{A}$, whether the boundary \nmanifold $M$ is formal, and whether $G=\\pi_1(M)$ is \n$1$--formal. \nIn our situation, Massey products in $H^*(G;\\Z)$\nmay be computed directly from the commutator-relators\npresentation given in \\fullref{prop:THEpres},\nusing the Fox calculus approach described by Fenn and Sjerve \\cite{FS}.\nYet determining whether such products vanish\nis quite difficult, as Massey products are\nonly defined up to indeterminacy. So we turn to other, more manageable, \nobstructions to formality.\n\n\\subsection{Associated graded Lie algebra}\n\\label{subsect:gr lie}\n\nThe lower central series of a group $G$ is the sequence \nof normal subgroups $\\{G_k \\}_{k\\ge 1}$, defined \ninductively by $G_1=G$, $G_2=G'$, and $G_{k+1} =[G_k,G]$. 
\nIt is readily seen that the quotient groups, $G_k\/G_{k+1}$, \nare abelian. Moreover, if $G$ is finitely generated, \nso are all the LCS quotients. \nThe {\\em associated graded Lie algebra} of $G$ is the \ndirect sum $\\gr(G)=\\bigoplus\\nolimits_{k\\ge 1} G_k\/ G_{k+1}$, \nwith Lie bracket induced by the group commutator, \nand grading given by bracket length. \n\nIf the group $G$ is finitely presented, there is another \ngraded Lie algebra attached to $G$, the (rational) \nholonomy Lie algebra, ${\\mathfrak{h}}(G):={\\mathfrak{h}}(H^*(G;\\Q))$. In fact, \nif $X$ is any space having the homotopy type of a \nconnected CW--complex with finite $2$--skeleton, \nand if $G=\\pi_1(X)$, then ${\\mathfrak{h}}(G)={\\mathfrak{h}}(H^*(X;\\Q))$, see Papadima and Suciu \\cite{PS}. \nNow suppose $G$ is a $1$--formal group. Then, \n\\begin{equation}\n\\label{eq:holo gr}\n \\gr (G)\\otimes \\Q\\cong {\\mathfrak{h}}(G), \n\\end{equation}\nas graded Lie algebras; see Quillen \\cite{Q} and Sullivan \\cite{Su77}. \nIn particular, the respective Hilbert series must be equal. \n\nReturning to our situation, let $\\mathcal{A}$ be a line arrangement \nin ${\\mathbb{CP}}^2$, with boundary manifold $M$. A finite presentation \nfor the group $G=\\pi_1(M)$ is given in \\fullref{prop:THEpres}. On the other hand, \nwe know that $H^*(M;\\Q)=\\db{A}$, \nthe double of the (rational) Orlik--Solomon algebra. Thus, \n${\\mathfrak{h}}(G)={\\mathfrak{h}}(\\db{A})$, with presentation given in \\fullref{prop:holo lie bdry arr}. Using these explicit presentations, \none can compute, at least in principle, the Hilbert series of \n$\\gr (G)\\otimes \\Q$ and ${\\mathfrak{h}}(G)$.\n\n\\begin{example}\n\\label{ex:gr holo gen pos}\nLet $\\mathcal{A}$ be an arrangement of $4$ lines in general position \nin ${\\mathbb{CP}}^2$, and $M$ its boundary manifold. 
A presentation \nfor $G=\\pi_1(M)$ is given in \\fullref{ex:general position}, \nwhile a presentation for ${\\mathfrak{h}}(G)$ is given in \n\\fullref{ex:holo gen pos}. Direct computation shows that \n$$\\Hilb( \\gr(G) \\otimes \\Q, t)\n= 6 + 9t + 36 t^2 + 131t^3 + 528t^4 + \\cdots,$$\nwhereas\n$$\\Hilb({\\mathfrak{h}}(G),t) = 6 + 9t + 36 t^2 + 132 t^3 + 534 t^4 + \\cdots.$$\nConsequently, $G$ is not $1$--formal, and so $M$ is not formal, \neither.\n\\end{example}\n\nWe can use the formality test \\eqref{eq:holo gr} to \nshow that several other boundary manifolds \nare not formal, but we do not know a general formula \nfor the Hilbert series of the two graded Lie algebras attached \nto a boundary manifold group. Instead, \nwe turn to another formality test.\n\n\\subsection{The tangent cone formula}\n\\label{subsec:tcone}\nLet $G$ be a finitely presented group, with $H_1(G)$ torsion-free. \nConsider the map $\\exp\\colon \\Hom(G,\\C) \\to \\Hom(G,\\C^*)$, \n$\\exp(f)(z)=e^{f(z)}$. Using this map, we may identify \nthe tangent space at $1$ to the torus $\\Hom(G,\\C^*)$ \nwith the vector space $\\Hom(G,\\C)=H^1(G,\\C)$. Under this \nidentification, the exponential map takes the resonance variety \n$R^1_d(G)$ to $V^1_d(G)$. Moreover, the tangent \ncone at $1$ to $V^1_d(G)$ is contained in $R^1_d(G)$, \nsee Libgober \\cite{Li}. While this inclusion is in \ngeneral strict, equality holds under a formality assumption.\n\n\\begin{thm}[Dimca, Papadima and Suciu \\cite{DPS}] \n\\label{thm:tcone}\nSuppose $G$ is a $1$--formal group. Then, for each $d\\ge 1$, \nthe exponential map induces a complex analytic isomorphism \nbetween the germ at $0$ of $R^1_d(G)$ and the germ at $1$ \nof $V^1_d(G)$. 
Consequently, \n\\begin{equation}\n\\label{eq:tcone}\n\\operatorname{TC}_{1}(V^1_d(G))=R^1_d(G).\n\\end{equation}\n\\end{thm}\n\nIn particular, this ``tangent cone formula\" holds \nin the case when $X$ is the complement of a \ncomplex hyperplane arrangement, \nand $G$ is its fundamental group (see \\cite{CS99} for \na direct approach in this situation). \n\n\\subsection{Formality of boundary manifolds}\n\\label{subsect:bdry formal}\n\nWe can now state the main result of this section, \ncharacterizing those line arrangements for which the \nboundary manifold is formal. \n\n\\begin{thm}\n\\label{thm:nonformal}\nLet $\\mathcal{A}=\\{\\ell_0,\\dots,\\ell_n\\}$ be a line arrangement in \n${\\mathbb{CP}}^2$, with boundary manifold $M$. The following \nare equivalent:\n\\begin{enumerate}\n\\item \\label{f1} The boundary manifold $M$ is formal. \n\\item \\label{f2} The group $G=\\pi_1(M)$ is $1$--formal.\n\\item \\label{f3} The tangent cone to $V^1_1(G)$ at the \nidentity is equal to $\\mathcal{R}^1_1(G)$. \n\\item \\label{f4} $\\mathcal{A}$ is either a pencil or a near-pencil. \n\\end{enumerate}\n\\end{thm}\n\n\\begin{proof}\n\\eqref{f1} $\\Rightarrow$ \\eqref{f2} This follows \nfrom Quillen \\cite{Q} and Sullivan \\cite{Su77}. \n\n\\eqref{f2} $\\Rightarrow$ \\eqref{f3} This follows \nfrom Dimca, Papadima and Suciu \\cite{DPS}.\n\n\\eqref{f3} $\\Rightarrow$ \\eqref{f4} \nSuppose $\\mathcal{A}$ is neither a pencil nor a near-pencil. Then \n\\fullref{cor:pen-nearpen} implies that $\\mathcal{R}^1_1(G) =H^1(G;\\C)$. \nOn the other hand, \\fullref{thm:alex poly arr} implies \nthat $V^1_1(G)$ is a union of codimension $1$ subtori in \n$\\Hom(G,\\C^*)$. Hence, the tangent cone $\\operatorname{TC}_{1}(V^1_1(G))$ \nis the union of a hyperplane arrangement in $H^1(G;\\C)$; thus, \nit does not equal $\\mathcal{R}^1_1(G)$.\n\n\\eqref{f4} $\\Rightarrow$ \\eqref{f1} If $\\mathcal{A}$ is a pencil, \nthen $M=\\sharp^n S^1\\times S^2$. 
If $\\mathcal{A}$ is a near-pencil, \nthen $M=S^1\\times \\Sigma_{n-1}$. In either case, $M$ \nis built out of spheres by successive product and \nconnected sum operations. Thus, $M$ is formal. \n\\end{proof}\n\n\\begin{rem}\nThe structure of the Alexander polynomial of the boundary manifold $M$\nexhibited in \\fullref{thm:alex poly arr} and \\fullref{prop:degenerate} has\nrecently been used by Dimca, Papadima and Suciu \\cite{DPS2} to show that\nthe fundamental group $G=\\pi_1(M)$ is quasi-projective if and only if one\nof the equivalent conditions of \\fullref{thm:nonformal} holds.\n\\end{rem}\n\n\\begin{ack}\nThis research was partially supported by National Security Agency \ngrant H98230-05-1-0055 and a Louisiana State University Faculty \nResearch Grant (D~Cohen), and by NSF grant DMS-0311142 \n(A~Suciu). \n\nWe thank the referee for pertinent remarks.\n\\end{ack}\n\n\\bibliographystyle{gtart}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Conclusions}\n\nIn this report, we introduce two super-resolution networks \\textit{WDSR-A} and \\textit{WDSR-B} based on the central idea of wide activation. We demonstrate in our experiments that with same parameter and computation complexity, models with wider features before ReLU activation have better accuracy for single image super-resolution. We also find training with weight normalization leads to better accuracy for deep super-resolution networks comparing to batch normalization or no normalization. The proposed methods may help to other low-level image restoration tasks like denoising and dehazing.\n\n\\printbibliography\n\n\\end{document}\n\n\\section{Introduction}\n\nDeep convolutional neural networks (CNNs) have been successfully applied to the task of single image super-resolution (SISR)~\\cite{kim2016accurate, lim2017enhanced, liu2016robust, 2018arXiv180208797Z}. 
SISR aims at recovery of a high resolution (HR) image from its low resolution (LR) counterpart (typically a bicubic downsampled version of HR). It has many applications in security, surveillance, satellite and medical imaging~\cite{peled2001superresolution, thornton2006sub} and can serve as a built-in module for other image restoration or recognition tasks~\cite{fan2018wide, liu2017robust, wang2016studying, yu2018free, yu2018generative}.\n \nPrevious image super-resolution networks including SRCNN~\cite{dong2014learning}, FSRCNN~\cite{dong2016accelerating}, ESPCN~\cite{shi2016real} utilized relatively shallow convolutional neural networks (with depths ranging from 3 to 5). They are inferior in accuracy compared with later proposed deep SR networks (e.g.,\ VDSR~\cite{kim2016accurate}, SRResNet~\cite{ledig2016photo} and EDSR~\cite{lim2017enhanced}). Increasing depth brings benefits in representation power~\cite{cohen2016expressive, eldan2016power, liang2016deep, scarselli1998universal}, but meanwhile under-uses feature information from shallow layers (which usually represent low-level features). To address this issue, methods including SRDenseNet~\cite{tong2017image}, RDN~\cite{2018arXiv180208797Z}, MemNet~\cite{tai2017memnet} introduce various skip connections and concatenation operations between shallow layers and deep layers, forming holistic structures for image super-resolution.\n\nIn this work we address this issue from a different perspective. Instead of adding various shortcut connections, we conjecture that the non-linear ReLUs impede information flow from shallow layers to deeper ones~\cite{sandler2018inverted}.
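A toy numerical illustration of this point (our own, not an experiment from this report): for zero-mean activations, a ReLU zeroes out roughly half of the coordinates, so a narrow pre-activation feature map discards a large share of its channel information at every block.

```python
import random

def relu(v):
    return [max(0.0, x) for x in v]

def surviving_fraction(width, trials=2000, seed=0):
    """Average fraction of coordinates left non-zero by a ReLU applied
    to zero-mean Gaussian activations of the given channel width."""
    rng = random.Random(seed)
    kept = 0
    for _ in range(trials):
        v = relu([rng.gauss(0.0, 1.0) for _ in range(width)])
        kept += sum(1 for x in v if x > 0)
    return kept / (trials * width)
```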
Based on a residual SR network, we demonstrate that without additional parameters and computation, simply expanding features before ReLU activation leads to significant improvements for single image super-resolution, beating SR networks with complicated skip connections and concatenations including SRDenseNet~\cite{tong2017image} and MemNet~\cite{tai2017memnet}. The intuition of our work is that expanding features before ReLU allows more information to pass through, while still keeping the high non-linearity of deep neural networks. Thus, low-level SR features from shallow layers may be easier to propagate to the final layer for better dense pixel value predictions.\n\n\begin{figure}[t]\n\centering\n\includegraphics[width=\textwidth]{figs\/teaser.png}\n\caption{\textbf{Left:} vanilla residual block. \textbf{Middle \textit{WDSR-A}:} residual block with wide activation. \textbf{Right \textit{WDSR-B}:} residual block with wider activation and linear low-rank convolution. We demonstrate different residual building blocks for image super-resolution networks. Compared with vanilla residual blocks used in EDSR~\cite{lim2017enhanced}, we introduce \textit{WDSR-A} which has a slim identity mapping pathway with wider (\(2\times\) to \(4\times\)) channels before activation in each residual block. We further introduce \textit{WDSR-B} with a linear low-rank convolution stack and even wider activation (\(6\times\) to \(9\times\)) without computational overhead. In \textit{WDSR-A} and \textit{WDSR-B}, all ReLU activation layers are only applied between two wide features (features with larger channel numbers).}\n\label{fig:wide}\n\end{figure}\n\nThe central idea of wide activation leads us to explore efficient ways to expand features before ReLU, since simply adding more parameters is inefficient for real-time image SR scenarios~\cite{goto2014super}.
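A hedged back-of-the-envelope parameter count (with our own illustrative widths, \(3\times 3\) kernels and biases ignored) shows how the identity pathway can be slimmed to pay for wider activation at a fixed budget:

```python
def vanilla_block_params(w, k=3):
    """Vanilla residual block: two k x k convs at width w."""
    return 2 * k * k * w * w

def wdsr_a_block_params(w_id, r, k=3):
    """WDSR-A style block: slim identity width w_id, expand r x
    before the ReLU, then shrink back with another k x k conv."""
    expand = k * k * w_id * (r * w_id)
    shrink = k * k * (r * w_id) * w_id
    return expand + shrink

def wdsr_b_block_params(w_id, r, w_lin, k=3):
    """WDSR-B style block: 1x1 expansion, then a linear low-rank
    stack (1x1 followed by k x k); the widths here are our own choices."""
    expand = w_id * (r * w_id)          # 1x1 conv
    lowrank = (r * w_id) * w_lin        # linear 1x1 conv
    spatial = k * k * w_lin * w_id      # k x k conv
    return expand + lowrank + spatial
```

For instance, a vanilla block at width 64 and a WDSR-A-style block with identity width 32 and \(4\times\) expansion both use 73{,}728 weights, while a WDSR-B-style block keeping identity width 64 with \(6\times\) expansion and linear width 48 stays under that budget.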
We first introduce the SR residual network \textit{WDSR-A}, which has a slim identity mapping pathway with wider (\(2\times\) to \(4\times\)) channels before activation in each residual block. However, when the expansion ratio is above \(4\), channels of the identity mapping pathway have to be further slimmed, and we find that this dramatically deteriorates accuracy. Thus, as the second step, we keep the channel numbers of the identity mapping pathway constant, and explore more efficient ways to expand features. We first consider group convolution~\cite{xie2017aggregated} and depthwise separable convolution~\cite{chollet2016xception}. However, we find both of them have unsatisfactory performance for the task of image super-resolution. To this end, we propose \textit{linear low-rank convolution} that factorizes a large convolution kernel into two low-rank convolution kernels. With wider activation and \textit{linear low-rank convolutions}, we construct our SR network \textit{WDSR-B}. It has even wider activation (\(6\times\) to \(9\times\)) without additional parameters or computation, and boosts accuracy further for image super-resolution. The illustration of \textit{WDSR-A} and \textit{WDSR-B} is shown in Figure~\ref{fig:wide}. Experiments show that networks with wider activation consistently beat their baselines under different parameter budgets.\n\nAdditionally, compared with batch normalization~\cite{ioffe2015batch} or no normalization, we find that training with weight normalization~\cite{salimans2016weight} leads to better accuracy for deep super-resolution networks. Previous works including EDSR~\cite{lim2017enhanced}, BTSRN~\cite{fan2017balanced} and RDN~\cite{2018arXiv180208797Z} found that batch normalization~\cite{ioffe2015batch} deteriorates the accuracy of image super-resolution, which is also confirmed in our experiments.
We provide three intuitions and related experiments showing that batch normalization, due to 1) mini-batch dependency, 2) different formulations in training and inference, and 3) strong regularization side-effects, is not suitable for training SR networks. However, with the increasing depth of neural networks for SR (e.g.\\ MDSR~\\cite{lim2017enhanced} has a depth of around 180), networks without batch normalization become difficult to train. To this end, we introduce weight normalization for training deep SR networks. Weight normalization enables us to train SR networks with an order of magnitude higher learning rate, leading to both faster convergence and better performance.\n\nIn summary, our contributions are as follows. 1) We demonstrate that in residual networks for SISR, wider activation yields better performance at the same parameter complexity. Without additional computational overhead, we propose the network \\textit{WDSR-A}, which has wider (\\(2\\times\\) to \\(4\\times\\)) activation, for better performance. 2) To further improve efficiency, we also propose the \\textit{linear low-rank convolution} as the basic building block for the construction of our SR network \\textit{WDSR-B}. It enables even wider activation (\\(6\\times\\) to \\(9\\times\\)) without additional parameters or computation, and boosts accuracy further. 3) We suggest that batch normalization~\\cite{ioffe2015batch} is not suitable for training deep SR networks, and introduce weight normalization~\\cite{salimans2016weight} for faster convergence and better accuracy. 4) We train the proposed \\textit{WDSR-A} and \\textit{WDSR-B}, built on the principle of wide activation, with weight normalization, and achieve better results on the large-scale DIV2K image super-resolution benchmark. 
Our method also won 1st place in the NTIRE 2018 Challenge on Single Image Super-Resolution in all three realistic tracks.\n\\section{Related Work}\n\n\\subsection{Super-Resolution Networks}\nDeep learning-based methods for single image super-resolution significantly outperform conventional ones~\\cite{park2003super, yang2010image} in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). SRCNN~\\cite{dong2014learning} was the first work to utilize an end-to-end convolutional neural network as a mapping function from LR images to their HR counterparts. Since then, various convolutional neural network architectures have been proposed to improve accuracy and efficiency. In this section, we review these approaches in several groups.\n\n\\textbf{Upsampling layers} Super-resolution involves an upsampling operation on the image resolution. The first super-resolution network, SRCNN~\\cite{dong2014learning}, applied convolution layers to the pre-upscaled LR image. This is inefficient because all convolutional layers have to compute on the high-resolution feature space, yielding \\(S^2\\) times the computation of the low-resolution space, where \\(S\\) is the upscaling factor. To accelerate processing speed without loss of accuracy, FSRCNN~\\cite{dong2016accelerating} utilized a parametric deconvolution layer at the end of the SR network, making all convolution layers compute on the LR feature space. Another efficient, non-parametric alternative is pixel shuffling~\\cite{shi2016real} (a.k.a., sub-pixel convolution). Pixel shuffling is also believed to introduce fewer checkerboard artifacts~\\cite{odena2016deconvolution} than the deconvolutional layer.\n\n\\textbf{Very deep and recursive neural networks} The depth of neural networks is of central importance for deep learning~\\cite{he2016deep, simonyan2014very, szegedy2017inception}. 
This has also been experimentally verified for the single image super-resolution task~\\cite{fan2017balanced, kim2016accurate, ledig2016photo, lim2017enhanced, tai2017memnet, tong2017image, 2018arXiv180208797Z}. These very deep networks (usually more than 10 layers) stack many small-kernel (i.e., \\(3 \\times 3\\)) convolutions and have higher accuracy than shallow ones~\\cite{dong2016accelerating, shi2016real}. However, the increasing depth of convolutional neural networks introduces over-parameterization and difficulty in training. To address these issues, recursive neural networks~\\cite{kim2016deeply, tai2017image} were proposed, which re-use weights repeatedly.\n\n\\textbf{Skip connections} On the one hand, deeper neural networks have better performance in various tasks~\\cite{simonyan2014very}; on the other hand, low-level features are also important for the image super-resolution task~\\cite{2018arXiv180208797Z}. To address this contradiction, VDSR~\\cite{kim2016accurate} proposed a very deep VGG-like~\\cite{simonyan2014very} network with a global residual connection (i.e.\\ an identity skip connection) for SISR. SRResNet~\\cite{ledig2016photo} proposed a ResNet-like~\\cite{he2016deep} network. Densely connected networks~\\cite{huang2017densely} were also adapted for SISR in SRDenseNet~\\cite{tong2017image}. MemNet~\\cite{tai2017memnet} integrated skip connections and a recursive unit for low-level image restoration tasks. To further exploit the hierarchical features from all the convolutional layers, residual dense networks (RDN)~\\cite{2018arXiv180208797Z} were proposed. All these works benefit from additional skip connections between different levels of features in deep neural networks.\n\n\\textbf{Normalization layers} As image super-resolution networks grow deeper and deeper (from the 3-layer SRCNN~\\cite{dong2014learning} to the 160-layer MDSR~\\cite{lim2017enhanced}), training becomes more difficult. 
Batch normalization layers are one of the cures for this problem in many tasks~\\cite{he2016deep, szegedy2017inception}, and batch normalization was also introduced into SISR networks in SRResNet~\\cite{ledig2016photo}. However, it has been found empirically that batch normalization~\\cite{ioffe2015batch} hinders the accuracy of image super-resolution. Thus, in recent image SR networks~\\cite{fan2017balanced, lim2017enhanced, 2018arXiv180208797Z}, batch normalization is abandoned.\n\n\\subsection{Parameter-Efficient Convolutions}\n\nIn this subsection, we also review several related methods proposed for improving the efficiency of convolutions.\n\n\\textbf{Flattened convolution} Flattened convolutions~\\cite{jin2014flattened} consist of a consecutive sequence of one-dimensional filters across all directions in 3D space (lateral, vertical and horizontal) to approximate conventional convolutions. The number of parameters in a flattened convolution decreases from \\(XYC\\) to \\(X+Y+C\\), where \\(C\\) is the number of input planes, and \\(X\\) and \\(Y\\) denote the filter width and height.\n\n\\textbf{Group convolution} Group convolutions~\\cite{xie2017aggregated} divide features channel-wise into groups and perform convolutions inside each group individually, followed by a concatenation to form the final output. In group convolutions, the number of parameters can be reduced by a factor of \\(g\\), where \\(g\\) is the number of groups. Group convolutions are key components of many efficient models (e.g.\\ ResNeXt~\\cite{xie2017aggregated}).\n\n\\textbf{Depthwise separable convolution} A depthwise separable convolution is a stack of a depthwise convolution (i.e.\\ a spatial convolution performed independently over each channel of an input) followed by a pointwise convolution (i.e.\\ a \\(1 \\times 1\\) convolution) without non-linearities. It can also be viewed as a specific type of group convolution where the number of groups \\(g\\) equals the number of channels. 
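The parameter savings of these convolution variants can be made concrete with a quick count (widths and kernel sizes below are chosen purely for illustration):

```python
def conv_params(c_in, c_out, k):
    # Standard k x k convolution.
    return c_in * c_out * k * k

def group_conv_params(c_in, c_out, k, g):
    # g groups, each mapping c_in/g input channels to c_out/g output channels.
    return g * (c_in // g) * (c_out // g) * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k x k (one spatial filter per channel, i.e. g == c_in)
    # followed by a pointwise 1 x 1 convolution.
    return c_in * k * k + c_in * c_out

c, k = 64, 3
# Grouping reduces the parameter count by a factor of g.
assert group_conv_params(c, c, k, g=4) == conv_params(c, c, k) // 4
# The depthwise part equals a group convolution with g = number of channels.
assert group_conv_params(c, c, k, g=c) == c * k * k
assert depthwise_separable_params(c, c, k) < conv_params(c, c, k)
```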
The depthwise separable convolution forms the basic architecture of many efficient models, including Xception~\\cite{chollet2016xception}, MobileNet~\\cite{howard2017mobilenets} and MobileNetV2~\\cite{2018arXiv180104381S}.\n\n\\textbf{Inverted residuals} Another work~\\cite{2018arXiv180104381S} expands features before activation for image recognition tasks (named inverted residuals). The intermediate expansion layer uses lightweight depthwise convolutions to filter features as a source of non-linearity. The inverted residual shares similar merits with our proposed wide activation; however, we found that the inverted residual proposed in~\\cite{2018arXiv180104381S} has unsatisfactory performance on the task of image SR. In this work, we mainly explore different network architectures to improve the accuracy and efficiency of image super-resolution with the central idea of wide activation.\n\\section{Proposed Methods}\n\\subsection{Wide Activation: \\textit{WDSR-A}}\nIn this part, we mainly describe how we expand features before the ReLU activation layer without computational overhead. We consider the effects of wide activation inside a residual block. A naive way is to directly increase the channel numbers of all features. However, this proves nothing except that more parameters lead to better performance. Thus, in this section, we design our SR networks to study the importance of wide features before activation under the \\textit{same parameter and computational budgets}. Our first step towards wide activation is extremely simple: we slim the features of the residual identity mapping pathway while expanding the features before activation, as shown in Figure~\\ref{fig:wide}.\n\nTwo-layer residual blocks are specifically studied, following the baseline EDSR~\\cite{lim2017enhanced}. Assume the width of the identity mapping pathway (Fig.~\\ref{fig:network}) is \\(w_1\\) and the width before activation inside the residual block is \\(w_2\\). 
We introduce the expansion factor before activation as \\(r\\), thus \\(w_2 = r \\times w_1\\). In vanilla residual networks (e.g.,\\ as used in EDSR and MDSR) we have \\(w_2 = w_1\\), and the number of parameters is \\(2 \\times w_1^2 \\times k^2\\) in each residual block. The computational (Mult-Add operations) complexity is a constant scaling of the number of parameters when we fix the input patch size. To have the same complexity, \\(w_1^2 = \\hat{w_1} \\times \\hat{w_2} = r \\times \\hat{w_1}^2\\), the residual identity mapping pathway needs to be slimmed by a factor of \\(\\sqrt{r}\\), while the activation can be expanded by a factor of \\(\\sqrt{r}\\) at the same time.\n\nThis simple idea forms our first widely-activated SR network, \\textit{WDSR-A}. Experiments show that \\textit{WDSR-A} is extremely effective for improving the accuracy of SISR when \\(r\\) is between 2 and 4. However, for \\(r\\) larger than this threshold the performance drops quickly. This is likely because the identity mapping pathway becomes too slim. For example, in our baseline EDSR (16 residual blocks with 64 filters) for \\(\\times 3\\) super-resolution, when \\(r\\) is beyond 6, \\(w_1\\) will be even smaller than the final HR image representation space \\(3S^2\\) (we use pixel shuffle as the upsampling layer), where \\(S\\) is the scaling factor and 3 represents RGB. Thus, we seek a parameter-efficient convolution to further improve accuracy and efficiency with wider activation.\n\n\\subsection{Efficient Wider Activation: \\textit{WDSR-B}}\nTo address the above limitation, we keep the channel numbers of the identity mapping pathway constant and explore more efficient ways to expand features. Specifically, we consider \\(1 \\times 1\\) convolutions, which are widely used for channel number expansion or reduction in ResNets~\\cite{he2016deep}, ResNeXts~\\cite{xie2017aggregated} and MobileNetV2~\\cite{2018arXiv180104381S}. In \\textit{WDSR-B} (Fig. 
\\ref{fig:wide}) we first expand the channel numbers using a \\(1 \\times 1\\) convolution and then apply the non-linearity (ReLU) after this convolution layer. We further propose an efficient \\textit{linear low-rank convolution} that factorizes a large convolution kernel into two low-rank convolution kernels: a stack of one \\(1 \\times 1\\) convolution to reduce the number of channels and one \\(3 \\times 3\\) convolution to perform spatial-wise feature extraction. We find that adding a ReLU activation inside the \\textit{linear low-rank convolution} significantly reduces accuracy, which also supports the wide activation hypothesis.\n\n\\subsection{Weight Normalization vs. Batch Normalization}\n\nIn this part, we analyze the different purposes and effects of batch normalization (BN)~\\cite{ioffe2015batch} and weight normalization (WN)~\\cite{salimans2016weight}. We offer three intuitions why batch normalization is not appropriate for image SR tasks. We then demonstrate that weight normalization does not have these drawbacks of BN, and that it can be used effectively to ease the training difficulty of deep SR networks.\n\n\\textbf{Batch normalization} BN re-calibrates the mean and variance of intermediate features to solve the problem of \\textit{internal covariate shift}~\\cite{ioffe2015batch} in training deep neural networks. It has different formulations in training and testing. For simplicity, here we ignore the re-scaling and re-centering learnable parameters of BN. During training, features in each layer are normalized with the mean and variance of the current training mini-batch:\n\\begin{equation}\n\\hat x_B = \\frac{x_B - E_B[x_B]}{\\sqrt{Var_B[x_B]+\\epsilon}},\n\\end{equation}\nwhere \\(x_B\\) denotes the features of the current training batch, and \\(\\epsilon\\) is a small value (e.g.\\ \\(10^{-5}\\)) to avoid division by zero. 
The first-order and second-order statistics are then updated to global statistics in a moving-average way:\n\n\\begin{equation}\nE[x] \\leftarrow E_B[x_B],\n\\end{equation}\n\\begin{equation}\nVar[x] \\leftarrow Var_B[x_B], \n\\end{equation}\nwhere \\(\\leftarrow\\) denotes a moving-average assignment. During inference, these global statistics are used instead to normalize the features:\n\\begin{equation}\n\\hat x_{test} = \\frac{x_{test} - E[x]}{\\sqrt{Var[x]+\\epsilon}}.\n\\end{equation}\n\nAs shown in the formulations of BN, it causes the following problems.\n1) For image super-resolution, commonly only small image patches (e.g.\\ \\(48 \\times 48\\)) and a small mini-batch size (e.g.\\ 16) are used to speed up training~\\cite{fan2017balanced, kim2016accurate, ledig2016photo, lim2017enhanced, tai2017memnet, tong2017image, 2018arXiv180208797Z}; thus the mean and variance of the small image patches differ considerably among mini-batches, making these statistics unstable, as demonstrated in the section on experiments.\n2) BN is also believed to act as a regularizer and in some cases can eliminate the need for Dropout~\\cite{ioffe2015batch}. However, SR networks are rarely observed to overfit on training datasets. Instead, many kinds of regularizers, for example weight decay and dropout, are not adopted in SR networks~\\cite{fan2017balanced, kim2016accurate, ledig2016photo, lim2017enhanced, tai2017memnet, tong2017image, 2018arXiv180208797Z}.\n3) Unlike image classification tasks, where a (scale-invariant) softmax is used at the end of the network to make predictions, for image SR the different formulations in training and testing may deteriorate the accuracy of dense pixel value predictions.\n\n\\textbf{Weight normalization} Weight normalization, on the other hand, is a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. 
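This reparameterization can be illustrated in a few lines of pure Python (a minimal sketch, not the training code; the vectors are illustrative):

```python
import math

def weight_norm(v, g):
    # w = (g / ||v||) * v, so ||w|| = g regardless of the scale of v.
    norm = math.sqrt(sum(x * x for x in v))
    return [g / norm * x for x in v]

w = weight_norm([3.0, 4.0], g=2.0)            # ||v|| = 5, so w = [1.2, 1.6]
assert math.isclose(math.sqrt(sum(x * x for x in w)), 2.0)

# Rescaling v leaves w unchanged: only the direction of v matters,
# while the length is carried entirely by the learnable scalar g.
w_scaled = weight_norm([30.0, 40.0], g=2.0)
assert all(math.isclose(a, b) for a, b in zip(w, w_scaled))
```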
It does not introduce dependencies between the examples in a mini-batch, and has the same formulation in training and testing. Assume the output \\(\\mathbf{y}\\) has the form:\n\\begin{equation}\n\\mathbf{y} = \\mathbf{w} \\cdot \\mathbf{x} + b, \n\\end{equation}\nwhere \\(\\mathbf{w}\\) is a k-dimensional weight vector, \\(b\\) is a scalar bias term, and \\(\\mathbf{x}\\) is a k-dimensional vector of input features. WN re-parameterizes the weight vectors in terms of new parameters using\n\\begin{equation}\n\\mathbf{w} = \\frac{g}{||\\mathbf{v}||} \\mathbf{v}, \n\\end{equation}\nwhere \\(\\mathbf{v}\\) is a k-dimensional vector, \\(g\\) is a scalar, and \\(||\\mathbf{v}||\\) denotes the Euclidean norm of \\(\\mathbf{v}\\). With this formulation, we have \\(||\\mathbf{w}|| = g\\), independent of the parameters \\(\\mathbf{v}\\). As shown in~\\cite{salimans2016weight}, decoupling the length and direction speeds up the convergence of deep neural networks. More importantly, for image SR, it does not introduce the problems of BN described above, since it is just a reparameterization technique and has exactly the same representation ability.\n\nIt is also noteworthy that introducing WN allows training with a higher learning rate (i.e.\\ \\(10 \\times\\)), and improves both training and testing accuracy.\n\n\n\\subsection{Network Structure}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\textwidth]{figs\/network.png}\n\\caption{Demonstration of our simplified SR network compared with EDSR~\\cite{lim2017enhanced}.}\n\\label{fig:network}\n\\end{figure}\n\nIn this part, we give an overview of the \\textit{WDSR} network architectures. We made two major modifications to the EDSR~\\cite{lim2017enhanced} super-resolution network.\n\n\\textbf{Global residual pathway} Firstly, we find that the global residual pathway is a linear stack of several convolution layers, which is computationally expensive. 
We argue that these linear convolutions are redundant (Fig.~\\ref{fig:network}) and can, to some extent, be absorbed into the residual body. Thus, we slightly modify the network structure and use a single convolution layer with kernel size \\(5 \\times 5\\) that directly takes the \\(3 \\times H \\times W\\) LR RGB image\/patch as input and outputs its \\(3S^2 \\times H \\times W\\) HR counterpart, where \\(S\\) is the scale. This results in fewer parameters and less computation. In our experiments we have not found any accuracy drop with this simpler form.\n\n\\textbf{Upsampling layer} Different from previous state-of-the-art methods~\\cite{lim2017enhanced, 2018arXiv180208797Z}, where one or more convolutional layers are inserted after upsampling, our proposed \\textit{WDSR} extracts all features in the low-resolution stage (Fig.~\\ref{fig:network}). Empirically, we find that this does not affect the accuracy of SR networks while improving speed by a large margin.\n\\section{Experimental Results}\n\nWe train our models on the DIV2K dataset~\\cite{timofte2017ntire} since the dataset is relatively large and contains high-quality (2K resolution) images. The default splits of the DIV2K dataset consist of 800 training images, 100 validation images and 100 testing images. We use the 800 training images for training and 10 validation images for validation during training. The trained models are evaluated on the 100 validation images of the DIV2K dataset (the testing images are not publicly available). We mainly measure PSNR in RGB space. The ADAM optimizer~\\cite{kingma2014adam} is used with \\(\\beta_1 = 0.9\\), \\(\\beta_2 = 0.999\\) and \\(\\epsilon = 10^{-8}\\). The batch size is set to 16. The learning rate is initialized to the maximum value that still allows convergence (\\(10^{-4}\\) for models without weight normalization and \\(10^{-3}\\) for models with weight normalization). The learning rate is halved every \\(2 \\times 10^5\\) iterations.\n\nWe crop \\(96 \\times 96\\) RGB patches from each HR image and the aligned patches from its bicubic downsampled counterpart as training output-input pairs. 
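The aligned cropping of output-input pairs can be sketched as follows (a pure-Python illustration over nested lists, not the paper's pipeline; a nearest-neighbour downsample stands in for the bicubic kernel):

```python
def paired_crop(hr, lr, y, x, patch, S):
    """Crop an aligned (HR output, LR input) patch pair.

    y, x and patch are assumed to be multiples of the scale S so that
    the LR crop covers exactly the same spatial region as the HR crop.
    """
    hr_patch = [row[x:x + patch] for row in hr[y:y + patch]]
    lp, ly, lx = patch // S, y // S, x // S
    lr_patch = [row[lx:lx + lp] for row in lr[ly:ly + lp]]
    return hr_patch, lr_patch

# Toy 8x8 "HR image" and its 4x4 "LR image" downsampled by S = 2.
S = 2
hr = [[r * 8 + c for c in range(8)] for r in range(8)]
lr = [[hr[r * S][c * S] for c in range(4)] for r in range(4)]  # NN stand-in

hr_p, lr_p = paired_crop(hr, lr, y=2, x=4, patch=4, S=S)
assert len(hr_p) == 4 and len(lr_p) == 2
assert lr_p[0][0] == hr[2][4]  # both crops start at the same spatial point
```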
Training data is augmented with random horizontal flips and rotations, following common data augmentation methods~\\cite{fan2017balanced, lim2017enhanced}. During training, the mean RGB values of the DIV2K training images are also subtracted from the input images.\n\n\\subsection{Wide and Efficient Wider Activation:}\n\nIn this part, we show the results of the baseline model EDSR~\\cite{lim2017enhanced} and our proposed \\textit{WDSR-A} and \\textit{WDSR-B} for the task of bicubic \\(\\times 2\\) image super-resolution on the DIV2K dataset. To ensure fairness, each model is evaluated under different parameter and computational budgets by controlling the number of residual blocks with a fixed number of channels. The results are shown in Table~\\ref{figs:wide_activation}. We compare each model by its number of residual blocks. The results suggest that our proposed \\textit{WDSR-A} and \\textit{WDSR-B} have better accuracy and efficiency than EDSR~\\cite{lim2017enhanced}. \\textit{WDSR-B}, with wider activation, also has better or similar performance compared with \\textit{WDSR-A}, which supports our wide activation hypothesis and demonstrates the effectiveness of our proposed \\textit{linear low-rank convolution}.\n\n\\input{tabs\/sr_edsr_wdsr.tex}\n\n\\subsection{Normalization layers:}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{figs\/weight_norm\/training_l1_loss.png}\n\\includegraphics[width=0.48\\textwidth]{figs\/weight_norm\/valid_psnr.png}\n\\caption{Training L1 loss and validation PSNR of the same model trained with weight normalization, batch normalization or no normalization.}\n\\label{figs:weight_norm}\n\\end{figure}\n\nWe also demonstrate the effectiveness of weight normalization for improved training of SR networks. We compare the training and testing accuracy (PSNR) when training the same model with different normalization methods, i.e.\\ weight normalization, batch normalization or no normalization. 
The results in Figure~\\ref{figs:weight_norm} show that the model trained with weight normalization has faster convergence and better accuracy. The model trained with batch normalization is unstable during testing, which is likely due to the different formulations of BN in training and testing.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.48\\textwidth]{figs\/weight_norm\/bn_training_l1_loss.png}\n\\includegraphics[width=0.48\\textwidth]{figs\/weight_norm\/bn_valid_psnr.png}\n\\caption{Training L1 loss and validation PSNR of the model trained with batch normalization under different learning rates.}\n\\label{figs:batch_norm}\n\\end{figure}\n\n\nTo further study whether this is because the learning rate is too large for models trained with batch normalization, we also train the same model with different learning rates. The results are shown in Figure~\\ref{figs:batch_norm}. Even with \\(lr = 10^{-4}\\), for which the training curves are stable, the validation PSNR is still not stable across training. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{Sec-Intro}\nTime-division duplex (TDD) is a communication protocol where the receptions and transmissions of the network nodes are allocated to non-overlapping time slots in the same frequency band. TDD is widely used in 3G, 4G, and 5G since it allows for easy and flexible control over the flow of uplink and downlink data at the nodes, which is achieved by changing the portion of time slots allocated to reception and transmission at the nodes \\cite{holma2011lte,TechNoteLET}.\n\n\nIn general, the TDD scheme can be static or dynamic. In static-TDD, each node pre-allocates a fraction of the total number of time slots for transmission and the rest of the time slots for reception, regardless of the channel conditions and the interference in the network \\cite{holma2011lte}. 
Due to the scheme being static, the time slots in which the nodes perform reception and the time slots in which the nodes perform transmission are prefixed and unchangeable over long periods \\cite{TechNoteLET}. On the other hand, in dynamic (D)-TDD, each time slot can be dynamically allocated either for reception or for transmission at the nodes based on the channel gains of the network links in order to maximize the overall network performance. Thereby, D-TDD schemes achieve higher performance gain compared to static-TDD schemes at the expense of overheads. As a result, D-TDD schemes have attracted significant research interest, see \\cite{8408762,8016428,7386709,957300,8119931,1044604,7073589,8399860} and references therein. Motivated by this, in this paper we investigate D-TDD schemes.\n \n\nD-TDD schemes can be implemented in either distributed or centralized fashion. In distributed D-TDD schemes, the individual nodes, or a group of nodes, make decisions for transmission, reception, or silence without synchronizing with the rest of the nodes in the network \\cite{8403595,8334580,8812711,6213034}. As a result, a distributed D-TDD scheme is practical for implementation, however, it does not maximize the overall network performance. On the other hand, in centralized D-TDD schemes, the decision of whether a node should receive, transmit or stay silent in a given time slot is performed at a central processor in the network, which then informs the node about its decision. To this end, centralized D-TDD schemes require full channel state information (CSI) of all network links at the central processor. In this way, the receptions, transmissions, and silences of the nodes are synchronized by the central processor in order to maximize the overall network performance. Since centralized D-TDD schemes require full CSI of all network links, they induce excessive overhead and thus are not practical for implementation. 
However, knowing the performance of the optimal centralized D-TDD scheme is highly valuable since it serves as an upper bound and thus serves as an (unattainable) benchmark for any practical TDD scheme. The optimal centralized D-TDD scheme for a wireless network is an open problem. Motivated by this, in this paper we derive the optimal centralized D-TDD scheme for a wireless network.\n\n\n\nA network node can operate in two different modes, namely, full-duplex (FD) mode and half-duplex (HD) mode. In the FD mode, transmission and reception at the node can occur simultaneously and in the same frequency band. However, due to the in-band simultaneous reception and transmission, nodes are impaired by self-interference (SI), which occurs due to leakage of energy from the transmitter-end into the receiver-end of the nodes. Currently, there are advanced hardware designs which can suppress the SI by about 110 dB in certain scenarios, see \\cite{7024120}. On the other hand, in the HD mode, transmission and reception take place in the same frequency band but in different time slots, or in the same time slot but in different frequency bands, which avoids the creation of SI. However, since a FD node uses the resources twice as much as compared to a HD node, the achievable data rates of a network comprised of FD nodes may be significantly higher than that comprised of HD nodes. Motivated by this, in this paper we investigate a network comprised of FD nodes, while, as a special case, we also obtain the optimal centralized D-TDD for a network comprised of HD nodes.\n\n\nD-TDD schemes have been investigated in\n\\cite{6666413,7070655,1705939,4556648,1638665,7136469,4524858,8611365,1261897,8812711,8004461,7000558,7876862}, where \\cite{6666413,7070655,1705939,4556648,1638665,7136469,4524858} investigate distributed D-TDD schemes and centralized D-TDD schemes are investigated in \\cite{8611365,1261897,8812711,8004461,7000558,7876862}. 
The works in \\cite{8611365,1261897,8812711} propose non-optimal heuristic centralized scheduling schemes. Specifically, the authors in \\cite{8611365} propose a centralized D-TDD scheme named SPARK that provides more than 120$\\%$ improvement compared to similar distributed D-TDD schemes. In \\cite{1261897} the authors proposed a centralized D-TDD scheme but do not provide a mathematical analysis of the proposed scheme. In \\cite{8812711}, the authors applied a centralized D-TDD scheme to optimise the power of the network nodes in order to reduce the inter-cell interference, however, the proposed solution is sub-optimal. The work in \\cite{8004461} proposes a centralized D-TDD scheme for a wireless network where the decisions for transmission and reception at the nodes are chosen from a finite and predefined set of configurations, which is not optimal in general and may limit the network performance. A network comprised of two-way links is investigated in \\cite{7000558}, where each link can be used either for transmission or reception in a given time slot, with the aim of optimising the direction of the two-way links in each time slot. However, the difficulty of the problem in \\cite{7000558} also leads to a sub-optimal solution being proposed. The work in \\cite{7876862} investigates a wireless network, where the nodes can select to transmit, receive, or be silent in a given time slot. However, the proposed solution in \\cite{7876862} is again sub-optimal due to the difficulty of the investigated problem. On the other hand, \\cite{7801002,7491359} investigate centralized D-TDD schemes for a wireless network comprised of FD nodes. Specifically, the authors in \\cite{7801002}\nused an approximation to develop a non-optimum game theoretic centralized D-TDD scheme, which uses round-robin scheduling, and they provide analysis for a cellular network comprised of two cells. 
In \\cite{7491359}, the authors investigate a sub-optimal centralized D-TDD scheme that performs FD and HD mode selection at the nodes based on geometric programming.\n\nTo the best of our knowledge, the optimal centralized D-TDD scheme for a wireless network comprised of FD or HD nodes is an open problem in the literature. As a result, in this paper, we derive the optimal centralized D-TDD scheme for a wireless network comprised of FD nodes. In particular, we derive the optimal scheduling of the reception, transmission, simultaneous reception and transmission, or silence at every FD node in a given time slot such that the rate region of the network is maximized. In addition, as a special case, we also derive the optimal centralized D-TDD scheme for a network comprised of HD nodes as well as for a network comprised of both FD and HD nodes. Our numerical results show that the proposed optimal centralized D-TDD scheme achieves significant gains over existing centralized D-TDD schemes.\n\nThe rest of this paper is organized as follows. In Section \\ref{Sec-Sys}, we present the system and channel model. In Section \\ref{PRoblem_def}, we formulate the centralized D-TDD problem. In Section \\ref{Sec-DTDD}, we present the optimal centralized D-TDD scheme for a wireless network comprised of FD and HD nodes. In Section \\ref{Sec-QoS}, we investigate rate allocation fairness and propose a corresponding rate allocation scheme. Simulation and numerical results are provided in Section \\ref{Sec-Num}, and the conclusions are drawn in Section \\ref{Sec-Conc}.\n\n\n \\begin{figure}[t]\n\\centering\\includegraphics[width=5.8in]{sys-mod-TDD13.eps}\n\\vspace{-8mm}\n\\caption{A wireless network comprised of 5 nodes.}\n\\label{general_networ_Sys_Model}\n\\end{figure}\n\n\\section{System Model}\\label{Sec-Sys}\nIn this section, we present the system and channel models.\n\n\\subsection{System Model}\nWe consider a wireless network comprised of $K$ FD nodes. 
Each network node is able to wirelessly communicate with the rest of the nodes in the network and, in a given time slot, can operate as: 1) a receiver that receives information from other network nodes, 2) a transmitter that sends information to other network nodes, 3) a node that simultaneously receives and transmits information from\/to other network nodes, or 4) a silent node. The nodes can change their state from one time slot to the next.\nMoreover, in the considered network, we assume that each node is able to receive information from multiple nodes simultaneously by utilizing a multiple-access channel scheme, see \\cite[Ch.~15.1.2]{cover}; however, a node cannot transmit information to more than one node, i.e., we assume that information-theoretic broadcasting schemes, see \\cite[Ch.~15.1.3]{cover}, are not employed. Hence, the considered network is a collection of many multiple-access channels, all operating in the same frequency band.\n\n\n\n\nIn the considered wireless network, we assume that there exists a link between any two nodes in the network, i.e., that the network graph is a complete graph. Each link is assumed to be impaired by independent flat fading, which is modelled via the channel gain of the link. The channel gain between any two nodes can be set to zero during the entire transmission time, which models the case when the wireless signal sent from one of the two nodes cannot propagate and reach the other node. Otherwise, if the channel gain is non-zero in any time slot during the transmission, then the wireless signal sent from one of the two nodes can reach the other node. Obviously, not all of the links leading to a given node carry desired information that is wanted by the considered node. There are links which carry undesired information to a considered node, which are referred to as interference links. 
An interference link carries the signal transmitted from a given node to an unintended destination node, where it acts as interference. For example, in Fig.~\\ref{general_networ_Sys_Model}, node 2 wants to receive information from node 1. However, since nodes 3 and 4 are also transmitting in the same time slot, node 2 will experience interference from nodes 3 and 4. Similarly, nodes 4 and 5 experience interference from node 1. It is easy to see that for node 2 it is beneficial if all other nodes, except node 1, are either receiving or silent. However, such a scenario would be harmful for the rest of the network nodes, since they would not be able to receive or transmit any data. \n\nIn order to model the desired and undesired links for each node, we introduce a binary matrix $\\mathbf{ Q}$ defined as follows. The $(j,k)$ element of $\\mathbf{ Q}$ is equal to 1 if node $k$ regards the signal transmitted from node $j$ as a desired signal, and is equal to 0 if node $k$ regards the signal transmitted from node $j$ as an interference signal. Moreover, let $\\mathbf{\\bar Q}$ denote the binary complement of $\\mathbf{ Q}$, obtained by flipping its binary values. Hence, the $(j,k)$ element of $\\mathbf{\\bar Q}$ is 1 if node $k$ regards the signal transmitted from node $j$ as interference, and is 0 if node $k$ regards the signal transmitted from node $j$ as a desired signal.\n\n\n \nThe matrix $\\mathbf{Q}$, and thereby also the matrix $\\mathbf{\\bar Q}$, are set before the start of the transmission in the network. 
How a receiving node decides from which nodes it receives desired signals, and thereby from which nodes it receives interference signals, is unconstrained for the analysis in this paper.\n \n\n\n\\subsection{Channel Model}\nWe assume that each node in the considered network is impaired by unit-variance additive white Gaussian noise (AWGN), and that the links between the nodes are impaired by block fading. In addition, due to the in-band simultaneous reception-transmission, each node is also impaired by SI, which occurs due to leakage of energy from the transmitter-end into the receiver-end of the node. The SI significantly impairs the decoding of the received information signal, since the SI signal has a much higher power than the desired signal. Let the transmission in the network be carried out over $T\\to\\infty$ time slots, where a time slot is small enough such that the fading on all network links, including the SI links, can be considered constant during a time slot. Hence, the instantaneous signal-to-noise ratios (SNRs) of the links are assumed to change only from one time slot to the next and not within a time slot. Let $g_{j,k}(i)$ denote the fading coefficient of the channel between nodes $j$ and $k$ in the considered network in time slot $i$. Then $\\gamma_{j,k}(i)={|g_{j,k}(i)|^2}$ denotes the instantaneous SNR of the channel between nodes $j$ and $k$ in time slot $i$. The case $j=k$ corresponds to the SNR of the SI channel of node $k$ in time slot $i$, $\\gamma_{k,k}(i)={|g_{k,k}(i)|^2}$. Note that, since the links are impaired by fading, the values of $\\gamma_{j,k}(i)$ change from one time slot to the next. All the CSI values, $\\gamma_{j,k}(i), \\forall i,j,k$, are assumed to be aggregated at a central node. 
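To make the channel model concrete, the following sketch (the function name, the illustrative network size, and the 110 dB suppression figure are our assumptions) draws one time slot of Rayleigh-faded instantaneous SNRs $\gamma_{j,k}(i)=|g_{j,k}(i)|^2$, with the diagonal playing the role of the SI channels:

```python
import numpy as np

def snr_matrix(K, rng, si_suppression_db=110.0):
    """Draw one slot of instantaneous SNRs gamma_{j,k} = |g_{j,k}|^2.

    Off-diagonal entries are Rayleigh-faded inter-node links, so each
    gamma_{j,k} is exponentially distributed; the diagonal (j == k)
    models the residual self-interference after cancellation.
    """
    g = (rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))) / np.sqrt(2)
    G = np.abs(g) ** 2
    # Keep only the residual SI on the diagonal (e.g. ~110 dB suppression).
    G[np.diag_indices(K)] *= 10.0 ** (-si_suppression_db / 10.0)
    return G

rng = np.random.default_rng(0)
G = snr_matrix(4, rng)
```

Redrawing the matrix for every slot mirrors the block-fading assumption: constant within a slot, independent across slots.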
\n\nFinally, let $\\mathbf{G}(i)$ denote the weighted connectivity matrix of the graph of the considered network in time slot $i$, where the $(j,k)$ element of $\\mathbf{G}(i)$ is equal to the instantaneous SNR of the link $(j,k)$, $\\gamma_{j,k}(i)$. \n\n\\subsection{Rate Region}\nLet $SINR_k(i)$ denote the signal-to-interference-plus-noise ratio (SINR) at node $k$ in time slot $i$. Then, the average rate received at node $k$ over $T\\to\\infty$ time slots is given by\n\\begin{align}\\label{eq_11aint}\n\\bar R_k=\\lim_{T\\to\\infty}\\frac{1}{T}\\sum_{i=1}^T \\log_2 \\left(1+SINR_k(i) \\right).\n\\end{align}\nUsing (\\ref{eq_11aint}), $\\forall k$, we define the weighted sum-rate as\n\\begin{align}\\label{eq_11bint}\n\\Lambda= \\sum_{k=1}^K \\mu_k \\bar R_k,\n\\end{align}\nwhere the weights $\\mu_k$, with $0 \\leq \\mu_k \\leq 1$ and $\\sum_{k=1}^K \\mu_k=1$, are fixed. By maximizing (\\ref{eq_11bint}) for any fixed $\\mu_k$, $\\forall k$, we obtain one point on the boundary of the rate region. Sweeping over all possible values of $\\mu_k, \\forall k$, traces the entire boundary of the rate region of the network.\n\n\n\n\\section{Problem Formulation}\\label{PRoblem_def}\nEach node in the network can be in one of the following four states: receive ($r$), transmit ($t$), simultaneously receive and transmit ($f$), and silent ($s$). The main problem in the considered wireless network is to find the optimal state of each node in the network in each time slot, based on global knowledge of the channel fading gains, such that the weighted sum-rate of the network, given by (\\ref{eq_11bint}), is maximized. 
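As a quick numerical illustration of the objective in (\ref{eq_11aint}) and (\ref{eq_11bint}) (the helper names below are ours), the empirical average rates and the weighted sum-rate over a finite number of slots can be computed as:

```python
import numpy as np

def average_rates(sinr):
    """Finite-T version of the average rate: sinr[i, k] is SINR_k(i),
    so column k averages log2(1 + SINR_k(i)) over the T slots."""
    return np.log2(1.0 + sinr).mean(axis=0)

def weighted_sum_rate(sinr, mu):
    """Weighted sum-rate: sum_k mu_k * Rbar_k for fixed weights mu."""
    return float(mu @ average_rates(sinr))

# Toy check: two nodes with constant SINRs 1 and 3 in every slot.
sinr = np.tile([1.0, 3.0], (100, 1))
mu = np.array([0.5, 0.5])
lam = weighted_sum_rate(sinr, mu)  # 0.5*log2(2) + 0.5*log2(4) = 1.5
```

Varying `mu` over the simplex and re-evaluating traces points on the rate-region boundary, exactly as the text describes.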
To model the modes of each node in each time slot, we define the following binary variables for node $k$ in time slot $i$\n\\begin{align}\nr_k(i)&=\\left\\{\n\\begin{array}{ll}\\label{eq_1}\n1 &\\textrm{if node } k \\textrm{ receives in time slot } i\\\\\n0 & \\textrm{otherwise},\n\\end{array}\n\\right.\\\\\nt_k(i)&=\\left\\{\n\\begin{array}{ll}\\label{eq_2}\n1 &\\textrm{if node } k \\textrm{ transmits in time slot } i\\\\\n0 & \\textrm{otherwise},\n\\end{array}\n\\right.\\\\\nf_k(i)&=\\left\\{\n\\begin{array}{ll}\\label{eq_3}\n1 &\\textrm{if node } k \\textrm{ simultaneously receives and transmits in time slot } i\\\\\n0 & \\textrm{otherwise},\n\\end{array}\n\\right.\\\\\ns_k(i)&=\\left\\{\n\\begin{array}{ll}\\label{eq_41}\n1 &\\textrm{if node } k \\textrm{ is silent in time slot } i\\\\\n0 & \\textrm{otherwise}.\n\\end{array}\n\\right.\n\\end{align}\nSince node $k$ can be in one and only one mode in each time slot, i.e., it can either receive, transmit, simultaneously receive and transmit, or be silent, the following has to hold\n\\begin{align}\\label{eq_4}\nr_k(i)+t_k(i)+f_k(i)+s_k(i)=1,\\;\\forall k.\n\\end{align}\nFor the purpose of simplifying the analytical derivations, it is more convenient to represent (\\ref{eq_4}) as\n\\begin{align}\\label{eq_5}\nr_k(i)+t_k(i)+f_k(i)\\in\\{0,1\\},\\;\\forall k,\n\\end{align}\nwhere if $r_k(i)+t_k(i)+f_k(i)=0$ holds, then node $k$ is silent in time slot $i$.\n\n\n\nNow, using the binary variables defined in (\\ref{eq_1})-(\\ref{eq_41}), we define vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$ as\n\\begin{align}\n\\mathbf{r}(i)&=[r_1(i),\\; r_2(i),\\;...,\\; r_K(i)],\\label{eq_7a}\\\\\n\\mathbf{t}(i)&=[t_1(i),\\; t_2(i),\\;...,\\; t_K(i)],\\label{eq_7b}\\\\\n\\mathbf{f}(i)&=[f_1(i),\\; f_2(i),\\;...,\\; f_K(i)],\\label{eq_7c}\\\\\n\\mathbf{s}(i)&=[s_1(i),\\; s_2(i),\\;...,\\; s_K(i)].\\label{eq_7d}\n\\end{align}\nHence, the $k$-th element of the vector 
$\\mathbf{r}(i)$\/$\\mathbf{t}(i)$\/$\\mathbf{f}(i)$\/$\\mathbf{s}(i)$ is $r_k(i)$\/$t_k(i)$\/$f_k(i)$\/$s_k(i)$, and this element shows whether the $k$-th node is receiving\/transmitting\/simultaneously receiving and transmitting\/silent. Therefore, the four vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$, given by (\\ref{eq_7a})-(\\ref{eq_7d}), show which nodes in the network are receiving, transmitting, simultaneously receiving and transmitting, and silent in time slot $i$, respectively. Due to condition (\\ref{eq_4}), the elements in the vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$ are mutually dependent and have to satisfy the following condition \n\\begin{equation}\n\\mathbf{r}(i)+\\mathbf{t}(i)+\\mathbf{f}(i)+\\mathbf{s}(i)=\\mathbf{e},\n\\end{equation}\nwhere $\\mathbf{e}$ is the all-ones vector, i.e., $\\mathbf{e}=[1,1,...,1]$.\n\nThe main problem in the considered wireless network is finding\nthe optimum vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$ that maximize the boundary of the rate region of the network, which can be obtained by solving the following optimization problem\n\\begin{align}\n& {\\underset{\\mathbf{r}(i), \\mathbf{t}(i), \\mathbf{f}(i), \\mathbf{s}(i),\\forall i} { \\textrm{Maximize:} }}\\; \n \\Lambda \\nonumber\\\\\n&{\\rm{Subject\\;\\; to \\; :}} \\nonumber\\\\\n&\\qquad\\qquad{\\rm C1:}\\; t_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C2:}\\; r_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C3:}\\; f_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C4:}\\; s_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C5:}\\; t_v(i)+r_v(i)+f_v(i)+s_v(i)=1, \\; \\forall v,\n\\label{eq_max_1ab}\n\\end{align}\nwhere $\\mu_k$ are fixed. 
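The state encoding above can be sketched as follows (a minimal illustration; the helper name is ours): each node's per-slot state maps to one-hot entries of $\mathbf{r}(i)$, $\mathbf{t}(i)$, $\mathbf{f}(i)$, $\mathbf{s}(i)$, and constraint C5 is exactly the per-node one-hot condition.

```python
import numpy as np

def state_vectors(states):
    """Map per-node states, e.g. ['t', 'r', 'f', 's', 't'], to the binary
    vectors r(i), t(i), f(i), s(i); exactly one entry is 1 per node."""
    def to_vec(name):
        return np.array([int(st == name) for st in states])
    return to_vec("r"), to_vec("t"), to_vec("f"), to_vec("s")

r, t, f, s = state_vectors(["t", "r", "f", "s", "t"])
# Constraint C5 / r(i)+t(i)+f(i)+s(i) = e: each node is in exactly one state.
assert np.all(r + t + f + s == 1)
```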
The solution of this problem is given in Theorem~\\ref{theo_2} in Section~\\ref{Sec-DTDD}.\n\nBefore investigating the problem in (\\ref{eq_max_1ab}), we define two auxiliary matrices that will help us derive the main result. Specifically, using the matrices $\\mathbf{G}(i)$, $\\mathbf{Q}$, and $\\mathbf{\\bar Q}$ defined in Sec.~\\ref{Sec-Sys}, we define two auxiliary matrices $\\mathbf{D}(i)$ and $\\mathbf{I}(i)$ as\n\\begin{align}\n\\mathbf{D}(i) &= \\mathbf{G}(i)\\circ \\mathbf{Q},\\label{eq_10a}\\\\\n\\mathbf{I}(i) &= \\mathbf{G}(i)\\circ \\mathbf{\\bar Q}\\label{eq_10b},\n\\end{align}\nwhere $\\circ$ denotes the Hadamard product of matrices, i.e., the element-wise multiplication of two matrices. Hence, the elements of the matrix $\\mathbf{D}(i)$ are the instantaneous SNRs of the desired links, which carry desired information. Conversely, the elements of the matrix $\\mathbf{I}(i)$ are the instantaneous SNRs of the interference links, which carry undesired information. Let $\\mathbf{d}_k^{\\intercal}(i)$ and $\\mathbf{i}_k^{\\intercal}(i)$ denote the $k$-th column vectors of the matrices $\\mathbf{D}(i) $ and $\\mathbf{I}(i)$, respectively. The vectors $\\mathbf{d}_k^{\\intercal}(i)$ and $\\mathbf{i}_k^{\\intercal}(i)$ contain the instantaneous SNRs of the desired and interference links of node $k$ in time slot $i$, respectively. For example, if the third and fourth elements in $\\mathbf{d}_k^{\\intercal}(i)$ are non-zero and thereby equal to $\\gamma_{3,k}(i)$ and $\\gamma_{4,k}(i)$, respectively, then the $k$-th node receives desired signals from the third and the fourth nodes in the network via channels with instantaneous SNRs $\\gamma_{3,k}(i)$ and $\\gamma_{4,k}(i)$, respectively. 
Similarly, if the fifth, sixth, and $k$-th elements in $\\mathbf{i}_k^{\\intercal}(i)$ are non-zero and thereby equal to $\\gamma_{5,k}(i)$, $\\gamma_{6,k}(i)$, and $\\gamma_{k,k}(i)$, respectively, then the $k$-th node receives interference signals from the fifth and the sixth nodes in the network via channels with instantaneous SNRs $\\gamma_{5,k}(i)$ and $\\gamma_{6,k}(i)$, respectively, and the $k$-th node suffers from SI with instantaneous SNR $\\gamma_{k,k}(i)$.\n\n\\begin{remark}\nA central processor is assumed to collect all instantaneous SNRs, $\\gamma_{j,k}(i)$, and thereby construct $\\mathbf{G}(i)$ at the start of time slot $i$. This central unit then decides the optimal values of $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$, defined in (\\ref{eq_7a})-(\\ref{eq_7d}), based on the proposed centralized D-TDD scheme, and broadcasts these values to the rest of the nodes. Once the optimal values of $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$ are known at all nodes, the transmissions, receptions, simultaneous transmissions and receptions, and silences of the nodes can start in time slot $i$. Obviously, acquiring global CSI at a central processor is infeasible in practice, as it would incur a huge overhead and, by the time it is used, the CSI would likely be outdated. However, this assumption allows us to compute an upper bound on the network performance, which serves as an upper bound on the performance of any D-TDD scheme.\n\\end{remark}\n\n\\begin{remark}\nNote that the optimal state of the nodes of the network (i.e., receive, transmit, simultaneously receive and transmit, or silent) in each time slot can also be obtained by brute-force search. 
While this is possible for a small network, an analytical solution of the problem provides deeper insight into the corresponding problem.\n\\end{remark}\n\n\\begin{remark}\nIn this paper, we only optimize the reception-transmission schedule of the nodes, and not the transmission coefficients of the nodes, which would lead to interference alignment \\cite{4533226}. Combining adaptive reception-transmission with interference alignment is left for future work.\n\\end{remark} \n\n\n\n\\section{The Optimal Centralized D-TDD Scheme}\\label{Sec-DTDD}\nUsing the notation in Sections~\\ref{Sec-Sys} and~\\ref{PRoblem_def}, we state a theorem that characterizes the received rate at node $k$ in time slot $i$.\n\\begin{theorem}\\label{theo_1}\nAssuming that all nodes transmit with power $P$, the received rate at node $k$ in time slot $i$ is given by\n\\begin{align}\\label{Eqe_SINR_3as}\nR_k(i)=\\log_2\\left(1+ [r_k(i)+f_k(i)] \\frac{P\\, \\left [ \\mathbf{t}(i)+\\mathbf{f}(i) \\right] \\mathbf{d}_k^{\\intercal}(i)}{1+ P\\, \\left [\\mathbf{t}(i)+\\mathbf{f}(i) \\right] \\mathbf{i}_k^{\\intercal}(i)}\\right),\n\\end{align} \nwhich is achieved by a multiple-access channel scheme in which the desired nodes of node $k$ act as transmitters and node $k$ acts as the receiver. To this end,\n node $k$ applies successive interference cancellation to the codewords from the desired nodes, whose rates are appropriately adjusted in order for (\\ref{Eqe_SINR_3as}) to hold.\n\\end{theorem}\n\\begin{IEEEproof}\nPlease refer to Appendix~\\ref{app_PR_Local} for the proof.\n\\end{IEEEproof}\n\nIn (\\ref{Eqe_SINR_3as}), we have obtained a simple and compact expression for the received rate at each node of the network in each time slot. 
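The rate expression of Theorem 1 lends itself to a direct numerical sketch (function and variable names are ours): given the SNR matrix $\mathbf{G}(i)$, the desired-link matrix $\mathbf{Q}$, and the state vectors, the per-node rates of one slot follow from the two Hadamard products in (\ref{eq_10a})-(\ref{eq_10b}).

```python
import numpy as np

def per_node_rates(G, Q, r, t, f, P=1.0):
    """Per-slot rates of Theorem 1: node k gets a non-zero rate only if it
    is receiving (r_k + f_k = 1); the radiating nodes are t + f."""
    D = G * Q          # Hadamard product: SNRs of the desired links
    I = G * (1 - Q)    # SNRs of the interference (incl. SI) links
    tx = t + f
    return (r + f) * np.log2(1.0 + P * (tx @ D) / (1.0 + P * (tx @ I)))

# Toy slot: node 0 transmits to node 1 over a link with SNR 3, no other
# links; node 1 then receives log2(1 + 3) = 2 bits/s/Hz.
G = np.array([[0.0, 3.0], [0.0, 0.0]])
Q = np.array([[0, 1], [0, 0]])
r, t, f = np.array([0, 1]), np.array([1, 0]), np.array([0, 0])
rates = per_node_rates(G, Q, r, t, f)
```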
As can be seen from (\\ref{Eqe_SINR_3as}), the rate depends on the fading channel gains of the desired links via $\\mathbf{d}_k^{\\intercal}(i)$ and of the interference links via $\\mathbf{i}_k^{\\intercal}(i)$, as well as on the state selection vectors of the network via $\\mathbf{t}(i)$, $\\mathbf{r}(i)$, and $\\mathbf{f}(i)$. \n\nUsing the received rate at each node of the network, defined by (\\ref{Eqe_SINR_3as}), we obtain the average received rate at node $k$ as\n\\begin{align}\\label{eq_11a}\n\\bar R_k=\\lim_{T\\to\\infty}\\frac{1}{T}\\sum_{i=1}^T R_k(i), \\forall k.\n\\end{align}\n\n Inserting (\\ref{Eqe_SINR_3as}) into (\\ref{eq_11a}), and then (\\ref{eq_11a}) into (\\ref{eq_11bint}), we obtain the weighted sum-rate of the network as \n\n\\begin{align}\\label{eq_11c}\n\\Lambda= \\lim_{T\\to\\infty}\\frac{1}{T} \\sum_{i=1}^T \\sum_{k=1}^K \\mu_k \\log_2 \\left(1+\\left [r_k(i)+f_k(i) \\right]\\frac{P\\, \\left [ \\mathbf{t}(i)+\\mathbf{f}(i) \\right] \\mathbf{d}_k^{\\intercal}(i)}{1+ P\\, \\left [\\mathbf{t}(i)+\\mathbf{f}(i) \\right] \\mathbf{i}_k^{\\intercal}(i)} \\right).\n\\end{align}\nNow, note that the only variables that can be manipulated in (\\ref{eq_11c}) in each time slot are the values of the elements in the vectors $\\mathbf{t}(i)$, $\\mathbf{r}(i)$, and $\\mathbf{f}(i)$, and the values of $\\mu_k, \\forall k$. In the following, we use $\\mathbf{t}(i)$, $\\mathbf{r}(i)$, and $\\mathbf{f}(i)$ to maximize the boundary of the rate region for given $\\mu_k, \\forall k$. 
In addition, later on, in Section \\ref{Sec-QoS}, we use the constants $\\mu_k, \\forall k$, to establish a scheme that achieves fairness between the nodes of the network.\n\n\nThe optimum vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$ that maximize the boundary of the rate region of the network can be obtained by solving the following optimization problem\n\\begin{align}\n& {\\underset{\\mathbf{r}(i), \\mathbf{t}(i), \\mathbf{f}(i), \\mathbf{s}(i),\\forall i} { \\textrm{Maximize:} }}\\; \n \\lim_{T\\to\\infty}\\frac{1}{T} \\sum_{i=1}^T \\sum_{k=1}^K \\mu_k \\log_2 \\left(1+\\left [r_k(i)+f_k(i) \\right]\\frac{P\\, \\left [ \\mathbf{t}(i)+\\mathbf{f}(i) \\right] \\mathbf{d}_k^{\\intercal}(i)}{1+ P\\, \\left [\\mathbf{t}(i)+\\mathbf{f}(i) \\right] \\mathbf{i}_k^{\\intercal}(i)} \\right) \\nonumber\\\\\n&{\\rm{Subject\\;\\; to \\; :}} \\nonumber\\\\\n&\\qquad\\qquad{\\rm C1:}\\; t_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C2:}\\; r_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C3:}\\; f_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C4:}\\; s_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C5:}\\; t_v(i)+r_v(i)+f_v(i)+s_v(i)=1, \\; \\forall v,\n\\label{eq_max_1a}\n\\end{align}\nwhere $\\mu_k$ are fixed. 
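As noted in the earlier remark, for a small network the per-slot objective of (\ref{eq_max_1a}) can be checked by brute force over the $4^K$ state assignments. A sketch of such a sanity check (exponential in $K$, so only usable for tiny networks; names are ours):

```python
import numpy as np
from itertools import product

def brute_force_slot(G, Q, mu, P=1.0):
    """Exhaustively search the 4^K per-slot state assignments; returns
    the best assignment and the weighted rate it achieves."""
    K = G.shape[0]
    D, I = G * Q, G * (1 - Q)   # desired / interference SNRs
    best_states, best_val = None, -1.0
    for states in product("rtfs", repeat=K):
        r = np.array([st == "r" for st in states], dtype=float)
        t = np.array([st == "t" for st in states], dtype=float)
        f = np.array([st == "f" for st in states], dtype=float)
        tx = t + f
        rates = (r + f) * np.log2(1.0 + P * (tx @ D) / (1.0 + P * (tx @ I)))
        val = float(mu @ rates)
        if val > best_val:
            best_states, best_val = states, val
    return best_states, best_val

# Two nodes, one desired link 0 -> 1 with SNR 3: the search picks node 0
# transmitting and node 1 receiving.
G = np.array([[0.0, 3.0], [0.0, 0.0]])
Q = np.array([[0, 1], [0, 0]])
states, val = brute_force_slot(G, Q, mu=np.array([0.5, 0.5]))
```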
The solution of this problem is given in the following theorem.\n\\begin{theorem}\\label{theo_2}\n The optimal values of the vectors $\\mathbf{t}(i)$, $\\mathbf{r}(i)$, $\\mathbf{f}(i)$, and $\\mathbf{s}(i)$, which maximize the boundary of the rate region of the network, found as the solution of (\\ref{eq_max_1a}), are given by Algorithm~\\ref{Lyp_algorithm}, which is explained in detail in the following.\n\n\n\n\\begin{algorithm}\n\\caption{Finding the optimal vector, $\\mathbf{t}(i)$}\n\\label{Lyp_algorithm}\n\\begin{algorithmic}[1]\n\\Procedure{ $\\forall \\;i \\in \\{1,2,...,T\\}$}{}\\label{lyp_proc}\n\\State Initiate $n=0$, and set $t_x(i)_0$, $w_x(i)_0$ and $l_x(i)_0$ randomly, $x \\in \\{1,2,...,K\\}$.\n\\State \\textrm{****** Iterative-loop starts *****}\n\\While{exit-loop-flag $==$ FALSE}\n\\If{ (\\ref{eq_outer_T19}) holds} \\label{cvx_found}\n\\State exit-loop-flag $\\gets$ TRUE\n\\Else\n\\State \\label{Lable_1} n++\n\\State compute $t_x(i)_n$ with (\\ref{eq_scheme_D-TDD})\n\\State compute $w_x(i)_n$ with (\\ref{eq_max_app_T4})\n\\State compute $l_x(i)_n$ with (\\ref{eq_max_app_T7})\n\\EndIf\n\\EndWhile\n\\State \\textrm{****** Iterative-loop ends *****}\n\\If{ $t_x(i)=0$ and $t_k(i)=1, \\forall k$, where the $(x,k)$ element of $\\mathbf{ Q}$ is one }\n\\State $r_x(i)=1$\n\\EndIf\n\\If{ $t_x(i)=0$ and $t_k(i)=0, \\forall k$, where the $(x,k)$ element of $\\mathbf{ Q}$ is one }\n\\State $s_x(i)=1$\n\\EndIf\n\\If{ $t_x(i)=1$ and $t_k(i)=1, \\forall k$, where the $(x,k)$ element of $\\mathbf{ Q}$ is one }\n\\State $f_x(i)=1$ and $t_x(i)=0$\n\\EndIf\n\\If{ $t_x(i)=1$ and $t_k(i)=0, \\forall k$, where the $(x,k)$ element of $\\mathbf{ Q}$ is one }\n\\State $t_x(i)$ remains unchanged\n\\EndIf\n\n\\State \\Return $\\mathbf{t}(i)$, $\\mathbf{r}(i)$, $\\mathbf{f}(i)$, $\\mathbf{s}(i)$ \n\\EndProcedure\n\\end{algorithmic}\n\\end{algorithm}\n\n \n\nAlgorithm~\\ref{Lyp_algorithm} is an iterative algorithm. 
Each iteration has its own index, denoted by $n$. In each iteration, we compute the vector $\\mathbf{t}(i)$ in addition to two auxiliary vectors $\\mathbf{w}(i)=[w_1(i),\\; w_2(i),\\;...,\\; w_K(i)]$ and $\\mathbf{l}(i)=[l_1(i),\\; l_2(i),\\;...,\\; l_K(i)]$. Since the computation process is iterative, we add the index $n$ to denote the $n$-th iteration. Hence, the variables $t_x(i)$, $w_x(i)$, and $l_x(i)$ in iteration $n$ are denoted by $t_x(i)_n$, $w_x(i)_n$, and $l_x(i)_n$, respectively. In each iteration $n$, the variable $t_x(i)_{n}$, for $x \\in \\{1,2,...,K\\}$, is calculated as\n\\begin{align}\\label{eq_scheme_D-TDD}\n\\bullet \\; t_x(i)_{n}=0 & \\; \\textrm{ if}\\;\\; \\sum_{k=1}^K \\frac{P \\mu_k l_k(i)_{n-1}}{\\ln{2}} \\hspace{-1.5mm} \\left[ d_{x,k}(i) \\hspace{-1.0mm} \\left (\\hspace{-1.0mm}1\\hspace{-1.0mm}-\\hspace{-1.0mm}\\frac{w_{k}(i)_{n-1}}{\\sqrt{P\\sum_{v=1,v \\ne x}^K {t_v(i)_{n-1} d_{v,k}(i)} }} \\right)+i_{x,k}(i) \\right] \\geq 0, \\nonumber\\\\\n\\bullet \\; t_x(i)_{n}=1 & \\; \\textrm{ otherwise}. \n\\end{align}\nIn (\\ref{eq_scheme_D-TDD}), $d_{v,k}(i)$ and $i_{v,k}(i)$ are the $(v,k)$ elements of the matrices $\\mathbf{D}(i)$ and $\\mathbf{I}(i)$, respectively. The auxiliary variables $l_k(i)_n$ and $w_{k}(i)_n$ are treated as constants in this step; their update rules are given in the following. 
\n\n\n\nIn iteration $n$, the variable $w_{x}(i)_n$, for $x \\in \\{1,2,...,K\\}$, is calculated as\n \\begin{align}\nw_{x}(i)_n=&\\frac{{A}_{x}(i)+{B}_{x}(i)}{\\sqrt{{A}_{x}(i)}},\n\\label{eq_max_app_T4}\n\\end{align} \nwhere ${A}_{x}(i)$ and ${B}_{x}(i)$ are defined as\n\\begin{align}\n{A}_{x}(i)&= P \\mathbf{t}(i)_n \\mathbf{d}_x^{\\intercal}(i),\\label{eq_T7a}\\\\\n{B}_{x}(i)&= 1+P \\mathbf{t}(i)_n \\mathbf{i}_x^{\\intercal}(i).\\label{eq_T7b}\n\\end{align}\n\nIn iteration $n$, the variable $l_x(i)_n$, for $x \\in \\{1,2,...,K\\}$, is calculated as\n \\begin{align}\nl_x(i)_n=&\\frac{1}{\\left( |\\sqrt{{A}_{x}(i)}-w_{x}(i)_n|^2+{B}_{x}(i) \\right)},\n\\label{eq_max_app_T7}\n\\end{align} \nwhere $w_{x}(i)_n$ is treated as a constant in this step, and ${A}_{x}(i)$ and ${B}_{x}(i)$ are given by (\\ref{eq_T7a}) and (\\ref{eq_T7b}), respectively.\n\n\nThe process of updating the variables $t_x(i)_n$, $w_x(i)_n$, and $l_x(i)_n$ for each time slot $i$ is repeated until convergence, which is checked via the criterion\n\\begin{align}\\label{eq_outer_T19}\n|\\Lambda_n - \\Lambda_{n-1}| < \\epsilon,\n\\end{align}\nwhere $\\Lambda_n=\\sum_{k=1}^K \\mu_k \\log_2 \\left(1+\\frac{P \\mathbf{t}(i)_n \\mathbf{d}_k^{\\intercal}(i)}{1+P \\mathbf{t}(i)_n \\mathbf{i}_k^{\\intercal}(i)} \\right)$ and $\\epsilon>0$ is a relatively small constant, such as $\\epsilon=10^{-6}$.\n\nOnce $t_x(i), \\forall x$, are decided, the other variables, $r_x(i)$, $f_x(i)$, and $s_x(i)$, can be determined as follows. If $t_x(i)=0$, $t_k(i)=1$, and the $(x,k)$ element of $\\mathbf{ Q}$ is equal to one, then $r_x(i)=1$. If $t_x(i)=0$, $t_k(i)=0$, and the $(x,k)$ element of $\\mathbf{ Q}$ is equal to one, then $s_x(i)=1$. If $t_x(i)=1$, $t_k(i)=1$, and the $(x,k)$ element of $\\mathbf{ Q}$ is equal to one, then $f_x(i)=1$ and we set $t_x(i)=0$. Finally, if $t_x(i)=1$, $t_k(i)=0$, and the $(x,k)$ element of $\\mathbf{ Q}$ is equal to one, then $t_x(i)$ remains unchanged. 
\n\n\n\\end{theorem}\n\\begin{IEEEproof}\nPlease refer to Appendix~\\ref{app_PR_Outage_L} for the proof.\n\\end{IEEEproof}\n\n\\subsection{Special Case of the Proposed Centralized D-TDD Scheme for HD Nodes}\n\n\nAs a special case of the centralized D-TDD scheme for a network comprised of FD nodes proposed in Theorem~\\ref{theo_2}, we investigate the optimal centralized D-TDD scheme that maximizes the rate region of a network comprised of HD nodes.\n\nFor the case of a network comprised of HD nodes, we again use the vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, and $\\mathbf{s}(i)$, and set the vector $\\mathbf{f}(i)$ to all zeros due to the HD mode. The optimum vectors $\\mathbf{r}(i)$, $\\mathbf{t}(i)$, and $\\mathbf{s}(i)$ that maximize the boundary of the rate region of a network comprised of HD nodes can be obtained by solving the following optimization problem\n\\begin{align}\n& {\\underset{\\mathbf{r}(i), \\mathbf{t}(i), \\mathbf{s}(i),\\forall i} { \\textrm{Maximize:} }}\\; \n \\lim_{T\\to\\infty}\\frac{1}{T} \\sum_{i=1}^T \\sum_{k=1}^K \\mu_k \\log_2 \\left(1+r_k(i)\\frac{P\\, \\mathbf{t}(i) \\mathbf{d}_k^{\\intercal}(i)}{1+ P\\, \\mathbf{t}(i) \\mathbf{i}_k^{\\intercal}(i)} \\right) \\nonumber\\\\\n&{\\rm{Subject\\;\\; to \\; :}} \\nonumber\\\\\n&\\qquad\\qquad{\\rm C1:}\\; t_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C2:}\\; r_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C3:}\\; s_v(i)\\in\\{0,1\\}, \\; \\forall v\\nonumber\\\\\n&\\qquad\\qquad{\\rm C4:}\\; t_v(i)+r_v(i)+s_v(i)=1, \\; \\forall v,\n\\label{eq_max_1a3}\n\\end{align}\nwhere $\\mu_k, \\forall k$, are fixed. 
The solution of this problem is given in the following theorem.\n\\begin{theorem}\\label{theo_3}\nThe optimal values of the vectors $\\mathbf{t}(i)$, $\\mathbf{r}(i)$, and $\\mathbf{s}(i)$ which maximize the boundary of the rate region of the considered network comprised of HD nodes, found as the solution of (\\ref{eq_max_1a3}), are also given by Algorithm~\\ref{Lyp_algorithm}, where\n lines 17-18 in Algorithm~\\ref{Lyp_algorithm} need to be removed and where $\\gamma_{j,k}(i)$ is set to $\\gamma_{j,k}(i)=\\infty, \\forall j=k$ and $\\forall i$, in the weighted connectivity matrix $\\mathbf{G}(i)$. \n\n\n\\end{theorem}\n\\begin{IEEEproof}\nPlease refer to Appendix~\\ref{app_PR_Outage_L3} for the proof.\n\\end{IEEEproof}\n\n\n\n\\begin{remark}\nFor the case when the network is comprised of both FD and HD nodes, Theorem~\\ref{theo_3} needs to be applied only to the HD nodes in order to obtain the optimal centralized D-TDD scheme for this case.\n\\end{remark}\n\n\n\\section{Rate Allocation Fairness}\\label{Sec-QoS}\nThe nodes in a network have different rate demands based on the applications they run. In this section, we propose a scheme that allocates resources to the network nodes based on their rate demands. To this end, in the following, we assume that the central processor has access to the rate demands of the network nodes.\n\nRate allocation can be done using a prioritized rate allocation policy, where some nodes have a higher priority than others and should therefore be served preferentially. For example, some nodes may pay more to the network operator than other nodes in exchange for higher data rates. In this policy, nodes with lower priority are served only when the higher-priority nodes are served acceptably. 
On the other hand, nodes that have the same priority level should be served by a fair rate allocation scheme that allocates resources proportionally to the nodes' needs.\n\nIn the optimal centralized D-TDD scheme given in Theorem~\\ref{theo_2}, the average received rate of node $k$ can be controlled via the constant $0\\leq \\mu_k\\leq 1, \\forall k$. By varying $\\mu_k$ from zero to one, the average received rate of node $k$ can be increased from zero to the maximum possible rate. Thereby, by optimizing the values of $\\mu_k, \\forall k$, we can establish a rate allocation scheme among the nodes which allocates resources based on the rate demands of the nodes. In the following, we propose a practical centralized D-TDD scheme for rate allocation in real time by adjusting the values of $\\boldsymbol{\\mu}=[\\mu_1,\\; \\mu_2,\\;...,\\; \\mu_K]$.\n\n\\subsection{Proposed Rate Allocation Scheme For a Given Fairness}\\label{Sec-Fair-pri}\n\nThe average received rate at node $k$ obtained using the proposed optimal centralized D-TDD scheme is given by\n\\begin{align}\\label{eq_max_Fairness_12a}\n\\bar R_k(\\boldsymbol{\\mu})=\\lim_{T\\to\\infty} \\frac{1}{T}\\sum_{i=1}^T R_k^*(i,\\boldsymbol{\\mu}),\n\\end{align}\nwhere $R_k^*(i,\\boldsymbol{\\mu})$ is the maximum received rate at node $k$ in time slot $i$, obtained by Algorithm~\\ref{Lyp_algorithm} for fixed $\\boldsymbol{\\mu}$. \n\nLet $\\boldsymbol{\\tau} =[\\tau_1,\\; \\tau_2,\\;...,\\; \\tau_K]$, where $\\tau_k \\geq 0$, be the vector of rate demands of the nodes, and let $\\boldsymbol{\\alpha}=[\\alpha_1,\\; \\alpha_2,\\;...,\\; \\alpha_K]$ be the priority level vector of the nodes, where $0\\leq \\alpha_k\\leq 1$ and $\\sum_{k=1}^K \\alpha_k=1$. 
The priority level $\\alpha_k$ determines the importance of node $k$, such that the higher the value of $\\alpha_k$, the higher the priority of the $k$-th node.\n \n\nIn order to achieve rate allocations according to the rate demands in $\\boldsymbol{\\tau}$ and the priority levels in $\\boldsymbol{\\alpha}$, we aim to minimize the weighted squared difference between the average received rate $\\bar R_k(\\boldsymbol{\\mu})$ and the rate demand $\\tau_k, \\forall k$, i.e., to make the weighted sum squared error, $\\sum_{k=1}^K \\alpha_k \\left ( \\bar R_k(\\boldsymbol{\\mu}) -\\tau_k \\right )^2$, as small as possible. Note that there may not be enough network resources to make the weighted sum squared error equal to zero. However, the higher $\\alpha_k$ is, the more network resources are allocated to node $k$ in order to increase its rate and bring $\\bar R_k(\\boldsymbol{\\mu})$ close to $\\tau_k$.\n\nUsing $\\boldsymbol{\\tau}$ and $\\boldsymbol{\\alpha}$, we formulate the following rate-allocation problem \n\\begin{align}\n& {\\underset{\\boldsymbol{\\mu}} { \\textrm{Minimize:} }}\\; \n \\sum_{k=1}^K \\alpha_k\\left ( \\bar R_k(\\boldsymbol{\\mu}) -\\tau_k \\right )^2 \\nonumber\\\\\n&{\\rm{Subject\\;\\; to \\; :}} \\nonumber\\\\\n&\\qquad\\qquad{\\rm C1:}\\; 0\\leq \\mu_k\\leq 1, \\; \\forall k \\nonumber\\\\\n&\\qquad\\qquad{\\rm C2:}\\; \\sum_{k=1}^K \\mu_k=1. \n\\label{eq_max_Fairness_1a}\n\\end{align}\nThe optimization problem in (\\ref{eq_max_Fairness_1a}) belongs to a family of well-investigated optimization problems, see \\cite{doi:10.1137\/1.9781611970920}, which do not have closed-form solutions. 
Hence, we propose the following heuristic solution of (\\ref{eq_max_Fairness_1a}), obtained by setting $\\boldsymbol{\\mu}$ to $\\boldsymbol{\\mu}=\\boldsymbol{\\mu}^e(i)$, where each element of $\\boldsymbol{\\mu}^e(i)$ is updated as\n\\begin{align}\\label{eq_F1_mu_priori}\n&\\mu_k^e(i+1)=\\mu_k^e(i) +\\delta_k(i) \\alpha_k \\left[ \\tau_k - \\bar R_k^e(i,\\boldsymbol{\\mu}^e(i)) \\right],\n\\end{align}\nwhere $\\delta_k(i)$, $\\forall k$, can be some properly chosen monotonically decaying function of $i$ with $\\delta_k(1)<1$, such as $\\delta_k(i)=\\frac{1}{2i}$. In this manner, the weight of a node whose estimated rate falls short of its demand is increased. Note that, after updating the values $\\mu_k^e(i+1), \\forall k$, we normalize them so that $0\\leq \\mu_k^e(i+1)\\leq 1$ and $\\sum_{k=1}^K \\mu_k^e(i+1)=1$ hold. To this end, we apply the following normalization\n\n\\begin{align}\\label{eq_F1_norm}\n&\\mu_k^e(i+1)= \\frac{\\mu_k^e(i+1)}{\\sum_{v=1}^K \\mu_v^e(i+1)}, \\forall k. \n\\end{align}\n\nIn (\\ref{eq_F1_mu_priori}), $ \\bar R_k^e(i,\\boldsymbol{\\mu}^e(i))$ is the real-time estimate of $\\bar R_k(\\boldsymbol{\\mu})$, which is given by\n\\begin{align}\\label{eq_F1_R1}\n\\bar R_k^e(i,\\boldsymbol{\\mu}^e(i))&=\\frac{i-1}{i} \\bar R_k^e(i-1,\\boldsymbol{\\mu}^e(i-1)) +\\frac{1}{i} R_k^*(i,\\boldsymbol{\\mu}^e(i)).\n\\end{align}\n\n\n\\section{Simulation and Numerical Results}\\label{Sec-Num}\nIn this section, we provide numerical results in which we compare the proposed optimal centralized D-TDD scheme with benchmark centralized D-TDD schemes found in the literature. All of the presented results are generated for Rayleigh fading by numerical evaluation of the derived results and are confirmed by Monte Carlo simulations.\n\n\n\\textit{The Network:} In all numerical examples, we use a network covering an area of $\\rho \\times \\rho$ $m^2$. In this area, we place 50 pairs of nodes randomly as follows. 
We randomly place one node of each pair in the considered area, and the paired node is then placed by choosing an angle uniformly at random from $0^\\circ$ to $360^\\circ$ and a distance uniformly at random from 10 m to 100 m from the first node. For a given pair of nodes, we assume that only the link between the paired nodes is desired and all other links act as interference links. The channel gain corresponding to each link is assumed to undergo Rayleigh fading, where the mean of $\\gamma_{j,k}(i)$ is calculated using the standard path-loss model \\cite{6847175} as\n\\begin{eqnarray}\nE\\{\\gamma_{j,k}(i)\\} = \\left(\\frac{c}{{4\\pi {f_c}}}\\right)^2\\chi_{jk}^{ - \\beta },\n\\end{eqnarray}\nwhere $c$ is the speed of light, $f_c=1.9$ GHz is the carrier frequency, $\\chi_{jk}$ is the distance between nodes $j$ and $k$, and $\\beta=3.6$ is the path-loss exponent. In addition, the average SI suppression varies from 110 dB to 130 dB.\n\n\n\n\n\\textit{Benchmark Scheme 1 (Conventional scheme):} This benchmark is the TDD scheme used in current wireless networks. The network nodes are divided into two groups, denoted by A and B. In odd time slots, nodes in group A send information to the desired nodes in group B. Then, in the even time slots, nodes in group B send information to the desired nodes in group A. With this approach, there is no interference between the nodes within group A or within group B, since the transmissions are synchronized. However, there is interference from the nodes in group A to the nodes in group B, and vice versa.\n \n\\textit{Benchmark Scheme 2 (Interference spins scheme):} The interference spins scheme, proposed in \\cite{7000558}, is considered as the second benchmark scheme.\n\n\\textit{Benchmark Scheme 3 (Conventional FD scheme):} This benchmark is the TDD scheme used in a wireless network with FD nodes. The network nodes are divided into two groups, denoted by A and B. 
In all time slots, nodes in group A send information to the desired nodes in group B, while nodes in group B simultaneously send information to the desired nodes in group A. The SI suppression is set to 110 dB.\n\n\\subsection{Numerical Results}\n\nIn Fig.~\\ref{Rate_Power}, we show the sum-rates achieved by the proposed scheme, for different SI suppression levels, and by the benchmark schemes as a function of the transmit power at the nodes, $P$. This example is for an area of 1000$\\times$1000 $m^2$, where $\\mu_k$ is fixed to $\\mu_k=\\frac{1}{k}, \\forall k$. As can be seen from Fig.~\\ref{Rate_Power}, in the low transmit power region, where noise is dominant, all schemes achieve a similar sum-rate. However, increasing the transmit power causes the overall interference to increase, in which case the optimal centralized D-TDD scheme achieves a large gain over the considered benchmark schemes. The benchmark schemes show limited performance since, in the high power region, they cannot avoid the interference as effectively as the proposed scheme.\n\n\n\\begin{figure}[t]\n\\vspace*{-2mm}\n\\centering\\includegraphics[width=6.7in]{Rate_Power.eps}\n\\caption{Sum-rate vs. transmit power $P$ of the proposed schemes and the benchmark schemes for $\\rho$=1000 $m$.}\n\\label{Rate_Power}\n\\end{figure}\n\n\nIn Fig.~\\ref{Rate_Dim}, the sum-rate gain with respect to (w.r.t.) Benchmark Scheme 1 (BS 1) is presented for the different schemes as a function of the dimension of the considered area, $\\rho$. We assume that the transmit power is fixed to $P$=20 dBm, and $\\mu_k=\\frac{1}{k}, \\forall k$. Since the nodes are placed randomly in an area of $\\rho \\times \\rho$ $m^2$, for large $\\rho$ the links become more separated and the interference has a weaker effect. As a result, all of the schemes achieve similar sum-rates. 
However, decreasing the dimension $\\rho$ causes the overall interference to increase, which leads the optimal centralized D-TDD scheme to achieve a considerable gain over the benchmark schemes.\n\n\n\\begin{figure}[t]\n\\vspace*{-2mm}\n\\centering\\includegraphics[width=6.7in]{Rate_Dim.eps}\n\\caption{Sum-rate vs. dimension $\\rho$ of the proposed schemes and the benchmark schemes for $P$=20 dBm.}\n\\label{Rate_Dim}\n\\end{figure}\n\n\n\nIn Fig.~\\ref{Rate_region}, we show the rate region achieved using the optimal centralized D-TDD scheme for two different groups of nodes, where all the nodes that belong to each group have the same value of $\\mu$. Let $\\mu_1$ be assigned to the first group and $\\mu_2$ to the second group of nodes. By varying the value of $\\mu_1$ from zero to one, setting $\\mu_2=1-\\mu_1$, and aggregating the achieved rates within each group, we obtain the rate region of the two groups. In this example, the transmit power is fixed to $P$=20 dBm and the area dimension is 1000$\\times$1000 $m^2$. As shown in Fig.~\\ref{Rate_region}, the proposed scheme with HD nodes achieves more than 15$\\%$ improvement in the rate region area compared to the benchmark schemes. More importantly, the proposed scheme for FD nodes with SI suppression of 110 dB performs approximately four times better than Benchmark Scheme 3, and also outperforms the other benchmark schemes, which is a significant gain and a promising result for the use of FD nodes.\n\n\\begin{figure}[t]\n\\vspace*{-2mm}\n\\centering\\includegraphics[width=6.7in]{Rate_region.eps}\n\\caption{Sum-rate 1 vs. sum-rate 2 of the proposed schemes and the benchmark schemes for $P$=20 dBm, $\\rho$=1000 $m$.}\n\\label{Rate_region}\n\\end{figure}\n\n\n\nIn Fig.~\\ref{Time_Node}, we present the total time required by the optimal algorithm in Algorithm~\\ref{Lyp_algorithm} to obtain the solution as a function of the number of nodes in the network. 
For comparison purposes, we also present the total time required by a general brute-force search algorithm that searches over all possible solutions in order to find the optimal one. To this end, we set the power at the nodes to $P=20$ dBm, and the area to 1000$\\times$1000 $m^2$. As can be seen from Fig.~\\ref{Time_Node}, the computation time of the brute-force search algorithm increases exponentially with the number of nodes, whereas the computation time of the proposed algorithm increases linearly.\n\n\n\n\\begin{figure}[t]\n\\vspace*{-2mm}\n\\centering\\includegraphics[width=6.7in]{Time_Node.eps}\n\\caption{Computation time vs. number of nodes of the proposed algorithm and the brute-force search algorithm for $P$=20 dBm and $\\rho$=1000 $m$.}\n\\label{Time_Node}\n\\end{figure}\n\n\nIn Fig.~\\ref{Fainess_Node}, we illustrate the rates achieved by the proposed scheme with the rate allocation scheme for $N=10$, as a function of the node index. We assume that the transmit power is fixed to $P=20$ dBm, the dimension is $\\rho$=1000 $m$, and the SI suppression is 110 dB. We investigate two cases in which the users have the same priority, i.e., $\\alpha_k=0.1, \\forall k$. However, in one case the data demand of the users (right plot) is set to $\\tau_k=\\frac{k}{2}, \\forall k$, and in the other case the data demand of the users (left plot) is set to $\\tau_k=k, \\forall k$. As can be seen in the right plot of Fig.~\\ref{Fainess_Node}, the rate allocation scheme is able to successfully meet the rates demanded by the users. However, in the case of the left plot of Fig.~\\ref{Fainess_Node}, the rate allocation scheme was not able to meet the rate demands of the nodes due to capacity limits. Nevertheless, it successfully kept the average received rates as close as possible to the demanded rates.\n\n\\begin{figure}[t]\n\\vspace*{-2mm}\n\\centering\\includegraphics[width=6.7in,height=4in]{Fairness.eps}\n\\caption{Rate vs. 
node index of the proposed scheme with the rate allocation scheme, with under-capacity data demand (right) and over-capacity data demand (left), for $P$=20 dBm, SI suppression of 110 dB, and $\\rho$=1000 $m$.}\n\\label{Fainess_Node}\n\\end{figure}\n\\section{Conclusion}\\label{Sec-Conc}\nIn this paper, we devised the optimal centralized D-TDD scheme for a wireless network comprised of $K$ FD or HD nodes, which maximizes the rate region of the network. The proposed centralized D-TDD scheme optimally decides whether each node should receive, transmit, simultaneously receive and transmit, or be silent in each time slot. In addition, we proposed a fairness scheme that allocates data rates to the nodes according to the user data demands. We have shown that the proposed optimal centralized D-TDD scheme achieves significant gains over existing centralized D-TDD schemes.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}