diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzsiwd" "b/data_all_eng_slimpj/shuffled/split2/finalzzsiwd" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzsiwd" @@ -0,0 +1,5 @@ +{"text":" \n* * *\n\n# Knowledge, Technology and Law\n\n* * *\n\nThe relationships between knowledge, technologies, and legal processes are central to the constitution of contemporary societies. As such, they have come to provide the focus for a range of academic projects, across interdisciplinary legal studies and the social sciences. The domains of medical law and ethics, intellectual property law, environmental law and criminal law are just some of those within which the pervasive place and 'impact' of technoscience is immediately apparent. At the same time, social scientists investigating the making of technology and expertise \u2013 in particular, scholars working within the tradition of science and technology studies \u2013 frequently interrogate how regulation and legal processes, and the making of knowledge and technologies, are intermingled in complex ways that come to shape and define each other. This book charts the important interface between studies of law, science and society, as explored from the perspectives of socio-legal studies and the increasingly influential field of science and technology studies. It brings together scholars from both areas to interrogate the joint roles of law and science in the construction and stabilization of socio-technical networks, objects, and standards, as well as their place in the production of contemporary social realities and subjectivities.\n\n**Emilie Cloatre** is a Senior Lecturer in Kent Law School at the University of Kent.\n\n**Martyn Pickersgill** is Wellcome Trust Senior Research Fellow in Biomedical Ethics in the Centre for Population Health Sciences at the University of Edinburgh.\n* * *\n\nLaw, Science and Society series\n\nGeneral editors\n\nJohn Paterson | Julian Webb \n---|--- \n_University of Aberdeen, UK_ | _University of Melbourne, Australia_\n\n* * *\n\nLaw's role has often been understood as one of implementing political decisions concerning the relationship between science and society. Increasingly, however, as our understanding of the complex dynamic between law, science and society deepens, this instrumental characterisation is seen to be inadequate, but as yet we have only a limited conception of what might take its place. If progress is to be made in our legal and scientific understanding of the problems society faces, then there needs to be space for innovative and radical thinking about law and science. _Law, Science and Society_ is intended to provide that space.\n\nThe overarching aim of the series is to support the publication of new and groundbreaking empirical or theoretical contributions that will advance understanding between the disciplines of law, and the social, pure and applied sciences. 
General topics relevant to the series include studies of:\n\n\u2022 law and the international trade in science and technology;\n\n\u2022 risk and the regulation of science and technology;\n\n\u2022 law, science and the environment;\n\n\u2022 the reception of scientific discourses by law and the legal process;\n\n\u2022 law, chaos and complexity;\n\n\u2022 law and the brain.\n\n**Titles in this series:**\n\n**Absent Environments**\n\nTheorising environmental law and the city\n\n_Andreas Philippopoulos-Mihalopoulos_\n\n**Uncertain Risks Regulated**\n\n_Edited by Ellen Vos and Michelle Everson_\n\n**The Regulation of Animal Health and Welfare**\n\nScience, law and policy\n\n_John McEldowney, Wyn Grant, and Graham Medley_\n\n**Knowledge, Technology and Law**\n\n_Edited by Emilie Cloatre and Martyn Pickersgill_\n* * *\n\nKnowledge, Technology and Law\n\n* * *\n\nEdited by\n\nEmilie Cloatre and Martyn Pickersgill\n\nFirst published 2015 \nby Routledge \n2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN\n\nand by Routledge \n711 Third Avenue, New York, NY 10017\n\na GlassHouse Book\n\n_Routledge is an imprint of the Taylor & Francis Group, an informa business_\n\n\u00a9 2015 Emilie Cloatre and Martyn Pickersgill\n\nThe right of Emilie Cloatre and Martyn Pickersgill to be identified as author of this work has been asserted by them in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.\n\nAll rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.\n\n_Trademark notice_ : Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.\n\n_British Library Cataloguing in Publication Data_ \nA catalogue record for this book is available from the British Library\n\n_Library of Congress Cataloging-in-Publication Data_ \nKnowledge, technology, and law \/ edited by Emilie Cloatre, Martyn \nPickersgill. \npages cm. \u2014 (Law, science, and society) \nIncludes bibliographical references and index. \nISBN 978-0-415-62862-4 (hardback) \u2014 ISBN 978-0-203-79760-0 (ebk) \nI.Technology and law. 2. Science and law. I. Cloatre, Emilie, editor of \ncompilation. II. Pickersgill, Martyn, editor of compilation. \nK487.T4K59 2014 \n340\u2032.11\u2014dc23 \n2014013342\n\nISBN: 978-0-415-62862-4 (hbk) \nISBN: 978-0-203-79760-0 (ebk)\n\nTypeset in Galliard \nby FiSH Books Ltd, Enfield\n* * *\n\n# Contents\n\n* * *\n\n_Notes on contributors_\n\n_Acknowledgements_\n\n**Introduction**\n\nEMILIE CLOATRE AND MARTYN PICKERSGILL\n\n[**SECTION 1 \nLaw, expertise and public participation**](008_9780203797600_section1.html#sect1)\n\n **1** **Science, uncertainty and the normative question of epistemic governance in policymaking**\n\nSUJATHA RAMAN\n\n **2** **Contingent participation: imaginaries of sustainable technoscientific innovation in the European Union**\n\nMARK L. 
FLEAR AND THOMAS PFISTER\n\n **3** **The voice of silence: UK patients' silent resistance to the assisted reproduction regulations**\n\nILKE TURKMENDAG\n\n[**SECTION 2 \nObjects and epistemologies in criminal law**](011_9780203797600_section2.html#sect2)\n\n **4** **Unchaining research: processes of dis\/empowerment and the social study of criminal law and investigation**\n\nBARBARA PRAINSACK\n\n **5** **Making the colposcope 'forensic': the medico-legal management of a controversial visualisation device**\n\nGETHIN REES\n\n **6** **Telling tales: some episodes from the multiple lives of the polygraph machine**\n\nANDREW BALMER\n\n[**SECTION 3 \nRegulation, ethics and values**](014_9780203797600_section3.html#sect3)\n\n **7** **Through the thicket and across the divide: successfully navigating the regulatory landscape in life sciences research**\n\nGRAEME LAURIE AND SHAWN H. E. HARMON\n\n **8** **Misconduct hunting: research integrity via law, science and technology**\n\nMARIE-ANDR\u00c9E JACOB\n\n **9** **Financial derivatives and the challenge of performation: where contingency meets contestability**\n\nDONATELLA ALESSANDRINI\n\n[**SECTION 4 \nLaw, technoscience and the stabilization of knowledge**](017_9780203797600_section4.html#sect4)\n\n**10** **Epistemic jurisdictions: science and courts in regulatory (de)centralisation**\n\nDAVID E. WINICKOFF\n\n**11** **Un-knowing exposure: toxic emergency housing, strategic inconclusivity and governance in the US Gulf South**\n\nNICHOLAS SHAPIRO\n\n**12** **A likely story: HIV and the definition of disability in UK employment equality law, 1996\u20132005**\n\nEMILY GRABHAM\n\n**13** **Paper prototypes**\n\nALAIN POTTAGE\n\n_Index_\n* * *\n\n# Notes on contributors\n\n* * *\n\n**Donatella Alessandrini** is Reader in Law in Kent Law School at the University of Kent.\n\n**Andrew Balmer** is Simon Research Fellow in Sociology at the University of Manchester.\n\n**Emilie Cloatre** is a Senior Lecturer in Kent Law School at the University of Kent.\n\n**Mark L. Flear** is a Lecturer in the School of Law at Queen's University Belfast.\n\n**Emily Grabham** is a Reader in Law in Kent Law School at the University of Kent.\n\n**Shawn H.E. 
Harmon** is Lecturer in Regulation and Risk in the School of Law at the University of Edinburgh.\n\n**Marie-Andr\u00e9e Jacob** is a Senior Lecturer in the School of Law at Keele University.\n\n**Graeme Laurie** is Professor of Medical Jurisprudence in the School of Law at the University of Edinburgh.\n\n**Thomas Pfister** is Director of the Energy Cultures Research Group in the Department of Social Science at Zeppelin University.\n\n**Martyn Pickersgill** is Wellcome Trust Senior Research Fellow in Biomedical Ethics in the Centre for Population Health Sciences at the University of Edinburgh.\n\n**Alain Pottage** is Professor of Law in the Law Department at the London School of Economics and Political Science.\n\n**Barbara Prainsack** is Professor of Sociology at the Department of Social Science, Health and Medicine at King's College London.\n\n**Sujatha Raman** is Deputy Director of the Leverhulme Trust 'Making Science Public' Research Programme in the Institute for Science and Society (ISS) at the University of Nottingham.\n\n**Gethin Rees** is Lecturer in Criminology in the Department of Sociology, Social Policy and Criminology at the University of Southampton.\n\n**Nicholas Shapiro** is a Postdoctoral Researcher in the Sociology Department at Goldsmiths, University of London.\n\n**Ilke Turkmendag** is a Research Associate in the Department of Sociological Studies at the University of Sheffield.\n\n**David E. Winickoff** is Associate Professor of Bioethics and Society in the Department of Environmental Science, Policy and Management at the University of California, Berkeley.\n* * *\n\n# Acknowledgements\n\n* * *\n\nWe are grateful, most of all, to our contributors: for their efforts, ideas, excitement, and patience during the process of this book project. We acknowledge too the work of Colin Perrin, Rebekah Jenkins, and the other staff at Routledge who have facilitated its production. We would also like to thank the UK Arts and Humanities Research Council (AHRC) for sponsoring the 'Technoscience, Law and Society' Research Network. This provided a forum for \u2013 and further propelled \u2013 a number of scholarly engagements that generated debate and exchange which informed the contents of this volume. Martyn gratefully acknowledges too the support of the Wellcome Trust. Last, we are indebted to our home institutions who have generously supported us in this endeavour: the University of Kent and the University of Edinburgh.\n* * *\n\n# **Introduction**\n\n_Emilie Cloatre and Martyn Pickersgill_\n\n* * *\n\nThe relationships between knowledge, technologies, and legal processes are central to the constitution of contemporary societies. As such, they have come to be the focus for a range of academic projects, across interdisciplinary legal studies and the social sciences. Legal scholars are commonly drawn to exploring the legal developments that respond to or are shaped by scientific innovation. The domains of medical law and ethics, intellectual property law, environmental law and criminal law are just some of those within which the pervasive place and 'impact' of technoscience is immediately apparent. 
At the same time, social scientists investigating the making of technology and expertise \u2013 in particular, scholars working within the tradition of science and technology studies (STS) \u2013 frequently interrogate how regulation and legal processes, and the making of knowledge and technologies, are intermingled in complex ways that come to shape and define each other.\n\nThe interrogation of 'law', 'science', and 'technology' has also worked to render problematic any notion that these are tightly bounded things always already out there in the world, waiting patiently for analytic attention. We might, perhaps, broadly agree on what law and technoscience look like, but seeking to provide minute and enduring definitions only further pluralizes their ontologies. Accordingly, we are sceptical about the possibility of definitively specifying the natures of knowledge, technologies, and legal processes \u2013 even as we necessarily hold on to imaginaries of each of these in order to work and to live.\n\nIt should be clear, then, that our intention with this book is certainly not to produce any kind of schematic about how 'law', 'science', and 'technology' _should_ relate to one another. Our project is a little different, and in some senses it is modest: to bring together a range of perspectives on these entities, in order to begin to apprehend how different epistemic communities might talk about them and their relationships, and hence how they are talked into existence. Such interdisciplinary conversations around technoscience and law can, we believe, enable the issues at stake to be considered in more critical depth, and bring to light some of the complexity of the interactions between what we know, how we do things, and how we think we should live. Our modest agenda, then, enables us to be more ambitious: by eliding responsibility for answering standard normative questions, we seek to energize reflection regarding (and perhaps the transformation of) the foundations upon which normativity is usually enacted.\n\nScholarship that speaks to our concerns has already been undertaken by a range of authors, and we have no wish to reinvent the wheel. We want to acknowledge our, and our contributors', intellectual debts. In particular, the settlement of rules and the making of knowledge in the contexts of law and science have been drawn into conversation (Latour 2002); the patterns that emerge in the interaction between legal institutions, scientific knowledge, and technological innovation have been interrogated (Brownsword and Yeung 2008; Jasanoff 1997; Aronson and Cole 2009); and conceptual insights from STS have become increasingly deployed to explore the making of legal and political norms and practices (Riles 2011; de Goede 2012; Pottage 2012).\n\nThese explorations have been undertaken within a variety of areas, from environmental regulations, to the practices of biomedicine, as well as in the context of information technology and financial regulations. Medical devices, energy saving technologies, computers, lie detectors, stem cells, and other artefacts, both spectacular and mundane, are now increasingly regarded as being at least partly defined by and constituted through their legality. In the messy networks that stabilize these objects, legal processes thus come to be another site of entanglement between materials, techniques, cultures, and institutions, that are inherently contingent on the deployment of knowledge and technologies, and established and emerging forms of expertise. 
Technoscience and law, rather than leaning on each other as independent spheres of norm-making, come to co-produce each other (Jasanoff, 2004), while remaining defined by particular systemic modes of enunciation that participate in performing their own uniqueness (Luhmann 1995; Pottage 2012).\n\nScholars have devoted much attention to the links between new scientific possibilities and emerging normative conflicts (as the history of the regulation of reproductive technologies has demonstrated, for example; Ragon\u00e9 and Franklin 1998) and to the various sets of social, legal and democratic questions raised by significant events, or disasters (Petryna 2002). At the same time, studies of practices that are, for different communities, everyday, mundane, or 'discrete' have increasingly enriched our understanding of the many locations in which the entanglement of law and technoscience is visible, and of the various surfaces upon which the production of knowledges is scaffolded, and even concretised, by legal events (Jacob 2012; Riles 2011; Cloatre 2013). Historical engagements with law and expertise demonstrate both the transformation of these over the years through the mutations of the other (Jasanoff 1997), and the great extent to which certain types of tensions and opportunities persevere (Leclerc 2005).\n\nAs we intimated above, underlying much of the questioning of the links between law and science are issues pertaining to the nature of each institution. The portrayal of law and science as separate fields of knowledge and activity once dominated \u2013 and, to an extent, continues to be evident within \u2013 the legal literature. This resulted in a set of normative claims (persistent in much public and legal discourse) regarding how legal processes should (better) emulate the methods of science, and its quest for 'truth' rather than 'settlement'. Similarly, we can see that law is consistently called to be more responsive to technoscientific developments, and even to reformulate its own logics with regard to novel (and tentative) knowledge claims (Pickersgill 2011). As Sheila Jasanoff puts it:\n\n> The representation of law and science as fundamentally different enterprises has given rise to two strikingly recurrent themes in legal writing about science: that of the 'culture clash' between lawyers (or legally trained bureaucrats) and scientists, and the corollary that the culture of law should strive as far as possible to assimilate itself to the culture of science when dealing with scientific issues.\n> \n> (Jasanoff 1997: 7)\n\nSTS-inflected studies of law and science have for some time challenged these claims: the possibility, and desirability, of law becoming 'more scientific' or being made more 'authoritative' through incorporating scientific claims or mimicking its epistemic norms becomes increasingly questionable. STS has demonstrated how science is produced through a heady mix of professional dis\/agreement, the management of uncertainty, the 'dance' between the observer and the observed and between the measure and the measured, the silencing of some voices and the amplification of others, and the ongoing intertwinement of the material and the semiotic. Increasing attention to the role of legal discourse within this cocktail has been suggestive of the extent to which law and science exist in a relationship of co-constitution, or entanglement. 
Accordingly, early normative claims that law should (or even could) 'follow' science in such predetermined ways became less compelling.\n\nThis reimagining of the relationship between law and technoscience also bears significance in relation to our understanding of law and of broader debates within interdisciplinary legal studies. The work of law and society scholarship, legal anthropology, critical legal scholarship, feminism, and race theory, amongst others, has enjoined questions about what we take law to actually be, over the same period of time as STS was deconstructing and redefining the identities of science (e.g. Constable 2007; Silbey and Ewick 1998). Traditional engagements with the 'impact' of law, in turn, came to be challenged by more fluid understandings of its modes of operation, and its co-constitutive entanglement with routinized practices, complicating further the 'and society' aspect of legal studies (e.g. Cloatre 2008, 2013; Pickersgill 2013a; Jacob 2012). For Alain Pottage,\n\n> there is indeed more society in law than there is in the society that is invoked by studies in 'law and society'. Ironically, however, the effect is actually a retrenchment of the premise that law exists as a singular social instance that it is the business of the theorist to explain.\n> \n> (Pottage 2012: 179)\n\nAccordingly, many assumptions have come to be scrutinized about how law functions, about its actual or imagined role, and about how norms and decisions are shaped or made. Law's internal machinery has been interrogated, and the self-constitution of law as a system of knowledge and authority freshly perceived. The externalities of law, and the meanings law adopts in everyday life, have expanded our understanding of what legal space might encompass (Silbey and Ewick 2000). Just as boundaries between science and society, nature and culture, that were once regarded as bright and sharp were progressively shown by STS to be fuzzy (at the very least), interdisciplinary scholarship around 'the law' has made the possibility of clearly delineating between this and the spaces it inhabits increasingly questionable.\n\nWe follow this position below, and argue that the insights of STS contribute to a radical revisiting of the making and meaning of law \u2013 a project we call, following other 'expansions' of the remit of STS (e.g. MacKenzie _et al._ 2007), 'social studies of law'. Explorations in law and technoscience, and the joint insights of STS and interdisciplinary legal scholarship in unpacking the processes through which knowledge is made and society ordered, have indeed opened up significant possibilities for questioning the nature of legal processes and the production of technoscience. Before turning to this general claim, however, we engage in further detail with three key elements that interrogations of knowledge, technologies, and law invite us to consider.\n\n## **Understandings of progress and regulatory framings**\n\nMuch of the debate surrounding law and science has revolved around how law responds, or ought to respond, to new scientific developments, risks, and opportunities. The conversations here have often been loaded with significant policy pressures, with governments having to balance ethical concerns with the possibilities offered by new \u2013 and at times controversial \u2013 technologies. 
The truism that law fails to keep up with scientific and technological innovations has long been deconstructed, as the co-production of law and science, and their inherent interdependency, have progressively been proposed as a more adequate way of imagining these interactions (Jasanoff 2004).\n\nA range of studies have explored how balances are made between contrasting interests regarding perceived needs to enable yet control new technologies, and between the rights and duties of citizens \u2013 as well as how new technologies themselves transform or re-embed particular understandings of citizenship (Flear and Pickersgill 2013; Flear _et al._ 2013; Jasanoff 2011). Genetic technologies have been at the centre of many of these explorations, and the past twenty years have seen an inflation of the number of public debates that surround new scientific opportunities, and the normative challenges that they raise. The regulation of science, and regulation _through_ science, is nonetheless at stake in policy choices made far beyond these examples; for example, when choosing which clean energies to facilitate, how to regulate industries in the face of climate change, or how to engage with nuclear energy.\n\nThese questions all contain complex sets of issues relating not only to the role of law, but also to ideas of development and progress, and of the futures that societies accept, imagine, or strive for. Rather than being simply about either constraining or fostering innovation, normative responses to science and technology can come to establish new social understandings of who 'we' are, and how we hope to live. Legal engagements with technoscience therefore entail particular visions of society, and imply exclusions and silencing, as well as the occasional (re)involvement of various groups and interests (cf. Minnow 1991). The regulatory framing of new technology becomes a useful site to explore how citizens contribute to social futures, how these are constituted, and what is gained or lost in different types of regulatory strategies. Additionally, issues of social justice and (re)distribution are inherently entangled in regulatory choices, as is most visible in debates about patents and ownership over biotechnology and pharmaceutical products (e.g. Hayden 2003; Pottage 2006). Each strategy deployed, and the balancing of rights and duties that are attached to technoscientific activities, will presume particular notions of (un)fairness and (in)justice, and either challenge or perpetuate patterns of exclusion and inequality. The interrelationship between law and scientific innovation is therefore complex, multilayered, and entangled within both choices about technological futures and, more discreetly, about citizenship and social justice.\n\n## **Expertise, citizens, and the making of law**\n\nThe questioning of expertise is also central to exploring the deployment of law, and legal bureaucracies, and has been achieved most thoroughly in the context of the courtroom. There, the ways in which legal and scientific knowledges construct and imagine their own spheres of authority, or achieve a particular 'result', have been charted in contexts ranging from criminal to tort to environmental law. STS scholars have demonstrated how the value of expert knowledge, and of particular technologies, in the context of the courtroom, is itself dependent on negotiation and resolutions that happen both within and outside the courts (Jasanoff 1997). 
Visions of both law and science in which these are, respectively, conceived of as pure constructions of justice, or of truth, have progressively been rendered more complicated than early narratives of trials might have implied.\n\nExplorations of expertise and the roles played by experts in broadly legal settings have also opened up debates on the nature of democracies. The democratic premise of equal participation in the political sphere is often at stake in a context of (real or perceived) increased reliance on experts in decision-making. At one level, the issue here is to reflect on the balancing that must, or can, be struck between democratic expectations and the necessity of grounding decision-making in informed knowledge. STS scholarship has reconstructed and enriched questions pertinent to this debate. Highlighting the contingency of knowledge, and the entanglements between epistemology, power and politics, leads, for example, to much reflection on how expertise is mobilized and framed by governments (local, national, and supranational). The notion of expertise itself appears (perhaps unsurprisingly) more problematic through analytic engagements with instances of knowledge-making and institutional involvement by 'lay-experts' (Callon _et al._ 2009; Epstein 1996). Finally, the idea of 'the public', and its homogeneity and positionality with regard to new scientific evidence, has been refined. Indeed, it has been pluralized, with the term 'publics' employed within recent scholarship to underscore the intrinsic heterogeneity of communities, subject positions and identities that are present within 'the public'. Overall, the connections between knowledge and law-making appear to break and reform boundaries in ways that are continuous with broader patterns of power and exclusion that are not specific to technoscience, though at times are amplified by the temptations of technocratic decision-making.\n\nInterrogating the interlinks between expert knowledge and the law has moved from imagining legal and scientific expertise as independent spheres to revisiting them as entangled sites of reciprocal production. As neither law nor science is perceived as an isolated site of social action, questions shift from how expert knowledge can best be used, to how the joint works of law and science proceed to the validation and leveraging of knowledges, often restructuring highly political debates into seemingly technical ones. Tensions re-emerge when this mobility of politics is made apparent \u2013 for example by the displacement to either courts or expert committees of highly political issues.\n\n## **Thinking about law and materiality**\n\nEngaging with law and technoscience invites significant attention to social patterns, events, norms, and citizens, as developed above. However, it also requires a shift of attention to material sites of production, and deployment, of both legal and epistemic processes. Indeed, the role of materiality in shaping, conditioning, and mediating the construction and action of law has been rendered visible by many commentators (Pottage and Sherman 2010; Cloatre and Dingwall 2013; Jacob 2012). This is in part a consequence of the progressive permeation of STS within the study of law. As the social study of technoscience has carefully drawn out the many ways in which materiality 'matters', the seizing of this importance by interdisciplinary legal scholars has opened avenues for new interrogations of the law. 
Examining specific sites of entanglement of law and technoscience therefore brings to the fore the significance of materiality in the production of norms and the deployment of law. This provides forms of questioning that have not traditionally mobilized legal scholars, but which are now increasingly seen as relevant. For Javier Lezaun:\n\n> The effacement of the labour involved in making legal categories material is evident in the lack of attention of legal scholars to the fabrication of the artefacts that incarnate legal entities. Even when the focus is on 'standardization', or 'harmonization', the emphasis tends to be on the production of texts, rules, and criteria \u2013 in other words, on processes of writing that continue and resemble the textual fabrication of the law.\n> \n> (Lezaun 2012: 38)\n\nNevertheless, the claim that materiality has largely been associated with textual materiality in legal scholarship, though certainly once true, now needs to be nuanced slightly. Increasingly, attention is being paid to how law becomes entangled in other kinds of objects. Much of this has been stimulated by wider interest in actor-network theory and posthumanist philosophy (Barad 2007), and an increased attention within political theory to the import of matter (Bennett 2010).\n\nThis critical reflection is important not only as a way of paying attention to 'things', but also because it often suggests a radical reimagining of social ontologies (Barad 2007). Within these broader theoretical movements, the attention paid to matter within legal scholarship has progressively moved beyond the materiality of texts and towards the broader set of materials that conditions the day-to-day activities of lawyering (Latour 2002; Riles 2000). A further shift directed attention to reflecting on how materials themselves come to embed particular legal scripts and histories, or come to be formed, accepted, and transformed by legal processes (Cloatre 2013; Jacob 2012; Lezaun 2012). In turn, the material environment of the law, and the shaping of environments through the law, have been shown to be essential aspects of our understandings of the nature of legal processes (Philippopoulos-Mihalopoulos 2013). The methodological implications of material explorations of law are, in turn, significant, privileging critical and ethnographic approaches to legal processes \u2013 and fostering further links between legal scholarship and the anthropologies of law and science (e.g. Jacob and Riles 2007).\n\nOverall, at stake in these STS-inflected studies of law are the meanings of things in the shaping of legal processes, and in turn the significance of the law in producing forms of materiality \u2013 and the inevitable interrelationships between these. Such explorations reveal how materials become sites that produce, stabilize, and perpetuate particular kinds of power, and help render apparent (potential) displacements of 'traditional' forms of political action. Objects become socially charged both in regards to what they project of the law, and in what they enable (users of) the law to do.\n\n## **Social studies of law**\n\nSome of the core contributions of STS to the wider social sciences and humanities have been the generation of new forms of critical engagement with the institutions of science and the social meanings of technology \u2013 but also, more broadly, the close documentation of how power\/knowledge are made and deployed in processes of ordering the social. 
In doing so, STS has participated in challenging and deconstructing categories of sometimes entrenched beliefs, norms, and practices, unmaking assumptions as to what constitutes their foundations. When associated with the rich tradition of critical forms of legal scholarship for the constant interrogation of the meaning and nature of law, of what does (or does not) set it aside from other social institutions and 'regimes of normativity' (Pickersgill 2012), STS opens up questions about the boundaries of the legal space, and enjoins us to ask again: what is law?\n\nWe suggest that articulating legal scholarship and STS has necessary consequences for understandings of law and inevitably results in deep challenges to its institutional uniqueness and singularity as an object of study. If the insights of STS are taken seriously, and integrated with the understandings of legality unfolded through conceptual and empirical interdisciplinary legal scholarship, law seems to become a set of fluidly defined associations within shifting networks, even though its language, institutional landscapes, and modes of deployment may appear specific (or perhaps, more appositely, peculiar). In other words, law can be reimagined as co-constituted rather than presupposed, in the way that social relations and objects are more generally, and this has significant implications for its study.\n\nSTS thus allows for a radicalization of the question of law and, as Pottage suggests when reflecting specifically on materiality, pushes us to question whether a deep engagement of law and STS more generally 'might not actually lead to the dissolution of law as a social instance?' (Pottage 2012: 180). Indeed, if our focus of inquiry turns towards the subtle processes of making and ordering that scholars within STS have tended to pay attention to, the vision of law as a predetermined institution becomes difficult to sustain, as the legal space becomes reimagined as being itself constitutive of, and subjected to, broader and fluid material-semiotic interaction. Processes of settlement can be unpacked in order to explore how things fall into place in a particular way, through the joint work of law, science, and other entangled processes, but the idea of law that transpires from each of these instances is to be determined rather than assumed. It is the analytic agenda that falls out from this set of propositions, with its methodological corollaries, that we term 'the Social Studies of Law'.\n\nWork that we may retrospectively characterize as social studies of law already exists, of course, some of which is referred to above. It has also taken the form of what Annelise Riles (2005) calls a return to the 'technical'; enabled through a closer engagement with practices and the 'micro', and a renewed focus on the details of how legal technicalities and the routinized practices of lawyers matter to broader questions about society. There, STS-inflected studies of the law commonly meet the concerns of legal anthropology, as everyday praxis becomes intrinsic to the constitution of law and its social role. The analysis of legal processes thus becomes inherently dependent on the repeated unpacking of a range of assumptions, discourses, practices, and objects. This, in turn, enables the exploration of the making of particular forms of knowledge (both legal and technoscientific), and of the ways norms, institutions, and the deployment of legal rules are inherently entangled in social, political, and material orders. 
Enjoined here is a conceptual shift to regard law as a fluid notion that acquires its strength and stability by a range of means (verbal, institutional, textual, material...) through space and time. The methodological implications of this approach bear consideration, as (for instance) a microscopic examination of legal processes becomes a valuable tool through which to interrogate the contingent nature of law.\n\nIf law is, or becomes, something different, through comprehension of its entanglement with other sets of cultural or social practices, then this, as we have suggested above, makes it by 'nature' a fluid object, hard to seize and only temporarily stabilized. Nevertheless, the erosion of boundaries that delineate 'law' from other performances of the social does not mean, however, that technical manifestations of either law or science become irrelevant or redundant for the observer. Just because law and its boundaries are performed by societal praxis does not mean that it is irrelevant: it is a thing that is important because it is believed by many to be _a thing that is important_. Both the thingness and the import are worthy of attention in terms of their construction, and the norms and values that they create, propel and challenge.\n\n## **Knowledge, technoscience, and law**\n\nThe overriding aim of this book, therefore, is to map out contemporary themes and issues that emerge from the close interrogation of the links between law, technologies and forms of knowledge, across thematic fields and disciplinary boundaries. As the dynamics between law and technoscience develop as foci of growing academic interest \u2013 and, it should be noted, of policy and civil relevance \u2013 it remains difficult to find overviews of the range of approaches, ideas, methods, and concepts that shape and direct it. This book seeks to display some of the diversity of strategies, preoccupations, and concepts that animate the scholarship in this area, as well as \u2013 we hope \u2013 provide a means of propelling further theoretical and empirical work in what we have referred to as the social studies of law.\n\nThe chapters which follow are divided into four sections, reflecting key questions and domains of analysis in law and science scholarship.\n\nSection 1, _Law, expertise and public participation_ , interrogates the relationships between expert knowledge and public involvement in legal processes and decision-making. In Chapter 1, 'Science, uncertainty and the normative question of epistemic governance in policymaking', Sujatha Raman interrogates the foundation of public participation in the production of scientific knowledge, through insights from STS and political theory. In doing so, she reflects on the complex role publics play in science-based decision-making. This, in turn, opens up important questions on the role of expert knowledge in public policy, and the positionality of publics in democratic decision-making. Mark Flear and Thomas Pfister continue this thematic focus in Chapter 2, 'Contingent participation: imaginaries of sustainable technoscientific innovation in the European Union'. Here, they focus more specifically on processes of inclusion and exclusion that operate in regulatory decision-making. Looking at the example of EU policymaking, they explore how a narrow definition of sustainable development, based on specific technoscientific imaginaries, has narrowed the space for public participation in EU environmental decision-making. 
In her essay, Chapter 3, 'The voice of silence: UK patients' silent resistance to the assisted reproduction regulations', Ilke Turkmendag interrogates the nature of participation itself. She achieves this through a case study of how would-be parents engaged with the UK removal of sperm donor anonymity. Turkmendag illustrates how discreet forms of resistance and discontent were deployed in response to the new law, offering a rich illustration of the subtle ways in which publics elect to participate in shaping legal processes.\n\nIn Section 2, _Objects and epistemologies in criminal law_ , contributions turn to explorations of the making and stabilization of knowledge, and of technologies of evidence, in the courtroom. Focused specifically on criminal justice settings, a central focus of much scholarship on law and science, this section also raises broader issues about the role of technoscience in legal settlement. Insights from STS are used by Barbara Prainsack in Chapter 4, 'Unchaining research: processes of dis\/empowerment and the social study of criminal law and investigation', to unpack some of the key assumptions and categorizations that underlie commentaries on criminal justice and forensic science. Focusing on two notions \u2013 the nature of convictions and of convicts, on the one hand, and the role of forensic DNA as a technology of individualization, on the other \u2013 she questions how STS tools can be deployed to critically interrogate unexamined assumptions about the nature of criminal justice processes. A focus on contested criminological technologies is continued in Gethin Rees' analysis of the colposcope in the context of rape trials. In Chapter 5, 'Making the colposcope \"forensic\": the medico-legal management of a controversial visualisation device', Rees analyses its usage and unveils some of the controversies and contestations that surround it and which play out amongst particular communities of clinical, scientific and legal practice. He demonstrates how these debates are rooted in deeper, conflicting visions of the role of forensic medical practitioners in rape trials. Contest and controversy are also of import in Chapter 6, 'Telling tales: some episodes from the multiple lives of the polygraph machine'. Here, Andrew Balmer examines the history of the stabilization of another technology of evidence: the polygraph. His critical analysis demonstrates how a story that has often been told as a determined process of technological development is in fact highly messy, complex, and multidirectional. Balmer suggests that acknowledging the multiple lives of technologies is an important part of making sense of their role in legal systems.\n\nSection 3, _Regulation, ethics and values_ , provides explorations of the entanglement of regulation and governance with other discourses of normativity, and their practical and political implications. In Chapter 7, 'Through the thicket and across the divide: successfully navigating the regulatory landscape in life sciences research', Graeme Laurie and Shawn Harmon document some of the potential discrepancies between what the law dictates, and what scientists feel is required to govern their work. Looking at the particular example of the regulatory separation between tissue and data, they argue for a more responsive approach to the legal governance of science. Marie-Andr\u00e9e Jacob turns to the regulation of research in Chapter 8, 'Misconduct hunting: research integrity via law, science and technology'. 
Specifically, she is concerned with how integrity is governed. Jacob explores how self-regulation through technologies has become a central aspect of the governance of research integrity. In doing so, she questions the boundary between legal and technological modes of regulation, and between law and science in the making of ethical norms for scientific practice. The analysis of how a set of financial technologies \u2013 financial derivatives \u2013 can be understood as value-making technologies is the core of Chapter 9, 'Financial derivatives and the challenge of performation: where contingency meets contestability', by Donatella Alessandrini. Employing insights from literatures in the social studies of finance and political economy, she challenges the boundary between technologies and practices. In turn, Alessandrini asks, and helps us to answer, questions about how new forms of regulating and formatting markets can be derived from this understanding of derivatives in their complexity.\n\nFinally, contributions in Section 4, _Law, technoscience and the stabilization of knowledge_ , focus on the making of knowledge, and the joint roles of law and technoscience in producing and stabilizing forms of knowledge. David Winickoff, in Chapter 10, 'Epistemic jurisdictions: science and courts in regulatory (de)centralisation', articulates the role of science in complex models of multi-level governance. Looking at examples from the regulation of food and climate change, he demonstrates how regulatory science shapes sites of global governance, and mediates jurisdictional conflicts. In Chapter 11, 'Un-knowing exposure: toxic emergency housing, strategic inconclusivity and governance in the US Gulf South', Nicholas Shapiro interrogates the production of certitude in legal processes surrounding chemical exposure in emergency housing in the US Gulf South. He traces the role of scientific claims in processes of un-knowing in federal governance and court proceedings, and explores the interplay between scientific and legal knowledges in generating (un)certainty. Another uncertain relationship between science and law is set out by Emily Grabham in Chapter 12, 'A likely story: HIV and the definition of disability in UK employment equality law, 1996\u20132005'. Here, by exploring the position of HIV\/AIDS patients in the face of employment equality law, she problematizes in particular the making of legal temporalities in the set of legal and medical discourses, objects and practices that mattered to the legal processes at play. Consequently, she provokes us into thinking more deeply about the dynamics between law and time. Finally, in Chapter 13, 'Paper prototypes', Alain Pottage analyses the nature of patents, by considering patent texts as prototypes. Looking back at the history of patent jurisprudence, and drawing on STS and related theoretical approaches, he explores how a logic of prototyping and reproduction is a central aspect of the constitution of patents, the implications of this for the position of ideas and techniques in the making of inventions, and what this might suggest for our understanding of patents as a particular legal and technoscientific object.\n\nOverall, the chapters chart some of the contemporary directions taken by scholarship in law and science, and start to articulate the variety of challenges that emerge when seeking to critically engage with legal and scientific institutions, processes, norms, and practices. 
Although the contributors share similar concerns for the complexity and fluidity of these, questions are approached from a variety of conceptual perspectives, which speak to and complement each other, and together represent a vantage point from which we might better view this still emerging field of intellectual enquiry. As a result, we hope that this collection will enrich conversations both in interdisciplinary legal studies and in STS, and provide a platform for future debates and critical interrogations of the movements exemplified in the various case studies elaborated herein.\n\n## **References**\n\nAronson, J. D. and Cole, S. A. (2009) 'Science and the death penalty: DNA, innocence, and the debate over capital punishment in the United States', _Law & Social Inquiry_, 34(3): 603\u2013633.\n\nBarad, K. (2007) _Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning_ , Durham, NC: Duke University Press.\n\nBennett, J. (2010) _Vibrant Matter: A Political Ecology of Things_ , Durham, NC: Duke University Press.\n\nBrownsword, R. and Yeung, K. (eds) (2008) _Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes_ , Oxford: Hart.\n\nCallon, M., Lescoumes, P. and Barthes, Y. (2009) _Acting in an Uncertain World: An Essay on Technical Democracy_ , Cambridge, MA: MIT Press.\n\nCloatre, E. (2008) 'TRIPS and pharmaceutical patents in Djibouti: an ANT analysis of socio-legal objects', _Social and Legal Studies_ , 17(2): 263\u2013287.\n\nCloatre, E. (2013) _Pills for the Poorest: An Exploration of TRIPS and Access to Medication in Sub-Saharan Africa_ , London: Palgrave Macmillan.\n\nCloatre, E. and Dingwall, R. (2013) '\"Embedded regulation\": the migration of objects, scripts and governance', _Regulation and Governance_ , 7(3): 365\u2013386.\n\nConstable, M. (2007) _Just Silences: the Limits and Possibilities of Modern Law_ , Princeton, NJ: Princeton University Press.\n\nDe Goede, M. (2012) _Speculative Security: The Politics of Pursuing Terrorist Monies_ , Minneapolis: University of Minnesota Press.\n\nEpstein, S. (1996) _Impure Science: AIDS, Activism and the Politics of Knowledge_ , Berkeley, CA: University of California Press.\n\nFlear, M. L. and Pickersgill, M. D. (2013) 'Regulatory or regulating publics? The European Unions regulation of emerging health technologies and citizen participation', _Medical Law Review_ , 21(1): 39\u201370.\n\nFlear, M. L., Farrell, A., Hervey, T. K. and Murphy, T. (eds) (2013) _European Law and New Health Technologies_ , Oxford: Oxford University Press.\n\nHayden, C. (2003) _When Nature Goes Public: The Making and Unmaking of Bioprospecting in Mexico_ , Princeton, NJ: Princeton University Press.\n\nJacob, M.-A. (2012) _Matching Organ with Donors: Legality and Kinship in Transplants_ , Philadelphia: University of Pennsylvania Press.\n\nJacob, M.-J. and Riles, A. (2007) 'The new bureaucracies of virtue', _Political and Legal Anthropology Review_ , 30(2): 181\u2013191.\n\nJasanoff, S. (1997) _Science at the Bar: Law, Science and Technology in America_ , Cambridge, MA: Harvard University Press.\n\nJasanoff, S. (ed.) (2004) _States of Knowledge: The Co-production of Science and Social Order_ , London: Routledge.\n\nJasanoff, S. (ed.) (2011) _Reframing Rights: Bioconstitutionalism in the Genetic Age_ , Cambridge, MA: MIT Press.\n\nLatour, B (2002) _La fabrique du droit_ , Paris: La Decouverte.\n\nLeclerc, O. (2005) _Le juge et l'expert. 
Contribution \u00e0 l'\u00e9tude des rapports entre le droit et la science_ , Paris: LGDJ.\n\nLezaun, J. (2012) 'The pragmatic sanction of materials: notes for an ethnography of legal substances', _Journal of Law and Society_ , 39(1): 20\u201338.\n\nLuhmann, N. (1995) _Social Systems_ , Stanford, CA: Stanford University Press.\n\nMackenzie, D., Muniesa, F. and Siu, L. (2007) _Do Economists Make Markets: On the Performativity of Economics_ , Princeton, NJ: Princeton University Press.\n\nMinnow, M. (1991) _Making All the Difference: Inclusion, Exclusion and American Law_ , Ithaca, NY: Cornell University Press.\n\nPetryna, A. (2002) _Life Exposed: Biological Citizens after Chernobyl_ , Princeton, NJ: Princeton University Press.\n\nPhilippopoulos-Mihalopoulos, A. (2013) 'Atmospheres of law: senses, affects, lawscapes', _Emotion, Space and Society_ , 7(1): 35\u201344.\n\nPickersgill, M. (2011) 'Connecting neuroscience and law: anticipatory discourse and the role of sociotechnical imaginaries', _New Genetics and Society_ , 30(1): 27\u201340.\n\nPickersgill, M. (2012) 'The co-production of science, ethics and emotion', _Science, Technology & Human Values_, 37(6): 579\u2013603.\n\nPickersgill, M. (2013a) 'How personality became treatable: the mutual constitution of clinical knowledge and mental health law', _Social Studies of Science_ , 43(1): 30\u201353.\n\nPickersgill, M. D. (2013b) 'Sociotechnical innovation in mental health: articulating complexity', in Flear, M. L., Farrell, A., Hervey, T. K., and Murphy, T. (eds), _European Law and New Health Technologies_ , Oxford: Oxford University Press, pp. 323\u2013342.\n\nPottage, A. (2006) 'Too much ownership: bioprospecting in the age of synthetic biology', _BioSocieties_ , 1(2): 137\u2013159.\n\nPottage, A. (2012) 'The materiality of what?', _Journal of Law and Society_ , 39(1): 167\u2013183.\n\nPottage, A and Sherman, B. (2010) _Figures of Invention: A History of Modern Patent Law_ , Oxford: Oxford University Press.\n\nRagon\u00e9, H. and Franklin, S. (1998) _Reproducing Reproduction: Kinship, Power, and Technological Innovation_ , Philadelphia: University of Pennsylvania Press.\n\nRiles, A. (2000) _The Network Inside Out_ , Ann Arbor: University of Michigan Press.\n\nRiles, A. (2005) 'A new agenda for the cultural study of law: taking on the technicalities', _Buffalo Law Review_ , 53: 973\u20131033.\n\nRiles, A. (2011) _Collateral Knowledge: Legal Reasoning in the Global Financial Markets_ , Chicago: University of Chicago Press.\n\nSilbey, S. and Ewick, P. (1998) _The Common Place of Law: Stories from Everyday Life_ , Chicago: University of Chicago Press.\nSection I\n\n* * *\n\n# Law, expertise and public participation\n\n* * *\nChapter 1\n\n* * *\n\n# **Science, uncertainty and the normative question of epistemic governance in policymaking**\n\n_Sujatha Raman_\n\n* * *\n\n## **Introduction**\n\nWhat role should science play in the making of law and public policy? The question is relevant for at least three reasons. First, science both identifies (e.g., potential hazards to health and the environment) _and_ generates problems (e.g., potential hazards posed by research and associated technologies), and policymakers are increasingly required to acknowledge and enable their governance. Second, science offers more detailed knowledge on the specific nature of problems (e.g., at what level do some potential hazards become significant?), ways of addressing them, and their impacts (e.g., might we be able to clean up pollution with microbial forms?). 
Nor is this restricted to hazards alone or to the natural sciences, as the social\/economic\/policy sciences also identify, generate, specify and offer solutions to aspirations around finance, social security, education, health and so on. Third, the question has become resonant in wider public discourse especially in the UK, with government science advisors, scientists and science popularizers, politicians, journalists and academics debating the subject. In this context, I will explore how we might think about the role that science should play in the policy context from an STS and socio-legal studies standpoint.\n\nIn a recent example of the wider debate over science and policymaking, astronomer\/broadcaster Brian Cox and actor\/writer Robin Ince (2012) point out that science derives its status as the best available approach to understanding nature from the fact that its findings are regularly subject to testing and revision in the face of new evidence. Science, it appears, can therefore offer the grounding required for policymaking because of the rigour of its standards for generating knowledge. However, STS studies cast doubt on this type of argument for 'science-based' policymaking (e.g., Irwin 2008; Jasanoff 1991; Sarewitz 2004; Weingart 1999; Wynne 2010). In this chapter, I examine the grounds for this critique and outline an alternative normative perspective on science-in-policymaking that builds on an emerging political theory of science and democracy (e.g., Brown 2009; Guston 2000; Jasanoff 2003a, 2003b, 2013). This perspective is concerned with the need for what I call 'epistemic governance' or the governance of knowledge-for-governance.\n\nI suggest that, contrary to common assumptions that STS seeks to dispense with or downplay science altogether, a key insight from STS research is that the scientific knowledge which is important for policymaking is a distinctively hybrid object straddling both science and politics. In practice, some of this hybridity has been lost in technocratic discourse about 'sound science' which has in turn led to suggestions for a return to the primacy of elected law-makers with a lesser role for science. However, the notion that science-in-policymaking is a hybrid form of expertise highlights the role of delegation as a principle of liberal democracy. I will explore the implications of the notion of expertise as delegated authority (Jasanoff 2003a) for how we think about problems of scientific uncertainty and public involvement around decision-making in technical domains.\n\nFirst, a few clarifications are in order. A variety of knowledge claims may be advanced in policymaking processes \u2013 some of them dubious or openly contested \u2013 without changing the policy decisions they have been used to justify. But if we are interested in the role that knowledge _ought_ to play in policy, then we want to know how this knowledge is produced, what it consists of, how real and potential disagreements are managed, and what forms of evidence are 'acceptable' in pluralistic societies (Jasanoff 1991). This is the domain of 'epistemic governance' or the governance of knowledge-for-governance. While governance in practice involves diverse actors within and beyond the state and this is no less characteristic of knowledge issues (Irwin 2008), I focus on the normative challenges for science advice produced through and for the state.\n\n## **Technocracy, decisionism \u2013 and populism? 
Normative models of science\/policy**\n\nThe headline of Cox and Ince's (2012) opinion-piece is as follows: _politicians_ must not elevate mere opinion over _science_ (italics mine). The phrasing of their normative position is telling \u2013 where politics is represented as embodied in a specific group of people (politicians), science lacks similar grounding. Aside from mentioning two famous scientists, the opinion-piece pits politicians against 'science' as a rarefied force. But if the production of science for policy is a social activity as STS and socio-legal scholars highlight, any normative model needs to be grounded in a sense of how science is embodied and made material to legislative\/political questions. In this section, I examine why the normative question of how science should relate to policy has become a question about the proper role of _experts_ vis-\u00e0-vis law-makers and the public, and consider three ways in which this relationship has been traditionally conceptualized.\n\nIf science is taken to be the body of knowledge produced by researchers and contained in journals, then as Stehr and Grundmann (2011) argue, it is unlikely that much of this can be directly relevant to policymaking. When science puts new problems on the policy agenda such as climate change or when evidence of global warming is challenged, it is expertise that is centre-stage, or, more specifically, experts who interpret, translate and put knowledge in context for policymaking and policy debate (Stehr and Grundmann 2012). Individual studies cannot be the basis of policymaking since any one study \u2013 of, say, the link between hurricanes and global climate change or the risk of transmission of a new cattle disease to humans \u2013 will need to be interpreted and put in context. In practice, policymakers _might_ attempt to ground their decisions in a single piece of evidence, but from a normative standpoint, they need to consider expert judgement on the credibility of evidence, the standards to which any one study was done and how its findings compare with others, and how to interpret the implications of multiple sources of evidence. In this respect, the notion of 'science-based' or 'evidence-based' policymaking can be misleading if it implies that evidence speaks for itself unmediated by particular agents.\n\nSo, while political actors may appeal to 'the science' or scientific evidence for grounding policy claims, academic debate on this question has come to revolve around how we ought to understand the role of experts who mediate that evidence and how this role should be positioned in relation to the public and policymakers (e.g., Collins _et al._ 2010; Stehr and Grundmann 2012; Jasanoff 2003a; Kitcher 2001; Turner 2003). To make sense of what is at stake here, it is worth considering two classic normative positions on experts-in-policymaking: _technocracy_ and _decisionism_ (Weingart 1999). Although these emerged in relation to the role of bureaucrats, they have been translated to make sense of experts, and more recently, experts providing scientific advice to government. 
The technocratic model associated with Comte's nineteenth-century vision is loosely defined as rule by a secular priesthood of experts ('sociologists' in his account), while Weber's decisionist model put political leaders 'on top': they were expected first to define the goals of policy, while experts ('bureaucrats' in his account) were confined to supplying knowledge of the means for achieving them and monitoring implementation (Millstone 2007). Weber's notion of 'value-free' science rested not so much on a fear that reliable knowledge would be corrupted by politics but on a desire to keep science out of important value-based judgements, which Weber saw as the proper domain of sovereign rulers (Proctor 1991). Decisionism therefore presumes a clear divide between values (the domain of politics) and facts (the domain of science). By contrast, the original Comtean vision blurred the boundary between expertise as a scientific project and a political one (Fuller 2007). Derived from a belief in the superiority of expert judgement over both the sovereign and the populace, technocracy in this sense would give experts the responsibility for value judgements _as well as_ factual ones.\n\nThe normative case against technocracy \u2013 in the sense of simply leaving policy decisions in the hands of experts without political or public oversight \u2013 is straightforward. Technocracy assumes elite privilege, the notion that a class of people is somehow specially equipped with the powers of reasoning required for decision-making (Brown 2009; Kitcher 2001). In contemporary discourse, it is rare to find articulations of a pure Comtean vision. Rather, the Comtean inclination _co-exists_ with the Weberian in a way that allows technocratic advocates to clearly separate politics as arbiter of value-based choices and science as provider of facts (which may be subject to revision, but which is held to be the most reliable knowledge available at any point in time).\n\nFor example, Cox and Ince (2012) observe that politicians are free to disregard scientific findings and recommendations, but they then argue that politicians must clarify in scientific terms what it is they are disregarding and why. They acknowledge that policy responses to problems identified by science are not just scientific ones since they have moral, geopolitical and economic elements \u2013 but they see policy and wider political debate on value judgements and choices for action coming _after_ science has supplied the (value-neutral) facts. The typical argument here is: politicians can choose what to do about climate change or decide how to regulate GM crops, _but_ they must proceed on the basis of 'sound science', which confirms that the risk of GM crops is low and that of climate change is high. This position accommodates decisionism in one sense (elected politicians have ultimate responsibility for policy decisions in existing democracies), and a version of technocracy based on the virtues of 'sound science' rather than of a group of people who are supposed to embody science. Appealing to elite privilege is no longer needed, it appears; instead, we have technocracy-without-technocrats, reconciled with the demands of a democracy-of-elected-rulers.\n\nOn occasion, the technocratic\/decisionist nexus has translated into campaigns for getting more people with scientific training into Parliament, the logic being that these political agents would also be scientific agents and hence able to make 'science-based' decisions. 
This has the virtue of at least partly recognizing how science is embodied. But neither version of contemporary technocratic philosophy engages adequately with the implications for policymaking of scientific disagreements and value judgements within science, nor does it consider the limits of decisionism if this means simply leaving decisions to politicians. Before exploring this further, we might ask: are there normative alternatives beyond technocracy or decisionism?\n\nA third model is commonly characterized as _populism_ , usually by way of critiquing the concept of public involvement in policymaking rather than advancing it. STS scholars have been charged from within the discipline with failing to stand up to the dangers of populism (e.g., Collins _et al._ 2010) and from outside with promoting 'relativist\/postmodernist' approaches to knowledge (Kuntz 2012; Taverne 2005). Later I will show why this is a misreading of STS contributions; for the moment, let us consider the normative case for involving the public at all in 'science-based' policymaking.\n\nOver twenty years ago, the American political scientist Peter May set out the problem of what he called policies-without-publics (May 1991). May argued that technical matters such as natural and technological hazards, biodiversity, endangered species and innovation policy carried few incentives for the formation of organized public groups to participate actively in debate. Perceived as remote from everyday concerns, and with high costs of participation for limited benefit, these domains were dominated by scientific\/technical communities, with policy discussions occurring in the 'backwaters of political institutions' (May 1991: 194). Yet, far from being a technocratic paradise, policy domains without publics lacked adequate mechanisms for identifying key issues, defining problems and designing policies. For May, actively engaged publics were essential for policymaking \u2013 even in domains permeated by scientific and technical matters. His argument was less about concerns over elite privilege and more about the _substantive_ matter of the adequacy of knowledge for decision-making. Writing in the STS tradition but drawing from Dewey, Marres (2007) makes a similar argument that public involvement in policymaking is important for issue formation. These are not populist arguments in the sense of just leaving decisions to 'the people' but rather arguments for acknowledging the factual and value-based complexity of policy matters.\n\nIndeed, despite their prevalence, all three models \u2013 technocracy, decisionism, populism \u2013 are easy to attack insofar as they rest on the misleading language of 'leaving decisions' to one group of agents or the other. In the rest of the chapter, I will elaborate on why STS offers normative insights for making a place in policymaking for science without technocracy, the public without populism, and the political process without decisionism. 'Science-based' democratic policymaking may well be a good model to aspire to \u2013 so long as we are clear about what we mean by science and by democracy and how these are embodied in practice. To unpack this, the next section considers insights from over thirty years of STS research on science and policymaking.\n\n## **Uncertainty and the hybrid object of science-in-policy**\n\nSTS insights on science-in-policymaking are rooted in controversies over risk assessment in regulatory policy, especially around environmental issues. 
Indeed, May's account of the absence of publics around environmental regulation is at odds with numerous experiences detailed in this literature of how science has been openly contested rather than left in institutional backwaters, generating uncertainty about the validity of specific scientific\/evidentiary assertions used to underpin decisions. These studies also highlight why the public legitimacy of science advice and policymaking has been called into question for failing to engage adequately with the basis for these contestations. Appeals to 'sound science' have been problematized in the process. As I detail below, the key insight from these studies concerns how uncertainty has been managed rather than the fact that it exists.\n\nFrom the 1970s, at least in the USA, public debate was amply evident around processes for regulating potential environmental hazards (Jasanoff 1990). Rather than providing the basis for public agencies to depoliticize decision-making, science became embroiled in politics as citizen-groups challenged the evidence used to make policy on hazards of industrial chemicals, nuclear power plants, construction of airport runways and other ostensibly esoteric matters. Likewise, industry groups who questioned the need for regulation by using rival scientific studies to challenge the evidence of hazard offered also politicized science, a pattern that persists today. Discussions over the credibility and policy implications of different scientific studies that might have been expected to remain within closed circles were made public by groups from different sides of the political spectrum.\n\nBy contrast, systems of regulation in Europe at the time seemed to fit the closed, depoliticized policymaking style described by May, but these too came to be challenged. For example, Horlick-Jones and de Marchi (1995) warned that high-profile controversies were eroding the credibility of science advice and threatening to transform European-style negotiated decision-making into US-style gridlock. The link acknowledged in 1996 between a fatal disease in cattle (BSE) and its variant in humans (v-CJD) provoked widespread criticism of earlier government attempts to downplay the risk. As well as BSE, controversies involving the role of science, science advisors and organized publics in policies related to pesticides, genetically modified crops, vaccination, climate change, classification of drugs and other issues have opened up the question of the role that science plays and should play in policymaking. In this context, there are concerns that science is being politicized, with evidence 'cherry-picked' to suit specific ideological interests and uncertainties exaggerated without sufficient warrant.\n\nEarly STS work highlighted several examples of such deliberate\/intentional politicization of science-in-policy, but located this outcome in the nature of science itself. Relevance to policy destroyed the mechanisms by which scientists achieve consensus (Collingridge and Reeve 1986), and the simultaneous increase in policy involvement by scientists accompanied by a declining capacity to resolve political conflict became a central paradox (Weingart 1999). The point here is that science requires some level of containment or suspension of critique in order for knowledge claims to be certified; these claims retain authority insofar as their underlying assumptions remain tacit. 
But when the political stakes are high, the likelihood of these assumptions being scrutinized and contested is also high, if only because it is possible to make different assumptions and draw different inferences (Collingridge and Reeve 1986). When stakes are high, the relationship between science and policy becomes 'over-critical', i.e., characterized by endless debate between competing interest-groups over technical points. Sarewitz (2004), however, traces the difficulty of grounding policies in science to an 'excess of objectivity' rather than to political differences alone. This arises from the internal complexity of science (and ultimately, nature itself), since, for example, molecular biologists and ecological scientists study the hazards of GMOs differently, with different criteria for what counts as a hazard and as evidence that one exists. For Sarewitz too, the difficulties of 'science-based' policymaking arise from a mistaken view that science _can_ provide a firm basis for decisions.\n\nHowever, others who share the emphasis on the tacit assumptions that underpin science (and which, when made explicit, are potentially open to contestation) highlight the distinctive nature of 'scientific' work taking place at the interface with policymaking. Rather than making a case for limiting the role of science _per se_ , these authors make one for limiting the role of _scientism_ (Wynne 2010). This means paying attention to how uncertainty, scientific disagreement and plural inputs are framed and managed in the process of generating knowledge for policy. Independent advisory committees constituted by the state, research divisions within government departments and trans-national advisory bodies such as the Intergovernmental Panel on Climate Change (IPCC) produce 'regulatory science', which is different from academic science in that it is specifically generated to answer policy questions. Informed by such concepts as boundary work (looking empirically at how boundaries between science and non-science are drawn in different instances, often using different standards) and framing (asking how a policy question and the role of scientific knowledge in addressing that question are implicitly defined in some ways as opposed to others), this research highlights ways in which regulatory science can be understood as a hybrid of factual and value-based judgements (e.g., Irwin 1995, 2008; Jasanoff 1990; Wynne 1992).\n\nFor example, in the case of climate science and policy, the knowledge claims at stake are different from purely scientific ones (e.g., how has global temperature changed over time? what is the relationship between carbon dioxide and global temperature change?). In science-in-policy, the questions are instead: how significant is the relationship between carbon dioxide and temperature change for policy? What sort of standards and actions are appropriate? Who bears the burden of proof, or how is the burden distributed? How should the knowledge claims of different parties be engaged? These questions are hybrids of science\/politics and cannot be addressed entirely by either side (Jasanoff 2013).\n\nIn practice, regulatory science has not always acknowledged or engaged sufficiently with different claims from publics, with underlying value-based assumptions in scientific assessments, or with the limits and conditions of expert judgements (Irwin 1995; Jasanoff 1990; Stilgoe _et al._ 2006; Wynne 2010). 
For example, where probabilistic judgements are involved, differences between using frequency data, where these exist, and Bayesian judgements (which are based on expert interpretations) have invited scrutiny. Likewise, different value judgements are involved in preferring Type 2 (false negative) versus Type 1 (false positive) errors in statistical hypothesis testing of hazards (Jasanoff 1991). This type of conditionality of claims is often invisible in invocations of 'sound science'. For example, Irwin (1995) showed that, for farmworkers, the 'recommended' method of spraying the 2,4,5-T pesticide assumed by the Advisory Committee on Pesticides (ACP) simply bore no resemblance to conditions on the ground. The ACP argued that there was no evidence to link the miscarriages\/birth deformities highlighted by workers to pesticides, but this judgement was based on laboratory conditions that were radically different from working conditions. STS has therefore highlighted differences between risk (where the parameters are known and estimates can be made with confidence) and more complex situations of uncertainty, ignorance and indeterminacy, where these conditions do not hold (Wynne 1992).\n\nImportantly, this line of work also considers how uncertainty can be _produced_ and perpetuated by actors who insist on absolutist standards of 'scientific proof'. As scientists in the wider debate have argued, decision-making requires some closing down of uncertainty as well as due recognition of the tacit assumptions of scientific claims that might be open to challenge. But what tends to be missed is the point that uncertainty is only a problem for a technocratic philosophy that claims science can supply value-free facts as the basis for policy. Epistemic doubt is amplified where science bears such a burden, Wynne (2010) argues, at the expense of the broader public judgements required for collective decision-making. Some scientists have, of course, acknowledged the distinctive nature of science produced for policy. Indeed, the nuclear physicist Alvin Weinberg (1972) was one of the first to make the point with his concept of 'trans-science'. In the UK debate on drugs policy triggered by the government sacking its chief drugs advisor, David Nutt, science writer Sue Blackmore acknowledged that there was little in science that was 'pure fact'; the advisory committee in question necessarily engaged with judgements about what to count as harm and how to weigh different harms as well as physiological evidence. Drawing on STS work, climate scientist Mike Hulme (2009) makes a crucial point: scientific knowledge is transformed through its interaction with the social world, and what will count as knowledge for policy may be different from what counts as such for research\/academic science. What, then, are the normative implications of these insights into the nature of scientific knowledge (or more accurately, expertise) produced for policymaking?\n\n## **Back to decisionism? Or, epistemic governance as a problem of delegation**\n\nFrom the above diagnosis, different conclusions have been drawn on the role that science can and should play in decision-making. One response seems perilously close to decisionism; for example, the argument that if science is too complex and plural to be able to underpin policy decisions, policymakers must first sort out what are really value-based conflicts using the normal negotiating procedures of democratic politics (Collingridge and Reeve 1986; Sarewitz 2004). 
This seems to re-draw the boundary between factual claims and value choices, this time giving primacy to the latter and to a notion of politics that is stripped of the ability to draw on the former. By contrast, Collins _et al._ (2010) strongly underline the need for scientific expertise in policymaking, but still do so on the basis of a clean separation between the role for experts (provide facts) and the role for politics (adjudicate on values). Pielke (2007) acknowledges the difficulty of this boundary, but his normative vision \u2013 especially in domains where there is strong disagreement on both factual and value questions \u2013 is one of scientists being 'honest brokers' between different policy options with their varying knowledge bases rather than advocating a particular position.\n\nBut the return to decisionism fails to get to the heart of the idea that 'science' \u2013 in the sense of expert advice for policymaking \u2013 is supposed to be a form of reason, a hybrid of scientific and political or value judgement. In addition to transforming the received view of science-in-policymaking, this notion also provides the normative ballast to challenge politics as business-as-usual bargaining. Although bargaining may be the normal mode of governmental politics, the notion of politics assumed in the debate on science advice must allow for aspirations of legitimacy. Legitimacy necessarily involves normative judgements of better or worse decisions (not least from publics who may wish to challenge business-as-usual politics), judgements that rely on knowledge, as Ezrahi (1990), writing on the origins of liberal-democratic thought, clarifies. Ezrahi highlights a key philosophical tension in the eighteenth century between, on the one hand, the values of liberty, equality and Enlightenment notions of contingency, and, on the other, the case for order and governance. If absolutist rule was no longer justifiable in the absence of a divinely ordained vision of the world, and if people (male, property-owning) were free and equal and capable of disagreeing with each other, how then was order possible or even justifiable? How could a select few be legitimately responsible for governing the rest? Putting aside the exclusions embedded in the definitions of freedom and equality at the time, the answer for classical theorists of liberal democracy came from the Scientific Revolution. The notion of modern, experimental science as producing universal (depersonalized) knowledge through methods that were transparent to all made it possible to develop a case for liberal-democratic rule based on public knowledge accessible to the governed and on rulers' actions being observable by citizens.\n\nSo, the expectation that public knowledge informs the practice of government and that this practice can, in principle, be evaluated by the public is central to the normative idea of liberal democracy. This means that the question of the proper role for science-in-policymaking cannot be resolved by giving primacy in Weberian-decisionist fashion to a fact-free politics in response to the tribulations of value-free notions of technocracy\/scientism. How, though, can the central problem for epistemic governance \u2013 namely, the uncertainty entailed by _multiple_ knowledge claims on a policy issue and the potential for the tacit assumptions in any of these to be contested \u2013 be addressed? The principles I outline below are threefold.\n\nFirst is the normative principle of _transparency_ in mechanisms for handling plurality of knowledge. 
Advisory institutions already have mechanisms for dealing with epistemic disagreement amongst experts, for better or for worse. These may differ across regulatory cultures, as Jasanoff (1991) shows in her case study of carcinogenic risk, highlighting the problems of overly formal, quantitative methods for resolving discrepancies between the different studies that needed to be aggregated to determine that risk. Rule-based risk analysis attracted criticism for false precision, for the failure to be transparent about the value judgements made in statistical assessments and, ironically, for being insufficiently scientific in its inability to respond to new knowledge. By contrast, the informal, negotiated methods of British regulatory culture have also been critiqued for failing to consider public and scientific challenges to established risk assessment based on animal models in toxicology for the regulation of industrial chemicals (Diggle 2007). This again underlines the need for reasoned judgement and for this judgement to be more transparent to external scrutiny. Hulme (2009) notes that the use of subjective (Bayesian) assessment to estimate the risk of future events in the face of uncertainty is an accepted method in science; likewise, when stakes are high and there is a need for timely decisions to be made, methods to develop consensus amongst conflicting expert assessments may be entirely justified. But the substantive legitimacy of such resolutions of uncertainty rests on making their basis public, even if transparency cannot guarantee the closure of controversy \u2013 outcomes in practice will rest on how different political actors mobilize in response.\n\nSecond, normative principles including transparency are necessarily interpreted in the _context_ of judgement about what the public interest is. There may be good reasons to limit or challenge demands for transparency if these arise from purely private-corporate interests oriented towards the deliberate deconstruction of scientific evidence to ward off regulation (see Jasanoff 2006). In addition, criteria for what counts as good judgement will vary across political systems (Jasanoff 2005), though these can in turn evolve through experience and debate, as we see in the UK case. An interest in normative principles does not mean that these differences should be erased.\n\nThird, and most important, focusing on experts rather than 'science' in policymaking brings to the forefront the _delegation_ of specialist judgements to experts, on a par with more widely recognized forms of delegation of decisions by the public to elected representatives. Expertise is a 'form of delegated authority' (Jasanoff 2003a), which means that those involved in producing regulatory science through their role in advisory bodies and science-based administrative agencies are specifically authorized to produce knowledge in the public interest. This delegation of expertise in turn invites a series of political questions similar to those raised around more familiar administrative responsibilities in liberal-democratic societies (Brown 2009; Guston 2000; Jasanoff 2003a). How adequate are the structures through which delegation is institutionalized? How do agents to whom specific roles are delegated fulfil their role, or how _accountable_ are they to the system from which they derive their authority to offer advice?\n\nIf public knowledge for policymaking has been delegated to experts, to whom are experts accountable? 
Insofar as the debate around science-in-policymaking has been framed in terms of 'science' or 'evidence', the typical response is that experts are ultimately accountable to Nature (thus making it difficult to see where the public fits in), while politicians are authorized by 'the people' to act on their behalf (Cox and Ince 2012). But if regulatory scientists produce knowledge to address questions of public and policy interest, these experts are indeed accountable to the public. The idea is not that a representative sample of publics must be on advisory panels, but that publics have the right to speak back, to challenge, to articulate different positions, to raise alternative interpretations of the evidence, or to rein in the power of experts. The public do not give up that right \u2013 they 'grant to experts a carefully circumscribed power to speak _for them_ on matters requiring specialized judgement' (Jasanoff 2003a: 158, italics mine).\n\nIn sum, expert advisors straddle the science\/politics interface insofar as they are capable of speaking authoritatively on the public good, though they may sometimes fail in this task (in the sense of speaking only for a limited and contestable vision of the public good, e.g., 'what is good for UK plc') or make unwarranted assumptions about what publics want or will accept. In the course of this, they are called upon to 'open up' science and scrutinize the basis of conflicting knowledge claims, but also to 'close down' some of this uncertainty in delivering their judgements for policymaking. How they perform and manage this tension lies at the heart of the normative challenge of epistemic governance. Unlike scientists, expert advisors are required to engage with challenges by publics who may emerge around specific issues (though publics are not necessarily expected to be represented on all expert committees, as caricatured in discussions of public participation). Where they have focused on too narrow a range of scientific and public inputs, they have been found wanting; likewise, their role has been problematic when they have failed to adequately bound the problem of scientific disagreement\/uncertainty for policy (in their role _as_ experts, not as academic scientists for whom endless disagreement is not, in principle, a problem). So, if the problem of epistemic governance is one of delegation of specialist knowledge to experts, then the issue on which STS sheds light is the nature and manner of the public accountability of experts. Accountability in the deliberative democratic tradition is understood in terms of giving public reasons for judgements which demonstrate how multiple perspectives have been engaged with. Hence, 'to the extent that public challenges to expert claims receive an adequate response, such challenges may actually strengthen expert authority' (Brown 2009: 217).\n\n## **Conclusion**\n\nRobert Dahl, the distinguished theorist of democracy, defines delegation as a 'revocable grant of authority, subject to recovery by the demos' (Dahl 1997: 114). Building on an emerging political theory of expertise in STS, I have argued that it is this issue which has been at the heart of recent normative controversy around the role of science-in-policymaking. 
Although STS research highlights scientific uncertainty and articulates a case for opening up science-in-policy to wider public input, this analysis has been carried out in the context of how 'regulatory science' has worked in practice and how it _ought_ to work in order to fulfil a key role at the interface of science, the public and policymaking. This is fundamentally a question of the necessary delegation of specialist knowledge to experts, the nature of that knowledge, the question of who counts as an expert, and the need for government science advisors in liberal democracies to recognize and engage with multiple criteria for knowledge (including _within_ science).\n\nIn conclusion, I turn to key dilemmas that arise in response to this account of epistemic governance as a hybrid of science, politics and publics. First, does the notion of delegating knowledge for policymaking to experts imply a return to assumptions of elite privilege? If, as I have argued, delegation rests on the public having the right to speak back, elite privilege is not an issue. Rather, delegation rests on the inevitable epistemic division of labour in complex societies and on a political structure for governing this system of obtaining specialized knowledge for policymaking.\n\nSecond, the case for public engagement around science-based policymaking has been challenged on very different grounds: one, that in the absence of clear limits, science is devalued and governance degenerates into populism (Collins _et al._ 2010); two, and on the contrary, that it idealizes 'democratic citizens', many of whom are unable to inhabit the role of scrutinizer of science (Kerr 2004); and three, that it depoliticizes governance by seeking to tame genuine conflicts (Thorpe and Gregory 2010). From the perspective set out in this chapter, public engagement involves far more \u2013 and less \u2013 than implied by these very different critiques. The concept is far less radical than implied by a cartoon-vision of participation in which citizens are put in the position (or given the burden) of deciding on matters of scientific fact, including those on which they have little knowledge. But it is more radical in the sense that more is demanded of expert committees to whom epistemic judgements are delegated. Expert committees are called upon to become more diverse 'mini-republics' (Brown 2009; Jasanoff 2003a), to consider different ways of framing the issues at stake and interpreting the facts that matter (Jasanoff 2003b), and to negotiate collective judgements which clarify how they reconcile different ways of conceptualizing the public interest. None of this means that publics decide or that these mini-republics absorb \u2013 and therefore displace \u2013 political conflict. Rather, the notion of public engagement helps articulate the essential nature of _public_ delegation of expertise to authorized groups, the demands arising from this relationship, its limits and its possibilities.\n\nThird, the focus on scientific advisory systems which sift the knowledge that has already been produced by science does not address the fundamental question of how science itself is to be organized. For example, some argue that there needs to be greater political representation in decisions about the kind of scientific knowledge to fund in order to cope with the systemic marginalization of some questions (Brown 2009; Fuller 2007). Fourth, and related to the above, is the question of how political institutions shape the independence of expertise. 
So, is it even possible for experts to be independent in a depleted public sector and in the context of demands for science to contribute to 'UK plc'? If scientists conduct their work in partnership with state and corporate interests, as Hind (2012) argues, can expertise be independent or impartial? Delegation only works when there is the possibility of holding those to whom authority is delegated to account. Formal accountability requirements only work if there is scrutiny by citizens and a response by officials (Brown 2009). But Hind's point is that the public needs the _means_ for this to happen, i.e., to form a public. In the present era, this is a tall order, but one that further reinforces the need for independent expertise.\n\n## **References**\n\nBeddington, J. (2011) 'We need both scepticism and consensus', _New Scientist_ , www.newscientist.com\/blogs\/thesword\/2011\/02\/science-scepticism-and-consens.html (accessed 30 August 2013).\n\nBrown, M. B. (2009) _Science in Democracy_ , Cambridge, MA: MIT Press.\n\nCollingridge, D. and Reeve, C. (1986) _Science Speaks to Power: The Role of Experts in Policy Making_ , London: Pinter.\n\nCollins, H., Weinel, M. and Evans, R. (2010) 'The politics and policy of the Third Wave: new technologies and society', _Critical Policy Studies_ , 4(2): 185\u2013201.\n\nCox, B. and Ince, R. (2012) 'Politicians must not elevate mere opinion over science', _New Statesman_ , www.newstatesman.com\/sci-tech\/sci-tech\/2012\/12\/brian-cox-and-robin-ince-politicians-must-not-elevate-mere-opinion-over-sc (accessed 30 August 2013).\n\nDahl, R. (1997) 'Procedural democracy', in R. Goodin and P. Pettit (eds), _Contemporary Political Philosophy: An Anthology_ , Chichester: Wiley.\n\nDiggle, R. (2007) _Regulatory science and uncertainty in the risk assessment of pesticide residues_ , PhD thesis, University of Nottingham, http:\/\/etheses.nottingham.ac.uk\/1451\/\n\nEzrahi, Y. (1990) _The Descent of Icarus: Science and the Transformation of Contemporary Democracy_ , Cambridge, MA: Harvard University Press.\n\nFuller, S. (2007) _New Frontiers in Science and Technology Studies_ , Cambridge: Polity.\n\nGuston, D. 
(2000) _Between Politics and Science_ , Cambridge: Cambridge University Press.\n\nHind, D. (2012) _The Return of the Public: Democracy, Power and the Case for Media Reform_ , London: Verso Books.\n\nHorlick-Jones, T. and de Marchi, B. (1995) 'The crisis of scientific expertise in fin de si\u00e8cle Europe', _Science and Public Policy_ , 22(3): 139\u2013145.\n\nHulme, M. (2009) _Why We Disagree about Climate Change: Understanding Controversy, Inaction and Opportunity_ , Cambridge: Cambridge University Press.\n\nIrwin, A. (1995) _Citizen Science: A Study of People, Expertise and Sustainable Development_ , London: Routledge.\n\nIrwin, A. (2008) 'STS perspectives on scientific governance', in E. J. Hackett, O. Amsterdamska, M. Lynch and J. Wajcman (eds), _The Handbook of Science and Technology Studies_ , Cambridge, MA: MIT Press.\n\nJasanoff, S. (1990) _The Fifth Branch: Science Advisers as Policymakers_ , Cambridge, MA: Harvard University Press.\n\nJasanoff, S. (1991) 'Acceptable evidence in a pluralistic society', in D. G. Mayo and R. D. Hollander (eds), _Acceptable Evidence: Science and Values in Risk Management_ , New York: Oxford University Press.\n\nJasanoff, S. (2003a) '(No?) Accounting for expertise', _Science and Public Policy_ , 30(3): 157\u2013162.\n\nJasanoff, S. (2003b) 'Technologies of humility: citizen participation in governing science', _Minerva_ , 41(3): 223\u2013244.\n\nJasanoff, S. (2005) _Designs on Nature: Science and Democracy in Europe and the United States_ , Princeton, NJ: Princeton University Press.\n\nJasanoff, S. (2006) 'Transparency in public science: purposes, reasons, limits', _Law and Contemporary Problems_ , 69(3): 21\u201345.\n\nJasanoff, S. (2013) 'The science of science advice', in R. Doubleday and J. Wilsdon (eds), _Future Directions for Scientific Advice in Whitehall_ , available at www.csap.cam.ac.uk\/media\/uploads\/files\/1\/fdsaw.pdf (accessed 21 September 2013).\n\nKerr, A. (2004) _Genetics and Society: A Sociology of Disease_ , London: Routledge.\n\nKitcher, P. (2001) _Science, Truth, and Democracy_ , Oxford: Oxford University Press.\n\nKuntz, M. (2012) 'The postmodern assault on science', _EMBO Reports_ , 13(10): 885\u2013889.\n\nMarres, N. (2007) 'The issues deserve more credit: pragmatist contributions to the study of public involvement in controversy', _Social Studies of Science_ , 37(5): 759\u2013780.\n\nMay, P. J. (1991) 'Reconsidering policy design: policies and publics', _Journal of Public Policy_ , 11(2): 187\u2013206.\n\nMillstone, E. (2007) 'Can food safety policymaking be both scientifically and democratically legitimated? If so, how?', _Journal of Agricultural and Environmental Ethics_ , 20(5): 483\u2013508.\n\nPielke, R. A. (2007) _The Honest Broker: Making Sense of Science in Policy and Politics_ , Cambridge: Cambridge University Press.\n\nProctor, R. (1991) _Value-free Science?: Purity and Power in Modern Knowledge_ , Cambridge, MA: Harvard University Press.\n\nSarewitz, D. (2004) 'How science makes environmental controversies worse', _Environmental Science & Policy_ , 7(5): 385\u2013403.\n\nStehr, N. and Grundmann, R. (2011) _Experts: The Knowledge and Power of Expertise_ , London: Routledge.\n\nStilgoe, J., Irwin, A. and Jones, K. 
(2006) _The Received Wisdom: Opening Up Expert Advice_ , London: Demos.\n\nTaverne, D. (2005) _The March of Unreason: Science, Democracy, and the New Fundamentalism_ , Oxford: Oxford University Press.\n\nThorpe, C. and Gregory, J. (2010) 'Producing the post-Fordist public: the political economy of public engagement with science', _Science as Culture_ , 19(3): 273\u2013301.\n\nTurner, S. (2003) _Liberal Democracy 3.0: Civil Society in an Age of Experts_ , London: Sage.\n\nWeinberg, A. M. (1972) 'Science and trans-science', _Minerva_ , 10(2): 209\u2013222.\n\nWeingart, P. (1999) 'Scientific expertise and political accountability: paradoxes of science in politics', _Science and Public Policy_ , 26(3): 151\u2013161.\n\nWynne, B. (1992) 'Uncertainty and environmental learning: reconceiving science and policy in the preventive paradigm', _Global Environmental Change_ , 2(2): 111\u2013127.\n\nWynne, B. (2010) 'When doubt becomes a weapon', _Nature_ , 466(7305): 441\u2013442.\n\n1. For example, the House of Lords Select Committee on Science and Technology published a widely cited report on Science and Society in 2000, in the context of controversy over the role of science advice in the BSE crisis and protests over GM crop trials. The think-tank Demos published several reports on science policy and specific areas of policy in the 2000s, as did a number of learned societies, including the Royal Society. In the 2010s, blogs on science policy matters appear regularly in the _Guardian_ and elsewhere on the internet.\n\n2. The status of social science as a 'scientific' way of understanding social phenomena is more contested even from within, but the broad acknowledgement that it involves or should involve systematic methods (thus meriting the status of specialist knowledge) is less controversial.\n\n3. For example, some of the evidence used by Coalition government ministers in the UK at the time of writing to justify economic austerity, social security cuts or sweeping changes in education has been challenged, but to no policy effect.\n\n4. The terms 'science-based' policymaking and 'scientific governance' are commonly used to signal this domain, but since STS scholars take issue with the received view of science and open up the question of how different knowledge claims should be engaged with in policymaking, 'epistemic governance' captures it better.\n\n5. Though their argument is perhaps more complex than suggested by the headline.\n\n6. It should be noted that the term 'technocracy' emerged only later, in the twentieth century, but its roots are usually traced to Auguste Comte and Henri Saint-Simon.\n\n7. Cox and Ince use the example of climate change, but see Wynne (2010) for a comparison of how science-based arguments for policy are articulated in the case of climate change as opposed to GMOs.\n\n8. For example: http:\/\/societyofbiologyblog.org\/do-we-need-more-scientists-in-parliament\/ (accessed 21 September 2013).\n\n9. In keeping with the pluralist tradition in political science, he defined publics (rather than 'the public') as 'professional associations, producer groups, consumer groups, trade groups, public interest groups, neighborhood groups, or other groups surrounding common issue interests' (May 1991: 190).\n\n10. May cited earthquakes, mass-produced vaccines and household radon as examples of hazards. He also added 'budget policy', an area defined by economics, but one I am bracketing for reasons of space.\n\n11. 
See Cox and Ince (2012); also the argument set out by the former UK Chief Scientific Advisor, John Beddington (2011).\n\n12. www.theguardian.com\/commentisfree\/2009\/nov\/03\/drugs-science-evidence (accessed 21 September 2013).\nChapter 2\n\n* * *\n\n# **Contingent participation**\n\nImaginaries of sustainable technoscientific innovation in the European Union\n\n_Mark L. Flear and Thomas Pfister_\n\n* * *\n\n## **Participation in the governance of uncertain futures**\n\nCitizen or public participation (these tend to be used interchangeably, see: Irwin 2006a: 299) is often valorised in the governance of technoscience, both as a good in itself and as an essential component of legal and regulatory decision-making. In this chapter we consider how citizen participation is contingent on imaginaries. Our case study for this investigation is sustainable development as it has been linked to and operationalised in the European Union's (EU's) broader agenda on technoscientific innovation. At least in the global West, democracy and citizenship are so deeply embedded that citizens and publics always matter and 'no participation' is not an option. Indeed, there is increasing awareness of a diverse citizenry and of publics voicing and often contesting elite claims about public interests and needs, for example with regard to the risks and benefits of new technologies or (their attendant) environmental hazards (Fiorini 1990). A particularly important reason for the focus on participation is the recognition that complex scientific and technological systems have repeatedly and increasingly been unable to prevent \u2013 and can even actually produce \u2013 accidents, disasters and risks (Beck 1986; Beck 2009; Giddens 1998).\n\nThis has prompted a turn towards the integration of techniques and practices into decision-making that together can be understood as constituting 'technologies of participation'. These are a means through which citizens regulate and are themselves regulated into producing (procedurally if not substantively) legitimate legal and regulatory decisions in relation to new and emerging technologies (Flear and Pickersgill 2013). As such, participation can be seen as a way of assuaging public fear and anxiety about the risks and ethics of technoscientific development, and of maintaining faith in progress and in the ability of law and regulation to govern in the face of uncertainty. Participation is also implicated in delineating the boundaries of responsibility, helping to distribute credit for success and, more importantly, accountabilities in the event of failure (Black 2005).\n\nAt the same time, faith in progress through the continued harnessing of technoscience, and in enhanced global development, has also been frustrated by the idea that this might not be _sustainable_ , impelling a turn to the multi-faceted and essentially contested concept of 'sustainable development' (Davidson 2011; Dresner 2002; Irwin 2006b; O'Riordan 2009). Here, we examine how this concept is defined, detailed, and mobilised in the context of the EU. The latter develops its approach from the widely cited and influential definition found in the Brundtland Report (Brundtland), which famously describes sustainable development as being about ensuring that humans can meet 'the needs of the present without compromising the ability of future generations to meet their own needs' (World Commission on Environment and Development 1987: 43). 
By contrast, the second part of the definition is regularly neglected:\n\n> [sustainable development] contains within it two key concepts: the concept of 'needs', in particular the essential needs of the world's poor, to which overriding priority should be given; and the idea of limitations imposed by the state of technology and social organization on the environment's ability to meet present and future needs.\n> \n> (ibid.)\n\nWithin the EU, sustainable development has been increasingly linked to technoscientific innovation, producing a specific framing: 'sustainable technoscientific development'. Drawing on non-technical narratives and visual materials, we outline the construction of this frame and how it produces and cements a gradual move away from Brundtland's emphasis on global equity and the limits of economic growth. In doing so, we highlight imaginaries \u2013 collectively held symbolic views and architectures that help to manufacture and maintain social purpose and order, and which thereby help to legitimate formally accountable power \u2013 in particular the frame itself, which is aimed at producing a _knowledge-based economy_ inhabited and seemingly produced by its _innovator citizens_. Crucially, these imaginaries narrow the conditions of possibility for participation in this area, rendering it as a private productive activity in the marketplace. Innovators are ranked above consumers and 'others', and the image of a united Europe aiming for sustainability paradoxically limits the space for democratic-political debate in governance.\n\n## **Technologies of participation and imaginaries**\n\nThe sheer diversity of the techniques and practices that together constitute technologies of participation prevents more than a limited overview. Participation can occur _ex ante_ or 'upstream' (Mandel 2009; Tait 2009; Wilsdon and Willis 2004), in making the decision itself, or _ex post_ , such as when technoscience enters the marketplace and can be purchased by consumers. It includes attempts at public deliberation and involvement that can be more or less extensive, ranging from input into technoscientific development and agenda-setting ahead of the decision to regulate, to public relations exercises that communicate governance efforts either alongside of \u2013 or perhaps in formally _making_ \u2013 the decision. Wherever it occurs, participation can be broadly active or passive.\n\nMoreover, the participants comprising 'the public' can be rendered as citizens within different models, such as 'citizenship-as-rights', often favoured in liberal accounts of formal citizenship in which the equal status of citizens as rights-bearers is stressed, or 'citizenship-as-participation', inspired by civic republican ideas of participants who genuinely share in power in order to shape and steer governance (for discussion see: Pfister 2011). Participants can also be rendered as _inter alia_ citizens, stakeholders, laypersons and consumers. These latter roles are not necessarily synonymous or coterminous with citizenship even as they are related to it, at least in terms of being part of 'the public'. Perhaps especially within liberal accounts of democratic decision-making, in which it is delegated to trusted elites wielding scientific and technocratic knowledge and expertise (Flear 2009), the figures constituting 'the public' \u2013 and especially those not explicitly termed 'citizens' \u2013 function to quell contestation, produce public legitimacy and maintain a functioning economy. 
Overall, the use and deployment of participation is contingent upon its design and configuration in relation to increasingly complex, interlaced, and decentralised governance arrangements (Gottweis 2008; Yeung 2008).\n\nLike participation, imaginaries, including those of science and technology, are an important key to the success and perpetuation of a central repository of formally accountable power, the nation-state, for and by which they have long been established, utilised and normalised (Anderson 1983; Ezrahi 1990; Scott 1998). Imaginaries provide symbolic (in contrast to formal) accounts of social order and political identity as well as of the shared history and common goals that bind a community together and legitimate political action. Latterly, scholars working within science and technology studies (STS) have highlighted the salience of imaginaries in attempts by nation-states to link their scientific and technological projects with their societies (through the sociotechnical) and use this to produce collectively imagined forms of social life and social order, so-called 'sociotechnical imaginaries' (Jasanoff and Kim 2009; 2013). In short, imaginaries are a key part of the symbolic and cultural foundations of political and legal-constitutional orders. Within this literature, there has also been an attempt to decentre nation-states and loosen their link to imaginaries through a focus on micro-social processes and how they emerge in transnational discourse (Pickersgill 2011).\n\nThrough our case study of sustainable development and technoscientific innovation we highlight imaginaries found in textual and non-textual elements, including videos and images (Jasanoff 2001). In doing so, we build on extant literatures in order to expand and augment the study of imaginaries within law and political studies, which have engaged with technoscience, but have hitherto paid little, if any, attention to imaginaries. We do so by (re)focusing on the relations between imaginaries and legal and regulatory decision-making, and specifically that of the EU (arguably the most successful non-national-state supranational polity), which has also been the focus of studies on participation. We highlight how imaginaries help to render the future legible and governable, enlarging the scope of power and the means of producing legitimacy and identity for 'Europe'. In addition, we seek to open a clear line of enquiry into how imaginaries help to determine who can and should participate in decision-making, including through configurations of the 'citizen' or 'public', how and why they are to participate, including by and through which knowledges, and the outcomes to be achieved.\n\nAttention to the EU is also useful in that it is explicitly founded on the objective of promoting European integration, and the decision-making capacity of its legal-constitutional order has been central to this project. This order draws on the historical and cultural heritage and imaginaries of European classical civilisation and, perhaps paradoxically, the subsequent nation-states that it purports to transcend. The EU also draws on related imagery of the nation-state, including a body of law and institutional practice (the _acquis communautaire_ ) produced by and embedded in institutions (especially the European Council, European Commission (Commission), Court of Justice of the EU, and the European Parliament), presidential roles (especially of the Commission and the European Council), a flag and an anthem. 
Together these are deeply resonant and potent ways of connecting with the disparate component peoples of Europe, figured \u2013 in another echo of European heritage \u2013 as not just publics, but also (EU) citizens.\n\nAt the same time, especially in the current era of austerity in the name of 'Europe', the EU's stateness and democratic legitimacy, as well as the coherence and togetherness of its people(s), seem to be even more limited. The EU is constantly searching for its political identity, as well as for stable and meaningful relationships with its citizens. Although the EU is struggling to draw on established imaginaries of national political orders and national political communities, it nonetheless continues to make huge investments in the production and mobilisation of (old and new) imaginaries of Europeanness, through its polity, shared future and a 'We' or demos. These are particularly salient with regard to sustainable development, which, like the EU's imaginaries, attempts to transcend traditional national conceptual frames and institutions. Yet, as we go on to show, the EU's imaginaries join sustainable development with technoscientific innovation in the frame of 'sustainable technoscientific development' for the symbolic production of its political and legal-constitutional order. This helps to legitimate the EU's actions and regulatory decisions, to distribute responsibilities and the credit for success, to limit accountabilities for failures and, as part of that, to enrol EU citizens in its political projects, while also configuring them in relation to decision-making through those technologies of participation.\n\n## **Sustainable technoscientific development and innovation in the EU**\n\nWe turn now to an overview of how sustainable development has been gradually linked with technoscientific innovation and the configuration of citizen participation in relation to it. As an initial point, although Brundtland has proven hugely influential, there are different formulations of sustainable development circulating within and between policy domains. For instance, the specific meanings attached to the main elements of economic growth (Ihlen and Roper 2011; Kambites 2012), the environment (Ross 2009) and global social justice (Barkemeyer _et al._ 2011; Deutz 2012), as well as their relative weight within the concept, can all differ. Sustainable development focuses less on specific uncertainties and risks and more on providing wide-ranging guidance on how to steer away from the current (unsustainable) trajectory. Moreover, sustainable development is nearly always formulated as a cross-cutting idea impacting, at least in principle, on all policy domains, including technoscience.\n\nThe EU included sustainable development among its objectives in 1997 with the Amsterdam Treaty, which amended the then European Community Treaty (now further amended and renamed the Treaty on the Functioning of the European Union (TFEU)). Moreover, several key policy documents refine the meaning of sustainable development and how it should be promoted by the EU \u2013 mostly in the context of the EU Sustainable Development Strategy (EU SDS), launched in 2001. The EU SDS consists of three main elements. First, it prioritises a list of key unsustainable trends requiring the most urgent action. Second, it addresses how the EU could promote sustainable development globally. 
Finally, the EU SDS promotes a new approach to policy-making in order to mainstream sustainable development in all policy fields and integrate economic, social and environmental policy. This final element is closely tied to the EU's 'Better Regulation' agenda, which promotes the political and administrative technology of (environmental) impact assessment as a way to produce coherent policy-making and facilitate mainstreaming.\n\nHowever, sustainable development has been increasingly narrowed through its link with technoscientific innovation within the overarching direction for EU governance provided by the European Council Lisbon Strategy (2000\u20132010) (Council of the European Union 2000; Armstrong 2008), which was followed by the 'Europe 2020' strategy (covering 2010\u20132020). The 2000 Lisbon European Council formulated the much-cited strategic objective to make Europe the ' _most_ dynamic and competitive, _knowledge-based economy_ in the world, capable of sustaining economic growth, employment and social cohesion' (European Council 2000, emphasis added). In fact, the _knowledge-based economy_ is not used just as a descriptive term but is also a powerful imaginary, which is constructed to gather diverse political actors around a vision of a better and achievable future (for academic formulations of this vision, see: Bell 1976; Drucker 1993). In order to promote this transition towards a knowledge-based economy, technoscientific innovation became the central ' _driver_ for the _production and exploitation of knowledge_ [making it] above all a _linchpin_ in the implementation of the Lisbon Strategy' (European Commission 2005: 2, emphasis added).\n\nOf particular note are some tensions within the basic political and legal-constitutional architecture, which have important consequences for the EU's notion of sustainable development and its related imaginaries, especially of citizens and their involvement. To begin with, at the legal-constitutional level, sustainable development is a fundamental, long-term objective of the EU as defined by Article 3(3) of the Treaty on European Union (TEU) and Article 11 TFEU. However, its importance as a central value is much less evident in the political strategies operationalising and giving life to the concept of sustainable development. For a start, the EU SDS has been designed only as a complementary process adding an environmental dimension to the Lisbon Strategy's economic and employment\/social policy dimensions, as also affirmed by the 2009 review (European Commission 2009: 13\u201315). In other words, sustainable development \u2013 a fundamental long-term objective of the EU \u2013 is subsumed under two subsequent strategies to boost growth and employment, which are more limited in scope as part of a medium-term objective of economic optimisation.\n\nThis narrowing of sustainable development was reinforced by the subsequent refocusing of the Lisbon Strategy on growth and jobs in 2005 (Kok 2004) and the adoption of the EU's Europe 2020 strategy in 2010 \u2013 a new ten-year strategy for economic growth, mainly formulated in response to the economic and financial crisis then (and still) engulfing the EU (European Commission 2010a; European Commission 2010b). The focus going towards 2020 remains on fostering growth, particularly through the exploitation of knowledge and the propagation of 'knowledge workers'. 
In this manner the EU claims that:

> the Europe 2020 strategy is about delivering growth that is: _smart_, through more effective investments in education, research and innovation; _sustainable_, thanks to a decisive move towards a low-carbon economy; and _inclusive_, with a strong emphasis on job creation and poverty reduction.

This promise is translated into quantitative targets in the areas of employment, innovation, education, poverty reduction and climate/energy. One important novel element of Europe 2020 is the seven flagship initiatives formulated by the Commission as concerted EU-level actions to realise the proclaimed combined notion of smart, sustainable, and inclusive growth.

The prioritisation of growth implies a further reshaping of sustainable development. For instance, the strategic objectives of a low-carbon economy and ambitious goals with regard to reducing CO2 emissions display strong environmental commitments. Yet the notion of sustainable growth, especially in the context of ongoing economic crisis, also implies growth that is resistant to future crises, rather than the reflexive and equity-oriented notion adopted in Brundtland. In addition, in the EU sustainability regime, environmental protection is to be achieved by _boosting_ economic activities instead of limiting them. For example, the EU's industrial policy agenda – one of two flagship initiatives to promote sustainable growth under Europe 2020 – is meant to 'focus explicitly on opportunities for sustainable growth and job creation while making the transition to more resource efficiency across industry as a whole' (European Commission 2010c). For this purpose, a main goal is described as developing 'the EU market for environmental goods and services' (ibid.).

The other flagship initiative under the heading of sustainable growth is 'Resource-efficient Europe'. Although it acknowledges natural limits, the overall tone of this flagship initiative is one of efficiency and growth potential. According to the Commission:

> increasing resource efficiency will be key to securing growth and jobs for Europe. It will bring major economic opportunities, improve productivity, drive down costs and boost competitiveness... This will help stimulate technological innovation, boost employment in the fast developing 'green technology' sector, sustain EU trade, including by opening up new export markets, and benefit consumers through more sustainable products.
> 
> (European Commission 2011: 2)

All in all, the political and legal-constitutional architecture comprising the TEU and TFEU, the EU SDS, the Lisbon Strategy and Europe 2020 suggests a very specific meaning of sustainable development in the EU. Sustainable development is not understood or operationalised as a far-reaching transformation of the economy in order to promote global and intergenerational justice in line with the limits and capacities of ecosystems (i.e. the Brundtland formulation). Rather, technoscientific innovation is figured as a necessary requirement for sustainable _growth_, _wealth_, and _employment_. In short, the sustainability of Europe is framed and understood as its persistent economic success, which should guarantee that it is resilient enough to manage large economic crises and the challenges arising from globalisation, demographic change and the transition towards a knowledge-economy.
These goals are to be achieved not on the basis of self-restraint, as implied by Brundtland, but rather through the greater efficiency and market leadership flowing from technoscientific innovation.

Based on how sustainable development is mobilised and embedded in the context of the strategies for growth and employment, a very specific frame and imaginary of sustainability in an integrated Europe comes to the fore: _sustainable technoscientific development_. Indeed, as we go on to outline, this works with the imaginary of the knowledge-based economy to order governance in a particular way and to shape the conditions of possibility for, and the configuration and design of, participation in relation to it.

## **The limits of participation in EU governance of sustainable development**

Citizen participation has played an important role in most conceptual formulations of sustainable development (Barry 2006). For example, although performing rather than claiming direct participation, Brundtland gave much space to citizens voicing their needs and understandings, which were collected in several public hearings. Similarly, when launching the EU SDS, the Gothenburg European Council underscored the importance of participation, albeit limited to 'consulting widely with all relevant stakeholders' and calling on the Member States 'to establish appropriate national consultative processes' (European Council 2001). In spite of this, and the more general idea that sustainable development is only achievable through widespread support and citizen engagement (Agyeman and Evans 2006), at the broader political and legal-constitutional level there is little if any explicit mention of participatory processes understood as citizens sharing in power through deliberation in order to shape and steer governance. This might be explained by the way in which sustainable development amounts to an add-on to the much larger process of facilitating sustainable growth through sustainable technoscientific development.

Moreover, the specific design of the EU SDS (which still exists alongside Europe 2020) limits opportunities for public involvement, since it is mostly based on technocratic and expert-led governance. Most importantly, there is a strong emphasis on environmental impact assessment. Such assessment is 'a process that _prepares evidence for political decision-makers_ on the advantages and disadvantages of possible policy options by assessing their potential impact'. In a move that seeks to underline this closed class of decision-maker (those who are 'political') as responsible and accountable, the focus is on '_improving the quality_ of new policy proposals... by making those _people who are responsible for policy development_ also responsible for assessing the impact of what they propose'. This is a tool based on expert analyses geared towards effectiveness and efficiency, but one which limits the scope for substantive citizen participation through input into governance in advance of decision-making.

In addition, although there is an apparent attempt at epistemic integration of such inputs – '[t]his approach ensures that all relevant _expertise within the Commission_ is used, together with _inputs from stakeholders_' – the consultation exercise limits and prefigures who can participate through a focus on 'stakeholders'. This suggests that what is 'at stake' has already been defined, as have, by implication, those with an interest in governance.
Indeed, this abets the delegation of decision-making to trusted decision-makers and the valorisation of those providing scientific knowledge and expertise, and implies a limited form of citizen, one that is less interested in, or even less capable of, sharing in power in order to shape and steer governance. Hence, while the resort to technical tools might mean that the 'Commission's system is both _accountable and transparent_', it makes 'political decision-makers' alone competent to assess supposedly value-neutral technical evidence and select the appropriate policy response, while also limiting citizen participation in decision-making to _ex post_ contestation. Indeed, the invocation of supposedly neutral expert knowledge as a support for decisions actually attempts to reduce – and arguably thwart – such involvement, which is also limited by the dearth of clearly defined avenues for feeding back into _ex ante_ processes and future decisions.

In a similar vein, there are technologies to measure and report progress with regard to sustainable development. In this regard, the Commission's progress reports are central (European Commission 2007), and they are facilitated by the gathering and production of data by Eurostat in accordance with sustainable development indicators. These in turn provide the basis for biennial monitoring reports produced since 2007 (Eurostat 2009a; Eurostat 2011), which provide 'an objective, statistical picture of progress towards the goals and objectives of the EU sustainable development strategy... [and which] underpins the European Commission's progress report on the implementation of the strategy' (Eurostat 2009b: 2). The first monitoring report also provided the basis for the 2009 review of the EU SDS (European Commission 2009). A supplementary form of monitoring is found in the use of National Sustainable Development Strategies. This is a 'process for voluntary peer reviews of National Sustainable Development Strategies, aimed at improving the sharing of good practices.' In other words, it is intended to strengthen the EU SDS overall, and aims at 'improving the coordination with other levels of governments'.

Overall, these different ways of gathering, producing and disseminating data are not only an important underpinning for governance, but also highlight the limited and particular understanding of participation. According to the Director-General of Eurostat: 'Impartial and objective statistical information is _essential for all decision-making_ and provides transparency and openness. Official statistics play a fundamental role in today's society and are essential for the development, implementation, monitoring and evaluation of EU policies' (Eurostat 2009b: 1). Yet this data, while valuable for demanding and contesting decision-making, highlights the centrality of technical processes and the privileging of experts – and, by implication, the marginal role of citizens and publics – in gathering, interpreting and using data and knowledge for decision-making (a longstanding and core element of EU decision-making, as seen in: European Commission 2001). Environmental policy within a sustainable Europe thus rests on imaginaries of such policy as technical, complex, and not in need of public political deliberation.
This also affects how citizen participation is understood and configured in relation to governance, with it again being limited to _ex post_ contestation based on the information published.

In sum, citizen participation hardly appears in the EU's strategies and policies on sustainable development (European Commission 2012). More specifically, although there are references to certain participants (especially stakeholders), citizens per se are not provided with a specific formal political arena where they can become involved in sharing power in order to deliberate the meanings of sustainable development or the definition – even broadly – of a sustainable integrated Europe. There is a distinct lack of clearly defined avenues for feeding back into _ex ante_ processes and future decisions. Yet, as we shall highlight, this does not mean that participation is completely absent from interventions in creating a sustainable Europe. In the following section, we focus on the Innovation Union, another Europe 2020 flagship initiative, which is subsumed under the priority of smart growth and which exemplifies the idea of sustainable technoscientific development as a focus of governance. This programme also exemplifies the third main imaginary discussed in this chapter: that of citizens and publics in the context of Europe 2020.

## **Imaginaries of a sustainable Europe, its publics and citizens in the Innovation Union**

Diagnosing an 'innovation emergency', the Innovation Union is aimed at 'refocusing R&D and innovation policy on major challenges for our society like climate change, energy and resource efficiency, health and demographic change' and 'strengthening every link in the innovation chain, from "blue sky" research to commercialisation'. European citizens appear on the Innovation Union website, for example, in terms of 'what it means to me', where the benefits of this political programme are explained. In short, all Europeans are promised a higher quality of life in a greener Europe, with work in a smarter economy. Moreover, the Innovation Union is claimed primarily and directly to improve the working lives and opportunities of entrepreneurs and knowledge workers.

Much more interesting, however, are attempts to engage those perusing the Innovation Union website through a significant number of playful and entertaining elements that project strong images and messages. For example, the online game 'Innovation Union Lab' invites the public to hunt for 'innovation treasure-spheres in a futuristic eco-city' – with 'Augmented Reality' if users so wish. Moreover, the e-book 'Tales from the Future' (Billing 2011) assembles four short stories about a highly appealing future where, for example, skin cancer is cured by gene therapy and surgical 'nanobots'. Each story is followed by a short description of several EU-funded projects whose innovations could bring about the world described.
Finally, the collection closes with a three-page interview with the Commissioner for Research, Innovation, and Science, who, while claiming that 'research and innovation are the key to meeting [the most crucial] challenges' to our societies (Billing 2011: 57), also states that the 'Innovation Union aims to involve everyone in innovation' (Billing 2011: 60) – not only researchers and entrepreneurs but also public authorities, civil society organisations, and communities.

This aim of involving citizens in the 'European' task of innovation is also projected by videos on the Innovation Union website, the majority of which consist of short documentary clips presenting European innovators (all working in EU-funded projects) and their inventions, ranging from a remote-controlled fire-fighting tank to carbon nanotubes. In addition, the website features videos of a more abstract and symbolic, rather than documentary, character. For example, in one clip ten (mostly young) people – apparently citizens from different countries of the EU – present 'their' definition of innovation. The various meanings offered range from innovation as 'change' to the 'power to imagine the future' to the allegorical 'first warm and sunny spring day after a long cold winter'. Apart from the last, atmospheric example, all the definitions presented share a striking feature: they focus on innovation as a personal characteristic (rather than as an ideational or material product). Projected as a citizen's virtue, innovation obtains ethical qualities and as such helps to demarcate a sphere of self-governance, with its attendant responsibilities, that is distinct from, albeit interactive with, the broader governance strategy aimed at prosperity, welfare, and environmental protection.

A similar verbal and symbolic language is employed in two videos named _Innovation Union Act 1_ and _Innovation Union Act 2_, although these dispense with speaking agents altogether. The first video follows three people – one could be a planner or architect, one seems to be a natural scientist in the lab, the third could be a researcher working at his laptop – who are literally 'drawn' into their technoscientific innovations. For example, the architect enters a futuristic city, which looks very green, very clean, and technically advanced. In the second, _Innovation Union Act 2_, the same three people appear only briefly and are then followed by a succession of flashing words ranging from 'zero emissions' and 'greener cities' to 'creativity', 'expertise', and 'jobs' to 'improved health', 'better ageing', and 'better life'. Importantly, all the terms are positive, but their scope and positive connotations increase towards the end until the already comprehensive 'better life' is followed by an ultimate, catch-all concept: 'innovation'. In the process, on the basis of the visual and spoken language used in these videos, innovation itself becomes an iconic label and a synonym for all the positive qualities of a sustainable and technoscientific Europe.

Indeed, a main category and component of the public in the context of the Innovation Union is what we call _innovator citizens_: individuals who do not necessarily need to be in possession of formal (EU) citizenship but who have technoscientific knowledge and expertise. They are responsibilised into using it – to innovate!
– in coming to understand _themselves_ as innovators who work with a wider community of innovators and knowledge workers, and who produce _for others_, in order to create a sustainable European economy. By contrast, those who are not given, or are incapable of fulfilling, such specific roles – including those who are unable, unwilling or failing to produce innovations – are effectively not rendered as full citizens in this specific imaginary. These components of the public are to be educated into technoscientific literacy and consumption, while those who are incapable of participating as such are rendered even more marginal. The Europe inhabited by innovator citizens is the private market sphere of the knowledge-based economy, where they build a sustainable Europe through their technoscientific innovations. Political power, deliberation, and contestation as central aspects of social order are, however, entirely absent from this imagined 'society'.

## **Discussion**

In this chapter, the area of sustainable technoscientific development provided our case study for highlighting how imaginaries help to provide the conditions of possibility for, and shape the design of, technologies of participation and their relation to decision-making. We highlighted three imaginaries found in textual and non-textual elements of the EU. The imaginary of the _knowledge-based economy_ fuses sustainable development to technoscientific innovation, thereby limiting it to being about economic optimisation in the imaginary of a Europe of _sustainable technoscientific development_. Within this frame, those publics in possession of knowledge and expertise are produced and enrolled in another imaginary as private _innovator citizens_ who become complicit in the EU's project of rule. Consumers are implicit in this imaginary, in that they literally consume the ideational and material products of technoscience produced by innovators.

Importantly, this public – especially innovator citizens, stakeholders (noted above) and consumers – does not explicitly encompass citizens understood as those who are fully capable of sharing in power in order to shape and steer governance. The limited nature of those explicitly figured as participants eschews this fuller notion of citizenship, implying that not all citizens can contribute knowledge to decision-making equally. Those who are not innovator citizens or stakeholders are limited to _ex post_ and (arguably) passive participation as consumers of the ideational and material products of decision-making. Moreover, the contribution of stakeholders is limited to participation in consultations, while the activity of innovator citizens occurs in the private, self-governing, depoliticised knowledge-based economy – it does not extend to participation _in_ decision-making itself or, apparently, in _ex ante_ processes.

There is little formal institutionalisation of _democratic-political_ participation in this area, but attention to imaginaries helps to explain this situation and reveals the privileging of experts in the gathering, production and interpretation of technical data. This is used to extrapolate imaginaries of an innovative and (technoscientifically, economically, and socially) optimised future facilitated by the EU.
Moreover, in a neoliberal vein, this Europe is a depoliticised space of (private) action, creation, production, and consumption, in which the agents imagined are exhorted to self-governance and responsibilised as such: towards the production and harnessing of technoscientific knowledge and expertise, or the consumption of its ideational and material products, for economic optimisation. This is deemed essential to sustainability, but sustainability understood as transcending current limits and propelling further, deeper and ongoing economic growth by and through innovation.

Since the knowledge and expertise available to innovator citizens is defined as being of use merely to their self-governance and, through that, the economy, their importance and potential contribution to decision-making is frustrated – and that of consumers, and of those who cannot participate as such, even more so. Indeed, this implies that Europe's citizenry as a whole is stymied from engaging in public questioning of governance options, and from the potential transformation of an innovation-centred Europe. In short, the collective imaginary of a united and sustainable Europe is not up for debate among its citizens.

Nevertheless, or rather precisely because of this, when understood as providing the conditions of possibility for, and as constituting, technologies of participation, the role of imaginaries in the production and perpetuation of the EU's order becomes clearer. Configured in relation to decision-making, imaginaries not only shape participation; they also play a key role in distributing and sharing responsibilities, the credit for success, and the accountability and blame for failure, and in legitimating and publicly sanctioning the predetermined imagined future of Europe, while undermining or even precluding the possibility of others. There is, therefore, a need for further attention to the mutual constitution and design of imaginaries, participation and the broader technologies of decision-making.

Overall, our analysis highlights imaginaries, and the struggle over meaning that they represent, as a crucial site for legal and regulatory design, the politics of new and emerging technoscientific innovations, and the future(s) the EU has in mind for Europe. Making imaginaries visible is not just an analytical exercise: exposure is a precondition for disruption. As such, it creates both an avenue for participation and the possibility of formulating alternative imaginaries, knowledge domains and sustainable futures within and outside of formal governance.

## **Acknowledgements**

We are grateful to the editors of this collection for their encouragement, comments and suggestions, as well as to Thérèse Murphy. Thomas Pfister would also like to thank Sheila Jasanoff for hosting him as a fellow in the Program on Science, Technology and Society at the Harvard Kennedy School (2010/2011), as well as for providing the intellectual environment in which many important thoughts for this research took shape. The chapter was written with the support of an ESRC Seminar Series grant (RES-451-26-0764) and a BA Small Grant (SG-48186) to Mark L.
Flear, and a DAAD scholarship to Thomas Pfister.

In the EU context see especially Article 9 TEU and Articles 20–24 TFEU, which establish EU citizenship and its rights.

'European Year of Citizens 2013': http://europa.eu/citizens-2013/

These areas of priority are: climate change and energy; sustainable transport; sustainable consumption and production; conservation and management of natural resources; public health; social inclusion, demography, and migration; global poverty and sustainable development challenges.

'Impact assessment': http://ec.europa.eu/governance/impact/index_en.htm

'Commission launches consultation on EU 2020: a new strategy to make the EU a smarter, greener social market': http://europa.eu/rapid/pressReleasesAction.do?reference=IP/09/1807. Emphasis added.

'Europe 2020': http://ec.europa.eu/europe2020/europe-2020-in-a-nutshell/priorities/index_en.htm

Worthy of mention in respect of the environmental aspect of sustainable development is the Aarhus Convention (1998), which provides for participation and has been operationalised in EU governance through secondary legislation. Given its focus on the environment it is, however, rather limited, even as it provides a 'way in' for participation in sustainable development more broadly.

'Impact assessment': http://ec.europa.eu/governance/impact/index_en.htm. Original emphasis.

Ibid. Original emphasis.

Ibid. Original emphasis.

Such as the 'High Level Group of National Regulatory Experts': http://ec.europa.eu/governance/impact/high_level_group/index_en.htm

Ibid. Original emphasis.
For online publication see 'Impact assessments': http://ec.europa.eu/governance/impact/ia_carried_out/ia_carried_out_en.htm

'Sustainable development indicators': http://epp.eurostat.ec.europa.eu/portal/page/portal/sdi/indicators

'National Sustainable Development Strategies': http://ec.europa.eu/environment/pdf/nsds.pdf

'Sustainable Development': http://ec.europa.eu/environment/eussd/

Ibid.

Beyond the EU SDS, participatory elements can also be found in individual elements of the Lisbon-oriented economic reform agendas, especially as part of (national-level) consultations on National Reform Programmes or in the context of specific open methods of coordination.
However, these consultations always focus on specific strategies and measures to implement common guidelines, for example in relation to employment policy, in which sustainable development is not an explicit topic, even though they are supposed to contribute to the objective of sustainable development.

'Innovation Union': http://ec.europa.eu/research/innovation-union/index_en.cfm?pg=why

'Europe 2020': http://ec.europa.eu/europe2020/europe-2020-in-a-nutshell/priorities/smart-growth/index_en.htm

'Innovation Union – what it means to you': http://ec.europa.eu/research/innovation-union/index_en.cfm?pg=whatitmeans

'Innovation Union lab': http://ec.europa.eu/research/innovation-union/index_en.cfm?pg=lab

'Augmented reality': http://ec.europa.eu/research/innovation-union/index_en.cfm?pg=ar

For all videos, see 'Videos': http://ec.europa.eu/research/innovation-union/index_en.cfm?pg=videos

## **References**

Agyeman, J. and Evans, B. (2006) 'Justice, governance, and sustainability: perspectives on environmental citizenship from North America and Europe', in A. Dobson and D.
Bell (eds), _Environmental Citizenship_, London: MIT Press.

Anderson, B. (1983) _Imagined Communities_, London: Verso.

Armstrong, K. (2008) 'Governance and constitutionalism after Lisbon', JCMS Symposium: EU Governance after Lisbon, _Journal of Common Market Studies_, 46: 413.

Barkemeyer, R., Holt, D., Preuss, L. and Tsang, S. (2011) 'What happened to the development in sustainable development? Business guidelines two decades after Brundtland', _Sustainable Development_, DOI: 10.1002/sd.521.

Barry, J. (2006) 'Resistance is futile: from environmental to sustainability citizenship', in A. Dobson and D. Bell (eds), _Environmental Citizenship_, London: MIT Press.

Beck, U. (1986) _Risk Society: Towards a New Modernity_, London: Sage.

Beck, U. (2009) _World at Risk_, Cambridge: Polity Press.

Bell, D. (1976) _The Coming of Post-Industrial Society: A Venture in Social Forecasting_, Harmondsworth: Basic Books.

Billing, R. (2011) _Tales from the Future_, Brussels: European Union, available at: http://ec.europa.eu/research/innovation-union/index_en.cfm?pg=tales

Black, J. (2007) 'Tensions in the regulatory state', _Public Law_, 58: 58–73.

Davidson, K. (2011) 'A typology to categorize the ideologies of actors in the sustainable development debate', _Sustainable Development_, DOI: 10.1002/sd.520.

Deutz, P. (2012) 'A class-based analysis of sustainable development: developing a radical perspective on environmental justice', _Sustainable Development_, DOI: 10.1002/sd.1528.

Dobson, A. and Bell, D. (eds) (2006) _Environmental Citizenship_, London: MIT Press.

Dresner, S. (2002) _The Principles of Sustainability_, London: Earthscan.

Drucker, P. (1993) _Post-Capitalist Society_, New York: Harper Business.

European Commission (2001) _European Governance: A White Paper_, COM (2001) 428 final.

—— (2005) _Building the ERA of Knowledge for Growth_, COM (2005) 118 final.

—— (2007) _Progress Report on the Sustainable Development Strategy 2007_, COM (2007) 642 final.

—— (2009) _Mainstreaming Sustainable Development into EU Policies: 2009 Review of the European Union Strategy for Sustainable Development_, COM (2009) 400 final.

—— (2010a) _Smart Regulation in the European Union_, COM (2010) 543 final.

—— (2010b) _Europe 2020 Flagship Initiative Innovation Union_, COM (2010) 546 final.

—— (2010c) _An Integrated Industrial Policy for the Globalisation Era. Putting Competitiveness and Sustainability at Centre Stage_, COM (2010) 614 final.

—— (2011) _A Resource-Efficient Europe – Flagship Initiative Under the Europe 2020 Strategy_, COM (2011) 21 final.

—— (2012) _Action for Stability and Jobs_, COM (2012) 299 final.

European Council (2000) _Lisbon European Council_, 23 and 24 March 2000. Presidency Conclusions, no. 5.

—— (2001) _Gothenburg European Council_, 15 and 16 June 2001. Presidency Conclusions.

Eurostat (2009a) _Sustainable Development in the European Union.
2009 Monitoring Report of the EU Sustainable Development Strategy_, Luxembourg: Office for Official Publications of the European Communities.

—— (2009b) _Sustainable development in the European Union_, Luxembourg: Office for Official Publications of the European Communities.

—— (2011) _Sustainable Development in the European Union. 2011 Monitoring Report of the EU Sustainable Development Strategy_, Luxembourg: Office for Official Publications of the European Communities.

Ezrahi, Y. (1990) _The Descent of Icarus_, Cambridge, MA: Harvard University Press.

Fiorino, D.J. (1990) 'Citizen participation and environmental risk: a survey of institutional mechanisms', _Science, Technology & Human Values_, 15: 226.

Flear, M.L. (2009) 'Editorial: an introduction to "New Technologies, European Law and Citizens"', _Maastricht Journal of European and Comparative Law_, 16(1): 3.

Flear, M.L. and Pickersgill, M.D. (2013) 'Regulatory or regulating publics? The European Union's regulation of emerging health technologies and citizen participation', _Medical Law Review_, 21: 39.

Giddens, A. (1998) 'Risk society: the context of British politics', in J. Franklin (ed.), _The Politics of Risk Society_, Cambridge: Polity Press.

Gottweis, H. (2008) 'Participation and the new governance of life', _BioSocieties_, 3: 265.

Ihlen, Ø. and Roper, J. (2011) 'Corporate reports on sustainability and sustainable development: "we have arrived"', _Sustainable Development_, DOI: 10.1002/sd.524.

Irwin, A. (2006a) 'The politics of talk: coming to terms with the "new" scientific governance', _Social Studies of Science_, 36: 299.

Irwin, A. (2006b) _Sociology and the Environment_, Cambridge: Polity Press.

Jasanoff, S. (2001) 'Image and imagination: the formation of global environmental consciousness', in P. Edwards and C. Miller (eds), _Changing the Atmosphere: Expert Knowledge and Environmental Governance_, Cambridge, MA: MIT Press.

Jasanoff, S. and Kim, S.-H. (2009) 'Containing the atom: sociotechnical imaginaries and nuclear power in the United States and South Korea', _Minerva_, 47: 119.

Jasanoff, S. and Kim, S.-H. (2013) 'Sociotechnical imaginaries and national energy policies', _Science as Culture_, 22: 189.

Kambites, C.J. (2012) '"Sustainable development": the "unsustainable" development of a concept in political discourse', _Sustainable Development_, DOI: 10.1002/sd.1552.

Kok, W. (2004) _Facing the Challenge: The Lisbon Strategy for Growth and Employment. Report from the High Level Group Chaired by Wim Kok_, Brussels: European Commission.

Mandel, G.N. (2009) 'Regulating emerging technologies', _Law, Innovation and Technology_, 1: 75.

O'Riordan, T. (2009) 'Reflections on the pathways to sustainability', in N. Adger and A. Jordan (eds), _Governing Sustainability_, Cambridge: Cambridge University Press.

Pfister, T. (2011) _The Activation of Citizenship in Europe_, Manchester: Manchester University Press.

Pickersgill, M.D. (2011) 'Connecting neuroscience and law: anticipatory discourse and the role of sociotechnical imaginaries', _New Genetics and Society_, 30(1): 27.

Ross, A. (2009) 'Modern interpretations of sustainable development', _Journal of Law and Society_, 36(1): 32.

Scott, J.C. (1998) _Seeing Like a State_, New Haven, CT: Yale University Press.

Tait, J. (2009) 'Upstream engagement and the governance of science', _EMBO Reports_, 10: S18–S22.

Wilsdon, J.
and Willis, R. (2004) _See-Through Science: Why Public Engagement Needs to Move Upstream_, London: Demos.

World Commission on Environment and Development (1987) _Our Common Future_, Oxford: Oxford University Press.

Yeung, K. (2008) 'Towards an understanding of regulation by design', in R. Brownsword and K. Yeung (eds), _Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes_, Oxford: Hart Publishing.
Chapter 3

* * *

# **The voice of silence**

UK patients' silent resistance to the assisted reproduction regulations

_Ilke Turkmendag_

* * *

## **Introduction**

Since the 1990s, one of the most controversial changes to the UK's Human Fertilisation and Embryology Act (HFE Act) concerned the information that is kept about the egg, sperm (gametes) and embryo donors in donor registers. This information is held by the Human Fertilisation and Embryology Authority (HFEA), which is a quasi-autonomous non-governmental organisation (quango) set up by the HFE Act in 1990 to license and regulate clinics that carry out _in vitro_ fertilisation (IVF), other assisted conception techniques using human sperm, eggs and embryos, and embryo research.

Originally, section 31(3) of the HFE Act 1990 gave offspring conceived through gamete donation the right to apply for information about the donor on reaching the age of 18. In practice, donors were asked to provide some non-identifying information, which could be passed to potential recipients. However, on 1 April 2005, with the implementation of the Human Fertilisation and Embryology Authority (Disclosure of Donor Information) Regulations 2004, UK law was changed to allow children born through gamete donation to access identifying details of the donor (The Human Fertilisation and Embryology Authority, 2004). Drawing on trends in adoption law, the decision to abolish donor anonymity was strongly influenced by a discourse that asserted the 'child's right-to-personal identity' (Turkmendag, 2012), encouraged by the European Convention on Human Rights (ECHR) (European Convention for the Protection of Human Rights and Fundamental Freedoms, 1950).

The removal of donor anonymity has aggravated the ongoing donor shortage, since individuals were less likely to donate if their relatedness could one day be established (Turkmendag _et al._, 2008). Moreover, on 5 July 2007, the HFE Act was amended in order to bring the EU Tissues and Cells Directive (EUTCD) into UK law. The implementation of the EUTCD's voluntary and unpaid donations policy for tissues and cells has made it even more difficult to recover from the donor shortage, as donations cannot be promoted by financial incentives (Turkmendag, 2013). As a result, fewer people now have access to treatment in the UK (Turkmendag, 2013).

Perhaps surprisingly, neither during nor after the removal of donor anonymity did the government face organised resistance through classic avenues of opposition to new legislation from would-be parents – i.e. both those who cannot receive treatment due to the donor shortage, and those who might carry the burden of informing children of their origins if treatment were available and successful (Turkmendag _et al._, 2008). Those who prefer anonymous donation are reluctant to press their claims because confidentiality matters to them (Turkmendag, 2013). Even those who are pro-openness may be reluctant to face publicity, as undergoing fertility treatment and using donated gametes are private matters.
Presumably because they did not articulate their concerns through formal channels of communication (such as lodging complaints to press for a policy change), there has been little enquiry into the reactions of would-be parents to the new law. Yet, despite their reticence during the donor anonymity debate, would-be parents articulated _anonymous_ protests against the new law on the discussion forums of online fertility support groups. In this chapter, I examine such discussion forums in order to cast fresh light on the discourse of would-be parents that was propelled by the change to the HFE Act. I examine how strategies of legal avoidance and resistance to the new law were facilitated by the growth of an online community of would-be parents, and explore their impact on policy adjustments. The data form part of my ongoing (since 2005) research into the socio-legal dimensions of reproductive care.

## **Avoidance of the law by the 'virtual' patient communities**

Online forums are known to have transformed many of the movements and relationships that animate communities of would-be parents. As an example, Waldby (2008) argues that multilingual websites and Internet communication have made international partnerships, and the possibility of patients travelling abroad for treatment, much easier. This, together with the legal possibilities offered by the European Union, has facilitated what is sometimes referred to as 'cross-border reproductive care' (CBRC): a form of medical tourism that enables patients to take advantage of differences in reproductive consumer cultures reflected in variations in the cost and availability of treatments, success rates, and waiting lists. The removal of donor anonymity in the UK has resulted in an increasing number of would-be parents going abroad to seek treatment, a movement facilitated by the availability of the online support that Waldby describes (Shenfield _et al._, 2010). Indeed, the Internet plays a significant role here, one that I trace through this chapter: through online discussion forums, it facilitates ongoing discourse among patients, effectively creating a 'virtual patient community', a social aggregation based on common interest and intellectual and affective (rather than geographic) proximity. The virtual patient community that emerged around (and in antagonism to) the changes to the HFE Act did not promote the breaking of the law, but instead its avoidance. Community members develop and share strategies to do this (i.e., how best to make use of CBRC), and mobilise around their contentions about the legal shift while remaining anonymous.

In the UK, the online discussion forums hosted by fertility support groups (e.g. Acebabes, fertilityfriends, INUK, More to Life and IVF.Net) or fertility clinics played a salient role (perhaps unintentionally) in helping potential patients to overcome their anxiety about receiving treatment away from home. Following the removal of donor anonymity, these forums were inundated with posts about treatment opportunities in other jurisdictions. Those who wanted to receive treatment using anonymous gametes and those who were denied access to treatment at home for various reasons (donor shortage, being regarded as unfit parents) exchanged information about the jurisdictions where the treatments they needed would be available (e.g. standards of the clinics, success rates, costs, quality of the aftercare, whether practitioners or nurses could speak English, and regulatory frameworks).
Those who could not afford to travel abroad exchanged information about 'backstreet' arrangements (such as purchasing sperm online). In addition, these forums became a space in which patients could express their opposition to the new law without making their identities known (e.g. signing online petitions, discussing the HFEA's decision-making processes, sending feedback through forum administrators, talking to journalists and researchers). In what follows, I present data collected during a virtual ethnography of one of these discussion forums: Repromed (www.repromed.co.uk), which was hosted by the website of the Bristol Centre for Reproductive Medicine (BCRM). In doing so, I render visible the discreet ways in which would-be parents avoided and resisted, and ultimately expressed their opposition to, the new law.

## **'Would-be' parents and the discussion forums of Repromed**

Repromed introduced itself as a website for anyone in the UK interested in reproductive medicine. Like many other patient support groups, Repromed was open to anyone, at any stage of their treatment, offering an ideal platform for accessing information about treatment techniques, clinics abroad, and other patients' experiences at little or no cost (some patient groups may require a small subscription fee), quickly and anonymously from home. On the home page, there was a link to the discussion area, where the forum titles were listed. Each topic involved a question, or a brief introduction to the inquiry; for example: 'Is anyone currently trying or pregnant with a donor conceived baby?' Forum administrators and moderators, who had the highest level of control over forum content, could turn forum features on and off, ban and remove users, edit and delete posts, and create users and groups. This gave them some control over the postings, although it might be rather difficult to monitor and manage such a large and continuously evolving body of content. By the time I visited the website in 2006, users had posted 27,829 messages, on 704 topics, in 15 forums. The age range of the other 28 users was from 28 to 43. The average age was 35 and there were two modes (the most frequently occurring ages in the distribution), at 35 and 40. Although more than 5,000 postings were scanned, the final sample included 900 postings sent between 13 July 2005 and 26 April 2007.

Posts on Repromed included inquiries as to where one could receive treatment abroad using gametes from anonymous donors, stories about fertility travel, complaints about the long waiting lists and the donor shortage in the UK, and criticism of the new regulations. The data presented here draw mainly on the 'Donor Gamete' forum on Repromed, which had the fullest coverage of donor anonymity. Although this study does not claim to be a definitive account of UK would-be parents who use online forums, _Repromed_ was clearly one of the most active sites (based on the number of messages posted, and the depth of the threads) during this period and represents an important source of data on voices that were not fully heard in the public debates over ending donor anonymity. The extracts from the forum posts that I present herein are not edited, except for the references to clinics or the names of websites where sperm can be purchased.

## **Policy processes and patient reactions**

Before this forum was set up, fertility travel was being addressed in other discussion forums on the website.
One of the patients, Elaine, who was receiving treatment at the BCRM (the clinic that hosted Repromed), had decided to receive treatment in Spain; this was mainly due to the removal of donor anonymity in the UK and the consequent lengthening of waiting lists. She shared her experiences with other users of _Repromed_, eventually meeting in person and befriending six other women users who also ended up receiving treatment in Spain. According to Elaine, the fact that they were discussing their experiences on Repromed discussion forums was not well received by the host clinic, and the administrator set up a new 'Donor Gamete' forum so that people receiving donor gamete treatment could interact in a setting separate from other patients of the clinic. It was established in November 2005 with the following disclaimer:

> After much discussion we have set up the Donor gamete forum. We are a little concerned about the oversees fertility treatment options as these are outside UK ethical and safety regulations, but we fully agree that UK patients receiving donor gamete treatment would benefit from mutual online support.
> 
> Regards, Admin

Most forum participants were on waiting lists for gamete donation and clearly associated the donor shortage with the change in the law to remove anonymity. Examples of posts expressing this view were as follows (all are excerpted from a thread about the options available for those who attempted to receive treatment in licensed clinics in the UK but for various reasons could not access the treatment they needed):

> Apparently out of 500 applicants to give sperm in the last year only 2 actually do so, predominantly due the anonymity issue and the low sperm counts of the younger applicants. Very frightening stats don't you think?
> 
> (Mandy)
> 
> i do think that the change in the law has got every thing to do with it especialy if you are donating eggs or sperm and you are haveing difficulty consiving yourself .i have also been told that sperm donations have gone down because there use to be alot of students that use to donate but have stoped because they dont want a knock on the door in 18 years do you know if it the same if you go out side theuk and have treatment and donate at the same time cause im not sure about that one from the first time i went for icsi to now things seame to have got more compucated and expensive lol.
> 
> (Brenda)

Prior to the removal of donor anonymity, a 2004 Department of Health survey of UK fertility clinics indicated widespread agreement that lifting anonymity would decrease donations (Bionews, 2004). During the spring of 2004, the HFEA conducted its own survey of clinics to develop a detailed understanding of the current demand for, and provision of, treatment using donated gametes or embryos, as well as to identify relevant trends (Human Fertilisation and Embryology Authority, 2004). The removal of donor anonymity was the factor most frequently cited by clinics – which were already suffering from a donor shortage – as likely to affect sperm and egg donation negatively. In other words, the donor crisis was hardly a surprise to the regulatory bodies.

In the following post, one user, Tara, drew attention to the fact that the HFEA ignored the evidence provided by the clinics:

> They (HFEA) should have warned clinics to overstock a year ago in case this should happen. From what I could gather from the clinics I rang they had warned the HFEA that this would happen!!
The HFEA knew there could be a dip in donors as this happened in Australia and New Zealand and Sweden, I would like to know if they ever recovered! ! All I can say is well done (not) HFEA.
> 
> (Tara)

If the donor shortage was predictable, why then did the HFEA not take precautions? In 2007, the Department of Health and the National Gamete Donation Trust (NGDT) launched the 'Give a Toss' campaign to raise awareness of donation and recruit new donors (critics of the campaign have accused the NGDT of alienating donor parents, would-be parents and donor-conceived children by perpetuating a 'tacky' image) (McVeigh, 2007). Other campaigns were launched throughout the country to promote sperm donation, and the British Fertility Society established a working party to examine sperm donation in the UK. Its report was launched at an HFEA meeting in February 2008. The report made a number of recommendations about how the existing pool of donors could be used to greater effect: 'the report gives a helpful impression of the current state of sperm donation services in the UK, showing an undeniable mismatch between supply and demand'. The HFEA's response in July 2008 was that the Authority's remit did not extend to the recruitment of donors; this was a matter for the sector and should be addressed as a matter of urgency (Jardine, 2008a).

During the public debate over donor anonymity, the HFEA made it clear that it supported lifting anonymity. The then Chair of the HFEA, Suzi Leather, noted that '[W]e have been asking for this for a long time. I can understand why people want to know who their genetic parents are' (BBC News, 2004). In the following post, Tara expressed her anger about the view that the child's right to know should be championed at any cost, rather than weighed against the competing interests of would-be parents:

> Anyway on another note we should thanks Dame Suzi Leather and her syndicate for causing this mess!! I wouldnt be surprised if a couple took them to the court of human rights – essentially they are stopping couples having a baby!! To me thats enfringing on our human rights. Of course this is my personal opinion, but I havent yet found anyone who is happy with what the HFEA have done.
> 
> (Tara)

One forum user, Sylvia, claimed that in order to resolve the sperm shortage the law should be changed back; she believed that anonymity should be reinstated, and produced a petition to that effect:

> if you think sperm donors should be legally allowed to remain anonymous please sign my petition. i am petitioning the prime minister of the uk to change the law so sperm donors can go back to being anonymous so the sperm shortage can be resolved. if you agree with this please sign at the following link: http://petitions.pm.gov.uk/sperm-donor-anon/.
> 
> (Sylvia)

Like Tara, she framed the legislative changes as explicitly curtailing her and others' reproductive freedom.

> i think anonymity should be brought back after watching a documentary on it. being someone who possibly may need a sperm donor's help in the future id rather not be waiting for years because of the shortage if you agree could you please sign my petition.
i understand the reasons why it shouldn't but why should we prevent a loving couple from having the child they wish to have?
> 
> come on sign it if you agree :D
> 
> (Sylvia)

In her response to Sylvia's post, Dorothy says:

> I have signed your petition as I have used sperm donor in the past and though we were not successful I was shocked for other people at the huge decline in donors when the law was changed.
> 
> (Dorothy)

Signing petitions is one of the ways in which patients can lodge formal complaints. However, it might be problematic for those who have confidentiality concerns. For example, one of the users, Hannah, had twins through surrogacy arrangements in the USA and did not want this to be known. According to UK law, the surrogate is considered the child's legal mother. If the origins of the twins were known, Hannah would have to formally adopt them in order to become legally recognised as their mother. Hence, she was very careful about making formal complaints that might reveal the origins of her children. Another user, Cheryl, opted for adoption after failed attempts to achieve pregnancy. She also had confidentiality concerns about making herself known, as she did not know how that would affect her chances of adopting a child.

In order to circumvent the challenges imposed by the law, many would-be parents claimed that they or others might or would go abroad in order to obtain treatment. As Tara put it:

> Now it has happened, where and what do people needing donor sperm do??? If clinics dont start recruiting their own donors then we will have to go to the ends of the Earth to try and get one, thats our only choice, its so unfair.
> 
> (Tara)

Some commentators describe CBRC as a form of 'reproductive exile', in which patients who are denied access to treatment at home are led to find a legal regime under which the treatment they need will be available (Matorras, 2005). For example, in the Republic of Ireland (where abortion is not permissible unless there is a risk to the pregnant woman's life), between January 1980 and December 2010 at least 147,912 women travelled abroad to terminate their pregnancies. Often, due to the complicated travel arrangements, these abortions are performed later than is preferred, creating emotional upset for women at an already stressful time. Further, only those who can afford to travel can escape the law's strictures. Again, in Ireland, both illegal abortions and the use of illegal abortion pills have become a necessity for women who cannot travel abroad to end their pregnancies (McGuinness, 2011). Similarly, on Repromed discussion boards, some users asked for others' opinions about 'backstreet' arrangements:

> I have found this company on the web and wanted to know your views: [the website's URL] Having quickly read it through it seems as tho they send you the sperm and you self inseminate (?!) for under £400. I know people have been going abroad, but i can't find a place that does D IUI.
> 
> (Suzanne)

In one reply, Tara says:

> Also there is another site [the website's URL] as anyone thought or used this one??? I have heard they are fairly good.
I have to say it makes me wonder whether all the sperm donors that went through clinics and wanted to remain anonymous are now going to these places in order to help people needing it??!!\n> \n> (Tara)\n\nElaine advises this user to seek treatment in Spain using a cheap airline:\n\n> Try [a clinic in Spain] – anonymous donor sperm is 260 Euros per go and I flew from Bristol to Barcelona for _£_ 50 each way!\n> \n> (Elaine)\n\nIn another post, after complaining about the fees that each couple has to pay to undergo treatment, Tara addresses the risks of 'backstreet' arrangements. She notes that the new law leads people to consider less legitimate ways of overcoming their childlessness and to take health risks. For example, one can use the 'turkey baster' method and achieve pregnancy using fresh sperm, but this sperm cannot be screened for certain diseases. Licensed clinics are required to freeze and store the sperm for six months, the incubation period of HIV, before insemination to ensure that it is virus free.\n\n> basically they are ruining couples (like us) chances of having a child using donor or want for a better word 'harder' – they are actually going to force this underground. Couples will go to other agencies (not so legitimate) and abroad to get donors is that fair??? The sad thing is that the women like us may not have as much in the way of protection against certain things like HIV etc I really think they have made the whole donor issue a lot worse.\n> \n> (Tara)\n\nPatients seem to be aware of the risks involved in purchasing sperm online, hence they consult each other about the safety of providers. However, further legislative changes have curtailed even their ability to (legally) assess and respond to such risks themselves. Following the implementation of the EUTCD within the UK, Internet sperm providers came to be subject to the same rules as HFEA-licensed clinics. Thus, the procurement, testing, processing or distribution of any embryo or gametes intended for 'human use' must now be licensed by the HFEA or be subject to an agreement with a licensed service. Therefore, even if would-be parents would like to take a risk with fresh sperm, they have to think twice. The current Chair of the HFEA, Professor Lisa Jardine, has 'strongly' advised 'any person who became aware that a person or organisation may be procuring, testing, processing or distributing gametes without a licence to contact the police' (Jardine, 2008b).\n\nOn the other hand, if patients are considering obtaining sperm, eggs or embryos from within the EU, a licensed UK clinic can organise for a transfer to be made from that country. However, the sperm, eggs or embryos transferred must meet UK requirements. Would-be parents can also exercise their rights to travel within the EU to receive gametes or embryos from a clinic even if that clinic does not comply with UK standards.\n\nThus, would-be parents feel that the changes made to the HFE Act enjoin them to seek treatment abroad. One forum user, Suzanne, was a patient at a clinic which found a donor for her; however, he did not match either her physical characteristics or those of her husband. Suzanne and her partner thus chose a donor profile from Spain, to be imported by their licensed clinic, but the clinic informed them that UK law prohibited the purchase of sperm from an anonymous donor.
If would-be parents cannot purchase anonymous sperm online, or have the sperm imported by a licensed clinic, they construct themselves as having no option but to go abroad for treatment. Other reasons behind reproductive travel are addressed in the following post by Elaine:\n\n> In the end, I think more and more people will go to Europe for treatment, as it costs the same, is completely anonymous, has no waiting lists and, best of all, not bl**dy HFEA!!! That's not to say they aren't regulated, but by sensible people with our interests at heart!!!\n> \n> (Elaine)\n\nThe following post by Tom summarises many of the issues raised by other users:\n\n> There are many infertility newsgroups out there, so it becomes quite easy to compare the approaches taken by different clinics. Some UK clinics seem to be several steps off the pace of both other UK centres and overseas centres. Treatments used almost routinely in other centres are dismissed with the excuse that it is not current clinic policy, or the treatment isn't proven. The result is that the patient feels that they are not being treated as an individual, and they are left with many unresolved questions about their particular situation, and they are paying a premium price for this unsatisfactory state.\n> \n> In many cases patients find that they are managing their own treatment, pushing the medics to try something outside the standard for that clinic. This is probably the most wearing part of treatment for infertility, dealing with a system that you are not part of. It is probably also the reason why many more women in the UK will turn to overseas centres.\n> \n> Overseas treatment will also be favoured because any form of medical treatment in the UK means that at some stage the patient will come up against the NHS, a hugely expensive medical insurance scheme which has now turned into a useless quivering jelly of administrators, processes and targets; fertility treatment is way down the list of priorities.\n> \n> (Tom)\n\nThe accounts I have presented indicate that forums users had significant difficulties in accessing the treatment they sought; this was deemed to be due to the donor shortage, which was itself accounted for as a consequence of legislative change. Following the removal of donor anonymity, would-be parents who could not access treatment due to the donor shortage, and\/or preferred anonymous donors constructed themselves as having no choice but to seek treatment abroad or purchase gametes (sperm) through the Internet.\n\nAlthough discussion forums on online support groups are \u2013 in theory \u2013 rather democratic places where people can express their opinions freely, they are still subject to control by forum moderators. The power of the moderators was exercised when, in May 2006, Repromed's 'Egg and Sperm Donation' forum was suspended. This action followed the HFEA's warning against reproductive care abroad. In April 2006, the then Chair, Suzi Leather, issued a public statement warning British citizens against the poorly regulated treatment in overseas clinics:\n\n> We know that a relatively small number of people choose to travel abroad to undergo fertility treatment and that sometimes the treatment is packaged as a \"holiday\" where the patient can convalesce in the sun. However, we are concerned about people who choose to have their treatment abroad should know about the potential risks. (...) 
It is very sad when we receive complaints from patients about their treatment abroad and we are not able to help or reassure them. _We would urge patients to think twice and. consider the risks and implications before going abroad for treatment_.\n> \n> (Leather, 2006, emphasis added)\n\nThe Repromed administrator informed me that the 'Egg and Sperm Donation' forum had been closed because members of the HFEA-licensed centre hosting the site were concerned about the increasing number of posts advising participants about offshore facilities for assisted conception using anonymous gametes. The forum was reopened five months later with a disclaimer stating: 'In view of popular demand the donor gamete forum has been reopened, but anyone thinking of going abroad for treatment should first consider the guidance provided by the HFEA, which is quoted below' The HFEA's warning about fertility treatment abroad and Leather's speech were both quoted. Repromed users posted a range of responses to the disclaimer, expressing discontent about the HFEA's approach to fertility travel:\n\n> I'd like to see the HFEA or perhaps other organisation carry out a full investigation into the standards of clinics abroad, to provide some hard evidence that will allow potential patients to make their minds up once fully informed.\n> \n> (Will)\n\n> I do think the HFEA is very heavy-handed in its approach to this issue, and has also made some very poor regulatory decisions (especially about donor anonymity, for example) which may well end up encouraging people to seek treatment elsewhere. I guess they would open to considerable criticism if it could be shown that, in order to avoid the consequences of their approach, lots of people are travelling abroad for treatment, so now they are trying to discourage them with 'scare tactics'. Not a very grown up approach, really.\n> \n> (Melissa)\n\n> I feel that if donor anonymity was guaranteed in the UK you wouldn't be going to abroad anyway. That's down to the government perhaps in places they shouldn't.\n> \n> (Gemma)\n\nSome individuals grounded their critique of the HFEA with reflections on the high quality treatment they had received abroad; in so doing they (re)positioned foreign clinics not as sites of risk and uncertainty, but as morally progressive and highly professional centres of care:\n\n> I really do get fed up with the HFEA scaremongering about going abroad, especially when they've done so much to prevent successful treatment in the UK, for example, removing donor anonymity and successfully wiping out the entire UK sperm bank in one go!!!!!! I for one can only express my utmost admiration at the extremely expert and professional treatment I received at [name of the clinic], which was vastly better than the treatment I received in the UK and resulted in my gorgeous baby at the first attempt. In total, six of us became pregnant after only one or two transfers after going to [name of the clinic], two with twins, which would never have happened if we'd remained in the UK I think that says it all!\n> \n> (Elaine)\n\n> On the note of the HFEA 'warning' all I will say is that I was offered a very high quality individualised treatment at [a clinic] in Spain. I very carefully researched a number of clinics abroad as there were no further options left for me in the UK \u2013 I did not have 5 years to wait for Egg Donation in the UK. I was offered a service and support at xxx that was not on offer at any of the three clinics I was under in the UK. 
Will, I agree whole heartedly with you – it would be good for the HFEA to produce some good statistical evidence of what they claim – I'd like to know just how many people have complained to them about treatment abroad.\n> \n> (Kelly)\n\nThe HFEA's warning about the risks of offshore treatment – including countries which were popular EU destinations, where assisted conception is highly regulated and standards of fertility clinics are demonstrably as high as, if not higher than, those in the UK – appears to have frustrated would-be parents; they saw themselves as having no choice but to seek treatment abroad as a consequence of the legal shift promoted by the HFEA itself. It was not until the UK's largest fertility support group brought patients' views together that their voices were heard.\n\n## **Bringing patients' views together**\n\nFollowing the removal of donor anonymity, while hundreds of fertility patients were protesting against the new policy through various online patient support networks, a 2008 fertility travel survey conducted by INUK (Infertility Network UK, 2008) brought these opinions together for the first time. INUK had 339 responses to its survey, which explored patient experiences of cross-border treatment. The results were striking: 76 per cent of respondents stated that they would consider going abroad for treatment (in particular, to receive egg donation). Further, 120 of the 339 (35 per cent) respondents answered the question 'Which country did you go to for your treatment?'; the most popular response was Spain, which maintains donor anonymity for fertility treatment. At least three times as many patients went there as anywhere else.\n\nAlthough the INUK study received little recognition from public bodies, its findings were confirmed by the European Society of Human Reproduction and Embryology (ESHRE) Task Force, which undertook the most comprehensive study on cross-border reproductive care to date. The ESHRE survey contained data from 46 clinics across Europe, including 53 patients travelling from the UK. Thirty-four per cent of UK resident patients – more than any other nationality within Europe – cited access issues as their reason for travelling abroad (Shenfield _et al._, 2010).\n\nFindings from a recent qualitative study of UK residents showed that the four most commonly cited reasons for travel were: donor shortages in the UK (27 cases), cost (13 cases), perceived better success rates overseas (12 cases) and previous unsatisfactory care in the UK (7 cases). Overall, 'a desire for timely and affordable treatment with donor gametes' was evident in 71 per cent of cases, making donor conception the most sought-after treatment among CBRC travellers. Most patients sought treatment within European borders, the most popular destinations being Spain and the Czech Republic (Culley _et al._, 2011).\n\nThe growing evidence around reproductive tourism led the HFEA to move away from solely issuing warnings towards critical engagement with its own policy. In August 2010, the Authority launched a full, public consultation into sperm and egg donation policies. In the consultation document, the HFEA noted that the landscape of donation had changed since the Authority last looked at its rules in 2005, and it emphasised that it would like to update these if necessary. It was acknowledged that the demand for donor treatment in the UK was greater than the supply of donors and that the removal of donor anonymity might have exacerbated the shortage.
The Authority further noted that the shortage of donors resulted in long waiting times for treatment, with reports of UK waiting times of three to five years for donor eggs. Concerns were also expressed that some fertility centres overseas might offer substandard treatment, and that increasing numbers of people were purchasing unscreened sperm on the Internet (HFEA, 2011). The HFEA asked if gamete supply could be improved by a change in policy, or withdrawal from commitments under the EUTCD (Blyth, 2011).\n\nThe HFEA received a record number of responses to its consultation, with over 2,400 surveys completed by around 1,100 people. The first set of decisions made by the Authority in response was that the maximum number of families that an individual donor can donate to should not be changed. The HFEA also decided to issue guidance stating that sperm and eggs should not be mixed if they come from very close genetic relatives. Further, the allowable compensation amounts for gamete donors have been increased: egg donors can now receive _\u00a3_ 750 per cycle of donation, while sperm donors can receive a _\u00a3_ 35 fee per individual visit to a clinic (Starr, 2011). Finally, the HFEA decided to take a more proactive approach to donor recruitment and established a National Donation Strategy Group. Chaired by Professor Sheila McLean, the Group will bring together a wide range of experts in order to enhance the information that donors receive, develop new approaches to raising awareness of donation, and ascertain how the care of donors can be improved. However, it is questionable whether any of these measures can improve the UK's donor conception policy in terms of its acceptability to users, unless stakeholders' views are central to the HFEA's policy decisions.\n\n## **Conclusion**\n\nThe virtual ethnography study presented in this chapter offers some potentially important insights into how would-be parents negotiate the issues of donor gamete donation interactively and collectively on discussion boards. The topics generated by users of Repromed provide an indication of those matters they perceive as important or problematic with regards to donation. The most important issue that users raise is that, despite the warnings of a likely donor shortage, the government lifted anonymity at the expense of would-be parents' access to treatment, and the HFEA should have taken measures to avoid the (predictable) crisis that resulted from this legislative shift.\n\nThe continued shortage of donor gametes means that UK patients have to wait longer for treatment and have less choice with respect to donor characteristics. In response, we can see (from discussion threads about fertility travel and online sperm providers) that would-be parents have come to experiment with new ways of contesting the law, and \u2013 more importantly \u2013 novel means of avoiding it. The user accounts indicate that donor shortage problems lead couples to find alternative solutions to their childlessness; in particular, would-be parents in the UK have readily available alternatives in their access to treatment in other EU nations. Success rates, the lower cost of treatment, donor availability, and anonymity make clinics abroad increasingly attractive. Yet, the HFEA warn against their use, creating further frustration.\n\nThe posts on the patient forums that I presented here suggest that patients feel their views are ignored by the HFEA in policy-making. 
But there is little they can do to have their voices heard without compromising their anonymity. Furthermore, the pursuit of a child is, for many would-be parents, a highly demanding task: physically, emotionally, and in terms of time. Little space remains for lobbying when pregnancy is still to be achieved. Finally, being a would-be parent is a transitional phase: once pregnant, individuals' priorities might change, and campaigning for new policy to address the needs of others is less salient once one's own needs have been met.\n\nIt is important at this point to clarify that I am not claiming that would-be parents are completely divorced from more influential policy actors and processes. Indeed, patients use online environments to make their voices audible to policy-makers: they participate in online surveys, connect with journalists and researchers who recruit interviewees, and sign electronic petitions. However, in effect, UK would-be parents are caught in a double bind: those who are directly affected by the legal shift may be reluctant to press their claims because confidentiality matters to them. Given this reticence, their voices are not taken into consideration by public bodies such as the HFEA.\n\nSince the passing of the 1990 Act, some academic and political commentators have questioned whether the regulatory model adopted for the HFEA was acceptable to the general public, or to the major stakeholders, those who are directly affected by the law (Callus, 2007). Subsequently, the HFEA has faced several legal challenges and its policy decisions have been criticised by the House of Commons Science and Technology Committee: 'democracy is not served by unelected quangos taking decisions on behalf of Parliament' (House of Commons Science and Technology Committee Report, 2002). In one of these legal challenges in 2004, the Court of Appeal confirmed that the HFEA had the legal authority to regulate these areas: 'Parliament has placed that choice in the hands of the HFEA'. But how does the HFEA make these choices? During the public debate over ending donor anonymity it was clear that the proposed legal shift would create a donor crisis and that, as a result, fewer people would have access to treatment. But would-be parents' interests were not a paramount concern for the Authority. The then Chair of the HFEA made it clear that she viewed the principle of the welfare of the child as fundamental, despite the fact that the HFE Act only requires that this should be 'taken into account' (Dawson, 2004: 3).\n\nAs Dawson argues, the HFEA's inspection role is vital to ensuring that clinics are fully licensed and monitored (Dawson, 2004). On the other hand, the recent consultation might be an opportunity to separate this role from policy-making, or at least to improve the Authority's approach to the latter. Arguably, UK assisted conception regulation may benefit from a more democratic decision-making process in which the 'conflicting' interests of stakeholders are fully and equally taken into account.
This might include more evidence-based policy-making, and taking full advantage of the Internet and information technology to enhance stakeholder participation, especially to reach out to those who cannot use transparent and formal modes of communication due to confidentiality concerns, like the would-be parents in this study.\n\nFinally, it should be noted that the passage of the Public Bodies Act 2011 made it possible to transfer the functions of quangos (including the HFEA and the Human Tissue Authority) elsewhere, without opening up the original Act that created them. The HFEA was one of several quangos set to be abolished, in order to meet the UK Government's aim of cutting National Health Service (NHS) costs, although this decision has since been reconsidered. On 28 June 2012, the government launched a consultation on the future of the HFEA to consider whether the regulators' responsibilities should move to the Care Quality Commission and the Health Research Authority (HFEA, 2012). Consequently, it is uncertain whether current HFEA deliberations on mechanisms to increase donation will be successfully translated into practice. Regardless, it is clear that future efforts to govern the interactions between law, biomedical knowledge and reproductive practice must keep in mind the limitations of authority and the creativity of subjects, and must endeavour to seek solutions that satisfy the hopes and aims of both regulators and regulatees in democratically defensible ways.\n\n## **Acknowledgements**\n\nThis work was supported by a Mildred Blaxter Post Doctoral Fellowship from the Foundation for the Sociology of Health and Illness. I am grateful to Emilie Cloatre and Martyn Pickersgill for their constructive comments.\n\n The Human Fertilisation and Embryology Authority (Disclosure of Donor Information) Regulations 2004 SI 2004\/1511.\n\n European Convention for the Protection of Human Rights and Fundamental Freedoms 1950, ETS 5. The ECHR was incorporated into domestic law in the UK via the Human Rights Act 1998.\n\n Directive 2004\/23\/EC of the European Parliament and of the Council of 31 March 2004 on setting standards of quality and safety for the donation, procurement, testing, processing, preservation, storage and distribution of human tissues and cells [2004] L 102\/48. There are also two supplementary Technical Directives: 2006\/17\/EC; 2006\/86\/EC.\n\n At the time of this study, the clinic behind the Repromed website was the University of Bristol's Centre for Reproductive Medicine. This centre has now merged with the Southmead Fertility Service.\n\n The Human Fertilisation and Embryology Act 2008 section 33 (or the equivalent HFE Act 1990 section 27 for conceptions before 6 April 2009) provides that the woman who gives birth to a child is the legal mother.\n\n Irish Family Planning Association website.\n\n ibid.\n\n Under the directly effective freedom to provide and _receive_ services under Article 56 of the Treaty on the Functioning of the European Union: _R v. Human Fertilisation and Embryology Authority, ex parte Blood_ [1997] 2 All ER 687 (CA). See T. K. Hervey, 'Buy Baby: The European Union and Regulation of Human Reproduction' (1998) _Oxford Journal of Legal Studies_ 18: 207.\n\n _R (on the application of J. Quintavalle on behalf of CORE) v. Human Fertilisation and Embryology Authority_, [2004] Q.B. 168.\n\n Public Bodies Act 2011 (c. 24) Section 5.\n\n## **References**\n\nBlyth, E.
(2011) 'Gamete donation review: not the HFEA's finest hour', _BioNews_, 605, www.bionews.org.uk\/page_94012.asp (accessed 23 September 2013).\n\nCallus, T. (2007) 'Patient perception of the human fertilisation and embryology authority', _Medical Law Review_, 15: 62–85.\n\nCulley, L., Hudson, N., Rapport, F., Blyth, E., Norton, W. and Pacey, A. (2011) 'Crossing borders for fertility treatment: motivations, destinations and outcomes of UK fertility travellers', _Human Reproduction_, 26: 2373–2381.\n\nDawson, A. (2004) 'The human fertilisation and embryology authority: evidence based policy formation in a contested context', _Health Care Analysis_, 12: 1–6.\n\nDonor anonymity to be removed in UK (2004) _BioNews_, 242, www.bionews.org.uk\/page_l1833.asp (accessed 9 September 2013).\n\nHuman Fertilisation and Embryology Authority (2004) 'Sperm, egg and embryo donation (SEED) policy review: findings of the clinic survey', www.hfea.gov.uk\/docs\/Clinics_survey_Seed_review.pdf (accessed 18 September 2013).\n\nHuman Fertilisation and Embryology Authority (2011) 'A review of the HFEA's sperm and egg donation policies', www.hfea.gov.uk\/docs\/2011-0113_Donation_review_background.pdf (accessed 26 September 2013).\n\nHuman Fertilisation and Embryology Authority (2012) 'Consultation on proposals to transfer functions from the human fertilisation and embryology authority and the human tissue authority', www.dh.gov.uk\/health\/2012\/06\/consultation-regulators (accessed 27 September 2013).\n\nHouse of Commons Science and Technology Committee Fourth Report (2002) _Developments in Human Genetics and Embryology_, 18 July 2002, HC 791, paragraph 18.\n\nJardine, T. (2008a) 'Internet sperm providers are illegal', _Guardian News and Media Limited_, 23 September 2008, www.guardian.co.uk\/uk\/2008\/sep\/23\/ukcrime.lawl (accessed 23 September 2013).\n\nJardine, L. (2008b) 'BFS report on sperm donation in the UK: HFEA response', _Human Fertilisation and Embryology Authority Paper_, HFEA (02\/07\/08) 452, para 2.2.\n\nLeather, S. (2004) 'Sperm donors to lose anonymity', _BBC News_, 21 January 2004, http:\/\/news.bbc.co.uk\/1\/hi\/health\/3414141.stm (accessed 25 September 2013).\n\nLeather, S. (2006) 'Press release: thinking of going abroad? Think twice about going abroad for fertility treatment', www.hfea.gov.uk\/632.html (accessed 23 May 2012).\n\nMatorras, R. (2005) 'Reproductive exile versus reproductive tourism', _Human Reproduction_, 20: 3571.\n\nMcGuinness, S. (2011) 'A, B, and C leads to D (for delegation!): A, B and C v. Ireland 25579\/05 [2010] ECHR 2032', _Medical Law Review_, 19: 476–491.\n\nMcVeigh, K. (2007) 'Online ads for sperm donors called offensive', _Guardian News and Media Limited_, 26 March, www.guardian.co.uk\/media\/2007\/mar\/26\/newmedia.advertising (accessed 26 September 2013).\n\nShenfield, F., de Mouzon, J., Pennings, G., Ferraretti, A.P., Andersen, A.N., de Wert, G., Goossens, V. and the ESHRE Taskforce on Cross Border Reproductive Care (2010) 'Cross border reproductive care in six European countries', _Human Reproduction_, 25: 1361–1368.\n\nStarr, S. (2011) 'HFEA takes first set of decisions following donation review', _BioNews_, 616, www.bionews.org.uk\/page_102199.asp (accessed 27 September 2013).\n\nThe Infertility Network UK (2008) _Fertility Tourism Survey Results_, Bexhill-on-Sea: Infertility Network UK.\n\nTurkmendag, I., Dingwall, R. and Murphy, T. (2008) 'The removal of donor anonymity in the United Kingdom: the silencing of claims by would-be parents', _International Journal of Law, Policy and the Family_, 22: 283–310.\n\nTurkmendag, I. (2012) 'The donor-conceived child's "right to personal identity": the public debate on donor anonymity in the United Kingdom', _Journal of Law and Society_, 39: 58–75.\n\nTurkmendag, I. (2013) 'When sperm cannot travel: experiences of UK would-be parents seeking treatment abroad', in Flear, M. L., Farrell, A., Hervey, T. K. and Murphy, T. (eds) _European Law and New Health Technologies_, Oxford: Oxford University Press.\n\nWaldby, C.
(2008) 'Oocyte markets: women's reproductive work in embryonic stem cell research', _New Genetics and Society_ , 27: 19\u201331.\nSection 2\n\n* * *\n\n# Objects and epistemologies in criminal law\n\n* * *\nChapter 4\n\n* * *\n\n# **Unchaining research**\n\nProcesses of dis\/empowerment and the social study of criminal law and investigation\n\n_Barbara Prainsack_\n\n* * *\n\n## **Introduction**\n\nThrough their analyses of legal practices and processes of knowledge production within them, science and technology studies (STS) scholars have made considerable contributions to critical scholarship on law, science, and society in recent decades. One of the most significant achievements of STS scholarship has been its interrogation of the ways in which scientific and technological knowledge and artefacts are embedded in legal systems. Nevertheless, there are other aspects in the domain of law \u2013 and in particular also in criminal justice \u2013 which have not yet seen the same level of critical scrutiny. In these areas, the scope and direction of STS research specifically has been restrained by the tacit acceptance \u2013 the black-boxing \u2013 of some of the categories that other professional experts operate with. In the context of an area such as STS, whose emergence and whose identity is closely linked to the challenging of assumptions and casting light on 'hidden' matters, this is surprising. In this chapter, after discussing in more detail two such under-explored issues within STS and socio-legal scholarship, I will reflect on the conditions and processes within and by which many scholars operate that account for this phenomenon. I will conclude by highlighting two strategies to address these issues.\n\n### **_Unexamined assumptions_**\n\nAbout ten years ago, in an article discussing the first wave of socio-legal studies of wrongful convictions, San Francisco-based legal scholar Richard Leo spoke of the salience of 'unexamined assumptions' within this literature:\n\n> The unexamined assumption in virtually all miscarriages writing and scholarship is that [the reasons examined for miscarriage of justice] are actual causes, and once they are identified, we will know how and why the problem of wrongful conviction occurs. [...] But eyewitness misidentification, false confession, and police and prosecutorial misconduct are not actual root causes. By identifying the causes, we beg the obvious, deeper causal question: What are the causes of eyewitness misidentification? What are the causes of police-induced false confessions? What are the causes of police and prosecutorial conduct? [... Overall, scholars] need to move beyond the legal categories and concepts handed to them by journalists and lawyers and start to reconceptualize the study of miscarriage of justice, most fundamentally, as about the study of human behaviour and human error in social and organizational contexts.\n> \n> (Leo 2005: 213; see also Machado and Prainsack 2012; Heinemann _et al._ 2012)\n\nLeo's critique was primarily aimed at legal scholarship, of course, and both research in STS and socio-legal studies has certainly gone a long way in doing exactly what Leo calls for; namely, challenging the categories and concepts handed to them by actors in the field they study. 
Seminal STS work on law more generally, and criminal justice in particular, has problematised hitherto unchallenged categories, practices, or technologies such as the belief in the self-corrective ability of science; the rule of law over science; the notion of expertise; the truth claims inherent in scientific, technical, and other kinds of evidence; and the relationship between materiality and conceptuality. Both STS and socio-legal scholarship have also helped to complicate the neat and linear way in which success stories about science and technology in criminal investigation and criminal justice are told.\n\nYet there is still work left to do. We have not yet engaged sufficiently with some unexamined assumptions within criminal law and investigations, obstructing a deeper understanding of norms and practices in the field. In the following section I will discuss two broad issues in the field of criminal justice within which unexamined assumptions are salient and under-explored discourses and practices carry significance, dis\/empowering particular actors, institutions, and technoscientific practices: first, the nature of conviction and of convicts; and second, the role of forensic DNA testing as a technology of individualisation, in the sense that it is taken to establish a robust link between a person and another construct (here: a crime). These examples have been salient within my own research on forensic DNA technologies and practices, and I hope that they will help to illuminate some of the dynamics underlying the (occasional) ignoring of important social and epistemic assumptions and issues in other areas of STS scholarship.\n\n## **Power, conviction and convicts**\n\nIn recent years, the Innocence Project, a US-based litigation and public policy organisation (founded in 1992) dedicated to overturning wrongful convictions, has received a lot of attention in socio-legal scholarship and beyond (see, for example, Risinger 2007; Krimsky and Simoncelli 2011; Krieger 2011; Machado and Prainsack 2012). Although the Innocence Project was founded in the US, and most of its activities are still focused there, similar projects and initiatives have emerged in other parts of the world. Besides the obvious and tangible effects that the work of the Innocence Project has on the lives of those who were exonerated due to the _pro bono_ work of its members, it has also had a very pronounced impact on public awareness of the problems and possibilities for error inherent in the social, legal and technological practices that are employed in criminal investigation and in court. It is no coincidence that calls for more scrutiny of forensic science and practice – wherever possible according to scientific standards – are increasingly made by professional institutions and within reports commissioned by governments (e.g. National Academy of Sciences 2009).\n\nThe supposition underpinning such calls, however – namely that making forensics more 'scientific' will help solve the problem of wrongful convictions – ignores the insights of several decades of STS and socio-legal scholarship, which have challenged this rather simplistic conceptualisation of the role of science in problem solving (see also Edmond 2011).
In this respect, the understanding that 'better' forensic science will solve the problem of wrongful convictions contradicts the thrust of much STS and socio-legal scholarship.\n\nIn another respect, however, the spirit of calls for a better research culture in forensics _resonates_ with STS scholarship: it implies the admission of a particular kind of mistake on the part of actors in the legal system; i.e., that they have not taken a more critical stance towards expertise and evidence in the past. This admission reinforces STS's critical stance 'through the back door'. And this is indeed how the lesson of the Innocence Project seems to have been received within STS so far: things go wrong when the authority of established experts (in any relevant context: as fingerprint experts, molecular biologists, etc.) is accepted without much scrutiny and reflection (see also Bal 2005).\n\nAccording to an analysis of the causes of the wrongful convictions of the first 239 people that the Innocence Project has helped to exonerate (West 2010), however, it was not the use of high-tech forensic applications, but something very different that was identified as the single most important basis on which the convictions had rested: false eyewitness evidence. This finding has not had much traction in STS research. Perhaps this is to be expected, as STS is by definition the social study of science and technology, and eyewitness testimony does not obviously fall within this remit. But the silence of STS scholarship on this issue could also stem, at least partly, from the fact that the story of the Innocence Project so far – a narrative of human errors (wrongful convictions) overturned with the help of a technology (DNA testing) – has the culprit and the hero in the wrong roles. STS scholars have spent decades countering technological instrumentalism, the view that science and technology are trustworthy in principle and 'tarnished only by the humans using it' (Edmond 2011: 133). Accordingly, the fundamental lesson from the Innocence Project – i.e., that technology helps undo the mistakes that people make – conflicts with some of the core tenets of STS. By remaining silent on these issues, STS scholarship implicitly validates the technocratic systems of power that give life to current mechanisms of conviction.\n\nThe core narrative of the Innocence Project is also dependent on a clear distinction between 'perpetrator' and 'victim', concepts that have also remained largely unchallenged within STS. It is true that those scholars whose work has helped to debunk assumptions about the infallibility of forensic practices and technologies are usually careful not to conflate the label 'convict' with the assumption of guilt, but somehow this has had no wider implications so far. This is even more surprising in the context of a discipline that has so long been engaged in making science and technology more participatory and more 'democratic' through the inclusion of the experience and expertise of people other than social elites and professional experts.
Although many convicts have relevant and diverse expertise and experiences in the area of crime scene technologies, this group has not been included in such efforts.\n\nThe experience and expertise of convicts in this respect can be twofold: first, convicts are, in their majority, a marginalised group who have little, if any, influence over how science and technology are used on and against them, while this use has, in many instances, fundamentally life-changing effects. Thus many convicts are able to provide perspectives on science and technology from the viewpoint of some of the least powerful actors in a given field. Second, some convicts have relevant 'alternative' professional knowledge pertaining to technology use: this applies to so-called 'career criminals', for example, whose technical and technological knowledge in the realm of forensics applies to avoiding, concealing, or destroying crime scene traces (Prainsack and Kitzberger 2009; Machado and Prainsack 2012).\n\nTo clarify: I am not suggesting that convicts necessarily _should_ be included in our quest for making science and technology more participatory. Rather, I argue that the fact that they have, in general, not yet been included in such endeavours, while other marginalised groups have (e.g. people with disabilities, older adults, ethnic minorities), is instructive. What exactly does this disempowerment tell us? It could be argued that the relative absence of the perspectives of convicts from discussions about science and technologies used in criminal investigations and in courts is due to the fact that prisoners are virtually inaccessible as research participants: the obstacles put up by research ethics guidelines, one could say, are so difficult to overcome that they discourage researchers from even trying. This may be a compelling argument insofar as it pertains to prisoners; yet the group of convicts is much wider, including many people who have never been in prison and those who have been released. Although there may be difficulties in reaching members of these groups as well, these obstacles are regularly no more difficult to overcome than those involved in working with members of other marginalised groups.\n\nA much more plausible explanation for why convicts have not been listened to in STS scholarship is that there is a tacit acceptance of a wider societal assumption that they should be at the margins of society. This is also what the guilty verdict implies: being guilty means that the guilty person has infringed the social order in some significant way, so that the convicted person is placed outside society, if only temporarily (at the very least, during the commission of the breach; at most, in the literal sense, i.e. being 'locked away' for life) (see also Taslitz 2011). Because they are at the margins of society – or even outside it – they have not been included in previous efforts to foster dialogue and exchange at the interface of science and society. And because the particular distribution of social status and power between actors from criminal justice systems and the supposedly law-abiding citizens that they represent on the one hand, and convicts on the other, is tacitly accepted across much of STS scholarship, other truth claims inherent in criminal justice systems are also taken on board.\n\nExamples of the latter are different valuations of expertise and the power of particular kinds of experts. This, in turn, has two consequences.
First, the expertise of convicts pertaining to the science and technology used in legal processes is to be subordinated to, or to be ignored at the cost of, the expertise of other experts. Second, such expertise on the side of convicts is not recognised as professional expertise, even if certain actors do refer to it as such \u2013 typically, 'career criminals', who see their criminal activities as their jobs (Machado and Prainsack 2012).\n\nIn sum, consideration of the nature of conviction and convicts in terms of power and technoscience underscores the importance of refocusing the critical scrutiny of STS and also socio-legal studies. First, an emphasis on what are 'obviously' scientific and technological matters obscures the great degree to which other forms of epistemic and material practices play important roles in the technocratic machinery powering the systems within which conviction occurs (and the debates associated with this). Second, an uncritical acceptance of dominant societal narratives concerning the nature of 'the convict' disempowers convicts as a group and fails to recognise the diverse kinds of expertise they often hold (with implications for both democratic processes and criminological discourse).\n\nIn what follows, I turn now to address an instance where STS scholarship has largely adopted an unexamined assumption underpinning the field of forensic science (specifically, forensic genetics) into its own domain.\n\n## **(Re)producing DNA testing as a technology of individualisation**\n\nA recent article by the geneticist Ant\u00f3nio Amorim makes two propositions: first, that 'the results of forensic DNA testing are limited in both theoretical and practical terms', and second, that an understanding of the reasons for these limitations is crucial even for people outside the field of forensic genetics (Amorim 2012: 260). Most scholars in both STS and socio-legal studies will not be surprised by these suggestions; they have long attended to the limitations of forensic DNA testing. Yet when reading Amorim's argument in detail, some deeper implications of these limitations move to the foreground.\n\nTo begin with, in contrast to fingerprints, for example, which are recorded in the form of visual data (and their annotations), traditional DNA technologies are recorded as a small set of discrete, numeric data (see also Cole 2001; Lynch _et al._ 2008). When we look at a pre-defined set of specific places on the genome, then we find that different people have different numbers of so-called 'short tandem repeats' (STRs, i.e. repetitions of chains of nucleotides) in these places. The larger the number of such places (loci) that are analysed, the smaller the likelihood that two different people _coincidentally_ have the same number of repetitions. This is the basic underlying principle of 'traditional' forensic DNA analysis. It is a principle of stratification, of assigning people to different groups according to their genetic markers. This situation would, in fact, be no different if we did not count the number of repeats in a particular locus but if we noted down the _exact sequence_ of the nucleotides A, T, G, and C themselves as we find them in that locus. The particular order of nucleotides in a given locus describes a geno _type_ that is shared by other individuals (see also Amorim 2012: 261). 
If we look at the sequence of nucleotides in all possible loci on a person's genome, there is – again with the exception of monozygotic twins and multiples – such a small chance that the exact order of all nucleotides in all loci between two individuals matches _by chance_ that it is considered practically (and thus forensically) impossible.\n\nBut stating that something is a practical, and thus forensic, impossibility does not mean, of course, that it is impossible _in principle_. As Amorim summarises:\n\n> [There are] no unique individuals: everyone belongs to a class shared by many others – at least potentially, depending on the frequencies of [the nucleotides] A, T, G, and C. Thus, individuals cannot be distinguished from each other at this level of analysis.\n> \n> (Amorim 2012: 261)\n\nForensic DNA analysis never establishes a _certain_ link between a marker and a particular individual; such a link is always probabilistic. This deficit is, for practical purposes, overcome by the use of statistics. This is necessary because, in practice, forensic DNA analysis is mainly used exactly for what it technically cannot do: identifying individuals (via the establishment of a link between a marker from the DNA found at a crime scene and an individual).\n\nThis is not a trivial point; there are few other instances where the connection between a 'marker' and an individual is made by means of statistics and has such severe consequences. In the context of forensics, such a probabilistic link has potentially very grave consequences if accepted in court as 'proof' of a link between a person and a crime scene: it may put the narrative of a case in a different light (Johnson and Williams 2004), and in extreme cases, even death sentences can be the result. Even when the DNA profile is treated 'merely' as evidence corroborating other evidence, the consequences can still be very serious. Yet the discussion of the problems inherent in the use of forensic DNA technologies in STS and scholarship on their ethical, legal, and social implications (ELSI) has so far focused primarily on other issues: besides those pertaining to other categories of problems altogether – such as contamination or flawed analyses of the DNA sample – discussions about the robustness of DNA analysis have revolved around how to correctly calculate random match probabilities, so that they do not come out as too low or too high (see M'Charek 2005; Aronson 2007; Aronson 2010; Prainsack 2010). The fact that 'true' individualisation is not possible at all on the basis of DNA technologies has been acknowledged but not widely problematised, even within STS scholarship (see M'Charek 2005). This situation compromises the preparedness and effectiveness with which STS arguments can be used to publicly counter claims such as those inherent in the trial of Cleveland Barrett.\n\nIn 2010, Cleveland Barrett was charged with predatory criminal sexual assault of a nine-year-old girl. The evidence presented in the trial against him included both victim testimony and the result of a DNA test. Jurors were told in the closing argument by the Assistant State Attorneys that DNA evidence had been found on the victim that 'matched' Barrett's profile. According to a trial transcript, an Assistant State Attorney posed the rhetorical question to the jury, and provided an answer: 'Who is the major profile in the DNA that's found?
The defendant' (quoted from Mills 2012).\n\nWas this a case where the random match probability – i.e. the chance that the profile obtained from the crime scene matched a randomly drawn person from the general population – was so small that it was practically impossible for the DNA profile not to have come from Mr Barrett? Was it for this reason that the DNA profiles were simply described as 'matches'? No, and no. The DNA sample found at the crime scene – on the victim's lips, to be precise – had not resulted in a proper DNA profile at all. When probed, the Illinois State Police crime lab analyst stated during the trial that the DNA test on the sample had only yielded results in one locus; the number of STRs in the other nine genetic loci that are normally tested for could not be determined (this can be the result of degraded DNA, or of insufficient amounts of DNA being available for testing). A one-locus profile, because of the very high likelihood that it could have come from another person who 'accidentally' has the same genotype in that particular locus, is not suitable 'proof' of anything; at best, it could indicate that the defendant cannot be excluded as a source of the trace on the basis of the DNA evidence alone.\n\nThis also explains why the State Prosecutors presented the DNA 'evidence' in such an unusual manner: what the Assistant State Prosecutors had done was to look at how frequently the particular profile from the DNA found on the victim was found in different population groups. In other words, what was the proportion of specific ethnic groups that had this particular genetic variant at this particular locus? These statistical data were then translated into statements about how likely it was that the profile had come from one particular suspect, in this case Mr Barrett. In this instance, the particular genetic profile found in the one locus that was analysed matched, on average, one in four African-American males, one in eight Hispanic males, and one in nine Caucasian males (Mills 2012). Cleveland Barrett fitted the label African American, and thus the DNA profile was sold to the jury as 'matching' Mr Barrett's.\n\nMr Barrett was eventually acquitted, but by then he had already been detained for over a year. How could this happen? How could it be that the minimal DNA 'profile' obtained from the perpetrator's DNA found on the victim was presented to the jury as a 'match' to Mr Barrett's DNA? I would argue that the reason for this was exactly the situation described above, namely that DNA technologies are treated as an individualising method. The fact that DNA technologies cannot provide a certain link to an individual is then overshadowed by the practical and forensic 'impossibility' of adventitious matches if the number of analysed markers is high enough. Because the scenario of adventitious matches, in many cases, does not matter practically, DNA technologies are seen as 'safe' if the process is done right. This brings us back to exactly the kind of technological instrumentalism that STS has typically challenged, namely that if the application of a technology leads to bad results, the reason for this lies in the application, not in the technology.
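To make the arithmetic at stake in these match claims concrete, the sketch below (a minimal illustration in Python, not drawn from the chapter's sources or from any forensic system) shows the standard 'product rule' by which per-locus genotype frequencies are combined into a random match probability. All figures are hypothetical except the one-in-four frequency reported in the Barrett case (Mills 2012), and the assumption that loci are statistically independent is itself a simplification.

```python
from functools import reduce
from operator import mul


def random_match_probability(genotype_frequencies):
    """Combine per-locus genotype frequencies into a random match
    probability via the 'product rule', which assumes the loci are
    statistically independent (itself a simplifying assumption)."""
    return reduce(mul, genotype_frequencies, 1.0)


# One locus only, with the one-in-four genotype frequency reported in the
# Barrett case (Mills 2012): far too common to single anyone out.
print(random_match_probability([0.25]))      # 0.25

# Ten loci with purely hypothetical frequencies of 10 per cent each: the
# combined probability becomes vanishingly small, which is what makes an
# adventitious match 'practically' (though never theoretically) impossible.
print(random_match_probability([0.1] * 10))  # ~1e-10
```

The point of the sketch is simply that whatever individualising power a DNA 'match' appears to have lies entirely in this multiplication across loci; with a single locus, as in the Barrett trial, nothing resembling individualisation is available.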
The effect of this instrumentalism is that everything carrying the label 'DNA test' is seen as a reliable method of truth production.\n\nThe question remains open as to why STS scholarship, while having challenged so many other hitherto unexamined assumptions in the field of law and criminal justice, has not prominently highlighted that forensic DNA technologies are tools for stratification (i.e. tools to assign different profiles to different groups), rather than for individualisation. One possible explanation is that STS has, at least to a considerable extent, focused upon 'the usual suspects'. In the area of law and law enforcement, research has been primarily focused on the controversies that lawyers have 'handed down' to us, to use Richard Leo's terminology once more (see also Edmond 2011).\n\nA key case in point is represented by the so-called 'DNA wars', a heated discussion among experts and practitioners in the aftermath of the first uses of DNA evidence in the 1980s about the scientific 'robustness' of the technology. In this debate, many aspects pertaining to the scientific quality of forensic DNA profiling, and arguments for or against forensic admissibility, were critically examined by STS scholarship (for an overview, see e.g. Aronson 2007; Lynch _et al._ 2008). These examinations, however, remained to a large extent within the existing boundaries of legal or STS discourse. Other discourses, such as that of molecular genetics, while playing a very prominent role in arguments put forward by the participants of the 'DNA wars' themselves, have played a minor role within STS scholarship (for an exception, see Lynch _et al._ 2008; Williams and Johnson 2008; Williams 2010). Likewise, other bodies of thought, including political theory, have not shaped the research questions and methods of STS scholars in this field to the same extent.\n\n## **Unchaining research: how can we get out of 'prison'?**\n\nHow can we solve these problems? How can we 'unchain' STS scholarship in the context of law so that it enables us to see more of the 'root causes' (Leo 2005: 213) for why things are as they are? An approach to a solution will need to contain at least two elements. First, we need to challenge even more what seems unproblematic and obvious. We need to query not only the assumptions and presumptions of the actors in the field that we study, but also our own (e.g. Valverde 2009; Farmer 2010). This means that we need to critically examine (for example) not only how, and according to what standards, truths are produced within the criminal justice system, but we also need to interrogate the practices, norms, and values that render the system legitimate.\n\nSuch an endeavour could be inspired by a commitment to recognise the hegemonic nature of every social order, which, as Chantal Mouffe and other critical discourse theorists (e.g. Mouffe 2005; Howarth 2000) have argued, is always predicated on the exclusion of other possibilities. Applied to the criminal justice system, such an analysis would entail asking what the system could look like if it did not look as it does (see also Lynch 2001; Pollner 1991). Comparative research is one possible way to address such questions, but our thought experiments do not need to be limited by the range of existing scenarios.
If fines and prison sentences did not exist \u2013 perhaps because the system operated on the assumption that there was no free will, and that every crime or offence was in some way biologically and\/or environmentally determined \u2013 would such a system operate with a different notion of 'guilt'? What kind of truth would be negotiated in a 'criminal' trial of this sort, and what would be the standards according to which it would be negotiated? Who would make authoritative decisions in such a system? In such a fictitious system, it is likely that science would play a different role, because the truth that this system strove for would serve a different purpose.\n\nI am not advocating that STS and socio-legal researchers should all become armchair philosophers. But keeping in mind that such alternative realities could exist, and that they could provide a fruitful point of reference for our work, can be important (see also Lynch 2011). This process of rendering the familiar exotic is, of course, a task that anthropologists especially have long been concerned with, and one that STS and socio-legal researchers carrying out ethnographic work are regularly trained to do as well. However, the commitment to challenging what is uncontroversial and obvious to us is something that we also need to maintain outside our immediate fieldwork; in particular, when conceptualising our research and framing the questions we hope it might answer.\n\nA second approach to further enhancing STS analyses in the domain of criminal justice consists in rendering explicit the distribution of power and political agency. This is an issue socio-legal scholars have traditionally been highly sensitive to, and much STS research has placed the articulation of agency in the foreground. It will be fruitful not to stop at rendering visible the 'hidden' agency of people and things, but also to pose the genealogical question of how particular types of collective action came into being, and how the agency of some entities comes to be more salient than that of others (see Toom 2012). This requires a particular focus on the role of polities in systems of collective action.\n\nInstances of _situated dis\/empowerment_ (Prainsack and Toom 2010: 1120; see also Haraway 1991: 195; Gad and Lauritsen 2009) may be instructive to bear in mind and locate, in order 'to highlight the importance of \"location, positioning, and situating\"' of practices and contexts, and to be perceptive to the ways in which empowering and disempowering processes \u2013 i.e. those which influence the scope of agency of specific actors \u2013 are always intertwined and frequently mutually constitutive. Attention to instances of situated dis\/empowerment enjoins researchers to take seriously the agency of non-human actors while at the same time keeping in view the distribution of political power, where human actors play a different (albeit not necessarily more important) role than non-human ones.
Greater attention to rendering explicit the distribution of power and political agency in the fields we study, in conjunction with a continued focus on scrutinising our own tacit assumptions, will go a long way towards helping us to unpack the remaining unexamined assumptions within STS and socio-legal scholarship.\n\n## **Conclusion**\n\nStudies of technoscience, law, and society have benefited from research and conceptual frameworks emerging from STS, but unexamined assumptions still remain within scholarship in this area, and issues that are salient for the actors occupying the social field of criminal justice continue to be under-explored. In this chapter, I have discussed in detail two broad areas of dis\/empowerment relating to knowledge and technology that I have encountered in the subject area of my own work on the political and social aspects of forensic DNA testing: first, the flows of power intrinsic to the production of the nature of conviction and of convicts; and second, the role of forensic DNA testing as a technology of individualisation capable of linking a specific person to a specific crime. After showing some of the largely unexamined assumptions and omissions within STS scholarship, and discussing some of the consequences of these, I suggested two strategies for enhancing scholarly capacity to open even more black boxes in socio-legal domains. First, continuing to challenge what seems unproblematic and obvious not only to the actors, but also to researchers; and, second, explicating the distribution of political power and agency in law and legal processes. Engaging closely with technoscientific practices that structure aspects of law may be an important means of undertaking both of these tasks. In some respects, STS and socio-legal scholars have already attended to them; nonetheless, renewed and specific emphasis on them will be a fruitful endeavour.\n\n## **Acknowledgements**\n\nI am grateful to Ant\u00f3nio Amorim, Troy Duster, Marie-Andr\u00e9e Jacob, Corinna Kruse and Victor Toom for their comments on an earlier version of this manuscript, and to Emilie Cloatre and Martyn Pickersgill for having been immensely helpful editors. The usual disclaimer applies. My thanks go to Troy Duster also for bringing the Cleveland Barrett case to my attention.\n\n See, for example, Jasanoff 1995; Lynch _et al._ 2008; Obasogie and Duster 2011.\n\n E.g. Jasanoff 1995; Mercer 2002.\n\n E.g. Smith and Wynne 1989; Jasanoff 1995; Bloor 2000; Cole 2001; Edmond 2004; Lynch and Cole 2005; see also Collins and Evans 2002; Collins and Evans 2007.\n\n For a problematisation of scientific and technical evidence, see, for example, Smith and Wynne 1989; Jasanoff 1995; Angell 1996; Lynch _et al._ 2008; Williams and Johnson 2008; Williams 2010; and Pottage 2011. For a discussion of other kinds of evidence, see also Bloor 2000; Cole 2001; Golan 2004; Lynch _et al._ 2008; M'Charek 2008.\n\n See e.g. Golan 2004; Gillespie 2006; Pottage 2011; Faulkner _et al._ 2012.\n\n See, for example, Cole 2001; Aronson 2007; Jasanoff 2010; Kruse 2012; Prainsack and Toom 2012.\n\n My own research in this area has so far been twofold: it has focused on exploring marginalised perspectives on the role, utility, and broader societal meaning of bioscientific applications in criminal investigation and forensics (e.g. Prainsack and Kitzberger 2009; Machado and Prainsack 2012), and on the shifts in the distribution of power and agency that are intertwined with regulatory measures and approaches (e.g.
Hindmarsh and Prainsack 2010; Prainsack and Toom 2010, 2012).\n\n I use the term 'convicts', and not 'offenders', because the latter indicates that they have actually committed an offence or a crime, which the former term, strictly speaking, does not. Moreover, speaking about 'convicts' makes sense with regard to the argument about their experience with forensic technologies; a certain level of contact with criminal investigation and the criminal justice system is required for this to materialise (i.e. somebody who has committed a crime but never been the subject of an investigation is not somebody upon whom forensic technologies have been used, as argued here).\n\n Nucleotides make up human DNA; long chains of DNA are 'rolled' into chromosomes.\n\n The chance that two people do have the same number of repetitions in the same places \u2013 the so-called 'random match probability' \u2013 increases when the number of repeats in a given locus is relatively typical in the respective population, or when individuals are biologically related. For example, full siblings share 50 per cent of their DNA on average. This means that, on average, half of the genetic information (number of repeats) in the loci that forensic tests analyse will be the same. This phenomenon is utilised in a technique called 'familial searching'; the method takes profiles that match in many but not all loci as an indicator that the originator of the profile being searched for is a biological relative of the person whose profile matched in more loci than would be expected for unrelated individuals. For more detail, see Murphy (2010).\n\n It should be noted here that 'traditional', STR-based forensic DNA testing does not look at all loci of a person's genome but only at about 10\u201315, depending on the particular genetic loci typed, and the kind of software used, in a given jurisdiction.\n\n For this reason too, it has been argued that any forensic technique or technology that delivers probabilistic results should not be grounds for a conviction in the absence of other convincing evidence (Mnookin _et al._ 2011: 747).\n\n The delineation of racial categories \u2013 African-American, Hispanic, Caucasian, etc. \u2013 remained an unexamined assumption not only in the trial, but also in the media coverage of the case. There is a growing body of STS scholarship on this topic; see, for example, Duster 2004, 2006a, 2006b, 2008; M'Charek 2008a, 2008b; M'Charek _et al._ 2012; Reardon 2005. It is also a hotly debated topic in forensic genetics.\n\n This situation is changing in light of the growing transnational exchange of DNA profiles within the Pr\u00fcm regime (Prainsack and Toom 2010, 2012). The larger the number of profiles that a particular DNA profile is compared with, the higher the chances that matches are not 'real' (i.e. stemming from the same person) but adventitious.\n\n## **References**\n\nAmorim, A. (2012) 'Opening the DNA black box: demythologizing forensic genetics', _New Genetics & Society_, 31(3): 259\u2013270.\n\nAngell, M. (1996) _Science on Trial: The Clash of Medical Evidence and the Law in the Breast Implant Case_ , New York: Norton.\n\nAronson, J.D. (2007) _Genetic Witness: Science, Law, and Controversy in the Making of DNA Profiling_ , New Brunswick, NJ: Rutgers University Press.\n\nAronson, J.D. (2010) 'On trial! Governing forensic DNA technologies in the USA', in R. Hindmarsh and B.
Prainsack (eds), _Genetic Suspects: Global Governance of Forensic DNA Profiling and Databasing_, Cambridge, UK: Cambridge University Press, pp. 240\u2013261.\n\nBal, R. (2005) 'How to kill with a ballpoint: credibility in Dutch forensic science', _Science, Technology, and Human Values_ , 30(1): 52\u201375.\n\nBloor, M. (2000) 'The South Wales miners federation, miners' lung and the instrumental use of expertise, 1900\u20131950', _Social Studies of Science_ , 30(1): 125\u2013140.\n\nCole, S.A. (2001) _Suspect Identities: A History of Fingerprinting and Criminal Identification_ , Cambridge, MA: Harvard University Press.\n\nCollins, H.M. and Evans, R. (2002) 'The third wave of science studies: studies of expertise and experience', _Social Studies of Science_ , 32(2): 235\u2013296.\n\nCollins, H.M. and Evans, R. (2007) _Rethinking Expertise_ , Chicago, IL: University of Chicago Press.\n\nDuster, T. (2004) 'Selective arrests, an ever-expanding DNA forensic database, and the specter of an early-twenty-first-century equivalent of phrenology', in D. Lazer (ed.), _The Technology of Justice: DNA and the Criminal Justice System_ , Cambridge, MA: MIT Press, pp. 315\u2013334.\n\nDuster, T. (2006a) 'Explaining differential trust of DNA forensic technology: grounded assessment or inexplicable paranoia?', _Journal of Law, Medicine & Ethics_, 34(2): 293\u2013300.\n\nDuster, T. (2006b) 'The molecular reinscription of race: unanticipated issues in biotechnology and forensic science', _Patterns of Prejudice_ , 40(4\u20135): 427\u2013441.\n\nDuster, T. (2008) 'DNA dragnets and race: larger social context, history and future', _Gene Watch_ , 21(3\u20134): 3\u20135.\n\nEdmond, G. (2004) _Expertise in Regulation and Law_ , Aldershot: Ashgate Publishing.\n\nEdmond, G. (2011) 'The building blocks of forensic science and law: recent work on DNA profiling (and photo comparison)', _Social Studies of Science_ , 41(1): 127\u2013152.\n\nFarmer, L. (2010) 'Time and space in criminal law', _New Criminal Law Review_ , 12(2): 333\u2013356.\n\nFaulkner, A., Lange, B. and Lawless, C. (2012) 'Material worlds: intersections of law, science, technology and society', special issue of _Journal of Law and Society_ , 39(1).\n\nFlear, M.L. and Vakulenko, A. (2010) 'A human rights perspective on citizen participation in the EU's governance of new technologies', _Human Rights Law Review_ , 10(4): 661\u2013688.\n\nGad, C. and Lauritsen, P. (2009) 'Situated surveillance: an ethnographic study of fisheries inspection in Denmark', _Surveillance & Society_, 7(1): 49\u201357.\n\nGillespie, T. (2006) 'Engineering a principle: \"end-to-end\" in the design of the internet', _Social Studies of Science_ , 36(3): 427\u2013457.\n\nGolan, T. (2004) 'The emergence of the silent witness: the legal and medical reception of X-rays in the USA', _Social Studies of Science_ , 34(4): 469\u2013499.\n\nHaraway, D.J. (1991) 'Situated knowledges: the science question in feminism and the privilege of partial perspective', in D.J. Haraway (ed.), _Simians, Cyborgs and Women: The Reinvention of Nature_ , London: Free Association Books, pp. 183\u2013202.\n\nHeinemann, T., Lemke, T. and Prainsack, B. (2012) 'Risky profiles: societal dimensions of forensic uses of DNA profiling technologies', _New Genetics and Society_ , 31(3): 249\u2013258.\n\nHindmarsh, R. and Prainsack, B. (eds) (2010) _Genetic Suspects: Global Governance of Forensic DNA Profiling and Databasing_ , Cambridge: Cambridge University Press.\n\nHowarth, D.
(2000) _Discourse_ , Buckingham: Open University Press.\n\nJasanoff, S. (1995) _Science at the Bar: Law, Science, and Technology in America_ , Cambridge, MA: Harvard University Press.\n\nJasanoff, S. (2010) 'Foreword', in R. Hindmarsh and B. Prainsack (eds), _Genetic Suspects: Global Governance of Forensic DNA Profiling and Databasing_ , Cambridge: Cambridge University Press, pp. xix\u2013xxiv.\n\nJohnson, P. and Williams, R. (2004) 'Post-conviction testing: the UK's first \"exoneration\" case?', _Science and Justice_ , 44(2): 77\u201382.\n\nKrieger, S.A. (2011) 'Why our justice system convicts innocent people and the challenges faced by innocence projects trying to exonerate them', _New Criminal Law Review_ , 14(3): 333\u2013402.\n\nKrimsky, S. and Simoncelli, T. (2011) _Genetic Justice: DNA Data Banks, Criminal Investigations, and Civil Liberties_ , New York: Columbia University Press.\n\nKruse, C. (2012) 'Legal storytelling in pre-trial investigations: arguing for a wider perspective on forensic evidence', _New Genetics and Society_ , 31(3): 299\u2013309.\n\nLeo, R.A. (2005) 'Rethinking the study of miscarriages of justice: developing a criminology of wrongful conviction', _Journal of Contemporary Criminal Justice_ , 21(3): 201\u2013223.\n\nLynch, M. (2001) 'Ethnomethodology and the logic of practice', in T.R. Schatzki, K.K. Cetina and E. Von Savigny (eds), _The Practice Turn in Contemporary Theory_ , London: Routledge, pp. 131\u2013148.\n\nLynch, M. (2011) 'Harold Garfinkel (29 October 1917\u201321 April 2011): a remembrance and reminder', _Social Studies of Science_ , 41(6): 927\u2013942.\n\nLynch, M. and Cole, S. (2005) 'Science and technology studies on trial: dilemmas of expertise', _Social Studies of Science_ , 35(2): 269\u2013311.\n\nLynch, M., Cole, S.A., McNally, R. and Jordan, K. (2008) _Truth Machine: The Contentious History of DNA Fingerprinting_ , Chicago, IL: University of Chicago Press.\n\nMachado, H. and Prainsack, B. (2012) _Tracing Technologies: Prisoners' Views in the Era of CSI_ , Aldershot: Ashgate Publishing.\n\nM'Charek, A. (2005) _The Human Genome Diversity Project: An Ethnography of Scientific Practice_ , Cambridge: Cambridge University Press.\n\nM'Charek, A. (2008a) 'Contrasts and comparisons: three practices of forensic investigation', _Comparative Sociology_ , 7(3): 387\u2013412.\n\nM'Charek, A. (2008b) 'Silent witness, articulate collective: DNA evidence and the inference of visible traits', _Bioethics_ , 22(9): 519\u2013528.\n\nM'Charek, A., Toom, V. and Prainsack, B. (2012) 'Bracketing off populations does not advance ethical reflection on EVCs: a reply to Kayser and Schneider', _Forensic Science International: Genetics_ , 6(1): e16\u2013e17.\n\nMercer, D. (2002) 'Scientific method discourses in the construction of \"EMF science\": interests, resources and rhetoric in submissions to a public inquiry', _Social Studies of Science_ , 32(2): 205\u2013233.\n\nMills, S. (2012) 'Weak DNA evidence could undermine justice, experts say', _Chicago Tribune_. Online.
Available: http:\/\/articles.chicagotribune.com\/2012-07-05\/news\/ct-met-dna-questions-20120705_1_forensic-dna-analysis-dna-profile-dna-scientists (accessed 26 August 2012).\n\nMnookin, J.L., Cole, S.A., Dror, I.E., Fisher, B.A., Houck, M.M., Inman, K., Kaye, D.H., Koehler, J.J., Langenburg, G., Risinger, D.M., Rudin, N., Siegel, J. and Stoney, D.A. (2011) 'The need for a research culture in the forensic sciences', _UCLA Law Review_ , 58: 725\u2013779.\n\nMouffe, C. (2005) _On the Political_ , Abingdon: Routledge.\n\nMurphy, E. (2010) 'Relative doubt: familial searches of DNA databases', _Michigan Law Review_ , 109(3): 291\u2013348.\n\nNational Academy of Sciences (2009) _Strengthening Forensic Science in the United States: A Path Forward_ , National Research Council. Online. Available: www.ncjrs.gov\/pdffiles1\/nij\/grants\/228091.pdf (accessed 26 August 2012).\n\nObasogie, O.K. and Duster, T. (2011) 'All that glitters isn't gold', _Hastings Center Report_ , 41(5): 15\u201317.\n\nPollner, M. (1991) 'Left of ethnomethodology: the rise and decline of radical reflexivity', _American Sociological Review_ , 56(3): 370\u2013380.\n\nPottage, A. (2011) 'Law machines: scale models, forensic materiality and the making of modern patent law', _Social Studies of Science_ , 41(5): 621\u2013643.\n\nPrainsack, B. (2010) 'Key issues in DNA profiling and databasing: implications for governance', in R. Hindmarsh and B. Prainsack (eds), _Genetic Suspects: Global Governance of Forensic DNA Profiling and Databasing_ , Cambridge: Cambridge University Press, pp. 15\u201339.\n\nPrainsack, B. and Kitzberger, M. (2009) 'DNA behind bars: other ways of knowing forensic DNA technologies', _Social Studies of Science_ , 39(1): 51\u201379.\n\nPrainsack, B. and Toom, V. (2010) 'The Pr\u00fcm regime: situated dis\/empowerment in transnational DNA profile exchange', _British Journal of Criminology_ , 50(6): 1117\u20131135.\n\nPrainsack, B. and Toom, V. (2012) 'Performing the union: the Pr\u00fcm Decision and the European dream', _Studies in the History and Philosophy of Science_ , in press.\n\nReardon, J. (2005) _Race to the Finish: Identity and Governance in an Age of Genomics_ , Princeton, NJ: Princeton University Press.\n\nRisinger, D.M. (2007) 'Innocents convicted: an empirically justified factual wrongful conviction rate', _Journal of Criminal Law and Criminology_ , 97(3): 761\u2013806.\n\nSmith, R. and Wynne, B.
(1989) _Expert Evidence: Interpreting Science in the Law_ , London: Routledge.\n\nTaslitz, A. (2011) 'The rule of criminal law: why courts and legislatures ignore Richard Delgado's rotten social background', _Alabama Civil Rights & Civil Liberties Review_, 2: 79\u2013130.\n\nToom, V. (2012) 'Bodies of science and law: forensic DNA profiling, biological bodies, and biopower', _Journal of Law and Society_ , 39(1): 150\u2013166.\n\nValverde, M. (2009) 'Jurisdiction and scale: legal \"technicalities\" as resources for theory', _Social and Legal Studies_ , 18(2): 139\u2013157.\n\nWest, E.M. (2010) 'Court findings of ineffective assistance of counsel claims in postconviction appeals among the first 255 DNA exoneration cases'. Online. Available: www.innocenceproject.org\/docs\/Innocence_Project_IAC_Report.pdf (accessed 20 August 2012).\n\nWilliams, R. (2010) 'DNA databases and the forensic imaginary', in R. Hindmarsh and B. Prainsack (eds), _Genetic Suspects: Global Governance of Forensic DNA Profiling and Databasing_ , Cambridge: Cambridge University Press, pp. 131\u2013152.\n\nWilliams, R. and Johnson, P. (2008) _Genetic Policing: The Use of DNA in Criminal Investigations_ , Cullompton: Willan Publishing.\nChapter 5\n\n* * *\n\n# **Making the colposcope 'forensic'**\n\nThe medico-legal management of a controversial visualisation device\n\n_Gethin Rees_\n\n* * *\n\nThe presence of injury upon the body of a rape assault survivor has been demonstrated to be a strong predictor of successful criminal prosecution (Du Mont and White 2007). Forensic practitioners, either doctors (known as Forensic Medical Examiners (FMEs)) or nurses (Forensic Nurse Examiners (FNEs)), spend the majority of the forensic intervention that follows the reporting of a rape assault observing the body and recording anything that they determine constitutes abnormal phenomena (Rees 2011). This observation and recording process is separated into genital and non-genital examinations, and while injuries are often not difficult to observe macroscopically upon non-genital areas, the same cannot be said for the ano-genital region. Consequently, since the 1990s, a gynaecological technology known as the colposcope has been recommended for use within forensic medical examinations (Slaughter and Brown 1992). The colposcope, initially designed to investigate the cervix and tissues of the vagina and the vulva following the return of an irregular pap smear, is a type of microscope that can magnify the genital (and anal) areas up to 30 times (Rogers 2004). It comes complete with a light for illumination and a camera for recording the examination. According to those promoting the use of the colposcope within forensic medical examinations, magnification and illumination are likely to increase the probability of observing signs of genital injury; meanwhile, the camera enables improved contemporaneous recording of the examination, producing better evidence for criminal justice as well as medico-legal research (Brennan 2006).
However, not all forensic practitioners are convinced that this device is useful or appropriate, and some challenge its incorporation on both victim-centric and scientific grounds.\n\nSuch concerns are indicative of the difficulties that arise when new technological interventions are introduced into the legal milieu. While the practitioners of both science and the law promote reliance upon rationality and objectivity in their knowledge production (Smith and Wynne 1989) and are keen to employ new devices in order to ascertain that knowledge, as other chapters in this volume have testified, implementing new technologies is often complex and controversial. While (medico-)legal practitioners may endeavour to utilise technologies that are the state of the art as part of a thorough investigation, the use of these technologies is also required to meet the ethical, legal and moral standards demanded by law. The colposcope is one such example: a device considered to offer (what I will go on to label) evidential benefit at the expense of victim care, with the result that an ongoing controversy ensued in forensic medicine regarding whether it should become established best practice.\n\nControversies have been a fruitful area of study in science and technology studies (STS) as they demonstrate science in the making (Mercer 1996). Harry Collins (1975, 1981) employed the metaphor of the ship in the bottle to explain the ways that controversies can help the analyst of scientific knowledge and practice. Once the ship (a scientific claim or practice) is in the bottle (accepted as 'knowledge' or 'best practice'), then for all intents and purposes it appears to have always been there, or 'ready assembled' (Collins 1981: 45). In other words, it becomes seen as self-explanatory that a particular practice is the most efficacious method. However, close observation of the controversy \u2013 either historically, when the analyst knows the outcome of the debate (see for example Barnes and Shapin 1979), or ethnographically when the controversy is on-going (Collins 1981) \u2013 draws attention to the complex negotiations and social influences required to generate consensus around the knowledge claim or practice (for instance, not publishing papers in high profile journals, or not inviting certain scientists to important conferences; Barnes _et al._ 1996, Kuhn 1996).\n\nIn many ways the colposcope controversy mirrors a larger dialectic, often considered to be at the heart of forensic medicine: whether the forensic medical practitioner is an objective evidence-gatherer or the provider of therapeutic care to a victim. This 'dual role', inherent in the original title of the forensic practitioner \u2013 i.e., the 'Police Doctor' (Savage _et al._ 1997) \u2013 necessitates that practitioners choose to emphasise either the evidential or the therapeutic aspect of their work when performing a forensic intervention. As I will show in the first half of this chapter, both those who promote and those who oppose the routine implementation of the technology are drawing upon a particular rhetoric surrounding the forensic medical role (i.e. evidential or therapeutic) to make their case. However, as with nearly all the scientific controversies previously studied, there has been no decisive 'crucial experiment' or announcement marking an end to the debate; rather, research interests have moved on, leaving forensic practitioners to make local decisions regarding the best ways to employ the technology.
Accordingly, the device is routinely employed in forensic work, but in circumscribed ways: practitioners aim to achieve the most benefit from the colposcope while simultaneously attempting to limit the harm caused by the device to the survivor.\n\nIn the second half of the chapter I will draw upon interview data from two studies of forensic medical practitioners (one with FMEs, the other with FNEs) to argue that when making their decisions about the best ways to use the colposcope, practitioners are not acting in a 'dual role' but rather in a specifically 'forensic' mode (Mulla 2011, Rees forthcoming). Such a modality of practice does not involve fluctuation between evidence gathering and therapy, but involves a unique style of praxis that is simultaneously alert to and incorporative of both aspects of the work traditionally characteristic of FM\/NEs. Further, I will argue that the introduction of new technologies into the legal milieu is not without its problems; in this case the requirement for both therapeutic and evidential practices necessitated innovation in the ways that the colposcope was employed, most notably the ways that the images produced by the device were stored and utilised. Such local negotiations and workarounds are common for the successful application of scientific and medical technologies in the legal milieu.\n\nThe data presented in this chapter derive from two studies: one investigating the role and work of FMEs in Scotland, conducted between 2006 and 2008, and the second investigating FNEs in England in 2010. Interviews were performed between 2006 and 2010 with 13 FMEs from four constabularies in Scotland and five FNEs from five hospitals in England (the total population of FNEs employed at the time). Alongside the interview data, both studies also involved analysis of the textbooks, journal articles and practice guidelines that FM\/NEs use in training, research and work. Drawing upon these data for the remainder of this chapter, I will set out the colposcopy controversy before focusing on the way that it has played out and been resolved in local contexts.\n\n## **The colposcopy controversy**\n\nThe controversy centred on two studies by Laura Slaughter and colleagues in the 1990s (Slaughter and Brown 1992, Slaughter _et al._ 1997), which advocated the use of the colposcope due to the evidential benefit provided by its improved illumination and magnification. However, many practitioners were unconvinced by Slaughter _et al._ 's recommendations. I will outline the details of the controversy, beginning with those who promoted the device for its _evidential benefit_.\n\n### **_Evidential benefit_**\n\nGynaecologists were the first to identify the potential benefits of the colposcope for forensic medical investigations (Teixeira 1980). Due to their experience working with the device in order to perform clinical investigations of the cervix, gynaecologists were aware of the improved magnification and postulated that this would enable forensic practitioners to examine genital regions with correspondingly greater precision. This early recommendation is often elided in practitioner histories of the colposcope (e.g., Sommers _et al._ 2005), and Teixeira's (1981) colposcopic study of 500 victims of sexual offences is normally cited as the first medico-legal study to identify the efficacy of the technology.
Teixeira reported that his method of observing the hymen via the colposcope resulted in greater clarity than macroscopic observation alone, arguing the colposcope 'to be irreplaceable in the examination of the victims of sexual offences and recommends its adoption by all medicolegal services' (Teixeira 1981: 214).\n\nAlthough Teixeira's conclusions have since been labelled 'dubious' and 'confusing' (Lincoln 2001: 212), his findings nevertheless demonstrated that the colposcope could provide improved visibility for genital examinations. This was deemed to have implications for criminal justice, such as the identification of injuries unseen by macroscopic observation and the ability to identify observed phenomena as naturally (or otherwise) occurring.\n\nTeixeira's findings appear to have been ignored and forgotten during the 1980s, and the question of the incorporation of the colposcope was not broached again until the publication of Slaughter and Brown's (1992) study. Concerned with the reported low observation rate of genital injuries (only in 10 per cent to 30 per cent of cases), the authors compared the colposcope with a nuclear stain, 'Toluidine Blue', customarily placed on complainants' genitalia in order to highlight signs of injury (albeit leaving the complainant with a stain on her genitalia that came to be known as the 'mark of the rape victim' (Lincoln 2001: 215)). Slaughter and Brown colposcopically examined 131 women who had reported non-consensual penile-vaginal intercourse taking place within the past 48 hours, observing genital injury in 87 per cent of cases. They concluded that the colposcope was not only superior in observing signs of injury, but was also, with a camera attached, of significant value to criminal investigations and the development of a medico-legal evidence base for genital injury:\n\n> The clarity afforded by colposcopic magnification will also allow us to further characterize genital changes associated with rape. The findings in this study support the opinion that rape victims often sustain mounting injuries. The ability to detect and document these injuries through colposcopic photography provides valuable medical and legal information.\n> \n> (Slaughter and Brown 1992: 86)\n\nThis quotation exemplifies much of the argument for the colposcope (although it of course developed as the controversy progressed). Slaughter and Brown emphasised the different types of evidence that the device could provide: not only would it result in magnified and clearer images of the genitalia that would enable experts to make more precise observations, but it also enabled recording of those images (as photographs, or later moving visual images on film and digital recordings), which could be used by other actors in the criminal justice process.\n\nMoreover, those same images could also be collated and used as scientific evidence in the development of forensic medical knowledge about the effects of forced sexual intercourse on female genitalia, which in turn would enable more probative expert evidence. Of course, precautions would have to be taken in order to maintain the anonymity and confidentiality of such images, and supporters of the colposcope drew attention to the logistical concerns that needed to be overcome in order to enable routine use of the device (e.g., Rogers 1996). However, these difficulties were dwarfed by the 'benefits that will accrue if there are fewer or shorter trials because of the improved quality of the evidence' (Rogers 1996: 121).
For medico-legal practitioners such as Slaughter, Brown and Rogers, the improved illumination, magnification and recording of the ano-genital region would certainly improve the quality of the evidence that forensic medical experts could provide in rape cases.\n\n### **_Victims and the absence of injuries_**\n\nA year after the publication of Slaughter and Brown's article in the _American Journal of Obstetrics and Gynaecology_ , a letter was published in the same journal stating that colposcopy as a routine part of the forensic medical intervention was 'a regressive step in the management of rape victims whose histories may not be believed unless demonstrable colposcopic injuries are seen' (Patel _et al._ 1993: 1334). Patel and colleagues were voicing a concern, growing amongst the forensic medical community at the time, that it was common for victims of rape to report without any signs of injury.\n\nThis position was made explicit a year later with the publication 'It's normal to be normal' (Adams _et al._ 1994). Based on a review of child and adolescent sexual assault cases where the victims had been colposcopically examined shortly after reporting, the authors found that very few of the cases had observable signs of ano-genital injury. While Adams _et al._ 's study was paediatric, the findings \u2013 and, more importantly, the slogan \u2013 were quickly mobilised by forensic practitioners primarily concerned with adult examinations (Brennan 2006).\n\nPatel and colleagues were clearly sympathetic (if not pre-emptive) to the idea of 'it's normal to be normal' and feared that the routine implementation of the colposcope would only serve to reinforce popular misconceptions that all victims present with signs of injury, and therefore fail to benefit those who do not have colposcopically observable ano-genital injuries. In fact they went further, voicing concerns that the device could become the arbiter of truth in criminal cases, with decisions over the validity of allegations being decided upon the presence of injuries alone. Lincoln (2001) made a similar argument in relation to resource management. She was concerned with the prohibitive costs of the device and argued that if the colposcope did achieve the 'gold standard' many victims would be disadvantaged as some medico-legal teams would be unable to afford it.\n\nPatel and colleagues developed their critique of colposcopy further, arguing that its routine application would constitute a second assault upon the survivor by requiring them to undergo an invasive genital examination, potentially against their wishes. They argued that the potential for the survivor to develop (what was then known as) 'Rape Trauma Syndrome' as a result of the anxiety caused by a colposcopic procedure should stay forensic practitioners' hands from routinely implementing the technology. Patel _et al._ thus chose to criticise the routine employment of the device on victim-centric\/therapeutic grounds: while the argument was not framed around the provision of therapy, they postulated that routine use of the device could result in further harm, first to survivors' cases (with the potential for signs of colposcopically recorded genital injury to become the arbiter of rape cases), and second to the survivors themselves (due to the possibility of 'an increase in psychological trauma' (Patel _et al._ 1993: 1334)). 
In opposition to Slaughter and colleagues' promotion of the device based upon the evidential benefit, Patel _et al._ advanced a counterargument grounded in the therapeutic (or medical) aspects of medico-legal work.\n\n### **_Injury interpretation_**\n\nAlongside therapeutic concerns, still more professionals chose to confront Slaughter and Brown in evidentiary terms. In particular, Bowyer and Dalton (1997) were concerned by the large quantity of injuries found in Slaughter and Brown's (1992) study and questioned whether they were all the result of non-consensual intercourse. The authors cited Norvell _et al._ 's (1984) colposcopic study of women who had engaged in consensual sexual intercourse six hours before examination and displayed signs of genital 'microtrauma'. Bowyer and Dalton postulated that Slaughter and Brown had included injuries that were not necessarily the result of non-consensual intercourse, including 'tiny lacerations, abrasions, bruises and swelling' (Bowyer and Dalton 1997: 619), and challenged the extent to which one could necessarily distinguish between 'microtrauma' caused by consensual intercourse and injuries that were the result of non-consensual intercourse. In effect, Bowyer and Dalton challenged Slaughter and Brown's claim that the device would provide more probative evidence of non-consensual intercourse.\n\nSlaughter _et al._ (1997) attempted to address the question of the relationship between injuries and consent in their follow-up study, 'Patterns of genital injury in female sexual assault victims'. The objective was to ascertain whether it would be possible to identify, colposcopically, a particular injury or collection of injuries specific to non-consensual sexual intercourse. They reviewed the colposcopic recordings of 311 victims and compared them with findings in 75 women who were examined 24 hours after (what was judged to have been) consensual sexual intercourse. The study concluded that genital injury was only identifiable in a small number (11 per cent) of the consensual intercourse sample, while injury was more commonplace amongst the victim sample (89 per cent).\n\nThese results led Slaughter and colleagues to assert: 'Although coital injury seems to be associated with insertion of the penis, its prevalence is significantly associated with a history of non-consensual intercourse' (Slaughter _et al._ 1997: 615). They were nevertheless cautious in drawing conclusions about consent from their findings, declaring that 'Further investigation is needed to determine whether there is a finding or group of findings that can distinguish nonconsensual and consensual activity' (Slaughter _et al._ 1997: 615). However, they repeated their recommendation that the colposcope should be routinely used as part of 'best practice' for generating forensic evidence in sexual assault cases.\n\n### **_The new quest for the rape injury_**\n\nThe last paper to expressly address the controversy was Brennan's (2006) review of the debate. Brennan reported having used the colposcope since 1999, and strongly advocated for its routine use in order to develop an evidence base of 'normal' genitalia from which the 'abnormal' (i.e., assaulted) could then be identified. It was necessary, she argued, to collect such evidence, since:\n\n> At this point in time, research in adult sexual assault is probably at the stage where child sexual abuse was two decades ago: individual clinicians working in isolation (without accountability or peer review), who were compelled to interpret what they found...
with variable standards in forensic qualifications and clinical experience, and without the benefits of normative standards for the appearance of the hymen, peri-hymenal tissues, or perianal tissues.\n> \n> (Brennan 2006: 199)\n\nPerforming colposcopic examinations with the accompaniment of visual recording would, she argued, develop a collection of images that could be used to produce an evidence base; this would help the development of medico-legal knowledge around sexual assault and could also be used in the training of new practitioners. Enabling colleagues to watch contemporaneous recordings of each other's work would provide a basis for the judgement of an individual's practices as well as the quality of their interpretations.\n\nBrennan was aware of the problems with the colposcope, and accepted that the standardised performance of the colposcopic exam might not be in the best interests of the survivor; she emphasised that healthcare requirements should be identified and addressed first. Moreover, she empathised with the survivors, noting that they might not wish to have invasive images of their genitalia produced (whether for medical, scientific or legal reasons), and strongly encouraged medico-legal teams to take great care with any images produced:\n\n> Since photographs of the genital area, taken for valid medical purposes, can be similar to pornographic images, the question of ownership and control of the photographs is of concern to victims, but more so to those responsible for taking them. Once consent is granted and images come into existence, these can end up in hospital records, a police station, a journal article, or a court room, with no clear boundaries as to who might get to view them.\n> \n> (Brennan 2006: 196)\n\nNevertheless, like Rogers before her, Brennan believed the device could provide substantial evidential benefit and that the concerns she outlined could be resolved: it was acceptable to conduct a colposcopic examination as long as survivors gave their consent.\n\nIt would be a misrepresentation to characterise the debate around the role of the colposcope in forensic examinations as _solely_ about evidential benefit versus medical clinical utility; however, in making their arguments either for or against the device, contributors to journal debates about its use relied upon one or other aspect of the dual role of forensic medicine. For instance, Slaughter and colleagues continually emphasised its evidential aspect, whereas Patel _et al._ focused on the dangers to victims, and even those who demonstrated awareness of the other argument (e.g. Brennan's and Rogers' worries about the dissemination of colposcopic images) eventually subsumed these concerns to the benefit of the greater good, i.e. the expected probative evidence.\n\nAs with so many scientific controversies, this debate appears to have come to an end as the community has moved on to other research questions, isolating those who continue to debate as their opinions are not discussed in the literature. The question of whether or not to use the colposcope is no longer addressed in the forensic medical literature; today, it is common to see large-scale studies and reviews of colposcopic examinations following consensual and\/or non-consensual intercourse in an attempt to discover the still elusive 'rape injury' (e.g. Anderson _et al._ 2009, Astrup _et al._ 2012, Hilden _et al._ 2005). 
However, concerns still remain, and as there has not been a decisive statement on whether or not to use the device routinely, local adoption has been an _ad hoc_ affair, with forensic practitioners finding their own ways to negotiate the difficulties caused by the technology. In the remainder of this chapter, I will outline some of the workarounds practitioners have employed in order to manage the evidential benefits while at the same time limiting harm.\n\n## **Managing the colposcope**\n\nThe colposcope is routinely used in much of England and Scotland, and, in attempting to localise the device, practitioners are at pains to reap its benefits while limiting further harm to the survivor. Drawing on my interview data, I demonstrate that there are two ways in which practitioners can make the colposcope a successful 'forensic'\/therapeutic-evidential tool (Mulla 2011, Rees forthcoming): first, choosing when to record, and second, limiting access to the images.\n\n### **_Recording decisions_**\n\nBoth FMEs and FNEs were keenly aware of the potential for the colposcope to enable the forensic medical practitioner to perform the genital examination more clearly:\n\n> The facility with the colposcope that we have got there... I think it's a fantastic machine and the clarity and the pictures you get are fantastic you know, and you will really see stuff that you probably wouldn't have seen before.\n> \n> (FME C)\n\n> I always examine with the colposcope because the examination with the colposcope picks up, depending on how many studies you are looking at, about 10 to 30 per cent of the injuries that are not visible to the naked eye, so it is far better to have that. If I see an injury then I would record it, if not I would just be using the colposcope to just look for injuries as a light source and magnification.\n> \n> (FME A)\n\nLike those arguing for the evidential benefit before them, FMEs remarked upon the way the technology enabled them to observe the genital area more clearly, enabling them to see injuries that they would not have been able to observe macroscopically and\/or confirm the existence of abnormal phenomena:\n\n> Genital injuries what I would do is... if it was external I would first put the colposcope on because you can do that without recording, and it could well be that they've shaved their pubic hair, it could be a shaving rash, which I would look more clearly at because the colposcope would magnify, and again I would say to them 'well I'm just going to take a look at this' and I would say 'well actually when you look at that it is an in-growing hair or whatever'. \n> \n> (FNE B)\n\nWhile both FME A and FNE B emphasise the benefit of the device in enabling them to view and clarify, they also draw attention to the fact that they are doing so while the colposcope's camera is switched off; i.e. they are using the device for illumination and magnification, not to record. Maintaining discretion over whether an examination should be recorded enables practitioners to limit the potential for further harm to come to the survivor; as highlighted by Rogers (1996) and Brennan (2006), the security of the colposcopic images once recorded is paramount and can place resource pressures on a medico-legal team to ensure that they have the storage capacity to maintain the security and confidentiality of the recordings. In the words of one FME:\n\n> May I say though the trend is... to use the colposcope more routinely, more regularly. 
We may not take photographs with it, but at least we use it for as I say, value of illumination and magnification. So it is being used more and more frequently... but no recording carried out, because that gives us the headache of having to store the video and the rest of it.\n> \n> (FME B)\n\nConsequently, practitioners make a decision on a case-by-case basis regarding whether to switch on the recording aspect of the device: some wait until an injury is observed that requires recording (FME A above for instance), while others have their own classificatory framework to identify which victims it would be appropriate to record. For instance:\n\n> Use the colposcope when you feel you are going to get more information; so for example, if you are looking at somebody who has had six children and had sex last week, the chance of finding any injury are limited, so you wouldn't bother with the colposcope, but if you are looking at somebody who's 18, never had sex before, then you want to record the details in minute, minute detail.\n> \n> (FME B)\n\nWhichever way practitioners decide if and when to switch on recording, an explicit reason for maintaining this discretion is to limit unnecessary further harm to the survivor (such as that which can result from images not being maintained securely).\n\nIn the same vein, before FM\/NEs can carry out the examination, they must first obtain consent to recording it. While no guidance has been provided on whether the device should be used routinely, the Faculty of Forensic and Legal Medicine has produced advice on the collection, storage and use of any images produced via colposcopic recording. The practitioner is required to inform the survivor that the images are likely to be viewed by other medical practitioners, and, if in England and Wales (although not Scotland for reasons discussed below), that there is the exceptional chance that the images will also be shown in the courtroom. On receiving this information, the survivor is deemed to have been informed, and can choose whether or not to allow recording.\n\nPractices such as maintaining discretion over when to record and asking for the survivor's consent before recording are not significantly victim-oriented. For instance, they do not address Patel _et al._ 's fears that routine colposcopic examination could result in the greater possibility of psychological harm to the survivor, something that troubled one of the FMEs in the study:\n\n> Yes we usually use a colposcope to check for injuries, but genital injuries are usually rare... but it's a usual technique to use the colposcope... so when you go straight according to the guidelines you have to use the colposcope, so that's a second rape now.\n> \n> (FME D)\n\nYet, at the same time, consenting for recording does not have an overtly evidential basis; for instance, obtaining consent potentially reduces the number of colposcopic images, which might in turn diminish the quantity of evidence produced during the examination. As a result, the measures introduced to make the colposcope useful do not fit with either of the two traditional discourses about forensic medicine; they constitute a compromise, attempting to employ the colposcope for its benefits (illumination and magnification) but at the same time ensuring that they only have as many images as they can securely manage in order to maintain a duty of care to the survivor. 
Such compromises between the evidential and the therapeutic are essential in forensic work, as further demonstrated by another strategy used to make the colposcope beneficial but not harmful: controlling who has access to the images once they have been produced.\n\n### **_Access to images_**\n\nBrennan (2006) drew attention to the potential for the colposcope to assist with training and peer review. For the training of FNEs in particular, the colposcope has been used as a means to record the trainee's examination and review their practice in conversation with their trainer. During the early phases of training, the more experienced practitioner is physically present while the trainee is performing the forensic examination, guiding and advising the process. As part of this early phase, the trainer pays attention to the way the trainee uses the colposcope as well as the details pertaining to the other parts of the examination. This includes:\n\n> not only how you physically do the cases, how you record it as well, if the light's not right, if your hands are in the way or if the focus is not right then it's basically a worthless recording, you might as well have not done it. So it's the technique with the colposcope.\n> \n> (FME H)\n\nExtensive initial training ensures that the experienced practitioner will not be required at later examinations performed by the trainee; they can instead rely on the colposcopic recording (or 'DVD') as the focus of training conversations and feedback.\n\nApart from cases where the survivor did not provide consent, all examinations are recorded during training in order to provide feedback for development. Of course, once a practitioner is deemed competent it is down to their own discretion to determine when it is appropriate to record. Nevertheless, the cases chosen to be recorded by the practitioners are important for their own development, with medico-legal teams holding regular peer review meetings to discuss interesting cases, comment on each other's skills and share ideas of best practice. As one FME reflected: 'We have a monthly meeting for peer review... and if anyone's got an interesting DVD they bring that to the meeting... so there is an on-going [peer review]' (FME H).\n\nWhile the technology is of notable benefit for the training and development of medico-legal practitioners, it does come at the cost of the sharing of images which the survivor may not want distributed to the practitioner's colleagues. As discussed above, the Faculty of Forensic and Legal Medicine have provided clear guidance that the practitioner should explain the various ways that the documents may be used, including within the context of the courtroom; however, FMEs in Scotland have found another way to limit the number of people who will see the recordings.\n\nScottish FMEs are adamantly opposed to the practice of showing colposcopic recordings in the courtroom as has been done in other jurisdictions (Faculty of Forensic and Legal Medicine 2010, _State of West Virginia v. William. 
M._ 2010) and have an agreement with the Scottish prosecuting body, the Crown Office and Procurator Fiscal Service, not to be asked to show the images:\n\n> I find some women feel that this [colposcopic recording] is going to be brandished in court, _NO WAY_ , we only allow another medical expert, the defence medical expert to see, they will come to us and I will not stay in the room it will be a police officer who will stay with them, they will see this thing, see my report and then go.\n> \n> (FME A, respondent's emphasis)\n\n> If we're using colposcopy... then there is a CD now which is recording the examination, now that CD is not available to anybody but the doctor, doctors who've taken it... So the Crown Office [and Procurator Fiscal Service], the courts, nobody, and we have an assurance from the Crown Office that they will not ask for it to be produced in evidence, as has been done in England, it has been used as evidence in England. In Scotland we believe, there is a woman in the witness box and the next minute you are showing 'Glorious Technicolors' of her private parts, that's not on. Now you may say 'Well, what about the defence?' Now if the defence have their own experts they are allowed to see the video, together with the doctors who have removed it, taken it... And that's how it is done and we've had no problems in Scotland, the fact that we cannot produce it in evidence... But that [colposcopic recording] we guard with our lives almost, in no way is that going to be divulged, given to anybody.\n> \n> (FME B)\n\nDue to the potential for harm to the survivor via the observation of the video within a public forum like the court, Scottish FMEs have negotiated with the Crown Office and Procurator Fiscal Service to ensure that such images are not publicly displayed. In order to do so they have developed a strategy to maintain one of the evidential benefits of the colposcope, the allowing of others in the criminal justice process to view a contemporaneous representation of the examination. However, they have permitted access only to those who will read the images in a medico-legal manner; other forensic practitioners to whom the survivor has already provided consent. The strategy is yet another example of a forensic compromise: attempting to retain the benefits of the technology (e.g. maintaining the perception of fairness and justice by enabling the defence to have access to the recordings), while at the same time ensuring the dignity of the survivor. It is with some final thoughts about the relationship between the evidential and therapeutic aspects of forensic medical work and what that means for the incorporation of new technologies into legal spaces that I will conclude.\n\n## **Conclusion**\n\nIt is no longer the case that forensic medical practitioners solely focus on either evidential or therapeutic concerns; although there are certain times in the forensic medical examination when one of these may be emphasised over the other, studies have demonstrated that practitioners are always cognisant of both during the overall conduct of their work (Mulla 2011, Rees forthcoming). Consequently, it is not possible for technologies such as the colposcope to address only one aspect of the evidential-therapeutic spectrum. 
As the colposcope controversy and the ensuing local decisions about practice have demonstrated, it is not enough for a new technology to provide only an evidential benefit if it is also harmful to victims.\n\nWhile it was assumed by early proponents of the technology that the colposcope would revolutionise rape trials, enabling practitioners to identify more injuries and thereby provide more certain evidence, in reality the routine use of the device has only caused greater confusion over what the presence of genital injuries actually represents. That colposcopic examination has become routine despite continuing uncertainty over what its findings represent testifies to the consensus that has recently consolidated around the broad import of the device. Yet, the colposcope was only considered useful once practitioners had found ways to ensure that they could successfully manage and securely store the large numbers of images it produced, i.e. by only recording when they believed it to have the greatest potential evidential benefit, ensuring the survivor's consent, and negotiating for the images only to be admissible in the courtroom in exceptional circumstances (and in Scotland, attaining agreement not to show them at all). It was only after such manoeuvring was complete that the colposcope achieved the status of a successful 'forensic' device.\n\nThe colposcope is indicative of the difficulties of incorporating technological devices into the legal milieu. While the institutions of science and law are interrelated and reliant upon each other in order to maintain credibility (Smith and Wynne 1989), the nexus at which they meet throws up complex sets of difficulties that require workarounds, as other contributions to this volume indicate. While many assumed that the colposcope, as it stood, would be of significant benefit to legal fact-finding, when faced with the realities of victims' experiences and of criminal justice it was deemed necessary to adapt the ways in which the device was used. It is not enough to transfer scientific technologies wholesale into the legal arena; rather, to be successful, devices must be made forensic and slowly integrated, taking account of the ways such artefacts can be differently interpreted, utilised and understood within legal contexts.\n\n The assumption that evidence gathering and care of the complainant are mutually exclusive categories has not only been prevalent in the forensic medical intervention, but the police and prosecuting and defending barristers have also, for many years, believed that the only way to get at the truth of whether an allegation happened was by appearing sceptical or disbelieving towards the complainant, i.e. emphasising evidential needs over care (Horvath and Brown 2009, Kelly _et al._ 2005, Rees 2010, Temkin 2000, 2005). Fortunately, such practices are nowadays less common, especially in the police (McMillan and Thomas 2009); however, even the recent Stern Review (Stern 2010) can be characterised as stating that complainants can either receive fair treatment (known as procedural justice) or have a strong case (criminal justice), i.e. the care versus evidence dichotomy produced anew.\n\n Throughout this chapter I will use the terms 'survivor' and 'victim' interchangeably to denote the subject upon whom the colposcope is used.
While I am aware of the various difficulties with the choice of preferred noun for the survivor of sexual assault, and in particular the use of the word 'victim', I believe that these terms represent a far less disbelieving attitude towards the subject than the term 'complainant', normally used to identify someone who has reported a sexual assault but which leaves dubiety over the veracity of their allegation.\n\n The team were assessing the veracity of sexual violence cases based upon whether the hymen was intact, a highly questionable method of interpretation.\n\n Teixeira's study is not cited in Slaughter and Brown (1992) and in fact they explicitly state that '[n]o published studies have evaluated the usefulness of the colposcope in the routine assessment of rape victims' (Slaughter and Brown 1992: 83). Of course, such an evaluation was exactly what Teixeira was trying to achieve.\n\n STS scholars have drawn attention to the importance of images in scientific and medical work, emphasising the ways in which representations enable work, particularly when the object of investigation is not visible to the naked eye, but also recognising that such images are not the product of 'natural realism', but rather are constructed according to established disciplinary conventions (Frow 2012, Lynch 1985). The production of images and the ways that they come to be interpreted can be very important in legal contexts given the significant power representations have in terms of their 'visual persuasiveness' (Burri 2012: 52). The belief that images do portray naturalistic realism gives them the status of objective facts, enabling others, for instance jurors, to see for themselves. Given that images are constructions, however, they are also framed with particular meanings, drawing attention to certain aspects of phenomena at the expense of others. As a result, images can provide very persuasive evidence (Burri 2012, Jasanoff 1998).\n\n Although, as Brennan notes, the extension of the slogan to the adult cases had taken place 'without a significant research base' (Brennan 2006: 195).\n\n This has unfortunately come to pass; see White and Du Mont (2009).\n\n The study has been criticised for its methodology; cases were allocated to either the 'sexual assault' or 'consensual sexual intercourse' cohort based upon whether a case had been corroborated (or not). Consequently, if there existed substantiating evidence for sexual assault, for instance evidence of genital injury, the case would be added to the victim cohort. Conversely, if an account was not corroborated, then the complainant (who had originally been colposcopically examined as a victim) would be re-categorised within the consensual sexual intercourse group. As a result, some of the cases located within the consensual category might have been inappropriately classified. The implication of this is that a large proportion of the consensual sex category could represent women who had, in fact, been sexually assaulted, but whose cases resulted in insufficient corroboratory evidence to substantiate the allegation.\n\n As Lloyd _et al._ 's (2005) study recording the genital measurements of 50 women reported, female bodies are more diverse than most medical practitioners assume.\n\n Such assumptions about the type of cases where injuries are more likely to appear are not without their difficulties (Rees 2010).\n\n## **References**\n\nAdams, J.A., Harper, K., Knudson, S. and Revilla, J.
(1994) 'Examination findings in legally confirmed child sexual abuse: it's normal to be normal', _Paediatrics_ , 94: 310–317.\n\nAnderson, S.L., Parker, B.J. and Bourguignon, C.M. (2009) 'Predictors of genital injury after nonconsensual intercourse', _Advanced Emergency Nursing Journal_ , 31: 236–247.\n\nAstrup, B.S., Ravn, P., Lauritsen, J. and Thomsen, J.L. (2012) 'Nature, frequency and duration of genital lesions after consensual sexual intercourse – implications for legal proceedings', _Forensic Science International_ , 219: 50–56.\n\nBarnes, B. and Shapin, S. (1979) _Natural Order_ , Thousand Oaks, CA: Sage Publications.\n\nBarnes, B., Bloor, D. and Henry, J. (1996) _Scientific Knowledge: A Sociological Analysis_ , London: Athlone.\n\nBowyer, L. and Dalton, M.E. (1997) 'Female victims of rape and their genital injuries', _British Journal of Obstetrics and Gynaecology_ , 104: 617–620.\n\nBrennan, P.A.W. (2006) 'The medical and ethical aspects of photography in the sexual assault examination: why does it offend', _Journal of Clinical Forensic Medicine_ , 13: 194–202.\n\nBurri, R.V. (2012) 'Visual rationalities: towards a sociology of images', _Current Sociology_ , 60: 45–60.\n\nCollins, H.M. (1975) 'The seven sexes: a study in the sociology of a phenomenon or the replication of experiments in physics', _Sociology_ , 9: 205–224.\n\nCollins, H.M. (1981) 'Son of seven sexes: the social destruction of a physical phenomenon', _Social Studies of Science_ , 11: 33–62.\n\nDu Mont, J. and White, D. (2007) _The Uses and Impacts of Medico-Legal Evidence in Sexual Assault Cases: A Global View_ , Geneva: World Health Organisation.\n\nFaculty of Forensic and Legal Medicine (2010) _Guidance for Best Practice for the Management of Intimate Images that May Become Evidence in Court_. Available at: http:\/\/fflm.ac.uk\/libraryby\/type\/\n\nFrow, E.K. (2012) 'Drawing a line: setting guidelines for digital image processing in scientific journal articles', _Social Studies of Science_ , 42: 369–392.\n\nHilden, M., Schei, B. and Sidenus, K. (2005) 'Genitoanal injury in adult female victims of sexual assault', _Forensic Science International_ , 154: 200–205.\n\nHorvath, M. and Brown, J. (2009) _Rape: Challenging Contemporary Thinking_ , Cullompton: Willan Publishing.\n\nJasanoff, S. (1998) 'The eye of everyman: witnessing DNA in the Simpson trial', _Social Studies of Science_ , 28: 713–740.\n\nKelly, L., Lovett, J. and Regan, L. (2005) _A Gap or a Chasm? Attrition in Reported Rape Cases_ , London: Home Office Research, Development and Statistics Directorate.\n\nKuhn, T.S. (1996) _The Structure of Scientific Revolutions_ , Chicago: University of Chicago Press.\n\nLincoln, C. (2001) 'Genital injury: is it significant? A review of the literature', _Medicine, Science and the Law_ , 41: 206–216.\n\nLloyd, J., Crouch, N.S., Minto, J.L., Liao, L.M. and Creighton, S.M. (2005) 'Female genital appearance: \"normality unfolds\"', _British Journal of Obstetrics and Gynaecology_ , 112: 643–646.\n\nLynch, M. (1985) 'Discipline and the material form of images: an analysis of scientific visibility', _Social Studies of Science_ , 15: 37–66.\n\nMercer, D. (1996) _Understanding Scientific\/Technical Controversy_.
Available at www.uow.edu.au\/arts\/sts\/research\/STPPapers\/Occpaper-1.html\n\nMcMillan, L. and Thomas, M. (2009) 'Police interviews of rape victims: tensions and contradictions in contemporary rape cases', in M. Horvath and J. Brown (eds), _Rape: Challenging Contemporary Thinking_ , Cullompton: Willan Publishing.\n\nMulla, S. (2011) 'Facing victims: forensics, visual technologies, and sexual assault examination', _Medical Anthropology_ , 30: 271\u2013294.\n\nNorvell, M.K., Benrubi, G.I. and Thompson, G.I. (1984) 'Investigation of microtrauma after sexual intercourse', _Journal of Reproductive Medicine_ , 29: 269\u2013271.\n\nOstrander, S.A. (1995) 'Surely you're not in this just to be helpful', in R. Hertz and J. B. Imber (eds), _Studying Elites Using Qualitative Methods_ , Thousand Oaks: Sage.\n\nPatel, H.C., Courtney, G.M. and Forster, G.E. (1993) 'Colposcopy and Rape', _American Journal of Obstetrics and Gynecologists_ , 168: 1334\u20131335.\n\nRees, G. (2010) '\"It is not for me to say whether consent was given or not\": Forensic medical examiners' construction of \"neutral reports\" in rape cases', _Social and Legal Studies_ , 19: 371\u2013386.\n\nRees, G. (2011) '\"Morphology is a witness that doesn't lie\": diagnosis by similarity relation and analogical inference in clinical forensic medicine', _Social Science and Medicine_ , 73: 866\u2013872.\n\nRees, G. (forthcoming) 'Contentious roommates? Spatial constructions of the therapeutic-evidential spectrum in medico-legal work', in I. Harper, T. Kelly and A. Khanna (eds), _The Clinic and the Court: Medicine, Law and Anthropology_ , Cambridge: Cambridge University Press.\n\nRogers, D. (1996) 'Physical aspects of alleged sexual assaults', _Medicine, Science and the Law_ , 36: 117\u2013122.\n\nRogers, D. (2004) 'The genital examination', in M. Dalton (ed.), _Forensic Gynaecology: Towards Better Care for the Female Victim of Sexual Assault_ , London: Royal College of Obstetricians and Gynaecologists Press.\n\nSavage, S.P., Moon, G., Kelly, K. and Bradshaw, Y. (1997) 'Divided loyalties? \u2013 the police surgeon and criminal justice', _Policing and Society_ , 7: 79\u201398.\n\nSlaughter, L. and Brown, C.R.V. (1992) 'Colposcopy to establish physical findings in rape victims', _American Journal of Obstetrics and Gynecology_ , 166: 83\u201386.\n\nSlaughter, L., Brown, C.R.V., Crowley, S. and Peck, R. (1997) 'Patterns of genital injury in female sexual assault victims', _American Journal of Obstetrics and Gynecology_ , 176: 609\u2013616.\n\nSmith, R. and Wynne, B. (1989) _Expert Evidence: Interpreting Science in the Law_ , London: Routledge.\n\nSommers, M.S., Fisher, B.S. and Karjane, H.M. (2005) 'Using colposcopy in the rape exam: health care, forensic, and criminal justice issues', _Journal of Forensic Nursing_ , 1: 28\u201334.\n\nState of West Virginia v. William M. (2010) Available at www.state.wv.us\/wvsca\/docs\/Spring10\/35130.htm\n\nStern, V. (2010) _The Stern Review_ , London: Home Office.\n\nTeixeira, W.R. (1980) 'Hymenal colposcopic examination in sexual offences', _American Journal of Forensic Medicine and Pathology_ , 2: 209\u2013215.\n\nTemkin, J. (2000) 'Prosecuting and defending rape: perspectives from the bar', _Journal of Law and Society_ , 27: 219\u2013248.\n\nTemkin, J. (2005) _Rape and the Legal Process_ , Oxford: Oxford University Press.\n\nWhite, D. and Du Mont, J. 
(2009) 'Visualizing sexual assault: an exploration of the use of optical technologies in the medico-legal context', _Social Science and Medicine_ , 68: 1–8.\nChapter 6\n\n* * *\n\n# **Telling tales**\n\nSome episodes from the multiple lives of the polygraph machine\n\n_Andrew Balmer_\n\n* * *\n\n## **Introduction**\n\nThis chapter is about the polygraph device, more popularly known as the 'lie detector'. The device has changed over time, though it has retained a central logic: specifically, physiological measures (e.g. blood pressure) are used as proxies for emotional responses created by the act of lying. However, this simple description might already go too far in prescribing a particular account of the device, since it will be the argument of this chapter that much of what the polygraph does can only be understood in relation to the contexts of its application. My argument focuses on the use of the polygraph in United States criminal trials and governance. Though it is largely assumed that the device is inadmissible in the US, the actual picture is more complex. Moreover, the adoption of the polygraph outside of criminal trials is similarly intricate, with the technology playing diverse roles in government departments, police investigations, dispute resolutions, post-probation programmes, surveillance, private investigation, family conflicts, media campaigns, in films and television, and a range of other contexts.\n\nThere are now several histories of the lie detector, which provide fascinating accounts of how it developed, spread and found its way into systems of governance. These descriptions often tell a kind of 'origins' story. As the scholarship has developed, the accounts have worked further and further back into the device's history, to its antecedents and to the epistemological spaces into which it was born. They have also developed lines of connection between the polygraph and the recent emergence of brain-based devices. These analyses have argued that there are significant similarities in the assumptions underlying the way that these ostensibly different technologies are seen to work, or in the ways in which these technologies are rhetorically positioned as solutions to problems of governance, justice and social order.\n\nFor example, Alder affirms the significance of the polygraph in solving the problem of (dis)trust in American workplaces, criminal procedure and governance (Alder, 2002; Alder, 2007). One important feature of this argument concerns police use of 'third degree' practices of physical abuse to obtain confessions. The politicisation of violence in policing practices allowed developers of the polygraph to rhetorically position use of the device as enlightened, humane and progressive (Littlefield, 2009; Littlefield, 2011; Bunn, 2012).\n\nSuch tales often recount the troubled emergence of the polygraph, during which it struggled to gain scientific and legal legitimacy. They highlight that the machine has undergone a series of alterations to its material components and that the protocols for its application during an examination have been revised on a number of occasions in the quest for credibility. Despite this story of technical change, such efforts to legitimise the polygraph, and demonstrate its reliability, appear to have been largely in vain. The polygraph's status is as contested as it ever was.
This proves to be the point of friction that gets the socio-historical fires burning: how is it that a device that has been doggedly attacked for over a hundred years has found application in so many varied and vitally significant contexts?\n\nBunn (2012: 7–29), for instance, describes the significance of the notion of the 'born criminal' for the development of criminology and how this concept proved a barrier to the emergence of lie detection since it treated lies as a symptom of an underlying pathology, rather than a common social behaviour. Charting similar territory, Weber (2008: 68–70) cogently illustrates how scientific work on fear was vital to establishing a link between lying, the body and the measurement devices available at the time. Ronald Thomas (1999: 23) traces the role of detective fiction in the emergence of a 'new literacy' of the body, and argues that the developers of polygraphy 'aimed at achieving in the field of law enforcement the same feat that detective writers sought to produce in the literary imagination: reading the truth directly inscribed in the criminal body.' Drawing on Thomas' insights, Littlefield (2011: 68) highlights the way in which a history of 'mind reading' in fiction was similarly important in creating a body that could be read and notes the continued importance of this narrative in contemporary brain-based deception detection.\n\nIn this regard, the scholarly literature has perhaps largely assumed that the story of the polygraph somehow needs to be made sense of. Though these social histories revel in the many twists and turns of the polygraph's life story, they nonetheless often point towards the contradiction between the device's status and its application as being in need of resolution. Such an impetus to explain the polygraph's success embeds a certain notion that the world can indeed be explained coherently; that seemingly incongruent phenomena can be resolved if enough context is excavated from the historical archives.\n\nThough these stories of lie detection are already vibrant and rigorous, I aim to further nuance this work by taking inspiration from scholarship in science and technology studies (STS) that has evidenced the ways in which technologies and social order are co-produced (Latour, 1987; Jasanoff, 2004a). Co-production is 'shorthand for the proposition that the ways in which we know and represent the world (both nature and society) are inseparable from the ways in which we choose to live in it. Knowledge and its material embodiments are at once products of social work and constitutive of forms of social life' (Jasanoff, 2004a: 2). I also draw on work that has celebrated the complexities and multiplicities of technological artefacts and knowledge-making practices (see the following collections for examples: Law and Mol, 2002; Ang, 2011). In this regard, I adopt the notion that multiple realities co-exist and that each individual context comes with its own set of practices for making sense of technologies, bodies, texts, and so on. Mol (2002: 6) argues that ' _ontology_ is not given in the order of things, but that, instead, _ontologies_ are brought into being, sustained, or allowed to wither away in common, day-to-day, sociomaterial practices' (emphasis in original).
Following Mol and others, Pickersgill (2013) has drawn attention to the ways in which technoscientific innovation can profoundly reshape social and legal ontologies that in turn become implicated in innovative processes themselves, in complex ways that escape linear narratives. In what follows, I draw inspiration from these positions to tell a few tales of the polygraph from its lives in and out of the courtroom. Using the extant literature, I argue that the numerous practices in which the polygraph has found application produce a complex story at the intersections of law and science that cannot be easily resolved into a coherent and sensible trajectory.\n\n## **Assessing the legal status of the polygraph in the USA**\n\nIn thinking about the legal history of the polygraph, it is useful to reflect on the development of physiological studies relating to (criminal) behaviour. In particular, let us consider the history of blood pressure. From the 1600s to the late 1800s this phenomenon was constituted through a great many developments and paradigmatic changes in understanding and apparatus, much of which we owe to Stephen Hales, a biologist working in the eighteenth century. In 1733 he took measurements directly from the artery of a horse by use of a 9ft glass tube (Booth, 1977) and thus made the first measurements of blood pressure. More than a century and a half later, in 1896, the first non-invasive blood pressure cuff was invented by Scipione Riva-Rocci (Noyes, 1936). Around this time, Angelo Mosso first made connections between blood pressure and fear. Mosso observed that a patient with a cranial fissure showed increased brain pulsations when he was fearful. For example, when the patient accidentally missed saying a prayer at 12pm, Mosso's measuring device showed an increase in pulsations. This helped form the basis of studies of the correlation of blood pressure with emotional changes (Raichle, 1999). Mosso was a student and colleague of the infamous Cesare Lombroso, who theorised that criminal behaviour was a form of atavism and that criminals could be identified by their anatomical features (Bunn, 2012). Lombroso was the first to apply this knowledge in assisting police interrogation of criminal suspects (Trovillo, 1939). Thus, by the beginning of the twentieth century, the links between emotions, criminality and physiology had been forged, but only tentatively.\n\nHugo Munsterberg's work at Harvard in 1915, conducted with his student William Moulton Marston, brought Mosso's early observations into dialogue with understandings of deception and truth. Marston published his work on the systolic blood pressure symptoms of deception in 1917, claiming that: 'The behavior of the blood pressure [... ] constitutes a practically infallible test of the consciousness of an attitude of deception' (Marston, 1917: 162). Littlefield (2011: 52–56) shows how Marston developed what he called the 'significant lying curve', a graphical representation of bodily responses, which he connected to his concept of the 'deceptive consciousness'. Most important in this work was Marston's creation of experimental protocols, still used today, that are designed to have subjects perform known lies at known times (having subjects, for example, commit a 'mock crime' by 'stealing' an object from a known room and then lie about it).
Weber argues that the performance of mock crimes and the staging of the examination were crucial to invoking fear in the subject and that this emotional response was vital to the measurement of lying as conceived by Marston and colleagues (Weber, 2008: 74\u201375 and 149). Thus were established the key ontological connections that made lie detection in its early years. Importantly, these ontological connections between emotion, the body, and lying invoked emerging ideas from physiology, but also psychology and psychiatry.\n\nSo it was that, in the 1920s, what Alder (2007) calls 'the American obsession' with lie detectors began. The brutal 'third degree' interrogation practices of police of the time, coupled with widespread corruption of the justice system, proved a fertile ground for the emergence of this nascent forensic science (Alder, 2007). The work on deception continued into the late 1920s and it was then that Marston called his first version of the lie detector 'the systolic blood pressure deception test' (Grubin and Madsen, 2005). Shortly after this, it was Marston himself that administered the test in what came to be seen as a landmark decision in _Frye v. United States_ (1923) 293 F. 1013 D.C. Circuit Court.\n\n_Frye_ set the standard of admissibility for expert evidence, and through it began the long battle to establish the polygraph as admissible evidence for criminal trials. In 1922, William Moulton Marston \u2013 one of the developers of the lie detector \u2013 examined James Alphonse Frye in a now infamous case of combined robbery and murder. Marston determined that the test results exonerated Frye of wrongdoing. However, his testimony was deemed inadmissible by the lower trial court and the judgment was upheld by the D.C. Court of Appeals. The opinion of the appeal court created the now well-known and broadly applied _Frye_ standard, which held that, for expert testimony to be admissible, the scientific principle from which the testimony was deduced must be 'sufficiently established to have gained general acceptance in the particular field in which it belongs' ( _Frye_ , 1014). The Court determined that the systolic blood pressure deception test, later to become 'the polygraph', had 'not yet gained such standing and scientific recognition among physiological and psychological authorities' ( _Frye_ , 1014).\n\nIn this regard, the device's scientific immaturity, based not only on the still emerging science of physiology but also of psychology and psychiatry, proved an important technical contribution to the determination of its inadmissibility. _Frye's_ focus on general acceptance and reliability exemplified this:\n\n> Just when a scientific principle or discovery crosses the line between the experimental and demonstrable stages is difficult to define. Somewhere in this twilight zone the evidential force of the principle must be recognized [... ] We think the systolic blood pressure deception test has not yet gained such standing and scientific recognition among physiological and psychological authorities as would justify the courts in admitting expert testimony deduced from the discovery, development, and experiments thus far made.\n> \n> ( _Frye v. United States_ (1923) 54 App. D.C. 46; 293 F. 1013, 1014)\n\nThe Court's decision in _Frye_ was therefore directly, though not fully, determined by the status of the device and of the scientific fields with which the polygraph was affiliated. 
In physiology, the practices of measurement were progressing towards standardisation but had been developed only some ten years prior. Similarly, in 1923, the psychological notion of the emotional body and the psychiatric understanding of the unconscious were still in development and these ideas were central to the polygraph's claim to be objective in the determination of truth and falsehood (Weber, 2008). The lie detector was thus similarly in an early stage of development, with new physiological measures being added, new practices for the examination being devised, and with little or no standardisation in the devices, their usage, or the training and backgrounds of operators. The ambiguous nature of the lie detector, then, ensured that it would not be used in court at this time; more profoundly, however, this same ambiguity played a key role in the production of stable evidentiary procedure that would resonate through the courts for decades to come.\n\n## **The shifting terrain of admissibility**\n\nThe _Frye_ ruling developed across the states into a _per se_ exclusion, meaning that the polygraph was routinely refused admissibility without much hearing of its reliability, validity or technological development. This began to change in the 1970s and 1980s, when a few cases emerged at the Federal level in which the polygraph was granted limited admissibility (Giannelli, 1997). Such cases were largely heard in relation to specific issues, such as perjury or cases in which the defendant's taking of a polygraph test was relevant to some matter outside of the central issue of the trial. As such, it was the specificities of the crime under examination that allowed some courts to grant admission of the evidence from polygraph tests. Many of these developments were overturned and by the late 1980s the response to the polygraph remained largely exclusionary at the Federal level.\n\nAt the state level, however, a different picture had emerged. Whilst some courts retained the _per se_ ban on admissibility, others took a different path in their development of a 'prior-stipulation' mechanism for admitting polygraph results and testimony. The 'prior-stipulation' rule required that all parties to the trial agreed to the admission of the findings of the examination prior to it being conducted. This meant that defendants might find themselves hearing evidence to whose admission they had themselves stipulated but which indicated their guilt. This unusual situation continues to this day at the state level, with twenty-nine states retaining the _per se_ ban, fifteen states adopting prior stipulation and only one state, New Mexico, allowing polygraph evidence without prior stipulation (Shniderman, 2011).\n\nThis emergence of the prior-stipulation rule was facilitated by _Frye's_ focus on reliability and general acceptance. Cases seeking admissibility of the polygraph were able to argue that technical developments in the period since 1923 had led to increased reliability, whilst narrowing down the relevant scientific community to a select group of experts on polygraph science (Balmer, 2011). Advocates of the polygraph, it seemed, had finally found a route into criminal trials.\n\nWhilst the prior-stipulation rule had been developing, there was also mounting dissatisfaction with _Frye_ (Jasanoff, 1995).
For example, some courts were having trouble applying the rule as regards determining the appropriate field in which to evaluate the evidence and struggled to determine exactly what would constitute general acceptance (Giannelli, 1980). Since the Rules Enabling Act of 1934, the Supreme Court had been able to promulgate rules of practice and procedure but nonetheless took a rather light-handed approach to developing rules of evidence. In 1975, the Federal Rules of Evidence (FRE) were approved by Congress and President Ford; these had been hashed-out and re-developed over an achingly long period between 1934 and 1975. A number of committees were established and disbanded, reports written and re-written, and a series of political controversies troubled the formation of rules for evidentiary decision-making (Scallen, 2001).\n\nDespite such trouble, it was not until 1993 \u2013 seventy years after _Frye_ \u2013 that the Supreme Court finally decided to hear a case directly regarding the admissibility of expert evidence, in _Daubert v. Merrell Dow Pharmaceuticals_ (1993) 509 U.S. 579. In that decision, the Supreme Court held that general acceptance 'is not a necessary precondition to the admissibility of scientific evidence under the Federal Rules of Evidence' ( _Daubert_ , 597). As such, the _Daubert_ decision moved away from _Frye_ (though it retained a 'general acceptance' test as one of five criteria to be considered) and firmly adopted the FRE as the principal mechanism through which admissibility decisions were to be made at the Federal level (see also Solomon and Hackett, 1996).\n\n_Daubert_ was modified in two further Supreme Court cases, _General Electric Co._ v. _Joiner_ (1997) 522 U.S. 136 and _Kumho Tire Co. v. Carmichael_ (1999) 526 U.S. 137, which now constitute the ' _Daubert Trilogy_ ' used to determine admissibility at the Federal level under the FRE. Each state in the US may choose to adopt or adapt the Trilogy, and many had already adopted or adapted the FRE to their own local context before _Daubert_. However, the Trilogy has been far from uniformly implemented at the state level. Indeed, _Frye_ remains in force in its home Court in the District of Columbia, and in a number of states, including some of the larger ones like California, New York, Pennsylvania, Illinois and Florida. Other states have adopted the Trilogy in part or in whole but the uneven spread of _Daubert_ has been unpredictable (Kritzer and Beckstrom, 2007).\n\nFrom this summary of decisions, we begin to see that the United States operates something of a hodgepodge of practices for determining admissibility of the polygraph at the state level. The emergence of _Daubert_ in a period of relative success for the polygraph (as regards prior stipulation) has added to the options available to any court seeking to determine the admissibility of the technique. Moreover, as with the emergence of prior stipulation, it is the specificity of the case that often influences the way in which the polygraph is handled by the trial court.\n\nTake, for example, the case of _State of New Mexico v. Shane Glen Harrison_ (2000) 129 N.M. 328; 2000 NMSC 22; 7 P.3d 478. In New Mexico, polygraph evidence is currently admissible without prior stipulation \u2013 a situation that has developed over a number of cases in the past twenty years. Given a set of five rules to which the evidence must adhere, the polygraph charts and examiners can be admitted to the trial. One of these rules relates to the expertise of the examiner. 
In _Harrison_ , the Court had to determine what constituted a sufficient level of expertise for a polygraph examiner. The defendant claimed that the polygraph expert was not qualified to determine the effect that the defendant's condition of eye irritation and associated pain would have on the results of the test. The Court decided that the expertise of the polygraph examiner was sufficient to determine the results of the test, despite it not extending to the influence the eye irritation may have on those results. Rather, the Court instructed, such information had to be taken into account in weighing the significance of the evidence from the polygraph in relation to the rest of the evidence in the trial. In this regard, the state rules, based on elements of the FRE, interacted with the particular trajectory of cases heard on the polygraph in New Mexico to that date, notions of expertise, the specificities of the defendant's body and medical history, and the other evidence in the case.\n\nAdmissibility practices are thus constantly shifting and are open to re-interpretation. The corollary of this is that the meaning and import of polygraph evidence and of the polygraph's reliability and validity are similarly shifting and multiplying. We can see how the ontological status of the polygraph is not only given by the practices of which it forms a part but similarly influences the direction and consolidation of those practices. The ambiguous material and practical technoscientific status of the polygraph contributed to the creation of practices for the determination of admissibility of expert evidence more generally, which in turn pushed the device's development outside of the criminal trial and into the legal periphery. In the following section, I tell another tale of the polygraph that recounts how its ontological status changed as it multiplied out into\/formed a new world as a surveillance technique.\n\n## **The world of surveillance: the polygraph and social order**\n\nFollowing the exclusion of the lie detector from the criminal courts, innovation with the device nevertheless continued. Although Marston had done much to get the lie detector started, and would later contribute significantly to its popularisation, it was John Larson, a physiologist and medical student, and Leonarde Keeler, a 'man-about-town and entrepreneur' (Alder, 2002: 8), who oversaw its development and dissemination into American industry, government and the justice system. The pair worked under August Vollmer, the Berkeley Police Chief who was responsible for the professionalisation of the police force in the early part of the century. Vollmer's pledge was to do away with the third degree and found policing anew, this time on the basis of objectivity and a dispassionate pursuit of justice. Larson and Keeler's skills with the techniques of physiology were central to that programme since the emergence of criminology meant that crime was increasingly studied through biological lenses (Rafter, 2008; Bunn, 2012). Whilst each of the men contributed to the development of the polygraph, it was Keeler who patented an 'apparatus for recording arterial blood pressure' in 1931 (Keeler, 1931) and who, reacting to the exclusion of the polygraph, steered the device into policing, government and industry.\n\nIndeed, it was Keeler who did the most to popularise the polygraph, drawing on rhetorics of science, magic, law and order in his marketing of the device (Bunn, 1997).
For example, in 1929 he suggested subjecting department store employees to regular polygraph exams, believing that this would substantially reduce the losses suffered from pilfering (Alder, 2007). Within twelve months Keeler had begun administering the polygraph in banks, where he found 10\u201325 per cent of employees confessed to theft of one sort or another. The rate of the confessions was startling, and the outing of employees as untrustworthy connected with the emergence of a post-Taylorist demand for efficiency and workforce management (Alder, 2002). Despite a managerial will to fire these individuals, Keeler proposed that they instead be retained and re-tested regularly, a strategy that would certainly make him more money and position the polygraph not simply as a detector of deceit but also as a psychological deterrent to criminal activity. It was through this practice of regular testing and the notion of prevention, spurred by the strategies for producing social order in industry, that the polygraph became a surveillance technique. Contrary to Foucault's arguments regarding the panopticon's visual force (Foucault, 1977), the polygraph's key potency in the emerging practices of corporate surveillance was its temporal effects. As Weber (2008) argues, the polygraph promised to instil a feeling of being observed by the threat of the upcoming examination. This depended on a characterisation of the criminal mind and body: that one's crimes were no longer eroded by the ever flowing waters of time but were inscribed into the emotional body, later to be read.\n\nThis re-emphasis on the polygraph for employee screening had an impact on Keeler's development of training and examination. Previously, the number of examinations he could conduct was limited and every new trainee he produced became a competitor for consultations. The sale of the polygraph device did not net him nearly as much profit as would the consultations lost to his own progeny of investigators (Alder, 2002). The trouble was that in order to get a patent on the device, Keeler had drawn too strong a line between himself as 'inventor' of the device and its increasing depiction in the media (Alder, 2007; Bunn, 2012; Littlefield, 2010). As such, Keeler shied away from mass production of the device, worried that it would tarnish his image if it was taken up by a range of unqualified and untrained individuals and ultimately cost him in sales since he was the only one with the patent. However, his patent ran out after the war and \u2013 no longer able to monopolise the developing market \u2013 Keeler switched from a stratagem of direct consultation and selling of his own expertise, to one of selling the device and training-up more operators. He began running two-week training courses on how to use the polygraph, which rapidly increased the number of examiners in the USA (Alder, 2007). By the 1950s, various parts of American industry were routinely employing the polygraph to secure the reliability of its white collar workforce, with over 2,000,000 examinations taking place in 1950 alone (Alder, 2002).\n\nThe 1940s and 1950s thus saw a massive expansion in the use of polygraphy, particularly during the McCarthyite period. What Keeler had developed in the context of industrial efficiency now took up residence in the heart of the government's security policies as the device became used to police against the threat of hidden communist sympathisers and agents. 
The first government agency to adopt the practice was the Atomic Energy Commission in 1947 (Cook and Robinson, 1947; Shaw and Sapirie, 1947), but in the decades that followed it was picked up by the Department of Energy, the Department of Defense, the National Security Agency, and a range of other large departments in the USA with responsibilities for (inter)national security. The primary capability of the polygraph in this context thus became its use as a deterrent to espionage and as an investigative technique to expose it. Fear around the stability of American social order demanded new technologies for securing against its breakdown; the polygraph seemed to answer social and political concerns, helping to facilitate the expansion of practices of surveillance and interrogation. In this respect, the polygraph helped to shape the notion of American security from the 1940s onwards.\n\nA combination of factors thus influenced the shift in Keeler's strategies: the disappointing outcome that had been _Frye_ and the need to find a new home for the polygraph outside of the legal system; the struggle to attain a patent; the need to profit from the device without the patent; and the various political practices that constituted the culture of suspicion and mistrust during the 1940s and 1950s. Keeler played a prominent role in creating a new ontology for the polygraph by shifting his sales strategy. The emerging McCarthyite climate helped to stabilise this new world for the machine, as government practices adopted the polygraph as a surveillance device. The polygraph similarly proved influential in sustaining these practices. In the following section, I outline another context in which the polygraph came to act as a surveillance device (the Department of Energy) and show how the application of the device within these practices caused protest, leading to a damning report on its reliability and to its simultaneous entanglement with the 'war on terror'.\n\n## **The world of terror: challenges to the polygraph at the DOE**\n\nThe DOE was created in 1977 to coordinate the various energy enterprises and research centres across the USA in order to respond to the energy crisis crippling the US economy (Rudman _et al._ , 1999). The scientific work conducted in some of the DOE's most important organisations is classified and can be traced back to the Manhattan Project. However, counterintelligence (CI) reports over its first few decades consistently claimed that the DOE was failing to maintain high-level security. This was a problem since the 'nuclear secret' had become a fetish of the Cold War and the DOE had hundreds of millions of pages of classified material to manage (Masco, 2002).\n\nIn 1998 a Presidential Commission was created to investigate the matter; it declared the Department to be a 'Byzantine, and bewildering bureaucracy' (Rudman _et al._ , 1999) that would have to be fundamentally reformed to fortify itself against the threat of security breaches. The structure of the DOE was thus overhauled by Bill Clinton with Presidential Decision Directive 61. That directive created the DOE Counter Intelligence Office, which became operational in 1998 (Anon, 2000).
It also included provisions for polygraph testing and so in the same year 800 DOE Federal and contractor employees were scheduled to be screened (Cumming, 2007) by the CI for a number of 'espionage-related reasons' (Richardson, 1999).\n\nThe screening found very little acceptance among the energy scientists working in the major DOE laboratories (Anon, 2000). Four public hearings were held and over 100 written comments were received during 1999 on the subject of polygraph screening (Richardson, 1999). Even so, in the autumn of the same year Congress approved legislation expanding the number of employees subject to polygraph screening to 13,000, which was met with further criticism. Due to continued protest about the programme and the influence of Senator Jeff Bingaman – at the time, a prominent Democrat on the Senate Energy and Natural Resources Committee (Bingaman, 1999) – an 18-month report from the National Research Council (NRC) was commissioned by the DOE with funding of $860,000 (McKee, 2000) to investigate the scientific status of the polygraph.\n\nThe report's findings painted a worrisome picture for DOE security and for the polygraph's legitimacy. It argued that the proportion of spies and terrorists that would be subject to DOE polygraph screening would, presumably, be extremely low, since they would account for only a small percentage of the much larger legitimate scientific workforce. The polygraph, it was argued, does not have adequate sensitivity to demarcate such a small number of threatening subjects. Screening a hypothetical population of ten spies among 10,000 employees, at a reasonably generous accuracy rate of 80 per cent detection of deceivers, would, according to the report, produce 1,606 failed tests, of which only eight would be spies. The rest would be false positives that misidentified trustworthy scientists as spies (NRC, 2003). The report thus questioned the validity and reliability of the polygraph and criticised many of the scientific articles supporting the device for, amongst other factors, not having adequate experimental controls. Overall, the report is a rather damning assessment of the potential of the polygraph, as two of the authors have themselves tried to make clear elsewhere (Fienberg and Stern, 2005).\n\nThe report, however, concedes that the polygraph may still have utility in deterring applications from candidates who present a security risk and that it may go some way to reducing security violations and increasing confessions. Thus, the use of the device in security settings was nonetheless (partially) preserved by virtue of its confession-producing ability, which the report implies has little to do with any scientific principle. Instead, the potency of the polygraph is attributed to the continued _perception_ of the device as valid and accurate and its use in leveraging confessions from examinees.\n\nDespite retaining some of the promise of the polygraph, the NRC report recommended a programme of scientific research that might validate, supplement or supplant the device: 'National security is best served by a broad research programme on detecting and deterring security threats, not a narrow focus on polygraph research' (NRC, 2003). Congress responded to this recommendation by allocating $500,000 for the National Science Foundation and Office of Science and Technology Policy through the 2004 Intelligence Authorization Act.
The funding was for research on 'behavioral, psychological, and physiological assessments of individuals' (Public Law 108–177, December 13th, 2003, Section 375).\n\nIn 2002 the National Defense Authorization Act directed the DOE to develop a new polygraph programme in light of the NRC report (Cumming, 2007) and the DOE dutifully responded in the Federal Register of April the following year (Anon, 2003). As regards the criticisms of the NRC, the DOE acknowledged the lack of scientific evidence for polygraph screening whilst retaining its programme because it was mandated to do so by Congress. The DOE argued that the polygraph was not as central to the determination of security as seemed to be the case. It explained that it was just one test used as part of an investigative process and that the number of false positives was tolerable since the technique would nevertheless identify suspect individuals and innocent individuals would not be affected by a false positive. The Secretary of Energy, Spencer Abraham, commented that 'in light of the current national security environment, the ongoing [sic] military operations in Iraq and the war on terrorism it was appropriate to retain the existing system' (Anon, 2006). Thus a number of political factors were implicated in the continued use of the polygraph machine at the DOE despite the protests from prominent scientists. The device was constituted as part of a programme of measures being taken to ensure the security of the United States and so its role as a surveillance device in homeland security was expanded along new lines to become an important tool in the war on terror.\n\n## **Conclusion**\n\nAs Mol and Law contend, investigations into the organisation of the modern world lead us to discover that we are 'living in two or more neighbouring worlds, worlds that overlap and coexist' (2002: 8). In this chapter I have shown how the status of the polygraph is importantly contextualised within the practices of its use in different socio-legal spaces and how it similarly influences their development.\n\nThe establishment of admissibility rules in _Frye_ , their influence on prior stipulation, the shifting positions on prior stipulation across the US state courts, the way in which the FRE was adopted in _Daubert_ , the subsequent creation of the Trilogy and the individual specificity of cases brought before trial courts all evidence a tangled, multiple and complex picture for the polygraph as regards its legal status. Within this heterogeneous network of relations, any attempt to pin down the exact nature or pattern of polygraph admissibility is made impossible by the tensions that emerge between the various worlds of legal evidence decision-making.\n\nIf this were not enough complexity, the various multiplications of the polygraph's existence in the legal periphery and government of the United States make any representation of the device ever more unstable. The adoption of the lie detector by the US government during the 1950s and 1960s was concurrent with, and overlapped, its heterogeneous extensions in the worlds of law. Moreover, the movement of the polygraph into the world of contemporary practices of war is only one of many new developments in the lives of the polygraph. In particular, the choices I have made in this chapter have obscured our view of the polygraph's use with sex offenders.
Such use has been developing since the 1960s and has further interacted with admissibility decisions and surveillance practices (Balmer and Sandland, 2012). I might also have chosen to explore the connections made between these episodes and the development of brain-based techniques of deception detection (Littlefield, 2009). The polygraph's lives in the media have resonances and discordances with these other worlds of representation and use. There are more and more tales to be told.\n\nThe tendency in the literature has been to see these various episodes as progressions in the journey of the device, from failure at trial to success in government, or to try to explain how the device 'doesn't work' scientifically (or at least it is highly contested) but does work as a surveillance tool (or at least rhetorically). But there are non-coherences (Law, 2002) in the narratives analysts have articulated about the polygraph, and these must be recognised as such. To accept non-coherence in our accounts of the device is to begin to produce a polygraphic story of polygraphy, which might lead us to novel perspectives, analytical tools and representational strategies to understand the way in which the polygraph navigates and constitutes relations between science, law and social order.\n\n## **References**\n\nAlder, K. (2002) 'A social history of untruth: lie detection and trust in twentieth-century America', _Representations_ , 80(1): 1–33.\n\nAlder, K. (2007) _The Lie Detectors: The History of an American Obsession_ , New York: Free Press.\n\nAng, I. (2011) 'Navigating complexity: from cultural critique to cultural intelligence', _Continuum_ , 25(6): 779–794.\n\nAnon (2000) _Improving Counterintelligence Capabilities at the Department of Energy and the Los Alamos, Sandia, and Lawrence Livermore National Laboratories_.\n\nAnon (2003) 'Office of counterintelligence: polygraph examination regulations 10 CFR Part 709', _Federal Register_ , 68(71): 17886–17890.\n\nAnon (2006) 'Counterintelligence evaluation regulations Department of Energy 10 CFR Parts 709–710', _Federal Register_ , 71(1989): 57386–57397.\n\nBalmer, A.S. (2011) _Where the Truth Lies: The Development and Legal Status of Lie Detection for Use in U.S. Criminal Trials_ , PhD Thesis, University of Nottingham.\n\nBalmer, A.S. and Sandland, R. (2012) 'Making monsters: the polygraph, the plethysmograph and other practices for the performance of abnormal sexuality', _Journal of Law and Society_ , 39(4): 593–615.\n\nBingaman, J. (1999) 'Speaking in Congress on the National Defense Authorization Act for Fiscal Year 2000 conference report, 22 September 1999', _Congressional Record_ : S11189–S11201.\n\nBooth, J. (1977) 'A short history of blood pressure measurement', _Proceedings of the Royal Society of Medicine_ , 70: 793–799.\n\nBunn, G.C. (1997) _The Hazards of the Will to Truth: A History of the Lie Detector_ , York: York University.\n\nBunn, G.C. (2012) _The Truth Machine: A Social History of the Lie Detector_ , Baltimore, MD: Johns Hopkins University Press.\n\nCook, R.W. and Robinson, J.C. (1947) _Continued and Extended Use of the Polygraph_ , Department of Energy Declassification Review [Online].
Available: http:\/\/antipolygraph.org\/documents\/RHTG40422.shtml (accessed 24 July 2010).\n\nCumming, A. (2007) _Polygraph Use by the Department of Energy: Issues for Congress_ , Congressional Research Service Report for Congress, Washington, DC: Library of Congress.\n\nFienberg, S. and Stern, P. (2005) 'In search of the magic lasso: the truth about the polygraph', _Statistical Science_ , 20(3): 249–260.\n\nFoucault, M. (1977) _Discipline and Punish_ , London: Allen Lane.\n\nGiannelli, P.C. (1980) 'The admissibility of novel scientific evidence: Frye v. United States, a half-century later', _Columbia Law Review_ , 80: 1197–1250.\n\nGiannelli, P.C. (1997) 'Polygraph evidence: post-Daubert', _Hastings Law Journal_ , 49: 895–924.\n\nGrubin, D. and Madsen, L. (2005) 'Lie detection and the polygraph: a historical review', _Journal of Forensic Psychiatry & Psychology_ , 16(2): 357–369.\n\nJasanoff, S. (1995) _Science at the Bar: Law, Science, and Technology in America_ , Cambridge, MA: Harvard University Press.\n\nJasanoff, S. (2004) 'The idiom of co-production', in S. Jasanoff (ed.), _States of Knowledge: The Co-Production of Science and Social Order_ , New York: Routledge.\n\nKeeler, L. (1931) _Apparatus for Recording Arterial Blood Pressure_.\n\nKritzer, H.M. and Beckstrom, D.C. (2007) 'Daubert in the states: diffusion of a new approach to expert evidence in the courts', _Journal of Empirical Legal Studies_ , 4(4): 983–1006.\n\nLatour, B. (1987) _Science in Action: How to Follow Scientists and Engineers through Society_ , Boston, MA: Harvard University Press.\n\nLaw, J. (2002) 'On hidden heterogeneities: complexity, formalism and aircraft design', in J. Law and A. Mol (eds), _Complexities: Social Studies of Knowledge Practices_ , London: Duke University Press.\n\nLaw, J. and Mol, A. (eds) (2002) _Complexities: Social Studies of Knowledge Practices_ , London: Duke University Press.\n\nLittlefield, M.M. (2009) 'Constructing the organ of deceit: the rhetoric of fMRI and brain fingerprinting in post-9\/11 America', _Science, Technology & Human Values_ , 34(3): 365–392.\n\nLittlefield, M.M. (2011) _The Lying Brain: Lie Detection in Science and Science Fiction_ , Ann Arbor, MI: University of Michigan Press.\n\nMarston, W.M. (1917) 'Systolic blood pressure symptoms of deception', _Journal of Experimental Psychology_ , 2(2): 117–163.\n\nMasco, J. (2002) 'Lie detectors: on secrets and hypersecurity in Los Alamos', _Public Culture_ , 14(3): 441–467.\n\nMcKee, J. (2000) '$860,000 Department of Energy study to evaluate polygraphs', _Albuquerque Journal_ , 5 December.\n\nMol, A. (2002) _The Body Multiple: Ontology in Medical Practice_ , London: Duke University Press.\n\nNoyes, B. (1936) 'The history of the thermometer and sphygmomanometer', _Bulletin of the Medical Library Association_ , 24(3): 155–165.\n\nNRC (2003) _The Polygraph and Lie Detection_ , Washington, DC: National Academies Press.\n\nPickersgill, M.D. (2013) 'Sociotechnical innovation in mental health: articulating complexity', in Flear, M. L., Farrell, A., Hervey, T. K. and Murphy, T.
(eds) (2013) _European Law and New Health Technologies_ , Oxford: Oxford University Press.\n\nRafter, N. (2008) _The Criminal Brain: Understanding Biological Theories of Crime_ , New York: NYU Press.\n\nRaichle, M.E. (1999) 'Modern phrenology: maps of human cortical function', _Annals of the New York Academy of Sciences_ , 882: 107\u2013118.\n\nRichardson, B. (1999) _Memorandum for Heads of Departmental Offices, 'Counter Intelligence Polygraph Implementation Plan', The Secretary of Energy, Washington, DC 20585_ [Online]. Available: www.fas.org\/sgp\/othergov\/polygraph\/polyimp.html (accessed 24 July 2010).\n\nRudman, W.B., Caracristi, A.Z., Drell, S.D. and Friedman, S. (1999) _Science at Its Best, Security at Its Worst: A Report on Security Problems at the U.S. Department of Energy_ , President's Foreign Intelligence Advisory Board, Washington, DC: United States Government.\n\nScallen, E.A. (2001) 'Analyzing the politics of (evidence) rulemaking', _Hastings Law Journal_ , 53: 843\u2013884.\n\nShaw, D.F. and Sapirie, S.R. (1947) _Continued and Extended Use of the Polygraph, Department of Energy Declassification Review_ [Online]. Available: http:\/\/antipolygraph.org\/documents\/RHTG40422.shtml (accessed 24 July 2010).\n\nShniderman, A. (2011) 'You can't handle the truth: lies, damn lies, and the exclusion of polygraph evidence', _Albany Law Journal of Science and Technology_ , 22(2): 433\u2013473.\n\nSolomon, S.M. and Hackett, E.J. (1996) 'Setting boundaries between science and law: lessons from Daubert v. Merrell Dow Pharmaceuticals, Inc.', _Science, Technology, & Human Values_, 21(2): 131\u2013156.\n\nThomas, R.R. (1999) _Detective Fiction and the Rise of Forensic Science_ , New York: Cambridge University Press.\n\nTrovillo, P.V. (1939) 'A history of lie detection (concluded)', _Journal of Criminal Law and Criminology_ , 30(1): 104\u2013119.\n\nWeber, S. (2008) _The Hidden Truth: A Sociological History of Lie Detection_ , PhD Thesis, London School of Economics.\nSection 3\n\n* * *\n\n# Regulation, ethics and values\n\n* * *\nChapter 7\n\n* * *\n\n# **Through the thicket and across the divide**\n\nSuccessfully navigating the regulatory landscape in life sciences research\n\n_Graeme Laurie and Shawn H.E. Harmon_\n\n* * *\n\n## **Introduction**\n\nThe metaphor of the 'regulatory landscape' may be over-used, but its utility ensures its continuation. The metaphor is particularly apt for lawyers (working around the life sciences) because much of the responsibility for navigating the landscape is theirs. Within this landscape, law is often characterised \u2013 or caricatured \u2013 as both creator of problems and bringer of solutions. We argue that both accounts are true, but only in half-measure. While it is often said that law struggles to keep up with rapid advances in the life sciences, we focus on a particular aspect of this struggle: it is due not to an absence of law but rather to the significant (and ever-expanding) space that law has come to occupy.
The result is that law is often seen as a 'thicket': a complex, fragmented, and cumulative collection of instruments, institutions, and mechanisms that requires ever greater knowledge, time and capital to navigate, thereby imposing disproportionate costs on actors and requiring inordinate amounts of effort to move through.\n\nWe posit that a root cause of many of the problems associated with the regulatory thicket is the piecemeal and reactive ways in which law has emerged; it has grown rapidly under the guidance of those largely uninformed about the workings and trajectories of modern life sciences and has tended to operate in a compartmentalised\/atomised fashion \u2013 focusing on particular _objects_ of regulatory concern, and building frameworks around those objects, rather than taking the time and adopting a broader perspective to view technological trajectories holistically (cf., Metzler and Webster, 2011). Two particular regulatory objects have caught regulators' attention: human tissue, and personal data. In scientific terms, these might be seen as connected and indistinguishable in relation to their potential and value as sources of new knowledge, but in legal terms they have become subject to vast and disconnected structures of control involving local, national and international authorities and instruments, and a plethora of judicial decisions on how their collection, storage and use does or does not impinge on the legal rights of those from whom they are derived. In short, regulatory systems have grown up around tissue and data with little, if any, consideration of how these regulatory objects relate to each other or, indeed, to the interests that the systems are designed to protect.\n\nThis chapter argues that the unanticipated and worrying consequence of these processes is a net failure to deliver regulatory systems that work to protect citizens' rights and to promote scientific development (where such is warranted). The undesirable result is that, despite the complex of regulatory rules and standards, the risks regulated for persist, sometimes living in the many gaps between regulatory instruments, and new ones are generated. Focusing on the life sciences, and more specifically the (controversial) storage and use of newborn bloodspot collections, we explore the problem of unintelligent regulatory design. We advocate a re-visioning of design: one that is driven by appropriate objects of attention, that is proportionate, reflexive, value-based and interest-achieving, and that does not attempt to impose anticipatory control of science trajectories but rather provides an adaptive _regulatory continuum_.\n\n## **'Thickets' and 'divides' in life science regulation**\n\nAn examination of life sciences regulation discloses a general drift towards a 'decentred' approach (Black, 2001) whereby control\/influence is exercised by a diffuse collection of actors which includes but goes beyond the state. In short, despite injunctions to focus regulation, particularly state efforts, on specific problems or objectives (Sparrow, 2000) and to ensure that efforts are 'responsive' (Baldwin _et al._ , 2012), the life sciences are characterised by multiple authority-wielding actors bringing to bear different and often conflicting interests and values from perspectives that are frequently narrow, all of which get captured in elements of the landscape. Moreover, actors call on others to intervene (or contribute to the landscape) despite not fully appreciating the propriety of what might be delivered. 
The law is frequently called upon to so intervene, and it has frequently done so; regimes governing the life sciences have burgeoned in the last half century. The legal landscape relevant to human tissue is a good illustration of this growth.\n\nThe _Human Tissue Act 1961_ was 2.5 pages long and consisted of four sections. Its successor, the _Human Tissue Act 2004_ (HTA 2004), has 61 sections (and seven Schedules) and is 25 times longer than the original Act. It established a regulatory system, including the establishment of a statutory governing authority, the regulation of consent, storage and use, provisions on property, and the articulation of criminal offences. Moreover, for those conducting research on human material, the HTA 2004 is but one of many legislative landmarks that sit at the centre of a mesmerising landscape of legislative provisions, professional guidelines, and ethical approval mechanisms. Those dealing with human reproductive material must navigate an entirely different legal landscape as articulated by the _Human Fertilisation and Embryology Act 2008_ (HFEA 2008). This erects separate institutional and licensing arrangements despite the fact that, in practice, the distinction between reproductive\/non-reproductive materials is not always clear.\n\nFurther, research in the life sciences must begin and end with willing participants whose personal data will inform to greater and lesser extents the establishment of basic research resources or, indeed, downstream research and products. The processing of personal data is governed by a European regime which in the UK is implemented by the _Data Protection Act 1998_ (DPA 1998). All forms of handling and storing personal data are caught by its provisions. While anonymised data are not caught by the regime, the processes of securing 'adequately anonymised' data are elusive, and all handling of data up to and including anonymisation is covered by the DPA 1998.\n\nVarious additional science-dependent considerations complicate the picture further. Research that is conducted with a view to delivering products or benefits to the market must satisfy various regimes as governed by the Medicines and Healthcare products Regulatory Agency (MHRA) and the European Medicines Agency (EMA). These can include conformity with the European Clinical Trials Directive (designed to establish safety and efficacy in medicinal products) and\/or the medical devices regimes (designed to test safety and performance). At various junctures, researchers will encounter and have to satisfy research ethics committees \u2013 of which there can be many, arising within each of the above regimes \u2013 as well as local, more _ad hoc_ arrangements that are driven by particular concerns (e.g., the protection of research participants' privacy). Finally, all of these regimes and the conduct that they are designed to regulate must be in conformity with human rights. In Europe, this regime stems from the Council of Europe's Convention on Human Rights (1950), as incorporated into UK law by the Human Rights Act 1998.\n\nThe increasingly expansive nature of law's participation in (if not dominion over) life sciences regulation has not been tempered by a well-informed and proportionate approach. Accordingly, the problem of the regulatory thicket is well-recognised in the research community, which has frequently lamented the regulatory burden under which it labours.
Giving evidence to the House of Lords Science and Technology Committee on Genomic Medicine in 2009, Andrew Morris noted:\n\n> The Department of Health guidance suggests that this domain is affected by 43 relevant pieces of legislation. There were 12 sets of relevant standards and eight professional codes of conduct. What this has bred is a culture of caution, confusion, uncertainty and inconsistency.\n> \n> (HL, para 6.15)\n\nThe Academy of Medical Sciences (AMS) has twice reported on the issue, first in the context of using health information in medical research (AMS, 2006) and then in a wider call for more proportionate governance regimes (AMS, 2011). A common theme in these analyses is that confusion about law and the constraints and flexibilities that it allows is a recurring problem; a further cross-cutting theme is the call for a shift in regulatory culture as part of the solution. This has resonance with the recommendations of the Data Sharing Review (2008), which reported that:\n\n> We have found that in the vast majority of cases, the law itself does not provide a barrier to the sharing of personal data. However, the complexity of the law, amplified by a plethora of guidance, leaves those who may wish to share data in a fog of confusion.\n> \n> (Thomas and Walport, 2008: Foreword)\n\nThat review addressed the information governance environment only; in cases where downstream market considerations come into play, the complexities multiply.\n\nTo demonstrate how these thickets can evolve, and to exemplify the deeply unhelpful divides that are erected\/perpetuated, we consider one particular undertaking: the ambition to conduct research using newborn bloodspot collections, or so-called 'Guthrie cards'. Newborn screening programmes began in many western countries in the 1960s. They involve a heel-prick sample of blood from the infant that is retained on manual cards. Blood samples are initially taken in the health interests of the infant, to diagnose and treat conditions that can be detected and eradicated or effectively managed at this early stage of life (e.g. phenylketonuria (PKU) or hypothyroidism). It was common to retain these cards, however, and their value has grown over the decades both with advances in genetic analysis and with the prospect of effective linkage to medical and other records (Couzin-Frankel, 2009).\n\nPerhaps unsurprisingly, the nature and balance of the interests implicated have changed, as have the social attitudes and the legal regimes involved. Whereas the idea of asking for consent was anathema in the 1960s, consent has ascended to a principal role in patient management and research regulation (Laurie, 2002). Equally, the advent of tissue and data protection regimes has changed the calculus (Lewis _et al._ , 2011; Hu, 2012); they raise important questions about the legal status of these collections, which contain both tangible (blood spots as tissue) and intangible (personal data) elements, and which have been recognised as de facto state-held DNA databases (Nuffield Council on Bioethics, 2007).\n\nA failure to engage families and publics around the existence and possible uses of such resources has resulted in numerous high-profile destructions of entire collections. For example, in _Beleno v. Texas Dept. of State Health Services_ (2009), the failure to obtain parental consent for the retention and use of blood spots was raised as an infringement of Constitutional rights (Drabiak-Syed, 2011).
As part of the out-of-court settlement, the Department of Health ordered that four million samples be destroyed. In Canada, _LD and ED (Guardian Ad Litem) v. British Columbia Women's Hospital and Health Centre and British Columbia Children's Hospital_ (2010) was initiated for breach of privacy. Most recently, the Irish Department of Health and Health Service Executive announced plans to destroy its collection unless appropriate consent for retention was obtained ( _Irish Times_ , 7 March 2013). This decision was revised shortly after, and the destruction of the collection was put on hold until further expert advice was gathered ( _Irish Times_ , 26 March 2013). The argument underlying all of these machinations is that data protection law does not permit retention and use without this specific consent.\n\nThere have, of course, been attempts to address the challenges that new uses and changing regulatory regimes create. But these initiatives all seem to face a common problem alluded to above: existing legal regimes have developed in a sector-specific manner. Thus, data protection and the common law of confidentiality govern data sharing and linkage, while the HTA 2004 and common law developments on property in the body dominate tissue research (Harmon and Laurie, 2010). These legally-created and embedded artificial distinctions are not only unhelpful but potentially obstructive to the core interests they purport to protect.\n\nLooking beyond law, human geographer Bronwyn Parry has argued that we need to collapse distinctions in order to explore questions about who controls 'bio-informational' transactions in the global economy (Parry, 2004). While Parry's concern is to understand the downstream intellectual property and benefit-sharing regulations that apply once transformations have taken place, we are more concerned with the logically prior and upstream question of regulating the 'raw' materials and data _before_ any commercial value is realised. The perspectives are inherently connected, however, because ultimately we must all consider the importance of a coherent regulatory continuum from basic research to delivery of benefit. Having said that, the upstream considerations are particularly important for two reasons. First, complete 'decontextualisation' of the materials and data has not yet taken place in that connections to persons remain, necessitating regulatory regimes that attend to the protection of their rights and interests in the handling of said materials. Second, serious limits or outright failures in regulatory regimes in the early stages of research and technological development will thwart efforts to realise any downstream benefits.\n\nA well-designed regulatory continuum can help facilitate the translation of knowledge to product, but bridging the tissue\/data divide is critical. At present, legal frameworks and those working within them interact insufficiently and fail to see the bigger picture, despite literatures on data sharing and tissue-use being replete with common governance challenges (e.g., propriety and expectations around consent, overly complex legal provisions and disproportionate regulatory burdens, concerns around public and research participant trust, and conflicting ambitions for public and participant engagement).\n\n## **The widening 'divide' in data and tissue**\n\nA crucial step in delivering appropriate and proportionate governance is the vertical and horizontal trimming of the regulatory thicket.
The current data\/tissue regulatory divide is deeply unhelpful; not only can it lead to further cross-sector regulatory burden when the same artefact is caught by different regimes, but it can also lead to regulatory responses that are almost wholly divorced from the technological and scientific realities of dealing with tissues and data. From a biomedical perspective, tissue and data are simply different points on a continuum; tissue is less valuable for its tangible qualities than for the precious data it yields. Equally, data is important not necessarily in and of itself, but because of the aggregate value that derives from turning raw data into _information_. Law fails completely to appreciate this relative value and its implications.\n\nConsider the example of the European data protection regime, currently under review. The focus for regulatory attention is the processing of 'personal data'. This is defined as 'data which relate to a living individual who can be identified \u2013 (a) from those data, or (b) from those data and other information which is in the possession of, or is likely to come into the possession of, the data controller' (DPA 1998, s. 1(1)(e)). The details of the regulatory requirements are immaterial for present purposes; rather, three observations can be made.\n\nFirst, the regime is premised on a semblance of continuing connection between data being processed and the person to whom they relate. This is so irrespective of the nature or degree of any risks (such as to privacy) to the said person; moreover, it is blind to the motive or interest of the data controller. In essence, it treats the snooper in the same way as the researcher who is only interested in the aggregate value of information about a cohort of persons rather than any one individual (although see the limited provisions of the 'research exemption' discussed below).\n\nSecond, the definitions imply that if individuals are not identifiable from the data then the regime does not apply (i.e., anonymised data are not caught). This, however, raises two questions: What level of anonymisation is sufficient, given that such processes are more art than science? What is the position if identifiable data _are_ required for the scientific robustness of research? This is an on-going issue and the subject of considerable uncertainty and variation in practice. It has most recently been addressed in the UK by a Code of Practice from the Information Commissioner's Office (ICO, 2012). The Code provides recommendations to minimise risks but leaves a margin of appreciation with respect to what counts as _effective_ anonymisation. It does not have legal force and its reach is limited to practices within the UK. In one key respect, however, it provides assistance in conducting risk assessments with respect to anonymisation practices: it recommends assessing privacy threats relative to the 'motivated intruder' \u2013 that is, how likely is it that a reasonably competent actor, with access to public resources and willing to make enquiries of third parties who could provide additional information, would succeed in using anonymised data to re-identify individuals? The test does not assume specialised knowledge or skills (e.g., hackers), nor access to specialist equipment, nor resort to criminality. Thus, the threshold is set midway between the 'relatively inexpert' member of the public and the 'skilled specialist with prior knowledge'.
The test operates by way of a benchmark against which data controllers can test the robustness of their anonymisation practices, noting importantly that motives, threats and risks can change over time and therefore policies and practices must be kept under regular review \u2013 adding to the administrative and regulatory burden of policing the use of anonymised data.\n\nThird, as to research uses of identifiable data, the data protection regime provides a limited research exemption that can apply to data obtained for one purpose when later used for a research purpose so long as two crucial criteria are met:\n\n> (i)... the data must not be processed to support measures or decisions with respect to particular individuals, and (ii) the data must not be processed in such a way that substantial damage or substantial distress is, or is likely to be, caused to any data subject.\n> \n> (DPA, 1998, s. 33)\n\nEffectively anonymised data clearly meet these requirements. As to identifiable data, the justification must be made. If applicable, the consequence is that data can be retained indefinitely (normally data must be destroyed after original purposes for processing are met), and data subjects need not be granted access to their data (otherwise a norm in the regulations) _so long as_ research results are not published in an identifiable form. This is an attempt to relieve the regulatory burden of managing access requests, but this has been undermined by the fact that the Code suggests that granting such access is good practice (2012: 46). Moreover, the reduction in burden might be slight because while consent to research uses is not mandated, data subjects must still have adequate notice of the fact that data are being used for research. And if consent is not obtained and research data cannot be published in an effectively anonymised form, then subject access must be granted and the researcher is exposed to an action for breach of data protection (unless it could be shown that there was no alternative but to publish the results in an identifiable form).\n\nThe result is that research involving personal data continues to be constrained by the 'consent or anonymise' paradigm that has been so heavily criticised by the research community (AMS, 2006). This might be compounded further if current proposals to develop a European Data Protection Regulation are adopted in their current form. A January 2013 report from a European Parliament Committee advocated that research involving health data should only be conducted with data subject consent, which should be 'freely-given, specific, informed and explicit' (EP, 2013: 198). If this is not possible then research would only be permitted if it served 'an exceptionally high public interest' and data must then be 'anonymised or at least pseudonymised using the highest technical standards' (EP, 2013: 198). This trajectory for regulation is disconcerting because it suggests a widening, not a closing, of the gap between law and scientific practice contrary to accumulated experience. The slow pace and political charge of European legislative change are notorious, and the time to worry for data protection and health-related research has not yet come, but this phenomenon speaks to the deeper issues that concern us in this chapter.\n\nA further example from data protection \u2013 this time from the European Court of Human Rights (ECtHR) \u2013 indicates that a proportionate and common-sense approach is not prevailing, making the imperative for change all the stronger. 
_S and Marper v. United Kingdom_ (2008) concerned the lawfulness of the blanket policy in England and Wales to retain indefinitely and without consent DNA profiles and samples taken from persons arrested for (but never convicted of) criminal offences, however minor. It was held that such an indiscriminate policy was a breach of human rights that could not be justified. In the wake of the decision, the law was changed to embody a more proportionate approach involving time limits for retention (three years usually), but with the possibility of longer retention depending on whether there was a prior record of serious offending and\/or the seriousness of the instant offence (Protection of Freedoms Act 2012, Chapter 1). The reforms will result in around 1 million samples being destroyed, with destruction slated for completion by May 2013 (Hansard, 2012).\n\nThe significance of _Marper_ lies in two key findings of the ECtHR. First, mere retention of DNA \u2013 even without use \u2013 is an interference with the human right to respect for private life because of the possible implications that future uses can have for individuals ( _Marper_ , paras 67\u201377). This can be justified in the public interest so long as any interference meets a social need and is necessary and proportionate. Second, the samples, DNA profiles and any information derived from them constitute 'personal data' within the data protection regime ( _Marper_ , para 67), a fact previously accepted by the UK government. These findings can be contrasted with the 2007 Opinion of the Article 29 Working Party, which exists as an independent advisory body on matters of European data protection:\n\n> Human tissue samples (like a blood sample) are themselves sources out of which biometric data are extracted, but they are not biometric data themselves (as for instance a pattern for fingerprints is biometric data, but the finger itself is not). Therefore the extraction of information from the samples is collection of personal data, to which the rules of the Directive apply. The collection, storage and use of tissue samples themselves may be subject to separate sets of rules.\n> \n> (Article 29 Working Party, 2007: 9)\n\nHerein lies the rub. One European body suggests that DNA samples are 'personal data' in their own right while another advises contrarily and claims that separate rules might apply. Consequently, there is legal uncertainty about the status of tissue samples with respect to whether they are caught by the data protection regimes while, at the same time, there are additional legal regimes being applied to human tissue alone. The implications are potentially profound. On either perspective we have growth of the regulatory thicket. Either a cautious approach will be adopted to the regulation of health research involving data and tissues \u2013 in which case both regimes will be considered to apply \u2013 or the divide between data and tissues will be considered self-evident, in which case separate regimes will evolve that necessarily have overlapping effect.\n\nWe do not argue that data protection ought or ought not to apply to human tissues. Rather, we make a plea to develop and implement regulatory regimes that adopt a functional approach, both with respect to the protection of core human rights and interests and with respect to the likely cumulative effect of their operation on the scientific endeavour. From a scientific perspective the data\/tissue dichotomy is a false one. The distinction can be collapsed with little consequence.
Can we achieve this view in law?\n\n## **Collapsing the 'divide' between data and tissue**\n\nAlthough the regulatory objects in these diverse regimes are data and tissue, the core concern of the law \u2013 quite rightly \u2013 is the perceived link to the individual and the protection of his or her rights and interests. Seen in this way, regulatory regimes can be found to have much in common. For example, each regime (tissue and data) is far less onerous if anonymisation is achievable (i.e., if the link to the individual is broken). When the link remains, there is a tendency to place the individual at the centre of regulatory propriety (i.e., individual consent is needed to legitimise dealings with the tissue and data). We have argued elsewhere that consent is neither necessary nor sufficient to protect individual concerns in the health-research context (Laurie and Postan, 2012).\n\nMoreover, the tendency in law to fix the 'consent moment' by the use of the now ubiquitous consent form is dangerous and limiting for many reasons, not least that it can only represent a snapshot of expectations and responsibilities. This ignores the fact that expectations and responsibilities can and will change over time. To the extent that prior consent says anything about an individual's future interests, it can have an ossifying effect on their protection, foregoing the opportunity to develop dialogue and partnership with research participants. Where consent has a role to play, then, it should be seen as a continuing process necessitating genuine interaction throughout the life of the project (Laurie and Postan, 2012).\n\nAs we argued above, though, consent is not always possible or appropriate. Accordingly, it is here that law probably fails most to take a proportionate view of what is at stake. If the function of law is to protect core rights and interests, then what _are_ the legitimate residual interests that individuals have in their tissue and data once these find their way into a research setting? Privacy is undoubtedly a key concern, yet many concerns around privacy can be addressed through adequate security and access arrangements. Such measures can reduce the risks involved, as well as serve to screen for future inappropriate motives in access and use.\n\nNotwithstanding these technical responses\/solutions, some have argued that the importance given to individual autonomy should deliver a degree of continuing control over data and samples to the originators of that tissue or data (Beyleveld and Histed, 2000). For example, one might find it objectionable if one's tissue or data were used to further research objectives that one finds offensive or immoral. While the existence of affront might be real, it is not obvious what the legitimate _legal_ interest is that would be compromised. It is not privacy, which is about intrusion into a state of separateness from others (Laurie, 2002), nor is it bodily integrity since samples are no longer part of the individual and the data never were. But the control metaphor looms large because we tend to talk about both samples and data as if they were extensions of ourselves. The trend toward autonomy reinforces this, most particularly in the fetishisation of consent; moreover, there are some judicial mutterings about property in the body which reinforce these intuitional perspectives (Harmon and Laurie, 2010).\n\nWhere does this leave law?
The consent-based approach to research regulation has strength in its focus on the importance of the individual and in its strong suggestion of an element of control. There is, in fact, an elegant simplicity in such an approach. But much is lost as a result. First, the power that consent provides is largely the power to say 'No' (i.e., to refuse). Second, it is not clear that consent can ever operate alone or in a primary role to protect research participants' interests, and the role of consumer protection laws is a helpful analogy here. Third, the simplicity of the solution leads us to ignore\/overlook much that is at stake especially in terms of the overall interests in play.\n\nWe contend that the appropriate regulatory ambition for law is to _correctly_ identify and facilitate the core interests and values at stake, not just private interests (of research participants) but also public interests (such as promoting scientifically sound and ethically robust research). This demands a clearer and more sustained commitment to the wider range of underlying values that promote this broad spectrum of public and private interests (Harmon, 2010; Harmon, 2011; Laurie _et al._ , 2012; Nuffield Council, 2012). Here, an important but often un-vindicated value is that of solidarity, which encourages us to adopt a more social or communitarian stance and to consider what duties might flow from the individual in support of valuable undertakings aimed at the collective (Harmon, 2006; Nuffield Council, 2011).\n\nAccordingly, we advocate a functional approach to regulation in this area that is constructed around both the nature and scope of interests at stake, and which is calibrated by a consideration of the relative risks to said interests. For example, if the core continuing interest that individuals have in research using tissue or data derived from them is privacy, then an appropriate regulatory response would be to address security and access. If research can be demonstrated to have a reasonable prospect of promoting public interests it should be supported. Yet, the trend towards the non-negotiability of consent militates against this. Ironically, it does not guarantee that core interests will be better protected and it runs the considerable risk that public interests will be thwarted. This is not to suggest that consent never has a role to play. Rather, we propose that consent must be deployed as one in a set of regulatory options relating to the range of interests and objectives at stake. Paying due attention to the importance that many people place on consent is an important element in fostering trust in research. Equally, however, trust in that enterprise is dependent on the research delivering on its promises, and this can and does take commitment and investment over time.\n\nGiven this reality, governance regimes must be adaptive and reflexive; that is, capable of picking up on signals about shifting parameters, including participant expectations and scientific possibilities, and responding to these quickly and effectively (Laurie, 2011; Harmon _et al._ , 2013). A reflexive system of governance is based on constant interaction between stakeholders, including participants and publics, and, importantly, it is not entirely dependent on law. At best, law can provide structural architectures within which reflexive governance can be delivered, but it is a crude tool to provide what is needed on the ground and on a case-by-case basis (Laurie, 2013). 
This further suggests the need for multi-disciplinary design in regulatory regimes, at both the conceptual and the operational stages. A first step in this is collapsing any false distinctions such as that between tissue and data in health-related research.\n\n## **Crossing the 'divide': newborn screening programmes redux**\n\nAs indicated above, Guthrie card collections are a prime example of the challenges that face contemporary biomedical research. The hurdles to (research) success are compounded by the legal regimes that have grown up in an _ad hoc_ manner in the decades since their establishment. While some have turned their attention to the need for appropriate guidance (Botkin _et al._ , 2013; Chrysler _et al._ , 2011), there has been far less consideration of the lessons to be learnt for law. The long-term nature of the collection poses considerable dilemmas about whether and how it is appropriate or possible to impose contemporary governance requirements \u2013 such as consent \u2013 on collections that were established at a very different time (and in this regard we note the literature on museums and the politics of display: Lohman and Goodnow, 2006).\n\nThe value of the resource has also changed over time and will continue to do so, as will the nature of any interests people have in samples and data, facts which also complicate their appropriate regulation. Arguably, with respect to the tissue originators, direct health interests diminish and are replaced by core privacy interests. On our analysis, if the research value of the collection can be demonstrated and if adequate security and access provisions are put in place, this interest is met. In such a case, must consent still be sought (assuming it is at all practicable to do so)? There is no reason to suggest that consent from parents of present and future newborns should not be sought for inclusion in the resource, since this is the dominant contemporary ethical paradigm. But whether retrospective consent should be sought or even contemplated is another matter. The logistics aside, we posit that on-going governance of such resources need not and should not place consent as its central focus.\n\nAnd what is the legal status of such collections in light of the argument above? The decision in _Marper_ suggests that data protection applies, but the tangible nature of the collection will also be caught by tissue legislation where this exists. Accordingly, in England and Wales both the DPA 1998 and the HTA 2004 must be observed. In Scotland, the collection of 4 million samples is treated as part of the medical record (i.e., for the informational value that it represents). Arguably, there is no need for an additional layer of legal regulation simply because tangible cards\/spots are involved (and we acknowledge the literature on materiality as exemplified by Faulkner _et al._ , 2012, and the special issue it introduces). To require consent represents an example of disproportionate governance relative to the risks and interests at stake. Far more important for the future, and for the effective protection and promotion of both private and public interests, is the need for dynamic governance arrangements that both engage publics about the existence and value of such resources and deliver transparent and adaptive mechanisms for their management (Douglas _et al._ , 2012).
Arguments for the primacy of consent are not self-evident, either practically (given the scale and age of the resources) or as a matter of principle (it is unlikely to deliver a balance of all interests).\n\nAs a robust alternative to consent, there is the role of independent oversight bodies (Rothwell _et al._ , 2011), perhaps with lay representation as appropriate (albeit this raises important questions about what is meant by both 'lay' and appropriateness of representation). Denmark and other European countries have adopted this model, charging a suitably-constituted body with the stewardship of the resource (N\u00f8rgaard-Pedersen and Hougaard, 2007). We support such an approach and suggest further that central to its operation must be a commitment to the twin principles of openness and reflexivity.\n\n## **Conclusion**\n\nAfter much deliberation, the UK government announced that it will not abolish the Human Tissue Authority or the Human Fertilisation and Embryology Authority as had been proposed. Rather, it will conduct bespoke reviews of the functions of these regulatory entities with a view to reducing regulatory burden (Department of Health, 2013, para 69). This is only the UK context, and we must take into account European and international regulatory pressures; yet, we strongly suggest that any such reviews in the near or longer-term future must consider both the vertical and horizontal branches of the regulatory thicket. We advocate an approach that is cognizant of the full range of underlying values and interests at stake in modern health research and which embraces and reflects key features that deliver both responsiveness and proportionality in regulation. Legal architectures that embody a command and control approach are unlikely to reflect these dimensions. This is a plea to recognise the limits of law and a call to deliver intelligent regulatory design at the nexus of science and law that reflects scientific realities and changing understandings of what counts as social goods.\n\n## **Notes**\n\n1 For example, the Health Research Authority was created in 2011 with a specific remit to: 'create a unified approval process and to promote proportionate standards for compliance and inspection within a consistent national system of research governance' (HRA website). We have also developed a regime in the Wellcome Trust Scottish Health Informatics Programme that adopts a principles-based approach to the sharing and linkage of health-related data and that delivers a proportionate approval scheme that matches applications to link and share data to appropriate governance pathways based on a risk-based approach (Laurie and Sethi, 2013).\n\n2 We have advocated this elsewhere in the context of data linkage governance (Laurie and Sethi, 2013).\n\n## **References**\n\nAcademy of Medical Sciences (2006) _Personal Data for Public Good: Using Health Information in Medical Research_ , London: AMS.\n\nAcademy of Medical Sciences (2011) _A New Pathway for the Regulation and Governance of Health Research_ , London: AMS.\n\nArticle 29 Data Protection Working Party, Opinion 15\/2011 on the Definition of Consent (2011). Available at: http:\/\/ec.europa.eu\/justice\/policies\/privacy\/docs\/wpdocs\/2011\/wp187_en.pdf (accessed 12 April 2013).\n\nBaldwin, R., Cave, M. and Lodge, M. (2012) _Understanding Regulation_ , 2nd edn., Oxford: Oxford University Press.\n\nBeyleveld, D. and Histed, E. (2000) 'Betrayal of confidence in the Court of Appeal', _Medical Law International_ , 4: 277\u2013311.\n\nBlack, J.
(2001) 'Decentring regulation: the role of regulation and self-regulation in a \"Post Regulatory\" world', _Current Legal Problems_ , 54: 103\u2013146.\n\nBotkin, J., Goldenberg, A., Rothwell, E., Anderson, R. and Lewis, M. (2013) 'Retention and research use of residual newborn screening bloodspots', _Pediatrics_ , 131: 120\u2013127.\n\nChrysler, D., McGee, H., Bach, J., Goldman, E. and Jacobson, P. (2011) 'The Michigan BioTrust for Health: using dried bloodspots for research to benefit the community while respecting the individual', _Journal of Law and Medical Ethics_ , 39 Supp 1: 98\u2013101.\n\nCouzin-Frankel, J. (2009) 'Newborn blood collections: science gold mine, ethical minefield', _Science_ , 324: 166\u20138.\n\nDepartment of Health (2013) _Government response to the consultation on proposals to transfer functions from the Human Fertilisation and Embryology Authority and the Human Tissue Authority_ , 25 January 2013.\n\nDouglas, C., van El, C., Faulkner, A. and Cornel, M. (2012) 'Governing biological material at the intersection of care and research: the use of dried blood spots for biobanking', _Croatian Medical Journal_ , 53: 390\u20137.\n\nDrabiak-Syed, K. (2011) 'Legal regulation of banking newborn blood spots for research: how Bearder and Beleno resolved the question of consent', _Houston Journal of Health Law & Policy_, 11: 1\u201346.\n\nEuropean Parliament (Committee on Civil Liberties, Justice and Home Affairs: Rapporteur Jan Philipp Albrecht) (2012) ' _Draft report on the proposal for a regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation)_ ', 17 December 2012.\n\nFaulkner, A., Lange, B. and Lawless, C. (2012) 'Introduction: material worlds: intersections of law, science, technology and society', _Journal of Law & Society_, 39: 1\u201319.\n\nHarmon, S. (2006) 'Solidarity: a (new) ethic for global health policy', _Health Care Analysis_ , 14: 215\u201336.\n\nHarmon, S. (2010) 'Regulation of stem cell and regenerative science: stakeholder opinions, plurality and actor space in the Argentine social\/science setting', _Law, Innovation & Technology_, 2: 95\u2013114.\n\nHarmon, S. (2011) 'Ambition and ambivalence: encouraging a science culture in Argentina through engagement and regulatory reform', _Studies in Ethics, Law & Technology_, 5: 1\u201326.\n\nHarmon, S. and Laurie, G. (2010) 'Yearworth v. North Bristol NHS Trust: property, principles, precedents and paradigms', _Cambridge Law Journal_ , 69: 476\u201393.\n\nHarmon, S., Laurie, G. and Haddow, G. (2013) 'Governing risk, engaging publics and engendering trust: new horizons for law and social science?', _Science & Public Policy_, 40: 25\u201333.\n\nHu, S. (2012) 'Minnesota Supreme Court hears whether the Genetic Privacy Act protects newborn blood spot samples obtained under the state's newborn screening statutes \u2013 _Bearder v. State of Minnesota_ ', _American Journal of Law & Medicine_, 38: 225\u20137.\n\n_Irish Times_ , 'Calls for \"priceless resource\" of heel-prick test results to be saved', 7 March 2013. Available at: www.irishtimes.com\/newspaper\/health\/2013\/0205\/1224329653693.html.\n\n_Irish Times_ , 'Reilly does U-turn, ordering HSE not to destroy newborn blood samples', 26 March 2013.
Available at: www.irishtimes.com\/news\/health\/reilly-does-u-turn-ordering-hse-not-to-destroy-newborn-blood-samples-1.1338666.\n\nInformation Commissioner's Office (2011) _Data Sharing Code of Practice_ , Cheshire: ICO.\n\nLaurie, G. (2002) _Genetic Privacy_ , Cambridge: Cambridge University Press.\n\nLaurie, G. (2002) 'Better to hesitate at the threshold of compulsion: PKU testing and the concept of family autonomy in Eire', _Journal of Medical Ethics_ , 28: 136\u20137, doi:10.1136\/jme.28.3.136.\n\nLaurie, G. (2011) 'Reflexive governance in biobanking: on the value of policy led approaches and the need to recognise the limits of law', _Human Genetics_ , 130: 347\u2013356.\n\nLaurie, G. (2013) 'Governing the spaces in-between: law and legitimacy in new health technologies', in Flear, M.L., Farrell, A., Hervey, T.K. and Murphy, T. (eds) (2013) _European Law and New Health Technologies_ , Oxford: Oxford University Press.\n\nLaurie, G., Harmon, S. and Arzuaga, F. (2012) 'Foresighting futures: law, new technologies, and the challenges of regulating for uncertainty', _Law, Innovation & Technology_, 4: 1\u201333.\n\nLaurie, G. and Postan, E. (2012) 'Rhetoric or reality: what is the legal status of the consent form in health-related research?', _Medical Law Review_ , 1\u201344, doi: 10.1093\/medlaw\/fws031.\n\nLaurie, G. and Sethi, N. (2013) 'Towards principles-based approaches to governance of health-related research using personal data', _European Journal of Risk Regulation_ , 1: 43\u201357.\n\nLewis, M., Goldenberg, A., Anderson, R., Rothwell, E. and Botkin, J. (2011) 'Blood samples: state laws regarding the retention and use of residual newborn screening blood samples', _Pediatrics_ , 127: 703\u201312.\n\nLohman, J. and Goodnow, K. (eds) (2006) _Human Remains and Museum Practice_ , London: UNESCO.\n\nMetzler, I. and Webster, A. (2011) 'Bio-objects and their boundaries: governing matters at the intersection of society, politics, and science', _Croatian Medical Journal_ , 52: 648\u201350.\n\nN\u00f8rgaard-Pedersen, B. and Hougaard, D. (2007) 'Storage policies and use of the Danish Newborn Screening Biobank', _Journal of Metabolic Disease_ , 30: 530\u20136.\n\nNuffield Council on Bioethics (2007) _Forensic Uses of Bioinformation: Ethical Issues_ , London: NCB.\n\nNuffield Council on Bioethics (2011) _Solidarity: Reflections on an Emerging Concept in Bioethics_ , London: NCB.\n\nNuffield Council on Bioethics (2012) _Emerging Biotechnologies: Technology, Choice and the Public Good_ , London: NCB.\n\nParry, B. (2004) _Trading the Genome: Investigating the Commodification of Bio-information_ , New York: Columbia University Press.\n\nRothwell, E., Anderson, R., Burbank, M., Goldenberg, A., Lewis, M., Stark, L., Wong, B. and Botkin, J. (2011) 'Concerns of newborn blood screening advisory committee members regarding storage and use of residual newborn screening blood spots', _American Journal of Public Health_ , 101: 2111\u201316.\n\n_S and Marper v. United Kingdom_ (2009) 48 EHRR 50.\n\nSalter, B. and Faulkner, A. (2011) 'State strategies of governance in biomedical innovation: aligning conceptual approaches for understanding \"Rising Powers\" in the global context', _Global Health_ , 7: 1\u201314.\n\nSparrow, M. (2000) _The Regulatory Craft_ , Washington, DC: Brookings.\n\nTaylor of Holbeach (Lord), _Statement: Protection of Freedoms Act 2012_ , Hansard, 13 December 2012: Column WS161.\n\nThomas, R. and Walport, M.
(2008) _Data Sharing Review Report_ , at www.connectingforhealth.nhs.uk\/systemsandservices\/infogov\/links\/datasharingreview.pdf\nChapter 8\n\n* * *\n\n# **Misconduct hunting**\n\nResearch integrity via law, science and technology\n\n_Marie-Andr\u00e9e Jacob_\n\n* * *\n\nToday many elite scientists and experts from different parts of the world are part of virtual global networks, sharing more with one another than with many colleagues from their own countries (Haas 1992; Boulton _et al._ 2012; Riles 2011; Nielsen 2012). Actors within elite science receive similar education, read and write in the same journals, attend the same conferences, obey the same courtesy rules (Lam 2010: 15); in other words, they share a culture (Strathern 2000). A notable element of this shared culture is well-disseminated guidelines about 'good research practice'. There is a lot to be said about what 'good research practice' has come to mean in the last few decades, and why it has been so minutely articulated in some contexts and left purposefully vague in others. In fact these 'good research practice' norms are currently being renegotiated, and hence should not be taken for granted (Biagioli 2003, 2012; Jacob 2013). For instance, there have been calls for expanding the definitions of research misconduct comprised of Fabrication, Falsification and Plagiarism (FFP) to include self-plagiarism, ghost authorship, non-disclosure of conflicts of interest, amongst others (Farthing and Wells 2008; Goodstein 2010). Montgomery and Oliver (2009) have traced three 'institutional logics' mobilised to deal with deviance in science in the United States. They note that prior to 1975, the discourse was about norms and counter-norms of 'normal practice of science'. It then moved, between 1975 and 1990, to a focus on the prevention of scientific misconduct; and from the 1990s to the present, to promoting research integrity.\n\nIn the last couple of years in the UK, various consultation papers and reports have been lamenting that the regulation of research is too burdensome, bureaucratic, and expensive. There have been calls from the government to cut the bureaucracy, with prominent actors and institutions advocating a 'radical simplification' (DoH 2010: 9) of the legal frameworks regulating research. The Department of Health 'Report on the arm's-length bodies review' (2010) \u2013 which followed the publication of the White Paper 'Equity and excellence: Liberating the NHS' (2010) \u2013 has set out proposals to make the health sector more efficient and less bureaucratic by increasing accountability and transparency and reducing the number and cost of quangos. Amongst other things, the Report mandated the Academy of Medical Sciences to review current regulatory frameworks and plan a new pathway for regulating research. The Academy's subsequent 2011 report suggested increasing the speed of decisionmaking, reducing complexity, and eliminating unnecessary bureaucracy and costs (see also Laurie and Harmon, this volume). In this context \u2013 where bureaucracy almost became a dirty word, and where traditional modes of state regulation by law are seen as inefficient and expensive \u2013 self-regulation through the use of science and technologies takes on special relevance (Lessig 1999; Black 2001). 
But when it comes to regimes of governance, do legal tools and motifs ever recede into the background?\n\nIn contrast to the efforts of those interested in preventing misconduct, this chapter will instead examine how the idea of 'research integrity' itself has been translated into a regulatory agenda. What interests me is how 'legal' and 'technological' tools that govern contemporary science act to differentiate research 'integrity' from 'misconduct', and to hunt for misconduct. I shall further argue that these research integrity tools cannot, in fact, be neatly classified as either legal or technological. By drawing parallels between legal practices and peer deliberations by scientists, I also show how the specificity of the 'legal' and the 'scientific' is bound to get blurred. This has consequences for our own theoretical (or disciplinary) perspectives regarding the study of research integrity.\n\nSTS scholar Mario Biagioli suggests that norms within science, such as norms on scientific authorship, act as 'a para-legal discourse predicated on the absence (perhaps the impossibility) of a Law' (Biagioli 2007: 147). A first objective of this chapter is to modulate this claim, by showing _how_ methods of regulation and moderation are not only used as a substitute for a law in this area but also how, in the area of research integrity, these methods themselves mobilise typically legal resources and modes of enunciation. Drawing upon ethnographic fieldwork in the Committee on Publication Ethics (COPE) over 2010\u20132014, I suggest that the work of participants in present-day research integrity agendas resonates with an old kinship shared between legal and scientific practices.\n\nThe regulatory agenda of scientific research integrity is characterised by striking similarities between the practices of regulatees and those of regulators. This phenomenon is distinct from, and arguably far more intense than, 'regulatory capture' (Braithwaite and Ayres 1992), as it implies instead the sharing of the very same activities, tools, documentation practices and apprentices between both parties to the regulatory encounter. Hence a second objective of this chapter is to explore the implications of one such example of simultaneous sharing of practices between regulators and regulatees, coming from the domain of the regulation of scientific conduct.\n\nMy third objective is to compare past and current regulatory strategies to hunt scientific misconduct, and to reconsider what sets them apart. In particular, the chapter seeks to provoke a re-thinking of the difference between so-called 'old' legal tools and so-called 'new' scientific-technological regulation (Brownsword 2005; Lessig 1999) that have been applied to the governance of scientific work. Using the example of moderation of nineteenth-century medical publishing, I deflate the novelty (and hype) of regulation by technologies, and then reflect back on the evolving practices of COPE.\n\nLaw and society scholars Silbey and Ewick (2003) have declared that the new 'authenticating' site of science, where scientific truth is lodged, is no longer the lab, but the text, the publication itself. The milieu of scientific publication (and of publication ethics) has also become the site where some of the most interesting regulatory and self-regulatory initiatives within science take place.
Before turning to this regulatory archipelago (Rose and Miller 2008: 218) that moderates the conduct of researchers, and hunts instances of misconduct, let me briefly contextualize my chapter within the movements that inspire this collection.\n\n## **Engaging socio-legal studies and STS**\n\nSTS and socio-legal scholars are increasingly exchanging their tools, methods and concepts in order to study their respective objects of inquiry (Valverde 2005; Lezaun 2006; Cloatre and Dingwall 2013). The scrutiny of STS scholarship has opened some of the black boxes of law and regulation, and cast fresh light on socio-legal domains more generally (e.g. Prainsack, Turkmendag, Shapiro, this volume). Socio-legal scholars, for their part, have effectively unpacked various ways of 'making order' in science. 'Legal-pluralist' analyses that decentre 'enforcement by the state' (Braithwaite 1993; Black 2001) have made socio-legal approaches eminently relevant to the study of normative orders beyond state law, and to tackling regulatory webs that involve all sorts of actors, including scientific ones.\n\nPerhaps because of their trained eye and sensitivity to tailored legalities, lawyers and socio-legal scholars are well situated to study multiple forms of 'disputing' (Abel 1973) within science. For example, their acquaintance with texts that 'tell us what to do' (Constable 2008) may turn out to be useful when it comes to unpacking standards (Bush 2012) and other forms of 'regulatory objectivity' (Cambrosio _et al._ 2006) that proliferate within, and increasingly typify, biomedicine. Their familiarity with matters of statecraft and legislative modes of enunciation may help to tackle the power and authority of the government in managing dissent, consensus-making and reordering in science.\n\nPerhaps most importantly, what the joint streams of STS and socio-legal works have shown us is that if one 'attempts to capture the dynamics of knowledge processes', rather than labelling knowledges statically or categorically as either scientific, or legal, or social, 'one sees new things' (Valverde 2005: 421). The cross-fertilization between the two fields gives hope for less pre-conceived separation of units and more open-ended inquiries into what law and science can mean.\n\nIn the following pages, I use my object of inquiry to foreground a found intimacy between scientific and legal reasoning. Misconduct hunting, as we will see, is an evocative interplay between law, science and society.\n\n## **'Witnessing' and deliberating science**\n\nI start by sharing some initial thoughts about innovative methods of deliberation used by the Committee on Publication Ethics (COPE), an organisation in which I have been conducting ethnographic observations on a quarterly basis since 2010. I will show that the deliberations of these participants can be analogised with forms of experimental work performed in laboratories (cf. Bogner 2012), and that through their ways of 'witnessing' science, these participants juggle scientific as well as long-standing legal traditions in their deliberations.\n\nA number of organisations are involved in preventing misconduct, many of them groups of editors, including the International Committee of Medical Journal Editors (ICMJE), the World Association of Medical Editors (WAME), and the European Association of Science Editors (EASE).
One of the most notable is certainly COPE, an international organisation which started in April 1997 as an informal group of medical editors convened by Michael Farthing (then editor of _Gut_, and now Vice Chancellor of the University of Sussex), Richard Smith (then editor of the _British Medical Journal_), and Richard Horton (of _The Lancet_). At that first meeting, about 15 medical editors gathered at the British Medical Association (BMA) House in London: 'We discussed cases, and I think that we found it interesting and all learnt something. It was a very informal atmosphere, and we laughed and had fun', recalls Smith (Horton _et al._ 2012: 5). They kept meeting regularly to 'tell each other stories' (Horton _et al._ 2012: 5) but soon feared being perceived as a 'kangaroo court' (Horton _et al._ 2012: 5) and being sued for libel and slander. Following the advice of the eminent professor of medical law Ian Kennedy, they began to anonymise all the cases under discussion, and made clear that they 'weren't making decisions [...] simply offering advice' (Horton _et al._ 2012: 5).\n\nInitially a local group, today COPE is a large international network (though still physically operating from the UK). It is well known and consulted by the scientific and publishing community across the English-speaking world. Its Code of Conduct, and flowcharts 'designed to help editors follow COPE's Code of Conduct and implement its advice when faced with cases of suspected misconduct', have been translated into many languages including Spanish, Portuguese, and Persian. A truly cosmopolitan organisation, COPE has Council members from Europe, Australia, Brazil, China, and Iran, and an ombudsman from Australia, who is flown over to London quarterly for Council meetings.\n\nCOPE is funded by annual subscriptions from a very wide range of publishers and individual journals, and works on the basis of membership (as of 2012, it had more than 7,000 members). Originally, journals would become members individually, but now publishers like Blackwell and Elsevier have registered all their journals, and therefore some journal editors might be officially members of this organisation without knowing it. Indeed, I myself became an Associate Member of COPE (a category available to those who do not work as editors but who have an interest in publication ethics), for \u00a350 per year, in order to gain better access to its materials and benefit from reduced rates to attend its seminars.\n\nIn 2012, COPE received an award from the Council of Science Editors (CSE) for its work on improving scientific communication and promoting high editorial standards. The award was displayed to all participants at a COPE meeting. The CSE award tells us something about how COPE sees itself and is seen by others in the field: as a crucial actor in the promotion of good practice in science. The prize is also indicative of the increasing professionalisation of the field of misconduct studies, as Biagioli has discussed (Biagioli 2012: 467). Further to this, Richard Horton has recently said that COPE 'has become a national treasure' (Horton _et al._ 2012: 7). This might also indicate the implicit and under-examined Britishness of COPE (in spite of its international membership).\n\nCOPE's distinctive activity is its Forum (where my ongoing ethnographic fieldwork is taking place). COPE members call it a 'self-help' group for journal editors.
The Forum is a place where suspicions of questionable conduct are discussed openly, and where participants share their experiences and offer suggestions for responses and advice as to what to do next: 'It's an open forum for the sharing of views, we don't decide or judge', as one member explained to me. The Forum meets quarterly \u2013 on the day of the administrative meetings of the Council \u2013 to discuss cases submitted to it by journal editors. Members of the Forum include Council members of COPE (the Council includes medical writers, journal editors, publishing consultants, academics and one lawyer), but anyone who has editorial functions for a journal member of COPE, or who works for a publisher member of COPE, can also come and contribute to the exchange in the Forum.\n\nAny member can submit a case to COPE in advance of the Forum meeting, after which the case is anonymised, summarised in writing and circulated to all members prior to the Forum. At the Forum itself, the person who submitted the case presents orally, in person or by phone or videoconference, to the participants, and then participants openly discuss the case, share their own experience with similar cases, raise questions, share views, and debate \u2013 but, as the Chair pointed out to me, 'without rendering a judgment'. The open-endedness is further highlighted by the format in which COPE publishes the cases: 'Anonymised Text of the Case submitted'; 'Advice'; 'Follow-Up'; sometimes an 'Advice on the Follow-up'; and 'Resolution'.\n\n_Figure 1.1_ Committee on Publication Ethics (COPE) Forum\n\nSource: Jeannie Wurz\n\n'COPE is an experiment', wrote Michael Farthing in his first annual report (1998). Drawing on the seminal works of Shapin and Schaffer (1985) and Shapiro (2002), I now show how COPE can be understood as a regulatory space and a laboratory.\n\nIn _Leviathan and the Air-Pump_, Shapin and Schaffer show how, in the seventeenth century, Robert Boyle and his colleagues from the Royal Society formulated ways of conducting and reporting experiments that propelled 'not only new regimes of fact-finding but also new regimes of governance' (Jasanoff 2007: 22). The 'witnessing' of science practised through peer review, for instance, helped to democratize science, and gradually imposed itself hand in hand with other democratic means of deliberation that would relieve citizens from the absolutism of the monarchy. Shapiro (2002) points out that within the Royal Society, during this transformation of the conduct of experiments in the laboratory, long-standing conventions of legal witnessing and fact-finding were integrated into scientific enterprise and reasoning. Experiments and specific observations were debated in the language of law. Boyle himself often used legal terminology, for example arguing that 'matters of fact had to be brought to trial' (Shapiro 2002: 243). Shapin and Schaffer describe how he defined the laboratory:\n\n> In Boyle's programme there was to be a special space in which experimental natural philosophy was done, in which experiments were performed and witnessed. This was the nascent _laboratory_.\n> \n> (Shapin and Schaffer 1985: 334, their emphasis)\n\nShapin and Schaffer also draw a connection between experimental knowledge and legal knowledge produced in criminal court, in that both are based on 'collective witnessing'.
In law and science, 'the reliability of testimony depended upon its multiplicity' (Shapin and Schaffer 1985: 334).\n\nThey explain further:\n\n> The thrust of the legal analogy should not be missed. It was not merely that one was multiplying authority by multiplying witnesses (although this was part of the tactic): it was that _right action_ could be taken, as seen to be taken, on the basis of these collective testimonies.\n> \n> (Shapin and Schaffer 1985: 56, their emphasis)\n\nThe Royal Society debated 'matters of fact' not only in the language of truth, but also in that of moral certainty. Members were interested in normative questions about thresholds of credibility themselves. Latour (1993), discussing Boyle's use of legal metaphors (Shapin 1984), describes the scene almost ethnographically: 'credible, trustworthy, well-to-do witnesses gathered at the scene of the action can attest to the existence of a fact, the matter of fact, even if they do not know its true nature' (Latour 1993: 18).\n\nI suggest that the COPE Forum similarly uses a legally-influenced mode of 'witnessing', as well as deliberation, as tools of regulation. The Forum appears to constitute a kind of informal dispute resolution arena, but also a laboratory, an experimental space, a 'place where this multiplicity of interests [is] addressed, acquitted, and drawn together' (Shapin and Schaffer 1985: 340).\n\nCirca 2010\u20132014, members of the COPE Forum have come to examine 'matters of fact' and matters of conduct in specific contexts of dispute. They try to do two things: to determine what is and what has happened, but also to characterise (or categorise) what is now to be done. They ask what is and what has happened by following the plot closely, asking follow-up questions about chronology, dates of exchanges, specific locations and jurisdictions. For example, members are interested in knowing where the research takes place and where the researchers come from (this alludes to the controversy over whether different scientific standards of conduct might apply to different countries; see Jacob 2013). COPE members also use analogies with previous cases to feed into and shape their discussion. In the course of case discussions, the first question the Chair puts to the Forum is, recurrently: 'has anyone experienced something similar before?'\n\nThe Royal Society was commonly confronted with conflicting testimony: gentlemen could conflict in the Royal Society 'as they often did in the course of lawsuits' (Shapiro 2002: 56). Members and spokesmen often compared the work of the Royal Society to the work of the court. However, in practice the Society carefully 'avoided making collective judgments about matters of fact' (Shapiro 2002: 56). Specific scientists may have asked the Royal Society to rigorously 'investigate, examine, compare and also to declare judgment' (Shapiro 2000: 126) in cases of disputes and controversies. However, the Society preferred to suspend judgment, instead making statements about the instruments used by the scientists in dispute, and about their propensity, or not, to 'willingly falsify an observation' (Shapiro 2002: 257). This suggests that the Society wished to restrict its mandate to examining the facts of methods and the facts of conduct of scientific experiments, but _not_ scientific merit _per se_.\n\nLikewise, COPE members are explicit about their wish not to take on the role of an adjudicator: as one member observed to me, 'We don't judge, it's very open-ended'.
In the deliberations, issues of scientific merit are not discussed. What is key are the facts of conduct and of potential misconduct of scientists, not the worth of the science. Donald Kennedy (1997) points to that distinction as a feature of the tension between legal and scientific approaches to scientific conduct in the context of the adjudication of scientific fraud allegations. In the context of such inquiries, he notes, the subject matter itself never gets disputed. It is the sequence of events, the order of manuscript drafts (or of email exchanges), and the minute details of records that come under scrutiny. Kennedy alleges that these are the kinds of things that scientists absorbed in the technical content of their science might miss (or misinterpret); they would, however, hardly be missed by lawyers (Kennedy 1997: 232; see also Latour 2002).\n\nI interpret the COPE Forum as a contemporary practical case that suddenly makes Boyle's thought and experiments look much less like part of a distant past. COPE emphasises that it is precisely because many views are expressed, and many testimonies and experiences shared, that it can offer the right advice (its members are careful to state that this is not a _judgment_). Moreover, in the COPE Forum, it seems that there is an epistemic affinity between research and the mode of moderating it. Here it is deliberation, discussion of a case on the basis of the sharing of multiple accounts by multiple witnesses: an old legal technique, in renewed form? In other words, the experimental method proposed by Boyle for natural philosophy \u2013 which, according to Shapin and Schaffer, can be analogised to the collective witnessing of the criminal law \u2013 is re-performed in the Forum to control and moderate deviations in science.\n\nUntil recently, the COPE Forum had been meeting face to face around a large oval table, in a boardroom in central London, inviting editors to present their cases in person, and also taking cases via phone from editors working all over the world. In March 2013 COPE held its very first virtual Forum meeting via webinar. The head of COPE, Virginia Barbour from _PLoS Medicine_, chaired the Forum from an office in Cambridge, with the technical assistance of COPE staff. For two hours, 49 participants from different parts of the world, presumably sitting in front of a computer, headsets on, read cases and guidelines on screen as pointed to by the Chair, and listened to editors presenting their cases. Some took a more active role, 'raising their hand' with a one-click button in order to be technically 'unmuted' by the moderator and share their thoughts with the group via microphone. I was conducting my ethnographic observation sitting at a desk in my university office in Keele. The meeting seemed to go well, and similarly to previous meetings in many ways, in the sense that opinions and disagreements were aired politely and firmly, in the usual manner. Parts of the flow of the Forum's collective discussion, including spontaneous reactions, the occasional joke and ensuing collective laughter, the raised eyebrows and head-shaking that also characterise the Forum table, were, to be sure, clogged by the medium of the webinar. From COPE's perspective, this meeting was a success, and made the Forum more accessible to a broader range of international members.
COPE has expressed its intention to hold both virtual and face-to-face Forums from now on.\n\nPeople make new social media in their own image, and not the other way around; yet, I cannot help but wonder: what will such virtual meetings mean for the future of this type of legally-influenced, 'Royal Society style' face-to-face deliberation about scientific research conduct? How will the 'conduct of conduct of conduct' (cf. Rose 2000) be reshaped in the process?\n\n## **Regulating (with) science and technologies**\n\nWhat happens when science and technology are _simultaneously_ (rather than alternatively) tools and targets? Can the governance of scientists' conduct gain legitimacy, become more workable and efficient, precisely because it is grounded on science and technology (Miller and Rose 2008: 218)? Or: can control, regulation and moderation of research conduct be effective because, and perhaps only because, it is effectuated by a method of moderation that is grounded on, and mimics, research itself?\n\nTechnologies of research integrity verification, such as electronic tracking, textual similarity algorithms and statistical analysis of submissions, are now well integrated into the machinery of scientific publishing. Based on the premise that technology can detect, prevent, repair and undo not only the mistakes people make (see Prainsack, this volume) but also their misconduct and improprieties, these technologies, and the body of quantitative studies that has developed on and with them (e.g. Fanelli 2010), receive a lot of attention in research integrity circles.\n\n'Technologies of integrity verification' exemplify what lawyers Brownsword and Yeung, borrowing from STS, call regulatory 'technological fixes' (2008), in that they make use of knowledge and technology to regulate and thus cure the problem of research integrity. Technoscience can now be a _tool_ as much as a _target_ of regulation (Brownsword and Yeung 2008; see also Flear and Pickersgill 2013; Cloatre and Dingwall 2013). Framed as tools to detect fraud in research, technologies of integrity verification entail what Biagioli calls a 'depersonalisation and delocalisation of the evidence of misconduct' (Biagioli 2012: 468\u2013469): instead of focusing on the grievance of a particular author who has been plagiarised or otherwise wronged by misconduct, these technologies prevent misconduct at large by disciplining all scientists (see also Lewis and Atkinson 2011). In other words, these regulatory technologies aim at upholding an ethos, rather than preventing or compensating for harm. This is consistent with other contexts of professional self-regulation, as it emphasises the requirement that professional values (of integrity and ethics) be _shown_ (Jacob 2012: 7).\n\nOne example of a technology of verification is statistics. It is currently standard procedure for the most important medical journals to have a statistician on their editorial board, and to have any papers with numerical figures reviewed by statisticians. A statistician also sits on the board of the UK Research Integrity Office. Statisticians are able to detect data fabrication and falsification on the basis of the concept of digit preference (Evans 2008). Digit preference theory asserts that, when people record or invent numbers, particular digits are favoured rather than appearing with a uniform distribution. It is thus difficult to invent or alter data without leaving 'fingerprints' (Evans 2008: 164).
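To make the logic of digit preference concrete, the sketch below is a minimal, hypothetical illustration of my own, not a procedure used by Evans, COPE or any journal discussed here. It assumes that genuinely measured, fine-grained values should end in each digit roughly equally often, and runs a chi-square test on the final digits of a batch of recorded measurements; the function name and the invented readings are illustrative only.

```python
# Minimal, hypothetical digit-preference check (not an actual journal or COPE procedure).
from collections import Counter
from scipy.stats import chisquare  # assumes SciPy is available

def terminal_digit_test(recorded_values):
    """Chi-square test of whether the final digits of values-as-recorded are uniform.

    recorded_values: strings exactly as written on the data sheet, e.g. blood
    pressure readings. Returns (statistic, p-value); a very small p-value flags
    a digit preference worth querying, and proves nothing more by itself.
    """
    digits = [v.strip()[-1] for v in recorded_values if v.strip() and v.strip()[-1].isdigit()]
    counts = Counter(digits)
    observed = [counts.get(str(d), 0) for d in range(10)]
    expected = [len(digits) / 10] * 10  # uniform benchmark
    return chisquare(observed, f_exp=expected)

# Invented readings ending suspiciously often in 0 or 5
readings = ["120", "115", "130", "125", "120", "110", "135", "120", "125", "130",
            "118", "120", "125", "140", "115", "120", "130", "125", "120", "135"]
stat, p = terminal_digit_test(readings)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
```

Real integrity-verification tools are of course more elaborate, but the principle is the same: the regulatory check borrows the statistical reasoning of the research it monitors.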
Statistics can also be useful for verifying data integrity in the case of multi-centre treatment trials: here, the statistician can compare the data sets to detect anomalous distributions of data.\n\nTechnologies of integrity verification also seem to have 'computerized the detective's role' (Biagioli 2012: 468). Perhaps more dramatically, in a collection edited by prominent figures in the UK milieu of medical research integrity, the statistician Evans describes his own discipline as an 'omniscient adjudicator' so convincing that 'no corroboration is needed' (Evans 2008: 161). Statistics, as a discipline, is thus being promoted by its own practitioners as a judge. This judge behaves secretly, disinclined to reveal its precise methods of detection. The rationale for the vagueness found in writings about statistical methods to detect fraud in medical research, claims Evans, is that it is important to prevent fraudsters from becoming more sophisticated and thus able to circumvent detection methods; when there is a code, there is a code breaker (Machado and Prainsack 2012: 57\u201372).\n\nTextual similarity software is another mode of integrity verification. Focusing on the large-scale quantitative detection of textual similarity, these tools search for textually similar articles, whether published by different authors or by the same authors. Hence these electronic tools are used to detect plagiarism, and also 'self-plagiarism' (i.e., 'redundant' or duplicate publications). Because there are far more instances of duplicate publications than of plagiarised articles, the engineers and proponents of textual similarity algorithms significantly increase their markets and audiences by mixing the 'apples of plagiarism with the oranges of self-plagiarism' (Biagioli 2012: 467), without pausing to ask what these forms of deviance have in common and what sets them apart.\n\nThe idea of self-plagiarism is noteworthy in two key senses. First, the term itself again indicates that misconduct self-regulation is not about protecting others from harm, but about a scientific ethos and, here perhaps more particularly, about the relationship of a scientist to herself. The category 'self-plagiarism' suggests that a scientist's respect for professional standards falls within the regulatory remit. Second, with self-plagiarism we might wonder whether the regulator is, at least in part, technologically constructing its own regulatee and inflating its own regulatory target, hence creating the need for more technological fixes (cf. Biagioli 2012).\n\n## **Guarding against the quacks**\n\nIn spite of the alluring novelty of integrity verification entrepreneurship, nineteenth-century medical publishing practices suggest that the use of science and technology to regulate knowledge production is no recent innovation. In this sense it is helpful to examine medical journals' past responses to the phenomenon of quackery. Throughout history, the quack has often been referred to as cunning, and as a confidence trickster, fraudster, and faker (Bartrip 1995: 191; Porter 2003: 313). It was every practitioner's responsibility to guard the gullible public against quackery (Porter 2003: 312; see also Warner 2005), just as today scientists are responsible for making sure that people are not deceived by poor science (Biagioli 2012: 465).\n\nThere were dilemmas among medical reformers of the nineteenth century as to whether to suppress or eradicate quackery, or to try to educate the quacks (Porter 2003: 309).
The editor of _The Lancet_, Thomas Wakley, accused the medical establishment itself of being the source of quackery, and the physician Thomas Percival had taken a tough approach towards quackery in his influential 1803 _Medical Ethics_. A petition to parliament was presented to outlaw unqualified practice, to protect legally qualified practitioners and to suppress quacks (Porter 2003: 309). The British Medical Association's work against quack medicine included campaigning for legislation to force manufacturers to reveal the ingredients of secret remedies, to restrict sales and to prosecute those in breach, as well as public relations work and the persuasion of mainstream newspapers not to publish advertisements for 'imposters'. Practitioners, politicians, corporations and medical editors allied to propel legislative changes, but also felt there was a need to move beyond the law.\n\nCampaigns against quackery were related to the 'politicisation of medicine' (Porter 1989: 222). The late nineteenth and early twentieth century was a critical period for the professionalisation of medicine, and medical innovators and the 'old guard' of the traditional elite were fighting for control over the provision of medical services and cures (Bynum and Porter 1986). This was also a period of growth for medical journals, which, for financial reasons, included plenty of advertisements for quack medicines. Bartrip (1995) reports that in 1904 Dawson Williams, then head of the BMA, commissioned an analysis of proprietary medicines by the reputable pharmacist Edward Harrison, in order to identify and reveal the ingredients of these medicines. Harrison's results, along with data on the price of these medicines, were published by Williams as a series in the _British Medical Journal_ in 1904\u20131908, thus revealing that a range of medicines containing ingredients of little value were sold at high prices. Bartrip notes how great an achievement this was for the _British Medical Journal_.\n\nOne constant between twenty-first-century technologies of research integrity verification and nineteenth-century pharmaceutical science's detection of counterfeit medicines is that science and technology themselves are, and were, being used in self-regulation settings to regulate and monitor the behaviour of scientists. The alleged reasons for doing so are, in both settings, to protect the public, and (hence) maintain public trust. More salient here is the epistemic affinity between the surveillance of research and the research itself, between the methods of controlling and the conduct to be controlled (cf. Miller and Rose 2008). Through such internal audit, science also succeeds in keeping positive law at bay.\n\nAccording to recent fieldwork by Lewis and Atkinson, laboratories have already become sites where 'technologies of surveillance, measurement and monitoring' (Lewis and Atkinson 2011: 382) make visible the methods (as opposed to the object) of scientific work. Could we imagine operational means \u2013 and the ideological and legal mindset \u2013 that would allow us to prevent fraud in research from happening altogether? In the name of security, integrity and so forth, the regulation envisaged by 'technologies of integrity verification' could eventually get beyond the paperwork, audits, and checks on the reported material of the scientists, and immerse itself in the interstices of the scientific work itself.
For instance, technologies could regulate science _ex ante_ (Kerr 2013) by preventing human interventions in certain components of the research. Or alternatively, as is already happening in the field of bioinformatics (for purposes of the protection of intellectual property), there could be engines that trace everything the scientist does in the lab, instead of relying on the human scientist to record _post facto_, on a log sheet and as a task aside from her work, what she has done. This form of built-in regulation could even look like what Larry Lessig (1999) refers to as regulation through 'code', meaning that scientists might eventually not know that their lab work is being regulated in such ways. What would this all mean for regulation? But more intriguingly, what would this mean for science and for scientists as critical human agents (cf. Rouvroy 2013)? What would science look like, stripped of the _possibilities_ of human misconduct and fraud?\n\n## **Conclusion**\n\nAs we saw above, deliberations amongst the citizens of 'the republic of science' (Jasanoff 1997: 93), as well as old legal techniques of collective decision-making, take on surprising, rejuvenated forms in COPE in London, circa 2010\u20132012. Taken together, the profiles of the seventeenth-century Royal Society's and the COPE Forum's respective members, their rules of procedure, sponsors\/benefactors\/funders, physical settings and geographical location (London) elicit a unique, almost inescapable parallel between the two organisations.\n\nIn this context it seemed crucial to highlight the presence of good old legal tools and legal knowledge, found rejuvenated, in innovative forms, in the area of research governance and the management of scientific misconduct. 'Witnessing' and deliberation do seem to survive and to show their sustained, renewed relevance. Legal knowledge and modes of enquiry can be found in innovative sites (Jacob 2011), for example in the COPE Forum, and hence are still very much pertinent to the governance of research conduct. This means that, in spite of the high-tech strategies deployed to govern scientific integrity, scientists and misconduct hunters perhaps do not yet quite live in the 'post-regulatory world' (Black 2001: 103).\n\nThe nature of COPE's work for research integrity is itself changing, in that it increasingly works with technology, and is supplemented by a panoply of high-tech integrity verification tools. These tools in turn resonate with older technological responses to deviance in medicine. Many dimensions of the multivalent work of misconduct hunting have yet to be unpacked. I have only highlighted some aspects here, namely the sharing of practices between the regulator and its regulatees, the kinship between 'scientific' and 'legal' means of regulation, and the commonalities between 'traditional' and 'novel' engagements with deviance.\n\nThe discourses of 'research integrity' keep gaining momentum, and are becoming more and more professionalised. Studying research integrity will thus require the foregrounding of knowledge itself, without letting the mapping of that knowledge's provenance ('is this legal? or scientific? or is this \"just\" social?') pre-establish our grid of analysis (cf.
Latour 1993), and also without letting single perspectives, and perspectivism (Lavi 2011: 815), condense our understanding of the multiple incarnations research integrity takes.\n\n## **Acknowledgements**\n\nThis chapter draws on research conducted under the AHRC Fellowship AH\/J008338\/1, as well as on ethnographic fieldwork conducted in the Committee on Publication Ethics, with support from the Centre for Law Ethics and Society at Keele University. Approval was received from the Keele Ethics Review Panel in October 2010. The author would like to thank the editors Emilie Cloatre and Martyn Pickersgill, as well as Martin Wasik, Barbara Prainsack, Hyo Yoon Kang, Vaios Karavas, Andrew Webster and Ginny Barbour for their helpful comments. Earlier versions have benefited from comments by participants at numerous events, including at Universit\u00e4t Luzern, the University of Nottingham Rhetorics of Moderation ESRC Seminar, Birkbeck College of Law, and the workshop 'Regulating Health Technologies' (supported by the Wellcome Trust and held at the Royal Society of Medicine, London, 12\u201313 May 2011).\n\n 'Good research practice' norms aim to practically implement values pertaining to research integrity, such as honesty, transparency and fairness in research. 'Good research practice' refers to norms regarding, for instance, acknowledgement of authorship, data storage and data sharing, disclosure of conflicts of interest, or the use of images in scientific publications, to name a few (Jacob 2013).\n\n These norms, proposed by sociologist Robert Merton (1979), were: communalism, universalism, disinterestedness, originality and scepticism; and the counter-norms: solitariness, particularism, interestedness, and dogmatism.\n\n _Report of the UK Research Integrity Futures Working Group_, chaired by Janet Finch, September 2010; _Liberating the NHS: Report on the Arm's Length Bodies Review_, July 2010; _The AMS Report, A New Pathway for the Regulation and Governance of Health Research_, January 2011.\n\n Committee on Publication Ethics Flowcharts: http:\/\/publicationethics.org\/resources\/flowcharts\n\n See Committee on Publication Ethics http:\/\/publicationethics.org\/about\/history (accessed 10 January 2013).\n\n For instance, the General Medical Council's codes of practice and guidance evoke the importance of maintaining a good standard of practice, and do so in order to justify the trust placed in registered doctors.\n\n I thank Hyo Yoon Kang for drawing my attention to this fact.\n\n## **References**\n\nAbel, R. (1973) 'A comparative theory of dispute institutions', _Law and Society Review_, 8(2): 217\u2013347.\n\nBartrip, P. (1995) 'Secret remedies, medical ethics, and the finances of the British Medical Journal', in R. Baker (ed.), _The Codification of Medical Morality: Historical and Philosophical Studies of the Formalization of Western Medical Morality in the Eighteenth and Nineteenth Centuries_, Dordrecht: Kluwer.\n\nBiagioli, M. (2003) 'Rights or rewards?
changing frameworks of scientific authorship', in M. Biagioli and P. Galison (eds), _Scientific Authorship: Credit and Intellectual Property in Science_, New York: Routledge, pp. 253\u2013280.\n\nBiagioli, M. (2007) 'Documents of documents: scientists' names and scientific claims', in A. Riles (ed.), _Documents: Artifacts of Modern Knowledge_, Ann Arbor: University of Michigan Press.\n\nBiagioli, M. (2012) 'Recycling texts or stealing time? Plagiarism, authorship, and credit in science', _International Journal of Cultural Property_, 19(3): 453\u2013476.\n\nBlack, J. (2001) 'Decentring regulation: understanding the role of regulation and self-regulation in a \"post-regulatory\" world', _Current Legal Problems_, 54(1): 103\u2013147.\n\nBogner, A. (2012) 'The paradox of participation experiments', _Science, Technology & Human Values_, 37(5): 506\u2013527.\n\nBoulton, G. et al. (2012) _Science as an Open Enterprise_, London: The Royal Society.\n\nBraithwaite, J. (1993) 'Transnational regulation of the pharmaceutical industry', _Annals of the American Academy of Political and Social Science_, 525: 12\u201330.\n\nBraithwaite, J. and Ayres, I. (1992) _Responsive Regulation: Transcending the Deregulation Debate_, Oxford: Oxford University Press.\n\nBrownsword, R. (2005) 'Code, control, and choice: why East is East and West is West', _Legal Studies_, 25(1): 1\u201321.\n\nBrownsword, R. and Yeung, K. (2008) 'Regulating technologies: tools, targets and thematics', in R. Brownsword and K. Yeung (eds), _Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes_, Oxford: Hart, pp. 3\u201322.\n\nBusch, L. (2012) _Standards: Recipes for Reality_, Cambridge, MA: MIT Press.\n\nBynum, W. and Porter, R. (1986) _Medical Fringe and Medical Orthodoxy, 1750\u20131850_, London: Routledge.\n\nCambrosio, A., Keating, P., Schlich, T. and Weisz, G. (2006) 'Regulatory objectivity and the generation and management of evidence in medicine', _Social Science and Medicine_, 63(1): 189\u2013199.\n\nCloatre, E. and Dingwall, R. (2013) '\"Embedded regulation\": the migration of objects, scripts, and governance', _Regulation and Governance_, 7(3): 35\u201338.\n\nConstable, M. (2008) 'On the (legal) study methods of our time: Vico Redux', _Chicago-Kent Law Review_, 83(3): 1303\u20131332.\n\nEvans, S. (2008) 'Can statistical analysis reveal research misconduct?', in F. Wells and M. Farthing (eds), _Fraud and Misconduct in Biomedical Research_, London: The Royal Society of Medicine Press.\n\nFanelli, D. (2010) 'How many scientists fabricate and falsify research? A meta-analysis of survey data', _PLoS ONE_, 4(5): e5738. doi:10.1371\/journal.pone.0005738.\n\nFarthing, M. and Wells, F. (eds) (2008) _Fraud and Misconduct in Biomedical Research_, 4th edn, London: Royal Society of Medicine.\n\nFlear, M.L. and Pickersgill, M.D. (2013) 'Regulatory or regulating publics? The European Union's regulation of emerging health technologies and citizen participation', _Medical Law Review_, 21(1): 39\u201370.\n\nGoodstein, D. (2010) _On Fact and Fraud: Cautionary Tales from the Front Lines of Science_, Princeton, NJ: Princeton University Press.\n\nHaas, P. (1992) 'Introduction: epistemic communities and international policy coordination', _International Organization_, 46(1): 1\u201335.\n\nHorton, R., Farthing, M. and Smith, R. (2012) 'In the beginning... a Q&A with COPE's founders', _Ethical Editing_, 4(1): 5\u20138.\n\nJacob, M.-A.
(2011) 'Knowledge games, truthfulness, and organ transplants regulation', _BioSocieties_, 6(2): 243\u2013261.\n\nJacob, M.-A. (2012) _Matching Organs with Donors: Legality and Kinship in Transplants_, Philadelphia, PA: University of Pennsylvania Press.\n\nJacob, M.-A. (2013) _Scientific Research Integrity: Background Paper_, London: Nuffield Council on Bioethics.\n\nJasanoff, S. (1997) _Science at the Bar: Law, Science, and Technology in America_, Cambridge, MA: Harvard University Press.\n\nJasanoff, S. (2007) _Designs on Nature: Science and Democracy in Europe and the United States_, Princeton, NJ: Princeton University Press.\n\nKennedy, D. (1997) _Academic Duty_, Cambridge, MA: Harvard University Press.\n\nKerr, I. (2013) 'Prediction, pre-emption, presumption: the path of law after the computational turn', in M. Hildebrandt and K. de Vries (eds), _Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology_, London: Routledge, pp. 91\u2013120.\n\nLam, A. (2010) 'From \"ivory tower traditionalists\" to \"entrepreneurial scientists\"? Academic scientists in fuzzy university-industry boundaries', _Social Studies of Science_, 40(2): 307\u2013340.\n\nLatour, B. (1993) _We Have Never Been Modern_, trans. C. Porter, Cambridge, MA: Harvard University Press.\n\nLatour, B. (2002) _La fabrique du droit: une ethnographie du Conseil d'\u00c9tat_, Paris: La D\u00e9couverte.\n\nLavi, S. (2011) 'Turning the tables on \"law and...\": a jurisprudential inquiry in contemporary legal theory', _Cornell Law Review_, 96(4): 811\u2013838.\n\nLessig, L. (1999) _Code and Other Laws of Cyberspace_, New York: Basic Books.\n\nLewis, J. and Atkinson, P. (2011) 'The surveillance of cellular scientists' practice', _BioSocieties_, 6(3): 381\u2013400.\n\nLezaun, J. (2006) 'Creating a new object of government: making genetically modified organisms traceable', _Social Studies of Science_, 36(4): 499\u2013531.\n\nMachado, H. and Prainsack, B. (2012) _Tracing Technologies: Prisoners' Views in the Era of CSI_, Farnham: Ashgate.\n\nMerton, R. (1979) _The Sociology of Science: Theoretical and Empirical Investigations_, Chicago, IL: University of Chicago Press.\n\nMiller, P. and Rose, N. (2008) _Governing the Present: Administering Economic, Social and Personal Life_, Cambridge: Polity.\n\nMontgomery, K. and Oliver, A. L. (2009) 'Shifts in guidelines for ethical scientific conduct: how public and private organizations create and change norms of research integrity', _Social Studies of Science_, 39(1): 137\u2013155.\n\nNielsen, M. (2012) _Reinventing Discovery: The New Era of Networked Science_, Princeton, NJ: Princeton University Press.\n\nPorter, R. (1989) _Health for Sale: Quackery in England, 1660\u20131850_, Manchester: Manchester University Press.\n\nPorter, R. (2003) _Quacks: Fakers and Charlatans in Medicine_, London: Tempus.\n\nRiles, A. (2011) _Collateral Knowledge: Legal Reasoning in the Global Financial Markets_, Chicago, IL: University of Chicago Press.\n\nRose, N. (2000) 'Government and control', _British Journal of Criminology_, 40(2): 321\u2013339.\n\nRouvroy, A. (2013) 'The end(s) of critique: data behaviourism versus due process', in M. Hildebrandt and K. de Vries (eds), _Privacy, Due Process and the Computational Turn: The Philosophy of Law Meets the Philosophy of Technology_, London: Routledge, pp. 143\u2013168.\n\nShapin, S.
(1984) 'Pump and circumstance: Robert Boyle's literary technology', _Social Studies of Science_, 14(1): 481\u2013520.\n\nShapin, S. and Schaffer, S. (1985) _Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life_, Princeton, NJ: Princeton University Press.\n\nShapiro, B.J. (2000) _A Culture of Fact: England 1550\u20131720_, Ithaca, NY: Cornell University Press.\n\nShapiro, B.J. (2002) 'Testimony in seventeenth-century English natural philosophy: legal origins and early development', _Studies in History and Philosophy of Science_, 33(2): 243\u2013263.\n\nSilbey, S. and Ewick, P. (2003) 'The architecture of authority: the place of law in the space of science', in A. Sarat, L. Douglas and M. M. Umphrey (eds), _The Place of Law_, Ann Arbor: University of Michigan Press, pp. 75\u2013108.\n\nStrathern, M. (ed.) (2000) _Audit Cultures_, London: Routledge.\n\nUK Department of Health (2010) _Liberating the NHS: Report on the Arm's Length Bodies Review_. Available at: www.gov.uk\/government\/uploads\/system\/uploads\/attachment_data\/file\/152016\/dh_118053.pdf.pdf.\n\nValverde, M. (2005) 'Authorizing the production of urban moral order: appellate courts and their knowledge games', _Law & Society Review_, 39(2): 419\u2013456.\n\nWarner, M. (2005) _Publics and Counterpublics_, Cambridge, MA: MIT Press.\nChapter 9\n\n* * *\n\n# **Financial derivatives and the challenge of performation**\n\nWhere contingency meets contestability\n\n_Donatella Alessandrini_\n\n* * *\n\n## **Introduction**\n\nThe attention financial derivatives have attracted with the 2007 crash has noticeably subsided as a result of the focus on austerity and deficit reduction which has come to dominate public discourse since 2010. However, concerns over the financial speculation they have originated, and its effects on the real economy \u2013 that is, the domain within which goods and services are produced \u2013 have not disappeared. No longer in the limelight, derivatives nevertheless continue to populate the world of regulators with technical standards and rules aimed at preventing their speculative 'excesses'. One important manifestation of such excess concerns the value they express: this became apparent in 2008 when the amount of outstanding derivatives worldwide exceeded USD 1.114 quadrillion, 22 times the GDP of the entire world (Matai, 2009), raising critical questions about their relationship with the sphere of the real economy.\n\nAt the risk of oversimplifying a very complex debate, two approaches have emerged in relation to the role of financial derivatives in the economy, and the possibility of curbing their excesses. One has characterised them as dangerous innovations, with nothing to contribute to the economy but the distortion (if not destruction) of 'real' values that has tremendous consequences for people's lives.
This argument resonates with the position in the social sciences which has expressed concerns over the growth of financial markets generally and financial derivatives in particular (Strange, 1997; Lipuma and Lee, 2004). While important work has been carried out on financial capitalism and its violent effects (see Marazzi, 2010), my intention is to demonstrate how, rather than being parasitical and unproductive, financial derivatives act as a powerful technology able to affect current value-making processes. They do certain things to the value they are supposed to measure, and it is important to take this process into account so as to appreciate both the way in which the real and the financial spheres of the economy are deeply entangled and the kinds of intervention at their intersection which are possible and desirable.\n\nThe second approach sees derivatives playing a beneficial role in the economy, allowing firms and governments to hedge (i.e. protect) themselves against the risk of price volatility, but also carrying certain risks, particularly over-the-counter (OTC) derivatives which are not traded on official exchanges. This is certainly the position underlying the regulatory approach, which recognises their role in providing continuity to global production while also attempting to limit the risks they present. Although regulation remains an important tool in reducing the level and type of speculation, my interest lies in demonstrating how (in a manner not too dissimilar from the previous approach) the regulatory response remains predicated on a conceptual separation between the real and the financial spheres of the economy which does not attend to the complex value-making processes to which financial derivatives contribute. Moreover, focusing on the productive power of derivatives allows us to see how regulation always exceeds the realm of formal rules and processes that aim to affect human behaviour (Cloatre and Dingwall, 2012: 7), in this case that of market participants. Regulation encompasses the wide 'range of socio-technical practices that shape institutions and subjectivity either deliberately or otherwise' (Pickersgill, 2013: 325), and, from this angle, financial derivatives emerge as a regulatory device able to shape economic values and activities.\n\nIn this chapter, I explore the technology of financial derivatives and analyse the modalities through which they participate in value-making processes; that is, the processes through which economic value is conceptualised and produced in today's (mainly Western) financialised economies. I do so with a three-fold aim: first, to illustrate the contribution that the STS-inflected Social Studies of Finance (SSF) have made to our understanding of the complex workings of economic markets in general and financial innovations in particular. Drawing on STS insights into the impossibility of separating material objects and social practices, SSF have shown how social values are embedded in the tools and techniques that make up the supposedly neutral realm of finance.
Second, to reflect on the limitations emerging from this approach when the question of how to act in the midst of such complexity comes to the fore: it is my contention that, while opening up many black boxes of financial practices, including those of financial derivatives, STS-inflected SSF have up until now refrained from thinking about how to 'format' markets otherwise; and finally, to point to the potential for collaboration with constructivist approaches in critical and feminist political economy that share a commitment to complexity, contingency and contestability, thereby bringing to the fore the question of how values (in)form action.\n\nThe chapter is organised as follows: the first section problematises current approaches to financial derivatives by showing how this technology has called into question the possibility of neatly distinguishing between a real and a financial sphere of the economy, and has highlighted the difficulty inherent in disentangling real and financial values. In this respect, it contributes to the rich STS tradition of troubling long-established boundaries both empirically and conceptually. The second section takes the financial uncertainty revealed by derivatives as its starting point for a consideration of the contribution that STS-inflected SSF have made to our understanding of how economic models, technologies and innovations do not simply mirror or represent an underlying economic reality but actively make or perform it. The third section brings the insights of critical and feminist political economy on the contingency and contestability of values to bear on the SSF, particularly in their 'performativity of economics' manifestation, in thinking about desirable forms of intervention. I conclude by referring to an instance of such (always imperfect) intervention in the context of currency futures.\n\n## **The real and the financial life of derivatives**\n\nMuch has been written about financial derivatives from a variety of perspectives, and long before the crisis erupted in 2007 (Maurer, 2002; Knorr Cetina and Preda, 2005; Bryan and Rafferty, 2006). I look at them from a particular angle, one which focuses on their relationship to the value they express. Indeed, the standard definition is that of contracts whose value is derived from an underlying asset (i.e., a commodity or a financial asset such as shares, interest rates and exchange rates, or an index). However, this definition does not capture the different nature, characteristics and functions of the various types of derivatives in circulation (such as futures, swaps, options, collateralised debt obligations, and so on). Derivatives are different things; for instance, whereas _futures_ are contracts according to which two parties commit to buy or sell a certain asset at a future date and at a specified price, _options_ confer the right, but not the obligation, to buy or sell a certain asset at a certain date and at a specified price. The variety of derivatives in circulation makes any comprehensive definition unsatisfying.\n\nWith this in mind, one way of understanding derivatives is to look at their function: what they _are_ might be best characterised through what they _do_. Historically, their realm has been that of commodities \u2013 in particular, agricultural commodities \u2013 and they have been important in ensuring continuity in the production cycle by allowing farmers, for instance, to hedge against price fluctuations.
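A stylised illustration, with notation of my own rather than drawn from the texts cited here, may help to fix the distinction between committing to a price and holding a right to it. For a contract struck at a price $K$ on an asset whose market price at maturity is $S_T$, the payoff to a farmer who has sold futures, and to one who has instead bought a put option for a premium $p$, can be written as:

$$
\Pi_{\text{short futures}} = K - S_T, \qquad \Pi_{\text{put option}} = \max(K - S_T,\ 0) - p .
$$

Selling futures locks in $K$ whatever happens to $S_T$; buying the put caps the downside at the cost of the premium. In either case, the hedge exists only because a counterparty is willing to take the opposite side of the contract.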
Alongside this hedging function, however, derivatives also carry a speculative role; this is because there will always be discrepancies at any time on the market between the demand for and the supply of hedging opportunities. In other words, these markets need speculators to take on the risks, and this pertains to commodity derivatives as well as to the more recent financial derivatives. Indeed, as Lipuma and Lee (2004: 20\u201321) have put it, although derivatives have existed for a long time, it is as a result of the collapse of the Bretton Woods monetary management system and the move towards floating exchange rates in the 1970s, of the lifting of capital controls, and of the proliferation of contractual outsourcing, that old risks such as counterparty and interest rate risks have increased and new ones such as currency and 'socio-political' risks have emerged.\n\nI want to pause on Lipuma and Lee's work, _Financial Derivatives and the Globalisation of Risks_, because it is representative of the perspective in the social sciences that recognises the role of derivatives in providing the means to address price volatility in an increasingly uncertain and interconnected world. This perspective acknowledges that since exchange and interest rates have been made to fluctuate in the market, derivatives have become a crucial tool to manage the risks of global investing. However, it also sees derivatives as the chief instruments of a 'speculative' capital that remains distinct and distinguishable from industrial and commercial capital. In their words, 'technologically driven derivatives detach the value, cost and price of money \u2013 manifest in exchange and interest rates \u2013 from the fundamentals of the economy, particularly the state of production, the social welfare of the producers, and the political needs of citizens' (Lipuma and Lee, 2004: 2). Yet this account of derivatives as instruments detaching 'the value, cost and price of money' from the so-called fundamentals of the economy requires further scrutiny, as it relies on two problematic assumptions about derivatives that have been called into question by the reality of their trading.\n\nThe first assumption is that it is possible to effectively distinguish between hedging on the one hand and speculation on the other, so that by curtailing the latter (or at least its 'unnecessary' manifestations) derivatives can be kept in line. This is the reason why regulators distinguish between the speculation that is necessary for providing liquidity to these markets, and arbitrage, which is the practice of buying and selling to profit from changes in market prices. Thus, the argument often made is that it is necessary to curb the latter form of speculation so as to preserve the healthy purpose derivatives serve in hedging against risks. The problem with this position is that in actual markets the behavioural differences between the two types of market participants, hedgers and speculators, as well as between necessary speculation and arbitrage, are difficult to detect. Studies have found not only a range of reasons for (or types of) both hedging and speculation, but also that, at any point in time, participants are engaged in both activities (Bryan and Rafferty, 2002, 2006). It is therefore extremely difficult to distinguish between these different categories.\n\nThe second assumption concerns the standard definition of a derivative as a contract whose value is based on that of the underlying asset.
The problem is that, however one measures it, the value of a derivative does not derive from the value of the underlying asset, but from the trend of that value in the market. This is important since the trend in market prices can be influenced by market players, which means that the mechanisms that determine the value of the derivative are not linear. This is not only a matter of individual players who influence the trend of the value of the 'underlier'; rather, it is their agreement, the derivative itself, that can have this effect. The point is that we are dealing with contracts in which to anticipate or speculate on the value of an asset is also to intervene in, influence or make that very value. The challenge this poses to regulation is enormous: introducing clarity and limiting the amount of speculation is certainly important. However, regulation cannot avoid, short of banning derivatives altogether, the circular value-making process between the underlying asset and the derivative, whereby the latter is in actual fact participating in the production of the value it is supposed to be merely based on.\n\nThe fact that the derivative makes the value of, as well as deriving it from, the underlying asset creates a complex circularity which raises important questions exceeding the regulatory realm. Investors refer to 'fundamental values' in their estimation of whether the underlying asset is to be considered worth investing in. For businesses, relevant information includes revenues, assets, liabilities and growth. For governments it includes interest rates, the budget deficit, the balance of trade account, inflation rates, the state of the banking and financial sector and consumer confidence, among other things. But if value is acknowledged as being produced at the intersection between the two spheres of the economy, what are we to make of the 'fundamental' values to which financial entities such as derivatives are supposed to be referring?\n\nExpectations have long been recognised to play a role in shaping the value of financial entities (see Marx, 1981: 597). However, what is becoming increasingly evident is that today's 'asset markets have a reflexive nature in that prices are generated by traders' expectations, but these expectations are formed on the basis of anticipations of _others'_ expectations' (Arthur _et al._, 1996: 21). This is a completely different scenario from the one depicted by orthodox financial theory, where stock prices are not supposed to diverge greatly from the fundamental value of the firm, that is, the value 'equal to the discounted stream of future expected dividends': today's market prices depend to a great extent on agents' anticipation of others' expectations (Orlean, 2005: 3).\n\nThe self-referentiality thesis therefore points to the first degree of abstraction from 'fundamental values' at work within financial markets. This is why some scholars have argued that market participants should revert to an analysis based on fundamentals, i.e. one that grounds expectations in the economic situation of the firm issuing the assets (Graham and Dodd, 1940: 21\u201321 in Mackenzie, 2006: 76\u201377). However, the difficulty this position confronts is that prices today are so distorted, and firms' balance sheets and state accounting so compromised, that assessing the health of a company \u2013 let alone that of an economy (as in the case of exchange rates) \u2013 is extremely challenging.
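To spell out the orthodox benchmark invoked in this debate, the 'fundamental value' of a firm as the discounted stream of expected dividends can be written, in its simplest textbook form (a sketch of my own, not a formula used by the authors cited above), as:

$$
V_0 = \sum_{t=1}^{\infty} \frac{\mathbb{E}[D_t]}{(1+r)^t},
$$

where $D_t$ stands for the firm's future dividends and $r$ for the discount rate. The self-referentiality argument is precisely that traded prices are increasingly formed by anticipating others' expectations rather than by independent estimates of this sum.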
Moreover, financial derivatives add, through the circularity just described, another layer of difficulty in accounting for the 'real' or 'fundamental' value of an asset. This is yet more problematic when we consider that they are financial entities relating to other financial entities which have also undergone the first degree of abstraction at the level of expectations.\n\nTo sum up, rather than simply deriving their value from that of the underlying asset, derivatives participate in its actual construction. This points to the extreme difficulty of disentangling 'real' and 'financial' values, and therefore reveals a deep level of uncertainty in the determination of 'fundamental' values in today's financialised economies. It also poses a further question: does acknowledging such uncertainty imply we should give up altogether on 'fundamental values' as the anchor between the real and the financial spheres and take the market as the sole arbiter and maker of values? I do not think so: it is my contention that acknowledging contingency in value-making processes means neither ignoring the constraints within which such processes take place nor doing away with judgments about their desirability. In what follows, I draw on the theoretical resources offered by the performativity of economics thesis to show how thinking about contingency (within constraints) in value-making processes characterised by financial uncertainty is a necessary (albeit not sufficient) condition for thinking about desirable forms of intervention in such processes.\n\n## **Performing the economy**\n\nSTS-inflected research that came to be called the Social Studies of Finance (SSF) emerged in the 1990s, ostensibly motivated by a desire to move away from the emphasis of classical economic sociology on human action (as embedded in institutions, norms, relationships and structures) to explain economic activities. Scholars working within SSF turned their attention to the ways in which such action takes place through hybrids made of humans and technical and material devices; subsequently, this body of work has particularly focused on economics, arguing that economic models and theories actively shape and format \u2013 rather than merely describe \u2013 the economy (cf. Callon, 1998: 2). The _performativity thesis_ that has emerged since has traced the multifarious ways in which economic theories and models become a material force embodied in market practices, arrangements and infrastructures. Contrary to the notion of the self-fulfilling prophecy developed by Merton in 1948 (1968: 477), performativity does not imply falsehood in the (human) beliefs that come to be performed. Performativity is concerned 'with the incorporation of economics in the infrastructures of markets', so that it exceeds beliefs and mindsets, however important these are. As Mackenzie puts it:\n\n> an aspect of economics that is incorporated only into beliefs \"in the heads of economic actors\" may have a precarious status. A form of incorporation that is in some sense deeper is incorporation into algorithms, procedures, routines, and material devices.
An economic model that is incorporated into these can have effects even if those who use them are skeptical of the model's virtues, unaware of its details, or even of its very existence.\n> \n> (Mackenzie, 2006: 19)\n\nTwo aspects of the performativity argument are worth reflecting on to illustrate the contribution that STS-inflected SSF have made to our understanding of the co-constitutive relationship between economics and the economy, theory and practice, and, I would also say, between the financial and the real spheres of the economy. The first is that performativity exceeds the sphere of human beliefs as the mere 'psychological' realm. It highlights the role of material devices with which the theory or model interacts. At the same time, and this is the second important aspect, it brings to light the constraints within which the model operates. Thus, to say that the Black-Scholes-Merton formula was performative, as Mackenzie does in a classic SSF story, 'is not to make the crude claim that any arbitrary formula for option prices, if proposed by sufficiently authoritative people, could have \"made itself true\" by being adopted. Most such formulas could not do so, at least other than temporarily' (Mackenzie, 2006: 20). Even if initially adopted widely, the formula would soon have been abandoned had it led agents consistently to lose money, or had it allowed arbitrageurs to profit from systematic price discrepancies.\n\nThe issue then becomes one of understanding why certain formulas have durable consequences, and this requires expanding the radar of research as far as possible so as to delve into the reality in which formulas, models and devices have become immersed. The concept of _agencement_ is meant to shed light on how markets take different forms in different places: ' _Agencements_ denote sociotechnical arrangements when they are considered from the point [of] view of their capacity to act and to give meaning to action' (Callon and Caliskan, 2005: 24–25, quoted in Mackenzie, 2009: 20–21). _Agencement_ therefore places the emphasis on the tools that enable performativity. Mackenzie, for instance, argues that financial theory has become incorporated into financial markets in three specific ways. From a technical perspective, the growth of financial markets would have been impossible without the development of infrastructures, and without financial theory built into software so as to enable risk calculation and semi-automated fast trading. From a linguistic perspective, financial theory provided words and concepts with which to manage and reduce the complexity inherent in market transactions. For instance, 'implied volatility' is an engineered concept that reduced the complexity inherent in, and made possible by, the pricing of options. Finally, from a regulatory or legitimising perspective, financial markets would not have developed without the active intervention of economists and regulators who 'purified' previously contentious practices and instruments from their association with gambling by making them efficient and rational realms of transactions (Latour, 1993; de Goede, 2005).
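Purely by way of illustration (the formula itself is not reproduced in the chapter, nor in the passages of Mackenzie quoted here), the standard textbook form of the Black-Scholes price of a European call option can be sketched as:\n\n$$C = S\,N(d_1) - K\,e^{-rT}N(d_2), \qquad d_1 = \frac{\ln(S/K) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T},$$\n\nwhere S is the current price of the underlying asset, K the strike price, r the risk-free interest rate, T the time to expiry, N the cumulative standard normal distribution and σ the volatility of the underlying. The 'implied volatility' mentioned above is simply the value of σ that makes this model price coincide with an observed market price: a single engineered number through which traders can quote, compare and communicate option prices.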
Thus, by opening up to scrutiny the complex web of socio-material practices constituting markets, work on the performativity of economics points to contingency within constraints, material and technical as well as human.\n\nYet the performativity thesis has not gone uncontested: while engaging with socio-material practices necessarily entails a partial account, however serious the attempt to widen the reality accounted for, concerns have been expressed about the kinds of entanglements that have ended up being privileged. Thus Mirowski and Nik-Khah have lamented that, in their attempt to do away with structures, forces and intentionality, performativity scholars, and STS-inflected SSF more generally, have ended up concentrating on machine metaphors, theories and models in a way that has consistently diverted attention from other important processes. In _Do Economists Make Markets_?, a collection edited by Mackenzie, Muniesa and Siu (2007), Mirowski and Nik-Khah take issue with the account of the Federal Communications Commission (FCC) spectrum auctions which Callon, Muniesa and Guala provide as a clear instance of successful performativity (successful since economists' theoretical accounts of auctions shaped the format of the auctions adopted). Their argument is that concentrating on economists' theories has led these authors to ignore both the role of the government and that of telecommunication companies 'in orchestrating the outcome' of the auctions (Mirowski and Nik-Khah, 2007: 216–217).\n\nAs they recognise, this 'partial' account might have resulted from the fact that performativity scholars relied on economists' own accounts of the event (ibid., 202) rather than on a careful description of the phenomena. Such a description would have required them to follow all 'actants', and not only the economists and their account of the 'successful' auction. Indeed, placing the emphasis on _agencement_ would have required opening up to scrutiny the 'sociotechnical arrangement', i.e. the FCC auctions in their various articulations. Furthermore, I would argue, nothing prevents one from talking about powerful actors as long as that power is carefully traced in action (cf. Latour, 2005: 177). Similarly, there is nothing in the performativity thesis that prevents us from seeing that certain theories\/models cannot be made to work. Mackenzie has certainly acknowledged this point in relation to the Black-Scholes-Merton model when arguing that not all formulas for pricing options could have worked.\n\nThis connects to the more general point emerging from their critique of the performativity school: recognising contingency within constraints, however important, is not sufficient for thinking about, and devising, ways to organise markets differently. What is crucial is to enter the debate about their desirability, and this means confronting economists on the reality they attempt to construct through their theories\/models. Interesting in this respect is the recent abandonment by neo-classical economists of their comparison of the price system to a natural mechanism, and the revelation that their ambition is to ' _fabricate_ markets, and not simply treat them as States of Nature' (Mirowski and Nik-Khah, 2007: 215–216). In essence, they have admitted that markets are achieved and normative rather than neutral and natural, making more apparent the fact that the struggle has always been one over the reality which different actors attempt to construct.
This is a point which Callon acknowledges in the concluding chapter of the edited collection, where he proposes to explore different market formations as 'struggles of performation [...] between competing programs which make the disassembling and reassembling process possible, necessitating investments that measure up to those by which actual markets were formatted' (Callon, 2007: 349; see also Mackenzie, 2007: 80).\n\nYet, I would argue, work on the performativity of economics has up until now fallen short of making substantial 'investments' that measure up to those which have formatted financial markets so far, and which have furthered a vision based on the privatisation of risks and uncertainty. This is the challenge the performativity of economics confronts if it is not to become redundant: not to provide a blueprint for action, but to think of partial, limited and always precarious attempts to re-assemble markets differently. This is a point that is emerging more generally within STS: as the 'inherently contextual nature of not only the application of scientific knowledge but also its means of production' (Pickersgill, 2012: 599) is re-emphasised, questions about ethics and technology announce a possible 'turn to politics and the normative' (Bijker _et al._ , 2012: xxvi; Johnson and Wetmore, 2009; Collins and Evans, 2002; Jasanoff, 2003; Wynne, 2003).\n\n## **Struggling for different performations**\n\nHow does this work on performativity relate specifically to the discussion of financial derivatives? STS-inflected SSF have been mainly concerned with the second order economy, that is, with circulating 'goods' such as contracts (securities, currencies, bonds, derivatives), rather than with the production side of the economy. Much is to be gained by concentrating on both sides, however. Indeed, the case of financial derivatives shows how the two are deeply entangled: their trading clearly impacts on production, as decisions on investment in the so-called real economy are made on the basis of valuation processes taking place in financial markets. Acknowledging that value is produced at the intersection between the real and the financial spheres of the economy presents a dual task: it requires us to take into account the contingency (within constraints) of such a process as well as to pose the question of its desirability.\n\nThis crucial task is not new: it has preoccupied feminist and critical political economists for quite some time. Indeed, if neo-classical economists have only recently acknowledged that markets are fabricated, feminist and critical political economists have since the 1970s put forward an understanding of political economy as the nexus between economy and society, showing how these two realms have been kept separate by a problematic belief in an economic order governed by natural laws that exist 'out there' and that society had to discover and implement (for instance, the law of supply and demand, the law of equilibrium, and certain interpretations of the Marxist law of value) (see Dalla Costa and James, 1972; Federici, 1980; Mirowski, 1991). This reconceptualisation of political economy led them to reject both objective and subjective theories of value, and to put forward an understanding of it as a radically contingent process. Through their work on reproductive labour in particular, feminist political economists showed how value is actively made and measured rather than being objectively determined. This further enjoined them to explore alternative value-making processes.
For instance, they put forward alternative institutional arrangements so as to instil different processes of valorisation. An example of this was the wages for housework initiative, which demanded that the state recognise domestic labour as 'productive'. The campaign was not without its problems, as it risked reinforcing the valorisation of domestic labour as women's labour as well as extending the same measurement process they were critiquing. As a provocative demand, however, this form of intervention was considered and deployed in light of its potential to shift exactly those value-making processes (see Alessandrini, 2012).\n\nAcknowledging the contestability of value remains today a difficult task: it is because of this contingency that many economists, for example, prefer to focus on money and prices rather than value per se. However, as Bryan and Rafferty (2006: 36) have noted, explanations in terms of money and prices end up being unsatisfactory: hence the need for a valuation of something more 'determinant' and 'underlying' (such as fundamental value) always recurs, as it has in the current crisis when the exorbitant value of derivatives markets has been called into question. The body of critical and feminist political economic work I have briefly referred to can provide crucial resources for interrogating and intervening in today's value-making processes. This is because it illustrates how seeing value as always 'contingent, hermeneutic, negotiable and non natural' (Mirowski, 1990: 7) and its measurement as socially constructed does not mean rejecting _any_ measure of value. Rather, it leads us to recognise that while there is no one 'correct way for a society to measure a commodity [...] the way its measurement is _instituted_ has important consequences' (Mirowski, 1991: 568; emphasis added).\n\nIndeed, the growth of financial derivatives has not happened overnight. In addition to the technical, linguistic and legitimatory mechanisms Mackenzie has identified in relation to the performativity of financial theories, the role institutional arrangements have played cannot be overestimated. It was with the demise of the Bretton Woods monetary management system that old risks previously managed by the state were revealed and became privatised while new ones started to proliferate; and firms' and governments' increasing need to protect themselves against price volatility was met by the proliferation of financial instruments such as derivatives. And the significance of such a process is not limited to the commercial sphere of firms and governments. As states have gradually withdrawn from the realm of social provisioning, financial markets have increasingly acquired the role of social insurers (Fumagalli and Mezzadra, 2009: 210). This means that an individual's present and future have become ever more reliant on the value of assets invested in financial markets, pension funds being just one example.
As well as illustrating the role of institutional arrangements in generating change, this process brings to light the productive power of a technology such as derivatives, showing how, although financial innovations, economy and society 'may be shaped through regulatory processes, technology can also act as a form of regulation in its own right since it forces certain kinds of human behaviour while removing the conditions of possibility for others' (Pickersgill, 2013: 329).\n\nNow, we have seen how, in the process of allowing firms, governments and individuals to hedge themselves against an ever-growing number of risks, derivatives have the potential to affect the value of the underlying asset, raising the more general issue of the impossibility of retrieving any 'pure', 'real' value untainted by the financial sphere. We can therefore appreciate how crucial it is – if a world in which financial markets are the arbiters of value is to be avoided, not because they are unreal but because the reality they produce is undesirable – to start thinking of ways to deal with such uncertainty differently, and this includes thinking of alternative institutional arrangements which might be able to affect such value-making processes. I turn briefly to one such example to illustrate the possibility of dealing differently with the financial uncertainty, and the contestability of value, revealed by financial derivatives. This should not be taken as a blueprint for macroeconomic action but as an (always imperfect) attempt to _institute_ a different way of determining and measuring value in today's financialised economies. In 2010, I conducted research on the Regional Monetary Agreement (RMA), a proposal for dealing with exchange rates tabled by the Republic of Ecuador at the UN. Although the proposal has since been abandoned, I refer briefly to it since it represents an attempt to take financial uncertainty seriously by recognising, while participating in, the co-production of the financial and real spheres of the economy (Ecuador, 2008). Exchange rates are quite significant from a perspective that considers the contestability of value, as they are supposed to represent the worth of one currency (and therefore the state of one economy) in terms of another. But how does one think about this relation of equivalence? How does one measure it?\n\nNeo-classical theory talks of Purchasing Power Parity and fundamental values, but in financial markets actual exchange rates are determined by the supply of, and demand for, currencies, and it is here that expectations play an important role. Although these expectations are supposed to be based on so-called real variables such as price indices, interest rates, macro-economic stability and inflation rates, and leaving aside the question of how real an account of the state of the economy these variables can ever give, the point is that in today's financial markets expectations about the future impact of such variables, including anticipations of others' expectations, are always already affecting today's value (Harvey, 2005: 5–6). This level of uncertainty shows exactly the limits of regulation: measures can and should be introduced to limit speculative attacks against currencies. However, as long as derivatives provide such a crucial hedging function there will be a need for them; and as long as there is such a need, there will be speculation.
Furthermore, even if the impact of derivatives on prices were reduced, the value exchange rates are supposed to express (the average prices of goods and services) remains contestable. Indeed, how is the 'real' value of goods and services, whose relation of equivalence exchange rates are supposed to express, to be measured in the first place?\n\nThese are the two aspects the RMA aimed to address: first, by proposing to deal with exchange rates in a cooperative manner. This was through mutually agreed bands based on the sharing of information about macroeconomic policies among states (Ecuador, 2008: 2–3), and it aimed to remove a critical source of uncertainty from the realm of private dealings. Thus, it would have reduced the need for firms and governments to hedge their operations against the risks of price volatility in the first place. Second, by acknowledging, while trying to shape, expectations, and agreeing on optimal exchange rates within bands, policymakers intended to immerse themselves in the construction of exchange rates, thereby removing this process from the exclusive realm of financial markets. The proposal envisaged interaction between governments and market agents in the following way: governments would give an indication of exchange rates through bands; the rates would, however, be left free to fluctuate within these bands until governments' projections were called into question by market agents. A re-assessment would then take place and governments would make a decision again. The proposal therefore envisaged a different kind of arrangement between states and markets, recognising the participation of both in the determination of the value of exchange rates.\n\nWhat is interesting is that, although the proposal talked about anchoring expectations, the actual aim was not that of making exchange rates reflect equilibrium prices (in particular the average price of goods and services). When I asked in Quito (23 March 2010) what 'real' variables exchange rate expectations should be anchored to, Pedro Páez, the President of the Ecuadorean Technical Commission, pointed out that 'equilibrium prices' have not existed for a long time, particularly if one takes into account the 'historic matrix of dependence' that has its roots in the colonial period: hence the difficulty with setting any (exchange rate) parity at the beginning of the institutional arrangement. However, this did not imply abandoning the attempt to connect exchange rates with the economy (in this case average prices); on the contrary, it meant participating more actively in its making. This is why, as the proposal made clear and Páez was adamant in emphasising, the financial architecture could not be thought of in isolation from reforms that need to occur in areas ranging from food, agriculture, housing, health and education, to energy, technology, environmental and bio-services.\n\nIt is not my intention to idealise such an arrangement, nor to suggest it as a blueprint for future action in the international management of exchange rates. I have briefly referred to it as an instance of 'investment' in the economy that rejects the separation between the real and the financial spheres: its emphasis was rather on the economy conceived of not in isolation from the other realms of life but as the terrain on which the (re)production of life takes place.
I have also referred to it as an instance of an arrangement that shows how seeing value as contested does not mean that any value-making process goes; on the contrary, it requires actors (policy makers, market participants, etc.) to engage more fully in its construction. From this perspective, the notion of 'fundamental' value need not be abandoned but rather reconceptualised: no longer seen as that which is pure and untainted by financial activity – and therefore outside financial markets which attempt to capture its substance – but rather as that which is constantly made. Finally, I have conceived of this arrangement as a 'provocation', and I propose to think of institutional arrangements more generally as provocations, in three interrelated respects: first, in the tradition of feminist autonomists, so as to think of desirable forms of intervention in the economy capable of engendering alternative valorisation processes. Second, to acknowledge that non-humans as well as humans participate in valorisation processes and that the reality provoked by these interventions might be very different from that envisaged by humans. Third, to bear in mind that whatever the 'goals' we aim to achieve through interventions, the act of formatting markets will always entail violent exclusions.\n\n## **Conclusion**\n\nThis chapter has argued against the view of financial derivatives as parasitical and unproductive financial innovations. It has claimed that they can best be understood as a technology, and one which acts as a 'form of regulation in its own right', facilitating certain practices and obstructing others (Pickersgill, 2013: 329). Thus, while embedding the institutional drive for the privatisation (and proliferation) of risks and uncertainty, derivatives have also affected the process through which value is produced at the intersection between the real and the financial spheres of the economy, with important consequences for the investment decisions of firms, governments and individuals. Tracing the complexity of such value-making processes has simultaneously brought to light the porosity of the boundaries between the real and the financial spheres of the economy, the economy and society, formal and informal regulation, and the intended and unintended effects of the latter, thereby extending to the domain of financial practices the important contribution STS have made in troubling traditional boundaries (cf. Cloatre, 2013). Of paramount importance in this respect has been the contribution of STS-inflected SSF: scholars working within this field have opened up to scrutiny the many financial innovations which until recently have been considered as either neutral devices or mere receptacles of values. Notwithstanding this important work, I have argued that SSF have up until now refrained from making substantial 'investments' in reformatting the economy. Such investments include, but are not limited to, struggles over different kinds of regulation, institutional and otherwise. Regulation is here conceived of not as a blueprint for action but as a provocation, that is, the partial, limited and always precarious attempt to re-assemble markets (and reality) differently, with an appreciation that the reality thus provoked might be different from the one we envisage in action.\n\n1. The research was supported by the British Academy (grant number SG090138).\n\n## **References**\n\nAlessandrini, D.
(2012) 'Immaterial labour and alternative valorisation processes in Italian feminist debates: (re)exploring the \"commons\" of re-production', _Feminists@Law_ , 1(2): 1–28.\n\nArthur, B.W., Holland, J.H., LeBaron, B., Palmer, R. and Tayler, P. (1996) 'Asset pricing under endogenous expectations in an artificial stock market', Social Sciences Working Paper. Available: http:\/\/ssrn.com\/abstract=2252 (accessed 5 July 2013).\n\nBijker, W., Hughes, T.P. and Pinch, T. (2012) _The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology_ , Cambridge, MA: MIT Press.\n\nBryan, D. and Rafferty, M. (2006) _Capitalism with Derivatives: A Political Economy of Financial Derivatives, Capital and Class_ , London: Palgrave Macmillan.\n\nCallon, M. (1998) _The Laws of the Markets_ , Oxford: Blackwell.\n\nCallon, M. (2007) 'What does it mean to say that economics is performative?', in D.A. Mackenzie, F. Muniesa and L. Siu (eds), _Do Economists Make Markets? On the Performativity of Economics_ , Princeton, NJ: Princeton University Press.\n\nCloatre, E. (2013) _Pills for the Poorest: An Exploration of TRIPs and Access to Medication in Sub-Saharan Africa_ , London: Palgrave Macmillan.\n\nCloatre, E. and Dingwall, R. (2012) 'The migration of objects, scripts and governance', _Regulation and Governance_ , first published online: 10 October 2012.\n\nCollins, H.M. and Evans, R. (2002) 'The third wave of science studies: studies of expertise and experience', _Social Studies of Science_ , 32(2): 235–296.\n\nDalla Costa, M. and James, S. (1972) _The Power of Women and the Subversion of the Community_ , Bristol: Falling Wall.\n\nde Goede, M. (2005) _Virtue, Fortune, and Faith: A Genealogy of Finance_ , Minneapolis: University of Minnesota Press.\n\nEcuador, Permanent Mission to the United Nations (2008) 'The Ecuadorian proposal for a crisis response agenda from the south', New York: UN.\n\nFederici, S. (1980) 'Wages against housework', in E. Malos (ed.), _The Politics of Housework_ , London: Allison and Busby.\n\nFumagalli, A. and Mezzadra, S. (2009) _Crisi Dell'Economia Globale: Mercati Finanziari, Lotte Sociali e Nuovi Scenari Politici_ , Verona: OmbreCorte-UniNomad.\n\nGraham, B. and Dodd, D. (1940) _Security Analysis: Principles and Technique_ , New York and London: McGraw-Hill Book Company.\n\nHarvey, J.T. (2005) 'Post Keynesian versus neoclassical explanations of exchange rate movements: a short look at the long run', Working Paper, Texas: Department of Economics, 1996.\n\nJasanoff, S. (2003) 'Breaking the waves in science studies: comment on H.M. Collins and Robert Evans, the third wave of science studies', _Social Studies of Science_ , 33(3): 380–400.\n\nJohnson, D.G. and Wetmore, J.M. (2009) _Technology and Society: Building our Sociotechnical Future_ , Cambridge, MA: MIT Press.\n\nKnorr Cetina, K. and Preda, A. (eds) (2005) _The Sociology of Financial Markets_ , Oxford: Oxford University Press.\n\nLatour, B. (1993) _We Have Never Been Modern_ , Brighton: Harvester Wheatsheaf.\n\nLatour, B. (2005) _The Politics of Nature: How to Bring the Sciences into Democracy_ , Cambridge, MA: Harvard University Press.\n\nLiPuma, E. and Lee, B. (2004) _Financial Derivatives and the Globalisation of Risks_ , Durham, NC and London: Duke University Press.\n\nMackenzie, D.
(2006) _An Engine, Not a Camera: How Financial Models Shape Markets_ , Cambridge, MA: MIT Press.\n\nMacKenzie, D. (2009) _Material Markets: How Economic Agents are Constructed_ , Oxford: Oxford University Press.\n\nMackenzie, D., Muniesa, F. and Siu, L. (eds) (2007) _Do Economists Make Markets? On the Performativity of Economics_ , Woodstock, Oxfordshire: Princeton University Press.\n\nMarazzi, C. (2010) _The Violence of Financial Capitalism_ , London: MIT Press.\n\nMarx, K. (1981) _Capital_ , Vol. III, London: Penguin Classics.\n\nMatai, D.K. (2009) 'The size of the derivative bubble = $190k per person on the planet', Global Research. Available: www.globalresearch.ca\/the-size-of-the-derivatives-bubble-190k-per-person-on-planet\/12753 (accessed 5 July 2013).\n\nMaurer, B. (2002) 'Repressed futures: financial derivatives' theological unconscious', _Economy and Society_ , 31: 15–36.\n\nMerton, R.K. (1968) _Social Theory and Social Structure_ , New York: Free Press.\n\nMirowski, P. (1990) 'Learning the meaning of a dollar: conservation principles and the social theory of value in economic theory', _Social Research_ , 57(3): 689–717.\n\nMirowski, P. (1991) 'Postmodernism and the social theory of value', _Journal of Post-Keynesian Economics_ , 13(4): 565–582.\n\nMirowski, P. and Nik-Khah, E. (2007) 'Markets made flesh: performativity and a problem in science studies, augmented with the consideration of the FCC auctions', in D. Mackenzie, F. Muniesa and L. Siu (eds), _Do Economists Make Markets? On the Performativity of Economics_ , Woodstock, Oxfordshire: Princeton University Press.\n\nOrlean, A. (2005) 'The self-referential hypothesis in finance', in J.P. Touffut (ed.), _The Stability of Finance in Europe_ , Paris: Albin Michel.\n\nPickersgill, M.D. (2012) 'The co-production of science, ethics and emotions', _Science, Technology & Human Values_ , 37(6): 579–603.\n\nPickersgill, M.D. (2013) 'Sociotechnical innovation in mental health: articulating complexity', in M.L. Flear, A. Farrell, T.K. Hervey and T. Murphy (eds), _European Law and New Health Technologies_ , Oxford: Oxford University Press.\n\nStrange, S. (1997) _Casino Capitalism_ , Manchester: Manchester University Press.\n\nWynne, B. (2003) 'Seasick on the third wave? Subverting the hegemony of propositionalism', _Social Studies of Science_ , 33(3): 401–417.\nSection 4\n\n* * *\n\n# Law, technoscience and the stabilization of knowledge\n\n* * *\nChapter 10\n\n* * *\n\n# **Epistemic jurisdictions**\n\nScience and courts in regulatory (de)centralisation\n\n_David E. Winickoff_\n\n* * *\n\nRegulatory science can be defined as the scientific foundation of public policy decisions, whether regulatory, legislative, or judicial (FDA 2013). As a practice, it includes, among other things, technical reasoning such as risk analysis and modeling; the operation of expertise in policy; and procedures for validating knowledge in courts and other governing institutions. The politics and operation of these forms of technical reason within public process have been the subject of STS scholarship for some time (e.g. Nelkin 1984). This work has helped uncover, in nuanced ways, the politics of who makes authoritative knowledge, using whose methods, for whom, and subject to what political consequences (e.g. Jasanoff 1987; Gieryn 1999; Sarewitz 2004). As STS scholars have shown, regulatory science does not represent knowledge free of value.
Often sitting in a context of uncertainty and contestation, it combines scientific reasoning with social and political judgment (Jasanoff 1990: 229).\n\nScholars turning their attention to regulatory science at the global and international levels have largely focused on its operation, broadly speaking, within single institutions. A central finding has been that these institutions can act as hegemonic centers of calculation and assessment, specialized knowledge, and embodied expertise. This work has analyzed, among other things, the use of science for environmental impact assessments at the World Bank (Goldman 2005); the use of self-governance and peer review at the IPCC (Edwards and Schneider 2001); and power dynamics in the Millennium Ecosystem Assessment (Miller 2007). Such work has also examined how expert communities function and acquire power at the global scale (Oreskes and Conway 2010; Haas 1992).\n\nMuch less has been said about how science operates through governance systems that traverse levels, scales, and sectors to address collective problems (Ostrom 2009; Biermann and Pattberg 2008). It is well understood that regulatory approaches to global issues, from climate change to food safety, are increasingly moving away from globalizing systems and instead operating through pluralistic, overlapping, and fragmented regimes (Boyd 2010). These forms of \"polycentric\" or \"complex\" governance are hailed by some scholars and policy-makers (e.g. Ostrom 2009; Rabe 2004). But they can result in jurisdictional overlap and competition. Regulatory science is often a focal point of conflict (e.g. Peel 2010).\n\nThe jurisdictional politics of ethanol regulation is a good illustration. In the absence of strong Federal leadership, U.S. cities, states, and regions are enacting efficiency incentives and purchasing mandates to mitigate their carbon footprints. California recently enacted a \"low-carbon fuel standard\" which was cited as a model by the European Union in its passage of the 2009 Renewable Energy Directive. However, in a lawsuit currently being heard in an appellate court in California, private actors have sued California, arguing on federalist grounds that national standards pre-empt California's technical standards. Meanwhile, at the global level, Brazil and the U.S. are beginning to challenge the EU Directive at the World Trade Organization (\"WTO\") on the grounds that its sustainability standards constitute an unfair technical barrier to trade. In this process, technical indicators of sustainability themselves will likely go on trial.\n\nHow is regulatory science operating in the jurisdictional politics of complex regulatory regimes, and what are the implications for democratic legitimacy? A number of cases of cross-level legal friction and multi-level regime formation help me explore these questions. First, I attempt to theorize the role of regulatory science within multi-level governance through the STS framework of coproduction, drawing attention to what I call \"epistemic jurisdiction.\" Second, I analyze legal encounters with regulatory science in three case studies, including food safety adjudication at the WTO, the Clean Air Act at the U.S. Supreme Court, and California's Global Warming Solutions Act.
Through these cases, I argue that the creation of epistemic jurisdictions and the authorization of particular forms of regulatory science are performing critical political work in shaping multi-level regimes.\n\n## **Coproduction, multi-level governance, and the law**\n\nImportant strands of work on regulatory science focus on the decision-making processes within expert communities, the contingency of knowledge, and political influences on knowledge production (e.g. Edwards and Schneider 2001; Oreskes 2010). In this chapter, I wish to examine not only how regulatory science is shaped in contexts of multi-level conflict, but also how it is _productive_ of new jurisdictional arrangements.\n\nWork in the STS framework of coproduction (Jasanoff 2004) is useful here because it understands science and technology, on the one hand, and politics and law, on the other, as co-constitutional. STS scholars have explored the co-emergence of political and epistemic order in a wide array of contexts, including climate change (Miller 2005), the protection of endangered species (Thompson 2004), patenting in the life sciences (Parthasarathy 2007), and racial categorization in drug development (Kahn 2012). As a theory, coproduction draws on Michel Foucault's insight that knowledge is not just a factor in governance but a constitutive element (e.g. Foucault 1971 [1966]).\n\nThe coproductionist framework encourages us to view the development of regulatory science and the formation of jurisdictional architecture in the same breath. This architecture is in a period of rapid change and so deserves analysis and scrutiny. In the U.S., it is an era of devolution. Power is actively shifting, broadly speaking, from the center back toward the States and towards even lower levels in arenas like environmental policy. In the European Union, the process of integrating economic and regulatory functions has birthed a politics of \"subsidiarity\" (Nicolaidis and Howse 2001: 1). There is, in general, a move to complex regulatory architectures that depart from the clearly demarcated levels of regulatory jurisdiction found in pure federalism (Hooghe and Marks 2001). For this reason, many legal scholars have given new power sharing agreements across levels the name of \"dynamic federalism\" (Ruhl and Salzman 2010: 66) or prefer to speak of \"complex governance.\" Multiple levels of government are interacting in regulatory processes (Engel 2006), and these interactions require more attention within the field of STS.\n\nCoproduction points to how processes of both political centralization and decentralization may depend upon, more than has been acknowledged, the production of new forms of science and expert authority. In particular, evolving multi-level governance raises questions about the allocation and reallocation of \"epistemic jurisdiction\": _the power to produce, interpret or warrant technical knowledge for a given political community, topical arena, or geographical territory_.\n\nThe production of new epistemic jurisdiction can support political centralization. For instance, the mandate to liberalize trade has called into being new global standard setting bodies that have taken a share of regulatory authority from sovereign states. Through appeals to a universalist expertise and regulatory science, these bodies can operate as depoliticized vectors of globalization (Moore _et al._ 2011). But epistemic jurisdiction can also be invoked to oppose political centralization.
For instance, within the arena of climate change, sub-national actors have resisted international regimes, and the policies of their own states, through appeals to \"local knowledge\" (Long-Martello and Jasanoff 2004). In other words, the appeal to local expertise can underwrite claims of localized political autonomy.\n\nUnderstanding the ways in which centralization and decentralization depend on regulatory science and the configuration of epistemic authority requires more systematic investigation at different sites of practice. Legislative history, rule making, and adjudication are promising sites, for they stage the interaction of legal and epistemic forms of authority, and are productive of new regulatory architectures. Legal process within the regulatory arenas of food safety and climate change will be the focus here. These cases show the different ways in which new jurisdictions—both epistemic and political—are built through legal entanglements with regulatory science.\n\n## **International food safety regime**\n\nThe Sanitary and Phytosanitary (SPS) Agreement, one of the suite of agreements adopted under the WTO in 1994, is the dominant source of food safety governance at the global level. The agreement grew out of widespread concerns that food safety regulations were being used to protect domestic production against foreign competition. The aim of the SPS Agreement is actually more ambitious than that of the core General Agreement on Tariffs and Trade (GATT), for it moves beyond the principle of non-discrimination to one of regulatory harmonization (Charnovitz 2000). Harmonization is not homogenization, and the agreement recognizes the need to preserve a degree of regulatory autonomy for member states (SPS Preamble). Even so, harmonization does imply the operation of a common rationality that can serve to promote convergence and cooperation.\n\nThe negotiation history, text, and legal contestations of the SPS Agreement suggest how regulatory science has been a constitutive element of the food safety regime. Harmonization as embodied in the agreement was only achieved via a technocratic ideology, a scientistic epistemology, and the delegation of epistemic jurisdiction to a global agency. Further, as adjudication under the SPS shows, the system of power sharing within the global regime has co-emerged with interpretations of what counts as valid science.\n\n### **_The ascent of risk assessment and a global regulatory jurisdiction_**\n\nWhile regulatory harmonization was a goal of the Uruguay round of trade negotiations, it was not easy to achieve (Winickoff and Bushey 2010). Food, after all, is a highly regulated area sitting squarely within the traditional police powers of the sovereign state. Safety standards of all kinds, let alone ones in the arena of food, are value-laden and indicative of ethical choices (Busch 2012; Timmermans and Epstein 2010). To develop global standards, then, is to enact a form of global political and cultural order (Brunsson and Jacobsson 2000).\n\nThe negotiating history of the SPS Agreement shows how science became the primary ideological resource for producing agreement (Winickoff and Bushey 2010). As the lone superpower in the mid-1990s, the U.S. pushed a universalist notion of \"scientific justification\" as the primary tool to combat bans on beef hormones in European states and other non-tariff barriers to the food trade. Negotiators from Canada, Australia, New Zealand, and the U.S.
identified formal risk assessment as the earmark of scientific justification, even though it was not yet well established in food safety regulation worldwide. Concurrently, the European Commission—Europe's trade representative at the WTO—was engaged in its own project of harmonizing \"social regulation\" across EU member states (Joerges 1997), and was ultimately willing to embrace risk analysis as a harmonizing force within the SPS Agreement. Given the alignment of interests across the U.S. and the EC to harmonize food safety regulation, scientific evidence was a plausible enough idea around which to forge agreement.\n\nThe focus on science is evident in the text of the agreement. The SPS's core disciplines require that WTO members ensure that any food safety measure \"is based on scientific principles and is not maintained without sufficient scientific evidence\" (SPS 2.2). Final regulatory judgments of states are given safe harbor, so long as they are \"rationally based\" on technical risk assessments. Scientific validity, as established through a particular type of regulatory science, is thus necessary to establish legal validity.\n\nHarmonizing regulatory logic does not address the substance of standards themselves, so SPS negotiators looked for existing international food standards as a possible resource for substantive harmonization. They found the Codex Alimentarius Commission, a little-known bureau of the FAO and WHO that had been producing voluntary food safety standards on residues, pesticide use, etc., since the 1960s. Accordingly, within the SPS Agreement, the Codex was designated one of three \"relevant international organizations\" around whose standards the signatories would attempt to harmonize (SPS 3.4). The guidelines and recommendations of Codex, if adopted by nations, would be considered to satisfy the obligations under the agreement (SPS 3.2). Ironically, it was only with the prospect of designation as a \"scientific\" agency of the WTO that the Codex began to formalize its own risk assessment protocols (Winickoff and Bushey 2010). Far from drawing upon a well-established set of epistemic authorities, the WTO regime was a midwife to them.\n\nThe negotiation and text of the SPS Agreement reveal how, within multi-level governance systems, authorizing regulatory science and new epistemic jurisdictions can underwrite projects of centralization. Within this regime, derogation from known international standards requires special scientific justification. Epistemic jurisdiction has evolved from an exclusively national one to a multi-level one in which state agencies are held accountable to certain methodological universals.\n\n### **_Adjudicating GMOs at the World Trade Organization_**\n\nThe fact that the SPS Agreement did not settle questions of power sharing is evidenced by the litigation that has steadily gone on since its execution. It is unsurprising that these legal disputes have tested the requirements of \"scientific justification,\" given its pivotal role in the regime. Cases involving hormones in beef, imported salmon, apples, and genetically modified crops show how the patterns of regulatory power sharing within the global regime are co-evolving with interpretations of validity in regulatory science.\n\nIn particular, the GMO dispute under the SPS Agreement illustrates this process. The context of the dispute is well known. In the 1980s, the U.S. developed a \"products approach\" for regulating agricultural biotechnology.
This approach assumes that transgenesis entails no additional risk to consumers or agriculture and requires no additional scientific evidence of safety so long as the new product is deemed \"substantially equivalent\" to its conventional counterparts. All new GMO crops in the 1980s and 1990s met this standard. In the EC and European states, regulators have taken a more precautionary \"process approach.\" This approach assumes that transgenesis may entail novel and unique risks to human health or the environment and puts an extra scientific burden on producers to demonstrate safety (Jasanoff 1995).\n\nIn 2003, the WTO Dispute Settlement Body had to address the growing trade conflict on GMOs, and adjudicate these distinct approaches to evidentiary burden and scientific justification. Europe had begun importing Roundup Ready soy and other GMOs in the mid-1990s. In the face of rising consumer concerns in the late 1990s, European states had delayed the approval process for new GMO applications, ostensibly waiting until the EU could pass new legislation on traceability and labeling (Winickoff _et al._ 2005). The United States, Canada, and Argentina initiated dispute settlement procedures at the WTO against the EC for delaying approvals of GM crops.\n\nThis dispute became the so-called _Biotech Products_ case, and it involved the interpretation of provisions of the SPS Agreement concerning \"scientific justification\" and \"risk assessment.\" In _Biotech Products_ , the U.S. challenged the scientific basis of various European actions preventing the importation of GM crops and food products. In particular, the U.S. alleged that reversals of GM regulatory policy within the EU illustrated the EU's departure from a fixed body of sound science and constituted \"unreasonable\" or \"undue delay\" under Annex C(1)(a) of the agreement. In response, the EC focused its argument on the safe harbor provision of SPS Article 5.7—which permits members to impose provisional or precautionary measures \"in cases where relevant scientific evidence is insufficient\"—arguing that GM presented novel scientific and social questions and therefore, at the time, required a more developed scientific record of safety.\n\nIn the process of the adjudication, the WTO took up a detailed review of the risk assessment processes of European states, putting it in the position of certifying or rejecting competing models of technical reason (Winickoff _et al._ 2005). The Dispute Settlement Body (DSB) found against individual EU member states and the European Commission for procedural irregularities in their risk assessment processes (Bonneuil and Levidow 2012). The DSB held that a number of European countries had conducted risk assessments, and that the conclusions of these assessments did not support a \"precautionary\" moratorium on crop approvals. Rather, their procedure entailed an \"undue delay\" and therefore a violation of the agreement. Further, the DSB disagreed that \"relevant scientific evidence was insufficient\" for a valid risk assessment, an interpretation that seems to narrow the availability of defenses based on the precautionary principle.\n\nThe _Biotech Products_ case achieved an important result in the food safety regime, effectively strengthening the SPS Agreement as a sword against nationally-enacted food safety regulations.
The international level of governance was strengthened within the multi-level regime at the same time as risk assessment was reinforced as the master regulatory science.\n\nPerhaps less predictably, the ruling produced new regulatory order at lower levels. A follow-on result was that _Biotech Products_ helped the European Commission consolidate its epistemic jurisdiction over food safety _across_ Europe. The EC had recommenced GM approvals before the resolution of _Biotech Products_ , and had adopted the sharpened risk-based approach emerging out of Codex and WTO case law. As the opinion found non-compliance on the part of individual member states, there has been a shift in epistemic jurisdiction from the national to the European level. Since _Biotech Products_ , there has been an ongoing struggle between the European Food Safety Authority (EFSA), which nominally holds the authority to conduct the science-based risk assessment, and national authorities. Although the ability of member states to make a political judgment concerning the acceptability of a given set of risks (as outlined by EFSA) has not been questioned, the idea that the EC would take from member states the power to assess, i.e., conduct the technical analysis itself, has led commentators and activists to label this an unwarranted grab at state sovereignty.\n\nPolitical legitimacy has been a key concern regarding EFSA's power to assess. This assignment of epistemic authority over food safety is perceived to be part of a larger pattern whereby Europe attempts \"to integrate new geographic spaces and populations not by overt coercion, but by instituting a host of harmonized regulations, codes, and standards\" (e.g. Dunn 2005). For some, centralization of these \"powers to know\" is a back channel for constituting Europe as a unified political community without the proper political debate and without democratic accountability (Wickson and Wynne 2012).\n\n## **Climate federalism in the United States**\n\nThe food safety case evinces a redistribution of power operating through particular scientific logics and the redistribution of epistemic authority. It has been a top-down process, although lower levels are pushing back in Europe through the \"safeguard clause\" of the European GMO directive (Dir. 2001\/18\/EC). In the arena of climate change, regime formation is not occurring from the top down, but from the bottom up, creating innovative approaches to carbon mitigation, but also introducing conflicts between lower and higher regulatory authorities. Here, too, science and epistemic jurisdiction are important terrain for negotiating multi-level power sharing.\n\nClimate change is currently a crucial site in the construction of a more complex global order (Boyd 2010). Networks of cooperation are moving vertically and horizontally across cities, states, regions, and nations worldwide. In the U.S.—where national policy has moved at a snail's pace—regions, states, counties, and cities are taking matters into their own hands, producing what legal scholars have called \"climate federalism\" (Farber 2008). Advocates of this _de facto_ decentralized approach argue that it can foster a laboratory of democratic experimentation and render governance more accountable (Rabe 2004; Chemerinsky _et al._ 2008). Critics see a multi-level approach as potentially causing unnecessary patchworks of regulation (Farber 2008).\n\nRegardless of the relative merits of climate federalism in the U.S., it has led to jurisdictional friction.
The mismatch in scale between biogeophysical systems and management systems can cause regulatory overlaps, gaps, and controversies (Cash _et al._ 2006). Science and technical reason are focal points for both cooperation and conflict. Jurisdictional conflicts, often playing out in the courts, are sites in which particular forms of knowledge, assessment, and expertise are made authoritative. In this domain, as in food safety, the construction of epistemic jurisdiction—especially as it operates over evidentiary issues, the choice of analytic frameworks, and methods of accounting—is a driver of multi-level reformation.\n\n### **_Carbon regulation at the U.S. Supreme Court_**\n\nThe early 2000s were a time of political logjam and intense acrimony in U.S. climate policy. The arrival of George W. Bush as the 43rd U.S. President meant the beginning of an era of overt obstructionism towards reducing GHGs in the U.S. The Environmental Protection Agency (EPA), the lead environmental agency on climate, was complicit. Faced with an intransigent White House and EPA, 19 non-governmental organizations petitioned the agency to begin regulating GHGs as an \"air pollutant\" under the Clean Air Act (CAA). After holding a number of public hearings on the issue, the EPA decided to do nothing. A large number of State and local governments joined the original petitioners to file suit in the Federal Courts, alleging that the agency's decision not to regulate violated the law. The State of Massachusetts became the lead plaintiff, and the U.S. Supreme Court eventually took the case.\n\nNarrowly speaking, there were two legal issues in the resulting case, _Massachusetts v. EPA_. First, did Massachusetts have legal standing to sue the EPA? To establish standing in U.S. Federal Courts, a plaintiff must be able to show, among other things, that it \"suffered a concrete and particularized injury that is either actual or imminent.\" Plaintiffs must also show that the injury is fairly traceable to the defendant, and that the injury is capable of redress. Second, was the EPA's judgment not to regulate carbon as a \"pollutant\" \"arbitrary and capricious,\" the standard of judicial review under the Clean Air Act?\n\nComplex questions of federalism and epistemic jurisdiction lurk underneath these seemingly narrow legal issues. In its complaints, the State of Massachusetts appealed to a global science and global experts to establish local harms, making both a procedural and substantive challenge to regulatory authorities at the Federal level. The EPA, the expert agency to which the U.S. Congress delegated power, denied that existing scientific knowledge justified legal standing, and claimed that the Court itself lacked sufficient legal and scientific authority to overturn its definition of \"pollutant.\" The questions of epistemic authority and federalism were inseparable.\n\nIn a 5–4 decision, the U.S. Supreme Court decided that Massachusetts had demonstrated its injury and was granted standing. The Court also held that the EPA's conclusion that carbon cannot be an air pollutant was an \"arbitrary and capricious\" judgment, and therefore subject to rejection by the Court under the terms of the Clean Air Act. Further, the Court ruled that the EPA could \"avoid taking further action only if it determines that greenhouse gases do not contribute to climate change or if it provides some reasonable explanation as to why it cannot or will not exercise its discretion to determine whether they do\" (533).
In other words, the EPA would need to justify its inaction: the Court shifted the scientific burden of proof to the EPA.\n\nWriting for the majority, Justice John Paul Stevens led not with law but with facts about climate change as understood by \"respected scientists\" (505). He narrated a history of growing agreement about the scope and cause of global warming, indicating the ways in which accreting knowledge had grounded government actions in the U.S. and the Kyoto Protocol. Stevens drew together a large network of actors to attest to the harm Massachusetts had suffered from global warming and the EPA's inaction—from the Mauna Loa observatory and its atmospheric carbon measurements dating back to 1959, to ancient ice cores, to the United Nations climate institutions, to rising coastal sea levels (507–509).\n\nUnder the Clean Air Act, epistemic jurisdiction clearly lies with the EPA, for EPA regulators have the power to assess risk and produce regulations. However, Stevens explicitly cited the Intergovernmental Panel on Climate Change (IPCC) five times, describing it as \"a multinational scientific body organized under the auspices of the United Nations [...] drawing on expert opinions from across the globe\" (508). The effect of his scientific discussion is to simultaneously validate an account of nature and privilege an epistemic authority that transcends state or nation.\n\nWriting for the minority, Chief Justice John Roberts rejected the state's arguments on both logical and empirical grounds. He said that drawing a causal inference between the EPA's failure to regulate and Massachusetts' harm is \"pure conjecture\": the complexity of feedbacks in the system makes drawing this causal inference to _local_ harm impossible. Furthermore, Roberts attacked the empirical basis of the injury finding, noting that the measured amount of Massachusetts sea level change is within the computer model's conceded margin of error (542).\n\nIn the face of such a strong dissent, Stevens probably felt the need to shore up his epistemic case with a moral argument about federalism—they work hand in glove. As part of the federalist bargain in the U.S., he said, individual States like Massachusetts have effectively waived certain \"sovereign prerogatives\", including ones relevant here, e.g., the power \"to force reductions in greenhouse gas emissions, to negotiate emissions treaties with developing countries, and (in some circumstances) to exercise the police power to reduce motor-vehicle emissions\"; these powers \"are now lodged in the Federal Government\" (519). Because of these significant forfeitures, States as \"quasi sovereigns,\" empowered to protect their land and citizenry, must have special consideration as they call the Federal Government to action in pre-empted regulatory spheres.\n\nIn effect, Stevens struck an interesting new balance of Federal power: the exclusive power to assess science and regulate has not been dislodged from the Federal Government, but States—as a class of plaintiffs challenging regulatory judgments—should be shown special consideration because of their traditional sovereign roles. In aid of his argument, Stevens reached far back to a 1907 opinion by the famous Supreme Court Justice, Oliver Wendell Holmes, in the case of _Georgia v. Tennessee Copper Co._ (206 U.S. 230). In that case, Georgia sought to protect its citizens from air pollution originating outside its borders.
In a passage quoted at length in the opinion, Justice Holmes stated that\n\n> [t]his is a suit by a State for an injury to it in its capacity of quasi-sovereign. In that capacity the State has an interest independent of and behind the titles of its citizens, in all the earth and air within its domain. It has the last word as to whether its mountains shall be stripped of their forests and its inhabitants shall breathe pure air.\n> \n> (518\u2013519)\n\nStevens not only quoted Holmes, but also echoed his ominous tone. \"Rising seas,\" he says, \"have already begun to swallow Massachusetts' land\" (499). Stevens seemed to be thoroughly persuaded by the IPCC and scientific depositions that Massachusetts, perhaps like Georgia in 1907, was under the kind of environmental assault that required calling back the old doctrine of \"quasi-sovereignty.\" This standing granted Massachusetts a right to challenge the EPA's epistemic judgments.\n\n### **_San Bernardino County under California's Global Warming Solutions Act_**\n\nChief Justice Roberts' dissent in _Massachusetts v. EPA_, especially the idea that the very concept of global warming is inconsistent with recognizing the standing of localities, cuts against the grain of recent policy. Indeed, there is now quite significant regulatory activity on climate change at the state and local level in the U.S. As of 2011, seven regional climate initiatives were underway, 23 U.S. states had established GHG emissions targets or goals, and over a thousand mayors had committed their cities to try to meet the U.S. Kyoto Protocol targets (Pew 2011).\n\nCalifornia stands out among these efforts: it has legislated ambitious emissions targets and developed novel regulatory mechanisms. Signed into law in 2006 by Governor Arnold Schwarzenegger, the Global Warming Solutions Act is the capstone of California's climate policy. Best known as AB 32, the law is explicit that, although California acts as just one state, its passage is vital to both local and global interests:\n\n> [a]ction taken by California to reduce emissions of GHGs will have far-reaching effects by encouraging other states, the Federal Government, and other countries to act [...] [AB32] will continue [California's] tradition of environmental leadership by placing [it] at the forefront of national and international efforts to reduce emissions of GHGs.\n> \n> (2(c)\u2013(e))\n\nAB 32 creates a statewide GHG emissions registry and mandates a cap on GHG emissions at 1990 levels by 2020. The law does not specify the regulatory mechanisms for achieving these goals, but it directs the California Air Resources Board (CARB) to spearhead the regulatory process in collaboration with other agencies.\n\nOpponents of California's aggressive approach to climate change have attacked the law on federalist grounds. As referenced above, the Low-Carbon Fuel Standard (LCFS) developed by the California Air Resources Board regulates the kinds of fuels that can be imported and used in the state, deploying particular accountings of \"carbon intensity.\" The state's LCFS was challenged in the Federal Courts by a coalition of actors in the transportation industry who claimed that the Federal approach to fuel standards should pre-empt California's stricter carbon intensity approach. Because it regulated exhaust emissions before the Federal Government did, California has an exemption under the Clean Air Act allowing it to set emissions standards for fuels that are stricter than those of the Federal Government.
California defended its fuel standard on these grounds.\n\nCalifornia's scope for derogating from Federal regulatory standards turned on contested meanings of carbon intensity: does regulating the \"carbon intensity\" of fuels amount to a regulation of the fuel itself, thereby triggering California's exemption, or is it merely a regulation on production methods and therefore not exempt? The Federal District Court ultimately affirmed that carbon intensity constituted a fuel characteristic, allowing the California LCFS to stand. The Court simultaneously answered a technical and legal question, constituting fuel and federalism in the same judgment.\n\nJust as California's carbon regulations have come into conflict with a higher level of government, so have they brought conflict with lower ones. Under the California Environmental Quality Act (CEQA), counties or other public agencies undertaking a project with the potential to harm the environment must prepare an environmental impact report (EIR) that analyzes foreseeable environmental effects of the project and measures available to mitigate those effects. Located in Southern California, San Bernardino County covers a dry terrain of over 20,000 square miles, twice the size of Massachusetts but with one-third of the population (roughly 2 million). On March 13, 2007, the Board of Supervisors of San Bernardino County approved its 25-year Growth Plan and EIR, which lays out the policies underlying future land use and transportation. It projected a 64 percent increase in population from 2000 to 2020 (Osofsky 2009).\n\nWith California's climate law freshly on the books, State-level regulators submitted formal comments criticizing the county for its failure to include an assessment of how this growth would impact GHG emissions. In its formal reply, San Bernardino County claimed that these impacts were \"currently speculative\" (607), and that it lacked methodological guidance from State authorities for carrying out such a calculation. In April 2007, unimpressed by the county's response, the California Attorney General initiated _The People of California v. San Bernardino County_, a lawsuit alleging regulatory violations under CEQA for insufficient disclosure and analysis of the project's impacts on climate change.\n\nThe dispute amounted to a test of the relative regulatory power of the State and the county within the emerging California climate regime. Can a state environmental law on the control of GHG emissions be used to compel local government to change the way it approaches urban growth? What should be the appropriate balance of regulatory power across States and their counties?\n\nIn August 2007, both parties to the lawsuit agreed to settle the case out of court. As part of the settlement, San Bernardino County agreed to develop an inventory of all known sources of GHGs currently existing in its territory, an estimate of the current amount of gases emitted per year, a projection of emissions based on current growth plans, and a new policy target for mitigation. The county agreed to all of this, though it got the State to concede that no data sources currently existed to build this accounting system.\n\nPolitical settlement was achieved through a new system of GHG accounting and was predicated on a form of shared epistemic jurisdiction. On the one hand, the State holds the power to determine the kinds of gases that need to be counted and to mandate the methods of assessment. California, in other words, sets the frame for technical analysis.
The county, on the other hand, has the authority to collect the data and perform the analysis, as well as to develop a mitigation strategy based on that analysis.\n\nThis was simultaneously a political and epistemic settlement of climate change's jurisdictional politics. What are the rights and obligations of localities in the face of globally caused phenomena, and what forms of legal and epistemic standing should they have? This was the question in _Massachusetts v. EPA_ and it reappeared in San Bernardino County. In each case, we see how the constitution of multi-level carbon regimes is disputed on the terrain of regulatory science and shaped by the assignment of epistemic jurisdiction. But just like political jurisdiction in an age of complex and dynamic governance, epistemic jurisdiction is divided and shared.\n\n## **Conclusion**\n\nThis chapter represents a response to what Sovacool and Brown (2009) have called \"entanglements of scale\" in the dual keys of STS and legal studies. Although overlapping and \"polycentric\" forms of governance offer certain advantages, they also introduce forms of cross-level conflict that are giving rise to new multi-level formations. Science is rarely absent as regulatory power is negotiated across levels, sometimes functioning as a foundation for agreement. More often it is a sticking point.\n\nFor legal and STS scholars alike, the cases in this chapter help underscore the politics of epistemic jurisdiction\u2014the power to produce, interpret, or warrant technical knowledge for a given political community, topical arena, or geographical territory\u2014and how it is involved in processes of political (de)centralization. The food safety case illustrates how the operation of technical reason and the polycentric architecture of governance are co-evolving. _Massachusetts v. EPA_ illustrates how the construction of a new vision of split-sovereignty grows from the epistemic authority claimed by the Court and assigned to the IPCC. In the San Bernardino case, California went on the offensive against one of its constituent counties, mandating a new regime of carbon accounting, while relinquishing the basic power to interpret that data.\n\nThe production and negotiation of epistemic jurisdiction, of course, raises critical questions for democracy. In ways that are not always obvious, processes of forming and deploying epistemic jurisdiction enact\u2014both tacitly and explicitly\u2014principles of subsidiarity, standing, pre-emption, and quasi-sovereignty. The technical veil of epistemic jurisdiction is worrisome given the tangible stakes for political representation and accountability. However, the development of shared epistemic jurisdiction within new federalisms suggests an interesting solution to an old problem in STS: the concern about how diverse and distributed worldviews can be accommodated in larger political and economic integrations. If the Federal principle is applied to epistemic jurisdiction, one begins to have a political language for the sharing of the power to know across scales.\n\n Interview with members of the SPS Secretariat, Geneva, Switzerland, 2006\u20132007.\n\n The others are enumerated as the International Office of Epizootics and the Secretariat of the International Plant Protection Convention (SPS, Annex A(3)).\n\n Formally called _European Communities\u2014Measures Affecting the Approval and Marketing of Biotech Products_ (Dispute DS291), initiated Aug. 8, 2003 (WT\/DS291\/23).\n\n See ICLEI \u2013 the global cities network.
www.iclei.org\/\n\n Codified at 42 U.S.C. \u00a77401 et seq.\n\n _Massachusetts v. Environmental Protection Agency_, 549 U.S. 497 (2007).\n\n _Rocky Mountain Farmers Union v. Goldstene_, 843 F. Supp. 2d 1042 (E.D. Cal. 2011).\n\n California Environmental Quality Act (\"CEQA\"; Pub. Resources Code, \u00a7 21000 et seq.).\n\n## **References**\n\nBiermann, F. and Pattberg, P. (2008) \"Global environmental governance: taking stock, moving forward,\" _Annual Review of Environment and Resources_, 33: 277\u2013294.\n\nBonneuil, C. and Levidow, L. (2012) \"How does the World Trade Organisation know? The mobilization and staging of scientific expertise in the GMO trade dispute,\" _Social Studies of Science_, 42(1): 75\u2013100.\n\nBoyd, W. (2010) \"Climate change, fragmentation, and the challenges of global environmental law: elements of a post-Copenhagen assemblage,\" _University of Pennsylvania Journal of International Law_, 32: 457\u2013550.\n\nBrunsson, N. and Jacobsson, B. (2000) _A World of Standards_. Oxford: Oxford University Press.\n\nBusch, L. (2012) \"Standards: recipes for reality,\" _Rural Sociology_, 77: 490\u2013492.\n\nCash, D. W., Adger, W., Berkes, F., Garden, P., Lebel, L., Olsson, P., Pritchard, L., and Young, O. (2006) \"Scale and cross-scale dynamics: governance and information in a multilevel world,\" _Ecology and Society_, 11(2): Article 8 [online] www.ecologyandsociety.org\/vol11\/iss2\/art8\/ (accessed 31 July 2013).\n\nCharnovitz, S. (2000) \"The supervision of health and biosafety regulation by world trade rules,\" _Tulane Environmental Law Journal_, 13: 271\u2013302.\n\nChemerinsky, E., Daniels, B., Hardy, B., Profeta, T., Schroeder, C. H. and Siegel, N. S. (2008) \"California, climate change, and the constitution,\" _The Environmental Forum_, 25(4): 50\u201363.\n\nDunn, E. C. (2005) \"Standards and person-making in East Central Europe,\" in A. Ong and S. J. Collier (eds), _Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems_, Oxford: Blackwell, pp. 173\u2013193.\n\nEdwards, P. N. and Schneider, S. H. (2001) \"Self-governance and peer review in science-for-policy: the case of the IPCC second assessment report,\" in C. Miller and P. N. Edwards (eds), _Changing the Atmosphere: Expert Knowledge and Environmental Governance_, Cambridge, MA: MIT Press, pp. 219\u2013246.\n\nEngel, K. H. (2006) \"Harnessing the benefits of dynamic federalism in environmental law,\" _Emory Law Journal_, 56: 159\u2013188.\n\nFarber, D. (2008) \"Climate change, federalism, and the constitution,\" _Arizona Law Review_, 40: 879.\n\nFDA (Food and Drug Administration), \"Advancing regulatory science,\" www.fda.gov\/scienceresearch\/specialtopics\/regulatoryscience\/default.htm
(accessed 30 January 2013).\n\nFoucault, M. (1971) [1966] _The Order of Things: An Archaeology of the Human Sciences_. New York: Pantheon Books.\n\nGieryn, T. F. (1999) _Cultural Boundaries of Science: Credibility on the Line_. Chicago, IL: University of Chicago Press.\n\nGoldman, M. (2005) _Imperial Nature: The World Bank and Struggles for Social Justice in the Age of Globalization_, New Haven, CT: Yale University Press.\n\nHaas, P. M. (1992) \"Introduction: epistemic communities and international policy coordination,\" _International Organization_, 46(1): 1\u201335.\n\nHooghe, L. and Marks, G. (2001) \"Types of multi-level governance,\" European Integration online Papers (EIoP) 5(11), http:\/\/eiop.or.at\/eiop\/texte\/2001-011a.htm (accessed 29 July 2012).\n\nJasanoff, S. (1987) \"Contested boundaries in policy-relevant science,\" _Social Studies of Science_, 17: 195\u2013230.\n\n\u2014\u2014 (1990) _The Fifth Branch: Science Advisors as Policymakers_. Cambridge, MA: Harvard University Press.\n\n\u2014\u2014 (1995) \"Product, process, or programme: three cultures and the regulation of biotechnology,\" in M. Bauer (ed.), _Resistance to New Technology_, Cambridge: Cambridge University Press, pp. 311\u2013331.\n\n\u2014\u2014 (2004) _States of Knowledge: The Co-production of Science and Social Order_. London: Routledge.\n\nJoerges, C. (1997) \"Scientific expertise in social regulation and the European Court of Justice: legal frameworks for denationalized governance structures,\" in C. Joerges, K.-H. Ladeur, and E. Vos (eds), _Integrating Scientific Expertise into Regulatory Decision-making_, Baden-Baden: Nomos Verlagsgesellschaft, pp. 295\u2013324.\n\nKahn, J. (2012) _Race in a Bottle: The Story of BiDil and Racialized Medicine in a Post-Genomic Age_. New York: Columbia University Press.\n\nLong-Martello, M. and Jasanoff, S. (eds) (2004) _Earthly Politics: Local and Global in Environmental Governance_, Cambridge, MA: MIT Press.\n\nMiller, C. A. (2005) \"New civic epistemologies of quantification: making sense of indicators of local and global sustainability,\" _Science, Technology, & Human Values_, 30(3): 403\u2013432.\n\n\u2014\u2014 (2007) \"Democratization, international knowledge institutions, and global governance,\" _Governance_, 20: 325\u2013357.\n\nMoore, K., Kleinman, D. L., Hess, D. and Frickel, S. (2011) \"Science and neoliberal globalization: a political sociological approach,\" _Theory and Society_, 40: 505\u2013532.\n\nNelkin, D. (1984) _Controversy: Politics of Technical Decisions_, 2nd edn, Beverly Hills, CA: Sage Publications.\n\nNicolaidis, K. and Howse, R. (eds) (2001) _The Federal Vision: Legitimacy and Levels of Governance in the US and the EU_, New York: Oxford University Press.\n\nOreskes, N. and Conway, E. M. (2010) _Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming_, New York: Bloomsbury Press.\n\nOsofsky, H. M. (2009) \"Is climate change 'international'?: litigation's diagonal regulatory role,\" _Virginia Journal of International Law_, 49: 585\u2013650.\n\nOstrom, E.
(2009) \" _A polycentric approach for coping with climate change_ ,\" Report prepared for the WDR2010 Core Team, Development and Economics Research Group, World Bank, October 2009, \u00adw\u00adw\u00adw\u00ad-\u00adw\u00add\u00ads\u00ad.\u00adw\u00ado\u00adr\u00adl\u00add\u00adb\u00ada\u00adn\u00adk\u00ad.\u00ado\u00adr\u00adg\u00ad\/\u00ade\u00adx\u00adt\u00ade\u00adr\u00adn\u00ada\u00adl\u00ad\/\u00add\u00ade\u00adf\u00ada\u00adu\u00adl\u00adt\u00ad\/\u00adW\u00adD\u00adS\u00adC\u00ado\u00adn\u00adt\u00ade\u00adn\u00adt\u00adS\u00ade\u00adr\u00adv\u00ade\u00adr\u00ad\/\u00adI\u00adW\u00ad3\u00adP\u00ad\/\u00adI\u00adB\u00ad\/\u00ad2\u00ad0\u00ad0\u00ad9\u00ad\/\u00ad1\u00ad0\u00ad\/\u00ad2\u00ad6\u00ad\/\u00ad0\u00ad0\u00ad0\u00ad1\u00ad5\u00ad8\u00ad3\u00ad4\u00ad9\u00ad_\u00ad2\u00ad0\u00ad0\u00ad9\u00ad1\u00ad0\u00ad2\u00ad6\u00ad1\u00ad4\u00ad2\u00ad6\u00ad2\u00ad4\u00ad\/\u00adR\u00ade\u00adn\u00add\u00ade\u00adr\u00ade\u00add\u00ad\/\u00adP\u00adD\u00adF\u00ad\/\u00adW\u00adP\u00adS\u00ad5\u00ad0\u00ad9\u00ad5\u00ad.\u00adp\u00add\u00adf\u00ad (accessed 29 July 2012).\n\nParthasarathy, S. (2007) _Building Genetics Medicine: Breast Cancer, Technology, and the Comparative Politics of Health Care_ , Cambridge, MA: MIT Press.\n\nPeel, J. (2010) _Science and Risk Regulation in International Law_. Cambridge: Cambridge University Press.\n\nPew Center on Global Climate Change (PCGCC) (2011) _Climate Change 101: State Action_. \u00adw\u00adw\u00adw\u00ad.\u00adc\u00ad2\u00ade\u00ads\u00ad.\u00ado\u00adr\u00adg\u00ad\/\u00ads\u00adc\u00adi\u00ade\u00adn\u00adc\u00ade\u00ad-\u00adi\u00adm\u00adp\u00ada\u00adc\u00adt\u00ads\u00ad\/\u00adc\u00adl\u00adi\u00adm\u00ada\u00adt\u00ade\u00ad-\u00adc\u00adh\u00ada\u00adn\u00adg\u00ade\u00ad-\u00ad1\u00ad0\u00ad1\u00ad\/\u00ads\u00adt\u00ada\u00adt\u00ade\u00ads\u00ad\n\nRabe, B. (2004) _Statehouse and Greenhouse: The Emerging Politics of American Climate Change Policy_ , Washington, DC: Brookings Institution Press.\n\nRuhl, J. B. and Salzman, J. (2010) \"Climate change, dead zones, and massive problems in the administrative state: a guide for whittling away,\" _California Law Review_ , 98: 59\u2013120.\n\nSarewitz, D. (2004) \"How science makes environmental controversies worse,\" _Environmental Science & Policy_, 7: 385\u2013403.\n\nSovacool, B. K and Brown, M. A. (2009) \"Scaling the policy response to climate change,\" _Policy and Society_ , 27: 317\u2013328.\n\nThompson, C. (2004) \"Co-producing CITES and the African elephant,\" in S. Jasanoff (ed.), _States of Knowledge: The Co-production of Science and Social Order_ , London: Routledge, pp. 67\u201386.\n\nTimmermans, S. and Epstein, S. (2010) \"A world of standards but not a standard world: toward a sociology of standards and standardization,\" _Annual Review of Sociology_ , 36: 69\u201389.\n\nWickson, F. and Wynne, B. (2012) \"The anglerfish deception.\" _EMBO Reports_ , 13(2): 100\u2013105.\n\nWinickoff, D. E. and Bushey, D. (2010) \"Science and power in global food regulation: the rise of the codex alimentarius,\" _Science, Technology, and Human Values_ , 35: 356\u2013381.\n\nWinickoff, D. E., Jasanoff, S., Busch, L., Grove-White, R. and Wynne, B. 
(2005) \"Adjudicating the GM food wars: science, risk, and democracy in world trade law,\" _Yale Journal of International Law_ , 30: 81\u2013123.\nChapter 11\n\n* * *\n\n# **Un-knowing exposure**\n\nToxic emergency housing, strategic inconclusivity and governance in the US Gulf South\n\n_Nicholas Shapiro_\n\n* * *\n\n> There is not one but many silences, and they are an integral part of the strategies that underlie and permeate discourses.\n> \n> (Michel Foucault 1978: 27)\n\nLate in the summer of 2005, two hurricanes\u2014Katrina, then Rita\u2014tore into the central Gulf Coast of the United States. The damage spanned hundreds of miles of coastline, from East Texas to the Southwestern tip of Alabama. Between 1 and 1.5 million people were displaced. Regionally available rental units, hotels, and motels quickly reached capacity. After months spent shifting between the homes of friends, relatives, and shelters, some 300,000 internally displaced persons were eventually housed in approximately 120,000 travel trailers supplied by the Federal Emergency Management Agency (FEMA). These 250-square foot emergency housing units were deployed both to the driveways of homes-in-ruin, and to federally run trailer parks for those that possessed no real estate of their own.\n\nWithin a fortnight of the inundation of New Orleans and the ravaging of the central Gulf Coast, a single manufacturer, Gulf Stream Inc., had secured over $500 million in contracts to build temporary housing units. New plants were opened to accommodate these rush orders. As one Gulf Stream employee recounted to the House Committee on Oversight and Government, \"There would be supervisors and plant managers walking down the line with bullhorns screaming at us, go faster, you're not doing your job, you don't deserve your paycheck.\" Many laborers on the assembly floor experienced compromised health while working to meet these large expedited orders; symptoms ranged from constant sinus infections and shortness of breath to bloody eyes and noses. When employees complained of the stench and \"stickiness\" of the fiberboard and lauan paneling, they were met with little explanation or reaction from their supervisors other than noting that it came from a \"new supplier\" (Majority Staff Analysis 2008: 10\u201311).\n\nAs displaced Gulf Coast residents began to inhabit these emergency housing units and fall ill, complaints began to trickle back to FEMA and trailer manufactures. \"There is an odor in my trailer that will not go away,\" read a message sent in March 2006 by an inhabitant of a Gulf Stream-made FEMA trailer to the company via a comment feature on their website. The complaint continued, \"It burns my eyes and I am getting headaches every day. PLEASE, PLEASE HELP ME!!\"\n\nConcurrently, other trailer residents began to investigate the chemical constitution of their new homes after experiencing increased asthma attacks, bloody noses, insomnia, chronic fatigue, headaches, diarrhea or dermatological irritation. On his first morning in his new FEMA trailer \u2013 installed in front of what little was left of his Mississippi home \u2013 Paul Stewart woke up to find his pet cockatiel inactive on the floor of his cage, his wife bleeding from her nose, and a burning sensation in his eyes, nose and throat. Paul, a businessman and environmental activist, began researching inexpensive means of assessing the quality of the trailer's indoor air. 
He found a $39 mail-order formaldehyde test kit, which, when run in his home, yielded a reading of 0.22 parts per million (ppm), more than twice the 0.1ppm maximum indoor level recommended by the U.S. Environmental Protection Agency (EPA). In collaboration with another local environmental activist, Becky Gillette, and equipped with funding from the Sierra Club (an American grassroots environmental organization), Stewart assessed the atmospheric formaldehyde content of 69 FEMA trailers in Mississippi. The pair found 61 units, or 88 percent, to contain indoor formaldehyde in concentrations higher than the EPA's 0.1ppm threshold.\n\nThis informal citizen-science assessment drew international media attention and incited both a crisis in post-disaster governance and a multi-million dollar lawsuit. In this chapter I unpack the multiplicity of scientific processes that refute, dilute, and disqualify claims of FEMA trailer-related formaldehyde exposure and ensuing negative health outcomes in both the immediate governmental response and later court proceedings. I document how multiple and divergent technical practices weave together into a larger regime that obfuscates chemical exposure and the illnesses that ensue from it, what I later call \"un-knowing.\" The methods of un-knowing analyzed herein span the realms of both federal controversy governance and the application of science in court. While the actors, instruments and techniques of un-knowing shift between these domains, the outcomes remain the same. Such an analysis brings into relief the negative epistemological space that shapes the contours of both technoscientific and legal facts.\n\n## **Techniques of un-knowing**\n\nHistorian of science Michelle Murphy observed a similar uniformity of toxic injury dismissal in her wide-ranging study on the chemical exposure complaints of female office workers beginning in the 1980s, a phenomenon that came to be referred to as \"sick building syndrome\" (Murphy 2006). Murphy contends that the refutation of toxic harm by the dominant means of scientific evaluation was and still is structured by \"regimes of imperceptibility.\" Regimes of imperceptibility make sense of the patterned outcomes of scientific explanatory systems that time and time again found low-dose chemical exposures to be inconclusively toxic, undeserving of redress, and without an onus for remediation. Yet, attending to the existence of regimes of imperceptibility is not the same as explicating _how_ such conclusions were reached. This chapter thus elaborates Murphy's analysis by enumerating and theorizing the diverse means of un-knowing by which regimes of imperceptibility are achieved. Some facets of toxic inconclusivity are the inevitable affordances of different scientific objects or exposure events, others are byproducts of shifting diagnostic criteria, while many more are strategically sculpted.\n\nScholarship on the latter category, the production of scientific indeterminacy, is rapidly developing within science and technology studies (STS) and affiliated disciplines. These studies largely document practices of avoiding inconvenient knowledge or of contesting the causal relationship between products and adverse health outcomes in the pharmaceutical (Corrigan 2002, Avorn 2006, Abraham and Davis 2006) and tobacco (Proctor 2006, Michaels 2008) industries.
Linsey McGoey asserts that studies on the strategic deployment of ignorance are of increasing urgency in a democratizing and neoliberalizing world, as such manipulations of the known are particularly well suited to contexts where public scrutiny threatens the viability of plain-old state and corporate secrecy (McGoey 2007, cf. Galison 2004).\n\nThe empirical contribution of this chapter to the emergent literature on strategic unknowns is an apprehension of the multiple techniques of inducing ignorance surrounding a single issue. My analysis moves through several scales, arenas of governance, and scientific practices. The various modalities of scientific incomprehension, enumerated below, are part and parcel of the active and dynamic process of moving away from perception, admittance, or substantiation of mass chemical exposure, a discursive current I refer to collectively as \"un-knowing.\"\n\nUn-knowing encapsulates the techniques by which ignorance is deployed, the privileges of secrecy are exercised, and the methods by which the facticity of exposure-related illnesses is called into question. Un-knowing includes both intentional strategic ignorance and unwitting discursive ignorance. As I shift my analytical gaze from harried federal denial of potential chemical health threats to the meticulously choreographed testimony of expert witnesses in the ensuing litigation, the filaments of un-knowing charted in this chapter scale down from the brash and systematic exercise of governmental control to more sophisticated means of disavowing negative health outcomes. What is at stake in these techniques of un-knowing is how we cohabit with low-dose toxicity, how we understand the human health effects of these exposures, and the allocation of responsibility for such impairments.\n\nApprehending the multiple and protean tactics that aggregate into un-knowing connections between chemical exposure and health effects demands a diversity of theoretical approaches. I employ a triad of perspectives in the hopes of amassing a forensic toolbox for the scrutiny of un-knowing. I first make use of the STS focus on the avoidance of inconvenient knowledge, then utilize the Foucauldian analytics of the subjugation of knowledge (Foucault 2003) to understand the manipulation of toxicological thresholds in a federal assessment of chemical levels in the FEMA trailers. Transitioning into the courts, I track the ontological choreography (Cussins 1996) used by expert witnesses in the courtroom to evaporate causal links between formaldehyde exposure and disease outcomes, specifically asthma. The chapter closes with a focus on how the pharmaceuticalization (Biehl 2007, Whitmarsh 2008) of asthma facilitates the exclusion of environmental triggers, such as formaldehyde, from equations of disease causation.
This unintentional and reductionistic facet of biomedicine's definition of asthma is then strategically utilized by a defense expert witness in court.\n\nThis chapter draws on participant-observation in one of the law offices representing the plaintiffs in the FEMA trailer formaldehyde litigation, deposition transcripts, expert affidavits, internal FEMA emails disclosed during a congressional investigation, and interviews in Arkansas, Louisiana, Massachusetts, and Georgia, and is situated within a two-year ethnographic study tracking the social, scientific, and economic lives of the FEMA trailers.\n\n## **Forestalling assessment**\n\nIn early spring of 2006, as many families were still moving into FEMA trailers, the Sierra Club circulated a press release about its grassroots indoor formaldehyde testing campaign, and media attention began to mount. In response, a FEMA spokesperson announced in May, \"FEMA and industry experts are monitoring the small number of cases where odors of formaldehyde have been reported, and we are confident that there is no ongoing risk.\" Behind the serenity of its official statements, FEMA was divided in planning its response to the issue. Since mid-March, internal emails advising an immediate and proactive response had been circulating among FEMA field offices.\n\nOn May 18 a lawsuit, which would eventually swell to over fifty thousand plaintiffs, was filed in the Eastern District of Louisiana. FEMA trial attorney Patrick \"Rick\" Edward Preston was assigned to the case in June and was simultaneously propelled to the forefront of FEMA's scientific inquiry into the formaldehyde issue (Minority Staff Report 2008: 10). One day after he was appointed to the litigation, Preston sent an email vetoing a swift evaluation:\n\n> Do not initiate testing until we give the OK. While I agree that we should conduct testing, we should not do so until we are fully prepared to respond to the results. Once you get results and should they indicate some problem, the clock is ticking on our duty to respond to them.\n\nThe institutional liability of scientifically verifying the hazards of chemical exposure outweighed such knowledge's utility in informing citizens or interdicting the corporeal risks of exposure that continued to accrue with time. This form of forestalling the scientific and systematic corroboration of FEMA trailer-related illnesses was the first and most straightforward instance of the Agency's un-knowing of exposure.\n\nIn June 2006, FEMA, the Agency for Toxic Substances and Disease Registry (ATSDR), the Environmental Protection Agency (EPA), and the Centers for Disease Control and Prevention (CDC) began conducting biweekly interagency conference calls to address escalating public relations, public health, and litigation concerns. Through these calls it was eventually decided in August that the EPA would run tests of _unoccupied_ trailers in September and October to determine which methods of ventilation were most efficacious.
Environmental activist Becky Gillette, a central organizer of the Sierra Club's testing efforts, felt triumphant at garnering federal attention rather than wary of investigations shaped in part by FEMA; as she related to me in her Arkansas home, \"I was happy when I heard the EPA was testing in October, I was glad that the professionals were coming in to verify.\"\n\nGillette's faith began to ebb as the New Year came and went and no results were made public; as she recounted, \"they really dragged their feet and so I wrote a FOIA [Freedom of Information Act letter] in February to force them to release the results.\" Although the EPA had conducted the tests, it had delivered the data unanalyzed to FEMA, which was then supposed to forward them to ATSDR for analysis. In early December of 2006 ATSDR was still awaiting delivery of the test results from Rick Preston's office.\n\nSoon thereafter Preston passed on the data with the stipulation that all ATSDR analyses would remain confidential: \"No information should be released to any third party without my express permission,\" Preston mandated in an email to ATSDR. Duly, the report remained secret until FEMA issued a press release in May of 2007 – three months after ATSDR sent its final report to Preston's office and eight months after the tests were conducted. In addition to avoiding chemical assessment, FEMA also utilized secrecy to the extent permissible by contemporary democratic structures.\n\n### **_Leveling concern_**\n\nSetting the correct exposure threshold for FEMA trailers has been contentious from the start. When ATSDR analyzed the potential hazards posed by the indoor air quality of the FEMA trailers in 2007, it disregarded its own intermediate (14\u2013364 days) and long-term (over 1 year) formaldehyde minimal risk levels of 0.03ppm and 0.008ppm, respectively. Instead, in drafting the report, the authors crafted their own standard, which they dubbed \"the level of concern\" (ATSDR 2007) and which has, according to former ATSDR director Dr. Howard Frumkin, \"little or no operational meaning\" (Minority Staff Analysis 2008: 17). The level of concern was set at 0.3ppm\u2014ten to thirty-seven times the concentration of formaldehyde that the Agency deemed capable of causing adverse health effects. This 0.3ppm threshold dismissed the illnesses that stricter guidelines were intended to prevent, labeling them mere \"nuisance\" symptoms. The Agency's report found that by opening all windows, static vents, and exhaust fan vents, the indoor formaldehyde levels of FEMA trailers would, on average, fall below the \"level of concern\" and, therefore, the trailers should not be expected to produce adverse health effects (ATSDR 2007).\n\nThis tidy conclusion served, temporarily, to quell rising concern about the chemical consequences of inhabiting a FEMA trailer. The study was designed not to assess the chemical levels of FEMA trailers in the conditions in which people actually lived in them, but rather to see if it was possible to bring the indoor formaldehyde levels within a \"safe\" range.
The conclusion of safety was based upon a best-case scenario that was inappropriate for brutal Gulf Coast summers, when trailer windows need to be closed and air conditioning set at full tilt in order to fend off withering heat and humidity.\n\nFrank Renda (a pseudonym), then a senior environmental scientist at ATSDR, was alarmed by the report when it landed on his desk for approval:\n\n> [The consultation] was dated February 1st [2007], I saw it on about the 17th of February; within about two hours I called my senior management and said that we had a problem. The consultation as it was written was misleading, possibly misleading, and a potential public health threat.\n\nAs Renda recalled over lunch in Atlanta, he traced the study design's lineage to the FEMA attorney, Rick Preston, who had requested the ATSDR analysis:\n\n> They had been directed by FEMA first of all not to share it with anyone [and secondly] that they were only to address the shorter term. That was the thing, I didn't have to go into any in-depth review to know that we had missed the boat, we missed the mark, that FEMA had gotten what they had asked for.\n\nRenda's technical misgivings and the patent influence of a lawyer managing FEMA's liability did not move his superiors to amend the consultation. It was sent to FEMA and only resurfaced later in the summer when, in Renda's words, \"things began to break loose.\" In May the report was used in a FEMA press release to justify inaction. Following this, the two junior scientists who penned the consultation were raked over the coals by senior management. ATSDR's senior leadership then turned to Renda and asked him to lead a cross-Agency work group to develop recommendations for remediating the report. Renda's committee produced a revised document that bore increased precautions:\n\n> One of the points stated, \"given the hazards posed by [...] the trailers and the formaldehyde exposures, that efforts should be undertaken in the areas of health education and that appropriate measures to interdict exposures should be implemented.\" And after that there was all of a sudden some sudden displeasure with what I had done. It was removed from my oversight and the executive summary was revised to say, \"analyze\" as opposed to \"implement.\" Paralysis by analysis.\n\nRenda was not only removed from his role supervising the revision of the FEMA trailer consultation but was soon notified of his removal from his position at ATSDR and his placement on a \"performance improvement plan\"\u2014a slow, bureaucratic means of firing.\n\nFurther criticism was leveled against the 0.3ppm \"level of concern\" by Dr. Vincent Garry, a pathologist and reviewer of ATSDR's 1999 toxicological profile on formaldehyde. In a March 2008 letter to Representative Brad Miller, Chairman of the Subcommittee on Investigations and Oversight, Garry pointed out the inapplicability of such a threshold of concern because of the authors' reliance on occupational exposure studies, which are based upon eight hours of exposure per day for five days a week, whereas, for many trailer residents, \"this is a 24 hr per day 7 days per week exposure\" (Garry 2008: 1).
In addition to disjunctures between the exposure durations of the industrial exposure literature drawn upon and the residential case to which it was applied, Garry noted the physiological differences between the largely healthy adult males who composed the occupational studies and the wide-ranging biological makeup of trailer inhabitants, \"For example, children under age 2 have a short trachea and breathe faster than adults (30\u201340 breaths per minute), therefore, process more formaldehyde into the body and are probably less efficient in the metabolism of the chemical\" (ibid.). He concludes his letter by diagnosing a lack of peer review and robust intra-Agency communication as the underlying issues that led to the use of this technical miscalculation as the central benchmark in ATSDR's report.\n\nThese issues, highlighted by Garry and Renda, are not isolated to this particular health consultation but rather are symptoms of a longstanding lack of oversight and shortcomings in study design. Such underestimations of harm are the result of techniques of toxicological investigation employed by ATSDR since its creation.\n\nFrom its early days in the 1980s through to the present, the Agency has been accumulating complaints regarding the irresolution of its toxic exposure assessments. The U.S. General Accounting Office, in a 1991 review, found ATSDR's results to be \"seriously deficient as public health analyses\" (GAO 1991: 2). The Environmental Health Network and the National Toxics Campaign Fund asserted in a 1992 study that ATSDR's studies were \"inconclusive by design,\" elucidating their argument with multiple accounts of toxic exposures in the Gulf Coast that were deemed by ATSDR to pose no health risk (Russell _et al._ 1992). Corroborating this statement, one current ATSDR scientist, bearing witness anonymously, testified that, \"It seems like the goal is to disprove the communities' concerns rather than actually trying to prove exposures\" (ATSDR Staff Report 2009: 2).\n\nThe technical means of disavowing the potential chemical harm of these emergency housing units fall squarely within a domain of inquiry that the French philosopher Michel Foucault referred to as \"subjugated knowledges\"; knowledges \"that have been disqualified as non-conceptual knowledges, as insufficiently elaborated knowledges [...] that are below the required level of erudition or scientificity\" (Foucault 2003: 7). The 0.3ppm level of concern disqualifies the \"nuisance\" effects of formaldehyde exposure as non-conceptual. It submerges the constant headaches, fatigue, rashes, coughing, and diarrhea experienced by FEMA trailer residents under an ostensibly higher level of scientificity, disavowing their reality as a true or significant health consequence of exposure.\n\nThe analytics of power\/ignorance\u2014largely focusing on knowledge avoidance\u2014and subjugated knowledges are complementary, as evinced by the study design of the ATSDR FEMA trailer health consultation. The EPA-collected data on trailers that were ventilated to an extent beyond practical possibility was an apparatus of hazardous knowledge avoidance, while the parameter of analysis\u2014'the level of concern'\u2014was an instrument of subjugation. The threshold and the policy decisions it informed subjected tens, if not hundreds, of thousands of Gulf Coast residents to sustained formaldehyde exposures, for \"at least one year longer than necessary\" (Minority Staff Report 2008: 1).
It further disqualified the existing scientific recommendations for maximum domestic formaldehyde concentrations\u2014including those of ATSDR itself\u2014as overly precautious and attempted to historically mask the hazards manifest in the exposure data it was charged with adjudicating. These technical maneuvers are tempered by the Agency's distinct history of evaporating concerns about the public's cohabitation with toxics, privileging the liabilities of fellow federal agencies and industry over population health.\n\n## **Ontologies of expertise**\n\nThis section moves from an analysis of the often-blunt means of liability avoidance, deferral, or disqualification deployed as techniques of governance, to attend to the more indirect ways through which scientific ambiguity and inconclusivity are leveraged vis-\u00e0-vis the ever-shifting ontology of the \"expert witness.\" This is documented through an exemplary test case, technically known as a \"bellwether trial.\" The trial of focus here revolves around Christopher Cooper, who was eight years old when Katrina hit and the flood protection system of New Orleans failed, destroying his family home.\n\nChristopher was diagnosed with asthma when he was three. Since his diagnosis he would use a corticosteroid \"rescue\" inhaler one or two times a month during (mostly sports-induced) asthma attacks. After the hurricane and six months of displacement in Florida, Christopher, his mother, and his elder sister moved into a FEMA trailer parked in the driveway of their flooded home. Soon thereafter, the frequency of Chris' asthma attacks increased fourfold. This worsening of his asthma is the basis of Chris' legal claim for redress.\n\nDr. Kenneth Smith is the director of the intensive care unit at East Jefferson General Hospital, just west of New Orleans, and a practicing respiratory disease specialist at Jefferson Pulmonary Associates. Smith was hired by the defense counsel of the FEMA trailer formaldehyde litigation to refute claims of both general formaldehyde-asthma causation and the specific causation of the exacerbated asthma of Christopher Cooper. In the following excerpt from his deposition, a plaintiff attorney asked Smith whether a series of statements were true or false\u2014a line of questioning often used to establish the baseline opinions of expert witnesses:\n\nQ. | First statement: There's a cause-and-effect relation between formaldehyde and asthma. \n---|--- \nA. | That's a qualified maybe. \nQ. | Is there a cause-and-effect relationship between formaldehyde and initiation of asthma? \nA. | Possibly. \nQ. | Formaldehyde is an irritant to the respiratory system? \nA. | Possibly. Again, these are all concentration issues. They are potential irritants at the appropriate concentration. We've seen several patients who lived temporarily in FEMA trailers that have asthma and have other illnesses. But that [allergen and toxin exposure] was a very common occurrence in this city after Katrina. [...] \nQ. | Formaldehyde exposure can have harmful effects on developing lungs in children? \nA. | I would say that I'm unaware of any data that would suggest that that is the case. \nQ. | So is that a no? \nA. | That's a qualified no because I'm unaware of it.\n\n(Metairie, Louisiana, 10 July 2009)\n\nThis strained back-and-forth continued for several minutes.
To questions revolving around whether or not the scientific literature demonstrated that low-dose formaldehyde exposure can exacerbate asthma, Smith responded \"maybe,\" \"possibly,\" \"I'm unaware of it.\" He also, to a lesser extent, answered with \"false.\"\n\nThe latter was his ruling on the statement, \"There is a correlation between formaldehyde exposure and the risk of allergic sensitization.\" Allergic sensitization is one of three potential pathophysiological links between formaldehyde exposure and asthma attacks. After Smith's firm negative answer, the questioning attorney pressed further:\n\nQ. | A definitive not true or you haven't seen any studies that suggest that? \n---|--- \nA. | I haven't seen any studies that suggest that, and the studies that I have seen suggest that that's not the case.\n\nIn Smith's list of reliance materials, a technical term for everything that he had consulted in formulating his opinion, he listed only seven articles. Of these articles none were studies that scrutinized allergic sensitization in the presence of formaldehyde, and most had little to no bearing upon the trial's claimed concurrence of FEMA trailer residency and childhood asthma exacerbation. Smith produced a firm account of causal uncertainty through his listless engagement with the scientific literature. His window upon the scientific literature was not just small but also strategically sculpted, as evidenced through his responses to questions from the plaintiff attorney:\n\nQ. | The studies that you reviewed that are in your Reliance File, how did you get those? Did you go out and hit the books or hit the computer or did the lawyers provide them? Did someone else provide them to you? \n---|--- \nA. | Most of the\u2014the documents that I referenced in my opinion paper, except the ones for Up-To-Date, which is an on-line textbook, mega textbook\u2014 \nQ. | Sure. \nA. | \u2014were provided by defense counsel.\n\nThus, Smith did not produce his expert opinion alone. The intentional ignorance of Smith's testimony is shaped by the tandem efforts of the defense attorneys' strategically sanitized reading list and his own restrained research. By way of this combined effort, Smith was able to aver, without fear of perjury, formaldehyde's non-causation of asthma. It was through this calibrated epistemic avoidance that his testimony achieved an ontological fixity of the non-existence of a relationship between formaldehyde and asthma.\n\nSmith's claims were further buttressed by demarcating his expertise: \"I'm not a study wonk,\" he remarked as a plaintiff attorney questioned him about exactly what study it was that refuted sensitization to formaldehyde. Smith continued, when an attorney pressed him on the specifics of a study, \"That's not my deal. I'm a clinician.\" Although Smith has treated at least 50,000 patients with respiratory ailments since he finished his training in 1978, he reported that not a single one of his asthmatic patients has claimed formaldehyde-induced aggravation. He centered his expertise upon his capabilities and knowledge in practice and not his methods of literature inquiry or memory of textual specificities.\n\nContradictorily, however, Smith later stated that his clinical instinct is not what he relies upon. When asked how he formulated his opinions, Smith circled back to the literature, which is in conflict with his clinical common sense:\n\n> Well, it's based on my personal experience, but it's also based on literature and people that have done this.
You know, quite frankly, quite frankly, I\u2014I would have thought intuitively, intuitively that any irritant can\u2014can exacerbate asthma. All right? If you ask me as a clinician, I'd say, yeah, virtually any irritant. So it was real interesting to me as a\u2014as a scientist to\u2014to read literature that's from good places that seem to be good literature where that in fact is not the case [...]\u2014that's somewhat surprising, quite frankly. But I believe it.\n\nSmith did not base his expert opinion of formaldehyde-induced asthma exacerbation on what he saw day in and day out as a clinician, nor did he claim to base his views on the literature alone. Rather, he coordinated his authority between clinic and library\u2014practicing pulmonologist and scientist. This movement \"ontologically choreographed\" (Cussins 1996) Smith's different claims on expertise; i.e., he forged \"a functional zone of compatibility that maintain[ed] referential power between\" his different medical existences (Cussins 1996: 600). Smith's dogged coordination of his expertise deflected further cross-examination from plaintiff attorneys, and his expertise was not called into question by the presiding judge.\n\nSmith's expertise arose from a repeated oscillation between the clinical and the scholarly, yet as he jockeyed back and forth he persistently located the seat of his knowledge in his _other_ medical existence. Behind the ontological choreography of Smith's expertise that sufficed to maintain his legitimacy in court was a deep-seated \"phantomatic\" quality to the ontology of his expertise. In her work on waterborne single-celled microorganisms that were simultaneously the suspected and vindicated culprit of large-scale fish die-offs, Astrid Schrader developed the notion of \"phantomatic ontology.\" While the term was coined to describe scientific objects that \"reshape their configurations in different contexts\" (Schrader 2010: 278), I extend it here to the producers, or at least brokers, of scientific knowledge whose own expertise bears \"the paradoxical existence of a specter as neither being nor non-being\" (ibid.).\n\nSmith coordinated his polyvalent expertise to successfully shield his data avoidance from contempt and vulnerability. Yet, on closer inspection, the ontology of his expertise that was so readily choreographed also revealed the vacancy of his epistemic practices. As Lezaun and Woolgar (2013: 321\u2013323) note, ontology and epistemology are not easily parsed. Both are implicated and entangled within broader practices of un-knowing links between formaldehyde and asthma.\n\n## **Pharmaceutical logics**\n\nIn the move from speaking about general causation to specific causation, Smith repositions himself as a clinician. While he concedes that Christopher Cooper did experience more frequent attacks with greater severity while living in the FEMA travel trailer, he does not \"know specifically what caused it.\" He frays the claimed aetiology of formaldehyde exposure not only with his \"will to ignorance\" (McGoey 2007) but also with his clinical knowledge of asthma provocation. As with his testimony negating the general causality of asthma by formaldehyde, these accounts of specific causality are also produced by drawing upon an emergent pharmacological re-definition of asthma's aetiology.
This section transitions from an analysis of the likely intentional ignorance of expert witnessing to pervasive biomedical logics that are without personal intent yet are also braided into processes of un-knowing, specifically the exclusion of environmental exposures from definitions of asthma.\n\nIn May of 2009, Christopher Cooper and his mother, Alana Alexander, traveled to National Jewish Hospital on a trip paid for by the plaintiff steering committee. The respiratory wing of this hospital is widely regarded as a global leader in pulmonary care. There, Dr. Karen A. Pacheco ran a battery of tests on Christopher: pulmonary function tests, a challenge with a bronchoconstrictive agent, skin-prick allergy testing, and CT scans of his sinuses and chest. In her affidavit, she concludes that it is \"medically plausible that [Christopher's] asthma would have been aggravated by occupancy of a water damaged trailer with formaldehyde off-gassing\" (2009: 7), but given the lack of baseline medical records (many of which were lost to the 2005 flood waters) she cannot attest to the precise degree of aggravation. \"Nonetheless,\" she continues:\n\n> The patient's test results document moderately severe bronchial hyperresponsiveness as well as a component of fixed airflow obstruction. This likely reflects a long history of asthma that has been suboptimally treated. Although it is common to wish to minimize medication use, especially in children, in those with asthma this is not necessarily a good policy. Chronic untreated asthma can lead to airways remodeling and scarring, with fixed airflow obstruction that no longer completely reverses with inhaled bronchodilator.\n> \n> (Pacheco 2009: 7)\n\nPacheco reads Christopher's pulmonary landscape as sculpted by a longstanding lack of pharmaceutical intervention. This explanation of Christopher's current airway topography falls in line with an even longer history of the biomedical imagination's use of asthma pharmaceuticals to subtract environmental factors from equations of asthma causation or exacerbation.\n\nIn his wide-ranging history of asthma and allergies in the US, Gregg Mitman notes that following the conclusion of the Second World War, \"engineering the body in ways that would overcome the peculiarities of place and environmental change became a consumer and corporate dream\" (2007: 211). In other words, bronchodilating inhalers were used to replace considerations of environmental exposures when thinking about asthma or allergies. This shift in the way asthma was conceptualized aligned the interests of patients-cum-consumers and the pharmaceutical industry and obfuscated many of the underlying environmental triggers. Whitmarsh also notes that, \"increasingly in the early-twentieth-century United States, pharmaceutical intervention was used to define [asthma]\" (Whitmarsh 2008: 61). Alleviation of bronchoconstriction by inhaled pharmaceuticals became foundational in defining what respiratory illnesses were identifiable as asthma. It is upon this foundation that lack of treatment becomes more of a cause of asthma than the environment. In a circular fashion, the treatment of asthma has become part of its definition, and a lack of treatment becomes the cause of asthmatic damage to the lungs. The original impetus for respiratory distress and pharmaceutical innovation, the environment, is displaced by its response.
It is under this rubric of pharmaceuticalization (Biehl 2007) that we must understand the assignation of non-environmental causality.\n\nOnly Pacheco's affidavit appears in court, not her person. Her expert report is brought in as an exhibit and then, as an expert witness, Smith translates her medical terminology into more accessible English. At the beginning of his analysis he pauses to \"applaud\" her reading of Christopher's severe bronchial hyperresponsiveness and fixed airflow obstruction as a function of a lack of pharmaceutical intervention rather than the result of a pathophysiology exacerbated by environmental substances. Indeed, Christopher's (and his mother's) responsibility for the changes to his airways is, Smith posits, evinced by \"[t]he fact that he has no alteration in forced expiratory volume in one second that doesn't go normal after the use of a bronchodilator.\" In other words, how much air he can exhale at a baseline is the same as how much air he can exhale immediately after a chemically triggered bronchoconstriction is countered by a fast-acting asthma inhaler. Christopher's test responses place his bronchial sensitivity snugly within the pharmaceuticalized definition of asthma. Smith should, therefore, be able to remove environmental encounters with asthmogens from asthma's causal horizon.\n\nSmith implies below that pathophysiologies are not asthma; they are merely risk factors for asthma. Asthma becomes a lack of human control of bodily risks in Smith's testimony: \"Christopher actually has mild asthma. He has severe bronchial hyperresponsiveness which may put him at risk for more severe asthma or may\u2014if he's not controlled, he may\u2014he could have long-term sequelae.\" Christopher's asthma is itself mild, although his lungs are very reactive. The control necessary to prevent \"more severe asthma\" is a control of Christopher and not a control of the environment. As Smith renders the ontology of asthma, asthma is not in the body or influenced by the body's surroundings but rather comes into being by how one medicates or does not sufficiently medicate the body.\n\nIn his expert testimony, Smith takes the helm of diffuse practices of pharmaceuticalization and aligns their logics with his own argument. Yet, the momentum of such unwieldy processes affords only minimal maneuverability. The primary agency of separating environmental asthma triggers from rubrics of causation lies beyond individual or institutional agency, as outlined in previous sections, and resides in scientific discourse itself. These pharmaceutical logics are the largest-scale and most de-centered aspect of un-knowing documented in this chapter.\n\n## **Conclusion**\n\nIn this chapter I have enumerated a compendium of techniques of un-knowing as they emerged at the interface of post-disaster governance, law, and the health sciences. In doing so I hope to contribute a toolbox of approaches to the growing body of work on intentional ignorance that is developing in STS and render visible the ways in which socio-legal venues and practices mediate the un-knowing of evidence.\n\nThe practices of un-knowing illnesses in the ill-fated emergency housing units of study are mercurial.
They take multiple forms across scale and time, including: secrecy, assessment postponement, scientific disqualification\/knowledge subjugation, knowledge avoidance, and the ontological obfuscation of environmental triggers.\n\nThe above techniques weave together to form the process I have dubbed \"un-knowing.\" Un-knowing answers the technical question \"how did regimes of imperceptibility come to veil the potential harm of formaldehyde in FEMA trailers?\" by documenting the ways in which knowledge can be parried, submerged under thresholds of significance, concealed, and ignored. While the STS literature on \"strategic ignorance\" semantically imputes intentionality, a focus on processes of un-knowing highlights how willful knowledge insufficiencies and unpremeditated discursive blinders, such as pharmaceuticalization, work hand-in-hand and towards similar ends.\n\n## **Acknowledgments**\n\nI would like to thank the editors of this volume and Nadine Levin for their comments on an earlier draft and Fadhila Mazanderani for commenting on an ancestral version of this chapter.\n\n Displaced persons owning or renting apartments and those with homes that did not have sufficient space on their lot for a trailer were also allocated units on federally run group sites.\n\n Over 2 billion dollars in emergency housing contracts would eventually be awarded to over 60 manufacturers.\n\n This digital complaint was forwarded from Scott Pullin to Dan Shea, both employees of Gulf Stream Inc., on March 21, 2006. This and all further cited internal FEMA emails were disclosed during the discovery process of a congressional investigation into the mishandling of the FEMA trailer formaldehyde issue.\n\n The Stewarts had a friend air the trailer out for two weeks prior to their return to Mississippi. The cockatiel made a full recovery after being removed from the trailer.\n\n While this threshold is a useful, albeit conservative, yardstick for indicating the level at which potential negative health effects can ensue, it is not a regulatory benchmark, as domestic air quality is not regulated.\n\n These lay\/activist tests were later largely corroborated; see Maddalena _et al._ 2009.\n\n Nigel Thrift (1996: 97\u2013120) details a \"social unknowing\" which divides unknowns into five groups. His usage aligns with the noun \"unknowingness\" as he is interested in states of unknowing and not my verbal and processual usage of un-knowing.\n\n My one-sided focus on defensive techniques of weakening connections between chemical exposure and illness, as opposed to those intending to strengthen causal links, puts me at risk of accusations of what Woolgar and Pawluch (1985) refer to as \"ontological gerrymandering,\" or the asymmetrical leveling of critique. The unbalanced problematizations of this chapter result from a focus on the triumphant epistemic practices, which in the case at hand happen to be those of disavowal and disqualification. Michelle Murphy (2006) meticulously documented the dominance of these regimes of imperceptibility; my goal here is to understand the processes by which such regimes are constituted.\n\n _Hillard v. United States Government_ , Complaint, Civil Action 06\u20132576, U.S.
District Court, Eastern District of Louisiana, filed on May 18, 2006.\n\n Internal E-mail sent from Patrick Preston to Jill Igert, Jordan Fried, and Kevin Souza on June 15, 2006.\n\n Internal E-mail sent from Patrick Preston to Scott Wright on November 30, 2006.\n\n They did so in spite of EPA, National Cancer Institute and the Occupational Safety and Health Administration assertions that formaldehyde exposure at levels three times lower (0.1ppm) can cause not just irritation but coughing, skin rash, and severe allergic reactions.\n\n The press release is available at: www.fema.gov\/news\/newsrelease.fema?id=36010, issued on May 4th 2007. Release Number: HQ-07\u2013061.\n\n## **References**\n\nAbraham, J. and Davis, C. (2006) \"Testing times: the emergence of the Practolol disaster and its challenge to British drug regulation in the modern period.\" _Social History of Medicine_ , 19 (1): 127\u2013147.\n\nATSDR (2007) \"ATSDR health consultation: revised formaldehyde sampling at FEMA temporary housing units, October 2007.\"\n\nAvorn, J. (2006) \"Dangerous deception\u2014hiding the evidence of adverse drug effects.\" _New England Journal of Medicine_ , 355 (21): 2169\u20132171.\n\nBiehl, J. G. J. (2007) \"Pharmaceuticalization: AIDS treatment and global health politics.\" _Anthropological Quarterly_ , 80 (4): 1083\u20131126.\n\nCA OEHHA (2001) _Prioritization of Toxic Air Contaminants: Formaldehyde_.\n\nCorrigan, O. P. (2002) \"A risky business: the detection of adverse drug reactions in clinical trials and post-marketing exercises.\" _Social Science & Medicine_, 55 (3): 497\u2013507.\n\nCussins, C. (1996) \"Ontological choreography: agency through objectification in infertility clinics.\" _Social Studies of Science_ , 26 (3): 575\u2013610.\n\nFoucault, M. (1978) _The History of Sexuality Volume I: An Introduction_. New York: Pantheon.\n\nFoucault, M. (1980) _Power\/Knowledge: Selected Interviews and Other Writings_ , 1972\u20131977. New York: Pantheon.\n\nFoucault, M. (2003) _\"Society Must Be Defended\": Lectures at the Coll\u00e8ge de France_ , 1975\u20131976. London: Picador.\n\nGAO (1991) _[Title obscured] B-244536_.\n\nGalison, P. (2004) \"Removing knowledge.\" _Critical Inquiry_ , 31 (1).\n\nGarry, V. (2008) \"Formaldehyde and FEMA provided mobile home trailers\" (letter to Congress by scientific expert).\n\nGraeber, D. (2006) \"Beyond power\/knowledge: an exploration of the relation of power, ignorance and stupidity.\" Malinowski Memorial Lecture, London School of Economics.\n\nLezaun, J. and Woolgar, S. (2013) \"The wrong bin bag: a turn to ontology in science and technology studies?\" _Social Studies of Science_ , 43 (3): 321\u2013340.\n\nMaddalena, R., Russell, M., Sullivan, D. P. and Apte, M. G. (2009) \"Formaldehyde and other volatile organic chemical emissions in four FEMA temporary housing units.\" _Environmental Science & Technology_, 43 (15): 5626\u20135632.\n\nMajority Staff Analysis (2008) _Trailer Manufacturers and Elevated Formaldehyde Levels_. Washington, DC: United States House of Representatives, Committee on Oversight and Government Reform.\n\nMathews, A. (2005) \"Power\/knowledge, power\/ignorance: forest fires and the state in Mexico.\" _Human Ecology_ , 33 (6): 795\u2013820.\n\nMcGoey, L.
(2007) \"On the will to ignorance in bureaucracy.\" _Economy and Society_ , 36 (2): 212\u2013235.\n\nMichaels, D. (2008) _Doubt Is Their Product_. Oxford: Oxford University Press.\n\nMinority Staff Report (2008) _Toxic Trailers\u2014Toxic Lethargy: How The Centers for Disease Control and Prevention has Failed to Protect Human Health_. Washington, DC: U.S. House of Representatives.\n\nMitman, G. (2007) _Breathing Space: How Allergies Shape Our Lives and Landscapes, 1st edn_. New Haven, CT: Yale University Press.\n\nMol, A. (1999) \"Ontological politics: a word and some questions,\" in John Law and John Hassard (eds), _Actor Network Theory and After_. Oxford: Blackwell, pp. 74\u201389.\n\nMurphy, M. (2006) _Sick Building Syndrome and the Problem of Uncertainty: Environmental Politics, Technoscience, and Women Workers_ , 1st edn. Durham, NC: Duke University Press.\n\nOffermann, F. J. (2009) _Ventilation and Indoor Air Quality in New Home_. California Energy Commission.\n\nPacheco, K. (2010) _Clinical Summary: Cooper_. Denver, CO: National Jewish Hospital.\n\nProctor, R. N. (2006) '\"Everyone knew but no one had proof: tobacco industry use of medical history expertise in US courts, 1990\u20132002.\"' _Tobacco Control_ , 15 (suppl_4): iv117\u2013iv125.\n\nRussell, D., Lewis, S. and Keating, B. (1992) _Inconclusive by Design: Waste, Fraud and Abuse in Federal Environmental Health Research_. The Environmental Health Network and National Toxics Campaign Fund.\n\nSchrader, A. (2010) \"Responding to Pfiesteria piscicida (the fish killer) phantomatic ontologies, indeterminacy, and responsibility in toxic microbiology.\" _Social Studies of Science_ , 40 (2): 275\u2013306.\n\nTattersfield, A. E., Knox, A. J., Britton, J. R. and Hall, I. P. (2002) \"Asthma.\" _The Lancet_ , 360 (9342): 1313\u20131322.\n\nThompson, C. M. and Roland, C. G. (2008) \"Mechanistic considerations for formaldehyde-induced bronchoconstriction involving S-nitrosoglutathione reductase.\" _Journal of Toxicology and Environmental Health, Part A: Current Issues_ , 71 (3): 244.\n\nThrift, N. J. (1996) _Spatial Formations_. Thousand Oaks, CA: Sage.\n\nWhitmarsh, I. (2008) _Biomedical Ambiguity: Race, Asthma, and the Contested Meaning of Genetic Research in the Caribbean_. Ithaca, NY: Cornell University Press.\nChapter 12\n\n* * *\n\n# **A likely story**\n\nHIV and the definition of disability in UK employment equality law, 1996\u20132005\n\n_Emily Grabham_\n\n* * *\n\n## **Introduction**\n\nIn the late 1990s, HIV-positive people in the UK, buoyed by recent developments in anti-retroviral therapies, found themselves having to argue their own potential decline in health in order to gain access to discrimination claims for HIV-related harassment or dismissal. This chapter analyses how legal and clinical ideas about time contributed to these claims. It focuses on how a test of 'likelihood' of future impairment became a significant legal requirement for such individuals, and how lawyers interacted with clinicians to obtain medical reports about claimants' futures. This analysis is thus about 'prognosis', 'uncertainty' and the (newfound) promise of anti-retrovirals. It is also about the role of non-human actors: HIV tests, T-cell counts, viral loads, medical reports, and anti-retrovirals, and their role in confabulating multiple, often contradictory, legal temporalities which had significant effects in the lives of people living with HIV.\n\nThe conceptual question that this chapter raises, therefore, is about law and time. 
Scholarship in this area continues to grow (Douglas 2011; Keenan 2012; Mawani forthcoming; van Marle 2003). Time is central to Nikolas Luhmann's systems theory (as temporalisation) as much as it animates Derrida's theories of deconstruction and difference (Cornell 1990; Philippopoulos-Mihalopoulos 1999). Within law and society scholarship, social horizons of time influence legal actors' approaches to formal and informal legal regimes (Greenhouse 1996; Engel 1987; Melissaris 2005); multiple and contrasting temporal orders can fail to co-exist as much as they might combine or cohere (Cooper 2013), and legal constructions of temporal thresholds are heavily influenced by technoscientific practices (Beynon-Jones 2012).\n\nContributing to these conversations, my current research investigates how legal actants create and sustain legal temporalities. At a basic level, this line of inquiry has been anticipated within social theory and social history for some time. Our relationship with objects and technologies such as clocks and calendars has created particular segmented understandings of time, which, sociologists and anthropologists remind us, are culturally specific (Zerubavel 1985). My perspective here also draws heavily on work in science and technology studies (STS) which aims to provide accounts of the significance of objects and material artefacts in culture (Henare _et al._ 2007; Pottage and Mundy 2004; Latour 2010). Legal temporalities, from this perspective, are produced through new and changing social relationships, and through mutating interactions between human and non-human actors. In essence, non-human actors, technoscientific practices, objects, and legal technicalities, can be just as important as human actors in creating legal ontologies of time (Beynon-Jones 2012).\n\nThe first section of this chapter situates debates about law and time alongside work in STS which engages with questions of legal processes, networks, paradigms and dilemmas, and with questions of materiality. The following sections, and the conclusion, set out, and reflect upon, research I conducted into how lawyers and activists argued that HIV was a disability for the purposes of discrimination claims in the late 1990s (drawing on semi-structured interviews with activists, lawyers, and policy makers, and documentary and case analysis). These sections explain the context for the claims, and the strategies of activists and advisors in trying to situate their claimants within the relevant legal test. I conclude with some thoughts about how this kind of analysis of time can help us to develop understandings of how temporalities emerge within networked relationships, legal and non-legal.\n\n## **From law and time to material legal temporalities**\n\nBruno Latour argues that temporality is just a way of connecting entities and 'filing them away' (Latour 1993: 75). Crucially, if we change the way this connecting and filing happens, then we get a different temporality. In other words, time is created through exchange and action, not, as modernity would have it, through calendars, flows, and progress (ibid.). As Latour puts it:\n\n> We have never moved either forward or backward. We have always actively sorted out elements belonging to different times. We can still sort. 
_It is the sorting that makes the times, not the times that make the sorting_.\n> \n> (Latour 1993: 76)\n\nIt is to be expected that the sorting to which Latour refers in this argument is embedded within what has been termed 'material worlds' (Faulkner _et al._ 2012). In other words, the relationships between human and non-human actors create and sustain the temporalities that are seen, in some quarters, as 'natural' or 'social'. Yet as actor-network theory (ANT) and other scholarly approaches have demonstrated, tracing the role of the material in socio-technical or socio-legal relationships is not at all straightforward, especially in the legal field (Pottage 2012). What is a 'thing', after all? Alain Pottage and Martha Mundy have traced the distinctions between persons and things that legal concepts, themselves, have initiated (Pottage and Mundy 2004). The distinction between persons ( _personae_ ) and things ( _res_ ) that was central to Roman law remains influential within codified, as well as common law, traditions, and even shapes current political theory (Pottage, 2004, drawing on the work of Yan Thomas). For this reason, it is also important, when thinking about the role of things in creating legal concepts, to remember the role of law in classifying phenomena as _res_ or _personae_ in the first place.\n\nAddressing the study of law, science, and technology in particular, Alex Faulkner and colleagues use the ANT-inspired concept of 'socio-materiality' to describe the interaction of human and non-human actors within performative legal and scientific networks (Faulkner _et al._ 2012). Drawing on the work of Michel Callon, they describe the process through which theories and material worlds, and a range of actors, are co-ordinated and co-constituted as 'socio-technical agencement', a type of performative material agency (Faulkner _et al._ 2012: 10). For these authors, law is part of the material world, which can support, or be prepared for, particular developments in science and technology. However, within ANT-inspired accounts, law is often not visible, because the analytic intention is to avoid bestowing upon processes, actors, or objects particular categories such as 'social' and 'legal' in the first place (Rooke _et al._ 2012). As Catriona Rooke and colleagues put it:\n\n> The material and its link to regulation are conceived of in a more interdependent way where law is both the result of socio-technical assemblages and becomes part of specific materials, so the 'things' followed during analysis are themselves shaped and defined by the legal and regulatory frameworks that they carry.\n> \n> (Rooke _et al._ 2012: 43)\n\nFollowing 'legal things' is therefore part of an approach to social relations that requires being able to trace when objects enter, leave, or are shaped by heterogeneous networks and encourages the researcher to look past what has already been stabilised as the subject of legal regulation.\n\nHow much can such a view help us to understand the creation of legal temporalities? One example can be found in Latour's own work. In his ethnography of the French administrative court, the Conseil d'\u00c9tat, Latour argues that legal processes within the Conseil produce a sort of homeostasis, a sense of everything being covered 'completely and seamlessly', unlike scientific processes, which leave voids for future researchers to investigate (Latour 2004: 114). 
Homeostasis evokes law's ability, in this context, to maintain a constant through the adjustment of other features of a system. The temporal horizon of homeostasis does particular work within Latour's analysis. Legal processes and individual conseillers labour under an obligation to ensure legal predictability (or s\u00e9curit\u00e9 juridique). However, paradoxically, in Latour's analysis, legal predictability happens through the fabrication of doubt and distancing \u2013 in other words, through the strategic _avoidance_ of certainty:\n\n> At the Conseil d'\u00c9tat, every effort is made to sustain doubt for as long as possible, but when a decision is reached it is made once and for all.\n> \n> (Latour 2004: 94)\n\nHaving suspended certainty, and indeed actively fabricated doubt, a curious completeness then takes over law: a type of all-encompassing, self-adjusting temporality (Latour 2004: 113). From the present (post-judgment) vantage point, the legal principle confirmed through proceedings at the Conseil is as it has always been, despite the fact that the entire process has been pursued by means of a graduated suspension of certainty.\n\nAn alternative analysis of legal temporalities can be found in Alain Pottage's work on genetic patrimony and human rights. Pottage focuses on a petition created by two parliamentary members of the Council of Europe in May 2000, which protested against the European Union's Directive on the Legal Protection of Biotechnological Inventions. The protestors argued that this Directive authorised patenting of human gene sequences and that instead, it should be recognised that 'the human genome is the common patrimony of humanity', hence no patents on the genome should be granted (Pottage 2004: 249). According to Pottage, the phrase 'common patrimony of humanity', which was mobilised within the petition, uses a particular type of 'forever' time, associated with the socio-legal concept of inheritance, to establish the inviolability of the genome. Thus, the use of the concept of genetic patrimony, or inheritance, has specific temporal effects:\n\n> [It] produces a horizon of inheritance that resembles an aevum of mediaeval historiography. That is, it unfolds as a sort of permanent present, a period in which all events occur within the same existential frame or medium; all events have, so to speak the same essence.\n> \n> (Pottage 2004: 264)\n\nWithin accounts of legal temporalities such as these, time is not, or not merely, an intrinsic aspect of the social field, but instead it is created through social relationships, including legal processes, in which the role of non-human actors is key. For these, and many other reasons, deploying an analytics of materiality when tracing legal temporalities is a complex process, involving layers of excavation that I can only begin to attempt here. At the very least, however, analysing material worlds in relation to law and schemes of legality can and should involve understanding the temporalities inaugurated and sustained by human and material actors within legal networks (Beynon-Jones 2012).
The following sections constitute one attempt to do so, tracing the multi-layered temporalities of HIV found in the field of disability law activism in the late 1990s and early 2000s when, as treatments improved, people's experiences of HIV changed radically, and new legal routes became available to HIV-positive people.\n\n## **1980\u20131996: legal-temporal ontologies of crisis**\n\nAs many will be aware, the new 'crisis' of what was soon to be termed AIDS first emerged as a public health problem in countries of the global North, including the United States and United Kingdom, in the early 1980s. The early years of understanding and isolating the virus, establishing the link between HIV and AIDS, and scoping testing and treatments for HIV, were characterized by a number of intense credibility struggles, as scientific teams advanced different explanations for the causes of this new disease, and clinicians and government agencies advanced a wide range of often conflicting responses (Patton 1990; Epstein 2009). Activists were key to the development of new knowledges, treatments, and new relationships between clinicians and health populations in the context of a political climate all too hospitable to homophobic and racist fears about the new disease (Cohen 1999; Epstein 2009; Patton 1990).\n\nBy the mid-1990s, a causal link between HIV and AIDS had been settled. HIV testing had become available for many of those who wanted it \u2013 and indeed often imposed on many who did not \u2013 and new treatments had become available. This chapter picks up at the point at which intense struggles of a different kind began, this time within networks of lawyers and activists in the UK concerned with ensuring employment equality for people with disabilities, amongst whom were counted those with diagnoses of HIV and AIDS. After a spirited campaign by disability rights activists, the UK's Disability Discrimination Act 1995 (DDA) came into force in 1996. For the first time, people who experienced disability discrimination in the workplace had a rights-based means of redress. For people with HIV \u2013 who often experienced racist and\/or homophobic harassment, dismissal, and 'outing' of various kinds associated with an HIV diagnosis \u2013 making a claim was often a significant route to badly needed financial compensation.\n\nHowever, these claims remained very difficult to make. Much of this was to do with the kinds of future that discrimination law could and could not imagine. In particular, a person could only go on to argue a discrimination claim if they had first established that their HIV constituted a disability for the purposes of the Act. Making this argument was fundamentally about time: it was about whether and how claimants, clinicians, lawyers, and Employment Tribunal members could find each other's understandings of the future effects of HIV realistic or credible within the relevant legal paradigms.\n\nJohn Lynch argues that 'AIDS was from the beginning of its discursive \"naming\" defined in temporal terms' (Lynch 2000: 247). Dominant discourses surrounding HIV\/AIDS by the mid-1990s had been temporally framed as both epidemiological crisis and individual moral failure. Indeed, AIDS in the UK during the 1980s and 1990s had its own distinct social chronology, which can be schematised as a 'dawning crisis', proceeding with a homophobic moral panic and then plateauing into crisis management (Lynch 2000: 249). 
The period before the DDA could be characterized as what Lynch understands as dawning crisis, or what Cindy Patton would term 'AIDS as emergency', which had its own effects in creating and sustaining a particular system of social control (Patton 1990, 107). Within this context, HIV activists concerned with the welfare of themselves and their friends, lovers, and colleagues had started out in the 1980s dealing with legal issues arising from the shockingly rapid decline in health that people with AIDS experienced before viable HIV tests and treatments became available. The context, as Douglas Crimp puts it, was one in which pragmatic hope was a necessity:\n\n> In practice, convincing the public and ourselves that people with AIDS can participate in life, even prosper, has been an uphill struggle. The facts are all too clear: people suffer terribly, and we're dying by the thousands. But I'm not talking about not dying; I don't deny the reality that sooner or later most people with AIDS die; it would be foolish not to address death. I'm talking about the business of living, or making choices, of _not_ being passive, helpless, dependent, the storm-tossed object of the ministrations of the kindly well.\n> \n> (Crimp and Bersani 1988: 145)\n\nEven with this reinforced hope, many AIDS organisations worked hard to get access to benefits and housing for people with significant health impediments on the assumption that they were dealing with a crisis and that any individual arrangements they made for people with AIDS (PWAs) were necessarily temporary, because \u2013 based on contemporary medical knowledge \u2013 PWAs were possibly going to die within a matter of months and would certainly live no longer than a few years. This paradigm also meant that employment-related activism worked with a short horizon, that of ensuring continued employment for PWAs if that was possible, and\/or extracting compensation (through settlements) for AIDS-related discrimination in order to provide badly needed financial support:\n\n> If you think, you've just had your diagnosis, you've got a CD4 count which is in double figures if you are lucky. You are already getting symptomatic illnesses. You've probably got diagnosed after your third bout of thrush, because nobody knew what it was the first couple of times.... An awful lot of people thought the gain was not worth a candle to challenge their employers. So in those cases, we would just try and extract maximum padding for them as it were.\n> \n> (health activist\/policy maker: 7)\n\nEmployment advice was largely voluntary and activist-based, aimed at dealing with pressing and often quite obvious health needs, and commonly involved non-legally qualified representation.\n\nAfter the very early stages of the crisis, as the aetiological link between HIV and AIDS was stabilised (Epstein 2009) and HIV tests eventually became available, this temporal paradigm of AIDS, the paradigm of imminent death, with its associated forms of representation and activism, began to shift. By no means (yet) considered a chronic condition, people were nevertheless being diagnosed earlier and treatments were coming online. 
Although these remained associated with hugely problematic side-effects \u2013 see the debates around AZT (Epstein 2009; Crimp and Bersani 1988) \u2013 and could be difficult to reach depending on where a person lived \u2013 see debates about aerosolized pentamidine in Canada (Silversides 2003) \u2013 people were surviving longer after diagnosis.\n\nLynch has analysed public health campaigns introduced in the UK in the 1990s, the period, incidentally, during which the push for disability rights legislation was also in full force. As he demonstrates, one of the most important medico-cultural narratives about HIV focused on a new temporal sequence:\n\n> the first (stage) was the body's reaction to the initial HIV infection; followed by a latent period before the development of symptoms of the 'full-blown' condition; and finally inevitable death.\n> \n> (Lynch 2000: 250)\n\nThese campaigns introduced into public discourse the infected but asymptomatic person \u2013 who could be straight or queer \u2013 who unknowingly passed on the virus to others (Lynch 2000: 254). This was a different temporal frame from the suffering and terminally ill AIDS patient. Such campaigns relied on a representation of HIV-affected people as appearing 'normal' and potentially infectious. In one advert published in the mainstream press, a question was posed and answered, in white, on a black background: 'What is the difference between HIV and AIDS?' 'Time' (Lynch 2000: 255). As Lynch points out, this advert advances an essentially closed, monologic understanding of the temporal progression of HIV\/AIDS. The advert looks like a tombstone and effectively translates individual experiences of infection into what Lynch calls 'an abstracted realm of supra-human fate' (Lynch 2000: 256). Not only that, but within these narratives the Person With AIDS is understood through a fear-inducing matrix of 'past-transgression, present-agent of infection, future-death (the absence of future)' (Lynch 2000: 257).\n\nPublic health campaigns thus promoted fear of seemingly healthy but infectious people transmitting the virus. For their part, media outlets spread rumours about AIDS being more prevalent amongst Haitian populations in the United States or amongst African populations in London (Cohen 1999). It is not surprising, therefore, that many HIV-related employment discrimination issues were, as activists have put it, based on 'pure stigma':\n\n> It was sort of pure stigma related cases. That sort of fed into a lot of issues around whether, when to disclose your HIV status. If, for example, you were starting a new job and you knew that you would have to take time off for treatment or for hospital appointments or it would have some effect in the workplace should you disclose and take the risk the job offer would be withdrawn or you would be sacked.\n> \n> (legal activist: 2)\n\nAs treatments improved, disclosure issues changed, so that even though racialised and homophobic constructions of HIV still prevailed within popular culture, news stories, and employer attitudes, people with HIV had more control over whether and how to disclose to employers in the first place.
PWAs with obvious symptoms still encountered outright hostility at work and immediate health crises, but more and more people with HIV diagnoses were facing difficult questions about whether to disclose to an employer in the first place, or how to deal with stigma relating to being 'out' as HIV-positive.\n\n## **1996\u20132005: the significance of likelihood**\n\nBy the time the DDA came into force in 1996, therefore, one distinct social and clinical shift had already happened in the lives of people concerned with the epidemic. This was the shift from very late diagnoses of what we now understand to be AIDS, in other words terminal diagnoses, to diagnoses of HIV infection (regarded as 'pre-AIDS'), and the beginnings of treatments which could delay progression from HIV to AIDS. In the midst of this, the DDA also promised a route for people with HIV\/AIDS to obtain some form of legal redress for HIV discrimination in the workplace. Yet, even with HIV recognised within the disability paradigm, between 1996 and 2005, the fact of a person's HIV diagnosis did not mean they were automatically considered to be disabled for the purposes of the DDA. In order to make a claim, a person had to prove that they had a disability for the purposes of the legislation. They had to show under section 1 of the DDA that they had a 'physical or mental impairment which ha[d] a substantial and long-term adverse effect on [their] ability to carry out normal day-to-day activities'. Often this issue would be determined by an Employment Tribunal at a preliminary hearing.\n\nOne lawyer spoke of the problems associated with the disability test more generally, which related to the trauma of having to 'prove' one's disability legally in this way:\n\n> When you are talking about someone's disability you are going to have to talk about what happens at home. What happens when they go to the doctor. It's a much more personal investigation of their circumstances. That is very difficult, often, for people to talk about, particularly in an open context, and that is what you are having to ask them to do. For many people, they didn't want to do that. And so they might pull out at that point.\n> \n> (lawyer\/policy maker: 5)\n\nGiven the social context, it might not have appeared too difficult to argue that HIV would have such an effect. As we have already seen, it was in very recent memory that people would be diagnosed with AIDS rather than HIV, and in any case it was a matter of 'common sense' even to legal activists that HIV should be considered to be a disability. Yet despite all of this, people with HIV had two main problems fitting within the DDA's specific definition. The first was that if they had no symptoms, they would be unable to show an 'impairment' and would therefore not be covered at all by the DDA. This problem had been foreseen in the parliamentary debates preceding the enactment of the DDA in 1995 as affecting the majority of people with HIV. It could usually be countered by referring to even very minor symptoms associated with a person's sero-conversion illness (which people at the time often thought was a very bad cold).
The second problem was that even if they had experienced symptoms, they then had to fit these symptoms within the provisions of the DDA that covered 'progressive conditions'.\n\nThrough case-law that developed over the period between 1996 and 2005, it became clear that people with HIV had to show that their condition was 'likely' in the future to have a substantial adverse effect on their ability to carry out day-to-day activities. 'Likelihood' meant more probable than not, importing, from the start, a medico-legal concept of the future into the definition of disability. This type of 'likelihood' was difficult to demonstrate, often requiring complex arguments in preliminary hearings and the commissioning of joint medical reports. Lawyers developed a field of expertise in creating draft questions for clinicians that would elicit the desired response, and they also spent a considerable amount of effort arguing, between parties, about which questions should be posed.\n\nBefore they were even written, expert medical reports, as material legal actors, had significant effects. Structured through negotiations between legal parties over which expert, how to instruct, and how to understand the features and prognosis of HIV generally and the treatments available, each report then had to address the question of _this_ HIV in _this_ claimant, as we will see below. Sometimes, representatives for employers would attempt to lead the clinical expert into making a statement about the legal status of the person's disability (or lack of disability), a question that was understood on normal legal principles to be firmly a matter for the Employment Tribunal itself. Claimants' representatives would resist these attempts, not always successfully:\n\n> there would be a wide range of good and bad chair people in [a] tribunal. Some would see that for what it was. Some would say, \"well, that's settled then isn't it and this expert said that this person doesn't meet the definition of disability. That is the end of the case.\" That was difficult.\n> \n> (legal activist: 9)\n\nClinical expertise, imported into legal proceedings for the purpose of providing evidence relating to a person's disability status, could sometimes have the effect of eliding constructed boundaries separating legal 'truth' from medical 'evidence' (Fassin and d'Halluin 2005), and claimants' representatives were on guard to ensure that this did not happen. In this way, through medical reports, and the tensions and discussions that produced and surrounded them, fields of clinical and legal knowledge, and the imagined overlap between them, were produced and fortified.\n\n## **Prognosis and uncertainty: contested technical imaginaries of time**\n\nAs we have seen, a key factor debated in medical reports about the question of likelihood was the individual's prognosis. Generally, prognosis refers to the mixture of statistics, survival rates, and other factors that go into predicting the course and outcome of a given condition. As a socio-technical invention, prognosis creates fields of power through the articulation of probabilities based on statistical measurements, clinical conjecture, and understandings of the life-course. As such, it interacts with other socio-temporal concepts such as 'decline', 'crisis', or 'recovery'. 
As Sara Lochlann Jain has argued, prognosis has a double effect, 'causing and evacuating the terror of a potential future' through a logic which is at once incessantly future-oriented and abstracted (Jain 2007: 81).\n\nWhen mobilised in legal settings, prognosis materialises diseases such as cancer, and in the present case, HIV. Yet in Jain's own study, prognosis was not necessarily enough to establish a legal claim. Cases relating to 'lost chances', where they were allowed, were only successful if the claimant suffered more than a 50 per cent loss of survival, bringing a claimant from more than a 50 per cent survival chance to less than 50 per cent. The staging indices usually used in prognosis (e.g., the size of a tumour) often did not provide enough specific information to allow this kind of determination (Jain 2007: 85\u201388).\n\nLegal activists in the realm of HIV discrimination were similarly beset with temporo-legal problems, oriented around a central concept of prognosis. In the context of clinical decisions around the treatment of HIV, prognosis is normally influenced by where a patient's test results fall on a normative scale, determined with reference to viral load and T-cell count (Persson 2004: 50). Viral load refers to the extent of HIV infection in the body, and is hence a clinical, and also cultural, measurement of disease. Clinicians normally view an undetectable viral load as indicating therapeutic success and 'exponential viral replication' as a decline in health (Persson 2004: 51). T-cell (or CD4) counts measure specialised immune cells that are targeted by HIV, so they indicate retained functionality in the immune system. In this way, technoscientific practices of tracing, counting, and interpreting cells and viruses have a direct effect on the health constructs that are mobilised within legal spheres.\n\nFurthermore, in the late 1990s and early 2000s, the disability of HIV-positive claimants was legally established with reference to an individual's own prognosis, not an understanding of what this condition might usually do in a majority of cases. In other words, what activists reported was an articulation of scale and individuation, which prioritized a legal subject's personalized prognosis and lifespan over a population-level understanding of the progression of disease based on statistical likelihood:\n\n> [T]he definition of likely was more probably than not. That gave rise to real problems... Even if you can show that more than 50% of people with [HIV] go on to develop a substantial and long-term adverse effect from it... that didn't necessarily help you, because the tribunals and the courts were saying, \"well, we don't just want to know whether statistically that condition leads to that, is likely to lead to that. We want to know in the case of your client, does your client fall into the 51% who will go on to have substantial and long term effects or are they in the 49%?\"\n> \n> (legal activist: 6\u20137)\n\nLikelihood, 'more probably than not', was a universally applied legal invention, established through the interaction of definitions within the DDA, case-law, and clinical decisions, which instantiated the potentially disabled legal subject as _temporally_ distinct and distinguishable on the basis of his\/her prognosis. This temporalised individuation signalled a relationship, of sorts, between legal and clinical knowledges.\n\nOne way of understanding this relationship is through Catherine Waldby's concept of the 'shared technical imaginary'.
Waldby analyses the new relationships of equivalence that emerged between quite distinct domains of biomedicine in the mapping of AIDS, and argues that one feature of this coming together, or equivalence, was a process of schematisation:\n\n> In the case of AIDS we have on the one hand a concern about T and B cells, lymph, tissues and glycoproteins, and on the other patterns of sexual practice and alliance. These diverse phenomena can be made to coalesce precisely through their transformation into a shared technical imaginary which is characterized by a maximum of schematisation. Wherever possible, pathological processes in both bodies and the population are quantified and mathematicised, rendered into units which can be graphed, plotted, counted. Such mathematicisation places the divergent spheres of clinical and epidemiological medicine into modular and compatible relations with one another, allowing a relationship to be expressed between each of the phenomena so rendered.\n> \n> (Waldby 2004: 97)\n\nIn the context of legal arguments about HIV as a disability in the late 1990s, prognosis functioned across legal and clinical fields, distributing knowledges about T-cells, viral load, decline, or recovery, with varying effects. This process relied on a schematic approach to the constructed, internalized, 'truths' of the body, creating a shared, or at least co-articulated, technical imaginary of time, linking the legal concept of likelihood with the realm of projected clinical outcomes. The resulting technical imaginary of time mobilised T-cell counts, viral load measurements, and pharmaceutical innovations as much as it drew on claimants' own (temporal) narratives of health and (dis)ability.\n\nThe temporalised effects of cells (pace Waldby and Cooper 2010), their functions and measurements, were quite distinct within the co-articulated technical imaginary of HIV time in the late 1990s. As non-human actors, viral load measurements and T-cell counts created a specific genre of HIV temporality, based on viral and cellular replication, through which the generative function of T-cells or the virus (respectively) was instead mapped back onto the future of the body-as-organism\/legal claimant, as well as the potential success of the case. On this logic, crudely expressed, as the number of T-cells grew, so the potential lifetime attributable to the human legal actor expanded, meaning that both a growing number and a high number of cells lengthened and improved a person's prognosis and made a finding of future impairment, and hence disability, less likely. Similarly, on this logic, a growing or high viral load, indicating exponential generation at the level of the virus, reduced and worsened prognosis, making a legal finding of disability more possible.\n\nIn this way, cellular processes, and the actions of clinicians in measuring them, influenced legal determinations of prognosis, evidencing a level of temporal co-articulation that accords with what Waldby identifies in her analysis of scientific knowledges relating to HIV. Indeed, Waldby perceives this type of co-articulation as being very smooth, a 'seamless reciprocity' between knowledge relating to individual symptoms and knowledge relating to epidemiology (Waldby 2004: 97\u201398).
A degree of fit and exchange was established and maintained between clinical and legal understandings of HIV temporalities through various means: legal activists acquainted themselves with key medical terms and concepts in order to educate Employment Tribunals making decisions on the question of someone's disability; medical experts, carefully chosen and carefully instructed by lawyers, provided what they thought to be the most rigorous and appropriate assessments of a claimant's current condition and future medical outlook within the legal guidelines they had been given.\n\nYet, the machinery powering this 'seamless reciprocity' also broke down at times. Despite the apparent reliability of their own T-cell and viral load measurements for treating patients and\/or undertaking medical research, when faced with the task of making a prediction for legal purposes, clinical experts often did not give the level of specificity that legal representatives wanted. Legal activists interpreted this as signifying an uncomfortable relationship with prediction, if not a stubborn unwillingness to give vital information about a claimant's future health:\n\n> It seems to me that the medical experts are always reluctant to really go out on a limb and give anything other than a vague, well, this could or this might happen.\n> \n> (legal activist: 13)\n\nThe reason for this, as a respondent from a slightly different health activist field speculated, could have been because the expert's role as a clinician involved communicating prognosis in terms of life-course to patients:\n\n> This was a thing I think that came up particularly in the context of cancer. I suspect it also came up in the context of HIV, although we didn't see as much of it at the [ _name of organisation_ ]. With the reluctance of physicians to say that it was more likely than not that it would progress. Certainly in the context of cancer, they just didn't want to say it. They didn't want to say that it's likely this is going to come back and it's going to get much worse. They didn't want their patients to think that that's what was going to happen, yet that was [what] they had to say in order for them to fall within the definition.\n> \n> (lawyer\/policy maker: 8)\n\nClinical experts resorted to what they thought was a more palatable or even more concrete and defensible temporal outlook: uncertainty. Professionally, what Fred Davis would term 'functional uncertainty' would, consciously or not, have served to deflect difficult discussions and emotional encounters between clinician and patient around prognosis (Davis 1960), and would not have equated directly with clinical uncertainty as such. Disclosing uncertainty has varying effects and motivations amongst clinicians, signalling rigour and discretion in some circumstances, but sometimes also being feared as an admission of lack of knowledge (Gerrity _et al._ 1992), and any of these factors might have been at play in individual cases. However, as Tiago Moreira and colleagues argue, uncertainty can also be understood in biomedical terms as a 'discursive, interactive accomplishment' (Moreira _et al._ 2009: 667), constituent of \u2013 instead of undermining or reversing \u2013 biomedical knowledge production.\n\nGiven the potentially foundational effects of uncertainty in generating scientific knowledges, it becomes all the more difficult to imagine an encounter between legal and clinical fields about HIV prognosis in which likelihood can be articulated as entirely distinct from uncertainty.
Yet a 'centrifugal pull' of temporal orders was in motion in this period (Cooper 2013), through which the construct of biomedical uncertainty effectively destabilised the legal rationalities of prediction needed to establish a claimant's HIV as a disability.\n\n## **Conclusion**\n\nIn this chapter, I have attempted to clear some conceptual space for thinking about how legal networks create plural ontologies of time. Multiple temporalities were at play in the case study: the individual lifespan; the recent temporal 'progression' of the AIDS and HIV public health crises; experiences of PWAs; various contested paradigms of the 'future' and 'prognosis'; temporalities of aetiology, diagnosis, the life-saving or life-preserving effects of anti-retroviral treatments, and disease itself; and temporalities of litigation and legal subjectivity. These temporalities, confabulated through the coming together (or not) of legal and clinical knowledges, objects, cells, tests, medical reports, HIV treatments, and legal resources, constituted what I term a 'likely story': the story of what happened in the late 1990s when 'uncertainty' about HIV prognosis met 'likelihood', the legal test for future impairment, and the gateway to beginning a disability discrimination claim.\n\nBy the mid-2000s, legally constructed temporalities of HIV, in perverse ways, mirrored the newfound (yet cautious) hope that positive people were experiencing with the development of 'combination therapy' or highly active anti-retroviral therapy (HAART), where these therapies were available. HAART suppressed the virus and also preserved the immune system. Similarly, within the logic of 'likely' future impairment, it was possible for HIV not to be debilitating, for legal purposes. Indeed, the problem was that HIV was not a disability unless proven so, and prognosis, despite the concrete nature of T-cell and viral load measurements, was often scientifically uncertain enough to disqualify people from legal claims.\n\nWhat is interesting, for me, is quite how much these ontologies of HIV \u2013 created through pharmaceutical innovations, new treatment regimes, medical reports, and legal processes \u2013 varied from the temporalities of HIV being produced and circulated in health education networks in the UK around the same period. In health education networks, as Lynch shows, the disease was debilitating: it was present without visible or obvious effects and prognosis was certain death. Entirely different temporal ontologies were deployed in one country for the same disease. If anything, this is evidence that our understandings of time cannot escape the networked interactions of human and non-human actors that create temporalities rather than inhabiting them. Tracing and diagnosing such temporalities can contribute to our understanding, more broadly, of the variegated relationships, networks, and worlds conjured by encounters within law, science, and technology.\n\n## **Acknowledgements**\n\nI would like to thank Emilie Cloatre and Martyn Pickersgill for their wonderful editorial advice and suggestions. This research was funded by the UK Socio-Legal Studies Association and a Faculty Grant from the University of Kent, and was conducted whilst on sabbatical at the Institute for Feminist Legal Studies, Osgoode Hall Law School.
Many thanks also to the following people for comments, engagement, and inspiration: contributors to, and the audience at, the 'Temporalising Law' stream at the Canadian Law and Society conference _Law on the Edge_ in July 2013; and Davina Cooper, Mariana Valverde, Nick Piska, Maria Drakopoulou, Kate Bedford, Mairead Enright, Sonia Lawrence, and Donatella Alessandrini. The usual disclaimers apply.\n\n## **References**\n\nBeynon-Jones, S. M. (2012) 'Timing is everything: the demarcation of \"later\" abortions in Scotland'. _Social Studies of Science_ , 42(1): 53\u201374.\n\nCohen, C. J. (1999) _The Boundaries of Blackness: AIDS and the Breakdown of Black Politics_. Chicago, IL: University of Chicago Press.\n\nCooper, D. (2013) 'Time against time: normative temporalities and the failure of community labour in local exchange trading schemes'. _Time & Society_, 22(1): 31\u201354.\n\nCornell, D. (1990) 'Time, deconstruction, and the challenge to legal positivism: the call for judicial responsibility'. _Yale Journal of Law & the Humanities_, 2: 267.\n\nCrimp, D. and Bersani, L. (1988) _AIDS: Cultural Analysis, Cultural Activism_. Cambridge, MA: MIT Press.\n\nDavis, F. (1960) 'Uncertainty in medical prognosis, clinical and functional'. _American Journal of Sociology_ , 66(1): 41\u201347.\n\nDouglas, S. (2011) 'Between constitutional mo(nu)ments: memorialising past, present and future at the district six museum and constitution hill'. _Law and Critique_ , 22(2): 171\u2013187.\n\nEngel, D. M. (1987) 'Law, time, and community'. _Law and Society Review_ , 21: 605.\n\nEpstein, S. (2009) _Impure Science: AIDS, Activism, and the Politics of Knowledge_. Berkeley, CA: University of California Press.\n\nFassin, D. and d'Halluin, E. (2005) 'The truth from the body: medical certificates as ultimate evidence for asylum seekers'. _American Anthropologist_ , 107(4): 597\u2013608.\n\nFaulkner, A., Lange, B. and Lawless, C. (2012) 'Introduction: material worlds: intersections of law, science, technology, and society'. _Journal of Law and Society_ , 39(1): 1\u201319.\n\nGerrity, M. S., Earp, J. A. L., DeVellis, R. F. and Light, D. W. (1992) 'Uncertainty and professional work: perceptions of physicians in clinical practice'. _American Journal of Sociology_ , 97(4): 1022\u20131051.\n\nGreenhouse, C. J. (1996) _A Moment's Notice: Time Politics Across Cultures_. Ithaca, NY: Cornell University Press.\n\nHenare, A. J. M., Holbraad, M. and Wastell, S. (eds) (2007) _Thinking Through Things: Theorising Artefacts Ethnographically_. London: Routledge.\n\nJain, S. L. (2007) 'Living in prognosis: toward an elegiac politics'. _Representations_ , 98(1): 77\u201392.\n\nKeenan, S. (2012) 'Subversive property: law and the production of spaces of belonging'. Unpublished.\n\nLatour, B. (1993) _We Have Never Been Modern_. Translated by Catherine Porter. Cambridge, MA: Harvard University Press.\n\n\u2014\u2014 (2004) 'Scientific objects and legal objectivity', in _Law, Anthropology, and the Constitution of the Social_ , Cambridge Studies in Law and Society. Cambridge: Cambridge University Press, pp. 73\u2013114.\n\n\u2014\u2014 (2010) _The Making of Law: An Ethnography of the Conseil d'\u00c9tat_. Cambridge, UK; Malden, MA: Polity.\n\nLynch, J. (2000) 'AIDSTimes: representing AIDS in an age of anxiety'. _Time & Society_, 9(2\u20133): 247\u2013267.\n\nMawani, R. (forthcoming) 'Law as temporality: colonial politics and Indian settlers'. _Irvine Law Review_.\n\nMelissaris, E. (2005) 'The chronology of the legal'.
_McGill Law Journal_ , 50: 839\u2013861.\n\nMoreira, T., May, C. and Bond, J. (2009) 'Regulatory objectivity in action: mild cognitive impairment and the collective production of uncertainty'. _Social Studies of Science_ , 39(5): 665\u2013690.\n\nPatton, C. (1990) _Inventing AIDS_. New York: Routledge.\n\nPersson, A. (2004) 'Incorporating pharmakon: HIV, medicine, and body shape change', _Body & Society_, 10(4): 45\u201367.\n\nPhilippopoulos-Mihalopoulos, A. (1999) 'The silence of the sirens', _Law and Critique_ , 10(2): 175\u2013197.\n\nPottage, A. (2004) 'Our original inheritance', in _Law, Anthropology, and the Constitution of the Social_ , Cambridge Studies in Law and Society. Cambridge: Cambridge University Press, pp. 249\u2013285.\n\n\u2014\u2014 (2012) 'The materiality of what?' _SSRN eLibrary_ (March). http:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=2009206.\n\nPottage, A. and Mundy, M. (2004) _Law, Anthropology, and the Constitution of the Social: Making Persons and Things_. Cambridge: Cambridge University Press.\n\nRooke, C., Cloatre, E. and Dingwall, R. (2012) 'The regulation of nicotine in the United Kingdom: how nicotine gum came to be a medicine, but not a drug'. _Journal of Law and Society_ , 39(1): 39\u201357.\n\nSilversides, A. (2003) _AIDS Activist: Michael Lynch and the Politics of Community_. Toronto: Between the Lines.\n\nVan Marle, K. (2003) 'Law's time, particularity and slowness', _South African Journal on Human Rights_ , 19: 239\u2013255.\n\nWaldby, C. (2004) _AIDS and the Body Politic: Biomedicine and Sexual Difference_. London: Routledge.\n\nWaldby, C. and Cooper, M. (2010) 'From reproductive work to regenerative labour: the female body and the stem cell industries'. _Feminist Theory_ , 11(1): 3\u201322.\n\nZerubavel, E. (1985) _Hidden Rhythms: Schedules and Calendars in Social Life_. Berkeley, CA: University of California Press.\nChapter 13\n\n* * *\n\n# **Paper prototypes**\n\n_Alain Pottage_\n\n* * *\n\nThe techniques and institutions of patent law offer a ready-made intersection between law and science, one that has been quite extensively explored from various perspectives. Many of these explorations assume that intellectual property rights necessarily bring with them something like the powers of control and exclusion that lawyers ascribe to property. The assumption is that patents function as instruments for regulating access to scientific or technological knowledge. It is obvious that patents have effects, but in order to understand those effects one should be clear about what kind of object a patent is. The conventional understanding objectifies patents by construing them as instruments of broader social and economic forces. In fact, a patent (if one can use the noun form at all) is a complex assemblage of material, discursive, and cognitive elements: files, documents, indexing systems, bureaucratic practices and aesthetics, structural couplings to economic and scientific institutions and techniques, and so on. Accordingly, the better place to begin in developing a social-scientific account of the agency of patents is with the 'knowledge practices' that are immanent in this assemblage of elements (Riles, 2011). In other words, the agency of patent law should be construed not in terms of social forces of which law would be a dependent vehicle or instrument, but in terms of the emergent agency that articulates the composite 'object' of material and communicative elements and techniques.
Of course, this quite considerably complicates the question of how law intersects with science in the medium of patent law, but it is better to acknowledge these complexities than it is to perpetuate the ideological narrative of ownership that has been retailed by more than two centuries of legal and economic commentary.\n\nIn this chapter, I focus on one element of the patent assemblage \u2013 the patent text or specification. I suggest that in classical patent jurisprudence the specification was understood in terms of a logic of prototyping. Patent lawyers did not talk about prototyping, but they imagined the invention as a kind of genetic template; as an immaterial causative principle that informed the form and functionality of the material artefact. This sense of the invention was probably an effect of the intimate historical connection between the formation of copyright and patent law and the emergence of large-scale manufacture and industrial production. Copyright law emerged when scribal publication was eclipsed by the diffusion of the printing press \u2013 'the first assembly line in the history of technology' (Ong, 2002: 118) \u2013 and the category of the invention in its modern sense was closely shaped by the formation of an ideology of manufacturing which construed manufactured machines or consumer artefacts as 'one of a multitude' (Babbage, 1851: 48). By stamping out potentially limitless copies of an original, the technology of manufacturing promoted the generalized sense that 'ideas' were the prime movers in the making of mechanical devices and literary artefacts (as Kant observed, even publishing was 'carried on in the manner of a factory [ _fabrikenm\u00e4ssig_ ]') (Kant, 1996a: 626). The sheer quantity of identical copies \u2013 even if quantity was potential and identity necessarily imperfect \u2013 affirmed the prestige of the original template or 'prototype' as the most vital agency in the manufacturing process.\n\nThe artefact itself was construed as the inessential embodiment of the invention. The art of patent discourse was to elicit the type from the tokens in which it was embodied, and, in the context of the infringement action, where the question was which of two claimants was the 'original and first inventor' of a new invention, to determine which of two types qualified as the 'prototype', or the first practicable expression of a novel idea. This particular take on prototyping took the invention as the endpoint of one process of prototyping, namely, the process of working out the practicability of an idea, and as the origin of another, namely, the process of manufacture. The process of eliciting the invention from its embodiment, or of anatomizing materiality to express the intangible 'type', took place in various settings, ranging from the office of the patent attorney, through the patent office, to the circuit or appellate courts. In each of these settings the inventive artefact was scrutinized to identify the novel 'principle' that it embodied.\n\nThe lawyer's sense of prototyping was essentially normative. According to patent doctrine, inventions had to be properly disclosed before they could be patented; the object was to fix the intangible idea in print, so that it could circulate to others 'skilled in the art' both as a notice of the existence of the invention and as a disclosure of the knowledge expressed in the invention. 
This theory of paper as the medium in which inventions were communicated followed from the basic normative premise of the US patent system, which is that the inventor is granted an exclusive right to exploit the invention for a limited period of time in return for disclosing the invention to the public in terms which would enable the skilled artisan to make and use the invention:\n\n> The law confers an exclusive patent-right on the inventor of any thing new and useful, as an encouragement and reward for his ingenuity, and for the expense and labor attending the invention. But this monopoly is granted for a limited term only, at the expiration of which the invention becomes the property of the public. Unless, therefore, such a specification was made, as would at all events enable other persons of competent skill to construct similar machines, the advantage to the public, which the act contemplates, would be entirely lost, and its principal object would be defeated.\n> \n> ( _Lowell v. Lewis_ 15 F.Cas. 1018, 1020 [1817])\n\nThe emergence of this theory was an effect of a quite profound transformation in the political or governmental diagram of modern Europe, which Mario Biagioli characterizes in terms of a relation between political representation and the representation of inventions (Biagioli _et al._ , 2011: 27\u201328). The theory of the printed specification identified inventions as prototypes in two senses: retrospective and prospective. The paper specification functioned retrospectively as the medium in which the invention was fixed as the endpoint of the process of conception and reduction to practice; in setting out a recipe from which other artisans could reproduce the invention, a properly prepared specification proved that the inventor truly possessed the new idea. The same criterion of reproducibility allowed the specification to function prospectively, as the medium in which the invention became an industrial prototype, or an effective genetic template.\n\nI begin with an account of the historical factors that allowed textual descriptions and perspectival drawings to be adopted as vehicles for recollecting and communicating mechanical designs. I then describe how patent jurisprudence settled on a definition of the invention which animated the patent text in ways which escaped the instrumental agency implied in the doctrinal theory of disclosure. Nineteenth-century patent lawyers in the United States defined the mechanical invention not as a spatial form but as a transitive or dynamic force. So the 'prototype' that was held in a patent text was not identical to the 'virtual' construct depicted in the drawings and descriptions; rather, it was the transitive 'mode of operation' that could be seen in the articulations of the machine once it was made. And a 'mode of operation' was an effect of the way that texts were put together with concepts, gestures, and material arrangements. That is, the invention as prototype was an artefact of knowledge practices that articulated this assemblage of elements. Indeed, in characterizing the mode of operation of a machine as a means abstracted from ends, or as a means in itself, patent lawyers found in their object the very mode of agency that articulated the discourse of patent law. I conclude by suggesting that the legal definition of the invention introduced a sense of prototyping that was more akin to postindustrial 'modulation' than to industrial 'moulding' (on this distinction, see Deleuze, 1980). 
Although the jurisprudential theory of disclosure was consonant with the classical sense of prototypes as points in a linear process of engineering, the definition of the mechanical invention as a 'means' allowed patent lawyers to reinvent inventions by recombining 'form' and 'matter', paper and materiality, or the visual and textual. It is precisely this kind of agency that should inform reflection on the social or economic effects of patents.\n\n## **Paper prototyping**\n\nIn his patent treatise of 1849, the patent lawyer George Ticknor Curtis drew an analogy between a transaction in a material thing and a transaction in a 'secret process':\n\n> Let us suppose that A., by the exertion of his inventive faculties, has ascertained, that by placing matter in certain positions to be operated upon by forces of nature, a result will be produced, in the shape of an instrument, wholly unknown before, and capable of being usefully applied to the wants of mankind. Let us suppose that B., seeing the result but wholly ignorant of the process by which it may be attained, desires to possess that instrument. Common gratitude would prompt him to return something valuable for it. How does it alter the case, if, instead of a single specimen of the instrument, we suppose A. to have retained in his recollection the process by which copies of that instrument may be indefinitely multiplied, and that it is the secret process of making the thing, the intellectual property and knowledge, which B. desires to possess?\n> \n> (Curtis, 1849: 3\u20134)\n\nIn one sense this merely restated the basic political justification of patents, which was that inventors were granted a patent in return for disclosing inventions that they might otherwise have kept secret. But the particular characterization of the invention as a process from which copies of a thing might be 'indefinitely multiplied' gets at the lawyer's sense of the invention as a prototype. The invention was not the material product but the recipe from which copies were reproduced.\n\nThe notion of 'recollection' is important because it expressed the principle that inventors had to be in possession of the invention that they purported to disclose, which in turn meant that they had to master the new principle well enough to be able to survey or traverse it intellectually, and well enough to be able to communicate it to other persons 'skilled in the art'. Towards the end of the nineteenth century, William Robinson made the same point by using the metaphor of 'perception' rather than 'recollection':\n\n> To create by accident without a recognition of the fact or nature of his own creation, and consequently without the power to repeat the same creative act, is not invention. [I]t is essential to the inventive act that the inventor should not only _con_ ceive, but should also _per_ ceive his original idea, and should do both so clearly as to make this idea an actual addition to his fund of knowledge, and to be able to communicate it to the public.\n> \n> (Robinson, 1890: 121\u2013122)\n\nOne could discover something accidentally, by recognizing the significance of a fortuitous event, but such a discovery became a patentable invention only if the inventor had understood it well enough to recollect it, and to present that recollection as a recipe from which it could be reproduced by 'the person skilled in the art'.\n\nThe invention became a prototype only through the agency of paper. 
First, the recollection of the invention in print proved that one kind of prototyping had resolved into an endpoint \u2013 that the inventor's experiments had matured into a complete 'perception' of the idea. Second, the genetic agency of the invention as an industrial prototype, as endpoint turned into original template, was an effect of its constitution in textual and graphical form. Paper was the medium in which the invention was constituted as the active principle protected by patent law. The agency or instrumentality of the invention was an effect of its representation in texts and drawings, or, more precisely, of the way that texts and drawings were drawn into the discursive practices that unfolded in the various institutions in which the invention had to be elicited and scrutinized. Paper was also the medium in which the invention was supposed to be anatomized, and formulated in terms that would make it intelligible and practicable for artisans or manufacturers. In that sense, paper was the medium in which the invention became an effective template or instrument (even if that template could be used freely only after the patent expired). A late nineteenth-century Supreme Court decision referred to the material thing or machine as an 'outward embodiment of the terms contained in the patent', meaning that the primary and most vital medium of the intangible invention was the text, and that the material artefact was merely the secondary or reflected embodiment of the representation formulated in text. Thus, the invention as prototype was 'prototyped' by the medium of paper and the discursive operations in which paper was mobilized.\n\nThe agency of paper might be formulated in more abstract, and perhaps more consequential terms. Media theorist Friedrich Kittler observes that the rise of engineering in Europe was made possible by the combining of texts and drawings in the plane of the printed page. To some extent, this observation merely recalls the effects of standardization that came with 'print culture' (Eisenstein, 1982); engineering knowledge circulated in the form of 'technical drawings, construction plans, and sketches, which looked the same in every printed copy simply because they were indestructible reproductions of a single original' (Kittler, 2010: 66). The ability to reproduce images as woodcuts predated the emergence of the printed press, so what is important is that the technical drawings that were joined with text were based on linear-perspectival geometry: the image and accompanying text 'correctly' reproduced an 'original' machine or structure because this original had been translated into an image by a draftsman who saw it through the frame of linear-perspectival geometry and because it was then decoded by a reader who visualized it using the same perspectival _dispositif_. Thus, '[t]he book became a medium in which technical innovations as such could take place. They could be stored, shared, and even advanced with the help of technical drawings in the text' (Kittler, 2010: 67). 
The emergence of this perspectival _dispositif_ might well have been coupled to the emergence of the governmental diagram that is described in Biagioli's story of the patent specification: just as the specification assured the transition from privileges to rights, so did the perspectival geometry implemented by the specification make the transition from 'guild secrets' to 'the knowledge of engineers' (Kittler, 2010: 67).\n\nGiven that the specifications lodged in the US Patent Office in the first half of the nineteenth century were manuscripts illustrated with non-perspectival drawings, it is not evident how this 'new' _dispositif_ of print and perspective came to shape the juridical-discursive prototyping of inventions. However, the problematic here is not to question _how_ this translation occurred, but rather to query the _effects_ of this _dispositif_. The theory that machines could be prototyped in paper was current in the nineteenth century, and it is also true that some scientists and engineers in the United States, notably the patrons of the mechanics' institutes that were established in the middle of the century, believed that patent specifications could be taken as a means of communicating innovation, even though access to the full specification meant paying Patent Office scribes and draftsmen to copy out the text and drawings at the then considerable cost of 20 cents per sheet. But the primary addressees of the patent specification were not engineers or artisans but lawyers, patent examiners, and judges. The construction of the invention as a prototype took place in the legal settings, where lawyers or patent examiners judged the adequacy of the specification by reference to the legal fiction of the 'person skilled in the art'. Accordingly, what really matters for our purposes is the discursive practice and competence of this community of interpreters. In a discussion focused on the US patent system in the latter part of the nineteenth century, Lisa Gitelman suggests that lawyers took the patent specification to be the transparent representation of an existent invention:\n\n> The patent system assumes a na\u00efve relation between words and things. According to law, any 'useful art, manufacture, engine, machine or device', may be protected by 'describing the said invention or discovery clearly, truly, and fully', and undergoing a successful examination by the patent office. Any technology can be described, and any invention can be neatly, if not uniquely, divided into individual claims, that is, bite-sized acts of objective signification, for which language is presumed to be clear, clean, natural, and free from ambiguity. No symbolic or figurative meanings pertain; the whole is exactly equal to the sum of its parts. In an age that imagined so many machines for language, the patent process offered a language for machines'.\n> \n> (Gitelman, 2000: 101)\n\nAnd, to the extent that the invention could not be captured in text, the lawyers' understanding was that 'drawings do what language cannot' (Gitelman, 2000: 106).\n\nPatent lawyers might have been somewhat more knowing about their own naivety. Writing in the late 1930s, Emerson Stringham, the pre-eminent expert on patent claim drafting, dismissed the idea that claims were indexed to the 'essential reality' of the invention:\n\n> The effect upon Western culture of the discovery that \"essential reality\" is a human concept having no foundation in the external world cannot yet be predicted. 
The discovery is only slowly filtering into our consciousness. When the dust has cleared we shall still have \"essential reality\" but we shall recognize it as something to be repeatedly picked and chosen and discarded, simply as seems convenient from time to time.\n> \n> (Stringham, 1939: 32)\n\nStringham approached the question of what the patent claims \u2013 and by implication the patent text as a whole \u2013 might actually refer to by way of a reflection on the writings of Berkeley, Freud, Korzybski, and, ultimately, Peirce's differentiation of qualisigns, sinsigns, and legisigns: 'A patent claim never identifies anybody's \"invention\", but is merely a set of legisigns that, by virtue of conventional usage, help to select those events with respect to which the law will take some steps' (Stringham, 1939: 977). Despite his pre-eminence, Stringham was not in the usual run of patent lawyers, and the flourish of philosophy (and realism) was more characteristic of mid-twentieth-century than mid-nineteenth-century patent jurisprudence. But although most nineteenth-century patent lawyers in Europe and the US probably did believe that there really was such a thing as the invention waiting to be found in a material machine or artefact, there is one basic reason why the language of a specification could only have been understood and practised forensically, as a means of contesting or negotiating the shape and effect of the invention. In the age of mechanical invention, language always fell short of reality, not so much because the invention was intangible but because it was transitive.\n\n## **Transitive types**\n\nIn the introduction to his patent treatise, Curtis characterized the invention in terms of a distinction between a 'form' and a 'means':\n\n> [A]ll inventions are independent of form, except those whose entire essence, purpose, and characteristics begin and end in form alone, as is the case with all manufactures, the utility and advantage and proposed object of which depend on form. But where there is a purpose that does not begin and end in form alone, where the form or arrangement of matter is but the means to accomplish a result of a character which remains the same, through a certain range of variations of those means, the invention is independent of form and arrangement, to this extent, that it embraces every application of means which accomplishes the result without changing its nature and character.\n> \n> (Curtis, 1849: 17)\n\nInventions whose 'entire essence' consisted 'in form alone' were the kinds of consumer designs that were serially moulded by the industrial manufacturers that emerged in the US from the 1840s onwards. In one sense, the distinction between the mechanical invention and the artistic work or consumer design turned on an eminently practical point. To identify a mechanical invention with the form of a machine would have been to make machine patents almost worthless: any competitor could evade the inventor's patent by changing some incidental feature of the machine. Thus, in patent law the material form of the machine was deciphered as an equivocal and potentially deceptive expression of the embodied invention. According to an influential nineteenth-century decision:\n\n> the principles of two machines may be the same, although the form or proportions may be different. They may employ substantially the same power in the same way, though the external mechanism be apparently different. 
On the other hand, the principles of two machines may be very different, although their external structure may have great similarity in many respects.\n> \n> ( _Barrett v. Hall_ 2 F.Cas. 914, 923 [1818])\n\nConsequently, the mechanical invention could not be identified with the external form of a machine, but nor could it be construed as a kind of fixed inner form, as a virtual blueprint or spectral architecture.\n\nThis insight distinguished the form of the invention from the prevailing understanding of form as an effect of operations of moulding. This understanding has its origins in the old distinction between the liberal and the mechanical arts. In his history of the imprint [ _l'empreinte_ ] as the 'countermodel of the idea of art' (Didi-Huberman, 2008: 102), Georges Didi-Huberman describes how the Renaissance sense of artistic creativity emerged from a contrast between these two modes of production. The expansion of modes of imprinting in the Renaissance period \u2013 exemplified in Pisanello's portrait medals, new techniques of engraving, the refinement of the art of sculpture, and (last but not least) the advent of the printing press \u2013 was paralleled by the emergence of a discourse in which these techniques were characterized as 'mechanical' arts and subordinated to the 'liberal' arts:\n\n> The form obtained by an imprint was incompatible with the very notion or ideal of art because it was too directly derived from matter as it already existed and insufficiently derived from the _idea_ that was so dear to the classical theory of art. Being both borrowed and imprinted [ _\u00e0 la fois 'emprunt\u00e9e' (pr\u00e9lev\u00e9e) et empreint\u00e9e (physiquement marqu\u00e9e)_ ] form was transferred directly from matter to matter. In order to come into existence it did not have to be formed in the mind of the artist.\n> \n> (Didi-Huberman, 2008: 121)\n\nAn imprint did not involve artistic creativity. The theory was that although painters and sculptors quite commonly used techniques of imprinting they used them only as means or instruments of artistic expression. The true artist 'molded with his mind' (Didi-Huberman, 2008: 129), and the mind thoroughly controlled the means by which a style was made material. This carries over into copyright law in the form of the assumption that literary or artistic style or expression is fixed in the material form, pattern, or immanent architecture of a work, or in a distinctive arrangement of words or brushstrokes. Plainly, these different kinds of work are expressed in different kinds of materiality, ranging from the plastic materials of traditional sculpture, through the density of certain kinds of painting media, to the print-based combinatory of 'standardized elements or characters' (Kittler, 2002: 38) that underwrites uniqueness of literary style.\n\nBy contrast, Curtis's description of the invention as a 'means to accomplish a result', or as the means of applying a 'natural law, or power, or property of matter' (Curtis, 1849: 7\u20138), expressed the jurisprudential conception of the mechanical invention as its 'mode of operation' rather than its form or ultimate function:\n\n> It is obvious that, where the invention is in machinery, the mode of operation embodied in such machinery must constitute the essence of the means of producing the result. If any one think otherwise, let him test it by supposing the mode of operation to be taken away from the machine, and see what will remain. 
To enforce this truth, imagine, if possible, a machine without any mode of operation, and what is it? Clearly nothing but the wood and metal composing it. This shows that the mode of operation is the characterizing feature.\n> \n> (Argument of counsel in _Burr v. Duryee_ 68 U.S. 531, 554 [1863])\n\nThe cryptic proposition that a mechanical invention consisted in 'the essence of the means of producing the result' gets at the idea that the invention was the machine construed as a mode of functioning abstracted from its ultimate functions, or \u2013 with apologies to Kantians \u2013 as a 'means in itself'. An inventor could not have a patent for the ends of the machine \u2013 its ultimate function \u2013 because such a right would be too broad and indefinite: '[The patentee] cannot describe a machine which will perform a certain function, and then claim the function itself, and all other machines that may be invented to perform the same function'; it followed that the invention encompassed by the patent had to be defined not by the end but by the means, and the distinction between these two terms had to be drawn clearly enough to allow patents to be issued for alternative means to the same end.\n\nReturning to the Renaissance notion of moulded form, Curtis's point was that the 'type' of the mechanical invention was not like the 'form' of an artistic work. It was because the 'principle' of a machine consisted in the way that a machine elicited an effect from nature \u2013 in the 'application of the natural law, or power, or property of matter' (Curtis, 1849: 7\u20138) \u2013 that the invention had to be apprehended as a 'means', or as a transitive type. Thus, the material substance of a machine, to take the paradigm of invention, was not 'formed' in the way that one forms plaster to make a sculpture; rather, it was 'arrayed' in the way that one arrays experimental devices and instruments to generate or precipitate a particular effect. So the 'principle' of a machine \u2013 the 'thing' protected by a patent and in an infringement action \u2013 was the specific inflection of physical or mechanical forces that was elicited by a mechanical configuration.\n\nThere are two important points about this sense of the invention as an effect elicited from nature. First, the effect in question was not the ultimate end or function of the machine but the physical or kinetic process generated in and by the operation of the machine. Second, for the purposes of patent law, the machine designed by an inventor was just the means by which nature was instrumentalized:\n\n> [T]he peculiarity of the invention consists in the effect produced by the application of the natural law, as an agent; and this effect is not changed, by the use of different vehicles for the action of the agent, provided there is still the same agent, operating in substantially the same way, to produce substantially the same effect or result.\n> \n> (Curtis, 1849: 8)\n\nOf course, the machine was essential to the invention, but the intimate type of the invention was the 'machinic' trace of the encounter between mechanism and natural 'agent', and this trace was the thing reproduced by legitimate and illicit 'copiers'.\n\nThe form of the invention was revealed and defined by the periodic articulation of a mechanism rather than by the material or physical composition of the machine. How could one fix a mode of operation on paper? Forensically, a mechanical invention could be apprehended only by seeing or imagining a machine in operation. 
Writing in 1890, and taking the example of a machine for planing lumber, William Robinson suggested that what should be seen or imagined was the operation of the machinery abstracted from its immediate objects:\n\n> the planer [is not] changed in character or capability by the presence or the absence of the lumber; [it acts] with equal energy, and in precisely the same manner, whether [its] action terminates on the material [object], or issues only in the beating of the air.\n> \n> (Robinson, 1890: 210)\n\nI have described elsewhere the ways in which the doctrinal conception of the mechanical invention as a mode of operation was conditioned by media of representation (Pottage and Sherman, 2010; Pottage, 2011). Although the paper specification was the official medium for the recollection and communication of inventions, the process of fixing and communicating the invention in print actually involved a diversity of media \u2013 voice, gestures, sketches, scale models, and technical drawings. And, at least until the second half of the nineteenth century, the patent model was the form in which most inventors, attorneys, and judges recollected, scrutinized, and delimited the invention. The question of the limpidity or objectivity of the language or drawings in a specification was largely beside the point because most of the addressees of these descriptions simply could not see a machine on paper. In 1878, a former Commissioner of Patents observed that most judges could not read engineering drawings \u2013 'there is not to exceed one in five, at most, and probably not so large a proportion, of our best judges on the bench who get any just conception of a machine by the drawing' \u2013 and that for this reason the only way to reach an understanding of the 'principle' of a machine was by demonstrating a scale model.\n\nWith the question of prototyping in mind, this suggests that we need to take a more expansive view of the discursive media and techniques that were involved in eliciting a prototypical agency from the material form of the machine. One might go somewhat further than Gitelman, and suggest that the language of patent law was not so much a language for machines as it was language that (in the philosophical sense) 'machined' machines (Pottage, 2011). And this process of machining might be explored at different orders of scale, from the close texture of the rhetorical, gestural, and material manoeuvres that brought the invention into being for a jury in a courtroom, or for an examiner in the Patent Office, to the more abstract diagram.\n\nSo, if the invention did not consist in a material form, or in an immaterial form that could be radiographed by the specification, then what exactly was the 'type' that was reproduced in each exemplar of the invention, and which an infringer might be held to have 'lifted'? If the invention did not consist in form, and if lawful or unlawful reproduction of the invention could not be construed as transactions in form, then what exactly was being transacted? 
Although the mechanical invention could be defined only negatively (it was not the material form of the machine, not its ultimate function, and not the natural forces or materials that were mobilized in a machine), patent lawyers assumed that there was a 'principle' waiting to be found in the patented machine: 'However inadequate the term may be to express what it is used to convey, it is obvious that there is a characteristic, an essence, or purpose of every invention, which, in our law, has been termed by jurists its principle' (Curtis, 1849: 10). Speculatively, one might say that a transitive type could acquire determinacy because the 'mode of operation' of a machine is periodic and repeatable. If, to adapt a classic definition, a mechanism is an assembly of elements that is articulated in such a way that its configuration is maintained (or, rather, periodically restored) throughout the operation of the machine, then this periodicity is capable of definition. And, for the purposes of property law, reproducibility is as good as materiality.\n\n## **Prototyping machines**\n\nAlthough the transitive 'mode of operation' of a machine could be fixed as an observable trace, it is obvious, first, that this effect was contingent on the specific setting in which the demonstration of a machine or mechanical model was performed and witnessed, and, second, that depending on the strategy of demonstration, on the way that models, texts, and drawings were articulated together, and on the expectations of the judges and jurors, the same machine could yield a quite different 'principle'. Indeed, the point of litigation was to contest the nature of the 'type' embodied in a mechanical invention. Each side in an infringement action would mobilize whatever discursive or forensic resources were available in such a way as to press its own argument; each side would anatomize and recompose the machines in question so as to reveal a mode of operation that supported its case. There were only a few variables to work with \u2013 patent jurisprudence reduced machines to a set of basic elements: 'shape, size, capacity, proportions, arrangement, or materials' (Robinson, 1890: 320) \u2013 but the invention was a legal rather than a mechanical artefact. In one prominent early nineteenth-century decision, the trial judge took the parties' lawyers to task for turning an infringement action into a lesson in mechanical science:\n\n> the question whether the principles are the same in two machines, is in reality, when all the facts are given, rather a matter of law, than of the opinion of mechanics; at least matter of law is necessarily mixed up with it, which mechanics could not be presumed to be acquainted with.\n> \n> ( _Barrett v. Hall_ 2 F.Cas. 914, 923 [1818])\n\nThe point was that lawyers should demonstrate mechanical form or draw on the 'opinion of mechanics' only for the purposes of revealing the inventive 'principle' of a machine, which was 'a matter of law'. So the machine, or the model that stood in for it, was actually two machines in one. In one dimension, the machine or model was a creature of mechanical ingenuity, an artefact embodying a principle that could be identified and explained by expert witnesses in the language of mechanical powers, structures and forces. In its alternative dimension, the model embodied the legal schema of the invention, a 'principle' that was accessible only to the legal imagination. 
Mechanical and legal re-engineering worked reciprocally; demonstrations switched between the two dimensions of the model, engineering mechanical form in such a way as to sustain a favourable interpretation of the doctrinal notion of invention, and engineering legal form so as to ask new questions of mechanical opinion.\n\nKeeping in mind the question of prototyping, and more especially the classical understanding of prototypes as instructional means that relayed design to manufacture and consumption, the most interesting effect of this mode of prototyping was that it turned machines into means in themselves. Patent law, which presented itself as a thoroughly instrumentalist technique, effectively turned the instrumentalist logic of the nineteenth-century inside out. From within a deeply entrenched understanding of machines and mechanical processes as means to ends, of industrial manufacturing as the means to achieve economy, efficiency, and precision, and of human competences as mere components or adjuncts to this mode of production, patent jurisprudence turned machines into instruments abstracted from their industrial causes or effects. Whereas engineers understood prototypes as designs that could be translated from a model or blueprint into a machine that would articulate the instrumental flow of an industrial process, the legal definition of the mechanical invention suspended or involuted this linear flow. The effect of construing machines as means in themselves was to locate the invention in what cyberneticians or systems theorists would call the 'blind spot' of each of the hallmark distinctions of the industrial ideology (means and ends, matter and form, inputs and outputs). In other words, and this was precisely what made the task of apprehending inventions so difficult, the invention took shape in the zone of indistinction of each of these cardinal distinctions, the zone in which they remained latent or contingent.\n\nOf course, the figure of the machine as a transitive type immediately re-entered the industrial economy; patents secured the investments made by 'projectors' and manufacturers in production runs, and they became bases of economic speculation in their own right. The peculiar definition of machines as means in themselves was essential to these industrial and economic functions. The definition of the machine as a 'mode of operation' enabled patent lawyers to identify patent infringement by reference to criteria that anticipated and defused the strategies that infringers might use to avoid liability for copying the 'type' embodied in a mechanical invention. So one might say that the virtue of locating the invention in a zone of indistinction was that it enabled the 'type' of the machine, its intangible substance, to be responsive to perceptions of the 'fairness' of certain kinds of industrial competition, and that in return the spirit of the machine, its inventive 'principle', took shape as a reflex of the articulation of a broader industrial _dispositif_.\n\n## **Conclusion**\n\nAlthough there is a longstanding historical sense of prototypes as ideal forms rather than material or medial agents, few people these days think of prototypes without having in mind the working sketches, experimental models, technical specifications, blueprints, and trial manufactures that are used to test mechanical hypotheses, to probe potential functionality or marketability, or to instruct mass production. 
Moreover, the recent critical understanding of prototyping as a contingent, recursive, or reflexive operation has been informed by the shift from these classical industrial prototypes to informatic prototyping; namely, rapid prototyping, direct writing, computer-aided design or manufacturing, and 3-D printing. Although some practitioners still understand these newer procedures instrumentally, as modes of 'instructional design', they mark the emergence of what one might call 'objectile' objects: things which no longer have the 'appearance of essence' that was generated by processes of industrial reproduction (see Deleuze, 1980: 26).\n\nIf in the case of the 'objectile' object the object becomes an event, then one might say that patent discourse reversed this move: the 'type' of the mechanical invention was an event fixed as an object. And what was fixed was not a specific intersection of form and matter, or an effect of moulding, but a transitive 'modulation' of the elements of the discursive _dispositif_ of patent law, and also of the machines to which a 'mode of operation' was ascribed. Some patent lawyers were alive to the philosophical implications of this mode of modulation. Discussing the prevalent sense of mechanical inventions as 'combinations' of known elements, Emerson Stringham drew on Whitehead's _Process and Reality_ to explain why a mechanical combination was a discursive artefact, or a product of judgment:\n\n> [For Whitehead] 'the production of novel togetherness' is the ultimate notion in the term 'concrescence'. These ultimate notions of 'production of novelty' and of 'concrete togetherness' are inexplicable either in terms of higher universals or in terms of the components participating in the concrescence. The analysis of the components abstracts from the concrescence. The sole appeal is to intuition.\n> \n> (Stringham, 1939: 1150)\n\nAnd, as in Deleuze's notion of postindustrial modulation, patent discourse machined the terms that the logic of moulding takes as determinate starting points: form and matter, means and ends, production and consumption. Even if transitive types were folded back (see Strathern, 2006) into the industrial economy, their mode of fabrication or prototyping suggests that patent law had created something like a postindustrial machine, the machine in question being the discursive or medial _dispositif_ of patent jurisprudence itself, more than the artefacts that were produced by that machine.\n\n## **Notes**\n\nE.g. section 2 of the US Patent Act of 1790.\n\nAccording to the dictionary definition, 'the mental operation by which objects or ideas are revived in the mind' ( _Oxford English Dictionary_ , online edition, 'recollection').\n\n'Paper' rather than 'print' because it was only in 1866 that the Patent Office began printing patent specifications.\n\n_Bischoff v. Wethered_ 76 U.S. 
812 (1869).\n\n'In the 1850s, a simple drawing of a washing machine attached to a patent application might look quite similar to a large cross-section of the Great Eastern steamship drawn to woo investors; both would show a non-perspective view, use blue colouring for steel and yellow for brass, and show light coming from the upper left' (Rankin, 2011: 67).\n\nSee Patent Act 1793, section 11.\n\nFor another example, see Kent (1945).\n\nKant distinguished between literary works and other kinds of artistic 'manufacture' on the basis that whereas the material book was the medium through which the living 'speech' of the author was relayed, an engraving or sculpture was fully realized by its material form (Kant, 1996b: 29\u201335). Nonetheless, this ongoing 'speech' presupposed the finite elements of a graphic combinatory.\n\n_Corning v. Burden_ 56 U.S. 252, 270 (1853).\n\nSee Arguments before the Committee on Patents April 3rd 1878, 45th Congress 2d Session Mis Doc 50, pp. 99\u2013100.\n\nSo, to take the leading example, Samuel Johnson's dictionary defines prototype as: 'the original of a copy; exemplar, archetype' (Johnson, 1766, at p. 413, s.v. 'prototype').\n\n## **References**\n\nBabbage, C. (1851) _The Exposition of 1851: or Views of the Industry, the Science and the Government of England_ , London: John Murray.\n\nBiagioli, M., Jaszi, P. and Woodmansee, M. (eds) (2011) _Making and Unmaking Intellectual Property. Creative Production in Legal and Cultural Perspective_ , Chicago, IL: University of Chicago Press.\n\nCurtis, G. (1849) _A Treatise on the Law of Patents for Useful Inventions_ , Boston: Little & Brown.\n\nDeleuze, G. (1980) _Le pli. Leibniz et le baroque_ , Paris: Minuit.\n\nDidi-Huberman, G. (2008) _La ressemblance par contact. Arch\u00e9ologie, anachronisme et modernit\u00e9 de l'empreinte_ , Paris: Minuit.\n\nGitelman, L. (2000) _Scripts, Grooves, and Writing Machines. Representing Technology in the Edison Era_ , Stanford, CA: Stanford University Press.\n\nKant, I. (1996a) 'On turning out books', in Mary J. Gregor (ed.), _Practical Philosophy_ , Cambridge: Cambridge University Press.\n\nKant, I. (1996b) 'On the wrongfulness of unauthorized publication of books', in Mary J. Gregor (ed.), _Practical Philosophy_ , Cambridge: Cambridge University Press.\n\nKent, C. (1945) 'General semantics and the patent system', _Journal of the Patent Office Society_ , 27: 37\u201347.\n\nKittler, F. A. (2010) _Optical Media_ , Cambridge: Polity Press.\n\nKittler, F. (2002) 'The perspective of print', _Configurations_ , 10(1): 37\u201350.\n\nOng, W. (2002) _Orality and Literacy. The Technologizing of the Word_ , London: Routledge.\n\nPottage, A. (2011) 'Law machines. Scale models, forensic materiality, and the making of modern patent law', _Social Studies of Science_ , 41: 621\u2013643.\n\nPottage, A. and Sherman, B. (2010) _Figures of Invention. A History of Modern Patent Law_ , Oxford: Oxford University Press.\n\nRiles, A. (2011) _Collateral Knowledge. Legal Reasoning in the Global Financial Markets_ , Chicago, IL: University of Chicago Press.\n\nRobinson, W. C. (1890) _The Law of Patents for Useful Inventions_ , Boston: Little, Brown & Co.\n\nStrathern, M. (2006) 'Cutting the network', _Journal of the Royal Anthropological Institute_ , 2: 517\u2013553.\n\nStringham, E. (1939) _Patent Claims. 
A Drafter's Manual_ , Madison, WI: Pacot Publications.
Tissue Authority\n\nHurricane Katrina\n\nHurricane Rita\n\nIgert, Jill 193n10\n\nIllinois State Police\n\nimages: colposcopic examinations ; importance in scientific and medical work 90n5; ownership of ; persuasive evidence 90n5; secure storage of ; security of , , ; survivors' consent for recording of ; survivors' fears of exposure ; _see also_ colposcopes\n\nimaginaries: EU (European Union) ; key repository of the nation-state ; nation-states ; salience of ; symbolism\n\nimplied volatility\n\nInce, Robin , ; politicians and science\n\nindustrial capital\n\ninfertility newsgroups\n\ninflation rates\n\nInformation Commissioner's Office: Code of Practice \u2013\n\nInnocence Project: creation of ; distinction between perpetrator and victim ; false eyewitness evidence ; reliance on technology to correct errors ; uncritical acceptance of authority of experts\n\ninnovation: citizen participation in European project ; citizen's virtue ; definition of ; symbolic of EU positive qualities ; videos \u2013\n\nInnovation Union: aims of ; Augmented Reality ; EU funded projects ; Europe 2020 flagship initiative ; Innovation Union Lab ; pledge to involve citizens in projects ; promise of higher quality of life ; Tales from the Future ; website ,\n\n_Innovation Union Act_ \u2013\n\n_Innovation Union Act_ \u2013\n\n_innovator citizens_\n\nIntelligence Authorization Act\n\nintentional ignorance \u2013\n\ninterest rates\n\nIntergovernmental Panel on Climate Change (IPCC) _see_ IPCC\n\nInternational Committee of Medical Journal Editors (ICMJE)\n\nInternational Office of Epizootics 177n2\n\nInternet sperm providers\n\nINUK (Infertility Network UK): survey\n\ninventions ; becoming a prototype through paper ; copies indefinitely multiplied ; effect elicited from nature ; forms and means , ; mechanical \u2013, , \u2013; perception ; prospective ; recollection , ; representation of ; retrospective ; _see also_ paper prototyping; patents; prototypes\n\nIPCC (Intergovernmental Panel on Climate Change) , ; self-governance and peer review\n\nIrwin, A.: farm workers and pesticides\n\nIVF (in vitro fertilisation)\n\nJain, Sara Lochlann ,\n\nJardine, Professor Lisa\n\nJasanoff, Sheila: coproduction of technology and social order \u2013; epistemic disagreement ; legal writing about science\n\nJefferson Pulmonary Associates\n\njurisdictional architecture: period of rapid change\n\nKant, Immanuel\n\nKeeler, Leonarde ; avoided mass production of polygraphs ; department stores and banks \u2013; employee screening ; patent of an apparatus for recording arterial blood pressure ; patent ran out ; popularising the polygraph ; sales and training ; strategies for commercialising the polygraph\n\nKennedy, Donald\n\nKennedy, Ian\n\nKittler, Friedrich\n\nknowledge-based economies: 2000 Lisbon European Council strategy\n\nknowledge claims: policymaking\n\nknowledge-for-governance _see_ epistemic governance\n\nKnudson, S.: 'it's normal to be normal' publication ; review of child and adolescent sexual assault victims\n\n_Kumho Tire Co. v. Carmichael_\n\nKyoto Protocol ,\n\nlaboratories ; t nce\n\nlabour: productive\n\n_The Lancet_\n\nLarson, John\n\nLatour, Bruno: coproduction of technology and social order ; legal predictability ; legal temporalities \u2013; Royal Society debates ; temporality\n\nlaw: functioning of ; role of materiality \u2013; social studies of \u2013\n\nlaw of value: Marxist\n\n_LD and ED (Guardian Ad Litem) v. 
British Columbia Women's Hospital and Health Centre and British Columbia Children's Hospital_ (2010)\n\nLeather, Suzi ; warning against overseas clinics\n\nLee, B. ; financial derivatives \u2013\n\nlegal temporalities: analysis by Bruno Latour \u2013\n\nlegitimacy: normative judgements of better or worse decisions\n\nLeo, Richard \u2013\n\n_Leviathan and the Air-Pump_\n\nLewis, J.\n\nLezaun, Javier ; ontology and epistemology\n\nliberal arts\n\nliberal democracy: public knowledge and governmental evaluation \u2013\n\nlie detectors _see_ polygraphs\n\nlife sciences regulation: decentred approach ; decontextualisation of materials and data ; multiple authorities ; reflexive system of governance \u2013; research , \u2013; research ethics committees ; sector-specific legal regimes , ; struggle of law to keep pace in ; thickets and divides in \u2013; _see also_ human tissue; personal data; thickets\n\nLincoln, C.: concerns about the cost of colposcopes\n\nLipuma, E. ; financial derivatives \u2013\n\nLisbon Strategy\n\nLittlefield, M. M.: mind reading ; significant lying curve\n\nLombrosso, Cesare \u2013\n\nLow-Carbon Fuel Standard (LCFS)\n\nLuhmann, Nikolas\n\nLynch, John ; public health campaigns ; temporal progression of HIV\/AIDS \u2013\n\nMackenzie, D. ; performativity \u2013\n\nmacroeconomic stability\n\nManhattan Project\n\nmarginalised groups \u2013\n\nMarres, N.: public involvement in policymaking\n\nMarston, William Moulton: examination of James Alfonse Frye \u2013; experimental protocols ; first version of the lie detector ; systolic blood pressure symptoms of deception\n\nMarx, Karl ,\n\n_Massachusetts v. EPA_\n\nmateriality: attention to matter within legal scholarship ; entanglement with other kinds of objects ; law and \u2013; textual\n\nMauna Lao laboratory \u2013\n\nMay, Peter: hazards 21n10; policies-without-publics \u2013\n\nMcCarthyite period \u2013\n\nMcGoey, Linsey\n\nMcLean, Professor Sheila\n\nmechanical arts\n\nmechanical inventions \u2013, , \u2013\n\nMedical and Healthcare products Regulatory Agency (MHRA)\n\n_Medical Ethics_\n\nmedical journals: growth of ; past records ; quackery \u2013; statistics\n\nmedicine: professionalisation of \u2013\n\nMerton, Robert ; performativity\n\nMezzadra, S.\n\nMillennium Ecosystem Assessment: power dynamics\n\nMiller, Brad\n\nMirowski, P. ; measurement of commodities\n\nmiscarriage of justice: unexamined assumptions \u2013\n\nmisconduct hunting: COPE Forum \u2013; open-ended deliberations at COPE ; professionalisation of studies into ; regulatory organisations ; self-regulation \u2013; _see also_ COPE; research integrity; technologies of integrity verification\n\nmock crimes\n\nMol, A.: ontology\n\nMontgomery, K.: institutional logics\n\nMoreira, Tiago\n\nMorris, Andrew\n\nMosso, Angelo ,\n\nMouffe, Chantal\n\nMundy, Martha: distinctions between persons and things\n\nMuniesa, F.\n\nMunsterberg, Hugo\n\nMurphy, Michelle , 192n8\n\nNational Cancer Institute 194n12\n\nNational Defense Authorization Act\n\nNational Donation Strategy Group\n\nNational Gamete Donation Trust (NGDT)\n\nNational Jewish Hospital\n\nNational Reform Programmes 43n17\n\nNational Research Council (NRC): report \u2013\n\nNational Science Foundation\n\nNational Security Agency\n\nNational Sustainable Development Strategies\n\nNational Toxins Campaign Fund \u2013\n\nnation-states: imagery of ; imaginaries\n\nneo-classical economists: price system \u2013\n\nNew Mexico\n\nNik-Khah, E.\n\nnormative principle of transparency: plurality of knowledge\n\nNorvell, M. 
K.: genital microtrauma caused by consensual sexual intercourse\n\nnucleotides\n\nNutt, David\n\nOccupational Safety and Health Administration 194n12\n\noffenders: definition 73n8\n\nOffice of Science and Technology Policy\n\nOliver, A. L.: institutional logics\n\nonline forums: control by moderators ; Donor Gamete forum , \u2013; fertility support groups ; Repromed ; virtual patient communities ; would-be parents \u2013,\n\nontological gerrymandering 192n8\n\nontology: connections between emotion, the body and lying ; definition\n\noptions (financial derivatives)\n\nOstrom, Elinor\n\nover-the counter (OTC) derivatives\n\nPacheco, Dr. Karen A. ,\n\nP\u00e1ez, Pedro ,\n\npaper: retrospective functioning ; theory of\n\npaper prototyping: emergence of _dispositif_ ; inventions formulated for artisans or manufacturers ; lawyers' interpretation of invention as prototype ; recollection of inventions\n\nParry, Bronwyn\n\nPatel, H. C.: critique of colposcopy ; rape without signs of injury ; sympathy to 'it's normal to be normal' publication\n\npatents: copies indefinitely multiplied ; doctrine ; genetic template ; invention ; justification of ; legal rights \u2013; life sciences ; machining ; system ; understanding of ; _see also_ inventions; paper prototyping; prototypes\n\npathophysiologies\n\nPattberg, P.\n\nPatton, Cindy\n\npension funds\n\n_The People of California v. San Bernardino County_ \u2013\n\npeople with AIDS (PWAs) _see_ PWAs\n\nPercival, Thomas\n\nperformativity pieces: _agencement_\n\nperformativity thesis: auctions ; Black-Scholes-Merton formula ; emergence of ; exceeding the sphere of human beliefs ; incorporation into financial markets ; relation to financial derivatives\n\npersonal data: anonymity \u2013; arrested persons , ; collapsing the regulatory divide with human tissue \u2013; consent ; consent for use in research \u2013; crossing the regulatory divide with human tissue \u2013; European data protection regime \u2013; individual autonomy and control over data \u2013; privacy concerns ; processing of ; regulatory divide with human tissue \u2013; regulatory systems \u2013; storage on DNA databases\n\npesticides\n\nphantomtic ontology\n\npharmaceutical industry \u2013\n\nphysiology: standardisation of measurements\n\nPickersgill, M. D.: reshaping social and legal ontologies ; scientific knowledge\n\nPielke, R. 
A.: boundary between factual claims and value choices\n\nplagiarism\n\n_PLoS Medicine_\n\nPolice Doctor _see_ FMEs (Forensic Medical Examiners); FNEs (Forensic Nurse Examiners)\n\npolicies-without-publics \u2013\n\npolicy domains: without publics \u2013\n\npolicymakers: democratic politics\n\npolicymaking: agenda set by science ; expert judgements ; individual studies ; knowledge claims ; problems identified by science \u2013; public involvement in ; science-based ; _see also_ populism\n\npolitical centralisation: coproduction ; dependence on regulatory science ; epistemic jurisdiction\n\npolitical decentralisation: coproduction ; dependence on regulatory science ; epistemic jurisdiction\n\npolitical economy _see_ feminist political economy\n\npolitics: decisionism in ; democratic ; pitted against science ; regulation of environmental hazards in USA ; separate roles for experts and\n\npolygraphs: admissibility \u2013; admissibility practices open to reinterpretation ; admissible evidence for criminal trials ; alternative to police methods of extracting confessions \u2013; American obsession with ; deterrent to espionage ; distrust in American workplaces, criminal procedure and governance \u2013; early stage of development ; exclusionary at Federal level ; expertise of examiners ; false positives , ; _Fryev. United States_ \u2013; government security policies against communism \u2013; histories ; innovation ; introduction to department stores and banks \u2013; legal status in the USA \u2013; ontological connections ; popularising ; prior-stipulation mechanism at state level ; scholarly literature on ; sex offenders ; struggle to gain scientific and legal legitimacy ; Supreme Court accepts as expert evidence \u2013; surveillance \u2013; surveillance technique in industry ; technical developments leading to increased reliability ; testing of employees in DOE ; training courses ; usage ; use during McCarthyite period \u2013; validity challenged by NRC report ; _see also_ Keeler, Leonarde\n\npopulism: concept of ; policies-without-publics \u2013\n\nPottage, Alain: common patrimony of humanity ; distinctions between persons and things ; functioning of law \u2013; genetic patrimony ; legal temporalities\n\nPresidential Commission\n\nPresidential Decision Directive\n\nPreston, Patrick \"Rick\" Edward ; FEMA's formaldehyde testing issue ; stipulated ATSDR results remain confidential\n\nprice index\n\nprice volatility: financial derivatives\n\nprint _see_ paper\n\nprinting press\n\nprior-stipulation rule ,\n\nproduction: performativity\n\nproductive labour\n\nprognosis: co-articulation ; determinant of likelihood ; double effect of ; function across legal and clinical fields ; legal claims ; likelihood \u2013; medical experts' reluctance to predict \u2013; patient's test results ; scale and individuation ; schematisation ; T-cell count , \u2013; uncertainty ; viral loads , \u2013\n\nprotection of endangered species\n\nprototypes: invention ; machines _see_ prototyping machines; machining ; paper prototyping _see_ paper prototyping; process of manufacturing ; vital agency in manufacturing process ; _see also_ inventions; patents\n\nprototyping machines: instrument abstracted from their industrial courses ; legal schema of invention ; machine as mechanical ingenuity ; transitive type embodied ,\n\nPublic Bodies Act 2011\n\npublic health campaigns: asymptomatic HIV carriers \u2013\n\npublic participation _see_ citizen participation\n\npublics: accountability of experts ; definition 21n9; engaged in 
policymaking ; _innovator citizens_ ; liberal democracy and \u2013; limited participation in EU data analysis ; public engagement around science-based policymaking \u2013; _see also_ citizen participation\n\nPullin, Scott 190n1\n\nPurchasing Power Parity\n\nPWAs (people with AIDS): financial support ; hostility at work ; temporary benefits and housing arrangements \u2013\n\nquackery: BMA's work against ; campaigners against ; characterisation of ; dilemmas for medical reformers ; petition to Parliament ; professionalisation of medicine \u2013\n\nquangos: abolition of ; HFEA _see_ HFEA (Human Fertilisation and Embryology Authority); transferring the functions of ; unelected\n\nracial categorisation\n\nRafferty, M.: value\n\nrandom match probability 77n10,\n\nrape: absence of signs of injury ; criminal prosecutions ; evidence provided by colposcopes ; forensic intervention ; invasive genital examination of colposcope ; 'it's normal to be normal' publication ; rape injury \u2013; Rape Trauma Syndrome ; recording and documenting injuries ; survivors' consent for recording of colposcopic images ; terminology of victims 88n2; _see also_ colposcopes; colposcopy controversy; sexual assault\n\nRape Trauma Syndrome\n\nregimes of imperceptibility\n\nRegional Monetary Agreement (RMA) ; exchange rates ; interaction of governments and markets\n\nregulation: financial derivatives\n\nregulatory capture\n\nregulatory harmonisation\n\nregulatory landscape: apt metaphor for lawyers\n\nregulatory science ; coproductionist framework ; definition ; focal point of conflict ; global and international levels ; institutions\n\nRenda, Frank: concerns about ATSDR report ; removal from post\n\nRenewable Energy Directive\n\nreproductive care: cross-border \u2013\n\nreproductive medicine\n\nreproductive travel\n\nRepromed ; advice on backstreet abortions ; Egg and Sperm Donation forum , ; forum statistics ; posts on ; reproductive medicine\n\nRepublic of Ecuador\n\nRepublic of Ireland: abortion\n\nresearch integrity: high-tech integrity verification tools ; regulatory agenda ; regulatory capture ; regulatory technological fixes ; technological verification ; technologies of integrity verification \u2013; _see also_ misconduct hunting\n\nResource-efficient Europe\n\nrespiratory illnesses\n\nRevilla, J.: 'it's normal to be normal' publication ; review of child and adolescent sexual assault victims\n\nRiles, Annelise\n\nRva-Rocci, Scipione\n\nRoberts, Justice John\n\nRobinson, William\n\nRooke, Catriona: legal things\n\nRound-up Ready soy\n\nRoyal Society 17n1, , ; conflicting testimonies\n\nRules Enabling Act\n\nSaint-Simon, Henri 20n6\n\nSan Bernardino County \u2013\n\n_S and Marper v. United Kingdom_ (2008) \u2013,\n\nSanitary and Phytosanitary (SPS) Agreement , , ,\n\nSarewitz, D. 
,\n\nSchaffer, S.: connection between experimental knowledge and legal knowledge \u2013; scientific experimentation\n\nSchrader, Astrid\n\nSchwarzenegger, Governor Arnold\n\nscience: decisionism \u2013; definition of ; deviances in ; experimentation ; good research practice ; indeterminancy ; institutional logics ; laboratories ; management of uncertainty \u2013; politicisation of ; professionalisation of medicine \u2013; publishing ; quackery \u2013; regulation of research ; regulatory , ; research integrity _see_ research integrity; review of current regulatory frameworks ; role in making of law and public policy \u2013; scientific publication ; sound \u2013; value-free ; virtual global networks ; _see also_ climate change; climate federalism; COPE; Royal Society; science-in-policymaking\n\nscience and technology studies (STS) _see_ STS\n\nscience-based policymaking _see_ epistemic governance\n\nscience-in-policymaking ; climate science and policy ; environmental hazards \u2013; excess of objectivity \u2013; impact of high political stakes \u2013; limiting the role of scientism ; politicisation of ; probabilistic judgements ; STS insights on\n\nscientific governance _see_ epistemic governance\n\nscientific indeterminacy\n\nScientific Revolution\n\nscientism\n\nSecretariat of International Plant Protection Convention 177n2\n\nself-plagiarism , \u2013\n\nSenate Energy and National Resources Committee\n\nsexual assault: absence of signs of injury ; evidence provided by colposcopes ; invasive genital examination of colposcope ; 'it's normal to be normal' publication ; review of child and adolescent victims ; survivors' consent for recording of colposcopic images ; terminology of victims 88n2; _see also_ colposcopes; colposcopy controversy; rape\n\nShapin, S.: connection between experimental knowledge and legal knowledge \u2013; scientific experimentation\n\nShapiro, B. J.\n\nshared technical imaginary\n\nShea, Dan 190n1\n\nshort tandem repeats (STRs) \u2013, 77n11\n\nsick building syndrome\n\nSierra Club ; press release\n\nSilbey, S.: authenticating site of science\n\nSiu, L.\n\nSlaughter, Laura: application of the colposcope ; evidence and image recordings ; study of injury caused by consensual and nonconsensual sexual intercourse \u2013; study on colposcopes \u2013; Toludine Blue \u2013\n\nSmith, Dr Kenneth: asthma medication \u2013; disposition for FEMA trailer formaldehyde litigation \u2013; translation of affidavit\n\nSmith, Richard\n\nSocial Studies of Finance (SSF) _see_ SSF\n\nsocial studies of law \u2013\n\nsocio-legal scholars: multiple forms of disputing\n\nSouza, Kevin 193n10\n\nSovacool, B. K.\n\nspeculation: capital ; distinct from hedging ; role of financial derivatives\n\nsperm clinics ,\n\nsperm donors ,\n\nsperm (gametes) _see_ gametes\n\nSSF (Social Studies of Finance): contribution ; emergence of ; explaining economic activities ; second order economy\n\nState of Massachusetts\n\n_State of New Mexico v. 
Shane Glen Harrison_\n\nstatistics: Eurostat data collection ; medical journals ; secrecy of methods\n\nStehr, N.\n\nStern Review 87n1\n\nStevens, Justice John Paul \u2013,\n\nStewart, Paul\n\nstock prices\n\nStrange, S.\n\nstratification\n\nStringham, Emerson \u2013; discursive artefact\n\nSTRs (short tandem repeats) \u2013, 77n11\n\nSTS (science and technology studies): causal relationship between products and adverse health outcomes ; challenging categories and concepts in law _;_ challenging the unproblematic and obvious \u2013; controversies ; coproduction ; differences between risk and uncertainty ; distinction between perpetrator and victim ; DNA wars ; ethical, legal, and social implications (ELSI) ; exchanging methods and information with sociolegal scholars \u2013; exclusion of convicts from scholarship \u2013; false eyewitness evidence ; focus on usual suspects ; forensic science ; importance of images in scientific and medical work 90n5; instances of _situated, dis\/empowerment_ ; multiple forms of disputing ; regulatory science ; rendering explicit the distribution of power and political agency ; salience of imaginaries ; scholarship ; science-based policy-making ; science-in-policy \u2013; scientific knowledge ; social studies of \u2013; studies of law and science ; technological instrumentalism , \u2013; unexamined assumptions \u2013, \u2013\n\nSubcommittee on Investigations and Oversight\n\nsubjective (Bayesian) assessment\n\nsubmissions, statistical analysis of\n\nsupply and demand\n\nsurrogacy \u2013\n\nsurveillance: department stores and banks \u2013; deterrent to criminal activity ; DOE (Department of Energy) employees , ; employee confessions ; employee screening ; polygraph testing \u2013\n\nsurveys: HFEA (Human Fertilisation and Embryology Authority) consultations ; overseas fertility treatment \u2013\n\nsustainable development: Article 3(3) TEU ; Article 11 TFEU ; Brundtland Report ; citizen participation ; cross-cutting ; data collection ; different formulations of ; EU (European Union) \u2013; limitations ; limits of participation in the EU governance \u2013; long-term objective of the EU ; needs of the world's poor ; policy documents ; prioritisation of growth ; progress reports ; technoscientific innovation , \u2013\n\nsustainable growth: EU commitment ; Resource-efficient Europe\n\nsystolic blood pressure deception test ,\n\nTales from the Future\n\nT-cell count , \u2013\n\ntechnocracy: assumption of elite privilege ; definition of , ; normative case against ; sound science\n\ntechnologies of integrity verification \u2013; 'computerised the detective's role' ; statistics ; surveillance in laboratories ; textual similarities software\n\ntechnologies of participation: active\/passive participation ; citizen regulation ; _ex ante_ , ; _ex post_ ; techniques and practices \u2013\n\ntechnoscience: governance of ; risks and ethics about ; sustainable development ; sustainable growth \u2013; tool of regulation\n\nTeixeira, W. 
R.: colposcopic study\n\ntemporality: definition ; legal \u2013; material worlds ; natural or social ; systems theory ; _see also_ HIV\n\ntextual similarity algorithms ,\n\nthickets: collapsing the regulatory divide between personal data and human tissue \u2013; conflation of private and public interest , \u2013; consent and protection of individual concerns \u2013; continuum of personal data and human tissue ; crossing the regulatory divide between human tissue and personal data \u2013; definition ; European data protection regime \u2013; evolution of ; focus on objects of regulatory concern ; growth of ; Guthrie cards , \u2013; human tissue ; individual autonomy and control over data \u2013; personal data ; reflexive system of governance \u2013; regulatory burden ; root cause of many problems ; sector-specific legal regimes , ; upstream considerations \u2013; widening regulatory divide in personal data and human tissue \u2013; _see also_ life sciences regulation\n\nthird degree policing , ,\n\nThomas, Ronald: polygraphy and detective fiction\n\nThompson, G. I.: genital microtrauma caused by consensual sexual intercourse\n\nThrift, Nigel 191n7\n\ntime: segmented understandings of ; social horizons of \u2013; _see also_ HIV; temporality\n\nToludine Blue \u2013\n\ntrailers _see_ FEMA trailers\n\ntrans-science\n\ntravel parks _see_ FEMA travel parks\n\nTreaty on the Functioning of the European Union (TFEU)\n\nUK Research Integrity Office\n\nuncertainty: clinical ; governance of futures \u2013; management of \u2013; production and perpetuation of\n\nunderlying assets ,\n\nunexamined assumptions: challenging the unproblematic and obvious ; miscarriage of justice \u2013; technological instrumentalism \u2013\n\nUnited States of America (USA): admissibility of polygraphs \u2013; challenge to EU Directive ; climate federalism \u2013; security \u2013; third degree policing , , ; use of polygraphs during McCarthyite period \u2013; _see also_ DOE (Department of Energy)\n\nun-knowing: ATSDRFEMA trailer health consultation \u2013; Christopher Cooper asthma case \u2013, ; definition ; formaldehyde testing campaign exposure ; pharmaceutical logics \u2013; secrecy of ATSDR test results\n\nvalue: construction of ; contestability ; financial derivatives , \u2013; fundamental , ; law of ; measurement of , ; real and financial ; social ; value-making processes\n\nv-CJD (variant Creutzfeldt-Jakob disease)\n\nviral loads , \u2013\n\nvirtual global networks: scientists and experts\n\nvirtual patient communities\n\nin vitro fertilisation (IVF)\n\nVollmer, August: removal of third degree policing\n\nwages for housework initiative\n\nWakley, Thomas\n\nWaldby, Catherine ; co-articulation ; shared technical imaginary\n\nWeber, Max\n\nWeber, S.: mock crimes ; scientific work on fear\n\nWeinberg, Alvin\n\nWeinel, M.: separation of roles for experts and politics\n\nWellcome Trust Scottish Health Informatics Programme 125n1\n\nWhitmarsh, I.: pharmaceutical intervention\n\nWilliams, Dawson\n\nWoolgar, S.: ontology and epistemology\n\nWorld Association of Medical Editors (WAME)\n\nWorld Bank: environmental impact assessment\n\nWorld Trade Organization (WTO) _see_ WTO (World Trade Organization)\n\nwould-be parents: challenges faced by ; difficulties accessing required treatment ; managing own treatment ; obtaining sperm, eggs or embryos from within the EU ; online forums \u2013; opposition to HFE Act ; overseas treatment , \u2013; Repromed ; survey on overseas fertility treatment \u2013; virtual patient communities\n\nWTO 
Dispute Settlement Body\n\nWTO (World Trade Organization): adjudicating GMOs \u2013; challenge to EU Directive ; food safety agreements ; review of risk assessment processes ; risk assessment protocols\n\nWynne, B.: science-based arguments for policy\n\nYeung, K. \n","meta":{"redpajama_set_name":"RedPajamaBook"}} +{"text":" \n# Practical Deep Learning for Cloud, Mobile, and Edge\n\nReal-World AI and Computer-Vision Projects Using Python, Keras, and TensorFlow\n\nAnirudh Koul, Siddha Ganju, and Meher Kasam\n\n# Practical Deep Learning for Cloud, Mobile, and Edge\n\nby Anirudh Koul, Siddha Ganju, and Meher Kasam\n\nCopyright \u00a9 2020 Anirudh Koul, Siddha Ganju, Meher Kasam. All rights reserved.\n\nPrinted in the United States of America.\n\nPublished by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.\n\nO'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles ( _http:\/\/oreilly.com_). For more information, contact our corporate\/institutional sales department: 800-998-9938 or _corporate@oreilly.com_.\n\n * Acquisitions Editor: Rachel Roumeliotis \n * Development Editor: Nicole Tache \n * Production Editor: Christopher Faucher \n * Copyeditor: Octal Publishing, LLC \n * Proofreader: Christina Edwards \n * Indexer: Judith McConville \n * Interior Designer: David Futato \n * Cover Designer: Karen Montgomery \n * Illustrator: Rebecca Demarest\n\n * October 2019: First Edition\n\n# Revision History for the First Edition\n\n * 2019-10-14: First Release\n\nSee _http:\/\/oreilly.com\/catalog\/errata.csp?isbn=9781492034865_ for release details.\n\nThe O'Reilly logo is a registered trademark of O'Reilly Media, Inc. _Practical Deep Learning for Cloud, Mobile, and Edge_ , the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.\n\nThe views expressed in this work are those of the authors, and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and\/or rights.\n\n978-1-492-03486-5\n\n[LSI]\n\n# Preface\n\nWe are experiencing a renaissance of artificial intelligence, and everyone and their neighbor wants to be a part of this movement. That's quite likely why you are browsing through this book. There are tons of books about deep learning out there. So you might ask us, very reasonably, why does this book even exist? We'll get to that in just a second.\n\nDuring our own deep learning journeys since 2013 (while building products at companies including Microsoft, NVIDIA, Amazon, and Square), we witnessed dramatic shifts in this landscape. Constantly evolving research was a given and a lack of mature tooling was a reality of life.\n\nWhile growing and learning from the community, we noticed a lack of clear guidance on how to convert research to an end product for everyday users. 
After all, the end user is somewhere in front of a web browser, a smartphone, or an edge device. This often involved countless hours of hacking and experimentation, extensively searching through blogs, GitHub issue threads, and Stack Overflow answers, and emailing authors of packages to get esoteric knowledge, as well as occasional \"Aha!\" moments. Even the books on the market tended to focus more on theory or how to use a specific tool. The best we could hope to learn from the available books was to build a toy example.\n\nTo fill this gap between theory and practice, we originally started giving talks on taking artificial intelligence from research to the end user with a particular focus on practical applications. The talks were structured to showcase motivating examples, as well as different levels of complexity based on skill level (from a hobbyist to a Google-scale engineer) and effort involved in deploying deep learning in production. We discovered that beginners and experts alike found value in these talks.\n\nOver time, the landscape thankfully became accessible to beginners and more tooling became available. Great online material like Fast.ai and DeepLearning.ai made understanding how to train AI models easier than ever. Books also cornered the market on teaching fundamentals using deep learning frameworks such as TensorFlow and PyTorch. But even with all of this, the wide chasm between theory and production remained largely unaddressed. And we wanted to bridge this gap. Thus, the book you are now reading.\n\nUsing approachable language as well as ready-to-run fun projects in computer vision, the book starts off with simple classifiers assuming no knowledge of machine learning and AI, gradually building to add complexity, improve accuracy and speed, scale to millions of users, deploy on a wide variety of hardware and software, and eventually culminate in using reinforcement learning to build a miniature self-driving car.\n\nNearly every chapter begins with a motivating example, establishes the questions upfront that one might ask through the process of building a solution, and discusses multiple approaches for solving problems, each with varying levels of complexity and effort involved. If you are seeking a quick solution, you might end up just reading a few pages of a chapter and be done. Someone wanting to gain a deeper understanding of the subject should read the entire chapter. Of course, everyone should peruse the case studies included in these chapters for two reasons\u2014they are fun to read and they showcase how people in the industry are using the concepts discussed in the chapter to build real products.\n\nWe also discuss many of the practical concerns faced by deep learning practitioners and industry professionals in building real-world applications using the cloud, browsers, mobile, and edge devices. We compiled a number of practical \"tips and tricks,\" as well as life lessons in this book to encourage our readers to build applications that can make someone's day just a little bit better.\n\n# To the Backend\/Frontend\/Mobile Software Developer\n\nYou are quite likely a proficient programmer already. Even if Python is an unfamiliar language to you, we expect that you will be able to pick it up easily and get started in no time. Best of all, we don't expect you to have any background in machine learning and AI; that's what we are here for! 
We believe that you will gain value from the book's focus on the following areas:\n\n * How to build user-facing AI products\n\n * How to train models quickly\n\n * How to minimize the code and effort required in prototyping\n\n * How to make models more performant and energy efficient\n\n * How to operationalize and scale, and estimate the costs involved\n\n * Discovering how AI is applied in the industry with 40+ case studies and real-world examples\n\n * Developing a broad-spectrum knowledge of deep learning\n\n * Developing a generalized skill set that can be applied on new frameworks (e.g., PyTorch), domains (e.g., healthcare, robotics), input modalities (e.g., video, audio, text), and tasks (e.g., image segmentation, one-shot learning)\n\n# To the Data Scientist\n\nYou might already be proficient at machine learning and potentially know how to train deep learning models. Good news! You can further enrich your skill set and deepen your knowledge in the field in order to build real products. This book will help inform your everyday work and beyond by covering how to:\n\n * Speed up your training, including on multinode clusters\n\n * Build an intuition for developing and debugging models, including hyperparameter tuning, thus dramatically improving model accuracy\n\n * Understand how your model works, uncover bias in the data, and automatically determine the best hyperparameters as well as model architecture using AutoML\n\n * Learn tips and tricks used by other data scientists, including gathering data quickly, tracking your experiments in an organized manner, sharing your models with the world, and being up to date on the best available models for your task\n\n * Use tools to deploy and scale your best model to real users, even automatically (without involving a DevOps team)\n\n# To the Student\n\nThis is a great time to be considering a career in AI\u2014it's turning out to be the next revolution in technology after the internet and smartphones. A lot of strides have been made, and a lot remains to be discovered. We hope that this book can serve as your first step in whetting your appetite for a career in AI and, even better, developing deeper theoretical knowledge. And the best part is that you don't have to spend a lot of money to buy expensive hardware. In fact, you can train on powerful hardware entirely for free from your web browser (thank you, Google Colab!). With this book, we hope you will:\n\n * Aspire to a career in AI by developing a portfolio of interesting projects\n\n * Learn from industry practices to help prepare for internships and job opportunities\n\n * Unleash your creativity by building fun applications like an autonomous car\n\n * Become an AI for Good champion by using your creativity to solve the most pressing problems faced by humanity\n\n# To the Teacher\n\nWe believe that this book can nicely supplement your coursework with fun, real-world projects. We've covered every step of the deep learning pipeline in detail, along with techniques on how to execute each step effectively and efficiently. Each of the projects we present in the book can make for great collaborative or individual work in the classroom throughout the semester. Eventually, we will be releasing PowerPoint Presentation Slides on _http:\/\/PracticalDeepLearning.ai_ that can accompany coursework.\n\n# To the Robotics Enthusiast\n\nRobotics is exciting. If you're a robotics enthusiast, we don't really need to convince you that adding intelligence to robots is the way to go. 
Increasingly capable hardware platforms such as Raspberry Pi, NVIDIA Jetson Nano, Google Coral, Intel Movidius, PYNQ-Z2, and others are helping drive innovation in the robotics space. As we grow towards Industry 4.0, some of these platforms will become more and more relevant and ubiquitous. With this book, you will:\n\n * Learn how to build and train AI, and then bring it to the edge\n\n * Benchmark and compare edge devices on performance, size, power, battery, and costs\n\n * Understand how to choose the optimal AI algorithm and device for a given scenario\n\n * Learn how other makers are building creative robots and machines\n\n * Learn how to build further progress in the field and showcase your work\n\n# What to Expect in Each Chapter\n\nChapter 1, _Exploring the Landscape of Artificial Intelligence_\n\nWe take a tour of this evolving landscape, from the 1950s to today, analyze the ingredients that make for a perfect deep learning recipe, get familiar with common AI terminology and datasets, and take a peek into the world of responsible AI.\n\nChapter 2, _What's in the Picture: Image Classification with Keras_\n\nWe delve into the world of image classification in a mere five lines of Keras code. We then learn what neural networks are paying attention to while making predictions by overlaying heatmaps on videos. Bonus: we hear the motivating personal journey of Fran\u00e7ois Chollet, the creator of Keras, illustrating the impact a single individual can have.\n\nChapter 3, _Cats Versus Dogs: Transfer Learning in 30 Lines with Keras_\n\nWe use transfer learning to reuse a previously trained network on a new custom classification task to get near state-of-the-art accuracy in a matter of minutes. We then slice and dice the results to understand how well it is classifying. Along the way, we build a common machine learning pipeline, which is repurposed throughout the book. Bonus: we hear from Jeremy Howard, cofounder of fast.ai, on how hundreds of thousands of students use transfer learning to jumpstart their AI journey.\n\nChapter 4, _Building a Reverse Image Search Engine: Understanding Embeddings_\n\nLike Google Reverse Image Search, we explore how one can use embeddings\u2014a contextual representation of an image to find similar images in under ten lines. And then the fun starts when we explore different strategies and algorithms to speed this up at scale, from thousands to several million images, and making them searchable in microseconds.\n\nChapter 5, _From Novice to Master Predictor: Maximizing Convolutional Neural Network Accuracy_\n\nWe explore strategies to maximize the accuracy that our classifier can achieve, with the help of a range of tools including TensorBoard, the What-If Tool, tf-explain, TensorFlow Datasets, AutoKeras, and AutoAugment. 
Along the way, we conduct experiments to develop an intuition of what parameters might or might not work for your AI task.\n\nChapter 6, _Maximizing Speed and Performance of TensorFlow: A Handy Checklist_\n\nWe take the speed of training and inference into hyperdrive by going through a checklist of 30 tricks to reduce as many inefficiencies as possible and maximize the value of your current hardware.\n\nChapter 7, _Practical Tools, Tips, and Tricks_\n\nWe diversify our practical skills in a variety of topics and tools, ranging from installation, data collection, experiment management, visualizations, and keeping track of state-of-the-art research all the way to exploring further avenues for building the theoretical foundations of deep learning.\n\nChapter 8, _Cloud APIs for Computer Vision: Up and Running in 15 Minutes_\n\nWork smart, not hard. We utilize the power of cloud AI platforms from Google, Microsoft, Amazon, IBM, and Clarifai in under 15 minutes. For tasks not solved with existing APIs, we then use custom classification services to train classifiers without coding. And then we pit them against each other in an open benchmark\u2014you might be surprised who won.\n\nChapter 9, _Scalable Inference Serving on Cloud with TensorFlow Serving and KubeFlow_\n\nWe take our custom trained model to the cloud\/on-premises to scalably serve from hundreds to millions of requests. We explore Flask, Google Cloud ML Engine, TensorFlow Serving, and KubeFlow, showcasing the effort, scenario, and cost-benefit analysis.\n\nChapter 10, _AI in the Browser with TensorFlow.js and ml5.js_\n\nEvery single individual who uses a computer or a smartphone uniformly has access to one software program\u2014their browser. Reach all those users with browser-based deep learning libraries including TensorFlow.js and ml5.js. Guest author Zaid Alyafeai walks us through techniques and tasks such as body pose estimation, generative adversarial networks (GANs), image-to-image translation with Pix2Pix, and more, running not on a server but in the browser itself. Bonus: hear from key contributors to TensorFlow.js and ml5.js on how the projects incubated.\n\nChapter 11, _Real-Time Object Classification on iOS with Core ML_\n\nWe explore the landscape of deep learning on mobile, with a sharp focus on the Apple ecosystem with Core ML. We benchmark models on different iPhones, investigate strategies to reduce app size and energy impact, and look into dynamic model deployment, training on device, and how professional apps are built.\n\nChapter 12, _Not Hotdog on iOS with Core ML and Create ML_\n\n_Silicon Valley_ 's Not Hotdog app (from HBO) is considered the \"Hello World\" of mobile AI, so we pay tribute by building a real-time version in not one, not two, but three different ways.\n\nChapter 13, _Shazam for Food: Developing Android Apps with TensorFlow Lite and ML Kit_\n\nWe bring AI to Android with the help of TensorFlow Lite. We then look at cross-platform development using ML Kit (which is built on top of TensorFlow Lite) and Fritz to explore the end-to-end development life cycle for building a self-improving AI app. Along the way we look at model versioning, A\/B testing, measuring success, dynamic updates, model optimization, and other topics. 
Bonus: we get to hear about the rich experience of Pete Warden (technical lead for Mobile and Embedded TensorFlow) in bringing AI to edge devices.\n\nChapter 14, _Building the Purrfect Cat Locator App with TensorFlow Object Detection API_\n\nWe explore four different methods for locating the position of objects within images. We take a look at the evolution of object detection over the years, and analyze the tradeoffs between speed and accuracy. This builds the base for case studies such as crowd counting, face detection, and autonomous cars.\n\nChapter 15, _Becoming a Maker: Exploring Embedded AI at the Edge_\n\nGuest author Sam Sterckval brings deep learning to low-power devices as he showcases a range of AI-capable edge devices with varying processing power and cost including Raspberry Pi, NVIDIA Jetson Nano, Google Coral, Intel Movidius, and PYNQ-Z2 FPGA, opening the doors for robotics and maker projects. Bonus: hear from the NVIDIA Jetson Nano team on how people are building creative robots quickly from their open source recipe book.\n\nChapter 16, _Simulating a Self-Driving Car Using End-to-End Deep Learning with Keras_\n\nUsing the photorealistic simulation environment of Microsoft AirSim, guest authors Aditya Sharma and Mitchell Spryn guide us in training a virtual car by driving it first within the environment and then teaching an AI model to replicate its behavior. Along the way, this chapter covers a number of concepts that are applicable in the autonomous car industry.\n\nChapter 17, _Building an Autonomous Car in Under an Hour: Reinforcement Learning with AWS DeepRacer_\n\nMoving from the virtual to the physical world, guest author Sunil Mallya showcases how AWS DeepRacer, a miniature car, can be assembled, trained, and raced in under an hour. And with the help of reinforcement learning, the car learns to drive on its own, penalizing its mistakes and maximizing success. We learn how to apply this knowledge to races from the Olympics of AI Driving to RoboRace (using full-sized autonomous cars). Bonus: hear from Anima Anandkumar (NVIDIA) and Chris Anderson (founder of DIY Robocars) on where the self-driving automotive industry is headed.\n\n# Conventions Used in This Book\n\nThe following typographical conventions are used in this book:\n\n_Italic_\n\nIndicates new terms, URLs, email addresses, filenames, and file extensions.\n\n`Constant width`\n\nUsed for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords.\n\n**`Constant width bold`**\n\nShows commands or other text that should be typed literally by the user.\n\n_`Constant width italic`_\n\nShows text that should be replaced with user-supplied values or by values determined by context.\n\n###### Tip\n\nThis element signifies a tip or suggestion.\n\n###### Note\n\nThis element signifies a general note.\n\n###### Warning\n\nThis element indicates a warning or caution.\n\n# Using Code Examples\n\nSupplemental material (code examples, exercises, etc.) is available for download at _http:\/\/PracticalDeepLearning.ai_. If you have a technical question or a problem using the code examples, please send email to _PracticalDLBook@gmail.com_.\n\nThis book is here to help you get your job done. In general, if example code is offered with this book, you may use it in your programs and documentation. You do not need to contact us for permission unless you're reproducing a significant portion of the code. 
For example, writing a program that uses several chunks of code from this book does not require permission. Selling or distributing examples from O'Reilly books does require permission. Answering a question by citing this book and quoting example code does not require permission. Incorporating a significant amount of example code from this book into your product's documentation does require permission.\n\nWe appreciate, but generally do not require, attribution. An attribution usually includes the title, author, publisher, and ISBN. For example: \" _Practical Deep Learning for Cloud, Mobile, and Edge_ by Anirudh Koul, Siddha Ganju, and Meher Kasam (O'Reilly). Copyright 2020 Anirudh Koul, Siddha Ganju, Meher Kasam, 978-1-492-03486-5.\"\n\nIf you feel your use of code examples falls outside fair use or the permission given above, feel free to contact us at _permissions@oreilly.com_.\n\n# O'Reilly Online Learning\n\n###### Note\n\nFor more than 40 years, _O'Reilly Media_ has provided technology and business training, knowledge, and insight to help companies succeed.\n\nOur unique network of experts and innovators share their knowledge and expertise through books, articles, conferences, and our online learning platform. O'Reilly's online learning platform gives you on-demand access to live training courses, in-depth learning paths, interactive coding environments, and a vast collection of text and video from O'Reilly and 200+ other publishers. For more information, please visit _http:\/\/oreilly.com_.\n\n# How to Contact Us\n\nPlease address comments and questions concerning this book to the publisher:\n\n * O'Reilly Media, Inc.\n * 1005 Gravenstein Highway North\n * Sebastopol, CA 95472\n * 800-998-9938 (in the United States or Canada)\n * 707-829-0515 (international or local)\n * 707-829-0104 (fax)\n\nO'Reilly has a web page for this book, where we list errata, examples, and any additional information. You can access this page at _https:\/\/oreil.ly\/practical-deep-learning_. The authors have a website for this book as well: _http:\/\/PracticalDeepLearning.ai_.\n\nEmail _bookquestions@oreilly.com_ to comment or ask technical questions about this book; email _PracticalDLBook@gmail.com_ to contact the authors about this book.\n\nFor more information about our books, courses, conferences, and news, see our website at _http:\/\/www.oreilly.com_.\n\nFind us on Facebook: _http:\/\/facebook.com\/oreilly_\n\nFollow us on Twitter: _http:\/\/twitter.com\/oreillymedia_\n\nWatch us on YouTube: _http:\/\/www.youtube.com\/oreillymedia_\n\n# Acknowledgments\n\n## Group Acknowledgments\n\nWe'd like to thank the following people for their immense help throughout our journey in writing this book. Without them, this book would not be possible.\n\nThis book came to life because of our development editor Nicole Tach\u00e9's efforts. She rooted for us throughout our journey and provided important guidance at each step of the process. She helped us prioritize the right material (believe it or not, the book was going to be even larger!) and ensured that we were on track. She was reader number one for every single draft that we had written, so our goal first and foremost was ensuring that she was able to follow the content, despite her being new to AI. We're immensely grateful for her support.\n\nWe also want to thank the rest of the O'Reilly team including our production editor Christopher Faucher who worked tireless hours on a tight schedule to ensure that this book made it to the printing press on time. 
We are also grateful to our copy editor Bob Russell who really impressed us with his lightning-fast edits and his attention to detail. He made us realize the importance of paying attention to English grammar in school (though a few years too late, we're afraid). We also want to acknowledge Rachel Roumeliotis (VP of Content Strategy) and Olivia MacDonald (Managing Editor for Development) for believing in the project and for offering their continued support.\n\nHuge thanks are in order for our guest authors who brought in their technical expertise to share their passion for this field with our readers. Aditya Sharma and Mitchell Spryn (from Microsoft) showed us that our love for playing video racing games can be put to good use to train autonomous cars by driving them in a simulated environment (with AirSim). Sunil Mallya (from Amazon) helped bring this knowledge to the physical world by demonstrating that all it takes is one hour to assemble and get a miniature autonomous car (AWS DeepRacer) to navigate its way around a track using reinforcement learning. Sam Sterckval (from Edgise) summarized the vast variety of embedded AI hardware available in the market, so we can get a leg up on our next robotics project. And finally, Zaid Alyafeai (from King Fahd University) demonstrated that browsers are no less capable of running serious interactive AI models (with the help of TensorFlow.js and ml5js).\n\nThe book is in its current shape because of timely feedback from our amazing technical reviewers, who worked tirelessly on our drafts, pointed out any technical inaccuracies they came across, and gave us suggestions on better conveying our ideas. Due to their feedback (and the ever-changing APIs of TensorFlow), we ended up doing a rewrite of a majority of the book from the original prerelease. We thank Margaret Maynard-Reid (Google Developer Expert for Machine Learning, you might have read her work while reading TensorFlow documentation), Paco Nathan (35+ years industry veteran at Derwin Inc., who introduced Anirudh to the world of public speaking), Andy Petrella (CEO and Founder at Kensu and creator of SparkNotebook whose technical insights stood up to his reputation), and Nikhita Koul (Senior Data Scientist at Adobe who read and suggested improvements after every iteration, effectively reading a few thousand pages, thus making the content significantly more approachable) for their detailed reviews of each chapter. Additionally, we also had a lot of help from reviewers with expertise in specific topics be it AI in the browser, mobile development, or autonomous cars. 
The chapter-wise reviewer list (in alphabetical order) is as follows:\n\n * Chapter 1: Dharini Chandrasekaran, Sherin Thomas\n\n * Chapter 2: Anuj Sharma, Charles Kozierok, Manoj Parihar, Pankesh Bamotra, Pranav Kant\n\n * Chapter 3: Anuj Sharma, Charles Kozierok, Manoj Parihar, Pankesh Bamotra, Pranav Kant\n\n * Chapter 4: Anuj Sharma, Manoj Parihar, Pankesh Bamotra, Pranav Kant\n\n * Chapter 6: Gabriel Ibagon, Jiri Simsa, Max Katz, Pankesh Bamotra\n\n * Chapter 7: Pankesh Bamotra\n\n * Chapter 8: Deepesh Aggarwal\n\n * Chapter 9: Pankesh Bamotra\n\n * Chapter 10: Brett Burley, Laurent Denoue, Manraj Singh\n\n * Chapter 11: David Apgar, James Webb\n\n * Chapter 12: David Apgar\n\n * Chapter 13: Jesse Wilson, Salman Gadit\n\n * Chapter 14: Akshit Arora, Pranav Kant, Rohit Taneja, Ronay Ak\n\n * Chapter 15: Geertrui Van Lommel, Joke Decubber, Jolien De Cock, Marianne Van Lommel, Sam Hendrickx\n\n * Chapter 16: Dario Salischiker, Kurt Niebuhr, Matthew Chan, Praveen Palanisamy\n\n * Chapter 17: Kirtesh Garg, Larry Pizette, Pierre Dumas, Ricardo Sueiras, Segolene Dessertine-panhard, Sri Elaprolu, Tatsuya Arai\n\nWe have short excerpts throughout the book from creators who gave us a little peek into their world, and how and why they built the project for which they are most known. We are grateful to Fran\u00e7ois Chollet, Jeremy Howard, Pete Warden, Anima Anandkumar, Chris Anderson, Shanqing Cai, Daniel Smilkov, Cristobal Valenzuela, Daniel Shiffman, Hart Woolery, Dan Abdinoor, Chitoku Yato, John Welsh, and Danny Atsmo.\n\n## Personal Acknowledgments\n\n> \"I would like to thank my family\u2014Arbind, Saroj, and Nikhita who gave me the support, resources, time, and freedom to pursue my passions. To all the hackers and researchers at Microsoft, Aira, and Yahoo who stood with me in turning ideas to prototypes to products, it's not the successes but the hiccups which taught us a lot during our journey together. Our trials and tribulations provided glorious material for this book, enough to exceed our original estimate by an extra 250 pages! To my academic families at Carnegie Mellon, Dalhousie, and Thapar University, you taught me more than just academics (unlike what my GPA might suggest). And to the blind and low-vision community, you inspired me daily to work in the AI field by demonstrating that armed with the right tools, people are truly limitless.\"\n> \n> Anirudh\n\n> \"My grandfather, an author himself, once told me, 'Authoring a book is harder than I thought and more rewarding than I could have ever imagined.' I am eternally grateful to my grandparents and family, Mom, Dad, and Shriya for advocating seeking out knowledge and helping me become the person I am today. To my wonderful collaborators and mentors from Carnegie Mellon University, CERN, NASA FDL, Deep Vision, NITH, and NVIDIA who were with me throughout my journey, I am indebted to them for teaching and helping develop a scientific temperament. To my friends, who I hope still remember me, as I've been pretty invisible as of late, I would like to say a big thanks for being incredibly patient. I hope to see you all around. To my friends who selflessly reviewed chapters of the book and acted as a sounding board, a huge thank you\u2014without you, the book would not have taken shape.\"\n> \n> Siddha\n\n> \"I am indebted to my parents Rajagopal and Lakshmi for their unending love and support starting from the very beginning and their strong will to provide me with a good life and education. 
I am grateful to my professors from UF and VNIT who taught me well and made me glad that I majored in CS. I am thankful to my incredibly supportive partner Julia Tanner who, for nearly two years, had to endure nights and weekends of unending Skype calls with my coauthors, as well as my several terrible jokes (some of which unfortunately made it into this book). I'd also like to acknowledge the role of my wonderful manager Joel Kustka in supporting me during the process of writing this book. A shout out to my friends who were incredibly understanding when I couldn't hang out with them as often as they would have liked me to.\"\n> \n> Meher\n\nLast but not least, thank you to the makers of Grammarly, which empowered people with mediocre English grades to become published authors!\n\n# Chapter 1. Exploring the Landscape of Artificial Intelligence\n\nFollowing are the words from Dr. May Carson's (Figure 1-1) seminal paper on the changing role of artificial intelligence (AI) in human life in the twenty-first century:\n\n> Artificial Intelligence has often been termed as the electricity of the 21st century. Today, artificial intelligent programs will have the power to drive all forms of industry (including health), design medical devices and build new types of products and services, including robots and automobiles. As AI is advancing, organizations are already working to ensure those artificial intelligence programs can do their job and, importantly, avoid mistakes or dangerous accidents. Organizations need AI, but they also recognize that not everything they can do with AI is a good idea.\n> \n> We have had extensive studies of what it takes to operate artificial intelligence using these techniques and policies. The main conclusion is that the amount of money spent on AI programs per person, per year versus the amount used to research, build and produce them is roughly equal. That seems obvious, but it's not entirely true.\n> \n> First, AI systems need support and maintenance to help with their functions. In order to be truly reliable, they need people to have the skills to run them and to help them perform some of their tasks. It's essential that AI organizations provide workers to do the complex tasks needed by those services. It's also important to understand the people who are doing those jobs, especially once AI is more complex than humans. For example, people will most often work in jobs requiring advanced knowledge but are not necessarily skilled in working with systems that need to be built and maintained.\n\n###### Figure 1-1. Dr. May Carson\n\n# An Apology\n\nWe now have to come clean and admit that everything in this chapter up to now was entirely fake. Literally everything! All of the text (other than the first italicized sentence, which was written by us as a seed) was generated using the GPT-2 model (built by Adam King) on the website _TalkToTransformer.com_. The name of the author was generated using the \"Nado Name Generator\" on the website _Onitools.moe_. At least the picture of the author must be real, right? Nope, the picture was generated from the website _ThisPersonDoesNotExist.com_ which shows us new pictures of nonexistent people each time we reload the page using the magic of Generative Adversarial Networks (GANs).\n\nAlthough we feel ambivalent, to say the least, about starting this entire book on a dishonest note, we thought it was important to showcase the state-of-the-art of AI when you, our reader, least expected it. 
It is, frankly, mind-boggling and amazing and terrifying at the same time to see what AI is already capable of. The fact that it can create sentences out of thin air that are more intelligent and eloquent than some world leaders is...let's just say big league.\n\nThat being said, one thing AI can't appropriate from us just yet is the ability to be fun. We're hoping that those first three fake paragraphs will be the driest in this entire book. After all, we don't want to be known as \"the authors more boring than a machine.\"\n\n# The Real Introduction\n\nRecall that time you saw a magic show during which a trick dazzled you enough to think, \"How the heck did they do that?!\" Have you ever wondered the same about an AI application that made the news? In this book, we want to equip you with the knowledge and tools to not only deconstruct but also build a similar one.\n\nThrough accessible, step-by-step explanations, we dissect real-world applications that use AI and showcase how you would go about creating them on a wide variety of platforms\u2014from the cloud to the browser to smartphones to edge AI devices, and finally landing on the ultimate challenge currently in AI: autonomous cars.\n\nIn most chapters, we begin with a motivating problem and then build an end-to-end solution one step at a time. In the earlier portions of the book, we develop the necessary skills to build the brains of the AI. But that's only half the battle. The true value of building AI is in creating usable applications. And we're not talking about toy prototypes here. We want you to construct software that can be used in the real world by real people for improving their lives. Hence, the word \"Practical\" in the book title. To that effect, we discuss various options that are available to us and choose the appropriate options based on performance, energy consumption, scalability, reliability, and privacy trade-offs.\n\nIn this first chapter, we take a step back to appreciate this moment in AI history. We explore the meaning of AI, specifically in the context of deep learning and the sequence of events that led to deep learning becoming one of the most groundbreaking areas of technological progress in the early twenty-first century. We also examine the core components underlying a complete deep learning solution, to set us up for the subsequent chapters in which we actually get our hands dirty.\n\nSo our journey begins here, with a very fundamental question.\n\n# What Is AI?\n\nThroughout this book, we use the terms \"artificial intelligence,\" \"machine learning,\" and \"deep learning\" frequently, sometimes interchangeably. But in the strictest technical terms, they mean different things. Here's a synopsis of each (see also Figure 1-2):\n\nAI\n\nThis gives machines the capabilities to mimic human behavior. IBM's Deep Blue is a recognizable example of AI.\n\nMachine learning\n\nThis is the branch of AI in which machines use statistical techniques to learn from previous information and experiences. The goal is for the machine to take action in the future based on learning observations from the past. If you watched IBM's Watson take on Ken Jennings and Brad Rutter on _Jeopardy!_ , you saw machine learning in action. 
More relatably, the next time a spam email doesn't reach your inbox, you can thank machine learning.\n\nDeep learning\n\nThis is a subfield of machine learning in which deep, multilayered neural networks are used to make predictions, especially excelling in computer vision, speech recognition, natural language understanding, and so on.\n\n###### Figure 1-2. The relationship between AI, machine learning, and deep learning\n\nThroughout this book, we primarily focus on deep learning.\n\n## Motivating Examples\n\nLet's cut to the chase. What compelled us to write this book? Why did you spend your hard-earned money buying this book? Our motivation was simple: to get more people involved in the world of AI. The fact that you're reading this book means that our job is already halfway done.\n\nHowever, to really pique your interest, let's take a look at some stellar examples that demonstrate what AI is already capable of doing:\n\n * \"DeepMind's AI agents conquer human pros at StarCraft II\": _The Verge_ , 2019\n\n * \"AI-Generated Art Sells for Nearly Half a Million Dollars at Christie's\": _AdWeek_ , 2018\n\n * \"AI Beats Radiologists in Detecting Lung Cancer\": _American Journal of Managed Care_ , 2019\n\n * \"Boston Dynamics Atlas Robot Can Do Parkour\": _ExtremeTech_ , 2018\n\n * \"Facebook, Carnegie Mellon build first AI that beats pros in 6-player poker\": _ai.facebook.com_ , 2019\n\n * \"Blind users can now explore photos by touch with Microsoft's Seeing AI\": _TechCrunch_ , 2019\n\n * \"IBM's Watson supercomputer defeats humans in final Jeopardy match\": _VentureBeat_ , 2011\n\n * \"Google's ML-Jam challenges musicians to improvise and collaborate with AI\": _VentureBeat_ , 2019\n\n * \"Mastering the Game of Go without Human Knowledge\": _Nature_ , 2017\n\n * \"Chinese AI Beats Doctors in Diagnosing Brain Tumors\": _Popular Mechanics_ , 2018\n\n * \"Two new planets discovered using artificial intelligence\": _Phys.org_ , 2019\n\n * \"Nvidia's latest AI software turns rough doodles into realistic landscapes\": _The Verge_ , 2019\n\nThese applications of AI serve as our North Star. The level of these achievements is the equivalent of a gold-medal-winning Olympic performance. However, applications solving a host of problems in the real world is the equivalent of completing a 5K race. Developing these applications doesn't require years of training, yet doing so provides the developer immense satisfaction when crossing the finish line. We are here to coach you through that 5K.\n\nThroughout this book, we intentionally prioritize breadth. The field of AI is changing so quickly that we can only hope to equip you with the proper mindset and array of tools. In addition to tackling individual problems, we will look at how different, seemingly unrelated problems have fundamental overlaps that we can use to our advantage. As an example, sound recognition uses Convolutional Neural Networks (CNNs), which are also the basis for modern computer vision. We touch upon practical aspects of multiple areas so you will be able to go from 0 to 80 quickly to tackle real-world problems. If we've generated enough interest that you decide you then want to go from 80 to 95, we'd consider our goal achieved. As the oft-used phrase goes, we want to \"democratize AI.\"\n\nIt's important to note that much of the progress in AI happened in just the past few years\u2014it's difficult to overstate that. To illustrate how far we've come along, take this example: five years ago, you needed a Ph.D. 
just to get your foot in the door of the industry. Five years later, you don't even need a Ph.D. to write an entire book on the subject. (Seriously, check our profiles!)\n\nAlthough modern applications of deep learning seem pretty amazing, they did not get there all on their own. They stood on the shoulders of many giants of the industry who have been pushing the limits for decades. Indeed, we can't fully appreciate the significance of this time without looking at the entire history.\n\n# A Brief History of AI\n\nLet's go back in time a little bit: our whole universe was in a hot dense state. Then nearly 14 billion years ago expansion started, wait...okay, we don't have to go back that far (but now we have the song stuck in your head for the rest of the day, right?). It was really just 70 years ago when the first seeds of AI were planted. Alan Turing, in his 1950 paper, \"Computing Machinery and Intelligence,\" first asked the question \"Can machines think?\" This really gets into a larger philosophical debate of sentience and what it means to be human. Does it mean to possess the ability to compose a concerto and know that you've composed it? Turing found that framework rather restrictive and instead proposed a test: if a human cannot distinguish a machine from another human, does it really matter? An AI that can mimic a human is, in essence, human.\n\n## Exciting Beginnings\n\nThe term \"artificial intelligence\" was coined by John McCarthy in 1956 at the Dartmouth Summer Research Project. Physical computers weren't even really a thing back then, so it's remarkable that they were able to discuss futuristic areas such as language simulation, self-improving learning machines, abstractions on sensory data, and more. Much of it was theoretical, of course. This was the first time that AI became a field of research rather than a single project.\n\nThe paper \"Perceptron: A Perceiving and Recognizing Automaton\" in 1957 by Frank Rosenblatt laid the foundation for deep neural networks. He postulated that it should be feasible to construct an electronic or electromechanical system that would learn to recognize similarities between patterns of optical, electrical, or tonal information. This system would function similarly to the human brain. Rather than using a rule-based model (which was standard for the algorithms at the time), he proposed using statistical models to make predictions.\n\n###### Note\n\nThroughout this book, we repeat the phrase _neural network_. What is a neural network? It is a simplified model of the human brain. Much like the brain, it has _neurons_ that activate when encountering something familiar. The different neurons are connected via connections (corresponding to synapses in our brain) that help information flow from one neuron to another.\n\nIn Figure 1-3, we can see an example of the simplest neural network: a perceptron. Mathematically, the perceptron can be expressed as follows:\n\n_output = f(x₁, x₂, x₃) = x₁w₁ \+ x₂w₂ \+ x₃w₃ \+ b_\n\n###### Figure 1-3. An example of a perceptron\n\nIn 1965, Ivakhnenko and Lapa published the first working neural network in their paper \"Group Method of Data Handling\u2014A Rival Method of Stochastic Approximation.\" There is some controversy in this area, but Ivakhnenko is regarded by some as the father of deep learning.\n\nAround this time, bold predictions were made about what machines would be capable of doing. Machine translation, speech recognition, and more would be performed better than humans.
Governments around the world were excited and began opening up their wallets to fund these projects. This gold rush started in the late 1950s and was alive and well into the mid-1970s.\n\n## The Cold and Dark Days\n\nWith millions of dollars invested, the first systems were put into practice. It turned out that a lot of the original prophecies were unrealistic. Speech recognition worked only if words were spoken in a certain way, and even then, only for a limited set of words. Language translation turned out to be heavily error-prone and much more expensive than what it would cost a human to do. Perceptrons (essentially single-layer neural networks) quickly hit a cap for making reliable predictions. This limited their usefulness for most problems in the real world. This is because they are linear functions, whereas problems in the real world often require a nonlinear classifier for accurate predictions. Imagine trying to fit a line to a curve!\n\nSo what happens when you over-promise and under-deliver? You lose funding. The Defense Advanced Research Projects Agency, commonly known as DARPA (yeah, those people; the ones who built the ARPANET, which later became the internet), funded a lot of the original projects in the United States. However, the lack of results over nearly two decades increasingly frustrated the agency. It was easier to land a man on the moon than to get a usable speech recognizer!\n\nSimilarly, across the pond, the Lighthill Report was published in 1974, which said, \"The general-purpose robot is a mirage.\" Imagine being a Brit in 1974 watching the bigwigs in computer science debating on the BBC whether AI was a waste of resources. As a consequence, AI research was decimated in the United Kingdom and subsequently across the world, destroying many careers in the process. This phase of lost faith in AI lasted about two decades and came to be known as the \"AI Winter.\" If only Ned Stark had been around back then to warn them.\n\n## A Glimmer of Hope\n\nEven during those freezing days, there was some groundbreaking work done in this field. Sure, perceptrons\u2014being linear functions\u2014had limited capabilities. How could one fix that? By chaining them in a network, such that the output of one (or more) perceptron becomes the input to one (or more) other perceptrons. In other words, a multilayer neural network, as illustrated in Figure 1-4. The higher the number of layers, the more nonlinearity it could learn, resulting in better predictions. There is just one issue: how does one train it? Enter Geoffrey Hinton and friends. They published a technique called _backpropagation_ in 1986 in the paper \"Learning representations by back-propagating errors.\" How does it work? Make a prediction, see how far off the prediction is from reality, and propagate back the magnitude of the error into the network so it can learn to fix it. You repeat this process until the error becomes insignificant. A simple yet powerful concept. We use the term backpropagation repeatedly throughout this book.\n\n###### Figure 1-4. An example multilayer neural network (image source)\n\nIn 1989, George Cybenko provided the first proof of the _Universal Approximation Theorem_, which states that a neural network with a single hidden layer (given enough neurons) is theoretically capable of approximating any continuous function. This was remarkable because it meant that neural networks could (at least in theory) outdo any machine learning approach. Heck, it could even mimic the human brain. But all of this was only on paper.
The size of this network would quickly pose limitations in the real world. This could be overcome somewhat by using multiple hidden layers and training the network with...wait for it...backpropagation!\n\nOn the more practical side of things, a team at Carnegie Mellon University built the first-ever autonomous vehicle, NavLab 1, in 1986 (Figure 1-5). It initially used a single-layer neural network to control the angle of the steering wheel. This eventually led to NavLab 5 in 1995. During a demonstration, a car drove all but 50 of the 2,850-mile journey from Pittsburgh to San Diego on its own. NavLab got its driver's license before many Tesla engineers were even born!\n\n###### Figure 1-5. The autonomous NavLab 1 from 1986 in all its glory (image source)\n\nAnother standout example from the 1980s was at the United States Postal Service (USPS). The service needed to sort postal mail automatically according to the postal codes (ZIP codes) they were addressed to. Because a lot of the mail has always been handwritten, optical character recognition (OCR) could not be used. To solve this problem, Yann LeCun et al. used handwritten data from the National Institute of Standards and Technology (NIST) to demonstrate that neural networks were capable of recognizing these handwritten digits in their paper \"Backpropagation Applied to Handwritten Zip Code Recognition.\" The agency's network, LeNet, became what the USPS used for decades to automatically scan and sort the mail. This was remarkable because it was the first convolutional neural network that really worked in the wild. Eventually, in the 1990s, banks would use an evolved version of the model called LeNet-5 to read handwritten numbers on checks. This laid the foundation for modern computer vision.\n\n###### Note\n\nThose of you who have read about the MNIST dataset might have already noticed a connection to the NIST mention we just made. That is because the MNIST dataset essentially consists of a subset of images from the original NIST dataset that had some modifications (the \"M\" in \"MNIST\") applied to them to ease the train and test process for the neural network. Modifications, some of which you can see in Figure 1-6, included resizing them to 28 x 28 pixels, centering the digit in that area, antialiasing, and so on.\n\n###### Figure 1-6. A sample of handwritten digits from the MNIST dataset\n\nA few others kept their research going, including J\u00fcrgen Schmidhuber, who proposed networks like the Long Short-Term Memory (LSTM) with promising applications for text and speech.\n\nAt that point, even though the theories were becoming sufficiently advanced, results could not be demonstrated in practice. The main reason was that it was too computationally expensive for the hardware back then and scaling them for larger tasks was a challenge. Even if by some miracle the hardware was available, the data to realize its full potential was certainly not easy to come by. After all, the internet was still in its dial-up phase. Support Vector Machines (SVMs), a machine learning technique introduced for classification problems in 1995, were faster and provided reasonably good results on smaller amounts of data, and thus had become the norm.\n\nAs a result, AI and deep learning's reputation was poor. 
Graduate students were warned against doing deep learning research because this is the field \"where smart scientists would see their careers end.\" People and companies working in the field would use alternative words like informatics, cognitive systems, intelligent agents, machine learning, and others to dissociate themselves from the AI name. It's a bit like when the U.S. Department of War was rebranded as the Department of Defense to be more palatable to the people.\n\n## How Deep Learning Became a Thing\n\nLuckily for us, the 2000s brought high-speed internet, smartphone cameras, video games, and photo-sharing sites like Flickr and Creative Commons (bringing the ability to legally reuse other people's photos). People in massive numbers were able to quickly take photos with a device in their pockets and then instantly upload. The data lake was filling up, and gradually there were ample opportunities to take a dip. The 14 million-image ImageNet dataset was born from this happy confluence and some tremendous work by (then Princeton's) Fei-Fei Li and company.\n\nDuring the same decade, PC and console gaming became really serious. Gamers demanded better and better graphics from their video games. This, in turn, pushed Graphics Processing Unit (GPU) manufacturers such as NVIDIA to keep improving their hardware. The key thing to remember here is that GPUs are damn good at matrix operations. Why is that the case? Because the math demands it! In computer graphics, common tasks such as moving objects, rotating them, changing their shape, adjusting their lighting, and so on all use matrix operations. And GPUs specialize in doing them. And you know what else needs a lot of matrix calculations? Neural networks. It's one big happy coincidence.\n\nWith ImageNet ready, the annual ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was set up in 2010 to openly challenge researchers to come up with better techniques for classifying this data. A subset of 1,000 categories consisting of approximately 1.2 million images was available to push the boundaries of research. The state-of-the-art computer-vision techniques like Scale-Invariant Feature Transform (SIFT) + SVM yielded a 28% (in 2010) and a 25% (2011) top-5 error rate (i.e., if one of the top five guesses ranked by probability matches, it's considered accurate).\n\nAnd then came 2012, with an entry on the leaderboard that nearly halved the error rate down to 16%. Alex Krizhevsky, Ilya Sutskever (who eventually founded OpenAI), and Geoffrey Hinton from the University of Toronto submitted that entry. Aptly called AlexNet, it was a CNN that was inspired by LeNet-5. Even at just eight layers, AlexNet had a massive 60 million parameters and 650,000 neurons, resulting in a 240 MB model. It was trained over one week using two NVIDIA GPUs. This single event took everyone by surprise, proving the potential of CNNs that snowballed into the modern deep learning era.\n\nFigure 1-7 quantifies the progress that CNNs have made in the past decade. We saw a 40% year-on-year decrease in classification error rate among ImageNet LSVRC\u2013winning entries since the arrival of deep learning in 2012. As CNNs grew deeper, the error continued to decrease.\n\n###### Figure 1-7. Evolution of winning entries at ImageNet LSVRC\n\nKeep in mind we are vastly simplifying the history of AI, and we are surely glossing over some of the details. Essentially, it was a confluence of data, GPUs, and better techniques that led to this modern era of deep learning. 
And the progress kept expanding further into newer territories. As Table 1-1 highlights, what was in the realm of science fiction is already a reality.\n\nTable 1-1. A highlight reel of the modern deep learning era 2012 | Neural network from Google Brain team starts recognizing cats after watching YouTube videos \n---|--- \n2013 |\n\n * Researchers begin tinkering with deep learning on a variety of tasks\n\n * word2vec brings context to words and phrases, getting one step closer to understanding meanings\n\n * Error rate for speech recognition went down 25%\n\n2014 |\n\n * GANs invented\n\n * Skype translates speech in real time\n\n * Eugene Goostman, a chatbot, passes the Turing Test\n\n * Sequence-to-sequence learning with neural networks invented\n\n * Image captioning translates images to sentences\n\n2015 |\n\n * Microsoft ResNet beats humans in image accuracy, trains 1,000-layer network\n\n * Baidu's Deep Speech 2 does end-to-end speech recognition\n\n * Gmail launches Smart Reply\n\n * YOLO (You Only Look Once) does object detection in real time\n\n * Visual Question Answering allows asking questions based on images\n\n2016 |\n\n * AlphaGo wins against professional human Go players\n\n * Google WaveNets help generate realistic audio\n\n * Microsoft achieves human parity in conversational speech recognition\n\n2017 |\n\n * AlphaGo Zero learns to play Go itself in 3 days\n\n * Capsule Nets fix flaws in CNNs\n\n * Tensor Processing Units (TPUs) introduced\n\n * California allows sale of autonomous cars\n\n * Pix2Pix allows generating images from sketches\n\n2018 |\n\n * AI designs AI better than humans with Neural Architecture Search\n\n * Google Duplex demo makes restaurant reservations on our behalf\n\n * Deepfakes swap one face for another in videos\n\n * Google's BERT succeeds humans in language understanding tasks\n\n * DawnBench and MLPerf established to benchmark AI training\n\n2019 |\n\n * OpenAI Five crushes Dota2 world champions\n\n * StyleGan generates photorealistic images\n\n * OpenAI GPT-2 generates realistic text passages\n\n * Fujitsu trains ImageNet in 75 seconds\n\n * Microsoft invests $1 billion in OpenAI\n\n * AI by the Allen Institute passes 12th-grade science test with 80% score\n\nHopefully, you now have a historical context of AI and deep learning and have an understanding of why this moment in time is significant. It's important to recognize the rapid rate at which progress is happening in this area. But as we have seen so far, this was not always the case.\n\nThe original estimate for achieving real-world computer vision was \"one summer\" back in the 1960s, according to two of the field's pioneers. They were off by only half a century! It's not easy being a futurist. A study by Alexander Wissner-Gross observed that it took 18 years on average between when an algorithm was proposed and the time it led to a breakthrough. On the other hand, that gap was a mere three years on average between when a dataset was made available and the breakthrough it helped achieve! Look at any of the breakthroughs in the past decade. The dataset that enabled that breakthrough was very likely made available just a few years prior.\n\nData was clearly the limiting factor. This shows the crucial role that a good dataset can play for deep learning. However, data is not the only factor. 
Let's look at the other pillars that make up the foundation of the perfect deep learning solution.\n\n# Recipe for the Perfect Deep Learning Solution\n\nBefore Gordon Ramsay starts cooking, he ensures he has all of the ingredients ready to go. The same goes for solving a problem using deep learning (Figure 1-8).\n\n###### Figure 1-8. Ingredients for the perfect deep learning solution\n\nAnd here's your deep learning _mise en place_!\n\n _Dataset + Model + Framework + Hardware = Deep Learning Solution_\n\nLet's look into each of these in a little more detail.\n\n## Datasets\n\nJust like Pac-Man is hungry for dots, deep learning is hungry for data\u2014lots and lots of data. It needs this amount of data to spot meaningful patterns that can help make robust predictions. Traditional machine learning was the norm in the 1980s and 1990s because it could function with even a few hundred to a few thousand examples. In contrast, Deep Neural Networks (DNNs), when built from scratch, would need orders of magnitude more data for typical prediction tasks. The upside here is far better predictions.\n\nIn this century, we are witnessing a data explosion with quintillions of bytes of data being created every single day\u2014images, text, videos, sensor data, and more. But to make effective use of this data, we need labels. To build a sentiment classifier to know whether an Amazon review is positive or negative, we need thousands of sentences and an assigned emotion for each. To train a face segmentation system for a Snapchat lens, we need the precise location of eyes, lips, nose, and so forth on thousands of images. To train a self-driving car, we need video segments labeled with the human driver's reactions on controls such as the brakes, accelerator, steering wheel, and so forth. These labels act as teachers to our AI and are far more valuable than unlabeled data alone.\n\nGetting labels can be pricey. It's no wonder that there is an entire industry around crowdsourcing labeling tasks among thousands of workers. Each label might cost anywhere from a few cents to a few dollars, depending on the time spent by the workers to assign it. For example, during the development of the Microsoft COCO (Common Objects in Context) dataset, it took roughly three seconds to label the name of each object in an image, approximately 30 seconds to place a bounding box around each object, and 79 seconds to draw the outlines for each object. Repeat that hundreds of thousands of times and you can begin to fathom the costs around some of the larger datasets. Some labeling companies like Appen and Scale AI are already valued at more than a billion dollars each.\n\nWe might not have a million dollars in our bank account. But luckily for us, two good things happened in this deep learning revolution:\n\n * Gigantic labeled datasets have been generously made public by major companies and universities.\n\n * A technique called _transfer learning_ emerged, which allows us to tune our models to datasets with even hundreds of examples\u2014as long as our model was originally trained on a larger dataset similar to our current set. We use this technique repeatedly in the book, including in Chapter 5, where we show that even a few tens of examples can get us decent performance. Transfer learning busts the myth that big data is necessary for training a good model. Welcome to the world of _tiny data_!\n\nTable 1-2 showcases some of the popular datasets out there today for a variety of deep learning tasks.\n\nTable 1-2.
A diverse range of public datasets **Data type** | **Name** | **Details** \n---|---|--- \nImage |\n\nOpen Images V4\n\n(from Google) | * Nine million images in 19,700 categories\n\n * 1.74 Million images with 600 categories (bounding boxes)\n\n| Microsoft COCO |\n\n * 330,000 images with 80 object categories\n\n * Contains bounding boxes, segmentation, and five captions per image\n\nVideo | YouTube-8M |\n\n * 6.1 million videos, 3,862 classes, 2.6 billion audio-visual features\n\n * 3.0 labels\/video\n\n * 1.53 TB of randomly sampled videos\n\nVideo, images | BDD100K\n\n(from UC Berkeley) | * 100,000 driving videos over 1,100 hours\n\n * 100,000 images with bounding boxes for 10 categories\n\n * 100,000 images with lane markings\n\n * 100,000 images with drivable-area segmentation\n\n * 10,000 images with pixel-level instance segmentation\n\nWaymo Open Dataset | 3,000 driving scenes totaling 16.7 hours of video data, 600,000 frames, approximately 25 million 3D bounding boxes, and 22 million 2D bounding boxes \nText | SQuAD | 150,000 Question and Answer snippets from Wikipedia \n| Yelp Reviews | Five million Yelp reviews \nSatellite data | Landsat Data | Several million satellite images (100 nautical mile width and height), along with eight spectral bands (15- to 60-meter spatial resolution)\n\nAudio\n\n| Google AudioSet | 2,084,320 10-second sound clips from YouTube with 632 categories \nLibriSpeech | 1,000 hours of read English speech\n\n## Model Architecture\n\nAt a high level, a model is just a function. It takes in one or more inputs and gives an output. The input might be in the form of text, images, audio, video, and more. The output is a prediction. A good model is one whose predictions reliably match the expected reality. The model's accuracy on a dataset is a major determining factor as to whether it's suitable for use in a real-world application. For many people, this is all they really need to know about deep learning models. But it's when we peek into the inner workings of a model that it becomes really interesting (Figure 1-9).\n\n###### Figure 1-9. A black box view of a deep learning model\n\nInside the model is a graph that consists of nodes and edges. Nodes represent mathematical operations, whereas edges represent how the data flows from one node to another. In other words, if the output of one node can become the input to one or more nodes, the connections between those nodes are represented by edges. The structure of this graph determines the potential for accuracy, its speed, how much resources it consumes (memory, compute, and energy), and the type of input it's capable of processing.\n\nThe layout of the nodes and edges is known as the _architecture_ of the model. Essentially, it's a blueprint. Now, the blueprint is only half the picture. We still need the actual building. Training is the process that utilizes this blueprint to construct that building. We train a model by repeatedly 1) feeding it input data, 2) getting outputs from it, 3) monitoring how far these predictions are from the expected reality (i.e., the labels associated with the data), and then, 4) propagating the magnitude of error back to the model so that it can progressively learn to correct itself. This training process is performed iteratively until we are satisfied with the accuracy of the predictions.\n\nThe result from this training is a set of numbers (also known as weights) that is assigned to each of the nodes. 
These weights are necessary parameters for the nodes in the graph to operate on the input given to them. Before the training begins, we usually assign random numbers as weights. The goal of the training process is essentially to gradually tune the values of each set of these weights until they, in conjunction with their corresponding nodes, produce satisfactory predictions.\n\nTo understand weights a little better, let's examine the following dataset with two inputs and one output:\n\nTable 1-3. Example dataset **input 1** | **input 2** | **output** \n---|---|--- \n1 | 6 | 20 \n2 | 5 | 19 \n3 | 4 | 18 \n4 | 3 | 17 \n5 | 2 | 16 \n6 | 1 | 15\n\nUsing linear algebra (or guesswork in our minds), we can deduce that the equation governing this dataset is:\n\n_output = f(input₁, input₂) = 2 × input₁ \+ 3 × input₂_\n\nIn this case, the weights for this mathematical operation are 2 and 3. A deep neural network has millions of such weight parameters.\n\nDepending on the types of nodes used, different families of model architectures will be better suited for different kinds of input data. For example, CNNs are used for image and audio, whereas Recurrent Neural Networks (RNNs) and LSTMs are often used in text processing.\n\nIn general, training one of these models from scratch can take a pretty significant amount of time, potentially weeks. Luckily for us, many researchers have already done the difficult work of training them on a generic dataset (like ImageNet) and have made them available for everyone to use. What's even better is that we can take these available models and tune them to our specific dataset. This process is called transfer learning and accounts for the vast majority of practitioners' needs.\n\nCompared to training from scratch, transfer learning provides a two-fold advantage: significantly reduced training time (a few minutes to hours instead of weeks), and the ability to work with a substantially smaller dataset (hundreds to thousands of data samples instead of millions). Table 1-4 shows some famous examples of model architectures.\n\nTable 1-4. Example model architectures over the years **Task** | **Example model architectures** \n---|--- \nImage classification | ResNet-152 (2015), MobileNet (2017) \nText classification | BERT (2018), XLNet (2019) \nImage segmentation | U-Net (2015), DeepLabV3 (2018) \nImage translation | Pix2Pix (2017) \nObject detection | YOLO9000 (2016), Mask R-CNN (2017) \nSpeech generation | WaveNet (2016)\n\nEach one of the models from Table 1-4 has a published accuracy metric on reference datasets (e.g., ImageNet for classification, MS COCO for detection). Additionally, these architectures have their own characteristic resource requirements (model size in megabytes, computation requirements in floating-point operations, or FLOPS).\n\nWe explore transfer learning in-depth in the upcoming chapters. Now, let's look at the kinds of deep learning frameworks and services that are available to us.\n\n###### Note\n\nWhen Kaiming He et al. came up with the 152-layer ResNet architecture in 2015\u2014a feat of its day considering the previous largest GoogLeNet model consisted of 22 layers\u2014there was just one question on everyone's mind: \"Why not 153 layers?\" The reason, as it turns out, was that Kaiming ran out of GPU memory!\n\n## Frameworks\n\nThere are several deep learning libraries out there that help us train our models.
Additionally, there are frameworks that specialize in using those trained models to make predictions (or _inference_ ), optimizing for where the application resides.\n\nHistorically, as is the case with software generally, many libraries have come and gone\u2014Torch (2002), Theano (2007), Caffe (2013), Microsoft Cognitive Toolkit (2015), Caffe2 (2017)\u2014and the landscape has been evolving rapidly. Learnings from each have made the other libraries easier to pick up, driven interest, and improved productivity for beginners and experts alike. Table 1-5 looks at some of the popular ones.\n\nTable 1-5. Popular deep learning frameworks **Framework** | **Best suited for** | **Typical target platform** \n---|---|--- \nTensorFlow (including Keras) | Training | Desktops, servers \nPyTorch | Training | Desktops, servers \nMXNet | Training | Desktops, servers \nTensorFlow Serving | Inference | Servers \nTensorFlow Lite | Inference | Mobile and embedded devices \nTensorFlow.js | Inference | Browsers \nml5.js | Inference | Browsers \nCore ML | Inference | Apple devices \nXnor AI2GO | Inference | Embedded devices\n\n### TensorFlow\n\nIn 2011, Google Brain developed the DNN library DistBelief for internal research and engineering. It helped train Inception (2014's winning entry to the ImageNet Large Scale Visual Recognition Challenge) as well as helped improve the quality of speech recognition within Google products. Heavily tied into Google's infrastructure, it was not easy to configure and to share code with external machine learning enthusiasts. Realizing the limitations, Google began working on a second-generation distributed machine learning framework, which promised to be general-purpose, scalable, highly performant, and portable to many hardware platforms. And the best part, it was open source. Google called it TensorFlow and announced its release on November 2015.\n\nTensorFlow delivered on a lot of these aforementioned promises, developing an end-to-end ecosystem from development to deployment, and it gained a massive following in the process. With more than 100,000 stars on GitHub, it shows no signs of stopping. However, as adoption gained, users of the library rightly criticized it for not being easy enough to use. As the joke went, TensorFlow was a library by Google engineers, of Google engineers, for Google engineers, and if you were smart enough to use TensorFlow, you were smart enough to get hired there.\n\nBut Google was not alone here. Let's be honest. Even as late as 2015, it was a given that working with deep learning libraries would inevitably be an unpleasant experience. Forget even working on these; installing some of these frameworks made people want to pull their hair out. (Caffe users out there\u2014does this ring a bell?)\n\n### Keras\n\nAs an answer to the hardships faced by deep learning practitioners, Fran\u00e7ois Chollet released the open source framework Keras in March 2015, and the world hasn't been the same since. This solution suddenly made deep learning accessible to beginners. Keras provided an intuitive and easy-to-use interface for coding, which would then use other deep learning libraries as the backend computational framework. Starting with Theano as its first backend, Keras encouraged rapid prototyping and reduced the number of lines of code. 
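To give a sense of just how few lines that meant in practice, here is a minimal sketch (our own illustration rather than code from a later chapter) of defining, training, and evaluating a small handwritten-digit classifier with the Keras Sequential API, written against the tf.keras flavor of Keras that the rest of this book standardizes on:

```
import tensorflow as tf

# Load and normalize MNIST (28 x 28 grayscale images of handwritten digits).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define the model: flatten each image, then two fully connected layers.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Pick an optimizer, a loss, and a metric; then train and evaluate.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```

Roughly a dozen lines cover data loading, model definition, training, and evaluation, which is exactly the kind of brevity that made Keras an instant favorite.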
Eventually, this abstraction expanded to other frameworks including Cognitive Toolkit, MXNet, PlaidML, and, yes, TensorFlow.\n\n### PyTorch\n\nIn parallel, PyTorch started at Facebook early in 2016, where engineers had the benefit of observing TensorFlow's limitations. PyTorch supported native Python constructs and Python debugging right off the bat, making it flexible and easier to use, and it quickly became a favorite among AI researchers. It is the second-largest end-to-end deep learning system. Facebook additionally built Caffe2 to take PyTorch models and deploy them to production to serve more than a billion users. Whereas PyTorch drove research, Caffe2 was primarily used in production. In 2018, Caffe2 was absorbed into PyTorch to make a full framework.\n\n### A continuously evolving landscape\n\nHad this story ended with the ease of Keras and PyTorch, this book would not have the word \"TensorFlow\" in the subtitle. The TensorFlow team recognized that if it truly wanted to broaden the tool's reach and democratize AI, it needed to make the tool easier. So it was welcome news when Keras was officially included as part of TensorFlow, offering the best of both worlds. This allowed developers to use Keras for defining the model and training it, and core TensorFlow for its high-performance data pipeline, including distributed training and its ecosystem for deployment. It was a match made in heaven! And to top it all, TensorFlow 2.0 (released in 2019) included support for native Python constructs and eager execution, as we saw in PyTorch.\n\nWith so many competing frameworks available, the question of portability inevitably arises. Imagine that a new research paper is published with its state-of-the-art model made public in PyTorch. If we didn't work in PyTorch, we would be locked out of the research and would have to reimplement and train it. Developers like to be able to share models freely and not be restricted to a specific ecosystem. Organically, many developers wrote libraries to convert model formats from one library to another. It was a simple solution, except that it led to a combinatorial explosion of conversion tools that lacked official support and sufficient quality due to the sheer number of them. To address this issue, the Open Neural Network Exchange (ONNX) was championed by Microsoft and Facebook, along with other major players in the industry. ONNX provided a specification for a common model format that was officially readable and writable by a number of popular libraries. Additionally, it provided converters for libraries that did not natively support this format. This allowed developers to train in one framework and run inference in a different framework.\n\nApart from these frameworks, there are several Graphical User Interface (GUI) systems that make code-free training possible. Using transfer learning, they generate trained models quickly in several formats useful for inference. With point-and-click interfaces, even your grandma can now train a neural network!\n\nTable 1-6. Popular GUI-based model training tools **Service** | **Platform** \n---|--- \nMicrosoft CustomVision.AI | Web-based \nGoogle AutoML | Web-based \nClarifai | Web-based \nIBM Visual Recognition | Web-based \nApple Create ML | macOS \nNVIDIA DIGITS | Desktop \nRunway ML | Desktop\n\nSo why did we choose TensorFlow and Keras as the primary frameworks for this book?
Considering the sheer amount of material available, including documentation, Stack Overflow answers, online courses, the vast community of contributors, platform and device support, industry adoption, and, yes, open jobs available (approximately three times as many TensorFlow-related roles compared to PyTorch in the United States), TensorFlow and Keras currently dominate the landscape when it comes to frameworks. It made sense for us to select this combination. That said, the techniques discussed in the book are generalizable to other libraries, as well. Picking up a new framework shouldn't take you too long. So, if you really want to move to a company that uses PyTorch exclusively, don't hesitate to apply there.\n\n## Hardware\n\nIn 1848, when James W. Marshall discovered gold in California, the news spread like wildfire across the United States. Hundreds of thousands of people stormed to the state to begin mining for riches. This was known as the _California Gold Rush_. Early movers were able to extract a decent chunk, but the latecomers were not nearly as lucky. But the rush did not stop for many years. Can you guess who made the most money throughout this period? The shovel makers!\n\nCloud and hardware companies are the shovel makers of the twenty-first century. Don't believe us? Look at the stock performance of Microsoft and NVIDIA in the past decade. The only difference between 1849 and now is the mind-bogglingly large amount of shovel choices available to us.\n\nGiven the variety of hardware available, it is important to make the correct choices for the constraints imposed by resource, latency, budget, privacy, and legal requirements of the application.\n\nDepending on how your application interacts with the user, the inference phase usually has a user waiting at the other end for a response. This imposes restrictions on the type of hardware that can be used as well as the location of the hardware. For example, a Snapchat lens cannot run on the cloud due to the network latency issues. Additionally, it needs to run in close to real time to provide a good user experience (UX), thus setting a minimum requirement on the number of frames processed per second (typically >15 fps). On the other hand, a photo uploaded to an image library such as Google Photos does not need immediate image categorization done on it. A few seconds or few minutes of latency is acceptable.\n\nGoing to the other extreme, training takes a lot more time; anywhere between minutes to hours to days. Depending on our training scenario, the real value of better hardware is enabling faster experimentation and more iterations. For anything more serious than basic neural networks, better hardware can make a mountain of difference. Typically, GPUs would speed things up by 10 to 15 times compared to CPUs, and at a much higher performance per watt, reducing the wait time for our experiment to finish from a week to a few hours. This can be the difference in watching a documentary about the Grand Canyon (two hours) versus actually making the trip to visit the Grand Canyon (four days).\n\nFollowing are a few fundamental hardware categories to choose from and how they are typically characterized (see also Figure 1-10):\n\nCentral Processing Unit (CPU)\n\nCheap, flexible, slow. For example, Intel Core i9-9900K.\n\nGPU\n\nHigh throughput, great for batching to utilize parallel processing, expensive. 
For example, NVIDIA GeForce RTX 2080 Ti.\n\nField-Programmable Gate Array (FPGA)\n\nFast, low power, reprogrammable for custom solutions, expensive. Known companies include Xilinx, Lattice Semiconductor, Altera (Intel). Because of the ability to run in seconds and configurability to any AI model, Microsoft Bing runs the majority of its AI on FPGAs.\n\nApplication-Specific Integrated Circuit (ASIC)\n\nCustom-made chip. Extremely expensive to design, but inexpensive when built for scale. Just like in the pharmaceutical industry, the first item costs the most due to the R&D effort that goes into designing and making it. Producing massive quantities is rather inexpensive. Specific examples include the following:\n\nTensor Processing Unit (TPU)\n\nASIC specializing in operations for neural networks, available on Google Cloud only.\n\nEdge TPU\n\nSmaller than a US penny, accelerates inference on the edge.\n\nNeural Processing Unit (NPU)\n\nOften used by smartphone manufacturers, this is a dedicated chip for accelerating neural network inference.\n\n###### Figure 1-10. Comparison of different types of hardware relative to flexibility, performance, and cost\n\nLet's look at a few scenarios for which each one would be used:\n\n * Getting started with training \u2192 CPU\n\n * Training large networks \u2192 GPUs and TPUs\n\n * Inference on smartphones \u2192 Mobile CPU, GPU, Digital Signal Processor (DSP), NPU\n\n * Wearables (e.g., smart glasses, smartwatches) \u2192 Edge TPU, NPUs\n\n * Embedded AI projects (e.g., flood surveying drone, autonomous wheelchair) \u2192 Accelerators like Google Coral, Intel Movidius with Raspberry Pi, or GPUs like NVIDIA Jetson Nano, all the way down to $15 microcontrollers (MCUs) for wake word detection in smart speakers\n\nAs we go through the book, we will closely explore many of these.\n\n# Responsible AI\n\nSo far, we have explored the power and the potential of AI. It shows great promise to enhance our abilities, to make us more productive, to give us superpowers.\n\nBut with great power comes great responsibility.\n\nAs much as AI can help humanity, it also has equal potential to harm us when not designed with thought and care (either intentionally or unintentionally). The AI is not to blame; rather, it's the AI's designers.\n\nConsider some real incidents that made the news in the past few years.\n\n * \"____ can allegedly determine whether you're a terrorist just by analyzing your face\" (Figure 1-11): _Computer World_ , 2016\n\n * \"AI is sending people to jail\u2014and getting it wrong\": _MIT Tech Review_ , 2019\n\n * \"____ supercomputer recommended 'unsafe and incorrect' cancer treatments, internal documents show\": _STAT News_ , 2018\n\n * \"____ built an AI tool to hire people but had to shut it down because it was discriminating against women\": _Business Insider_ , 2018\n\n * \"____ AI study: Major object recognition systems favor people with more money\": _VentureBeat_ , 2019\n\n * \"____ labeled black people 'gorillas'\" _USA Today_ , 2015. \"Two years later, ____ solves 'racist algorithm' problem by purging 'gorilla' label from image classifier\": _Boing Boing_ , 2018\n\n * \"____ silences its new A.I. 
bot Tay, after Twitter users teach it racism\": _TechCrunch_ , 2016\n\n * \"AI Mistakes Bus-Side Ad for Famous CEO, Charges Her With Jaywalking\": _Caixin Global_ , 2018\n\n * \"____ to drop Pentagon AI contract after employee objections to the 'business of war'\": _Washington Post_ , 2018\n\n * \"Self-driving ____ death car 'spotted pedestrian six seconds before mowing down and killing her'\": _The Sun_ , 2018\n\n###### Figure 1-11. Startup claiming to classify people based on their facial structure\n\nCan you fill in the blanks here? We'll give you some options\u2014Amazon, Microsoft, Google, IBM, and Uber. Go ahead and fill them out. We'll wait.\n\nThere's a reason we kept them blank. It's to recognize that it's not a problem belonging to a specific individual or a company. This is everyone's problem. And although these things happened in the past, and might not reflect the current state, we can learn from them and try not to make the same mistakes. The silver lining here is that everyone learned from these mistakes.\n\nWe, as developers, designers, architects, and leaders of AI, have the responsibility to think beyond just the technical problem at face value. Following are just a handful of topics that are relevant to any problem we solve (AI or otherwise). They must not take a backseat.\n\n## Bias\n\nOften in our everyday work, we bring in our own biases, knowingly or unknowingly. This is the result of a multitude of factors including our environment, upbringing, cultural norms, and even our inherent nature. After all, AI and the datasets that power them were not created in a vacuum\u2014they were created by human beings with their own biases. Computers don't magically create bias on their own, they reflect and amplify existing ones.\n\nTake the example from the early days of the YouTube app when the developers noticed that roughly 10% of uploaded videos were upside-down. Maybe if that number had been lower, say 1%, it could have been brushed off as user error. But 10% was too high a number to be ignored. Do you know who happens to make up 10% of the population? Left-handed people! These users were holding their phones in the opposite orientation as their right-handed peers. But the engineers at YouTube had not accounted for that case during the development and testing of their mobile app, so YouTube uploaded videos to its server in the same orientation for both left-handed and right-handed users.\n\nThis problem could have been caught much earlier if the developers had even a single left-handed person on the team. This simple example demonstrates the importance of diversity. Handedness is just one small attribute that defines an individual. Numerous other factors, often outside their control, often come into play. Factors such as gender, skin tone, economic status, disability, country of origin, speech patterns, or even something as trivial as hair length can determine life-changing outcomes for someone, including how an algorithm treats them.\n\nGoogle's machine learning glossary lists several forms of bias that can affect a machine learning pipeline. The following are just some of them:\n\nSelection bias\n\nThe dataset is not representative of the distribution of the real-world problem and is skewed toward a subset of categories. 
For example, in many virtual assistants and smart home speakers, some spoken accents are overrepresented, whereas other accents have no data at all in the training dataset, resulting in a poor UX for large chunks of the world's population.\n\nSelection bias can also happen because of co-occurrence of concepts. For example, Google Translate, when used to translate the sentences \"She is a doctor. He is a nurse\" into a gender-neutral language such as Turkish and then back, switches the genders, as demonstrated in Figure 1-12. This is likely because the dataset contains a large sample of co-occurrences of male pronouns and the word \"doctor,\" and female pronouns and the word \"nurse.\"\n\n###### Figure 1-12. Google Translate reflecting the underlying bias in data (as of September 2019)\n\nImplicit bias\n\nThis type of bias creeps in because of implicit assumptions that we all make when we see something. Consider the highlighted portion in Figure 1-13. Anyone shown it might assume with a high amount of certainty that those stripes belong to a zebra. In fact, given how much ImageNet-trained networks are biased toward textures, most of them will classify the full image as a zebra. Except that we know that the image is of a sofa upholstered in a zebra-like fabric.\n\n###### Figure 1-13. Zebra sofa by Glen Edelson (image source)\n\nReporting bias\n\nSometimes the loudest voices in the room are the most extreme ones and dominate the conversation. One good look at Twitter might make it seem as if the world is ending, whereas most people are busy leading mundane lives. Unfortunately, boring does not sell.\n\nIn-group\/out-group bias\n\nAn annotator from East Asia might look at a picture of the Statue of Liberty and give it tags like \"America\" or \"United States,\" whereas someone from the US might look at the same picture and assign more granular tags such as \"New York\" or \"Liberty Island.\" It's human nature to see one's own groups with nuance while seeing other groups as more homogenous, and that reflects in our datasets, as well.\n\n## Accountability and Explainability\n\nImagine, in the late 1800s, Mr. Karl Benz told you that he invented this four-wheeled device that could transport you quicker than anything else in existence. Except he had no idea how it worked. All he knew was that it consumed a highly flammable liquid that exploded several times inside it to propel it forward. What caused it to move? What caused it to stop? What stopped it from burning the person sitting inside it? He had no answers. If this was the origin story of the car, you'd probably not want to get into that contraption.\n\nThis is precisely what is happening with AI right now. Previously, with traditional machine learning, data scientists had to manually pick features (predictive variables) from data, from which a machine learning model then would learn. This manual selection process, although cumbersome and restrictive, gave them more control and insight into how the prediction came about. However, with deep learning, these features are automatically selected. Data scientists are able to build models by providing lots of data, and these models somehow end up making predictions reliably\u2014most of the time. But the data scientist doesn't know exactly how the model works, what features it learned, under what circumstances the model works, and, more importantly, the circumstances under which it doesn't work. 
This approach might be acceptable when Netflix is recommending TV shows to us based on what we've already watched (although we're fairly certain they have the line `recommendations.append(\"Stranger Things\")` in their code somewhere). But AI does a lot more than just recommend movies these days. Police and judicial systems are beginning to rely on algorithms to decide whether someone poses a risk to society and whether they should be detained before their trial. The lives and freedoms of many people are at stake. We simply must not outsource important decision making to an unaccountable black box. Thankfully, there's momentum to change that with investments in _Explainable AI_ , wherein the model would be able to not just provide predictions but also account for the factors that caused it to make a certain prediction, and reveal areas of limitations.\n\nAdditionally, cities (such as New York) are beginning to make their algorithms accountable to the public by recognizing that the public has a right to know what algorithms they use for vital decision making and how they work, allowing reviews and audits by experts, improving expertise in government agencies to better evaluate each system they add, and by providing mechanisms to dispute a decision made by an algorithm.\n\n## Reproducibility\n\nResearch performed in the scientific field gains wide acceptance by the community only when it's reproducible; that is, anyone studying the research should be able to replicate the conditions of the test and obtain the same results. Unless we can reproduce a model's past results, we cannot hold it accountable when using it in the future. In the absence of reproducibility, research is vulnerable to _p-hacking_ \u2014tweaking the parameters of an experiment until the desired results are obtained. It's vital for researchers to extensively document their experimental conditions, including the dataset(s), benchmarks, and algorithms, and declare the hypothesis they will be testing prior to performing an experiment. Trust in institutions is at an all-time low and research that is not grounded in reality, yet sensationalized by the media, can erode that trust even more. Traditionally, replicating a research paper was considered a dark art because many implementation details are left out. The uplifting news is that researchers are now gradually beginning to use publicly available benchmarks (as opposed to their privately constructed datasets) and open sourcing the code they used for their research. Members of the community can piggyback on this code, prove it works, and make it better, thereby leading to newer innovations rapidly.\n\n## Robustness\n\nThere's an entire area of research on one-pixel attacks on CNNs. Essentially, the objective is to find and modify a single pixel in an image to make a CNN predict something entirely different. For example, changing a single pixel in a picture of an apple might result in a CNN classifying it as a dog. A lot of other factors can influence predictions, such as noise, lighting conditions, camera angle, and more that would not have affected a human's ability to make a similar call. This is particularly relevant for self-driving cars, where it would be possible for a bad actor on a street to modify the input the car sees in order to manipulate it into doing bad things. 
In fact, Tencent's Keen Security Lab was able to exploit a vulnerability in Tesla's AutoPilot by strategically placing small stickers on the road, which led it to change lanes and drive into the oncoming lane. Robust AI that is capable of withstanding noise, slight deviations, and intentional manipulation is necessary if we are to be able to trust it.\n\n## Privacy\n\nIn the pursuit of building better and better AI, businesses need to collect lots of data. Unfortunately, sometimes they overstep their bounds and collect information far beyond what is necessary for the task at hand. A business might believe that it is using the data it collects only for good. But what if it is acquired by a company that does not have the same ethical boundaries for data use? The consumer's information could be used for purposes beyond the originally intended goals. Additionally, all that data collected in one place makes it an attractive target for hackers, who steal personal information and sell it on the black market to criminal enterprises. Moreover, governments are already overreaching in an attempt to track each and every individual.\n\nAll of this is at odds with the universally recognized human right to privacy. What consumers want is transparency into what data is being collected about them, who has access to it, and how it's being used, as well as mechanisms to opt out of the data collection process and to delete data that has already been collected about them.\n\nAs developers, we want to be aware of all the data we are collecting, and ask ourselves whether a piece of data even needs to be collected in the first place. To minimize the data we collect, we could implement privacy-aware machine learning techniques such as Federated Learning (used in Google Keyboard) that allow us to train networks on the users' devices without having to send any of the Personally Identifiable Information (PII) to a server.\n\nIt turns out that for many of the headlines at the beginning of this section, it was the bad PR fallout that brought mainstream awareness of these topics, introduced accountability, and caused an industry-wide shift in mindset to prevent repeats in the future. We must continue to hold ourselves, academics, industry leaders, and politicians accountable at every misstep and act swiftly to fix the wrongs. Every decision we make and every action we take has the potential to set a precedent for decades to come. As AI becomes ubiquitous, we need to come together to ask the tough questions and find answers for them if we want to minimize the potential harm while reaping the maximum benefits.\n\n# Summary\n\nThis chapter explored the landscape of the exciting world of AI and deep learning. We traced the timeline of AI from its humble origins, through periods of great promise and dark AI winters, and up to its present-day resurgence. Along the way, we answered the question of why it's different this time. We then looked at the necessary ingredients to build a deep learning solution, including datasets, model architectures, frameworks, and hardware. This sets us up for further exploration in the upcoming chapters. We hope you enjoy the rest of the book. It's time to dig in!\n\n# Frequently Asked Questions\n\n 1. I'm just getting started. Do I need to spend a lot of money on buying powerful hardware?\n\nLuckily for you, you can get started even with your web browser.
All of our scripts are available online, and they can be run on powerful GPUs courtesy of the kind people at Google Colab (Figure 1-14), who generously make them available for free (for up to 12 hours at a time). This should get you started. As you become better at it by performing more experiments (especially in a professional capacity or on large datasets), you might want to get a GPU either by renting one in the cloud (Microsoft Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others) or by purchasing the hardware. Watch out for those electricity bills, though!\n\n###### Figure 1-14. Screenshot of a notebook on GitHub running on Colab inside Chrome\n\n 2. Colab is great, but I already have a powerful computer that I purchased for playing games. How should I set up my environment?\n\nThe ideal setup involves Linux, but Windows and macOS work, too. For most chapters, you need the following:\n\n * Python 3 and PIP\n\n * `tensorflow` or `tensorflow-gpu` PIP package (version 2 or greater)\n\n * Pillow\n\nWe like keeping things clean and self-contained, so we recommend using Python virtual environments. You should use the virtual environment whenever you install a package or run a script or a notebook.\n\nIf you do not have a GPU, you are done with the setup.\n\nIf you have an NVIDIA GPU, you will want to install the appropriate drivers, then CUDA, then cuDNN, and then the `tensorflow-gpu` package. If you're using Ubuntu, there's an easier solution than installing these packages manually, which can be tedious and error prone even for the best of us: simply install the entire environment with just one line using Lambda Stack.\n\nAlternatively, you could install all of your packages using the Anaconda Distribution, which works equally well for Windows, Mac, and Linux.\n\n 3. Where will I find the code used in this book?\n\nYou'll find ready-to-run examples at _http:\/\/PracticalDeepLearning.ai_.\n\n 4. What are the minimal prerequisites to be able to read this book?\n\nA Ph.D. in areas such as Calculus, Statistical Analysis, Variational Autoencoders, or Operations Research is definitely _not_ necessary to be able to read this book (we had you a little nervous there, didn't we?). Some basic coding skills, familiarity with Python, a healthy amount of curiosity, and a sense of humor should go a long way in the process of absorbing the material. Although a beginner-level understanding of mobile development (with Swift and\/or Kotlin) will help, we've designed the examples to be self-sufficient and easy enough to be deployed by someone who has never written a mobile app previously.\n\n 5. What frameworks will we be using?\n\nKeras with TensorFlow for training. And, chapter by chapter, we explore different inference frameworks.\n\n 6. Will I be an expert when I finish this book?\n\nIf you follow along, you'll have the know-how on a wide variety of topics, all the way from training to inference to maximizing performance. Even though this book primarily focuses on computer vision, you can bring the same know-how to other areas such as text and audio and get up to speed very quickly.\n\n 7. Who is the cat from earlier in the chapter?\n\nThat is Meher's cat, Vader. He will be making multiple cameos throughout this book. And don't worry, he has already signed a model release form.\n\n 8. Can I contact you?\n\nSure.
Drop us an email at PracticalDLBook@gmail.com with any questions, corrections, or whatever, or tweet to us @PracticalDLBook.\n\n If you're reading a pirated copy, consider us disappointed in you.\n\n Robert Geirhos et al.\n\n# Chapter 2. What's in the Picture: Image Classification with Keras\n\nIf you have skimmed through deep learning literature, you might have come across a barrage of academic explanations laced with intimidating mathematics. Don't worry. We will ease you into practical deep learning with an example of classifying images with just a few lines of code.\n\nIn this chapter, we take a closer look at the Keras framework, discuss its place in the deep learning landscape, and then use it to classify a few images using existing state-of-the-art classifiers. We visually investigate how these classifiers operate by using _heatmaps_. With these heatmaps, we make a fun project in which we classify objects in videos.\n\nRecall from the \"Recipe for the Perfect Deep Learning Solution\" that we need four ingredients to create our deep learning recipe: hardware, dataset, framework, and model. Let's see how each of these comes into play in this chapter:\n\n * We begin with the easy one: _hardware_. Even an inexpensive laptop would suffice for what we we're doing in this chapter. Alternatively, you can run the code in this chapter by opening the GitHub notebook (see _http:\/\/PracticalDeepLearning.ai_) in Colab. This is just a matter of a few mouse clicks.\n\n * Because we won't be training a neural network just yet, we don't need a _dataset_ (other than a handful of sample photos to test with).\n\n * Next, we come to the _framework_. This chapter's title has Keras in it, so that is what we will be using for now. In fact, we use Keras for our training needs throughout a good part of the book.\n\n * One way to approach a deep learning problem is to obtain a dataset, write the code to train it, spend a lot of time and energy (both human and electrical) in training that model, and then use it for making predictions. But we are not gluttons for punishment. So, we will use a _pretrained model_ instead. After all, the research community has already spent blood, sweat, and tears training and publishing many of the standard models that are now publicly available. We will be reusing one of the more famous models called ResNet-50, the little sibling of ResNet-152 that won the ILSVRC in 2015.\n\nYou will get hands-on with some code in this chapter. As we all know, the best way to learn is by doing. You might be wondering, though, what's the theory behind this? That comes in later chapters, in which we delve deeper into the nuts and bolts of CNNs using this chapter as a foundation.\n\n# Introducing Keras\n\nAs Chapter 1 discussed, Keras started in 2015 as an easy-to-use abstraction layer over other libraries, making rapid prototyping possible. This made the learning curve a lot less steep for beginners of deep learning. At the same time, it made deep learning experts more productive by helping them rapidly iterate on experiments. In fact, the majority of the winning teams on _Kaggle.com_ (which hosts data science competitions) have used Keras. Eventually, in 2017, the full implementation of Keras was available directly in TensorFlow, thereby combining the high scalability, performance, and vast ecosystem of TensorFlow with the ease of Keras. 
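As a quick illustration, here is a minimal sketch (assuming TensorFlow 2.x has already been installed with pip) of how the bundled Keras API is reached through the `tensorflow` package:\n\n    import tensorflow as tf\n\n    print(tf.__version__) # the installed TensorFlow release\n    print(tf.keras.__version__) # the Keras API bundled inside TensorFlow\n\nIf both lines print without errors, the Keras that ships inside TensorFlow is ready to use; no separate `keras` package needs to be installed for this style of import.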
On the web, we often see the TensorFlow version of Keras referred to as `tf.keras`.\n\nIn this chapter and Chapter 3, we write all of the code exclusively in Keras. That includes boilerplate functions such as file reading, image manipulation (augmentation), and so on. We do this primarily for ease of learning. From Chapter 5 onward, we gradually begin to use more of the native, performant TensorFlow functions directly for more configurability and control.\n\n##### From the Creator's Desk\n\nBy Fran\u00e7ois Chollet, creator of Keras, AI researcher, and author of _Deep Learning with Python_\n\nI originally started Keras for my own use. At the time, in late 2014 and early 2015, there weren't any good options for deep learning frameworks with solid usability and strong support for both RNNs and convnets. Back then, I wasn't deliberately trying to democratize deep learning, I was just building what I wanted for myself. But as time went by, I saw lots of people pick up deep learning through Keras, and use it to solve many different problems I didn't even know existed. To me, that has been really fascinating. It made me realize that deep learning can be deployed, in transformative ways, to far more domains than what machine learning researchers are aware of. There are so many people out there that could benefit from using these technologies in their work. Because of that, I've grown to care a lot about making deep learning accessible to as many people as possible. That's the only way we're going to deploy AI to the full extent of its potential\u2014by making it broadly available. Today, in TensorFlow 2.0, the Keras API consolidates the power of deep learning in a spectrum of really productive and enjoyable workflows, suited to a variety of user profiles, from research to applications, including deployment. I'm looking forward to seeing what you'll build with it!\n\n# Predicting an Image's Category\n\nIn layperson's terms, image classification answers the question: \"What object does this image contain?\" More specifically, \"This image contains _X_ object with what probability,\" where _X_ is from a predefined list of categories of objects. If the probability is higher than a minimum threshold, the image is likely to contain one or more instances of _X_.\n\nA simple image classification pipeline consists of the following steps:\n\n 1. Load an image.\n\n 2. Resize it to a predefined size such as 224 x 224 pixels.\n\n 3. Scale the pixel values to the range [0,1] or [\u20131,1], a.k.a. normalization.\n\n 4. Select a pretrained model.\n\n 5. Run the pretrained model on the image to get a list of category predictions and their respective probabilities.\n\n 6. Display a few of the highest probability categories.\n\n###### Tip\n\nThe GitHub link is provided on the website _http:\/\/PracticalDeepLearning.ai_. Navigate to `code\/chapter-2`, where you will find the Jupyter notebook `1-predict-class.ipynb` that details all the steps.\n\nWe begin by importing all of the necessary modules from the Keras and Python packages:\n\n    import tensorflow as tf\n    from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n    from tensorflow.keras.preprocessing import image\n    import numpy as np\n    import matplotlib.pyplot as plt\n\nNext, we load and display the image that we want to classify (see Figure 2-1):\n\n    img_path = \"..\/..\/sample-images\/cat.jpg\"\n    img = image.load_img(img_path, target_size=(224, 224))\n    plt.imshow(img)\n    plt.show()\n\n###### Figure 2-1.
Plot showing the contents of the input file\n\nYup, it's a cat (although the filename kind of gave it away). And that's what our model should ideally be predicting.\n\n##### A Brief Refresher on Images\n\nBefore we dive into how images are processed, it would be good to take a look at how images store information. At the most basic level, an image is a collection of pixels that are laid out in a rectangular grid. Depending on the type of image, each pixel can consist of 1 to 4 parts (also known as _components_ or _channels_ ). For the images we're using, these components represent the intensities of the colors red, green, and blue (RGB). They are typically 8 bits in length, so their values range between 0 and 255 (i.e., 2^8 \u2013 1).\n\nBefore feeding any image to Keras, we want to convert it to a standard format. This is because pretrained models expect the input to be of a specific size. The standardization in our case involves resizing the image to 224 x 224 pixels.\n\nMost deep learning models expect a batch of images as input. But what do we do when we have just one image? We create a batch of one image, of course! That essentially involves making an array consisting of that one object. Another way to look at this is to expand the number of dimensions from three (representing the three channels of the image) to four (the extra one for the length of the array itself).\n\nIf that is not clear, consider this scenario: for a batch of 64 images of size 224 x 224 pixels, each containing three channels (RGB), the object representing that batch would have a shape 64 x 224 x 224 x 3. In the code that follows, where we'd be using only one 224 x 224 x 3 image, we'd create a batch of just that image by expanding the dimensions from three to four. The shape of this newly created batch would be 1 x 224 x 224 x 3:\n\n    img_array = image.img_to_array(img)\n    img_batch = np.expand_dims(img_array, axis=0) # Increase the number of dimensions\n\nIn machine learning, models perform best when they are fed with data within a consistent range. Ranges typically include [0,1] and [\u20131,1]. Given that image pixel values are between 0 and 255, running the `preprocess_input` function from Keras on input images will normalize each pixel to a standard range. _Normalization_ or _feature scaling_ is one of the core steps in preprocessing images to make them suitable for deep learning.\n\nNow comes the model. We will be using a _Convolutional Neural Network_ (CNN) called ResNet-50. The very first question we should ask is, \"Where will I find the model?\" Of course, we could hunt for it on the internet to find something that is compatible with our deep learning framework (Keras). _But ain't nobody got time for that!_ Luckily, Keras loves to make things easy and provides it to us in a single function call. After we call this function for the first time, the model will be downloaded from a remote server and cached locally:\n\n    model = tf.keras.applications.resnet50.ResNet50()\n\nWhen predicting with this model, the results include probability predictions for each class.
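As a minimal sketch (reusing the `img_batch` and `model` objects created above, and assuming the default ImageNet weights), we can peek at the raw output before interpreting it:\n\n    img_preprocessed = preprocess_input(img_batch) # scale the pixels to the range the model expects\n    prediction = model.predict(img_preprocessed) # forward pass through ResNet-50\n    print(prediction.shape) # (1, 1000): one row of probabilities, one per ImageNet class\n\nEach of those 1,000 numbers is the predicted probability of one ImageNet category, which is hard to read on its own.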
Keras also provides the `decode_predictions` function, which tells us the probability of each category of objects contained in the image.\n\nNow, let's see the entire code in one handy function:\n\n def classify(img_path):\n img = image.load_img(img_path, target_size=(224, 224))\n model = tf.keras.applications.resnet50.ResNet50()\n img_array = image.img_to_array(img)\n img_batch = np.expand_dims(img_array, axis=0)\n img_preprocessed = preprocess_input(img_batch)\n prediction = model.predict(img_preprocessed)\n print(decode_predictions(prediction, top=3)[0])\n\n classify(\"..\/..\/sample-images\/cat.jpg\")\n\n [('n02123045', 'tabby', 0.50009364),\n ('n02124075', 'Egyptian_cat', 0.21690978),\n ('n02123159', 'tiger_cat', 0.2061722)]\n\nThe predicted categories for this image are various types of felines. Why doesn't it simply predict the word \"cat,\" instead? The short answer is that the ResNet-50 model was trained on a granular dataset with many categories and does not include the more general \"cat.\" We investigate this dataset in more detail a little later, but first, let's load another sample image (see Figure 2-2):\n\n img_path = '..\/..\/sample-images\/dog.jpg'\n img = image.load_img(img_path, target_size=(224, 224))\n plt.imshow(img)\n plt.show()\n\n###### Figure 2-2. Plot showing the contents of the file dog.jpg\n\nAnd, again, we run our handy function from earlier:\n\n classify(\"..\/..\/sample-images\/dog.jpg\")\n\n [(u'n02113186', u'Cardigan', 0.809839),\n (u'n02113023', u'Pembroke', 0.17665945),\n (u'n02110806', u'basenji', 0.0042166105)]\n\nAs expected, we get different breeds of canines (and not just the \"dog\" category). If you are unfamiliar with the Corgi breed of dogs, the word \"corgi\" literally means \"dwarf dog\" in Welsh. The Cardigan and Pembroke are subbreeds of the Corgi family, which happen to look pretty similar to one another. It's no wonder our model thinks that way, too.\n\nNotice the predicted probability of each category. Usually, the prediction with the highest probability is considered the answer. Alternatively, any value over a predefined threshold can be considered as the answer, too. In the dog example, if we set a threshold of 0.5, Cardigan would be our answer.\n\n###### Figure 2-3. Running the notebook on Google Colab using the browser\n\n###### Tip\n\nYou can follow along with the code in this chapter and execute it interactively without any installations in the browser itself with Google Colab. Simply find the \"Run on Colab\" link at the top of each notebook on GitHub that you'd like to experiment with. Then, click the \"Run Cell\" button; this should execute the code within that cell, as shown in Figure 2-3.\n\n# Investigating the Model\n\nWe got the predictions from our model, great! But what factors led to those predictions? There are a few questions that we need to ask here:\n\n * What dataset was the model trained on?\n\n * Are there other models that I can use? How good are they? Where can I get them?\n\n * Why does my model predict what it predicts?\n\nWe look into the answers to each of these questions in this section.\n\n## ImageNet Dataset\n\nLet's investigate the ImageNet dataset on which ResNet-50 was trained. ImageNet, as the name suggests, is a network of images; that is, a dataset of images organized as a network, as demonstrated in Figure 2-4. It is arranged in a hierarchical manner (like the WordNet hierarchy) such that the parent node encompasses a collection of images of all different varieties possible within that parent. 
For example, within the \"animal\" parent node, there are fish, birds, mammals, invertebrates, and so on. Each category has multiple subcategories, and these have subsubcategories, and so forth. For example, the category \"American water spaniel\" is eight levels from the root. The dog category contains 189 total subcategories in five hierarchical levels.\n\nVisually, we developed the tree diagram shown in Figure 2-5 to help you to understand the wide variety of high-level entities that the ImageNet dataset contains. This treemap also shows the relative percentage of different categories that make up the ImageNet dataset.\n\n###### Figure 2-4. The categories and subcategories in the ImageNet dataset\n\n###### Figure 2-5. Treemap of ImageNet and its classes\n\nThe ImageNet dataset was the basis for the famous ILSVRC that started in 2010 to benchmark progress in computer vision and challenge researchers to innovate on tasks including object classification. Recall from Chapter 1 that the ImageNet challenge saw submissions that drastically improved in accuracy each year. When it started out, the error rate was nearly 30%. And now, it is 2.2%, already better than how an average human would perform at this task. This dataset and challenge are considered the single biggest reasons for the recent advancements in computer vision.\n\nWait, AI has better-than-human accuracy? If the dataset was created by humans, won't humans have 100% accuracy? Well, the dataset was created by experts, with each image verified by multiple people. Then Stanford researcher (and now of Tesla fame) Andrej Karpathy attempted to figure out how much a normal human would fare on ImageNet-1000. Turns out he achieved an accuracy of 94.9%, well short of the 100% we all expected. Andrej painstakingly spent a week going over 1,500 images, spending approximately one minute per image in tagging it. How did he misclassify 5.1% of the images? The reasons are a bit subtle:\n\nFine-grained recognition\n\nFor many people, it is really tough to distinguish a Siberian husky from a Alaskan Malamute. Someone who is really familiar with dog breeds would be able to tell them apart because they look for finer-level details that distinguish both breeds. It turns out that neural networks are capable of learning those finer-level details much more easily than humans.\n\nCategory unawareness\n\nNot everyone is aware of all the 120 breeds of dogs and most certainly not each one of the 1,000 classes. But the AI is. After all, it was trained on it.\n\n###### Note\n\nSimilar to ImageNet, speech datasets like Switchboard report a 5.1% error rate for speech transcription (coincidentally the same number as ImageNet). It's clear that humans have a limit, and AI is gradually beating us.\n\nOne of the other key reasons for this fast pace of improvement was that researchers were openly sharing models trained on datasets like ImageNet. In the next section, we learn about model reuse in more detail.\n\n## Model Zoos\n\nA model zoo is a place where organizations or individuals can publicly upload models that they have built for others to reuse and improve upon. These models can be trained using any framework (e.g., Keras, TensorFlow, MXNet), for any task (classification, detection, etc.), or trained on any dataset (e.g., ImageNet, Street View House Numbers (SVHN)).\n\nThe tradition of model zoos started with Caffe, one of the first deep learning frameworks, developed at the University of California, Berkeley. 
Training a deep learning model from scratch on a multimillion-image database requires weeks of training time and lots of GPU computational energy, making it a difficult task. The research community recognized this as a bottleneck, and the organizations that participated in the ImageNet competition open sourced their trained models on Caffe's website. Other frameworks soon followed suit.\n\nWhen starting out on a new deep learning project, it's a good idea to first explore whether there's already a model that performs a similar task and was trained on a similar dataset.\n\nThe model zoo in Keras is a collection of various architectures trained using the Keras framework on the ImageNet dataset. We tabulate their details in Table 2-1.\n\nTable 2-1. Architectural details of select pretrained ImageNet models **Model** | **Size** | **Top-1 accuracy** | **Top-5 accuracy** | **Parameters** | **Depth** \n---|---|---|---|---|--- \nVGG16 | 528 MB | 0.713 | 0.901 | 138,357,544 | 23 \nVGG19 | 549 MB | 0.713 | 0.9 | 143,667,240 | 26 \nResNet-50 | 98 MB | 0.749 | 0.921 | 25,636,712 | 50 \nResNet-101 | 171 MB | 0.764 | 0.928 | 44,707,176 | 101 \nResNet-152 | 232 MB | 0.766 | 0.931 | 60,419,944 | 152 \nInceptionV3 | 92 MB | 0.779 | 0.937 | 23,851,784 | 159 \nInceptionResNetV2 | 215 MB | 0.803 | 0.953 | 55,873,736 | 572 \nNASNetMobile | 23 MB | 0.744 | 0.919 | 5,326,716 | \u2014 \nNASNetLarge | 343 MB | 0.825 | 0.96 | 88,949,818 | \u2014 \nMobileNet | 16 MB | 0.704 | 0.895 | 4,253,864 | 88 \nMobileNetV2 | 14 MB | 0.713 | 0.901 | 3,538,984 | 88\n\nThe column \"Top-1 accuracy\" indicates how many times the best guess was the correct answer, and the column \"Top-5 accuracy\" indicates how many times at least one out of five guesses were correct. The \"Depth\" of the network indicates how many layers are present in the network. The \"Parameters\" column indicates the size of the model; that is, how many individual weights the model has: the more parameters, the \"heavier\" the model is, and the slower it is to make predictions. In this book, we often use ResNet-50 (the most common architecture cited in research papers for high accuracy) and MobileNet (for a good balance between speed, size, and accuracy).\n\n## Class Activation Maps\n\nImage saliency, usually famous in UX research, is trying to answer the question \"What part of the image are users paying attention to?\" This is facilitated with the help of eye-tracking studies and represented in heatmaps. For example, big, bold fonts or people's faces usually get more attention than backgrounds. It's easy to guess how useful these heatmaps would be to designers and advertisers, who can then adapt their content to maximize users' attention. Taking inspiration from this human version of saliency, wouldn't it be great to learn which part of the image the neural network is paying attention to? That's precisely what we will be experimenting with.\n\nIn our experiment, we will be overlaying a _class activation map_ (or colloquially a _heatmap_ ) on top of a video in order to understand what the network pays attention to. The heatmap tells us something like \"In this picture, these pixels were responsible for the prediction of the class `dog` where \"dog\" was the category with the highest probability. The \"hot\" pixels are represented with warmer colors such as red, orange, and yellow, whereas the \"cold\" pixels are represented using blue. The \"hotter\" a pixel is, the higher the signal it provides toward the prediction. Figure 2-6 gives us a clearer picture. 
(If you're reading the print version, refer to the book's GitHub for the original color image.)\n\n###### Figure 2-6. Original image of a dog and its generated heatmap\n\nIn the GitHub repository (see _http:\/\/PracticalDeepLearning.ai_), navigate to _code\/chapter-2_. There, you'll find a handy Jupyter notebook, _2-class-activation-map-on-video.ipynb,_ which describes the following steps:\n\nFirst, we need to install `keras-vis` using `pip`:\n\n $ pip install keras-vis --user\n\nWe then run the visualization script on a single image to generate the heatmap for it:\n\n $ python visualization.py --process image --path ..\/sample-images\/dog.jpg\n\nWe should see a newly created file called _dog-output.jpg_ that shows a side-by-side view of the original image and its heatmap. As we can see from Figure 2-6, the right half of the image indicates the \"areas of heat\" along with the correct prediction of a \"Cardigan\" (i.e., Welsh Corgi).\n\nNext, we want to visualize the heatmap for frames in a video. For that, we need `FFmpeg`, an open source multimedia framework. You can find the download binary as well as the installation instructions for your operating system at _https:\/\/www.ffmpeg.org_.\n\nWe use `ffmpeg` to split up a video into individual frames (at 25 frames per second) and then run our visualization script on each of those frames. We must first create a directory to store these frames and pass its name as part of the `ffmpeg` command:\n\n $ mkdir kitchen\n $ ffmpeg -i video\/kitchen-input.mov -vf fps=25 kitchen\/thumb%04d.jpg -hide_banner\n\nWe then run the visualization script with the path of the directory containing the frames from the previous step:\n\n $ python visualization.py --process video --path kitchen\/\n\nWe should see a newly created _kitchen-output_ directory that contains all of the heatmaps for the frames from the input directory.\n\nFinally, compile a video from those frames using `ffmpeg`:\n\n $ ffmpeg -framerate 25 -i kitchen-output\/result-%04d.jpg kitchen-output.mp4\n\nPerfect! The result is the original video side by side with a copy of the heatmap overlaid on it. This is a useful tool, in particular, to discover whether the model has learned the correct features or if it picked up stray artifacts during its training.\n\nImagine generating heatmaps to analyze the strong points and shortfalls of our trained model or a pretrained model.\n\nYou should try this experiment out on your own by shooting a video with your smartphone camera and running the aforementioned scripts on the file. Don't forget to post your videos on Twitter, tagging @PracticalDLBook!\n\n###### Tip\n\nHeatmaps are a great way to visually detect bias in the data. The quality of a model's predictions depends heavily on the data on which it was trained. If the data is biased, that will reflect in the predictions. A great example of this is (although probably an urban legend) one in which the US Army wanted to use neural networks to detect enemy tanks camouflaged in trees. The researchers who were building the model took photographs\u201450% containing camouflaged tanks and 50% with just trees. Model training yielded 100% accuracy. A cause for celebration? That sadly wasn't the case when the US Army tested it. The model had performed very poorly\u2014no better than random guesses. Investigation revealed that photos with the tanks were taken on cloudy (overcast) days and those without the tanks on clear, sunny days. And the neural network model began looking for the sky instead of the tank. 
If the researchers had visualized the model using heatmaps, they would have caught that issue pretty early.\n\nAs we collect data, we must be vigilant from the outset about potential bias that can pollute our model's learning. For example, when collecting images to build a food classifier, we should verify that artifacts such as plates and utensils are not being learned as food. Otherwise, the presence of chopsticks might get our food classified as chow mein. Another term for this is _co-occurrence_. Food very frequently co-occurs with cutlery. So watch out for these artifacts seeping into your classifier's training.\n\n# Summary\n\nIn this chapter, we got a glimpse of the deep learning universe using Keras. It's an easy-to-use yet powerful framework that we use in the next several chapters. We observed that there is often no need to collect millions of images or use powerful GPUs to train a custom model because we can use a pretrained model to predict the category of an image. By diving deeper into datasets like ImageNet, we learned the kinds of categories these pretrained models can predict. We also learned about finding these models in the model zoos that exist for most frameworks.\n\nIn Chapter 3, we explore how we can tweak an existing pretrained model to make predictions on classes of input for which it was not originally intended. As with the current chapter, our approach is geared toward obtaining output without needing millions of images and lots of hardware resources to train a classifier.\n\n \"Artificial Intelligence as a Positive and Negative Factor in Global Risk\" by Eliezer Yudkowsky in _Global Catastrophic Risks_ (Oxford University Press).\n\n# Chapter 3. Cats Versus Dogs: Transfer Learning in 30 Lines with Keras\n\nImagine that we want to learn how to play the melodica, a wind instrument in the form of a handheld keyboard. Without a musical background, and with the melodica being our very first instrument, it might take us a few months to become proficient at playing it. In contrast, if we were already skilled at playing another instrument, such as the piano, it might take just a few days, given how similar the two instruments are. Taking the learnings from one task and fine tuning them on a similar task is something we often do in real life (as illustrated in Figure 3-1). The more similar the two tasks are, the easier it is to adapt the learning from one task to the other.\n\nWe can apply this phenomenon from real life to the world of deep learning. Starting a deep learning project can be relatively quick when we use a pretrained model, reuse the knowledge that it learned during its original training, and adapt it to the task at hand. This process is known as _transfer learning_.\n\nIn this chapter, we use transfer learning to modify existing models by training our own classifier in minutes using Keras. By the end of this chapter, we will have several tools in our arsenal to create high-accuracy image classifiers for any task.\n\n###### Figure 3-1.
Transfer learning in real life\n\n# Adapting Pretrained Models to New Tasks\n\nBefore we discuss the process of transfer learning, let's quickly take a step back and review the primary reasons for the boom in deep learning:\n\n * Availability of bigger and better-quality datasets like ImageNet\n\n * Better compute available; i.e., faster and cheaper GPUs\n\n * Better algorithms (model architecture, optimizer, and training procedure)\n\n * Availability of pretrained models that have taken months to train but can be quickly reused\n\nThe last point is probably one of the biggest reasons for the widespread adoption of deep learning by the masses. If every training task took a month, not more than a handful of researchers with deep pockets would be working in this area. Thanks to transfer learning, the underappreciated hero of training models, we can now modify an existing model to suit our task in as little as a few minutes.\n\nFor example, we saw in Chapter 2 that the pretrained ResNet-50 model, which is trained on ImageNet, can predict feline and canine breeds, among thousands of other categories. So, if we just want to classify between the high-level \"cat\" and \"dog\" categories (and not the lower-level breeds), we can begin with the ResNet-50 model and quickly retrain this model to classify cats and dogs. All we need to do is show it a dataset with these two categories during training, which should take anywhere between a few minutes to a few hours. In comparison, if we had to train a cat versus dog model without a pretrained model, it could take several hours to days.\n\n##### From the Creator's Desk\n\nBy Jeremy Howard, cofounder of fast.ai and former chief scientist at Kaggle\n\nHundreds of thousands of students have studied deep learning through fast.ai. Our goal is to get them up and running as quickly as possible, solving real problems quickly. So what's the first thing we teach? It's transfer learning!\n\nThousands of students have now shared their success stories on our forum ( _http:\/\/forums.fast.ai_) describing how with as few as 30 images they have created 100% accurate image classifiers. We've also heard from students that have broken academic records in many domains and created commercially valuable models using this simple technique.\n\nFive years ago, I created Enlitic, the first company to focus on deep learning for medicine. As an initial proof of concept, I decided to develop a lung tumor classifier from CT scans. You can probably guess what technique we used...yes, it was transfer learning! In our open source fast.ai library we make transfer learning trivially easy\u2014it's just three lines of code, and the most important best practices are built in.\n\n## A Shallow Dive into Convolutional Neural Networks\n\nWe have been using the term \"model\" to refer to the part of AI that makes our predictions. In deep learning for computer vision, that model is usually a special type of neural network called a CNN. Although we explore CNNs in greater detail later in the book, we look at them very briefly in relation to training them via transfer learning here.\n\nIn machine learning, we need to convert data into a set of discernible features and then add a classification algorithm to classify them. It's the same with CNNs. They consist of two parts: convolutional layers and fully connected layers. The job of the convolutional layers is to take the large number of pixels of an image and convert them into a much smaller representation; that is, features. 
The fully connected layers convert these features into probabilities. A fully connected layer is really a neural network with hidden layers, as we saw in Chapter 1. In summary, the convolutional layers act as feature extractors, whereas the fully connected layers act as classifiers. Figure 3-2 shows a high-level overview of a CNN.\n\n###### Figure 3-2. A high-level overview of a CNN\n\nImagine that we want to detect a human face. We might want to use a CNN to classify an image and determine whether it contains a face. Such a CNN would be composed of several layers connected one after another. These layers represent mathematical operations. The output of one layer is the input to the next. The first (or the lowermost) layer is the input layer, where the input image is fed. The last (or the topmost) layer is the output layer, which gives the predictions.\n\nThe way it works is the image is fed into the CNN and passes through a series of layers, with each performing a mathematical operation and passing the result to the subsequent layer. The resulting output is a list of object classes and their probabilities. For example, categories like ball\u201465%, grass\u201420%, and so on. If the output for an image contains a \"face\" class with a 70% probability, we conclude that there is a 70% likelihood that the image contains a human face.\n\n###### Note\n\nAn intuitive (and overly simplified) way to look at CNNs is to see them as a series of filters. As the word filter implies, each layer acts as a sieve of information, letting something \"pass through\" only if it recognizes it. (If you have heard of high-pass and low-pass filters in electronics, this might seem familiar.) We say that the layer was \"activated\" for that information. Each layer is activated for visual patterns resembling parts of cats, dogs, cars, and so forth. If a layer does not recognize information (due to what it learned while training), its output is close to zero. CNNs are the \"bouncers\" of the deep learning world!\n\nIn the facial detection example, lower-level layers (Figure 3-3, a; layers that are closer to the input image) are \"activated\" for simpler shapes; for example, edges and curves. Because these layers activate only for basic shapes, they can be easily reused for a different purpose than face recognition such as detecting a car (every image is composed of edges and curves, after all). Middle-level layers (Figure 3-3 b) are activated for more complex shapes such as eyes, noses, and lips. These layers are not nearly as reusable as the lower-level layers. They might not be as useful for detecting a car, but might still be useful for detecting animals. And higher-level layers (Figure 3-3 c) are activated for even more complex shapes; for example, most of the human face. These layers tend to be more task-specific and thus the least reusable across other image classification problems.\n\n###### Figure 3-3. (a) Lower-level activations, followed by (b) midlevel activations and (c) upper-layer activations (image source: Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations, Lee et al., ICML 2009)\n\nThe complexity and power of what a layer can recognize increases as we approach the final layers. Conversely, the reusability of a layer decreases as we get closer to the output. 
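To make this concrete, here is a small sketch (assuming the `tf.keras` setup from Chapter 2) that peeks at the layer stack of a pretrained model; the earliest layers are the generic ones, and the last few perform the task-specific classification:\n\n    import tensorflow as tf\n\n    model = tf.keras.applications.ResNet50(weights='imagenet')\n    print(len(model.layers)) # total number of layers in the stack\n    print([layer.name for layer in model.layers[:3]]) # earliest, most generic layers\n    print([layer.name for layer in model.layers[-2:]]) # final, task-specific layers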
This will become apparent very soon when we look at what these layers learn.\n\n## Transfer Learning\n\nIf we want to transfer knowledge from one model to another, we want to reuse more of the _generic_ layers (closer to the input) and fewer of the _task-specific_ layers (closer to the output). In other words, we want to remove the last few layers (typically the fully connected layers) so that we can utilize the more generic ones, and add layers that are geared toward our specific classification task. Once training begins, the generic layers (which form the majority of our new model) are kept frozen (i.e., they are unmodifiable), whereas the newly added task-specific layers are allowed to be modified. This is how transfer learning helps quickly train new models. Figure 3-4 illustrates this process for a pretrained model trained for task X adapted to task Y.\n\n###### Figure 3-4. An overview of transfer learning\n\n## Fine Tuning\n\nBasic transfer learning gets us only so far. We usually add only two to three fully connected layers after the generic layers to make the new classifier model. If we want higher accuracy, we must allow more layers to be trained. This means unfreezing some of the layers that would have otherwise been frozen in transfer learning. This is known as _fine tuning_. Figure 3-5 shows an example where some convolutional layers near the head\/top are unfrozen and trained for the task at hand.\n\n###### Figure 3-5. Fine tuning a convolutional neural network\n\nIt's obvious that compared to basic transfer learning, more layers are tweaked to our dataset during fine tuning. Because a higher number of layers have adapted to our task compared to transfer learning, we can achieve greater accuracy for our task. The decision on how many layers to fine tune is dependent on the amount of data at hand as well as the similarity of the target task to the original dataset on which the pretrained model was trained.\n\nWe often hear data scientists saying, \"I fine tuned the model,\" which means that they took a pretrained model, removed task-specific layers and added new ones, froze the lower layers, and trained the upper part of the network on the new dataset they had.\n\n###### Note\n\nIn daily lingo, transfer learning and fine tuning are used interchangeably. When spoken, transfer learning is used more as a general concept, whereas fine tuning is referred to as its implementation.\n\n## How Much to Fine Tune\n\nHow many layers of a CNN should we fine tune? This can be guided by the following two factors:\n\nHow much data do we have?\n\nIf we have a couple hundred labeled images, it would be difficult to train and test a freshly defined model from scratch (i.e., define a model architecture with random seed weights) because we need a lot more data. The danger of training with such a small amount of data is that these powerful networks might potentially memorize it, leading to undesirable overfitting (which we explore later in the chapter). Instead, we will borrow a pretrained network and fine tune the last few layers. But if we had a million labeled images, it would be feasible to fine tune all layers of the network and, if necessary, train from scratch. So, the amount of task-specific data dictates whether we can fine tune, and how much.\n\nHow similar is the data?\n\nIf the task-specific data is similar to the data used for the pretrained network, we can fine tune the last few layers. 
But if our task is identifying different bones in an X-ray image and we want to start out from an ImageNet trained network, the high dissimilarity between regular ImageNet images and X-ray images would require nearly all layers to be trained.\n\nTo summarize, Table 3-1 offers an easy-to-follow cheat sheet.\n\nTable 3-1. Cheatsheet for when and how to fine tune | **High similarity among datasets** | **Low similarity among datasets** \n---|---|--- \n**Large amount of training data** | Fine tune all layers | Train from scratch, or fine tune all layers \n**Small amount of training data** | Fine tune last few layers | Tough luck! Train on a smaller network with heavy data augmentation or somehow get more data\n\nEnough theory, let's see it in action.\n\n# Building a Custom Classifier in Keras with Transfer Learning\n\nAs promised, it's time to build our state-of-the-art classifier in 30 lines or less. At a high level, we will use the following steps:\n\n 1. Organize the data. Download labeled images of cats and dogs and then divide the images into training and validation folders.\n\n 2. Build the data pipeline. Define a pipeline for reading data, including preprocessing the images (e.g., resizing) and grouping multiple images together into batches.\n\n 3. Augment the data. In the absence of a ton of training images, make small changes (augmentation) like rotation, zooming, and so on to increase variation in training data.\n\n 4. Define the model. Take a pretrained model, remove the last few task-specific layers, and append a new classifier layer. Freeze the weights of original layers (i.e., make them unmodifiable). Select an optimizer algorithm and a metric to track (like accuracy).\n\n 5. Train and test. Train for a few iterations until our validation accuracy is high. Save the model to eventually load as part of any application for predictions.\n\nThis will all make sense pretty soon. Let's explore this process in detail.\n\n##### Solving the World's Most Pressing Computer-Vision Problem\n\nIn early 2014, Microsoft Research was figuring out how to solve the world's most pressing problem at the time: \"Differentiating cats and dogs.\" (Where else would we have gotten the idea for this chapter?) Keep in mind that it was a much more difficult computer-vision problem back then. To facilitate this effort, Microsoft Research released the Asirra (Animal Species Image Recognition for Restricting Access) dataset. The motivation behind the Asirra dataset was to develop a sufficiently challenging CAPTCHA system. More than three million images, labeled by animal shelters throughout the United States, were provided by _Petfinder.com_ to Microsoft Research. When this problem was initially introduced, the highest possible accuracy attained was around 80%. By using deep learning, in just a few weeks, it went to 98%! This (now relatively easy) task shows the power of deep learning.\n\n# Organize the Data\n\nIt's essential to understand the distinction between train, validation, and test data. Let's look at a real-world analogy of a student preparing for standardized exams (e.g., SAT in the US, the Gaokao in China, JEE in India, CSAT in Korea, etc.). The in-class instruction and homework assignments are analogous to the training process. The quizzes, midterms, and other tests in school are the equivalent to the validation\u2014the student is able to take them frequently, assess performance, and make improvements in their study plan. 
They're ultimately optimizing for their performance in the final standardized exam for which they get only one chance. The final exam is equivalent to the test set\u2014the student does not get an opportunity to improve here (ignoring the ability to retake the test). This is their one shot at showing what they have learned.\n\nSimilarly, our aim is to give the best predictions in the real world. To enable this, we divide our data into three parts: train, validation, and test. A typical distribution would be 80% for train, 10% for validation, and 10% for test. Note that we randomly divide our data into these three sets in order to ensure the least amount of _bias_ that might creep in unknowingly. The final accuracy of the model is determined by the accuracy on the _test set_ , much like the student's score is determined only on their performance on the standardized exam.\n\nThe model learns from the training data and uses the validation set to evaluate its performance. Machine learning practitioners take this performance as feedback to find opportunities to improve their models on a continuous basis, similar to how students improve their preparation with the help of quizzes. There are several knobs that we can tune to improve performance; for example, the number of layers to train.\n\nIn many research competitions (including _Kaggle.com_ ), contestants receive a test set that is separate from the data they can use for building the model. This ensures uniformity across the competition when it comes to reporting accuracy. It is up to the contestants to divide the available data into training and validation sets. Similarly, during our experiments in this book, we will continue to divide data in these two sets, keeping in mind that a test dataset is still essential to report real-world numbers.\n\nSo why even use a validation set? Data is sometimes difficult to obtain, so why not use all the available samples for training, and then report accuracy on them? Sure, when the model begins to learn, it will gradually give higher accuracy predictions on the training dataset (called training accuracy). But because they are so powerful, deep neural networks can potentially memorize the training data, even resulting in 100% accuracy on the training data sometimes. However, its real-world performance will be quite poor. It's like if the student knew the questions that would be on the exam before taking it. This is why a validation set, not used to train the model, gives a realistic assessment of the model performance. Even though we might assign 10-15% of the data as a validation set, it will go a long way in guiding us on how good our model really is.\n\nFor the training process, we need to store our dataset in the proper folder structure. We'll divide the images into two sets: training and validation. For an image file, Keras will automatically assign the name of the _class_ (category) based on its parent folder name. Figure 3-6 depicts the ideal structure to recreate.\n\n###### Figure 3-6. 
Example directory structure of the training and validation data for different classes

The following sequence of commands can help download the data and achieve this directory structure:

    $ wget https://www.kaggle.com/c/dogs-vs-cats-redux-kernels-edition/download/train.zip
    $ unzip train.zip
    $ mv train data
    $ cd data
    $ mkdir train val
    $ mkdir train/cat train/dog
    $ mkdir val/cat val/dog

The 25,000 files within the data folder are prefixed with "cat" and "dog." Now, move the files into their respective directories. To keep our initial experiment short, we pick 250 random files per class and place them in the training and validation folders. We can increase/decrease this number anytime to experiment with a trade-off between accuracy and speed:

    $ ls | grep cat | sort -R | head -250 | xargs -I {} mv {} train/cat/
    $ ls | grep dog | sort -R | head -250 | xargs -I {} mv {} train/dog/
    $ ls | grep cat | sort -R | head -250 | xargs -I {} mv {} val/cat/
    $ ls | grep dog | sort -R | head -250 | xargs -I {} mv {} val/dog/

# Build the Data Pipeline

To start off with our Python program, we begin by importing the necessary packages:

    import math
    import tensorflow as tf
    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import Input, Flatten, Dense, Dropout, GlobalAveragePooling2D
    from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input

Place the following lines of configuration right after the import statements, which we can modify based on our dataset (the directory names match the folders we just created):

    TRAIN_DATA_DIR = 'data/train/'
    VALIDATION_DATA_DIR = 'data/val/'
    TRAIN_SAMPLES = 500
    VALIDATION_SAMPLES = 500
    NUM_CLASSES = 2
    IMG_WIDTH, IMG_HEIGHT = 224, 224
    BATCH_SIZE = 64

## Number of Classes

With two classes to distinguish between, we can treat this problem as one of the following:

 * A binary classification task

 * A multiclass classification task

### Binary classification

As a binary classification task, it's important to note that "cat versus dog" is really "cat versus not cat." A dog would be classified as a "not cat," much like a desk or a ball would. For a given image, the model will give a single probability value corresponding to the "cat" class; the probability of "not cat" is therefore 1 - _P(cat)_. If the probability is higher than 0.5, we predict "cat"; otherwise, "not cat." To keep things simple, we assume that the test set is guaranteed to contain only images of cats or dogs. Because "cat versus not cat" is a binary classification task, we set the number of classes to 1; that is, "cat." Anything that cannot be classified as "cat" will be classified as "not cat."

###### Tip

Keras processes the input data in the alphabetical order of the folder names. Because "cat" comes before "dog" alphabetically, our first class for prediction is "cat." For a multiclass task, we can apply the same concept and infer each class identifier (index) based on the folder sort order. Note that the class index starts at 0 for the first class.

### Multiclass classification

In a hypothetical world that had only cats and dogs and nothing else, a "not cat" would always be a dog. So the label "not cat" could simply be replaced with the label "dog." However, in the real world, we have more than two types of objects.
As explained before, a ball or a sofa would also be classified as "dog," which would be incorrect. Hence, for a real-world scenario, treating this as a multiclass classification task instead of a binary classification task is far more useful. As a multiclass classification task, we predict separate probability values for each class, and the highest one is our winner. In the case of "cat versus dog," we set the number of classes to two. To keep our code reusable for future tasks, we will treat this as a multiclass classification task.

## Batch Size

At a high level, the training process includes the following steps:

 1. Make predictions on images ( _forward pass_ ).

 2. Determine which predictions were incorrect and propagate back the difference between the prediction and the true value ( _backpropagation_ ).

 3. Rinse and repeat until the predictions become sufficiently accurate.

It's quite likely that the initial iterations would have poor accuracy, little better than random guessing. Repeating the process several times, however, can yield a highly accurate model (>90%).

The batch size defines how many images are seen by the model at a time. It's important that each batch has a good variety of images from different classes in order to prevent large fluctuations in the accuracy metric between iterations, so a sufficiently large batch size is necessary. However, it's important not to set the batch size too large; a batch that is too large might not fit in GPU memory, resulting in an "out of memory" crash. Usually, batch sizes are set as powers of 2. A good number to start with is 64 for most problems, and we can experiment by increasing or decreasing it.

# Data Augmentation

Usually, when we hear deep learning, we associate it with millions of images, so the 500 images we have might seem like a low number for real-world training. Deep neural networks are powerful, in fact a little too powerful for small quantities of data. The danger of a limited set of training images is that the neural network might memorize our training data and show great prediction performance on the training set but bad accuracy on the validation set. In other words, the model has overtrained and does not generalize to previously unseen images. And we definitely don't want that.

###### Tip

Often, when we attempt to train a neural network on a small amount of data, the result is a model that performs extremely well on the training data itself but makes rather poor predictions on data that it has not seen before. Such a model would be described as an _overfitted_ model, and the problem itself is known as _overfitting_.

Figure 3-7 illustrates this phenomenon for a distribution of points close to a sine curve (with a little noise). The dots represent the training data visible to our network, and the crosses represent the testing data that was not seen during training. On one extreme (underfitting), an unsophisticated model, such as a linear predictor, will not be able to represent the underlying distribution well, and a high error rate on both the training data and the test data will result. On the other extreme (overfitting), a powerful model (such as a deep neural network) might have the capacity to memorize the training data, which would result in a really low error on the training data but still a high error on the testing data.
What we want is the happy middle, where the training error and the testing error are both modestly low, which ideally ensures that our model will perform just as well in the real world as it does during training.

###### Figure 3-7. Underfitting, overfitting, and ideal fitting for points close to a sine curve

With great power comes great responsibility. It's our responsibility to ensure that our powerful deep neural network does not overfit on our data. Overfitting is common when we have little training data. We can reduce its likelihood in a few different ways:

 * Somehow get more data

 * Heavily augment existing data

 * Fine tune fewer layers

There are often situations for which there's not enough data available. Perhaps we're working on a niche problem and data is difficult to come by. But there are a few ways that we can artificially augment our dataset for classification:

Rotation

In our example, we might want to rotate the 500 images randomly by up to 20 degrees in either direction, yielding up to 20,000 possible unique images.

Random shift

Shift the images slightly to the left or to the right.

Zoom

Zoom in and out of the image slightly.

By combining rotation, shifting, and zooming, the program can generate an almost infinite number of unique images. This important step is called _data augmentation_. Data augmentation is useful not only for adding more data, but also for training more robust models for real-world scenarios. For example, not all images have the cat properly centered in the middle or at a perfect 0-degree angle. Keras provides the `ImageDataGenerator` class, which augments the data while it is being loaded from the directory. To illustrate what data augmentations of images look like, Figure 3-8 showcases example augmentations generated by the imgaug library for a sample image. (Note that we will not be using imgaug for our actual training.)

###### Figure 3-8. Possible image augmentations generated from a single image

Colored images usually have three channels: red, green, and blue. Each channel has an intensity value ranging from 0 to 255. To normalize these values (i.e., scale them down to the small range the network expects), we use the `preprocess_input` function, which applies the same per-pixel scaling that MobileNet was originally trained with (rescaling each pixel to a value between -1 and 1):

    train_datagen = ImageDataGenerator(preprocessing_function=preprocess_input,
                                       rotation_range=20,
                                       width_shift_range=0.2,
                                       height_shift_range=0.2,
                                       zoom_range=0.2)
    val_datagen = ImageDataGenerator(preprocessing_function=preprocess_input)

###### Tip

Sometimes knowing the label of a training image can be useful in determining appropriate ways of augmenting it. For example, when training a digit recognizer, you might be okay with augmentation by flipping vertically for an image of the digit "8," but not for "6" and "9."

Unlike our training set, we don't want to augment our validation set. The reason is that with dynamic augmentation, the validation set would keep changing in each iteration, and the resulting accuracy metric would be inconsistent and difficult to compare across other iterations.

It's time to load the data from its directories. Training one image at a time can be pretty inefficient, so we can batch them into groups. To introduce more randomness during the training process, we'll keep shuffling the images in each batch.
To bring reproducibility during multiple runs of the same program, we'll give the random number generator a seed value:\n\n train_generator = train_datagen.flow_from_directory(\n TRAIN_DATA_DIR,\n target_size=(IMG_WIDTH, IMG_HEIGHT),\n batch_size=BATCH_SIZE,\n shuffle=True,\n seed=12345,\n class_mode='categorical')\n validation_generator = val_datagen.flow_from_directory(\n VALIDATION_DATA_DIR,\n target_size=(IMG_WIDTH, IMG_HEIGHT),\n batch_size=BATCH_SIZE,\n shuffle=False,\n class_mode='categorical')\n\n# Model Definition\n\nNow that the data is taken care of, we come to the most crucial component of our training process: the model. In the code that follows, we reuse a CNN previously trained on the ImageNet dataset (MobileNet in our case), throw away the last few layers, called fully connected layers (i.e., ImageNet-specific classifier layers), and replace them with our own classifier suited to the task at hand.\n\nFor transfer learning, we \"freeze\" the weights of the original model; that is, set those layers as unmodifiable, so only the layers of the new classifier (that we'll add) can be modified. We use MobileNet here to keep things fast, but this method will work just as well for any neural network. The following lines include a few terms such as `Dense`, `Dropout`, and so on. Although we won't explore them in this chapter, you can find explanations in Appendix A.\n\n def model_maker():\n base_model = MobileNet(include_top=False, input_shape =\n (IMG_WIDTH,IMG_HEIGHT,3))\n for layer in base_model.layers[:]:\n layer.trainable = False _# Freeze the layers_\n input = Input(shape=(IMG_WIDTH, IMG_HEIGHT, 3))\n custom_model = base_model(input)\n custom_model = GlobalAveragePooling2D()(custom_model)\n custom_model = Dense(64, activation='relu')(custom_model)\n custom_model = Dropout(0.5)(custom_model)\n predictions = Dense(NUM_CLASSES, activation='softmax')(custom_model)\n return Model(inputs=input, outputs=predictions)\n\n# Train the Model\n\n## Set Training Parameters\n\nWith both the data and model ready, all we have left to do is train the model. This is also known as _fitting the model to the data_. For training a model, we need to select and modify a few different training parameters.\n\nLoss function\n\nThe `loss` function is the penalty we impose on the model for incorrect predictions during the training process. It is the value of this function that we seek to _minimize_. For example, in a task to predict house prices, the `loss` function could be the root-mean-square error.\n\nOptimizer\n\nThis is an algorithm that helps minimize the `loss` function. We use Adam, one of the fastest optimizers out there.\n\nLearning rate\n\nLearning is incremental. The learning rate tells the optimizer how big of a step to take toward the solution; in other words, where the loss is minimum. Take too big of a step, and we end up wildly swinging and overshooting our target. Take too small a step, and it can take a really long time before eventually arriving at the target loss value. It is important to set an optimal learning rate to ensure that we reach our learning goal in a reasonable amount of time. In our example, we set the learning rate at 0.001.\n\nMetric\n\nChoose a metric to judge the performance of the trained model. Accuracy is a good explainable metric, especially when the classes are not imbalanced (i.e., roughly equal amounts of data for each class). 
Note that this metric is not related to the `loss` function; it is mainly used for reporting and not as feedback for the model.

In the following piece of code, we create the custom model using the `model_maker` function that we wrote earlier. We use the parameters described here to customize this model further for our task of cats versus dogs:

    model = model_maker()
    model.compile(loss='categorical_crossentropy',
                  optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  metrics=['acc'])
    num_steps = math.ceil(float(TRAIN_SAMPLES) / BATCH_SIZE)
    model.fit_generator(train_generator,
                        steps_per_epoch=num_steps,
                        epochs=10,
                        validation_data=validation_generator,
                        validation_steps=num_steps)

###### Note

You might have noticed the term epoch in the preceding code. One epoch represents a full pass of the network over the entire training dataset, and it consists of several minibatches (steps).

## Start Training

Run this program and let the magic begin. If you don't have a GPU, brew a cup of coffee while you wait—it might take 5 to 10 minutes. Or why wait, when you can run the notebooks of this chapter (posted on GitHub) on Colab with a GPU runtime for free?

When complete, notice that there are four statistics: `loss` and `acc` on both the training and validation data. We are rooting for `val_acc`:

    > Epoch 1/100 7/7 [====] - 5s - 
    loss: 0.6888 - acc: 0.6756 - val_loss: 0.2786 - val_acc: 0.9018
    > Epoch 2/100 7/7 [====] - 5s - 
    loss: 0.2915 - acc: 0.9019 - val_loss: 0.2022 - val_acc: 0.9220
    > Epoch 3/100 7/7 [====] - 4s - 
    loss: 0.1851 - acc: 0.9158 - val_loss: 0.1356 - val_acc: 0.9427
    > Epoch 4/100 7/7 [====] - 4s - 
    loss: 0.1509 - acc: 0.9341 - val_loss: 0.1451 - val_acc: 0.9404
    > Epoch 5/100 7/7 [====] - 4s - 
    loss: 0.1455 - acc: 0.9464 - val_loss: 0.1637 - val_acc: 0.9381
    > Epoch 6/100 7/7 [====] - 4s - 
    loss: 0.1366 - acc: 0.9431 - val_loss: 0.2319 - val_acc: 0.9151
    > Epoch 7/100 7/7 [====] - 4s - 
    loss: 0.0983 - acc: 0.9606 - val_loss: 0.1420 - val_acc: 0.9495
    > Epoch 8/100 7/7 [====] - 4s - 
    loss: 0.0841 - acc: 0.9731 - val_loss: 0.1423 - val_acc: 0.9518
    > Epoch 9/100 7/7 [====] - 4s - 
    loss: 0.0714 - acc: 0.9839 - val_loss: 0.1564 - val_acc: 0.9509
    > Epoch 10/100 7/7 [====] - 5s - 
    loss: 0.0848 - acc: 0.9677 - val_loss: 0.0882 - val_acc: 0.9702

All it took was 5 seconds in the very first epoch to reach 90% accuracy on the validation set, with just 500 training images. Not bad! And by the 10th epoch, we observe about 97% _validation accuracy_. That's the power of transfer learning.

Let's take a moment to appreciate what happened here. With just 500 images, we were able to reach a high level of accuracy in a matter of a few seconds and with very little code. In contrast, if we did not have a model previously trained on ImageNet, getting an accurate model might have needed training time anywhere from a couple of hours to a few days, and tons more data.

That's all the code we need to train a state-of-the-art classifier on any problem. Place data into folders named after each class, and change the corresponding values in the configuration variables. Because we treat the task as a multiclass problem, we use `categorical_crossentropy` as the `loss` function and `softmax` as the `activation` function in the last layer; the same settings apply to any task with more than two classes. Table 3-2 illustrates this.
Table 3-2. Deciding the loss and activation type based on the task

**Classification type** | **Class mode** | **Loss** | **Activation on the last layer** 
---|---|---|--- 
1 or 2 classes | binary | binary_crossentropy | sigmoid 
Multiclass, single label | categorical | categorical_crossentropy | softmax 
Multiclass, multilabel | categorical | binary_crossentropy | sigmoid

Before we forget, save the model that you just trained so that we can use it later:

    model.save('model.h5')

# Test the Model

Now that we have a trained model, we will eventually want to use it in our application. We can load this model anytime and classify an image. `load_model`, as its name suggests, loads the model:

    from tensorflow.keras.models import load_model
    model = load_model('model.h5')

Now let's try loading one of our original sample images and see what results we get:

    import numpy as np
    from tensorflow.keras.preprocessing import image
    from tensorflow.keras.applications.mobilenet import preprocess_input

    img_path = '../../sample_images/dog.jpg'
    img = image.load_img(img_path, target_size=(224, 224))
    img_array = image.img_to_array(img)
    expanded_img_array = np.expand_dims(img_array, axis=0)
    preprocessed_img = preprocess_input(expanded_img_array) _# Preprocess the image_
    prediction = model.predict(preprocessed_img)
    print(prediction)
    print(validation_generator.class_indices)

    [[0.9967706]]
    {'dog': 1, 'cat': 0}

Printing the value of the probability, we see that it is 0.996. This is the probability of the given image belonging to the class "1," which is a dog. Because the probability is greater than 0.5, the image is predicted as a dog.

That's all that we need to train our own classifiers. Throughout this book, you can expect to reuse this code for training with minimal modifications. You can also reuse it in your own projects. Play with the number of epochs and images and observe how it affects the accuracy. Also, try it with any other data you can find online. It doesn't get easier than that!

# Analyzing the Results

With our trained model, we can analyze how it's performing on the validation dataset. Beyond the more straightforward accuracy metrics, looking at the actual images that were mispredicted should give an intuition as to whether an example was truly challenging or whether our model is not sophisticated enough.

There are three questions that we want to answer for each category (cat, dog):

 * Which images are we most confident about being a cat/dog?

 * Which images are we least confident about being a cat/dog?

 * Which images have incorrect predictions in spite of being highly confident?

Before we get to that, let's get predictions over the entire validation dataset.
First, we set the pipeline configuration correctly:

    _# VARIABLES_
    IMG_WIDTH, IMG_HEIGHT = 224, 224
    VALIDATION_DATA_DIR = 'data/val/'
    VALIDATION_BATCH_SIZE = 64

    _# DATA GENERATORS_
    validation_datagen = ImageDataGenerator(
        preprocessing_function=preprocess_input)
    validation_generator = validation_datagen.flow_from_directory(
        VALIDATION_DATA_DIR,
        target_size=(IMG_WIDTH, IMG_HEIGHT),
        batch_size=VALIDATION_BATCH_SIZE,
        shuffle=False,
        class_mode='categorical')
    ground_truth = validation_generator.classes
    fnames = validation_generator.filenames _# relative paths of the validation images_

Then, we get the predictions:

    predictions = model.predict_generator(validation_generator)

To make our analysis easier, we make a dictionary mapping the index of each image to its prediction and ground truth (the expected prediction):

    _# prediction_table is a dict with index, prediction, ground truth_
    prediction_table = {}
    for index, val in enumerate(predictions):
        _# get argmax index_
        index_of_highest_probability = np.argmax(val)
        value_of_highest_probability = val[index_of_highest_probability]
        prediction_table[index] = [value_of_highest_probability,
                                   index_of_highest_probability,
                                   ground_truth[index]]
    assert len(predictions) == len(ground_truth) == len(prediction_table)

The next code block is boilerplate that we reuse regularly throughout the book. We rely on two helper functions: `get_images_with_sorted_probabilities()`, which finds the images with the highest (or lowest) probability value for a given category, and `display()`, which outputs a set of images as a grid on-screen:

    def display(sorted_indices, message):
        similar_image_paths = []
        distances = []
        for name, value in sorted_indices:
            [probability, predicted_index, gt] = value
            similar_image_paths.append(VALIDATION_DATA_DIR + fnames[name])
            distances.append(probability)
        plot_images(similar_image_paths, distances, message)

Both helper functions, along with `plot_images()`, are defined in the book's GitHub repository (see _http://PracticalDeepLearning.ai_), under _code/chapter-3_.

Now the fun starts! Which images are we most confident contain dogs? Let's find images with the highest prediction probability (i.e., closest to 1.0; see Figure 3-9) with the predicted class dog (i.e., 1):

    _# Most confident predictions of 'dog'_
    indices = get_images_with_sorted_probabilities(prediction_table,
        get_highest_probability=True, label=1, number_of_items=10,
        only_false_predictions=False)
    message = 'Images with the highest probability of containing dogs'
    display(indices[:10], message)

###### Figure 3-9. Images with the highest probability of containing dogs

These images are indeed very dog-like. One of the reasons the probability is so high may be the fact that the images contain multiple dogs, as well as clear, unambiguous views. Now let's try to find which images we are least confident contain dogs (see Figure 3-10):

    _# Least confident predictions of 'dog'_
    indices = get_images_with_sorted_probabilities(prediction_table,
        get_highest_probability=False, label=1, number_of_items=10,
        only_false_predictions=False)
    message = 'Images with the lowest probability of containing dogs'
    display(indices[:10], message)

###### Figure 3-10. Images with the lowest probability of containing dogs

To repeat, these are the images our classifier is most unsure about containing a dog. Most of these predictions are right at the tipping point (i.e., 0.5 probability) of being the majority prediction.
Keep in mind the probability of being a cat is just slightly smaller, around 0.49. Compared to the previous set of images, the animals appearing in these images are often smaller and less clear. And these images often result in mispredictions\u2014only 2 of the 10 images were correctly predicted. One possible way to do better here is to train with a larger set of images.\n\nIf you are concerned about these misclassifications, worry not. A simple trick to improve the classification accuracy is to have a higher threshold for accepting a classifier's results, say 0.75. If the classifier is unsure of an image category, its results are withheld. In Chapter 5, we look at how to find an optimal threshold.\n\nSpeaking of mispredictions, they are obviously expected when the classifier has low confidence (i.e., near 0.5 probability for a two-class problem). But what we don't want is to mispredict when our classifier is really sure of its predictions. Let's check which images the classifier is confident contain dogs in spite of them being cats (see Figure 3-11):\n\n _# Incorrect predictions of 'dog'_\n indices = get_images_with_sorted_probabilities(prediction_table,\n get_highest_probability=True, label=1, number_of_items=10,\n only_false_predictions=True)\n message = 'Images of cats with the highest probability of containing dogs'\n display(indices[:10], message)\n\n###### Figure 3-11. Images of cats with the highest probability of containing dogs\n\nHmm...turns out half of these images contain both cats and dogs, and our classifier is correctly predicting the dog category because they are bigger in size in these images. Thus, it's not the classifier but the data that is incorrect here. This often happens in large datasets. The remaining half often contains unclear and relatively smaller objects (but ideally we want lower confidence for these difficult-to-identify images).\n\nRepeating the same set of questions for the cat class, which images are more cat-like (see Figure 3-12)?\n\n _# Most confident predictions of 'cat'_\n indices = get_images_with_sorted_probabilities(prediction_table,\n get_highest_probability=True, label=0, number_of_items=10,\n only_false_predictions=False)\n message = 'Images with the highest probability of containing cats'\n display(indices[:10], message)\n\n###### Figure 3-12. Images with the highest probability of containing cats\n\nInterestingly, many of these have multiple cats. This affirms our previous hypothesis that multiple clear, unambiguous views of cats can give higher probabilities. On the other hand, which images are we most unsure about containing cats (see Figure 3-13)?\n\n _# Least confident predictions of 'cat'_\n indices = get_images_with_sorted_probabilities(prediction_table,\n get_highest_probability=False, label=0, number_of_items=10,\n only_false_predictions=False)\n message = 'Images with the lowest probability of containing cats'\n display(indices[:10], message)\n\n###### Figure 3-13. Images with the lowest probability of containing cats\n\nAs seen previously, the key object size is small, and some of the images are quite unclear, meaning that there is too much contrast in some cases or the object is too bright, something not in line with most of the training images. For example, the camera flash in the eighth (dog.6680) and tenth (dog.1625) images in Figure 3-13 makes the dog difficult to recognize. The sixth image contains a dog in front of a sofa of the same color. 
Two images contain cages.\n\nLastly, which images is our classifier mistakenly sure of containing cats (see Figure 3-14)?\n\n _# Incorrect predictions of 'cat'_\n indices = get_images_with_sorted_probabilities(prediction_table,\n get_highest_probability=True, label=0, number_of_items=10,\n only_false_predictions=True)\n message = 'Images of dogs with the highest probability of containing cats'\n display(indices[:10], message)\n\n###### Figure 3-14. Images of dogs with the highest probability of containing cats\n\nThese mispredictions are what we want to reduce. Some of them are clearly wrong, whereas others are understandably confusing images. The sixth image (dog.4334) in Figure 3-14 seems to be incorrectly labeled as a dog. The seventh and tenth images are difficult to distinguish against the background. The first and tenth lack enough texture within them to give the classifier enough identification power. And some of the dogs are too small, like the second and fourth.\n\nGoing over the various analyses, we can summarize that mispredictions can be caused by low illumination, unclear, difficult-to-distinguish backgrounds, lack of texture, and smaller occupied area with regard to the image.\n\nAnalyzing our predictions is a great way to understand what our model has learned and what it's bad at, and highlights opportunities to enhance its predictive power. Increasing the size of the training examples and more robust augmentation will help in improving the classification. It's also important to note that showing real-world images to our model (images that look similar to the scenario where our app will be used) will help improve its accuracy drastically. In Chapter 5, we make the classifier more robust.\n\n# Further Reading\n\nTo help understand neural networks and CNNs better, our website features a learning guide which includes recommended resources like video lectures, blogs, and, more interestingly, interactive visual tools which allow you to play with different scenarios in the browser without the need to install any packages. If you're a first-time learner of deep learning, we highly recommend this guide in order to strengthen your foundational knowledge. It covers the theory that you will need to build the intuition to solve future problems. We use Google's TensorFlow Playground (Figure 3-15) for neural networks and Andrej Karpathy's ConvNetJS (Figure 3-16) for CNNs.\n\n###### Figure 3-15. Building a neural network in TensorFlow Playground\n\n###### Figure 3-16. Defining a CNN and visualizing the output of each layer during training in ConvNetJS\n\nWe additionally have a short guide in Appendix A, which summarizes convolutional neural networks, as a ready reference.\n\n# Summary\n\nIn this chapter, we introduced the concept of transfer learning. We reused a pretrained model to build our own cats versus dogs classifier in under 30 lines of code and with barely 500 images, reaching state-of-the-art accuracy in a few minutes. By writing this code, we also debunk the myth that we need millions of images and powerful GPUs to train our classifier (though they help).\n\nHopefully, with these skills, you might be able to finally answer the age-old question of who let the dogs out.\n\nIn the next couple of chapters, we use this learning to understand CNNs in more depth and take the model accuracy to the next level.\n\n# Chapter 4. Building a Reverse Image Search Engine: Understanding Embeddings\n\nBob just bought a new home and is looking to fill it up with some fancy modern furniture. 
He's flipping endlessly through furniture catalogs and visiting furniture showrooms, but hasn't yet landed on something he likes. Then one day, he spots the sofa of his dreams\u2014a unique L-shaped white contemporary sofa in an office reception. The good news is that he knows what he wants. The bad news is that he doesn't know where to buy it from. The brand and model number is not written on the sofa. Asking the office manager doesn't help either. So, he takes a few photos from different angles to ask around in local furniture shops, but tough luck: no one knows this particular brand. And searching on the internet with keywords like \"white L-shaped,\" \"modern sofa\" gives him thousands of results, but not the one he's looking for.\n\nAlice hears Bob's frustration and asks, \"Why don't you try reverse image search?\" Bob uploads his images on Google and Bing's Reverse Image Search and quickly spots a similar-looking image on an online shopping website. Taking this more perfect image from the website, he does a few more reverse image searches and finds other websites offering the same sofa at cheaper prices. After a few minutes of being online, Bob has officially ordered his dream sofa!\n\n_Reverse image search_ (or as it is more technically known, _instance retrieval_ ) enables developers and researchers to build scenarios beyond simple keyword search. From discovering visually similar objects on Pinterest to recommending similar songs on Spotify to camera-based product search on Amazon, a similar class of technology under the hood is used. Sites like TinEye alert photographers on copyright infringement when their photographs are posted without consent on the internet. Even face recognition in several security systems uses a similar concept to ascertain the identity of the person.\n\nThe best part is, with the right knowledge, you can build a working replica of many of these products in a few hours. So let's dig right in!\n\nHere's what we're doing in this chapter:\n\n 1. Performing feature extraction and similarity search on Caltech101 and Caltech256 datasets\n\n 2. Learning how to scale to large datasets (up to billions of images)\n\n 3. Making the system more accurate and optimized\n\n 4. Analyzing case studies to see how these concepts are used in mainstream products\n\n# Image Similarity\n\nThe first and foremost question is: given two images, are they similar or not?\n\nThere are several approaches to this problem. One approach is to compare patches of areas between two images. Although this can help find exact or near-exact images (that might have been cropped), even a slight rotation would result in dissimilarity. By storing the hashes of the patches, duplicates of an image can be found. One use case for this approach would be the identification of plagiarism in photographs.\n\nAnother naive approach is to calculate the histogram of RGB values and compare their similarities. This might help find near-similar images captured in the same environment without much change in the contents. For example, in Figure 4-1, this technique is used in image deduplication software aimed at finding bursts of photographs on your hard disk, so you can select the best one and delete the rest. Of course, there is an increasing possibility of false positives as your dataset grows. Another downside to this approach is that small changes to the color, hue, or white balance would make recognition more difficult.\n\n###### Figure 4-1. 
RGB histogram-based \"Similar Image Detector\" program\n\nA more robust traditional computer vision-based approach is to find visual features near edges using algorithms like Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB) and then compare the number of similar features that are common between the two photos. This helps you go from a generic image-level understanding to a relatively robust object-level understanding. Although this is great for images with rigid objects that have less variation like the printed sides of a box of cereal, which almost always look the same, it's less helpful for comparing deformable objects like humans and animals, which can exhibit different poses. As an example, you can see the features being displayed on the camera-based product search experience on the Amazon app. The app displays these features in the form of blue dots (Figure 4-2). When it sees a sufficient number of features, it sends them to the Amazon servers to retrieve product information.\n\n###### Figure 4-2. Product scanner in Amazon app with visual features highlighted\n\nGoing deeper, another approach is to find the category (e.g., sofa) of an image using deep learning and then find other images within the same category. This is equivalent to extracting metadata from an image so that it can then be indexed and used in a typical text query-based search. This can be easily scaled by using the metadata in open source search engines like ElasticSearch. Many ecommerce sites show recommendations based on tags extracted from an image while performing a query-based search internally. As you would expect, by extracting the tags, we lose certain information like color, pose, relationships between objects in the scene, and so on. Additionally, a major disadvantage of this approach is that it requires enormous volumes of labeled data to train the classifier for extracting these labels on new images. And every time a new category needs to be added, the model needs to be retrained.\n\nBecause our aim is to search among millions of images, what we ideally need is a way to summarize the information contained in the millions of pixels in an image into a smaller representation (of say a few thousand dimensions), and have this summarized representation be close together for similar objects and further away for dissimilar items.\n\nLuckily, deep neural networks come to the rescue. As we saw in Chapter 2 and Chapter 3, the CNNs take an image input and convert it into feature vectors of a thousand dimensions, which then act as input to a classifier that outputs the top identities to which the image might belong (say dog or cat). The _feature vectors_ (also called _embeddings_ or _bottleneck features_ ) are essentially a collection of a few thousand floating-point values. Going through the convolution and pooling layers in a CNN is basically an act of reduction, to filter the information contained in the image to its most important and salient constituents, which in turn form the bottleneck features. Training the CNN molds these values in such a way that items belonging to the same class have small Euclidean distance between them (or simply the square root of the sum of squares of the difference between corresponding values) and items from different classes are separated by larger distances. 
This is an important property that helps solve many problems where a classifier can't be used, especially unsupervised problems, because of a lack of adequate labeled data.

###### Tip

An ideal way to find similar images would be to use _transfer learning_. For example, pass the images through a pretrained convolutional neural network like ResNet-50, extract the features, and then use a metric such as the Euclidean distance to measure how far apart they are.

Enough talk, let's code!

# Feature Extraction

An image is worth a thousand ~~words~~ features.

In this section, we play with and understand the concepts of feature extraction, primarily with the Caltech 101 dataset (131 MB, approximately 9,000 images), and then eventually with Caltech 256 (1.2 GB, approximately 30,000 images). Caltech 101, as the name suggests, consists of roughly 9,000 images in 101 categories, with about 40 to 800 images per category. It's important to note that there is a 102nd category called "BACKGROUND_Google" consisting of random images not contained in the first 101 categories, which needs to be deleted before we begin experimenting. Remember that all of the code we are writing is also available in the GitHub repository.

Let's download the dataset:

    $ wget http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz
    $ tar -xvf 101_ObjectCategories.tar.gz
    $ mv 101_ObjectCategories caltech101
    $ rm -rf caltech101/BACKGROUND_Google

Now, import all of the necessary modules:

    import numpy as np
    from numpy.linalg import norm
    import pickle
    from tqdm import tqdm, tqdm_notebook
    import os
    import time
    from tensorflow.keras.preprocessing import image
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

Load the ResNet-50 model without the top classification layers, so we get only the _bottleneck features_. Then define a function that takes an image path, loads the image, resizes it to the proper dimensions supported by ResNet-50, extracts the features, and then normalizes them:

    model = ResNet50(weights='imagenet', include_top=False,
                     input_shape=(224, 224, 3),
                     pooling='max') _# max pooling condenses the output to a single 2,048-dimension vector_
    def extract_features(img_path, model):
        input_shape = (224, 224, 3)
        img = image.load_img(img_path, target_size=(
            input_shape[0], input_shape[1]))
        img_array = image.img_to_array(img)
        expanded_img_array = np.expand_dims(img_array, axis=0)
        preprocessed_img = preprocess_input(expanded_img_array)
        features = model.predict(preprocessed_img)
        flattened_features = features.flatten()
        normalized_features = flattened_features / norm(flattened_features)
        return normalized_features

###### Tip

The function defined in the previous example is the key function that we use for almost every feature extraction need in Keras.

That's it! Let's see the feature-length that the model generates:

    features = extract_features('../../sample_images/cat.jpg', model)
    print(len(features))

    > 2048

The ResNet-50 model generated 2,048 features from the provided image. Each feature is a floating-point value between 0 and 1.

###### Tip

If your model is trained or fine tuned on a dataset that is not similar to ImageNet, redefine the `preprocess_input(img)` step accordingly. The mean values used in the function are particular to the ImageNet dataset. Each model in Keras has its own preprocessing function, so make sure you are using the right one.

Now it's time to extract features for the entire dataset.
First, we get all the filenames with this handy function, which recursively looks for all the image files (defined by their extensions) under a directory:

    extensions = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG']
    def get_file_list(root_dir):
        file_list = []
        for root, directories, filenames in os.walk(root_dir):
            for filename in filenames:
                if any(filename.endswith(ext) for ext in extensions):
                    file_list.append(os.path.join(root, filename))
        return file_list

Then, we provide the path to our dataset and call the function:

    _# path to the datasets_
    root_dir = '../../datasets/caltech101'
    filenames = sorted(get_file_list(root_dir))

We now define a variable that will store all of the features, go through all filenames in the dataset, extract their features, and append them to the previously defined variable:

    feature_list = []
    for i in tqdm_notebook(range(len(filenames))):
        feature_list.append(extract_features(filenames[i], model))

On a CPU, this should take under an hour. On a GPU, only a few minutes.

###### Tip

To get a better sense of time, use the super handy tool `tqdm`, which shows a progress meter (Figure 4-3) along with the speed per iteration as well as the time that has passed and the expected finishing time. In Python, wrap an iterable with `tqdm`; for example, `tqdm(range(10))`. Its Jupyter Notebook variant is `tqdm_notebook`.

###### Figure 4-3. Progress bar shown with `tqdm_notebook`

Finally, write these features to a pickle file so that we can use them in the future without having to recalculate them:

    pickle.dump(feature_list, open('data/features-caltech101-resnet.pickle', 'wb'))
    pickle.dump(filenames, open('data/filenames-caltech101.pickle', 'wb'))

That's all folks! We're done with the feature extraction part.

# Similarity Search

Given a photograph, our aim is to find another photo in our dataset similar to the current one. We begin by loading the precomputed features:

    filenames = pickle.load(open('data/filenames-caltech101.pickle', 'rb'))
    feature_list = pickle.load(open('data/features-caltech101-resnet.pickle', 'rb'))

We'll use Python's machine learning library `scikit-learn` for finding _nearest neighbors_ of the query features; that is, features that represent a query image. We train a nearest-neighbor model using the brute-force algorithm to find the nearest five neighbors based on Euclidean distance (to install `scikit-learn` on your system, use `pip3 install scikit-learn`):

    from sklearn.neighbors import NearestNeighbors
    neighbors = NearestNeighbors(n_neighbors=5, algorithm='brute',
                                 metric='euclidean').fit(feature_list)
    distances, indices = neighbors.kneighbors([feature_list[0]])

Now you have both the indices and distances of the nearest five neighbors of the very first query feature (which represents the first image). Notice the quick execution of the first step—the training step. Unlike training most machine learning models, which can take from several minutes to hours on large datasets, instantiating the nearest-neighbor model is instantaneous because at training time there isn't much processing. This is also called _lazy learning_ because all the processing is deferred to classification or inference time.

Now that we know the indices, let's see the actual image behind that feature.
First, we pick an image to query, located at, say, index = 0:

    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg
    _# Show the plots as cells within the Jupyter Notebook_
    %matplotlib inline
    plt.imshow(mpimg.imread(filenames[0]))

Figure 4-4 shows the result.

###### Figure 4-4. The query image from the Caltech-101 dataset

Now, let's examine the nearest neighbors by plotting the first result.

    plt.imshow(mpimg.imread(filenames[indices[0][0]]))

Figure 4-5 shows that result.

###### Figure 4-5. The nearest neighbor to our query image

Wait, isn't that a duplicate? Actually, the nearest index will be the image itself because that is what is being queried:

    for i in range(5):
        print(distances[0][i])

    0.0
    0.8285478
    0.849847
    0.8529018

This is also confirmed by the fact that the distance of the first result is zero. Now let's plot the real first nearest neighbor:

    plt.imshow(mpimg.imread(filenames[indices[0][1]]))

Take a look at the result this time in Figure 4-6.

###### Figure 4-6. The second nearest neighbor of the queried image

This definitely looks like a similar image. It captured a similar concept, has the same image category (faces), same gender, and similar background with pillars and vegetation. In fact, it's the same person!

We would probably use this functionality regularly, so we have already built a helper function, `plot_images()`, that visualizes several query images with their nearest neighbors. Now let's call this function to visualize the nearest neighbors of six random images. Also, note that every time you run the following piece of code, the displayed images will be different (Figure 4-7) because the queried images are chosen by a random index.

    import random

    num_images = len(filenames)
    for i in range(6):
        random_image_index = random.randint(0, num_images - 1)
        distances, indices = neighbors.kneighbors([feature_list[random_image_index]])
        _# don't take the first closest image as it will be the same image_
        similar_image_paths = [filenames[random_image_index]] + \
            [filenames[indices[0][j]] for j in range(1, 4)]
        plot_images(similar_image_paths, distances[0])

###### Figure 4-7. Nearest neighbor for different images returns similar-looking images

# Visualizing Image Clusters with t-SNE

Let's step up the game by visualizing the entire dataset!

To do this, we need to reduce the dimensions of the feature vectors because it's not possible to plot a 2,048-dimension vector (the feature-length) in two dimensions (the paper). The t-distributed stochastic neighbor embedding (t-SNE) algorithm reduces the high-dimensional feature vector to 2D, providing a bird's-eye view of the dataset, which is helpful in recognizing clusters and nearby images.
t-SNE is difficult to scale to large datasets, so it is a good idea to reduce the dimensionality using Principal Component Analysis (PCA) and then call t-SNE:

    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    _# Perform PCA over the features_
    num_feature_dimensions = 100 _# Set the number of features_
    pca = PCA(n_components=num_feature_dimensions)
    pca.fit(feature_list)
    feature_list_compressed = pca.transform(feature_list)

    _# For speed and clarity, we'll analyze about the first half of the dataset;_
    _# class_ids holds the numeric label of each image (see the book's GitHub repository)_
    selected_features = feature_list_compressed[:4000]
    selected_class_ids = class_ids[:4000]
    selected_filenames = filenames[:4000]

    tsne_results = TSNE(n_components=2, verbose=1,
                        metric='euclidean').fit_transform(selected_features)

    _# Plot a scatter plot from the generated t-SNE results_
    colormap = plt.cm.get_cmap('coolwarm')
    scatter_plot = plt.scatter(tsne_results[:, 0], tsne_results[:, 1],
                               c=selected_class_ids, cmap=colormap)
    plt.colorbar(scatter_plot)
    plt.show()

We discuss PCA in more detail in later sections. To scale to larger datasets, use Uniform Manifold Approximation and Projection (UMAP).

Figure 4-8 shows clusters of similar classes, and how they are spread close to one another.

###### Figure 4-8. t-SNE visualizing clusters of image features, where each cluster represents one object class in the same color

Each color in Figure 4-8 indicates a different class. To make it even clearer, we can use another helper function, `plot_images_in_2d()`, to plot the images in these clusters, as demonstrated in Figure 4-9.

###### Figure 4-9. t-SNE visualization showing image clusters; similar images are in the same cluster

Neat! There is a clearly demarcated cluster of human faces, flowers, vintage cars, ships, bikes, and a somewhat spread-out cluster of land and marine animals. There are lots of images on top of one another, which makes Figure 4-9 a tad bit confusing, so let's try to plot the t-SNE results as clear tiles with the helper function `tsne_to_grid_plotter_manual()`, the results of which you can see in Figure 4-10.

    tsne_to_grid_plotter_manual(tsne_results[:, 0], tsne_results[:, 1],
                                selected_filenames)

###### Figure 4-10. t-SNE visualization with tiled images; similar images are close together

This is definitely much clearer. We can see similar images colocated within the clusters of human faces, chairs, bikes, airplanes, ships, laptops, animals, watches, flowers, tilted minarets, vintage cars, anchor signs, and cameras, all close to their own kind. Birds of a feather indeed do flock together!

###### Tip

2D clusters are great, but visualizing them in 3D would look stellar. It would be even better if they could be rotated, zoomed in and out, and manipulated using the mouse without any coding. And bonus points if the data could be searched interactively, revealing its neighbors. The TensorFlow Embedding projector does all this and more in a browser-based GUI tool. The preloaded embeddings from image and text datasets are helpful in getting a better intuition of the power of embeddings. And, as Figure 4-11 shows, it's reassuring to see deep learning figure out that John Lennon, Led Zeppelin, and Eric Clapton happen to be used in a similar context to the Beatles in the English language.

###### Figure 4-11.
TensorFlow Embedding projector showing a 3D representation of 10,000 common English words and highlighting words related to \"Beatles\"\n\n# Improving the Speed of Similarity Search\n\nThere are several opportunities to improve the speed of the similarity search step. For similarity search, we can make use of two strategies: either reduce the feature-length, or use a better algorithm to search among the features. Let's examine each of these strategies individually.\n\n## Length of Feature Vectors\n\nIdeally, we would expect that the smaller the amount of data in which to search, the faster the search should be. Recall that the ResNet-50 model gives 2,048 features. With each feature being a 32-bit floating-point, each image is represented by an 8 KB feature vector. For a million images, that equates to nearly 8 GB. Imagine how slow it would be to search among 8 GB worth of features. To give us a better picture of our scenario, Table 4-1 gives the feature-lengths that we get from different models.\n\nTable 4-1. Top 1% accuracy and feature-lengths for different CNN models **Model** | **Bottleneck feature-length** | **Top-1% accuracy on ImageNet** \n---|---|--- \nVGG16 | 512 | 71.5% \nVGG19 | 512 | 72.7% \nMobileNet | 1024 | 66.5% \nInceptionV3 | 2048 | 78.8% \nResNet-50 | 2048 | 75.9% \nXception | 2048 | 79.0%\n\n###### Note\n\nUnder the hood, many models available in `tf.keras.applications` yield several thousand features. For example, InceptionV3 yields features in the shape of 1 x 5 x 5 x 2048, which translates to 2,048 feature maps of 5 x 5 convolutions, resulting in a total of 51,200 features. Hence, it becomes essential to reduce this large vector by using an average or max-pooling layer. The pooling layer will condense each convolution (e.g., 5 x 5 layer) into a single value. This can be defined during model instantiation as follows:\n\n model = InceptionV3(weights='imagenet', include_top=False,\n input_shape = (224,224,3), pooling='max')\n\nFor models that yield a large number of features, you will usually find that all code examples make use of this pooling option. Table 4-2 shows the before and after effect of max pooling on the number of features in different models.\n\nTable 4-2. Number of features before and after pooling for different models **Model** | **# features before pooling** | **# features after pooling** \n---|---|--- \nResNet-50 | [1,1,1,2048] = 2048 | 2048 \nInceptionV3 | [1,5,5,2048] = 51200 | 2048 \nMobileNet | [1,7,7,1024] = 50176 | 1024\n\nAs we can see, almost all the models generate a large number of features. Imagine how much faster the search would be if we could reduce to a mere 100 features (a whopping reduction of 10 to 20 times!) without compromising the quality of the results. Apart from just the size, this is an even bigger improvement for big data scenarios, for which the data can be loaded into RAM all at once instead of periodically loading parts of it, thus giving an even bigger speedup. PCA will help us make this happen.\n\n## Reducing Feature-Length with PCA\n\nPCA is a statistical procedure that questions whether features representing the data are equally important. Are some of the features redundant enough that we can get similar classification results even after removing those features? PCA is considered one of the go-to techniques for dimensionality reduction. Note that it does not eliminate redundant features; rather, it generates a new set of features that are a linear combination of the input features. 
These linear features are orthogonal to one another, which is why all the redundant features are absent. These features are known as _principal components_.

Performing PCA is pretty simple. Using the `scikit-learn` library, execute the following:

    from sklearn.decomposition import PCA

    num_feature_dimensions = 100
    pca = PCA(n_components=num_feature_dimensions)
    pca.fit(feature_list)
    feature_list_compressed = pca.transform(feature_list)

PCA can also tell us the relative importance of each feature. The very first dimension has the most variance, and the variance keeps decreasing as we go on:

    # Explain the importance of the first 20 features
    print(pca.explained_variance_ratio_[0:20])

    [ 0.07320023 0.05273142 0.04310822 0.03494248 0.02166119 0.0205037
    0.01974325 0.01739547 0.01611573 0.01548918 0.01450421 0.01311005
    0.01200541 0.0113084 0.01103872 0.00990405 0.00973481 0.00929487
    0.00915592 0.0089256 ]

Hmm, why did we pick 100 dimensions from the original 2,048? Why not 200? PCA represents our original feature vector in reduced dimensions. Each new dimension has diminishing returns in representing the original vector (i.e., the new dimension might not explain the data much) and takes up valuable space. We can balance how well the original data is explained against how much we want to reduce it. Let's visualize the importance of, say, the first 200 dimensions.

    pca = PCA(200)
    pca.fit(feature_list)
    plt.style.use('seaborn')
    plt.plot(range(1, 201), pca.explained_variance_ratio_, 'o--', markersize=4)
    plt.title('Variance for each PCA dimension')
    plt.xlabel('PCA Dimensions')
    plt.ylabel('Variance')
    plt.grid(True)
    plt.show()

Figure 4-12 presents the results.

###### Figure 4-12. Variance for each PCA dimension

The individual variance will tell us how important the newly added features are. For example, after the first 100 dimensions, the additional dimensions don't add much variance (almost equal to 0) and can be neglected. Without even checking the accuracy, it is safe to assume that PCA with 100 dimensions will be a robust model. Another way to look at this is to visualize how much of the original data is explained by the limited number of features by finding the cumulative variance (see Figure 4-13).

    plt.plot(range(1, 201), pca.explained_variance_ratio_.cumsum(), 'o--', markersize=4)
    plt.title('Cumulative Variance with each PCA dimension')
    plt.xlabel('PCA Dimensions')
    plt.ylabel('Variance')
    plt.grid(True)
    plt.show()

###### Figure 4-13. Cumulative variance with each PCA dimension

As expected, going from 100 to 200 dimensions adds only about 0.1 to the cumulative variance, and the curve begins to gradually plateau. For reference, using the full 2,048 features would result in a cumulative variance of 1.

The number of dimensions in PCA is an important parameter that we can tune to the problem at hand.
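If all we care about is preserving a chosen fraction of the variance, scikit-learn can pick the number of components for us when we pass a float to `n_components`. The following is a minimal sketch; the 0.95 threshold is an arbitrary choice for illustration:

    from sklearn.decomposition import PCA

    # Keep as many principal components as needed to explain 95% of the variance
    pca = PCA(n_components=0.95, svd_solver='full')
    pca.fit(feature_list)
    print(pca.n_components_)  # number of dimensions chosen automatically

Variance alone, however, says nothing about how the reduced features behave in the actual similarity search, so it's worth measuring accuracy and query time directly.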
One way to directly justify a good threshold is to find a good balance between the number of features and its effect on accuracy versus speed:\n\n pca_dimensions = [1,2,3,4,5,10,20,50,75,100,150,200]\n pca_accuracy = []\n pca_time = []\n\n for dimensions in pca_dimensions:\n _# Perform PCA_\n pca = PCA(n_components = dimensions)\n pca.fit(feature_list)\n feature_list_compressed = pca.transform(feature_list[:])\n _# Calculate accuracy over the compressed features_\n accuracy, time_taken = accuracy_calculator(feature_list_compressed[:])\n pca_time.append(time_taken)\n pca_accuracy.append(accuracy)\n print(\"For PCA Dimensions = \", dimensions, \",\\tAccuracy = \",accuracy,\"%\",\n \",\\tTime = \", pca_time[-1])\n\nWe visualize these results using the graph in Figure 4-14 and see that after a certain number of dimensions an increase in dimensions does not lead to higher accuracy:\n\n plt.plot(pca_time, pca_accuracy,'o--', markersize=4)\n for label, x, y in zip(pca_dimensions, pca_time,pca_accuracy):\n plt.annotate(label, xy=(x, y), ha='right', va='bottom')\n plt.title ('Test Time vs Accuracy for each PCA dimension')\n plt.xlabel('Test Time')\n plt.ylabel('Accuracy')\n plt.grid(True)\n plt.show()\n\n###### Figure 4-14. Test time versus accuracy for each PCA dimension\n\nAs is visible in the graph, there is little improvement in accuracy after increasing beyond a feature-length of 100 dimensions. With almost 20 times fewer dimensions (100) than the original (2,048), this offers drastically higher speed and less time on almost any search algorithm, while achieving similar (and sometimes slightly better) accuracy. Hence, 100 would be an ideal feature-length for this dataset. This also means that the first 100 dimensions contain the most information about the dataset.\n\nThere are a number of benefits to using this reduced representation, like efficient use of computational resources, noise removal, better generalization due to fewer dimensions, and improved performance for machine learning algorithms learning on this data. By reducing the distance calculation to the most important features, we can also improve the result accuracy slightly. This is because previously all the 2,048 features were contributing equally in the distance calculation, whereas now, only the most important 100 features get their say. But, more importantly, it saves us from the _curse of dimensionality_. It's observed that as the number of dimensions increases, the ratio of the Euclidean distance between the two closest points and the two furthest points tends to become 1. In very high-dimensional space, the majority of points from a real-world dataset seem to be a similar distance away from one another, and the Euclidean distance metric begins to fail in discerning similar versus dissimilar items. PCA helps bring sanity back.\n\nYou can also experiment with different distances like Minkowski distance, Manhattan distance, Jaccardian distance, and weighted Euclidean distance (where the weight is the contribution of each feature as explained in `pca.explained_variance_ratio_`).\n\nNow, let's turn our minds toward using this reduced set of features to make our search even faster.\n\n# Scaling Similarity Search with Approximate Nearest Neighbors\n\nWhat do we want? Nearest neighbors. What is our baseline? Brute-force search. Although convenient to implement in two lines, it goes over each element and hence scales linearly with data size (number of items as well as the number of dimensions). 
Having PCA take our feature vector from a length of 2,048 to 100 will not only yield a 20-times reduction in data size, but also result in an increase in speed of 20 times when using brute force. PCA does pay off!\n\nLet's assume similarity searching a small collection of 10,000 images, now represented with 100 feature-length vectors, takes approximately 1 ms. Even though this looks fast for 10,000 items, in a real production system with larger data, perhaps 10 million items, this will take more than a second to search. Our system might not be able to fulfill more than one query per second per CPU core. If you receive 100 requests per second from users, even running on multiple CPU cores of the machine (and loading the search index per thread), you would need multiple machines to be able to serve the traffic. In other words, an inefficient algorithm means money, lots of money, spent on hardware.\n\nBrute force is our baseline for every comparison. As in most algorithmic approaches, brute force is the slowest approach. Now that we have our baseline set, we will explore approximate nearest-neighbor algorithms. Instead of guaranteeing the correct result as with the brute-force approach, approximation algorithms _generally_ get the correct result because they are...well, approximations. Most of the algorithms offer some form of tuning to balance between correctness and speed. It is possible to evaluate the quality of the results by comparing against the results of the brute-force baseline.\n\n## Approximate Nearest-Neighbor Benchmark\n\nThere are several approximate nearest-neighbor (ANN) libraries out there, including well-known ones like Spotify's Annoy, FLANN, Facebook's Faiss, Yahoo's NGT, and NMSLIB. Benchmarking each of them would be a tedious task (assuming you get past installing some of them). Luckily, the good folks at _ann-benchmarks.com_ (Martin Aumueller, Erik Bernhardsson, and Alec Faitfull) have done the legwork for us in the form of reproducible benchmarks on 19 libraries on large public datasets. We'll pick the comparisons on a dataset of feature embeddings representing words (instead of images) called GloVe. This 350 MB dataset consists of 400,000 feature vectors representing words in 100 dimensions. Figure 4-15 showcases their raw performance when tuned for correctness. Performance is measured in the library's ability to respond to queries each second. Recall that a measure of correctness is the fraction of top- _n_ closest items returned with respect to the real top- _n_ closest items. This ground truth is measured by brute-force search.\n\n###### Figure 4-15. Comparison of ANN libraries (data from ann-benchmarks.com)\n\nThe strongest performers on this dataset return close to several thousand queries per second at the acceptable 0.8 recall. To put this in perspective, our brute-force search performs under 1 query per second. At the fastest, some of these libraries (like NGT) can return north of 15,000 results per second (albeit at a low recall, making it impractical for usage).\n\n## Which Library Should I Use?\n\nIt goes without saying that the library you use will end up depending heavily on your scenario. Each library presents a trade-off between search speed, accuracy, size of index, memory consumption, hardware use (CPU\/GPU), and ease of setup. Table 4-3 presents a synopsis of different scenarios and recommendations as to which library might be work best for each scenario.\n\nTable 4-3. 
ANN library recommendations **Scenario** | **Recommendation** \n---|--- \nI want to experiment quickly in Python without too much setup but I also care about fast speed. | Use Annoy or NMSLIB \nI have a large dataset (up to 10 million entries or several thousand dimensions) and care most about speed. | Use NGT \nI have a ridiculously large dataset (100 million-plus entries) and have a cluster of GPUs, too. | Use Faiss \nI want to set a ground-truth baseline with 100% correctness. Then immediately move to a faster library, impress my boss with the orders of magnitude speedup, and get a bonus. | Use the brute-force approach\n\nWe offer much more detailed code examples for several libraries on the book's GitHub website (see _http:\/\/PracticalDeepLearning.ai_), but for our purposes here, we'll showcase our go-to library, Annoy, in detail and compare it with brute-force search on a synthetic dataset. Additionally, we briefly touch on Faiss and NGT.\n\n## Creating a Synthetic Dataset\n\nTo make an apples-to-apples comparison between different libraries, we first create a million-item dataset composed of random floating-point values drawn from a standard normal distribution (mean 0, variance 1), which we then normalize to unit length. Additionally, we pick a random feature vector as our query to find the nearest neighbors:\n\n import random\n import numpy as np\n\n num_items = 1000000\n num_dimensions = 100\n dataset = np.random.randn(num_items, num_dimensions)\n dataset \/= np.linalg.norm(dataset, axis=1).reshape(-1, 1)\n\n # randint is inclusive on both ends, so subtract 1 to stay within bounds\n random_index = random.randint(0, num_items - 1)\n query = dataset[random_index]\n\n## Brute Force\n\nFirst, we calculate the time for searching with the brute-force algorithm. It goes through the entire dataset serially, calculating the distance between the query and the current item one at a time. We use the `timeit` command to measure the time. First, we create the search index to retrieve the five nearest neighbors, and then we search with a query:\n\n from sklearn.neighbors import NearestNeighbors\n\n neighbors = NearestNeighbors(n_neighbors=5, algorithm='brute',\n metric='euclidean').fit(dataset)\n %timeit distances, indices = neighbors.kneighbors([query])\n\n > 177 ms \u00b1 136 \u03bcs per loop (mean \u00b1 std. dev. of 7 runs, 1000 loops each)\n\n###### Tip\n\nThe `timeit` command is a handy tool. To benchmark the time of a single operation, prefix it with this command. Compared to the `time` command, which runs a statement once, `timeit` runs the subsequent line multiple times to give more precise aggregated statistics along with the standard deviation. By default, it turns off garbage collection, making independent timings more comparable. That said, this might not reflect timings in real production loads where garbage collection is turned on.\n\n## Annoy\n\nAnnoy (Approximate Nearest Neighbors Oh Yeah) is a C++ library with Python bindings for searching nearest neighbors. Synonymous with speed, it was released by Spotify and is used in production to serve its music recommendations. In contrast to its name, it's actually fun and easy to use.\n\nTo use Annoy, we install it using `pip`:\n\n $ pip install annoy\n\nIt's fairly straightforward to use. 
First, we build a search index with two hyperparameters: the number of dimensions of the dataset and the number of trees:\n\n from annoy import AnnoyIndex\n\n # Length of the item vectors that will be indexed; newer versions of Annoy\n # also expect an explicit metric, e.g., AnnoyIndex(num_dimensions, 'angular')\n annoy_index = AnnoyIndex(num_dimensions)\n for i in range(num_items):\n     annoy_index.add_item(i, dataset[i])\n annoy_index.build(40) # 40 trees\n\nNow let's find out the time it takes to search for the five nearest neighbors of one image:\n\n %timeit indexes = annoy_index.get_nns_by_vector(query, 5, include_distances=True)\n\n > 34.9 \u03bcs \u00b1 165 ns per loop (mean \u00b1 std. dev. of 7 runs, 10000 loops each)\n\nNow that is blazing fast! To put this in perspective, even for our million-item dataset, this can serve almost 28,000 requests per second on a single CPU core. Considering most CPUs have multiple cores, it should be able to handle more than 100,000 requests per second on a single system. The best part is that it lets you share the same index in memory between multiple processes. Thus, the index can be as big as your overall RAM, making it possible to serve multiple requests on a single system.\n\nOther benefits include that it generates a modestly sized index. Moreover, it decouples creating indexes from loading them, so you can create an index on one machine, pass it around, and then load it into memory on your serving machine and serve from there.\n\n###### Tip\n\nWondering how many trees to use? More trees give higher precision but larger indexes. Usually, no more than 50 trees are required to attain the highest precision.\n\n## NGT\n\nYahoo Japan's Neighborhood Graph and Tree (NGT) library currently leads most benchmarks and is best suited for large datasets (in millions of items) with large dimensions (in several thousands). Although the library has existed since 2016, its real entry onto the industry benchmark scene happened in 2018 with the implementation of the ONNG algorithm (short for Optimization of indexing based on _k_ -Nearest Neighbor Graph for proximity). Considering that multiple threads might be running NGT on a server, it can place the index in shared memory with the help of memory-mapped files, helping to reduce memory usage as well as speed up load time.\n\n## Faiss\n\nFaiss is Facebook's efficient similarity search library. It can scale to billions of vectors in RAM on a single server by storing a compressed representation of the vectors (compact quantization codes) instead of the original values. It's especially suited for dense vectors. It shines particularly on machines with GPUs by storing the index in GPU memory (VRAM). This works on both single-GPU and multi-GPU setups. It provides the ability to configure performance based on search time, accuracy, memory usage, and indexing time. It's one of the fastest known implementations of ANN search on GPU. Hey, if it's good enough for Facebook, it's good enough for most of us (as long as we have enough data).\n\nWhile showing the entire process is beyond the scope of this book, we recommend installing Faiss using Anaconda or using its Docker containers to quickly get started.\n\n# Improving Accuracy with Fine Tuning\n\nMany of the pretrained models were trained on the ImageNet dataset. Therefore, they provide an incredible starting point for similarity computations in most situations. 
That said, if you tuned these models to adapt to your specific problem, they would perform even more accurately at finding similar images.\n\nIn this portion of the chapter, we identify the worst-performing categories, visualize them with t-SNE, fine tune, and then see how their t-SNE graph changes.\n\nWhat is a good metric to check whether you are indeed getting similar images?\n\nPainful option 1\n\nGo through the entire dataset one image at a time, and manually score whether the returned images indeed look similar.\n\nHappier option 2\n\nSimply calculate accuracy. That is, for an image belonging to category _X_ , are the similar images belonging to the same category? We will refer to this similarity accuracy.\n\nSo, what are our worst-performing categories? And why are they the worst? To answer this, we have predefined a helper function `worst_classes`. For every image in the dataset, it finds the nearest neighbors using the brute-force algorithm and then returns six classes with the least accuracy. To see the effects of fine tuning, we run our analysis on a more difficult dataset: Caltech-256. Calling this function unveils the least-accurate classes:\n\n names_of_worst_classes_before_finetuning, accuracy_per_class_before_finetuning =\n worst_classes(feature_list[:])\n\n Accuracy is 56.54\n Top 6 incorrect classifications\n 059.drinking-straw Accuracy: 11.76%\n 135.mailbox Accuracy: 16.03%\n 108.hot-dog Accuracy: 16.72%\n 163.playing-card Accuracy: 17.29%\n 195.soda-can Accuracy: 19.68%\n 125.knife Accuracy: 20.53%\n\nTo see why they are performing so poorly on certain classes, we've plotted a t-SNE graph to visualize the embeddings in 2D space, which you can see in Figure 4-16. To prevent overcrowding on our plot, we use only 50 items from each of the 6 classes.\n\n###### Tip\n\nTo enhance the visibility of the graph we can define different markers and different colors for each class. Matplotlib provides a wide variety of markers and colors.\n\n markers = [ \"^\", \".\",\"s\", \"o\",\"x\", \"P\" ]\n colors = ['red', 'blue', 'fuchsia', 'green', \n 'purple', 'orange']\n\n###### Figure 4-16. t-SNE visualization of feature vectors of least-accurate classes before fine tuning\n\nAah, these feature vectors are all over the place and on top of one another. Using these feature vectors in other applications such as classification might not be a good idea because it would be difficult to find a clean plane of separation between them. No wonder they performed so poorly in this nearest neighbor\u2013based classification test.\n\nWhat do you think will be the result if we repeat these steps with the fine-tuned model? We reckon something interesting; let's take a look at Figure 4-17 to see.\n\n###### Figure 4-17. t-SNE visualization of feature vectors of least-accurate classes after fine tuning\n\nThis is so much cleaner. With just a little bit of fine tuning as shown in Chapter 3, the embeddings begin to group together. Compare the noisy\/scattered embeddings of the pretrained models against those of the fine-tuned model. A machine learning classifier would be able to find a plane of separation between these classes with much more ease, hence yielding better classification accuracy as well as more similar images when not using a classifier. And, remember, these were the classes with the highest misclassifications; imagine how nicely the classes with originally higher accuracy would be after fine tuning.\n\nPreviously, the pretrained embeddings achieved 56% accuracy. 
The new embeddings after fine tuning deliver a whopping 87% accuracy! A little magic goes a long way.\n\nThe one limitation of fine tuning is the requirement of labeled data, which is not always available. So, depending on your use case, you might need to label some amount of data.\n\nThere's a small unconventional training trick involved, though, which we discuss in the next section.\n\n## Fine Tuning Without Fully Connected Layers\n\nAs we already know, a neural network comprises three parts:\n\n * Convolutional layers, which end up generating the feature vectors\n\n * Fully connected layers\n\n * The final classifier layer\n\nFine tuning, as the name suggests, involves tweaking a neural network lightly to adapt to a new dataset. It usually involves stripping off the fully connected layers (top layers), substituting them with new ones, and then training this newly composed neural network on the dataset. Training in this manner will cause two things:\n\n * The weights in all the newly added fully connected layers will be significantly affected.\n\n * The weights in the convolutional layers will be only slightly changed.\n\nBecause the fully connected layers do a lot of the heavy lifting to get maximum classification accuracy, the majority of the network that generates the feature vectors changes insignificantly. Thus, the feature vectors, despite fine tuning, will show little change.\n\nOur aim is for similar-looking objects to have closer feature vectors, which fine tuning as described earlier fails to accomplish. By forcing all of the task-specific learning to happen in the convolutional layers, we can see much better results. How do we achieve that? _By removing all of the fully connected layers and placing a classifier layer directly after the convolutional layers (which generate the feature vectors)._ This model is optimized for similarity search rather than classification.\n\nTo compare the process of fine tuning a model optimized for classification tasks as opposed to similarity search, let's recall how we fine tuned our model in Chapter 3 for classification:\n\n from tensorflow.keras.applications.resnet50 import ResNet50\n from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense, Dropout\n from tensorflow.keras.models import Model\n\n model = ResNet50(weights='imagenet', include_top=False,\n input_shape=(224, 224, 3))\n input = Input(shape=(224, 224, 3))\n x = model(input)\n x = GlobalAveragePooling2D()(x)\n x = Dense(64, activation='relu')(x)\n x = Dropout(0.5)(x)\n x = Dense(NUM_CLASSES, activation='softmax')(x)\n model_classification_optimized = Model(inputs=input, outputs=x)\n\nAnd here's how we fine tune our model for similarity search. Note the missing hidden dense layer in the middle:\n\n from tensorflow.keras.applications.resnet50 import ResNet50\n from tensorflow.keras.layers import Input, GlobalAveragePooling2D, Dense\n from tensorflow.keras.models import Model\n\n model = ResNet50(weights='imagenet', include_top=False,\n input_shape=(224, 224, 3))\n input = Input(shape=(224, 224, 3))\n x = model(input)\n x = GlobalAveragePooling2D()(x)\n # No hidden dense or dropout layers\n x = Dense(NUM_CLASSES, activation='softmax')(x)\n model_similarity_optimized = Model(inputs=input, outputs=x)\n\nAfter fine tuning, to use `model_similarity_optimized` for extracting features instead of class probabilities, simply `pop` (i.e., remove) the last layer:\n\n # After the pop, layers[-1] is the GlobalAveragePooling2D layer,\n # whose output is the embedding we want\n model_similarity_optimized.layers.pop()\n model = Model(model_similarity_optimized.input,\n model_similarity_optimized.layers[-1].output)\n\nThe key thing to appreciate here is that the regular fine-tuning process would give us lower similarity accuracy than `model_similarity_optimized`; a quick sanity check of the extracted embeddings is sketched below. 
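\n\nAs that sanity check, the truncated model should now return a pooled convolutional feature vector rather than class probabilities. A minimal sketch, assuming `img` is a single preprocessed image batched to shape (1, 224, 224, 3):\n\n # The truncated model outputs the pooled features of the last convolutional block\n embedding = model.predict(img)\n print(embedding.shape) # e.g., (1, 2048) for ResNet-50\n\n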
Obviously, we would want to use `model_classification_optimized` for classification scenarios and `model_similarity_optimized` for extracting embeddings for similarity search.\n\nWith all this knowledge, you can now make both a fast and accurate similarity system for any scenario you are working on. It's time to see how the giants in the AI industry build their products.\n\n# Siamese Networks for One-Shot Face Verification\n\nA face verification system is usually trying to ascertain\u2014given two images of faces\u2014whether the two images are of the same person. This is a high-precision binary classifier that needs to robustly work with different lighting, clothing, hairstyles, backgrounds, and facial expressions. To make things more challenging, although there might be images of many people in, for instance an employee database, there might be only a handful of images of the same person available. Similarly, signature identification in banks and product identification on Amazon suffer the same challenge of limited images per item.\n\nHow would you go about training such a classifier? Picking embeddings from a model like ResNet pretrained on ImageNet might not discern these fine facial attributes. One approach is to put each person as a separate class and then train like we usually train a regular network. Two key issues arise:\n\n * If we had a million individuals, training for a million categories is not feasible.\n\n * Training with a few images per class will lead to overtraining.\n\nAnother thought: instead of teaching different categories, we could teach a network to directly compare and decide whether a pair of images are similar or dissimilar by giving guidance on their similarity during training. And this is the key idea behind Siamese networks. Take a model, feed in two images, extract two embeddings, and then calculate the distance between the two embeddings. If the distance is under a threshold, consider them similar, else not. By feeding a pair of images with the associated label, similar or dissimilar, and training the network end to end, the embeddings begin to capture the fine-grained representation of the inputs. This approach, shown in Figure 4-18, of directly optimizing for the distance metric is called _metric learning_.\n\n###### Figure 4-18. A Siamese network for signature verification; note that the same CNN was used for both input images\n\nWe could extend this idea and even feed three images. Pick one anchor image, pick another positive sample (of the same category), and another negative sample (of a different category). Let's now train this network to directly optimize for the distance between similar items to be minimized and the distance between dissimilar items to be maximized. This loss function that helps us achieve this is called a _triplet loss_ function. In the previous case with a pair of images, the loss function is called a _contrastive loss_ function. The triplet loss function tends to give better results.\n\nAfter the network is trained, we need only one reference image of a face for deciding at test time whether the person is the same. This methodology opens the doors for _one-shot learning_. Other common uses include signature and logo recognition. 
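\n\nTo make these ideas concrete, here is a minimal sketch of the two loss functions just described, written with plain TensorFlow ops. The embeddings are assumed to be batches of shape (batch_size, embedding_dim), `label` is 1 for similar pairs and 0 for dissimilar ones, and `margin` is a hyperparameter:\n\n import tensorflow as tf\n\n def contrastive_loss(emb_a, emb_b, label, margin=1.0):\n     # Pull similar pairs together; push dissimilar pairs at least `margin` apart\n     label = tf.cast(label, tf.float32)\n     distance = tf.norm(emb_a - emb_b, axis=1)\n     similar_term = label * tf.square(distance)\n     dissimilar_term = (1.0 - label) * tf.square(tf.maximum(margin - distance, 0.0))\n     return tf.reduce_mean(similar_term + dissimilar_term)\n\n def triplet_loss(anchor, positive, negative, margin=0.2):\n     # Make the anchor closer to the positive than to the negative by at least `margin`\n     pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=1)\n     neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=1)\n     return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))\n\nIn practice, these losses are attached to a model in which the same CNN (with shared weights) embeds each of the two or three inputs, and the whole network is trained end to end.\n\n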
One remarkably creative application by Saket Maheshwary and Hemant Misra is to use a Siamese network for matching r\u00e9sum\u00e9s with job applicants by calculating the semantic similarity between the two.\n\n# Case Studies\n\nLet's look at a few interesting examples that show how what we have learned so far is applied in the industry.\n\n## Flickr\n\nFlickr is one of the largest photo-sharing websites, especially popular among professional photographers. To help photographers find inspiration as well as showcase content the users might find interesting, Flickr produced a similarity search feature based on the same semantic meaning. As demonstrated in Figure 4-19, exploring a desert pattern leads to several similarly patterned results. Under the hood, Flickr adopted an ANN algorithm called Locally Optimized Product Quantization (LOPQ), which has been open sourced in Python as well as Spark implementations.\n\n###### Figure 4-19. Similar patterns of a desert photo (image source)\n\n## Pinterest\n\nPinterest is an application used widely for its visual search capabilities, more specifically in its features called Similar Pins and Related Pins. Other companies like Baidu and Alibaba have launched similar visual search systems. Also, Zappos, Google Shopping, and like.com are using computer vision for recommendation.\n\nWithin Pinterest \"women's fashion\" is one of the most popular themes of pins and the Similar Looks feature (Figure 4-20) helps people discover similar products. Additionally, Pinterest also reports that its Related Pins feature increased its repin rate. Not every pin on Pinterest has associated metadata, which makes recommendation a difficult cold-start problem due to lack of context. Pinterest developers solved this cold-start problem by using the visual features for generating the related pins. Additionally, Pinterest implements an incremental fingerprinting service that generates new digital signatures if either a new image is uploaded or if there is feature evolution (due to improvements or modifications in the underlying models by the engineers).\n\n###### Figure 4-20. The Similar Looks feature of the Pinterest application (image source: Pinterest blog)\n\n## Celebrity Doppelgangers\n\nWebsite applications like _Celebslike.me_ , which went viral in 2015, look for the nearest neighbor among celebrities, as shown in Figure 4-21. A similar viral approach was taken by the Google Arts & Culture app in 2018, which shows the nearest existing portrait to your face. Twins or not is another application with a similar aim.\n\n###### Figure 4-21. Testing our friend Pete Warden's photo (technical lead for mobile and embedded TensorFlow at Google) on the celebslike.me website\n\n## Spotify\n\nSpotify uses nearest neighbors for recommending music and creating automatic playlists and radio stations based on the current set of songs being played. Usually, collaborative filtering techniques, which are employed for recommending content like movies on Netflix, are content agnostic; that is, the recommendation happens because large groups of users with similar tastes are watching similar movies or listening to similar songs. This presents a problem for new and not yet popular content because users will keep getting recommendations for existing popular content. This is also referred to as the aforementioned cold-start problem. The solution is to use the latent understanding of the content. 
Similar to images, we can create feature vectors out of music using MFCC features (Mel Frequency Cepstral Coefficients), which in turn generates a 2D spectrogram that can be thought of as an image and can be used to generate features. Songs are divided into three-second fragments, and their spectrograms are used to generate features. These features are then averaged together to represent the complete song. Figure 4-22 shows artists whose songs are projected in specific areas. We can discern hip-hop (upper left), rock (upper right), pop (lower left), and electronic music (lower right). As already discussed, Spotify uses Annoy in the background.\n\n###### Figure 4-22. t-SNE visualization of the distribution of predicted usage patterns, using latent factors predicted from audio (image source: \"Deep content-based music recommendation\" by Aaron van den Oord, Sander Dieleman, Benjamin Schrauwen, NIPS 2013)\n\n## Image Captioning\n\nImage captioning is the science of translating an image into a sentence (as illustrated in Figure 4-23). Going beyond just object tagging, this requires a deeper visual understanding of the entire image and relationships between objects. To train these models, an open source dataset called MS COCO was released in 2014, which consists of more than 300,000 images along with object categories, sentence descriptions, visual question-answer pairs, and object segmentations. It serves as a benchmark for a yearly competition to see progress in image captioning, object detection, and segmentation.\n\n###### Figure 4-23. Image captioning feature in Seeing AI: the Talking Camera App for the blind community\n\nA common strategy applied in the first year of the challenge (2015) was to append a language model (LSTM\/RNN) with a CNN in such a way that the output of a CNN feature vector is taken as the input to the language model (LSTM\/RNN). This combined model was trained jointly in an end-to-end manner, leading to very impressive results that stunned the world. Although every research lab was trying to beat one another, it was later found that doing a simple nearest-neighbor search could yield state-of-the-art results. For a given image, find similar images based on similarity of the embeddings. Then, note the common words in the captions of the similar images, and print the caption containing the most common words. In short, a lazy approach would still beat the state-of-the-art one, and this exposed a critical bias in the dataset.\n\nThis bias has been coined the _Giraffe-Tree_ problem by Larry Zitnick. Do an image search for \"giraffe\" on a search engine. Look closely: in addition to giraffe, is there grass in almost every image? Chances are you can describe the majority of these images as \"A giraffe standing in a grass field.\" Similarly, if a query image like the photo on the far left in Figure 4-24 contains a giraffe and a tree, almost all similar images (right) can be described as \"a giraffe standing in the grass, next to a tree.\" Even without a deeper understanding of the image, one would arrive at the correct caption using a simple nearest-neighbor search. This shows that to measure the real intelligence of a system, we need more semantically novel\/original images in the test set.\n\n###### Figure 4-24. The Giraffe-Tree problem (image source: Measuring Machine Intelligence Through Visual Question Answering, C. 
Lawrence Zitnick, Aishwarya Agrawal, Stanislaw Antol, Margaret Mitchell, Dhruv Batra, Devi Parikh)\n\nIn short, don't underestimate a simple nearest-neighbor approach!\n\n# Summary\n\nNow we are at the end of a successful expedition where we explored locating similar images with the help of embeddings. We took this one level further by exploring how to scale searches from a few thousand to a few billion documents with the help of ANN algorithms and libraries including Annoy, NGT, and Faiss. We also learned that fine tuning the model to your dataset can improve the accuracy and representative power of embeddings in a supervised setting. To top it all off, we looked at how to use Siamese networks, which use the power of embeddings to do one-shot learning, such as for face verification systems. We finally examined how nearest-neighbor approaches are used in various use cases across the industry. Nearest neighbors are a simple yet powerful tool to have in your toolkit.\n\n# Chapter 5. From Novice to Master Predictor: Maximizing Convolutional Neural Network Accuracy\n\nIn Chapter 1, we looked at the importance of responsible AI development. One of the aspects we discussed was the importance of robustness of our models. Users can trust what we build only if they can be assured that the AI they encounter on a day-to-day basis is accurate and reliable. Obviously, the context of the application matters a lot. It would be okay for a food classifier to misclassify pasta as bread on occasion. But it would be dangerous for a self-driving car to misclassify a pedestrian as a street lane. The main goal of this chapter is thus a rather important one\u2014to build more accurate models.\n\nIn this chapter, you will develop an intuition for recognizing opportunities to improve your model's accuracy the next time you begin training one. We first look at the tools that will ensure that you won't be going in blind. After that, for a good chunk of this chapter, we take a very experimental approach by setting up a baseline, isolating individual parameters to tweak, and observing their effect on model performance and training speed. A lot of the code we use in this chapter is all aggregated in a single Jupyter Notebook, along with an actionable checklist with interactive examples. It is meant to be highly reusable should you choose to incorporate it in your next training script.\n\nWe explore several questions that tend to come up during model training:\n\n * I am unsure whether to use transfer learning or building from scratch to train my own network. What is the preferred approach for my scenario?\n\n * What is the least amount of data that I can supply to my training pipeline to get acceptable results?\n\n * I want to ensure that the model is learning the correct thing and not picking up spurious correlations. How can I get visibility into that?\n\n * How can I ensure that I (or someone else) will obtain the same results from my experiments every single time they are run? 
In other words, how do I ensure reproducibility of my experiments?\n\n * Does changing the aspect ratio of the input images have an impact on the predictions?\n\n * Does reducing input image size have a significant effect on prediction results?\n\n * If I use transfer learning, what percentage of layers should I fine tune to achieve my preferred balance of training time versus accuracy?\n\n * Alternatively, if I were to train from scratch, how many layers should I have in my model?\n\n * What is the appropriate \"learning rate\" to supply during model training?\n\n * There are too many things to remember. Is there a way to automate all of this work?\n\nWe will try to answer these questions one by one in the form of experiments on a few datasets. Ideally, you should be able to look at the results, read the takeaways, and gain some insight into the concept that the experiment was testing. If you're feeling more adventurous, you can choose to perform the experiments yourself using the Jupyter Notebook.\n\n# Tools of the Trade\n\nOne of the main priorities of this chapter is to reduce the code and effort involved during experimentation while trying to gain insights into the process in order to reach high accuracy. An arsenal of tools exists that can assist us in making this journey more pleasant:\n\nTensorFlow Datasets\n\nQuick and easy access to around 100 datasets in a performant manner. All well-known datasets are available starting from the smallest MNIST (a few megabytes) to the largest MS COCO, ImageNet, and Open Images (several hundred gigabytes). Additionally, medical datasets like the Colorectal Histology and Diabetic Retinopathy are also available.\n\nTensorBoard\n\nClose to 20 easy-to-use methods to visualize many aspects of training, including visualizing the graph, tracking experiments, and inspecting the images, text, and audio data that pass through the network during training.\n\nWhat-If Tool\n\nRun experiments in parallel on separate models and tease out differences in them by comparing their performance on specific data points. Edit individual data points to see how that affects the model training.\n\ntf-explain\n\nAnalyze decisions made by the network to identify bias and inaccuracies in the dataset. Additionally, use heatmaps to visualize what parts of the image the network activated on.\n\nKeras Tuner\n\nA library built for `tf.keras` that enables automatic tuning of hyperparameters in TensorFlow 2.0.\n\nAutoKeras\n\nAutomates Neural Architecture Search (NAS) across different tasks like image, text, and audio classification and image detection.\n\nAutoAugment\n\nUtilizes reinforcement learning to improve the amount and diversity of data in an existing training dataset, thereby increasing accuracy.\n\nLet's now explore these tools in greater detail.\n\n## TensorFlow Datasets\n\nTensorFlow Datasets is a collection of nearly 100 ready-to-use datasets that can quickly help build high-performance input data pipelines for training TensorFlow models. Instead of downloading and manipulating data sets manually and then figuring out how to read their labels, TensorFlow Datasets standardizes the data format so that it's easy to swap one dataset with another, often with just a single line of code change. As you will see later on, doing things like breaking the dataset down into training, validation, and testing is also a matter of a single line of code. 
We will additionally be exploring TensorFlow Datasets from a performance point of view in the next chapter.\n\nYou can list all of the available datasets by using the following command (in the interest of conserving space, only a small subset of the full output is shown in this example):\n\n import tensorflow_datasets as tfds\n print(tfds.list_builders())\n\n ['amazon_us_reviews', 'bair_robot_pushing_small', 'bigearthnet', 'caltech101',\n 'cats_vs_dogs', 'celeb_a', 'imagenet2012',... , 'open_images_v4',\n 'oxford_flowers102', 'stanford_dogs','voc2007', 'wikipedia', 'wmt_translate',\n 'xnli']\n\nLet's see how simple it is to load a dataset. We will plug this into a full working pipeline later:\n\n # Import necessary packages\n import tensorflow_datasets as tfds\n\n # Downloading and loading the dataset\n dataset = tfds.load(name=\"cats_vs_dogs\", split=tfds.Split.TRAIN)\n\n # Building a performance data pipeline\n dataset = dataset.map(preprocess).cache().repeat().shuffle(1024).batch(32).\n prefetch(tf.data.experimental.AUTOTUNE)\n\n model.fit(dataset, ...)\n\n###### Tip\n\n`tfds` generates a lot of progress bars, and they take up a lot of screen space\u2014using `tfds.disable_progress_bar()` might be a good idea.\n\n## TensorBoard\n\nTensorBoard is a one-stop-shop for all of your visualization needs, offering close to 20 tools to understand, inspect, and improve your model's training.\n\nTraditionally, to track experiment progress, we save the values of loss and accuracy per epoch and then, when done, plot it using `matplotlib`. The downside with that approach is that it's not real time. Our usual options are to watch for the training progress in text. Additionally, after the training is done, we need to write additional code to make the graph in `matplotlib`. TensorBoard solves these and more pressing issues by offering a real-time dashboard (Figure 5-1) that helps us visualize all logs (such as train\/validation accuracy and loss) to assist in understanding the progression of training. Another benefit it offers is the ability to compare our current experiment's progress with the previous experiment, so we can see how a change in parameters affected our overall accuracy.\n\n###### Figure 5-1. TensorBoard default view showcasing real-time training metrics (the lightly shaded lines represent the accuracy from the previous run)\n\nTo enable TensorBoard to visualize our training and models, we need to log information about our training with the help of summary writer:\n\n summary_writer = tf.summary.FileWriter('.\/logs')\n\nTo follow our training in real time, we need to load TensorBoard before the model training begins. We can load TensorBoard by using the following commands:\n\n # Get TensorBoard to run\n %load_ext tensorboard\n\n # Start TensorBoard\n %tensorboard --logdir .\/log\n\nAs more TensorFlow components need a visual user interface, they reuse TensorBoard by becoming embeddable plug-ins within it. You'll notice the Inactive drop-down menu on TensorBoard; that's where you can see all the different profiles or tools that TensorFlow offers. Table 5-1 showcases a handful of the wide variety of tools available.\n\nTable 5-1. Plugins for TensorBoard **TensorBoard plug-in name** | **Description** \n---|--- \nDefault Scalar | Visualize scalar values such as classification accuracy. \nCustom Scalar | Visualize user-defined custom metrics. For example, different weights for different classes, which might not be a readily available metric. 
\nImage | View the output from each layer by clicking the Images tab. \nAudio | Visualize audio data. \nDebugging tools | Allows debugging visually and setting conditional breakpoints (e.g., tensor contains Nan or Infinity). \nGraphs | Shows the model architecture graphically. \nHistograms | Show the changes in the weight distribution in the layers of a model as the training progresses. This is especially useful for checking the effect of compressing a model with quantization. \nProjector | Visualize projections using t-SNE, PCA, and others. \nText | Visualize text data. \nPR curves | Plot precision-recall curves. \nProfile | Benchmark speed of all operations and layers in a model. \nBeholder | Visualize the gradients and activations of a model in real time during training. It allows seeing them filter by filter, and allows them to be exported as images or even as a video. \nWhat-If Tool | For investigating the model by slicing and dicing the data and checking its performance. Especially helpful for discovering bias. \nHParams | Find out which params and at what values are the most important, allow logging of the entire parameter server (discussed in detail in this chapter). \nMesh | Visualize 3D data (including point clouds).\n\nIt should be noted that TensorBoard is not TensorFlow specific, and can be used with other frameworks like PyTorch, scikit-learn, and more, depending on the plugin used. To make a plugin work, we need to write the specific metadata that we want to visualize. For example, TensorBoard embeds the TensorFlow Projector tool within to cluster images, text, or audio using t-SNE (which we examined in detail in Chapter 4). Apart from calling TensorBoard, we need to write the metadata like the feature embeddings of our image, so that TensorFlow Projector can use it to do clustering, as demonstrated in Figure 5-2.\n\n###### Figure 5-2. TensorFlow Embedding Projector showcasing data in clusters (can be run as a TensorBoard plugin)\n\n## What-If Tool\n\nWhat if we could inspect our AI model's predictions with the help of visualizations? What if we could find the best threshold for our model to maximize precision and recall? What if we could slice and dice the data along with the predictions our model made to see what it's great at and where there are opportunities to improve? What if we could compare two models to figure out which is indeed better? What if we could do all this and more, with a few clicks in the browser? Sounds appealing for sure! The What-If Tool (Figure 5-3 and Figure 5-4) from Google's People + AI Research (PAIR) initiative helps open up the black box of AI models to enable model and data explainability.\n\n###### Figure 5-3. What-If Tool's datapoint editor makes it possible to filter and visualize data according to annotations of the dataset and labels from the classifier\n\n###### Figure 5-4. PR curves in the Performance and Fairness section of the What-If Tool help to interactively select the optimal threshold to maximize precision and recall\n\nTo use the What-If Tool, we need the dataset and a model. As we just saw, TensorFlow Datasets makes downloading and loading the data (in the `tfrecord` format) relatively easy. All we need to do is to locate the data file. 
Additionally, we want to save the model in the same directory:\n\n # Save model for What If Tool\n tf.saved_model.save(model, \"\/tmp\/model\/1\/\")\n\nIt's best to perform the following lines of code in a local system rather than a Colab notebook because the integration between Colab and the What-If Tool is still evolving.\n\nLet's start TensorBoard:\n\n $ mkdir tensorboard\n $ tensorboard --logdir .\/log --alsologtostderr\n\nNow, in a new terminal, let's make a directory for all of our What-If Tool experiments:\n\n $ mkdir what-if-stuff\n\nMove the trained model and TFRecord data here. The overall directory structure looks something like this:\n\n $ tree .\n \u251c\u2500\u2500 colo\n \u2502 \u2514\u2500\u2500 model\n \u2502 \u2514\u2500\u2500 1\n \u2502 \u251c\u2500\u2500 assets\n \u2502 \u251c\u2500\u2500 saved_model.pb\n \u2502 \u2514\u2500\u2500 variables\n\nWe'll serve the model using Docker within the newly created directory:\n\n $ sudo docker run -p 8500:8500 \\\n --mount type=bind,source=\/home\/{ _your_username_ }\/what-if-stuff\/colo\/model\/,\n target=\/models\/colo \\\n -e MODEL_NAME=colo -t tensorflow\/serving\n\nA word of caution: the port must be `8500` and all parameters must be spelled exactly as shown in the preceding example.\n\nNext, at the far right, click the settings button (the gray gear icon) and add the values listed in Table 5-2.\n\nTable 5-2. Configurations for the What-If Tool **Parameter** | **Value** \n---|--- \nInference address |\n\n`ip_addr:8500`\n\nModel name |\n\n`\/models\/colo`\n\nModel type | Classification \nPath to examples | _\/home\/{ _`your_username`_ }\/what_if_stuff\/colo\/models\/colo.tfrec_ (Note: this must be an absolute path)\n\nWe can now open the What-If Tool in the browser within TensorBoard, as depicted in Figure 5-5.\n\n###### Figure 5-5. Setup window for the What-If Tool\n\nThe What-If Tool can also be used to visualize datasets according to different bins, as shown in Figure 5-6. We can also use the tool to determine the better performing model out of multiple models on the same dataset using the `set_compare_estimator_and_feature_spec` function.\n\n from witwidget.notebook.visualization import WitConfigBuilder\n\n _# features are the test examples that we want to load into the tool_\n models = [model2, model3, model4]\n config_builder =\n WitConfigBuilder(test_examples).set_estimator_and_feature_spec(model1, features)\n\n for each_model in models:\n config_builder =\n config_builder.set_compare_estimator_and_feature_spec(each_model, features)\n\n###### Figure 5-6. The What-If tool enables using multiple metrics, data visualization, and many more things under the sun\n\nNow, we can load TensorBoard, and then, in the Visualize section, choose the model we want to compare, as shown in Figure 5-7. This tool has many features to explore!\n\n###### Figure 5-7. Choose the model to compare using the What-If Tool\n\n## tf-explain\n\nDeep learning models have traditionally been black boxes, and up until now, we usually learn about their performance by watching the class probabilities and validation accuracies. To make these models more interpretable and explainable, heatmaps come to the rescue. By showing the area of an image that leads to the prediction with higher intensity, heatmaps can help visualize their learning. 
For example, an animal often seen in surroundings with snow might be getting high-accuracy predictions, but if the dataset has only that animal with snow in the background, the model might just be paying attention to the snow as the distinctive pattern instead of the animal. Such a dataset demonstrates bias, making the predictions far less robust when the classifier is put in the real world (and potentially dangerous!). Heatmaps can be especially useful for exploring such bias, as spurious correlations can often seep in if the dataset is not carefully curated.\n\n`tf-explain` (by Raphael Meudec) helps us understand the results and inner workings of a neural network with the help of such visualizations, removing the veil on bias in datasets. We can add multiple types of callbacks while training or use its core API to generate TensorFlow events that can later be loaded into TensorBoard. For inference, all we need to do is pass an image and its ImageNet object ID along with a model into tf-explain's functions. You must supply the object ID because `tf-explain` needs to know which class's activations to visualize. A few different visualization approaches are available with `tf-explain`:\n\nGrad CAM\n\nGradient-weighted Class Activation Mapping (Grad CAM) visualizes how parts of the image affect the neural network's output by looking into the activation maps. A heatmap (illustrated in Figure 5-8) is generated based on the gradients of the object ID from the last convolutional layer. Grad CAM is largely a broad-spectrum heatmap generator, given that it is robust to noise and can be used on a wide array of CNN models.\n\nOcclusion Sensitivity\n\nOccludes parts of the image (using a small square patch) to establish how robust the network's prediction is. If the prediction is still correct, on average, the network is robust. The area in the image that is the warmest (i.e., red) has the most effect on the prediction when occluded.\n\nActivations\n\nVisualizes the activations for the convolutional layers.\n\n###### Figure 5-8. Visualizations on images using MobileNet and tf-explain\n\nAs demonstrated in the code example that follows, such visualizations can be built with very little code. By taking a video, generating individual frames, running tf-explain with Grad CAM on each frame, and joining them back together, we can build a detailed understanding of how these neural networks would react to moving camera angles.\n\n import tensorflow as tf\n from tf_explain.core.grad_cam import GradCAM\n from tensorflow.keras.applications import MobileNet\n\n model = MobileNet(weights='imagenet', include_top=True)\n\n # Set up the Grad CAM explainer\n explainer = GradCAM()\n\n # Image processing\n IMAGE_PATH = 'dog.jpg'\n dog_index = 263 # ImageNet class index used for this example\n img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))\n img = tf.keras.preprocessing.image.img_to_array(img)\n data = ([img], None)\n\n # Passing the image through Grad CAM\n grid = explainer.explain(data, model, 'conv1', dog_index)\n name = IMAGE_PATH.split(\".jpg\")[0]\n explainer.save(grid, '\/tmp', name + '_grad_cam.png')\n\n# Common Techniques for Machine Learning Experimentation\n\nThe first few chapters focused on training the model. The following sections, however, contain a few more things to keep in the back of your mind while running your training experiments.\n\n## Data Inspection\n\nThe first big hurdle in data inspection is determining the structure of the data; a quick way to peek at it is sketched below. 
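\n\nA minimal sketch of that first peek, using TensorFlow Datasets to load a dataset together with its metadata (the `cats_vs_dogs` choice is just an example):\n\n import tensorflow_datasets as tfds\n\n # with_info=True also returns a DatasetInfo object describing features and splits\n dataset, info = tfds.load(name='cats_vs_dogs', split='train', with_info=True)\n print(info.features) # image shape\/dtype and number of label classes\n print(info.splits) # available splits and the number of examples in each\n\n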
TensorFlow Datasets has made this step relatively easy because all of the available datasets are in the same format and structure and can be used in a performant way. All we need to do is load the dataset into the What-If Tool and use the various options already present to inspect the data. As an example, on the SMILE dataset, we can visualize the dataset according to its annotations, such as images of people wearing eyeglasses and those without eyeglasses, as illustrated in Figure 5-9. We observe that a wider distribution of the dataset has images of people wearing no eyeglasses, thus uncovering bias in the data due to an unbalanced dataset. This can be solved by modifying the weights of the metrics accordingly, through the tool.\n\n###### Figure 5-9. Slicing and dividing the data based on predictions and real categories\n\n## Breaking the Data: Train, Validation, Test\n\nSplitting a dataset into train, validation, and test is pretty important because we want to report the results on an unseen dataset by the classifier (i.e., the test dataset). TensorFlow Datasets makes it easy to download, load, and split the dataset into these three parts. Some datasets already come with three default splits. Alternatively, the data can be split by percentages. The following code showcases using a default split:\n\n dataset_name = \"cats_vs_dogs\"\n train, info_train = tfds.load(dataset_name, split=tfds.Split.TRAIN,\n with_info=True)\n\nThe cats-and-dogs dataset in `tfds` has only the train split predefined. Similar to this, some datasets in TensorFlow Datasets do not have a `validation` split. For those datasets, we take a small percentage of samples from the predefined `training` set and treat it as the `validation` set. To top it all off, splitting the dataset using the `weighted_splits` takes care of randomizing and shuffling data between the splits:\n\n # Load the dataset\n dataset_name = \"cats_vs_dogs\"\n\n # Dividing data into train (80), val (10) and test (10)\n split_train, split_val, split_test = tfds.Split.TRAIN.subsplit(weighted=[80, 10,\n 10])\n train, info_train = tfds.load(dataset_name, split=split_train , with_info=True)\n val, info_val = tfds.load(dataset_name, split=split_val, with_info=True)\n test, info_test = tfds.load(dataset_name, split=split_test, with_info=True)\n\n## Early Stopping\n\nEarly stopping helps to avoid overtraining of the network by keeping a lookout for the number of epochs that show limited improvement. Assuming a model is set to train for 1,000 epochs and reaches 90% accuracy at the 10th epoch and stops improving any further for the next 10 epochs, it might be a waste of resources to train any further. If the number of epochs exceeds a predefined threshold called `patience`, training is stopped even if there might still be more epochs left to train. In other words, early stopping decides the point at which the training would no longer be useful and stops training. We can change the metric using the `monitor` parameter and add early stopping to our list of callbacks for the model:\n\n # Define Early Stopping callback\n earlystop_callback = tf.keras.callbacks.EarlyStopping(monitor='val_acc',\n \t\t\t\t\t min_delta=0.0001, patience=10)\n\n # Add to the training model\n model.fit_generator(... callbacks=[earlystop_callback])\n\n## Reproducible Experiments\n\nTrain a network once. Then, train it again, without changing any code or parameters. You might notice that the accuracies in two subsequent runs came out slightly different, even if no change was made in code. 
This is due to random variables. To make experiments reproducible across runs, we want to control this randomization. Initialization of weights of models, randomized shuffling of data, and so on all utilize randomization algorithms. We know that random number generators can be made reproducible by initializing a seed and that's exactly what we will do. Various frameworks have their own ways of setting a random seed, some of which are shown here:\n\n # Seed for Tensorflow\n tf.random.set_seed(1234)\n\n # Seed for Numpy\n import numpy as np\n np.random.seed(1234)\n\n # Seed for Keras\n seed = 1234\n fit(train_data, augment=True, seed=seed)\n flow_from_dataframe(train_dataframe, shuffle=True, seed=seed)\n\n###### Note\n\nIt is necessary to set a seed in all the frameworks and subframeworks that are being used, as seeds are not transferable between frameworks.\n\n# End-to-End Deep Learning Example Pipeline\n\nLet's combine several tools and build a skeletal backbone, which will serve as our pipeline in which we will add and remove parameters, layers, functionality, and various other addons to really understand what is happening. Following the code on the book's GitHub website (see _http:\/\/PracticalDeepLearning.ai_), you can interactively run this code for more than 100 datasets in your browser with Colab. Additionally, you can modify it for most classification tasks.\n\n## Basic Transfer Learning Pipeline\n\nFirst, let's build this end-to-end example for transfer learning.\n\n # Import necessary packages\n import tensorflow as tf\n import tensorflow_datasets as tfds\n\n # tfds makes a lot of progress bars, which takes up a lot of screen space, hence\n # disabling them\n tfds.disable_progress_bar()\n\n tf.random.set_seed(1234)\n\n # Variables\n BATCH_SIZE = 32\n NUM_EPOCHS= 20\n IMG_H = IMG_W = 224\n IMG_SIZE = 224\n LOG_DIR = '.\/log'\n SHUFFLE_BUFFER_SIZE = 1024\n IMG_CHANNELS = 3\n\n dataset_name = \"oxford_flowers102\"\n\n def preprocess(ds):\n x = tf.image.resize_with_pad(ds['image'], IMG_SIZE, IMG_SIZE)\n x = tf.cast(x, tf.float32)\n x = (x\/127.5) - 1\n return x, ds['label']\n\n def augmentation(image,label):\n image = tf.image.random_brightness(image, .1)\n image = tf.image.random_contrast(image, lower=0.0, upper=1.0)\n image = tf.image.random_flip_left_right(image)\n return image, label\n\n def get_dataset(dataset_name):\n split_train, split_val = tfds.Split.TRAIN.subsplit(weighted=[9,1])\n train, info_train = tfds.load(dataset_name, split=split_train , with_info=True)\n val, info_val = tfds.load(dataset_name, split=split_val, with_info=True)\n NUM_CLASSES = info_train.features['label'].num_classes\n assert NUM_CLASSES >= info_val.features['label'].num_classes\n NUM_EXAMPLES = info_train.splits['train'].num_examples * 0.9\n IMG_H, IMG_W, IMG_CHANNELS = info_train.features['image'].shape\n train = train.map(preprocess).cache().\n repeat().shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)\n train = train.map(augmentation)\n train = train.prefetch(tf.data.experimental.AUTOTUNE)\n val = val.map(preprocess).cache().repeat().batch(BATCH_SIZE)\n val = val.prefetch(tf.data.experimental.AUTOTUNE)\n return train, info_train, val, info_val, IMG_H, IMG_W, IMG_CHANNELS,\n NUM_CLASSES, NUM_EXAMPLES\n\n train, info_train, val, info_val, IMG_H, IMG_W, IMG_CHANNELS, NUM_CLASSES,\n NUM_EXAMPLES = get_dataset(dataset_name)\n\n # Allow TensorBoard callbacks\n tensorboard_callback = tf.keras.callbacks.TensorBoard(LOG_DIR,\n histogram_freq=1,\n write_graph=True,\n write_grads=True,\n batch_size=BATCH_SIZE,\n 
write_images=True)\n\n def transfer_learn(train, val, unfreeze_percentage, learning_rate):\n mobile_net = tf.keras.applications.ResNet50(input_shape=(IMG_SIZE, IMG_SIZE,\n IMG_CHANNELS), include_top=False)\n mobile_net.trainable=False\n # Unfreeze some of the layers according to the dataset being used\n num_layers = len(mobile_net.layers)\n for layer_index in range(int(num_layers - unfreeze_percentage*num_layers),\n num_layers ):\n \t\tmobile_net.layers[layer_index].trainable = True\n model_with_transfer_learning = tf.keras.Sequential([mobile_net,\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64),\n tf.keras.layers.Dropout(0.3),\n tf.keras.layers.Dense(NUM_CLASSES, \n activation='softmax')],)\n model_with_transfer_learning.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),\n loss='sparse_categorical_crossentropy',\n metrics=[\"accuracy\"])\n model_with_transfer_learning.summary()\n earlystop_callback = tf.keras.callbacks.EarlyStopping(\n monitor='val_accuracy', \n min_delta=0.0001, \n patience=5)\n model_with_transfer_learning.fit(train,\n epochs=NUM_EPOCHS,\n steps_per_epoch=int(NUM_EXAMPLES\/BATCH_SIZE),\n validation_data=val,\n validation_steps=1,\n validation_freq=1,\n callbacks=[tensorboard_callback,\n earlystop_callback])\n return model_with_transfer_learning\n\n # Start TensorBoard\n %tensorboard --logdir .\/log\n\n # Select the last % layers to be trained while using the transfer learning\n # technique. These layers are the closest to the output layers.\n unfreeze_percentage = .33\n learning_rate = 0.001\n\n model = transfer_learn(train, val, unfreeze_percentage, learning_rate)\n\n## Basic Custom Network Pipeline\n\nApart from transfer learning on state-of-the-art models, we can also experiment and develop better intuitions by building our own custom network. Only the model needs to be swapped in the previously defined transfer learning code:\n\n def create_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, (3, 3), activation='relu',\n input_shape=(IMG_SIZE, IMG_SIZE, IMG_CHANNELS)),\n tf.keras.layers.MaxPool2D(pool_size=(2, 2)),\n tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n tf.keras.layers.MaxPool2D(pool_size=(2, 2)),\n tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),\n tf.keras.layers.MaxPool2D(pool_size=(2, 2)),\n tf.keras.layers.Dropout(rate=0.3),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dropout(rate=0.3),\n tf.keras.layers.Dense(NUM_CLASSES, activation='softmax')\n ])\n return model\n\n def scratch(train, val, learning_rate):\n model = create_model()\n model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\n earlystop_callback = tf.keras.callbacks.EarlyStopping(\n monitor='val_accuracy', \n min_delta=0.0001, \n patience=5)\n\n model.fit(train,\n epochs=NUM_EPOCHS,\n steps_per_epoch=int(NUM_EXAMPLES\/BATCH_SIZE),\n validation_data=val, \n validation_steps=1,\n validation_freq=1,\n callbacks=[tensorboard_callback, earlystop_callback])\n return model\n\nNow, it's time to use our pipeline for various experiments.\n\n# How Hyperparameters Affect Accuracy\n\nIn this section, we aim to modify various parameters of a deep learning pipeline one at a time\u2014from the number of layers fine-tuned, to the choice of the activation function used\u2014and see its effect primarily on validation accuracy. 
Additionally, when relevant, we also observe its effect on the speed of training and time to reach the best accuracy (i.e., convergence).\n\nOur experimentation setup is as follows:\n\n * To reduce experimentation time, we have used a faster architecture\u2014MobileNet\u2014in this chapter.\n\n * We reduced the input image resolution to 128 x 128 pixels to further speed up training. In general, we would recommend using a higher resolution (at least 224 x 224) for production systems.\n\n * Early stopping is applied to stop experiments if they don't increase in accuracy for 10 consecutive epochs.\n\n * For training with transfer learning, we generally unfreeze the last 33% of the layers.\n\n * Learning rate is set to 0.001 with Adam optimizer.\n\n * We're mostly using the Oxford Flowers 102 dataset for testing, unless otherwise stated. We chose this dataset because it is reasonably difficult to train on due to the large number of classes it contains (102) and the similarities between many of the classes that force networks to develop a fine-grained understanding of features in order to do well.\n\n * To make apples-to-apples comparisons, we take the maximum accuracy value in a particular experiment and normalize all other accuracy values within that experiment with respect to this maximum value.\n\nBased on these and other experiments, we have compiled a checklist of actionable tips to implement in your next model training adventure. These are available on the book's GitHub (see _http:\/\/PracticalDeepLearning.ai_) along with interactive visualizations. If you have more tips, feel free to tweet them @PracticalDLBook or submit a pull request.\n\n## Transfer Learning Versus Training from Scratch\n\nExperimental setup\n\nTrain two models: one using transfer learning, and one from scratch on the same dataset.\n\nDatasets used\n\nOxford Flowers 102, Colorectal Histology\n\nArchitectures used\n\nPretrained MobileNet, Custom model\n\nFigure 5-10 shows the results.\n\n###### Figure 5-10. Comparing transfer learning versus training a custom model on different datasets\n\nHere are the key takeaways:\n\n * Transfer learning leads to a quicker rise in accuracy during training by reusing previously learned features.\n\n * Although it is expected that transfer learning (based on pretrained models on ImageNet) would work when the target dataset is also of natural imagery, the patterns learned in the early layers by a network work surprisingly well for datasets beyond ImageNet. That does not necessarily mean that it will yield the best results, but it can get close. When the images match more real-world images that the model was pretrained on, we get relatively quick improvement in accuracy.\n\n## Effect of Number of Layers Fine-Tuned in Transfer Learning\n\nExperimental setup\n\nVary the percentage of trainable layers from 0 to 100%\n\nDataset used\n\nOxford Flowers 102\n\nArchitecture used\n\nPretrained MobileNet\n\nFigure 5-11 shows the results.\n\n###### Figure 5-11. 
Effect of % layers fine-tuned on model accuracy\n\nHere are the key takeaways:\n\n * The higher the number of layers fine-tuned, the fewer epochs it took to reach convergence and the higher the accuracy.\n\n * The higher the number of layers fine-tuned, the more time it took per epoch for training, due to more computation and updates involved.\n\n * For a dataset that required fine-grained understanding of images, making more layers task specific by unfreezing them was the key to a better model.\n\n## Effect of Data Size on Transfer Learning\n\nExperimental setup\n\nAdd one image per class at a time\n\nDataset used\n\nCats versus dogs\n\nArchitecture used\n\nPretrained MobileNet\n\nFigure 5-12 shows the results.\n\n###### Figure 5-12. Effect of the amount of data per category on model accuracy\n\nHere are the key takeaways:\n\n * Even with only three images in each class, the model was able to predict with close to 90% accuracy. This shows how powerful transfer learning can be in reducing data requirements.\n\n * Because ImageNet contains plenty of cats and dogs, networks pretrained on ImageNet were already well suited to our dataset. More difficult datasets like Oxford Flowers 102 might require a much higher number of images to achieve similar accuracies.\n\n## Effect of Learning Rate\n\nExperimental setup\n\nVary the learning rate among 0.1, 0.01, 0.001, and 0.0001\n\nDataset used\n\nOxford Flowers 102\n\nArchitecture used\n\nPretrained MobileNet\n\nFigure 5-13 shows the results.\n\n###### Figure 5-13. Effect of learning rate on model accuracy and speed of convergence\n\nHere are the key takeaways:\n\n * Too high a learning rate, and the model might never converge.\n\n * Too low a learning rate means the model takes a long time to converge.\n\n * Striking the right balance is crucial in training quickly.\n\n## Effect of Optimizers\n\nExperimental setup\n\nExperiment with available optimizers including AdaDelta, AdaGrad, Adam, Gradient Descent, Momentum, and RMSProp\n\nDataset used\n\nOxford Flowers 102\n\nArchitecture used\n\nPretrained MobileNet\n\nFigure 5-14 shows the results.\n\n###### Figure 5-14. Effect of different optimizers on the speed of convergence\n\nHere are the key takeaways:\n\n * Adam is a great choice for faster convergence to high accuracy.\n\n * RMSProp is usually better for RNN tasks.\n\n## Effect of Batch Size\n\nExperimental setup\n\nVary batch sizes in powers of two\n\nDataset used\n\nOxford Flowers 102\n\nArchitecture used\n\nPretrained\n\nFigure 5-15 shows the results.\n\n###### Figure 5-15. Effect of batch size on accuracy and speed of convergence\n\nHere are the key takeaways:\n\n * The higher the batch size, the more instability in results from epoch to epoch, with bigger rises and drops. But a higher batch size also leads to more efficient GPU utilization, and hence a faster speed per epoch.\n\n * Too low a batch size slows the rise in accuracy.\n\n * Batch sizes of 16\/32\/64 are good starting points.\n\n## Effect of Resizing\n\nExperimental setup\n\nVary the image size between 128 x 128 and 224 x 224\n\nDataset used\n\nOxford Flowers 102\n\nArchitecture used\n\nPretrained\n\nFigure 5-16 shows the results.\n\n###### Figure 5-16. Effect of image size on accuracy\n\nHere are the key takeaways:\n\n * Even with a third of the pixels, there wasn't a significant difference in validation accuracies. This shows the robustness of CNNs. It might partly be because the Oxford Flowers 102 dataset has close-ups of flowers visible. 
For datasets in which the objects occupy a much smaller portion of the image, the results might be lower.\n\n## Effect of Change in Aspect Ratio on Transfer Learning\n\nExperimental setup\n\nTake images of various aspect ratios (width:height ratio) and resize them to a square (1:1 aspect ratio).\n\nDataset used\n\nCats versus dogs\n\nArchitecture used\n\nPretrained\n\nFigure 5-17 shows the results.\n\n###### Figure 5-17. Distribution of aspect ratios and corresponding accuracies\n\nHere are the key takeaways:\n\n * The most common aspect ratio is 4:3 (i.e., 1.33), whereas our neural networks are generally trained at a 1:1 ratio.\n\n * Neural networks are relatively robust to the minor modifications in aspect ratio brought about by resizing to a square shape. Even aspect ratios up to 2.0 give decent results.\n\n# Tools to Automate Tuning for Maximum Accuracy\n\nAs we have seen since the nineteenth century, automation has always led to an increase in productivity. In this section, we investigate tools that can help us automate the search for the best model.\n\n## Keras Tuner\n\nWith so many potential combinations of hyperparameters to tune, coming up with the best model can be a tedious process. Often two or more parameters might have correlated effects on the overall speed of convergence as well as validation accuracy, so tuning one at a time might not lead to the best model. And if curiosity gets the best of us, we might want to experiment on all the hyperparameters together.\n\nThis is where Keras Tuner comes in to automate the hyperparameter search. We define a search algorithm, the potential values that each parameter can take (e.g., discrete values or a range), the objective to maximize (e.g., validation accuracy), and sit back to watch the program start training. Keras Tuner conducts multiple experiments changing the parameters on our behalf, storing the metadata of the best model. 
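Keras Tuner ships separately from TensorFlow, so it needs to be installed before running the example that follows. Assuming the package is published on PyPI under the name keras-tuner (the module itself is imported as `kerastuner`), a one-line install from a notebook is enough:\n\n    # Package name assumed; adjust if your environment differs\n    !pip3 install keras-tuner\n\n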
The following code example adapted from Keras Tuner documentation showcases searching through the different model architectures (varying in the number of layers between 2 and 10) as well as varying the learning rate (between 0.1 and 0.001):\n\n from tensorflow import keras\n from tensorflow.keras import layers\n import numpy as np\n\n from kerastuner.engine.hypermodel import HyperModel\n from kerastuner.engine.hyperparameters import HyperParameters\n\n # Input data\n (x, y), (val_x, val_y) = keras.datasets.mnist.load_data()\n x = x.astype('float32') \/ 255.\n val_x = val_x.astype('float32') \/ 255.\n\n # Defining hyper parameters\n hp = HyperParameters()\n hp.Choice('learning_rate', [0.1, 0.001])\n hp.Int('num_layers', 2, 10)\n\n # Defining model with expandable number of layers\n def build_model(hp):\n model = keras.Sequential()\n model.add(layers.Flatten(input_shape=(28, 28)))\n for _ in range(hp.get('num_layers')):\n model.add(layers.Dense(32, activation='relu'))\n model.add(layers.Dense(10, activation='softmax'))\n model.compile(\n optimizer=keras.optimizers.Adam(hp.get('learning_rate')),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n return model\n\n hypermodel = RandomSearch(\n build_model,\n max_trials=20, # Number of combinations allowed\n hyperparameters=hp,\n allow_new_entries=False,\n objective='val_accuracy')\n\n hypermodel.search(x=x,\n y=y,\n epochs=5,\n validation_data=(val_x, val_y))\n\n # Show summary of overall best model\n hypermodel.results_summary()\n\nEach experiment will show values like this:\n\n > Hp values:\n |-learning_rate: 0.001\n |-num_layers: 6\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 Name \u2502 Best model \u2502 Current model \u2502\n \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524\n \u2502 accuracy \u2502 0.9911 \u2502 0.9911 \u2502\n \u2502 loss \u2502 0.0292 \u2502 0.0292 \u2502\n \u2502 val_loss \u2502 0.227 \u2502 0.227 \u2502\n \u2502 val_accuracy \u2502 0.9406 \u2502 0.9406 \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nOn the experiment end, the result summary gives a snapshot of the experiments conducted so far, and saves more metadata.\n\n Hypertuning complete - results in .\/untitled_project\n [Results summary]\n |-Results in .\/untitled_project\n |-Ran 20 trials\n |-Ran 20 executions (1 per trial)\n |-Best val_accuracy: 0.9406\n\nAnother big benefit is the ability to track experiments online in real time and get notifications on their progress by visiting _http:\/\/keras-tuner.appspot.com_, getting an API key (from Google App Engine), and entering the following line in our Python program along with the real API key:\n\n tuner.enable_cloud(api_key=api_key)\n\nDue to the potentially large combinatorial space, random search is preferred to grid search as a more practical way to get to a good solution on a limited experimentation budget. 
But there are faster ways, including Hyperband (Lisha Li et al.), whose implementation is also available in Keras Tuner.\n\nFor computer-vision problems, Keras Tuner includes ready-to-use tunable applications like HyperResNet.\n\n## AutoAugment\n\nAugmentations are another example of a hyperparameter. Which augmentations should we use? At what magnitude? Would combining too many make matters worse? Instead of leaving these decisions to humans, we can let AI decide. AutoAugment utilizes reinforcement learning to come up with the best combination of augmentations (like translation, rotation, shearing) and the probabilities and magnitudes to apply, to maximize the validation accuracy. (The method was applied by Ekin D. Cubuk et al. to come up with the new state-of-the-art ImageNet validation numbers.) By learning the best combination of augmentation parameters on ImageNet, we can readily apply it to our problem.\n\nApplying the prelearned augmentation strategy from ImageNet is pretty simple:\n\n    from PIL import Image\n    from autoaugment import ImageNetPolicy\n    img = Image.open(\"cat.jpg\")\n    policy = ImageNetPolicy()\n    imgs = [policy(img) for _ in range(8)]\n\nFigure 5-18 displays the results.\n\n###### Figure 5-18. Output of augmentation strategies learned by reinforcement learning on the ImageNet dataset\n\n## AutoKeras\n\nWith AI automating more and more jobs, it is no surprise that it can finally automate the design of AI architectures, too. Neural Architecture Search (NAS) approaches utilize reinforcement learning to join together mini-architectural blocks until they are able to maximize the objective function; in other words, our validation accuracy. The current state-of-the-art networks are all based on NAS, leaving human-designed architectures in the dust. Research in this area started showing promising results in 2017, with a bigger focus on making training faster in 2018. And now with AutoKeras (Haifeng Jin et al.), we can also apply this state-of-the-art technique to our particular datasets in a relatively accessible manner.\n\nGenerating new model architectures with AutoKeras is a matter of supplying our images and associated labels as well as a time limit by which to finish running the jobs. Internally, it implements several optimization algorithms, including a Bayesian optimization approach to search for an optimal architecture:\n\n    !pip3 install autokeras\n    !pip3 install graphviz\n    from keras.datasets import mnist\n    from autokeras.image.image_supervised import ImageClassifier\n\n    (x_train, y_train), (x_test, y_test) = mnist.load_data()\n    x_train = x_train.reshape(x_train.shape + (1,))\n    x_test = x_test.reshape(x_test.shape + (1,))\n\n    clf = ImageClassifier(path=\".\", verbose=True, augment=False)\n    clf.fit(x_train, y_train, time_limit=30 * 60)  # 30 minutes\n    clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)\n    y = clf.evaluate(x_test, y_test)\n    print(y)\n\n    # Save the model as a pickle file\n    clf.export_autokeras_model(\"model.pkl\")\n\n    visualize('.')\n\nAfter training finishes, we are naturally eager to see what the new model architecture looks like. Unlike the cleaner diagrams we are used to seeing, the generated architecture will look fairly obfuscated when printed out. What we can take comfort in is that it yields high accuracy.\n\n# Summary\n\nIn this chapter, we saw a range of tools and techniques to help investigate opportunities to improve our CNN accuracy. Building a case for iterative experimentation, you learned how tuning hyperparameters can bring about optimal performance. 
And with so many hyperparameters to choose from, we then looked at automated approaches, including AutoKeras, AutoAugment, and Keras Tuner. Best of all, the core code for this chapter combining multiple tools in a single Colab file is available online on the book's GitHub (see _http:\/\/PracticalDeepLearning.ai_) and can easily be tuned to more than 100 datasets with a single line change and run online in the browser. Additionally, we compiled a checklist of actionable tips along with interactive experiments hosted online to help give your model a little extra edge. We hope that the material covered in this chapter will help make your models more robust, reduce bias, make them more explainable, and ultimately contribute to the responsible development of AI.\n\n# Chapter 6. Maximizing Speed and Performance of TensorFlow: A Handy Checklist\n\nLife is all about making do with what we have, and optimization is the name of the game.\n\nIt's not about having everything\u2014it's about using your resources wisely. Maybe we really want to buy that Ferrari, but our budget allows for a Toyota. You know what, though? With the right kinds of performance tuning, we can make that bad boy race at NASCAR!\n\nLet's look at this in terms of the deep learning world. Google, with its engineering might and TPU pods capable of boiling the ocean, set a speed record by training ImageNet in just about 30 minutes! And yet, just a few months later, a ragtag team of three researchers (Andrew Shaw, Yaroslav Bulatov, and Jeremy Howard), with $40 in their pockets using a public cloud, were able to train ImageNet in only 18 minutes!\n\nThe lesson we can draw from these examples is that the amount of resources that you have is not nearly as important as using them to their maximum potential. It's all about doing more with less. In that spirit, this chapter is meant to serve as a handy checklist of potential performance optimizations that we can make when building all stages of the deep learning pipelines, and will be useful throughout the book. Specifically, we will discuss optimizations related to data preparation, data reading, data augmentation, training, and finally inference.\n\nAnd the story starts and ends with two words...\n\n# GPU Starvation\n\nA commonly asked question by AI practitioners is, \"Why is my training so slow?\" The answer more often than not is GPU starvation.\n\nGPUs are the lifelines of deep learning. They can also be the most expensive component in a computer system. In light of that, we want to fully utilize them. This means that a GPU should not need to wait for data to be available from other components for processing. Rather, when the GPU is ready to process, the preprocessed data should already be available at its doorstep and ready to go. Yet, the reality is that the CPU, memory, and storage are frequently the performance bottlenecks, resulting in suboptimal utilization of the GPU. In other words, we want the GPU to be the bottleneck, not the other way round.\n\nBuying expensive GPUs for thousands of dollars can be worthwhile, but only if the GPU is the bottleneck to begin with. Otherwise, we might as well burn the cash.\n\nTo illustrate this better, consider Figure 6-1. In a deep learning pipeline, the CPU and GPU work in collaboration, passing data to each other. The CPU reads the data, performs preprocessing steps including augmentations, and then passes it on to the GPU for training. 
Their collaboration is like a relay race, except one of the relay runners is an Olympic athlete, waiting for a high school track runner to pass the baton. The more time the GPU stays idle, the more wasted resources.\n\n###### Figure 6-1. GPU starvation, while waiting for CPU to finish preparing the data\n\nA large portion of this chapter is devoted to reducing the idle time of the GPU and the CPU.\n\nA logical question to ask is: how do we know whether the GPU is starving? Two handy tools can help us answer this question:\n\n`nvidia-smi`\n\nThis command shows GPU statistics including utilization.\n\nTensorFlow Profiler + TensorBoard\n\nThis visualizes program execution interactively in a timeline within TensorBoard.\n\n## nvidia-smi\n\nShort for NVIDIA System Management Interface program, `nvidia-smi` provides detailed statistics about our precious GPUs, including memory, utilization, temperature, power wattage, and more. It's a geek's dream come true.\n\nLet's take it for a test drive:\n\n $ nvidia-smi\n\nFigure 6-2 shows the result.\n\n###### Figure 6-2. Terminal output of `nvidia-smi` highlighting the GPU utilization\n\nWhile training a network, the key figure we are interested in is the GPU utilization, defined in the documentation as the percent of time over the past second during which _one or more_ kernels was executing on the GPU. Fifty-one percent is frankly not that great. But this is utilization at the moment in time when `nvidia-smi` is called. How do we continuously monitor these numbers? To better understand the GPU usage, we can refresh the utilization metrics every half a second with the `watch` command (it's worth memorizing this command):\n\n $ watch -n .5 nvidia-smi\n\n###### Note\n\nAlthough GPU utilization is a good proxy for measuring the efficiency of our pipeline, it does not alone measure how well we're using the GPU, because the work could still be using a small fraction of the GPU's resources.\n\nBecause staring at the terminal screen with the number jumping around is not the most optimal way to analyze, we can instead poll the GPU utilization every second and dump that into a file. Run this for about 30 seconds while any GPU-related process is running on our system and stop it by pressing Ctrl+C:\n\n $ nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader,nounits -f\n gpu_utilization.csv -l 1\n\nNow, calculate the median GPU utilization from the file generated:\n\n $ sort -n gpu_utilization.csv | grep -v '^0$' | datamash median 1\n\n###### Tip\n\nDatamash is a handy command-line tool that performs basic numeric, textual, and statistical operations on textual data files. You can find instructions to install it at _https:\/\/www.gnu.org\/software\/datamash\/_.\n\n`nvidia-smi` is the most convenient way to check our GPU utilization on the command line. Could we get a deeper analysis? It turns out, for advanced users, TensorFlow provides a powerful set of tools.\n\n## TensorFlow Profiler + TensorBoard\n\nTensorFlow ships with `tfprof` (Figure 6-3), the TensorFlow profiler to help analyze and understand the training process at a much deeper level, such as generating a detailed model analysis report for each operation in our model. But the command line can be a bit daunting to navigate. Luckily, TensorBoard, a suite of browser-based visualization tools for TensorFlow, includes a plugin for the profiler that lets us interactively debug the network with a few mouse clicks. This includes Trace Viewer, a feature that shows events in a timeline. 
It helps investigate precisely how resources are being used at a given point in time and spot inefficiencies.\n\n###### Note\n\nAs of this writing, TensorBoard is fully supported only in Google Chrome and might not show the profile view in other browsers, like Firefox.\n\n###### Figure 6-3. Profiler's timeline in TensorBoard shows an idle GPU while the CPU is processing as well as CPU idling while the GPU is processing\n\nTensorBoard, by default, has the profiler enabled. Activating TensorBoard involves a simple callback function:\n\n    tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=\"\/tmp\",\n                                                          profile_batch=7)\n\n    model.fit(train_data,\n              steps_per_epoch=10,\n              epochs=2,\n              callbacks=[tensorboard_callback])\n\nWhen initializing the callback, unless `profile_batch` is explicitly specified, it profiles the second batch. Why the second batch? Because the first batch is usually slower than the rest due to some initialization overhead.\n\n###### Note\n\nIt bears reiterating that profiling using TensorBoard is best suited for power users of TensorFlow. If you are just starting out, you are better off using `nvidia-smi`. (Although `nvidia-smi` is capable of far more than just providing GPU utilization info, that is typically how most practitioners use it.) For users wanting even deeper access to their hardware utilization metrics, NVIDIA Nsight is a great tool.\n\nAlright. With these tools at our disposal, we know that our program needs some tuning and has room for efficiency improvements. We look at those areas one by one in the next few sections.\n\n# How to Use This Checklist\n\nIn business, an oft-quoted piece of advice is \"You can't improve what you can't measure.\" This applies to deep learning pipelines, as well. Tuning performance is like a science experiment. You set up a baseline run, tune a knob, measure the effect, and iterate in the direction of improvement. The items on the following checklist are our knobs\u2014some are quick and easy, whereas others are more involved.\n\nTo use this checklist effectively, do the following:\n\n 1. Isolate the part of the pipeline that you want to improve.\n\n 2. Find a relevant point on the checklist.\n\n 3. Implement it, experiment, and observe whether the runtime is reduced. If it isn't, discard the change.\n\n 4. Repeat steps 1 through 3 until the checklist is exhausted.\n\nSome of the improvements might be minute, some more drastic. But the cumulative effect of all these changes should hopefully result in faster, more efficient execution and, best of all, more bang for the buck for your hardware. 
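To make step 3 concrete, it helps to put a consistent measurement in place before touching anything. The following sketch is not part of the checklist itself; it is just one simple way to record the wall-clock time of every epoch with a small Keras callback (the callback name is ours), so a baseline run can be compared against each subsequent tweak:\n\n    import time\n    import tensorflow as tf\n\n    # Minimal callback that prints wall-clock seconds per epoch\n    class EpochTimer(tf.keras.callbacks.Callback):\n        def on_epoch_begin(self, epoch, logs=None):\n            self.epoch_start = time.time()\n\n        def on_epoch_end(self, epoch, logs=None):\n            print('Epoch {} took {:.1f} seconds'.format(\n                epoch, time.time() - self.epoch_start))\n\n    # Hypothetical usage:\n    # model.fit(train_data, epochs=5, callbacks=[EpochTimer()])\n\n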
Let's look at each area of the deep learning pipeline step by step, including data preparation, data reading, data augmentation, training, and, finally, inference.\n\n# Performance Checklist\n\n## Data Preparation\n\n * \"Store as TFRecords\"\n\n * \"Reduce Size of Input Data\"\n\n * \"Use TensorFlow Datasets\"\n\n## Data Reading\n\n * \"Use tf.data\"\n\n * \"Prefetch Data\"\n\n * \"Parallelize CPU Processing\"\n\n * \"Parallelize I\/O and Processing\"\n\n * \"Enable Nondeterministic Ordering\"\n\n * \"Cache Data\"\n\n * \"Turn on Experimental Optimizations\"\n\n * \"Autotune Parameter Values\"\n\n## Data Augmentation\n\n * \"Use GPU for Augmentation\"\n\n## Training\n\n * \"Use Automatic Mixed Precision\"\n\n * \"Use Larger Batch Size\"\n\n * \"Use Multiples of Eight\"\n\n * \"Find the Optimal Learning Rate\"\n\n * \"Use tf.function\"\n\n * \"Overtrain, and Then Generalize\"\n\n * \"Use progressive sampling\"\n\n * \"Use progressive augmentation\"\n\n * \"Use progressive resizing\"\n\n * \"Install an Optimized Stack for the Hardware\"\n\n * \"Optimize the Number of Parallel CPU Threads\"\n\n * \"Use Better Hardware\"\n\n * \"Distribute Training\"\n\n * \"Examine Industry Benchmarks\"\n\n## Inference\n\n * \"Use an Efficient Model\"\n\n * \"Quantize the Model\"\n\n * \"Prune the Model\"\n\n * \"Use Fused Operations\"\n\n * \"Enable GPU Persistence\"\n\n###### Note\n\nA printable version of this checklist is available at . Feel free to use it as a reference next time you train or deploy a model. Or even better, spread the cheer by sharing with your friends, colleagues, and more importantly, your manager.\n\n# Data Preparation\n\nThere are a few optimizations that we can make even before we do any kind of training, and they have to do with how we prepare our data.\n\n## Store as TFRecords\n\nImage datasets typically consist of thousands of tiny files, each file measuring a few kilobytes. And our training pipeline must read each file individually. Doing this thousands of times has significant overhead, causing a slowdown of the training process. That problem is even more severe in the case of spinning hard drives, for which the magnetic head needs to seek to the beginning of each file. This problem is further exacerbated when the files are stored on a remote storage service like the cloud. And there lies our first hurdle!\n\nTo speed up the reads, one idea is to combine thousands of files into a handful of larger files. And that's exactly what TFRecord does. It stores data in efficient Protocol Buffer (protobuf) objects, making them quicker to read. 
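One note before the example: it wraps each image and label into `tf.train.Feature` objects through two small helpers, `_bytes_feature` and `_int64_feature`, which are not defined in the snippet itself. A common way to write them (following the pattern used in the TensorFlow documentation) is:\n\n    import tensorflow as tf\n\n    def _bytes_feature(value):\n        # Wrap a raw byte string (e.g., an encoded JPEG) in a tf.train.Feature\n        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))\n\n    def _int64_feature(value):\n        # Wrap an integer label in a tf.train.Feature\n        return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))\n\n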
Let's see how to create TFRecord files:\n\n _# Create TFRecord files_\n\n import tensorflow as tf\n from PIL import Image\n import numpy as np\n import io\n\n cat = \"cat.jpg\"\n img_name_to_labels = {'cat' : 0}\n img_in_string = open(cat, 'rb').read()\n label_for_img = img_name_to_labels['cat']\n\n def getTFRecord(img, label):\n feature = {\n 'label': _int64_feature(label),\n 'image_raw': _bytes_feature(img),\n }\n return tf.train.Example(features=tf.train.Features(feature=feature))\n\n with tf.compat.v1.python_io.TFRecordWriter('img.tfrecord') as writer:\n for filename, label in img_name_to_labels.items():\n image_string = open(filename, 'rb').read()\n tf_example = getTFRecord(image_string, label)\n writer.write(tf_example.SerializeToString())\n\nNow, let's take a look at reading these TFRecord files:\n\n # Reading TFRecord files\n\n dataset = tf.data.TFRecordDataset('img.tfrecord')\n ground_truth_info = {\n 'label': tf.compat.v1.FixedLenFeature([], tf.int64),\n 'image_raw': tf.compat.v1.FixedLenFeature([], tf.string),\n }\n\n def map_operation(read_data):\n return tf.compat.v1.parse_single_example(read_data, ground_truth_info)\n\n imgs = dataset.map(map_operation)\n\n for image_features in imgs:\n image_raw = image_features['image_raw'].numpy()\n label = image_features['label'].numpy()\n image = Image.open(io.BytesIO(image_raw))\n image.show()\n print(label)\n\nSo, why not join all of the data in a single file, like say for ImageNet? Although reading thousands of tiny files harms performance due to the overhead involved, reading gigantic files is an equally bad idea. They reduce our ability to make parallel reads and parallel network calls. The sweet spot to shard (divide) a large dataset in TFRecord files lies at around 100 MB.\n\n## Reduce Size of Input Data\n\nImage datasets with large images need to be resized before passing through to the GPU. This means the following:\n\n * Repeated CPU cycles at every iteration\n\n * Repeated I\/O bandwidth being consumed at a larger rate than needed in our data pipeline\n\nOne good strategy to save compute cycles is to perform common preprocessing steps once on the entire dataset (like resizing) and then saving the results in TFRecord files for all future runs.\n\n## Use TensorFlow Datasets\n\nFor commonly used public datasets, from MNIST (11 MB) to CIFAR-100 (160 MB) all the way to MS COCO (38 GB) and Google Open Images (565 GB), it's quite an effort to download the data (often spread across multiple zipped files). Imagine your frustration if after downloading 95% of the file slowly, the connection becomes spotty and breaks. This is not unusual because these files are typically hosted on university servers, or are downloaded from various sources like Flickr (as is the case with ImageNet 2012, which gives us the URLs from which to download 150 GB-plus of images). A broken connection might mean having to start all over again.\n\nIf you think that was tedious, the real challenge actually begins only after you successfully download the data. For every new dataset, we now need to hunt through the documentation to determine how the data is formatted and organized, so we can begin reading and processing appropriately. Then, we need to split the data into training, validation, and test sets (preferably converting to TFRecords). And when the data is so large as to not fit in memory, we will need to do some manual jiu-jitsu to read it and feed it efficiently to the training pipeline. 
We never said it was easy.\n\nAlternatively, we could skip all the pain by consuming the high-performance, ready-to-use TensorFlow Datasets package. With several famous datasets available, it downloads, splits, and feeds our training pipeline using best practices in a few lines.\n\nLet's look at which datasets are available.\n\n    import tensorflow_datasets as tfds\n\n    # See available datasets\n    print(tfds.list_builders())\n\n    ===== Output =====\n    ['abstract_reasoning', 'bair_robot_pushing_small', 'caltech101', 'cats_vs_dogs',\n    'celeb_a', 'celeb_a_hq', 'chexpert', 'cifar10', 'cifar100', 'cifar10_corrupted',\n    'cnn_dailymail', 'coco2014', 'colorectal_histology',\n    'colorectal_histology_large', 'cycle_gan' ...\n\nThere are more than 100 datasets as of this writing, and that number is steadily increasing. Now, let's download, extract, and make an efficient pipeline using the training set of CIFAR-100:\n\n    train_dataset = tfds.load(name=\"cifar100\", split=tfds.Split.TRAIN)\n    train_dataset = train_dataset.shuffle(2048).batch(64)\n\nThat's it! The first time we execute the code, it will download and cache the dataset on our machine. For every future run, it will skip the network download and directly read from the cache.\n\n# Data Reading\n\nNow that the data is prepared, let's look for opportunities to maximize the throughput of the data reading pipeline.\n\n## Use tf.data\n\nWe could choose to manually read every file from our dataset with Python's built-in I\/O library. We could simply call `open` for each file and we'd be good to go, right? The main downside of this approach is that our GPU would be bottlenecked by our file reads. Every time we read a file, the GPU needs to wait. Every time the GPU starts processing its input, we wait before we read the next file from disk. Seems rather wasteful, doesn't it?\n\nIf there's only one thing you can take away from this chapter, let it be this: `tf.data` is the way to go for building a high-performance training pipeline. In the next few sections, we explore several aspects of `tf.data` that you can exploit to improve training speed.\n\nLet's set up a base pipeline for reading data:\n\n    files = tf.data.Dataset.list_files(\".\/training_data\/*.tfrecord\")\n    dataset = tf.data.TFRecordDataset(files)\n\n    dataset = (dataset.shuffle(2048)\n               .repeat()\n               .map(lambda item: tf.io.parse_single_example(item, features))\n               .map(_resize_image)\n               .batch(64))\n\n## Prefetch Data\n\nIn the pipeline we discussed earlier, the GPU waits for the CPU to generate data, and then the CPU waits for the GPU to finish computation before generating data for the next cycle. This circular dependency causes idle time for both the CPU and GPU, which is inefficient.\n\nThe `prefetch` function helps us here by decoupling the production of the data (by the CPU) from the consumption of the data (by the GPU). Using a background thread, it allows data to be passed _asynchronously_ into an intermediate buffer, where it is readily available for the GPU to consume. The CPU now carries on with the next computation instead of waiting for the GPU. Similarly, as soon as the GPU is finished with its previous computation, and there's data readily available in the buffer, it starts processing.\n\nTo use it, we can simply call `prefetch` on our dataset at the very end of our pipeline along with a `buffer_size` parameter (which is the maximum amount of data that can be stored). 
Usually `buffer_size` is a small number; `1` is good enough in many cases:\n\n dataset = dataset.prefetch(buffer_size=16)\n\nIn just a few pages, we show you how to find an optimal value for this parameter.\n\nIn summary, if there's an opportunity to overlap CPU and GPU computations, `prefetch` will automatically exploit it.\n\n## Parallelize CPU Processing\n\nIt would be a waste to have a CPU with multiple cores but doing all of our processing on only one of them. Why not take advantage of the rest? This is exactly where the `num_parallel_calls` argument in the `map` function comes in handy:\n\n dataset = dataset.map(lambda item: tf.io.parse_single_example(item, features), \n **num_parallel_calls=4**)\n\nThis starts multiple threads to parallelize processing of the `map()` function. Assuming that there is no heavy application running in the background, we will want to set `num_parallel_calls` to the number of CPU cores on our system. Anything more will potentially degrade the performance due to the overhead of context switching.\n\n## Parallelize I\/O and Processing\n\nReading files from disk or worse, over a network, is a huge cause of bottlenecks. We might possess the best CPU and GPU in the world, but if we don't optimize our file reads, it would all be for naught. One solution that addresses this problem is to parallelize both I\/O and subsequent processing (also known as _interleaving_ ).\n\n dataset = files.interleave(map_func, num_parallel_calls=4)\n\nIn this command, two things are happening:\n\n * The input data is acquired in parallel (by default equal to the number of cores on the system).\n\n * On the acquired data, setting the `num_parallel_calls` parameter allows the `map_func` function to execute on multiple parallel threads and read from the incoming data asynchronously.\n\nIf `num_parallel_calls` was not specified, even if the data were read in parallel, `map_func` would run synchronously on a single thread. As long as `map_func` runs faster than the rate at which the input data is coming in, there will not be a problem. We definitely want to set `num_parallel_calls` higher if `map_func` becomes a bottleneck.\n\n## Enable Nondeterministic Ordering\n\nFor many datasets, the reading order is not important. After all, we might be randomizing their ordering anyway. By default, when reading files in parallel, `tf.data` still attempts to produce their outputs in a _fixed round-robin order_. The disadvantage is that we might encounter a \"straggler\" along the way (i.e., an operation that takes a lot longer than others, such as a slow file read, and holds up all other operations). It's like a grocery store line where the person in front of us insists on using cash with the exact change, whereas everyone else uses a credit card. So instead of blocking all the subsequent operations that are ready to give output, we skip over the stragglers until they are done with their processing. This breaks the ordering while reducing wasted cycles waiting for the handful of slower operations:\n\n options = tf.data.Options()\n options.experimental_deterministic = False\n\n dataset = tf.data.Dataset.list_files(\".\/training_data\/\")\n dataset = dataset.with_options(options)\n dataset = dataset.interleave(tf.data.TFRecordDataset, num_parallel_calls=4)\n\n## Cache Data\n\nThe `Dataset.cache()` function allows us to make a copy of data either in memory or as a file on disk. 
There are two reasons why you might want to cache a dataset:\n\n * To avoid repeatedly reading from disk after the first epoch. This is obviously effective only when the cache is in memory and can fit in the available RAM.\n\n * To avoid having to repeatedly perform expensive CPU operations on data (e.g., resizing large images to a smaller size).\n\n###### Tip\n\nCache is best used for data that is not going to change. It is recommended to place `cache()` before any random augmentations and shuffling; otherwise, caching at the end will result in exactly the same data and order in every run.\n\nDepending on our scenario, we can use one of the two following lines:\n\n dataset = dataset.cache() _# in-memory_\n dataset = dataset.cache(filename='tmp.cache') _# on-disk_\n\nIt's worth noting that in-memory cache is volatile and hence only shows performance improvements in the second epoch of every run. On the other hand, file-based cache will make every run faster (beyond the very first epoch of the first run).\n\n###### Tip\n\nIn the \"Reduce Size of Input Data\", we mentioned preprocessing the data and saving it as TFRecord files as input to future data pipelines. Using the **`cache()`** function directly after the preprocessing step in your pipeline would give a similar performance with a single word change in code.\n\n## Turn on Experimental Optimizations\n\nTensorFlow has many built-in optimizations, often initially experimental and turned off by default. Depending on your use case, you might want to turn on some of them to squeeze out just a little more performance from your pipeline. Many of these optimizations are detailed in the documentation for `tf.data.experimental.OptimizationOptions`.\n\n###### Note\n\nHere's a quick refresher on filter and map operations:\n\nFilter\n\nA filter operation goes through a list element by element and grabs those that match a given condition. The condition is supplied as a lambda operation that returns a boolean value.\n\nMap\n\nA map operation simply takes in an element, performs a computation, and returns an output. For example, resizing an image.\n\nLet's look at a few experimental optimizations that are available to us, including examples of two consecutive operations that could benefit from being fused together as one single operation.\n\n### Filter fusion\n\nSometimes, we might want to filter based on multiple attributes. Maybe we want to use only images that have both a dog and a cat. Or, in a census dataset, only look at families above a certain income threshold who also live within a certain distance to the city center. `filter_fusion` can help speed up such scenarios. Consider the following example:\n\n dataset = dataset.filter(lambda x: x < 1000).filter(lambda x: x % 3 == 0)\n\nThe first filter performs a full pass over the entire dataset and returns elements that are less than 1,000. On this output, the second filter does another pass to further remove elements not divisible by three. Instead of doing two passes over many of the same elements, we could instead combine both the filter operations into one pass using an `AND` operation. That is precisely what the `filter_fusion` option enables\u2014combining multiple filter operations into one pass. By default, it is turned off. 
You can enable it by using the following statement:\n\n    options = tf.data.Options()\n    options.experimental_optimization.filter_fusion = True\n    dataset = dataset.with_options(options)\n\n### Map and filter fusion\n\nConsider the following example:\n\n    dataset = dataset.map(lambda x: x * x).filter(lambda x: x % 2 == 0)\n\nIn this example, the `map` function does a full pass on the entire dataset to calculate the square of every element. Then, the `filter` function discards the odd elements. Rather than doing two passes (more so in this particularly wasteful example), we could simply fuse the map and filter operations together by turning on the `map_and_filter_fusion` option so that they operate as a single unit:\n\n    options.experimental_optimization.map_and_filter_fusion = True\n\n### Map fusion\n\nSimilar to the aforementioned two examples, fusing two or more map operations prevents multiple passes from being performed on the same data and instead combines them in a single pass:\n\n    options.experimental_optimization.map_fusion = True\n\n## Autotune Parameter Values\n\nYou might have noticed that many of the code examples in this section have hardcoded values for some of the parameters. For the combination of the problem and hardware at hand, you can tune them for maximum efficiency. How do we tune them? One obvious way is to manually tweak the parameters one at a time, isolating and observing the impact of each of them on the overall performance until we get the precise parameter set. But the number of knobs to tune quickly gets out of hand due to the combinatorial explosion. If this wasn't enough, our finely tuned script wouldn't necessarily be as efficient on another machine due to differences in hardware such as the number of CPU cores, GPU availability, and so on. And even on the same system, depending on resource usage by other programs, these knobs might need to be adjusted over different runs.\n\nHow do we solve this? We do the opposite of manual tuning: autotuning. Using hill-climbing optimization algorithms (a type of heuristic-driven search algorithm), this option automatically finds the ideal parameter combination for many of the `tf.data` function parameters. Simply use `tf.data.experimental.AUTOTUNE` instead of manually assigning numbers. It's the one parameter to rule them all. Consider the following example:\n\n    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)\n\nIsn't that an elegant solution? We can do that for several other function calls in the `tf.data` pipeline. The following is an example of combining several optimizations from the section \"Data Reading\" to make a high-performance data pipeline:\n\n    options = tf.data.Options()\n    options.experimental_deterministic = False\n\n    dataset = tf.data.Dataset.list_files(\"\/path\/*.tfrecord\")\n    dataset = dataset.with_options(options)\n    dataset = dataset.interleave(tf.data.TFRecordDataset,\n                                 num_parallel_calls=tf.data.experimental.AUTOTUNE)\n    dataset = dataset.map(preprocess,\n                          num_parallel_calls=tf.data.experimental.AUTOTUNE)\n    dataset = dataset.cache()\n    dataset = dataset.repeat()\n    dataset = dataset.shuffle(2048)\n    dataset = dataset.batch(batch_size=64)\n    dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)\n\n# Data Augmentation\n\nSometimes, we might not have sufficient data to run our training pipeline. Even if we did, we might still want to manipulate the images to improve the robustness of our model\u2014with the help of data augmentation. 
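For reference, a typical augmentation step sits inside the `tf.data` pipeline as just another `map` call running on the CPU. A minimal sketch (assuming `dataset` yields `(image, label)` pairs, as in the pipelines above) looks like this:\n\n    def augment(image, label):\n        # Random horizontal flip and a mild brightness jitter, computed on the CPU\n        image = tf.image.random_flip_left_right(image)\n        image = tf.image.random_brightness(image, max_delta=0.1)\n        return image, label\n\n    dataset = dataset.map(augment,\n                          num_parallel_calls=tf.data.experimental.AUTOTUNE)\n\n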
Let's see whether we can make this step any faster.\n\n## Use GPU for Augmentation\n\nData preprocessing pipelines can be elaborate enough that you could write an entire book about them. Image transformation operations such as resizing, cropping, color transformations, blurring, and so on are commonly performed on the data immediately after it's read from disk into memory. Given that these are all matrix transformation operations, they might do well on a GPU.\n\nOpenCV, Pillow, and the built-in Keras augmentation functionality are the most commonly used libraries in computer vision for working on images. There's one major limitation here, though. Their image processing is primarily CPU based (although you can compile OpenCV to work with CUDA), which means that the pipeline might not be fully utilizing the underlying hardware to its true potential.\n\n###### Note\n\nAs of August 2019, there are efforts underway to convert Keras image augmentation to be GPU accelerated, as well.\n\nThere are a few different GPU-bound options that we can explore.\n\n### tf.image built-in augmentations\n\n`tf.image` provides some handy augmentation functions that we can seamlessly plug into a `tf.data` pipeline. Some of the methods include image flipping, color augmentations (hue, saturation, brightness, contrast), zooming, and rotation. Consider the following example, which changes the hue of an image:\n\n    updated_image = tf.image.adjust_hue(image, delta=0.2)\n\nThe downside to relying on `tf.image` is that the functionality is much more limited compared to OpenCV, Pillow, and even Keras. For example, the built-in function for image rotation in `tf.image` only supports rotating images by 90 degrees counter-clockwise. If we need to be able to rotate by an arbitrary amount, such as 10 degrees, we'd need to manually build that functionality. Keras, on the other hand, provides that functionality out of the box.\n\n### NVIDIA DALI\n\nAs another alternative to the `tf.data` pipeline, the NVIDIA Data Loading Library (DALI) offers a fast data loading and preprocessing pipeline accelerated by GPU processing. As shown in Figure 6-4, DALI implements several common steps, including resizing and augmenting images on the GPU, immediately before training. DALI works with multiple deep learning frameworks including TensorFlow, PyTorch, MXNet, and others, offering portability of the preprocessing pipelines.\n\n###### Figure 6-4. The NVIDIA DALI pipeline\n\nAdditionally, even JPEG decoding (a relatively heavy task) can partially make use of the GPU, giving it an additional boost. This is done using nvJPEG, a GPU-accelerated library for JPEG decoding. For multi-GPU tasks, this scales near linearly as the number of GPUs increases.\n\nNVIDIA's efforts culminated in a record-breaking MLPerf entry (which benchmarks machine learning hardware, software, and services), training a ResNet-50 model in 80 seconds.\n\n# Training\n\nFor those beginning their performance optimization journey, the quickest wins come from improving the data pipelines, which is relatively easy. For a training pipeline that is already being fed data fast, let's investigate optimizations for our actual training step.\n\n## Use Automatic Mixed Precision\n\n\" _One line to make your training two to three times faster!_ \"\n\nWeights in deep learning models are typically stored in single-precision; that is, 32-bit floating point, or as it's more commonly referenced: FP32. 
Fitting these models on memory-constrained devices such as mobile phones can be challenging. A simple trick to make models smaller is to convert them from single-precision (FP32) to half-precision (FP16). Sure, the representative power of these weights goes down, but as we demonstrate later in this chapter (\"Quantize the Model\"), neural networks are resilient to small changes, much like they are resilient to noise in images. Hence, we get the benefits of a more efficient model without sacrificing much accuracy. In fact, we can even reduce the representation to 8-bit integers (INT8) without a significant loss in accuracy, as we will see in some upcoming chapters.\n\nSo, if we can use reduced-precision representation during inference, could we do the same during training, as well? Going from a 32-bit to a 16-bit representation would effectively mean double the available memory bandwidth, or that a model twice the size or a batch twice as large can be accommodated. Unfortunately, it turns out that using FP16 na\u00efvely _during training_ can potentially lead to a significant loss in model accuracy and might not even converge to an optimal solution. This happens because of FP16's limited range for representing numbers. Due to a lack of adequate precision, any updates to the model during training, if sufficiently small, will not even register. Imagine adding 0.00006 to a weight value of 1.1. With FP32, the weight would be correctly updated to 1.10006. With FP16, however, the weight would remain 1.1. Conversely, any activations from layers such as Rectified Linear Unit (ReLU) could be large enough for FP16 to overflow and hit infinity (`inf`), which soon surfaces as `NaN` values during training.\n\nThe easy answer to these challenges is to use automatic mixed-precision training. In this method, we store the model in FP32 as a master copy and perform the forward\/backward passes of training in FP16. After each training step is performed, the final update from that step is then scaled back up to FP32 before it is applied to the master copy. This helps avoid the pitfalls of FP16 arithmetic and results in a lower memory footprint and faster training (experiments have shown increases in speed by two to three times), while achieving similar accuracy levels as training solely in FP32. It is noteworthy that newer GPU architectures like the NVIDIA Volta and Turing especially optimize FP16 operations.\n\nTo enable mixed precision during training, we simply need to add the following line to the beginning of our Python script:\n\n    os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'\n\n## Use Larger Batch Size\n\nInstead of using the entire dataset for training in one batch, we train with several minibatches of data. This is done for two reasons:\n\n * Our full data (single batch) might not fit in the GPU RAM.\n\n * We can achieve similar training accuracy by feeding many smaller batches, just as you would by feeding fewer larger batches.\n\nHaving smaller minibatches might not fully utilize the available GPU memory, so it's vital to experiment with this parameter, see its effect on the GPU utilization (using the `nvidia-smi` command), and choose the batch size that maximizes the utilization. Consumer GPUs like the NVIDIA 2080 Ti ship with 11 GB of GPU memory, which is plenty for efficient models like the MobileNet family.\n\nFor example, on hardware with the 2080 Ti graphics card, using 224 x 224 resolution images and a MobileNetV2 model, the GPU can accommodate a batch size of up to 864. 
Figure 6-5 shows the effect of varying batch sizes from 4 to 864, on both the GPU utilization (solid line) as well as the time per epoch (dashed line). As we can see in the figure, the higher the batch size, the higher the GPU utilization, leading to a shorter training time per epoch.\n\nEven at our max batch size of 864 (before running out of memory allocation), the GPU utilization does not cross 85%. This means that the GPU was fast enough to handle the computations of our otherwise very efficient data pipeline. Replacing MobileNetV2 with a heavier ResNet-50 model immediately increased GPU to 95%.\n\n###### Figure 6-5. Effect of varying batch size on time per epoch (seconds) as well as on percentage GPU utilization (Log scales have been used for both X- and Y-axes.)\n\n###### Tip\n\nEven though we showcased batch sizes up to a few hundreds, large industrial training loads distributed across multiple nodes often use much larger batch sizes with the help of a technique called Layer-wise Adaptive Rate Scaling (LARS). For example, Fujitsu Research trained a ResNet-50 network to 75% Top-1 accuracy on ImageNet in a mere 75 seconds. Their ammunition? 2048 Tesla V100 GPUs and a whopping batch size of 81,920!\n\n## Use Multiples of Eight\n\nMost of the computations in deep learning are in the form of \"matrix multiply and add.\" Although it's an expensive operation, specialized hardware has increasingly been built in the past few years to optimize for its performance. Examples include Google's TPUs and NVIDIA's Tensor Cores (which can be found in the Turing and Volta architectures). Turing GPUs provide both Tensor Cores (for FP16 and INT8 operations) as well as CUDA cores (for FP32 operations), with the Tensor Cores delivering significantly higher throughput. Due to their specialized nature, Tensor Cores require that certain parameters within the data supplied to them be divisible by eight. Here are just three such parameters:\n\n * The number of channels in a convolutional filter\n\n * The number of neurons in a fully connected layer and the inputs to this layer\n\n * The size of minibatches\n\nIf these parameters are not divisible by eight, the GPU CUDA cores will be used as the fallback accelerator instead. In an experiment reported by NVIDIA, simply changing the batch size from 4,095 to 4,096 resulted in an increase in throughput of five times. Keep in mind that using multiples of eight (or 16 in the case of INT8 operations), in addition to using automatic mixed precision, is the bare minimum requirement to activate the Tensor Cores. For higher efficiency, the recommended values are in fact multiples of 64 or 256. Similarly, Google recommends multiples of 128 when using TPUs for maximum efficiency.\n\n## Find the Optimal Learning Rate\n\nOne hyperparameter that greatly affects our speed of convergence (and accuracy) is the learning rate. The ideal result of training is the global minimum; that is, the point of least loss. Too high a learning rate can cause our model to overshoot the global minimum (like a wildly swinging pendulum) and potentially never converge. Too low a learning rate can cause convergence to take too long because the learning algorithm will take very small steps toward the minimum. Finding the right initial learning rate can make a world of difference.\n\nThe naive way to find the ideal initial learning rate is to try a few different learning rates (such as 0.00001, 0.0001, 0.001, 0.01, 0.1) and find one that starts converging quicker than others. 
Or, even better, perform grid search over a range of values. This approach has two problems: 1) depending on the granularity, it might find a decent value, but it might not be the optimal value; and 2) we need to train multiple times, which can be time consuming.\n\nIn Leslie N. Smith's 2015 paper, \"Cyclical Learning Rates for Training Neural Networks,\" he describes a much better strategy to find this optimal learning rate. In summary:\n\n 1. Start with a really low learning rate and gradually increase it until reaching a prespecified maximum value.\n\n 2. At each learning rate, observe the loss\u2014first it will be stagnant, then it will begin going down and then eventually go back up.\n\n 3. Calculate the rate of decrease of loss (first derivative) at each learning rate.\n\n 4. Select the point with the highest rate of decrease of loss.\n\nIt sounds like a lot of steps, but thankfully we don't need to write code for it. The keras_lr_finder library by Pavel Surmenok gives us a handy function to find it:\n\n    lr_finder = LRFinder(model)\n    lr_finder.find(x_train, y_train, start_lr=0.0001, end_lr=10, batch_size=512,\n                   epochs=5)\n    lr_finder.plot_loss(n_skip_beginning=20, n_skip_end=5)\n\nFigure 6-6 shows the plot of loss versus learning rate. It becomes evident that a learning rate of 10^-4 or 10^-3 might be too low (owing to barely any drop in loss), and similarly, above 1 might be too high (because of the rapid increase in loss).\n\n###### Figure 6-6. A graph showing the change in loss as the learning rate is increased\n\nWhat we are most interested in is the point of the greatest decrease in loss. After all, we want to minimize the time we spend in getting to the least loss during training. In Figure 6-7, we plot the _rate of change_ of loss\u2014the derivative of the loss with respect to the learning rate:\n\n    # Show Simple Moving Average over 20 points to smooth the graph\n    lr_finder.plot_loss_change(sma=20, n_skip_beginning=20, n_skip_end=5,\n                               y_lim=(-0.01, 0.01))\n\n###### Figure 6-7. A graph showing the rate of change in loss as the learning rate is increased\n\nThese figures show that values around 0.1 would lead to the fastest decrease in loss, and hence we would choose it as our optimal learning rate.\n\n## Use tf.function\n\nEager execution mode, which is turned on by default in TensorFlow 2.0, allows users to execute code line by line and immediately see the results. This is immensely helpful in development and debugging. This is in contrast to TensorFlow 1.x, for which the user had to build all operations as a graph and then execute them in one go to see the results. This made debugging a nightmare!\n\nDoes the added flexibility from eager execution come at a cost? Yes, a tiny one, typically on the order of microseconds, which can essentially be ignored for large compute-intensive operations, like training ResNet-50. But where there are many small operations, eager execution can have a sizable impact.\n\nWe can overcome this with two approaches:\n\nDisabling eager execution\n\nFor TensorFlow 1.x, not enabling eager execution will let the system optimize the program flow as a graph and run it faster.\n\nUse `tf.function`\n\nIn TensorFlow 2.x, you cannot disable eager execution (there is a compatibility API, but we shouldn't be using that for anything other than migration from TensorFlow 1.x). Instead, any function that could benefit from a speedup by executing in graph mode can simply be annotated with `@tf.function`. 
It's worth noting that any function that is called within an annotated function will also run in graph mode. This gives us the advantage of speedup from graph-based execution without sacrificing the debugging capabilities of eager execution. Typically, the best speedup is observed on short computationally intensive tasks:\n\n conv_layer = tf.keras.layers.Conv2D(224, 3)\n\n def non_tf_func(image):\n for _ in range(1,3):\n conv_layer(image)\n return\n\n @tf.function\n def tf_func(image):\n for _ in range(1,3):\n conv_layer(image)\n return\n\n mat = tf.zeros([1, 100, 100, 100])\n\n # Warm up\n non_tf_func(mat)\n tf_func(mat)\n\n print(\"Without @tf.function:\", timeit.timeit(lambda: non_tf_func(mat),\n \t number=10000), \" seconds\")\n print(\"With @tf.function:\", timeit.timeit(lambda: tf_func(mat), number=10000),\n \t \"seconds\")\n\n =====Output=====\n Without @tf.function: 7.234016112051904 seconds\n With @tf.function: 0.7510978290811181 seconds\n\nAs we can see in our contrived example, simply attributing a function with `@tf.function` has given us a speedup of 10 times, from 7.2 seconds to 0.7 seconds.\n\n## Overtrain, and Then Generalize\n\nIn machine learning, overtraining on a dataset is considered to be harmful. However, we will demonstrate that we can use overtraining in a controlled fashion to our advantage to make training faster.\n\nAs the saying goes, \"The perfect is the enemy of the good.\" We don't want our network to be perfect right off the bat. In fact, we wouldn't even want it to be any good initially. What we really want instead is for it to be learning _something_ quickly, even if imperfectly. Because then we have a good baseline that we can fine tune to its highest potential. And experiments have shown that we can get to the end of the journey faster than training conventionally.\n\n###### Note\n\nTo further clarify the idea of overtraining and then generalizing, let's look at an imperfect analogy of language learning. Suppose that you want to learn French. One way is to throw a book of vocabulary and grammar at you and expect you to memorize everything. Sure, you might go through the book every day and maybe in a few years, you might be able to speak some French. But this would not be the optimal way to learn.\n\nAlternatively, we could look at how language learning programs approach this process. These programs introduce you to only a small set of words and grammatical rules initially. After you have learned them, you will be able to speak some broken French. Maybe you could ask for a cup of coffee at a restaurant or ask for directions at a bus stop. At this point, you will be introduced constantly to a larger set of words and rules, and this will help you to improve over time.\n\nThis process is similar to how our model would learn gradually with more and more data.\n\nHow do we force a network to learn quickly and imperfectly? Make it overtrain on our data. The following three strategies can help.\n\n### Use progressive sampling\n\nOne approach to overtrain and then generalize is to progressively show more and more of the original training set to the model. Here's a simple implementation:\n\n 1. Take a sample of the dataset (say, roughly 10%).\n\n 2. Train the network until it converges; in other words, until it begins to perform well on the training set.\n\n 3. 
Train on a larger sample (or even the entire training set).\n\nBy repeatedly showing a smaller sample of the dataset, the network will learn features much more quickly, but only related to the sample shown. Hence, it would tend to overtrain, usually performing better on the training set compared to the test set. When that happens, exposing the training process to the entire dataset will tend to generalize its learning, and eventually the test set performance would increase.\n\n### Use progressive augmentation\n\nAnother approach is to train on the entire dataset with little to no data augmentation at first, and then progressively increase the degree of augmentation.\n\nBy showing the unaugmented images repeatedly, the network would learn patterns faster, and by progressively increasing the degree of augmentation, it would become more robust.\n\n### Use progressive resizing\n\nAnother approach, made famous by Jeremy Howard from fast.ai (which offers free courses on AI), is progressive resizing. The key idea behind this approach is to train first on images scaled down to smaller pixel size, and then progressively fine tune on larger and larger sizes until the original image size is reached.\n\nImages resized by half along both the width and height have a 75% reduction in pixels, and theoretically could lead to an increase in training speed of four times over the original images. Similarly, resizing to a quarter of the original height and width can in the best case lead to 16-times reduction (at a lower accuracy). Smaller images have fewer details visible, forcing the network to instead learn higher-level features including broad shapes and colors. Then, training with larger images will help the network learn the finer details, progressively increasing the test accuracy, as well. Just like a child is taught the high-level concepts first and then progressively exposed to more details in later years, the same concept is applied here to CNNs.\n\n###### Tip\n\nYou can experiment with a combination of any of these methods or even build your own creative methods such as training on a subset of classes and then generalizing to all the classes later.\n\n## Install an Optimized Stack for the Hardware\n\nHosted binaries for open source packages are usually built to run on a variety of hardware and software configurations. These packages try to appeal to the least common denominator. When we do `pip install` on a package, we end up downloading and installing this general-purpose, works-for-everyone binary. This convenience comes at the expense of not being able to take advantage of the specific features offered by a particular hardware stack. This issue is one of the big reasons to avoid installing prebuilt binaries and instead opt for building packages from source.\n\nAs an example, Google has a single TensorFlow package on `pip` that can run on an old Sandy Bridge (second-generation Core i3) laptop as well as a powerful 16-core Intel Xeon server. Although convenient, the downside of this is that this package does not take advantage of the highly powerful hardware of the Xeon server. Hence, for CPU-based training and inference, Google recommends compiling TensorFlow from source to best optimize for the hardware at hand.\n\nOne way to do this manually is by setting the configuration flags for the hardware before building the source code. 
For example, to enable support for AVX2 and SSE 4.2 instruction sets, we can simply execute the following build command (note the extra `m` character ahead of each instruction set in the command):\n\n $ bazel build -c opt --copt= **-mavx2** --copt= **-msse4.2**\n \/\/tensorflow\/tools\/pip_package:build_pip_package\n\nHow do you check which CPU features are available? Use the following command (Linux only):\n\n $ lscpu | grep Flags\n\n Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36\n clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm\n constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid\n aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16\n xtpr pdcm pcid dca sse4_1 **sse4_2** x2apic movbe popcnt\n tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch\n cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp\n tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle **avx2** smep bmi2\n erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc\n cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear\n flush_l1d\n\nBuilding TensorFlow from source with the appropriate instruction set specified as build flags should result in a substantial increase in speed. The downside here is that building from source can take quite some time, at least a couple of hours. Alternatively, we can use Anaconda to download and install a highly optimized variant of TensorFlow, built by Intel on top of their Math Kernel Library for Deep Neural Networks (MKL-DNN). The installation process is pretty straightforward. First, we install the Anaconda package manager. Then, we run the following command:\n\n _# For Linux and Mac_\n $ conda install tensorflow\n\n _# For Windows_\n $ conda install tensorflow-mkl\n\nOn Xeon CPUs, MKL-DNN often provides upward of two-times speedup in inference.\n\nHow about optimization for GPUs? Because NVIDIA abstracts away the differences between the various GPU internals with the CUDA library, there is usually no need to build from source. Instead, we could simply install a GPU variant of TensorFlow from `pip` (`tensorflow-gpu` package). We recommend the Lambda Stack one-liner installer for convenience (along with NVIDIA drivers, CUDA, and cuDNN).\n\nFor training and inference on the cloud, AWS, Microsoft Azure, and GCP all provide GPU machine images of TensorFlow optimized for their hardware. It's quick to spin up multiple instances and get started. Additionally, NVIDIA offers GPU-accelerated containers for on-premises and cloud setups.\n\n## Optimize the Number of Parallel CPU Threads\n\nCompare the following two examples:\n\n _# Example 1_\n X = tf.multiply(A, B)\n Y = tf.multiply(C, D)\n\n _# Example 2_\n X = tf.multiply(A, B)\n Y = tf.multiply( **X**, C)\n\nThere are a couple of areas in these examples where we can exploit inherent parallelism:\n\nBetween operations\n\nIn example 1, the calculation of Y does not depend on the calculation of X. 
This is because there is no shared data between those two operations, and thus both of them can execute in parallel on two separate threads.\n\nIn contrast, in example 2, the calculation of Y depends on the outcome of the first operation (X), and so the second statement cannot execute until the first statement completes execution.\n\nThe configuration for the maximum number of threads that can be used for interoperation parallelism is set using the following statement:\n\n tf.config.threading.set_inter_op_parallelism_threads(num_threads)\n\nThe recommended number of threads is equal to the number of CPUs (sockets) on the machine. This value can be obtained by using the `lscpu` command (Linux only).\n\nPer-operation level\n\nWe can also exploit the parallelism within a single operation. Operations such as matrix multiplications are inherently parallelizable.\n\nFigure 6-8 demonstrates a simple matrix multiplication operation. It's clear that the overall product can be split into four independent calculations. After all, the product between one row of a matrix and one column of another matrix does not depend on the calculations for the other rows and columns. Each of those splits could potentially get its own thread and all four of them could execute at the same time.\n\n###### Figure 6-8. A matrix multiplication for A x B operation with one of the multiplications highlighted\n\nThe configuration for the number of threads that can be used for intraoperation parallelism is set using the following statement:\n\n tf.config.threading.set_intra_op_parallelism_threads(num_threads)\n\nThe recommended number of threads is equal to the number of cores per CPU. You can obtain this value by using the `lscpu` command on Linux.\n\n## Use Better Hardware\n\nIf you have already maximized performance optimizations and still need faster training, you might be ready for some new hardware. Replacing spinning hard drives with SSDs can go a long way, as can adding one or more better GPUs. And let's not forget, sometimes the CPU can be the culprit.\n\nIn fact, you might not need to spend much money: public clouds like AWS, Azure, and GCP all provide the ability to rent powerful configurations for a few dollars per hour. Best of all, they come with optimized TensorFlow stacks preinstalled.\n\nOf course, if you have the cash to spend or have a rather generous expense account, you could just skip this entire chapter and buy the 2-petaFLOPS NVIDIA DGX-2. Weighing in at 163 kgs (360 pounds), its 16 V100 GPUs (with a total of 81,920 CUDA cores) consume 10 kW of power\u2014the equivalent of seven large window air conditioners. And all it costs is $400,000!\n\n###### Figure 6-9. The $400,000 NVIDIA DGX-2 deep learning system\n\n## Distribute Training\n\n\" _Two lines to scale training horizontally!_ \"\n\nOn a single machine with a single GPU, there's only so far that we can go. Even the beefiest GPUs have an upper limit in compute power. Vertical scaling can take us only so far. Instead, we look to scale horizontally\u2014distribute computation across processors. We can do this across multiple GPUs, TPUs, or even multiple machines. In fact, that is exactly what researchers at Google Brain did back in 2012, using 16,000 processors to run a neural network built to look at cats on YouTube.\n\nIn the dark days of the early 2010s, training on ImageNet used to take anywhere from several weeks to months. Multiple GPUs would speed things up, but few people had the technical know-how to configure such a setup. 
It was practically out of reach for beginners. Luckily, we live in the day of TensorFlow 2.0, in which setting up distributed training is a matter of introducing two lines of code:\n\n mirrored_strategy = tf.distribute.MirroredStrategy()\n with mirrored_strategy.scope():\n model = tf.keras.applications.ResNet50()\n model.compile(loss=\"mse\", optimizer=\"sgd\")\n\nTraining speed increases nearly proportionally (90\u201395%) with the number of GPUs added. As an example, if we added four GPUs of similar compute power, we would ideally see a speedup of more than 3.6 times.\n\nStill, a single system can only support a limited number of GPUs. How about multiple nodes, each with multiple GPUs? Similar to `MirroredStrategy`, we can use `MultiWorkerMirroredStrategy`. This is quite useful when building a cluster on the cloud. Table 6-1 presents a couple of distribution strategies and their use cases.\n\nTable 6-1. Recommended distribution strategies\n\n**Strategy** | **Use case** \n---|--- \n`MirroredStrategy` | Single node with two or more GPUs \n`MultiWorkerMirroredStrategy` | Multiple nodes with one or more GPUs each\n\nTo get the cluster nodes to communicate with one another for `MultiWorkerMirroredStrategy`, we need to configure the `TF_CONFIG` environment variable on every single host. This requires setting up a JSON object that contains the IP addresses and ports of all other hosts in the cluster. Manually managing this can be error prone, and this is where orchestration frameworks like Kubernetes really shine.\n\n###### Note\n\nThe open source Horovod library from Uber is another high-performance and easy-to-use distribution framework. Many of the record benchmark performances seen in the next section required distributed training on several nodes, and Horovod's performance helped them get the edge. It is worth noting that much of the industry uses Horovod, particularly because distributed training on earlier versions of TensorFlow was a much more involved process. Additionally, Horovod works with all major deep learning libraries with a minimal amount of code change or expertise. Horovod is often configured through the command line; running a distributed program on four nodes, each with four GPUs, takes a single command:\n\n $ horovodrun -np 16 -H server1:4,server2:4,server3:4,server4:4 python train.py\n\n## Examine Industry Benchmarks\n\nThree things were universally popular in the 1980s\u2014long hair, the Walkman, and database benchmarks. Much like deep learning today, database software was going through a phase of making bold promises, some of which were marketing hype. To put these companies to the test, a few benchmarks were introduced, the most famous among them being the Transaction Processing Performance Council (TPC) benchmark. When someone needed to buy database software, they could rely on this public benchmark to decide where to spend their company's budget. This competition fueled rapid innovation, increasing speed and performance per dollar, and moving the industry ahead faster than anticipated.\n\nInspired by TPC and other benchmarks, a few system benchmarks were created to standardize performance reporting in machine learning.\n\nDAWNBench\n\nStanford's DAWNBench benchmarks the time and cost to train a model to 93% Top-5 accuracy on ImageNet. Additionally, it maintains a leaderboard for inference time and cost. It's worth appreciating the rapid pace of performance improvement for training such a massive network.
When DAWNBench originally started in September 2017, the reference entry trained in 13 days at a cost of $2,323.39. Just one and a half years later, the cheapest entry trained for as little as $12, and the fastest training time had dropped to 2 minutes and 43 seconds. Best of all, most entries contain the training source code and optimizations that we can study and replicate. This gives further guidance on the effects of hyperparameters and how we can use the cloud for cheap and fast training without breaking the bank.\n\nTable 6-2. Entries on DAWNBench as of August 2019, sorted by the lowest cost for training a model to 93% Top-5 accuracy\n\n**Cost (USD)** | **Training time** | **Model (submitter)** | **Hardware** | **Framework** \n---|---|---|---|--- \n$12.60 | 2:44:31 | ResNet-50 (Google Cloud TPU) | GCP n1-standard-2, Cloud TPU | TensorFlow 1.11 \n$20.89 | 1:42:23 | ResNet-50 (Setu Chokshi, MS AI MVP) | Azure ND40s_v2 | PyTorch 1.0 \n$42.66 | 1:44:34 | ResNet-50 v1 (GE Healthcare, Min Zhang) | 8x V100 (single p3.16xlarge) | TensorFlow 1.11 + Horovod \n$48.48 | 0:29:43 | ResNet-50 (Andrew Shaw, Yaroslav Bulatov, Jeremy Howard) | 32x V100 (4x AWS p3.16xlarge) | Ncluster + PyTorch 0.5\n\nMLPerf\n\nSimilar to DAWNBench, MLPerf is aimed at repeatable and fair testing of AI system performance. Although newer than DAWNBench, it is backed by an industry consortium with much wider support, especially on the hardware side. It runs challenges for both training and inference in two divisions: open and closed. The closed division trains the same model with the same optimizers, so the raw hardware performance can be compared apples to apples. The open division, on the other hand, allows faster models and optimizers, to encourage more rapid progress. Compared to the more cost-effective entries in DAWNBench in Table 6-2, the top performers on MLPerf, shown in Table 6-3, might be a bit out of reach for most of us. The top-performing NVIDIA DGX SuperPod, composed of 96 DGX-2H systems with a total of 1,536 V100 GPUs, costs in the $35 to $40 million range. Even though 1,024 Google TPUs might themselves cost in the several millions, they are each available to rent on the cloud at $8\/hour on-demand pricing (as of August 2019), resulting in a net cost of under $275 for the less than two minutes of training time.\n\nTable 6-3. Key closed-division entries on MLPerf as of August 2019, showing training time for a ResNet-50 model to get to 75.9% Top-1 accuracy\n\n**Time (minutes)** | **Submitter** | **Hardware** | **Accelerator** | **# of accelerators** \n---|---|---|---|--- \n1.28 | Google | TPUv3 | TPUv3 | 1,024 \n1.33 | NVIDIA | 96x DGX-2H | Tesla V100 | 1,536 \n8,831.3 | Reference | Pascal P100 | Pascal P100 | 1\n\nAlthough both of the aforementioned benchmarks cover training as well as inference (usually on more powerful devices), there are other inference-specific competitions on low-power devices, with the aim of maximizing accuracy and speed while reducing power consumption. Some of these competitions, held at annual conferences, include:\n\n * LPIRC: Low-Power Image Recognition Challenge\n\n * EDLDC: Embedded Deep Learning Design Contest\n\n * System Design Contest at the Design Automation Conference (DAC)\n\n# Inference\n\nTraining our model is only half the game. We eventually need to serve the predictions to our users.
The following points guide you to making your serving side more performant.\n\n## Use an Efficient Model\n\nDeep learning competitions have traditionally been a race to come up with the highest-accuracy model, get to the top of the leaderboard, and claim the bragging rights. But practitioners live in a different world\u2014the world of serving their users quickly and efficiently. With devices like smartphones, edge devices, and servers handling thousands of calls per second, being efficient on all fronts (model size and computation) is critical. After all, many machines would not be capable of serving a half-gigabyte VGG-16 model, which needs roughly 30 billion operations per prediction, for accuracy that is not even that high. Among the wide variety of pretrained architectures available, some are on the higher end of accuracy but large and resource intensive, whereas others provide modest accuracy but are much lighter. Our goal is to pick the architecture that delivers the highest accuracy for the available computational power and memory budget of our inference device. In Figure 6-10, we want to pick models in the upper-left zone.\n\n###### Figure 6-10. Comparing different models for size, accuracy, and operations per second (adapted from \"An Analysis of Deep Neural Network Models for Practical Applications\" by Alfredo Canziani, Adam Paszke, and Eugenio Culurciello)\n\nUsually, the approximately 15 MB MobileNet family is the go-to choice for efficient smartphone runtimes, with more recent versions like MobileNetV2 and MobileNetV3 being better than their predecessors. Additionally, by varying hyperparameters of the MobileNet models such as the depth multiplier, the number of computations can be reduced further, making these models ideal for real-time applications. Since 2017, the task of generating an optimal architecture to maximize accuracy has also been automated with NAS. It has helped discover new (rather obfuscated-looking) architectures that have broken the ImageNet accuracy record multiple times. For example, FixResNeXt (an 829-million-parameter model based on the ResNeXt architecture) reaches a whopping 86.4% Top-1 accuracy on ImageNet. So, it was natural for the research community to ask whether NAS could help find architectures tuned for mobile, maximizing accuracy while minimizing computations. The answer is a resounding yes\u2014resulting in faster and better models, optimized for the hardware at hand. As an example, MixNet (July 2019) outperforms many state-of-the-art models. Note how we went from billions of floating-point operations to millions (Figure 6-10 and Figure 6-11).\n\n###### Figure 6-11. Comparison of several mobile-friendly models in the paper \"MixNet: Mixed Depthwise Convolution Kernels\" by Mingxing Tan and Quoc V. Le\n\nAs practitioners, where can we find current state-of-the-art models? _PapersWithCode.com\/SOTA_ showcases leaderboards on several AI problems, comparing paper results over time, along with the model code. Of particular interest are the models that achieve high accuracy with a low number of parameters. For example, EfficientNet gets an amazing 84.4% Top-1 accuracy with 66 million parameters, so it could be an ideal candidate for running on servers. Additionally, the ImageNet test metrics are on 1,000 classes, whereas our use case might require classification on just a few classes. For those cases, a much smaller model would suffice.
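As a quick sketch of that trade-off (assuming TensorFlow 2.x and a hypothetical 10-class task; `alpha` is the multiplier that scales the number of filters in every layer of MobileNetV2), we might start from a slimmed-down backbone:\n\n import tensorflow as tf\n\n # A slimmer MobileNetV2: alpha=0.35 scales down the filters in every layer,\n # trading a little accuracy for a much smaller, faster model.\n base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),\n alpha=0.35,\n include_top=False,\n weights=\"imagenet\",\n pooling=\"avg\")\n\n # Hypothetical 10-class head for our own task\n inputs = tf.keras.Input(shape=(224, 224, 3))\n outputs = tf.keras.layers.Dense(10, activation=\"softmax\")(base(inputs))\n model = tf.keras.Model(inputs, outputs)\n\n print(model.count_params()) # a tiny fraction of VGG-16's ~138 million parameters\n\nDropping `alpha` shrinks the parameter count and the number of operations per prediction at a modest cost in accuracy.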
Models listed in Keras Applications ( _tf.keras.applications_ ), TensorFlow Hub, and TensorFlow Models usually carry many variations (input image sizes, depth multipliers, quantizations, etc.).\n\n###### Tip\n\nShortly after Google AI researchers publish a paper, they release the model used in the paper on the TensorFlow Models repository.\n\n## Quantize the Model\n\n\" _Represent 32-bit weights as 8-bit integers; get 2x faster, 4x smaller models._ \"\n\nNeural networks are driven primarily by matrix\u2013matrix multiplications. The arithmetic involved tends to be rather forgiving, in that small deviations in values do not cause a significant swing in output. This makes neural networks fairly robust to noise. After all, we want to be able to recognize an apple in a picture, even in less-than-perfect lighting. When we quantize, we essentially take advantage of this \"forgiving\" nature of neural networks.\n\nBefore we look at the different quantization techniques, let's first try to build an intuition for it. To illustrate quantized representations with a simple example, we'll convert 32-bit floating-point weights to INT8 (8-bit integer) using _linear quantization_. FP32 can represent 2\u00b3\u00b2 distinct values (hence 4 bytes to store), whereas INT8 can represent 2\u2078 = 256 values (1 byte). To quantize:\n\n 1. Find the minimum and maximum values represented by the FP32 weights in the neural network.\n\n 2. Divide this range into 256 intervals, each corresponding to an INT8 value.\n\n 3. Calculate a scaling factor that converts an INT8 (integer) value back to an FP32 value. For example, if our original range is from 0 to 1, and the INT8 numbers are 0 to 255, the scaling factor will be 1\/256.\n\n 4. Replace the FP32 numbers in each interval with the INT8 value. Additionally, store the scaling factor for the inference stage, where we convert INT8 values back to FP32 values. This scaling factor only needs to be stored once for the entire group of quantized values.\n\n 5. During inference calculations, multiply the INT8 values by the scaling factor to convert them back to a floating-point representation. Figure 6-12 illustrates an example of linear quantization for the interval [0, 1].\n\n###### Figure 6-12. Quantizing from a 0 to 1 32-bit floating-point range down to an 8-bit integer range for reduced storage space\n\nThere are a few different ways to quantize our models, the simplest being to reduce the bit representation of the weights from 32 bits to 16 bits or lower. As might be evident, converting from 32 bits to 16 bits means half the memory is needed to store a model. Similarly, converting to 8 bits requires only a quarter of the size. So why not convert to 1 bit and save 32x the space? Well, although the models are forgiving up to a certain extent, with each reduction we will notice a drop in accuracy, and this reduction in accuracy grows exponentially beyond a certain threshold (especially below 8 bits). To go lower and still have a useful working model (like a 1-bit representation), we'd need to follow a special conversion process to a binarized neural network. XNOR.ai, a deep learning startup, has famously been able to bring this technique to production. The Microsoft Embedded Learning Library (ELL) similarly provides such tools, which have a lot of value for edge devices like the Raspberry Pi.\n\nThere are numerous benefits to quantization:\n\nImproved memory usage\n\nBy quantizing to 8-bit integer representation (INT8), we typically get a 75% reduction in model size.
This makes it more convenient to store and load the model in memory.\n\nImproved performance\n\nInteger operations are faster than floating-point operations. Additionally, the savings in memory usage reduce the likelihood of having to unload the model from RAM during execution, which also has the added benefit of decreased power consumption.\n\nPortability\n\nEdge devices such as Internet of Things devices might not support floating-point arithmetic, so it would be untenable to keep the model in floating point in such situations.\n\nMost inference frameworks provide a way to quantize, including Core ML Tools from Apple, TensorRT from NVIDIA (for servers), and TensorFlow Lite as well as the TensorFlow Model Optimization Toolkit from Google. With TensorFlow Lite, models can be quantized after training, during conversion (called post-training quantization). To minimize accuracy losses even further, we can use the TensorFlow Model Optimization Toolkit during training. This process is called _quantization-aware training_.\n\nIt would be useful to measure the benefit provided by quantization. Metrics from the TensorFlow Lite model optimization benchmarks (shown in Table 6-4) give us a hint, comparing 1) unquantized, 2) post-training quantized, and 3) quantization-aware trained models. The performance was measured on a Google Pixel 2 device.\n\nTable 6-4. Effects of different quantization strategies (8-bit) on models (source: TensorFlow Lite model optimization documentation)\n\n**Metric** | **MobileNet** | **MobileNetV2** | **InceptionV3** \n---|---|---|--- \nTop-1 accuracy (original) | 0.709 | 0.719 | 0.78 \nTop-1 accuracy (post-training quantized) | 0.657 | 0.637 | 0.772 \nTop-1 accuracy (quantization-aware training) | 0.7 | 0.709 | 0.775 \nLatency in ms (original) | 124 | 89 | 1130 \nLatency in ms (post-training quantized) | 112 | 98 | 845 \nLatency in ms (quantization-aware training) | 64 | 54 | 543 \nSize in MB (original) | 16.9 | 14 | 95.7 \nSize in MB (optimized) | 4.3 | 3.6 | 23.9\n\nSo, what do these numbers indicate? After quantization to INT8 using TensorFlow Lite, we see roughly a four-times reduction in size, approximately a two-times speedup in run time, and less than 1% change in accuracy. Not bad!\n\nMore extreme forms of quantization, such as 1-bit binarized neural networks (e.g., XNOR-Net), claim a whopping 58-times speedup and a roughly 32-times smaller size when tested on AlexNet, at the cost of a 22% loss in accuracy.\n\n## Prune the Model\n\nPick a number. Multiply it by 0. What do we get? Zero. Multiply your pick again by a small value neighboring 0, like 10\u207b\u2076, and we'll still get an insignificant value. If we replace such tiny weights (\u2192 0) in a model with 0 itself, it should have little effect on the model's predictions. This is called _magnitude-based weight pruning_, or simply pruning, and is a form of _model compression_. Logically, putting a weight of 0 between two nodes in a fully connected layer is equivalent to deleting the edge between them. This makes a model with dense connections sparser.\n\nAs it happens, a large chunk of the weights in a model are close to 0. Pruning the model will result in many of those weights being set to exactly 0, with little impact on accuracy. Although this does not save any space by itself, it introduces a ton of redundancy that can be exploited when it comes time to save the model to disk in a compressed format such as ZIP. (It is worth noting that compression algorithms thrive on repeating patterns.
The more the repetition, the higher the compressibility.) The end result is that our model can often be compressed by four times. Of course, when we finally need to use the model, it would need to be uncompressed before loading in memory for inference.\n\nThe TensorFlow team observed the accuracy loss shown in Table 6-5 while pruning the models. As expected, more efficient models like MobileNet observe higher (though still small) accuracy loss when compared with comparatively bigger models like InceptionV3.\n\nTable 6-5. Model accuracy loss versus pruning percentage **Model** | **Sparsity** | **Accuracy loss against original accuracy** \n---|---|--- \nInceptionV3 | 50% | 0.1% \nInceptionV3 | 75% | 2.5% \nInceptionV3 | 87.5% | 4.5% \nMobileNet | 50% | 2%\n\nKeras provides APIs to prune our model. This process can be done iteratively during training. Train a model normally or pick a pretrained model. Then, periodically prune the model and continue training. Having enough epochs between the periodic prunes allows the model to recover from any damage due to introducing so much sparsity. The amount of sparsity and number of epochs between prunes can be treated as hyperparameters to be tuned.\n\nAnother way of implementing this is by using Tencent's PocketFlow tool, a one-line command that provides several other pruning strategies implemented in recent research papers.\n\n## Use Fused Operations\n\nIn any serious CNN, the convolutional layer and batch normalization layer frequently appear together. They are kind of the Laurel and Hardy of CNN layers. Fundamentally, they are both linear operations. Basic linear algebra tells us that combining two or more linear operations will also result in a linear operation. By combining convolutional and batch normalization layers, we not only reduce the number of computations, but also decrease the amount of time spent in data transfer, both between main memory and GPU, and main memory and CPU registers\/cache. Making them one operation prevents an extra roundtrip. Luckily, for inference purposes, most inference frameworks either automatically do this fusing step or provide model converters (like TensorFlow Lite) to make this optimization while converting the model to the inference format.\n\n## Enable GPU Persistence\n\nLoading and initializing the GPU drivers take time. You might have noticed a delay every time a training or inference job was initiated. For frequent, short jobs, the overhead can become relatively expensive quickly. Imagine an image classification program for which the classification takes 10 seconds, 9.9 of which were spent in loading the driver. What we need is for the GPU driver to stay preinitialized in the background, and be ready for whenever our training jobs start. And that's where the NVIDIA GPU Persistence Daemon comes to the rescue:\n\n $ nvidia-persistenced --user _{YOUR_USERNAME}_\n\nOur GPUs will use a bit more wattage during idle time, but they will be ready and available the next time a program is launched.\n\n# Summary\n\nIn this chapter, we explored different avenues for improving the speed and performance of our deep learning pipeline, from storing and reading the data to inference. A slow data pipeline often leads to a GPU starving for data, resulting in idle cycles. With several of the simple optimizations we discussed, our hardware can be put to its maximum efficiency. The handy checklist can serve as a ready reference. Feel free to make a copy for your desk (or your refrigerator). 
With these learnings, we hope to see your entry among the top performers of the MLPerf benchmark list.\n\n# Chapter 7. Practical Tools, Tips, and Tricks\n\nThis chapter contains material that we, your authors, have encountered during our professional work as well as while working on this book, primarily during experimentation. The material covered here doesn't necessarily fit in any single chapter; rather, it's material that deep learning practitioners could find useful on a day-to-day basis across a variety of tasks. In line with the \"practical\" theme, these questions cover a range of helpful pragmatic guidelines across topics including setting up an environment, training, model interoperability, data collection and labeling, code quality, managing experiments, team collaboration practices, privacy, and further exploration topics.\n\nDue to the fast-changing pace of the AI field, this chapter is a small subset of the \"living\" document hosted on the book's Github repository (see _http:\/\/PracticalDeepLearning.ai_) at _code\/chapter-9_ , where it is constantly evolving. If you have more questions or, even better, answers that might help other readers, feel free to tweet them @PracticalDLBook or submit a pull request.\n\n# Installation\n\n**Q:** _I came across an interesting and useful Jupyter Notebook on GitHub. Making the code run will require cloning the repository, installing packages, setting up the environment, and more steps. Is there an instant way to run it interactively?_\n\nSimply enter the Git repository URL into Binder ( _mybinder.org_ ), which will turn it into a collection of interactive notebooks. Under the hood, it will search for a dependency file, like _requirements.txt_ or _environment.yml_ in the repository's root directory. This will be used to build a Docker image, to help run the notebook interactively in your browser.\n\n**Q:** _What is the quickest way to get my deep learning setup running on a fresh Ubuntu machine with NVIDIA GPUs?_\n\nLife would be great if `pip install tensorflow-gpu` would solve everything. However, that's far from reality. On a freshly installed Ubuntu machine, listing all the installation steps would take at least three pages and more than an hour to follow, including installing NVIDIA GPU drivers, CUDA, cuDNN, Python, TensorFlow, and other packages. And then it requires carefully checking the version interoperability between CUDA, cuDNN and TensorFlow. More often than not, this ends in a broken system. A world of pain to say the least!\n\nWouldn't it be great if two lines could solve all of this effortlessly? Ask, and ye shall receive:\n\n $ sudo apt update && sudo ubuntu-drivers autoinstall && sudo reboot \n $ export LAMBDA_REPO=$(mktemp) \\\n && wget -O${LAMBDA_REPO} \\\n https:\/\/lambdalabs.com\/static\/misc\/lambda-stack-repo.deb \\\n && sudo dpkg -i ${LAMBDA_REPO} && rm -f ${LAMBDA_REPO} \\\n && sudo apt-get update && sudo apt-get install -y lambda-stack-cuda \\\n && sudo reboot\n\nThe first line ensures that all the drivers are updated. The second line is brought to us by the Lambda Labs, a San Francisco\u2013based deep learning hardware and cloud provider. The command sets up the Lambda Stack, which installs TensorFlow, Keras, PyTorch, Caffe, Caffe2, Theano, CUDA, cuDNN, and NVIDIA GPU drivers. 
Because the company needs to install the same deep learning packages on thousands of machines, it automated the process with a one-line command and then open sourced it so that others can also make use of it.\n\n**Q:** _What is the fastest way to install TensorFlow on a Windows PC?_\n\n 1. Install Anaconda Python 3.7.\n\n 2. On the command line, run `conda install tensorflow-gpu`.\n\n 3. If you do not have GPUs, run `conda install tensorflow`.\n\nOne additional benefit of a CPU-based Conda installation is that it installs Intel MKL optimized TensorFlow, running faster than the version we get by using `pip install tensorflow`.\n\n**Q:** _I have an AMD GPU. Could I benefit from GPU speedups in TensorFlow on my existing system?_\n\nAlthough the majority of the deep learning world uses NVIDIA GPUs, there is a growing community of people running on AMD hardware with the help of the ROCm stack. Installation using the command line is simple:\n\n 1. `sudo apt install rocm-libs miopen-hip cxlactivitylogger`\n\n 2. `sudo apt install wget python3-pip`\n\n 3. `pip3 install --user tensorflow-rocm`\n\n**Q:** _Forget installation, where can I get preinstalled deep learning containers?_\n\nDocker is synonymous with setting up environments. Docker helps run isolated containers that are bundled with tools, libraries, and configuration files. There are several deep learning Docker containers available while selecting your virtual machine (VM) from major cloud providers AWS, Microsoft Azure, GCP, Alibaba, etc.) that are ready to start working. NVIDIA also freely provides NVIDIA GPU Cloud containers, which are the same high-performance containers used to break training speed records on the MLPerf benchmarks. You can even run these containers on your desktop machine.\n\n# Training\n\n**Q:** _I don't like having to stare at my screen constantly to check whether my training finished. Can I get a notification alert on my phone, instead?_\n\nUse Knock Knock, a Python library that, as the name suggests, notifies you when your training ends (or your program crashes) by sending alerts on email, Slack, or even Telegram! Best of all, it requires adding only two lines of code to your training script. No more opening your program a thousand times to check whether the training has finished.\n\n**Q:** _I prefer graphics and visualizations over plain text. Can I get real-time visualizations for my training process?_\n\nFastProgress progress bar (originally developed for fast.ai by Sylvain Gugger) comes to the rescue.\n\n**Q:** _I conduct a lot of experiments iteratively and often lose track of what changed between each experiment as well as the effect of the change. How do I manage my experiments in a more organized manner?_\n\nSoftware development has had the ability to keep a historical log of changes through version control. Machine learning, unfortunately, did not have the same luxury. That's changing now with tools like Weights and Biases, and Comet.ml. They allow you to keep track of multiple runs and to log training curves, hyperparameters, outputs, models, notes, and more with just two lines of code added to your Python script. Best of all, through the power of the cloud, you can conveniently track experiments even if you are away from the machine, and share the results with others.\n\n**Q:** _How do I check whether TensorFlow is using the GPU(s) on my machine?_\n\nUse the following handy command:\n\n tf.test.is_gpu_available()\n\n**Q:** _I have multiple GPUs on my machine. 
I don't want my training script to consume all of them. How do I restrict my script to run on only a specific GPU?_\n\n_Use_ `CUDA_VISIBLE_DEVICES=GPU_ID`. Simply prefix the training script command as follows:\n\n $ CUDA_VISIBLE_DEVICES=GPU_ID python train.py\n\nAlternatively, write the following lines early on in your training script:\n\n import os\n os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"GPU_ID\"\n\n`GPU_ID` can have values such as 0, 1, 2, and so on. You can see these IDs (along with GPU usage) using the `nvidia-smi` command. For assigning to multiple GPUs, use a comma-separated list of IDs.\n\n**Q:** _Sometimes it feels like there are too many knobs to adjust when training. Can it be done automatically, instead, to get the best accuracy?_\n\nThere are many options for automated hyperparameter tuning, including Keras-specific Hyperas and Keras Tuner, and more generic frameworks such as Hyperopt and Bayesian optimization that perform extensive experimentation to maximize our objective (i.e., maximizing accuracy in our case) more intelligently than simple grid searches.\n\n**Q:** _ResNet and MobileNet work well enough for my use case. Is it possible to build a model architecture that can achieve even higher accuracy for my scenario?_\n\nThree words: Neural Architecture Search (NAS). Let the algorithm find the best architecture for you. NAS can be accomplished through packages like Auto-Keras and AdaNet.\n\n**Q:** _How do I go about debugging my TensorFlow script?_\n\nThe answer is in the question: TensorFlow Debugger (`tfdbg)`.\n\n# Model\n\n**Q:** _I want to quickly know the input and output layers of my model without writing code. How can I accomplish that?_\n\nUse Netron. It graphically shows your model, and on clicking any layer, provides details on the architecture.\n\n**Q:** _I need to publish a research paper. Which tool should I use to draw my organic, free-range, gluten-free model architecture?_\n\nMS Paint, obviously! No, we're just kidding. We are fans of NN-SVG as well as PlotNeuralNet for creating high-quality CNN diagrams.\n\n**Q:** _Is there a one-stop shop for all models?_\n\nIndeed! Explore _PapersWithCode.com_ , _ModelZoo.co_ , and _ModelDepot.io_ for some inspiration.\n\n**Q:** _I've finished training my model. How can I make it available for others to use?_\n\nYou can begin by making the model available for download from GitHub. And then list it on the model zoos mentioned in the previous answer. For even wider adoption, upload it to TensorFlow Hub ( _tfhub.dev_ ).\n\nIn addition to the model, you should publish a \"model card,\" which is essentially like a r\u00e9sum\u00e9 of the model. It's a short report that details author information, accuracy metrics, and the dataset it was benchmarked on. Additionally, it provides guidance on potential biases and out-of-scope uses.\n\n**Q:** _I have a model previously trained in framework X, but I need to use it in framework Y. Do I need to waste time retraining it in framework Y?_\n\nNope. All you need is the power of the ONNX. For models not in the TensorFlow ecosystem, most major deep learning libraries support saving them in ONNX format, which can then be converted to the TensorFlow format. Microsoft's MMdnn can help in this conversion.\n\n# Data\n\n**Q:** _Could I collect hundreds of images on a topic in a few minutes?_\n\nYes, you can collect hundreds of images in three minutes or less with a Chrome extension called Fatkun Batch Download Image. 
Simply search for a keyword in your favorite image search engine, filter images by the correct usage rights (e.g., Public Domain), and press the Fatkun extension to download all images. See Chapter 12, where we use it to build a Not Hotdog app.\n\nBonus tip: to download from a single website, search for a keyword followed by site:website_address. For example, \"horse site:flickr.com.\"\n\n**Q:** _Forget the browser. How do I scrape Google for images using the command line?_\n\n $ pip install google_images_download\n $ googleimagesdownload -k=horse -l=50 -r=labeled-for-reuse\n\n`-k`, `-l`, and `-r` are shorthand for `keyword`, `limit` (number of images), and `usage_rights`, respectively. This is a powerful tool with many options for controlling and filtering what images to download from Google searches. Plus, instead of just loading the thumbnails shown by Google Images, it saves the original images linked by the search engine. For saving more than 100 images, install the `selenium` library along with `chromedriver`.\n\n**Q:** _Those were not enough for collecting images. I need more control. What other tools can help me download data in more custom ways beyond the search engine?_\n\nWith a GUI (no programming needed):\n\nScrapeStorm.com\n\nEasy GUI to identify rules for elements to extract\n\nWebScraper.io\n\nChrome-based scraping extension, especially for extracting structured output from single websites\n\n80legs.com\n\nCloud-based scalable scraper, for parallel, large tasks\n\nPython-based programmatic tools:\n\nScrapy.org\n\nFor more programmable controls on scraping, this is one of the most famous scrapers. Compared to building your own naive scraper to explore websites, it offers throttling rate by domain, proxy, and IP; can handle _robots.txt_ ; offers flexibility in browser headers to show to web servers; and takes care of several possible edge cases.\n\nInstaLooter\n\nA Python-based tool for scraping Instagram.\n\n**Q:** _I have the images for the target classes, but now need images for the negative (not item\/background) class. Any quick ways to build a big dataset of negative classes?_\n\nImageN offers 1,000 images\u20145 random images for 200 ImageNet categories\u2014which you can use as the negative class. If you need more, download a random sample programmatically from ImageNet.\n\n**Q:** _How can I search for a prebuilt dataset that suits my needs?_\n\nTry Google Dataset Search, _VisualData.io_ , and _DatasetList.com_.\n\n**Q:** _For datasets like ImageNet, downloading, figuring out the format, and then loading them for training takes far too much time. Is there an easy way to read popular datasets?_\n\nTensorFlow Datasets is a growing collection of datasets ready to use with TensorFlow. It includes ImageNet, COCO (37 GB), and Open Images (565 GB) among others. These datasets are exposed as `tf.data.Datasets`, along with performant code to feed them in your training pipeline.\n\n**Q:** _Training on the millions of ImageNet images will take a long, long time. Is there a smaller representative dataset I could try training on, to quickly experiment and iterate with?_\n\nTry Imagenette. 
Built by Jeremy Howard from fast.ai, this 1.4 GB dataset contains only 10 classes instead of 1,000.\n\n**Q:** _What are the largest readily available datasets that I could use for training?_\n\n * Tencent ML Images: 17.7 million images with 11,000 category labels\n\n * Open Images V4 (from Google): 9 million images in 19.7 K categories\n\n * BDD100K (from UC Berkeley): Images from 100,000 driving videos, over 1,100 hours\n\n * YFCC100M (from Yahoo): 99.2 million images\n\n**Q:** _What are some of the readily available large video datasets I could use?_\n\n**Name** | **Details** \n---|--- \nYouTube-8M |\n\n6.1 million videos, 3,862 classes, 2.6 billion audio-visual features\n\n3.0 labels\/video\n\n1.53 terabytes of randomly sampled videos\n\nSomething Something\n\n(from Twenty Billion Neurons) | 221,000 videos in 174 action classes\n\nFor example, \"Pouring water into wine glass but missing so it spills next to it\"\n\nHumans performing predefined actions with everyday objects\n\nJester\n\n(from Twenty Billion Neurons) | 148,000 videos in 27 classes\n\nFor example, \"Zooming in with two fingers\"\n\nPredefined hand gestures in front of a webcam\n\n**Q:** _Are those the largest labeled datasets ever assembled in the history of time?_\n\nNope! Companies like Facebook and Google curate their own private datasets that are much larger than the public ones we can play with:\n\n * Facebook: 3.5 billion Instagram images with noisy labels (first reported in 2018)\n\n * Google \u2013 JFT-300M: 300 million images with noisy labels (first reported in 2017)\n\nSadly, unless you're an employee at one of these companies, you can't really access these datasets. Nice recruiting tactic, we must say.\n\n**Q:** _How can I get help annotating data?_\n\nThere are several companies out there that can assist with labeling different kinds of annotations. A few worth mentioning include SamaSource, Digital Data Divide, and iMerit, which employ people who otherwise have limited opportunities, eventually creating positive socioeconomic change through employment in underprivileged communities.\n\n**Q:** _Is there a versioning tool for datasets, like Git is for code?_\n\nQri and Quilt can help version control our datasets, aiding in reproducibility of experiments.\n\n**Q:** _What if I don't have access to a large dataset for my unique problem?_\n\nTry to develop a synthetic dataset for training! For example, find a realistic 3D model of the object of interest and place it in realistic environments using a 3D framework such as Unity. Adjust the lighting and camera position, zoom, and rotation to take snapshots of this object from many angles, generating an endless supply of training data. Alternatively, companies like AI.Reverie, CVEDIA, Neuromation, Cognata, Mostly.ai, and DataGen Tech provide realistic simulations for training needs. One big benefit of synthesized training data is that the labeling process is built into the synthesization process. After all, you would know what you are creating. This automatic labeling can save a lot of money and effort, compared to manual labeling.\n\n# Privacy\n\n**Q:** _How do I develop a more privacy-preserving model without going down the cryptography rabbit hole?_\n\nTensorFlow Encrypted might be the solution you're looking for. It enables development using encrypted data, which is relevant, especially if you are on the cloud. 
Internally, lots of secure multiparty computation and homomorphic encryptions result in privacy-preserving machine learning.\n\n**Q:** _Can I keep my model under wraps from prying eyes?_\n\nWell, unless you are on the cloud, weights are visible and can be reverse engineered. Use the Fritz library for protecting your model's IP when deployed on smartphones.\n\n# Education and Exploration\n\n**Q:** _I want to become an AI expert. Beyond this book, where should I invest my time to learn more?_\n\nThere are several resources on the internet to learn deep learning in depth. We highly recommend these video lectures from some of the best teachers, covering a variety of application areas from computer vision to natural language processing.\n\n * Fast.ai (by Jeremy Howard and Rachel Thomas) features a free 14-video lecture series, taking a more learn-by-doing approach in PyTorch. Along with the course comes an ecosystem of tools and an active community that has led to many breakthroughs in the form of research papers and ready-to-use code (like three lines of code to train a state-of-the-art network using the fast.ai library).\n\n * Deeplearning.ai (by Andrew Ng) features a five-course \"Deep Learning Specialization.\" It's free of cost (although you could pay a small fee to get a certificate) and will solidify your theoretical foundation further. Dr. Ng's first Coursera course on machine learning has taught more than two million students, and this series continues the tradition of highly approachable content loved by beginners and experts alike.\n\n * We would be remiss if we didn't encourage you to note O'Reilly's Online Learning platform in this list. Helping more than two million users advance their careers, it contains hundreds of books, videos, live online trainings, and keynotes given by leading thinkers and practitioners at O'Reilly's AI and data conferences.\n\n**Q:** _Where can I find interesting notebooks to learn from?_\n\nGoogle Seedbank is a collection of interactive machine learning examples. Built on top of Google Colaboratory, these Jupyter notebooks can be run instantly without any installations. Some interesting examples include:\n\n * Generating audio with GANs\n\n * Action recognition on video\n\n * Generating Shakespeare-esque text\n\n * Audio-style transfer\n\n**Q:** _Where can I learn about the state of the art for a specific topic?_\n\nConsidering how fast the state of the art moves in AI, SOTAWHAT is a handy command-line tool to search research papers for the latest models, datasets, tasks, and more. For example, to look up the latest results on ImageNet, use `sotawhat imagenet` on the command line. Additionally, _paperswithcode.com\/sota_ also features repositories for papers, their source code, and released models, along with an interactive visual timeline of benchmarks.\n\n**Q:** _I am reading a paper on Arxiv and I really like it. Do I need to write code from scratch?_\n\nNot at all! The ResearchCode Chrome extension makes it easy to find code when browsing _arxiv.org_ or Google Scholar. All it takes is a press of the extension button. You can also look up code without installing the extension on the _ResearchCode.com_ website.\n\n**Q:** _I don't want to write any code, but I still want to interactively experiment with a model using my camera. How can I do that?_\n\nRunway ML is an easy-to-use yet powerful GUI tool that allows you to download models (from the internet or your own) and use the webcam or other input, such as video files, to see the output interactively. 
This allows further combining and remixing outputs of models to make new creations. And all of this happens with just a few mouse clicks; hence, it's attracting a large artist community!\n\n**Q:** _If I can test without code, can I train without code, too?_\n\nWe discuss this in detail in Chapter 8 (web-based) and Chapter 12 (desktop-based). To keep it short, tools such as Microsoft's CustomVision.ai, Google's Cloud AutoML Vision, Clarifai, Baidu EZDL, and Apple's Create ML provide drag-and-drop training capabilities. Some of these tools take as little as a few seconds to do the training.\n\n# One Last Question\n\n**Q:** _Tell me a great deep learning prank?_\n\nPrint and hang the poster shown in Figure 7-1 from _keras4kindergartners.com_ near the watercooler, and watch people's reactions.\n\n###### Figure 7-1. Satirical poster on the state of AI from keras4kindergartners.com\n\n# Chapter 8. Cloud APIs for Computer Vision: Up and Running in 15 Minutes\n\nDue to repeated incidents of near meltdown at the nearby nuclear power plant, the library of the city of Springfield (we are not allowed to mention the state) decided that it was too risky to store all their valuable archives in physical form. After hearing that the library in their rival city of Shelbyville had started digitizing its records, they wanted to get in on the game as well. After all, their collection of articles such as \"Old man yells at cloud\" and \"Local man thinks wrestling is real\" and the hundred-year-old iconic photographs of the Gorge and the statue of the city's founder Jebediah Springfield are irreplaceable. In addition to making their archives resilient to catastrophes, digitization would make them easily searchable and retrievable. And, of course, the residents of Springfield would now be able to access all of this material from the comfort of their living room couches.\n\nThe first step in digitizing documents is, of course, scanning. That's the easy part. Then starts the real challenge\u2014processing and understanding all of this visual imagery. The team in Springfield had a few different options in front of them:\n\n * Perform manual data entry for every single page and every single photograph. Given that the city has more than 200 years of rich history, it would take a really long time, and it would be error prone and expensive. It would be quite an ordeal to transcribe all of that material.\n\n * Hire a team of data scientists to build an image understanding system. That would be a much better approach, but there's just one tiny hitch in the plan. For a library that runs on charitable donations, hiring a team of data scientists would quickly exhaust its budget. A single data scientist might not only be the highest-paid employee at the library, they might also be the highest-earning worker in the entire city of Springfield (barring the wealthy industrialist Montgomery Burns).\n\n * Get someone who knows enough coding to use the intelligence of ready-to-use vision APIs.\n\nLogically, they went with the quick and inexpensive third option. They had a stroke of luck, too. Martin Prince, an industrious fourth grader from Springfield Elementary who happened to know some coding, volunteered to build out the system for them. Although Martin did not know much deep learning (he's just 10 years old, after all), he did know how to do some general coding, including making REST API calls using Python. And that was all he really needed to know.
In fact, it took him just under 15 minutes to figure out how to make his first API call.\n\nMartin's _modus operandi_ was simple: send a scanned image to the cloud API, get a prediction back, and store it in a database for future retrieval. And obviously, repeat this process for every single record the library owned. He just needed to select the correct tool for the job.\n\nAll the big names\u2014Amazon, Google, IBM, Microsoft\u2014provide a similar set of computer-vision APIs that label images, detect and recognize faces and celebrities, identify similar images, read text, and sometimes even discern handwriting. Some of them even provide the ability to train our own classifier without having to write a single line of code. Sounds really convenient!\n\nIn the background, these companies are constantly working to improve the state of the art in computer vision. They have spent millions in acquiring and labeling datasets with a granular taxonomy much beyond the ImageNet dataset. We might as well make good use of their researchers' blood, sweat, and tears (and electricity bills).\n\nThe ease of use, speed of onboarding and development, the variety of functionality, richness of tags, and competitive pricing make cloud-based APIs difficult to ignore. And all of this without the need to hire an expensive data science team. Chapters Chapter 5 and Chapter 6 optimized for accuracy and performance, respectively; this chapter essentially optimizes for human resources.\n\nIn this chapter, we explore several cloud-based visual recognition APIs. We compare them all both quantitatively as well as qualitatively. This should hopefully make it easier to choose the one that best suits your target application. And if they still don't match your needs, we'll investigate how to train a custom classifier with just a few clicks.\n\n(In the interest of full disclosure, some of the authors of this book were previously employed at Microsoft, whose offerings are discussed here. We have attempted not to let that bias our results by building reproducible experiments and justifying our methodology.)\n\n# The Landscape of Visual Recognition APIs\n\nLet's explore some of the different visual recognition APIs out there.\n\n## Clarifai\n\nClarifai (Figure 8-1) was the winner of the 2013 ILSVRC classification task. Started by Matthew Zeiler, a graduate student from New York University, this was one of the first visual recognition API companies out there.\n\n###### Note\n\nFun fact: While investigating a classifier to detect NSFW (Not Safe For Work) images, it became important to understand and debug what was being learned by the CNN in order to reduce false positives. This led Clarifai to invent a visualization technique to expose which images stimulate feature maps at any layer in the CNN. As they say, necessity is the mother of invention.\n\n### What's unique about this API?\n\nIt offers multilingual tagging in more than 23 languages, visual similarity search among previously uploaded photographs, face-based multicultural appearance classifier, photograph aesthetic scorer, focus scorer, and embedding vector generation to help us build our own reverse-image search. It also offers recognition in specialized domains including clothing and fashion, travel and hospitality, and weddings. Through its public API, the image tagger supports 11,000 concepts.\n\n###### Figure 8-1. 
Sample of Clarifai's results\n\n## Microsoft Cognitive Services\n\nWith the creation of ResNet-152 in 2015, Microsoft was able to win seven tasks at the ILSVRC, the COCO Image Captioning Challenge as well as the Emotion Recognition in the Wild challenge, ranging from classification and detection (localization) to image descriptions. And most of this research was translated to cloud APIs. Originally starting out as Project Oxford from Microsoft Research in 2015, it was eventually renamed Cognitive Services in 2016. It's a comprehensive set of more than 50 APIs ranging from vision, natural language processing, speech, search, knowledge graph linkage, and more. Historically, many of the same libraries were being run at divisions at Xbox and Bing, but they are now being exposed to developers externally. Some viral applications showcasing creative ways developers use these APIs include _how-old.net_ (How Old Do I Look?), Mimicker Alarm (which requires making a particular facial expression in order to defuse the morning alarm), and _CaptionBot.ai_.\n\n### What's unique about this API?\n\nAs illustrated in Figure 8-2, the API offers image captioning, handwriting understanding, and headwear recognition. Due to many enterprise customers, Cognitive Services does not use customer image data for improving its services.\n\n###### Figure 8-2. Sample of Microsoft Cognitive Services results\n\n## Google Cloud Vision\n\nGoogle provided the winning entry at the 2014 ILSVRC with the help of the 22-layer GoogLeNet, which eventually paved the way for the now-staple Inception architectures. Supplementing the Inception models, in December 2015, Google released a suite of Vision APIs. In the world of deep learning, having large amounts of data is definitely an advantage to improve one's classifier, and Google has a lot of consumer data. For example, with learnings from Google Street View, you should expect relatively good performance in real-world text extraction tasks, like on billboards.\n\n### What's unique about this API?\n\nFor human faces, it provides the most detailed facial key points (Figure 8-3) including roll, tilt, and pan to accurately localize the facial features. The APIs also return similar images on the web to the given input. A simple way to try out the performance of Google's system without writing code is by uploading photographs to Google Photos and searching through the tags.\n\n###### Figure 8-3. Sample of Google Cloud Vision's results\n\n## Amazon Rekognition\n\nNo, that title is not a typo. Amazon Rekognition API (Figure 8-4) is largely based on Orbeus, a Sunnyvale, California-based startup that was acquired by Amazon in late 2015. Founded in 2012, its chief scientist also had winning entries in the ILSVRC 2014 detection challenge. The same APIs were used to power PhotoTime, a famous photo organization app. The API's services are available as part of the AWS offerings. Considering most companies already offer photo analysis APIs, Amazon is doubling down on video recognition offerings to offer differentiation.\n\n### What's unique about this API?\n\nLicense plate recognition, video recognition APIs, and better end-to-end integration examples of Rekognition APIs with AWS offerings like Kinesis Video Streams, Lambda, and others. Also, Amazon's API is the only one that can determine whether the subject's eyes are open or closed.\n\n###### Figure 8-4. 
Sample of Amazon Rekognition's results\n\n## IBM Watson Visual Recognition\n\nUnder the Watson brand, IBM's Visual Recognition offering started in early 2015. After purchasing AlchemyAPI, a Denver-based startup, AlchemyVision has been used for powering the Visual Recognition APIs (Figure 8-5). Like others, IBM also offers custom classifier training. Surprisingly, Watson does not offer optical character recognition yet.\n\n###### Figure 8-5. Sample of IBM Watson's Visual Recognition results\n\n## Algorithmia\n\nAlgorithmia is a marketplace for hosting algorithms as APIs on the cloud. Founded in 2013, this Seattle-based startup has both its own in-house algorithms as well as those created by others (in which case creators earn revenue based on the number of calls). In our experience, this API did tend to have the slowest response time.\n\n### What's unique about this API?\n\nColorization service for black and white photos (Figure 8-6), image stylization, image similarity, and the ability to run these services on-premises, or on any cloud provider.\n\n###### Figure 8-6. Sample of Algorithmia's style transfer results\n\nWith so many offerings, it can be overwhelming to choose a service. There are many reasons why we might choose one over another. Obviously, the biggest factors for most developers would be accuracy and price. Accuracy is the big promise that the deep learning revolution brings, and many applications require it on a consistent basis. Price of the service might be an additional factor to consider. We might also choose a service provider because our company already has a billing account with it, and it would take additional effort to integrate a different service provider. Speed of the API response might be another factor, especially if the user is waiting on the other end for a response. Because many of these API calls can be abstracted, it's easy to switch between different providers.\n\n# Comparing Visual Recognition APIs\n\nTo aid our decision making, let's compare these APIs head to head. In this section, we examine service offerings, cost, and accuracy of each.\n\n## Service Offerings\n\nTable 8-1 lists what services are being offered by each cloud provider.\n\nTable 8-1. Comparison shopping of vision API providers (as of Aug. 
2019) **** | **Algorithmia** | **Amazon Rekognition** | **Clarifai** | **Microsoft Cognitive Services** | **Google Cloud Vision** | **IBM Watson Visual Recognition** \n---|---|---|---|---|---|--- \nImage classification |\n\n\u2714 | \u2714 | \u2714 | \u2714 | \u2714 | \u2714\n\nImage detection |\n\n\u2714 | \u2714 | |\n\n\u2714 | \u2714\n\n|\n\nOCR |\n\n\u2714 | \u2714 | |\n\n\u2714 | \u2714\n\n|\n\nFace recognition |\n\n\u2714 | \u2714 | |\n\n\u2714 | |\n\nEmotion\n\nrecognition | \u2714 | |\n\n\u2714 | \u2714 | \u2714\n\n|\n\nLogo recognition | | |\n\n\u2714 | \u2714 | \u2714\n\n|\n\nLandmark recognition | | |\n\n\u2714 | \u2714 | \u2714 | \u2714\n\nCelebrity\n\nrecognition | \u2714 | \u2714 | \u2714 | \u2714 | \u2714 | \u2714\n\nMultilingual tagging | | |\n\n\u2714 | |\n\n|\n\nImage description | | | | \u2714 | |\n\nHandwriting | | | | \u2714 | \u2714\n\n|\n\nThumbnail generation |\n\n\u2714 | | | \u2714 | \u2714\n\n|\n\nContent moderation |\n\n\u2714 | \u2714 | \u2714 | \u2714 | \u2714\n\n|\n\nCustom classification training | | |\n\n\u2714 | \u2714 | \u2714 | \u2714\n\nCustom detector training | | | | \u2714 | \u2714\n\n| \nMobile custom models | | |\n\n\u2714 | \u2714 | \u2714\n\n| \nFree tier | 5,000 requests per month | 5,000 requests per month | 5,000 requests per month | 5,000 requests per month | 1,000 requests per month | 7,500\n\nThat's a mouthful of services already up and running, ready to be used in our application. Because numbers and hard data help make decisions easier, it's time to analyze these services on two factors: cost and accuracy.\n\n## Cost\n\nMoney doesn't grow on trees (yet), so it's important to analyze the economics of using off-the-shelf APIs. Taking a heavy-duty example of querying these APIs at about 1 query per second (QPS) service for one full month (roughly 2.6 million requests per month), Figure 8-7 presents a comparison of the different providers sorted by estimated costs (as of August 2019).\n\n###### Figure 8-7. A cost comparison of different cloud-based vision APIs\n\nAlthough for most developers, this is an extreme scenario, this would be a pretty realistic load for large corporations. We will eventually compare these prices against running our own service in the cloud to make sure we get the most bang for the buck fitting our scenario.\n\nThat said, many developers might find negligible charges, considering that all of the cloud providers we look at here have a free tier of 5,000 calls per month (except Google Vision, which gives only 1,000 calls per month for free), and then roughly $1 per 1,000 calls.\n\n## Accuracy\n\nIn a world ruled by marketing departments who claim their organizations to be the market leaders, how do we judge who is actually the best? What we need are common metrics to compare these service providers on some external datasets.\n\nTo showcase building a reproducible benchmark, we assess the text extraction quality using the COCO-Text dataset, which is a subset of the MS COCO dataset. This 63,686-image set contains text in daily life settings, like on a banner, street sign, number on a bus, price tag in a grocery store, designer shirt, and more. This real-world imagery makes it a relatively tough set to test against. We use the Word Error Rate (WER) as our benchmarking metric. To keep things simple, we ignore the position of the word and focus only on whether a word is present (i.e., bag of words). 
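Expressed in code, the per-image comparison boils down to a simple set operation. Here is a minimal sketch of the idea (the helper name is ours; the actual benchmarking scripts handle details such as tokenization and filtering):

    def bag_of_words_error_rate(ground_truth_words, predicted_words):
        # A ground-truth word counts as recognized only if the API returned
        # exactly the same word; its position in the image is ignored
        truth = set(ground_truth_words)
        predicted = set(predicted_words)
        missed = truth - predicted
        return len(missed) / len(truth)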
To be a match, the entire word must be correct.\n\nIn the COCO-Text validation dataset, we pick all images with one or more instances of legible text (full-text sequences without interruptions) and compare text instances of more than one-character length. We then send these images to various cloud vision APIs. Figure 8-8 presents the results.\n\n###### Figure 8-8. WER for different text extraction APIs as of August 2019\n\nConsidering how difficult the dataset is, these results are remarkable. Most state-of-the-art text extraction tools from earlier in the decade would not cross the 10% mark. This shows the power of deep learning. On a subset of manually tested images, we also noticed a year-on-year improvement in the performance of some of these APIs, which is another benefit enjoyed by cloud-based APIs.\n\nAs always, all of the code that we used for our experiment is hosted on GitHub (see _http:\/\/PracticalDeepLearning.ai_.\n\nThe results of our analysis depend significantly on the dataset we choose as well as our metrics. Depending on our dataset (which is in turn influenced by our use case) as well as our minimum quality metrics, our results can vary. Additionally, service providers are constantly improving their services in the background. As a consequence, these results are not set in stone and improve over time. These results can be replicated on any dataset with the scripts on GitHub.\n\n## Bias\n\nIn Chapter 1, we explored how bias can creep into datasets and how it can have real-life consequences for people. The APIs we explore in this chapter are no exception. Joy Buolamwini, a researcher at the MIT Media Lab, discovered that among Microsoft, IBM, and Megvii (also known as Face++), none were able to detect her face and gender accurately. Wondering if she had unique facial features that made her undetectable to these APIs, she (working along with Timnit Gebru) compiled faces of members of legislative branches from six countries with a high representation of women, building the Pilot Parliaments Benchmark (PPB; see Figure 8-9). She chose members from three African countries and three European countries to test for how the APIs performed on different skin tones. If you haven't been living under a rock, you can already see where this is going.\n\nShe observed that the APIs performed fairly well overall at accuracies between 85% and 95%. It was only when she started slicing the data across the different categories that she observed there was a massive amount of difference in the accuracies for each. She first observed that there was a significant difference between detection accuracies of men and women. She also observed that breaking down by skin tone, the difference in the detection accuracy was even larger. Then, finally, taking both gender and skin tone into consideration, the differences grew painfully starker between the worse detected group (darker females) and the best detected group (lighter males). For example, in the case of IBM, the detection accuracy of African women was a mere 65.3%, whereas the same API gave a 99.7% accuracy for European men. A whopping 34.4% difference! Considering many of these APIs are used by law enforcement, the consequences of bias seeping in might have life or death consequences.\n\n###### Figure 8-9. Averaged faces among different gender and skin tone, from Pilot Parliaments Benchmark (PPB)\n\nFollowing are a few insights we learned from this study:\n\n * The algorithm is only as good as the data on which it's trained. 
And this shows the need for diversity in the training dataset.\n\n * Often the aggregate numbers don't always reveal the true picture. The bias in the dataset is apparent only when slicing it across different subgroups.\n\n * The bias does not belong to any specific company; rather, it's an industry-wide phenomenon.\n\n * These numbers are not set in stone and reflect only the time at which the experiment was performed. As evident from the drastic change in numbers between 2017 (Figure 8-10) and a subsequent study in 2018 (Figure 8-11), these companies are taking bias removal from their datasets quite seriously.\n\n * Researchers putting commercial companies to the test with public benchmarks results in industry-wide improvements (even if for the fear of bad PR, then so be it).\n\n###### Figure 8-10. Face detection comparison across APIs, tested in April and May 2017 on the PPB\n\n###### Figure 8-11. Face detection comparison across APIs in August 2018 on the PPB, conducted by Inioluwa Deborah Raji et al.\n\nHow about bias in image-tagging APIs? Facebook AI Research pondered over the question \"Does Object Recognition Work for Everyone?\" in a paper by the same title (Terrance DeVries et al.). The group tested multiple cloud APIs in February 2019 on Dollar Street, a diverse collection of images of household items from 264 different homes across 50 countries (Figure 8-12).\n\n###### Figure 8-12. Image-tagging API performance on geographically diverse images from the Dollar Street dataset\n\nHere are some of the key learnings from this test:\n\n * Accuracy of object classification APIs was significantly lower in images from regions with lower income levels, as illustrated in Figure 8-13.\n\n * Datasets such as ImageNet, COCO, and OpenImages severely undersample images from Africa, India, China, and Southeast Asia, hence leading to lower performance on images from the non-Western world.\n\n * Most of the datasets were collected starting with keyword searches in English, omitting images that mentioned the same object with phrases in other languages.\n\n###### Figure 8-13. Average accuracy (and standard deviation) of six cloud APIs versus income of the household where the images were collected\n\nIn summary, depending on the scenario for which we want to use these cloud APIs, we should build our own benchmarks and test them periodically to evaluate whether these APIs are appropriate for the use case.\n\n# Getting Up and Running with Cloud APIs\n\nCalling these cloud services requires minimal code. At a high level, get an API key, load the image, specify the intent, make a POST request with the proper encoding (e.g., base64 for the image), and receive the results. Most of the cloud providers offer software development kits (SDKs) and sample code showcasing how to call their services. They additionally provide pip-installable Python packages to further simplify calling them. If you're using Amazon Rekognition, we highly recommend using its `pip` package.\n\nLet's reuse our thrilling image to test-run these services.\n\nFirst, let's try it on Microsoft Cognitive Services. 
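The `cognitive_services_tagimage` and `google_cloud_tagimage` helpers called in the following snippets are not defined in the text; here is a rough, minimal sketch of what such wrappers might look like using the `requests` library (the endpoints, API versions, regions, and key handling are illustrative; check each provider's current documentation for exact details):

    import base64
    import requests

    def cognitive_services_tagimage(filename,
                                    api_key="YOUR_API_KEY",
                                    region="westus"):
        # Illustrative endpoint; the API version and region may differ
        url = "https://{}.api.cognitive.microsoft.com/vision/v2.0/analyze".format(region)
        headers = {"Ocp-Apim-Subscription-Key": api_key,
                   "Content-Type": "application/octet-stream"}
        params = {"visualFeatures": "Description"}
        with open(filename, "rb") as f:
            response = requests.post(url, headers=headers, params=params,
                                     data=f.read())
        return response.json()

    def google_cloud_tagimage(filename, api_key="YOUR_API_KEY"):
        # Illustrative endpoint for the images:annotate method
        url = "https://vision.googleapis.com/v1/images:annotate?key=" + api_key
        with open(filename, "rb") as f:
            image_content = base64.b64encode(f.read()).decode("utf-8")
        body = {"requests": [{"image": {"content": image_content},
                              "features": [{"type": "LABEL_DETECTION",
                                            "maxResults": 10}]}]}
        response = requests.post(url, json=body)
        return response.json()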
Get an API key and replace it in the following code (the first 5,000 calls are free\u2014more than enough for our experiments):\n\n cognitive_services_tagimage('DogAndBaby.jpg')\n\n Results:\n\n {\n \"description\": {\n \"tags\": [\"person\", \"indoor\", \"sitting\", \"food\", \"table\", \"little\",\n \"small\", \"dog\", \"child\", \"looking\", \"eating\", \"baby\", \"young\", \"front\",\n \"feeding\", \"holding\", \"playing\", \"plate\", \"boy\", \"girl\", \"cake\", \"bowl\",\n \"woman\", \"kitchen\", \"standing\", \"birthday\", \"man\", \"pizza\"],\n \"captions\": [{\n \"text\": \"a little girl sitting at a table with a dog\",\n \"confidence\": 0.84265453815486435\n }]\n },\n \"requestId\": \"1a32c16f-fda2-4adf-99b3-9c4bf9e11a60\",\n \"metadata\": {\n \"height\": 427,\n \"width\": 640,\n \"format\": \"Jpeg\"\n }\n }\n\n\"A little girl sitting at a table with a dog\"\u2014pretty close! There are other options to generate more detailed results, including a probability along with each tag.\n\n###### Tip\n\nAlthough the ImageNet dataset is primarily tagged with nouns, many of these services go beyond and return verbs like \"eating,\" \"sitting,\" \"jumping.\" Additionally, they might contain adjectives like \"red.\" Chances are, these might not be appropriate for our application. We might want to filter out these adjectives and verbs. One option is to check their linguistic type against Princeton's WordNet. This is available in Python with the Natural Language Processing Toolkit (NLTK). Additionally, we might want to filter out words like \"indoor\" and \"outdoor\" (often shown by Clarifai and Cognitive Services).\n\nNow, let's test the same image using Google Vision APIs. Get an API key from their website and use it in the following code (and rejoice, because the first 1,000 calls are free):\n\n google_cloud_tagimage('DogAndBaby.jpg')\n\n Results:\n\n {\n \"responses\": [\n {\n \"labelAnnotations\": [\n {\n \"mid\": \"\/m\/0bt9lr\",\n \"description\": \"dog\",\n \"score\": 0.951077,\n \"topicality\": 0.951077\n },\n {\n \"mid\": \"\/m\/06z04\",\n \"description\": \"skin\",\n \"score\": 0.9230451,\n \"topicality\": 0.9230451\n },\n {\n \"mid\": \"\/m\/01z5f\",\n \"description\": \"dog like mammal\",\n \"score\": 0.88359463,\n \"topicality\": 0.88359463\n },\n {\n \"mid\": \"\/m\/01f5gx\",\n \"description\": \"eating\",\n \"score\": 0.7258142,\n \"topicality\": 0.7258142\n }\n # other objects\n ]\n }\n ]\n }\n\nWasn't that a little too easy? These APIs help us get to state-of-the-art results without needing a Ph.D.\u2014in just 15 minutes!\n\n###### Tip\n\nEven though these services return tags and image captions with probabilities, it's up to the developer to determine a threshold. Usually, 60% and 40% are good thresholds for image tags and image captions, respectively.\n\nIt's also important to communicate the probability to the end-user from a UX standpoint. For example, if the result confidence is >80%, we might say prefix the tags with \"This image _contains...._ \" For <80%, we might want to change that prefix to \"This image _may contain..._ \" to reflect the lower confidence in the result.\n\n# Training Our Own Custom Classifier\n\nChances are these services were not quite sufficient to meet the requirements of our use case. Suppose that the photograph we sent to one of these services responded with the tag \"dog.\" We might be more interested in identifying the breed of the dog. Of course, we can follow Chapter 3 to train our own classifier in Keras. 
But wouldn't it be more awesome if we didn't need to write a single line of code? Help is on the way.\n\nA few of these cloud providers give us the ability to train our own custom classifier by merely using a drag-and-drop interface. The pretty user interfaces provide no indication that under the hood they are using transfer learning. As a result, Cognitive Services Custom Vision, Google AutoML, Clarifai, and IBM Watson all provide us the option for custom training. Additionally, some of them even allow building custom detectors, which can identify the location of objects with a bounding box. The key process in all of them being the following:\n\n 1. Upload images\n\n 2. Label them\n\n 3. Train a model\n\n 4. Evaluate the model\n\n 5. Publish the model as a REST API\n\n 6. Bonus: Download a mobile-friendly model for inference on smartphones and edge devices\n\nLet's see a step-by-step example of Microsoft's Custom Vision.\n\n 1. _Create a project_ (Figure 8-14): Choose a domain that best describes our use case. For most purposes, \"General\" would be optimal. For more specialized scenarios, we might want to choose a relevant domain.\n\n###### Figure 8-14. Creating a new project in Custom Vision\n\nAs an example, if we have an ecommerce website with photos of products against a pure white background, we might want to select the \"Retail\" domain. If we intend to run this model on a mobile phone eventually, we should choose the \"Compact\" version of the model, instead; it is smaller in size with only a slight loss in accuracy.\n\n 2. _Upload_ (Figure 8-15): For each category, upload images and tag them. It's important to upload at least 30 photographs per category. For our test, we uploaded more than 30 images of Maltese dogs and tagged them appropriately.\n\n###### Figure 8-15. Uploading images on CustomVision.ai\n\n 3. _Train_ (Figure 8-16): Click the Train button, and then in about three minutes, we have a spanking new classifier ready.\n\n###### Figure 8-16. The Train button in the upper-right corner of the CustomVision.ai page\n\n 4. _Analyze the model's performance_ : Check the precision and recall of the model. By default, the system sets the threshold at 90% confidence and gives the precision and recall metrics at that value. For higher precision, increase the confidence threshold. This would come at the expense of reduced recall. Figure 8-17 shows example output.\n\n 5. _Ready to go_ : We now have a production-ready API endpoint that we can call from any application.\n\nTo highlight the effect of the amount of data on model quality, let's train a dog breed classifier. We can use the Stanford Dogs dataset, a collection of more than 100 dog categories. For simplicity, we randomly chose 10 breeds, which have more than 200 images available. With 10 classes, a random classifier would have one-tenth, or 10%, the chance of correctly identifying an image. We should easily be able to beat this number. Table 8-2 shows the effect of training on datasets with different volumes.\n\nTable 8-2. Effect of number of training images on precision and recall **** | **30 training images\/class** | **200 training images\/class** \n---|---|--- \nPrecision | 91.2% | 93.5% \nRecall | 85.3% | 89.6%\n\n###### Figure 8-17. Relative precision and recall for our sample training set with 200 images per class\n\nBecause we haven't uploaded a test set, the performance figures reported here are on the full dataset using the common _k_ -fold cross-validation technique. 
This means the data was randomly divided into _k_ parts, then ( _k \u2013_ 1) parts were used for training, and the remaining part was used for testing. This was performed a few times, each time with a randomized subset of images, and the averaged results are reported here.\n\nIt is incredible that even with 30 images per class, the classifier's precision is greater than 90%, as depicted in Figure 8-18. And, surprisingly, this took slightly less than 30 seconds to train.\n\nNot only this, we can dig down and investigate the performance on each class. Classes with high precision might visibly be more distinct, whereas those with low precision might look similar to another class.\n\n###### Figure 8-18. Some of the possible tags returned by the API\n\nThis short and convenient approach is not without its downsides, as you will see in the following section. In that section, we also discuss mitigation strategies to help take advantage of this rather useful tool.\n\n## Top Reasons Why Our Classifier Does Not Work Satisfactorily\n\nThere are a number of reasons why a classifier would not perform well. The following are some of them:\n\nNot enough data\n\nIf we find that the accuracy is not quite sufficient for our needs, we might need to train the system with more data. Of course, 30 images per class just gets us started. But for a production-quality application, more images are better. 200 images per class are usually recommended.\n\nNonrepresentative training data\n\nOften, the images on the internet are far too clean, set up in studio lighting with clean backgrounds, and close to the center of the frame. Images that our application might see on a daily basis might not be represented quite so well. It's really important to train our classifier with real-world images for the best performance.\n\nUnrelated domain\n Under the hood, Custom Vision is running transfer learning. This makes it really important to choose the correct domain when creating the project. As an example, if we are trying to classify X-ray images, transfer learning from an ImageNet-based model might not yield as accurate a result. For cases like that, training our own classifier manually in Keras would work best, as demonstrated in [Chapter 3](ch03.html#cats_versus_dogscolon_transfer_learning) (though this will probably take more than three minutes).\nUsing it for regression\n\nIn machine learning, there are two common categories of problems: classification and regression. Classification is predicting one or more classes for input. Regression, on the other hand, is predicting a numerical value given an input; for example, predicting house prices. Custom Vision is primarily a classification system. Using it to count objects by tagging the number of objects is the wrong approach, and will lead to unsatisfactory results.\n\nCounting objects is a type of regression problem. We can do it by localizing each instance of the object in an image (aka object detection) and counting their occurrences. Another example of a regression problem is predicting the age of a person based on their profile photo. We tackle both problems in later chapters.\n\nClasses are too similar\n\nIf our classes look too similar and rely heavily on smaller-level details for distinction, the model might not perform as well. For example, a five-dollar note and a 20-dollar note have very similar high-level features. It's at the lower-level details that show they are really distinct. 
As another example, it might be easy to distinguish between a Chihuahua and a Siberian Husky, but it's more difficult to distinguish between an Alaskan Malamute and a Siberian Husky. A fully retrained CNN, as demonstrated in Chapter 3, should perform better than this Custom Vision-based system.

###### Tip

A great feature of Custom Vision is that if the model is unsure of any image that it encounters via its API endpoint, the web UI will show those images for manual review. We can review and manually tag new images on a periodic basis and continuously improve the quality of the model. These images tend to improve the classifier the most for two reasons: first, they represent real-world usage; second, and more importantly, they influence the model far more than images it can already classify easily. This is known as semisupervised learning.

In this section, we discussed a few different ways in which we can improve our model's accuracy. In the real world, that is not the be-all and end-all of a user's experience. How quickly we are able to respond to a request also matters a lot. Later in this chapter, we cover a couple of ways to improve performance without sacrificing quality.

# Comparing Custom Classification APIs

As you might have noticed throughout the book, we are pretty dogmatic about being data driven. If we are going to spend good money on a service, we'd better get the best bang for our buck. Time to put the hype to the test.

For a good number of classification problems, these custom cloud-based classifiers perform pretty well. To truly test their limits, we need something more challenging. We need to unleash the toughest doggone dataset, train this animal, and fetch some insightful results—using the Stanford Dogs dataset.

Using the entire dataset might make it too easy for these classifiers (after all, ImageNet already covers many dog breeds), so we took it up a notch. We trained our own Keras classifier on the entire dataset and then built a mini-dataset out of the 34 worst-performing classes (each containing at least 140 images). These classes performed poorly because they were often confused with other similar-looking dog breeds; telling them apart requires a fine-grained understanding of their features. We divided the images into 100 randomly chosen images per class for the training dataset and 40 randomly chosen images per class for the test dataset. To avoid any class imbalances, which can have an impact on predictions, we chose the same number of training and test images for each class.

Lastly, we selected a minimum confidence threshold of 0.5, as it appeared to strike a good balance between precision and recall across all services. At a high confidence threshold such as 0.99, a classifier might be very accurate, but there might be only a handful of images with predictions; in other words, really low recall. On the other hand, a really low threshold of 0.01 would result in predictions for nearly all images. However, we should not rely on many of these results. After all, the classifier is not confident.

Instead of reporting precision and recall separately, we report the _F1 score_ (also known as _F-measure_), a hybrid score that combines both values:

F1 score = (2 × precision × recall) / (precision + recall)

Additionally, we report the time it took to train, as shown in Figure 8-19.
Beyond just the cloud, we also trained using Apple's Create ML tool on a MacBook Pro with and without data augmentations (rotate, crop, and flip).\n\nGoogle and Microsoft provide the ability to customize the duration of training. Google Auto ML allows us to customize between 1 and 24 hours. Microsoft provides a free \"Fast Training\" option and a paid \"Advanced Training\" option (similar to Google's offering) with which we can select the duration to be anywhere between 1 and 24 hours.\n\n###### Figure 8-19. A chart showing the F1 score for custom classifier services, as of August 2019 (higher is better)\n\nFollowing are some interesting takeaways from this experiment:\n\n * Clarifai and Microsoft offered near-instant training for the 3,400 training images.\n\n * Compared to \"Fast Training,\" Microsoft's \"Advanced Training\" performed slightly better (roughly a 1-point increase) for the extra one hour of training. Because \"Fast Training\" took less than 15 seconds to train, we can infer that its base featurizer was already good at extracting fine-grained features.\n\n * Surprisingly, Apple's Create ML actually performed worse after adding in the augmentations, despite taking more than two extra hours to train, most of which was spent creating the augmentations. This was done on a top-of-the-line MacBook Pro and showed 100% GPU utilization in Activity Monitor.\n\nAdditionally, to test the featurizer's strength, we varied the amount of training data supplied to the service (Figure 8-20). Because Microsoft took less than 15 seconds to train, it was easy (and cheap!) for us to perform the experiment there. We varied between 30 and 100 images per class for training while keeping the same 40 images per class for testing.\n\n###### Figure 8-20. Effect of varying size of training data per class on test F1 score (higher is better)\n\nEven though Microsoft recommends using at least 50 images per class, going under that limit did not affect performance significantly. The fact that the F1 score did not vary as much as one would expect shows the value of transfer learning (enabling less data to build classifiers) and having a good featurizer capable of fine-grained classification.\n\nIt bears repeating that this experiment was intentionally made difficult to stress-test these classifiers. On average, they would have performed much better on the entire Stanford Dogs dataset.\n\n# Performance Tuning for Cloud APIs\n\nA photograph taken by a modern cell phone can have a resolution as high as 4000 x 3000 pixels and be upward of 4 MB in size. Depending on the network quality, it can take a few seconds to upload such an image to the service. With each additional second, it can become more and more frustrating for our users. Could we make this faster?\n\nThere are two ways to reduce the size of the image:\n\nResizing\n\nMost CNNs take an input image with a size of 224 x 224 or 448 x 448 pixels. Much of a cell phone photo's resolution would be unnecessary for a CNN. It would make sense to downsize the image prior to sending it over the network, instead of sending a large image over the network and then downsizing it on the server.\n\nCompression\n\nMost image libraries perform _lossy_ compression while saving a file. Even a little bit of compression can go a long way in reducing the size of the image while minimally affecting the quality of the image itself. 
Compression does introduce noise, but CNNs are usually robust enough to deal with some of it.

## Effect of Resizing on Image Labeling APIs

We performed an experiment in which we took more than a hundred diverse, unmodified images from an iPhone at the default resolution (4032 x 3024) and sent them to the Google Cloud Vision API to get labels for each of those images. We then downsized each of the original images in 5% increments (5%, 10%, 15%...95%) and collected the API results for those smaller images, too. We then calculated the agreement rate for each image using the following formula:

% agreement rate = (number of labels in the baseline image also present in the test image / number of labels in the baseline image) × 100

Figure 8-21 shows the results of this experiment. In the figure, the solid line shows the reduction in file size, and the dotted line represents the agreement rate. Our main conclusion from the experiment was that a 60% reduction in resolution led to a 95% reduction in file size, with little change in accuracy compared to the original images.

###### Figure 8-21. Effect of resizing an image on agreement rate and file size reduction relative to the original image

## Effect of Compression on Image Labeling APIs

We repeated the same experiment, but instead of changing the resolution, we changed the compression factor for each image incrementally. In Figure 8-22, the solid line shows the reduction in file size and the dotted line represents the agreement rate. The main takeaway here is that a 60% compression factor (that is, 40% quality) leads to an 85% reduction in file size, with little change in accuracy compared to the original image.

###### Figure 8-22. Effect of compressing an image on agreement rate and file size reduction relative to the original image

## Effect of Compression on OCR APIs

We took a document containing 300-plus words at the default resolution of an iPhone (4032 x 3024) and sent it to the Microsoft Cognitive Services API to test text recognition. We then compressed the image in 5% increments, sent each compressed version to the same API, and compared the results against the baseline to calculate the WER. We observed that even setting the compression factor to 95% (i.e., 5% of the original image quality) had no effect on the quality of the results.

## Effect of Resizing on OCR APIs

We repeated the previous experiment, but this time resizing each image instead of compressing it. Beyond a certain point, the WER jumped from nearly zero to almost 100%, with nearly all words being misrecognized. Retesting with another document in which each word was set at a different font size showed that all words below a particular size were misrecognized. To recognize text effectively, OCR engines need the text to be larger than a minimum height (a good rule of thumb is 20 pixels or more). Hence, the higher the resolution, the higher the accuracy.

What have we learned?

 * For text recognition, compress images heavily, but do not resize.

 * For image labeling, a combination of moderate resizing (say, 50%) and moderate compression (say, 30%) should lead to heavy file size reductions (and quicker API calls) without any noticeable difference in the quality of API results.

 * Depending on your application, you might be working with already resized and compressed images.
Every processing step can introduce a slight difference in the results of these APIs, so aim to minimize them.\n\n###### Tip\n\nAfter receiving an image, cloud APIs internally resize it to fit their own implementation. For us, this means two levels of resizing: we first resize an image to reduce the size, then send it to the cloud API, which further resizes the image. Downsizing images introduces distortion, which is more evident at lower resolutions. We can minimize the effect of distortion by resizing from a higher resolution, which is bigger by a few multiples. For example, resizing 3024x3024 (original) \u2192 302x302 (being sent to cloud) \u2192 224x224 (internally resized by APIs) would introduce much more distortion in the final image compared to 3024x3024 \u2192 896x896 \u2192 224x224. Hence, it's best to find a happy intermediate size before sending the images. Additionally, specifying advanced interpolation options like `BICUBIC` and `LANCZOS` will lead to more accurate representation of the original image in the smaller version.\n\n# Case Studies\n\nSome people say that the best things in life don't come easy. We believe this chapter proves otherwise. In the following section, we take a look at how some tech industry titans use cloud APIs for AI to drive some very compelling scenarios.\n\n## The New York Times\n\nIt might seem like the scenario painted at the beginning of the chapter was taken out of a cartoon, but it was, in fact, pretty close to the case of the _New York Times_ (NYT). With more than 160 years of illustrious history, NYT has a treasure trove of photographs in its archives. It stored many of these artifacts in the basement of its building three stories below the ground level, aptly called the \"morgue.\" The value of this collection is priceless. In 2015, due to a plumbing leak, parts of the basement were damaged including some of these archived records. Thankfully the damage was minimal. However, this prompted NYT to consider digitally archiving them to protect against another catastrophe.\n\nThe photographs were scanned and stored in high quality. However, the photographs themselves did not have any identifying information. What many of them did have were handwritten or printed notes on the backside giving context for the photographs. NYT used the Google Vision API to scan this text and tag the respective images with that information. Additionally, this pipeline provided opportunities to extract more metadata from the photographs, including landmark recognition, celebrity recognition, and so on. These newly added tags powered its search feature so that anyone within the company and outside could explore the gallery and search using keywords, dates, and so on without having to visit the morgue, three stories down.\n\n## Uber\n\nUber uses Microsoft Cognitive Services to identify each of its seven million-plus drivers in a couple of milliseconds. Imagine the sheer scale at which Uber must operate its new feature called \"Real-Time ID Check.\" This feature verifies that the current driver is indeed the registered driver by prompting them to take a selfie either randomly or every time they are assigned to a new rider. This selfie is compared to the driver's photo on file, and only if the face models are a match is the driver allowed to continue. This security feature is helpful for building accountability by ensuring the security of the passengers and by ensuring that the driver's account is not compromised. 
This safety feature is able to detect changes in the selfie, including a hat, beard, sunglasses, and more, and then prompts the driver to take a selfie without the hat or sunglasses.\n\n###### Figure 8-23. The Uber Drivers app prompts the driver to take a selfie to verify the identity of the driver (image source)\n\n## Giphy\n\nBack in 1976, when Dr. Richard Dawkins coined the term \"meme,\" little did he know it would take on a life of its own four decades later. Instead of giving a simple textual reply, we live in a generation where most chat applications suggest an appropriate animated GIF matching the context. Several applications provide a search specific to memes and GIFs, such as Tenor, Facebook messenger, Swype, and Swiftkey. Most of them search through Giphy (Figure 8-24), the world's largest search engine for animated memes commonly in the GIF format.\n\n###### Figure 8-24. Giphy extracts text from animations as metadata for searching\n\nGIFs often have text overlaid (like the dialogue being spoken) and sometimes we want to look for a GIF with a particular dialogue straight from a movie or TV show. For example, the image in Figure 8-24 from the 2010 _Futurama_ episode in which the \"eyePhone\" (sic) was released is often used to express excitement toward a product or an idea. Having an understanding of the contents makes the GIFs more searchable. To make this happen, Giphy uses Google's Vision API to extract the recognize text and objects\u2014aiding the search for the perfect GIF.\n\nIt's obvious that tagging GIFs is a difficult task because a person must sift through millions of these animations and manually annotate them frame by frame. In 2017, Giphy figured out two solutions to automate this process. The first approach was to detect text from within the image. The second approach was to generate tags based on the objects in the image to supplement the metadata for their search engine. This metadata is stored and searched using ElasticSearch to make a scalable search engine.\n\nFor text detection, the company used the OCR services from the Google Vision API on the first frame from the GIFs to confirm whether the GIF actually contained text. If the API replied in the affirmative, Giphy would send the next frames, receive their OCR-detected texts, and figure out the differences in the text; for instance, whether the text was static (remaining the same throughout the duration of the gif) or dynamic (different text in different frames). For generating the class labels corresponding to objects in the image, engineers had two options: label detection or web entities, both of which are available on Google Vision API. Label detection, as the name suggests, provides the actual class name of the object. Web entities provides an entity ID (which can be referenceable in the Google Knowledge Graph), which is the unique web URL for identical and similar images seen elsewhere on the net. Using these additional annotations gave the new system an increase in the click-through-rate (CTR) by 32%. Medium-to-long-tail searches (i.e., not-so-frequent searches) benefitted the most, becoming richer with relevant content as the extracted metadata surfaced previously unannotated GIFs that would have otherwise been hidden. 
Additionally, this metadata and click-through behavior of users provides data to make a similarity and deduplication feature.\n\n## OmniEarth\n\nOmniEarth is a Virginia-based company that specializes in collecting, analyzing, and combining satellite and aerial imagery with other datasets to track water usage across the country, scalably, and at high speeds. The company is able to scan the entire United States at a total of 144 million parcels of land within hours. Internally, it uses the IBM Watson Visual Recognition API to classify images of land parcels for valuable information like how green it is. Combining this classification with other data points such as temperature and rainfall, OmniEarth can predict how much water was used to irrigate the field.\n\nFor house properties, it infers data points from the image such as the presence of pools, trees, or irrigable landscaping to predict the amount of water usage. The company even predicted where water is being wasted due to malpractices like overwatering or leaks. OmniEarth helped the state of California understand water consumption by analyzing more than 150,000 parcels of land, and then devised an effective strategy to curb water waste.\n\n## Photobucket\n\nPhotobucket is a popular online image- and video-hosting community where more than two million images are uploaded every day. Using Clarifai's NSFW models, Photobucket automatically flags unwanted or offensive user-generated content and sends it for further review to its human moderation team. Previously, the company's human moderation team was able to monitor only about 1% of the incoming content. About 70% of the flagged images turned out to be unacceptable content. Compared to previous manual efforts, Photobucket identified 700 times more unwanted content, thus cleaning the website and creating a better UX. This automation also helped discover two child pornography accounts, which were reported to the FBI.\n\n## Staples\n\nEcommerce stores like Staples often rely on organic search engine traffic to drive sales. One of the methods to appear high in search engine rankings is to put descriptive image tags in the ALT text field for the image. Staples Europe, which serves 12 different languages, found tagging product images and translating keywords to be an expensive proposition, which is traditionally outsourced to human agencies. Fortunately, Clarifai provides tags in 20 languages at a much cheaper rate, saving Staples costs into five figures. Using these relevant keywords led to an increase in traffic and eventually increased sales through its ecommerce store due to a surge of visitors to the product pages.\n\n## InDro Robotics\n\nThis Canadian drone company uses Microsoft Cognitive Services to power search and rescue operations, not only during natural disasters but also to proactively detect emergencies. The company utilizes Custom Vision to train models specifically for identifying objects such as boats and life vests in water (Figure 8-25) and use this information to notify control stations. These drones are able to scan much larger ocean spans on their own, as compared to lifeguards. This automation alerts the lifeguard of emergencies, thus improving the speed of discovery and saving lives in the process.\n\nAustralia has begun using drones from other companies coupled with inflatable pods to be able to react until help reaches. Soon after deployment, these pods saved two teenagers stranded in the ocean, as demonstrated in Figure 8-26. 
Australia is also utilizing drones to detect sharks so that beaches can be vacated. It's easy to foresee the tremendous value these automated, custom training services can bring.\n\n###### Figure 8-25. Detections made by InDro Robotics\n\n###### Figure 8-26. Drone identifies two stranded swimmers and releases an inflatable pod that they cling onto (image source)\n\n# Summary\n\nIn this chapter, we explored various cloud APIs for computer vision, first qualitatively comparing the breadth of services offered and then quantitatively comparing their accuracy and price. We also looked at potential sources of bias that might appear in the results. We saw that with just a short code snippet, we can get started using these APIs in less than 15 minutes. Because one model doesn't fit all, we trained a custom classifier using a drag-and-drop interface, and tested multiple companies against one another. Finally, we discussed compression and resizing recommendations to speed up image transmission and how they affect different tasks. To top it all off, we examined how companies across industries use these cloud APIs for building real-world applications. Congratulations on making it this far! In the next chapter, we will see how to deploy our own inference server for custom scenarios.\n\n# Chapter 9. Scalable Inference Serving on Cloud with TensorFlow Serving and KubeFlow\n\nImagine this: you just built a top-notch classifier. Your goal, as the Silicon Valley motto goes, is to \" _make the world a better place_ ,\" which you're going to do... with a spectacular Dog\/Cat classifier. You have a solid business plan and you cannot wait to pitch your magical classifier to that venture capital firm next week. You know that the investors will question you about your cloud strategy, and you need to show a solid demo before they even consider giving you the money. How would you do this? Creating a model is half the battle, serving it is the next challenge, often the bigger one. In fact, for a long time it was common for training a model to only take a few weeks, but trying to serve it to a larger group of people was a months-long battle, often involving backend engineers and DevOps teams.\n\nIn this chapter, we answer a few questions that tend to come up in the context of hosting and serving custom-built models.\n\n * How can I host my model on my personal server so that my coworkers can play with it?\n\n * I am not a backend\/infrastructure engineer, but I want to make my model available so that it can serve thousands (or even millions) of users. How can I do this at a reasonable price without worrying about scalability and reliability issues?\n\n * There are reasons (such as cost, regulations, privacy, etc.) why I cannot host my model on the cloud, but only on-premises (my work network). Can I serve predictions at scale and reliably in such a case?\n\n * Can I do inference on GPUs?\n\n * How much can I expect to pay for each of these options?\n\n * Could I scale my training and serving across multiple cloud providers?\n\n * How much time and technical know-how will it take to get these running?\n\nLet's begin our journey by looking at the high-level overview of the tools available to us.\n\n# Landscape of Serving AI Predictions\n\nThere is a multitude of tools, libraries, and cloud services available for getting trained AI models to serve prediction requests. Figure 9-1 simplifies them into four categories.\n\n###### Figure 9-1. 
A high-level overview and comparison of different inference serving options\n\nDepending on our inference scenarios, we can make an appropriate choice. Table 9-1 takes a deeper look.\n\nTable 9-1. Tools to serve deep learning models over the network **Category and examples** | **Expected time to first prediction** | **Pros and cons** \n---|---|---\n\nHTTP servers\n\n * Flask\n\n * Django\n\n * Apache OpenWhisk\n\n * Python `http.server`\n\n| <5 minutes |\n\n\\+ Simple to run\n\n\\+ Often runs current Python code\n\n\u2013 Slow\n\n\u2013 Not optimized for AI\n\n**Hosted and managed cloud stacks**\n\n * Google Cloud ML\n\n * Azure ML\n\n * Amazon Sage Maker\n\n * Algorithmia\n\n| <15 minutes |\n\n\\+ Easier GUI\/command-line interfaces\n\n\\+ Highly scalable\n\n\\+ Fully managed, reduces the need for DevOps teams\n\n\u2013 Usually limited to CPU-based inference, can be slow for large models\n\n\u2013 Warm-up query time can be slow\n\n**Manually managed serving libraries**\n\n * TensorFlow Serving\n\n * NVIDIA TensorRT\n\n * DeepDetect\n\n * MXNet Model Serving\n\n * Skymind Intelligence Layer with DeepLearning4J\n\n * Seldon\n\n * DeepStack AI Server\n\n| <15 minutes |\n\n\\+ High performance\n\n\\+ Allows manual controls on optimizations, batching, etc.\n\n\\+ Can run inference on GPU\n\n\u2013 More involved setup\n\n\u2013 Scaling over multiple nodes usually requires extra groundwork\n\n**Cloud AI orchestration frameworks**\n\n * KubeFlow\n\n| ~1 hour |\n\n\\+ Makes scaling training and inference easy to manage\n\n\\+ Portability between cloud providers\n\n\\+ Consistent environments across development and production\n\n\\+ For data scientists, integration with familiar tools such as Jupyter Notebooks for sending models to production\n\n\\+ Enables composing conditional pipelines to automate testing, cascading models\n\n\\+ Uses existing manually managed serving libraries\n\n\u2013 Still evolving\n\n\u2013 For beginners, hosted and managed cloud stacks offer an easier learning curve\n\nIn this chapter, we explore a range of tools and scenarios. Some of these options are easy to use but limited in functionality. Others offer more granular controls and higher performance but are more involved to set up. We look at one example of each category and take a deeper dive to develop an intuition into when using one of those makes sense. We then present a cost analysis of the different solutions as well as case studies detailing how some of these solutions work in practice today.\n\n# Flask: Build Your Own Server\n\nWe begin with the most basic technique of _Build Your Own Server_ (BYOS). From the choices presented in the first column of Table 9-1, we've selected Flask.\n\n## Making a REST API with Flask\n\nFlask is a Python-based web application framework. Released in 2010 and with more than 46,000 stars on GitHub, it is under continuous development. It's also quick and easy to set up and is really useful for prototyping. 
It is often the framework of choice for data science practitioners when they want to serve their models to a limited set of users (e.g., sharing with coworkers on a corporate network) without a lot of fuss.

Installing Flask with `pip` is fairly straightforward:

    $ pip install flask

Upon installation, we should be able to run the following simple "Hello World" program:

    from flask import Flask
    app = Flask(__name__)

    @app.route("/hello")
    def hello():
        return "Hello World!"

    if __name__ == "__main__":
        app.run()

The following is the command to run the "Hello World" program:

    $ python hello.py
    * Running on http://127.0.0.1:5000/ (Press Ctrl+C to quit)

By default, Flask runs on port 5000. When we open the URL _http://localhost:5000/hello_ in the browser, we should see the words "Hello World!," as shown in Figure 9-2.

###### Figure 9-2. Navigate to http://localhost:5000/hello within a web browser to view the "Hello World!" web page

As you can see, it takes barely more than a few lines to get a simple web application up and running. One of the most important lines in that script is `@app.route("/hello")`. It specifies that the path `/hello` after the hostname will be served by the function immediately beneath it. In our case, it merely returns the string "Hello World!" In the next step, we look at how to deploy a Keras model to a Flask server and create a route that serves predictions from our model.

## Deploying a Keras Model to Flask

Our first step is to load our Keras model. The following lines load the model from the _.h5_ file. You'll find the scripts for this chapter on the book's GitHub (see _http://PracticalDeepLearning.ai_) in _code/chapter-9_:

    from tensorflow.keras.models import load_model
    model = load_model("dogcat.h5")

Now, we create the route _/infer_ that supports inference on our images. Naturally, we accept images through `POST` requests:

    import numpy as np
    from flask import request, jsonify
    from PIL import Image

    @app.route('/infer', methods=['POST'])
    def infer():
        # The field name matches the -F image=@dog.jpg form field in the
        # curl command below
        file = request.files['image']
        image = Image.open(file)
        # preprocess() resizes and normalizes the image the same way we did
        # during training; it is defined in the chapter's scripts
        image = preprocess(image)

        predictions = model.predict(image)
        max_index = np.argmax(predictions)
        # We know the labels from the model we trained previously
        label = "cat" if max_index == 0 else "dog"
        return jsonify(predictions=[{"label": label,
                                     "probability": float(predictions[0][max_index])}])

To test the inference, let's use the `curl` command, as follows, on a sample image containing a dog:

    $ curl -X POST -F image=@dog.jpg 'http://localhost:5000/infer'

    {"predictions":[{"label":"dog","probability":0.8525022864341736}]}

As expected, we get a prediction of "dog." This has worked quite well so far. At this point, Flask runs only locally; that is, someone else on the network would not be able to make a request to this server. To make Flask available to others, we can simply change `app.run()` to the following:

    app.run(host="0.0.0.0")

At this point, we can give access to our model to anyone within our network. The next question would be—can we do the same to make the model available to the general public? The answer to that question is an emphatic no! The Flask website has a prominent warning stating _"WARNING: Do not use the development server in a production environment."_ Flask does not support production workloads out of the box and would need custom code to enable that. In the upcoming sections, we look at how to host our models on systems that are meant for production use.
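One common stopgap, short of the production-grade options covered in the rest of this chapter, is to front the very same app with a production WSGI server such as Gunicorn rather than the built-in development server (here, `infer_server` is a placeholder for the name of the module that defines our `app`):

    $ pip install gunicorn
    $ gunicorn --workers 4 --bind 0.0.0.0:5000 infer_server:app

This removes the single-request limitation of the development server, but it still leaves scaling, model versioning, and request batching to us.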
With all of this in mind, let's recap some of the pros and cons of using Flask.

## Pros of Using Flask

Flask provides some advantages, namely:

 * Quick to set up and prototype

 * Fast development cycle

 * Lightweight on resources

 * Broad appeal within the Python community

## Cons of Using Flask

At the same time, Flask might not be your best choice, for the following reasons:

 * Does not scale by default; it is not meant for production loads, and out of the box Flask serves only one request at a time

 * Does not handle model versioning out of the box

 * Does not support batching of requests out of the box

# Desirable Qualities in a Production-Level Serving System

For any cloud service that serves traffic from the public, there are certain attributes we want to look for when deciding on a solution. In the context of machine learning, there are additional qualities we would look for while building inference services. We look at a few of them in this section.

## High Availability

For our users to trust our service, it must be available almost always. Many serious players measure their availability in terms of the "_number of nines_." If a business claims that its service has four 9s availability, it means the system is up and available 99.99% of the time. Even though 99% sounds impressive, Table 9-2 puts the corresponding downtime per year in perspective.

Table 9-2. Downtime per year for different availability percentages **Availability %** | **Downtime per year** 
---|--- 
99% ("two nines") | 3.65 days 
99.9% ("three nines") | 8.77 hours 
99.99% ("four nines") | 52.6 minutes 
99.999% ("five nines") | 5.26 minutes

Imagine how ridiculous the situation would be if a major website like Amazon were only 99.9% available, losing millions in revenue during the eight-plus hours of downtime. Five 9s is considered the holy grail. Anything less than three 9s is typically unsuitable for a high-quality production system.

## Scalability

Traffic handled by production services is almost never uniform over time. For example, the _New York Times_ experiences significantly more traffic during morning hours, whereas Netflix typically experiences a surge in traffic between the evening and late-night hours, when people chill. There are also seasonal factors in traffic: Amazon experiences orders of magnitude more traffic on Black Friday and during the Christmas season.

Higher demand requires more resources to be available and online to serve it; otherwise, the availability of the system would be in jeopardy. A naive way to accomplish this would be to anticipate the highest volume of traffic the system would ever serve, determine the resources necessary to serve that level of traffic, and then allocate that amount all the time, in perpetuity. There are two problems with this approach: 1) if your planning was correct, the resources would be underutilized most of the time, essentially burning money; and 2) if your estimate was insufficient, you might end up affecting the availability of your service and end up with a far worse problem of losing the trust of your customers and ultimately their wallets.

A smarter way to manage traffic loads is to monitor them as they come in and dynamically allocate and deallocate the resources available for service.
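To make the idea concrete, the control logic at the heart of any autoscaler boils down to something like the following. This is a purely illustrative sketch; the thresholds are invented, and real systems such as Kubernetes' Horizontal Pod Autoscaler implement far more robust versions of the same idea:

    def desired_replicas(current_qps, qps_per_replica=30,
                         min_replicas=2, max_replicas=50):
        # Provision enough replicas for the observed load (ceiling division),
        # while never dropping below a safety floor or exceeding a cost ceiling.
        needed = -(-current_qps // qps_per_replica)
        return max(min_replicas, min(max_replicas, needed))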
This ensures that the increased traffic is handled without loss of service while keeping operating costs to a minimum during low-traffic times.\n\nWhen scaling down resources, any resource that is about to be deallocated is quite likely to be processing traffic at that moment. It's essential to ensure that all of those requests be completed before shutting down that resource. Also, crucially, the resource must not process any new requests. This process is called _draining_. Draining is also crucial when machines are taken down for routine maintenance and\/or upgrades.\n\n## Low Latency\n\nConsider these facts. Amazon published a study in 2008 in which it found that every 100 ms increase in latency in its retail website resulted in a 1% loss of profit. A one-second delay in loading the website caused a whopping $1.6 billion in lost revenue! Google found that a 500 ms latency on mobile websites resulted in a traffic drop of 20%. In other words, a 20% decrease in the opportunity-to-serve advertisements. And this does not affect only industry giants. If a web page takes longer than three seconds to load on a mobile phone, 53% of users abandon it (according to a 2017 study by Google). It's clear that time is money.\n\nReporting average latency can be misleading because it might paint a cheerier picture than a ground reality. It's like saying if Bill Gates walks into a room, everyone is a billionaire on average. Instead, percentile latency is the typically reported metric. For example, a service might report 987 ms @ 99th percentile. This means that 99% of the requests were served in 987 ms or less. The very same system could have a 20 ms latency on average. Of course, as traffic to your service increases, the latency might increase if the service is not scaled up to give adequate resources. As such, latency, high availability, and scalability are intertwined.\n\n## Geographic Availability\n\nThe distance between New York and Sydney is nearly 10,000 miles (16,000 km). The speed of light in a vacuum is roughly 186,282 miles per second (300,000 km per second). Silica glass (used in fiber-optic cables) decreases the speed of light by a factor of roughly 30% down to 130,487 miles per second (210,000 km per second). On a piece of fiber-optic running in a straight line between these two cities, the roundtrip travel time alone for a single request is nearly 152 ms. Keep in mind that this does not account for the amount of time it takes for the request to be processed at the server, or the hops that the packets need to make across multiple routers along the way. This level of service would be unacceptable for many applications.\n\nServices that expect to be used throughout the world must be strategically located to minimize latency for the users in those regions. Additionally, resources can be dynamically scaled up or down depending on local traffic, thus giving more granular control. The major cloud providers have a presence on at least five continents (sorry penguins!).\n\n###### Tip\n\nWant to simulate how long the incoming requests would take from your computer to a particular datacenter around the world? Table 9-3 lists a few handy browser-based tools offered by cloud providers.\n\nTable 9-3. 
Latency measurement tools for different cloud providers **Service** | **Cloud provider** 
---|--- 
AzureSpeed.com | Microsoft Azure 
CloudPing.info | Amazon Web Services 
GCPing.com | Google Cloud Platform

Additionally, to get a realistic sense of latency from one location to another, _CloudPing.co_ measures AWS inter-region latency between more than 16 AWS datacenters around the world.

## Failure Handling

There's an old saying that there are only two things that are assured in life—death and taxes. In the twenty-first century, this adage applies not just to humans but also to computer hardware. Machines fail all the time. The question is never _if_ a machine will fail, it's _when_. One of the necessary qualities of a production-quality service is its ability to handle failures gracefully. If a machine goes down, quickly bring up another machine to take its place and continue serving traffic. If an entire datacenter goes down, seamlessly route traffic to another datacenter so that users don't even realize that anything bad happened in the first place.

## Monitoring

If you can't measure it, you can't improve it. Worse, does it even exist? Monitoring the number of requests, availability, latency, resource usage, number of nodes, distribution of traffic, and location of users is vital to understanding how a service performs, finding opportunities to improve it, and, just as importantly, knowing how much we're paying. Most cloud providers already have built-in dashboards providing these metrics. Additionally, recording task-specific analytics, like the time spent on model inference, preprocessing, and so on, can add another level of understanding.

## Model Versioning

We have learned in this book (and will continue to learn all the way to the last page) that machine learning is always iterative. Particularly for applications in the real world, data that the model can learn from is constantly being generated. Moreover, the incoming data distribution might shift over time compared to what the model was trained on, leading to lower predictive power (a phenomenon called _concept drift_). To provide users with the best possible experience, we want to keep improving our models. Every time we train our model with newer data to further improve its accuracy and make the best version yet, we want to make it available to our users as quickly and seamlessly as possible. Any good production-quality inference system should be able to serve different versions of a model, including the ability to swap the live version of the model with another version at a moment's notice.

## A/B Testing

In addition to supporting multiple versions of a model, there are reasons we'd want to serve different versions of the model at the same time, selected by a variety of attributes such as the user's geographic location or demographics, or simply by random assignment.

_A/B testing_ is a particularly useful tool when improving a model. After all, if our spanking-new model were flawed in some way, we'd rather it be deployed to only a small subset of our users than to 100% of them before we catch the flaw. Additionally, if a model meets the criteria for success on that small subset, it provides validation for the experiment and justifies eventually promoting it to all users.

## Support for Multiple Machine Learning Libraries

Last but not least, we don't want to be locked into a single machine learning library.
Some data scientists in an organization might train models in PyTorch, others in TensorFlow, or maybe scikit-learn suffices for non–deep learning tasks. The flexibility to support multiple libraries would be a welcome bonus.

# Google Cloud ML Engine: A Managed Cloud AI Serving Stack

Considering all the desirable qualities of a production environment that we discussed in the previous section, it's generally not a good idea to use Flask for serving users. If you do not have a dedicated infrastructure team and would like to spend more time making better models than deploying them, using a managed cloud solution is the right approach. There are several cloud-based Inference-as-a-Service solutions on the market today. We have chosen to explore Google Cloud ML Engine partly because of its convenient TensorFlow integration and partly because it ties in nicely with the ML Kit material that we touch upon in Chapter 13.

## Pros of Using Cloud ML Engine

 * Easy to deploy models to production with a web-based GUI

 * Powerful and easily scalable to millions of users

 * Provides deep insights into model usage

 * Ability to version models

## Cons of Using Cloud ML Engine

 * High latency; offers only CPUs for inference (as of August 2019)

 * Unsuitable for scenarios involving legal and data privacy constraints where the data must not leave the network

 * Imposes restrictions on the architecture design of complex applications

## Building a Classification API

The following step-by-step guide shows how to go about uploading and hosting our Dog/Cat classifier model on Google Cloud ML Engine:

 1. Create a model on the Google Cloud ML Engine dashboard at _https://console.cloud.google.com/mlengine/models_. Because this is the first time we're using the dashboard, we need to click ENABLE API, as depicted in Figure 9-3.

###### Figure 9-3. Listing page for machine learning models on the Google Cloud ML Engine dashboard

 2. Give the model a name and a description (Figure 9-4).

###### Figure 9-4. Model creation page on Google Cloud ML Engine

 3. After the model is created, we can access it on the listing page (Figure 9-5).

###### Figure 9-5. Model listings page on Google Cloud ML Engine

 4. Click the model to go to the model details page (Figure 9-6) and add a new version.

###### Figure 9-6. Details page of the just-created Dog/Cat classifier

 5. Fill out the necessary information to create the new version. The last field at the bottom requires you to upload the model to Google Cloud Storage before you can use it. Click the Browse button to create a new bucket for storing the model (Figure 9-7).

###### Figure 9-7. Creating a new version for a machine learning model

 6. Create a new bucket with a unique name, a storage class, and a region. After you create this bucket, go to _https://console.cloud.google.com/storage/browser_ (in a separate tab, while keeping the current one open) to find this newly created bucket and upload the model there (Figure 9-8).

###### Figure 9-8. Creating a new Google Cloud Storage bucket within the ML model version creation page

 7. Our Dog/Cat classifier model is an _.h5_ file. However, Google Cloud expects a SavedModel file. You can find the script to convert the _.h5_ file to SavedModel on the book's GitHub repository (see _http://PracticalDeepLearning.ai_) at _code/chapter-9/scripts/h5_to_tf.ipynb_. Simply load the model and execute the rest of the notebook.

 8. In the Google Cloud Storage browser, upload the newly converted model (Figure 9-9) to the bucket you created in step 6.
###### Figure 9-9. Google Cloud Storage Browser page showing the uploaded Dog/Cat classifier model in TensorFlow format

 9. Specify the URI on the model version creation page for the model that you just uploaded (Figure 9-10).

###### Figure 9-10. Add the URI for the model you uploaded to Google Cloud Storage

 10. Click the Save button and wait for the model version to be created. As soon as the model version is created, you can begin making predictions against it.

 11. If it's not already present on your machine, download and install the Google Cloud SDK from the installation website at _https://cloud.google.com/sdk/install_.

 12. You can use the Cloud ML Engine REST API to make your requests. However, for brevity, use the command-line tools in the Cloud SDK. You first need to convert your image into a _request.json_ file using the _image-to-json.py_ script located at _code/chapter-9_:

    $ python image-to-json.py --input dog.jpg --output request.json

 13. Next, use the _request.json_ file created in the previous step to execute a request against our model:

    $ time gcloud ai-platform predict --model DogCat --version v1 \
        --json-instances request.json

    SCORES
    [0.14749771356, 0.8525022864]

    real 0m3.370s
    user 0m0.811s
    sys 0m0.182s

As you can see from the output, we get similar results to those from our Flask server; that is, a prediction of "dog" with 85% confidence.

###### Note

If this is your first time using **`gcloud`**, you need to run the following command to tie the command-line tool to your Google account:

    $ gcloud auth login

Next, select the project using the following command:

    $ gcloud config set project {project_name}

Piece of cake, wasn't it? In our example, we used the Google Cloud SDK to request a prediction for the sake of brevity. In a production scenario, you would instead execute the same prediction request using Google's API endpoints, either by generating HTTP requests or by using their client libraries. You can follow the documentation on Google Cloud for production scenarios.

At this point, the model is ready to be served to any user anywhere in the world using applications on the browser, mobile and edge devices, and the desktop, as well as in cloud environments. Using a hosted stack is a pretty viable option for individuals and organizations who want the flexibility and reliability that the cloud provides while doing minimal setup and maintenance work on the infrastructure.

In contrast, there are situations for which a hosted solution might not be the best approach. Reasons could include pricing models, data privacy issues, legal questions, technical issues, trust concerns, or contractual obligations. In such cases, a solution that is hosted and managed locally (or "on premises") would be preferable.

###### Tip

For processing a large number of images at one time, you can modify _image-to-json.py_ to create a _request.json_ file that contains an array of multiple inputs.

# TensorFlow Serving

TensorFlow Serving is an open source library in the TensorFlow ecosystem for serving machine learning models fast. Unlike Flask, it's built for performance, with low overhead, and designed for use in production. TensorFlow Serving is widely used by large companies to serve their models for prediction services.
It is one of the integral components of TensorFlow Extended (TFX)—an end-to-end deep learning pipeline in the TensorFlow ecosystem.

As we saw when we looked at the desired qualities of a production system, TensorFlow Serving offers low latency, failure handling, high throughput, and model versioning. Another benefit is the ability to serve multiple models at the same time on the same service. It implements several techniques to speed up serving:

 * During server startup, it starts a burst of threads for fast model loading.

 * It uses separate thread pools for loading models and for serving inferences, while giving higher priority to the threads in the inference pool. This is crucial to lowering request latency.

 * It builds minibatches out of incoming asynchronous requests over short windows of time. Just as we have seen the power of batching data on GPUs during training, it aims to bring similar efficiencies to inference; for example, it might wait 500 ms to group together several requests for inference. Although, in the worst case, this adds a 500 ms penalty for the first request in the batch, it reduces the average latency across requests and maximizes hardware utilization.

###### Note

TensorFlow Serving gives you full control over the model rollout procedure. You can serve different models or different versions of the same kind of model in the same process. You just need to make sure that you know the name and location of the version that you want to remove or put into production.

## Installation

There are a few different ways of setting up TensorFlow Serving:

 * Building from source

 * Downloading and installing using APT

 * Deploying Docker images

If you're feeling adventurous, building from source might be the danger you seek. But if you just want to get up and running quickly, we recommend using Docker because it requires minimal steps to get the system up and running. What is Docker, you might ask? Docker provides virtualization of a Linux environment for the applications that run within it. It provides isolation of resources that essentially operates as a clean slate for setting up an environment in which an application can run. Typically, an application and all of its dependencies are packaged into a single Docker container that can then be deployed repeatedly as necessary. Because the application is set up in a clean environment, it reduces the likelihood of configuration and deployment errors. This makes Docker very well suited for running applications in production.

The biggest benefit that Docker provides for us is alleviating "dependency hell," because all the necessary dependencies are packaged within the container. One additional advantage of using Docker is that the process of setting up your application remains more or less the same across different platforms, whether you use Windows, Linux, or Mac.

The Docker installation instructions, depending on the target platform, are available on the Docker home page. This should not take more than a few minutes because the setup is fairly straightforward.
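One prerequisite worth noting: TensorFlow Serving expects the model in SavedModel format, with each version in its own numbered subdirectory. The chapter's _h5_to_tf.ipynb_ notebook performs this conversion; a minimal equivalent sketch (the _/path/to/dogcat_ location is a placeholder that should match the directory mounted into the serving container below) looks like this:

    import tensorflow as tf

    # Load the Keras model trained earlier and export it as a SavedModel.
    # TensorFlow Serving expects numbered version subdirectories, so "1"
    # here simply denotes the first version of the dogcat model.
    model = tf.keras.models.load_model("dogcat.h5")
    tf.saved_model.save(model, "/path/to/dogcat/1/")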
After you've installed Docker, you can run the following command to set up TensorFlow Serving for CPU:

    $ docker run -p 8501:8501 \
        --mount type=bind,source=/path/to/dogcat/,target=/models/dogcat \
        -e MODEL_NAME=dogcat -t tensorflow/serving

For GPU-enabled machines, run the following command instead (note that it uses the GPU build of the TensorFlow Serving image):

    $ docker run -p 8501:8501 --runtime=nvidia \
        --mount type=bind,source=/path/to/dogcat/,target=/models/dogcat \
        -e MODEL_NAME=dogcat -t tensorflow/serving:latest-gpu

In either case, if everything went smoothly, you should have a REST API running on your local port 8501 serving our Dog/Cat classifier.

###### Note

In any inference request, the end-to-end latency is the sum of the time taken by multiple steps along the way. This includes the round-trip network time, the time to serialize/deserialize the request and response objects, and, of course, the time to perform the actual inference. One more component that adds overhead is the serving framework itself; that is, TensorFlow Serving. Google claims that the overhead contributed by TensorFlow Serving is minimal. In its experiments, it observed that TensorFlow Serving alone was able to handle approximately 100,000 QPS per core on a 16 vCPU Intel Xeon E5 2.6 GHz machine. Because it is measuring only the overhead, this excludes the remote procedure call (RPC) time and the TensorFlow inference processing time.

Even though TensorFlow Serving is a great choice for serving inferences from a single machine, it does not have built-in functionality for horizontal scaling. Instead, it is built to be used in conjunction with other systems that can supercharge TensorFlow Serving with dynamic scaling. We explore one such solution in the following section.

# KubeFlow

Throughout this book, we have explored the various steps of an end-to-end deep learning pipeline, from data ingestion and analysis, through distributed training (including hyperparameter tuning) at scale, tracking experiments, and deployment, and eventually to serving prediction requests at scale. Each of these steps is complex in its own right, with its own set of tools, ecosystems, and areas of expertise. People dedicate their lifetimes to developing expertise in just one of these fields. It's not exactly a walk in the park. The combinatorial explosion of the know-how required when factoring in the necessary backend engineering, hardware engineering, infrastructure engineering, dependency management, DevOps, fault tolerance, and other engineering challenges can result in a very expensive hiring process for most organizations.

As we saw in the previous section, Docker saves us the hassle of dependency management by making portable containers available. It helps us make TensorFlow Serving available across platforms easily without having to build it from source code or install dependencies manually. Great! But it still doesn't have an answer to many of the other challenges. How are we going to scale up containers to match rises in demand? How would we efficiently distribute traffic across containers? How do we ensure that the containers are visible to one another and can communicate?

These are the questions answered by _Kubernetes_. Kubernetes is an orchestration framework for automatically deploying, scaling, and managing containers (such as Docker containers). Because it takes advantage of the portability offered by Docker, we can use Kubernetes to deploy to developer laptops as well as thousand-machine clusters in an almost identical manner.
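To give a flavor of what that looks like in practice, here is a hypothetical sketch. The _yourrepo/dogcat-serving_ image name is our own placeholder for a custom image with the SavedModel baked in; it is not part of the chapter's code:

    $ kubectl create deployment dogcat-serving --image=yourrepo/dogcat-serving
    $ kubectl scale deployment dogcat-serving --replicas=3
    $ kubectl expose deployment dogcat-serving --port=8501 --type=LoadBalancer

With these three commands, Kubernetes spreads the replicas across the cluster, restarts them if they crash, and routes incoming traffic among them.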
This helps maintain consistency across different environments, with the added benefit of scalability in an accessible manner. It is worth noting that Kubernetes is not a dedicated solution for machine learning (neither is Docker); rather, it is a general-purpose solution to many of the problems faced in software development, which we use in the context of deep learning.\n\nBut let's not get ahead of ourselves just yet. After all, if Kubernetes were the be-all and end-all solution, it would have appeared in the chapter title! A machine learning practitioner using Kubernetes still needs to assemble all of the appropriate sets of containers (for training, deployment, monitoring, API management, etc.) that then need to be orchestrated together to make a fully functioning end-to-end pipeline. Unfortunately, many data scientists are trying to do exactly this in their own silos, reinventing the wheel building ad hoc machine learning-specific pipelines. Couldn't we save everyone the trouble and make one Kubernetes-based solution for machine learning scenarios?\n\nEnter _KubeFlow_ , which promises to automate a large chunk of these engineering challenges and hide the complexity of running a distributed, scalable, end-to-end deep learning system behind a web GUI-based tool and a powerful command-line tool. This is more than just an inference service. Think of it as a large ecosystem of tools that can interoperate seamlessly and, more importantly, scale up with demand. KubeFlow is built for the cloud. Though not just one cloud\u2014it's built to be compatible with all major cloud providers. This has significant implications on cost. Because we are not tied to a specific cloud provider, we have the freedom to move all of our operations at a moment's notice if a competing cloud provider drops its prices. After all, competition benefits consumers.\n\nKubeFlow supports a variety of hardware infrastructure, from developer laptops and on-premises datacenters, all the way to public cloud services. And because it's built on top of Docker and Kubernetes, we can rest assured that the environments will be identical whether deployed on a developer laptop or a large cluster in a datacenter. Every single way in which the developer setup is different from the production environment could result in an outage, so it's really valuable to have this consistency across environments.\n\nTable 9-4 shows a brief list of readily available tools within the KubeFlow ecosystem.\n\nTable 9-4. Tools available on KubeFlow **Tool** | **Functionality** \n---|--- \nJupyter Hub | Notebook environment \nTFJob | Training TensorFlow models \nTensorFlow Serving | Serving TensorFlow models \nSeldon | Serving models \nNVIDIA TensorRT | Serving models \nIntel OpenVINO | Serving models \nKFServing | Abstraction for serving Tensorflow, XGBoost, scikit-learn, PyTorch, and ONNX models \nKatib | Hyperparameter tuning and NAS \nKubebench | Running benchmarking jobs \nPyTorch | Training PyTorch models \nIstio | API services, authentication, A\/B testing, rollouts, metrics \nLocust | Load testing \nPipelines | Managing experiments, jobs, and runs, scheduling machine learning workflows\n\nAs the joke goes in the community, with so many technologies prepackaged, KubeFlow finally makes our r\u00e9sum\u00e9s buzzword- (and recruiter-) compliant.\n\n###### Note\n\nMany people assume that KubeFlow is a combination of Kubernetes and TensorFlow, which, as you have seen, is not the case. 
It is that and much more.

There are two important parts of KubeFlow that make it unique: pipelines and fairing.

## Pipelines

Pipelines give us the ability to compose steps across the machine learning pipeline and schedule them as complex workflows. Figure 9-11 shows an example of a pipeline. Having visibility into the pipeline through a GUI tool helps stakeholders understand it (beyond just the engineers who built it).

###### Figure 9-11. An end-to-end pipeline illustrated in KubeFlow

## Fairing

Fairing allows us to manage the entire build, train, and deploy lifecycle directly through Jupyter Notebooks. Figure 9-12 shows how to start a new notebook server, where we can host all of our Jupyter Notebooks, run training on them, and deploy our models to Google Cloud using the following few lines of code, all the while staying in the comfort of a very familiar Jupyter environment:

    from fairing.deployers.gcp.gcpserving import GCPServingDeployer
    GCPServingDeployer().deploy(model_dir, model_name, version_name)

###### Figure 9-12. Creating a new Jupyter Notebook server on KubeFlow

## Installation

Creating a new KubeFlow deployment is a fairly straightforward process that is well documented on the KubeFlow website. You can set up KubeFlow on GCP using the browser. Alternatively, you can use the KubeFlow command-line tool to set up a deployment on GCP, AWS, or Microsoft Azure. Figure 9-13 shows a GCP deployment using the web browser.

###### Figure 9-13. Creating a KubeFlow deployment on GCP using the browser

As of this writing, KubeFlow is in active development and shows no signs of stopping. Companies such as Red Hat, Cisco, Dell, Uber, and Alibaba are some of the active contributors, on top of cloud giants like Microsoft, Google, and IBM. Ease and accessibility in solving tough challenges attract more people to any platform, and KubeFlow is doing exactly that.

# Price Versus Performance Considerations

In Chapter 6, we looked at how to improve our model's performance for inference (whether on smartphones or on a server). Now let's look at it from another side: the hardware performance and the price involved.

Often, while building a production system, we want the flexibility of choosing suitable hardware to strike the proper balance between performance, scale, and price for our scenario. Consider building an app that requires cloud-based inference. We can set up our own stack manually (using Flask, TensorFlow Serving, or KubeFlow), or we could use a managed Inference-as-a-Service stack (like Google Cloud ML Engine). Assuming that our service went viral, let's see how much it would cost.

## Cost Analysis of Inference-as-a-Service

For Google Cloud ML Engine, as of August 2019 in North America, it costs a rather inexpensive $0.0401 per hour of combined inference time on a single-core CPU machine. There's also an option for a quad-core CPU machine, but really, a single core should suffice for most applications. Running several queries to the server with a small image of 12 KB took roughly 3.5 seconds on average, as illustrated in Figure 9-14. This does sound slow, and that's partly because inference is happening on a moderate-speed machine and, more importantly, on a CPU server. It's worth mentioning that this benchmark is on a warmed-up machine that has recently received an API request and hence has the model preloaded. For comparison, the first query takes between 30 and 60 seconds.
This shows the importance of keeping the service running constantly or sending frequent warm-up queries. This happens because the Google Cloud ML engine takes down a model if it notices a prolonged period of nonuse.\n\n###### Figure 9-14. Google Cloud ML Engine showing incoming queries and latency of serving the calls, with end-to-end latency at user's end of about 3.5 seconds\n\nIf a request came in at every second for an entire month, there would be a total of 60 x 60 x 24 x 30 = 2,592,000 calls per month. Assuming that each inference takes 3.5 seconds, a single node would be insufficient. The cloud service would quickly realize that and, in response to the increased traffic, bring up three additional machines to handle the traffic. In total, with four machines running for a month at $0.0401 per hour per node, it would cost a grand total of $115.48. To put this into perspective, for two million calls, that's about the cost of a cup of Starbucks coffee a day for an entire month. And let's not forget this is without involving much of the DevOps team members, whose time is expensive. If we took the hypothetical scenario of a Yelp-like service for which users, on average, upload photos of food at 64 QPS, running inferences on them using a classification model would cost only $7,390.\n\n## Cost Analysis of Building Your Own Stack\n\nLess spending and high scalability, now that's a winning combination. But the one downside is the total roundtrip latency of each request. Taking matters into our own hands, getting a VM with a modest GPU on the cloud, and setting up our scaling pipeline (using KubeFlow or the native cloud load-balancing features with TensorFlow Serving), we would be able to respond either in milliseconds or batch a few incoming queries together (say every 500 ms) to serve them. As an example, looking at the inventory of VMs on Azure, for $2.07 per hour, we can rent out an ND6 machine that features an NVIDIA P40 GPU and 112 GiB RAM. By batching incoming requests every 500 ms to 1 second, this machine can serve 64 requests per second at a total cost of $1,490 per month, and faster than the Google Cloud ML Engine.\n\nIn summary, the cost savings and performance benefits of orchestrating our own cloud machine environment kicks in big time when working on large QPS scenarios, as demonstrated in Figure 9-15.\n\n###### Figure 9-15. Cost comparison of infrastructure as a service (Google Cloud ML Engine) versus building your own stack over virtual machines (Azure VM) (costs as of August 2019)\n\n###### Tip\n\nA common question that arises while benchmarking is what is my system's limit? JMeter can help answer this. JMeter is a load-testing tool that lets you perform stress testing of your system with an easy-to-use graphical interface. It lets you create reusable configurations to simulate a variety of usage scenarios.\n\n# Summary\n\nIn this chapter, we answered the question most engineers and developers ask: how do we serve model prediction requests at scale for applications in the real world? We explored four different methods of serving an image recognition model: using Flask, Google Cloud ML, TensorFlow Serving, and KubeFlow. Depending on the scale, latency requirements, and our skill level, some solutions might be more attractive than others. Finally, we developed an intuition into the cost effectiveness of different stacks. Now that we can show our fabulous classifier model off to the world, all that's left is to make our work go viral!\n\n# Chapter 10. 
AI in the Browser with TensorFlow.js and ml5.js\n\nWritten in collaboration with guest author: Zaid Alyafeai\n\nYou're a developer who dreams big. You have a kickass AI model that you would like a lot of people to try. How many is a lot? Ten thousand? A million? No, silly. You like to dream big. How about 100 million people? That's a nice round number. Now convincing 100 million people to download and install an app and make space for it on their phones is not an easy sell. But what if we told you that they all have an app already installed, just for you. No downloads. No installs. No app stores. What kind of black magic is this!? Of course, it's the web browser. And as a bonus, it also runs on your PC.\n\nThis is what Google did with its home page when it decided to launch its first-ever AI doodle to their billions of users (Figure 10-1). And what better theme to pick for it than the music of J.S. Bach. (Bach's parents wanted to call him J.S. Bach, 310 years before JavaScript was even created. They had quite the foresight!)\n\nTo explain briefly, the doodle allowed anyone to write one line (voice) of random notes for two measures using mouse clicks. When the user clicked a button labeled Harmonize, the input would then be processed against hundreds of musical pieces written by Bach that contain between two and four lines (voices) of music. The system would figure out which notes would sound best along with the user's input to create a much richer Bach-like sounding musical piece. The entire process ran in the browser, so Google would not need to scale up its machine learning prediction infrastructure at all.\n\n###### Figure 10-1. The Bach music harmonizer doodle from Google\n\nIn addition to the cost savings and the ability to run on any platform, with a browser we can provide users with a richer, more interactive experience because network latency is not a factor. And of course, because everything can be run locally after the model is downloaded, the end user can benefit from the privacy of their data.\n\nGiven that JavaScript is the language of the web browser, it's useful for us to delve into JavaScript-based deep learning libraries that can run our trained model within users' browsers. And that's exactly what we do in this chapter.\n\nHere, we focus on implementing deep learning models in the browser. First, we look at a brief history of different JavaScript-based deep learning frameworks before moving on to TensorFlow.js and eventually a higher-level abstraction for it called ml5.js. We also examine a few complex browser-based applications such as detecting the body pose of a person or converting a hand-drawn doodle to a photograph (using GANs). Finally, we talk about some practical considerations and showcase some real-world case studies.\n\n# JavaScript-Based Machine Learning Libraries: A Brief History\n\nSince the breakthrough of deep learning in recent years, many attempts have been made to make AI accessible to a wider range of people in the form of web-based libraries. Table 10-1 offers a brief overview of the different libraries in the order in which they were first released.\n\nTable 10-1. 
Historical overview of different JavaScript-based deep learning libraries (data captured as of August 2019) **** | **Active years** | **\u2605 on GitHub** | **Known for** \n---|---|---|--- \nbrain.js | 2015\u2013present | 9,856 | Neural networks, RNNs, LSTMs, and GRUs \nConvNetJS | 2014\u20132016 | 9,735 | Neural networks, CNNs \nSynaptic | 2014\u2013present | 6,571 | Neural networks, LSTMs \nMXNetJS | 2015\u20132017 | 420 | Running MXNet models \nKeras.js | 2016\u20132017 | 4,562 | Running Keras models \nCaffeJS | 2016\u20132017 | 115 | Running Caffe models \nTensorFlow.js (formerly known as deeplearn.js) | 2017\u2013present | 11,282 | Running TensorFlow models on GPU \nml5.js | 2017\u2013present | 2,818 | Easy to use on top of TF.js. \nONNX.js | 2018\u2013present | 853 | Speed, running ONNX models\n\nLet's go through a few of these libraries in more detail and see how they evolved.\n\n## ConvNetJS\n\nConvNetJS is a JavaScript library that was designed in 2014 by Andrej Karpathy as part of a course during his Ph.D. at Stanford University. It trained CNNs in the browser, an exciting proposition, especially in 2014, considering the AI hype was starting to take off, and a developer wouldn't have had to go through an elaborate and painful setup process to get running. ConvNetJS helped introduce AI to so many people for the first time with interactive training demonstrations in the browser.\n\n###### Note\n\nIn fact, when MIT scientist Lex Fridman taught his popular self-driving course in 2017, he challenged students worldwide to train a simulated autonomous car using reinforcement learning\u2014in the browser using ConvNetJS\u2014as shown in Figure 10-2.\n\n###### Figure 10-2. Screenshot of DeepTraffic training a car with reinforcement learning using ConvNetJS\n\n## Keras.js\n\nKeras.js was introduced in 2016 by Leon Chen. It was a Keras port made to work in the browser by using JavaScript. Keras.js used WebGL to run computations on the GPU. It used shaders (special operations for pixel rendering) to run inferences, which made them run much faster than using just the CPU. Additionally, Keras.js could run on a Node.js server on a CPU to provide server-based inferences. Keras.js implemented a handful of convolutional, dense, pooling, activation, and RNN layers. It is no longer under active development.\n\n## ONNX.js\n\nCreated by Microsoft in 2018, ONNX.js is a JavaScript library for running ONNX models in browsers and on Node.js. ONNX is an open standard for representing machine learning models that is a collaboration between Microsoft, Facebook, Amazon, and others. ONNX.js is surprisingly fast. In fact, faster than even TensorFlow.js (discussed in the next section) in early benchmarks, as shown in Figure 10-3 and Figure 10-4. This could be attributed to the following reasons:\n\n * ONNX.js utilizes WebAssembly (from Mozilla) for execution on the CPU and WebGL on the GPU.\n\n * WebAssembly allows it to run C\/C++ and Rust programs in the web browser while providing near-native performance.\n\n * WebGL provides GPU-accelerated computations like image processing within the browser.\n\n * Although browsers tend to be single-threaded, ONNX.js uses Web Workers to provide a multithreaded environment in the background for parallelizing data operations.\n\n###### Figure 10-3. Benchmarking data for ResNet-50 on different JavaScript machine learning libraries on CPU (data source)\n\n###### Figure 10-4. 
Benchmarking data for ResNet-50 on different JavaScript machine learning libraries on GPU (data source)\n\n## TensorFlow.js\n\nSome libraries offered the ability to train within the browser (e.g., ConvNetJS), whereas other libraries offered blazing-fast performance (e.g., the now-defunct TensorFire). deeplearn.js from Google was the first library that supported fast GPU accelerated operations using WebGL while also providing the ability to define, train, and infer within the browser. It offered both an immediate execution model (for inference) as well as a delayed execution model for training (like in TensorFlow 1.x). Originally released in 2017, this project became the core of TensorFlow.js (released in 2018). It is considered an integral part of the TensorFlow ecosystem, and as a result, it is currently the most actively developed JavaScript deep learning library. Considering this fact, we focus on TensorFlow.js in this chapter. To make TensorFlow.js even simpler to use, we also look at ml5.js, which is built on top of TensorFlow.js and abstracts away its complexities, exposing a simple API with ready-to-use models from GANs to PoseNet.\n\n##### From the Creator's Desk\n\nBy Shanqing Cai, senior software engineer at Google and author of _Deep Learning with JavaScript_ (Manning)\n\nThe forebearer of TensorFlow.js, deeplearn.js, originated from an effort at Google to create an intuitive and interactive visualization to teach people how neural networks are trained. This visualization, today known as \"TensorFlow Playground\" and available at _https:\/\/playground.tensorflow.org_, used an early version of deeplearn.js to train a multilayered neural network entirely in the browser. In building TensorFlow Playground, the engineers became impressed by the potential of using WebGL to perform accelerated training and inference of deep learning models in the browser and at the client side. A team of engineers was assembled at Google to realize this vision, which gave birth to today's TensorFlow.js, a full-blown deep learning library that supports hundreds of operations and dozens of neural network layers and runs on environments ranging from the browser to Node.js, from native-mobile apps to cross-platform desktop apps.\n\n##### From the Creator's Desk\n\nBy Daniel Smilkov, software engineer at Google Brain and coauthor of TensorFlow.js\n\nBefore TensorFlow.js, Nikhil [Thorat] and I were building neural network interpretability tools in the browser. To enable a truly interactive experience, we wanted to run inference and compute gradients directly in the browser without sending data to a server. This led to deeplearn.js (you can still see the package with its API on npm), which we released in August 2017. The project got great momentum with creative coders being one of the earliest adopters. Riding on this momentum, the team grew quickly, and six months later we launched TensorFlow.js.\n\n# TensorFlow.js Architecture\n\nFirst, let's take a look at the high-level architecture of TensorFlow.js (see Figure 10-5). TensorFlow.js runs directly in the browser on desktop and mobile. It utilizes WebGL for GPU acceleration, but also can fall back to the browser's runtime for execution on the CPU.\n\nIt consists of two APIs: the Operations API and the Layers API. The Operations API provides access to lower-level operations such as tensor arithmetic and other mathematical operations. 
The Layers API builds on top of the Operations API to provide layers such as convolution, ReLU, and so on.\n\n###### Figure 10-5. A high-level overview of the TensorFlow.js and ml5.js ecosystem\n\nBeyond the browser, TensorFlow.js can also run on a Node.js server. Additionally, ml5.js uses TensorFlow.js to provide an even higher-level API along with several prebuilt models. Having access to all of these APIs at different levels of abstraction allows us to build web apps, not only to do simple inference, but also to train models within the browser itself.\n\nFollowing are some common questions that come up during the development life cycle for browser-based AI:\n\n * How do I run pretrained models in the browser? Can I use my webcam feed for real-time interactivity?\n\n * How can I create models for the browser from my TensorFlow trained models?\n\n * Can I even train a model in the browser?\n\n * How do different hardware and browsers affect performance?\n\nWe answer each of these questions in this chapter, starting with TensorFlow.js before moving on to ml5.js. We explore some rich built-in functionality contributed by the ml5.js community, which would otherwise take a lot of effort and expertise to implement directly on TensorFlow.js. We also look at approaches to benchmarking before looking at some motivating examples built by creative developers.\n\nNow let's take a look at how to take advantage of pretrained models to make inferences within the browser.\n\n# Running Pretrained Models Using TensorFlow.js\n\nTensorFlow.js offers lots of pretrained models that we can directly run in the browser. Some examples include MobileNet, SSD, and PoseNet. In the following example, we load a pretrained MobileNet model. The full code is located on the book's GitHub repository (see _http:\/\/PracticalDeepLearning.ai_) at _code\/chapter-10\/mobilenet-example\/_.\n\nFirst, we import the latest bundle of the library:\n\n