Novel Beings: Regulatory Approaches for a Future of New Intelligent Life

Eds. Dr. Sarah Morley and Dr. David R. Lawrence

Edward Elgar Publishing, December 2022

Exploring an innovative area of academic interest that is set to grow exponentially, this collection offers an original discussion of the divide between proactive and reactive regulatory approaches to emerging biotechnology and Artificial Intelligence (AI) research that is likely to create new forms of morally valuable life.

This fascinating book examines the promises and perils of conflicting approaches to regulating emerging technologies in the unique context of this probable challenge for law and society. An impressive and multidisciplinary selection of expert contributors offers considerations vital to any attempt to address these issues before they become impossible to prevent or rectify. Chapters explore issues such as those posed by genomics, synthetic biology, and neurotechnology, alongside conceptual challenges like the ‘Collingridge dilemma’ of epistemic uncertainty, the role of self-regulation, media portrayals of technology, and the duties we might have to artificial novel beings. More broadly, they discuss the global challenges for society and the law regarding the status of these technological beings, the protections they may warrant and the obligations they may owe to us.

This book will appeal to researchers and academics who are interested in the regulation of emerging technology. It will also provide a beneficial new resource for scholars and postgraduate students studying emerging technology in different fields, such as law, bioethics and philosophy.

The collection is published as part of the Elgar Law, Technology and Society series.


Contents

PART I: PROACTIVE REGULATION

  • Emerging technologies are affecting us in socially, morally, and politically meaningful ways. With this growing recognition, we see the need to develop novel modes of regulation. A recent trend in research and practical interdisciplinary collaboration—known as “embedded ethics”—aims to make a positive impact starting from the early stages of development. Embedded ethics is a process of merging ethics and the social sciences into technology development teams in an effort to identify and address ethical features of emerging technologies. In this paper, the merits and challenges of this process are explained, with a current project called Responsible Robotics serving as a case study. Embedded ethics can be seen as a preparatory mode of regulation, one that may overcome the difficulties of the Collingridge Dilemma: the problem of regulating technology either too early, before its impact is known, or too late for regulation to be truly effective.

Daniel Tigard

  • The court of public opinion may do more harm than good for decision-making around morally sensitive emerging technologies. Hysterical news features on genetically modified children, and the common headlines associating AI with sci-fi genocides, are damaging to the public discourse on emerging science. The fear and indulgence of the ‘wisdom of repugnance’ has previously had a stark impact on policy discussions, such as in the media storm over mitochondrial replacement therapy (the so-called ‘three-parent baby’), and it seems likely that the same would take place in debates over novel beings. Given this, combined with the glacial pace of regulatory development, Lawrence argues we cannot afford to wait until public opinion, and therefore political and legislative will, is taken over by a ‘good headline’ that offers no consideration to those other than ourselves who could be harmed.

David R. Lawrence

  • A forthcoming challenge for corporate regulation is the emergence of new technology through advances in artificial intelligence. Whilst these developments are hugely beneficial to society, they raise ethical dilemmas and create potential for significant harm, such as the development of facial recognition software and the video and audio manipulation tools used in fake news. At present, no regulation exists which specifically addresses the responsibility of corporations in the development, operation, and disposal of these technologies. How we fill these regulatory gaps must be considered.

    There are two regulatory strategies that can be used: hard regulation, and a soft regulatory strategy often referred to as self-regulation or corporate social responsibility (CSR). This article argues that, whilst CSR will always have a role to play, its effectiveness in encouraging companies to behave well is limited. Without this enquiry we may unwittingly grant corporations too much control without appropriate redress for harm.

Sarah Morley

  • This chapter considers two aspects of the regulatory challenges around Artificial Intelligence (AI) – determinism versus values, and the flaws of the first generation of commercial AI. The first identifies a particularly problematic scientific/industry approach to regulation in the tech sector, built on an empirical, sci-fi libertarian culture that approaches AI technology in a deterministic way, one that considers humans as subjects of technology rather than approaching technology as a social construction shaped by humans. In the determinist conception, regulation is either unnecessary or deregulatory, while in a social-construction conception regulation plays a role in shaping technology. The second part of the chapter identifies core problematic issues with the first generation of commercial AI, focused around human bias, explainability, self-interested AI, trust, and AI’s potential to undermine broad legal norms. The chapter concludes that regulation of AI is warranted given the problematic experience of the first wave of AI, and outlines three emerging models utilized by China, the EU, and the US and UK.

Alan Dignam

  • This chapter revisits an earlier work which advocated a more rational approach to evaluating controversial emerging biotechnologies. There has since been a singular failure to adjust approaches to biotechnology regulation in line with accelerating technological development, and a continuation of the same illogical leaps and the same subsequent harms. The new and future technologies that will contribute to the development of novel beings build on ones we are familiar with and will be similarly controversial in their permissibility. We have always acted in arrears, and suffered the consequences when matters outpace and outreach our regulatory capacities. Harris and Lawrence highlight the likely deleterious effects of allowing regressive attitudes to continue as new morally significant technologies become reality, and make a case for evolving our thinking about the most effective ways to regulate for them. There is a moral imperative to no longer stick to the old ways of dealing with new technologies.

David R. Lawrence and John Harris

PART II: REACTIVE REGULATION

  • If we accept that at some point novel beings will be brought into existence, then we need to consider how the law should take account of (the emergence of) such beings. This is a sticky problem, because any attempt to engage in preparatory regulation with respect to novel beings is mired in uncertainty. Put simply, we do not know what type of beings they will be, either in terms of their physical nature/embodiment or mental/cognitive characteristics. As a consequence, we lack the relevant context-dependent information needed to propose a detailed regulatory regime. In light of this epistemic uncertainty, in this chapter we do not propose a detailed account of law and regulation for novel beings. Instead, we outline a range of normative principles which could help guide the regulation of precursor technologies without undermining our ability to appropriately regulate emerging novel beings in the future.

Joseph Roberts and Muireann Quigley

  • This chapter focuses, specifically at the conceptual level, on the potential role patents could play in regulating ‘novel being’ technology in an ‘ethical’ manner, i.e. in steering the development and use of ‘novel being’ technologies in an ethical manner. The term ‘novel being’ is used in this chapter to refer to new forms of ‘beings’ which may be created via many different types of technologies, for example, artificial technologies, biotechnologies, etc. Crucially, such beings are imagined as beings that display characteristics akin to humans, such as sentience, agency and autonomy. The chapter argues that patents have the potential to act as drivers, blockers and guiders of ‘ethical’ approaches to the development and use of ‘novel being’ technologies by rightsholders, or by third parties. However, deeper investigation of such issues is warranted if such approaches were to be utilised more broadly, in either a reactive or proactive manner, to assist in the regulation of ‘novel being’ technologies.

Aisling McMahon

  • Future artificial beings (ABs) may possess the kind of properties that give rise to moral status. At that point, it seems uncontentious that they would deserve protection from certain kinds of ill-treatment. While we can presently be confident that no ABs have reached that point, they are becoming increasingly capable of acting in ways that simulate possession of qualities like sentience and even personhood. This has generated some concerns about whether there should be restrictions placed on how we treat these beings now. In the first part of this chapter, we consider arguments in the context of our present situation, when we are certain that ABs do not care how we treat them but can behave as if they do. In the second part, we turn our attention to a future situation wherein ABs have reached a stage where we are uncertain about their capacities and moral status.

Colin Gavaghan and Mike King

