Bioethics ISSN 0269-9702 (print); 1467-8519 (online) Volume 29 Number 2 2015 pp 118–125
POLICY ON SYNTHETIC BIOLOGY: DELIBERATION, PROBABILITY, AND THE PRECAUTIONARY PARADOX CHRISTOPHER WAREHAM AND CECILIA NARDINI
Keywords synthetic biology, deliberation, precautionary principle, Bayesianism, probability, risk management, ethics
ABSTRACT Synthetic biology is a cutting-edge area of research that holds the promise of unprecedented health benefits. However, in tandem with these large prospective benefits, synthetic biology projects entail a risk of catastrophic consequences whose severity may exceed that of most ordinary human undertakings. This is due to the peculiar nature of synthetic biology as a ‘threshold technology’ which opens doors to opportunities and applications that are essentially unpredictable. Fears about these potentially unstoppable consequences have led to declarations from civil society groups calling for the use of a precautionary principle to regulate the field. Moreover, the principle is prevalent in law and international agreements. Despite widespread political recognition of a need for caution, the precautionary principle has been extensively criticized as a guide for regulatory policy. We examine a central objection to the principle: that its application entails crippling inaction and incoherence, since whatever action one takes there is always a chance that some highly improbable cataclysm will occur. In response to this difficulty, which we call the ‘precautionary paradox,’ we outline a deliberative means for arriving at a threshold of probability below which potential dangers can be disregarded. In addition, we describe a Bayesian mechanism with which to assign probabilities to harmful outcomes. We argue that these steps resolve the paradox. The rehabilitated PP can thus provide a viable policy option to confront the uncharted waters of synthetic biology research.
INTRODUCTION Synthetic biology, or the design and creation of artificial biological entities and functions, is a cutting-edge area of research that holds the promise of unprecedented health benefits. The Presidential Commission for the Study of Bioethical Issues (PCSBI) suggests that: [s]ynthetic biology has the opportunity to advance human health in a variety of ways. Improved production of drugs and vaccines, advanced mechanisms for personalised medicine, and novel, programmable
drugs and devices for prevention and healing are among a few of the expected achievements.1 However, in tandem with these large prospective benefits, synthetic biology entails a risk of catastrophic consequences whose severity may exceed that of most ordinary
1 Presidential Commission for the Study of Bioethical Issues (PCSBI). 2010. New Directions: the Ethics of Synthetic Biology and Emerging Technologies. Washington DC: PCSBI. Available at: http://www.bioethics.gov/documents/synthetic-biology/PCSBI-Synthetic-Biology-Report-12.16.10.pdf [accessed 27 Sep 2012].
Address for correspondence: Christopher Wareham, Department of Philosophy, University of Johannesburg, South Africa. Email: [email protected]
Conflict of interest statement: No conflicts declared
© 2013 John Wiley & Sons Ltd
human undertakings. This is due to the peculiar nature of synthetic biology as a ‘threshold technology’ which opens doors to opportunities and applications that are essentially unpredictable.2 For instance, Bedau and Parke point out that ‘the potential for open-ended evolution makes the long-term consequences of creating [artificial life forms] extremely unpredictable.’3 Fears about these unpredictable and potentially unstoppable consequences have led to declarations from civil society groups calling for the use of a precautionary principle (PP) to regulate the field.4 In its stronger formulations the PP blocks an action, such as undertaking a research project, if the action may possibly lead to harmful consequences. Although other research fields have been seen as appropriate contexts for a precautionary attitude, calls for the PP to regulate synthetic biology have been particularly vociferous, as discussed below. Nonetheless, despite widespread political recognition of a need for caution, the stronger versions of the precautionary principle have been extensively criticized as a guide for regulatory policy. Among the most forceful criticisms is the claim that the PP urges us to take into account only the possible harmful consequences of an action and to disregard the potential benefits.5 However, the precautionary principle is prevalent in law and international agreements, such as the Cartagena Protocol on Biosafety6 and the European Commission Communication on the Precautionary Principle.7 As such its rejection would require substantial changes to institutional frameworks. Moreover, from the action of hundreds of civil society groups, it appears that citizens want a precautionary principle to regulate synthetic biology.8 This, along with its
2 M. Bedau & E. Parke eds. 2009. The Ethics of Protocells. Cambridge MA: MIT Press. 3 Ibid: 9. 4 See, for example, Friends of the Earth U.S. (FOE-US), International Center for Technology Assessment (ICTA), Action Group on Erosion, Technology and Concentration (ETC). 2012. Principles for the Oversight of Synthetic Biology. Washington DC: FOE-US, ICTA; Montreal: ETC. Available at: http://www.biosafety-info.net/file_dir/15148916274f6071c0e12ea.pdf [accessed 27 Sep 2012]. Use of the PP is explicitly invoked also in a 2006 Open Letter concerning the oversight of synthetic biology signed by several civil action groups such as Greenpeace and Genewatch UK. Available at http://www.genewatch.org/article.shtml?als[cid]=396423&als[itemid]=537746 [accessed 7 Mar 2013]. 5 C. Sunstein. Beyond the Precautionary Principle. Univ PA Law Rev 2003; 151: 1003–1058. 6 Secretariat of the Convention on Biological Diversity (SCBD). 2000. Cartagena Protocol on Biosafety to the Convention on Biological Diversity: text and annexes. Montreal: SCBD. Available at: http://bch.cbd.int/database/attachment/?id=10694 [accessed 27 Sep 2012]. 7 Commission of the European Communities (CEC). 2000. Communication from the Commission on the Precautionary Principle. Brussels: CEC. Available at: http://ec.europa.eu/dgs/health_consumer/library/pub/pub07_en.pdf [accessed 27 Sep 2012]. 8 FOE-US, ICTA, ETC, op. cit. note 4.
endorsement by democratically elected politicians, may give its adoption, particularly in the context of synthetic biology, a degree of democratic legitimacy. We start from the assumption that, since there is public demand for, and institutional recognition of, the precautionary principle, a coherent version should be explicated to provide policy guidance. Given this aim, we seek to rehabilitate the PP by focusing on a plausible tenet we refer to as the evidence-harm proportionality rule. In the following sections we outline an intuitive version of the precautionary principle that integrates the evidence-harm proportionality rule in probabilistic terms. Thereafter we raise and respond to criticisms that have been levelled against the precautionary principle, and which also apply to the particular version of the PP we discuss. We argue that these problems can be resolved and that the rehabilitated PP can provide a viable policy option to confront the uncharted waters of synthetic biology research. Before moving on, a clarification about the role of the precautionary principle, and thus about the scope of our article, is in order. It is important to delineate the decision context in which we see the precautionary principle as having potential usefulness. Some civil action groups, such as the Action Group on Erosion, Technology and Concentration (ETC), appear to want to extend the precautionary principle to the evaluation of a whole field, such that if any work in synthetic biology is to commence, all synthetic biology research must be proven to be sufficiently low risk.9 However, in our view, applying the rule to the category ‘synthetic biology research’ in this way is deeply problematic for several reasons.
Not least amongst these is that a whole category of entirely safe projects might be found guilty by association and suspended.10 Therefore we focus on a narrower decision context: the evaluation of the risks posed by single research projects and commercial initiatives. Within this scope, we will argue it is possible to evaluate probabilities and apply the precautionary principle effectively.
THE EVIDENCE-HARM PROPORTIONALITY RULE The precautionary principle is usually interpreted as an epistemic requirement for determining the circumstances,
9 Action Group on Erosion, Technology and Concentration (ETC). 2007. Extreme Genetic Engineering: An Introduction to Synthetic Biology. Montreal: ETC. Available at: http://www.etcgroup.org/sites/www.etcgroup.org/files/publication/602/01/synbioreportweb.pdf [accessed 27 Sep 2012]. 10 Newson also notes that synthetic biology is difficult to define, in part due to overlap between a variety of fields. See A.J. Newson. Current Ethical Issues in Synthetic Biology: Where Should We Go from Here? Account Res 2011; 18.3: 181–193.
or knowledge conditions in which a research project is too dangerous to pursue.11 One plausible way to formulate this is as a requirement that the conditions of applying precautionary action should be sensitive to the magnitude of the possible harm involved.12 In this case, the degree of possible harm an activity may cause will impact on the evidential standards that are required. For instance, the ETC claim that ‘[w]hether by deliberate misuse or as a result of unintended consequences, synthetic biology will introduce new and potentially catastrophic societal risks.’ Thus, they urge that ‘in keeping with the Precautionary Principle, synthetic microbes should be treated as dangerous until proven harmless.’13 Because synthetic biology imposes ‘catastrophic’ risks, a stronger evidential standard, proof of harmlessness, is required. An intuitive rule for assessing dangers can be seen to underlie this reading of the PP: If the potential harm of an activity is greater, stronger evidence that the harm will not occur is required in order for the activity to take place. Or as a corollary: if the potential harm of an activity is greater, weaker evidence that the harm will take place is required in order to prevent the activity. We will refer to this tenet as the evidence-harm proportionality rule (ehp). This proportionality rule has been formalized within a risk-prevention framework.14 The notion of risk is often modelled as the expected value of some outcome seen as undesirable. At the outset, it should be emphasized that ‘expected value’ is a potentially misleading technical term. It is not the value that we ‘expect’ an event to have in a normal sense. Instead it is a way of combining probabilities and outcome values for the purpose of comparison, and in order to arrive at a decision. Arriving at an expected value entails combining the probabilities of the outcome occurring with an assessment of the extent of the corresponding harm.
Evidence about the likelihood of a harmful outcome can inform this assessment and change the probability assigned to the harmful event’s occurrence. Evidence can thereby affect the expected value of an event and thus affect our decision about preventing the action. Note that it will not always do so. Evidence may confirm or strengthen a previous valuation either of the harm, or of the probability that the harm will occur. In these cases, the risk will not
change, but our confidence in the accuracy of the assessment will. Thus, understood in probabilistic terms, the evidence-harm proportionality rule holds that if the potential harm of an activity is greater, a higher probability that the harm will not occur is required in order for the activity to take place. Or, as a corollary, if the potential harm of an activity is greater, a lower probability that the harm will take place is required in order to prevent the activity. Given the above, a fruitful way of understanding the precautionary principle is as a limitation on the level of risk that is acceptable for an activity to pose. The precautionary principle mandates taking action, such as stopping the research project, as soon as the risk associated with it becomes substantial. Since the risk is proportional both to the magnitude of the potential harm and to the probability of it occurring, a probabilistic reading of the ehp follows straightforwardly: if the potential harm is greater, a lower probability of it occurring is sufficient to require preventive measures.15 To an extent this seems like common sense. We usually think that evidential requirements increase in proportion to the harm that may be suffered. The evidence-harm proportionality rule can, for instance, be seen as the motivation behind the differing evidential standards required in criminal and civil trials. Civil cases are decided on the balance of probabilities, while in criminal trials, which potentially have more severe consequences such as the death penalty, evidence is required to demonstrate guilt beyond reasonable doubt. Even a low probability of the harm – an innocent person being convicted – is enough to discourage the guilty verdict. In the following we will only consider the version of the precautionary principle that incorporates the ehp, which we label as the ehpPP.
While many cases in which the PP is invoked can be seen as appeals to ehpPP, it should be stressed that the ehpPP is not identical to all instantiations of the PP. Rather, we propose that the ehp is a defensible epistemological tenet, which is often present, though sometimes underplayed in discussions of the precautionary principle.
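The probabilistic reading of the ehp can be rendered as a short decision-rule sketch. The following is a minimal illustration; the fixed risk ceiling and the harm figures are hypothetical numbers of our own choosing, not values proposed anywhere in the literature.

```python
# Sketch of the evidence-harm proportionality rule (ehp) in probabilistic
# form. The risk ceiling and harm magnitudes are illustrative assumptions.

MAX_ACCEPTABLE_RISK = 1000.0  # hypothetical ceiling on expected harm (arbitrary units)

def precaution_required(harm_magnitude: float, harm_probability: float) -> bool:
    """Trigger precautionary action when expected harm (risk) exceeds the ceiling.

    Because risk = probability * magnitude, the probability needed to trigger
    precaution shrinks as the potential harm grows -- the ehp rule.
    """
    expected_harm = harm_probability * harm_magnitude
    return expected_harm > MAX_ACCEPTABLE_RISK

# A moderate harm needs a fairly high probability to trigger precaution...
assert not precaution_required(harm_magnitude=10_000, harm_probability=0.05)
assert precaution_required(harm_magnitude=10_000, harm_probability=0.2)
# ...while a catastrophic harm is blocked even at a tiny probability.
assert precaution_required(harm_magnitude=10**9, harm_probability=1e-5)
```

The last assertion is the ehp corollary at work: once the stakes are large enough, even a one-in-a-hundred-thousand chance is sufficient to trigger precaution, which is precisely what sets up the paradox discussed next.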
THE PRECAUTIONARY PARADOX
11 Sunstein, op. cit. note 5; J. Harris & S. Holm. Extending Human Lifespan and the Precautionary Paradox. J Med Philos 2002; 27: 355–368; P. Sandin et al. Five Charges against the Precautionary Principle. J Risk Res 2002; 5: 287–299. An alternative conception is the view that PP is an ethical principle like justice or human dignity. For a discussion, see R.H.J. Ter Meulen. The Ethical Basis of the Precautionary Principle in Health Care Decision Making. Toxicol Appl Pharm 2005; 207: 663–667. 12 Such a proportionality requirement is present for instance in CEC, op. cit. note 7. 13 ETC, op. cit. note 9, p. 50. 14 B. Osimani. An Epistemic Analysis of the Precautionary Principle. Dilemata 2013; 5: 149–167.
However, the ehpPP shares an unfortunate consequence with stronger versions of the PP. As critics of the strong PP have argued, paradox results when harms are so extreme as to justify unfeasibly high standards of evidence both for and against a particular action.16
15 Ibid: 152. 16 Sunstein, op. cit. note 5; Harris & Holm, op. cit. note 11; S. Clarke. Future Technologies, Dystopic Futures and the Precautionary Principle. Ethics Inf Technol 2005; 7: 121–126.
This paradoxical outcome, which we will refer to as the precautionary paradox, arises in the case of ehpPP because when a potential harm has an extremely high value, even a vanishingly small probability of it occurring is sufficient to surpass the limit of the maximum risk tolerated, and therefore to trigger the precautionary action. As discussed in the introduction, synthetic biology may involve the risk of catastrophic harms to a greater extent than many other scientific activities. Hence, assessment of synthetic biology research using the ehpPP is liable to the precautionary paradox. To see this, consider that one extreme possibility is that a project in synthetic biology could result in the destruction of all human life by initiating a devastating pandemic. Thus the ehpPP is likely to recommend suspending the project, even if the probability of catastrophe is extremely low. However, there is also a small chance that if we fail to work on the very same project enormous harms will be incurred, say, if it would have allowed us to prevent a future naturally occurring pandemic. In this case, a low probability of preventing harm counsels against suspending the project. As a result the ehpPP potentially recommends two opposing courses of action: we should both pursue and abandon the project. Although initially plausible, even the precautionary principle that incorporates the evidence-harm proportionality rule is paradoxical in cases of extreme harm. If the principle tends to recommend mutually exclusive courses of action, how can it be used to regulate research on synthetic biology? In response to this difficulty, Sandin and colleagues propose that a threshold requirement should be employed.17 Such a requirement would entail that below a certain evidential level possible harms can be ignored. If a potential harm is sufficiently unlikely then it need not be considered. 
On this view, a synthetic biology project would escape precautionary action if the likelihood of harm were situated below the designated minimum probability threshold. The problem of paradox appears to fall away, since only activities with a reasonable chance of causing harm are considered. Of course, in some instances we may still be torn in two directions. Recent research into creating a highly infectious genetically modified version of the bird flu (H5N1) virus is, arguably, an example of this.18 However, when this indecision occurs, it is because both pursuing and not pursuing the project have a realistic probability of causing great harm. The threshold would resolve some cases of paradox that involve extremely low probabilities, but it is impossible and indeed undesirable to remove all cases of legitimate balancing of reasonable threats. In
17 Sandin et al., op. cit. note 11. 18 S. Herfst et al. Airborne Transmission of Influenza A/H5N1 Virus Between Ferrets. Science 2012; 336: 1534–1541.
many cases in which the PP is invoked, the threshold may enable a decision by allowing us to reasonably discard extremely low probability harmful outcomes. In other cases the threshold will be able to show us that the risk of harm is sufficient to require precautionary action. Although a threshold would help to remove indecision that results from low probability extreme harms, the threshold solution raises two further difficulties. The first, which we refer to as the problem of arbitrariness, is that it is difficult to provide a justification for any particular minimum probability threshold. The second problem, which we call the problem of ignorance, is that the precautionary principle is invoked in situations of ignorance in which probabilities are underdetermined. If so, it appears impossible to say whether the probability of a particular outcome falls above or below the threshold. We would not know whether an outcome is sufficiently unlikely to be ignored. In what follows we outline these problems in more detail. We argue that thresholds can be justified by appealing to deliberative methods, thus resolving the problem of arbitrariness. Thereafter, we claim that the problem of ignorance can be tempered with recourse to the tools of Bayesian epistemology.
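The threshold amendment can be sketched in the same style as the rule itself. In the following minimal illustration, the probability floor, the risk ceiling, and the outcome figures are all hypothetical assumptions of ours, chosen only to show the mechanics.

```python
# Sketch of the threshold amendment: harms whose probability falls below a
# minimum credible threshold are disregarded before risk is computed.
# All numeric values here are hypothetical illustrations.

MIN_PROBABILITY = 1e-6       # deliberatively set floor: ignore anything less likely
MAX_ACCEPTABLE_RISK = 1000.0

def risk(outcomes):
    """Expected harm, counting only outcomes above the probability floor."""
    return sum(p * harm for p, harm in outcomes if p >= MIN_PROBABILITY)

def decide(pursue_outcomes, suspend_outcomes):
    """Return the action (if any) whose residual risk stays acceptable."""
    pursue_risk = risk(pursue_outcomes)
    suspend_risk = risk(suspend_outcomes)
    if pursue_risk > MAX_ACCEPTABLE_RISK and suspend_risk > MAX_ACCEPTABLE_RISK:
        return "genuine dilemma"  # both options carry substantial risk
    return "suspend" if pursue_risk > suspend_risk else "pursue"

# Without the floor, a one-in-a-billion catastrophe on EITHER side would
# trigger precaution against both pursuing and suspending (the paradox).
pursue = [(1e-9, 1e12)]   # escaped-organism pandemic: vanishingly unlikely
suspend = [(1e-9, 1e12)]  # missed prevention of a pandemic: equally unlikely
assert decide(pursue, suspend) == "pursue"  # both sub-threshold risks are discarded

# When both sides carry a realistic chance of great harm, indecision remains,
# as it should: the threshold does not abolish legitimate balancing.
assert decide([(0.01, 1e6)], [(0.01, 1e6)]) == "genuine dilemma"
```

The second assertion reflects the point made above about the H5N1 case: where both courses of action have a realistic probability of causing great harm, the threshold leaves the dilemma intact rather than dissolving it artificially.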
DELIBERATIVE THRESHOLD SETTING The above solution to the problem of paradox requires the setting of a threshold below which particularly improbable harms can be ignored, thus ensuring that only ‘realistic’ possibilities of harm are considered in applying the PP. However, as Bedau and Triant suggest, such a threshold may well be arbitrary.19 Where and how should the line be drawn that separates a reasonable probability of harm from an unreasonable one? Below we sketch and defend a proposal that mitigates the arbitrariness of assigning a minimum probability threshold by appealing to the thresholds that people actually want. In essence, our proposal is that the assignment of a minimum threshold should be arrived at by ascertaining thresholds that would be endorsed by citizens. That is, the threshold should reflect citizens’ informed evaluation of the probabilities that can be ignored, and those that must be taken into account. If harms are so unlikely that they fall below the threshold, then they should be ignored. An initial question concerns how the judgements of citizens should be arrived at. There are many ways of incorporating citizen valuations in public decision making.20 However, one promising method of canvassing
19 M.A. Bedau & M. Triant. 2009. Social and Ethical Implications of Creating Artificial Cells. In The Ethics of Protocells. M. Bedau & E. Parke eds. Cambridge MA: MIT Press: 31–48. 20 For an excellent discussion of the perceived strengths and weaknesses of various methods of incorporating citizen judgements in the context of health state valuation, see J. Wolff et al. Evaluating Interventions in Health: A Reconciliatory Approach. Bioethics 2011; 26(9): 455–463.
citizens’ opinions is the focus group. We envisage that focus groups – panels of citizens, perhaps chosen in the same way that jury members are selected – would be asked questions that explore the lower boundaries of probability that can be reasonably considered in cases of extreme harm. This raises further questions: who would have the authority to conduct focus groups? How are participants chosen? And what happens when there is disagreement within and between groups? It is important to note that these questions have been raised and answered in other institutional contexts, such as the evaluation of health states for the purposes of resource allocation.21 However, there are many ways to design and conduct focus groups, and it is unlikely that a one-size-fits-all answer will be effective. Instead our approach below is to paint in broad strokes how such focus groups could function as a solution to the threshold problem. Briefly, we propose that an investigator appointed by a relevant authority would provide examples to explore the probability space that people are prepared to take into account when deciding to take precautions. Willingness to pay (WTP) is a common method for evaluating people’s preferences about risks. Applying WTP methods, for instance, the investigator may question how much people are willing to pay in order to avoid low probability events, and derive a threshold in this manner.22 A further aspect of our proposal is that we advocate that the process whereby the threshold is derived should be deliberative in character.23 This deliberative aspect involves the participants modifying their assessments in light of consistency conditions, known cognitive errors and discussion with other participants. Even with deliberation, it remains likely that participants, or groups of participants, will arrive at different thresholds. If so, then, as mentioned, it is important to have methods of resolving such conflicts.
One such method is to aggregate the thresholds, although other possibilities are available.24 The endpoint of this process is to arrive at a threshold of reasonable probability that is endorsed by reasonable people and which can be applied more generally and consistently to policy decisions. If a synthetic biology research project has a given probability of harm, this can
21 D.W. Stewart, P.N. Shamdasani & D.W. Rook. 2007. Focus Groups: Theory and Practice. Thousand Oaks, CA: Sage Publications. 22 WTP is not without its critics. However, we assume that, as in the context of healthcare valuations, WTP or a suitable surrogate is available. For an overview of WTP methods see B. O’Brien & J. Viramontes. Willingness to Pay: a Valid and Reliable Measure of Health State Preference? Med Decis Making 1994; 14: 289–297. 23 This fulfils the PCSBI’s ‘principle of democratic deliberation.’ PCSBI, op. cit. note 1, p. 5. It also furthers the European Commission’s aim that decision procedures involving public risks should involve ‘to the extent reasonably possible all interested parties.’ CEC, op. cit. note 7, p. 4. 24 One alternative might be to calculate the average threshold weighted by the number of votes.
be compared with the deliberatively established probability threshold and precautions can be taken (or not) in a consistent manner. It might be objected that this solution is epistemically problematic. As Sunstein has suggested, ordinary people are notoriously bad at dealing with probabilities, in part since we are subject to a wide variety of cognitive errors.25 These susceptibilities may mean that threshold setting by focus groups may result in irresponsible risk or, more likely, excessive precaution, due to cognitive shortcomings in understanding probabilities. Deliberative threshold setting may arrive at the wrong thresholds because citizens make mistakes. This objection is not unique to our position. Indeed it is one of the oldest complaints against democratically oriented methods. Plato’s Republic, for instance, criticizes democracy on similar epistemic grounds. In response to this type of objection, we make three replies. The first reply points to the additive epistemic benefits of having more participants in a decision process. Estlund and colleagues cite Condorcet’s jury theorem in support of the idea that adding more participants to decision-making can increase the likelihood of arriving at the right outcomes. Condorcet demonstrated that increasing the number of decision-makers ‘can make a group more likely to give correct answers than the average member, or even than the most competent member.’26 The implication of Condorcet’s theorem is that if citizens are on average more likely to be right than not, then as the number of participants grows, the outcome of procedures that involve the public becomes extremely likely to be correct. However, Condorcet’s theorem only helps the case for deliberative threshold setting if people are likely to have good ideas about what thresholds are reasonable. In support of this idea, our second response emphasizes the educative effects of deliberative threshold setting.
As mentioned, the deliberative process we advocate includes discussion of the cognitive errors that people tend to make. A suitably conducted deliberative process could reduce the impact of cognitive mistakes, such as the act-omission bias, on threshold criteria. Thus Sunstein’s concern about flawed decision-making need not apply: informed citizens are better equipped to assign threshold values that navigate between overzealous precaution and rash risk-taking. The above arguments undermine the idea that deliberative threshold setting is epistemically defective. Our final response is that even if we denied the epistemic benefits of deliberation, it would still be possible to mitigate the idea that a deliberative threshold is arbitrary by
25 Sunstein, op. cit. note 5. 26 D.M. Estlund et al. Democratic Theory and the Public Interest: Condorcet and Rousseau Revisited. Am Polit Sci Rev 1989; 83: 1317–1340.
pointing to an ethical benefit: deliberative threshold setting involves more stakeholders in decisions that affect them directly. By including citizens in decisions about the degree of risk they will be exposed to, this method expresses respect for citizens’ autonomous choices. In this way, the adoption of our proposal would increase the legitimacy of decisions about the potentially dangerous outcomes of synthetic biology projects. Thus, even if the epistemic standards of the deliberative threshold do not satisfy everyone, its ethical side effects mitigate the problem of arbitrariness. Thus far we have argued that assigning a threshold makes it possible to avoid the paradoxical consequences of the evidence-harm proportionality rule. Moreover, we argued that the epistemic and ethical benefits of deliberation mitigate the potential arbitrariness of such a threshold. Thus deliberative threshold setting provides a reasonable way to delineate the space of probability that we are prepared to deal with when making decisions about research projects in synthetic biology.
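Condorcet’s jury theorem, invoked in the first reply above, is easy to check numerically. The following is a small sketch under the theorem’s own idealizing assumptions: independent voters, each correct with the same better-than-chance probability.

```python
# Numerical illustration of Condorcet's jury theorem. The independence and
# fixed-competence assumptions are idealizations required by the theorem.
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, reaches the correct answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

# With modest individual competence (60%), group accuracy climbs with size:
assert abs(majority_correct(1, 0.6) - 0.6) < 1e-9
assert majority_correct(11, 0.6) > 0.7
assert majority_correct(101, 0.6) > 0.95
```

This is the sense in which the group can outperform its average member: a panel of a hundred and one 60%-reliable citizens is right far more often than any one of them.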
THE IGNORANCE PROBLEM However, there remains a problem in determining whether the probability of particular outcomes falls inside or outside this probability space. As mentioned earlier, the problem of ignorance is that, if we cannot assign probabilities to outcomes, we cannot say whether particular harmful outcomes fall above or below the deliberatively established threshold. We now turn to this problem. As Bedau and Triant point out, the most challenging instances of decision-making involving the precautionary principle concern ‘decisions in the dark’ – cases in which the empirical evidence for a particular choice is inconclusive.27 Again, synthetic biology research is likely to involve decisions of this type. The creation of synthetic life has had, and will continue to have, surprising results. How are we to assign definite probability values to such outcomes? If a probability value cannot be assigned with reasonable confidence to the possible outcomes of a potentially dangerous research program, there is no way to assess whether the risk is an acceptable one or whether precautionary actions should be taken. Under such severe ignorance the threshold for probability that we advocate is not of much use. In the remainder of this article we argue that the problem of ignorance associated with deciding in the dark can be mitigated by adopting a Bayesian framework for assessing the probabilities involved in the decision.
27 Bedau & Triant, op. cit. note 19.
BAYESIAN APPROACHES TO PROBABILITY Bayesian methods can be understood by contrasting them with frequentist approaches. The most common interpretation of probability, so-called classical or frequentist, relies on the frequency of a phenomenon in order to ground the probability of its occurrence. For instance, we expect a fair coin to have a 50% probability of landing on heads because we know that if we tossed it a number of times it would land on heads on average half of these times. In a Bayesian framework, instead, the probability assigned to an event corresponds to the degree of belief that a rational agent holds about the event occurring.28 In the case of the coin toss described above, a Bayesian would ground the probability statement of 50% on the fact that there is no reason to believe that one of the two possible outcomes is more likely than the other. In order to be represented as a probability in the Bayesian framework, the degrees of belief of a rational agent should follow the rules of the probability calculus. This means that, for instance, if an agent believes that there is a 60% probability (0.6) that tomorrow will be sunny all day, she cannot also believe that there is a 50% (0.5) chance that it will rain, because the probabilities of mutually exclusive outcomes cannot sum to more than 1. Furthermore, the degree of belief should reflect the available evidence, since a rational agent will typically base her belief upon the knowledge she has of the state of affairs. Alternative versions of Bayesianism differ from each other in the details of how probability should be calculated on the basis of a state of knowledge. Among these, ‘objective Bayesianism’ appears particularly relevant to the problem at hand, due to the way it deals with uncertainty, and its risk-averseness, in a sense to be discussed.
In objective Bayesianism, probability is determined on the basis of the previously mentioned norms, plus an additional requirement: equivocation.29 According to the equivocation norm, a rational agent’s beliefs should be as close as possible to an ideal balance of probability. This means that an agent cannot rationally assign higher probability to one of the possible outcomes if she is in a situation of ignorance. Hence, according to the objective Bayesian approach, the only rational assignment of probability is the one that is least committed to all the possible alternatives, or the most equivocal. This entails that in the absence of clear evidence we have to assign equal probability to each of the available outcomes. If, for instance, we had four potential outcomes and no evidence whatsoever, we should
28 C. Howson & P. Urbach. 2006. Scientific Reasoning: The Bayesian Approach. Chicago, IL: Open Court.
29 J. Williamson. 2010. In Defence of Objective Bayesianism. Oxford: Oxford University Press.
© 2013 John Wiley & Sons Ltd
equivocate and assign a value of 0.25 to each outcome. When some evidence is at hand, but the probability is still underdetermined, objective Bayesianism dictates that the agent should choose the probability assignment that is both consistent with the available evidence and closest to the uncommitted assignment. For instance, in the situation above of four equiprobable alternatives, if the agent had further evidence that one outcome has between a 5% and a 10% chance of happening, she can only rationally assign a 10% degree of belief to this event, since 10% is consistent with the information she has at hand and, at the same time, is the value closest to the unbiased 25% she would hold in the absence of any evidence. Foundational disagreement persists about the adequacy of equivocation vis-à-vis other rules for calculating probability in Bayesian terms.30 We do not intend to engage with this debate. We only observe that, owing to the equivocation norm, objective Bayesianism is maximally risk-averse in a well-defined technical sense, because it assigns the highest rational degree of belief to potential harms.31 We find this a convincing reason to consider objective Bayesianism particularly suited to precautionary deliberation under severe uncertainty.
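As an informal illustration (ours, not the authors’), the objective Bayesian selection rule just described, pick the probability consistent with the evidence that lies closest to the equivocal assignment, can be sketched in a few lines of Python. The function name and the interval representation of the evidence are our own conventions.

```python
def equivocal_choice(lower, upper, n_outcomes):
    """Pick the probability in [lower, upper] closest to the
    uniform value 1/n_outcomes, per the equivocation norm."""
    uniform = 1.0 / n_outcomes  # the maximally equivocal assignment
    # Clamp the uniform value into the evidentially allowed interval:
    return min(max(uniform, lower), upper)

# Four outcomes, no evidence: the allowed interval is [0, 1], so we get 0.25.
print(equivocal_choice(0.0, 1.0, 4))    # 0.25
# Evidence constrains one outcome to [0.05, 0.10]: the admissible value
# closest to the unbiased 0.25 is the upper bound, 0.10.
print(equivocal_choice(0.05, 0.10, 4))  # 0.1
```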
OBJECTIVE BAYESIANISM APPLIED TO SYNTHETIC BIOLOGY The reasons just described make objective Bayesianism an excellent formal tool for reasoning about the ‘decisions in the dark’ that we face when dealing with synthetic biology projects. In what follows we demonstrate, by means of an example, how Bayesian theory and deliberative thresholds can be put to work. Suppose that a research project in synthetic biology requires creating a synthetic infective agent, but there are concerns that the agent might escape and cause a pandemic. Moreover, the possible pandemic represents a harm so catastrophic that, if the unamended version of the evidence-harm proportionality rule is applied, even a minuscule probability of the organism leaking out would be enough to suspend the project. On the other hand, not pursuing the research project also involves the possibility of extreme harm, since there is a chance the project would have allowed us to save millions of lives, say by preventing a future naturally occurring pandemic. The unamended rule thus provides self-contradicting guidance. We argued earlier that deliberative threshold setting provides a way out of this paradoxical situation. So
30 For an overview of the discussion see J. Williamson. 2008. Philosophies of Probability. In Handbook of the Philosophy of Mathematics. Handbook of the Philosophy of Science vol. 4. A. Irvine, ed. Amsterdam: Elsevier.
31 Williamson, op. cit. note 29, chapter 3.
suppose further that a threshold of reasonable probability has been set through the kind of public deliberation described earlier, at 0.01%. Thus, if there is a greater than 0.01% chance of harm, precautionary action should be taken, while if the harmful outcome has a probability lower than this, no precautionary action is required. Regulators must decide whether to suspend the project to prevent the risk of a leak, or to continue the project in order to combat naturally occurring outbreaks. If the probability of either harm is above the threshold, the corresponding precautionary measure – stopping or continuing the project – must be taken. Security experts are consulted, and they conclude on the basis of the available evidence that the probability that this particular project would prevent a naturally occurring pandemic is well below the threshold of 0.01%. If, on the other hand, the project goes forward, the event of a leak leading to a harmful pandemic has a probability of between 0.009% and 0.011%.32 Because the probability of the project preventing a harmful pandemic is below the probability threshold, regulators need not take the precautionary action of continuing the project. It is not a possibility that regulators are required to consider, given the reasonableness threshold of 0.01%. Of course, this alone does not provide grounds for suspending the project. It does, however, mean that the regulators’ decision between suspending and continuing should hinge on the possibility that the project will itself have adverse consequences. Does the possibility of a synthetic pandemic require that regulators take the precautionary step of suspending the project? If the probability range of the project causing harm is below the threshold, as it was in the case of continuing the project, the regulators need not intervene.
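The threshold rule at work here can be rendered as a simple decision procedure. This is our own illustrative sketch, not the authors’ formalism; the function name and probability values are hypothetical.

```python
THRESHOLD = 0.0001  # deliberatively set reasonableness threshold of 0.01%

def requires_precaution(p_harm, threshold=THRESHOLD):
    """A potential harm demands precautionary action only if its
    probability exceeds the deliberatively set threshold."""
    return p_harm > threshold

# The experts' probability that this project prevents a natural pandemic
# is well below the threshold, so 'continuing the project' is not a
# precaution that regulators are required to consider.
print(requires_precaution(0.00001))  # False
# A harm probability above the threshold, by contrast, triggers precaution.
print(requires_precaution(0.0002))   # True
```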
On the other hand, if all values in the probability range are above the threshold, then they should suspend the project. Note that this is only so because, in this example, suspending the project does not also carry a probability of harm that exceeds the threshold. If both suspending and continuing the project have a sufficiently high probability of harm, the threshold does not resolve the paradox. In such cases, though, a more sensitive balancing of evidence becomes possible, since the brute possibility of terrible harm no longer unduly tips the balance. Objective Bayesianism, as a formal, risk-averse method for reasoning with probabilities, can provide useful tools for evaluating threats that fall above the threshold. Greater doubt arises in cases of uncertainty in which we are not sure whether the probability of harm is above or below the threshold. This is so in the example above, in which the likelihood of the project having a harmful impact lies between 0.009% and 0.011%.32 Owing to this uncertainty, a decision cannot be taken even in the presence of the 0.01% probability threshold, because some of the possible values for the probability of the harmful outcome are above the threshold while others are below it. An objective Bayesian view of probability is useful in this context because, thanks to the equivocation norm, it permits a probability assignment even in such a situation of uncertainty. In the present problem, equivocation directs us to choose the value that is most unbiased among the alternative outcomes. To simplify, suppose there are two outcomes: the project will cause a pandemic, or it will not. Since there are two alternatives, and assuming no prior knowledge, the most unbiased value is 50%. Following equivocation, we should therefore set the probability at 0.011%, the value that, among all those consistent with the available evidence, is closest to the most equivocal probability of 50%. Since 0.011% is greater than the deliberatively set threshold of 0.01%, the evidence-harm proportionality rule leads to the conclusion that the project should be suspended. Objective Bayesianism thus allows us to make precautionary decisions in accordance with rational degrees of belief, despite uncertain evidence. Of course, calculating probability according to the equivocation norm in any real-world situation, with a plurality of causes and outcomes, will be more challenging than in this illustrative example. Nonetheless, such calculations can be made, albeit imperfectly, in practice.33 Moreover, as discussed earlier, since equivocation is risk-averse, potential mistakes are likely to fall on the side of greater caution.
32 Note that a situation like this, in which the probability is underdetermined by the available evidence, is by no means artefactual, and is particularly common in an advanced field of research like synthetic biology.
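The worked example can be replayed numerically. The sketch below is ours, not the authors’; it combines the equivocation step with the threshold comparison, using the paper’s figures (a 0.01% threshold and an expert range of 0.009% to 0.011%).

```python
def resolve_by_equivocation(lower, upper, n_outcomes=2):
    """Choose the probability in [lower, upper] closest to the
    equivocal value 1/n_outcomes (0.5 for pandemic vs. no pandemic)."""
    uniform = 1.0 / n_outcomes
    return min(max(uniform, lower), upper)

threshold = 0.0001               # 0.01%, set by public deliberation
lower, upper = 0.00009, 0.00011  # expert range: 0.009% to 0.011%

# The range straddles the threshold, so the threshold alone cannot decide.
assert lower < threshold < upper

# Equivocation picks the admissible value closest to 0.5: the upper bound.
p_harm = resolve_by_equivocation(lower, upper)
print(p_harm)              # 0.00011
print(p_harm > threshold)  # True -> suspend the project
```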
Incorporating this feature into decision-making about synthetic biology projects thus enables a satisfactory and practicable solution to the problem of ignorance that is in keeping with the spirit of the precautionary principle.
AN OBJECTION: UNFORESEEN OUTCOMES A perceived limitation of our approach is that some outcomes may be entirely unforeseen. As such, when
33 On this see also B. Osimani, F. Russo & J. Williamson. Scientific Evidence and the Law: A Bayesian Formalization of the Precautionary Principle in Pharmaceutical Policy. J Philos Sci Law 2011; 11.
considering precautionary action, we could neither assign probabilities to such outcomes nor factor them into our decision-making at all. This is of course possible, and two considerations help to clarify our position. The first is that the approach developed here is directed at the assessment of foreseeable harms that might result from particular research projects, and at the containment of the concrete risks those projects might present. To some extent, then, dealing with entirely unforeseen, extreme harms is beyond our remit. The second reply further justifies this limitation of our scope. If unforeseen harms are taken as an argument against our position, they appear to rule out any decision procedure, since no decision rule can take all unforeseen harms into account. Taking this objection as decisive would therefore be more paralysing, and less escapable, than the precautionary paradox.
CONCLUSION Cautious decision-making is clearly desirable when creating research guidelines for synthetic biology, owing to the largely unprecedented magnitude of the potential harm. The precautionary principle is a candidate for regulating these high-risk decisions that is recognized in policy and supported by citizen groups. In this article we have sought to show that, despite its apparent shortcomings, a precautionary principle that integrates the evidence-harm proportionality rule can be supplemented so as to provide an evidential tool that is both ethically and epistemologically satisfying.
Acknowledgements Thanks to Mark Bedau, Federica Russo, Jan Sprenger, Gregor Betz, and particularly the two anonymous reviewers for insightful comments on earlier drafts. Thanks also to participants of the Zagreb Applied Ethics Conference (2011) for rewarding discussions, and to the editors and reviewers of ‘Politics and the Life Sciences’ for their constructive criticisms. Christopher Wareham completed his doctorate in Foundations and Ethics of the Life Sciences through the European School of Molecular Medicine and the Department of Health Sciences at the University of Milan. His research interests include political theory, health policy, and normative and applied ethics, with a particular emphasis on the ethics of emerging biotechnologies. He is currently a Postdoctoral Research Fellow at the University of Johannesburg, South Africa. Cecilia Nardini has an MSc in Physics from the University of Padova and a PhD in Foundations and Ethics of the Life Sciences from the University of Milan. Her research interests include Philosophy of Statistics, Biomedical Ethics and Epistemology of Medicine.