Nick Bostrom


Interview with Nick Bostrom, Director, Future of Humanity Institute (Oxford)

June 27, 2007


Nick Bostrom, 34, is a philosopher and a leading transhumanist thinker and spokesperson.
In 1998, Bostrom co-founded the World Transhumanist Association. In 2005 he was appointed Director of the newly created Oxford Future of Humanity Institute.


What is the mandate of the Future of Humanity Institute?


Our mandate is to look at big-picture questions for humanity: the ways in which technology might influence the human condition in the 21st century, the risks and benefits, as well as the methodologies related to that. We currently have three main research areas. One is human enhancement: the technologies that could expand human capacities, human memory or the human lifespan, and the ethical questions that arise with these prospects. The second area is catastrophic risks, some of which (but not all; there are also natural risks) are related to anticipated future technologies, as the downside of their potential benefits. The third area is the methodologies needed to think about these "big picture" questions; that might seem dull, but it is actually where a lot of the exciting work resides.

These areas are both the ones that I feel matter most, and also ones where we might hope to make a contribution. There are important areas on which a lot of people are already working, such as world poverty or global warming. On the other hand, some important areas are neglected. One example could be existential risks, threats to human survival. Also, when I got interested in human enhancement 10 years ago, hardly anyone was looking at that from a philosophical and ethical point of view. Now there are more. The ethics of human enhancement has really blossomed in recent years, in the area of bioethics for example.

One way to look at possible futures is to study situations that would drastically reduce our options to actually choose what we want to be and become. How do you work on these issues?


The most extreme of these situations is existential risk, which eliminates all our options: after such an event we are either no longer there, or we survive under conditions so bad that they can no longer be improved. Imagine some kind of world dictator with technologies that utterly prevent any kind of change, a "Brave New World" scenario. Human life continues, but it is a tiny fraction of what it could have been. At the individual level you find the same kind of existential risks: death, or conditions that permanently destroy your life, such as lifetime imprisonment or some severe disability. On this score, I believe extending the human lifespan should be a much greater priority than it is today. It is odd that we spend so much money studying how to prevent and cure particular diseases, like cancer, arthritis or heart disease, but almost neglect their main common cause, which is the frailty that sets in as we grow old. Age is a huge risk factor in almost every major disease. We should focus much more on this underlying vulnerability rather than just trying to put out the fires after they have started.


Human enhancement is obviously an area of major discussion, and sometimes contention. First, what do you consider is likely to happen on this front during the next decades?


Radical changes such as mind uploading or "greater-than-human" artificial intelligence are probably several decades away. They might take anywhere from 20 to 80 years, and even with such a wide time interval there is still a significant chance that they will happen earlier, or later, or never.

As regards aging, it is possible that in two or three decades we will have found ways to slow down the process by 20-30%, as well as to replace certain organs through stem cell therapies that could regrow damaged tissues. However, it would seem very optimistic to think that by that time we will already have the full range of methods needed to completely arrest aging, or even reverse it, in all organs. That would be highly beneficial, but it has a low probability of arriving by 2030.


Ray Kurzweil seems to think it could occur by then…


Some advances might let you gain more time to wait for the next breakthroughs… Aubrey de Grey from Cambridge speaks of "escape velocity", the point where life expectancy increases by at least one year per year. If you could sustain that, you could achieve very long lifespans.

Life expectancy has been rising consistently by 2 to 3 months per year for quite a long time. In fact, if you plot the life expectancy of the best-performing country in the world, which differs over time (once it was Sweden, now it is Japan), all the way back to 1840, you will see an amazingly straight line that shows an increase of close to 3 months per year, with almost no fluctuation.

There is however no evidence that death can be avoided altogether. Eliminating all aging, while keeping other causes of death constant, would result in a life expectancy of about 1,000 years rather than our current 80 years. There is a big gap between very long life and immortality. From the point of view of eternity, 80 or 1,000 years is the same thing, a blink of the eye.
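
As a rough, back-of-the-envelope check of the "about 1,000 years" figure, the sketch below assumes that with aging eliminated the remaining, age-independent causes of death strike at a constant annual rate; that rate is an illustrative assumption, not a number from the interview.

```python
# Back-of-the-envelope check of the "about 1,000 years" figure. Assumption
# (illustrative, not from the interview): with aging eliminated, the remaining
# causes of death (accidents, violence, infections, ...) strike at a constant
# rate per year.

residual_hazard = 0.001   # assumed ~0.1% chance per year of dying from non-aging causes

# With a constant hazard h, the chance of surviving to age t is exp(-h*t),
# and the expected lifespan (the area under that survival curve) is 1/h.
expected_lifespan = 1.0 / residual_hazard

print(f"Expected lifespan with aging eliminated: ~{expected_lifespan:.0f} years")  # ~1000
```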


If we look 10 to 15 years back, what do you think has been the most underestimated trend or event?


When I first got interested in human enhancement eleven years ago, I would discuss it with friends and colleagues and their question was whether it was reality or science fiction. Now it is generally agreed that at least some of this will become possible. It is not generally agreed when, or to what extent, but the question has changed to: "Should we modify human nature in these ways?" We have moved from feasibility to normativity, and that is a bit of progress. My hope would be that we take another step and, instead of asking these big, all-or-nothing questions, begin to think about how we might achieve the specific enhancements that could benefit humanity, under what conditions, within what kind of social framework, with what safeguards, etc.


Suppose you are faced with an oracle who knows the future in 2030. If you could ask three questions to this oracle, what would they be?


If I were really faced with such a situation, I would want to think a long time to make sure I frame the right questions. However…

One question might be: "How long will it take to build a greater-than-human artificial intelligence?", one that could outperform the best human intelligence in almost all cognitive domains: scientific creativity, social skills, practical wisdom, strategic thinking... When that happens, the game changes quite radically, because once you can do that, shortly afterwards you will have something much stronger still. That is a discontinuity, a "singularity", and it is useful to know when we might reach that point, if we ever do. Some people such as Kurzweil, Moravec, Vinge and myself believe that, at the least, we have no good reason to think this could not happen within some decades, although I think it will probably take close to 50 years. And since the consequences are so big, it is worth thinking about even if the probabilities you assign to it are low.

A second question might be whether and when molecular nanotechnology will come about, and whether that would lead to an existential disaster, through its military applications for example.

Last, mind uploading is another technology for which we would like to know whether and when it will become possible. I wouldn't be so interested in the exact magnitude of global warming in 2030. It doesn't make much difference whether it is 1.1 degrees or 1.4 degrees; compared to these other things, it will appear to be a relatively minor factor.


Do you personally have a take on the feasibility of these three events and their possible consequences?


There do not seem to be many problems regarding their feasibility. We do not have enough data to pinpoint when they will happen, so the responsible thing is to distribute the probability over a wide interval, reaching up to a fairly near future but also extending over the rest of the century, with some probability that they will never happen. The consequences could range from existential catastrophe to utopian scenarios in which almost all the problems people are currently coping with are easily solved. It can be difficult to depict intuitively what kind of utopian possibilities would become feasible if we had such control, not just of the world around us, but of human nature itself. In a short essay called "Letter from Utopia", I have tried to indicate some of what might, in principle, be possible.


The human of the future who is supposed to have written the "Letter from Utopia" not only lives much longer, but she is also more enlightened in many ways. How do you see that developing?


The essay of course does not describe any specific scientific methods for getting there; I have written other papers that look into that. The way we could get there would be by, first, avoiding an existential disaster; second, making available a number of human enhancement technologies (the letter speaks of three transformations: life expectancy, cognitive capacities and emotional well-being); and third, using those technologies wisely to build ourselves up towards this utopia.

The "Letter" is of course unabashedly normative. In some other papers I take a more analytic, academic approach to these ethical questions. There is a long philosophical tradition on such topics.


When you look at catastrophic risks, what do you focus on most: the risks that are not taken into account by others, averting risks, or coping with risks?


In theory all of that, but in practice, since we are a small team, we have mostly worked on methodological questions, which are a necessary preliminary to being able to tackle more substantive issues. One topic is probability: you can assign probabilities to different risks, but to what extent and on what basis? If you assign a 7% chance that a nanotechnological disaster will destroy us, how can you make that assertion something more than an arbitrary guess?

We also work on the wider category of global catastrophic risks, which includes existential risks at the extreme end, but also global catastrophes that wouldn't cause human extinction but would still be pretty bad. We are preparing a book that will lay broad foundations upon which we can then build more specific analyses. I have also been working on "observation selection effects", a piece of methodology that you need when you look at certain kinds of big-picture issues for humanity in order to avoid anthropic bias.


Going back to human enhancement, how could that path go very wrong?


It could go very wrong if we modify ourselves in ways that seem like a clever thing to do at the time, then take another step, then another, each of which looks like a good idea when it is taken but which in fact take us down a path whose end-state, if we could see it now, would make us recoil in horror. We might at each step gain something that seems obvious and tangible, but also lose a subtle value that we can't really articulate until it is too late. Suppose we get a little bit cleverer at each step, or a little bit stronger, but we lose some of our broader appreciation for art or culture, or our love for children, or even something more subtle than that. Leon Kass, the American bio-conservative writer who was until recently chair of the US President's Council on Bioethics, wrote a lot about this possible erosion of subtle human values.

There is also another risk: that these technologies would be used coercively, through some state-sponsored programme, to eliminate dissent or other forms of resistance. Dissent is a good example. There are forms of being different or subversive that cause big problems for society; we might indeed be better off if we could eliminate things like psychopathy. So suppose you could know, through an embryonic test, that this embryo is predisposed to become a psychopath, and decide to select another embryo instead. That could be good, but maybe there is another trait you could also detect that causes some kind of social inconvenience… And once you have taken enough of those steps you might have lost the source of a great deal of human value, because throughout history the kind of people who didn't fit into society have sometimes done great things, people like Socrates maybe.

This is actually a significant risk. And the way to avoid it is to better understand what the values involved are, through continuous conversation, discourse and criticism.


Do you think that there is enough research in the areas you just mentioned?


No, not compared to the significance of the topics. Existential risk is perhaps the most egregious example. Six billion people, many of them devoted to ridiculous pursuits, have not been able to muster more than one research group working on how to assure their own survival. Of course, there are groups seriously working on specific risks such as nuclear war, pandemics and asteroids. But there is no systematic effort to look at the whole spectrum of existential risks, trying to figure out which ones are the most serious and how to avert them. Our little group seems to be the only one so far, and that is just a small part of what we are doing. There are several reasons for that. Until recently, a great many of the existential risks had not even been conceptualized. Before Eric Drexler's work, nanotechnology was not on anybody's horizon; before computers, the risk of superintelligent machines could not even be conceptualized.

Another reason is that many of these risks are not the responsibility of anybody in particular. If you are a military organization, nuclear risk is your responsibility. The World Health Organization deals with pandemics. But if you take a broader view, nobody in particular is in charge; it falls through the cracks.

Because of the polarisation between those who are "for" or "against" human enhancement or progressive medicine, some people who defend this research feel that they should not give the other side ammunition by pointing out the risks. With regard to nanotechnology, this has actually happened: the mainstream nanotechnology community worries that if they talk about the more advanced, "Drexlerian" nanotechnologies, and people start thinking about the risks, then there may be a backlash against all nanotechnology. So they prefer to pretend that these technologies will never happen, that they are just science fiction.

The result is a kind of self-censorship on the part of technologists, which has a harmful effect on existential risks research.


And this self-censorship then also applies to describing the ambition, the vision behind some of this research…


Yes, you limit yourself both in being able to discuss the exciting visions of what you could do, and also the risks and what we could do to prevent them. It's difficult enough to figure out these risks when everybody is honest and doing their best, but if in addition there are these political currents flowing through the discourse, it becomes practically impossible to make progress.

What should governments do, either in research or in fostering discussion on these topics?


One idea, which has been initiated by Robin Hanson, is to set up "prediction markets". They are very easy to set up and could help produce better probability estimates of various future events.
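
As a minimal illustration of how such a market turns trades into probability estimates, here is a sketch of a market maker based on Hanson's logarithmic market scoring rule (LMSR); the liquidity parameter, the outcome labels and the example trade are illustrative assumptions, not details from the interview.

```python
import math

# Minimal sketch of a prediction-market maker based on Hanson's logarithmic
# market scoring rule (LMSR). The liquidity parameter b, the outcome labels
# and the example trade below are illustrative assumptions.

class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b                                  # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}         # shares sold per outcome

    def _cost(self):
        # Cost function C(q) = b * ln( sum_i exp(q_i / b) )
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q.values()))

    def price(self, outcome):
        # Instantaneous price = the market's current probability for the outcome
        total = sum(math.exp(qi / self.b) for qi in self.q.values())
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function caused by the trade
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

market = LMSRMarket(["event happens", "event does not happen"])
payment = market.buy("event happens", 50)           # a trader backs the event
print(f"trader paid {payment:.2f}; market probability is now "
      f"{market.price('event happens'):.2f}")       # rises above 0.50
```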

Another would be a kind of "Manhattan Project" on anti-aging medicine. We have a big aging population in Europe. They are getting older and frailer all the time, and do we just want them to die after lingering for years in old people's homes, or do we want to look for a long-term solution to that? Europe could take the lead in that area. I would also want to see more research on various human enhancements, such as cognitive enhancement. Our regulatory system is perverse, because it is set up to deal with drugs that cure diseases, and it makes it almost impossible for a drug company to get approval for a drug that doesn't cure a particular disease. As a result, if you develop some compound that could actually help a broad class of people to improve their memory or get more energy, it is currently necessary to invent some disease so you can prescribe the drug! So you see terms like "mild cognitive insufficiency" or indeed attention-deficit hyperactivity disorder (ADHD), which at the extreme is probably a real disease but is really part of a whole spectrum – and more and more segments of this natural spectrum now have to be stamped "ADHD". In some parts of America, up to 10% of schoolboys are now supposed to suffer from ADHD. This is absurd: if they benefit from a drug that improves concentration, why do they have to be labelled as diseased in order to take it?

There could be a regulatory mechanism for approving lifestyle drugs and enhancement drugs, and at the same time more research into the safety and efficacy of these drugs.


You would think that large pharmaceutical corporations have great incentives to work on such drugs…


Yes, but we need basic research. Once something is 5 to 10 years away, pharmaceutical companies will jump in. But many of these technologies are 30 or more years away, so we now need to lay the groundwork for future applications.

The public sector spends billions of euros to fund research on cancer or heart disease, which is as it should be, but it is a great failure of imagination not to look at aging per se, since we know that it is responsible for most of those diseases. If you look at the percentage of heart disease and cancer cases that are due to aging, you see that aging is by far the biggest killer around, especially in developed countries. You can estimate the fraction of deaths that are due to aging, as opposed to other factors, by looking at the differential death rates of people depending on their age. It has been estimated that of the 150,000 people who die every day, roughly 100,000 deaths can be attributed to aging.
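
A minimal sketch of that estimation logic, under the assumption that the young-adult death rate can stand in for the "non-aging" baseline and that deaths above this baseline at older ages are attributed to aging; the age bands, populations and death counts below are made-up round numbers for illustration, not the estimates quoted here.

```python
# Sketch of the estimation logic described above: take the death rate of young
# adults as a proxy for the "non-aging" baseline, and attribute deaths above
# that baseline at older ages to aging. The figures below are made-up round
# numbers; a real estimate would use actual life tables.

# (age band, population, annual deaths) -- hypothetical figures
bands = [
    ("25-34", 1_000_000,   1_200),
    ("45-54", 1_000_000,   4_000),
    ("65-74", 1_000_000,  20_000),
    ("85+",   1_000_000, 120_000),
]

# Young-adult death rate, taken as the rate of death from non-aging causes
baseline_rate = bands[0][2] / bands[0][1]

total_deaths = sum(deaths for _, _, deaths in bands)
aging_deaths = sum(max(deaths - baseline_rate * pop, 0.0) for _, pop, deaths in bands)

print(f"Share of deaths attributed to aging: {aging_deaths / total_deaths:.0%}")
```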