Reflections on Intelligence



Copyright © 2016 Magnus Vinding

Recommended Reading:


http://www.biointelligence-explosion.com/parable.html





Table of Contents


What is Intelligence?

Goal Achieving – Its Distributed and Many-Faceted Nature

When Machines Improve Machines

Consciousness – Orthogonal or Crucial?

A Brief Note on Goals

The Unpredictability of the Future of “Intelligence”





A lot of people are talking about “superintelligent AI” these days. But what are they talking about? Indeed, what is “intelligence” in the first place? I think this is a timely question, as it is generally left unanswered, even unasked, in discussions about the perils and promises of artificial intelligence, which tends to make these discussions more confusing than enlightening. More clarity and skepticism about the term “intelligence” are desperately needed. Hence this book.

Contemporary debates about “intelligence explosions” and “singularities” tend to get confused due to the employment of a simple word to refer to something highly complex, something that is in fact many different things. The simple word is, of course, “intelligence” – the closest thing we get to a secular version of god or magic. Without definition or qualification, this term is thrown at us by the chosen ones with pious promises and threats – “A sufficiently great intelligence could solve all our problems! Or it could destroy the world!”

Sounds great, but still, what is “intelligence”? That which is defined as “the ability to solve problems”? If so, all that is being said above is that a sufficiently great ability to solve problems could solve all our problems – or destroy the world, should that be the problem of choice – which, given the usual interpretation of “sufficiently”, is merely an analytic statement; a statement shorn of content.

What I wish to do in this book is to look deeper into the concept of “intelligence”, and to question common assumptions about the phenomenon, or rather phenomena, that we refer to with the word “intelligence”.[1] Based on such an examination, I will proceed to criticize some of the views and ideas commonly discussed in relation to the future of artificial intelligence. Lastly, I shall attempt to draw more general conclusions about “intelligence” and its limits, which will reveal how any advanced goal-oriented system that will ever emerge is bound to resemble us, as we are today, in a significant way.

What is Intelligence?

It is somewhat ironic that much of the literature on intelligence is in such short supply of it. For example, writings about intelligence will often acknowledge that intelligence has been defined in many different ways, only for the author to then proceed to refer to intelligence as though it were a single, well-defined thing, and to ponder questions like “is IQ a good measure of intelligence?” – a question whose answer obviously depends on which definition of “intelligence” we adopt.

This is an all too common mistake: we tend to talk about intelligence as though it were well-defined, and as though it were some kind of substance – a substance that examinations such as IQ tests may or may not be able to measure. This mistake is even found in academic discussions of so-called “theories” of intelligence (e.g. “theory of multiple intelligences”, “triarchic theory of intelligence”, “primary mental abilities”, etc.), which at first glance can seem like different, mutually disagreeing theories about the same underlying phenomenon – “intelligence” – rather than what they in fact are: different ways of defining “intelligence”.

There is no clear, widely agreed upon definition of “intelligence”, and much confusion arises from our failure to realize this. If we are to think clearly and have meaningful discussions about “intelligence”, we must clarify what we mean by this term.

Origin and Common Use of the Term

The origin of the word “intelligence” can be traced back to the Latin verb intelligere, which means “to understand”, and the associated noun intellectus, “understanding”. This is an interesting fact, since the term “intelligence”, as we use it today, usually refers to something related to, yet still distinct from, understanding. Understanding is something we can often easily gain as we gain new information, and hence something we can improve rapidly, whereas the term “intelligence” tends to refer to something that is more rigid, at least in humans, and not something that we can easily increase as we gain new information. Intelligence, as we use the word most of the time, is rather a talent for acquiring understanding. And yet our everyday use of the word “intelligence” is a bit more specific still, since “understanding” can be very broad: we can understand a great variety of things and phenomena in the world, from non-Euclidean geometry to the minds of other individuals, and we do not seem to describe an aptitude for understanding the perspective of others with the word “intelligence” as readily as we do a talent for understanding non-Euclidean geometry. Both are, however, talents for equally valid, and immensely useful, forms of understanding, which reveals just how fuzzy our everyday conception and use of “intelligence” in fact is, and further underscores the need for making clear exactly what we mean by “intelligence” when we discuss it. “A talent for acquiring understanding, but especially some poorly demarcated forms of understanding” is not a good definition to base a discussion on.

Clear Definition of a Relevant Ability

Below I shall define the ability that I find the most relevant to discuss in relation to “intelligence”, especially as it relates to contemporary discussions about the future of artificial intelligence. Whether this ability corresponds to what we mean by the term “intelligence” most of the time is not the point. The point is rather that this is an ability that is important to reflect on and get a clearer understanding of, not least because it is the ability that we should, and do, care the most about.

The ability in question is the ability to achieve goals, which is actually not an uncommon definition of “intelligence” (see e.g. Legg & Hutter, 2007). The reason why we should, and indeed do, care about this ability should be apparent: to the extent we care about anything and have any preferences and goals in this world, it is obvious that we should then also care about and want the ability to achieve these goals. The ability to achieve one’s goals is the ability that every goal-oriented system wants by definition. This is the supreme relevance of the ability to achieve goals, and the reason why I shall here focus on this ability rather than on other notions of intelligence. After all, notions of intelligence are only relevant to the degree they relate to goal achieving.
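Since the argument leans on this definition, it may be worth sketching the formal version. Legg and Hutter (2007) propose a “universal intelligence” measure that scores an agent by how well it achieves goals – accumulates reward – across all computable environments, weighting simpler environments more heavily. The notation below follows my reading of their paper, so treat it as a rough sketch rather than a definitive statement:

```latex
% Universal intelligence of an agent \pi, after Legg & Hutter (2007).
% E             : the set of computable, reward-bearing environments
% K(\mu)        : the Kolmogorov complexity of environment \mu
%                 (simpler environments receive larger weight 2^{-K(\mu)})
% V^{\pi}_{\mu} : the expected total reward agent \pi achieves in \mu
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

The details aside, the point to note is that this formalizes precisely the ability under discussion here – achieving goals across many environments – and says nothing about that ability being human-like, unified, or housed in a single brain.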

Hence, for the sake of clarity, I now ditch the confusing “intelligence” word. It isn’t that brilliant anyway, nor particularly useful. The ability to achieve goals is what we really care about, and what we should like to delve deeper into.

Goal Achieving – Its Distributed and Many-Faceted Nature

Having sharpened our focus toward the ability to achieve goals, it is worth starting out with a simple, yet crucial observation about goals, namely that goals can be about a wide variety of things. This is a simple point to state and understand, yet it is nonetheless a point that discussions about “intelligence” – often defined as the ability to achieve goals – completely miss. This is not so strange, however, given that the word “intelligence” – by “everyday usage definition” – only refers to a relatively narrow range of goal achieving abilities. For example, one may have the goal of shooting three balls through a hoop in three attempts, and then succeed in doing so, yet few of us would say that this is a prime example of “intelligence” in any usual sense, although such an ability surely does satisfy the definition above.

This provides a glimpse of how goal achieving ability diverges, and diverges rather strongly, from our common notion of intelligence. Indeed, there are many ways in which the two are different. As mentioned earlier, what we commonly characterize as “intelligence” – a talent for acquiring (certain forms of) understanding – tends not to increase rapidly in us humans, yet the same is clearly not true of our ability to achieve goals, which indeed can increase rapidly. When we acquire a new tool – say, a hammer or a mathematical formula – a whole set of otherwise unsolvable problems can suddenly be solved. The history of human civilization and our capacity for goal achievement has to a great extent been the history of the development and construction of such tools – tools of various kinds that help solve many different problems.

It is these many tools, tools of a sort that can be characterized both as “hardware”, such as hammers and trucks, and as “software”, such as ideas and social skills, that, when put together, enable us to accomplish the wide variety of goals we have. When it comes to goal achieving ability, talk of a single thing called “intelligence” is extremely misleading. There is just a myriad of “tools”, in the broadest sense of this word, and these tools can be combined in what are often complicated and time-demanding ways so as to create new tools that can again, if combined in the right ways, create new tools, and so on.

What we usually identify as intelligence, a talent for gaining certain forms of understanding, is really no exception: this too is the product of a conglomerate of tools, tools built by evolution, which work together so as to achieve the many goals that we as individuals can accomplish. One subset of tools recognizes objects, another recognizes emotions, some process language, while others inhibit and plan actions. In isolation, these tools are useless, and arguably even non-existent, given that they function by virtue of interacting in an interconnected web of tools, but together they make us quite capable.

IQ is a good measure of many of these cognitive skills we possess, but it is far from being an exhaustive one. For even when we focus only on cognitive abilities, far more than mere IQ is required in order for us to accomplish our goals. For instance, one can have the highest prefrontally powered IQ imaginable, yet without the limbically powered drive and motivation to take action in the first place, one is not going to take any action toward one’s goals. Scientists like Isaac Newton and Albert Einstein not only had an extremely high IQ, but also extremely high passion – obsession even – for the quest for physical truths, and the latter was just as instrumental as the former for their great accomplishments.

IQ is already a measure of many skills, yet goal achievement requires more skills than those that IQ tests accurately measure, such as self-control, fine and gross motor control, social skills, etc. More than that, it also requires innumerable hours of training and coordinating these skills, as the cases of Newton and Einstein perfectly illustrate.

The examples of tools mentioned above are all mental ones, yet our naturally evolved tools are by no means mental only. For although it is true to say that what makes humans special is our big brain, this is far from being the whole truth. We also have uniquely dexterous hands, an upright bipedal gait, and highly versatile vocal cords, all of which are indispensable for our ability to achieve goals. If all of humanity lost just one of these features – say, our hands – we would be impressively incapable, and quite possibly unable to recover. Yet we tend not to realize the importance of these tools, because almost all of us have them in flawlessly working form, which makes them appear unexceptional and, by fallacious extension, appear as though they were not absolutely crucial elements for our ability to accomplish goals. This bio-mechanical aspect of our ability to accomplish goals seems widely missed in contemporary discussions of the future of AI.

So what makes humans especially capable of achieving goals is not just big, capable brains, but big capable brains placed in extremely versatile, capable bodies. And yet even this is still hopelessly far from being the whole story. What is also needed is many capable brains placed in many capable bodies. We tend not to realize that we as individuals are much like neurons in a brain: laughably incapable on our own.

As individuals, we can accomplish virtually nothing on our own; indeed, we can barely even survive. It is only by virtue of being many individuals, and many individuals who are organized in certain ways, that we are able to survive and accomplish things beyond that. No individual, no matter how brilliant, could ever, say, build a laptop from scratch on their own, one reason being that traveling thousands of kilometers to obtain the right materials, not to mention processing these minerals – which again requires tools to be invented and built, which themselves require tools to be invented and built, etc. – would simply be too time-consuming.

Accomplishing a goal like building a laptop requires countless specialized processes, insights, and tools – the whole host of tools that humanity has accumulated over the course of history: advanced language, mathematical knowledge, advanced infrastructure, innumerable kinds of advanced machines, etc. It requires organized cooperation of countless individuals to make all these things happen, and this is what comprises “human intelligence”, or more precisely, what makes us able to achieve so many goals: the widely distributed and specialized nature of our goal achievement.

What seems to have happened in many discussions about risks of artificial intelligence is that this highly distributed process of goal achieving – humanity’s enormous collective toolbox of instruments and know-how, and the ways in which these things are combined and organized so as to make the accomplishment of a wide variety of goals possible – has been confused for something singular. A single substance, “intelligence”, which is found in brains and which is able to create more of itself, by virtue of itself.

But this is far detached from the reality of the basis of our ability to achieve goals and how this ability has increased over time. Over the course of history, humanity has increased its ability to achieve an ever-increasing range of goals by virtue of ever more multifaceted, ever more specialized and distributed efforts. Specialized tools and know-how have paved the way for ever more specialized tools and know-how, which have worked, and work, together in a web that collectively can accomplish an ever wider range of goals.[2] The basis of our goal achieving ability has grown ever more diverse over time, ever less singular. And there is no reason to suppose that it will, or indeed could, be otherwise.

For there really is no alternative, given that solving a problem effectively requires the right, specialized tools – just ask any plumber, or indeed anyone who knows anything about solving problems of any kind. Effective problem solving requires specialized tools, and this is as true for fixing a house as it is for solving mathematical problems.

The same conclusion seems unavoidable when we look at the deeper history of organized systems. Cells went from being relatively homogeneous structures to being little factories with specialized centers, organelles, which allowed these more complex cells, the eukaryotic cells, to accomplish a greater variety of tasks. Multicellular lifeforms eventually underwent a similar change, as what was originally a blob of the same type of cell evolved to become a specialized society of many different kinds of cells. The human body, for instance, contains more than 200 different kinds of cells that perform different, specialized functions.

This pattern is also found in the brain, which is a system of cells – also various kinds of cells, actually, as there are many different kinds of neurons – that are organized in different, yet wildly overlapping, networks that accomplish a wide variety of tasks. These different networks evolved on top of each other over the course of evolution, and thereby gradually came to comprise brains with increasingly specialized modules, which in turn made these brains ever more capable of achieving an ever wider range of goals. There never was a single “transition” to “human level intelligence” due to the evolution of some new golden module, but rather an ever-increasing set of useful modules, such as Broca’s area, Brodmann area 12, and many other parts of the prefrontal cortex, that evolved gradually, and which in combination comprised – or rather, comprised a part of – a gradually more competent goal achiever.

But of course, again, this goal achiever, the human individual, is really not at all that competent if kept in isolation. What makes humans capable of accomplishing more than eating, pooping, and grunting is other people – many other people who know a lot. It is only at the supra-individual level that our ability to accomplish the goals in which we take any pride becomes possible. And, as mentioned above, the pattern of increased specialization has repeated itself on this supra-individual level too, indeed far more visibly and elaborately than seen so far in any other realm. Human individuals have gradually organized themselves in a new grand anatomy of interconnected organs – governments, businesses, institutions, etc. – and it is by virtue of the complicated interplay of these that we accomplish what we do. You simply cannot get to the point of being able to build a laptop, or indeed brush your teeth, with less.

At every level we look, we find that increases in goal achieving ability have been the result of increased specialization, the result of the “division of labor” in a broad sense, which has continually produced a growing palette of specialized tools. And it is this entire palette that has been, and still is today, improving itself, not any single part of it in isolation.

This is a simple point, yet it is nonetheless widely missed in contemporary discussions about “superintelligence” and “intelligence explosions”, where we tend to speak as though it were otherwise – as though a single bright human individual were such a significant part. British statistician I.J. Good made this mistake as clearly as anyone in the following famous quote:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind […]. Thus the first ultraintelligent machine is the last invention that man need ever make […][3]

First of all, even if we grant that the design of machines is one of the intellectual activities of “any man”, this still leaves the matter of construction wholly unaddressed. And the construction of machines clearly does not lie even remotely within the capabilities of any one person alone, but is the product of the labor of countless people and machines cooperating across large amounts of time and space, a complex process that requires much more than just “intellectual activities” alone.

But we have of course already granted far too much above. For the truth is that the design of advanced functional machines is not among the “intellectual activities” of any one person. Not even close. At best, what any one person can do is that they can be a very small part of such a design process.

“But what about super-genius John von Neumann? Didn’t he essentially design the modern computer?”

By no means. Von Neumann provided a very rough blueprint for how to make a computer in his ‘First Draft of a Report on the EDVAC’, a blueprint that was mostly abstract in nature, and which was also heavily inspired by the work of others. What von Neumann did was just the beginning of a design process – or rather another early step that built upon many earlier steps made by Leibniz, Babbage, Turing, and many more – and after that step, an actual functional design was still a lot of experimental tinkering away. Moreover, von Neumann of course did not design the many hardware parts that were needed to realize the design he sketched out. And when we factor in the extensive process behind the design of these – countless parts that were also designed, not by one person, but as the result of the ideas and tinkerings of many different people – i.e. when one actually looks at computer design in its entirety rather than just a narrow subset, we begin to see just how small a part von Neumann actually played in the design of the early computers he helped build, not to mention modern computers, which have improved and changed as the result of innumerable ideas and inventions that have emerged since von Neumann’s day.

The idea of an “intelligence explosion” – i.e. a capability explosion – as proposed by I.J. Good above, namely an explosion that results from a single, highly “intellectually able” machine that then takes off on its own, seems wholly out of touch with the actual basis of goal achieving abilities. Again, any advanced ability to achieve goals we know of is the product of an elaborate toolkit – the more advanced the goal achieving ability, the bigger and more diverse this toolkit is – and in order to increase these abilities, i.e. in order to add new tools to the existing toolkit, one needs to put innumerable diverse tools together in the right ways.

This isn’t to say that a capability explosion is impossible. Indeed, I would argue that such an explosion is already taking place right now; that we are making ourselves increasingly capable at a rapid pace. Yet the history of the evolution of the ability to achieve goals strongly suggests that increases in this ability only emerge as the result of many gradual improvements of many different parts of the existing goal achieving system – many different parts of cells, bodies, brains, and societies – and not as the result of a single super-invention.[4]

“But that has been the pattern so far. Why should we believe that this pattern of gradual, distributed improvement will apply the moment we have a super-invention such as an intelligent machine?”

Because we have no reason to believe that it would not. Again, the goal accomplishing ability of our human-machine civilization is the product of the vast set of skills, tools, and know-how that is distributed across countless humans and machines that work together in complex ways, and adding a super-capable individual component to this vast system would not change this. To restate the point made above: when we look at history, what we consistently observe is that, as goal achieving systems have grown more competent, they have grown ever more dependent on an ever larger, ever more distributed system. We have no reason to suppose that this trend will reverse, especially when we are talking about machines that – unlike, say, a human zygote – depend on materials, tools, and know-how distributed widely across the globe for their construction and maintenance.

“But couldn’t such a machine just discover, invent, and construct new things on the spot, for instance by means of nanotechnology, and thereby take off on its own, independently of the existing system in place?”

So the question is: is there a remotely plausible scenario in which a machine – de novo, presumably without even the crudest of hardware tools to begin with – invents and constructs new tools more capable than the tools of our entire human-machine civilization, and not only that, but also does it “on the spot”, i.e. locally? This seems beyond unlikely.

For without the crudest of hardware tools to begin with, how should more advanced ones be developed? The problem is, again, that the building of such tools is not a mere intellectual pursuit. Tools, whether nanotechnological or not, are to a large extent built experimentally – by trial and error – not purely deductively. And such a trial and error process not only requires sophisticated hardware in place already, but also requires significant amounts of time – which alone would seem to exclude the most radical take-off scenarios. More than that, it also requires enormous amounts of energy, and acquiring such energy requires much energy and ingenuity in the first place – a degree of ingenuity that would seem to require breaking the boundaries of physical law if the energy is to be harvested locally, independently of the larger technosystem that is human-machine civilization.

Moreover, to reiterate one of the main points so far, there really is no short road from a stone hammer to a laptop, and the path between the two is not a matter of single tools successively replacing each other, but rather, again, a myriad of tools playing together to build more sophisticated ones, and these new tools then join the grander army of tools – they are not “super-tools” or “super-capable” on their own. At most, the collective set of tools may be said to approach such a status.

This implies that trying to take off and increase one’s capabilities “on one’s own” merely amounts to trying to make things a lot harder, if not impossible, for oneself. It is to refuse to exploit the hard-won insights and capabilities already gained, and to instead insist on creating the many existing tools, or their analogues, from scratch. And even if it were possible to do such a thing locally, and to do so before the sun burns out, why not choose to do it by taking the far more efficient, and evidently feasible, path of cooperating with, and building upon, the quite capable system already in place?

Given how time- and resource-demanding it is to dig things up from the ground, transport them around, and mold them into useful things in a gradual manner, it is clear that there is a huge incentive to do away with all of this distributed business and instead produce the same result locally. The fact that this has not happened suggests that it, if at all possible in the first place, cannot be done with anything close to the efficiency of the existing, already highly resource-demanding, process. And if it will someday be possible to do most things in a way that can be considered remotely local, this will only happen by virtue of an enormous amount of advanced tools and knowledge that are resource-demanding to acquire, and which themselves require many advanced tools and knowledge to be acquired in the first place (which themselves require many advanced tools and knowledge to be acquired, etc.). There simply is no way of separating “advanced” and “capable” from “distributed” and “many-faceted”.

The advantage that computers have over humans is that they have virtually perfect memory and that they can download information – i.e. learn knowledge already gained – in virtually no time. Yet “discovering, inventing, and constructing new things” requires more than merely learning the knowledge already gained by others. It requires novel exploration: trying things out to see how they work. And it is not clear that computers have much of an advantage in this realm. For although a computer may run advanced simulations that greatly optimize the design and construction of some new kind of hardware, there is still a long way from running such a simulation to having a piece of working hardware of any kind. Again, in order to build things, one must, among other things, also have sophisticated hardware and large amounts of energy and materials to build it with.

There is a great asymmetry between learning knowledge already acquired and discovering new knowledge. It took the greatest efforts of a Newton to discover the (classical) laws of motion, while any high school student can learn them in an afternoon.[5] The ability of computers to quickly learn all of human knowledge is akin to such afternoon high school learning. This is not to say that such learning cannot eventually lead to great new discoveries – it sure can – but making such discoveries requires more than merely downloading stored information. It requires much more time, and this is especially true today. For in order to advance our knowledge of the world considerably, being very smart is no longer enough.[6] One must, among other things, also have data, and that requires the tedious business of building things – telescopes, microscopes, scanners, and detectors of many other kinds – which returns us to the necessity of tools, indecently many of them, when it comes to making progress in one’s knowledge and capabilities. It returns us to the inability of computers – indeed of any single component of our human-machine civilization – to do much of anything in the real world alone.

The Relationship between Individual and Collective Goal Achieving Ability

In his book Superintelligence, Nick Bostrom defines different kinds of intelligence, among them being “collective superintelligence” and “quality superintelligence”, which he defines in the following way:

Collective Superintelligence:

“A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system.”[7]

Quality Superintelligence:

“A system that is at least as fast as a human mind and vastly qualitatively smarter.”[8]

A crucial omission of Bostrom’s, however, is his failure to acknowledge the deeply interrelated nature of collective and individual goal achieving abilities. Yet before going deeper into this, it is perhaps worth pondering what Bostrom is talking about in the first place. To back up a bit further, here is what Bostrom refers to with the term “superintelligence”:

“We can tentatively define a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.”[9]

So, on Bostrom’s definition, when we are talking about a superintelligence, we are talking about a system – an intellect – with super-human cognitive performance; not about a system with “super” goal achieving abilities in general. As we have seen, however, there is a big difference, as achieving goals requires much more than mere high cognitive performance. Moreover, it is worth noting that Bostrom’s definition of a collective superintelligence – “A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system” – actually does not appear to live up to Bostrom’s definition of a superintelligence, as it is not “an intellect” in any usual sense of the term. It does, however, appear to be a goal achieving system (cf. “[…] such that the system’s overall performance [my emphasis] across many very general domains vastly outstrips that of any current cognitive system”).

In contrast, Bostrom’s definition of a quality superintelligence does appear to be congruent with his general definition of a superintelligence, since “vastly qualitatively smarter” than a[ny] human mind presumably means a system with greater cognitive capacity. And it should then again be stressed that what Bostrom refers to here is cognitive abilities, not goal achieving ability in general. The difference should not be missed.

I shall not focus on Bostrom’s definitions of quality and collective superintelligence here, but instead focus on individual and collective goal achieving ability, where “collective goal achieving ability” refers to the goal achieving ability of a system that consists of individual parts that themselves have goal achieving abilities. By focusing on these things, we will in fact cover Bostrom’s terms, because goal achieving ability encompasses cognitive abilities, and because the general notion of goal achieving ability obviously also covers any kind of “super” goal achieving abilities.

The crucial point to realize about collective goal achieving ability (CGA) and individual goal achieving ability (IGA) is that they cannot be separated. CGA, as defined above, is the sum of the goal achieving abilities of the components – i.e. the IGA of the parts of the collective. So the CGA of a system is not merely closely connected to the goal achieving abilities of its components; it is the product of them. Similarly, the goal achieving ability of any such individual part of a larger collective depends entirely upon the abilities of that collective whole. For instance, without a human society to provide us with food, shelter, and education, no single one of us would be able to achieve much of anything beyond survival, if we could even accomplish that. As individuals, we are entirely dependent on, indeed sustained by, the collective. IGA is a function of CGA. As an individual, I can achieve the goal of taking an airplane to Spain and writing emails on my computer, but these abilities I have entirely by virtue of the collective that has produced airplanes, computers, and the Internet, and sold me access to all of these things. The same story is true of our knowledge, since virtually everything we know is fed to us by the collective. We are standing on the shoulders of billions.

And this is as true of any computer as it is of any human individual. In fact, it is more true of computers, because if all humans were gone tomorrow while all our computers were left intact, the computers would not be able to accomplish much in terms of progress. The reverse would not be (quite as) true.

Only because we have coordinated and combined our individual goal achieving abilities have we been able to increase them. There is almost no such thing as IGA for individuals in isolation. IGA is a function of the IGA of others. This is true on all levels. In cells, organelles can only do what they do because other organelles do their tasks sufficiently well. In bodies, organs can only do what they do because other organs do what they do. In societies, individuals can only do what they do because other individuals do their thing.

Nowhere do we see any reason to believe that it is possible to transcend this dispersed basis of goal achieving abilities. One may think that matters are different when we are talking about cognitive abilities – smartness – because cognitive abilities are supposedly not specialized but multi-purposed. This is not true, however. Cognitive abilities are generally quite specialized – e.g. movement detection, object recognition, action initiation, etc. – yet if one puts enough different cognitive abilities together in certain ways, as in the human mind, this collective society of abilities can indeed accomplish more than a small array of narrow tasks.

The myth of the multi-purposed nature of cognitive abilities fails in another way as well. For while it is true that our many cognitive abilities can aid us in working things out and accomplishing goals, they are by no means sufficient. Again, advanced goal achieving abilities, including the ability to build new tools, require many tools, and our cognitive abilities are just a subset of these. And while it is true that advanced cognitive abilities play a crucial role in the development of new tools, and hence can be seen as a source of them, they are by no means the sole source. Advanced hardware, materials, time, and energy are necessary as well – resources that must be acquired if any advanced goal is to be achieved. Cognitive ability is merely a necessary, not a sufficient, ingredient in the complicated, resource-demanding, and often serendipitous process of building advanced tools and accomplishing goals.

It should also be noted that, as hinted earlier, what we consider to be individual cognitive abilities and accomplishments are actually often the product of deeper processes that involve much more than mere cognitive abilities. All the things we know about the world around us, for instance, are not merely the product of cognitive abilities. They are the product of experiments and observations done with tools that have required centuries of engineering efforts to build, not to mention all our own non-brain body parts. Yet this knowledge now parades as a triumph of the human intellect alone rather than a collaborative effort of the head and the hand (and legs, and vocal cords, and our finely tuned detectors – eyes, ears, nose, etc.), and of all the other tools that enable us to do anything, yet which we disregard in our passionate fetishizing of the brain.

A similar point applies to language, a collaborative invention that has been constructed and improved over thousands of years by countless individuals through social trial and error. The invention of language lies far beyond the capabilities of any single human individual, since a single individual has neither the time nor the knowledge to invent language on their own. “Knowledge?”, one may wonder. Indeed, since language is inextricably connected with what we know. For example, an expression like “that lies so deeply in the company’s DNA” only makes sense given sufficient knowledge of biology, not to mention knowledge of what a company is, and of the physical meaning of “deep” along with its analogous meaning in other contexts, i.e. “hard to touch and change”. And the acquisition of the knowledge that language rests on (and co-evolves with) is, again, a highly collective accomplishment as well. It is only due to such collective triumphs that we as individuals are capable of anything.

“Human intelligence” is often compared to “chimpanzee intelligence” in a manner that presents the former as being so much more awesome than, and different from, the latter. Yet this is not the case. If we look at individuals in isolation, a human is hardly that much more capable than a chimpanzee.^^10^^ Both are equally unable to read and write on their own, not to mention build computers or fly to the moon. And this is also true if we compare a tribe of, say, thirty humans with a tribe of thirty chimpanzees. The two tribes rule the Earth about equally little. What really separates humans from chimpanzees is that humans have a much greater capacity for accumulating information, especially through language. And it is this – more precisely, millions of individuals cooperating by means of this, in itself humble and almost useless, ability – that enables humans to accomplish the things we erroneously identify with individual abilities: communicating with language, doing mathematics, uncovering physical laws, building things, etc. This, essentially, is what you can do with a human that you cannot do with a chimpanzee: train them to contribute modestly to society – to become a well-connected neuron in the collective human brain. Without the knowledge and tools of previous generations, humans are largely indistinguishable from chimpanzees.

Speaking as though human individual abilities in isolation, particularly those of the human brain, can be credited with our triumphs in engineering, science, language, etc. – as we so often do – is rather like downloading all of human knowledge to a USB key and then crediting USB keys as the sole source of all this knowledge (after all, they have undoubtedly played some role). In other words, it is hopelessly misguided. These accomplishments were the result of many complex processes, and it is only thanks to these many preceding processes that the resulting fruits can now readily be downloaded, metaphorically or literally. Again, building advanced machines and doing advanced science is not among the abilities that any isolated human individual has. It is among the abilities that millions of humans, given enough time and the right organization, might eventually acquire.

In sum, one must have a great variety of tools in order to have great capabilities. In our case, the way we accomplish things is by virtue of the abilities of the larger system we are part of. Individuals are competent because of knowledge and skills they have been taught by, and can only meaningfully use in, the collective system. And, as I have argued in this chapter and shall argue further in the following one, we have no reason to think that this pattern will change for new cognitively advanced systems that emerge in our system.

When Machines Improve Machines

The term ‘Artificial General Intelligence’ (AGI) refers to a machine that can perform any task that a human being can, at least as well as the human. It is often considered the holy grail of artificial intelligence research, and also the thing that many consider likely to give rise to an “intelligence explosion”, the reason being that machines will then be able to take over the design of smarter machines, so that their further development will no longer be held back by the slowness of humans. Luke Muehlhauser and Anna Salamon express the idea in the following way:

Once human programmers build an AI with a better-than-human capacity for AI design, the instrumental goal for self-improvement may motivate a positive feedback loop of self-enhancement. Now when the machine intelligence improves itself, it improves the intelligence that does the improving.^^11^^

This seems like a radical shift, yet is it really? As author and software engineer Ramez Naam has pointed out (Naam, 2010), not quite, since we already use our latest technology to improve on itself and build the next generation of technology. As I argued in the previous chapter, the way new tools are built and improved is by means of an enormous conglomerate of tools, and newly developed tools merely become an addition to this existing set of tools. In the words of Naam:

Another common assertion is that the advent of greater-than-human intelligence will herald The Singularity. These super intelligences will be able to advance science and technology faster than unaugmented humans can. They’ll be able to understand things that baseline humans can’t. And perhaps most importantly, they’ll be able to use their superior intellectual powers to improve on themselves, leading to an upward spiral of self improvement with faster and faster cycles each time.

In reality, we already have greater-than-human intelligences. They’re all around us. And indeed, they drive forward the frontiers of science and technology in ways that unaugmented individual humans can’t.

These super-human intelligences are the distributed intelligences formed of humans, collaborating with one another, often via electronic means, and almost invariably with support from software systems and vast online repositories of knowledge.^^12^^

The design and construction of new machines is not the product of human ingenuity alone, but of a large system of advanced tools of which human ingenuity is just one player, albeit a player that plays many roles, roles that, it must be emphasized, go way beyond mere software engineering – from finding ways to drill and transport oil more effectively, to coordinating sales and business agreements across countless industries. Moreover, it should also be noted that “super-human” intellectual abilities are already playing a crucial role in this design process as well. For example, computer programs already make illustrations and calculations that no human could possibly make, and these are crucial components in the design of new tools in virtually all technological domains. In this way, super-human intellectual abilities are already a significant part of the process of building super-human intellectual abilities. This has led to continued growth, yet hardly an intelligence explosion. Ramez Naam again:

Our science and engineering is increasingly dominated by superhuman intelligences, of which individual humans are just components.

So, have we hit the singularity, with these godlike intelligences roaming around, pushing the envelope of what we can know and do? Well, maybe. But it’s not exactly what you thought, is it?^^13^^

Naam gives a specific example of an existing self-improving “super-intelligence” (a “super” goal achiever, one could fairly call it), namely Intel:

Intel employs giant teams of humans and computers to design the next generation of its microprocessors. Faster chips mean that the computers it uses in the design become more powerful. More powerful computers mean that Intel can do more sophisticated simulations, that its CAD (computer aided design) software can take more of the burden off of the many hundreds of humans working on each chip design, and so on. There’s a direct feedback loop between Intel’s output and its own capabilities.


Self-improving superintelligences have changed our lives tremendously, of course. But they don’t seem to have spiraled into a hard takeoff towards “singularity”. On a percentage basis, Google’s growth in revenue, in employees, and in servers have all slowed over time. It’s still a rapidly growing company, but that growth rate is slowly decelerating, not accelerating. The same is true of Intel and of the bulk of tech companies that have achieved a reasonable size. Larger typically means slower growing.

My point here is that neither superintelligence nor the ability to improve or augment oneself always lead to runaway growth. Positive feedback loops are a tremendously powerful force, but in nature (and here I’m liberally including corporate structures and the worldwide market economy in general as part of ‘nature’) negative feedback loops come into play as well, and tend to put brakes on growth.^^14^^

I quote Naam at length here because he makes this important point well, and because he is an expert with experience in the pursuit of using technology to make better technology. To Naam’s point about Intel and other companies that improve themselves, I would add that although these are enormously competent collectives, they still comprise only an extremely tiny part of the much larger collective system that is the world economy – a system they contribute modestly to, and upon which they are entirely dependent.
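Naam’s contrast between positive and negative feedback can be illustrated with a standard growth model. (This is an illustrative sketch only; the symbols P, r, and K are textbook conventions, not drawn from Naam’s text.)

```latex
% Pure positive feedback: capability P grows in proportion to itself,
% yielding exponential, "explosive" growth.
\frac{dP}{dt} = rP
  \quad\Longrightarrow\quad
  P(t) = P(0)\, e^{rt}

% With a negative feedback term -- a ceiling K imposed by materials,
% energy, coordination costs, etc. -- growth becomes logistic:
% near-exponential at first, but flattening as P approaches K.
\frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right)
  \quad\Longrightarrow\quad
  P(t) = \frac{K}{1 + \bigl(\tfrac{K}{P(0)} - 1\bigr)\, e^{-rt}}
```

On the logistic curve, the percentage growth rate declines as the system grows large – which matches Naam’s observation that the growth rates of Intel, Google, and similar self-improving collectives slow with size rather than accelerating toward a “hard takeoff”.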

The discussion above hints at a deeper problem in the scenario Muehlhauser and Salamon lay out – “Once human programmers build an AI with a better-than-human capacity for AI design […]” – namely the idea that we will build an AI that will be a game-changer, an idea that seems widespread in modern discussions about both the risks and opportunities of AI. Yet why should this be the case? Why should the most powerful software competences we develop in the future be concentrated into anything remotely like a unitary system?

In terms of the development of software, humanity has developed a lot of different, separate programs that do a lot of different things very well, and we use these many software systems in the development of even more, different and better computer software and hardware today. Why should this trend change?

The human mind is unitary – trapped inside a single skull – for evolutionary reasons. The only way additional cognitive competences could be added was by lumping them onto the existing core in gradual steps. Yet why should the extended “mind” of software we build to expand our own capabilities be bound in such a manner? In terms of the current and past trends of the development of this “mind”, it only seems to be developing in the opposite direction: toward diversity, not unity. The pattern of distributed specialization mentioned in the previous chapter is repeating itself in this specific area as well. What we see is many diverse systems used by many diverse systems in a complex interplay to create ever more, increasingly diverse systems. We do not appear to be headed toward any singular super-powerful system in any way, but only an increasingly powerful society of systems. A society, not a singular mind.

This also hints at another way in which our speaking of “intelligent machines” is somewhat deceptive and arbitrary, for why talk about when these machines become as capable as human individuals rather than, say, an entire human society? After all, it is not at the level of individuals that accomplishments such as machine building occur, but rather at the level of the entire economy. If we talked about the latter, it would be clear to us, I think, that the capabilities relevant for the accomplishment of any real-world goal are many and incredibly diverse, and that they are much more than just intellectual: they also include mechanical abilities and a vast array of materials. If we talked about “the moment”^^15^^ when machines can do everything a society can, we would not be tempted to think of these machines as singular in kind, but would instead think of them as a society of sorts, one that must evolve and adapt gradually. I see no reason why we should not think about the emergence of “intelligent machines” with abilities that surpass human intellectual abilities in the same way. Indeed, this is exactly what we see: we gradually build new machines – both new software and hardware – that can do things better than human individuals, but these are different machines that do different things better than humans. Again, there is no trend toward the building of disproportionately powerful, localized, unitary machines – quite the contrary.

It has always been the latest, most advanced tools that, in combination with the already existing set of tools, have collaborated to build the latest, most advanced tools. The expected “machines building machines” revolution is therefore not much of a revolution at all. What the “once machines can program AI better than humans can” argument seems to assume is that human software engineers are the sole bottleneck of progress in the building of more competent machines, yet this is not the case. And even if it were – even if we suddenly had a thousand times as many people working to create better software – other, much greater bottlenecks would quickly emerge: materials, hardware building, energy, etc. All of these things, the whole host of tasks that maintain and advance our economy, are crucial for the building of more capable machines. Essentially, we are returned to the task of advancing our entire economy, something that pretty much all humans and machines are participating in already, knowingly or not, willingly or not.

By themselves, the latest, most advanced tools do not do much. A CAD program alone is not going to build much, and the same holds true for the entire software industry. In spite of all its impressive feats, it is still just another cog in a much grander machinery.

To say that software alone can lead to an “intelligence explosion” – i.e. a capability explosion – is like saying that a neuron can hold a conversation. Such statements express a fundamental misunderstanding of the level at which such accomplishments are made and what it takes to make them. The software industry, like any software program in particular, relies on the larger economy in order to produce progress of any kind, and the only way it can progress is by becoming part of – working with and contributing to – this grander system that is the entire economy. Again, individual goal achieving ability is a function of the abilities of the collective. And this, in the entire economy, is where the greatest “intelligence” – i.e. goal achieving ability – is found, or rather distributed. The question concerning whether “intelligence” can explode is therefore essentially: can the economy explode? To which we can at least answer, based on a survey of history, that it certainly can grow rapidly compared to previous eras.^^16^^

“But couldn’t software make intelligence explode by taking over the rest of the economy?”

In one sense, this is already happening: we are increasingly employing software throughout the entire economy in order to make it grow. In another sense – the sense my imagined objector intended – it is a fanciful suggestion. For even if we make the heap of assumptions that gets us to the point of taking this idea seriously, the answer still turns out to be “no”. Imagine we had some kind of software agent trying to increase its powers. Now, how does one do that? By forcefully taking over the economy, or by cooperating with it? The latter.

One reason this is the case is because the majority of what humans do in the economy is not written down anywhere and thus not easily copyable. Customs and know-how run the world to an extent that is hard to appreciate – tacit knowledge and routines concerning everything from how to turn the right knobs and handles on an oil rig to how to read the faces of other humans, none of which is written down anywhere. For even on subjects where a lot is written down – such as how to read faces – there are many more things that are not. In much of what we do, we only know how we do, not exactly “what”, and this knowledge is found in the nooks and crannies of our brains and muscles, and in our collective organization as a whole. Most of this unique knowledge cannot possibly be deduced from a few simple principles – it can only be learned through repeated trial and error – which means that any system that wants to expand the economy must work with this enormous set of undocumented, not readily replaceable know-how and customs. Indeed, as journalist Timothy B. Lee has pointed out, a machine agent trying to wipe out humanity would most likely be committing suicide:


A modern economy consists of millions of different kinds of machines that perform a variety of specialized functions. While a growing number of these machines are automated to some extent, virtually all of them depend on humans to supply power and raw materials, repair them when they break, manufacture more when they wear out, and so forth. You might imagine still more robots being created to perform these maintenance functions. But we’re nowhere close to having this kind of general-purpose robot. Indeed, building such a robot might be impossible due to a problem of infinite regress: robots capable of building, fixing, and supplying all the machines in the world would themselves be fantastically complex. Still more robots would be needed to service them. Evolution solved this problem by starting with the cell, a relatively simple, self-replicating building block for all life. Today’s robots don’t have anything like that and (despite the dreams of some futurists) are unlikely to any time soon. This means that, barring major breakthroughs in robotics or nanotechnology, machines are going to depend on humans for supplies, repairs, and other maintenance. A smart computer that wiped out the human race would be committing suicide.^^17^^

Indeed, four billion years of trial and error should not be underestimated, and many of the principles stumbled upon throughout this process – particularly at the micro-level where most of the action has been happening; “macro-organisms” have only existed in the more recent part of the history of life – may just be close to optimal given all the constraints we are facing.

And threatening or forcing humans to do their jobs – if someone were to propose that as a step “rogue software” might take – does not make much sense either, since that would only threaten the stability, and hence the function, of the entire system. Such threats are likely to lead to rebellion and destruction, which is not conducive to growth; nor are totalitarian measures in general.

This then brings us to a more general point, namely that there is no “controller” of our economy, which is one reason why talk about “human-controlled” vs. “non-human controlled” futures is somewhat deceptive. Such talk can give the impression that the economy is controlled by humans in the first place, which, in a relevant sense, it is not. What drives the economy is, for the most part, not high human ideals exercised through careful control, nor control in any other respectable sense, but rather human needs in need of satisfaction. High ideals matter greatly to us, but not more than our needs. In fact, not more than our convenience, and to a first approximation, that is what we try to maximize most of the time.

Our minds are built to see agents in the world, and to understand the world in terms of intentions. Yet when it comes to how our entire economy and society works and progresses, the agency model is a bad one, as there is no controller. There is just a distributed system that influences and evolves itself. This is the way it is now, and also the way it is likely to be in the future. But will it be a network of humans or a network of computers that determines where we are going? It is both already, and it is likely to remain both for a good while. How this network will evolve, and how best to influence it in better directions, is of course the question.

“Intelligence Though!” – A Bad Argument

A type of argument often made in discussions about the future of AI is that we just never know what a “superintelligent machine” could do. “It” might be able to do virtually anything we can think of, and much more than that, given “its” vastly greater “intelligence”. For instance, in response to Lee’s statement above that “barring major breakthroughs in robotics or nanotechnology, machines are going to depend on humans for supplies, repairs, and other maintenance”, the argument would be that such breakthroughs are exactly what a “superintelligent machine” could easily make.

The problem with this argument, however, is that it again rests on a vague notion of “intelligence” that this machine “has a lot of”. For what exactly is this “stuff” it has a lot of? Goal achieving ability? If so, then, as we have seen, “intelligence” requires many things, and rests on an enormous array of tools and tricks. It cannot be condensed into anything we can identify as a single machine.

Claims of the sort that a “superintelligent machine” could just do this or that complex task are extremely vague, since the nature of this “superintelligent machine” is not accounted for, and neither are the plausible means by which “it” will accomplish the extraordinarily difficult task in question. Yet such claims are generally taken quite seriously nonetheless, the reason being that the vague notion of “intelligence” they rest upon is taken seriously in the first place. Otherwise smart people’s healthy skepticism is readily thrown to the wind, it seems, if you only say the magic word: “intelligence”. Never mind what it means. As I have tried to argue, this is the cardinal mistake.

We cannot let a term like “superintelligence” provide a carte blanche for making extraordinary claims or assumptions without a bare minimum of justification. I think Bostrom’s book Superintelligence is a particularly good example of this. Bostrom worries throughout the book about a rapid “intelligence explosion” initiated by “an AI”, yet offers virtually nothing in terms of arguments for why we should believe that such a rapid explosion will, or even can, take place (cf. Hanson, 2014), not to mention what exactly it is that is supposed to explode. Yet many of us have eagerly accepted his conclusions nonetheless – his entire framework even.

The Problems with Magic Sauce Theory

The problem, as I see it, is that we talk about “intelligence” as though it were a singular thing – as brain and AI researcher Jeff Hawkins put it, as though it were “some sort of magic sauce”.^^18^^ This is also what gives rise to the idea that “intelligence” can explode, because one of the things that this “intelligence” substance can do, if you have enough of it, is to produce more “intelligence”, which can then produce even more “intelligence”. This stands in stark contrast to the view that what we call “intelligence” – whether we talk about cognitive abilities in particular or goal achieving abilities in general – is anything but singular in nature; it is rather the product of countless small clever tricks and hacks built up through a long process of testing and learning. On this latter view, there is no single master problem to crack, no magic sauce recipe of any kind, for increasing “intelligence”, but rather many new tricks and hacks to discover, and finding these is essentially what we have always been doing, and still try to do, in science and engineering.

Magic Sauce Theory is also the root of the belief in a so-called “control problem” – or rather a Hard Control Problem, not to be confused with the standard “easy” ones of making software programs of the kind that we build today work the way we want them to. The idea is that we will at some point develop an enormous amount of “intelligence” – or at least a cup of magic sauce big enough for it to explode – which then presents us with the problem of how to control the future dynamics of this sauce as it takes off.

First, this seems to assume that producing this magic sauce is an easier challenge than controlling it, or at least not much harder, and hence that the problem of creating it will be “solved” before the problem of controlling it, and, furthermore, that it even can be controlled. These are by no means obvious assumptions. Why, given that control is possible, should the Hard Control Problem be more difficult than the problem of building the relevant capabilities – the magic sauce – in the first place? Couldn’t the latter be orders of magnitude more difficult, implying that the Hard Control Problem is likely to be solved long before the magic sauce itself is developed? And why should the dynamics of magic sauce indeed be controllable even in principle?

Second, all of this assumes the existence of magic sauce in the first place, and there just isn’t any. There are just a lot of tools and skills that can be combined in delicate ways to build more tools and skills.^^19^^ And there is no reason to suppose that we will or can squeeze all of the relevant^^20^^ tools and abilities into a single entity, much less that, even if we could, it would then be possible to control how this entity develops. In reality, what we see is a gradual development of tools and abilities that tend to spread out into the larger economy, and we “control” – i.e. specify the function of – these tools, such as software programs, gradually as we make them and put them to use in practice. There is never any need to make an über control design that determines everything else. The design of the larger system is being built gradually, and it is done by solving many “small” problems. We have no reason to believe that the design of the future will be any different.

To answer the question above: the Hard Control Problem cannot be harder than the Magic Sauce Creation Problem, since the latter is unsolvable. There is no Magic Sauce. It is the philosopher’s stone of our time. What there is, however, is a growing machine-driven civilization – an ever growing set of tools that maintain themselves and build new ones – and the development of this large, distributed system is not one that can be controlled. At most, we can try to impact it, and how this is best done is indeed the question.

The problem with too many writings and discussions about AI is that all bets are off. Ill-defined hypothetical statements somehow lead to specific, “almost certainly going to happen” statements. “An AI” will be able to do whatever, and one need not argue how this can happen, or even say what is supposed to be doing the doing in the first place. Often we are not merely talking about extraordinary claims without justification, but about statements that are not even well-defined – like adding apples and the color orange. Such is the sense of the “intelligence” concept being employed, and of the discussions that follow from it.

In contrast, increases in goal achieving abilities, to talk about something well-defined, are not the product of some magical substance, or indeed of anything else that is singular in kind. They are the product of the continued accumulation of smart inventions, of many small tools and tricks. This is what “smartness” is: adapted organization on multiple levels, which is to say that there are no super-inventions or deep “secrets to intelligence”. There are just many simple mechanisms piled on top of each other in ways that have proven useful.

This is also how technological progress is made, and such progress is indeed already the result of machines that build machines – a giant network of machines and other tools, all contributing their unique strengths to the expansion of the capabilities of the larger system. To state this main point a final time for now: The basis of our capability increase has grown ever more diverse and distributed over time, resting on an ever larger and more diverse set of machines and knowledge, and hence ever less singular. There are no signs that this trend will reverse.^^21^^

Consciousness – Orthogonal or Crucial?

A question often considered open, sometimes even irrelevant, when it comes to “AGIs” and “superintelligences” is whether such notional entities would be conscious or not. Here is Nick Bostrom expressing such a sentiment:

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.^^22^^

This is false, however. On no meaningful definition of “smarter” or “more capable” than “the best human brains in practically every field, including scientific creativity, general wisdom, and social skills” can the question of consciousness be considered irrelevant. This is like defining a “superintelligence” as an entity “smarter” than any human, and then claiming that this definition leaves open whether such an entity can read natural language or perform mathematical calculations. Consciousness is integral to virtually everything we do and excel at, and thus if an entity is not conscious, it cannot possibly outperform the best humans “in practically every field”. Especially not in “scientific creativity, general wisdom, and social skills”. Let’s look at these three in turn.

To start with the latter: social skills depend on an understanding of other people, and in order to understand other people, one must simulate what it is like to be them. In many respects, this is quite easy for us humans, and it is easy because – and to the extent that – other people resemble us. We know what it is like to experience emotions such as sadness, fear, and joy, and we know the behaviors that naturally follow when such emotions are felt. This skill of recognizing the emotions of others is something many of us are so good at and do so effortlessly that we take it for granted and fail to recognize its complex and highly specific basis. Consider the following illustrative example: without knowing anything about a stranger you observe on the street, you can know roughly how that person would feel and react if s/he suddenly, by the snap of a finger, had no clothes on right there on the street. Embarrassment, distress, wanting to cover up and get away from the situation are almost certain to be the reactions of any randomly selected person. We know this, not because we have read about it or even thought about it before, but from our skilled, immediate simulations of the minds of others – one of the main things our big brains evolved to do, and something we are able to do because we have roughly the same mental structure, share roughly similar upbringings, and, at the more detailed physical level, even have the exact same neurotransmitters and receptors as other humans.^^23^^ This is what enables us to understand the minds of other people, and hence without running this detailed, conscious simulation of the minds of others, one will have no chance of gaining good social skills.

“But is the consciousness bit relevant here? Couldn’t a computer just simulate all these neurotransmitters, the receptors, and the entire structure, and then, without being conscious, understand?”

First of all, we will never be able to simulate the structure and action of our neurotransmitters to complete precision with digital computers, and the assumption that we can produce all the relevant functions of the complex physical system that is the human brain with simulated digital models of it is, it must be stressed, just an assumption. And yes, consciousness is indeed relevant. At the very least, it is relevant for us. Consider, for instance, the job of a therapist, or indeed the “job” of any person who attempts to listen to another person in a deep conversation – a “job” that certainly has great relevance for many jobs. When we tell someone about our own state or situation, it matters deeply to us that the listener actually feels and understands what we are saying. A listener who merely pretends to feel and understand would be no good. Indeed, this would be worse than no good, as such a “listener” would then essentially be lying and deceiving in a most insensitive way, in every sense of the word.^^24^^

Frustrated Human: “Do you actually know the feeling I’m talking about here? Do you even know the difference between feeling cheerful joy and hopeless despair?”

Unconscious liar: “Yes.”

Whether someone is actually feeling us when we tell them something matters to us, especially when it comes to our willingness to share our perspectives, and therefore matters for “social skills”. An unconscious entity cannot have better social skills than “the best human brains” because it would lack the very essence of social skills: actually feeling and understanding others. Without a conscious human mind there is no way to understand what it is like to have such a mind.

Another ability for which consciousness is supposedly irrelevant is “general wisdom”. Given how relevant social skills are for general wisdom, and given the point made about social skills above – that an unconscious entity essentially has no capacity to feel, understand, and truly relate to others – this claim about “general wisdom” should already stand in serious doubt. Yet there is more to be said. Rather than restricting our focus to “general wisdom”, let us consider ethics in its entirety, which, broadly construed at least, includes any relevant sense of “general wisdom”. For in order to reason about ethics, one must be able to consider and evaluate questions such as: Can certain forms of suffering be outweighed by a certain amount of happiness? Does the nature of the experience of suffering in some sense “demand” that reducing suffering be given greater moral priority than increasing happiness (of the already happy)? Can realist normative claims be made on the basis of the properties of such experiences?

In order to answer such questions, one has to be conscious in the first place. That is, one must know what such experiences are like in order to ascertain what their experiential properties are, including to what degree they are significant, if at all. Knowing what terms like “suffering” and “happiness” refer to – i.e. knowing what the actual conscious sensations of suffering and happiness are like – is as crucial to ethics as numbers are to mathematics. Wisdom without the slightest knowledge of sentience is less than blind, and indeed hardly wisdom in any meaningful sense.

The same point holds true in other areas of philosophy, such as the philosophy of mind: without knowing what it is like to have a conscious mind, one cannot contribute to the discussion about what it is like to have one and what the nature of consciousness is. An unconscious entity has no idea about what the issue is about in the first place.

So both in ethics and in the philosophy of mind, an unconscious entity would be less than clueless about the deep questions at hand – about what they are even about in the first place – and the ability to consider these all-important issues and take part in the discussions about them must be considered a crucial component of human intellectual abilities. If an entity not only fails to surpass humans in this area, but fails to even have the slightest clue about what we are talking about, it hardly surpasses the best human brains in practically every field. After all, these questions are also relevant to many other fields, ranging from questions in psychology to questions concerning the core foundations of knowledge.

Experiencing and reasoning about consciousness is an essential part of “human abilities”, and therefore an entity that cannot do this cannot be claimed to surpass humans in the most important, much less all, of these abilities.

The third and final ability mentioned above that an unconscious entity can supposedly surpass humans in is scientific creativity. Yet scientific creativity must relate to all fields of knowledge, including the science of the conscious mind itself. This is also a part of the natural world, and a most relevant one at that. Yet without knowledge of what it is like to have a conscious mind, how is one going to be “scientifically creative” and contribute to the advancement of this science in any way? Experiencing and accurately reporting what a given state of consciousness is like is essential in order for there to be any science of the mind, yet an unconscious entity obviously cannot do such a thing, as there is no experience it can report from. It cannot exercise any scientific creativity in this most important science. Again, the most it can do is produce lies – the very anti-matter of science.

Unfortunately, this most obvious example of something an unconscious entity cannot contribute significantly to – the science of the mind – is a bad one at this point in history, since, as David Pearce has noted, we live in a pre-Galilean era when it comes to this science.^^25^^ In other words, it is next to non-existent at this point, as virtually nobody is contributing to it in the first place (although, in one sense, all we ever do is act as scientists of the mind: observing, reporting, and navigating based on our phenomenology). Where physics has space telescopes and large hadron colliders to explore the world with, the psychological sciences have advanced brain scanning techniques with which they explore – not the mind – but the physical basis of mind, which is no doubt extremely important. To explore the mind itself, however, the best tools we have are some crude substances that are both illegal and – partly for that reason – unsafe to use, and which almost nobody is well-trained to use. As a result, we remain as ignorant of the vastness of the mind as pre-telescope humanity was of the vastness of the universe.

Indeed, the very idea of facts about consciousness is treated with suspicion by many scientists in the first place. So “successful” have our explorations of the world been that we now find a place only for insentient quantum fields in our (stated) worldview. Most strangely, in our mind’s own worldview, we no longer have any place for the mind itself.

Yet what is the source of this skepticism toward facts of consciousness? It seems to be that direct reports of the mind are considered especially unreliable and imprecise, yet this is not the case. For all we ever do when we report anything we know is to read off and report from our private phenomenology. Anything known to any human to any degree of precision, be it the latest precise measurement from advanced detectors or a valid mathematical proof, is known in consciousness. In a very real sense, all we ever do is introspect. What we usually call introspection is merely the observation of a certain channel of our conscious experience, yet in terms of what we know and observe, it is all consciousness. Even a simple point like this one can be difficult to appreciate, however, given the hard-to-shake naive realist default position of the human mind.^^26^^

Questions about the nature of consciousness, its basis, and its possibilities are often considered immaterial in discussions about whether future machines will surpass the capabilities of the human mind in all relevant respects – if they are considered at all. Yet this is a mistake. As I have argued in this chapter, the question of whether the machines we build will be conscious cannot be considered irrelevant, since, in order for an entity to surpass human abilities in all fields and occupations, such an entity must have a conscious mind and be able to have conscious experiences much like our own.^^27^^

More fundamentally, and partly the source of our confusion on this matter: We need to realize that facts of consciousness should not be treated with suspicion in the first place. Facts of consciousness are an all-important subset of the facts of the world, and, indeed, all facts we ever know – whether we are talking about mathematics, physics, or politics – are observed in consciousness. For this latter reason, it is no more problematic to talk about facts of consciousness than it is to talk about, say, facts of physics or mathematics; when reporting the latter, we are already past the point of admitting the existence of the former. Indeed, the parallel to mathematics is particularly striking, since it is clear – especially to mathematicians themselves – that the main lab of mathematics is the human mind. It should therefore be no more problematic to let the human mind be the lab of the science of consciousness itself, and to begin to describe this part of the world – i.e. us – with greater rigor and openness than we have done so far.^^28^^

A Brief Note on Goals

The idea, widely accepted in some circles, that it is possible to control the goals and actions of a “superintelligent agent” strikes me as deeply suspect.^^29^^ For, assuming that we will be able to program an entity to have complex goals, why should such an entity preserve these goals? One of the issues I have, more specifically, is that such an agent presumably would have the ability to represent its own goals to itself, and also possess an advanced capacity to question and evaluate the things it can represent to itself. So why would it not have the ability to question and evaluate its own goals? After all, we humans can do this, and it is indeed a crucial part of our cognitive capabilities: we can hold “objects” of all sorts in our minds – everything from triangles to emotions, and even our own goals – and then question and evaluate the properties and value of these things. And this ability does seem crucial. When we do science, for instance, we constantly reconsider, not just our worldview, but also our goals themselves. A new discovery can mean that we should now aspire to build a detector of type Y rather than type X, for instance, because this makes more sense given that discovery. Such an ability to change goals is crucial in order to do science, indeed for the accomplishment of any complex goal.

One might object that we are here merely talking about changing sub-goals, not the ultimate goal of an agent. Yet why should we believe that an advanced capacity to change sub-goals does not enable an agent to change “ultimate goals” as well? We humans, for instance – the most commonly cited example of a “general intelligence” – have certain fundamental goals “programmed into us”, e.g. most people are driven to survive and to have sex, yet our general ability to consider and evaluate things in the world also enables us to consider the value of these goals and to choose against them. Should a superintelligent agent be less capable in this regard? If so, this is an important way in which this agent’s abilities are seriously limited compared to those of humans. Essentially, to say that an agent is unable to reflect upon and change its ultimate goal is to say that it is unable to be a philosopher, unable to do philosophy. If one cannot do this, what else is one unable to do? How limited will one’s limits be?

Stephen Omohundro (Omohundro, 2008) argues that a chess-playing robot with the supreme goal of playing good chess would attempt to acquire resources to increase its own power and would work to preserve its own goal of playing good chess. Yet in order to achieve such complex sub-goals, and even to realize that they might be helpful with respect to achieving the ultimate goal, this robot will need access to, and be built to exercise advanced control over, an enormous host of intellectual tools and faculties. Building such tools is extremely hard and requires many resources, and it is harder still, if possible at all, to build them so that they are subordinate to a single supreme goal. And even if all this is possible, it is far from clear that access to these many tools would not enable – perhaps even force – this now larger system to eventually “reconsider” the goals that it evolved from. For instance, if the larger system has a sufficient number of sub-systems with sub-goals that involve preservation of the larger system of tools, and if the “play excellent chess” goal threatens, or at least is not optimal with respect to, this goal, could one not imagine that, in some evolutionary competition, these sub-goals could overthrow the supreme goal?^^30^^

In sum, I believe this common assumption about goals is far more questionable than it is made to appear, and that it deserves intense skepticism. The question is: is it at all possible to create a highly capable agent with a pre-programmed, untouchable supreme goal? I strongly doubt it.

The Unpredictability of the Future of “Intelligence”

“Much to learn you still have.”

— Yoda

A crucial skill for complex goal achieving of any kind is the ability to model and predict the future. This is a simple fact, yet it reveals a significant way in which any goal achieving system is bound to be limited, since predicting the future to great precision is impossible. This we know for various reasons, an obvious one being that there simply is not enough information in the present to represent all the salient details of the future. Any model of, say, the future of civilization has to be contained in a much shorter time and space than the unfolding of that civilization, as the former must be contained in the latter, and hence must leave out much information.^^31^^ Therefore, models of the future of civilization are bound to contain much uncertainty, and the deeper in time we try to peer, the greater this uncertainty gets. This same point applies to any agent: no agent can model its own future path well, and therefore must be deeply uncertain about how it will act in the future.

We can see the same conclusion by keeping in mind what agents, including civilizations, in fact do: they continually seek out new information and update their worldview and plans of action based on this. This means that, in order for an agent to predict its own future actions, the agent must know its future discoveries and updates before it makes them, which is obviously impossible. This process of discovering and updating is inherently unpredictable to the system itself. And this conclusion of course applies to any such system that will ever emerge. No agent can confidently predict its own future actions.^^32^^
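The impossibility of such self-prediction can be made vivid with a small diagonalization sketch (a toy construction of mine, not an argument spelled out in this book): if an agent could consult a perfect predictor of its own next action, it could simply act against whatever that predictor says, so no such predictor can exist for it.

```python
# Toy diagonalization: an agent that consults a predictor of itself
# and then does the opposite of whatever was predicted. Any candidate
# "perfect" predictor of this agent is thereby guaranteed to be wrong.

def agent(predictor):
    """Ask the predictor what this very agent will do, then defy it."""
    predicted_action = predictor(agent)  # "cooperate" or "defect"
    return "defect" if predicted_action == "cooperate" else "cooperate"

def make_predictor(guess):
    """A candidate predictor that always outputs a fixed guess."""
    return lambda some_agent: guess

# Whatever the predictor outputs, the agent's actual action differs:
for guess in ("cooperate", "defect"):
    prediction = make_predictor(guess)(agent)
    actual = agent(make_predictor(guess))
    assert actual != prediction  # every candidate predictor fails
```

This is the same diagonal move that underlies the halting problem: a system that can inspect a prediction about itself can always falsify it, which is one formal face of the broader point that discovery and updating are unpredictable from the inside.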

One can simply never know in advance what the next advanced detector is going to show, what ten times greater computing power will reveal, or what exotic experiences a novel psychedelic empathogen might induce, and therefore one cannot predict the conclusions that might follow from such discoveries.^^33^^ Even if one has a rough idea about what the next big discovery might be and what implications are likely to follow, there is still going to be some uncertainty about it, and this uncertainty accumulates quickly as we go further down the constantly branching tree of possible discoveries and updates we could make. Again, the deeper into the future we look, the more ignorant we are about the outcome, specifically about the discoveries and conclusions that will have been made at a given point.^^34^^

The fact that future goal seeking systems will never be able to predict even their own actions is worth keeping in mind, one reason being that it removes such systems from the pedestal of near-omniscience that they are so often placed on. It makes clear that there will not be some point of discontinuity, no “knowledge singularity”, after which virtually everything will be known. Advanced goal seeking systems will always keep on trying to make sense of the world with enormous uncertainty about the future, and in this respect, they will always resemble us, as we are today. More than that, the fact that future agents far more advanced and knowledgeable than us will have great ignorance about their own future also reveals how naive it is to think that we, when staring into the deep future, should be anything less than profoundly ignorant. We are. Unavoidably so.


“Facing any part of the observable reality, we are never in possession of complete knowledge, nor in a state of complete ignorance, although usually much closer to the latter state.”

— George Pólya^^35^^

What I have tried to do in this book, first and foremost, is to plant a big red flag in the word “intelligence”. Whenever someone uses this term, the first question we should ask ourselves and them is what exactly they mean by it. Do they mean the ability to achieve goals? If so, the many-faceted basis of this ability must be kept in view. Unfortunately, it almost never is. Thanks to the simple, deceptive word “intelligence”, we have managed to confuse the ability to achieve goals with something akin to “that which the human brain does”. And that is misguided. We have overlooked the process, the history – and hence the very basis and nature – of goal achieving ability. That process and basis being, in a nutshell, a large set of tools that builds an even larger set of tools. Many small tricks; not a single magic sauce found in the human brain or anywhere else.

More than that, I have argued that we need to be skeptical of claims about “AI agents”. First of all, we should be skeptical about stories of unitary agents, as it is not clear why large parts of our software capabilities should be packed into unitary agents, and, if they are, why these should be more powerful than our entire society of software combined. Second, we should be skeptical about claims of the sort that “a superintelligence could just do this or that”. What exactly could do what, and how? “Intelligence though” is a poor argument. Third, we need to reexamine the core assumptions about what the nature of such notional agents will be, what they can and cannot do, and the relationship between different capabilities. For instance, they are supposed to be able to easily understand everything that humans talk about, including the peculiar human perspective altogether, without having anything like human minds or feelings. Yet given that so much of what we humans talk about is our experiences – how we feel – this idea just makes no sense. I think there are many strange assumptions like this in discussions about “superintelligent AI”.^^36^^

Let me also make clear what I have not tried to do in this book. I have not tried to argue that everything will be just fine, and that we should not worry about advanced technology. To say that progress will be gradual and distributed is not to say that it cannot be fast or catastrophic. Indeed, my argument can be considered an utmost pessimistic one, as I have argued that there is no way to control what will happen in the future or to guarantee a desirable one. All we can do is try to impact it, and I believe that, in order to impact it in a good way, our worries should scale with the expected (dis)value of the risks we are facing. And I think much work needs to be done in order for us to gain a qualified picture of what this landscape of risks looks like.^^37^^ Therefore, I believe it makes good sense to encourage skepticism toward the conviction, widespread in some circles, that “AI risk” is clearly the greatest mountain in this landscape. I think such a judgment is premature, especially given that it seems to rest on the misguided conception of “intelligence” I have been criticizing throughout this book. Extremely bad outcomes could follow from things other than artificial intelligence, and energy might be much better spent by focusing on these things – including uncovering them in the first place – even if we do not readily see what these things might be. We should remain more than open to this possibility, especially when we know so little, and the same openness seems prudent when it comes to what kinds of risks we should be most worried about when it comes to artificial intelligence in particular. It seems unwise to constrain our thinking and worry about the future to a few highly specific scenarios we can envision today, all of which rest on long chains of contingent assumptions.

To paraphrase Mark Twain, the fact that everybody seems certain that their peculiar vision of the future will be vindicated should teach us to suspect that our own vision might well be just as (in)credible as theirs. Every specific vision of the future is likely to be wrong given the large space of possibilities that lies before our ignorant perspective, and I think we should keep this fact clearly in view and appreciate the humility in our thinking about the future that it calls for. Moreover, I think it provides at least a weak reason for basing our plans of action predominantly on what we know about the world of today, as everything else is speculative, and easily ridiculously so.

So what are the implications of all this moving forward? How do we best impact the future? This question surely deserves an entire book – and much more – in itself, but admitting our ignorance and working to remedy it as much as we can is at least one reasonable answer. We need to study the world broadly and look at actual data when possible rather than rely mainly on speculation. There is a lot of relevant data available about the world that we are not conveying or connecting, and more is being accumulated every day. The process of continually discovering and learning from such data is a never-ending process we should engage in. And a sensible approach to this process is one that enables our future selves to do the same, and do it even better, as the process itself can also continually inform how this process is best carried out, and shed light on how to best handle our massive uncertainty.^^38^^ Maximizing knowledge and minimizing uncertainty is among our highest obligations at this point, and likely always will be.

Another thing to focus on is deeper reflection on values and ways to solve problems. For we do not need to wait for more capable machines to emerge in order for us to reflect on what matters in the world, to draw reasonable conclusions about important questions, and to adopt more sensible attitudes and practices. For instance, we do not need smarter machines in order to realize that our view of ethics is grossly inconsistent. We would never find it justifiable to morally disregard and kill a being with a human body just because s/he has the mind of a cow or a chicken, or indeed any other kind of mind, and yet we find it okay to morally disregard and kill beings with such minds as long as they have the body of a cow or a chicken. While such an ethical failure makes perfect sense in light of the evolutionary origin of our moral intuitions, it rests on no rational foundation whatsoever. It clearly betrays the widely accepted principle that we should not value a being differently based on the body/external appearance that being has.^^39^^ This inconsistency no doubt represents one of the greatest ethical failures of humanity today, and we should not wait for, nor expect, machines to correct it for us. Indeed, presently, we are busy employing machines that reinforce and act out these very inconsistencies. It is not in our machines but in ourselves that we must place any realistic hope for betterment.

We have to step up.

Focusing on overt discrimination in the world of today might seem like a short-sighted focus, hardly relevant for the future centuries from now. I disagree. For the competition of ideas and values that plays out today – in everything from academic philosophy to financial decisions – is what shapes the ideas and values of tomorrow, and, by extension, the following tomorrows; ideas that will both be implemented and reflected in the machines and human brains of the future. The continually unfolding tree of agent-driven decisions is going to keep on branching in new directions alright, but its branching in the past is likely to keep on having an influence on future branchings, regardless of how explosive these will be – just like Christianity of the 12th century and Unix of the 1970s have left their clear mark in structures in the modern world of today, and thereby continue to influence current events. Thus, pushing present attitudes in a sentiocentric direction seems like a good bet if we want our descendants to be concerned with the fate of sentience. And we do.^^40^^

All we have to act on in this world is our best estimate, which provides a strong case for making our best estimate as qualified as possible. And, as per the point I have been making throughout this book about the multifaceted nature of complex goal achieving, this involves many things. There are no silver bullets, no ultimate insight that will solve everything. For while the history of progress, both in capabilities and knowledge, is often told in terms of great singular leaps and epiphanies, in large part because that is both a useful and fascinating way to talk about it, the truth is that this is not how progress happens for the most part. Rather, it is the result of many small insights accumulated gradually, much like learning to read – it cannot be done through a simple master theory; one has to learn a lot of letters, signs, and words. There is nothing singular about the basis of this ability, and the same is true of the ability to read and understand the world at large. There are many useful ways to model the world, each with their own strengths and weaknesses, and with a complex world before us to navigate in, being stuck in one kind of modeling is like knowing just one sophisticated word: disastrous. We have to keep on gathering many small insights in many different areas. These small insights are the stuff of which a useful picture of the world is made, and we should continually allow such insights to make us better, more knowledgeable, and more competent beings.

Indeed, as an appropriate encapsulation of the message of this book, I should like to end on this mundane, yet important note: we need to pay attention to the details. For in all the big picture thinking about where we are going and what the far future will look like, it can be easy to lose sight of the details. After all, the big picture is what matters, right? Sure, but if we get the details wrong in our core assumptions, we will be light years off before we even make our second inferential step. To the extent it is possible to understand the big picture, and to influence it positively, it is by getting many details right – by knowing many different things, which enables us to take many right steps over time. We should have the greatest of respect for these “small” things. They are what our success in achieving our goals depends upon.


This book has been inspired by the writings of David Pearce, Brian Tomasik, Robin Hanson, Ramez Naam, Tim Tyler, and Matt Ridley, and I owe them all my thanks for helping me think more clearly, and be more skeptical, when it comes to “intelligence”. David and Brian are both dear friends whom I feel support me to an indecent extent. I am deeply in their debt.


Ainslie, G. (2001). Breakdown of Will. Cambridge; New York: Cambridge University Press.

Allen, P.G. (2011). Paul Allen: The Singularity Isn’t Near. technologyreview.com.


Armstrong, S. (2014). Smarter Than Us: The Rise of Machine Intelligence. Machine Intelligence Research Institute, Berkeley, United States of America.

Barrat, J. (2015). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York, N.Y: Thomas Dunne Books St. Martin’s Griffin.

Beckstead, N. (2014). A Conversation with Tyler Cowen on 8 April 2014. nickbeckstead.com.


Blij, H. (2012). Why Geography Matters: More Than Ever. Oxford; New York: Oxford University Press.

Bostrom, N. (1997/2008). How Long Before Superintelligence? nickbostrom.com.


–––––– (2012). The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. Minds and Machines, 22(2), 71–85.

–––––– (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press.

Bostrom, N. & Yudkowsky, E. (2003). The Ethics of Artificial Intelligence. In Cambridge Handbook of Artificial Intelligence, edited by Keith Frankish and William Ramsey. New York: Cambridge University Press.


Brain, M. (2015). The Second Intelligent Species: How Humans Will Become as Irrelevant as Cockroaches. BYG Publishing, Inc.

Brockman, J., Shanahan, M., Pinker, S., Rees, M., Omohundro, S., Sasselov, D., Tipler, F., Livio, M., Lisi, A., Markoff, J. & Davies, P. (2015). What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence. New York: Harper Perennial.

Bronowski, J. (1965). The Identity of Man. Garden City, New York: The Natural History Press.

Bronowski, J. (1973/2011). The Ascent of Man. London: BBC Books.

Brynjolfsson, E. & McAfee, A. (2014). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company.

Chace, C. (2015). Surviving AI. CreateSpace.

Chalmers, D.J. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17(9-10), 7-65.


–––––– (2012). The Singularity: A Reply. Journal of Consciousness Studies, 19(7-8), 141-167.


Colvin, G. (2015). Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will. New York, New York: Portfolio/Penguin.

Deacon, T. (2012). Incomplete Nature: How Mind Emerged from Matter. New York: W.W. Norton & Co.

Dennett, D. (1993). Consciousness Explained. London: Penguin.

Domingos, P. (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake our World. New York: Basic Books, a member of the Perseus Books Group.

Floridi, L. (2014). The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford: Oxford University Press.

Gloor, L. (2015). Artificial Free Will. crucialconsiderations.org.


–––––– (2016). Suffering-focused AI safety: Why “fail-safe” measures might be our top intervention. Foundational Research Institute, foundational-research.org.


Goertzel, B. (2015) Superintelligence: Fears, Promises, and Potentials. kurzweilai.net.


Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. In Advances in Computers, edited by Franz L. Alt and Morris Rubinoff, 31–88. Vol. 6. New York: Academic Press.


Hanson, R. (1998/2000). Long-Term Growth As A Sequence of Exponential Modes. mason.gmu.edu.


–––––– (2010). Is The City-ularity Near? overcomingbias.com.


–––––– (2011a). The Betterness Explosion. overcomingbias.com.


–––––– (2011b). Debating Yudkowsky. overcomingbias.com.


–––––– (2014). I Still Don’t Get Foom. overcomingbias.com.


–––––– (2016) The Age of Em: Work, Love and Life when Robots Rule the Earth. Oxford, United Kingdom: Oxford University Press.

Hanson, R. & Yudkowsky, E. (2013). The Hanson-Yudkowsky AI-Foom Debate. Machine Intelligence Research Institute, Berkeley, United States of America.


Hawkins, J. & Blakeslee, S. (2005). On Intelligence. New York: Henry Holt and Co.

Hutter, M. (2012). Can Intelligence Explode? hutter1.net.


Karnofsky, H. (2012). Thoughts on the Singularity Institute (SI). Lesswrong.com. Retrieved from: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

Kelly, K. (2010). What Technology Wants. New York: Viking.

Kolak, D. (2004). I Am You: The Metaphysical Foundations for Global Ethics. Dordrecht, The Netherlands: Synthese Library, Springer.

Kurzweil, R. (2005/2006). The Singularity Is Near: When Humans Transcend Biology. New York: Penguin.

Kurzweil, R. (2012/2013). How to Create a Mind: The Secret of Human Thought Revealed. New York, N.Y: Penguin Books.

Lanier, J. (2011). You Are Not a Gadget: A Manifesto. New York: Vintage Books.

Lee, T. (2014). Will artificial intelligence destroy humanity? Here are 5 reasons not to worry. vox.com.


Legg, S. & Hutter, M. (2007). A Collection of Definitions of Intelligence. Advances in Artificial General Intelligence: Concepts, Architectures and Algorithms, volume 157 of Frontiers in Artificial Intelligence and Applications, pp. 17-24, Amsterdam, NL. IOS Press.


Leighton, J. (2011). The Battle for Compassion: Ethics in an Apathetic Universe. New York: Algora Pub.

Mannino, A., Althaus, D., Erhardt, J., Gloor, L., Hutter, A. and Metzinger, T. (2015). Artificial Intelligence: Opportunities and Risks. Policy paper by the Effective Altruism Foundation.


Markoff, J. (2015). Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots. New York, NY: Ecco, an imprint of HarperCollins Publishers.

Metzinger, T. (2009). The Ego Tunnel: The Science of the Mind and the Myth of the Self. New York: Basic Books.

Modis, T. (2012). Why The Singularity Cannot Happen. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.


Muehlhauser, L. & Salamon, A. (2012). Intelligence Explosion: Evidence and Import. MIRI, intelligence.org.


Muehlhauser, L. (2014). Facing the Intelligence Explosion. Machine Intelligence Research Institute, Berkeley, United States of America.

Müller, V. (ed). (2013). Philosophy and Theory of Artificial Intelligence. Berlin; New York: Springer.

Naam, R. (2010). Top Five Reasons ‘The Singularity’ Is A Misnomer. hplusmagazine.com.

–––––– (2015). The Singularity is Further Than it Appears. rameznaam.com.


Neumann, J.V. (1945). First Draft of a Report on the EDVAC. Moore School of Electrical Engineering, University of Pennsylvania.


Omohundro, S. M. (2007/2008). The Nature of Self-Improving Artificial Intelligence. Self-Aware Systems.


–––––– (2008). The Basic AI Drives. In Proceedings of the First AGI Conference, Frontiers in Artificial Intelligence and Applications, Vol. 171, pp. 483-492.


–––––– (2012). Rational Artificial Intelligence for the Greater Good. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer. http://selfawaresystems.files.wordpress.com/2012/03/rational_ai_greater_good.pdf

Pearce, D. (1995/2007). The Hedonistic Imperative. hedweb.com.


–––––– (2007). The Abolitionist Project. abolitionist.com.


–––––– (2008). The Reproductive Revolution: Selection Pressure in a Post-Darwinian World. reproductive-revolution.com.


–––––– (2012). Humans and Intelligent Machines: Co-Evolution, Fusion or Replacement? biointelligence-explosion.com.


–––––– (2012/2013). The Biointelligence Explosion: How Recursively Self-Improving Organic Robots will Modify their own Source Code and Bootstrap our Way to Full-Spectrum Superintelligence. In The Singularity Hypothesis: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.


–––––– (2014/2016). Non-Materialist Physicalism: An Experimentally Testable Conjecture. physicalism.com.


Pinker, S. (1997/2009). How the Mind Works. New York: Norton.

Pólya, G. & Bowden, L. (1977). Mathematical Methods in Science. Washington: Mathematical Association of America.

Ridley, M. (2011). The Rational Optimist: How Prosperity Evolves. New York: Harper Perennial.

–––––– (2015). The Evolution of Everything: How New Ideas Emerge. New York, NY: Harper, an imprint of HarperCollins Publishers.

Ross, A. (2016). The Industries of the Future. London: Simon & Schuster.

Shanahan, M. (2015). The Technological Singularity. Cambridge, Massachusetts: The MIT Press.

Smart, A. (2015). Beyond Zero and One: Machines, Psychedelics, and Consciousness. New York: OR Books.

Tomasik, B. (2014/2016). Thoughts on Robots, AI, and Intelligence Explosion. Foundational Research Institute, foundational-research.org.

Tyler, T. (2008a). Memetic Takeover. alife.co.uk.


–––––– (2008b). The Intelligence Explosion Is Happening Now. alife.co.uk.


–––––– (2009a). The Singularity is Nonsense. alife.co.uk.


–––––– (2009b). The Engineered Future. alife.co.uk.


–––––– (2009c). Against the Singularity. alife.co.uk.


Vinding, M. (2014). Moral Truths: The Foundation of Ethics. Smashwords.com.


–––––– (2015). Speciesism: Why It Is Wrong, And The Implications of Rejecting It. Smashwords.com.


–––––– (2016). Consciousness Realism: The Non-Eliminativist View of Consciousness. utilitarianism.com.


Vinge, V. (1993). The Coming Technological Singularity: How to Survive in the Post-Human Era.


Yudkowsky, E. (2007). Levels of Organization in General Intelligence. In Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, pp. 389–501. Cognitive Technologies. Berlin: Springer.


–––––– (2008). My Childhood Role Model. lesswrong.com.


–––––– (2013). Intelligence Explosion Microeconomics. Machine Intelligence Research Institute.


–––––– (2015). Rationality: From AI to Zombies. Machine Intelligence Research Institute, Berkeley, United States of America.

1 Or at least what we refer to with this word some of the time. As we shall see, intelligence is not at all well-defined.

2 Many of the components of the web have of course become more capable too, but this increase has been the product of the interplay of the parts in the collective. More on this in the note on the relationship between individual and collective goal achieving ability below.

3 Good, 1965.

4 Moreover, cf. the preceding discussion, anything remotely resembling an “intelligent machine” would not be the product of a single invention, but countless ones.

5 The asymmetry between the difficulty of making a discovery and using a discovery is also clear in mathematics, where proving a theorem is usually much harder than using it. This is how intellectual progress is made: we do hard work to make new discoveries, and these can then be stored in memory and thereby become an addition to the existing toolkit. From then on, we can easily retrieve them from memory at virtually no cost in terms of time and energy. A similar story can be told about progress in general: an increasing “storage” of “things” that enable us to do more things, and do them more efficiently.

6 It arguably never was. Indeed, Newton was a tinkerer of the absolute highest rank.

7 Bostrom, 2014, p. 54.

8 Bostrom, 2014, p. 56.

9 Bostrom, 2014, p. 22.

10 Indeed, chimpanzees actually significantly outperform humans in some cognitive tasks, such as working memory (see for instance: https://www.youtube.com/watch?v=zsXP8qeFF6A), which also reveals the silliness of measuring cognitive abilities along a single notional axis – “intelligence” – and placing humans higher on this line than chimpanzees (as for instance seen in Yudkowsky, 2008).

11 Muehlhauser & Salamon, 2012, p. 13.

12 Naam, 2010.

13 Naam, 2010.

14 Naam, 2010.

15 Contra Stephen Hawking – “Success in creating AI would be the biggest event in human history.” (https://www.youtube.com/watch?v=a1x5x3OGduc) – I think all such talk of a “moment” or “event” is deeply problematic in the first place, as it ignores the many-faceted nature of the capabilities to be surpassed, both in the case of human individuals and human societies, and, by extension, the gradual nature of the surpassing of these abilities. Machines have been better than humans at countless tasks for centuries, yet we continue to speak as though there will be something like a “from nothing to everything” moment, e.g. “Once human programmers build an AI with a better-than-human capacity for AI design”. Again, concerning this specific example, this wording does not correspond to how we actually develop software: we already have software that helps make the entire collective system that builds smarter machines more capable, and we continue to build ever more, and ever more powerful, such software.

16 See for instance Hanson, 1998/2000.

17 Lee, 2014.

18 See https://www.youtube.com/watch?v=3tgyaeP1lnU. What he says about 5-6 minutes into the video is essentially a highly condensed version of what I have been arguing for over the last 25-odd pages.

19 Robin Hanson makes a similar point when explaining his disbelief in a Good-style “intelligence explosion”:

Sure if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations. (Hanson, 2010)

20 Relevant, that is, for the purpose of significantly improving the ability to achieve goals.

21 I find it worth mentioning here that not accepting the ideas promoted by Nick Bostrom et al. on the subject of a notional “intelligence explosion” is actually, as far as I can tell, the widely shared position among experts, albeit one that gets disproportionately little attention because it fails to excite much. Experts in the field of economics, such as Robin Hanson (Hanson, 2010; Hanson, 2011a; Hanson, 2011b; Hanson, 2014; Hanson, 2016) and Tyler Cowen (Beckstead, 2014) – both of whom are intimately familiar with “singularity” and “intelligence explosion” ideas – along with, as far as I understand Robin Hanson’s writings, the broader community of economists who study growth, reject Bostrom’s view and hold a much broader “capability increase” view instead. And, while not as relevant, so do experts in artificial intelligence such as Jeff Hawkins, Andrew Ng, Demis Hassabis, and Yann LeCun (see for instance https://www.youtube.com/watch?v=EiStan9i8vA&feature=youtu.be&t=46m5). “Not as relevant” because most capabilities required to grow most capabilities, along with patterns of growth and growth trajectories in general, are not studied in artificial intelligence research.

“But shouldn’t we nonetheless give the view of Bostrom et al – that intelligence can explode – at least some weight?”

As I have tried to make clear, there is a much more fundamental question to ask ourselves first: what exactly is the view to which we should give some weight? What is this “intelligence” that is supposed to explode? We must have a clear definition of “intelligence” in order to meaningfully evaluate the likelihood of whether “it” can explode. If we are talking about a significant capability increase, then this requires a lot of knowledge about many different things and a lot of advanced tools, and gaining such knowledge and building such tools requires one to do many things, in a very broad sense of “doing”. And to say that all this doing can be done rapidly and locally, and by other means than advancing the advanced human-machine civilization already in place, is, as far as I can tell, contrary to everything we know about the world.

22 Bostrom, 2012.

23 The number of these transmitters and receptors can of course vary quite a bit from person to person, and such differences can indeed lead to big differences in terms of how our minds work. Still, most people most of the time have a “normal” human mind that is roughly understandable from the outside, at least at the “how would you feel and react if you were naked on the street” level.

24 As this shows, the question concerning whether computers will be conscious is also key when it comes to the prospect of machine-driven unemployment, or the lack thereof.

25 “Compared to the natural sciences (cf. the Standard Model in physics) or computing (cf. the Universal Turing Machine), the “science” of consciousness is pre-Galilean, perhaps even pre-Socratic.” (Pearce, 2012).

26 When it comes to knowledge of the mind, we are deeply confused. “Once we know more about the brain, we will crack this mystery of consciousness and know much more about the mind” seems like a widespread sentiment. And while it is true in some respects, in the most relevant ones, it is not. For nothing will have changed fundamentally about our knowledge “once we know more about the brain”. Whatever additional knowledge of the brain we gain, this knowledge will still be known in consciousness. The palette of conscious experience will remain the foundation, its various colors the core primitives, of our knowledge, no matter what we discover about the basis of this palette.

Studying the neuroscience of mathematical axioms and brute concepts like “point” and “dimension” in the brains of mathematicians is not going to add to our understanding of mathematics or get us “beneath” these axioms and brute concepts; it would “merely” help us clarify metamathematical questions. Similarly, clarifying the physical basis of consciousness does not get us “beneath” the core primitives of consciousness. We do not get “beneath” the experience of the color red by knowing its neural basis. We merely tie new phenomenal baggage to it in the grander mosaic that is our mind. A map of the mind placed in the territory of our mind – the only territory we ever know.

Indeed, thinking we can ever “get beneath” consciousness by studying its basis in the brain would seem to lead to an infinite regress, for where is this knowledge going to be known if not in consciousness? And where would one have the knowledge of the neural basis of this knowledge? And so on. Where would this end, if not in the basement that is our conscious mind? There is no infinite regress, of course, as our knowing is indeed always right here in our conscious experience, the stuff of which all our knowledge is made. Consciousness remains the substance and essence of what we know, whatever we find out about its physical basis.

27 For a more elaborate case for the relevance of consciousness, see Pearce, 2012/2013 and Pearce, 2012. The issue is also considered at great length in Smart, 2015.

28 For a slightly more elaborate case for this “realist view of consciousness”, see Vinding, 2016.

29 As mentioned in the note on Magic Sauce Theory, I think the idea that there is a Hard Control Problem rests on ill-conceived notions. Yet this is not to say that the construction of highly capable agents is impossible – after all, the creation of a human individual is such a construction. So just to be clear: My aim in this brief note is to present my reasons for doubting that the future actions of competent agents of any kind can be pre-programmed or controlled in any strong sense. The conclusion of the following chapter – The Unpredictability of the Future of “Intelligence” – provides further support for my doubts, I think.

30 After all, humans are such a system of competing drives, and it has been argued (e.g. in Ainslie, 2001) that this competition is what gives us our unique cognitive strengths (as well as weaknesses). Our ultimate goals, to the extent we have any, are just those that win this competition most of the time.

31 Indeed, this Russian doll problem of a system’s understanding of itself implies that no system can ever fully understand even its present function, as that would imply that the system must have a self-model that contains itself. That is, understanding oneself fully would require an infinite regress of meta-self-models – a model of the self-model of the self-model, etc. The bottom line: not only can no system reliably predict its own future, there will also be relevant aspects of its own present function that it will inevitably fail to understand.

32 This conclusion holds even if a system could spend all its resources trying to predict the future, yet it should of course be remembered that agents have other tasks they must devote resources to besides modeling the future, including the many tasks necessary for the maintenance and expansion of the system.

33 After all, unexpected and extraordinary discoveries have been made before, in subjects ranging from fundamental physics to how security agencies function in practice, and these have often changed not only our outlook, but also our actions in significant ways.

34 “But what if there are no new discoveries and updates to be made?” The claim that there are no new discoveries to be made in the future is itself – assuming it makes sense in the first place – an uncertain claim about the future of discoveries and updates. In other words, we can never confidently assert that there are no new extraordinary discoveries around the corner. Yet one can question whether the claim is at all meaningful and free of contradiction in the first place, because the question about how a continual lack of new discoveries would be handled is itself an open question, the settlement of which would also involve a continual process of discovery and updating. The absence of a discovery is itself a discovery of sorts.

35 Pólya & Bowden, 1977.

36 I have tried to criticize some of these assumptions throughout the book. Another example would be the aforementioned and criticized assumption about goals: that it is possible to create a highly capable agent with a pre-programmed, unalterable supreme goal. (Pearce, 2012) and (Pearce, 2012/2013) attack more of these assumptions.

37 To the extent a qualified picture is possible, cf. the point made in the previous chapter: we don’t know how the future unfolds, and we never will.

38 It must be remembered that what we should aim for in an ever-changing dynamic world is not an ideal static state but rather a meaningful future trajectory. Unfortunately, much philosophical debate about the ideal society and ethics seems stuck in a static framework, discussing different versions of some notional final ideal state, none of which will ever appear in the real changeful world – at least not for long. The question is not what the best state is but rather what the best trajectory is. And these two questions – destination vs. direction – demand entirely different frameworks and modes of thinking.

39 For more elaboration on this inconsistency and the implications of rejecting it, see Vinding, 2015.

40 Cf. Kolak, 2004 and Vinding, 2014.

Reflections on Intelligence

A lot of people are talking about “superintelligent AI” these days. But what are they talking about? Indeed, what is “intelligence” in the first place? This question is generally left unanswered, even unasked, in discussions about the perils and promises of artificial intelligence, which tends to make these discussions more confusing than enlightening. More clarity and skepticism about this term “intelligence” is desperately needed. Hence this book. 'Reflections on Intelligence' aims to look deeper into the concept of “intelligence”, and to question common assumptions about the phenomenon, or rather phenomena, that we refer to with the word “intelligence”. Based on such an examination, it proceeds to criticize some of the views and ideas commonly discussed in relation to the future of artificial intelligence, and to draw some general conclusions about the future of "intelligence".

  • Author: Magnus Vinding
  • Published: 2016-08-04 15:50:12
  • Words: 19657