What Should We Do?
Essays on Cause Prioritization and Fundamental Values
Copyright © Magnus Vinding 2017
The following essays were originally published on magnusvinding.blogspot.com
Introduction to Cause Prioritization
“Cause prioritization is the most effective use of altruistic resources.”
People who want to improve the world are, like everybody else, extremely biased. A prime example is that we tend to work on whatever cause we have stumbled upon so far and to suppose, without deeper examination, that this cause is the most important one of all. This cannot be safely assumed, however.
Here’s what a typical path of “cause updating” might look like: We find out that thousands of people die every single day due to extreme poverty, and find that to be the most important cause to work on. Then we realize that humanity torments and kills billions of non-human beings every year, and that the discrimination against these beings cannot be justified, which might then prompt us to focus on ending this moral catastrophe. Then we are told about the suffering of beings in the wild, about its enormous scope and neglect, and then we might (also) turn our attention there. Then we are convinced by arguments about the importance of the far future, and then that becomes our main focus. Etcetera.
To be sure, such an evolutionary progression is a good thing. The question is just whether we can optimize it. Might we be able to undertake this process of updating in a more direct and systematic fashion? After all, having undergone a continual process of updating that has made us realize that we were wrong about, and perhaps even completely unaware of, the most pressing causes in the past, it seems reasonable to assume that we are likely still wrong in significant ways today. We should be open to the possibility that the cause we are working on presently is in fact not the most important one we could be working on.
Cause prioritization is the direct and systematic attempt to become more qualified about which causes we should prioritize the most. And the importance of such a deliberate effort should be apparent: Working on the cause(s) where we can have the best impact is obviously of great importance — it means that we can potentially help many more sentient beings — and in order to find that cause, or set of causes, deliberately seeking it seems significantly more efficient than expecting to stumble upon it by chance without looking. Defying the seductive pull of optimizing specific tasks that further a given cause, cause prioritization goes a step meta and asks: given our values, which causes are most important to focus on in the first place?
I will attempt to explore this question in these essays. I wish to provide a rough framework for how we can think about cause prioritization, and based on this, I will try to point to important causes and questions that I think we should focus on and explore further.
The Tree of Ought — A (Cause) Prioritization Framework
Imagine a couple that tries to make a decision about how to set the table at their wedding. They spend all their time trying to work out this difficult decision, making lists and drawings, and asking Google and friends for advice. Yet underneath their efforts pertaining to the wedding table, a deeper doubt is lingering in their minds: whether they really want to marry each other in the first place. Unfortunately, they have not spent sufficient time contemplating this more fundamental question, and yet what occupies their attention is still the wedding table.
This is clearly unreasonable. Whether it makes sense to spend time on setting the wedding table depends on whether the wedding is sensible in the first place, and therefore the latter is clearly the most important question to contemplate and answer first. Two weeks after the wedding, a divorce is filed. It was all a waste, one that deeper reflection could have prevented.
This example may seem a little weird, yet I think it captures what most of us do most of the time to a striking extent. We all spend significant amounts of energy on planning and executing ill-considered “weddings.” Rather than considering the fundamental questions whose answers determine the sensibility of more specific tasks, we get caught up in ill-considered specific tasks that happen to feel important or interesting.
This is hardly a great mystery when considered from an evolutionary perspective: doing whatever felt most interesting at any given time probably made a lot of sense in our ancestral environment, and no doubt still does much of the time — ignoring every moderately interesting thing that jumps into consciousness is not a recipe for success in today’s world either. The key is balance, of course, yet I believe we are entirely out of balance for the most part, unfortunately. Too often, our focus is guided by a sense of “uh, this seems interesting” — crudely speaking, a dopamine hit — rather than by pre-frontally guided considerations about which objectives are the most reasonable to pursue. When it comes to what we should be doing, we have a huge unrecognized “uh, this seems interesting” bias, a bias that makes us lose touch with the importance of thinking hierarchically about what we should be doing.
For that is exactly the point that the example above illustrates: we should contemplate the fundamental questions and decisions before we move on to the more specific ones. This is simply the only thing that makes sense, as the answers to fundamental questions are what determine which specific tasks make sense to pursue in the first place. In short, the specifics are contingent on the fundamentals. And this has significant implications: we need to pay much more attention to the fundamentals.
This is what the “tree of ought” illustrated below is all about. It is a framework for making decisions that emphasizes first things first, while highlighting that “first things first” is best thought of in hierarchical terms.
At the bottom of this tree we have our fundamental values upon which everything else rests and depends — the root and stem of the tree, one could say. From this, something slightly more specific follows, namely the causes we should pursue given our values, the branches of the tree. Finally, on these branches, we find something more specific still, namely interventions that enable us to attain success in our cause area — the leaves of the tree, if you will.
One could of course construct this hierarchical tree with any number of levels, but I find this three-level “value—cause—intervention” division useful, at least for starters. As one moves on, for instance to specific interventions, the tree will keep on getting divided further, as it will then again be useful to think in hierarchical terms. For any specific goal to be achieved, it will always be the case that some tasks and questions are of a more fundamental character than others, and hence more important to solve first.
An illustration of this three-level tree might look like this (there can obviously be any number of causes):
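Since the original diagram is not reproduced here, the three-level hierarchy can also be sketched as a simple nested data structure. The following Python sketch is purely illustrative; the cause and intervention labels are hypothetical placeholders of mine, not examples given in the text.

```python
# An illustrative sketch of the three-level "tree of ought":
# fundamental values at the root, causes as branches, interventions as leaves.
# The labels below are hypothetical placeholders, not the author's examples.

tree_of_ought = {
    "fundamental values": {  # the root and stem: what matters?
        "cause A": ["intervention A1", "intervention A2"],
        "cause B": ["intervention B1"],
        "cause C": ["intervention C1", "intervention C2", "intervention C3"],
    }
}

def leaves(tree):
    """Collect all interventions (leaves). Each leaf only makes sense
    given the cause (branch) and values (root) above it."""
    result = []
    for causes in tree.values():
        for interventions in causes.values():
            result.extend(interventions)
    return result

print(leaves(tree_of_ought))
```

The nesting mirrors the contingency the text describes: change the root ("fundamental values") and every branch and leaf below it must be re-derived.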
So, at the most general level, this “tree of ought” asks us to consider three questions, in the following order:
1) What are our fundamental values? (Or phrased in realist terms: what matters?)
2) What specific causes should we pursue given our fundamental values?
3) What interventions should we pursue given our specific causes?
I think this is an extremely valuable set of questions, not least due to their ordering: it is clear that our answers to question 3) depend on our answers to question 2), which again depend on our answers to question 1) — or, more reverentially, Question One.
Hence, the tree of ought suggests a rather counter-intuitive idea that does not seem shared by many, namely that contemplating fundamental values, i.e. Question One, should be our first priority. I think this is largely correct, at least if we do not have a highly qualified answer in place already. Our fundamental values can fairly be thought of as the point of departure that determines our forward direction, and if we take off in just a slightly sub-optimal direction and keep on moving, we might well end up far away from where we should ideally have gone. In other words, being a little wrong about fundamental values can result in being extremely wrong at the level of the specifics, which is why it is worth spending a lot of resources on being extremely well-considered about the fundamentals.
So contrary to what we may naively assume, the tree of ought suggests that the question concerning fundamental values is not an irrelevant, purely theoretical question that prevents us from doing something useful. Rather, it is the question that determines what is useful in the first place. And answering it is far from trivial.
Fundamental Values and the Relevance of Uncertainty
As argued in the previous essay, reflection on fundamental values seems to stand among the most important things we could be doing. This is really quite obvious: working effectively toward a goal requires knowing what that goal is in the first place. And yet it does not seem to me that we, as purportedly goal-oriented “world improvers” or “effective altruists”, have much clarity when it comes to what our goal is, nor do we seem to be working particularly hard on gaining it. This, I think, is unreasonable. How can we systematically try to help or improve the world as much as possible if we do not have decent clarity about what this in fact means? We can’t. We are trying to optimize an ill-defined function. And that is bound to be a confused endeavor.
It is tempting to be lazy, of course, and to think that we at least have some idea about what a better world looks like — “one that contains less unnecessary suffering”, for instance — and that this suffices for all intents and purposes. Yet that would be a fatal mistake, since fundamental values are what the sensibility of the pursuit of any action or cause rests upon. Everything depends on fundamental values. And even apparently small differences in fundamental values can imply enormous differences in terms of what we should do in practice. This means that a three-word sentiment like “minimize unnecessary suffering”, while arguably a good start, will not suffice for all intents and purposes (after all, what exactly do “minimize”, “unnecessary”, and “suffering” mean in this context? E.g. does “minimize” allow outright destruction of sentient beings or not?). We need to be as elaborate and qualified as possible about fundamental values if we are to bring about the most valuable/least disvaluable outcomes.
Indeed, I would argue that the unmatched importance of fundamental values, combined with the fact that serious reflection about fundamental values seems widely neglected, implies that such reflection itself stands as a promising candidate for being the most important cause of all, however detached from real-world concerns it may seem. After all, clarification of fundamental values is all about clarifying what is most important, which almost by definition makes it the most important thing we could be doing. Only when we are reasonably sure that we have a decent map of the landscape of value — a decent idea of what the notional “utility function” we are trying to optimize looks like — can we move effectively toward optimizing accordingly.
“Improving the World” — Two Questions Follow
If we have the goal of improving the world, this gives us two basic things to clarify: 1) what does improving the world mean? In other words, what does the goal we are trying to accomplish look like in more specific terms? And 2) how does one accomplish that?
We have an “end question” and a “path question”. And I would argue that we are not sufficiently aware of this distinction, and that we are generally far too fixated on paths compared to ends. We are not wired to reflect on goals, it seems, at least not as much as we are wired to accomplish an already given goal. We are optimizers more than we are reflectors, which makes sense from an evolutionary perspective. Yet it makes no sense if we are serious about “improving the world”. Success in this regard requires reflection on the aforementioned “what” question, and perhaps far more resources should be spent on reflecting on this question than on attacking the “how” question, since, again, the sensibility of any path depends on the sensibility of the end that it leads to. Paths depend on ends.
What Does Clarification of the “What” Question Look Like?
To many of us, answering this “what” question has consisted in deeming utilitarianism to be the correct, or at least our preferred, moral theory, and then we have jumped to the path stage from there: how do we optimize things based on this theory?
Yet this is much too vague an answer to warrant moving on to the “how” stage already. After all, what kind of utilitarianism are we talking about? Hedonistic or preference utilitarianism? Even similar versions of these two theories often have radically different practical implications. More fundamental still, do we subscribe to classical or negative utilitarianism? The differences in terms of practical implications between the two can be extreme.
What Kind of That Kind of Utilitarian?
And yet much still remains to be clarified at the “what” stage even if we have these questions settled. For instance, if we subscribe to a version of negative hedonistic utilitarianism — i.e. hold that reducing conscious experiences of suffering is our highest moral obligation — this still leaves us with many open questions. For to say that our focus is purely on suffering still leaves open how we prioritize different kinds of suffering. Crucially: are we much more concerned with instances of extreme suffering than we are with comparatively milder forms, perhaps even so much more that we consider it impossible for any number of mildly bad experiences to be worse than a single very bad one? And, similarly, that it is impossible for any number of very bad experiences to be worse than a single even worse experience, and so on. We may place any number of points along the continuum of more or less horrible forms of suffering at which no amount of less bad experiences can add up to something as bad as an experience at that given point, and the differences in terms of the practical implications that follow from views with and without such points can again be enormous. For instance, given such a “chunked” view of the relative disvalue of suffering, averting the risk of instances of maximally optimized states of suffering — “dolortronium” — would seem to dominate everything else in terms of ethical priorities, while other views might only consider it yet another important risk among many.
Is the Continuum Exhaustive or Not?
Another thing that would seem in need of clarification is whether the continuum of more or less (un)pleasant experiences provides an exhaustive basis for ethics, as opposed to merely being an extremely significant part, which it no doubt is on virtually any ethical view that has ever been defended. For example, if we imagine a world inhabited by a single person who suffers significantly and who is destined to suffer in this way for the rest of their life, yet who nonetheless very much wants to live on, would it be right for us to painlessly kill this person if we could? It would seem that we are obliged to do so on hedonistic versions of utilitarianism, and yet saying that such an act is permissible, much less normative, seems highly counterintuitive, and it seems to suggest that where on the continuum of more or less (un)pleasant states of consciousness a person’s experiences fall is — while highly important — not all that matters. One may consider this a strong reason in favor of granting significant weight to preferences in one’s account of what matters.
Yet to consider both the quality of experiences and preferences is arguably still not sufficient when it comes to what is ethically relevant. For imagine that we again have a world inhabited by just one person, a person who experiences the world like you and I do, with the (admittedly rather significant) exceptions that their experience is always exactly hedonically neutral — i.e. neither pleasant nor unpleasant — and that they have no preferences. If preferences and hedonic tone exhaustively account for what makes a being ethically significant, it would seem that there is nothing wrong with killing this being. Yet this does not seem right either, at least not to me. After all, this being does not want to die, so who are we to deem their death permissible? What if we learned that one of our fellow beings in this world actually experiences the world in this way? Would this mean that they are not inherently valuable as individuals? That does not seem right to me.
The Nature of Happiness, Suffering, and Persons
Even if we have clarified the questions above and know that our goal is, say, to minimize extreme suffering and premature death as much as possible (among other things), this still leaves an enormous research project related to the “what” question ahead of us. For what is suffering, and what is a person? While the answers to these questions may be fairly clear in phenomenological terms (although perhaps they are not), they are far from clear when we speak in physical terms. What are suffering and happiness in terms of physical states? And what are the differences between the physical signatures of mild and extreme forms of suffering? More generally, what is a person in physical terms? In other words, what does it take to give rise to a unitary conscious mind? Without decent answers to these questions concerning the nature of our main objects of concern, we cannot hope to act effectively toward our goals.
And yet barely anyone seems to have made the clarification of these crucial questions a main priority (a few researchers are notable exceptions). Such clarification of this aspect of the “what” question must also be considered a neglected cause (one can say that there is a phenomenological side of the “what” question, where we discuss what is valuable in terms of conscious states [e.g. suffering and happiness], and a physical one, where we describe things in terms of physical states [e.g. brain states], and both are extremely neglected in my view).
Can digital computers mediate a unitary mind that can suffer? Can empty space? If so, does empty space contain more suffering than what we would expect there to be in the same amount of space filled with digital computers of the future? These may seem crazy questions, but much depends on our answers to them. Acting sensibly requires us to have as good answers to such questions concerning the basis of consciousness as we can, as quickly as we can. We need to attack these “what” questions concerning the nature of consciousness, including happiness and suffering in particular, with urgency.
Reflection on Values — Win Win
As mentioned, small differences in fundamental values can yield enormous differences in terms of practical implications, which hints that it makes good sense to spend a significant amount of our resources on becoming more qualified about them. And this applies whether we are moral realists or simply wish to optimize “what we care about”. For a moral realist, there are truths about what has value, and we can discover these or fail to. Similarly, for a moral subjectivist who wishes to optimize “what they care about”, deep reflection seems equally reasonable, since in this case, too, there are truths to be discovered in some sense: truths concerning what one in fact cares about.
Why, then, do we see so little discussion concerning fundamental values? After all, having a large discussion about these seems likely to help us all become more qualified in our reflections — to reconsider and sharpen our own views — and not least to bring others closer to our own view by causing them to update. And even if discussion only causes others to move away from one’s view, this seems like a welcome call for serious reexamination, which can then be done based on the reasons given for the rejection of one’s view. It seems like a win-win game that we are all guaranteed to gain from, yet we refuse to show up and claim the reward.
One may object that all this reflection is a distraction from the real suffering going on today that we should address with urgency. While I am quite sympathetic to this sentiment, and share it to some degree, the urgency and magnitude of suffering going on right now does not imply that we should reflect less. After all, the primary reason that many of us have made reducing suffering a priority in the first place was reflection, and the same applies to how we got to care about the biggest specific sources of suffering we are concerned with, such as factory farming and wild-animal suffering — we came to realize the importance of these via reflection, not by optimizing already established goals. And who is to say that there may not be more forms of suffering we are still missing, even forms of suffering that could be taking place today? Moreover, the fact that the far future is much bigger than the immediate future, and therefore will contain much more suffering by any standard, implies that if we truly are concerned with reducing suffering, starting today to reflect on how we can best reduce suffering in the future seems among the most sensible things we can do. Even in a world full of urgent catastrophes, we still urgently need to reflect.
However, saying that reflection should be a priority is of course not to say that we should not also be focused on direct interventions. After all, experience with interventions is likely to also teach us many things, and provide valuable input, for our reflections about value and what can be achieved in the space of more or less valuable states of the world.
What Is Value and What Is Valuable? — My Own View
In the hope of encouraging such thinking and discussion about fundamental value, I shall here present my own idiosyncratic, yet unoriginal, account of what value is and what has value. This, in my view as a moral realist, is an attempt to get the facts right when it comes to what value is, which is not to say that I do not maintain considerable uncertainty about it (as we should in the case of all difficult factual questions).
I believe value is a property of the natural world — more specifically, a property of consciousness.
Perhaps I should be intensely skeptical of myself already at this point. For doesn’t it seem suspiciously self-centered for me, as a conscious being, to claim that consciousness is what matters, indeed all that matters, in the world? Why should only conscious beings matter? Why not something else?
This may indeed seem strange, but I think this skepticism gets everything backwards. Contrary to common sense, it is not the case that we have a general Platonic notion of “value” drawn out of some neutral nowhere that we then arbitrarily assign to consciousness. Rather, value emerges and is known in conscious experience, and might then be projected onto the “world out there” from there. In my view, value, like the color red, does not exist, indeed cannot exist, outside conscious experience, because, like red, it is itself a property of experience. We may talk about non-phenomenal value, and even do so in meaningful ways — we can for instance talk about instrumentally valuable things — just like we can talk about red objects “out there”, yet, ultimately, “value” and “red” are not external to consciousness; they are properties/states of it.
“But how does this fit with the thought experiment above that strongly hints that preferences seem intrinsically important as well, and the thought experiment that hinted that even in combination, hedonic tone and preferences do not seem able to provide an exhaustive account for what is valuable?”
Preferences do indeed matter, yet in what sense can they be considered different from our conscious states? Preferences are contained in our experience moment-to-moment, and if a state of experience contains a preference to continue that state, this conscious state can, I would argue — even if it contains pain — be considered valuable in a broader sense of value, yet one that still only places value in experience itself. Preferences are yet another aspect of experience, and a highly significant one in terms of value.
Another, more controversial response one might give is that our healthy social intuitions that are of great instrumental value — such as a relentless insistence on respect for the preferences and lives of others — cause us to overestimate the badness of death in both thought experiments above (which is a reasonable reaction, and hence not an overestimate, in our social world, where embracing the notional sanctity of life indeed is of immense instrumental value). After all, we do not find it terribly bad, if bad at all, when a person who very much wants to stay awake falls asleep against their will, and yet the case of painlessly turning off someone’s consciousness against their will is, modulo the secondary effects on others and on ourselves (which we were supposed to ignore in the thought experiments above, given that we had a world inhabited by just one person, which should hold all else equal), in effect the same from the perspective of the person who falls asleep. One might object that in the case of sleep, one will wake up again, yet we could also say in the case of “turning someone off” that we could turn the person on again eight hours later. This hardly makes us see the turning off as less bad, especially if we continue turning the person off like this every day. The fact that the turning off is done by someone else, and that that someone is ourselves of all moral agents in the universe, just does not sit right with our social and moral — to a first approximation, “afraid to get punished/stand outside” — intuitions.
We have strong intuitions about death being a bad thing, which is not at all hard to make sense of in evolutionary terms. In our evolutionary past, we needed our fellow beings whom we cared about to be around for the sake of our survival and for our genes to be propagated. Largely for that reason, it seems safe to say, we have evolved to feel great sorrow and pain when those we care about die. To perceive that as very bad. Yet is the badness in the death or in our perception of it?
I do not have clear answers to these difficult questions. However, what I think is clear is that value ultimately pertains to consciousness and consciousness only. This is the common thread in both thought experiments above: we have pitted hedonic tone and preferences against each other, and also removed them both, yet consciousness was there in the subjects in both cases, and this does seem the undeniable precondition for there to be any value, and hence for any ethical concern to meaningfully apply. If we were talking about unconscious bodies, there would be no dilemma. The only remaining problem would then be the secondary effects of the kind Kant worried about with respect to our harming non-human beings: that hurting them might make us more prone to harming “real” moral subjects.
In conclusion, the claim that value is something found only in consciousness holds, in my view. And not only do I hold that value is ultimately contained in this singular realm that is consciousness, I also think we can measure it along a single scale, at least in theory if not in practice. In other words, I find a unidimensional account of value compelling.
This is not to say that (dis)value is a simple phenomenon, much less something that can be easily measured. Yet it is to say that it is something real and concrete that we can locate in the world, and something of which there can be more or less in the world, which of course still leaves many questions unanswered.
Positive and Negative Value — Commensurable or Not?
For to claim that value comes down to facts about consciousness is rather like saying that science is about uncovering facts about the world — it says nothing about what those facts are. For example, saying that there is positive value in happiness while there is negative value in suffering does not imply that these values are necessarily commensurable. Many have doubted that they are. Karl Popper was one such doubter: “[…] from the moral point of view, pain cannot be outweighed by pleasure, and especially not one man’s pain by another man’s pleasure. Instead of the greatest happiness for the greatest number, one should demand, more modestly, the least amount of avoidable suffering for all […]”
So is David Pearce: “No amount of happiness or fun enjoyed by some organisms can notionally justify the indescribable horrors of Auschwitz.”
I find that I agree with many of these sentiments — at least when it comes to extreme suffering (to say that extreme suffering cannot be outweighed by any amount of happiness is not to say that this also applies to mild forms of suffering; too often discussions get stuck in this latter dilemma, happiness vs. mild suffering, rather than the former, happiness vs. extreme suffering, where it is much harder to defend a non-negative position).
There is such a thing as unbearable suffering, yet it seems that there cannot be anything analogous on the scale of happiness. The expression “unbearable levels of happiness” makes no sense. Another thing that we find in suffering, at least in extreme suffering, that we do not find in happiness is urgency. There is no urgent obligation for us to create happiness. For instance, imagine that we are at an EA conference and someone shows up with happiness pills that would make everyone maximally happy. Would there be any urgency in giving everyone such a pill as quickly as possible? Would and should we rush to distribute this pill? No. Yet if a single person suddenly fell to the ground and experienced intense suffering, people would and should rush to help. There is urgency for betterment in that case — and that urgency is inherent to extreme suffering while wholly absent in happiness. We would rightly send ambulances to relieve someone from extreme suffering, but not to elevate someone to extreme levels of happiness.
A similar consideration was crucial in my own moving away from the view that happiness and suffering are commensurable, more specifically, a consideration about the Abolitionist Project that David Pearce advocates. For if happiness and suffering are truly commensurable and carry the same ethical weight, this would mean that a completion of the Abolitionist Project — that is, the abolition of suffering in all sentient life — would not represent a significant change in the status of our moral obligations. We would then have just as great an obligation to keep on moving sentience toward greater heights. Yet this did not seem right to me at all. If we were to abolish suffering for good, we would, I think, have discharged our strongest moral obligations and be justified in breathing a deep sigh of relief.
Another reason in favor of the asymmetrical view is that it seems that the absence of a good is not bad in the same way that the absence of a bad is good. If a person were in deep sleep, experiencing nothing, rather than, say, having the experience of a lifetime, this cannot, I believe, be characterized as a catastrophe. It is in no way similar to the difference between sleeping and being tortured, which is a matter of catastrophe and great moral weight.
In contemplating any supposed symmetry between suffering and happiness, it seems worth considering whether there is any pleasure so great that it can justify just a single one of the atrocities that happen every day — a rape, for instance. Can the pleasure experienced by a rapist, if it is made great enough, possibly justify the suffering it imposes on the rape victim? Classical utilitarianism has it that if the pleasure is great enough for the rapist, the rape can in fact be justified, even normative. Negative utilitarianism pulls the brakes here, however. The level of pleasure experienced by the rapist is irrelevant: imposing such harm for the sake of pleasure cannot be justified. Which is really the more absurd position?
This is of course not to say that there is not great value in happiness. Indeed, there is no contradiction in considering pleasure more valuable than nothing, and in considering increasing happiness to be valuable, yet not ascribing urgency to it, and not considering it a moral obligation. This is my view: Happiness is wonderful, but compared to the alleviation of extreme suffering, increasing happiness (of the already happy) seems secondary and morally frivolous — like a luxury rather than a moral obligation. Counterintuitively, however, the urgency of alleviating extreme suffering does actually make promoting happiness and good physical and mental health an urgent obligation too, at least an instrumental one, as we must stay healthy and motivated if we are to effectively alleviate extreme suffering.
The Continuum of Suffering: Breaking Points or Not?
As my repeated mention of extreme suffering above hints, I do believe that there is a point along the continuum of suffering, likely many such points, past which no amount of less bad experiences can be considered as bad as the suffering found at that point. For example, it seems obvious to me that no number of moments of tediousness can be of greater disvalue than a single instance of torture. One might argue that such a discrete jump seems “weird” and counterintuitive, yet I would argue that it shouldn’t. We see many such jumps in nature, from the energy levels of atoms to the breaking point of Hooke’s law: you can keep on stretching a spring, and the force with which it pulls will be approximately proportional to how far you stretch it — up to a point, the point where the spring snaps. I do not find it counterintuitive to say that gradually making the degree of suffering worse is like gradually stretching a spring: at some point, continuity breaks down, and your otherwise reasonably valid framework of description and measurement no longer applies.
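To make the structure of such a lexical view concrete, here is a minimal sketch; the threshold, the intensity scale, and the function name are all illustrative assumptions of mine, not claims about where any real breaking point lies:

```python
# Illustrative sketch of a lexical ("breaking point") view of disvalue.
# The threshold and intensity numbers are purely hypothetical; they stand
# in for the unknown point(s) at which the continuum of suffering "snaps".
EXTREME_THRESHOLD = 100  # hypothetical intensity marking the breaking point

def disvalue(experiences):
    """Summarize a set of negative experiences as a pair
    (count of extreme instances, summed ordinary intensity).
    Tuples compare lexicographically, so any extreme instance
    dominates any amount of sub-threshold suffering."""
    extreme = sum(1 for e in experiences if e >= EXTREME_THRESHOLD)
    ordinary = sum(e for e in experiences if e < EXTREME_THRESHOLD)
    return (extreme, ordinary)

# A million moments of mild tedium vs. a single instance of torture:
mild = disvalue([1] * 1_000_000)  # -> (0, 1000000)
torture = disvalue([100])         # -> (1, 0)
assert torture > mild  # no number of mild bads adds up to one extreme bad
```

The point of the tuple comparison is simply that aggregation still works below the threshold, while no amount of sub-threshold suffering can trade off against a single instance above it.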
Unfortunately, I do not have a detailed picture of where such points lie or, as mentioned, how many there might be. All I can say at this point is that I think this is an issue of the utmost importance to contemplate, discuss, and explore in greater depth in the future, and that much depends on how we view it.
Thus, it seems to me that the prevention of the most extreme forms of suffering — the prevention of the emergence of “dolortronium”, if you will — is our main moral obligation. In my view, this is where the greatest value in the world lies. I could be wrong, however.
The Relevance of Uncertainty — Doing What Seems Best Given our Uncertainty
“When our reasons to do something are stronger than our reasons to do anything else, this act is what we have most reason to do, and may be what we should, ought to, or must do.”
— Derek Parfit, from the summary of the first chapter of “On What Matters”
It seems reasonable to maintain some uncertainty when it comes to our view of fundamental values. This again applies whether we are moral realists or subjectivists. In the case of moral realists, there is always the risk of being wrong about what is in fact valuable, while in the case of moral subjectivists, there is the risk of being wrong about what one actually cares about most deeply. This is not, however, to say that one knows nothing, or that one has no functional certainty about anything. For instance, while we may not be able to settle the details about value, we likely all agree and have great confidence in the claim that, all else equal, suffering is bad and worth preventing, and the more intense, the worse and more worth preventing it tends to be.
The interesting question is how to act given our uncertainty. Doing what seems best in light of all that we know seems, well, most reasonable. Yet, given uncertainty about fundamental values, what seems most reasonable is not to merely pick the single ethical theory or account of value that we find the most compelling and then try to work out the implications and act based on that, although it may be tempting and straightforward. Rather, the most reasonable thing would be to weigh the plausibility of different accounts of value, including one’s preferred one, and to then work out the implications and act based on the collective palette of weighted values one gets from this process, however small one’s credence in any single account may be.
And it is worth noting here how the distinction between absolute and relative uncertainty is highly relevant. For imagine that we know only of three different value theories, and we assign 5 percent credence to value theory A, 10 percent to theory B, and 15 percent to C. This is not the same situation as if we assign 10 percent to A, 20 percent to B, and 30 percent to C, although the relative weight between the theories is the same. In the first case, the possibility that we are fundamentally wrong about values is kept far more open than in the latter case, and this has implications for how confident one should be, and for how many resources one should put into getting a better grasp of values compared to other things.
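The arithmetic behind this distinction can be sketched as follows, using the hypothetical credences from the example above:

```python
# Two credence assignments over the same three known value theories.
# The relative weights are identical (1 : 2 : 3), but the probability
# mass left over for theories we have not yet conceived of differs greatly.
case_1 = {"A": 0.05, "B": 0.10, "C": 0.15}
case_2 = {"A": 0.10, "B": 0.20, "C": 0.30}

def unknown_mass(credences):
    """Probability mass reserved for being fundamentally wrong,
    i.e. for value theories not among the ones considered."""
    return 1.0 - sum(credences.values())

print(round(unknown_mass(case_1), 2))  # 0.7: wide open to unknown theories
print(round(unknown_mass(case_2), 2))  # 0.4: considerably less open
```

On the first assignment, most of one’s credence lies outside the known theories, which suggests spending correspondingly more resources on exploring values rather than acting on any single theory.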
To relate this to my own view, I have relatively high confidence in the view that all value ultimately relates to consciousness — more than 90 percent — which is a high absolute credence. Yet when it comes to the possibilities of consciousness, my own provincial knowledge of the space of possible states of mind forces me to admit that my view of the landscape of value — that is, the landscape of value found within consciousness — could be deeply flawed. Concerning this question, I have a considerable degree of uncertainty, yet relatively speaking, compared to other accounts of value I have come across, I still find my own view most compelling by far (and this should hardly be surprising, given that my current view is already a product of countless updates based on writings and discussions on ethics). However, the fact that I have changed my mind about value in significant ways over the last few years should also teach me to be humble and to admit that my present view could well be wrong.
How confident I should be in my best estimate concerning what the landscape of value in consciousness looks like is hard to say — 70 percent? 5 percent? For the fact that the landscape of possible experiences lies mostly unexplored before me does not invalidate the limited knowledge of the landscape that I do have, and the reasoning about it I have done, and this knowledge and reasoning does provide what appears to me a decent basis for my current best estimate. I should probably be humble and keep on reflecting, yet at the same time it does not seem like my large uncertainty, in itself, should cause me to change my current estimate — after all, my view might be wrong along all axes, in both positive and negative directions. I might have a too negative view of value, or I might not. If anything, my uncertainty calls for deeper exploration and reflection.
The Views of Others
There are other people in the world besides ourselves, and many of them have thought a lot about the subject of value as well, which makes it seem worth paying attention to their views and updating based on them. After all, why should we be more correct than others when it comes to what is valuable? Why give our own perspective a privileged position compared to those of other conscious minds that also experience value? Or, phrased in subjectivist-friendly terms: if others upon reflection have found that they value something different from what we value ourselves, might we not in fact also value that upon reflection, at least to a greater degree than we thought?
After all, in dealing with ethics and values, what many of us think matters is not our own view of the perspectives of others, but those perspectives themselves. It therefore makes good sense to listen to those perspectives. And if they report something radically different from what we believe about their perspectives and what they find valuable, who are we to claim we know better on their behalf, about their perspective, which they know intimately and we don’t? Is that not just yet another instance of the “vulgar pride of intellectuals”?
I think this is a valid point, and in my case, this consideration should arguably move my view in a less negative direction, and it probably has, although it is not entirely clear how much it should move me. After all, I do not think my view is that contrary to what others report. Again, my view is not that happiness is not of great value, but rather that it cannot outweigh extreme suffering, and I have yet to encounter a convincing case against this view (and not many have tried to make such a case, it seems). Another reason I should perhaps not move so much is that some of the most influential traditions in Asia — the continent where the majority of the human population lives — such as Buddhism and Jainism seem to share my negative, i.e. suffering-focused, values. The fact that paying close attention to consciousness is a central part of these traditions, and the fact that optimism bias is strong in most humans and seems likely to influence our evaluation of what is valuable, could well imply that I should resist the tug from the view of the more “positive”, predominantly Western thinkers. More than that, the fact that I do not like the negative view, and very much wish that the magnitude and moral status of negative value were not incommensurably greater than that of positive value, also suggests that if I have a bias in any direction, it is probably away from the negative (for the same reason, I’m likely also strongly biased against moral realism being true, in that I wish that no continuum of truly disvaluable states could exist in the world; unfortunately, I find that there is too much evidence to the contrary).
Yet all this being said, I could be wrong, and I wish we had much more discussion on these matters against which we could sharpen our views. I should also note that there are of course disagreements about value other than the relative significance of positive and negative value, for instance concerning other purported intrinsic values. In my view, these things are ultimately all instrumental to how the dynamics of consciousness play out, as opposed to being inherently valuable, which is not to say that these views are not important to discuss, or that they do not contribute much wisdom. I think they do, and I do maintain some, although admittedly very small, uncertainty about these things being intrinsically valuable.
In trying to be reasonable, it is always worth being aware of one’s own biases. And when it comes to thinking about values, we have many biases that are likely to influence our views and what we say about them. We are social primates adapted to survive and propagate our genes, which means that we have moral intuitions that have been built to accomplish this task efficiently — not to help us land on deep truths about value.
This can influence us in countless ways. For example, as mentioned above, one could argue that the only reason we view death as bad is that it was costly for our group’s survival, and hence our genes, to lose anyone in our group (although I would argue that there indeed are good reasons to consider death bad, even if we disregard our immediate feelings about it).
In the case of my own negative view of value, I might be biased in that I’m an organism evolved to signal that I am sympathetic and compassionate, someone who will protect you if you are in pain and trouble. Negative utilitarianism, a skeptical person might claim, is merely an attempt to signal “I’m more ethical than you”, and ultimately, I’m just another horny organism that makes elaborate sounds in order to get satisfied.
I certainly don’t deny the latter proposition, and it is worth being mindful of such pitfalls, even when they seem unlikely to be influential, and when there seem to be many reasons that count against them — for instance, do equally strong, or perhaps stronger, biases not exist in the opposite direction as well? Isn’t a willingness to sacrifice suffering for some positive gain generally much more attractive in a male primate? Does negative utilitarianism really make you appear cooler than the classical utilitarian who prioritizes working toward a future full of happy life above all else? (Although it should be noted that a future full of life is, at least according to David Pearce, not incompatible with negative utilitarianism.)
More generally, might our uniquely wired brain lead us to value something that might not at all be valuable, or perhaps only of puny value, compared to what might be found outside of a biological perspective, or at least outside the perspective of what we can remember in the present moment from less than a single lifetime of one species of primate? It is certainly worth pondering.
It is hard to appreciate just how strongly our thinking is influenced by our crude, survival-driven intuitions, many of which contradict each other much of the time, and it is worth being intensely skeptical of these, and mindful of ways in which our narrow perspective might be misguided in general, when thinking about values.
Updated View of Value — What Follows?
An updated, weighted view of value leads us, however slightly, toward favoring causes and actions that are robust under many different views of value, as opposed to only our single (immediately) most favored one. What these causes and actions are, and how to assess this, is largely an open question that of course depends on what our palette of weighted values ends up being. Yet a good example of a cause that seems strongly recommended on almost all accounts of value is the Abolitionist Project proposed by David Pearce: to move all sentient beings above hedonic zero by making sentience entirely animated by gradients of purely positive experiences.
Regardless of which palette of weighted values we get, however, the continued effort of gaining more clarity about the composition of this palette remains an ever-relevant task. As mentioned above, and as I will argue below, encouraging such an effort is of utmost importance.
We will always perceive the world from a limited perspective that contains limited information. When we are talking about the most significant events, value-wise, that can emerge in the world — notionally, “dolortronium” and “utilitronium” — I maintain that none of us have a good idea about what we are talking about. What to do in light of this ignorance, apart from maintaining some degree of humility, is not clear. All we have to go by is our limited information.
What does seem clear, however, is that continued reflection on fundamental values is important. Indeed, given the importance and difficulty of getting fundamental values as right as possible, it seems that seeding a future in which we continually reflect self-critically on fundamental values and how to act on these is among the best things we can do.
Again, this applies to moral subjectivists too, who will also benefit from reflecting on what others have found through their serious reflections, and who might even take the position that they care about what others care about, in which case the importance of ensuring that these others — such as beings of the future — find out what they care about is self-evident.
In conclusion, kindling a research project on fundamental values and widespread, qualified discussion about it — more generally: moving us in the direction of a more reflective future — should be a main priority for anyone who wants to “improve the world”. We need to be much more focused on the “what” question in the future.
Reasons to Focus on Values as Our Main Cause
Given decent clarity about our fundamental values, the all-important question becomes: what causes and interventions optimize those values?
In this post I shall present some of the reasons in favor of focusing directly on fundamental values in this regard. That is, reasons why a good way to optimize our fundamental values is to reflect on, argue for, and work out the implications of these values themselves. We may call it “the values cause”.
So why is this sensible? In short, because of the unmatched importance of fundamental values. Our fundamental values comprise the most important and fundamental element in our notional “tree of ought”. They are what determine the sensibility of any cause and intervention we may take part in, and hence what any reasonable choice of causes and interventions should be based on. An important implication of this is that the (expected) sensibility of any cause or intervention cannot be greater than the (expected) sensibility of our fundamental values. For example, if we have 90 percent confidence in our fundamental values, and then choose a cause or intervention based on these, we cannot have greater confidence in the sensibility of this cause or intervention than 90 percent. Indeed, 90 percent would be the level of credence we should have if we were 100 percent sure that the specific cause or intervention optimizes our fundamental values perfectly — a degree of confidence we will of course never have about any cause or intervention. Thus, we must have greater confidence in the sensibility of our values than in the sensibility of any action taken to optimize those values.
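This bound is simply the product rule of probability. A small sketch may help; the 90 percent figure is the example above, while the 60 percent conditional confidence is a made-up illustration:

```python
# Confidence in an intervention cannot exceed confidence in the values
# it serves: P(sensible) = P(values right) * P(optimizes values | values right).
p_values_right = 0.90           # confidence in our fundamental values
p_optimal_given_values = 0.60   # hypothetical conditional confidence

p_intervention_sensible = p_values_right * p_optimal_given_values
assert p_intervention_sensible <= p_values_right  # the bound always holds
print(round(p_intervention_sensible, 2))  # 0.54
```

Since the conditional factor is at most 1, the product can never exceed our confidence in the values themselves.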
In a world where little seems certain, this relationship is worth taking note of. It means that, of all our beliefs pertaining to ethics, our fundamental values are — or at least should be — what we are most certain about. This also, I would argue, makes them the thing we should be most confident about arguing for in our pursuit of a better world.
Getting Others to Join Us in the Best Way
Arguing directly for our fundamental values rather than for causes and interventions derived from those values also, if done successfully, has the benefit of bringing people down alongside us in the basement of fundamental values from which the sensibility of causes and interventions must be assessed. In other words, not only do we argue for that which we are most confident about when we argue for our fundamental values, we also invite people to join us in the best possible place: our core base from which we ourselves are trying to find out which causes and interventions best optimize our values. Having more minds to help optimize our tree of ought from the bottom up seems very positive, and the deeper down they join us, the better.
And even if we do not manage to convince people to fully share our fundamental values, arguing for our values likely does at least make them update somewhat in our direction, which, given the large changes in practical implications that can result from small changes in fundamental values, could well be far more valuable than convincing others to agree with specific causes or interventions we may favor. Not least because it might make them more likely to agree with these interventions, which then leads to another, albeit somewhat speculative, reason to focus on fundamental values in practice.
For one counterargument to the argument I have made above is that people might be more receptive to arguments for specific causes or interventions than they are to the fundamental values that recommend those causes. Yet I think the opposite is generally true. I suspect it is generally easier to convince people of one’s fundamental values, or at least make them update significantly toward them, than it is to convince others of one’s most favored causes or interventions.
For example, it seems much easier to convince people that extreme suffering is of great significance and worth reducing than to convince them that they should go vegan. And convincing people of the importance of a given cause or intervention might well require bottom-up reasoning from first principles — in this case, fundamental values — in order for them to see the reasonableness of that cause or intervention. It can indeed seem naive for us to think, after we ourselves have come to support a given intervention based on an underlying value framework, that we should be able to convince others to support that intervention without communicating the very framework that led us to consider that intervention a sensible one in the first place.
So not only may people be more receptive to our fundamental values than the causes and interventions we support (an admittedly speculative “may”), it might also be that arguing for our fundamental values is the best way to bring people on board with our preferred causes and interventions in many cases, due to the likely necessity of following a chain of inferential steps. And again, if we invite others to try to step in our own inferential footsteps, we might be lucky to have them spot missteps. In this way, we enable others to help us find even better causes and interventions based on our fundamental values than the ones we presently focus on.
An instructive example of failure here, I think, is found in the strategy of most anti-natalists. The vast majority of anti-natalists seem to share the fundamental goal of reducing net suffering, yet their advocacy tends to focus exclusively on anthropocentric anti-natalism — a highly specific and narrow intervention. They appear to confidently assume that this is the best way to reduce suffering in the world, rather than focusing on the fundamental goal of reducing suffering itself, and encouraging open discussion about how to best do this. If anti-natalists focused more on the latter, they would likely have more success, both by inspiring more people to take their fundamental values into consideration, and by inviting these others (and themselves not least) to think more deeply about which other ideas they might be able to spread that could be more conducive to the goal of reducing suffering than the idea of anthropocentric anti-natalism (which seems rather unlikely to be the best idea to push in order to reduce the most suffering in our future light cone).
Reducing Moral Uncertainty/Updating our Fundamental Values
Another reason to focus on fundamental values is our own moral uncertainty. For given that we may be wrong about what we value, whether in a strong moral realist sense or an idealized personal preferences sense (or anything in between), we should be keen on updating our fundamental values. And reflecting on and discussing them openly is likely among the best ways to do so. To restate this important point once more: given the immense importance of fundamental values, even small updates here could be among the most significant moves we could make.
And fundamental values do appear quite open to change. Indeed, values are contagious and subject to cultural influence to a great extent, as a map of people’s religious beliefs around the world reveals (such beliefs are undeniably closely tied to beliefs about fundamental values). Arguably, our values are subject to change and cultural influence to a significantly greater extent than technological progress is (cf. What Technology Wants by Kevin Kelly), which suggests that technology may be harder to influence, and hence less of a leverage point for effecting change, than values are. To put things crudely, technologies tend to be developed regardless, while how they are used generally seems more contingent. And arguing values seems among the best ways to impact how we use our powers.
Values are, to a first approximation, ideas, and ideas tend to be updatable and spreadable. In my own case, I used to not care about ethics at all, then I became a classical utilitarian, and eventually I updated toward negative utilitarianism and other suffering-focused views as I came upon arguments in their favor. We should expect similar changes to be possible in others, and in ourselves, as we learn more and keep on updating our beliefs.
Not only would we all benefit from having our moral uncertainty reduced/our moral views updated, which is valuable in itself; it seems that we should also expect to benefit from the greater convergence on fundamental values that is likely to follow from mutual discussion and updating on them, even if the magnitude of this updating is small. The reason this is beneficial is that such convergence likely reduces the level of friction in our efforts of cooperation, and on virtually any set of fundamental values, success in achieving the most valuable/least disvaluable future seems to rest on humanity’s ability to cooperate. This makes such cooperation a high priority for all of us. While somewhat speculative, this consideration in favor of convergence on fundamental values, and hence, arguably, in favor of mutual discussion and updating on them, is important to factor in as well.
Fundamental Values and AI Safety
I have tried to explain why I think the framing of the issue of “AI safety” is unsound. But even assuming it isn’t, I would argue that fundamental values should likely still be our main focus, the reason being that we have little clarity or consensus about which values to load a notional super-powerful AI with in the first place (and I should note that I find using the term “AI” in this unqualified way highly objectionable — for what does it refer to?).
The main problem claimed to exist within the cause of “AI safety” is the so-called control problem, particularly what is called the value loading problem: how do we load “an AI” with good values? What seems implicit in such a question, however, is that we have a fairly high level of consensus about what constitutes good values. Yet when we look at modern discussions of ethics, especially population ethics, we find that this is not the case — indeed, we see that strong certainty about what constitutes good values is hardly reasonable for any of us. This suggests that we have a lot to clarify before we start programming, namely what values we estimate to be ideal. We must have decent clarity about what constitutes good values before we can implement such values — in anything we do or create. We must solve the values problem before we can solve any notional value loading problem.
For an example of an unresolved question, take the following, in my view critically important one: What are the theoretical upper bounds of the ratio between happiness and suffering in a functional civilization, and can the suffering it contains, if there is any, ever be outweighed by the happiness? At the very least, these questions deserve consideration, yet they are hardly ever asked (not to mention the loud silence on the issue of the utilitronium shockwave that would seem, at least in theory, the main corollary of classical utilitarianism; are classical utilitarians obliged to work toward such a shockwave, contra the present dominant view on “AI ethics”, which seems to be an anthropocentric preference utilitarianism of sorts [see note on the goals of AI builders below], and which appears very bad from a classical utilitarian perspective, at least compared to a utilitronium shockwave?).
Another example would be the aforementioned subject of population ethics, where many ethicists believe that we should bring about the greatest number of happy beings we can, while many others believe that adding an additional happy life to any given population has no intrinsic value. Given such a near-maximal divergence of views on an issue like this, what does it mean to say that we should build a system that does what humans want? What could it mean?
This issue of value implementation underscores the importance of convergence on values, as such convergence would likely make any project of implementation go smoother (an example of the general point about human cooperation made above). It could well be that trying to make mutual value updating happen among those who try to build the world of tomorrow — both in the realm of software and in other realms — is a better way to implement our values than bargaining at the level of direct implementation with others who have more divergent values; that is, more divergent values than they would have had if we had put more effort into arguing for our fundamental values directly.
In other words, if humans are going to program values into “an AI”, the best way to impact the outcome of that process could well be to impact the values of these humans and humanity in general. Not least because the goal many of these AI researchers aim to implement in tomorrow’s software simply is “that which humans want” (one AI researcher: “I want to see a future where AI systems help humans get what they want […]”; OpenAI: “We believe AI should be an extension of individual human wills […]”; the so-called Partnership on AI formed by Google, Facebook, Microsoft, Amazon, IBM, and Apple seems to have essentially the same goal).
It seems likely that the future of “intelligence” on Earth and beyond will be shaped by a collective, distributed process comprised of what many agents do, which also holds true in the case of a software takeover. And the best way to impact such a collective process in a positive direction is, I think, most likely one where we try to impact values directly.
Whether we deem it the main cause or not, it seems clear to me that “the values cause” must be considered a main cause, and a most neglected one at that. Our altruistic efforts ought to be informed by careful considerations based on first principles, those principles being our fundamental values. Yet for the most part, this is not what we are doing. If it were, we would have better clarity about what exactly our first principles are; at the very least, we would be aware of the fact that we do not have such clarity in the first place. Instead, we go with intuition and vague ideas like “more happiness, less suffering”, believing that to be good enough for all practical purposes. As I have tried to show, this is far from the case.
Saying that we should focus much more on fundamental values is not, however, to say that we should not focus on the specific causes and interventions that follow from those values, nor that we should not do advocacy for these. I think we should. What I think it does imply, however, is that we should try to communicate our (carefully considered) fundamental values in such advocacy. For instance, when doing concrete advocacy for a specific cause, we should phrase it in terms of our fundamental values, e.g. concern for sentience and involuntary suffering. Thereby, we do advocacy both for a (relatively) specific cause recommended by our fundamental values and for those values themselves, which invites people to consider and discuss both. It does not have to be a matter of either focusing on values or focusing on “doing”. We can encourage people to reflect on fundamental values with our doing.
I am deeply grateful to my friend and mentor David Pearce, and to my friends Ailin, Magnus, and Joachim. Your support and good company means the world to me.
People who want to improve the world are, like everybody else, extremely biased. A prime example is that we tend to work on whatever cause we have stumbled upon so far, and to suppose, without deeper examination, that this cause is the most important one of all. Cause prioritization is the attempt to go beyond this bias and to try to do better. It is the direct and systematic attempt to become more qualified about which causes we should prioritize the most. This collection of essays aims to provide a rough framework for how we can think about cause prioritization, and provides suggestions about important causes and questions we should focus on and explore further.