Notes from What We Owe the Future

William MacAskill has been a hugely influential person in my life thanks to Effective Altruism; I was hanging out at a friend’s place and snooping around the bookshelf when I caught a glimpse of the word “MacAskill” printed on one of the spines. I borrowed that book and took notes on the highlights as I read; eventually I got an eBook copy for easier reading, which also let me grab screenshots of the diagrams.

Chapter 1

  • The value of future lives is nonzero. Even if you don’t weight future lives equally with the lives of people today, the vast difference in scale between today’s population and the potential future population means that the vast majority of the utility we should optimize for lies in the future. Just as EA’s global health work implies we shouldn’t neglect lives that are distant from us geographically, longtermism implies we shouldn’t neglect lives that are distant from us temporally.

Chapter 2

  • Many systems exhibit a period of plasticity, during which they are malleable, followed by a period of hardening in which they become more resistant to change.

  • Three factors to consider when evaluating an action or intervention:

  1. Is the intervention going to yield a significant change, one that notably influences day-to-day lives? (significance)

  2. Is the intervention going to have a long-term effect? (persistence)

  3. Would the change have happened anyway, without the intervention? (contingency)

Chapter 3

  • Changes in values can be hugely important for improving global utility.

  • Case study: the abolition of slavery. It significantly changed many people’s lives, quite possibly might not have happened (or, if it was inevitable, it might have happened much more slowly), and is likely to persist for a long while. Ergo it’s worth advocating for value shifts like these.

  • Fitness landscape: it’s possible that there are multiple equilibria of value systems. Some equilibria are not globally optimal; e.g. value systems that don’t spread themselves (such as non-proselytizing religions) may get overrun by those that do.

Chapter 4

  • AI presents an opportunity for moral lock-in: a hardening of value systems and an end to plasticity.

  • Moral lock-in today would be dangerous; moral progress is evident throughout history, and there’s no reason to believe we’re currently at the end state of that progress (especially given some unresolved moral questions).

  • Experimentation would help to identify the best systems.

Chapter 5

  • Humans don’t have the best track record of taking risks seriously before they materialize.

  • Engineered pandemics are a second area of concern: a near-term route by which civilization could collapse.

  • Based on betting markets, a third world war is more likely than not to occur in the next century; such a war would incentivize powers to play fast and loose with biowarfare (e.g. engineered pandemics), AI deployment, etc.

Chapter 6

  • MacAskill thinks that a devastating war (killing off ~99% of the population) might not actually kill off humanity, assuming survivors still have access to industrial technologies and tools. However, there’s not enough certainty in this view to ignore the possibility.

  • In a scenario where we need to re-industrialize, easy access to fossil fuel reserves (e.g. surface coal) would be greatly beneficial for securing the energy needed to “start up” the economy from nothing. Full decarbonization supports this: it leaves easily accessible fossil fuel reserves in the ground for a potential recovery.

  • Agricultural knowledge is unlikely to die off, given the vast number of people employed today in agriculture. (Corollary: What professions would be key for rebuilding civilization from nuclear war, or some other mass population disaster? Even if there are ~2B agriculturalists today, would that number hold up over time as countries develop economically?)

Chapter 7

  • An efflorescence is a short-lived period (perhaps a century or two; the point is that it isn’t sustained) of intellectual and economic expansion in a single culture (e.g. the Islamic Golden Age, Ancient Greece).

  • Technological progress may be slowing down in recent history (the last ~50 years), as measured by Total Factor Productivity (roughly, the share of output growth not explained by growth in measured inputs like labor and capital; a sketch of the calculation follows this list). Nonetheless, a slowdown in growth isn’t too worrisome; exponential growth must eventually taper off, and we’d just arrive at the “destination” asymptote of productivity a bit later.

  • However, a complete stagnation would be more concerning, especially if the economy we stagnate in is unsustainable: e.g. relying on fossil fuels that must eventually run out, or having nuclear offensive capability without any tools to defend against that threat.

  • Stagnation is possible: total research output can be thought of as a function of researcher-hours, with diminishing marginal returns on each hour, since we’ve already “picked the low-hanging fruit” technologically (a toy model of this also follows the list). Population growth trends negative as countries get richer, and there’s a limit on how much of the population can be involved in research. If AGI is developed “in time”, before global research throughput starts declining, we might avoid this problem, but AGI could be hard to develop before that point.
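
For my own reference, here’s a minimal sketch of how TFP is usually backed out as the “Solow residual”, assuming a Cobb-Douglas production function Y = A·K^α·L^(1−α). This framing and all the growth rates below are my illustration, not the book’s:

```python
# Minimal sketch of TFP as the Solow residual under a Cobb-Douglas
# production function Y = A * K^alpha * L^(1 - alpha).
# All growth rates below are made up for illustration.

def tfp_growth(output_growth: float, capital_growth: float,
               labor_growth: float, alpha: float = 0.3) -> float:
    """TFP growth = the part of output growth not explained by input growth."""
    return output_growth - alpha * capital_growth - (1 - alpha) * labor_growth

# e.g. 3% output growth, 4% capital growth, 1% labor growth:
print(tfp_growth(0.03, 0.04, 0.01))  # ~0.011, i.e. ~1.1% TFP growth
```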
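
And a toy model of why diminishing returns make stagnation plausible. The H^α functional form with α < 1 is a made-up assumption of mine, just to illustrate concavity:

```python
# Toy model: research output grows as hours**alpha with alpha < 1, so each
# additional researcher-hour buys less progress than the last.

def research_output(hours: float, alpha: float = 0.5) -> float:
    """Concave output as a function of total researcher-hours."""
    return hours ** alpha

print(research_output(1_000_000))  # 1000.0
print(research_output(2_000_000))  # ~1414.2: doubling effort, only ~41% more output
```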

Chapter 8

  • Population Ethics - the evaluation of actions that might change who is born, how many people are born, and what their quality of life will be

  • Intuition of neutrality: the view that bringing into the world a new person who would live a happy life isn’t inherently morally valuable (which MacAskill argues against)

    • His claim instead: having one additional person in the world who lives an overall happy life is an inherently desirable outcome

  • The Repugnant Conclusion:

    • Dominance Addition: making the existing people better off while adding new people with positive wellbeing. A population of 1M people at +90 isn’t as good as one with 1M at +95 plus 1M at +75; see Fig. 8.5

    • Non-Anti-Egalitarianism: distributing utility equally while also increasing the total is an improvement. A population of 1M at +95 plus 1M at +75 isn’t as good as one with 2M at +86; see Fig. 8.6

    • Assuming transitivity, a population of 1M at +90 isn’t as good as a population of 2M at +86.

    • “Repugnant Conclusion”: repeating the above steps ad infinitum, a population of 1M at +90 isn’t as good as, say, a population of 1T at +2; see Fig. 8.9. (The arithmetic behind this chain is sketched in code at the end of this chapter’s notes.)

  • Three ways to optimize in population ethics:

  1. Maximize average utility (downside: a world with 1K people at +100 beats a world with 1M people at +99)

  2. Maximize total utility (downside: the Repugnant Conclusion)

  3. “Critical Level”: adding lives to the world counts as good only if those people’s utility is above some positive threshold (downside: the Sadistic Conclusion)

  • The Critical Level view denies the Dominance Addition premise; see Fig. 8.10

  • Sadistic Conclusion: adding a few people with negative utility can come out preferred over adding many people with small but positive utility below the threshold; see Fig. 8.11. (Both failure modes are sketched in code at the end of these notes.)

  • Many of the population ethics considerations can be applied at the micro level: is it ethical to bring a kid into the world? Are we obligated to bring more kids into the world?

  • Note: even if a person’s overall life utility is negative, they can produce positive externalities - so if someone’s own experience is -100, that doesn’t mean it’s better for them to die; it merely means that, from their own utility perspective, it would have been better never to have been born
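
To make the Repugnant Conclusion chain concrete, here’s my own sketch of the arithmetic (not from the book), representing each population as (group size, welfare level) pairs:

```python
# Populations as lists of (number of people, welfare level) groups.

def total(pop):
    """Total utility: group size times welfare, summed over groups."""
    return sum(n * w for n, w in pop)

A = [(1_000_000, 90)]                   # 1M people at +90
B = [(1_000_000, 95), (1_000_000, 75)]  # Dominance Addition: B beats A
C = [(2_000_000, 86)]                   # Non-Anti-Egalitarianism: C beats B

print(total(A), total(B), total(C))     # 90M < 170M < 172M
# By transitivity, C beats A even though average welfare fell from 90 to 86.
# Iterating pushes toward enormous populations at barely-positive welfare:
Z = [(1_000_000_000_000, 2)]            # 1T people at +2
print(total(Z) > total(A))              # True: the Repugnant Conclusion
```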
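
And a toy comparison of the three views and their failure modes (again my own illustration; the critical level c = 10 is an arbitrary choice):

```python
def total(pop):
    return sum(n * w for n, w in pop)

def average(pop):
    return total(pop) / sum(n for n, _ in pop)

def critical_level_value(pop, c=10):
    """Each life counts as (welfare - c), where c is an assumed threshold."""
    return sum(n * (w - c) for n, w in pop)

# Average view's downside: 1K people at +100 beats 1M people at +99.
print(average([(1_000, 100)]) > average([(1_000_000, 99)]))  # True

# Critical Level's downside (Sadistic Conclusion): adding one person at -20
# scores better than adding ten people at +5, because +5 is below c = 10.
print(critical_level_value([(1, -20)]))  # -30
print(critical_level_value([(10, 5)]))   # -50, i.e. rated worse than -30
```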

Chapter 9

  • By MacAskill’s estimate, most people alive today are likely happy “overall” (so we can assign a positive value to their lifetime happiness)

  • It’s also quite likely that humans are getting happier over time, though the trend hasn’t been monotonic.

  • Animals’ lives in factory conditions are quite miserable. It’s unclear if animals’ lives in the wild are overall positive or negative (in the sense that from their perspective, it would have been preferable to not have been born). MacAskill notes that if the lives of wild animals are negative on balance, then extinctions caused by humans may be positive from a hedonistic perspective.

  • Non-wellbeing goods are goods which don’t immediately impact wellbeing, e.g. “natural ecosystems”, “educational level of the world”, “artistic accomplishments”, etc. There are trends in both directions here, so it’s a judgment call on whether these are getting any better or worse.

Chapter 10

  • Three guidelines for improving the future: (1) take actions that you’re comparatively confident are good; (2) preserve optionality where possible (e.g. by preserving different political systems and cultures); (3) learn more.

  • Prioritizing among the problems we could contribute to is a function of (1) importance (does solving this problem matter?), (2) tractability (how much progress do we get per unit of resources?), and (3) neglectedness (how much attention is the problem already getting?). (A toy scoring sketch follows these notes.)

  • Personal lifestyle changes to do good, while beneficial, can easily be outweighed by targeted donations to specific, effective organizations.

  • Career-wise, you can apply similar principles to improving the future of your own career: (1) choose to do good in your career, (2) learn more about different career paths, and (3) build options by seeking paths with large upsides. Personal fit is critically important.
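
A toy illustration of the importance/tractability/neglectedness framing, treating priority as the product of the three scores (the problems and the 1-10 scores below are entirely made up by me):

```python
# Made-up problems with (importance, tractability, neglectedness) scores,
# each on a 1-10 scale, purely to illustrate the multiplicative framing.
problems = {
    "problem A": (9, 5, 8),
    "problem B": (8, 6, 2),
    "problem C": (9, 3, 9),
}

for name, (imp, tract, negl) in problems.items():
    print(f"{name}: priority = {imp * tract * negl}")
# problem A: 360, problem B: 96, problem C: 243
```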


For some further reading, I might check out Brian Christian’s The Alignment Problem. He’s an excellent author who wrote one of my favorite books of all time, Algorithms to Live By.