If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
Every day when I check Facebook (ideally only the one time) I see fundraising pleas, people who want me to give money to one charity or another. One guy wants me to fund the construction of a tutoring center in Haiti, another wants me to donate to an organization focused on suicide prevention, and still another wants to use my donation to increase awareness of adolescent mental health issues, and that’s just Facebook. The local public radio station wants my money as well, I get periodic calls and letters from my alma mater asking for money, and as of this writing the most recent email in my inbox is a fundraising letter from Wikipedia. Assuming that I have a limited amount of money (and believe me, I do) how do I decide who to give that money to? Which of all these causes is the most worthy?
As you might imagine I am not the first person to ask this question, and more and more philanthropists are asking it as well. It’s my understanding that Bill Gates is very concerned with the question of where his money will do the most good. And there is, in fact, a whole movement dedicated to the question, which has been dubbed effective altruism (EA). EA is closely aligned with the rationalist community, to the point where many people would rather be identified as “effective altruists” than as “rationalists”. This is a good thing; certainly I have fewer misgivings about rationalism in support of saving and improving lives than I have about rationalism left to roam free (see my post on antinatalism).
From my perspective, EA’s criticisms of certain kinds of previously very common charitable contributions, their views on what not to do, are at least as valuable as their opinions on what people should be doing. For example, you may have started hearing criticism recently of giving big gifts to already-rich universities. And indeed it’s hard to imagine that giving money to Harvard, which already has a $30 billion endowment, is really the best use of anyone’s money.
While the EA movement mostly focuses on money, there is another movement/website called 80,000 hours which focuses on time. 80,000 hours represents the amount of time you’re likely to spend in a profession over the course of your life, and rather than telling you where to put your money, the 80,000 hours website is designed to help you plan your entire working life so as to maximize its altruistic impact.
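If you’re wondering where that number comes from, it’s roughly the arithmetic of a conventional working life. The figures below are the usual round assumptions, not anything official from the organization:

```python
# Rough arithmetic behind the "80,000 hours" name
# (conventional round numbers, not an official figure):
hours_per_week = 40
weeks_per_year = 50
years_of_career = 40

print(hours_per_week * weeks_per_year * years_of_career)  # 80000
```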
Of course both of these efforts fall under the more general idea of asking, “What should I worry about? What things are worth my limited time, money and attention, and what things are not?”
If you’re curious, for the effective altruist, one of the answers to this question is malaria, at least according to the EA site GiveWell, which ranks charities using EA criteria and has two malaria charities at the top of its list. These are followed by several deworming charities. For the 80,000 hours movement the question is more complicated, since if everyone went into the same profession the point of diminishing returns would probably come very quickly, or at least well before the end of someone’s career. Fortunately they just released a list of careers where they think you could do the most good. Here it is:
- AI policy and strategy
- AI safety technical research
- Grantmaker focused on top problem areas
- Work in effective altruist organisations
- Operations management in organisations focused on global catastrophic risks and effective altruism
- Global priorities researcher
- Biorisk strategy and research
- China specialists
- Earning to give in quantitative trading
- Decision-making psychology research and implementation
This is an interesting list, and I remember that it attracted some criticism when it was released. For example, right off the bat you’ll notice that of the ten jobs listed the first two deal with AI. Is working with AI really the single most important career anyone could choose? The next three are what could be called meta-career paths, since they all involve figuring out what other people should worry about and spend money on (for example, setting up a website like 80000hours.org), which might strike some as self-serving. Biorisk strategy and China specialist are interesting, then at number 9 we have the earn-as-much-money-as-possible-and-then-give-it-away option, before finally landing at number 10, which is once again something of a meta option. If nothing else, it’s worth asking whether AI jobs should really occupy the top two slots. Particularly given that, as I pointed out in the last post, there is at least one very smart person (Robin Hanson) who has a background in AI and who is confident that AI is most likely two to four centuries away. Meaning, I presume, that he would not put AI in the first and second positions. (If Robin Hanson’s pessimism isn’t enough, look into the recent controversy over algorithmic decision making.) One can only assume that 80000hours.org has some significant “AI will solve everything or destroy everything” bias in their rankings.
Getting back to the question of “What should we be worrying about?”, we have now assembled two answers: we should worry about malaria and AI, and the AI answer is controversial. So for the moment let’s just focus on malaria (though I assume even this is controversial for Malthusians). The way EA is supposed to work, you focus all your charitable time and money where it has the most impact, and when the potential impact of a dollar spent on malaria drops below that of a dollar spent on deworming, you start putting all your money there. Rinse and repeat. Meaning that from a certain perspective, not only should we worry about malaria, it should be the only thing we worry about, right up until worrying about malaria becomes less effective than worrying about deworming.
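To make that allocation logic concrete, here is a minimal sketch of the greedy, marginal-impact rule described above. The impact numbers and the diminishing-returns curve are invented purely for illustration; they are not GiveWell’s figures:

```python
# A toy version of the EA allocation rule: give the next dollar to whichever
# cause currently has the highest marginal impact, switching causes once
# diminishing returns flip the ordering. All numbers here are made up.

def marginal_impact(base_impact, dollars_given):
    """Hypothetical diminishing-returns curve: each extra dollar helps less."""
    return base_impact / (1 + dollars_given / 1000)

causes = {"malaria nets": 10.0, "deworming": 7.0}  # invented base impacts
allocated = {name: 0 for name in causes}

budget, step = 5000, 100  # give away $5,000 in $100 increments

for _ in range(budget // step):
    # Whichever cause does the most good with the *next* $100 gets it.
    best = max(causes, key=lambda c: marginal_impact(causes[c], allocated[c]))
    allocated[best] += step

print(allocated)  # malaria gets everything until deworming becomes the better buy
```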
As you might imagine this is not how most people work. Most people worry about a lot of things. Would it be better if we only worried about the most important thing, and ignored everything else? Perhaps, but at a minimum the idea that some things are more important to worry about than others is a standard we should apply to all of our worries, a standard we might use to prioritize some of our worries while dismissing others. It’s only fair, at this point, to ask what are some of the things I would advise worrying about. What worries would I recommend prioritizing and what worries would I recommend ignoring? Well, on this question, much like the 80,000 hours people, I will also be exhibiting my biases, but at least I’m telling you that up front.
For me it seems obvious that everyone’s number one priority should be to determine whether there’s an afterlife. If, as most religions claim, this life represents just the tiniest fraction of the totality of existence, that certainly affects your priorities, including prioritizing what to worry about. I know that some people will jump in with the immediate criticism that you can’t be sure about these sorts of things, and that focusing on a world or an existence beyond this one is irresponsible. As to the first point, I think there’s more evidence than the typical atheist or agnostic will acknowledge. I also think things like Pascal’s Wager are not so easy to dismiss as people assume. As to the second point, I think religions have been a tremendous source of charitable giving and charitable ethics. They do not, perhaps, have the laser-like focus of the effective altruists, and it’s certainly possible that some of their time and money is spent ineffectively, but I have a hard time seeing how the amount of altruism goes up in a world without religion, particularly if you look at the historical record.
All of this said, if you have decided not to spend any time on trying to determine whether there’s an existence beyond this one, that’s certainly your right. Though if you have made that decision, I hope you can at least be honest and admit that it’s an important subject, and that you at least considered how important these questions are before ultimately deciding that they couldn’t be answered. As some people have pointed out, there could hardly be more important questions than: Where did I come from? Why am I here? Where will I go when I die?
I made the opposite decision, and consequently this is my candidate for the number one thing people should be worried about, above even malaria. And much like a focus on AI, I know this injunction is going to be controversial. Interestingly, as I’ve pointed out before, there’s quite a bit of overlap between the two: one set of people saying, “I hope there is a God,” and one set of people saying, “I hope we can create a god (and additionally I hope we can make sure it’s friendly).”
Beyond worrying about the answer to life, the universe, and everything, my next big worry is my children. Once again this is controversial. From an EA perspective you’re going to spend a lot of time and money raising a child in a first world country, money that could, presumably, save hundreds of lives in a third world country. I did come across an article defending having children from an EA perspective, but it’s telling that it needed a defense in the first place. And the author is quick to point out that his “baby budget” does not interfere with his EA budget.
From a purely intellectual perspective I understand the math of those who feel that my children represent a misallocation of resources. But beyond that simplistic level it doesn’t make sense to me at all. They may be right about the lives saved, but a society that doesn’t care about reproduction and offspring is a seriously maladapted society (another thing I pointed out in my last post). I’m programmed by millions of years of evolution not only to want to have offspring, but to worry about them as well, and I’m always at least a little bit mystified by people who have no desire to have children, and even more mystified by people who think I shouldn’t want children either.
I have covered a lot of things you might worry about, and so far, with the exception of malaria, everything has carried with it some degree of controversy. Perhaps it might be useful to invert the question and ask what things we definitely should not be worrying about.
The other day I was talking to a friend and he mentioned that he had laid into one of his co-workers for expressing doubt about anthropogenic global warming. Additionally, this co-worker was religious, and my friend suspected that one of the reasons his co-worker didn’t care about global warming, even if it was happening, was that, being religious, he assumed that at some point Christ would return to Earth and fix everything.
This anecdote seems like a good jumping off point. It combines religion, politics, biases, prioritization, and money. Also, given that he “laid into” his co-worker, I assume that my friend was experiencing a fair amount of worry about his co-worker’s attitude as well. Breaking it all down, we have three obvious candidates for his worry:
- He could have been worried about religious myopia. Someone who thinks Jesus will return any day now is going to have very short-term priorities and make choices that might be counterproductive in the long run, including, but not limited to, ignoring global warming.
- He could have been worried that his co-worker was an example of some larger group: conservative Americans who don’t believe in global warming. And the reason he laid into his co-worker was not because he hoped to change his mind, but because he’s worried by the sheer number of people who are opposed to doing anything about the issue.
- It could be that, after a bit of discussion, my friend convinced his co-worker that global warming was important, but my friend worried because he couldn’t get his co-worker to prioritize it anywhere near as high as he was prioritizing it.
Let’s take these worries in order. First, are religious people making bad decisions in the short term because they believe that Jesus is going to arrive any day now? I know this is a common belief among the non-religious, but it’s not one I find particularly compelling. I do agree that Christians in general believe that we’re living in the End Times, and that things like the Rapture and the Great Tribulation will be happening soon, with “soon” being broad and loosely defined. The tribulations could start in 100 years, they could start as soon as the next Democrat is elected president (I’m joking, but only a little), or we could already be in them. But I don’t see any evidence that Christians are reacting by throwing their hands up; for example, most of them continue to have children, and at a greater rate than their more secular countrymen. I understand that having children is not directly correlated with caring about the future, but it’s definitely not unconnected either. And those who are really convinced that things are right around the corner are more likely to become preppers or something similar than to descend into a hedonistic, high-carbon-emitting lifestyle. You may disagree with the manner in which they’re choosing to hedge against future risk, but they are doing it.
What about my friend’s second worry, that his co-worker is an example of a large bloc of global warming deniers and that this group will prevent effective action on climate change? Perhaps, but is there any group which is really doing awesome with it? In the course of the conversation with my friend, someone pointed out (there were other people involved at various points) that Bhutan was carbon negative. This is true, and an interesting example. In addition to being carbon negative, the Bhutanese are also, by some measures, the happiest people in the world. How do they do it? Well, there are less than a million of them and they live in a country which is 72 percent forest. So Bhutan has pulled it off, but it’s hard to see a path between where the rest of the world is and where Bhutan is. (Maybe if malaria killed nearly everyone?) Which is to say I don’t think the Bhutan method scales very well. Anybody else? There are the global poor, who do very well on carbon emissions compared to richer populations. But it’s obvious no one is going to agree to voluntarily impoverish themselves, and we’re not particularly keen on keeping those who are currently poor in that state either. On the opposite side, I haven’t seen any evidence that global warming deniers, or populations who lean that way (religious conservatives), emit carbon at a discernibly greater rate than the rest of us. In fact, insofar as wealth is a proxy both for carbon emissions and for a certain globalist/liberal worldview, it wouldn’t surprise me a bit if, globally, a concern for global warming actually correlates with increased carbon emissions.
Finally, we get to the question of how we should prioritize putting time and money towards mitigating climate change. I’m confident that if it were relatively painless the co-worker would reduce his carbon emissions. Meaning that he probably does have it somewhere on his list of priorities, if only based on the reflected priority it’s given by other people, just not as high on that list as my friend would like. As we saw at the beginning, neither the EA people nor the 80000 hours people put it in their top ten. And when it was specifically addressed by the website givingwhatwecan.org, they came to the following conclusion:
The Copenhagen Consensus 2012 panel, a panel of five expert economists that included four Nobel prize winners, ranked research and development efforts on green energy and geoengineering among the top 20 most cost-effective interventions globally, but ranked them below the interventions that our top recommended charities carry out. Our own initial estimates agree, suggesting that the most cost-effective climate change interventions are still several times less effective than the most cost-effective health interventions.
As long time readers of my blog know, I favor paying attention to things with low probability but high impact. Is it possible global warming fits into this category? Perhaps as an existential risk? Long time readers will also know that I don’t think global warming is an existential risk. But, for the moment, let’s assume that I’m wrong. Maybe global warming itself isn’t a direct existential threat, but maybe you’re convinced that it will unsettle the world enough that we end up with a nuclear war we otherwise wouldn’t have had. If that’s truly your concern, if you really think climate change is The Existential Threat, then we really need to get serious about it, and you should probably be advocating for things like geoengineering (i.e. spraying something into the air to reflect back more sunlight), because you’re not going to turn the world into Bhutan in the next 32 years (the deadline for carbon neutrality by some estimates), and particularly not by laying into your co-workers when their global warming priority is different from yours. (Not only is that too small scale, it’s also unlikely to work.)
From where I stand, after breaking down the reasons for my friend’s worries, they seem at best ineffectual and at worst misguided, and I remain unconvinced that climate change should be very high on our list of priorities, particularly if that worry just manifests as somewhat random anger at co-workers. If you are going to worry about it, there are things to be done, but getting after people who don’t have it as their highest priority is probably not one of them. (This is probably good advice for a lot of people.)
In the final analysis, worrying about global warming is understandable, if somewhat quixotic. The combined preferences and activities of 7.2 billion people create a juggernaut that would be difficult to slow down and stop even if you’re Bill Gates or the President of the United States. And here we see the fundamental tension which arises when deciding what to worry about: anything big enough to cause real damage might be too big for anyone to do anything about. Part of the appeal of effective altruism is that it targets those things which are large but tractable, and I confess that the worries expressed in my writing have not always fallen into that category. When it comes right down to it, I have probably fallen into the same trap as my friend, and many of my worries are important but completely intractable. But perhaps by writing about them I’m functioning as a “global priorities researcher”. (Number six on the 80,000 hours list!)
Of course, not all my worries deal with things that are intractable. I already mentioned that I worry about being a good person (e.g. my standing with God, should he exist, and I have decided to hope that he does). And I worry about my children, another tractable problem, though perhaps less tractable than I originally hoped. I may hold forth on a lot of fairly intractable problems, but when you look at my actual expenditure of time and resources, my family and improving my own behavior take up quite a bit of it.
Where does all of this leave us? What should we worry about? It seems obvious that we should worry about things we can do something about, and about things that have some chance of happening. Most people don’t worry about being permanently disabled or dying on their next car trip, and yet that’s far more likely to happen than many of the things people do worry about. We should also worry about large calamities, and we should translate that worry into paying attention to ways we can hedge or insure against those calamities. I had expected to spend some time discussing antifragility and related principles as useful frameworks for worry, but it ended up not fitting in. I do think that modernity has made it especially easy to worry about things which don’t matter and ignore things that do. Meaning, in the end, I guess the best piece of advice is to think carefully about our worries, because we each have only a limited amount of time and money, and they’re both very easy to waste.
Is it a waste of money to donate to this blog? Well, as I said, think carefully about it. But really all I’m asking for is $1 a month. I think it’s fair to say that’s a very tractable amount…
“Perhaps, but is there any group which is really doing awesome with it? In the course of the conversation with my friend, someone pointed out (there were other people involved at various points) that Bhutan was carbon negative….but it’s hard to see a path between where the rest of the world is and where Bhutan is…But it’s obvious no one is going to agree to voluntarily impoverish themselves, and we’re not particularly keen on keeping those who are currently poor in that state either. On the opposite side, I haven’t seen any evidence that global warming deniers, or populations who lean that way (religious conservatives) emit carbon at a discernibly greater rate than the rest of us. ”
Well, they do seem to buy a lot more straw men. Serious question: would a cap-and-trade program that took, say, 30 years to reduce CO2 emissions 25% from what they otherwise would have been convert the US into Bhutan income-wise? How about 15%? Or 10%? Lacking the ability to do something ‘awesome’ is not an excuse for standing in the way of modest improvements. Hence I would take a different view of this. The religious person who excuses global warming is either in the thrall of dangerous ideas (God operates as a deus ex machina who will save us from stupid actions at the last moment, therefore we should try to be really, really stupid so he notices) or is playing at Pascal’s Indulgence.
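Just to put those hypothetical targets in perspective, here is the back-of-the-envelope arithmetic for the constant annual cut each one would imply. The percentages are the hypotheticals above, not real policy figures:

```python
# Annual reduction rate implied by cutting emissions X% (vs. baseline) over
# 30 years at a constant rate. The targets are the commenter's hypotheticals,
# not real policy numbers.
years = 30
for total_cut in (0.25, 0.15, 0.10):
    annual_rate = 1 - (1 - total_cut) ** (1 / years)
    print(f"{total_cut:.0%} over {years} years = about {annual_rate:.2%} per year")

# Roughly 0.95%, 0.54%, and 0.35% per year, respectively -- modest cuts
# against the baseline, not Bhutan-level austerity.
```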
What’s Pascal’s Indulgence? Happy you asked. Indulgences, you recall, were a Roman Catholic ‘hack’. You could ‘pay’ for a serious sin you like to commit by offering up something virtuous in some other area you don’t mind too much. For example, if you’re rich you could hand over a lot of money. If you like to travel you could go on a pilgrimage. If you’re overweight you can fast.
Pascal’s Indulgence takes the Indulgence and combines it with the wager aspect. If you have your sin on one side, toss in something religious on the other and maybe God will like the second so much he doesn’t notice the first. Sleeping with prostitutes? Campaign against gay marriage. Laundering money? Be against abortion. It happens on a smaller scale too. I’ve often been to houses where the person has children from outside of marriage, live-in boyfriends, etc., but a Virgin Mary will be on the wall, or a crucifix. Don’t really buy into your religion? Then toss in the decorations; perhaps that will pass, just in case there’s something to this.
So perhaps being religious is their way of ‘paying’ for their failure of proper stewardship of the earth. Or, on the other hand, perhaps it is about as meaningful as which football team they follow, and the person is looking for causal connections where there are none.
“Perhaps as an existential risk? Long time readers of my blog will also know that I don’t think global warming is an existential risk”
Well, OK, but must existential come with a capital E? The Black Death reduced the world’s population by maybe just shy of 25%. It took 200 years for the population to recover, but it nonetheless never came anywhere close to causing human extinction. Yet if some risk is real but capped at no worse damage than that, I think it’s valid to treat it as existential… but you’re right in a limited sense. If tomorrow we see a comet on a path to collide with earth, I’d say all global warming efforts (and most other efforts) should be redirected towards sending whatever the real-life version of Bruce Willis is to divert it.
But sometimes threats can get so big it is pointless to worry too much. Consider the idea that AI is going to unlock some cascade that will destroy humanity. I would say worrying about this is pointless. All the developed nations in the world are going to push towards AI, as they push towards stem cells, genetic engineering, etc. If there’s some ultimate fragility that’s destined to break open the zombie virus, well, it’s going to happen. Developing caution and best practices is fine, but if the ultimate answer is that developing AI will unleash something, well, you aren’t going to direct billions of people to not do it. Yet that’s not quite the same thing with carbon. Billions of people don’t want carbon, they want products and energy. It doesn’t require a heavy hand to let those things be delivered via other paths. The same could be said for afterlives and God. If this is all true then God either cares enough about humans to account for their human imperfection… or he doesn’t, and only cares about a handful of ‘freaks’. Either we are lucky and he cares about all of us, or, if we are unlucky, there’s no way to know if we as individuals happen to be the freaks he’s looking for. In other words, perhaps Pascal’s Wager works both ways in a sense.
Okay, let’s say a carbon tax would work. Have we ever been close to implementing one? Was there some time in the past when such a bill was brought to a vote and defeated only because the global-warming-denying Republicans voted against it? I don’t think so; my read of legislative history is that we’ve never even come close to implementing a carbon tax. The vote that essentially killed Kyoto (which only called for emissions trading) was 95-0:
https://fivethirtyeight.com/features/a-lesson-from-kyotos-failure-dont-let-congress-touch-a-climate-deal/
If you can point to some place where religious people like the person from the post were the deciding factor in killing climate change legislation, then blaming them would start to carry some weight, but to me it looks like the political will to do something about it isn’t present in either party.
So what you’re saying is that one of the most religious nations has accomplished nothing in the face of climate change. You want to absolve religious people of blame, but I’m approaching this from the opposite angle: shouldn’t all that religion result in at least slightly better moral decisions? I’d be skeptical of a gym that’s not just full of fat customers, but where every week the same customers keep getting fatter.
Put another way, I recall you once proposed testing the morality of an AI by presenting it with the chance to make moral decisions and letting it proceed only if it built up a record of correct moral decisions. 0-95 doesn’t seem very promising. Not that Kyoto is necessarily the prime test here.
My rule of thumb is: the person closest to the problem should be the one empowered to solve it. Many of the things people tell me to worry about are cases where either people not close to the problem are trying to solve it, or I’m not close enough to the person trying to solve it to know whether they’re actually solving anything.
I think if you’re looking to find a new career it’s instructive to look at problems you think should be solved and get yourself in a position to solve them. That calculus will be different for different people. Personally, I work in cancer research. Maybe that’s not as important as AI policy, but just last year I helped create a cure for a certain type of cancer (less than 1% of all cancers). Maybe AI is 150 years away. Long before that time I project we’ll have solved cancer and racked up some serious benefits from my career field.
I think it’s more important to focus on proximal problems than global ones you’re not equipped to effectively contribute to solving. In that I’d include your family first.
Cancer research? That’s genuinely very cool. I certainly can’t compete with that. I basically agree with your rule of thumb, though I think I’d reframe it as: problems should be solved at the lowest level where solving them is possible. National defense is hard to solve at the local level, but there’s a lot of other stuff which could be done at that level, and with less waste.
Speaking of proximal problems you’re not equipped to deal with, I didn’t get around to mentioning this in the post, but one of the first things I said when my friend told me about his co-worker was that the problem of global warming is so big that you need a way of dealing with it which doesn’t require taking on all of the responsibility yourself. For his co-worker it’s religion; for other people it’s recycling and driving a hybrid. And the latter, despite feeling superior, aren’t really doing that much more to combat it than the former. But both of them have found ways to sleep at night…
It’s not about making localized decisions. It’s very specific: the person CLOSEST to the PROBLEM. For national defence, that would be the Federal government. For your family, it would be you.
That still allows you to get yourself closer to the problem if you see a path to doing so. For example, Elon Musk has clearly found ways to get himself closer to many global problems. Sometimes it works (Tesla, Solar City, Space X) and sometimes not. Usually it works when he can use his abilities at engineering and business to assemble a technical solution. He has skills that can be applied to the problem and can often make progress by using them. That’s what I’d suggest to anyone looking to be useful to society.
And I wouldn’t say it’s impossible to influence, tangentially, larger problems – especially large-scale coordination problems. Just that for most of us that influence should reflect reality, not an inflated sense of urgency.
There’s a story in the New Testament that I’ve decided to interpret in a new way. A woman comes and honors Jesus with an expensive gift. One of the apostles (Judas?) says, “Hey, that was expensive, it could have been sold to help the poor!” Jesus replies, “The poor you have with you always…” basically dismissing the argument. And it’s clear Jesus cares about poor people; he spent huge amounts of time helping them. As I read it, he’s saying, “Don’t apply short-term solutions to long-term problems. Want to help the poor? They’re always there. Go help them in a way that helps them long term.” And I think something similar could be said about yelling at your coworker about global warming. That’s the wrong way to solve that problem. You could throw bags of pennies on the street and be seen “helping the poor,” or you could work on real solutions. And I think that’s what you’re trying to get at here. If you’re arguing with a co-worker about a bill that was defeated 0-95, and you’re saying it’s all about building social pressure for change, you’ve gone off track somewhere. But I think people turn around and say, “But I do that because I have no other way to help solve the problem! What do you want me to do, nothing?”
In many cases, nothing is better than the wrong thing. In my experience, it’s easier to think up solutions when you’re not already laboring to implement the wrong ones.
That’s a good way of putting it. I think my friend would argue that there are some problems which are so important that we should all be moving closer to the problem. But I also think he would admit that arguing with his co-worker isn’t moving either of them closer.
Exactly. Bad solutions distract from real ones. That doesn’t mean you throw up your hands and give up. It means you don’t accept easy answers and call it a day.