Pierre Dragicevic. Last updated November 2024.
In mid-2022, I wrote a pre-print titled Information Visualization for Effective Altruism, in which I encourage people involved in information visualization research and in the effective altruism (EA) movement to work together. Just before that, I wrote a pre-print titled Towards Immersive Humanitarian Visualizations, where I also briefly discuss EA. I later reused the text from these two pre-prints in the last chapter of my HDR thesis (Habilitation à Diriger des Recherches).
In the light of revelations about serious problems in certain branches of the EA movement, I no longer support EA as a whole. I do still support EA's general philosophy and its branch focusing on alleviating global poverty, which are the focus of my writings.
More details and my position as of 2023
Here is a text I wrote for the HDR talk I gave in July 2023:
Effective Altruism (or EA) is a movement that started about 20 years ago. The term was coined 12 years ago by a small group of philosophers, who defined it as using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis. Two key premises are that it is the well-being of individuals that matters, and that all individuals count equally. Initially at least, a major focus of EA was on improving global health and reducing global poverty.
EA uses different approaches; one is to encourage individuals to contribute personally. One principle is that if we live in a high-income country, we should give part of our income to alleviate world poverty. A more radical and more controversial version of this is that we should take a high-paid job in order to do this as effectively as possible. Another key idea is that we should avoid wasteful donations, and for that, we need rigorous and independent research on charity impact. There is also a subcommunity of EA focusing on how to best help non-human animals.

More recently, EA grew a lot and started to address new cause areas, including AI safety and global catastrophic risks, the idea being that we should work on reducing the likelihood of terrible events like human extinction, for example by doing research. Another new cause area is longtermism, according to which there could be billions of times as many people in the very far future as there are today, for example if we colonize space, and actions we take now could cause astronomical amounts of suffering or well-being in the future, so we should focus on those people. This is a very rough summary, but it is more or less the EA landscape as I see it.
I have been following the EA movement for 8 years, and up until recently, I was very enthusiastic about it, especially the core philosophy and the global health part, on which I have been focusing. I was a bit less convinced by the rest, and I wondered why more and more people were talking about AI safety and longtermism, but I thought, why not have people think about those things. Then in the past few months, lots of newspaper articles criticizing different aspects of EA appeared, which I had missed and have just started to read. Here are my updated beliefs.
First, the new futuristic cause areas are much bigger, richer, more important, and more influential than I thought. They are also highly dysfunctional and problematic. AI safety is important, but it has been reported that in the Bay Area there is a community of AI safety researchers, many of whom identify as effective altruists, which has become a sort of apocalypse cult, with a toxic culture and reported cases of sexual abuse (Huet, 2023).
As for longtermism, it became a powerful ideology with a religious flavor as well, because it has this vision of astronomical or infinite amounts of value in the future, which can potentially provide a motivation and a justification for doing horrible things to people today (Torres, 2022).
And this is all the more worrying given that one of the main intellectual fathers of longtermism was recently found to have expressed explicitly racist ideas. He is also a promoter of transhumanism, an ideology that has connections with eugenics (Gault and Pearson, 2023).
So this is all super scary and disturbing, and I am not endorsing those offshoots of EA at all.
I am also less supportive of the "earning to give" movement now, because a recent scandal involving a large-scale cryptocurrency scam demonstrated that it can inspire people to do illegal things (Táíwò and Stein, 2022).
Now, I am still supportive of the core EA principles and the main figures behind them, because I think those are good guiding principles, but I am slightly less enthusiastic now that it is clear to me that those ideas can be misused and abused, like any idea unfortunately.
But for now I see no reason not to maintain my full support for EA's efforts in addressing global health and global poverty, especially the charity evaluation aspects, and animal rights as well, and I will continue to explore how visualization can help. The issues I have mentioned are really serious and worrisome, but I am sure there are lots of sincere people in EA working on global health and doing good, there is lots of important research, and I think it would be a wasted opportunity to throw the baby out with the bathwater and not try to work with them.
That is my position now. It may change as I read more, but probably not by much.
References
Ellen Huet (2023) The Real-Life Consequences of Silicon Valley's AI Obsession.
Émile P. Torres (2022) Understanding "longtermism": Why this suddenly influential philosophy is so toxic.
Matthew Gault and Jordan Pearson (2023) Prominent AI Philosopher and 'Father' of Longtermism Sent Very Racist Email to a 90s Philosophy Listserv.
Olúfẹ́mi O. Táíwò and Joshua Stein (2022) Is the effective altruism movement in trouble?