You claim OP's interpretation is inaccurate, while it tracks perfectly with many of EA's most notorious supporters.
Given that contrast, I'd ask: what evidence do you have that OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?
The fact they're notorious makes them a biased sample.
My guess is that for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.
I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist", given the vibe it gives off. Any truly altruistic act would aim to be effective; otherwise it wouldn't be altruism, it would just be waste. That's not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else, scoffing at someone who just gives money to charity A when they could have been 13% more effective if they sent the money directly to this particular school in country B with the condition they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.
A lot of altruism is quite literally wasteful and ineffective, in which case it's pretty hard to call it altruism.
> they could have been 13% more effective
If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors or EA to have an informed opinion.
The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).
The sheer scale of the spread is the impetus behind the entire train of thought.
It's absolutely worth looking at how effective the charities you donate to really are. Some charities spend a lot of money on fundraising to raise more funds, then reward their management for raising so much money, with only a small amount being spent on actual help. Others are primarily known for their help.
Rich people's vanity foundations, especially, are mostly a channel for dodging taxes and channeling corruption.
I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.
And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?
But ultimately, any charity that's actually done is going to be more effective than holding off because you're not sure how optimal it is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.
For sure this is the case. But just knowing what you are donating to doesn't need some sort of special designation. Like yes, A is in fact much better than B, so I'll donate to A instead of B: that's no different from any other decision where you'd weigh options. It's like inventing 'effective shopping'. How is it different from regular shopping? Well, with ES, you evaluate the value and quality of the thing you are buying against its price; you might also read reviews or talk to people who have used the different products before. It's a new philosophy of shopping that no one has ever thought of before, and it's called 'effective shopping'. Only smart people are doing it.
The principal idea behind EA is that people often want their money to go as far as possible, but their intuitions for how to do that are way, way off.
Nobody said or suggested that only smart people can, should, or are "doing EA." What people observe is these knee-jerk reactions against what is, as you say, a fairly obvious idea once stated.
However, it being an obvious idea once stated does not mean people intuitively enact that idea, especially prior to hearing it. Thus the need to label the approach.
> However, it being an obvious idea once stated does not mean people intuitively enact that idea, especially prior to hearing it. Thus the need to label the approach.
This has some truth to it, and if EA were primarily about reminding people that not all donations to charitable causes pack the same punch, and that some might even be deleterious, then I wouldn't have any issues with it at all. But that's not what it is anymore, at least not the most notable version of it. My knee-jerk reaction to it comes from this version: the one where narcissistic tech bros posture moral and intellectual superiority not only because they give, but because they give better than you.
Out of interest, do you identify any of the comments in this discussion as that kind of posturing? The "pro-EA" comments I see here seem (to me) to be fairly defensive in character. Whereas comments attacking EA seem pretty strident. Are you perceiving something different?
My impression of EA is not based on the comments here but the more public figures in this space. It is likely that others attacking EA are reacting to this also, while those defending it are doing so about the general concept of EA rather than a specific realization of EA that commenters like myself are against.
> Subtract billionaire activity from your perception of EA attitude
But that's the problem, that is my entire perception of EA. I see regular altruism where, like in the shopping example I gave above, wanting to be effective is already intrinsic. Doing things like giving people information that some forms of giving are better than others is just great. No issues there at all, but again I see that as a part of plain old regular altruism.
Then there is Effective Altruism (tm), which is the billionaire version that I see as performative and corrupt. Even when it helps people, this seems to be incidental rather than the main goal, which appears to be marketing the EA premise for self-promotion and back-patting.
Obviously EA has a perception problem, but I have to admit it’s a little odd hearing someone just say that they know their perception is probably inaccurate and yet they choose to believe and propagate it regardless.
If it helps, instead of thinking of it as a perception problem, maybe think of it as a language problem. There are (at least) two versions of EA. One of them has good intentions and the other doesn't. But they are both called EA, so it's not that people are perceiving incorrectly; it's that they hear the term and associate it with one of those two versions. I tried to disambiguate by referring to one as just regular altruism and to the other by the co-opted name. EA has been negatively branded, and it's very hard to come back from that association.
"A lot of people think that EA is some hifalutin, condescending endeavor and billionaire utilitarians hijack its ideology to justify extreme greed (and sometimes fraud!), but in reality, EA is simply the imperative (accessible to anyone) to direct their altruistic efforts toward what will actually do the most good for the causes they care about. This is in contrast to the most people's default mode of relying on marketing, locality, vibes, or personal emotional satisfaction to guide their generosity."
See? Fair and accurate, and without propagating things I know or suspect to be untrue!
This is a perfectly fine definition, if you change the "but in reality" to "and". Like it or not, EA means both of these things simultaneously. So it's not that someone using one definition is wrong, only that they are using that definition. Language is like that. There is no official definition; it's whatever people en masse decide to use, and sometimes there is a split vote.
I see your point, but if the only red-headed people anyone ever saw were Kathy Griffin and Carrot Top, and they found them unfunny, and Kathy and Carrot Top were loudly and sincerely proclaiming that they were funny, funnier than any other comedians, and that it was because they were red-headed, how irrational would that perception be?
I agree. I think the criticism of EA's most notorious supporters is warranted, but it's criticism of those notorious supporters and the people around them, not the core concept of EA itself.
The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you of the bad things you do with the rest, or the bad things you did to get rich in the first place.
Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike-breaking, and deaths.
Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.
The OP's interpretation is an inaccurate summary of the philosophy. But it is an excellent summary of the trap that people who try to follow EA can easily fall into. Any attempt to rationally evaluate charity work can instead wind up rationalizing what the evaluator already wanted to do, settling for a convenient and self-aggrandizing "analysis" rather than a rigorous one.
An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.
But history shows that imagined utopias seldom materialize. By contrast the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.
EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example look at how influential someone like Samuel Bankman-Fried was able to be, before his fraud imploded.
This feels like “the most notorious atheists/jews/blacks/whites/christians/muslims are bad, therefore all atheists/jews/blacks/whites/christians/muslims are bad”.
It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.
The problem is that this creates a kind of epistemic closure around yourself, where you can't encounter such a thing as a sincere expression of it. I actually think your charge against Libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions, for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on things like justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with Twitter-style ridicule.
I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.
When a term becomes loaded enough then people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X' it is actually 'people who are X will think I am one of them'.
I think social dynamics are real and must be answered for, but I don't think any self-correction or lack thereof has anything to do with the subject matter, which can be understood independently.
I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.
I am saying that those who actually believe something won't stick around and associate themselves with the original movement if that movement has taken on traits that they don't agree with.
Literally every comment of mine explicitly acknowledged social indicators, just not to the exclusion of facts. You're trying to treat your comments like they're the mirror image of mine, but they're not.
It is not an unfair filter to (correctly!) note that I specifically accounted for this and agreed with it.
I said:
"I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract"
"I think social dynamics are real and must be answered for"
"I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements"
In the face of that, you're trying to claim that I'm ignoring "social indicators as a valid heuristic."
That's not true and no amount of projection or character attacks can make it true. These are verbatim quotes from both of us. You're attempting to present a point I agree with as if it's a new unacknowledged critique.
Meanwhile, when I say the subject matter of a belief system matters for its content, you don't engage with it but reply to me by re-asserting the point I agree with as if it does the work of responding to me. No amount of social signalling takes the place of evaluating intellectual content on its merits and saying "intellectual content matters" is not a denial of the importance of social signalling.
> "I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract"
And I said that people tend not to associate themselves with labels that have connotations that they don't like.
These two statements are not the same.
> "I think social dynamics are real and must be answered for"
Yet you completely dismiss my point.
> "I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements"
What does this have to do with group associations?
> In the face of that, you're trying to claim that I'm ignoring "social indicators as a valid heuristic."
Because you never acknowledged my point.
> That's not true and no amount of projection or character attacks can make it true.
You are the one who started with the insults, I was following suit.
> Meanwhile, when I say the subject matter of a belief system matters for its content, you don't engage with it but reply to me by re-asserting the point I agree with as if it does the work of responding to me.
Because I wasn't contesting that. I was adding something to it.
I've provided three direct quotes where I explicitly acknowledged social dynamics. You keep claiming I didn't. Now you claim you were 'just adding' and that I 'started with the insults', yet the thread shows you introduced personal criticism first with 'defensive self-regard' while my initial comments were substantive.
Most tellingly: you dismiss my direct, repeated acknowledgments as 'not counting' while claiming credit for 'adding' a point you never actually verbalized until this moment. Your standard for what constitutes 'acknowledgment' shifts based entirely on whether you're demanding it or taking credit for it.
And the Bell Curve example directly illustrates holding proponents responsible for social entanglements, the exact thing you claim I never addressed.
Saying that I am 'trying to mirror your posts' is not an insult? What is it then?
Your entire tone is disdainful and dismissive, and your constant need to insist that I am missing something when you refuse to acknowledge my basic point is tedious.
Yes, you acknowledged 'social indicators'. No, you did not acknowledge that 'people who stick around in clubs filled with other people they vehemently disagree with about core issues tend to be rare'.
If people really believe in something, it stands to reason that they aren't willing to just give up on the associated symbolism because someone basically hijacked it.
Coincidentally, libertarian socialism is also a thing.
Well, in order to be a notorious supporter of EA, you have to have enough money for your charity to be noticed, which means you are very rich. If you are very rich, it means you have to have made money from a capitalistic venture, and those are inherently exploitive.
So basically everyone who has a lot of money to donate has questionable morals already.
The question is, are the large donors to EA groups more or less 'morally suspect' than large donors to other charity types?
In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.
Fair to disagree on that point, but I think the people who would find the EA supporters “morally questionable” feel that way for reasons that would apply to all rich people. I would be curious to hear what attributes EA supporters have that other rich people don’t.
I think the idea that future lives have value, and that the value of those lives can outweigh the value of actual living people today, is extremely immoral.
To quote[1]:
> In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations.
Isn't this just the Thanos argument, though? Given the huge number of possible future lives under space colonization, all of them ending inevitably in death and suffering, no amount of trying to improve those lives can ever have as much of a positive impact as just avoiding them by pushing for, say, nuclear self-annihilation now, because the somewhat larger suffering for a much, much smaller number of people is a higher "expected value"? I'm not really keen on moral arguments that end up arguing for nuclear war…
> I think the idea that future lives have value, and that the value of those lives can outweigh the value of actual living people today, is extremely immoral.
This is an interesting take. So if we found out for certain that an action we are taking today is going to kill 100% of humans in 200 years, it would be immoral to consider that as a factor in making decisions? None of those people are living today, obviously, so that means we should not worry about their lives at all?
For very much money, as in, let's say, more than 1000x the wealth of the median person in the distribution, I'd say it's obviously true.
You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
A person is not capable of creating that wealth. A group of people have created that wealth, and the 1000x individual has hoarded it to themselves instead of sharing it with the people who contributed.
If you are a billionaire, you own at least 5000x the median (roughly $200k in the US; see the rough arithmetic sketched below). If you're a big tech CEO, you own somewhere around 50-100,000x the median. These are the biggest proponents of EA.
The bottom 50% now own only about 2% of the wealth, the top 10% own two thirds of the wealth, the top 1% owns a whole third, and it's only getting worse. Who is responsible for the wealth inequality? The people at the right edge of the Lorenz curve. They could fix it, but don't; in fact they benefit more from their workers being poorer and more desperate for a job. I hope that explains the exploitation.
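To make the ratios concrete, here is a minimal arithmetic sketch. The figures are assumptions for illustration only: roughly $200k for US median household net worth, $1B as the floor for a billionaire, and $10-20B for a large tech CEO.

```python
# Rough, illustrative arithmetic only; every figure here is an assumption.
MEDIAN_NET_WORTH = 200_000  # assumed US median household net worth (~$200k)

holdings = {
    "billionaire (floor)": 1_000_000_000,
    "big tech CEO (low end)": 10_000_000_000,
    "big tech CEO (high end)": 20_000_000_000,
}

for label, wealth in holdings.items():
    ratio = wealth / MEDIAN_NET_WORTH
    print(f"{label}: {ratio:,.0f}x the median")

# billionaire (floor): 5,000x the median
# big tech CEO (low end): 50,000x the median
# big tech CEO (high end): 100,000x the median
```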
> You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
The risk profile of early startup founders looks a lot like "winning the lottery", except that the initial investment (in terms of time, effort, and lost opportunities elsewhere, as well as pure monetary ones) is orders of magnitude higher than the cost of a lottery ticket. There's only a handful of successful unicorns vs. a whole lot of failed startups. Other contributors generally have a choice of sharing in the risk vs. playing it safe, and they usually pick the safe option because they know what the odds are. Nothing has been taken away from them.
The risk profile being the same does not mean that the actions are the same. The unicorns that make it rich invariably have some way of screwing over someone else: either workers, users, or smaller competitors.
For Google and Facebook, users' data is sold to advertisers, and their behaviour is manipulated to benefit the company and its advertising clients. For Amazon, the workers are squeezed for all the contribution they can give and let go once they burn out, and they manipulate the marketplace they govern to their own benefit. If you make multiple hundreds of millions, you are either exploiting someone in the above way, or you are extracting rent from them.
Just looking at the wealth distribution is a good way to see how unicorns are immoral. If you suddenly shoot up into the billionaire class, you are making the wealth distribution worse, because your money is accruing from the less wealthy proportion of society.
That unicorns propagate this inequality is harmful in itself. The entire startup scene is also a fishing pond for existing monopolies. The unicorns are sold to the big immoral actors, making them more powerful.
What is taken away when inequality becomes worse is political power and agency. Maybe other contributors close to the founders are better off, but society as a whole is worse off.
The problem with your argument is that most organizations by far that engage in these detrimental, anti-social behaviors are not unicorns at all! So what makes unicorns special and exceptional is the fact that they nonetheless manage to create outsized value, not just that they sometimes screw people over. Perhaps unicorns do technically raise inequality, but by and large, they do so while making people richer, not poorer.
Could you please back that up with some evidence? Right now you're just claiming that there are a lot of anti-social businesses, but that unicorns are separate from this.
That's quite a claim, as there's a higher probability of unicorns screwing people over. If a unicorn lives long enough it ends up at the top of the wealth pyramid. As far as I can tell, all of the _big_ anti-social actors were once unicorns.
That most organizations engaging in bad behavior aren't unicorns says nothing, because by definition most companies aren't unicorns. If unicorns are less than 0.1% of all companies, then for almost any property P, a company with property P is far more likely to be a non-unicorn than a unicorn, simply because of the base rate, unless unicorns are more than a thousand times likelier to have P.
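As a minimal sketch of that base-rate point, with made-up numbers: even if unicorns were assumed to be 100x more likely than ordinary companies to behave badly, most badly behaved companies would still be non-unicorns, simply because non-unicorns vastly outnumber them.

```python
# Illustrative only; every number is an assumption chosen to show the base-rate effect.
total_companies = 1_000_000
unicorn_share = 0.001        # unicorns assumed to be 0.1% of all companies
p_bad_given_unicorn = 0.10   # assumed: 10% of unicorns behave badly
p_bad_given_other = 0.001    # assumed: 0.1% of other companies behave badly

unicorns = total_companies * unicorn_share   # 1,000 companies
others = total_companies - unicorns          # 999,000 companies

bad_unicorns = unicorns * p_bad_given_unicorn  # 100
bad_others = others * p_bad_given_other        # 999

print(f"badly behaved unicorns:     {bad_unicorns:.0f}")
print(f"badly behaved non-unicorns: {bad_others:.0f}")
# Even with unicorns 100x likelier to offend, most offenders are non-unicorns,
# which is a statement about base rates, not about any individual unicorn.
```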
You think the wealth inequality is set up to exploit poor people, but you don't think contributing to the wealth inequality is immoral.
That's an interesting position. I would guess that in order to square these two beliefs you either have to think exploiting the poor is moral (unlikely) or that individuals are not responsible for their personal contributions to the wealth inequality.
I'm interested to hear how you argue for this position. It's one I rarely see.