Meta cuts Responsible Innovation Team (wsj.com)
453 points by cpeterso on Sept 9, 2022 | 499 comments



Well this part is very familiar:

> its work was given prominence in the engineering operation by former Chief Technology Officer Michael Schroepfer, who announced last year that he was stepping down.

At the end of 2016, I joined Twitter to lead one of the anti-abuse engineering teams. My boss, who led the effort, was great: loved Twitter, hated abuse, very smart. 6 months later the CTO, who had been behind the creation of the anti-abuse engineering effort, left. My boss, previously gung ho, left in a way that made me think he was getting signals from his boss. And shortly after that, said boss's boss said that our team had succeeded so well that there was no need for us. He scattered the engineers, laid off the managers, me included, and declared victory. We all laughed bitterly.

What these have in common for me is a high-level executive launching a special team with great fanfare in a way that addresses a PR problem. But because PR problems are generally fleeting, as soon as do-goodery loses its executive sponsor, everybody goes right back to the short-term incentives, meaning things like "responsibility" go right out the door. At least beyond the level that will trigger another PR disaster.

And if you're wondering why you don't hear more about things like this, you're not supposed to. At least for the managers laid off, it was a surprise meeting and then getting walked out the door. In the surprise meeting, they offered a fair chunk of money (for me, $40k) to sign an additional NDA plus non-disparagement agreement. I happen to have a low burn rate and good savings, so I didn't sign. But I know plenty of people who have looked at the mortgage and kid expenses versus the sudden lack of income and eventually signed.


I don't get a chance to read many comments that cost $40k to make. Thanks.


$40k is two months' pay probably.

Edit: To be clear I didn’t say this to shame the OP. I said this to highlight that the company didn’t offer up much. I say this because I have been in a similar position in the past where their “generous” offer was two months severance (same ballpark) which imo was insulting. Didn’t sign it either and told them to F off.


How many month’s salary would the writer need to have lost before you unlock some praise?


I'd look at it a little differently. The 40k puts him on the hook for something. He (or someone who signs an agreement like that) is in a position where something he says could be construed as violating the agreement and he could get taken to court, even if he's in the right. No agreement, nothing to worry about. Why take an immaterial amount of money to be personally liable for something? Getting to tell the story is a minor perk.


I think this somewhat speaks to the pay gap present on this platform. From my position I'd happily accept a 40k payout if you wanted to contract me to never say the word "are" for two years. It's all a balance - if you have 20 million sitting in the bank then 40k sounds like nothing, but 40k is a lot of money to most people - enough that they'd be happy to agree to some terms that won't significantly affect their lives just to take home the cash and get, let's say, two months of vacation budgeting (a conservative estimate) to enjoy.

40k is a lot of money.


I don't have 20 million sitting in the bank, but I still definitely wouldn't take that. 40k is definitely a lot to me, but if it's similar to an NDA, you have to assume that the legal and financial penalties if you mess up could be ruinous. And I'm almost definitely going to mess up and accidentally say the word "are".

The 40k isn't a gift, it's a payment in order to take on a liability. I'd have a lot more success not talking about why I was let go, as opposed to not saying the word "are", but I'd still have to weigh the consequences of messing up one drunken or tired evening.


I think the problem is that 40k is a lot of money to the average worker AND it's virtually nothing to the company.

It highlights just how out of control the wealth gap is becoming.


Well uh, the company also employs thousands of people who get more or less equally paid?

Taking a public company vs average employee as a wealth gap example is probably not showing what you want to show.


Maybe wealth gap isn't exactly what I should have highlighted. Instead we could say it shows the power corporations have to buy our basic rights away from us for a pittance.


I get what they are saying. The cost of a lawsuit for any perceived violation of this NDA would easily cost over $40k to fight or settle. If you're highly risk averse it may actually make sense on a personal decision level to not sign any NDAs for any amount below instant retirement, like 8 figures. If you sign, it will always hang over your head for the rest of your life.


Yeah 40k would instantly solve all my immediate problems and set me on a path for life haha

One day I hope to see giving up 40 grand as a viable option


I'd miss:

1. International Speak Like a Pirate Day.

2. Passing by John Cook without an R-R-R-R me hearties!


What do pirates have inside their compilers?

IR, matey


Well some things take priority


Maybe you could have an accent.

Ahhh me mateys !


> Why take an immaterial amount of money

k


So two months, assuming 40 hour work week, ends up being a comment with 320 hours of integrity behind it. Still very impressive, thanks for bringing it to the time domain.


It’s hard to have integrity when you are hurting for money. I am glad the poster was doing okay and was able to stand up for what they believed in


Bro where are you working where the typical engineer is making 20k a month?


20k is 240k a year. That's not an atypical base salary for someone fairly senior at a FAANG. Then there's the bonus, which brings cash comp to around $290k.

RSUs are the largest part of comp for a senior person, more than half. So add another $300k on top of that.

That’s a comp of close to 600k a year or 50k a month when averaged out.
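The arithmetic above uses round, hypothetical numbers (not any company's actual pay bands), but the breakdown is easy to sketch:

```python
# Hypothetical senior-level big-tech comp breakdown (round numbers,
# not any company's actual pay bands).
base = 240_000          # annual base salary
bonus = 50_000          # cash bonus: brings cash comp to ~290k
rsu_per_year = 300_000  # annual RSU vest value, often over half of comp

total_annual = base + bonus + rsu_per_year
monthly = total_annual / 12

print(total_annual)     # 590000 -- "close to 600k a year"
print(round(monthly))   # 49167 -- "or 50k a month when averaged out"
```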


The post specifically mentioned Twitter. That number is in line with the pay ranges mentioned in their job listings.


Most larger SV tech companies pay senior+ engineers that much between RSUs and cash.

FAANG pays that to new grads. (500k stock over 4 years and 140k salary is pretty standard).


You are not receiving 40k a month in liquid cash; RSUs can take a very, very long time before that worth is liquid. That was my point.


Many companies vest those RSUs monthly or at least quarterly, and for public companies employees can sell right away. This is a very common paradigm, and in fact financial advisors would frequently recommend employees sell all shares at vest to diversify.

I've worked at places also where 10b5-1 plans were generally available (sell all shares at vest even outside trading windows), in which case employees were absolutely receiving 40k+ liquid cash a month. Many senior engineers were receiving substantially more than that.


Some companies vest quarterly with no cliff, so you are going to vest within 3 months of starting.


Before the recent stock declines, an experienced engineer could get offers for 40k a month before taxes. I didn't believe it till I tried. It's real. Amazon frontloads in cash; Facebook has 3-month vesting.


The cost to the company is the same, one just has better retention properties.


At the last 3 companies I worked for, RSUs vested either quarterly or monthly and were then immediately liquid.


Did you skip basic math classes? Or do your parents still pay all your expenses?

Even with a 20k take-home salary (which means you have been around a bit, possibly having a family, mortgage, other investments, hobbies that cost a bit more than just breathing air, travel, etc.) you are saving just a fraction of that.

For some the fraction is really tiny, for some a bit bigger. 40k (just raw amount, untaxed) can be a year's worth of actual savings easily, even with good job.

Kudos to OP, bits like this keep my faith in humanity.


Assuming 40k is worth two months' salary, the actual amount you'd end up getting after taxes will be closer to 20-something. It's all about perspective. 20k is a lot for someone making 100k/year or less. 20k is not a lot for someone who works for Big Tech and pulls in a lot more.


Yeah, that's what I was thinking. If 40k landed in my lap, my current income and burn rate being what they are, I could easily finish the walls and roof on my house.


This always reads so unreal to me to hear about such a high pay.

I mean, I just started a new job two months ago in a fully remote position for 50k EUR/year, working in France. It's already almost 10k more than my previous job.

I read that the median annual wage in 2021 in the US was $45,760. In France, it was 22,184 euros in 2018.

So I guess, $40k is a joke for some, but would make a large difference for most people out there.


It is common practice for companies to offer this if they have something to hide.


It's common practice in general to offer anywhere between 2-4 months for redundancies; the non-disparagement clauses are tacked on to make sure you can't grind an axe with your now ex-employer.


You limited it to redundancies. I didn't.

I have seen it tried in a situation where a manager called an employee a racial slur. The employee sued and the company settled. Too bad for the company the employee had an email where the racial slur was used. You may not hear about it because it is kept hush hush in Silicon Valley.

Something similar happened to me. A manager said that a certain gender wasn't liked around here. When they laid me off, they gave me an unusually generous package and asked for a signature.


It is pretty common practice for companies to offer this for layoffs, period.

The discussion can start from the point of view of wanting the laid off employees to have a soft landing, and the lawyers push for especially non-disparagement and tightening up NDA.


I didn't limit it to layoffs. You did.


Fair enough, but it is also standard practice in any settlement.


I would have taken it and given it to a charity. Instead we get a silly HN comment.


Easy to say, but forgive me if I'm skeptical.

That's an option I definitely considered, even discussed. But for reasons I mentioned elsewhere I decided to not sign, and have instead since mainly worked for not-for-profits at lower rates. So if we can count the salary difference as giving to charity, I've already given far more than that.


Agreed. That is a power move par excellence.


I’ve done the opposite of you and regretted it. Good for you for sticking to your principles.

I appreciate your $40k comment here!


No shame! I was lucky in all sorts of ways. One is that years of consulting work left me with habits of always having enough of an emergency fund that I could afford the time off. Another is living in a moment where my weird brain quirks made me a highly paid professional rather than that annoying car mechanic who might solve a hard diagnostic problem quickly but takes forever to get around to your oil change. And perhaps most importantly, early on I had a couple of dubious jobs that made me really think about my ethical standards for work.

I'm glad to hear you learned your lesson!


> a highly paid professional rather than that annoying car mechanic who might solve a hard diagnostic problem quickly but takes forever to get around to your oil change.

There are so many of us who can relate. Huge kudos to you for your integrity and for staying humble.


> my weird brain quirks made me a highly paid professional rather than that annoying car mechanic who might solve a hard diagnostic problem quickly but takes forever to get around to your oil change

Thank you so much for being humble about your skills. I wish more of us took this perspective


I’m slightly surprised they’d just lay you off instead of trying to fill other internal positions first. That seems like it’d cost less than $40k for one thing.


> fill other internal positions first

Not (necessarily) getting down on OP, but the sort of person who's attracted to anything that Twitter would call "anti-abuse" might not be the sort of person who's appropriate in any other role.


No, that's exactly correct. I joined Twitter to solve a problem. A problem that my boss's boss, as well as plenty of other people, did not give two shits about solving. He certainly did not want me under him, and it was far easier for him to engineer a "layoff" than try to transfer me somewhere else.

And I get the Machiavellian calculus. I cared about Twitter and its users, especially the marginalized ones that were facing a lot of abuse. He cared about his own career, and wanted people who would put his advancement first. I would not have done that, so pushing out me and anybody else who really cared about the problem was the correct move in his self-centered analysis.


From a feature/capability perspective, what would be the most impactful ways for Twitter to help those users?


Five years later, I have no idea. Answering that requires a lot of data that isn't available publicly. For me it would start with who's getting the abuse these days, and where it's coming from.

But if I had to guess, top of my list would be faster, more targeted action to deplatform shitheels. For a while last year I was tracking a ring of nazi-ish jerks who managed to stay around eternally by getting new accounts. They'd either circulate the new account before the old one got banned, or come back, follow a few people in the account cluster, and then say they were looking for their "frens", which would prompt a wave of retweets in the cluster so as to rebuild their follow graph.

I'd also look for ways to interfere with off-Twitter organization of harassment of particular accounts, which has some characteristic signals. Interfere both by defanging the content (attenuated notifications, downranking) and faster suspension/banning for jerks that are part of the waves.
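For a flavor of what those "characteristic signals" might look like, here's a toy sketch (purely illustrative, not Twitter's actual detection logic): flag a fresh account whose earliest follows overlap heavily with a known banned cluster, the rebuild pattern described above.

```python
# Toy ban-evasion signal (illustrative only, not any real system):
# a brand-new account that immediately re-follows a banned cluster
# is a candidate for review.
def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def looks_like_ring_rebuild(new_follows, banned_cluster, threshold=0.5):
    # High overlap between a fresh account's follows and the old
    # cluster's membership suggests a follow-graph rebuild.
    return jaccard(new_follows, banned_cluster) >= threshold

cluster = {"acct_a", "acct_b", "acct_c", "acct_d"}
fresh = {"acct_a", "acct_b", "acct_c", "news_site"}
print(looks_like_ring_rebuild(fresh, cluster))  # True: overlap 3/5 = 0.6
```

In practice a signal like this would be one feature among many (account age, retweet waves, shared IP ranges), not a ban trigger on its own.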

I'd also love to see more support on the recipient side, but I haven't kept up with what those tools look like these days.

That said, I believe they've made progress since I left; I think Twitter has a notably lower proportion of shitty behavior than years ago. So kudos to whoever's been working on this problem across the reorgs that have happened since then.


It would be fascinating to read a deep dive from you on your thoughts on all the thorny questions that always come up with respect to this.

You seem all in on the idea that Twitter should be stamping out nasty speech, and I can certainly respect that philosophy for a private company. I certainly wouldn't want to personally work to support the aforementioned shitheels. But I'm curious what you think of the broader ethics that people often bring up in these discussions. Is it bad that the contemporary "public square" is dominated by private companies that get to determine the rules? Should there be a real digital public square that isn't dominated by private companies? Could such a thing even work?

I dunno, maybe you're tired of this so uninterested in writing such a thing, but I feel like so many people act like the answers to this stuff are both obvious and easy to implement, and I know that's not true, and would be interested in the view from the front lines.


That sounds like several books worth of writing, so I'll pass, but I'm glad to take a quick swing at the points you mentioned.

There is a real digital public square: the internet. Anybody can publish anything. Anybody, private companies included, can help people publish. But all of those people have freedom of association. Nobody has to use a particular publisher, and none of those publishers have to publish anything they don't want to.

Many popular platforms have started out with a "free speech wing of the free speech party" ethos. I was part of one of those, bianca.com, which won a webby in 1997 and was gone by 2000. The lesson of every one of those, ours included, is that you can't have all the kinds of speech in one place. E.g., Twitter can have black people or people who shout abuse at black people. So the question for any platform is: who are you ok excluding?

This is not a new lesson: https://twitter.com/IamRageSparkle/status/128089253502461952...

I think it only feels new in the digital context because people often start from an assume-spherical-cows perspective. Which can be adequate if you're building an ecommerce system or something. But when building tools to shift society online, it's hopelessly naive. So I'd be interested to read a "real digital public square" proposal that started from actual human behaviors. Say, from people who studied physical public squares, their impacts, how they're regulated, and the social, non-legal mechanisms that keep them from just being filled with nazis, spam, and porn. But the ones that come out of the same naivete that we had in the mid-1990s or that Jack and Zuck had a decade later? We know what happens, and it never works out well.


I missed this, thank you for your response! I'm entirely with you on this, and I wish I was someone who had the ability to offer generous book advances :)


Thanks! Very kind.


Sorry, but your posts are making it clearer why your team was disbanded. You have been arguing that:

- Social networks can be blamed for basically any evil that happens in the world. You express elsewhere the belief that Facebook can be blamed for the Rohingya Genocide. And thus by definition that your colleagues and past/future work mates could also be held morally responsible for genocide or more or less anything bad, via the transitive property of whatever. This belief requires sloppy and motivated reasoning to sustain.

- That perhaps your job was to ban anyone you regarded as "shitheels", "jerks" or "Nazi-ish" (!). In other words anyone whose views you simply didn't like. That's not an anti-abuse policy worthy of the name.

With these attitudes it is easy to believe that the team you were on was not only highly disruptive towards colleagues and users but ineffective and a legal liability as well.


Ok, random internet person. Having read a few words, you surely know better than me what happened among hundreds of people half a decade ago.

Except that you are not a very good reader, because I explicitly didn't say "Facebook can be blamed". Sorry, but you're not tall enough to ride this ride. Come back when you've grown up some.


You said in this thread [1], quote:

"Facebook has a body count"

and when someone responded blaming Facebook for an actual genocide you replied with:

"Yeah, when I say that Facebook has a body count, I'm not kidding"

Then when someone else disagreed with you, stating that "the entire (read 100%) responsibility for the Rohingya Genocide or other similar events [lies with] the perpetrators" you doubled down by saying:

"Facebook is morally responsible for that harm"

and proceeded to argue that their "barest minimum" standard should be to ask, "Hey, is there a way to be sure this next feature is responsible for zero deaths?"

You very clearly argued that in fact Facebook can be blamed for, well, god knows what. Genocide and "body counts" at minimum but it's pretty clear that you think almost anything can be blamed on a social network or other communication providers, given that basically all organized human activity involves technology-mediated communication at some point.

And now you're trying to claim you never said Facebook can be blamed! If you didn't really mean that then you need to learn to better express yourself, because a whole lot of people in that thread interpreted it in that exact way and you proceeded to respond without correcting them, indeed, by telling them why their views would lead to more airline accidents (!). Ye gods, you must have been a serious liability problem waiting to happen with these attitudes. Imagine lawyers bringing up your views in discovery and then saying, Twitter's own staff definitely believe Twitter can and should be blamed for ${litigant's harm} by their own arguments. Case closed, your honor.

[1] https://news.ycombinator.com/item?id=32781165


Responsibility and blame are different things, chickadee. That you can't tell the difference is not my problem, so from here on out you're on your own. Maybe keep your lawyer fantasies in your head next time you try to have a discussion with adults.


Ban algorithmic feeds and recommenders. Just show chronological order of content from people you follow.

Failing that, squelch viral content (slow it down).

Ban 'likes' and so forth.

Treat all personal data as property of that person. Transmuting demographic data from an asset into a liability.

FWIW, here's my prior answer to your question, citing the reform efforts of US Sen Warner's SAFE TECH Act and Information & Democracy's Policy Framework:

https://news.ycombinator.com/item?id=32152428


Ban all sharp and pointy objects. Mandate that helmets and Hazmat suits be worn at all times. Personal responsibility and agency is a threat to our democracy.


IMO, ban images of tweets. It's what allows the pile-on to live forever.


Just being in that group might actually be a scarlet letter for other Twitter hiring managers.


Banning spambots is an essential part of running social media, it's not a secret team of SJWs. And they're currently in a lawsuit since Elon claims they don't actually do it and just pretend to.


> Banning spambots

OP was pretty explicit that his job was not banning spambots, but "protecting marginalized users who face a lot of abuse".


He said that's what he cared about, but the comment was about the "group" and the whole group probably does both.

Also, Twitter ships anti-abuse features all the time so they seem to care about it. Although they're not especially strong, like the one that asks you to reconsider if your tweet has too many swear words.


That is not the case. All the developers who decided to stay at Twitter happily moved to other jobs. As should have happened, as they were all smart, competent, and dedicated people.


Very glad to hear that. Sounds like Twitter took better care of these employees than many other companies would have in a similar situation.


> other Twitter hiring managers

Or any other hiring managers anywhere else - I'd be really hesitant.


I was laid off (with a future date), paid an additional $30k to stay till that date and then hired back with an additional $30k bump in salary. Companies do all sort of stupid stuff.


As someone who (after 15+ years of a career) has been feeling alienated (or worse, mocked) when bringing up ethical issues around the tech I work on or design, I wanted to thank you for the integrity. It feels good to not be alone.


Out of curiosity, what do you feel you gained by not signing the NDA/non-disparagement that was worth so much to you?


It's not what I gained. It's what I would have lost.


What did you feel you would have lost?

I can imagine answers, but I've never been in that situation. There may be things I won't think of.

This is why I'm asking - my first instinct is to take the money, so I'd like to fully understand the thinking of the opposite side.

In any case thanks for replying.


There are two basic approaches to work. You can be a minion or a professional.

If you're a minion, then you just build the volcano headquarters and the atomic missiles because, hey, it's your job. They don't pay you to think. Your job is to make your boss look good. You give 110% and think outside the box only when it's firmly inside the box of the primate dominance hierarchy you are embedded in.

Professionals, though, explicitly recognize they are part of a society. They owe something to the client/employer, but also to the public and the profession. For example, read through the preamble here: https://ethics.acm.org/code-of-ethics/software-engineering-c...

I see myself as a professional. I sell my labor, not my ethics. So what would I have lost by selling my ability to speak out about a problem? My integrity. My independence. My freedom. $40k is a lot of money, but it wasn't worth 40 years of shutting up about anything Twitter would want me to keep quiet about.


This is the same sort of worldview regarding professional ethics that led me to make my first discussion with any client free. It's unethical (and bad business) to ask customers to pay my rates just to explain their problems to me; it would be taking advantage of them to charge for the hour or two, or even three, it takes to work out that they're either not a customer I want, or that I haven't got the experience with the technology they are using, or that yes, I could help them but their budget wouldn't cover the time needed at the rate I charge, etc.

I borrowed this from the way quite a few lawyers work. It's served me well and garnered quite a bit of good will over the years, but at the end of the day I do it primarily because I sleep better knowing I'm not ripping people off and taking advantage of them.


For sure! I have no tattoos and probably never will. But "value for value" has such meaning for me that it would sure be a candidate.


You are a legit hero. Our society focuses exclusively on contributions that involve some dramatic moment, and doesn’t recognize those who show up for what’s right every day.

Thank you, seriously.


Hero points.

Thanks also for explicitly spelling out what it means to be a professional. (duty to profession and society as well as employer, written source, summary "sell my labor not my ethics")


I'm always ripping on people using the title Engineer when talking about software development. Your comment is a perfect example of how I would expect an Engineer to think about their work. I love it.


> I'm always ripping on people using the title Engineer when talking about software development.

Why?


In many places, engineers are registered professionals along the lines of doctors or lawyers. As Wikipedia says:

> The foundational qualifications of an engineer typically include a four-year bachelor's degree in an engineering discipline, or in some jurisdictions, a master's degree in an engineering discipline plus four to six years of peer-reviewed professional practice (culminating in a project report or thesis) and passage of engineering board examinations. A professional engineer is typically a person registered with an engineering council.

Personally, I mostly refer to myself as a developer, not an engineer. Funnily, my great-grandfather came to America to work as an "engineer" in the Minnesota mines back when the standards for "engineer" were much looser. I imagine we'll have a similar process in the coming decades, where some sorts of software engineer will be licensed, and that you won't be able to call yourself by that term without getting into trouble.


So having a B/M.Sc. in Software Engineering is not enough to qualify for the title of Software Engineer? :D


The degree means something (assuming the school is accredited) but the title doesn't. Anybody can call themselves a Software Engineer.

For a very brief moment in time, some jurisdictions fought against the use of the title. In Canada Engineer is a protected title (like CPA) and they tried to hold Software Engineer to the same professional standards as other engineers (civil engineer, mechanical engineer, chemical engineer, etc...). This meant having a degree, working under a Professional Engineer for some period of time, then passing an exam demonstrating expertise and high ethical standards. I don't remember all the details, but I think Microsoft fought this and today Software Engineer doesn't imply anything in Canada.

If Software Engineer were a real profession, then you could be sued for malpractice and lose your ability to work in the field. It would also mean you have real power to push back against employers asking you to do illegal or unethical things. You would have personal liability and wouldn't be able to claim you were just following orders.

https://engineerscanada.ca/become-an-engineer/use-of-profess...


Ok - maybe in North America. I have never heard of such a thing in Europe. I work with engineers of different disciplines.

I have never heard of anyone having to “work under a professional” for some time after taking their degree.


Depends where in Europe I suppose:

https://en.wikipedia.org/wiki/European_Engineer

If you are talking about software engineer, then no you wouldn't have heard of that. You can't get a EUR ING designation as a software engineer (AFAIK).


In my opinion, it should not be.

Academic study just isn't sufficient to understand modern software development. One of the hallmarks of academia is how short-lived everything is. But many of the interesting issues in software development only become visible on long-lived projects. As a hiring manager, I consider fresh-out-of-school developers to be dangerous until proven otherwise.

I don't yet think software development is stable enough to turn it into proper engineering. But if it were, what I'd be looking for is a relatively high standard that includes both exams and a few thousand hours of supervised practical experience, similar to what they require of electricians or therapists.


Kudos to you, and glad you took the time to speak about your thoughts here. It's good for people to have exposure to later career challenges... and especially options they might not consider.


This is so incredibly refreshing to see. I can't express just how much I appreciate and respect this.


I have huge respect for you. 40k USD is a ton of money where I live due to the exchange rate.


Thanks for this very thorough (and patient) answer. I'm also happy to be a professional as well.

Ethics is not just a box you tick


I found this comment inspiring and I want to thank you for writing it. This is the work ethic I want to have and keep.


After a certain amount of money to cover life expenses + recreation, additional money has diminishing returns. Once one reaches that point, leading a fulfilling life becomes the next highest priority. I would speculate that the kind of person who joins an anti-abuse team would not find accepting hush money about the lack of anti-abuse systems to align with their vision of a fulfilling life.


What is that number for you? (and others)

For me it's pretty dang high. Like on the order of 5-20 million USD in the bank. At some point I'm going to stop working. Rent and life expenses where I live currently are probably $100k a year (yes I could live on less or move, but that's part of the point).

Let's say I stop working at 65 and I live to 85. That means I need at least 2 million dollars in the bank to keep spending $100k a year, and it assumes I die at 85. If I happen to live to 95 or 105 I'd be S.O.L. Also add in escalating medical expenses, inflation, and other issues, and 5 million in the bank is IMO the minimum I'd need to feel I could discard other money and stop worrying about it.

And that assumes I stop working at 65. If I was trying to stop earlier, that would go up. I get that at some point I could theoretically live off the interest.
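The arithmetic above can be sketched directly (illustrative assumptions only: a fixed nominal return and inflation rate, which real markets won't honor, and this is certainly not financial advice):

```python
# Toy retirement model: does a nest egg survive N years of
# inflation-growing withdrawals? Illustrative only.
def nest_egg_survives(balance, annual_spend, years,
                      inflation=0.03, nominal_return=0.04):
    spend = annual_spend
    for _ in range(years):
        balance -= spend               # withdraw this year's expenses
        if balance < 0:
            return False               # ran out of money
        balance *= 1 + nominal_return  # remainder earns a return
        spend *= 1 + inflation         # next year's expenses cost more
    return True

# $2M covers $100k/year for 20 years with a little to spare...
print(nest_egg_survives(2_000_000, 100_000, 20))  # True
# ...but not if you live well past 85.
print(nest_egg_survives(2_000_000, 100_000, 40))  # False
```

Under these assumptions $2M runs dry only a few years past the 20-year mark, which is exactly why a longer horizon or worse inflation pushes the comfortable number toward $5M and up.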

My point is, at least for me

> a certain amount of money to cover life expenses + recreation, additional money has diminishing returns.

Is generally false. It's usually part of the "if you make $75k a year, more won't make you happier" but IMO that's not true because I'm screwed if that $75k a year stops.

Also, the source of that point is often misquoted. It says happiness doesn't increase with more money, but life satisfaction does keep increasing with more money. Here's an article pointing out that the original study did say satisfaction increased, as well as a new study that says happiness increases too.

https://www.forbes.com/sites/alexledsom/2021/02/07/new-study...

If I had even more money I'd angel invest. I think that would be pretty fulfilling. If I had even more money there's all kinds of projects I'd like to fund. I expect I'd be pretty proud to fund them.


I never said more money doesn't make you happier. I said more money has diminishing returns, and other things become more important. Even you are only able to suggest things that bring you personal fulfillment as a way you can use more money. This actually supports my point that it makes sense for someone to decline money that doesn't bring them fulfillment if fulfillment is what they're going for.


Maybe, but as someone that turned down $6 million because of my conscience (sold stock solely because I didn't like feeling guilty holding it and it's gone up 4x-6x since then), I could do a ton of good things for others if I had that $6 million.

It's not like we're talking about killing babies. We're talking about signing an NDA, or in my case not being 100% on board with a company's behavior in ways that are arguably ambiguous. As an example: if I had New York Times stock and was upset at their hypocrisy of complaining about ad-supported companies while themselves being an ad-supported company. Whether ads on NYT are good or bad is a debatable point. The point being, nothing the company whose stock I was holding did was unambiguously evil. But I chose my conscience over arguably trivial issues. In this particular case I think it was a mistake. If the company had been truly evil (by my definition of evil) then I'd be more okay with my decision.


I am sorry, but no, you did NOT turn down $6M. You made an investment decision that cost you imaginary gains. This is a big difference from turning down a $6M paycheck.


It's not as different as you're making out. There was no reason to believe it would go down (Apple, Amazon, Google, Facebook, etc. were all going up). I fully expected it to go up. I knew I was likely to lose money by not just keeping the stock. I chose to sell 100% for conscience. I had no other reason to sell. Plenty of money in my bank accounts.


> that turned down $6 million

By that logic we all (er, at least most of us) turned down $100mm by not throwing $1k into bitcoin at the right time...


Bad comparison. You'd actually have to spend money to buy into a bet. I already owned the stocks. And further, there's the reason: I didn't sell to cash out, I sold solely to stop feeling bad any time a company whose stock I owned did something questionable. It's very comparable, especially given the company in question. I even expected it to go up at the time. It would only have had to go up about 2% to equal $40k.


one coulda-been is as good as any other.


> I said more money has diminishing returns, and other things become more important.

More money has exponentially increasing returns. Other things don’t increase in intrinsic importance, being wealthy just frees one to focus on them.


But nothing really came out of it? It's not like Twitter's reputation got ruined or there was some repercussion for the company from people freely talking about it.


This implies that the poster is the kind of person who doesn't find fulfillment in making choices that align with their ethics system unless the world also aligns with their ethics system. Like I said before, I don't think someone who would join an anti-abuse team follows this behavior pattern.

Additionally, the belief in one's own moral choices and behavior is often one of the important steps in finding fulfillment in yourself. If your fulfillment is reliant on external validation, you will always be found wanting.


What repercussion to expect? TFA is literally about Facebook doing the same thing, today.

Don't see many pitchforks...


Yeah that was my point, the thread was about being offered 40k to be quiet, but the thing I'm wondering is be quiet about what? Might as well take the 40k if there was nothing actually scandalous going on


I suspect you're looking at it from either a nihilistic "let the world burn, I just want to get mine" perspective, or (probably more likely?) a perspective heavily influenced by consequentialist ethics ("is the good I can do by being able to talk about these things worth more, in a net-good-in-the-world kind of way, than $40K? I'm not sure enough that it is, to justify turning down the money"). Or maybe a blend of the two (most folks get a bit selfish and let their ethics slide at least some, when sufficiently-large dollar values start to get thrown around, after all)

There are other perspectives, though. Here's a starting point for one of the big categories of thinking that might lead one to turn down $40K essentially just on principle:

https://en.wikipedia.org/wiki/Virtue_ethics


Great points!

For what it's worth, I'm at root a consequentialist, but see virtue framings as cognitively much more tractable given both the weird feedback loops of one's own psychology and the immense computational cost of trying to predict consequences over coming decades.


Well they couldn’t have made that comment, for one.


It's interesting that people value free speech so much, yet seem to have no quarrel with being allowed to sell it away.


The freedom to talk about something you spent a significant part of your life on?


> what do you feel you gained by not signing the NDA/non-disparagement that was worth so much to you?

I wouldn’t sign that. Less, if I’m honest, out of any sense of duty, and more to maintain leverage. If that story is worth $40k to the firm it could be worth more to me, down the road, should my past employer and I find ourselves disagreeing.


It was worth $40k to the OP, but for Twitter it cost that multiplied by the number of engineers (minus OP).

That silence was worth a lot to Twitter.


I do not know about Twitter, but two months' salary (ballpark $40k for a manager) is my company's standard offer in layoffs (I think it's actually 2 months + 2 weeks for every year of service). You have to sign a release+NDA to get it, though. Twitter did not try to buy silence - they just offer money to everyone to keep good feelings (generous severance) and avoid petty litigation.

An example of buying silence is the $7M settlement with the security guy, and even that apparently did not work.


Twitter absolutely was trying to buy silence. Per a suggestion from my lawyer, I offered to sign a version with everything except the non-disparagement clause. They said no.

And Twitter very clearly did not give a shit about "good feelings". Otherwise they wouldn't have done it as surprise meetings with security guards walking us out after, salary ending that day. If you want good feelings, the way you do it is giving people paid time to find a new job. It would also have been nice to be able to finish up my work and tidy up loose ends. E.g., I had an employee whose annual review I was in the middle of doing. She was stellar and deserved to be properly celebrated, rather than having some rando manager come in and try to guess after the fact.


I was just about to say: stall them out, wait until everyone else signs, then point out that if they're willing to spend $40k * number_of_employees to keep the silence, you're making them a counteroffer of $40k * number_of_employees.


> That silence was worth a lot to Twitter

Totally agree. These agreements make sense. Just saying that if I were shown the door, I wouldn’t sign. (Different question if I’m asked while happily employed. Would be more inclined in that case, provided it only applied through the date of signing and if it were mutual.)


It isn't just silence. There are usually a set of things in a severance deal - like agreeing not to sue for wrongful termination or reconfirming existing NDA or exclusivity or other employment terms. When I've been on the managerial side of things removing the potential downside of any future lawsuits was the overriding concern.


Sounds like blackmail. You’re not whistleblowing or suing for damages, so how else are you getting leverage from a story in disagreement?


It’s not blackmail to want the opportunity to be able to tell the truth in the future. Jennette McCurdy recently wrote a memoir and mentions that Nickelodeon offered her $300k to sign an NDA about her time as a child actor there. She declined their offer. She didn’t blackmail Nickelodeon but it was absolutely the right decision for her since she’s more than made up the $300k on the sales of her book. And for the public, who got to read her excellent book.


I get not signing NDAs in general because you want to be able to tell the truth. This whole thread is an example!

It's the "if we end up disagreeing part" that seems like blackmail. And, hey, maybe that's what they're going for! Could say "yep, I want to be able to blackmail them if I need to." But the comment I replied to wasn't that explicit.


> how else are you getting leverage from a story in disagreement?

It’s leverage held in reserve, not planned for immediate exploit. If we part ways and all is hunky dory, it sits stale. But if e.g. the firm gets bought by an asshole and he frivolously pursues ex employees for dumb reasons, e.g. deciding a non-compete covers everything from finance to gardening, I have something to fight back with. (One can similarly ask why the company needs non-disparagement protection.)

Employers and former employees get in stupid tiffs all the time.


It's not blackmail. You're taking $40k in exchange for an unbounded number of problems. Maybe someday some exec will come up with "Oh, we used C++ at Twitter, so answering that question on StackOverflow is a violation of your NDA, please return the $40k immediately." Now you're on the hook for either $40k of court costs, or writing them a $40k check. (I don't think you get back the $15k you paid in taxes, either.)

For a few million dollars for a bounded period of time, sure, disconnect yourself from the Internet for that bounded period of time. For $40k, you're just taking on unnecessary problems that you don't actually have the resources to solve.

(I was offered $0 to sign an NDA after leaving my last job. I did not sign it.)


Why would you use your real name on stackoverflow? Using your real name creates these imagined problems.


I think that violating a contract using a fake name is pretty risky. If you're going to do it on I2P-only services or something like that, maybe you'll get away with it, but the best way to avoid getting caught for a crime or contract violation is to not commit it in the first place.

I read sagas about people getting doxxed every day, even children. You may trust your opsec, but it only takes one slip-up to be out a large quantity of money. I wouldn't risk it.


What if you're already using your real name on stackoverflow? Then part of that money is to compensate for changing names on various sites and perhaps losing history, and in general it restricts things you can do for life.


Leverage comes from having something that someone else cares about. You have the ability to talk about and disparage the company. The company wants you to not have that ability. They want this so much that they're willing to pay you even if you don't have immediate plans to do anything. That's leverage.

How is it blackmail if the company is the one offering the money?


Offering to pay someone for their silence sounds like bribery. Not accepting it, or otherwise ensuring that you're compensated properly for what that silence is going to cost you, doesn't turn that into blackmail. Twitter approached op, so to speak, not the other way around.


Blackmail's legality depends on whether monetary compensation is being demanded.


Being able to share the content above when it is relevant has value, doesn't it? If the opinion is that this sort of corporate behavior is not healthy, and the person hopes to reduce or even eliminate that sort of behavior, then they need to be able to share that information freely with others.

$40k is a lot of money to some people, and not significant to others. To that person, being able to share the single post above might have more value than an extra $40k in their savings account.


We gained the story, they did it for the benefit of the community.


The ability to write that comment?


And more importantly, not fearing being sued for accidentally saying the wrong thing to the wrong person.


And having a cleaner conscience down the road.


Maybe I'm just the worst kind of person because I'd 100% sign that and still talk.

I get that with tech salaries 40k is practically a relocation fee, but I see those agreements as being about as binding as an EULA. I'd revel in them coming after me and can't for the life of me imagine damages past the 40k they offered unless you straight up lie.


Leaving that kind of money on the table to share a story is not a practical action that makes sense to me without a six figure book deal in place.


This sounds like it's coming from somebody who has never been sued, and never seen up close how painful it is. My mom was sued after she declared bankruptcy after a business failure. Her lawyer was strongly optimistic, and the judge not only found in her favor but also gave opposing counsel a dressing down that still warms my heart. Even so, it was 18 months of being put through the wringer. If she had lost it would have been devastating, so she was always living under a cloud. And that period was both painful and expensive. So the concern isn't some sort of cash damage at the end of a trial, but everything leading up to it.

Would they have come after me for anything I've said so far? Probably not. But probably isn't definitely. Even without signing it, I still have to measure every word against the odds I'll make somebody at Twitter mad enough to try to make an example of me. Signing would have made that much more fraught.


Well I think it's quite simple...

You're willing to sacrifice 40k to make comments on HN freely.

I'm not only willing, but would truly relish, even years fighting over some kitchen sieve of an anti-defamation contract in court. (All while the employer is forced to air out exactly what they tried to pay me off for, mind you.)

I'm not saying the latter approach is everyone's cup of tea, but I can definitely tell you which one I think is more likely to make an impact that hurts them...


I believe you have that belief about yourself. But I still think it's not a belief informed by experience. Don't let me stop you from trying, though.


I experienced the change of tides first hand. Without proper senior management support, you are essentially nonexistent, no matter how well you do.


Sounds like pretty standard big corp stuff. I honestly don't understand why a NDA is even required here.


They probably don't want bad PR from everyone knowing that they laid off the team that is responsible for fighting abuse on the platform, given that it is still a problem that the public may be concerned about.


The featured article being #2 on HN says something.


> In the surprise meeting, they offered a fair chunk of money

People are getting hung up on the dollar figure in replies here, which I think misses the point.

It's super common for companies to tie a severance package (assuming more than statutory requirement, ymmv with jurisdiction) to a non-disclosure, non-disparagement, non-compete that is often far more aggressive than whatever you signed at the beginning. It's definitely worth thinking about whether or not the agreement makes sense to you.

This is one of the big differences with executive contracts; often all of this is already sorted out at time of hire, and everyone is pretty much open eyed about the failure modes.


It's really hard to find a good metric for "Keeping a social network healthy". Most of the changes introduced by a responsibility team will inherently hurt short term metrics or block others from making a change.

I wonder if there is room for a tool which tracked the rate at which users interact with negative or psychologically high risk content on a user generated site.
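A minimal sketch of what such a tracker could compute, per user (the event shape and the source of the "high risk" flags are assumptions on my part; in practice the flags would come from some upstream classifier or review queue):

```python
from collections import Counter

def risk_exposure_rates(events, flagged_ids):
    """Per-user fraction of interactions that touched flagged content.

    events: iterable of (user_id, content_id) interaction records (assumed shape).
    flagged_ids: set of content IDs marked as negative / psychologically
    high risk by some upstream process (assumed to exist).
    """
    total, flagged = Counter(), Counter()
    for user, content in events:
        total[user] += 1
        if content in flagged_ids:
            flagged[user] += 1
    # Rate per user: flagged interactions / all interactions
    return {user: flagged[user] / total[user] for user in total}

events = [("alice", 1), ("alice", 2), ("alice", 3), ("bob", 2)]
print(risk_exposure_rates(events, flagged_ids={2}))
# {'alice': 0.3333333333333333, 'bob': 1.0}
```

Tracked over time and across product changes, a rate like this would at least give a "responsibility" team a number that moves when their work does, instead of only hurting someone else's engagement metric.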


> our team had succeeded so well that there was no need for us

> At least for the managers laid off, it was a surprise meeting and then getting walked out the door.

I don't get it. Call me naive, but when a team is dismantled because of change in business priorities, people don't get fired. Unless the company is shrinking, valuable employees are reassigned to different teams.


> I don't get it. Call me naive, but when a team is dismantled because of change in business priorities, people don't get fired

This really depends on the change in priorities and the skill set the people represent. People aren't actually fungible, after all.


I'd wager that given the sizes of the organizations involved, there is enough breadth and surface area to have fungible roles.

Even the CEO can be replaced with another key employee.

So I think the parent is onto something with that comment. And I think that's also the elephant in the room that no one talks about.

Why did this specific set of employees end up working in said team?


Is there a catchy term for this pattern?

- "Pump and dump."

- "Savior to villain."

I see this pattern frequently - with corporations' power and influence, their ethics are checked only by law and public sentiment.


I think that the mistake is thinking that corporations can have ethics in the first place. A person can have ethical and moral codes, a corporation can't. Execs can write as many documents as they want, a corporation is not a person and will never be able to have the same consistency that we'd expect from a normal person with morals.


Corporations don't have brains, but they do have cultures. Those cultures include ethics, just ones that often differ from what outside individuals would want. Or put more simply, "fuck the rich" and "fuck the poor" are both moral codes.


Thank you for sharing this. An expensive comment but one worth making.


> And shortly after that, said boss's boss said that our team had succeeded so well that there was no need for us.

This is infuriating. The boss’s boss knew this wasn’t true. You knew it wasn’t true. Why lie? Especially with such an obvious and unconvincing lie? Twitter is so addicted to misinformation they’re even spreading it internally and off-platform.


I presume because there were enough people who either liked him or found that view convenient that nobody who mattered was going to double-check his words.

Which is what happens all the time in power hierarchies. Some people enjoy that bad things are happening, more are indifferent, and most of the rest are kept too busy to really think about it. Or too distant from power for any thinking to lead to action. E.g., America's history with slavery.

But yes, if you are paying attention, it's fucking infuriating.


This is called having a guardian angel.


Such payouts can be negotiated. I would have asked for $400k. (Then still not signed it!)


Teams like this attract a certain type of person, and it's not a builder. In order to justify your own existence, you have to invent blockers to place in front of people who are actually trying to build things.


At a previous job I joked that groups like that were a place to attract / corral employees that should be cut during the next round of layoffs.

In theory these teams could do some good things; in reality they attract or create horrible people with INCREDIBLE efficiency and create a cycle of endless meetings / recommendations... and it is endless because it is in their best interest to have an endless amount of "work" and to inject themselves in a way that costs them nothing, and everyone else a great deal.

I figured these teams were the easiest place to find people who provide nothing at all / get in the way of folks doing things, and almost always they don't accomplish their goals anyhow. They wouldn't even know if they did accomplish their goals, as these groups tend to center everything on their own actions / the goal is them doing things endlessly.

Somehow word got back to them about the joke, they blamed the wrong person for the joke, tried to raise a hubbub with HR (to their credit HR told them to pound sand). And then they got all laid off ...



Probably best not to tell HR that there's a joke going around that your team should be laid off. Wouldn't want to give them any ideas.


But you need some oversight, right? (Especially for something as critical as social networks used by billions.)

I'm a believer in some sort of cross-validation and quantification. The ethical impact of a technology should be quantified, and addressed if significantly negative (or promoted/incentivized if positive!) -- you can quantify the number of users impacted, give various arguments, soft measures, and estimates. Then you can have other teams validate those estimates. After all is said and done, you can again evaluate an intervention (both on ethical terms and on the productivity and profitability of your company) and see if it was good or not. If a team is repeatedly giving bad advice, I think you can then address that (retraining, laying off, etc.?). Ethics is not impossible to do :)

I believe a certain generalization of this is needed for the entire society[1], to externally address good/bad technologies. A good example I cite all the time is Open source contributions. We still haven't found a good mechanism to pay OSS, despite the immense value it brings to society (openness itself has a value that direct sale couldn't capture). I'm a big believer in distributed evaluation and cross-evaluation: if we had distributed entities to evaluate impact of various ventures (for efficiency, properly categorized and subdivided into science, technology, social costs/benefits, environmental costs/benefits, etc.) we could apply the same logic to make society more adaptive, distributed, and less focused only on the single goal of profit. (I plan to elaborate this further, I think we are going to need to "patch" our current system in some sensible and careful ways to get a really good 3rd millennium!)

[1] Previously sketched this here: https://news.ycombinator.com/item?id=28833230


I think the problem is that the way these groups work ends up being something other than intelligent oversight. It's a human problem.

They attract or breed people who see it as a great / easy job simply to provide oversight but have no investment / zero consequences for what happens. They can give tasks, eat up time, it costs them nothing, but costs everyone else tons of time.

They are also free of any consequences ... how do you know such a group did any good? It's entirely for them to define. "These products were ethical, and we made them that way." Easy to say.

IMO these kinds of questions are up to the folks in charge of the products, beyond just a random group deciding "is this ethical" as they actually also have to understand and know them. If the folks making the product can't handle it, that's the problem.


Perhaps if being ethical costs a lot of time, your business is unethical by design.

Or worse, you're cutting corners that should not have been cut.

Now in terms of this particular abuse prevention case, the problem requires a serious social and technical solution that would be expensive for Twitter to implement, and would harm their revenue to boot...


takes notes


> Teams like this attract a certain type of person, and it's not a builder.

Ditto for financial audit teams and so-called "IT security": all they do is block, I've never had anyone in either function build something or help me work faster, just additional processes and bureaucracy that slows down real work.

edit: I thought my sarcasm would be apparent, but Poe's law strikes again.


It's because security, audit, and ethics teams aren't judged by how much they enable others, they're judged by how many threats they block. I work at a well-known tech company but joined when they were still very small. Our original security "team", a team of 2 engineers, were highly plugged into the product and the concerns of the then-small engineering team. This team was always willing to help and enable builders to stay security conscious.

As we grew into a large company and our security team became an entire organization at the company, the org lost all connection with the product and became a more classic blocker-based team. The org lost empathy with the product and was judged on no product metrics, so naturally the culture in the org began to just be saying "no" to engineers all the time. A few of the earlier hires still try to help, but for the most part they just block.


"Fast" is not the only descriptor that should be applied to work where a financial audit might be involved. "Correct" probably has a few things to say. "Legal," too.


It's not their job to build, it's their job to ensure that what you build isn't crap on any number of fronts (security, compliance with regulations). You know, important things that impact your users (if you have the best software ever, but it leaks user PII in the HTML because you're a dickhead who wanted to build fast without any regard for security, that's not great).


Sure, if they helped you figure out a way to build, that would be fine. Far more often, I am just told to kill a feature/bugfix/concept. At a past job, the security people were against the idea of self-service password reset at all.


The security team at my current company also refuses to allow self-service password resets. The only way is to call our service desk and give them your DoB and the last 4 of your SSN... and they completely ignored me when I pointed out that for at least half the population of the US that information is more or less public due to several data leaks.


I haven't worked much with financial teams, but the security teams I've worked with have absolutely helped build our products. They put practices & systems in place that made a golden path to getting software green-lit for production.


Same with QA. Fire those guys!


This, but non-ironically.


Security is a discrete objective with a distinct success / failure metric.

"Responsible innovation" is not discrete.

In simplest terms, one can judge if security is succeeding with a security audit. Either access is available to the right parties, or not.

One cannot say with certainty if one is being a responsible innovator.

If your products are great because they are built quickly on top of a Swiss-cheese security policy, the security team is a blocker (or at worst, a delayer) of the company's bankruptcy from fraud/fines/asset loss.

If responsible innovation is blocking product, it's not certain what consequences (if any) there will be if the product is unblocked. Therein lies the problem.


I used to feel the same when I worked for a bank. And then I worked at a place where security was so unregulated that TBs of data and millions of users' accounts could be stolen with just a shared password and no one would notice.


Ages ago, one of the managers on my team was complaining about how unhelpful the rest of the company was being (with our team's upcoming product release). I explained that the job of everyone else in the building was to say "No" and their job was to get those people to say "Yes".

"But, but, but aren't we all on same side?!"

"Oh you sweet innocent child..."

Once this manager understood the rules, they did quite well playing the game.


I'm in a highly regulated industry segment and work in an information security related function. You've made my day.


You're welcome!


> financial audit teams and so-called "IT security"

They do fill a role that actually needs to be filled, though.


Yes. Does the GP think the same thing about security engineering teams? What about QA engineers?


Not really Poe's law. The statement may be exaggerated, but is also true to a degree.


> ...without a clear indicator of the author's intent, every parody of extreme views can be mistaken by some readers for a sincere expression of the views being parodied.

This checks all the boxes on Poe's law:

  [x] Parody
  [x] Reductio ad absurdum intent
  [x] Extreme view
  [x] Mistaken as sincere


I strongly disagree. I worked at an analytics company that had and needed a privacy ethics department. They were great. It was a mix of lawyers, philosophers (degree in the subject), and former developers.

They consistently thought about the nuances of baking privacy into a product in a way that I didn't have the background or time for. Every time I worked with them, they helped me build a better product by helping me tweak the specs in a way that had minimal impact on our users (unless they were a bad actor) and strongly increased privacy protections. It was like having a specialized PM in the room


I don’t remember the source but there was a linked article here a while ago about how to effectively manage “blockers” like this. The example given was a med device company where the engineers hated the regulatory department who always told them no. Manager decided to actually increase engagement and bring on one person from the reg side to their meetings. At some point a person said “well we’d love to do this but no way would the FDA allow us to do it.” The regulatory employee basically said “do you even know the regulations? There’s a very easy pathway to doing exactly that.”

My boss once did a similar thing with the quality department. Suddenly we sailed through our DFMEAs. Some people do live to block others, but some are trying to do an important job. Engagement usually pays more than just whinging.


> Every time I worked with them, they helped me build a better product by helping me tweak the specs in a way that had minimal impact on our users (unless they were a bad actor) and strongly increased privacy protections. It was like having a specialized PM in the room

It sounds like they were focused on building product too, then, which is not at all what this is about.


What did the philosophers do on a day to day basis? I am curious, sounds like an interesting role.


I would think they'd more accurately be described as "ethicists" and probably fulfilled a function closer to a Business Analyst. Reviewing and transmitting requirements, gathering use-case data, reviewing standards & procedures, and working on "fit and function" type stuff.


They all did the same thing, they just came to it from a different background.

The day to day was reading the comms for a bunch of projects, finding places where you could be valuable, then reaching out to a bunch of project owners and convincing them to talk to you about the project.

I had them reach out once because I was working on a project that dealt with data for school children. We were exploring to see if we could use data to help teachers intervene early and prevent bullying. They reached out to make sure we understood the legal implications of dealing with data for minors, and that we weren't just building a product that would encourage the administration to pigeonhole students.

After a brief conversation we got signoff and some great feedback on how to support positive outcomes while avoiding the risk of creating something that felt Orwellian


> philosophers

Sounds like something from Silicon Valley, the TV show.


This is also true for security teams and privacy teams and accessibility teams. Yet they are extremely important.


Good security and privacy teams do a lot of work to establish good practices and build frameworks and platforms for others to build upon.

The bad security teams just show up expecting to criticize everything and stop other teams without providing workable alternatives.

The problem with having a team with a vague charter (“responsible innovation”) with unclear scope (e.g. not just security or just privacy) is that it’s really hard for them to feel like they’re building foundations for others to build upon. It’s too easy for them to fall back to roles where they see themselves as the gatekeepers and therefore backseat drivers of other teams. It becomes a back door for people to insert themselves into power structures because nobody is entirely sure where their charter begins and where it ends.


Of course. But good security teams also say "no, you can't do that even though it would be the fastest path to market." And looking at a team that says "no" and saying "ugh, these people aren't builders, they're just getting in the way" is a great way to end up with a company that is irresponsible towards its users.

We can talk about whether this particular team is effective. We can talk about unclear scope (though, IMO, the scope here is no less clear than "privacy"). But the complaint that teams that put barriers in front of other teams are always bad is insufficient at best and downright dangerous at worst.


> But the complaint that teams that put barriers in front of other teams are always bad is insufficient at best and downright dangerous at worst.

Nobody made that complaint.

Please don't misrepresent my statement, which was specifically, "to justify your own existence, you have to put blockers in front of people who are actually trying to build things."

Good security teams build & they help others build as well. They build secure-by-default platforms, secure common libraries, and guidelines for others to follow with confidence. Working with a good security team means you're confident your product will get to market faster because, at launch time, you know you'll be green-lit since you followed the org's security best practices.

That kind of team doesn't have to invent blockers in order to justify its own existence.


You should edit your statement to include the word "invent" - that goes a long way towards describing the blockers you talk about as invalid, made up, not relevant to real business constraints, etc.

Definitely helps you make your point.


So then the followup question is, based on what do you make the claim that the team under discussion didn't build anything?

You admit that teams in this space can and do often build tools to make things secure by default or private by default, so why couldn't this team have been involved in building "responsible" by default?

Your argument here is basically that you believe security and privacy "blockers" are inherently of value, so a team enforcing them isn't 'inventing' blocks, but other teams are.


Very much disagree. Good security teams, good privacy teams and good accessibility teams should understand that their highest priority should always still be to ship product. Their job should just be to make sure that the product that ships is secure, ensures privacy and is accessible.

The difference between these things and something like a "Responsible Innovation Team" is that the goals of the latter are often poorly defined and amorphous. What does "Responsible Innovation" really even mean? But contrast that with, for example, security goals, which are primarily "don't get breached". Security folks, privacy folks and accessibility folks should all have a very well-defined job to do. For the other "PR-focused" teams, they usually have to make work up to justify their jobs.

Note that, obviously, it's also possible to have bad security people, for example, who only see their job as to throw up roadblocks and demand that your security checklist is ever-growing. I'd even say these are the majority of "security" people. But the good security people who are able to integrate themselves successfully into your product development processes are worth their weight in gold.


It seems that you could also see the responsible innovation team in the same light -- 'the product that ships should be innovating in a responsible way'.

Also IME none of these things are binaries -- a product could be more or less secure, privacy-respecting, and accessible, just as it could be seeking to innovate more or less responsibly. Different aspects of security, privacy, accessibility, and responsible innovation will be important at different times, depending on what's happening in the world.

The jobs are never as well-defined as you'd imagine.


Same with QA. Msft rolled QA and dev into a single role and (IMO) it was a detriment to the dev cycle. It's difficult to switch builder and blocker hats.


QA is an interesting one because many companies don't separate QA from development and it works pretty well. I think the main reason you can pull this off is because you can "do QA" by writing automated tests, so it still fits into the "builder" style of work that engineers prefer.

If you asked engineers to do a large amount of manual testing, they would either (a) tell you they were and then secretly automate it, (b) simply not do it, or (c) quit.


We used to have a large QA silo at a previous job. They did exactly that. We broke that down by moving the QA engineers into the respective product teams - and kept them there.

Turns out many kinds of businesses/product teams (including Microsoft's Windows team, I'd say, from personal user experience) need dedicated QA people because many worthwhile tests can't really be automated well enough with a reasonable effort, or if they can, it's something that often breaks and needs regular maintenance.

I suspect that in the Windows case they simply stopped testing those troublesome cases.


Agreed. QA is really helpful for anything UX/UI-related (where the alternative is often experimentation, which really is only good for changing already existing things, and not necessarily ideal for new things). Or if something is as big, complex, and crusty as Windows, where it'd be a Sisyphean task to go back and add automated testing everywhere it could theoretically be useful, QA is a lot better as a stopgap than idealistically declaring you don't need it because in theory you could automate things (if you were willing to spend 2000 person-years on it).


In case you're confused about the downvote(s):

https://news.ycombinator.com/item?id=32730380


Does Microsoft no longer have STEs and SDETs?


IIRC, the STE role was eliminated in 2006 or so.

In 2014, for most divisions, the SDET role was rolled into the function of SDEs, reportedly resulting in drastic attrition of the converted SDETs. The exception was the operating systems division, which was reported to have laid off all of its SDETs at that time.


IMO, security and accessibility are more objective ends than "responsible innovation" which can be plagued by bias and personal agendas.


I think these are all examples of teams that can have concrete goals: accessibility teams can be enforcing a standard or regulation (like WCAG), privacy teams can be auditing for compliance to GDPR or COPPA, security teams can be monitoring for security updates, pen-testing, and auditing for compliance to standards like PCI DSS.

"Responsible Innovation" is kind of nebulous and I would expect it to be problematic unless given clear rules of engagement and supplied with a strong leader.

That said, somebody needs to be responsible for dealing with the endless stream of products that are deeply flawed out of the box: built-in racism, built-in stalking/harassment tools, "0-day" doxing of existing users ...


Consider privacy. I'd wager that most people on HN would prefer it if major social media and advertising companies were doing more than waiting for legislation and implementing the bare minimum. "Privacy" in the aggregate is vague rather than concrete. Similarly, you could imagine a team who is responsible for addressing actual policy concerns around things like amplifying sectarian violence and also having a broader and more vague mission around "harm".


I think most privacy teams are responsible for compliance to legislation and, hence, have clearly defined, concrete goals. Are you disagreeing?


One would hope the goal of a privacy team would be to work to improve and maintain privacy, even if they are not strictly legally compelled to do so in the specific case.


No. I am saying that privacy teams often go beyond that and that this "beyond" is in the vague and ill defined territory.


Legal is the barest minimum, and at times can actually force bad practice to boot (e.g. UK Network Security Guidance).

Thing is, any way Twitter can innovate that would bring income is likely unethical: increasing engagement by polarizing people (increasing heat), selling user data, showing more advertising, or worse.


Yep, blockers like:

- should we do this?

- who do we hurt by doing this?

- oh god people are hurting why are we still doing this?


Did you come up with the list, or do you actually know what that team actually did? I think only the latter provides a valuable entry point for discussion. In the case of the former, we end up fighting each other's imagined scenarios - and imagination is limitless. It sure leads to a lot of discussion, none of it bound by reality though.

It would be nice to know what the team actually tried to do, did do, and achieved, but that would require an insider daring to make public comments that are potentially traceable by their contents alone, so it's unlikely. Without actual facts the discussion will just end up a free-for-all.


I think it's safe to assume that the team did the thing that it was assigned to do and that it was named for. It certainly makes more sense to discuss that than to announce that we have no ability to discuss anything unless we were personally both on the team and managing the team, and were involved in making the decision to cut the team.

edit: I'm sorry, we do get to discuss how they were probably wrecker nutcases stifling people who actually build things in order to make up for their own inadequacies and inability to do the same. It's only assuming that the ethics team worked on ethics that is out of bounds.


An awful lot of people are making comments based on the assumption that such a team exists only to invent problems. It's worth at least one person interjecting that Facebook is causing at least some problems and that a team like this could have a place, even if nobody knows precisely what this team did.

Many people are taking it for granted that Facebook should have no interest in reducing harm. I'm glad somebody pushed back on that.


Funny that the "you don't actually know" critique comes in response not to the nakedly disparaging post that kicked this off, but the comment arguing for responsible software development.

We of course don't know enough of the specifics, because Facebook works hard to keep it that way. But we do know that Facebook has a body count. If you're looking for a "valuable entry point for discussion", maybe start with Facebook's well-documented harms.


And because I Ctrl+F-ed it and couldn’t find anything, one of those documented harms is the Rohingya Genocide. Putting this here so that we know what we’re talking about.

Seeing devs non-ironically complain about internal departments like this one which was set up in order not to let that happen again kind of saddens me a lot. No, productivity and work is not more important than making sure that that work doesn’t enable a genocide campaign in a specific corner of the world.


Yeah, when I say that Facebook has a body count, I'm not kidding. Facebook touches people's lives in so many ways that it's hard to even estimate the total number. But it seems the barest ethical minimum to say, "Hey, is there a way to be sure this next feature is responsible for zero deaths?"


> "Hey, is there a way to be sure this next feature is responsible for zero deaths?"

What makes you think this possible? I don't see where Facebook is particularly responsible here. Telephones and radios have been used to coordinate assassinations and genocides. Movies have been used to justify invasions. Why isn't anyone burning the effigies of Alexander Graham Bell, Guglielmo Marconi, and Thomas Edison?


Sorry, explain to me how Marconi directly profited from the Rwandan genocide?

In any case, perfection is rarely possible but often an excellent goal to aim for. For example, consider child car fatalities. We might not immediately get it to zero, but that is no reason to say, "Fuck it, they can always have more kids."


I think people fundamentally disagree here. I would attribute the entire (read: 100%) responsibility for the Rohingya Genocide or other similar events to the perpetrators. Facebook is just another tool, and its creators bear no more blame for the actions of their users in the real world than the manufacturers of the vehicles driven by the Burmese military.


Responsibility is not zero sum. The people who pulled the triggers? Responsible. The people who gave those people the orders? Responsible. People who told them where to find the people they killed? Responsible. Arms dealers who sold them the guns? If they had any inkling that this was a possible outcome, then they share in the responsibility. The people behind the legacy media efforts that whipped people into a frenzy? Responsible. And so on.

Facebook is not like a screwdriver, a simple, neutral tool that is occasionally used to harm. Facebook is an incredibly complex tool for connecting humans, a tool with all sorts of biases that shape the nature and velocity of social interaction.

People have known for decades that online communication systems enable harm. This was a well-understood fact long before Facebook existed. Facebook is morally responsible for that harm (as are the perps, etc, etc). Something they understand perfectly well because they do a lot to mitigate that harm while crowing about how socially responsible they are being.

You might disagree with most ethicists on this, as well as with Facebook itself. But you'll have an uphill struggle. Even the example you pick, vehicles, doesn't work, because car manufacturers have spent decades working to mitigate opportunities for the tools they create to cause harm. Now that cars are getting smarter, that harm reduction will include preventing drivers from running the cars into people.


Responsibility is zero sum, or any conversations about it are meaningless. This is quite handily illustrated by your comment actually - where does it end? How much of an "inkling" must people have to carry responsibility? It doesn't sound like a question of fact exactly.

The only way to objectively agree about responsibility is to use an actor/agent model, where actors are entirely responsible for their actions, and only actions which directly harm others are considered. Otherwise we're discussing which butterflies are responsible for which hurricanes. I'm happy to be wrong here, but I just don't see an alternative framework that can realistically, objectively, draw actionable boundaries around responsibility for action. This by the way is the model that is used in common law.

Facebook being a complex tool strengthens my point. Should providers of complex tools be responsible for every possible use of them? Is it not possible to provide a complex tool "as is" without warranty? Wouldn't constraining tool makers in this way be fundamentally harmful?

> online communication systems enable harm... Facebook is morally responsible for that harm

Everything can be seen to "enable harm". Facebook being morally responsible is not a statement of fact, it's an opinion. Facebook's actions to mitigate are a) to evade/delay regulatory action, b) to maintain their public image or c) by a small group of activist employees. Only a) and b) align with their fiduciary duty to shareholders.


> Responsibility is zero sum, or any conversations about it are meaningless.

That's incorrect. I'd suggest you start with Dekker's "Field Guide to Human Error", which looks at airplane safety. Airplanes are only as safe as they are because many different people and groups see themselves as responsible. Your supposedly "objective" model, if followed, would drastically increase deaths.


"The National Transportation Safety Board is an independent federal agency charged by Congress with investigating every civil aviation accident in the United States and significant accidents in other modes of transportation — highway, marine, pipeline, and railroad."

The reason airplanes are safe is because the government is doing its job to regulate the space.


That might be a reason, but it is not the reason.


Prior to the establishment of NTSB in the Air Commerce Act of 1926, aviation accidents were common [1]. Congress determined that it was necessary to establish an investigation and regulation framework immediately at the beginning of the era of commercial aviation and this has been enormously successful. Many times Congress does not act fast enough to prevent harms (meat packing, banking, pharmaceuticals), but when they do get around to doing their job safety improves. Individual companies must be compelled to act subordinate to federal or state regulatory frameworks, and to not act as vigilantes.

[1] https://en.wikipedia.org/wiki/List_of_accidents_and_incident...


Oh, so post hoc ergo propter hoc?

I'm not saying the NTSB isn't important. But it's far from the only reason that we have fewer crashes. Government regulation can be helpful in improving standards, but they set a minimum bar. Aviation's high levels of current safety are a collaboration between many, many people. Starting with individual pilots, engineers, and maintenance techs, going up through collaborative relationships, through companies and civil society organizations, and up through national and international regulatory agencies. All of these people are taking responsibility for safety.


From your comments in this thread, I think that you believe that safety is a collective effort and that there is no single individual or entity directly responsible for enforcing a culture of accountability, is that right?

If so, how do you explain the catastrophic failures in construction, food safety, and banking prior to the top-down government oversight of those industries?


That is not in fact how I'd sum up my thoughts. "Enforcing a culture of accountability" is valuable, but neither necessary nor sufficient for safety depending on context. Food safety's an obvious example there. People still get sick from food. And plenty of restaurants and manufacturers would never cause illness if government oversight were to vanish.


Every aviation accident post mortem I've read finds a finite list of causes and contributing factors. Each system/person has strong ownership and is responsible for making the recommended changes. These post mortem reports also explicitly do not find civil or criminal liability - that is a zero-sum process.


Liability is indeed strict zero sum. But you're confusing that with moral responsibility, which isn't.

They serve two different purposes. The former comes out of a zero-sum, adversarial setting where the goal is to figure out who pays for a past harm. The latter comes from a positive-sum collaborative setting where everybody is trying to improve future outcomes.

If I release a new product tomorrow, I'm responsible for what happens. As are the people who use it. But if somebody dies, then liability may be allocated only to one of us or to both in some proportion.

Again, read Dekker.


"Responsibility" is semantically a bit nebulous, but seems to me much more related to "liability" than "continuous improvement". The question "Who is responsible?" reads a lot more like "Who is liable?" than "How did this bad thing happen?". If you release a new product, you may be accountable to your org for how it performs, but (IMO) you're not morally responsible for the actions of others using it. If your new product is a choking hazard you're not guilty of manslaughter.

"morally responsible" ~= "guilty"


> If your new product is a choking hazard you're not guilty of manslaughter.

But you are still imho (morally) responsible for the deaths occurring out of the use of your product (this is where we would probably disagree). Even if you were not legally guilty.

I like another example that to me clarified the distinction between these concepts better.

Imagine one morning you open your front door and find a baby that was placed there sometime during the night. The child is still alive. You are not guilty in any way for the baby's fate, but now that you have seen it you are responsible for ensuring that it gets help. You would be guilty if you allowed it to freeze to death or similar.


> You would be guilty if you would allow it to freeze to death or similar.

This significantly varies by jurisdiction, and isn't settled at all. I don't think being present makes you responsible either. Unappealing as it may seem, you should indeed be able to pull up a chair, have some popcorn, and watch the baby freeze. People should only bear obligations they explicitly consented to. I don't think anyone has the moral authority to impose such an involuntary obligation on anyone else.

Modelling society as a constrained adversarial relationship between fundamentally opposed and competing groups is more accurate than assuming there is "one team" that knows a fundamental "good" or "right" and that the rest of us just need to "see it". People who perform honour killing or preside over slavery are just as sure of their moral superiority as you are. What we need is a world where we can coexist peacefully, not one where we are all united under one religion of moral correctness.


All communication systems enable harm, and more generally all systems that allow people to interact enable harm. In the US, the true responsibility for regulating harms lies with the duly elected government exercising its regulatory powers on behalf of the people. It does not lie with the unelected, unaccountable members of Responsible Innovation Teams and Online Safety Teams. This form of tyranny persists because the majority of our representatives established their power bases before the advent of the Internet. Hopefully, in the next decade or two, we will be able to effectively subjugate and regulate the King Georges of the large social platforms.


There are laws that put the onus on banks to proactively determine that their services aren't used to fund terrorism, and multiple funky/opaque processes exist in banking specifically for that purpose. It kind of makes sense to me to hold social media companies to a similar standard, so that they are not used to organize terrorism or other war crimes.


I'm not sure I agree with these laws. They are very difficult to actually enforce to an objective standard. They also transfer the burden of law enforcement away from police departments and on to private organizations. What it translates to in practice is a bunch of (mostly irrelevant) mandatory training for employees, and an approval from an auditor who isn't very familiar with the business. I think police (and no one else) should do policing.


> What it translates to in practice is a bunch of (mostly irrelevant) mandatory training for employees, and an approval from an auditor who isn't very familiar with the business.

In the context of ensuring a bank doesn't transfer money to terrorists, this is completely wrong. Banks have a whole list of operations and processes, and failure to follow them is enforced by actual jail time. This is why "know your customer" exists in banking. In the context of terrorism, there is no police enforcement regarding terrorists; often, we are talking military enforcement.


Yes, but my point is that "transferring money to someone" is not a true crime. It doesn't have a victim. And yes, our governments should use military/diplomatic channels to fight terrorism directly - that's what they're for.


> Yes, but my point is that "transferring money to someone" is not a true crime. It doesn't have a victim.

This is also incorrect. Transferring money to a terrorist organization is a crime because countries have declared it illegal. Of course well-funded terrorists have victims and enablement of well-funded terrorism has clear victims.

And yes, the government is using military and diplomatic channels to fight terrorism directly-- by ensuring the resources they have access to are limited.


You realize that other groups already do policing? IRS, EPA, OSHA, USPS, hell the secret service will get involved if you are forging currency.


There are two possible things you could mean by that.

One is that you think a lot of things just shouldn't be enforced, and that we should allow a lot more harm than we do now. Genocide?

The other is that you think we should have a lot more police to take over the harm-reducing regulatory actions now in place. That we should take the tens, maybe hundreds of thousands of social media moderators now working, but make them government employees and give them guns.

I can't decide which is more cartoonishly evil.


I don't see why people working in an office need guns, but yes, enforcement of laws should be done by... law enforcement. This isn't too controversial really, if I make a credible threat to someone online, it's a criminal matter for the police. Just as if I had sent it in the mail. The same should be true for all other types of crime (fraud, money laundering, etc.). Police should (and do) conduct investigations and arrest offenders.

Social media moderators exist to protect the public image of the platform, and enforce community guidelines. They should not be burdened with law enforcement simply because we can't be arsed to do proper police work.


Facebook could have chosen to be a completely neutral platform. They could have followed the ISP model, making Facebook another platform like email, RSS, or http. They just had to not make editorial decisions - leave the feed sorted by recency, and only remove illegal material. This is what safe harbor provisions assumed a company would do, allowing platforms who simply pass information between parties to avoid liability for that information.

But they wanted to be valued higher, so they explicitly chose to instead step into the editor's role and became responsible for the content on the platform.


Here’s a good example where engineers needed a team like this:

- should we name this device we want to put in every house after one of the most common American female names?!

engineers and ceo: I see no issue with that!

Several million people named Alexa now have everyone from toddlers to their friends yelling their name, ordering them to do stupid tasks, and saying "Alexa stop" repeatedly.

The name cratered in popularity for good reason.

Yet Amazon still has not renamed their dumb speaker


I think you're misplacing blame for this, I don't think the engineers are the root cause of this problem. Why don't these devices let people set their own custom codephrases? I suspect that wouldn't fly with management/marketing etc, who want to create a marketable brand out of the codeword. In fact I'm virtually certain that engineers at amazon weren't the ones who chose the name "Alexa" in the first place, that decision probably went through a dozen squared meetings and committees of marketers and PR people.


They have since 2021, expanding the set to "Alexa", "Amazon", "Computer", and "Echo". [0]

Nonetheless, defaults are incredibly powerful. [1]

[0] https://www.rd.com/list/how-to-change-alexas-name/ [1] https://www.nngroup.com/articles/the-power-of-defaults/


Alexa was only the 32nd most popular name in the US for girls. A little over 6k babies were named Alexa in the US prior to the speaker's launch.

The "Alexa stop" thing, is it a real or invented harm?

My name happens to match the lead character of Karate Kid, and I was constantly asked to do the crane pose when I was 7. Doesn't seem to have traumatized me.


> Alexa was only the 32nd most popular name in the US for girls.

32nd most popular is not exactly obscure. Why did they have to give the computer a human name in the first place? Probably because it helps people form some sort of parasocial relationship to the product, which is gross, but probably good for business.


I used to work there, after the launch (and therefore after the name was chosen). One of the reasons given was that Alexa was a distinctly suitable word for proper recognition by the ASR model embedded in the device software.

Also probably because it was good for business.


The popularity of the name cratered after launch. That doesn’t signify anything to you?

That anecdote is nice, but honestly it sounds like you survived a rarer, more temporary, and much milder version, and now you're stating everyone else needs to get over themselves?


What does it signify? Annoyance? Yes. Harm, such that it required a committee of enlightened priests to have blocked it? Where is the evidence for it?


For illustrative purposes: "Dan stop! Dan stop! Dan set timer for 50 minutes." "Timer set for 15 minutes." "Dan stop! Dan cancel timer! Dan set timer for Fiifttyy minutes. Dan turn off kitchen light. Dan set thermostat to 68. Dan play music."

Your name is now Kleenexified to mean robotic female servant, no harm!

It doesn’t seem like you googled or looked into how people named Alexa actually feel before pronouncing how they should feel.

This comment chain really shows why these responsibility vetting teams are needed, a lot of corporate workers are not empathetic or considerate beyond their immediate siloed task and assume everyone should react exactly the same as they did to only very tangentially similar experiences.


I'm pretty sure that a Product Manager made the final call on the name of the device. Some DS nerds might have given a list of names that could be used and presented some stats on the accuracy of the device recognizing the name, but the PM probably made the final call.


Already a pretty bad list. It should at least be "how can we do thing we want to do in a responsible way."

You can guarantee no harm by doing nothing but that's not a good enough answer.


If I consult with you on how to kill my wife in a responsible way, I hope you'll tell me that there's no way for me to kill my wife in a responsible way.


My experience is that most people in engineering organizations are not sociopaths, but some are.

The problem is trying to get people that think everything you listed is obvious and boring to spend 40 hours a week staying out of the way 99% of the time, but also being politically savvy enough to get a few CxO’s fired each year.

Also, since the office is normally doing nothing (unless the office has already completely failed), the people in it need to do all of that, and continuously justify their department’s existence when their quarterly status updates range from “did nothing” to “cut revenue by 5% but potentially avoided congressional hearings 5 years from now” to the career-killing “tattled to the board and tried to get the CEO fired”.

If you know how to hire zero-drama, product- and profit-focused people that can effectively do that sort of thing, consider a job as an executive recruiter.


Inventing harms to prevent, like arsonist firefighters.


And here I thought it was a given that everyone understood the harm of social media, for example, to children and teenagers.


But they actually care about

> [getting] the Facebook dating team’s ... to avoid including a filter that would let users target or exclude potential love interests of a particular race

You see, you have to just ignore those people in the feed; you can't filter them, it's better and not racist that way. And who knows, you might become not racist if you see a pretty girl/boy you like - but actually that's probably just racist fetishizing.

Responsible innovation is doing the same DEI doublespeak.


That's exactly why I'm skeptical of whether a team like this could have been addressing real harms. If I heard about a nutrition team at Frito-Lay, I would assume they're working on nonsense until proven otherwise, because how could you meaningfully improve nutrition under the constraint that your company needs to sell lots of potato chips?



We have verifiable evidence of Facebook as a platform being used to instigate genocide[0] among other issues. Dismissing a concern that a platform could be used for harm against children as fallacious reasoning is a fallacy fallacy if you have no additional points to add to the discussion as to why you feel that is relevant.

[0]: https://www.bbc.com/news/world-asia-46105934


I'm no fan of Facebook but I have a hard time understanding why Facebook is singled out for this. If what FB did is illegal, then they can be charged for their crimes.

However, if we're criticizing from a purely moral standpoint, how is this any different from claiming that cell phone carriers should be preventing this type of thing over phone calls or texts?

For the record, I don't find that to be a convincing argument either but it's the inconsistency of perspective that irks me.


That’s fair but again the post in question was essentially responding with “fallacy” and no further comment or context.


The Rwandan genocide was spawned by radio propaganda from RTLM. Classifying social media as especially harmful to children when damage can come from any sort of mass media is disingenuous.


'Blockers' like "have moderation staff speak the language of the country you operate in"? Feature velocity is not an inherent good.



Yep, Facebook incited genocide - not the actual people who made the posts - Facebook. Lay the blame at the feet of the perpetrators, not the makers of the tools they used. This is like blaming Mercedes for the holocaust because they made Hitler's car(s).


I'm really confused by this because there are most certainly cases where companies are responsible not to provide goods or services to war criminals or terrorists, by law. So it's already established that yes, you can blame the makers of the tools that perpetrators used.


I wouldn't conflate the offloading of policing functions on to private corporations with the assignment of moral responsibility.


Mercedes didn't have an easy way to detect and shut off violence they just decided not to use. I understand the point you're trying to make, but following that logic, should there be no moderation anywhere, for anything then, if it's ultimately only the poster's fault if they post something illegal like "let's all go to bob's house at 123 lane and kill him?"


As it works out, I would say it’s more in the vein of blaming IBM for their culpability in the Holocaust, not Mercedes, although clearly Meta’s role was nowhere near as awful as IBM’s. https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


Can I ask why? The Nazis could no more perpetrate the holocaust without cars than they could without computers.


I think the difference here is like the difference between a German soldier who was ordered to do various things, ultimately playing a part in the perpetration of the Holocaust and/or the Hitler regime's other crimes without necessarily knowing, seeing, or understanding the whole picture - after all, millions of Germans actively participated in the Hitler regime - and, say, an American who intentionally went there to join the SS to participate in it.


They are bullshit jobs for the overproduced elite who demand high status. Now they are going to find out these are only viable in the bull market.


Are you saying that the "builders" at Facebook are salt-of-the-earth plebeians? That has not been my experience.


> overproduced elite

How many elite should there be?


Roughly speaking, the amount that can be absorbed into the power structure.

https://en.wikipedia.org/wiki/Elite_overproduction


> How many elite should there be?

The number we have today minus those unemployed minus those on responsible innovation teams. (Using the broad definition of elites, which captures pretty much everyone who went to college.)


Enough to staff all Starbucks outlets in the country. /s


If the goal is to stay in business long term, thoughtless 'building' isn't worth much.

Steve Jobs: "focus is about saying no"

Mark Zuckerberg: "move fast and break things"

Facebook is the world's least-trusted, most-reviled brand. They earned it by building garbage.


> They earned it by building garbage.

No one (aside from engineers) cares about what facebook built. Facebook had scandal after scandal and on top of it, everyone's grandparents signed up so teenagers couldn't post all of the screwed up stuff they were doing anymore. Posting there turned into a mix of corporate scrutiny (especially prospective employers) mixed with the old forwards from grandma emails you used to get in the late 90s/early 2000s.

That's why facebook is dying.


I see most of FB's "scandals" (eg: dark patterns, illegal ad-buying filters, API over-sharing with third parties) and flaws (re: "teenagers couldn't post" without pinging grandpa) as the result of garbage product design.

At a glance, our comments appear to differ substantively. At closer inspection, the only disagreement we have is what to include in the 'building garbage' bucket.

In case I misused language, I can rephrase my original point: FB is in a bad place today not because FB was ever slow to build, but because what FB did build was too often bad for users (for that matter: bad for humanity).


I'm going to have to disagree.

They built an amazing product... for a world that had only teenagers on social media. This was the truth for quite a while, and was true of the world they created their product in. They just never really updated their product to account for the new reality. Sure, they changed the algorithm a ton and re-tweaked the UI, but the algorithm and the UI weren't the product. The experience people had on the site was the product.

They're sort of like IBM's mainframe division. Mainframes are kind of cool if you ever get to tinker around with their hardware, and they're impressive engineering marvels (so are facebooks systems). But the world has changed since IBM was on top, and they don't fit what most people need anymore.


Misleading privacy controls, privacy-negligent API, FB Beacon... these FB features date back quite a few years, and were terrible for users. That is the sense in which I say it's bad design.

If I make a pizza with tasty, fresh ingredients - but then top it off with rotten fish guts and mouse turds – it's a garbage pizza.


None of those are the product. Once again, the product is the experience using the website. Those are implementation details that only weird nerds like you and I are concerned about 99% of the time. The privacy loss that most end users (or former end users) are worried about is their parents/spouse/employers being on facebook. These are not things facebook built, but consequences of the social graph changing.

And those "features" are all fine in a world where the internet is mostly young twenty-somethings with no power, as the internet was in the early-mid 2000s. Also for a product where you can just put in a fake name, as we could back in the day.


So apparently we do disagree substantively:

  those "features" are all fine in a world where the 
  internet is mostly young twenty-somethings with no power
No matter a FB user's age, it is a problem to implement a dark pattern that leaks private information. There are countless illustrations of that, but one obvious one is how it resulted in many early FB users falling victim to financial fraud (eg: http://givemebackmycredit.com/blog/2011/04/facebook-and-iden...). As for Beacon, it was clearly poisonous from the start (https://en.wikipedia.org/wiki/Facebook_Beacon).

I closed my FB account in 2011 specifically because the design of the site abused its users in countless outrageous ways (come to think of it: "Are you sure you want to close your account? [These friends] will miss you").

I really don't see us coming to any agreement. It's been fifteen years and, while FB has pumped out plenty of LoC, FB has never shown an iota of respect for its users. No surprise then that the brand is dirt, and Zuckerberg has to spin fairytales about how its 'metaverse' will stem the decline. Their ability to execute is not the problem; the demographics of their user base is not the problem; the product itself is the problem.


Sometimes putting blockers in front of people is absolutely what's needed. It's a common tactic for compliance and security - both of which can completely tank your company if not handled correctly. I suspect it's especially true if there are people with the skills to build who claim they are the only ones that are "actually trying to build things." Likely they don't have the right level of understanding of what needs to be built.


The idea is that those blockers or objections are valid. Something is going wrong if they continually raise issues which do not meet the organisations goals.

That said, it does sound like a strange function to allocate to a specific team.


I don't know what this team is or did. However, to paraphrase Jeff Goldblum from Jurassic Park, if everyone is preoccupied with building things, who will be the one that stops and thinks whether they should?


IME, they don't have the authority to invent/implement blockers. Instead, they attend/hold conferences, give talks, and write documents that nobody acts on. In theory, it's a good way for a company to advertise/lobby to the think-tank set. Clearly that hasn't played out for FB, hence disbanding the team.


Exactly. This is the right explanation.


Well put. Intentions can start off genuine, where the non builder truly believes the processes and barriers they set up improve building (and early on it's usually true), but it easily morphs into a tool for maintaining power and influence regardless of the value add.


Sometimes that's what you need.

EDIT: I don't know about these specific people or whether the blockers they put in place are justified. However, I've definitely worked with developers who obviously prefer to just write code until something works with little regard to the reliability, efficiency, security, or maintainability of their output. I spent almost all of my time cleaning up messes rather than getting to build anything new myself. It's a terrible experience.


> regard to the reliability, efficiency, security, or maintainability

Yeah, that's not what this team was responsible for, though. That's what an enterprise architecture team ought to be responsible for, but rarely are.


Good god, as if "builder" was the only type of person that has value.

Most "builders" are, if not constrained, like cancer cells. Unlimited consumption & growth without a counterbalance. In order to justify their own existence, they need to continue to churn out things without concern of what they consume, what use their product has.

Maybe, just maybe, there's value in having different kinds of personalities - unchecked by itself, pretty much every tendency is ruinous.


People who produce things are cancer. Got it.

Also, false dichotomy there saying a "responsible innovation" team is needed to limit the building of products with no use; that's a solved problem. We have markets for that.


Thomas Midgley Jr. was an impressively prolific builder. Navigators are just as important as drivers. You need both, not either in exclusion.


The people who choose to build things (it's not a binary quality, everyone can build, some are better) are not taking moral responsibility for the things they are building.

It is hurting society. What is to be done? That is the issue at the heart of this headline.


What?? The internet is extremely toxic. If you don't have supervision while building your app, it will attract the most horrifying people like this one time someone almost convinced me that the age of consent should be abolished but I just realized they were abusing my naivety and gaslighting me. We need platforms developed lawfully and keep making sure users are liable to prevent that.


This is always a vicious cycle that starts with good intent but spirals into an oppressive regime.

1) It starts after the product launch post-mortem

2) The team creates a process to avoid repeating past mistakes

3) The process innocently includes the ability to hard-block any product change without someone's or some group's approval

4) Politics emerges behind the scenes as those with social capital get around the hard blocks

5) A new process makes things "more fair" by adding more hard-blocking rules, usually resulting in lots of pointless meetings and paperwork to justify the new process and show upper management that things are working as intended

6) More politics as teams start hiring people who can get around the heavy process by being "good at the game"

7) A new team is formed around the process because it's now a full-time job

8) People good at politics join this new team, and everyone else is pulled into pointless meetings and handed paperwork

9) Rinse and repeat

The problem I've always observed is that these systems are put in place with the best intent to offer guidance so that company, employees and consumers can all benefit by building the right best thing. However, the systems are given blocking abilities that after time do more damage than good.

You'd think that if the process weren't blocking, no one would listen, and maybe you're right. I definitely have no idea how teams can avoid this spiral trap; however, I don't think it's a bad thing to sunset processes like this after a while. Just like Google is willing to throw away products on a whim, we should be willing to throw away process teams on a whim.


100% this.

It's very much in the spirit of "the bureaucracy is expanding to meet the needs of the expanding bureaucracy". There's no better way for someone to justify their existence than creating problems but appearing to solve problems for other people. Compliance, security, etc. These things are usually necessary but it does attract a certain kind of person who seems predisposed to empire building through obstacle creation.

You see the same thing with people who become moderators. I saw this years ago on Stack Overflow. You see it on Wikipedia. You see it on reddit. You have to be constantly vigilant that these people don't get out of hand. These people end up treating the process as the goal rather than the means to an end.

I remember when the moderators started closing SO topics as "not constructive". There were a ton of questions like "should I do X or Y?" and the mods decided one day to start closing these because there was no definitive answer. But you can list the advantages and disadvantages of each option and the factors to consider. That can be incredibly valuable. Can this descend into a flamewar? Absolutely. But just kill the flamewars and the people who start them. No point throwing out the baby with the bath water.


> I remember when the moderators started closing SO topics as "not constructive".

I don't think that was a decision made by the mods, I think that's policy created by SO. I really like that it's so focused, clearly defines what is on and off topic, and strictly enforces that policy. You're right, such discussion can be very valuable, but if these policies didn't exist or weren't enforced, I don't think the site would be nearly as useful. Doesn't mean that the people with closed questions are bad, or the conversation can't be had in chat or a different site.

> Can this descend into a flamewar? Absolutely. But just kill the flamewars and the people who start them.

Meanwhile the strong contributors get tired of dealing with (even more) toxicity and move on, the signal:noise ratio drops, and it turns into just another (ostensibly) programming-focused forum. Not to mention that the more time mods spend dealing with toxic behavior, the less time they have for work that improves the content of the site.


> I don't think that was a decision made by the mods, I think that's policy created by SO.

There was little to no direction from the top, at least at that time. Mods were just self-appointed community members who organized in such a way to impose their will on the community. It was pretty classic tyranny of the minority. Many of the more prolific contributors (of which I was one at the time) were deeply frustrated by this. It was a frequent topic on Meta SO.

You saw these waves of subtle policy changes if you tracked edits to your contributions. Suddenly there'd be a bunch of edits all based on some new formatting policy and a lot of it was just pointless busywork.

Some people just fall into this trap of elevating the process to where the process becomes a goal and is treated of equal (or even higher) value to the content it's applied to. Moderation is important and has a place but, left to their own devices, moderators will create work for themselves just based on their idle capacity. You need to avoid this.


> get tired of dealing with (even more) toxicity

And then "toxicity" just gets redefined to an always leftward-moving definition of anything "right-wing". Every time.


https://en.wikipedia.org/wiki/User:Guy_Macon/Wikipedia_has_C... comes to mind as another good example of this dynamic.

For another, the ever expanding administrative overhead of colleges is a large contributor to driving tuition increases. See https://stanfordreview.org/read-my-lips-no-new-administrator... for an idea of how bad the problem is getting.


I didn’t expect to see such a concise and spot-on statement in these comments. Thank you.


Maybe if the builders stopped building shitty things, we wouldn’t need teams like this.


The elephant in the room just broke through the door...


[flagged]


Gotta love mocking people who were just fired.


Pretty much any time a major tech company announces layoffs the peanut gallery shows up talking about how everyone getting cut is lazy/dead weight.

In reality it's far more likely they had the misfortune of being assigned to teams that weren't core to the business' operation or driving revenue or some Exec/VP lost a political battle and their department got the axe.


It’s a classic post hoc rationalization that happens when folks are fully committed to loving business and bosses over actual people’s well-being. It’s gross.


For sure.

To be fair, organizations select for people who love the business and its leaders over people who are willing to point out when the emperor has no clothes. Sort of the same way that in subcultures where abuse is common, the people who remain are selected for those who will support abusers and blame the abused.

So I get how it happens, and why people like to victim-blame as a way of making themselves feel better about their choices. But it's still thoroughly gross.


We are very hypocritical that way. You can get upvotes for being glad you were laid off, you can get upvotes for saying the people who got laid off were bad people anyway.


So when is the appropriate time to mock them? When they are employed making high six figures to invent harms and then ways to reduce them?


Why are you insisting they're inventing harms? Do you have evidence to support this? Are you just opposed to the idea of ethics?


I have a creepy feeling that this entire thread has been infested by Facebook insiders who are working on something that is awful, and lobbied and heckled to get the people fired who were telling them it was awful. No one else would have this sort of venom.


How about we just don't mock people?


Mockery is a very powerful tool to speak truth to power.


It's kind of nice to watch people who call you names for a large amount of money get fired, yes.


I'm not a fan in general, but in this context understanding the team's mindset seems relevant to understanding what the practical effects of this cut will be. Should we expect Facebook to innovate less responsibly in general, or was this team working towards alignment with specific cultural attitudes? For example, the article describes how the team stopped Facebook Dating from adding a race filter and saw that as one of their proud accomplishments - this seems like more of a values judgment than a question of responsibility, since it's not a question that's anywhere near settled in broader society.


Attacks on character should have justification in the comment, not merely a reference to a Twitter existence.


> these people are also nutcases.

In what way?


"Zvika Krieger is a subversive ritualist and radical traditionalist who is passionate about harnessing ancient wisdom to create modern meaning, fostering mindfulness and authentic connection amidst digital distraction, and bridging the sacred and profane. Zvika is the Spiritual Leader of Chochmat HaLev, a progressive spiritual community in Berkeley, CA for embodied prayer, heart-centered connections, and mystical experiences. "

One of them is an apparent cult leader, for example.


> subversive ritualist and radical traditionalist

This sounds like the person I’d put in charge of a department designed to run in circles.


If they were a deacon in their church, you wouldn't blink at it.


Many people in this industry would blink at that. I recall a whole lot of people on HN blinking at the Christian beliefs (among other things) of the guy at Google who said the text generator was alive.


His resume has some impressive accomplishments, but has enough BS that it makes me suspicious of what teaching courses on ethical design actually entails.


[flagged]


It's weird that you assume "builder" means "programmer." At a healthy software company, most roles are builder roles:

- designer

- product manager

- marketer

- sales

- customer success

Each of these roles delivers something additive. They make the product easier or nicer to use, make it address the market better, or even just make people aware that it exists. Marketers build complex promotional structures to gain users. Salespeople put together actual accounts paying actual money so the product can continue to improve. Customer success helps people use the product, and a good CS team filters high-signal feedback back to the product & engineering orgs.

People in all of these roles (and more) are building the product. If their roles were cut, the product would be worse: uglier, slower, less useful, less used, less purchased, and less successful.

However, there are roles that attract folks who don't want to build. It's smart to get rid of them.


You can try to sell me on it emotionally but it isn’t going to happen.

Economic trade is just something humans do. Babbling about it in rambling fluffy semantics is not what gives rise to economic activity.

A lot of these builders should build themselves into fuller better rounded people than office workers who memorized an emotional caricature they use to extract attention from others. HR, marketing, etc exist to insulate financiers, give off an air of building big ideas, but it’s layers of indirection so the financier can avoid real work.

The majority don’t have to import your relative emotional context anymore than they have to import a preachers religion.

That’s what’s going down with the economy; the majority are tired of the over hyped coddled office worker. This is just the start of the barnacle shedding. Source? Data models forecasting economic “plot lines” I collate with others to inform investment choices, not a bunch of high emotion semantic gibberish intended to sell me on a relative perspective.

SEs and otherwise, too many are addicted to an emotional confidence game that is sapping agency in other contexts. This is low hanging fruit being shed, but ML will come for content creators and SEs sooner than most expect; it’s all electron state in machines. Generating data/code as a job will be automated; that is a stated goal of many powers that be in tech.

I’m not the only one who sees it; in a 2 year old interview Gabe Newell was calling ML content creation an extinction level event that’s right around the corner.

More and more “builders” of hype and ephemera are going to be receiving the wake up call as the months and years progress. Building layers of indirection as a career is not long for this world.

But more simply; “software builders” is not language I see as constrained to SE. Like you said it takes a village to build a software business. I think you inferred a bit much.


Nothing in the comment you’re replying to mentioned software engineers. They claim people building things are unlikely to spend time in “responsible innovation” teams. That’s true if one’s building rockets, medicines or cat GIF sorters.


> But building the software is not hard.

This reminds me of 400lb men who can't walk up the stairs without getting winded yelling "are you kidding me???" at the TV when they see NFL quarterbacks throw a bad pass.


It's like reading that BP disbands their team for renewable energy innovation. Am I supposed to be sad? I'm not. I don't care. It was all fake from the very beginning even if the team had many people who were naive enough to think otherwise.

The point of such teams is not to improve ethics. The point is to have enough credibility to affect the discussion. (E.g. observe how much "research" in AI ethics has financial ties to large companies that are heavily invested in AI-based products at the time.)

What a properly ethical corporation would do is openly admit conflicts of interests and listen to external feedback.


Not really, because renewables will actually make BP better and there's a legitimate case that such a team would improve long term shareholder value.

This team at Facebook had a very low chance of doing anything good for the company.


Meta is on the same path that Nokia and Kodak once followed.

They will survive, but they will bleed and shrink a lot, and their relevance will likely be insignificant ten years from now.

The Metaverse bet is dumb; they simply can't execute, and even if they could, this whole Metaverse thing is likely to stay in the video-game realm for a very long time.


> and even if they could this whole Metaverse thing is likely to stay in the videogame's realm for a very long time

There are already concerts with paid tickets going on in platforms like Fortnite, that just happen to be able to act as multi-user VR environments, even though that wasn't their designed purpose. The demand is real.

I don't see it as much of a bet that "people are already creating these events, and want to build professional services around doing so; these people would appreciate a platform to run such events on that won't be shut down/go unmaintained in two years just because the game that the platform was built for ceases to be relevant; and people who pay to attend these events are willing to download a client to consume the content they paid for." That just seems to be a series of common-sense facts to me.

Whether Facebook can actually end up as the platform of choice for hosting said events is entirely non-obvious. But, if nothing else, they do have connections, channel partners, global scale to deploy reliable infrastructure that can be trusted to not buckle under event load, etc. It's up in the air how much things like "the aesthetics of the experience" really matter, compared to those. How much does a band care about the aesthetics of the venue?


> There are already concerts with paid tickets going on in platforms like Fortnite, that just happen to be able to act as multi-user VR environments, even though that wasn't their designed purpose. The demand is real.

You have that the other way around. Fortnite was able to host a concert because people like to play Fortnite. There is no demand for VR concerts, there is perhaps some demand for concerts in already-populated online spaces.

People aren't flocking to Fortnite because of the concerts. People who play Fortnite are flocking to these concerts when they happen in their game.


> There is no demand for VR concerts, there is perhaps some demand for concerts in already-populated online spaces.

Megan Thee Stallion, Billie Eilish, and the Foo Fighters are all releasing VR concerts this year.

My wife, who has never played Fortnite, attended several VR concerts over the past few years.

Given how popular listening to music is on services like Youtube, I believe it is short-sighted to fail to see the appeal of an immersive 3D-audio VR experience for music fans.


Foo Fighters also released 29 tracks for Rock Band, which was really popular for a couple of years.

New toys are fun to play with, for a bit. That doesn't necessarily mean they're going to stick around or have much long term relevance.


They’re not only fun to play with, but when someone needs to purchase cachet for a new toy and they come to you to chat about it, cha-ching.


> There are already concerts with paid tickets going on in platforms like Fortnite, that just happen to be able to act as multi-user VR environments, even though that wasn't their designed purpose. The demand is real.

There's no demand to watch computer animated concerts through heavy glasses in Fortnite coming from people who don't already spend a bunch of time playing Fortnite. This is an add-on to Fortnite, not anything that anybody really wants to do for its own sake.

What's the point of watching a computer animated "concert" in VR anyway? Who would get any joy out of that?


Ask the 12.3 million who watched Travis Scott have a concert in Fortnite [1]

There is no way to convey this without being a little bit of a jerk, but I think your take demonstrates being out of touch, in the "no wireless, less space than a nomad, lame, also Dropbox is just some tooling around rsync" sense that HN tends to get sometimes. This is enough of a phenomenon that other mainstream artists are starting to get into it, which means there are enough people getting joy out of it for it to have a market of its own.

I think you ignore the metaverse (the concept, not Facebook's janky implementation) at your own peril. The next generation is growing up with virtual concerts, and in many cases virtual artists.

[1] https://variety.com/2020/digital/news/travis-scott-fortnite-...


I urge you to examine any worldwide trends occurring in April 2020 that might have made people inclined to attend virtual events.


And you think that inclination is temporary? I urge you to examine the history of concerts in that game. A bit under 11 million watched a concert by Marshmello a year prior.

And that's not counting the upcoming stuff. Trying to write off something this successful as a pandemic driven fad kinda speaks to that whole being out of touch thing I was talking about.

https://www.theverge.com/2021/9/28/22699014/fortnite-soundwa...


OK, so a bunch of people were playing Fortnite in April 2020. That doesn't mean that people want to watch VR concerts.


On the contrary, more than 12 million people watched the concert live. That's time spent in going to the VR concert, not playing a competitive shooter. Are you suggesting that all of them did something they had no desire to do?


Second Life has been doing this for over a decade. I see no reason to expect Meta to do a better job, and see every reason for people to assume they’ve done a maliciously-bad job, given their reputation.


It'll be worse than Second Life because they'll censor anything that isn't brand-friendly to Facebook (and there is certainly a lot of stuff like that on SL - it's a lot of the reason people play it!)


Would you trust Meta with a new social platform? Do you think your friends will?

Even if they can execute (I believe they can't), and even if the Metaverse is the next huge thing that some are predicting (I think their vision is too dependent on VR/AR magically becoming practical), they still have a huge reputation issue: they are already bleeding users on FB and Insta.

They need a miracle.


Have privacy geeks who keep saying "no one would trust FB with X" ever looked at the numbers? I keep seeing this sentiment around Meta and VR, yet guess which VR company has like >70% market share.

https://www.statista.com/statistics/1222146/xr-headset-shipm...


I'd like to see numbers.

I mean, engagement, not market share, total time spent by users with the headset on.

My intuition is that 90% of that "market share" is collecting dust.


Sounds like you just believe whatever you want then, no? How is 80% of sales in Q4 not a pretty good indicator that people do, in fact, "trust Meta with a new X" (or at least don't care enough)?

As for that new intuition, what good reason is there to believe Oculus headsets are so vastly underused after-purchase compared to competitors? And what has that to do with Meta's reputational issues?


> How is 80% of sales in Q4 not a pretty good indicator that people do

Because everybody I know who's bought a VR headset (any brand) lets it collect dust after a month, and I don't think that's a fluke. Buying a VR headset is not using a VR headset.


My point was about the common insinuation that people (i.e. consumers) will not use Facebook products because of the company's reputation issues, which I think has never shown up in the underlying numbers. Through all the scandals, whether political weaponization by users and advertisers, privacy issues, or else, their userbases never stopped growing, whether it's FB+Messenger, Instagram, WhatsApp, and now VR. There was a single (recovered) QoQ decline in global active users in its history, seemingly unrelated to any scandal.

The company sure has reputational issues, but as much as people hate it (I have my own deep issues with it), that's not stopping the larger masses from using its products. In fact, >85% of humans (ex-China) with an Internet connection use their apps, and they account for >70% of VR headset purchases.

Whether VR is a fad, whether people who buy headsets find long-lived utility in them, or whether the push is a good or bad bet for $META is unrelated to what I meant to say, which is that their reputation is not stopping them from currently dominating the VR space (non-premium only so far, but maybe with Cambria). It doesn't seem reasonable to think the 7-8 digits of current VR enthusiasts are less morally principled than the 9-10 digit masses would be.


You're confirming OP's point: this is a video-game thingie. I personally don't know anyone who plays or cares about Fortnite; I'm a man in my early 40s with little to no friends who work in the IT industry.


I'm just dumbfounded that they would take a burgeoning technology like VR and set the impossible goal that it would one day supplant the absurd profits of their ad business.

That's like trying to tag your kid as a Heisman trophy winner as the kid is being birthed. Sure, be a proud parent, but understand you are delusional.

Facebook is a company with nearly two decades of big-tech experience, 40,000+ employees, and unfathomable amounts of capital/IP/assets. To see them make the same logical leap as a two-person startup of fresh-out-of-college optimists is baffling.


Someone I know recently got contacted by a recruiter for Meta's VR team, with the pitch that the team "wants to get 1Bn [users] by end of year!".

That'd amount to converting 30% of all Facebook users to VR users, worldwide, in four months. Including a bunch of users internationally who don't even own computers, and certainly can't afford VR hardware. "Delusional" is absolutely the word for it.


Supposedly Apple is dropping a headset in the next year. Should be interesting to see what people think of the industry after that.


I think VR is as much about GAMES as it is hardware. I don't think anyone, even Apple, can nail the hardware but they might. But I know FOR SURE Apple won't nail the games. Look at Amazon's attempts (lol).


Yeah that's my assumption as well but who knows. They're getting much better at content.


What are your expectations about Apple in this regard?

I am curious, but I have low expectations.


Honestly, no idea. I'm just interested to see the shoe drop.


> set the impossible goal that it would one day supplant the absurd profits of their ad business

Who says they’re replacing ads with VR?


I had a front row seat to Kodak's demise, and I can tell you that the depth of their denial internally was far, far greater than Meta's today. Not to say that Meta isn't in trouble, but I wouldn't count them out just yet. They are at least trying to keep up. Kodak only embraced the digital world grudgingly, and they sabotaged all their most promising initiatives because they threatened the (incredibly) lucrative film gravy train that they couldn't bring themselves to accept was ending. I'm sure all the decision makers who presided over that downfall took comfortable early retirement offers and didn't suffer for their mistakes.


> The Metaverse bet is dumb

It's a moon shot, but I wouldn't call it dumb; it's actually a pretty obvious business decision. Keep in mind, Oculus (now being rebranded as Meta) is currently the preeminent VR platform and has sold more devices than every other competitor combined. This is a potentially huge opportunity for them to expand their social media reach into a completely pristine market that they already dominate. Meta has a unique opportunity to define the future of VR; it'd be dumb not to try something like this.

Of course, none of this addresses the abysmal branding or Meta's horrible reputation or the public's perception of Zuck as the herald of a technocratic dystopia. I also agree that VR will indefinitely remain within the realm of videogames, and gaming is not in Meta's DNA which I think will probably be their biggest barrier to success. Still... they own the VR market, they also have plenty of cash on hand, and that goes a very long way when trying to make the impossible possible, e.g. the xbox...


Microsoft had DirectX and a decent kernel before building the XBox. The hardware part was much easier, and also an easy target (do better than Sony) and they had a slow start.

Also, MS had quite a bit of experience in building/maintaining a platform for developers.

Frankly, we're not talking about the same kind of leap.


At the end of the day, they will have IP rights for a lot of VR related technology, and that will be worth something.


I think VR needs pretty badly to pivot in general, it's an amazing technology that tries to replace phones/computer interfaces where it should be working with them.

I don't want to strap a thing to my face and "be in VR" for a long period of time, navigating menus by pressing giant virtual buttons or pointing lasers at them. I want to, while using google maps on a phone or computer, be able to hold the headset up to my face and look around briefly and then take it back off. Same with data visualizations, 3D models, short-form entertainment, etc.

Standing in VR looking at loading screens is an awful experience. It should already be loaded to what I want to do.

It's hard to imagine Meta making that pivot since they seem to want to have a walled garden like Apple's. But they could at the very least make navigating/loading by a phone app an option. Even track the phone in VR, have a mirrored twin of it (with a kind of blown-up holographic version of it you can more easily read), that you can use in and outside of VR, and instead of their controllers.


> The Metaverse bet is dumb

I see it as an attempt to keep propping up the stock value, I don't even think they believe it themselves


They definitely believe in it.

Talk to anyone you know who works at Meta - it's obviously the top thing on Zuck's mind (and that's been clear every Thursday for years now). FRL is getting huge funding in a way it wouldn't if this were just to prop value.

(personally, I admire that - FB has pushed VR technology forward a couple of generations, singlehandedly)


Well, if this is a PR op, it's the dumbest and most cost-inefficient one you can imagine. We're talking $10B a year.

Ten billion dollars a year.


I had no idea it was this costly, that's crazy ... Yeah, maybe I'm wrong and they really are seriously betting on this weird project


How much of that is just relabeled activities that they would do anyway? Relabeling things is almost free, and leads to very high numbers.


FB wont be the one to make it happen, but I can see how a "minecraft with stable diffusion" will be all the rage in ten years.


yes, this is the obvious explanation and i dont get why public discussion can't make any progress but insists on throwing out the same "i dont think my mom wants to live inside minecraft yet" weak take every chance it gets.

edit: it coincided with fb reporting saturated user numbers


A lot of technology has started with gaming driving much of the early development, including PCs, graphics, AI, and the internet.

Whether Meta can execute on the metaverse bet is fair question, but I expect most of the changes they are envisioning will come one way or another.


I don't think it's totally dumb.

There's a lot of IP and patent rights to be gained. Even if Metabook shrinks back, they will always have a possible revenue stream in enforcing those rights and collecting royalties on newcomers to the VR market.


Meta has already screwed up Facebook from a UI perspective which is 2D. It feels clunky and ugly in my opinion, and was better designed when it originally came out. Now they seem to be on the same path with their Metaverse apps - low quality Wii like graphics that feel 20 years old already.


Why do you think they can't execute? What is execution in your mind and why is it out of reach?


First, company culture: on the technical side, a team full of fat, overpaid cats surrounded by the worst kind of bureaucracy, most of whom have spent most of their careers doing web stuff. What they've achieved so far is underwhelming, to say the least.

Second, Meta is a huge corporation; the money makers don't let the kids play for too long, and they won't let Zuck burn all the cash. No matter how much theoretical power he has in his hands, they won't let him bleed the cash cow to death.

Third, we have enough history to see a pattern: Nokia and Kodak saw the iceberg, and it didn't matter.

And small things like Carmack leaving and Zuck being delusional are clear hints of what is going on internally.


Execute to me would be providing the user experience they're showing in their own videos, many of which are simply not possible with headset technology.

https://www.youtube.com/watch?v=SAL2JZxpoGY

Just a ton of little things. Woman floating horizontally in space. Possible to render. Not possible to convey to the player. Cards, basically impossible to actually have cards that have good physics in a networked game like that. The smooth floaty motion and spinning is particularly dumb.


>Woman floating horizontally in space. Possible to render. Not possible to convey to the player

You could just have head and hand tracking with the rest of the body rigged and animated separate from the tracking. Player doesn't control the legs, they just dangle and animate.

The cards are technically possible. The trick is you just fake the physics. It doesn't need to be networked in a low latency way at all.

I would say the least realistic part is the fine-grained tracking and handling of things as thin as cards. Hand tracking isn't tight enough to manipulate a hand of cards naturally yet. That said, if you give some leeway between what the first-person UX is and how other avatars are displayed, it's not so far off from what's possible, especially if you allow "reveal your hand and have it float away in zero G" to be a pre-canned animation that the user triggers.
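For what it's worth, the split described above (tracked head/hands, everything else animated) is roughly how most social VR avatars already work. A toy sketch, with entirely hypothetical joint names and a single scalar per joint standing in for a real bone transform:

```python
import math

# Joints driven by actual sensor data (headset + controllers);
# everything else is purely cosmetic animation.
TRACKED = {"head", "left_hand", "right_hand"}

def dangle_offset(t, phase):
    # Canned idle animation: legs sway on a slow sine wave,
    # completely independent of any tracking input.
    return 0.1 * math.sin(2.0 * math.pi * 0.5 * t + phase)

def pose_avatar(tracking, t):
    """Merge live tracking data with animated joints into one pose."""
    pose = {joint: tracking[joint] for joint in TRACKED}  # from hardware
    pose["left_leg"] = dangle_offset(t, 0.0)       # player can't control these
    pose["right_leg"] = dangle_offset(t, math.pi)  # opposite phase, so they alternate
    return pose

frame = pose_avatar({"head": 1.7, "left_hand": 1.2, "right_hand": 1.2}, t=0.25)
```

In a real engine the scalars would be full bone transforms and the dangle would be an authored animation clip, but the control split is the same, and it's also why the networking is cheap: peers only ever need the tracked joints plus a timestamp, and each client animates the rest locally.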


> You could just have head and hand tracking with the rest of the body rigged and animated separate from the tracking. Player doesn't control the legs, they just dangle and animate.

But you cannot feel that way. Yes, you can render it. You cannot make a player actually feel like they're there. You can't have fun doing loops like that. Even if we could somehow suppress the immense nausea of doing it, you would not be having fun doing a loop, because you are not doing a loop. This is the big lie of VR. It's just the eyes, not the body, and there's a huge difference. You arguably get more bodily immersion with a third-person view, watching your avatar's body do things.

"I am the person I'm watching doing the thing" > "I'm literally me, and I'm going to try to believe that the weird body ik is what I'm doing".

Ready Player One (which I don't like but is the poster child for fake VR UX) does this a lot with subtle things, like the characters jump from a precarious ledge and then struggle for balance on the precipice. That's impossible to achieve! You may struggle for balance irl, you may put balance mechanics into your game, but you cannot force a player to FEEL off balance because their VR foot is now on a ledge.

> I would say the least realistic is the fine grained tracking and handling of things as thin as cards. Hand tracking isn't tight enough to manipulate a hand of cards naturally yet. That said, if you give some leeway to what the first person UX is vs how other avatars are displayed, its not so far off from whats possible especially if you would allow "reveal your hand and have it float away in zero G" to be pre-canned animations that the user triggers.

There's a really big gap between what they're implying and what we can actually do though. Cards are super thin, and bendy. You can stack them together. You can manipulate them with your fingers. You can fan them out and shuffle them. Paper thin collisions are not something that any game engine really tries to support. You get fake deformation (bending) of meshes, but it's always kind of shit. The physics of it is extremely difficult to do, and trying to replicate that level of precision is probably impossible in the foreseeable future for a number of reasons including the ones you mention.

Can you make a great player UX with cards? Absolutely. You can do all sorts of stuff to give the visual implication of bendy cards and shuffling and fancy finger work. But you cannot actually let the players do it with their hands. You're not there. You're interacting with a very, very high level interface to the game world. We won't ever get the user experience meta is implying. It's pretty much impossible without a fundamental shift in how game engines work.


> The Metaverse bet is dumb

I think something that can improve the remote work experience could be worth a lot. It’s still to early to judge.


The Metaverse bet is dumb, but I wonder at the ten years.. Both Whatsapp and IG seem relatively sticky.


This isn't a surprise.

Meta's headcount grew 32% in one year, and revenue went down 1% YoY in Q2. Wall Street expects it to fall even more YoY in Q3. Layoffs usually happen in September, since that's when budgets are set. So they objectively overhired, expect to shrink, and are now laying off non-revenue-generating divisions.

Some could argue there's no such thing as a non-revenue-generating division. Every division contributes, and this one could have helped with the Meta brand, public perception, user retention, etc. But the real way you solve that (as AI researchers have found) is not having a firefighting team that's expected to fix everything; it's instilling the proper behaviors, processes, and culture within each and every team in the company. Having these discrete "divisions" serves PR goals more than anything, and that's expensive PR.

This is exactly like Steve Jobs laying off the Advanced Technology Group. Did that signal a "neglect" of R&D? No.

I personally believe that even Meta management is far more pessimistic about its stock than Wall Street is, and that's why they're playing it careful. Wall Street expects 10% revenue growth in 2023. User count isn't growing. So FB needs to cut spending, focus on core product, and increase revenue per user. Straightforward.


I wonder if internal estoppel is a thing? Two options

Option A

  1) ethicist objects to something

  2) management declines the objection

  3) whistle blower reveals the declined objection

  4) public outcry ensues, stock price falls

  5) board members become agitated

  6) c-suite wakes up to another gut punch
Option B

  1) ethicist raises an objection

  2) management acts on the suggestion

  3) revenue misses target

  4) stock price falls

  5) board members become agitated

  6) C-Suite wakes up to another gut-punch
Choose.


It seems we can gain a lot of efficiency by having a different c-suite gut punched every morning as a part of the NYSE opening bell ceremony.

Then we can just skip steps 1-5.


In Option A above, it seems like things mostly stop at 3 for FB. Public outcry? Falling stock price (as a result of public outcry)? I don't really see that happening very much.


It's hard to say because the facebook hearings happened at the same time as the iPhone anti-tracking stuff that cost facebook billions but I think it had an effect.


Was thinking of Frances Haugen revealing her identity coinciding with the market drop [1]. I dunno; IANAB (Banker)

[1] https://www.barrons.com/articles/facebook-stock-tech-apple-5....


> Facebook dating team’s decision to avoid including a filter that would let users target or exclude potential love interests of a particular race

Lots of minorities actually love this feature because they can find people from their own community. If it were up to teams like this, they would remove the gender filter as well. I’m glad these crazy Twitter people were fired.


...Or maybe facebook dating is a little trite and shouldn't even be a thing?

You get decisions like this after profiling use cases: targeting by race might be a vector for targeting minorities for hate crimes.

Community/social sites (in my opinion) should not double as dating sites - for safety reasons.

The reason we can't have nice things is because of bad actors: all design needs to account for bad actors.

Assholes ruin design. Don't shoot the messenger, or stop taking their calls when your own team tells you they exist.


What they should've done is add a second filter to avoid getting matched up with people who use the race filter.


Not to sound too bleak, but wasn't Pandora's jar already opened many years ago? We can hardly perceive the societal harms social media has done, given we still live in the era where it is the main metaphor. You cannot reform it once it's out, in other words.

> The team was made up of engineers as well as people with backgrounds in civil rights and ethics, and advised the company’s product teams on “potential harms across a broad spectrum of societal issues and dilemmas

I suppose the question is whether people are surprised that this team existed in the first place. It sounds like it adds a lot of legal liability: knowing about certain problems and not doing anything about them in due time. I wonder what they ended up finding, if they found anything at all not already known to the public.


I feel like that team spent a lot of time at the water cooler.


It’s like a team in a wildfire tasked with discovering potential harms to the forest.


[flagged]


Zero. They don't use slack


What do they use?


Messenger lol


For team communication and org-wide messaging? How? That's insane if true


Workplace (Facebook for companies)


Touché


The irresponsible innovation team is still hiring!


which team would you rather work on anyway?


React is doing fairly well. The new hooks stuff is very flexible.


That's the rest of the teams.


Has any one seen teams like this actually work? They seem to mostly exist for PR purposes.


PR is the intended purpose.


But do the teams know that? I love picking on Timnit Gebru and her time at Google. She clearly didn't know what her actual job was, which is funny because she seems to be good at marketing herself. You'd think she'd know Google was using her for marketing.


Timnit Gebru called out her boss' boss' boss on Twitter on an unrelated issue where the boss isn't even relevant and acted like she didn't know why she was fired. This person is straight up toxic, and I hope she isn't a role model for AI ethicists, but many praise her as a role model...


I believe they do. That's why groups like this love the word "optics".


I have a question about incentives in organizations. If you form a team to identify problems in your org, won't the team keep finding more problems, no matter what, to justify its own value? A DEI officer will always find more injustice in a university, so over the years the U-M History Department ends up with 2.3 DEI officers per staff member. Or Gebru's team keeps finding more and more egregious behavior in Google AI.

On the other hand, security teams are highly regarded in a company, even though they are supposed to identify security/privacy problems in the org as well. What made security different from those ethics teams?


Security is provable in some way, as in “look you can exploit us I’m doing it right now” - many of the others are more nebulous.


>security teams are highly regarded in a company

I doubt it. High regard for a security team is only projected, especially externally, because it is perceived as a virtue. "Security" these days is in many respects like communist ideology in the USSR: there is simply no option of not claiming allegiance to it. Whereas internally it is usually just theater. Just look at all those large publicly known breaches.

>What made security different from those ethics teams?

You can claim allegiance to ethics without having an ethics team, fortunately.


"Group was put in place to address potential downsides of the company’s products; Meta says efforts will continue"

should really say

"Group was put in place to make people think the company cares about the downsides of the company's products; Meta says something meaningless about continuing efforts"


Facebook used to be organized at a high level into different groups named after different company goals. Like Engagement, Growth, Utility. Facebook should be engaging, Facebook should grow, Facebook should be useful to people. Eventually they got rid of the "Utility" group while keeping "Engagement" and "Growth", leaving a bunch of us feeling like... I guess Facebook gave up on being useful?


I bet they are having to take a long hard look at where they are spending their money. A team like that is great when money is plentiful, but a drain when it's not.


There's been a lot of talk from Zuckerberg lately about there being too much cruft at the company. I wonder if he's feeling pressure from investors and the tech world in general over his big bet on the Metaverse. If he can make the company leaner, he might be able to reduce some of the pressure.


It is impossible to have an internal oversight team that’s without a conflict of interest.

From law enforcement to newspapers to government to technology: unless there's demand from the public and oversight is independently funded and managed, without fail the team's function will end up either functionally meaningless or aligned with the parent organization's core objectives.

Beyond that, no venture is without flaws, and until people are able to acknowledge that optimal solutions do not mean zero negative impact, it will be a race to the bottom, in which another culture that manages the complexity, whether by luck or skill, will eventually replace those who cannot.


There's a big difference between a conflict of interest with the company at large and a conflict between teams. For example at Google SRE vs SWE is set as a conflict which mirrors reliability vs dev-velocity trade-offs. And that works OK because both are valued by the company and so by being the decision maker on that conflict leaders get given a way to adjust that trade-off.

In an ideal world the "responsible innovation" team would represent one side of a "speed and scope" vs. "PR, compliance, and political considerations" trade-off. So even though it goes against some of Meta's goals, it would be valued for achieving others.

However sadly in practice any time a set-up like this has "money" on one side of the balance, it's always going to win. So the team was set up with an impossible task.


Did they have power to create necessary changes to ameliorate the identified harms?


Of course! So long as they don’t affect business objectives.


Are virtue-signaling jobs the first to go in this recession?


> Responsible Innovation Team

Whenever I come across a dystopian sounding name like that, I immediately distrust them.


Obviously if Google no longer needs to “not be evil” Meta would be required to do “irresponsible innovation” just to stay competitive. So the team has to go.


Devil's advocate: Private firms shouldn't be in the business of evaluating the harm they cause to society. We the people, and the enforcement arm of the people, the government, should pass legislation and enforce laws to prevent such harms. If the people, i.e. the government, is unwilling to perform this basic function, private firms shouldn't feel obligated to take the mantle of responsibility.


That sounds great in theory but there is a major problem: in the U.S. at least, corporations are political actors. So you can't fob off responsibility from corporations onto "the people". In the U.S., corporations literally are the people.


I feel like this stance ignores regulatory capture as a concept. In any activity at the scale of individual humans it is generally considered inappropriate to adopt an attitude of "I'ma do tf I feel until someone tells me I'm fucking up". Why should activities at the scale of private firms differ?


Wait, so the response to regulatory capture by companies should be...regulation by companies? Like, just removing the middleman?


Never said that, but since we've apparently landed at "people are complete sociopaths and absolutely must be forced to act in ways that aren't overtly harmful" what, specifically, are your recommendations? Because from where I'm sitting that sounds an awful lot like ironclad reasoning to eliminate profit motive from society.


I'm not recommending anything. I'm just saying that "we shouldn't rely on companies for regulation of their behavior" is not an opinion easily countered by "but we need to, just in case of regulatory capture", because it's putting the cart before the horse.


Still isn't what I said, but since we're doing another lap: the available options are (1) do nothing, (2) government regulation <insert sovereign-individual screeching here>, (3) corporate self-regulation (because that's always successful /s), or (4) eliminate all incentives for unethical behavior.


Corporations have outsized influence on our government. While the working class is too busy working, these corporations are able to hire full-time lobbyists whose entire full-time job is to advocate for them.

Though we definitely still have miles to go to limit the influence of corporations on gov't (and even public opinion by limiting media centralization and funding independent media), I doubt we'll ever be able to fully limit their influence.


> Private firms shouldn't be in the business of evaluating the harm they cause to society

Private firms have been evaluating the harm they cause since tobacco. They need to know precisely what they need to cover up under the guise of discovering societal dangers for social good.


Nah. Tobacco companies didn't give a shit about societal dangers. They were trying to deflect liability, which is a business decision. It's a business decision that they're forced to make because we have laws about liability and don't rely on the generosity of CEOs.


I literally said that private companies have been using studies to deflect liability under the guise of social good.


Imagine there is a manufacturing plant and they have a waste stream of highly toxic chemicals. The cheapest solution is to pump it into the ground, and if in a decade or two later, those chemicals make their way into the water table it is someone else's problem. For now, management has succeeded in producing higher profits. That scenario has happened in hundreds of sites around the US, including in Silicon Valley, and billions have been spent by the government to clean it up (imperfectly).

Under your proposal, we the people and the laws would need to anticipate any harm a company might do, otherwise they are off the hook. Sure, there are laws now against semiconductor companies pushing their chemicals into the ground water, but what future chemical might be in use that isn't named in some law currently?

A second point is that of regulatory capture. If a company can spend $100M lobbying politicians, courting regulators, spending on PR to sway public opinion to believe false things, and as a result earn $1B in extra profits, that would be AOK under your plan.

Every time someone trots out the false claim that it is a CEOs legal responsibility to maximize profits, I have to wonder about their ethical framework, as if maximizing profits is an inherent good.


> Every time someone trots out the false claim that it is a CEOs legal responsibility to maximize profits, I have to wonder about their ethical framework, as if maximizing profits is an inherent good.

The problem is that you're talking about how good people are, when other people are talking about how well things work, and about figuring out how to make them work better. Your ethical framework seems to be that people should be good and do good things, but with no reference to what things are good and what things aren't other than the intuition and improvisation of good people, and no reference to what we do if they don't do what we think are good things.

If you're relying on the CEO to be a good person, you've dismissed governance as a possibility and anointed kings. In general, I don't think that kings work in my best interest, so I prefer regulations.


I'm not saying that we should only rely on CEOs to do the right thing. We absolutely still need regulation, and mechanisms for fighting regulatory capture. But the original claim was that companies have no responsibility towards ethical behavior; I'm saying we need to strengthen the moral responsibility of CEOs, not weaken it. Sure, it isn't perfect, but it's better than "CEOs are right to do whatever they can get away with."


> If the people, i.e. the government

Well that's one issue. The government only weakly equals the people, because of the electoral college disenfranchising voters in NY and CA, lack of ranked choice voting, Black voter disenfranchisement, lack of statehood for DC and Puerto Rico, and the corporatocracy system of influence.

> If the people are unwilling to perform this basic function, private firms shouldn't feel obligated to take the mantle of responsibility.

Sounds made up.


> Sounds made up.

What sounds made up?

Also, related to nothing, why have a bunch of people on the internet started replying to everything over the past few weeks with "sounds made up"?


You are right, but for a different reason than you think:

Whenever there's a lawsuit against Meta, the plaintiff can do discovery and demand all this stuff. So the only defense is not to have it.

"Meta top management was made aware of all these harms, but did nothing about them." -- that's not what their lawyers want to hear.


I wonder if that team got created for PR purposes during those privacy debacles a bunch of years ago.

Bonus benefit from disbanding the team now: if more shit happens, can re-create the team as a show of action without actually changing anything.


I like the way you think. You'll go far in top management.


The first parts are certainly true, but private firms certainly did not "feel obligated to take" that mantle: they saw a chance to grab it and run.

Meta dissolving that team can mean two things, either that they lost hope to get away with it, or that they are confident that they will get away with anything, without even pretending.


If private firms are only beholden to the profits of their shareholders (i.e., the way modern capitalism is mostly structured), I agree with you and I think that's what we're seeing here: Facebook dissolving this team because they believe the costs (to them) outweigh the benefits (to them). That being said, I think this entire structure is arguable: why are private firms only beholden to shareholders when they're built on the back of public resources: infrastructure, education, talent, laws, knowledge, etc.


> the government, is unwilling to perform this basic function

Which usually happens due to lobbying by said private firms


I would say there should be checks and balances. Of course companies should be looking at what is harmful or not, but they shouldn't be the only ones. It's all about checks and balances.


Private firms and businesses are run by people. What you're essentially saying is that people should not care about how their actions harm society as long as it makes them a profit.

"If society didn't want me to kill all of those people why didn't they stop me?" is not really a good look. Abdicating all of your personal responsibility to "society" is just an excuse to be a sociopath.


This is woefully unrealistic.


I agree, that business is like a game, and it is up to the government to set the rules of that game.

If profit is how you win, then any large enough business will do anything within the bounds of those rules in order to increase profit. You wouldn't ask a basketball team to stop taking advantage of whatever edge it can to win, would you?


In this game the players influence referee appointments.


The issue here is the players are making the rules.


> Group was put in place to address potential downsides of the company’s products

Oh, you mean like how Meta has access to your data, sells it, and also sends some/all of it to government intelligence agencies?

Or how they manipulate, censor and promote content, shaping public opinion (of Facebook users anyway)?

Or how Facebook is addictive and exacerbates anxiety in many people?

Those kinds of downsides? Gee, I wonder what could possibly happen to such a group.


Simple enough to trust employees to be conscientious and good

Most engineers I know are responsible and ethical, and have strong moral compasses.


Don't worry everyone, Meta only cut the Responsible Innovation Team because they stopped innovating years ago


get rid of all the Ethical AI folks too. in theory, it might be useful, but in practice, it's just bureaucracy


Perhaps someone or a group of people should put up the money to keep this team going outside and independent of Facebook, maybe as some kind of "Responsible Social Media" non-profit entity. It sounds like important work, and without Facebook or some other for-profit employer being able to control what they say, imagine the insights the rest of the community would be able to get.

Something similar happened in Australia when an old climate change denying government disbanded the "Climate Commission" (a government department tasked with investigating climate change) to save face. The employees of that particular department packed up and left their office, got funding, and are now known as the non-profit Climate Council. They have remained a thorn in the government's side ever since and are still going almost a decade later https://en.m.wikipedia.org/wiki/Climate_Commission


It says much that those teams were really just a welfare program in the showcase window.

Good to see they get let go.


> slowed its hiring amid rumors of potential layoffs

From a Silicon Valley veteran: if you work there and read this, and you don't already have your resume circulating, it's too late. You want to be out there before all the other Meta employees.


From another Silicon Valley veteran: this was a horizontal team that lost its exec sponsor and so didn't have a clear way to make impact. This kind of thing happens all the time at companies and panicking is uncalled for.


How is it "too late" when there haven't been any layoffs? Are you claiming there's a mass exodus from Meta happening right now?


This is a thread about cuts being made. He's implying there will be more cuts. If you aren't circulating your resume today, you probably won't be tomorrow. Don't wait for the cuts to get to you before you start circulating

Unfortunately these times can be a bit rough to navigate. It can be tempting to cling to one's current position rather than move to a new company & be first on the chopping block


My experience is that having The Register run an article of the form “company $X cuts growth targets; cuts $Crown_Jewels team” is worth 1000x more than circulating a resume.


At least four of the responses to this got the point.

By the time layoffs or a mass exodus actually happen, you're competing with 100s of other good devs for the best jobs (not that you won't find some job).


The idea is if they fired 2,000 devs (for example), suddenly you have to compete with 2,000 people (who are all likely good devs) all immediately looking for a job.

This may not have been a problem in 2018, but it's definitely a problem in today's environment, with large numbers of Bay Area companies slowing their hiring.

If you look for a job before all this, you're in a much better position at least.

I wasn't around during the dot com era, but there were stories of devs ending up taking any job (programming or not) to make ends meet.


There are currently over a million open software engineering roles in the United States alone


The market has slowed but I still think engineers from firms like Meta are not really facing a tough job market. It's just much harder now to get your foot in, to join startups, or to get hired from a less prestigious company.


> Meta are not really facing a tough job market.

The word "facing" is present tense, but the concern here is about the future. Everyone will have trouble if Meta fires a bunch of engineers. People from Meta may have a little less trouble, and some companies will even open up new reqs to absorb some for cheaper. But there are a limited number of open positions out there. No real imagination or speculation is required here, since this has all happened many times before.


They're claiming that by the time the layoffs start, recruiters' inboxes will already be saturated with ex-Meta CVs.


If you wait until the layoffs happen, you'll be competing in a much larger pool.


Anyone who's been at Meta will be able to walk into any high paying job they want. I don't think it's an issue.


Not true. They'll be able to walk into some high paying job. Just maybe not the one they wanted the most, because some other Meta dev got it.


This vaguely reminds me of a Silicon Valley season 3 episode, where the Hooli CEO throws the entire Nucleus team under the bus to "take responsibility".

Cracks me up just how close SV (the series) is to what's actually out there.


Responsible innovation implies creating value. Why bother if you make more money by extracting value from your own users.


Did they form a new team to deal with the issues the "Responsible Innovation Team" discovered?


They don't want anyone quantifying just how irresponsible their "innovations" were.


Stop using that propaganda outlet.


Brilliant comment because this could mean Facebook or Engadget, and either way it fits.


Very good point. What is your recommended Facebook alternative?


The OP may be referring to linking to the WSJ


The WSJ may very well be a propaganda platform, but I think of it as a dog-whistle channel from the true owners of our society (the 0.01%) to the wannabes (the 1%). And it's good to see what's being signaled there.


How would the WSJ journalists (who are probably in the 20%) know what to write? Is the dog whistle message conveyed from the owners to the editors to the journalists?


Most likely that: the message would be encoded in the paper's policy and in the knowledge of what type of stories would be acceptable for publication. As somebody once said, most journalistic censorship is self-censorship.

The fact that the journalist himself is in the 20% is irrelevant, as it's his bosses who ultimately decide what gets to your/my eyeballs.


I like ello (ello.co). I also prefer to use Telegram and don't have FB installed on any of my devices.


Nginx

Edit: If I’m honest: S3.


I think these sort of "Responsible Innovation Teams" and whatever else they are called are double-evil for various reasons:

1) They are literally created by these companies as a means for deflecting attention from the real problems that are occurring at a higher / hidden level. For example, these people have no idea that Facebook has actively lobbied to prevent countries from enacting a digital tax that would more fairly spread the tax revenue associated with profits generated in countries around the world. There's nothing these people would ever do about that in their "work."

2) The people who take these jobs are generally quite terrible. They are willing to accept a paycheck from a company whose ethics and tactics they supposedly find reprehensible. If we take their jobs at face value, they are paid to show up every day and hate what they see and criticize it. It takes a certain type of toxic mental state to find that appealing, regardless of how much money you're making for doing it. And, if you're motivated by money to take such a job, well... What does that say about you?

3) These teams spread the false belief that these companies are so big and powerful that you have to try to "fix it from within," rather than doing what we have done so well for so long in the tech industry: we've burned these empires to the ground and started all over, with something better/faster/[eventually] bigger.

4) People within the organization who know that these teams exist for regulatory / PR purposes think that they've been given cover to continue to act in shady and unethical ways. These sorts of groups perversely make it easier for the bad actors within these organizations to continue acting bad, and sometimes even act worse. Facebook and its current/former executives are the gold standard for this: the more you see "regrets," "internal dissent," etc, the more utterly depraved and shameless the behavior behind the scenes is.

In summary: Bad people working for bad companies that do bad things. Pointless jobs at best, a net negative at worst. These are toxic jobs that attract toxic people.

Thankfully, none of this really matters, because Meta/Facebook is slowly disintegrating and there's nothing Zuckerberg or anyone else can do about it.

(And before someone posts the inevitable "2 billion active users" response, do remember that these are network effect businesses in which the network effects need to be constantly replenished with new and exciting reasons to stay connected. If people are more excited to be connected somewhere else, you're dying. And, the sort of people who invent exciting new reasons to be connected don't work at evil corporations that silo "responsible innovation" off from the "real work.")


Maybe they found it was hopeless...


It's like FB itself is not actually harmful for society [0] /s.

0 - https://www.forbes.com/sites/kashmirhill/2014/06/28/facebook...


It was not responsible or not innovative?


When did Ethics and Facebook ever come in the same line, lol!


I'm no fan of this. While I don't know what their value and impact was (and this could have led to the team's demise; it's also Meta), I doubt they were fully empowered to ask the right questions and push for the right change.

A company with this level of influence over the world should work to be self-calibrating: don't significantly manipulate people, block the spread of hatred, don't let unknown actors influence global politics, don't design your products in a manner that others can use them in ways you don't expect that cause people harm, etc. Does that still happen at Meta after this? I doubt it.


>While I don't know what their value and impact was

Seems like that's an important piece of information to have before passing judgement on the decision, I think.


Contextually, my `-and this could have led to the team's demise` was an attempt to add a fair qualifier to my statement, but probably missed the mark.

I agree, but given the company, context, and reporting, the only people that would categorically know are Meta insiders, and journalists poking at them...not most casual HN commenters. :)


I realize this doesn't need to be repeated, but companies have one mission and one mission only: to make more money. Anything that works against that is a distraction. Turns out spreading hate, manipulating global politics, and exploiting human psychology is extremely lucrative.

The problem isn't misguided or unregulated corporations; the problem is capitalism.



