Apart from your last paragraph, which is a little contentious, I agree with what you say.
I don't understand why people here require that every tech CEO be some professional programmer or engineer. I don't think you _need_ to be that deep in it as the CEO. There are plenty of leaders at OpenAI who already fit the bill.
Sam is good at getting funding, seeing the bigger picture, and rallying towards a cause. That is the job of a CEO. It doesn't matter (imo) that he doesn't know how many parameters the next release will have. All that matters is he knows the impact of the new release and knows who to defer to for actual technical decisions.
The joke is that "taste" usually implies you have some strong personal sense of self and style, but if you walked into tech offices in the bay area everyone looks like that and acts/talks the same.
So it's ironic that these same people are talking about "taste" when they ostensibly have very little.
The thing is, do humans _need_ most software? The fewer surfaces that need to interact with humans, the less you need humans in the loop to design those surfaces.
In a hypothetical world where AI agents or assistants do the vast majority of random tasks for you, does it matter how pleasing the DoorDash website looks to you? If anything, it should look "good" to an AI agent so that it's easier to navigate. And maybe "looking good" just amounts to exposing some public API to do various things.
UIs are wrappers around APIs. Agents only need to use APIs.
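To make that concrete, here's a minimal sketch of what an agent-facing surface might look like instead of a UI. Everything here is hypothetical: the function name, the menu, and the response shape are made up for illustration, not any real service's API.

```python
# Hypothetical agent-facing endpoint: structured in, structured out.
# The agent never sees pixels or layout, just a machine-readable contract.

def place_order(menu: dict[str, float], item: str, qty: int) -> dict:
    """What an imagined 'order' endpoint might accept and return."""
    if item not in menu:
        raise ValueError(f"unknown item: {item}")
    # Return a structured payload; "looking good" to an agent
    # just means being predictable and parseable.
    return {"item": item, "qty": qty, "total": round(menu[item] * qty, 2)}

menu = {"burrito": 9.50, "taco": 3.25}
order = place_order(menu, "taco", 3)
print(order)
```

The point of the sketch: once the contract is structured data rather than a rendered page, the human-facing design work largely disappears from that surface.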
Yes, if it's not redundant software. The ultimate utility is to a human. Sure, at some point humans stopped writing assembly language and employed a compiler instead, so the abstraction level and interfaces change, but it's all still there to serve humans.
To use your example, do you think humans will want to interact with AI agents using a chat interface only? For most tasks humans use computers for today, that would be very unwieldy. So the UI will migrate from the website to the AI agent interface. It all transforms, becoming more powerful (hopefully!), but won't go away. And just as the advent of compilers led to an increase in the number of programmers in the world, so will AI agents. This is connected to Jevons paradox as well.
I think "taste" is definitely an overused meme at this point; it's like tech twitter discovered this word in 2024 and never stopped using it (same with "agency", "high leverage", etc).
Having read the article, I think I see the author's argument (*). I think "taste" here in an engineering context basically just comes down to an innate feeling of what engineering or product directions are right or wrong. I think this is different from the type of "taste" most people here are talking about, though I'm sure product "taste" specifically is somewhat correlated with your overall "taste." Engineering "taste" seems more correlated with experience building systems and/or strong intuitions about the fundamentals. I think this is a little different from the totally subjective, "vibes based taste" that you might think of in the context of design or art.
Now where I disagree is that
1. "taste" is a defensible moat
2. "taste" is "ai-proof" to some extent
"Taste" is only defensible to the extent that knowing what to do and cutting off the _right_ cruft is essential to moving faster. Moving faster and out-executing is the real "moat" there. And obviously any cognitive task, including something as nebulous as "taste," can in theory be done by a sufficiently good AI. Clarity of thought when communicating with AI is, imo, not "taste."
Talking specifically about engineering - the article talks about product constraints and tradeoffs. I'd argue that these are actually _data_ problems, and once you solve those, tradeoffs and solving for constraints go from being a judgement call to being a "correct" solution. That is to say, the more information you provide to your AI about your business context, the less judgement _you_ as the implementer need to exercise. This thinking is in line with what other people here have already said (real moats are data, distribution, execution speed).
I think there's something a bit more interesting to say about the user empathy part, since it could be difficult for LLMs to truly put themselves in users' shoes when designing some interactive surfaces. But I'm sure that can be "solved" too, or at least, it can be done with far less human labor than it currently takes.
In general though, tech people are some of the least tasteful people, so it's always funny to see posts like this.
Well, considering basically the entire market was down these past few days, Google included, it's unlikely to be attributable to this paper alone. It's most likely correlated with general war/trade-route-restriction/potential-recession fears, or at least more correlated with those than with this paper.
This paper was released a year ago and was probably part of how Google got to 1M context before other labs.
Answer: Any job where the majority (or all) of your work can be done strictly by using a computer, and for tasks that have easily verifiable and objective outcomes. And from an economic perspective, jobs that have the highest cost (i.e., the highest margins for AI companies to replace) have a strong economic incentive to be automated first. So Software, Finance, Accounting, Law, etc.
Yes - this means software engineers are likely the first to go, along with other high paying computer jobs.
One thing that irks me about this place is the great confidence with which people make claims when they have zero idea about stuff outside of their domain.
I know ten people who work across Accounting and Finance in high-level positions who have all told me that in the past few months, the LLM steam has worn off and they aren't seeing any material benefits.
Yeah, and 2s has not been doing too hot for a few years now. Jane Street I buy - they tend to recruit a lot of CMU students. But definitely fewer than 15 of the new grads they hire each year are from CMU. They maybe hire on the order of 50-100 new-grad SWEs a year.
It will probably be a lot worse since white collar workers (especially the ones that AI is targeting, like banking, software, etc since they are super high margin jobs to automate) traditionally make and spend more than the average worker.
These are the people getting mortgages and sending kids to private school and whatnot. If their spending power suddenly drops to 0, it's probably going to be pretty bad. I wonder what the housing market would look like in that case.
I agree. I think most companies would be better off being 100% AI driven, since synchronization problems for agents (or whatever the fad will be) are likely much less severe than human social synchronization problems, and agents have richer information transfer between "workers" (so less ambiguity, fewer tradeoffs to be made, etc).
As soon as a person enters the loop you add a manual sync point that probably doesn't need to be there. I think this is why you are increasingly seeing companies tell their people to be "on the loop" or "out of the loop" with their AI. The less syncing with a person, the better. And I think once this experiment runs its course, we will probably find out that human social interaction matters much less than we thought it did, especially for super transactional things like a corporate job where most of your work is done on a computer.