I think the answer is to do multi-disciplinary work.
Venture outside of pure theoretical math. Learn some other domain knowledge and combine it with your mathematical oomph. That's the easiest way to make an impact now rather than potentially decades later.
> "Requirements documents that were once a page are now twelve. Status updates that were once three sentences are now bulleted summaries of bulleted summaries. Retrospective notes, post-incident reports, design memos, kickoff decks: every artifact that can be elongated is, by people who do not read what they produce, for readers who do not read what they receive."
Great article. The "elongation" of workplace artifacts resonated with me on such a deep level. Reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays. Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafts up a twelve page spec, at least you know they care enough to spend a lot of time on it).
So now the "productivity-gain bottleneck" is people who still care enough to review manually.
This paragraph hit home with me as well. I work at a large tech company that's a household name and the practice of using AI to pad out design documents has become totally out of control over the last 4 or 5 months. Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible. Why the fuck should I -- along with five other engineers -- bother to read and review your design if you didn't even bother to write it?
I'm taking a distance uni class now to maybe move away from dev work, and the work I submit for review and comment by other students all comes back with AI-generated feedback. It's making me go insane. If I needed AI feedback I'd go ask an AI, but for any communication now it's a coin toss whether you're getting a human reply.
Yeah, but I guess it's harder for the professor to check up on too. It's a course on a specific kind of creative writing, so human feedback would be QUITE helpful instead of AI responses about how good parts are.
I'm starting to see pushback for this. I know a Product Manager that was fired for padding his documentation with AI to the point there were mistakes and wasted work due to AI hallucinations.
I see it even on my GitHub project: issues and pull request comments get longer, responses get longer, all generated by AI and read by AI. This text is no longer for human consumption; it exists to provide context to AI.
Seems like we risk the atrophy of Western software while it is surpassed by software developed in places and cultures where they don't "move fast and break things".
We have never needed to "move slow and fix things" more than right now.
I'm not opposed to "move fast and break things," but our problem is that it's the only lever we pull. For every "... and break things" there needs to be a phase of "clean up, everybody do your share." It seems the modern development framework is allergic to cleaning up. There are so many excuses given, but if you don't clean up, you can't move fast.
In physical reverse engineering there's a common pattern people use: buy 3. One to break, one to modify, one to reference. You need the one to break because you're going in blind. The problem has a lot of unknown unknowns. It's often difficult to take things apart (especially these days) without breaking them. But the second time it is much easier to do nondestructively.
But I'm also a big fan of taking time to think and understand. To gain deep understanding of things. I've always found this to be helpful and allowed me to move faster in the long run but I often face resistance to this because everyone wants me to "move fast".
The problem is I think people have the illusion that you can run a marathon by doing consecutive 100m dashes. It sounds nice in theory but I think there's no surprise that burnout is at an all time high and things are getting sloppy.
It's weird, we've systematically created a work structure that has the same principles as scams: frame everything as an emergency so the mark doesn't have time to think. Why the fuck are we scamming ourselves?
What I find particularly irritating is that you can actually prompt the fcking AI to be short.
> Writing documentation is arduous and a little painful, which as it turns out is a good thing as it incentivizes the writer to be as succinct as possible.
It takes more effort to be brief, even for humans. Good documentation writers were always brief.
Simply saying "be concise" isn't enough. I often have Claude write first drafts for me (which, for the record, I review completely and rewrite as needed before publishing) and even when told to be concise, there are times when what comes out is unusably long and wordy.
I've seen some of this as well. It's OK to send me an agentic screed if it's just going to be consumed by my agent, but I want a nicely written summary up top that was made by you... I'm starting to value poor grammar, typos, and other signs of legitimacy
I work under the assumption that the primary audience of everything I write at work is an AI. Managers will take what I send and have it summarized and evaluated by some chatbot or agent. (Of course, I cannot send them the summary myself.)
So like ATS checkers for resumes, I find myself needing an AI checker for my text.
Ultimately, we will have AI write everything for another AI to parse, which will be a massive waste of energy. If only there was some agreed-upon set of rules, structures, standards, and procedures to facilitate a more efficient communication...
If that's your manager, then sure, do so. But make sure your manager actually is such a manager.
If I were your manager, and you sent me your seventeen-page AI-generated thing because you think I'm just gonna summarize it anyway and expect something long: you misread me.
I make a point all the time, to everyone that won't listen, not to send me walls of text. I'm not gonna read them. I'm gonna ignore them and close your bug reports until you spend the time to make them short and legible. If you use AI for that, I don't care. But it had better be short, make actual sense when I read it, and hold up when I verify it. If I wanted to just ask AI, I'd do it myself. You have to "value add" on top of the AI if you want to be valuable yourself.
I agree. I send 2-sentence replies to most things my boss's boss sends me. He’s near retirement; dude doesn’t want me to send him a book. He knows the thinking under the work our team is doing is solid.
The only time I send something longer is if it’s a postmortem for some prod issue, which I write by hand.
I use AI every day, often multiple agents at once, but I know when it’s appropriate and when I need to be the one thinking really hard about something.
I go through this with my vendor budgets and contract negotiations right now. We are encouraged to put all their proposals into AI and have it refute each point. I know for a fact they are putting my negotiations into their own AI and having it counter-propose my points. It's an arms race of my AI fighting against their AI. Where does it end?
This is the focus of my new startup, which uses a single-layer model to transform bullet points into bullet points. Please invest in IdentityMatrixLLM, LLC, etc.
I have a hard time finding any reason for the S̶k̶y̶n̶e̶t̶ owners of the Skynet not to get rid of that walking bipedal inefficiency called humans.
> Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafts up a twelve page spec, at least you know they care enough to spend a lot of time on it).
I feel the loss of this signal acutely. It’s an adjustment to react to a 10-30 page “spec” chock-a-block with formatting and ASCII figures as if it were a verbal spitball … because these days it likely is.
When I read some written content, before AI, I learned a few different things in order. First, just by its mere existence, I learned that someone had found an idea worth expending some effort to express. Next, I would learn the words of the content. Next, I would usually acquire some kind of knowledge that I was able to synthesize or extract from the content. That last step isn't a given, but it's very likely to happen given the pre-filter implied by the first bit of information I learned.
There's no pre-filter anymore. It's exceedingly hard for me to quickly determine how important a person thinks an idea is, or how much thought they've put into it, in the age of AI, and so there's no guarantee that if I invest the time to read the content there will be a proportional amount of meaning available for me to extract. This risk always existed even with works written by humans, but now it's overwhelming, and it has decreased my overall exposure to new ideas I didn't explicitly go looking for, because I have a much higher expectation that information placed in front of me unsolicited will just be a waste of my time.
And there's no longer any difference between the 'hey, here's an idea I had' document and the 'this has undergone a lot of review and has been signed off by all of the stakeholders' document. Which one do you take as canon that can't be changed, and which not, when it all looks like the same AI slop?
Does anyone know where that style came from? Did it become popular in listicles or on github or something? Or is there one person deep inside OpenAI or Anthropic who built the synthetic data pipeline and one day made the decision on a whim to doom us to an eternity of emoji bullet points?
I think it likely performed well in A/B preference tests with chat users.
I've noticed Claude does far fewer listicles than ChatGPT. I suspect that they don't blindly follow supervised learning feedback from chats as much as ChatGPT. I get Apple vs Google design approach from those two companies, in that Apple tends not to obsess over interaction data, instead using design principles, while Google just tests everything and has very little "taste."
In general I feel like the data approach really blinds people to the obvious problem that "a little" of something can be preferable while "a lot" of the same is not. I don't mind some bullet points here and there but when literally everything is in bullet points or pull quotes it's very annoying. I prefer Claude's paragraph style.
I suppose the downside is that using "taste" like Apple does can potentially lead a product design far, far away from what people want (macOS 26), more so than a data approach, whereas a data approach will not get it so drastically wrong but will never feel great.
I’m given to understand that Anthropic uses something called Constitutional AI, where there is a central document of desirable and undesirable qualities (as well as reinforcement learning) whereas OpenAI relies more heavily on direct human feedback and rating with human trainers evaluating responses and the model conforming to those preferences.
I also much prefer the output of Claude at present.
Yeah, and much of the HN crowd aspires to have better taste than average. So if the supervised learning uses average human trainers, it will most likely be seen as having poor taste by much of HN.
I think it's funny how we are all tweaking LLM output by adding instructional tokens instead of, say, finding a vector that indicates "user asked for emojis", and forbidding emoji tokens in the sampling unless that vector passes a threshold.
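For what it's worth, the sampling-time idea above can be sketched in a few lines. This is a toy illustration only: the token ids, the "user asked for emojis" direction vector, and the threshold are all invented, and a real implementation would operate on the model's actual embeddings and logit tensors rather than plain Python lists.

```python
import math

# Pretend these vocabulary ids decode to emoji (invented for illustration).
EMOJI_TOKEN_IDS = {7, 8, 9}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mask_emoji_logits(logits, prompt_embedding, emoji_direction, threshold=0.5):
    """Forbid emoji tokens unless the prompt's projection onto the
    'user asked for emojis' direction clears the threshold."""
    if dot(prompt_embedding, emoji_direction) >= threshold:
        return logits  # user plausibly wants emojis; leave distribution alone
    # Setting a logit to -inf gives that token zero probability after softmax.
    return [(-math.inf if i in EMOJI_TOKEN_IDS else logit)
            for i, logit in enumerate(logits)]

def greedy_sample(logits):
    return max(range(len(logits)), key=lambda i: logits[i])
```

In practice this is roughly what logit-bias / bad-words processors in serving stacks already do for fixed token lists; the (hypothetical) twist here is gating the mask on a learned steering direction instead of on an instruction in the prompt.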
All of the PMs I interacted with across companies started using Notion for everything at the same time. Filling Notion documents with emojis was the style of the time.
This slightly pre-dated AI tools becoming entirely usable for me.
It's the style of "blazing fast library made with :heart: in rust :crab:" that was popular in github README.md. My guess is that because the models are told to use md they overfit to the style of md documents too.
It was an annoying way of writing on places like LinkedIn and marketing copy for 3 or 4 years before LLMs appeared on the scene. I remember realising that I can't read them (my brain jumps between the words and the picture making it hard to focus on the content) before AI appeared.
Both predate common use of LLMs, unless my memory is even more shaky than usual on this. I'm sure I saw them appear a fair amount on GitHub and related project pages, but I couldn't tell you more specifically how they started & grew.
Somehow they must have been over-represented in the training data (or something in the tokenising/training/other processes magnifies the effective presence of punctuation) because I don't remember them being that common and LLMs seem to love spewing them out. Or perhaps it is a sign of the Habsburg problem: people asked LLMs to produce README files like that because they'd seen the style elsewhere, it having spread more organically at first, and the timing was just right for lots of those early examples to get fed back into training data for subsequent models.
You're not supposed to read the Jira ticket. You're supposed to paste the link along with instructions for your Claude agent to "do this ticket, no mistakes," then raise an MR for whatever it writes. The text is a wire protocol between agents. If a PM doesn't care enough about the requirements to write, or even read them, then would they even notice if the code works or not? Why would they care about that? What does "works" even mean if no human knows the spec?
Everyone's job is to please their manager. Shipping functional product features is their job only if that's what their manager wants. In functional companies, that should be the case. There aren't many functional companies.
Indeed. I've spent my professional career seeking out positions at companies of increasing prestige and technical renown, each with a higher reputation for professionalism and performance than the last. And yet this invariant has held in every position.
As far as I can tell, the only difference between each company has been the quality of the manager I was supposed to please, which I have noticed (perhaps predictably) is not strongly correlated with the company's reputation or success.
Don't forget that they're also functionally structured. The managers don't own products or features, they manage functions (engineering, sales, design). And in practice, they usually only manage people, with little control over the function. So the managers aren't particularly interested or tied to shipping product features. The PM maybe, but they don't have reports or own much.
That is the current fad, so that is what a lot of bosses like. There have been different fads in the past, there will be different ones in the future. Some of the fads have a useful core that remains today, some of them are completely gone. All of them were overhyped at the start.
They technically are, but at least in the US we're allergic to "anti-business" punitive fines and liability, so it ends up as a cost of doing business. Wouldn't want an anti-business law to scare away a company leaking everybody's data. Think of the economy!
It does not at all indicate the effort that went into doing the thing. Clearly not.
I propose that what you enjoy is a token of the appearance of effort: easily constructed, easily observed, and well suited to low-effort handling of proxy objects that stand in for actual work.
Recently I reviewed some vibe-coded stuff and sent a list of issues and suggestions to the “author,” figuring he’d read it and then go through each one with Claude until fixed.
Instead he didn’t read it at all, and just threw the whole thing at Claude Code as a big prompt. The result was… interesting!
This is happening with coworkers now. It’s honestly insulting.
They put up a PR with all the obvious tells, the markdown table of files that changed, the description that basically parrots back things the human obviously wanted them to stress in the task (“this implements a secure, tested (no regressions) implementation of a Foo…”), and the code is an absolute mess of one-off functions placed in any random file with no thought to the way the codebase is actually organized.
Then I give feedback after spending like an hour going through their 2000-line change, and back comes an update with a very literal interpretation of my feedback that clearly doesn’t really understand what I was even saying. Complete with code comments that parrot back what I said (“// Use the expected platform abstractions for conversion (not bespoke methods)”).
Reviewing coworkers PR’s feels like I’m just talking to the LLM directly at this point, but with more steps and I have less control over the output.
At the last place I worked, if this happened with someone new to the company or the team, I would find a polite way to say "do your job and fix this shit," and it worked.
Some people have put me on their blacklists after these interactions, sure, but they're the exact people I don't want to work with again. The important thing here is that I've never done someone else's work for free.
You tell Claude to review it and if it breaks something you blame Claude. No one can get mad at you for it because they don't want to look like luddites.
That’s what this whole thread is about. Appearances of productivity, laziness, and the offloading of real work downstream by sending of “looks good enough” ai generated work.
Checkmarks as bullets on progress/comparison lists I really like, assuming you mean //. The emoji properly put me off looking deeper into whatever it is that I am looking at unless I was really interested to start with.
I wish cultural norms around documentation would shift to "pull" rather than "push" — generating "views" of organized knowledge on the fly instead of making endless rearrangements of the same information. It's become too cheap in terms of proof of (mental) work to spray endless pages of notes, reports, memos, decks, etc. but the "documentation is good" paradigm hasn't caught up yet.
Ideally AI would minimize excessive documentation. "Core knowledge" (first principles, human intent, tribal knowledge, data illegible to AI systems) would be documented by humans, while AI would be used to derive everything downstream (e.g. weekly progress updates, changelogs). But the temptation to use AI to pad that core knowledge is too pervasive, like all the meaningless LLM-generated fluff all too common in emails these days.
> Reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays.
A huge AI signal to me is not em dashes, not emoji, not even the "not X, it's Y" construction which oh god I'm falling into the trap right now aren't I.
It's a combination of these factors plus a tendency to fluff out the piece with punchy but vague language, often recapitulating the same points in slightly reworded ways, that sounds like... an eighth grader trying to write an impressive-sounding essay that clears the minimum word limit.
Did the bright sparks who trained these things just crack open the printer paper boxes in their parents' homes filled with their old schoolwork, and feed that into the machine to get it started?
Another commenter above this proposed a pretty compelling theory for the source of this style: SEO-inflated prose online. If the models were trained on the internet, "higher quality" content needed to be indicated to them during RL somehow. Search engine ranking is an easy-to-obtain metric that's kind of like "quality" if you squint, turn around, and lobotomize yourself. So the AIs have a high likelihood of producing the kinds of content that is rewarded by Google SEO.
Search engines only show a snippet of the content, and that always looks convincing. It's the whole content that is off, and, unfortunately, seconds or minutes can pass before you realize it (if you ever do).
Search engines track that. It's what a "long click" means. If you click a result, then return fairly fast and keep searching or clicking other links, they infer low quality (for that query at least).
Well, and Google's proxy read of "quality" might have flawed assumptions. A concise page where you get what you need and leave quickly might read as "high bounce rate".
Another hint is when the structure and formality of the response doesn’t match the medium. Like when someone sends you a whole article back in DMs along with headings for the sections.
Even though real humans write like that when writing documents, they never did that in informal messaging.
I work for an "AI-native" company now and have found this to be the case.
EVERYONE (engineers, pms, managers, sales) uses Claude Code to read and write Google Docs (google workspace mcp). Ideas, designs, reports. It's too much for one person to read and, with a distributed async team, there's an endless demand for more.
So for every project there's always one super Google Doc with 50 tabs and everyone just points their claude code at it to answer questions. It's not to be read by a human, it's just context for the agent.
Everyone cranks out endless pages of slop, that everyone else then has to ingest. Anthropic collects a fee from all of you and is the only winner here.
I'm looking forward to the impending crash when the AI providers actually start charging what it costs to run these models. It's going to be a bloodbath, and it's going to be cathartic as fuck.
They are so far removed from the process that they can claim to be any percentage more productive and no one is able to contradict them. Call it "productivity theatre."
The economic reality check is going to be devastating. It won’t be a crash of AI as a tech; it will be a crash of every ‘AI native’ company that no longer even knows what its product is.
I really hope that more people become aware of how much of our society is turning into kayfabe.
Just think of the rise of all the new types of ____ theatre like this that have been coined over the last decade or more. It's not an accident or fad, it reflects something true that's happening to society at large.
Everything authentic and valuable is being turned into something inauthentic, based only on conjured up perceived value and competition to fulfill the perception, and not real or useful purposes. It's all in the service of propping up systems that no longer function for the majority of people, or even for basic needs. And until a lot more people are willing to point out that the emperor is quite naked, even at their own social or financial risk, this will continue to rot everything down to the foundation.
I used to have a colleague (senior engineer) who never cared to write a single line in Pull Request descriptions, as if other people had to magically know what he meant to achieve with such changes.
Now? His PRs have a full page description with "bulleted summaries of bulleted summaries"!
My colleague had a problem with commit messages, so now they're all written by AI. I don't know what depth of hell he managed to get the prompt from, but they're all now in the format "Updated /path/to/file: fixed issue in thingamabob", which means they're all at least 200 characters long and half of it is the file path, an absolutely pointless thing to put in a commit message. The best part is that whenever you look at GitLab or GitHub, instead of seeing the commit message next to the file you just see the file name again, then the message is cut off.
Unfortunately, there is pressure to treat this stuff in good faith. Maybe the PR author really did write all this. Maybe they really did spend 6 hours writing this document.
So I approach it in good faith, but I do get upset when people say "I'll ask claude". You need to be the intermediary; I can also prompt Claude and read back the result myself. If you hire an employee to do work on your behalf, you are responsible for their performance at the end of the day, and that's what an AI assistant is. The buck stops with you. But I don't think people understand that, or understand that they aren't adding value. At some point you have to use your brain to decide if the AI is making sense; that's not really my job as the code/doc reviewer. I want to have a conversation with you, not your tooling, basically.
> I do get upset when people say "I'll ask claude"
The dude is just acting like a manager with a technical employee (agent) who does the hands-on work. If you are upset about this you should be hopping mad about the whole manager-director-VP-SVP hierarchy above this dude.
As long as each part of the hierarchy understands what they need to know at their level and what they produce, I have no problem with "the whole hierarchy".
You're saying this as if it's some rebuttal ad absurdum, when it's absolutely the case: when the higher layers don't understand what they do, we have a problem with that too, and that's been true since forever. Remember Dilbert and Office Space, and making fun of the ignorant middle managers and execs?
In this case, what we're complaining about is coders not understanding the code they ship (because some AI wrote it and they don't bother to review it or guide the AI fully).
I just stopped reading my work emails and the announcement channels. Everything that actually matters either ends up DMed to me or shows up in my calendar.
> Reminded me of when I had to be extra wordy to meet the 1,000-word minimum for my high school essays.
Minimum word counts are the greatest disservice high school and college have ever done to future communication skills. It takes years for people to unlearn this in the workplace.
Max word counts only, please. Especially now that AI makes it so easy to produce fluff with no signal.
I write the words that I hear in my head, as though I am speaking. With the exception of timed, in-class essays, I always turned in papers far in excess of any minimum during high school.
In college, I took a constructive writing course because I thought "Hey, easy A!" After the second or third week, the professor told me that, while the class had a word minimum, I would also be given a separate word maximum. She said I needed to learn brevity and simplicity, before anything else.
The point being: I was able to cruise through high school with my longwindedness as a cheat code, never stressing about minimum lengths, despite my writing being crap in other ways.
Although I have regressed in the two decades since, it helped me a good deal. I am grateful to that professor for doing that.
I write a lot and have on several occasions tried dictation as an initial draft authoring step. It was trash every time.
Good for thinking through a concept but unsalvageable in the edit phase. Easier to throw away and rewrite now that you know what to say.
Nowadays I like conversation as an ideating step. Talk to a bunch of people, try to explain yourself until they get it, see what questions they ask. Sometimes in HN threads like this :)
Then write it down.
You get super high signal writing where every sentence is load bearing. I’ve had people take my documents and share them around the company as “this is how it’s done”
It can take weeks of work to produce a 500 word product vision document. And then several months to implement, even with AI.
Hmm... when I really care about the quality of something, I basically write what I think/speak, then try to edit it down by half. I don't find it unsalvageable, but the editing does require an order of magnitude more time than the initial draft of thoughts vomited into the keyboard.
No because the document is not the work. Management wants someone to figure out the solution to their problems. The document is just a step in solutioning.
Without the doc, others would have to re-do all that work if you get hit by a bus. Or you’d be stuck in endless meetings conveying the vision instead of figuring out the next problem.
Document length is inversely proportional to the quality of your thinking/insight. When you create fluff, everyone can see you didn’t do the work.
It's going to depend on the type of team and environment you work in. Probably on how senior you are as well.
If your boss asks you for specific documents and expects a quick turnaround, and you regularly take 3 weeks or whatever to produce them, then yeah probably.
If your boss generally leaves you alone to find and solve problems on your own, then probably not.
I design boardgames and it's easy to write a lot of rules. It's more difficult to write concise rules. Most of my time is spent editing rules to their absolute minimum.
"I have made this letter longer than usual, only because I have not had time to make it shorter." - Blaise Pascal
Reminds me of how I document procedures. I spend a significant amount of time thinking about how to write things so that I provide enough information for a Jr to understand each step (and hopefully learn something) without over explaining. Organization is also important.
I had the opposite issue. Writing was agony, and every section would be written, reviewed, and rewritten to get my point across, only to be tortured by a minimum word count that was still 20% away after I'd said all I could think of saying.
I've gotten better at phrasing myself adequately in one go. Rote mechanical memorization has also made writing itself cheaper. (read my username)
I can now yap quite adequately over text, yet I regularly find AIs at a minimum 2x as verbose as my preferred phrasing after manual word mashing.
When writing on paper, either I will pause thinking enough, or will sometimes lose where a thought was going. I am much faster at typing than writing, so I end up with more, then edit/delete afterwards (if I feel like writing well). I am much worse at writing long-form thoughts than I was back in college, now that 99% of what I do is type.
An odd tradeoff of my verbal-based writing seems to be that I am a fairly slow reader. I read aloud in my head, albeit a bit faster than I could speak, but I still hear the words as an internal monologue.
When discussing this a few times with friends, I've learned how different everyone's experiences are when bridging thoughts=>speaking, thoughts=>writing, thoughts=>typing, and text=>thoughts (or even text=>understanding).
Same as the heavy focus on rewording in your own words: basically teaching you to plagiarise by cheating. I find it distasteful.
Even though near-copying is everywhere (patents, graphic design, business), in those other areas it is often applauded and less obviously deceptive.
We talk about countries copying e.g. Japan was notorious for it. I think the underlying motivation there is ownership - greedy people feeling they own everything (arts and technology). "We own that and you stole it from us" along with the entitlement of never recognizing when copying others.
Considering that many high school kids won’t want to put in any effort at all, how else do you convey the amount of detail and effort you expect for a given writing assignment? It’s an imperfect proxy but I can’t think of a better one.
Yeah. 1000 words is not a long essay that requires padding, and any competent teacher marks an essay that hits 1000 words mainly through repetition and bad sentence construction much lower than one discussing the subject matter in a suitable level of detail, and probably lower than a better-written essay which gets marks deducted for only having 985 words.
Since "write an essay" can be anything from three paragraphs to a 50 page paper and the teacher probably doesn't think either is the appropriate response to the task, some sort of numerical guide is a good starting point, even if a fairly wide range is a better guide than just a minimum...
(plus actually there are real world work tasks involving composing text that fits within a certain word range, and since being concise and focused isn't AI text generation's strong suit, I'm not sure those work tasks will disappear...)
Yeah, this is seemingly the only effective proxy for "write with some amount of depth." If the word count gets BS'd then it will be obvious when reading the output.
> Yeah, this is seemingly the only effective proxy for "write with some amount of depth." If the word count gets BS'd then it will be obvious when reading the output.
My high school professors had a really good solution to this:
Minimum word lengths but you have to write the essay in class by hand. You have 2 periods.
Some of us still wrote a lot, but having limited time and space (4 pages) really put a hard limit on length without saying so. In higher classes they started saying "I'm gonna stop reading after 3 pages, so make sure you get to the point."
I spent 2 years (coincidentally the same teacher for two years) in high school where once a week the only thing done that period was to write an essay (by hand) on some topic/prompt given immediately before beginning.
The grading was thorough and harsh. In college I was never graded harder on writing. My writing and comprehension abilities improved dramatically over that period of time.
With rubrics; or, more simply, the teacher could hand out an example essay at the start of the year that conveys the style and level of detail they are looking for when they assign an essay, then refer to it when making an assignment. Implicitly that gives a word count or number of pages, but still allows marking down for "too much repetition" or "needs more detail".
The ambiguous "needs more detail" thing would lead to a lot of students making it too brief in good faith, or too long in good faith, and both would be frustrated and angry. You can write a really good mini essay on a topic, and you can write a really good super long essay on the same topic.
Demanding that students read minds is not a good strategy. Specifying an expected length and checking for it is. The teacher should also check for other things, whether paragraphs logically follow, grammar, sentence structure, you name it. But don't make students guess.
But like, there must be a document somewhere stating that, right? In my state the school system has publicly documented procedures which are enshrined as laws.
A second of critical thinking on this topic makes it abundantly obvious why this line of questioning is anti-education and anti-intellectual. You write in school to practice. Not just composition, but grammar, spelling, individual sentences. Practice requires volume.
Subject yourself to a classroom of kids that you must teach to write, and throw out minimums. Will some students do fine? Sure, of course. But what of the others, the ones who turn in one sentence? Who never grow? Who have to go into math class and hear their idiot parents say "why are you learning that, we have calculators"?
> Subject yourself to a classroom of kids that you must teach to write, and throw out minimums.
Strawman argument; the correct thing to do is not to throw out the minimum word count and leave it at that, but rather to emphasize the role of brevity and concision while still being sufficiently thorough.
It's widely understood that LOC is a poor measure for many coding purposes, so it shouldn't be controversial that word count is an equally flawed measure for prose.
This ENTIRE argument is about whether or not a minimum word count is a good idea; perhaps improve your reading comprehension before pretending to know logical fallacies.
Almost your entire post history is angry and confrontational, just like here, and I was also talking about whether or not word counts are a good idea, obviously; right back at you about reading comprehension.
It can help to force depth into a topic that requires it, and more expression and emotion into writing where that is of value. It also forces the writer to think more deeply about the topic and organize their thoughts.
I hated it in high school, but I think I better understand it now. Part of the problem is that they never explained the "why" or the "how", just the requirement. I wasn't able to write anything more than a page or two without extreme difficulty until college, when the requirements went up to 30 pages.
In theory, someone who can write a 30 page paper could effectively distill it down to a short memo when needed, summarizing their primary point(s). Someone who can only write short memos would have a hard time writing something longer one day if/when required. I was trying to do a knowledge transfer one day, opened up Word, and just typed 20 pages on everything I knew about a tool we used heavily, but wasn't documented anywhere. I don't think I could have done that before I was forced to write those longer papers in college.
Where I encounter it at the higher education level is that academic-level research almost universally has maximum word counts or page counts rather than minimums: if you think you can get your point across in fewer words, you should. No reviewer is going to object to the paper being too short, so long as you succeeded in making your case.
John Nash's Ph.D. Thesis is notorious for being short: it's still 27 pages (typed, with hand-written equations and a whopping total of two citations) but that's an order of magnitude below average. On the other hand, most of us don't invent game theory.
Students used to minimum-word-count essays sometimes have to do some self-retraining to realize that the expectation is that you have more that you want to say than you have room to say it, and the game is now to figure out how to say more in fewer words.
Off topic, and not to diminish Nash's work, but quite famously (I thought) Von Neumann and Morgenstern did a bit of the "inventing" too, and a bit earlier.
I guess, but have you actually encountered a teacher grading an assignment solely based on word count?
I certainly wish more teachers encouraged parsimony and penalized fluff and bullshittery, but I'd be surprised to find them doing it outside of some narrow cases where the point is just to make you write something at all.
They generally want to encourage their students to engage with the topic at a certain level and practice the thinking needed to research, structure, and implement an argument of a certain length. They want you to put at least 5 pounds of idea in the 5-10 pound idea bag.
If you're convinced you've hacked word economy and satisfied the assignment except for this goshdarnpeskyminimumwordcount, you're probably misunderstanding the lesson the instructor is willing to read through a bunch of bad writing to impart and cheating yourself.
That's actually the trick. If you assign word count, MLA style, grammar, you just have to look for the errors. You don't have to engage with the ideas at all, or provide conversational feedback - just cryptic notes in the margins, like "???" or "awk"
Journalists and writers are often given a deadline and a target length. "Give me 500 words of copy by the end of tomorrow." The editor and publisher of a magazine need to get all words and graphics ready by a strict and regular deadline.
The idea was to get people to include more substance: instead of just saying "Washington crossed the Delaware", to get students to include reasons why, impacts, further narrative, etc. IDK if it was effective or not. Probably at least a little; there are only so many ways to rewrite the same thing over and over. In my case, though, I submitted essays below the word count a few times, but since I actually included the content they were looking for, I didn't have any problems.
Audience is important. Devs should stick to the agile manifesto to communicate among themselves.
Decision makers want to see a wall of text in every project plan, decision document, and strategic plan. Not because they know anything about it, or even attempt to read it, but because they want to trust that you've thought about everything and provided a good recommendation.
AI is going to pull the wool over their eyes and they'll have no idea until it explodes in their face. I really think we're going to see a reversal of the 2000s high trust business environment, and as we move to a low trust environment, I hope you're all drinking buddies with your VP ;)
I remember my first semester university writing class, when on the first day the teacher told us we had learned to pad our writing in high school, and now we were going to learn how to be short and concise because every assignment would be limited to one page.
It was only after I had to manage others that I realized the logic behind a lot of these simplistic metrics and rules: they are in place to hold the worst performers accountable. A simple example is when I introduced flexible work hours. It was fine with most people, but there are always a few members who abuse the system; they stretch it to the very limit of what can be interpreted as "flexible". As a manager it posed a dilemma for me. I didn't want to take away this privilege just because of a few abusers, but it was both unfair and set a bad precedent if I allowed them to get away with it. And let's say they couldn't be easily fired. Most of my peers simply ended up going back to a system where people punched in and out.
Couldn't you just say to those few: "you can't, because I do not trust you"? You are the manager after all; your job is not to make them feel good but to make them work.
I don't think "some people on the team have privileges and others don't based on the manager's discretion" would be healthy in the long run either. Can you imagine interviewing for a team, asking about the PTO policy, and finding out that it varied like that? It would look pretty indistinguishable from "the people who that manager likes have special treatment" to me. You could hide it from prospective employees, but not knowing about it beforehand and then finding out from one of my teammates that the manager revoked their privileges (who presumably would have a chip on their shoulder about it and present the info with their own biases) would make me concerned that there was a bait-and-switch and now I'm stuck on a toxic team.
Yeah, I understand, but on the other hand you can't reward everyone with the same thing for different outcomes. This is exactly what happens with pay: some people earn more, some less. People complain about that too. Do you think that is toxic as well?
With people being people, and managing meaning there is no outcome where everyone is happy, this is why I am not going to be a manager. I just wanted an honest opinion from the OP about how to solve it, or whether it is even solvable.
A healthy company already needs processes for dealing with employees who aren't meeting expectations that don't involve revoking benefits like PTO. Those processes should be suitable for issues like this, rather than crafting punishments specific to whatever is going wrong.
An example of how a healthy company would deal with an employee who isn't meeting expectations? I honestly didn't think that needed an explanation, but okay:
The manager should be giving feedback to the employee, ideally as close as possible to the moment when expectations are not met (e.g. if someone acts poorly in a meeting, take them aside afterwards and explain what they did that was bad and what would have been a better way to act). The manager should offer help or resources if appropriate. If the issue persists even after it's been addressed a couple of times, they should bring it up in their 1:1s with the employee, mention that it's been a recurring problem, try to understand why it's happening, and see if there's a way to avoid the issue entirely. I've never been a manager, so I don't know exactly when and how this sort of thing needs to involve record-keeping with HR, but at the very least there should be meeting notes already being kept for their 1:1 conversations to track things like this (and plenty of other things; maybe the employee has given feedback about other teammates, non-teammate coworkers, or even the manager themself; all of this is important to keep track of for accountability purposes). The manager should make it clear that if the issue persists it might affect their ability to remain employed, and if it continues to happen, termination for cause is a last resort.
It doesn't need an explanation; I am curious about your opinion on it. Since you are not the OP of this thread it doesn't matter that much, but I am curious why he or she did not do just that instead of revoking the privilege for everyone... it doesn't make sense, does it?
Any system is going to have a free rider problem. I genuinely believe that if we stopped trying to force a large chunk of the population to look like they're busy when they have zero intrinsic desire to do anything well and will continually cut corners wherever they can, we'd reach a productivity golden age where there would be enough surplus for them to fuck off and be lazy out of the critical path. The stumbling block here is always the perception of unfairness, and it's a big one, but for anyone that really cares about their work or its quality, do you really want to always have to work with people who will only do the bare minimum to survive? Hopefully you aren't cold enough to want them to starve, but should they be forced to participate and drag everyone else down just to prove some kind of innate moral ethic? I wish that we as a society could approach this pragmatically instead of moralizing under a veneer of pragmatism.
Well, in many layers of overhead in companies people operate at the level of high schoolers, so it is no surprise unfortunately, that the output comes across like that too.
It's actually insane that this sort of thing is tolerated. It's a culture thing and frankly just rude. My org is pretty AI-pilled and this type of behavior just will not fly. I need to be assured I'm talking to a human who is using their brain.
If I paste something from an AI into chat, I always identify it as such by saying something like "my claude instance says this:". I also don't blindly copy paste from it, I always read it first and usually edit it for brevity or tone. Feel like this should be the absolute minimum for sending AI content to a person.
Even that is pretty useless because we have no idea what context "your Claude instance" has. All you're doing is dressing up some bullshit to look authoritative.
When I started my PhD I was already really good at typesetting with LaTeX. I started to bring in fully typeset works in progress for my supervisor to read through. These proofs often had fatal flaws. He asked me to stop typesetting until after the work had been verified because it looked too convincingly correct due to being typeset.
That was about 15 years ago but I've never forgotten it. Drafts should look like drafts. Scrappy work and proofs of concept should look as such. Stop fucking with people by making your bullshit, scrappy ideas look legit. Progress is a cooperative effort. It's not about trying to make people say yes.
Can confirm. I saw some fresh-out-of-college colleagues do this in text docs: all nice markup, but the text content was very drafty. I always sent them back to keep the formatting concept-y while they were still tuning the text.
There are people who use AI to solve problems, and then there are people who have completely offloaded all of their thinking to LLMs. I have a manager who, when asked a question, won't think about it even for a moment and will just paste paragraphs of AI-generated text back.
> The "elongation" of workplace artifacts resonated with me on such deep level
Well put. I generally skip AI-generated PR descriptions for this reason as they tend to miss the forest for the trees. Sometimes a large change can be explained by a short yet information-rich description ("migrate to use X instead of Y", "Implement F using pattern P") that only a human could and should write.
Hah, lately I've had one particular coworker demanding in code reviews that I provide more 'detailed' MR descriptions. (All of his are clearly AI generated.)
We need to demand better from our coworkers and from ourselves.
A young "AI native" coworker opens PRs with a three-screen slop description. I flagged that "I know he ain't reading all that, and therefore I ain't reading all that", so he should just give a half-screen overview at most. I expect that the PR description makes sense, is correct, and has been reviewed by the person opening the PR. You can still use agents for that, but with shorter descriptions there is at least a chance it's not complete BS.
> Professional formatting, length, and clear prose are no longer indicators of care and work quality (they never were, but in the past, if someone drafts up a twelve page spec, at least you know they care enough to spend a lot of time on it).
On the flipside what was that quote, something like "Sorry for the long letter, I didn't have time to write a short one"?
Whenever I see AI-generated content put forward for my attention, I extract myself from the situation with the minimum possible time expenditure from my side.
It's a sort of leverage: "I spend 5 minutes prompting so that you can spend 30 minutes reviewing." Not gonna happen, LLM buddies.
This is happening at my workplace as well. I am a senior leader, but I find it hard to push back on this. If something looks plausible and everyone has reacted with a thumbs up (but probably only skimmed the document), who is going to be the first one to say "what is this shit?"
The length itself is not an indicator per se, but you can sense when it is not honest. If others don't have a sense for it, complaining about it comes across as just complaining about something new.
But you've just perfectly described the tacit knowledge problem.
Yes, you can spend all your time writing docs, or just mentor a junior and let them grok the system through osmosis.
Also, your doc won't ever have 100% coverage unless you write an absolute tome. Tacit knowledge is the stuff that's so obvious you wouldn't even think to write it down in the first place.
NIMBYism has never been about preserving neighborhood character, or noise and traffic concerns. Menlo Park is not Big Sur. Sure, some concerns are reasonable and should be investigated, but most of the time they're bureaucratic distractions that have been weaponized by people who want to delay progress and protect their investment.
For most Americans, a house is their primary savings account, retirement plan, and probably where they keep the majority of their wealth. We don't build new housing in old neighborhoods because it would devalue the investment of too many people. Until we solve this problem (where people are incentivized to pull the ladder up behind them), we will always have housing shortages. It's just too profitable.
Anecdotally, what we found in Austin was a combination of two factors:
First, awareness of the futility and selfishness of "growth elsewhere" as a solution is much higher in younger people — and by younger, I mean currently under fifty. Generational turnover in Austin had been eating away at the NIMBY majority, and conversations about housing in Austin have long been polarized more by age than by left/right political sentiment. There's a caricature, with a strong vein of truth, of the old Austin leftist who has Mao's little red book on their shelves and thinks apartment buildings are an abomination, and Austinites of that generation are experiencing mortality. At the same time, younger people are adopting more and more urbanist mindsets compared to their parents.
However, I think a much much bigger factor was the influx of younger people, especially young people with experience of larger cities, diluting the votes of the older NIMBYs. Austin has been shaped by growth for half a century, but its "discovery" in the 2000s and very brief status as a darling of coastal hipsters (remember that term?) has had a lasting effect on Austin's popularity and its demographics. It's been twenty years since it was the "it" place for Brooklynites to visit, but in that twenty years, it's had a lot of exposure for young urban dwellers, and some of them discovered they liked it and moved here, bringing their comfort with dense living and their appreciation that growth can bring a lot of positives.
Personally, every homeowner I know in Austin has seen their houses depreciate significantly this decade, and I don't think it changed a single person's mind about Austin's housing policy. People who opposed the reforms are bitter about the outcome, and people who supported the reforms say it sucks for us personally, but it's what we set out to accomplish, and we're glad that it worked.
People see lower property taxes as a silver lining for short-term swings in the market, but I don't know anybody who thinks this is a short-term swing that they can ride out.
Nobody is happy about their property values going down long term. It exposes them to the risk of a big loss if they're forced to sell because of events in their life.
> Austinites of that generation are experiencing mortality.
This is such a funny and novel way of saying "old people in Austin are dying" I just had to point it out.
Also, I like the way this comment is written in general. Felt easy to read for its length, and most importantly the tone stayed fun and personal while still being informative and on topic.
> For most Americans, A house is their primary savings account, retirement plan, and probably where they keep majority of their wealth.
If you allow for increases in density, that house (actually the land beneath it, but still) becomes more valuable as it's redeveloped. So that American homeowner does benefit, by unlocking the upside of "evil gentrification" (or really, density increase).
That can only happen if the higher density coincides with equal economic growth in the neighborhood. Otherwise, the higher density could result in a negative home valuation trend.
Given that uncertainty, and that higher density can bring more traffic, noise, and crime, NIMBYs are arguably taking the correct position for wealth preservation and quality of life.
"Traffic" doesn't come from higher density, it comes from zoning bans on mixed-use neighborhoods which force people to drive everywhere. The "crime" argument is especially silly: why assume that higher density only ever attracts criminals? Usually, having more people around is a positive.
You can assume higher density brings "more crime" because more people means that keeping the same absolute number of crimes (which is the only thing people ever notice; every violent or sexual crime will be repeated in the news) requires a corresponding increase in the efficiency of crime-fighting, and American police aren't up to the task, even if they were motivated to do so.
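A toy per-capita calculation (every number here is invented for illustration) makes the point concrete: even if the crime *rate* stays flat, tripling the population triples the absolute number of incidents that make the local news.

```python
# Invented numbers, purely illustrative: a constant per-capita crime rate
# still yields more absolute incidents as a neighborhood densifies, which
# is what residents (and local news) actually notice.

RATE_PER_1000 = 4  # assumed incidents per 1,000 residents per year

def yearly_incidents(population: int) -> int:
    """Absolute incidents per year at the assumed per-capita rate."""
    return population * RATE_PER_1000 // 1000

print(yearly_incidents(10_000))  # -> 40
print(yearly_incidents(30_000))  # -> 120: same rate, triple the headlines
```

So holding the perceived (absolute) crime level constant under 3x density would require policing to get roughly 3x more effective per capita.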
A paper came out about this recently: The City as an Anti-Growth Machine.
> Logan and Molotch's “urban growth machine” remains foundational in urban theory, describing how coalitions of landowners, developers, and politicians promote urban growth to raise land values. This paper argues that under financialized capitalism, the dynamics have inverted: asset appreciation now outweighs productive investment, and urban land is increasingly treated as a speculative asset.
I'm not sure why new housing devalues old housing. In my mind, higher density generally makes an area more desirable (e.g. because higher density enables more jobs and better infrastructure) and raises the value. Imagine, as an extreme example, an existing house in the middle of nowhere around which a metropolis develops. Surely the value of the house, or at least the land it is built on, goes up, even though it loses its "cabin in the woods" appeal.
You think if there were modern highrises in Menlo Park a tiny 2BR shack next door would still sell for $2M? It’s a supply and demand issue, nothing more.
What is your mental model for this then? If the "2BR shack" can be built from scratch for 300k, and the value for the lot + shack is $3M, then the land value is $2.7M. Most expensive real estate is land value, not actual structure value.
I see what you're saying; my point is that the principal thing driving its value isn't the land or the shack, it's the regulatory framework of the area.
Yes, it would go way, way, way up because if there is a high rise next door someone wants to knock down the shack and put up another high rise, a commercial building, etc.
When regulations are reduced to allow more density, the value of the land goes up because its productivity increases. The land can do more now, e.g. hold 10 apartments vs 1 house. The same land generates more rent so developers are willing to pay more for that land.
Meanwhile, the value of housing units goes down due to increased competition among sellers/landlords.
Consider two zoning changes.
1) You are a homeowner and more units are allowed on your parcel, e.g. single-family -> duplex. That increases your land value.
2) You are a homeowner and there is more density around you, but not on your parcel, e.g. apartments are allowed nearby but not on your street. Your land value does not increase. Your home value decreases due to increased competition. (Of course, there may be long term effects like the increased density actually leading to economic windfalls in the area, increasing its desirability, and then increasing your home value.)
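Scenario 1 can be sketched with a toy residual-land-value model (every rent, cost, and cap-rate figure below is invented for illustration): land is worth roughly what its capitalized rental income exceeds construction cost by, so letting the same parcel hold ten units instead of one raises the land value even at lower per-unit rents.

```python
# Toy residual land value model; all inputs are hypothetical.

def land_value(units: int, annual_rent_per_unit: float,
               build_cost_per_unit: float, cap_rate: float = 0.05) -> float:
    """Capitalized rental income minus construction cost approximates
    what a developer could pay for the bare land."""
    capitalized_rent = units * annual_rent_per_unit / cap_rate
    construction = units * build_cost_per_unit
    return capitalized_rent - construction

# Same parcel, two zoning regimes (invented numbers):
single_family = land_value(units=1, annual_rent_per_unit=30_000,
                           build_cost_per_unit=300_000)   # -> 300_000
tenplex = land_value(units=10, annual_rent_per_unit=18_000,
                     build_cost_per_unit=200_000)          # -> 1_600_000

print(single_family, tenplex)
```

Scenario 2 is the flip side of the same supply math: the added units that make upzoned land more valuable also mean more competing housing units nearby, which pushes down per-unit prices for homes whose own parcels were not upzoned.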
It's not that density per se drives down existing costs, but density almost always brings more housing stock to the market (unless housing is simultaneously being torn down elsewhere), and more housing stock drives down the cost of housing, which is the point of the original article.
So if we take it as an assumption that density increases housing stock, there is lots of evidence that density drives down prices of existing land/home values.
Density is not going to drive down cost for the same kind of housing. A SFH is not the same as a smaller home on a denser plot, much less an apartment block in a high rise. So the SFH owner who pursues increased density does indeed benefit.
The article only talks about rent, not price of housing, which I think is an important data point.
Homeowners don't want housing prices to fall. Ever. They don't care about rent prices (at least, not directly). But renters care about both — obviously lower rent prices are good, but many want to be able to enter the housing market but it's prohibitively expensive.
Perhaps falling rent prices have a similar effect on home prices: buying a home for rental purposes becomes less attractive due to lower rental revenue, so prices fall. Not sure; the macroeconomics of housing never made sense to me because it's never as simple as pure supply and demand.
As an example, my wife and I finally decided to buy a house in a fast-growing CA suburb (not in Bay Area). The house was constructed in 2021 and sold for $611k. Plenty of renovations have been done on the house, we'd estimate around $20k+ worth of renovations, and the neighborhood and surrounding area has only grown since then (more parks, housing, great schools, stores etc).
The house was listed for sale at $600k; even then we were able to underbid and get our offer accepted. Inspections turned out clean, just minor cosmetic issues.
I don't keep an eye on the rental market but we've lived at two different rental properties and both of those places went up in rent once each, so I can only assume that rent is going up everywhere in this area.
Point is, rent and real estate don't always go in lock step.
I think there is a big range of opinions here. Some hardcore housing resisters get a lot of sway because of the way processes work (public consultations, activism, etc). Lots of people are a bit sceptical for pretty legitimate reasons: noise, traffic, disruption, aesthetics.
I think there probably are balances where people could generally be happier with new construction and that opinion could be clear enough to overrule those who would never be happy with it. Things like:
- ways of having locals vote on new development with constituencies small enough that they can be paid off (i.e. some of the gains that would have gone to developers, or other positive externalities, can be captured by those who are most affected) with lower taxes or new roads or parks or whatever
- making residents vote instead of having consultations will lead to less bias in favour of the most obnoxious
- allowing apartment blocks to vote to accept offers of redevelopment (eg you get a newer apartment; more apartments are added to the block and sold to fund the redevelopment)
- having architectural standards that locals are happy with for new buildings
- allow streets to vote to upzone themselves (I don’t love this as it’s basically prisoners dilemma – if your street does it, land value increases and you gain; if every street does it land value only increases a bit but now you are upzoned)
I basically think that there are developments that can be broadly appealing and we are in a bad local minimum in lots of places of having bigger governments trying to push development on unwilling smaller governments/groups
Fundamentally as a society we need to stop treating housing as an investment. It is and should be a utility.
Surging property prices are a relatively new phenomenon (as in, post-WW2). The true origin of NIMBYism, at least in the US, is (you guessed it) racism. Long before segregation ended, and long after, there was economic segregation: redlining [1], HOAs [2], the post-WW2 GI Bill [3], where highways were built [4][5], etc.
In fact this is a good rule of thumb: if you're ever confused why something is the way it is in the US, your first guess should pretty much always be "because racism".
Case in point: my parents. Built a house in 1988 and they still live there. Two people in 3500 square feet. Four bathrooms and five bedrooms. Meanwhile, you need a family income of 3x the median to rent a townhouse 1/3rd the size nearby.
This is beyond ridiculous and it’s totally unsustainable.
Hate to be the bearer of bad news here, but the boomers will never die. Gen X will become the new boomers, and then the millennials after them. Individual people die, but interests stay the same.
There's a lot of truth here, but two countervailing points: first, younger generations own fewer homes than Boomers did at equivalent ages; second, Boomers are particularly blind to the effects of zoning and strongly oppose development due to seeing firsthand the effects of 1950s urban redevelopment. They also love cars.
We younger generations have seen firsthand the negative effects of zoning, we do not possess a visceral opposition to development, and there is a much greater appreciation of walkable neighborhoods.
> If NIMBYs were primary motivated by making money the prudent thing to do would be to support unrestricted zoning and then develop or sell the lot.
That is highly dependent on what exactly is being built next to your home. Sure, if it's more luxury housing then it'll probably drive the value of your home up. If it's low-income housing then it probably won't. And what we need is more of the latter rather than the former.
> you can take out loans against the value of the equity but this isn’t particularly common.
It's because it's an investment: you get the return once you finally sell your home. Only in a pinch, if someone needs a large amount of money to start a business or pay for an emergency, will they mortgage their house.
> And what we need is more of the latter rather than the former.
You just need to wait. The luxury housing that gets built today becomes low-income housing as it ages. There's no short-circuiting that process the way the incentives are set up, but you can drive down prices across the board by building more, even more luxury housing.
> For most Americans, a house is their primary savings account
This is true for California, where people (foolishly) rely on their home value as their retirement plan, which further incentivizes NIMBYism.
But in places like Texas (and other areas with affordable housing), the house is just treated as something you pay off to have a low housing cost in retirement. And your investments are your retirement+savings account.
I wasn’t trying to say one was better or not, just different. Californians wrap up a large amount of their retirement savings in their houses though, so keeping those home prices high is important to them and that’s a reason for stalling development.
I think Californians do, a lot of the time, retire with a higher net worth. But most of them do that because they’re relatively more house-poor during their working lives - they take out larger mortgages, and the payments function as forced savings.
As opposed to Texans, who have higher disposable income since they have smaller house payments. There’s less incentive to save, so they may spend more.
So that’s a partial advantage to California - the expensive homes force a higher savings rate, naturally.
But, at retirement age, a lot of their net worth is tied up in their home. So to unlock a lot of those savings they need to move to a lower cost of living state like Arizona, Nevada, Florida, etc.
While the Texans can just stay in their paid-off house.
So yeah it’s just different.
Texans are just paying off their home throughout their life and staying in it. They have larger disposable income to go towards other stuff (kids, lifestyle) while Californians gotta pay that mortgage
It's not that they're "intuitively better"; it's that prior generations of them passed less insane state and local law, and they're not at the tail end of a ~20yr industry boom, so "pay down my house and cash out to somewhere cheaper" doesn't make sense as a retirement strategy for as many of them.
Master planning has never worked for my side projects unless I am building the exact replica of what I've done in the past. The most important decisions are made while I'm deep in the code base and I have a better understanding of the tradeoffs.
I think that's why startups have such an edge over big companies. They can just build and iterate while the big company gets caught up in month-long review processes.
For most working-class Americans, education is a form of job-training.
In the AI maximalist world where humans are obsolete and cannot contribute to the economy in any meaningful way, there is actually no reason for public education to exist beyond being a free day care for non-rich people. Why learn algebra/calculus at all if the AIs can do it? Why should the US invest billions of dollars into public education instead of data centers?
I hope the US and AI leaders are still "speciesist" in that they put humans first. I hope AI will cure all illnesses, unlock space travel, and lead to a flourishing of humanity, not just a flourishing of datacenters. It's also possible that AI just cleaves societies in half and we are all worse off for it.
I thought the same as gp, that putting teachers at high risk invalidates the whole visualization. If this is intended to be useful for future career planning, with meaningful gradations between specializations, then it should exist in the probability space where human agency still matters. And in that space, from a Ricardian and political economy perspective, high human-touch jobs with strong public unions should be among the safest.
To borrow a concept from Simon Willison: you need to "hoard things you know how to do". You need to know what is possible; you need to be able to articulate what you want. AI is a fast car, but it's empty and still needs a driver. As long as humans are still in the loop, the quality of the driver matters.
Terminology matters: if you use the right words, the AI will work better.
Just saying "use red/green TDD" is a shortcut to a very specific way of fixing bugs.
Or when you use a multi-modal model to transcribe video, saying "timecode" instead of "timestamp" will improve the results (AV production people say timecode, programmers say timestamp; it hits different parts of the training material).
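To make the "red/green TDD" shortcut concrete (a hypothetical Python sketch; `slugify` and its bug are invented for the example, not from the thread): the phrase tells the model to first write a failing test that pins down the bug (red), then make the minimal change that passes it (green).

```python
import re

# Red: a failing test that reproduces the bug before any code is touched.
# With a naive slugify (lower + replace spaces), this assertion fails
# because punctuation leaks through: "hello,-world!".
def test_slugify_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

# Green: the minimal fix that makes the test pass.
def slugify(title: str) -> str:
    # Collapse every run of non-alphanumeric characters into one hyphen,
    # then trim hyphens left at the edges.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
```

The point is that the jargon compresses a whole workflow into two words, which steers the model toward test-first patches instead of speculative rewrites.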
Good advice to the younger folks. You can afford to look stupid. So go ahead and do that thing you wanted to try. There's more acceptance because of your age. You're expected to fail in some ways.
Once you have a mortgage, a reputation to maintain, an image of competence to uphold at work, you pretty much can't afford to look stupid in my opinion.
Intelligence and ignorance are two different things. It is a sign of intelligence to be able to acknowledge your ignorance when it exists. Then you use your intelligence to correct that. Even with a mortgage this has never failed me. 20 years, 2 employers due to an ownership change, and several RIFs survived.
The power of saying, "I don't know, but I will find out" is underestimated.
Max Tegmark, a cosmologist and MIT professor, is known for his "provocative ideas" and has a self-imposed rule regarding his work: "Every time I've written ten mainstream papers, I allow myself to indulge in writing one wacky one". This approach allows him to pursue unconventional, "crazy" theories without jeopardizing his reputation as a serious scientist.
I've managed to go my whole career using regex and never fully grokking it, and now I finally feel free to never learn!
I've also wanted to play with C and Raylib for a long time, and now I'm comfortable coding by hand and struggling with it; I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours.
> my whole career using regex and never fully grokking it
Sorry to hear that; nobody ever told me either. Had you invested a bit of time earlier in your career, it would have paid dividends a hundredfold. The key is knowing what's wheat and what's chaff. Regex is wheat.
With that said, maybe you tried... everyone has their limits.
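For what it's worth, most of the dividend comes from internalizing a handful of core ideas. A minimal Python sketch of one classic pitfall (greedy vs. lazy quantifiers), invented for illustration:

```python
import re

html = "<b>bold</b> and <i>italic</i>"

# Greedy: .* consumes as much as possible, so the match runs from the
# first '<' all the way to the LAST '>' — one giant match.
greedy = re.findall(r"<.*>", html)

# Lazy: .*? stops at the first '>' that lets the pattern succeed,
# so each tag is matched separately.
lazy = re.findall(r"<.*?>", html)
```

Once that one distinction clicks, a large class of "my regex matches too much" bugs stops being mysterious.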
If you're going to deploy what you make with them to production without accidentally blowing your feet off, you need to grok these things 100%, be they RegExp or useEffect(). If you can't even tell which way the gun is pointing, how are you supposed to know which way the LLM has oriented it?
Picking useEffect() as my second example because it took down Cloudflare, and if you see one with a tell-tale LLM comment attached to it in a PR from your coworkers, who are now _never_ going to learn how it works, you can be almost certain it's either unnecessary or buggy.
For things I'm working on seriously for my work, for sure, I spend time understanding them, and LLMs help with that. I suppose, also, having experience, I'm already prone to asking questions about things I suspect can go wrong.
But there are also a ton of times when something isn't at all important to me and I don't want to waste 3 hours on it.
I disagree. It's worth asking why some people find brand watches beautiful. Where did they get their sense of aesthetic? Were they born with a congenital preference for the RM 16-01 Citron?
Culture shapes our taste. Companies go on multi-decade billion-dollar campaigns to shape our culture. We like certain things because famous actors or athletes endorse them; because hip hop artists rap about them; because influencers talk about them; because Hollywood portrays them a certain way. This extends to all modern aesthetic preferences from architecture to watches to cars to furniture to dating.
I think the argument pg is making is that brand-obsessed cultures are not maximally truth/beauty-seeking and get really weird, e.g. Japanese ohaguro, Chinese foot binding, various cranial deformation practices from the Mayans to the Huns, high heels, and (to outside observers) ugly watches.
It's a really thought-provoking essay. But it's too heterodox and "autistic" to share with most of my friends. Socially speaking, it's best to outwardly embrace the current zeitgeist.
> I disagree. It's worth asking why some people find brand watches beautiful. Where did they get their sense of aesthetic? Were they born with a congenital preference for the RM 16-01 Citron?
There's plenty of art that's celebrated, but also kinda weird and ugly. Is "Vertumnus" by Giuseppe Arcimboldo (1591) also a product of the "brand age"? What about various gargoyles and grotesques on old church buildings?
Some people just like weird art, maybe because they think it reflects their own quirky or rebellious nature. Some of these people have money. I don't see why we need some sort of a cynical theory of a "brand-obsessed culture" at the center of it. How many people in your social circle are obsessed with brands? We might have a brand or two we like, typically because we like the way the products look or work. That's about it.
I know some people who like expensive watches. They talk about the design a lot more than they talk about who made it.
Citron is obviously a weird watch but you can always find weird expensive examples of anything. Most expensive watches look normal and they look really beautiful thanks to the attention that goes into building them.
Yes, what I find beautiful is the craftsmanship, dedication, and the singular, almost monastic focus required to become a master of some human pursuit, whether it's software, sushi, or making watches. I find dedication and sacrifice deeply moving and eternally beautiful.