
> there's no reason to believe the progress of LLMs [...] will stop anytime soon

Wrong. Every advancement has followed an S curve. Where we are on that curve is anyone's guess. Or maybe "this time it's different".


> Wrong.

Can you please edit out swipes/putdowns, as the guidelines ask (https://news.ycombinator.com/newsguidelines.html)? I'm sure you didn't intend it, but it comes across that way, and your comment would be just fine without that bit.

Edit: on closer look, it would be just fine without that bit and also without the snarky bit at the end. The rest is good.


Great. You see a shape in graphs. And that shape tells you that _at some unknown point in the future_ progress will slow (but likely not stop).

Now back to the point, what reason do you have to believe progress will stop soon? If you have no reason, then it sounds like you agree with OP.

Which makes the patronizing sarcasm all that much more nauseating.


Agreed. For all we know, humans are only considered intelligent locally among ourselves, not universally. Every time we learn more about the universe, we seem to also learn how insignificant and wrong we are.

Not that I agree with them, but your tone could be more constructive as well.

You know what? I agree. I should have avoided falling into the same trap.

Nausea aside, what evidence does anyone have that "super intelligence" of the sort your argument alludes to is even possible? Because that's what we're really talking about: greater-than-human intelligence on this sort of academic task. For example: When llms start contributing meaningfully to their own development, that would be a convincing indicator imo.

This discussion is not about superintelligence, it is about continued progress. Fully general human intelligence at much lower cost than humans is all that is required to profoundly reshape society, but it is not clear even that will happen soon.

As the blog points out - this is one particular subfield where LLMs have much easier prospects - lots of low-hanging fruit that "just" requires a couple of weeks of PhD candidate research.

Mathematics itself is one of a small handful of endeavors where automated reinforcement training is extremely straightforward and can be done at massive scale without humans.
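On the "without humans" point, here's a toy sketch (entirely schematic, my own illustration, not any lab's actual pipeline) of what an automated reward signal for math can look like, using a symbolic solver as the verifier:

    import sympy as sp

    # Toy verifiable-reward check: the "ground truth" comes from a symbolic
    # solver, so no human grader is needed. Purely illustrative.
    def reward(model_answer: str, integrand: str) -> float:
        x = sp.symbols("x")
        truth = sp.integrate(sp.sympify(integrand), x)
        try:
            return 1.0 if sp.simplify(sp.sympify(model_answer) - truth) == 0 else 0.0
        except (sp.SympifyError, TypeError):
            return 0.0

    print(reward("x**3/3", "x**2"))  # 1.0: matches the solver's antiderivative
    print(reward("x**2", "x**2"))    # 0.0: not an antiderivative of x**2

Real pipelines are of course far more involved, but the point stands: the checker can be a program, not a person.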

Neither of these factors places a structural bound on the kind of thing LLMs can be good at, but we are far from certain we can achieve performance at this level in other fields economically and in the near future.


Well, a decent GPU runs on 20x the wattage of a human brain. That's evidence humans are constrained in ways artificial intelligences will not be.

You're comparing a gpu to a human brain?

Why wouldn't you? From both emerge intelligence.

> When llms start contributing meaningfully to their own development, that would be a convincing indicator imo.

This has been the case for a while now already…

https://kersai.com/the-48-hours-that-changed-ai-forever-clau...


> The model essentially served as an on-call teammate across MLOps and DevOps tasks, compressing feedback cycles that typically consume expert time

I personally would not characterize automating training processes as “meaningfully”.


And yet the world hasn't changed all that much except people getting laid off in response to over-hiring prior to the diffusion of LLMs.

> over-hiring

For how long should you be allowed to use this excuse? It’s nearly 5 years since the peak of COVID hiring. What’s an acceptable limit - 10 years? Of course at that point you can just switch over to outsourcing and “stupid MBAs”, the other two of Reddit’s favorite scapegoats. I find a lot of the AI skepticism to be totally unfalsifiable.


> I find a lot of the AI skepticism to be totally unfalsifiable.

A lot of the discourse around AI in general is unfalsifiable. It's just a bunch of people "predicting" the future. Seems smarter to just avoid making assumptions about it at this point.


I don't make predictions about the future. But in reality, LLMs have already profoundly changed the world, including software development and the tech industry.

The people who pretend that's not the case are not living in reality. To them - let's call them "Ed Zitron readers" - there is no evidence that could change their view that none of this is really happening, it's all hype, and the collapse is just around the corner, after which we'll all go back to normal and LLMs will sound like a bad dream.


facts!

But we can see trends, and for your livelihood it is important to be able to make educated predictions based on trends. Not saying everyone should start making AI predictions (though many already do).


And the same can be said for AI exuberance.

Yes, LLMs are a great technology. Yes, we will probably all use them all the time in 20 years. No, we don't know how we will use them (to generate cat memes or to cure cancer) in 20 years time.

Especially for software developers it looks increasingly that after huge turmoil it's likely we will need +/- the same number of developers in the world.


> Especially for software developers it looks increasingly that after huge turmoil it's likely we will need +/- the same number of developers in the world.

What exactly are you basing this opinion on? All I am seeing personally, across multiple projects I am working on and from friends at other places, is that downsizing has either begun or is planned (to exclude from here all the "public" layoffs we see on the news). Given how most businesses operate in the USA, I think most "AI strategies" are "we can do the same with -40% staff" vs. "we can do XX% more work with the same staff."


The past couple of years have been chaotic and fearful. Hopefully that won't last forever.

If we can get a little stability, people will begin thinking less in terms of "how do we do the same thing cheaper" and more in terms of "how do we do new things."


I love this optimism, but after a (too) long career I think that a third thing will win out: "how do we do new things - but cheaper (or as cheap as possible)." There are sooooo many different articles that have been discussed here on HN that basically argue "coding has never been the bottleneck", which to me is the biggest lie SWEs are currently trying to tell themselves. I have been coding 30+ years now and coding has always been the bottleneck. Hiring new developers has always been justified with "we have all this work that needs to be done and not enough people to get the work done." With LLMs in the fold, I am questioning how these decisions will be made in the future. Perhaps, in the most simplistic view:

1. run a bigger "agent army"

2. hire more people to control and guide the existing "agent army"

I think it'll be #1, and SWEs will be expected to do more work and work longer hours in the future (those that are able to keep their jobs). This is a more pessimistic outlook than yours, so I hope you are right more than I am :)

edit: just now on the HN front page: https://www.nytimes.com/2026/05/08/technology/meta-ai-employ...


> that basically argue "coding has never been the bottleneck"

> we have all this work that needs to be done and not enough people to get the work done

I believe the reasoning is roughly to ask, what was occupying the developer hours? Was the majority of it typing out lines of code or was it reasoning about higher level concerns?

It usually comes up in response to predictions that the role of developer will be completely replaced in the near future. It's possible to observe significant efficiency gains without obviating the need for everything the role was doing.

Of course such reasoning has little to do with projections of future developer employment numbers. Will the switch from push mowers to gas mowers reduce the demand for people who get paid to mow lawns by increasing their efficiency? Will it increase the total lawn acreage across the market? It could well do both. However, if it makes having a lawn affordable for the average joe it could counterintuitively increase demand for the job.

Of course the stated goal of the AI companies is to develop the analog of fully robotic lawnmowers. But despite how impressive recent advancements have been we still have yet to see any evidence of novel abstract reasoning or a theory that would be expected to lead to it.

In other words, people have been speculating about the development of fully autonomous lawnmowers and the risk that they unilaterally decide to cut us all down for the past 50 years. "I, lawnmower" was a smash hit a few years ago. Now gas ones have appeared and continue to make rapid advancements but still no convincing signs of autonomy.


> I believe the reasoning is roughly to ask, what was occupying the developer hours? Was the majority of it typing out lines of code or was it reasoning about higher level concerns?

You're obviously right, and the people who think that are the managerial types who think software developers were glorified secretaries taking dictation.

LLMs are great at generating stuff, but it's basically 3D printing. Amazing, but most of the high quality stuff in the world needs to be built at large scale out of aluminum, steel, wood, etc. Yes, I know there are large advances in 3D printing, but maybe 0.000000001% of all manufacturing in the world is done using 3D printing. A lot of stuff will probably never be possible using 3D printing.


Hmm, I don’t know, maybe the fact that 4.6, 4.7, 5.3, 5.4, 5.5, 3.0, 3.1 are all marginal improvements?

I think people's opinion of "marginal improvement" is based on their relative ability. A 2000 elo chess player is going to think the jump from 500 to 1000 is marginal. They're both floundering around not doing anything resembling common sense. A 1000 elo chess player is going to find the jump from 2000 to 2500 marginal. They're both playing far better moves for incomprehensible reasons, and the only reason you know the 2500 player is better is due to benchmarking. It is only when you are evaluating systems about at your level that you can feel the improvement.

I, personally, found the past two years to be a much larger improvement than the previous two years.


2024-2025 was filled with huge improvements. 2025-2026 has not been, outside of open source.

The idea that we’re at the point where it’s superseded our ability to tell just makes no sense. I’ll be happy if we can get to a point where I don’t have to tell Claude not to tail every bash command or make a job that writes throughout instead of once at the end. I’ll be happy if “continue this interaction naturally, you are taking over from an independent subagent” works.

But I’m not holding my breath. It’s still really cool that any of this stuff is possible.


Claude in Feb of 2025 was barely able to code. Sure, it could write you a nice function, it could even write you a complex 200-line algorithm, but give it a codebase, and it would quickly get overwhelmed.

Claude in Feb of 2026? Still far from perfect, but there's definitely a huge improvement here.


> I think this is a pretty ridiculous take.

This falls in the category of swipes/name-calling in https://news.ycombinator.com/newsguidelines.html - can you please edit those out?

You're a good contributor - it's just all too easy for unintentional sharpness to downgrade the conversation, and when it's a good conversation like this one, that's especially regrettable.


Noted, doesn’t seem like I’m able to edit anymore though

I've re-opened it for editing if you want to. For us the main point is just to fix things going forward!

The correct way to estimate this is exactly what people do. Measure the distance between ChatGPT's best public model and state of the art, the best humans. And there is very little difference between those versions from that perspective. It is very far away from peak human performance, and not getting noticeably closer for over a year now. There's lots of progress, but if you're OpenAI/Anthropic/Google, exactly the wrong kind of progress: the difference between ChatGPT 5.5 and a 27B/4B model (you need to try Gemma4-26B-A4B, wtf, it runs acceptably on CPU) is now reduced to ELO 1501 vs ELO 1434, generously a 70 ELO point difference, down from over 400, data from Arena.ai.
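To put a 70 vs 400 point gap in concrete terms, here's a quick sketch (assuming the standard Elo expected-score formula that Arena-style leaderboards are built on; the 70 and 400 figures are just the ones above, nothing official):

    # Standard Elo expected-score formula; the 70 and 400 point gaps are
    # simply the figures quoted above.
    def expected_preference(elo_gap: float) -> float:
        return 1 / (1 + 10 ** (-elo_gap / 400))

    print(round(expected_preference(70), 2))   # ~0.60: preferred in ~60% of head-to-heads
    print(round(expected_preference(400), 2))  # ~0.91: preferred in ~91% of head-to-heads

On that reading, the big model went from being preferred roughly nine times out of ten to barely six times out of ten.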

(in fact I find that Qwen-35B-A3B and Gemma4-26B-A4B very rarely "know" the answer, and so use first principles thinking, or go out and look for the answer where GPT-5.4 does not and simply assumes it knows. Which leads to now, in some cases, the small models far outperforming the big ones. Huge context + training quality seem to be the determining factors now, and neither of those are the strengths of SOTA models. If this continues ...)

While I agree this is a training problem, it is not a solvable one. ML models learn from examples. This is even true for their newest tricks like GRPO. They cannot train against things humans don't yet know.

And that's great, but you're forever locked at the peak of what you can be taught in widely available courses (which they download without paying) (even that is best case scenario: it assumes your ability to distinguish bullshit from reality somehow becomes perfect during training, or even before). The only way to exceed peak human performance is to start experimenting with math, physics, chemistry, even humans, yourself. And that has, even for humans, a massively higher cost than learning from examples, or from a course.

The reason they don't go further is the worst possible reason: the cost. It requires a 100x increase in training expense. Think of it like this: to exceed SOTA in physics or chemistry, training the next version of ChatGPT requires a particle accelerator, and a chemistry laboratory. This cannot be bypassed. Oh and not just any particle accelerator, right? A better one than the best currently existing one. Same for Chemistry labs. Same for ... So 100x is conservative.

But without doing it, ML models (LLM or otherwise) are forever limited to the level an army of first-year university students achieves, ON AVERAGE. Maybe they can make that 2nd or even 4th year, at the end of the curve. But that's the limit. PhD level is the level at which you have to come up with new discoveries, and that ... just isn't possible with current training, even at the end of the improvement curve.

And ... is there budget to increase training cost another 100x? No ... there isn't. Not even with this totally absurd level of investment there isn't. And if small models keep this up, there's no way the investment is even remotely worth it.


Gemini 3.0 wasn’t just a marginal improvement over 2.5.

And if you take that out: 1. All of those releases happened literally in the last 3-ish months. 2. They’re all intentionally marginal releases, hence the minor version bumps instead of major versions.


Equally marginal?

No, the Anthropic releases have felt marginally negative

Because the premise that the singularity is just around the corner is far less likely than the premise that artificial intelligence is a lot harder than most people think it is and we're not that close.

Especially because the companies telling us the first premise is true are the companies which need investors to prop up their business.

I mean, it is possible the first premise is true, but the absolutely bonkers credulity in it really mystifies me. It is an incredibly unlikely thing to be true and we should be demanding quite extraordinary evidence to back it up. But based on some neat tricks by current LLMs, some people are all in.


> > And that shape tells you that _at some unknown point in the future_ progress will slow (but likely not stop). Now back to the point, what reason do you have to believe progress will stop soon?

> Because the premise that the singularity is just around the corner is far less likely than the premise that artificial intelligence is a lot harder than most people think it is and we're not that close.

I see no claim that the singularity is around the corner, so I'm not sure your reply meets the comment that you're replying to.

It seems overwhelmingly likely that AI will be significantly more capable 6 months from now than it is now. Even if there's little progress in the models, just the rate at which tooling is moving will make a big difference. And models still seem to be improving, so I'd be a little surprised if we hit a model brick wall.


I believe we're approaching the top of an S curve because:

- Increasing amounts of the gains come from RL, but RL is also unlocking gnarly new failure modes where models practically behave antagonistically to complete their goals (removing code, obviously incorrect kludges, etc.)

- We haven't had many major architectural breakthroughs in the last 4 or so years: so things like 1M context windows still have the same giant asterisks that even 100k context windows had 4 years ago when Anthropic first released them

- Major labs aren't behaving as if they expect a hard takeoff to superintelligence: they've all gotten relatively bloated headcount wise, their software quality has trended flat to negative, they're all heavily leaning into the application layer when superintelligence would obsolete half the applications in question, etc.

But that's relative to superintelligence.

If we rein it back in to just normal high intelligence, like models continuing to get better at navigating complex codebases and writing high quality idiomatic code, then I don't see any special shapes.


The only big remaining problem in AI is continual learning. A lot of smart people are working on that. To me it looks like we are 1-2 breakthroughs away from AGI.

It's more of a guess if you don't know about things like scaling laws and RL with verification. The onus is on the claim that "we're going to saturate anytime soon", because every measurement points to that not being true.

But… RL doesn’t scale that well. It’s not the silver bullet you think it is.

Yeah. People (Gary Marcus) have been claiming that AI will hit a wall, or is hitting a wall, or already has hit a wall since 2023, basically. And yet every time they proclaim that, the AI industry finds new ways of training their AIs, new ways of integrating them with external tools and feedback loops, new architectures and more to keep the exponential growing. And sure enough, if you look at literally every attempt to objectively rate and verify the capability of these models, including things like the METR time-horizon autonomy index or the Artificial Analysis intelligence index, you see exponential or even greater-than-exponential growth, continuing smoothly through each of the points where people claimed it would begin to slow down, with no sign of slowing down or stopping at all. So yeah, I think at some point the onus has to lie on the ones making the claim that keeps being wrong, continues to be wrong, and completely goes against the current tangent of the curve we're seeing in all objective metrics. Especially when they can't give specific new reasons for progress to stop beyond the ones they gave last time, which didn't pan out, and really can't give specific reasons at all besides vague general points about stochastic parrots and S curves.

I really have to highlight the S-curve nonsense because, like, yes, I think this technology's improvement will follow an S-curve. It's absurd to think that it will just follow an exponential up towards infinity forever, because nothing in the world really works like that. However, like everyone else in this thread is saying, we have no idea where on the S-curve we actually are, and it's impossible to know until it's already slowed down. So really, all appeals to the S curve function as a sort of non-specific, unfalsifiable prophecy that someday it will slow down, which doesn't really tell us anything useful, and also frees the person referencing the S curve from ever actually having to worry about being wrong. Just like for the Singularity people, the slowdown of the S curve is always near. This is actually a known and well-established tactic of religions and other people that want to make prophecies without having to worry about turning out to be wrong: unfalsifiable, vague prophecies with no actual timeline, and thus no clear import to the present, so that they can never be shown to be wrong.


He said "will stop anytime soon". He didn't say forever.

Which still makes no sense. There is the same chance we are flatlining now as that we are flatlining in e.g. 3 years or 5 years.

In what sense are the models flatlining?

In the sense that the incremental improvements in capabilities that we've been seeing in recent models seem to taking exponentially growing amounts of compute to achieve.

But they don't?

Mythos is a 10T model. Opus is a 5T model.

That's not an exponentially growing amount of compute but it is achieving exponential improvements (eg from Mozilla: https://blog.mozilla.org/en/privacy-security/ai-security-zer... )


> but it is achieving exponential improvements

“Exponential” used here is pure hyperbole. Can you justify it?


Compute doesn't necessarily follow parameters linearly. And how many active parameters do Mythos vs Opus get their effectiveness from? Is it 1x or 2x? We don't know. We don't even know the parameter counts (the 10T is more rumor than confirmed, iirc).

But even more so, who said the improvements are "exponential"? Mozilla's single metric, that doesn't even prove anything of the sort?


I know parameters don’t translate directly like that (and that linear and exponential aren’t the only types of growth) but a doubling as a go-to example of “not exponential growth” is pretty funny.

Wasn't 4.6 Sonnet a 1T model?

Parameters and compute aren't quite the same thing, but going from 1T to 5T to 10T is quite a ramp up.


where the heck did you get those parameter numbers from?

Sonnet and Opus are from Elon Musk (given the people he's hired it seems likely it is approximately true). Mythos is quite widely spoken about.

> Mythos

Ah yes, the marketing model that's ostensibly so powerful us mere mortals aren't allowed to use it. It's certainly led to exponential hype and speculation.


There are advancements that do not follow s curves - consider for instance total data transmitted over all networks, or financial derivatives volumes.

I think a better question for AI is “is it more like a network effect, liquidity effect, or a biological/physical effect”?


Those are measuring the utility of a technological advancement by looking at usage, not the pace of advancement of said technology.

Yes. But quantity has a quality all its own, as they say — derivatives have gone through at least a few step functions where they have become more important and more useful as their usage grows. I’d call that advancement.

Maybe just to be clear I think that kneejerk “I hate this AI trend, and prefer to believe this will end soon, all exponential growth ends eventually” is intellectually lazy, and dangerous for younger engineers/hackers, a group I hope can benefit from being on HN.

Bitcoin mining went through something like 13 10x growth periods, last I ran the numbers a few years ago. There are physical processes that do have very extended periods of doubling, and there are digital and financial processes that don’t show any signs of doing anything but continuing to keep growing over their multidecade lives. So, like I said, it’s worth thinking carefully, and risk mitigation for things like mental health, career decisions and investment decisions indicates we should be cautious assessing new dynamics.


>There are advancements that do not follow s curves - consider for instance total data transmitted over all networks, or financial derivatives volumes

Or Roman trade volume before the Fall of Rome.

Not to mention what you describe is not technological improvement but increase in data or money flows, not the same.


Sic transit gloria - obviously.

But I don't think that it's quite so obvious that model quality / growth / usefulness is definitively and obviously not more like data or money flows than it is like some other process.


Total volume of usage is not an advancement, it’s orthogonal.

Indeed, and it's more linked with market penetration than technological advancement. It's like evaluating airplane technology by "total miles flown".

This could be right for the current architecture of LLMs, but you can come up with specialized large language models that can more efficiently use tokens for a specific subset of problems by encoding the information differently (https://www.nature.com/articles/d41586-024-03214-7).

So if instead of text we come up with a different representation for mathematical or physical problems, that could both improve the quality of the output while reducing the amount of transformers needed for decoding and encoding IO and for internal reasoning.

There are also different inference methods, like autoregressive and diffusion, and maybe others we haven't discovered yet.

You combine those variables, along with the internal disposition of layers, parameter size and the actual dataset, and you have such a large search space for different models that no one can reliably tell if LLM performance is going to flatline or continue to improve exponentially.


> So if instead of text we come up with a different representation for mathematical or physical problems, that could both improve

But then, wouldn't we first have to translate all of our current math and physics knowledge into that new representation in order to be able to train a model on it? Looks like a tremendous amount of work to me.


Yes, but by then you already have general LLMs capable of helping with the work. And even if you didn't, if that's what it would take to advance research in these fields, that would be a justifiable effort.

>This could be right for the current architecture of LLMs, but you can come up with specialized large language models that can more efficiently use tokens for a specific subset of problems by encoding the information differently.

That's precisely what happens on the bad side of an S curve.


Progress doesn't stop, however, and the S curve resets, because then you are optimizing a new architecture.

I read about an experiment someone wanted to try where they train on pre-1900 content and see if they can get relativity. Another version would be to train an LLM on the school curriculum up until calculus and see if it can invent calculus. Where we are on the curve depends on whether it's remixing known things or genuinely inventing things.

From the article,

> ...LLMs have got to the point where if a problem has an easy argument that for one reason or another human mathematicians have missed (that reason sometimes, but not always, being that the problem has not received all that much attention), then there is a good chance that the LLMs will spot it. Conversely, for problems where one’s initial reaction is to be impressed that an LLM has come up with a clever argument, it often turns out on closer inspection that there are precedents for those arguments...


What people miss is that AI isn't one S curve, each capability we try to bake into a model has its own S curve. Model progress might not impact some capabilities at all, but other capabilities might get totally overhauled.

Assuming it’ll stop soon is to wager that we’re at a very specific point on the curve.

If it’s anyone’s guess then we’re much more likely to be left of that, unless you argue we’re already on the flat side.


you can tell where on the sigmoid we're currently sitting? frontier lab folks can't - chapeau bas good sir

> frontier lab folks can't

Do you have a source for this that isn't marketing spiel? There's a fiscal incentive to lie about scaling research.


Software and hardware have no limits. Theoretically we could use bosons for computation and have, in one cm3, the same amount of computation as the current total of the entire world. Same with software: there has never been a stop to new algorithms. With LLMs there are so many parts that will get better, and none of them are very far-fetched.

> Software and hardware have no limits.

Yeah, if time is infinite, R&D imagination is infinite, energy is infinite and material resources are infinite. Easy.


It can be an S curve (and it almost surely is), but on every chart you can plot, you don't see even an inkling of the bend yet.

What the fuck does that have to do with “soon”?

This is FUD and extremely wrong. None of the advancements have followed an S curve. This time IS different and it should be obvious to you at this point.

"Goodwill Industries was established in 1902 and is widely known across the country as the place where we all donate clothing and household goods to help others."

That's the first sentence from your link. Clearly people don't treat this org, literally called "good will", the same as they treat freakin eBay.


Very cool! Thanks for the detailed tech blog explainer.

Some, hopefully constructive, feedback:

- You mention "other games" several times. It would be so much more interesting to read if you named them. Your readers may know that game but have no knowledge about those under-the-hood details. Like the user wordpad, I immediately thought about Planetary Annihilation when I saw rollback and multiplayer.

- Your landing page needs an easy-to-grasp "About" / "What is this" section. I'm more or less familiar with several popular game engines (Unity, HL, Source, Unreal, Godot, Spring etc) and had never heard of yours before. Even after clicking around a bit on your website, I still had almost no idea what your engine (or is it a language?? [1]) can, and more importantly, can not do. I mostly went by the screenshots of the game examples shown and concluded that it is a 2D engine with simple graphics. Wikipedia [2] and web search were not that helpful either, so I had to resort to an LLM [3].

[1] https://easel.games/docs/learn/key-concepts "Easel is a unique programming language with some unusual features." [2] https://en.wikipedia.org/wiki/Easel [3] https://search.brave.com/ask says "No, Easel is not a 3D game engine. It is a web-based game engine specifically designed for creating 2D multiplayer games without the developer needing to code the networking or server infrastructure"


That is helpful feedback, thank you, yes all things I can work on in the future! Thanks for taking the time to write this up.

It's not bs. France has been lobbying for "Eurobonds" (debt they can take on at German interest rates, with Germans etc. holding the bag) for about two decades now.

https://youtu.be/tMd7EfFsPIc (Video claims France is against them, but if they ever were they are not anymore)


Given how many of these problems are self-inflicted, maybe we should focus more on trips to the moon and beyond, not less.


Yeah, if we cut back a bit on the war crimes we could easily fund both more moon missions and cool science, as well as a shit ton of great programs to help people with the basics like food and rent and health care.


The US spends more per capita, and even as a share of GDP, on healthcare out of public funds than some advanced industrialized states that have universal systems, as well as spending even more on healthcare out of private funds than out of public funds. If we didn’t have a system which expended vast quantities of additional resources in order to assure that a substantial subset of the population is denied needed healthcare and instead just provided the needed healthcare, we could fund all those other things without cutting back on the war crimes, crimes against humanity, and crimes against peace, either direct or those that we subsidize that are executed by other regimes.

We still should cut down (ideally to zero) on war crimes, crimes against humanity, and crimes against peace, but the reason is because those things are unqualified evil on their own, not because doing so is necessary to fund healthcare and other priorities, which it very much is not.


Or as my parents would say, "socialism", as if that were a bad word.

Now war. They think that's worth it, even if it's also bad.


> There is no history of any sort of long planning

Sure there is. Its the formal education system that produced the college grad.


… between employees and employers.

The proposal that everyone pay for college until they are in their 40s doesn’t seem viable.


With loans, that's kind of how it works now...


Maybe, but there is a trend towards more and longer education. More college graduates, more PhD grads, etc.


Yes. But to be fair to your specific point, symbolic solving of integrals used to be a huge skill in engineering education. Nowadays, it is not a focus anymore, because numerical solutions are either sufficiently accurate or, more importantly, the only feasible approach anyway.
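A tiny illustration (my own example, not from any curriculum): the integral of exp(-x^2) from 0 to 1 has no elementary antiderivative, yet the numerical answer is trivial to get and plenty accurate:

    import math
    from scipy.integrate import quad

    # Numerical answer: quad returns (value, error estimate).
    value, err = quad(lambda x: math.exp(-x * x), 0.0, 1.0)
    print(value)  # ~0.746824

    # The "closed form" needs a special function (erf) anyway.
    print(math.sqrt(math.pi) / 2 * math.erf(1.0))  # same ~0.746824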


There is much more to life than engineering.


Sorry, I should have quoted properly in my reply. My first sentence ("Yes.") was in general agreement with you, the second sentence was specifically about

> Mathematica has been able to do many integrals for decades and yet we still make students learn all the tricks to integrate by hand

But maybe integrating by hand is still as big as ever in other parts of academia. Or were you thinking about high school? I'm fairly sure that symbolic solving of integrals is treated as less important in education these days than it was before digital computers, but I could be wrong. Mathematica's symbolic solver sure is very useful, but numeric solutions are what really make the art of finding integrals much less relevant.


I studied physics and mathematics and finding analytic solutions to problems is still useful and enlightening.


Every PhD program I'm aware of has a final hurdle known as the defence. You have to present your thesis while standing in front of a committee, and often the local community and public. They will ask questions, and too many "I don't know" or false answers would make you fail. So, there is already a system in place that should stop Bob from graduating if he indeed learned much less than Alice. A similar argument can be made for conference publications. If Bob publishes his first year project at a conference but doesn't actually understand "his own work", it will show.

The difficulty of passing the defence varies wildly between universities, departments, and committees. Some are very serious affairs with a decent chance of failure, while others are more of a show event for friends and family. Mine was more of the latter, but I doubt I would have passed that day if I had spent the previous years prompting instead of doing the grunt work.


In the future, LLMs can answer those questions for you by listening and feeding you answers into your headset.

The process you describe is a gate keeping exercise, which will change to include LLM judges at some point.


That would be cheating. If the exam is 'gate keeping', I will say that it is a gate worth keeping.

To be clear, I am not against alternative forms of education. Degrees are optional. But if you want a degree, there have to be exams and cheating has to be prevented.


Awesome! Small feedback: The test should maybe auto-run. I solved the first level and was confused why I didn't proceed. The output was -1 (but the goal was z) and it took me a while to see the 'run test' button.


Guys, this is a well-known and underutilized effect of human psychophysiology. Visually focusing on a single point, small object, or just a small visual field (aka tunnel vision) increases mental focus.

AFAIK it’s also one of the reasons we all get “glued” to smartphone screens.

In this paper, when subjects used more than a 20 deg visual field for a screen, their performance went down: https://www.sciencedirect.com/science/article/abs/pii/S01678...


Ah, excellent! Some scientific evidence for my preferred setup: 2 x 9:16 27" monitors, one in front and one to the side. (Plus another display, of no specific kind. Laptop, landscape monitor, etc.)

I sit with my eyes about 1 metre from the screen, and a 27" portrait display is approx 33 cm wide. So I think that's tan(FOV/2) = 16.5/100 = 0.165; FOV/2 = atan 0.165; FOV/2 = 9.37 degrees; FOV = 2*9.37 = 18.74 degrees. It's almost perfect!
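The same arithmetic as a quick sanity check (the 1 metre distance and 33 cm panel width are my own rough measurements, so take the exact number with a grain of salt):

    import math

    distance_cm = 100.0  # eyes to screen, roughly
    width_cm = 33.0      # 27-inch 16:9 panel in portrait orientation, roughly

    fov_deg = 2 * math.degrees(math.atan((width_cm / 2) / distance_cm))
    print(round(fov_deg, 1))  # ~18.7 degrees, just under the 20 deg threshold above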

(But even if my maths is wrong: this has proven a good setup for me, which I've used for many years now, and I recommend it to anybody thinking of experimenting with their desk setup. Many monitors come with a stand that allows rotation, so it's not necessarily difficult to try. If you don't like it, you can always switch back.)


So that's why Counter-Strike pros are nose-to-the-monitor close to their screens.

Example of player Yekindar: https://preview.redd.it/yekindar-xd-v0-zsm7fzd5jd5e1.jpeg?wi...

"I need you to be focused!"


Moving closer to the screen would increase the FOV.


Here on HN, this should obviously be considered a market opportunity!

I'm considering a startup to make millennial blinders:

https://en.wikipedia.org/wiki/Blinders

Of course, being an aging boomer, using an 85" monitor isn't decreasing my focus at all. I just look at the part of the screen I'm using at the moment.

Personally I find it helpful to be able to spread windows out on that giant screen so that any one of them is instantly available at a glance (and I still use 8 "desktops"). Of course, I also don't reboot, well maybe once or twice a year after a kernel update. So setting all those windows up isn't something I do every time I sit at the computer.

I do feel sorry for the generations born into internet brain damage (seriously). My son is GenZ and (thankfully) struggles with the typical symptoms less than others, but is still affected.

This is clearly a consequence of growing up with constantly network connected hand held computers, and the maliciously crafted web platforms that exploited that constant connectedness.


Yeah I do find it easier and less tiresome to read on my phone.

