tehjoker's comments | Hacker News

Part of the issue there is that the data quantity prior to 1905 is a small drop in the bucket compared to the internet era even though the logical rigor is up to par.

Yet the humans of the time, a small number of the smartest ones, did it, and on much less training data than we throw at LLMs today.

If LLMs have shown us anything, it is that AGI or super-human AI isn't on some line where you either reach it or don't. It's a much higher-dimensional concept. LLMs are still, at their core, language models; the term is no lie. Humans have language models in their brains, too. We even know what happens if they end up disconnected from the rest of the brain, because there are some unfortunate people who have experienced that for various reasons. There are a few things that can happen, the most interesting of which is when they emit grammatically correct sentences with no meaning in them. Like, "My green carpet is eating on the corner."

If we consider LLMs as a hypertrophied language model, they are blatantly, grotesquely superhuman on that dimension. LLMs are way better not just at emitting grammatically correct content, but at emitting content with facts in it, related to other facts.

On the other hand, a human language model doesn't require the entire freaking Internet to be poured through it, multiple times (!), in order to start functioning. It works on multiple orders of magnitude less input.
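To put rough numbers on the gap (a back-of-envelope sketch; both figures are order-of-magnitude assumptions drawn from public reporting and language-acquisition estimates, not exact counts):

```python
# Order-of-magnitude estimates only, for illustration:
llm_tokens = 15e12   # ~15 trillion tokens, typical of recent frontier pretraining runs
child_words = 5e7    # ~50 million words of speech heard by roughly age ten

print(f"LLM / child input ratio: ~{llm_tokens / child_words:.0e}")
# -> ~3e+05, i.e. five to six orders of magnitude more input
```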

The "is this AGI" argument is going to continue swirling in circles for the forseeable future because "is this AGI" is not on a line. In some dimensions, current LLMs are astonishingly superhuman. Find me a polyglot who is truly fluent in 20 languages and I'll show you someone who isn't also conversant with PhD-level topics in a dozen fields. And yet at the same time, they are clearly sub-human in that we do hugely more with our input data then they do, and they have certain characteristic holes in their cognition that are stubbornly refusing to go away, and I don't expect they will.

I expect there to be some sort of AI breakthrough at some point that will allow them to both fix some of those cognitive holes, and also, train with vastly less data. No idea what it is, no idea when it will be, but really, is the proposition "LLMs will not be the final manifestation of AI capability for all time" really all that bizarre a claim? I will go out on a limb and say I suspect it's either only one more step the size of "Attention is All You Need", or at most two. It's just hard to know when they'll occur.


Humans need way less data. Just compare Waymo to the average 16-year-old with a car.

A 16-year-old has been training for almost 16 years to drive a car. I would argue the opposite: Waymo and other task-specific AIs need far less data than humans. Humans can generalize their training, but they definitely need a LOT of training!

When humans, or dogs or cats for that matter, react to novel situations they encounter, when they appear to generalize or synthesize prior diverse experience into a novel reaction, that new experience and new reaction feed directly back into their mental model and alter it on the fly. It doesn't just tack on a new memory. New experience and new information back-propagate constantly, adjusting the weights and meanings of prior memories. This is a more multi-dimensional alteration than simply re-training a model to come up with a new right answer... it also exposes to the human mental model all the potential flaws in all the previous answers, which may have been sufficiently correct before.
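A toy sketch of the contrast being drawn here (purely illustrative, and not a claim about how brains actually work): an append-only store just tacks the new experience on, while an online learner also folds it back into the weights, so every earlier input would now be scored differently.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # the current "mental model": a toy linear predictor
memory = []              # an append-only episodic store, for contrast

def experience(x, y, lr=0.1):
    """One novel experience: store it AND fold it back into the weights."""
    memory.append((x, y))     # merely tacking on a new memory...
    err = w @ x - y
    w[:] -= lr * err * x      # ...versus a gradient step that re-weights
                              # everything the model already "knows"

x, y = np.array([1.0, 0.5, -0.2]), 0.7
before = w @ x
experience(x, y)
print(before, w @ x)  # the same past input is now judged differently
```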

This is why, for example, a 30 year old can lose control of a car on an icy road and then suddenly, in the span of half a second before crashing, remember a time they intentionally drifted a car on the street when they were 16 and reflect on how stupid they were. In the human or animal mental model, all events are recalled by other things, and all are constantly adapting, even adapting past things.

The tokens we take in and process are not words, nor spatial artifacts. We read a whole model as a token, and our output is a vector of weighted models that we somewhat trust and somewhat discard. Meeting a new person, you will compare all their apparent models to the ones you know: Facial models, audio models, language models, political models. You ingest their vector of models as tokens and attempt to compare them to your own existing ones, while updating yours at the same time. Only once our thoughts have arranged those competing models we hold in some kind of hierarchy do we poll those models for which ones are appropriate to synthesize words or actions from.


In a word, JEPA?

No 16-year-old has practiced driving a car for 16 years.

They were practicing object recognition, movement tracking and prediction, self-localisation, visual odometry fused with proprioception and the vestibular system, and movement control for 16 years before they even sit behind a steering wheel, though.

If you see gaining fine motor control, understanding pictographic language […] as prerequisites to driving a car, then yes, all of them are.

That's an exaggeration. Nobody is trained to read STOP signs for 16 years; a few months, tops. And Waymo doesn't need to coordinate a four-limbed, 20-digited, one-headed body to operate a car.

Well, I also think that there is a lot we process 'in the background' and learn beforehand in order to learn how to drive, and then to drive. I think the fairest test would be to figure out the absolute lowest age at which kids could perform well on the street behind a steering wheel.

I am not making the point that it is; rather, I am expanding on the possible perspective in which 16 years of training produce a human driver.

That being said, you don't really need training to understand a STOP sign by the time you are required to; it's pretty damn clear, being one of the simpler signs.

But you do get a lot of "cultural training," so to speak.


I think what Anthropic did yesterday was good, but I had to take a step back and think: well, it wasn’t a bridge too far for them to allow Claude to be used in the wildly illegal Maduro kidnapping operation.

Right, the red line wasn’t much of a line. If you’re drawing your line only at unconstitutional mass surveillance, and you’re only refusing to let the DoD build Skynet because Claude isn’t ready for it yet, that’s not really a line of principle.

How is that not a line of principle? Principle doesn't mean a line we'd all agree on, nor one we'd all deem acceptable; it just means there is a line somewhere. And refusing to enable mass surveillance or fully autonomous AI in the kill chain is a very clear principle.

It's unprincipled because the implication is that once Claude improves enough to be trusted with autonomous killing, the company will be OK with it.

But to gp's point, that is a principle. Perhaps not yours, but they outlined their stance and stuck to it despite threats and consequences.

Contrast Sam's OpenAI announcement which was very carefully worded to appear to uphold the same principles, but is currently being rightfully disassembled as retaining various potential outs that would allow violating the signaled principles.

Being honest and staunch about clearly stated principles is better than being wiggly and dishonest about weasel-worded impressions of a principle.

And all of that is orthogonal to whether you (or anyone) agrees with a given principle or given revealed behavior.


It’s a line that no one else had enough backbone to draw so…

No one else at the time had chosen to be in bed with the US military at the level Anthropic was.

Did you ask these too: what was the full context? To what degree was Anthropic aware in advance? What was their action space (their options)? What would be the consequences of their next actions?

And of course: and what sources are you using?

I get it: moral oversimplification is tempting for many people. I understand digging in takes time, but this situation warrants extra consideration.

Ethics is complicated and much harder than programming. Ethical reasoning is a muscle you have to train. Generally speaking, it isn’t the kind of skill that you build in isolation. At the very least, a lot of awareness and introspection is required.

I’d like to think that HN is a fairly intelligent community. But I don’t assume too much. Going by what I’ve seen here generally, I see a lot of shallow thinking. So I think it’s a reasonable concern that many of us here have a pretty large blind spot (statistically) when it comes to “softer” skills like philosophy and ethics.

This is not me “blaming” individuals; our industry has strong bias and selection criteria. This is my overall empirical take based on participating here for years.

Still, I’d like to think we are sufficiently intelligent and we have sufficient means and time to fill the gaps. But we have to prove it. I suggest we start modeling and demonstrating the kind of behavior and reasoning that we want to see in the world.

You can probably tell that I lean heavily towards consequentialist ethics, but I don’t discount other kinds of ethical thinking. I just want everyone to think harder. Seek more context. Ask what you would do in another’s shoes, and why. Recognize the incentives and constraints.

Many people are tempted to judge others. That’s human. I suggest tamping that down until you’ve really marinated in the full context.

Also, each of us probably has more influence through our own actions than through merely judging others.

And let me be brutally honest about one’s impact: organizing and collaborating is such a force multiplier (easily 100X) that not doing it for the things you care about is a moral failure!

I’m not discounting good intentions, but in my system of ethics, I put much more emphasis on our actions. And persuasion is an action, which is what I’m hoping to do here.


It looks like Anthropic may have been caught by surprise, but if so, that is incredibly naive.

https://www.axios.com/2026/02/13/anthropic-claude-maduro-rai...


There's been a fair amount of speculation that pushing back after discovering that that had happened was what instigated this week's fun.

That would certainly make me feel more positively disposed if that credibly came to light, though I would still wonder how dumb they were to think the military wouldn’t do stuff like that.

EDIT: that may be the case actually

https://www.axios.com/2026/02/13/anthropic-claude-maduro-rai...


Do we know they were consulted on that, as opposed to it being the wake-up call that led to the breakup?

You know what? I have not seen an American company take a stand like this… uh, ever. I don’t think there should be any engagement with the military whatsoever, but I will offer kudos to Anthropic.

I don’t really expect this to last, but if it does, I will happily continue to offer these kudos on an indefinite basis.


I kinda get the impression this was from 2023, and it's also not clear what this dissident did; it's hard to evaluate whether I should care without knowing that.

Interesting article, but there's no rate of investigation success quoted. The engineering is interesting, but it's hard to know if there was any point without some kind of measure of the usefulness.

We did not want to make the post engineering-focused, but we have 18 companies in production today (we wrote about PostHog in the blog). At some point we should post some case studies. The metric we track for usefulness is our monthly revenue :)

They often contract that work out; I wouldn't be surprised if some of it is already AI. Cheaper than hiring, if you get it right.

what a solitary existence

It is when "defense" means invasion and subjugation of other countries. All countries pose their military operations as "defense." Inquiring minds should ask if a country surrounded on sides by two oceans with two pacified neighbors has any real threats or merely opportunities for cheap labor, market access, and mineral rights abroad.

This has been going on for a very long time (read what Smedley Butler said in "War is a Racket"), but after the Iraq War, the credibility of the US should be somewhere in hell.


It's not black and white. There is an entire spectrum of completely justifiable and extremely questionable uses of military power by the US.

The US did rename the Department of Defense to the Department of War, so I'm not sure how much posing is left.

The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, a framing that comes almost purely from the U.S. side, which is the aggressor here.

AI should never be used in military contexts. It is an extremely dangerous development.

Look at how US ally Israel used the non-LLM AI systems "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in its genocide of Palestinians.


Ukraine is using AI in a military context with some effectiveness. I don't think there's much of a problem with having the drone take over for the last couple of minutes of blowing up a Russian factory.

> Claude only talks about safety, but never released anything open source.

I'm still working through this issue myself, but Hinton said releasing weights for frontier models was "crazy" because they can be retrained to do anything. I can see the alignment of corporate interest and safety converging on that point.
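For what it's worth, the mechanics behind Hinton's worry are mundane: once a checkpoint is public, anyone can cheaply fine-tune it on whatever data they choose, and nothing in the released artifact enforces the original alignment. A minimal sketch with the Hugging Face transformers/peft stack; the checkpoint name, data file, target modules, and hyperparameters are all placeholders, not a recipe for any specific model:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

name = "some-open-weights-model"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Train only small LoRA adapters on top of the frozen base weights,
# so the whole run fits on modest consumer hardware.
# (Attention module names vary by architecture; these are common ones.)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8,
                                         lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"]))

data = load_dataset("json", data_files="my_data.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    # Causal-LM collator: labels are derived from the input ids
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```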

From the point of view of diminishing corporate power, I do think it is essential to have open weights. If not that, then the companies should be publicly owned, to avoid concentration of unaccountable power.

https://www.youtube.com/watch?v=66WiF8fXL0k&t=544s


