Hacker News: starkeeper's comments

It's a complete fabrication.

They do not care about customers because they are the customer and users are hostages. They only care about hostage count and other shitty metrics.

So is this a door-knocking bitcoin robbery game?

Microsoft is famous for choosing terrible names, but how could they be this terrible?

Hopefully they are building an army to conquer the planet.


Or they pack up to leave.


Local LLMs = now and when, not if.


We need to interface with the eye socket in a way that's more biologically and socially acceptable, and more comfortable for humans.

All these face lids are hobbyist efforts; I think the real deal comes later.


Normalizing this shit is more evil than anything else. Really classy overlords strike again.


I'm so excited to interview for a career at Meta!

Also, why aren't investors suing the legs off of Zuck for the whole metaverse debacle? It's a scam and pure fraud. Also a dumb name; sue for that too. Should have just renamed it meeme.


On a similar note I recently deleted a whole bunch of automated tests because if the AI is going to write most of the code then I should test it to make sure it's good! This won't work for all projects, but for my indie games it's a good idea.


> I recently deleted a whole bunch of automated tests because if the AI is going to write most of the code then I should test it to make sure it's good!

??

You say you deleted the tests because you "should test it"? That logic seems inconsistent.

Sanity checking LLM-generated code with LLM-generated automated tests is low-cost and high-yield because LLMs are really good at writing tests.


I think LLMs are really bad at writing tests. In the good old days you invested in making your test code structured and understandable. Now we all just say "test this thing you just generated".

I shipped a really embarrassing off-by-one error recently because some polygon representations repeat their last vertex as a sentinel (WKT, KML do this). When I checked the "tests", there was a generated test that asserted that a square has 5 vertices.
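For anyone unfamiliar with the convention being described: a minimal sketch (helper name is mine, not from any library) of why a closed polygon ring lists one more point than the shape has vertices.

```python
# WKT/KML-style closed ring: the first point is repeated at the end as a
# sentinel, so a square's coordinate list has 5 points but only 4 vertices.
ring = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]  # a unit square

def vertex_count(ring):
    # Drop the closing sentinel point if the ring is explicitly closed.
    if len(ring) > 1 and ring[0] == ring[-1]:
        return len(ring) - 1
    return len(ring)

assert vertex_count(ring) == 4  # a square has 4 vertices, not 5
```

A generated test that asserts `len(ring) == 5` "passes" but encodes the representation, not the geometry.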


You still have to review LLM results.

But LLMs let you skip all the boring parts: setting up a harness, writing some initial inputs, adding asserts for every output. And then _you_ get to do the actually important stuff, like ensuring a square has 4 vertices.
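That division of labor might look like this sketch (function names are hypothetical): the LLM scaffolds the harness and the obvious checks, while the human adds the one assertion that requires understanding the domain.

```python
def square_ring():
    # Closed ring, WKT/KML style: the first point is repeated at the end.
    return [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]

# Boilerplate an LLM is good at generating: setup, inputs, obvious asserts.
def test_ring_is_closed():
    ring = square_ring()
    assert ring[0] == ring[-1]

# The human-supplied, domain-aware check: a square has 4 distinct
# vertices, even though the closed ring lists 5 points.
def test_square_has_four_vertices():
    ring = square_ring()
    assert len(set(ring)) == 4

test_ring_is_closed()
test_square_has_four_vertices()
```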


I suppose that my generalization was too broad and that LLMs can be either good or bad at writing tests depending on your workflow and expectations.

I'm closely supervising the LLM, giving it fine-grained instructions — I generally understand the full interface design and most times the whole implementation (though sometimes I skim). When I have the LLM write unit tests for me, it writes essentially what I would have written a couple years ago, except that it tends to be more thorough and add a few more tests I wouldn't have had the patience to write. That saves me quite a bit of time, and the LLM-generated unit tests are probably somewhat better than what I would have written myself.

I won't say that I never see brain-dead mistakes of the "5-vertex square" variety (haha); by their nature, LLMs tend toward consistency rather than understanding, after all. But I've been using Claude Opus exclusively for a while, and it doesn't make those mistakes nearly as often as the lower-powered LLMs I used before.


> ...because LLMs are really good at writing tests.

No, they're absolutely shit at writing tests. Writing tests is mostly about risk and threat analysis, which LLMs can't do.

(This is why LLMs write "tests" that check if inputs are equal to outputs or flip `==` to `!=`, etc.)
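The degenerate pattern described above can be illustrated with a toy example (function names are mine): a "test" that restates the implementation can never fail, unlike an expectation written independently from the spec.

```python
def normalize(s):
    # Hypothetical function under test.
    return s.strip().lower()

def test_normalize_tautology():
    # Tautological: compares the output to itself, so it always passes
    # no matter what normalize() does.
    s = "  Hello  "
    assert normalize(s) == normalize(s)

def test_normalize_real():
    # An independent expectation, written from the spec, can actually fail.
    assert normalize("  Hello  ") == "hello"

test_normalize_tautology()
test_normalize_real()
```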

