Hacker News | robot_jesus's comments

By and large I agree, but it doesn’t need to be either/or.

Many of the most popular games in the past decade are procedurally generated and have nothing “intentionally” placed (apart from tuning/tweaking the balance of the seeding algorithms).


> have nothing “intentionally” placed (apart from tuning/tweaking the balance of the seeding algorithms).

I think you underestimate the intentionality that goes into developing procedural generation. Something like Dwarf Fortress isn't "place objects randomly"; it is layers upon layers of carefully crafted systems that build on each other to produce specific patterns of outcomes.
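To make "layers upon layers" concrete, here's a toy sketch of my own (purely illustrative, not how Dwarf Fortress actually works): coarse random layers shape the big structure, finer layers add weaker detail on top, and everything is deterministic per seed.

```python
import random

def layered_heightmap(size, seed, layers=4):
    """Toy layered generation: each pass adds finer, weaker noise
    on top of the coarser pattern produced by the passes before it."""
    rng = random.Random(seed)
    heights = [0.0] * size
    amplitude, step = 1.0, size
    for _ in range(layers):
        # One random offset per 'step'-sized region: early layers
        # shape the big picture, later layers add local detail.
        offsets = [rng.uniform(-amplitude, amplitude)
                   for _ in range(size // step + 1)]
        for i in range(size):
            heights[i] += offsets[i // step]
        amplitude /= 2
        step = max(1, step // 2)
    return heights
```

Because the generator is seeded, the same seed always reproduces the same "world," which is what makes tuning the balance of these systems possible at all.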


By calling it out in my comment, I was trying to not underestimate it.

I guess what I'm saying is: couldn't a world model with targeted training and thoughtfully tuned system prompts be directionally similar to those layered systems, producing specific patterns of outcomes?


I've had good luck with using LLMs to create procedural content engines for my game prototypes. So the distinction between AI and procedural might get even blurrier.

Right, and I wondered how these world models might be used in a careful way (just as agents can be used carefully to accelerate work).

Are video game developers using these systems in their workflows? Would love to learn more!


Which game would that be apart from Minecraft?

Dwarf Fortress, No Man's Sky, Elite Dangerous, ...

The combination of "many", "most popular", and "nothing" is overstating it by a wide margin, but for example the majority of the vegetation in games as far back as Oblivion was procedurally placed.


Battlefield 2 had procedural trees and terrain the year before. I think it more or less arrived along with open-world maps?

No Man's Sky, Terraria, Dead Cells, to name a few.

Dead Cells just arranges a few pre-designed rooms together for each stage, doesn't it?

If it does do that, it doesn't feel that way. I never found it particularly repetitive.

A recent example is Megabonk, a roguelike with procedural levels. Each run is unique, but the levels have a consistent theme.

Well, the marathon record has been broken 53 times since the early 1900s. So, there are a lot of factors at play. Better training, better nutrition, better tactics, and, yes, better shoes.

The advancements in shoes have made a measurable impact, but there are lots of optimizations being worked on.


Also population and access. In the time since the early 1900s a lot more humans exist, and more of them have the opportunity to attempt this record. Population in Africa exploded in that time and access improved significantly.

If you're bloody quick and born in Birmingham (either of them) in 1900, you can probably find out about and get yourself a chance to attempt that world record, but if you're born in Kapsabet (in Kenya) in 1900, good luck; even in Nairobi I wouldn't bet on it.


Sky Team is great, I agree. For a few more 2p co-ops to try out, I can recommend Sail, Burgle Bros (give it a few playthroughs to get a feel), and Regicide. All are available on BGA if you want to try them and I've loved playing them.


Agree 100%. This hobby jumped the shark probably 5-10 years ago.

Thanks to crowdfunding, deluxe editions of games are being announced all the time for $400–500.

Games ship with "6 expansions in box" which sounds great and like a ton of replayable content, until you realize that they're poorly playtested, lack balance, and add a confounding (and sometimes contradictory) number of rules.

As you noted, games come with a ridiculous number of minis and trinkets and baubles that drive the price of new games well past $100 in many cases.

As the industry has gotten larger, many publishers are turning more toward bankable IP as opposed to innovative concepts. Or they're releasing a bajillion reskins of the same game (looking at you, TtR, Azul, Pandemic, 7 Wonders, etc.). This is not unique to board games by any stretch, but it's a sign of an inflection point.

I'm not saying there aren't good games being released. I'm saying they're harder to find and getting drowned out by the shameless cash grabs and lazy IP-based games.

Go find some of the classics by Rosenberg, Knizia, Feld, Luciani, and others. You'll get a lot more bang for your buck.


> Games ship with "6 expansions in box" which sounds great [...] until you realize that they're poorly playtested, lack balance, and add a confounding (and sometimes contradictory) number of rules.

Hot take: I have never played an expansion that I liked more than the base game.


I won't argue that. There are a handful that I think improve the experience (some of the early Carcassonne ones, for example) but they are by far the exception rather than the rule.



> We've seen all the American models be closed and proprietary from the start

What about Gemma and Llama and gpt-oss, not to mention lots of smaller/specialized models from Nvidia and others?

I would never argue that China isn't ahead in the open weights game, of course, but it's not like it's "all" American models by any stretch.


gpt-oss is good, but I haven't heard anything about an update. It seems like a one-and-done release to quiet the people complaining that OpenAI wasn't open.


The more accurate version is that only Chinese companies (plus Facebook, briefly) really open-source their frontier models. The rest are non-frontier: either older or specialized for something.


It's all openwashing. Every one of the companies you listed has at some point expressed how important and valuable open weights and locally usable models are, and every single one has since increasingly pushed closed, proprietary, or cloud-only options.

I'm annoyed at myself, because I hoped for and praised Chinese AI as it was opening up just when Llama was closing, but Qwen looks to be running the same playbook here as Llama/Meta, Gemma/Google, and OpenAI/gpt-oss.


It’s typical to complain about AI slop that hits the front page here, but it’s worth noting that a lot of the (presumably) human-written content is slop in its own right.

This piece was some self-indulgent rambling that didn’t really have any connective threads.


They're not perfect, but the local model game is progressing so quickly that they're impossible to ignore. I've only played around with the new qwen 3.6 models for a few minutes (they're damn impressive), but this weekend's project is to really put them through their paces.

If I can get the performance I'm seeing out of free models on a 6-year-old MacBook Pro M1, it's a sign of things to come.

Frontier models will have their place for 1) extensive integrations and tooling and 2) massive context windows. But I could see a very real local-first near future where a good portion of compute and inference is run locally and only goes to a frontier model as needed.
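The "local-first, escalate as needed" idea is basically a routing decision. As a toy sketch (every name and threshold below is made up for illustration):

```python
def route(prompt, *, needs_tools=False, needs_big_context=False,
          local_context_limit=32_000):
    """Toy local-first router: prefer the local model, escalate to a
    frontier API only when the request exceeds what local handles well.
    The thresholds and flags are illustrative, not from any real system."""
    if needs_tools or needs_big_context:
        return "frontier"  # extensive integrations/tooling or huge context
    if len(prompt) > local_context_limit:
        return "frontier"  # prompt too large for the local window
    return "local"
```

In practice the interesting part is the escalation policy, but the shape is just this: default local, fall through to the frontier model on a short list of conditions.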


I've had really good results from qwen3-coder-next. I'm hoping we get a qwen3.6-coder soon, since Claude seems to get less and less available on the Pro plan.


Out of curiosity, why not just try it with one of the many local managers like LM Studio or Ollama or oMLX, etc?

The Gemini app is kind of terrible (apart from the models) but Gemma 4 runs great locally already.


I run LM Studio now and it's more like a "chat" bot, whereas the Gemini app is more like an agent.


You can enable the LM Studio server and use any OpenAI-compatible harness to drive the models running inside it: OpenCode, pi, even Claude and Codex...
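As a minimal sketch, this is what "OpenAI-compatible" buys you: any client can talk to the local server (assuming LM Studio's default address of http://localhost:1234/v1; the model name is whatever you have loaded, so treat both as assumptions to adjust):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumed LM Studio default

def chat_request(prompt, model="local-model"):
    """Build an OpenAI-style chat-completions payload; LM Studio
    serves whatever model is currently loaded."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

def ask(prompt):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask("Say hello in one word.")  # requires the LM Studio server running
```

Any harness that speaks this wire format can be pointed at the same base URL, which is why the agentic tools work with it.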


I’ve been missing agentic capabilities from almost all local LLM apps. It’s like they’re all stuck in 2023.

That’s why I started using OpenCode for this. It works pretty well, the web UI comes pretty close to a general chat app. You can use folders to organize your sessions like projects (which annoyingly Gemini still doesn’t have) with files and extra instructions.

It’s pretty powerful.


OpenCode is one solution, but there are also several alternatives.

For example pi-dev, but even Codex is open source and it should work with any locally-hosted model, e.g. by using the OpenAI-compatible API provided by llama-server.

I have not used pi-dev yet, but the recent presentation of pi-dev by its developer (reported in other HN threads) has convinced me that he is among the people who can distinguish good from bad, which unfortunately cannot be said about many people creating AI applications.

So I intend to switch to using pi-dev as a coding assistant for my locally hosted models, but I do not yet have results demonstrating that this is the right choice, beyond its lead developer being more trustworthy than the others.


I too am interested in Pi and Codex, but haven’t seen any full-featured web UIs for them yet. Would be happy to know if there are some!

One thing I’m considering (depending on how happy I am with OpenCode after trying to remove some questionable functionality it has) would be to make Pi (or Codex) speak the OpenCode protocol so that its web UI can be used with it.


Same with Reddit. A decade ago it felt like they were down more than they were up. And it didn't slow down their growth trajectory. Instead, as soon as it was back there would be a thousand shitposts about "How did you all survive the outage? Did you <gasp> work?"

