> It would surely sell more if people would actually explain what the game is, without using niche words like "sokoban".
Sure, sure. Here's the annotated second paragraph from TFA:
When game designer <Stephen Lavelle> [0] (Increpare Games) released Stephen's Sausage Roll back in April 2016, it was accompanied by <a trailer> [1] that showed almost nothing about the game, yet word still spread quickly. Puzzle developers and fans praised the game for its impeccable design, teasing out layers of deep puzzling and mind-expanding discoveries from so few puzzle elements. It was also renowned for its uncompromising, yet always fair, difficulty curve, with immensely challenging puzzles from the very start. These sentiments are still held to this day, as this beloved sausage-pushing sokoban continues to influence new generations of puzzle developers, <inspiring some of the best sokoban games> [3] ever made and introducing “Sausage-likes” to the puzzle vernacular.
Clicking link [3] leads us to a page that has this as its first paragraph:
Sokoban games, also known as block-pushing or box-pushing games, are turn-based puzzle games in which you control a character pushing or moving objects around on a grid. The genre has origins in the 1982 game Sokoban, designed by Hiroyuki Imabayashi, in which you have to push boxes around a warehouse onto designated targets. The Japanese word 倉庫番 (“sōkoban”) translates to “warehouse keeper”.
Incidentally, link [3] is repeated in the final paragraph of TFA, which I will also copy and annotate:
Learn more about Stephen's Sausage Roll in <our database of thinky games> [4], where you can also find <similar games> [5] and some of the <best sokoban games ever made>. [3]
Link [4] is pretty clear about what the game is. Did you bother to click it in order to "[l]earn more about Stephen's Sausage Roll", as it invites you to do?
[0] <https://increpare.com/>
[1] <https://www.youtube.com/watch?v=lCNqYLGwqxU>
[3] <https://thinkygames.com/lists/best-sokoban-games/>
[4] <https://thinkygames.com/games/stephens-sausage-roll/>
[5] <https://thinkygames.com/games/stephens-sausage-roll/similar/>
I'm confused. What do you mean by this? Does dnsmasq not put the names of DHCPv6 clients into its hostname database? If ISC DHCPd is commanded to update DNS, does it only update for DHCP clients and not DHCPv6 clients?
They probably mean that when using SLAAC - I guess the easiest way to get IPv6 connectivity - there is no equivalent to the DNS updates you'd get with DHCPv4 or DHCPv6.
You pointed out one way - just use DHCPv6, but that loses some of the nice SLAAC properties.
A different way is to run mDNS and let the devices announce their own hostname.local.
Different tradeoffs, but in practice not too difficult to get to work.
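(For the curious: on most Linux boxes the moving parts are avahi-daemon for announcing and nss-mdns for resolving. A quick sanity check, with otherbox.local as a placeholder name, looks something like this:)
$ systemctl enable --now avahi-daemon    # announce this host as <hostname>.local
$ avahi-resolve --name otherbox.local    # resolve a name another device announced
$ grep ^hosts /etc/nsswitch.conf         # "mdns4_minimal" should appear before "dns"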
> You pointed out one way - just use DHCPv6, but that loses some of the nice SLAAC properties.
Android refuses to implement DHCPv6. So (if you have any Android devices in play) at best you can use DHCPv6 for some of your devices while still needing to also have SLAAC. And yes, mDNS might work, but that's another service (or two, right? One to resolve other devices, another to advertise this device) to run on every device, and you'd better hope that every device can run the needed services. Which... actually brings us back to Android; AFAICT, Android can resolve mDNS but doesn't show up itself. As someone who can and does SSH to my phone (termux), this is kind of a sticking point.
If that's the case, then you've got to think of SLAAC as operating exactly like IPv4 address autoconfiguration (sometimes called "IPv4LL")... except that you usually get globally-routable IP addresses out of it.
If you want the management niceties that you often get when using DHCP, then you have to use DHCP.
Some very loud purists might say "SLAAC is the only way to use IPv6!". Setting aside the convenience of LAN-side prefix delegation entirely, I say two things to them:
1) "Good luck with telling your IPv6 clients about things like your preferred NTP server."
2) "For ages, Router Advertisements have had entirely independent 'autoconfigure your addresses', 'use stateful configuration for your "other" configuration' [0], and 'use stateful configuration for your addresses' bits. It's legal to have any number of them enabled. This is a deliberate choice by the folks defining IPv6."
In general, the folks who scream about how IPv6 NAT and DHCPv6 should not exist and should never be used should be ignored... at least about that topic.
[0] Things like NTP and DNS and that other good stuff that DHCP can be used to tell hosts about.
I mostly meant that DHCPv6 was an afterthought, and was complaining about the length of IPv6 addresses when they are truly random/EUI64. As a network guy who has had to write down or quickly type IP addresses for troubleshooting thousands of times, v4 is much easier for humans to work with than a full v6 address.
(Oh and Android doesn't support DHCPv6 at all, but that doesn't matter much for server environments/DNS reachability).
In hindsight of EUI64 being shunned in favor of privacy addresses, plus how much of the IPv6 space is reserved for future use, I wonder if IPv6 could have achieved all of its goals with a 64 or 80 bit address instead of 128.
> I mostly meant that DHCPv6 was an afterthought...
I'd call DHCPv6 "a recognition that a complete break from how IPv4 networks have been historically managed simply is not practical for many network operators... especially the larger ones".
And it's true that the DHCPv6 RFC (3315) was published in 2003; five years after the SLAAC RFC (2462). But it's also true that 2003 was twenty-three years ago. Regardless of how you feel about the five-year gap between 2462 and 3315, DHCPv6 has been available for nearly a quarter century. DHCPv6-PD (Prefix Delegation) is how every ISP that I've had that provided IPv6 service to my home [0] has provided globally-routable address space to my LAN. I assume that it's how it's done by most ISPs who don't want to have their customers deal with manually splitting up a wider-than-64 prefix onto their LAN.
But -like I said- I've only had experience with two US ISPs. Perhaps everyone else does it differently?
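(As a sketch of what requesting a delegation looks like on a Linux router running dhcpcd - interface names and the /64 carve-out here are placeholders, and your dhcpcd version's syntax may vary:)
# /etc/dhcpcd.conf (sketch)
interface wan0
    ia_pd 1 lan0/0/64    # request a delegated prefix; assign the first /64 out of it to lan0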
> Oh and Android doesn't support DHCPv6 at all...
Oh, but it does! And in the most useless way -for an ordinary end user- possible! It uses DHCPv6... but only for prefix delegation! [1]
It's almost as if the folks who make the decisions for the Android project have never used an Android device anywhere except for a Very Large Professionally-Managed Enterprise Network.
> In hindsight of EUI64 being shunned in favor of privacy addresses...
If by "privacy addresses", you mean the "periodically generate a new temporary address and use that for new outbound connections" thing, then I shun "privacy" addresses [2]... but I recognize that I may hold an minority opinion.
> ...plus how much of the IPv6 space is reserved for future use, I wonder if IPv6 could have achieved all of its goals with a 64 or 80 bit address instead of 128.
Sure, maybe. But -IMO- it's way better to have much too much address space than to have too little. Plus, if we ever manage to stop playing the "crab bucket" game and get our asses off of this rock, we might appreciate all the extra address space as we set up very long range networks connected by very high-latency links.
Somewhat related: I've read discussions from actually-informed folks who express the opinion that -given our quarter-century of hindsight- it's pretty clear that (de facto because of SLAAC) reserving 64 bits of the address for the host part was quite a waste of address space. I wonder if they would have made the addresses 32 bits shorter if they'd reduced the host part by 32 bits.
[0] Granted, that's only two ISPs -Comcast and Monkeybrains-, but
1) Those two ISPs span like (fuck me to death, I'm old) quite a bit more than twenty years of personal ISP history
2) Comcast is either the largest or one of the largest ISPs in the US. They also -notably- run an all-IPv6 infrastructure network. I don't claim that they obviously know the right way to manage IPv6 networks, but I will claim that they have a lot of experience with it.
[2] in part because of the widespread use of the "use a user-configurable DUID along with the IAID" mechanism for generation of the host part of the address (rather than relying on the interface's MAC address), and in part because there are eleven-zillion ways to track a World Wide Web user that have absolutely nothing to do with that user's IP address. IMO, all the "privacy" addresses add is complication.
> Things like NTP and DNS and that other good stuff that DHCP can be used to tell hosts about.
Look up RFC 6106 (published 15 years ago). Router advertisements have carried DNS resolver info for a long time now.
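(For illustration, advertising a resolver this way is a couple of lines in radvd; the interface name and address below are placeholders:)
interface eth0 {
    AdvSendAdvert on;
    RDNSS 2001:db8::53 { };    # RFC 6106/8106 RDNSS option carried in RAs
};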
Once again, the old adage “IPv6 haters don’t understand IPv6” applies.
As much as I would like hosts to use the local NTP server, most will ignore the NTP server you specify in DHCP anyway, so it’s kind of a moot point.
Edit: RFC 6106 actually supersedes RFC 5006 from 2007. That's nearly two full decades that we've had DNS info in RAs. That's the year Itanium 2 came out (any greybeards here old enough to remember that one?)
> ...the old adage “IPv6 haters don’t understand IPv6” applies.
I'm an IPv6 hater. Sure. [0][1][2]
> ...RFC 6106...
Yes. I'm quite aware of the RDNSS field in RAs. In past experience from ten-ish years ago, [3] I found that it was unreliably recognized... some systems would use the data in it, and others would simply ignore it. In contrast, DHCPv6 worked fine on everything I tested it on except for Android. Might this be because RFC 6106 was published in 2010, while RFC 3315 ("stateful" DHCPv6) was published in 2003, and RFC 3736 ("stateless" DHCPv6) was published in 2004? Maybe.
> ...RFC 6106 actually supersedes RFC 5006 from 2007.
An attentive reader notes that RFC 5006 is an experimental RFC. It took another four years for a non-experimental version of the standard to be published.
So, anyway. Yeah, I should have said
Things like NTP and (sometimes) DNS and that other good stuff...
Whoops. But, my point stands... how do you communicate to clients the network's preferred NTP servers, or nearly all of the other stuff that DHCPv6 communicates, if one chooses to use only SLAAC?
[3] Perhaps things have gotten better in the intervening years? Should I find myself bored as hell one evening, I'll see what the state of device/OS support is.
These days, when I hear a project owner/manager describe the project as a "clean room reimplementation", I expect that they got an LLM [0] to extrude it. This expectation will not always be correct, but it'll be correct more often than not.
[0] ...whose "training" data almost certainly contains at least one implementation of whatever it is that it's being instructed to extrude...
If so, I wonder how good an LLM port of C++ to plain and simple C would look.
It seems there is a signal (here on HN) that coding LLMs would be really good at mass-porting C++ code to plain and simple C to remove the C++ kludge dependency.
As far as LLM-produced correctness goes, it all comes down to the controls that have been put in place (how valid the tests are, does it have a microbenchmark suite, does it have memory leak detection, etc.)
There's much more to it than that. One unmentioned aspect is "Has the tooling actually tested the extruded code, or has it bypassed the tests and claimed compliance?". Another is "Has a human carefully gone over the extruded product to ensure that it's fit for purpose, contains no consequential bugs, and that the test suite tests all of the things that matter?".
There's also the matter of copyright laundering and the still-unsettled issue of license laundering, but I understand that a very vocal subset of programmers and tech management gives zero shit about those sorts of things. [0]
[0] I would argue that -most of the time- a program that you're not legally permitted to run (or distribute to others, if your intention was to distribute that program) is just as incorrect as one that produces the wrong output. If a program-extrusion tool intermittently produces programs that you're not permitted to distribute, then that tool is broken. [1]
[1] For those with sensitive knees: do note that I said "the still-unsettled issue of license laundering" in my last paragraph. Footnote zero is talking about a possible future where it is determined that the mere act of running gobs of code through an LLM does not mean that the output of that LLM is not a derived work of the code the tool was "trained" on. Perhaps license-washing will end up being legal, but I don't see Google, Microsoft, and other tech megacorps being very happy about the possibility of someone being totally free to run their cash cow codebases through an LLM, produce a good-enough "reimplementation", and stand up a competitor business on the cheap [2] by bypassing the squillions of dollars in R&D costs needed to produce those cash cow codebases.
[2] ...or simply release the code as Free Software...
user@ubuntu-server:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 25.10
Release: 25.10
Codename: questing
user@ubuntu-server:~$ uname -a
Linux ubuntu-server 6.17.0-7-generic #7-Ubuntu SMP PREEMPT_DYNAMIC Sat Oct 18 10:10:29 UTC 2025 x86_64 GNU/Linux
user@ubuntu-server:~$ getent ahosts us.archive.ubuntu.com
91.189.91.82 STREAM us.archive.ubuntu.com
91.189.91.82 DGRAM
91.189.91.82 RAW
91.189.91.81 STREAM
91.189.91.81 DGRAM
91.189.91.81 RAW
91.189.91.83 STREAM
91.189.91.83 DGRAM
91.189.91.83 RAW
2620:2d:4002:1::102 STREAM
2620:2d:4002:1::102 DGRAM
2620:2d:4002:1::102 RAW
2620:2d:4002:1::101 STREAM
2620:2d:4002:1::101 DGRAM
2620:2d:4002:1::101 RAW
2620:2d:4002:1::103 STREAM
2620:2d:4002:1::103 DGRAM
2620:2d:4002:1::103 RAW
user@ubuntu-server:~$ ip --oneline link | grep -v lo: | awk '{ print $2 }'
enp0s3:
user@ubuntu-server:~$ ip addr | grep inet6
inet6 ::1/128 scope host noprefixroute
inet6 fe80::5054:98ff:fe00:64a9/64 scope link proto kernel_ll
user@ubuntu-server:~$ fgrep -r -e us.archive /etc/apt/
/etc/apt/sources.list.d/ubuntu.sources:URIs: http://us.archive.ubuntu.com/ubuntu/
user@ubuntu-server:~$ sudo apt-get update
Hit:1 http://us.archive.ubuntu.com/ubuntu questing InRelease
Get:2 http://security.ubuntu.com/ubuntu questing-security InRelease [136 kB]
<snip>
Get:43 http://security.ubuntu.com/ubuntu questing-security/multiverse amd64 c-n-f Metadata [252 B]
Fetched 2,602 kB in 3s (968 kB/s)
Reading package lists... Done
I didn't think to wrap that in 'time', but it only took a few seconds to run... more than two and less than thirty.
The IPv6 packet capture running during all that reveals that it never tried to reach out over v6 (but that my multicast group querier is happily running).
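(The capture itself isn't reproduced here; an invocation along these lines, using the interface from the transcript above, is enough to check:)
root@ubuntu-server:~# tcpdump -n -i enp0s3 ip6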
I even manually ran unattended-upgrade, which looks to have succeeded. Other than unanswered router solicitations and multicast group query membership chatter, there continued to be no IPv6 communication at all, and none of the messages you reported appeared either in /var/log/syslog or on the terminal.
You aren't running it during an external transitive failure that happened on April 15th.
The problem isn't the happy path; the problem is when things fail, and that Linux, in particular, made it really hard to reliably disable [0]
Once that hits someone's Vagrant or Ansible code, it tends to stick forever, because they don't see the value until they try to migrate, and then it causes a mess.
The last update on the original post link [1] explains this. The IPv4 host being down, not getting a response, it being the third Tuesday while Aquarius is rising into whatever, etc. can invoke it. It causes pain, is complex and convoluted to disable when you aren't using it, and thus people are afraid to re-enable it.
> ...Linux, in particular, made it really hard to reliably disable
Section 10.1 of that Arch Wiki page says that adding 'ipv6.disable=1' to the kernel command line disables IPv6 entirely, and 'ipv6.disable_ipv6=1' keeps IPv6 running but doesn't assign any addresses to any interfaces. If you don't like editing your bootloader config files, you can also use sysctl to do what it looks like 'ipv6.disable_ipv6=1' does by setting the 'net.ipv6.conf.all.disable_ipv6' sysctl knob to '1'.
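Concretely, the runtime version looks like this (the 'default' knob covers interfaces that appear later):
root@ubuntu-server:~# sysctl -w net.ipv6.conf.all.disable_ipv6=1
root@ubuntu-server:~# sysctl -w net.ipv6.conf.default.disable_ipv6=1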
> You aren't running it during an external transitive failure...
I'll assume you meant "transient". Given that I've already demonstrated that the only relevant traffic that is generated is IPv4 traffic, let's see what happens when we cut off that traffic on the machine we were using earlier, restored to its state prior to the updates.
We start off with empty firewall rules:
root@ubuntu-server:~# iptables-save
root@ubuntu-server:~# ip6tables-save
root@ubuntu-server:~# nft list ruleset
root@ubuntu-server:~#
We prep to permit DNS queries and ICMP and reject all other IPv4 traffic:
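(The exact ACCEPT rules weren't captured in this transcript; rules along these lines do the job, with the final REJECTs matching the ones deleted further down:)
root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -p udp --dport 53 -j ACCEPT
root@ubuntu-server:~# iptables -A INPUT -i enp0s3 -p udp --sport 53 -j ACCEPT
root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -p icmp -j ACCEPT
root@ubuntu-server:~# iptables -A INPUT -i enp0s3 -p icmp -j ACCEPT
root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -j REJECT
root@ubuntu-server:~# iptables -A INPUT -i enp0s3 -j REJECT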
And we do an apt-get update, which fails in less than ten seconds:
root@ubuntu-server:~# apt-get update
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Ign:2 http://us.archive.ubuntu.com/ubuntu questing InRelease
<snip>
Could not connect to security.ubuntu.com:80 (91.189.92.23). - connect (111: Connection refused) Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
<snip>
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/questing-security/InRelease Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@ubuntu-server:~#
In this case, the IPv6 traffic I see is... an unanswered router solicitation, and the multicast querier chatter that I saw before. [0] What happens when we change those REJECTs into DROPs...
root@ubuntu-server:~# iptables -D OUTPUT -o enp0s3 -j REJECT
root@ubuntu-server:~# iptables -D INPUT -i enp0s3 -j REJECT
root@ubuntu-server:~# iptables -A OUTPUT -o enp0s3 -j DROP
root@ubuntu-server:~# iptables -A INPUT -i enp0s3 -j DROP
root@ubuntu-server:~#
...and then re-run 'apt-get update'?
root@ubuntu-server:~# apt-get update
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Ign:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Err:1 http://security.ubuntu.com/ubuntu questing-security InRelease
Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4002:1::103). - connect (101: Network is unreachable) <v6 addrs snipped> Could not connect to security.ubuntu.com:80 (91.189.92.24), connection timed out <long line snipped>
<redundant output snipped>
W: Some index files failed to download. They have been ignored, or old ones used instead.
root@ubuntu-server:~#
Exactly the same thing, except it takes like two minutes to fail, rather than ~ten seconds, and the error for IPv4 hosts is "connection timed out", rather than "Connection refused". Other than the usual RS and multicast querier traffic, absolutely no IPv6 traffic is generated.
However. The output of 'apt-get' sure makes it seem like an IPv6 connection is what's hanging, because the last thing that its "Connecting to..." line prints is the IPv6 address of the host that it's trying to contact... despite the fact that it immediately got a "Network is unreachable" back from the IPv6 stack.
To be certain that my tcpdump filter wasn't excluding IPv6 traffic of a type that I should have accounted for but did not, I re-ran tcpdump with no filter and kicked off another 'apt-get update'. I -again- got exactly zero IPv6 traffic other than unanswered router solicitations and multicast group membership querier chatter.
I'm pretty damn sure that what you were seeing was misleading output from apt-get, rather than IPv6 troubles. Why? When you combine these facts:
* REJECTing all non-DNS IPv4 traffic caused apt-get to fail within ten seconds
* DROPping all non-DNS IPv4 traffic caused apt-get to fail after like two minutes.
* In both cases, no relevant IPv6 traffic was generated.
the conclusion seems pretty clear.
But, did I miss something? If so, please do let me know.
[0] I can't tell you why the last line in the 'apt-get update' output is only IPv6 hosts. But everywhere there were IPv6 hosts, the reported error was "Network is unreachable" and for IPv4 the error was "Connection refused".
This part is exactly the problem I was talking about:
root@ubuntu-server:~# apt-get update
...
Could not connect to security.ubuntu.com:80 (91.189.92.23). - connect (111: Connection refused) Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
<snip>
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/questing-security/InRelease Cannot initiate the connection to security.ubuntu.com:80 (2620:2d:4000:1::102). - connect (101: Network is unreachable) <long line snipped>
W: Some index files failed to download. They have been ignored, or old ones used instead.
Well... in this case the output does show the failure to connect to 91.189.92.23, but that looks like a different kind of message to the "W:" lines, so maybe it doesn't show up on all setups or didn't make it into the logs on disk, or got buried under other output.
If you look at just the W: lines, it mentions a v6 address but the machine doesn't have v6 and the actual problem is the Connection Refused to the v4 address. The output is understandably misleading but ultimately the problem here has nothing to do with v6.
> ...ultimately the problem here has nothing to do with v6.
I agree... more or less. The remainder of this message is a reply to nyrikki, but I'm sticking it under your comment because you might also appreciate how weird this guy's setup looks.
nyrikki: The rest of this message is directed at you:
============================
Actually, what's up with your link-local addresses? They have really odd flags on them.
The only way I can figure that you got into that configuration was to remove the kernel-generated link-local address and add a new one with the arguments 'scope link noprefixroute'. Even if a router on your network advertised a fe80::/64 prefix, that does nothing at all, as hosts are supposed to [0] ignore advertised prefixes that are link-local.
Yeah. After playing around with this for a bit, I can see that your network is either at least as misconfigured as one would be if -say- your DHCP server was giving out leases with an invalid default gateway, or it is very, very specially configured for very special reasons.
Starting with the ubuntu-server host in the "IPv4 traffic is REJECTed" configuration from my last comment, we do this on the host to delete the kernel-supplied link-local address and instruct the OS to create an address in the link-local address space that can be used to reach global addresses.
root@ubuntu-server:~# ip addr del fe80::5054:98ff:fe00:64a9/64 dev enp0s3
root@ubuntu-server:~# ip addr add fe80::5054:98ff:fe00:64aa/64 noprefixroute dev enp0s3
root@ubuntu-server:~#
We then configure our upstream router to either
* Send RAs on the local link without a prefix
or
* Send RAs on the local link with a link-local prefix (so they're ignored by the Ubuntu host)
or we hard-code the address of a next-hop router on our host. One (or more) of these three things sets up the host with a default route. If you do none of them, you don't get a default route, and global traffic goes nowhere.
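(For the hard-coding option, it's a one-liner; fe80::1 here is a stand-in for the router's actual link-local address:)
root@ubuntu-server:~# ip -6 route add default via fe80::1 dev enp0s3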
Then -because either you or something running on the host deleted the kernel-provisioned link-local address, and then explicitly instructed the kernel to create a link-local address that can be used to reach global addresses- the local host starts emitting IPv6 traffic with a link-local source address and a global destination address.
When presented with this sort of traffic, my router immediately sends back an ICMPv6 "destination unreachable, beyond scope", which immediately terminates the connection attempt on the host, so the behavior ends up being exactly the same as when the host didn't have a misconfigured link-local address. But. You claim to be having trouble.
So, there are one or more things that might be going on that explain your trouble.
1) You have a firewall on this host that is dropping important ICMPv6 traffic, causing it to miss the "this destination address is beyond your scope" message from the router. Do. Not. Do. This. ICMP is network-management traffic which tells you important things. Dropping important ICMP traffic is how you get mysterious and annoying failures. (A minimal "always allow ICMPv6" rule is sketched after this list.)
2) Your router is configured to ignore link-local traffic with non-link-local destination addresses, rather than replying that the destination is out of scope. On the one hand, this seems stupid to me, but on the other hand, we got here through a misconfiguration that seems very unlikely to me to happen often, [1] so the router admin might not have thought about it when making "locked down" firewall rules.
3) There's some middlebox on the path to the router that's dropping your traffic because not all that many folks would expect to see link-local source and global destination, and middleboxes are widely known for dropping stuff that's even a little bit abnormal.
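Regarding possibility 1, a minimal "always let ICMPv6 through" rule looks like this (the nft variant assumes an existing 'filter' table and 'input' chain; adjust the names to your ruleset):
root@ubuntu-server:~# ip6tables -I INPUT -p ipv6-icmp -j ACCEPT
root@ubuntu-server:~# nft insert rule ip6 filter input meta l4proto ipv6-icmp accept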
Investigating your misconfigured host (and maybe also its connected network) has been interesting. I'd love to try to figure out if systemd can be misconfigured to produce the host configuration that we're seeing (or if this misconfiguration is 100% bespoke), but I hear a hot burrito calling my name. Maybe I'll get bored and do more investigation later.
Also, you might object to my conclusion with "But this couldn't happen on IPv4! Clearly IPv6 is too complicated!". I would reply with "What would happen if your host couldn't get a lease from a DHCPv4 server, autoconfigured an address in the IPv4 link-local (169.254.0.0/16) address range, and the network's upstream router was configured to silently drop traffic from that subnet? At least the IPv6 link-local address range is prohibited from sending traffic off the local link [2] and fails the transmission attempt immediately."
[0] ...and Ubuntu questing does ignore such prefixes...
[1] ...that is, a link-local address that has been configured to handle global traffic...
[2] ...unless -as we've discovered- you specifically tell the OS otherwise...
> Actually, what's up with your link-local addresses? They have really odd flags on them.
They were probably configured by one of the fancy network config daemons (systemd-networkd, dhcpcd or similar). They like to take over RA processing, and they add IPs with "noprefixroute" so they can add the route themselves separately.
RAs have nothing to do with link-locals, but I bet one or the other of those daemons also takes over configuring link-local addresses and does the same thing there. If you looked in the routing table, there'll be a prefix route for fe80::/64 that was added by the daemon.
This wouldn't affect how DNS replies are sorted though. On machines without non-link-local v6, AAAA records aren't handled by trying them first and then expecting them to quickly fail. They're handled by pushing them to the bottom of the list so that the A records are tried first.
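That sorting is visible in the getent output upthread: glibc's getaddrinfo applies the RFC 6724 destination-selection rules, so on a box with no usable global v6 the A records come back at the top of the list. Re-checking it is a one-liner:
user@ubuntu-server:~$ getent ahosts us.archive.ubuntu.com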
> They were probably configured by one of the fancy network config daemons (systemd-networkd, dhcpcd or similar). They like to take over RA processing, and they add IPs with "noprefixroute" so they can add the route themselves separately.
Makes sense, yeah.
While I don't see a way to do this with dhcpcd, I have no clue what Lovecraftian horrors systemd-networkd generates, so maybe it's the culprit. And whatever is doing this, this behavior is not configured by default on Ubuntu Server Questing. Out of the box, I get regular kernel-assigned link-local addresses.
But I don't understand why you'd want to do this for link-local addresses... not automatically, anyway. It looks like doing this has the disadvantage that it erases the baked-in "This shouldn't be used for global-scope transmissions. Send back 'Network is unreachable' in those cases." rule that you get for free with the kernel-generated address. Sheesh. I wonder if there's some additional logic in a stupid daemon somewhere that manages a firewall rule that restores the "Network is unreachable" ICMPv6 response to outbound global-scope packets that come from the link-local address... just to add more moving parts that can get out-of-sync.
> This wouldn't affect how DNS replies are sorted though.
Yeah.
It's a pity that I don't work with OP. I'd rather like to take a look at this system and the network it's hooked to.
> It looks like doing this has the disadvantage that it erases the baked-in "This shouldn't be used for global-scope transmissions.
I tried with the kernel-generated LL and my kernel does attempt to use a link-local source when connecting to GUA addresses if it has no other address to connect from. And it works:
(...so long as the destination is on the local network. In this case I assigned 2001:db8::1 to the router, but the router will issue an ICMPv6 redirect for other IPs on the network, which is awkward for me to test but should also work.)
I note that you didn't run `ip route add fe80::/64 dev enp0s3` after adding the LL with noprefixroute, which... seems to break surprisingly little? Because the packet gets sent to the router, which does still have a route for fe80::/64 to the same network, so it issues an ICMPv6 redirect and the client ends up doing NDP anyway.
> So, there are one or more things that might be going on that explain your trouble.
Ah, there's secret option #4:
4) This rather weird configuration has been deliberately set up by the sysadmin that manages this system and network and ordinarily works fine, but the "external transitive failure that happened on April 15th" affected both IPv4 and IPv6 traffic (which, duh, happens frequently)... but it was an intermittent failure, so unrelated changes made by OP caused him to come to the wrong conclusions and point the blame cannon at the wrong part of the system.
> All articles are mostly a regurgitation of all the negativity that gets aired here all the time (a lot of it already fixed or debunked) and 0 discussion of utility.
There are multiple sections that talk directly about utility. Here's one of them: [0]
But, sure. I'll bite. Here's the third paragraph of the first part of the essay [1]:
This is *bullshit* about *bullshit machines*, and I mean it. It is neither balanced nor complete: others have covered ecological and intellectual property issues better than I could, and there is no shortage of boosterism online. Instead, I am trying to fill in the negative spaces in the discourse. “AI” is also a fractal territory; there are many places where I flatten complex stories in service of pithy polemic. I am not trying to make nuanced, accurate predictions, but to trace the potential risks and benefits at play.
I'd say that the specific sort of "utility" discussion that you're probably looking for would be classified as "boosterism". [2]
[0] is a throwaway paragraph that handwaves at second-hand accounts of generic things LLMs can do, with no further discussion, apparently because he (surprisingly!) has almost no first-hand experience with them. Then there are 10 pages of negativity with dozens of links to stuff that has been discussed to death here and in media. The "negative spaces" he's filling are already overflowing.
His lack of personal experience with LLMs was the most disappointing aspect, because he does not really know what we're dealing with. He's just going off what he's read / heard. So again, where's the incisive insight?
Now, here's a concrete example of what I mean by utility: a single person being able to rewrite an entire open source project from scratch in a few days just so it could be relicensed. Is that good or bad? I don't know! Is it a stupefying example of what's possible? Yes! Is that "breathless boosterism?" Only if you ignore the infinite nuances involved.
> Eh. Carefully read through and consider [3].
Hadn't come across this one before, but there's not much in there I hadn't seen and even discussed in past comments. As an example, it still mentions the METR study from 2025 without mentioning the very pertinent follow-up from just a couple of months back... which is not very surprising to me: https://news.ycombinator.com/item?id=47145601 ;-)
It does mention (and then gloss over) the real finding of the DORA and related reports, which is pertinent to my original point: LLMs are simply an amplifier of your existing software discipline. Teams with strong software discipline see amazing speedups; those with poor discipline see increased outages.
And, to my original point, who knows what good software discipline looks like? Hint: it's not the capital class.
You missed the part where he is consistently unimpressed by the failure of LLMs to do the task he hands to them, it seems. Go re-read Section 1.5 "Models are Idiots". Make sure to read the footnotes. They're sure to address most of the counterarguments you might make.
> Is that "breathless boosterism?"
How you phrased it? Yes. It ignores the "infinite nuances involved" such as maintainability, infosec soundness of the work product, and the completely untested legality of "license washing", to name a few. Also, you missed the part where I said
Due to their nearly-universally breathless nature, I know that's how I classify the overwhelming majority of such discussions.
> Hadn't come across this one before, but there's not much in there I hadn't seen and even discussed in past comments. ... It does mention (and then gloss over) the real finding of the DORA and related reports...
Yeah, I figured that you would be unable (or unwilling) to understand this one. Here's the summary, straight from the author's keyboard:
* Fred Brooks' No Silver Bullet was correct.
* No Silver Bullet applies to LLMs the way it applied to other things, and empirical evidence on LLM coding impact sure seems to agree.
* You'll get better returns from working on strong software development fundamentals than from forcing all your programmers to use Claude for everything, and that's a repeated message in basically all the major literature.
* If LLMs do turn into a revolutionary world-changing silver bullet giving everyone coding superpowers, you'll be able to just adopt them fully when that happens.
> You missed the part where he is consistently unimpressed by the failure of LLMs to do the task he hands to them...
Not really, those are exactly the things said by people who dabble with LLMs a little and turn to "breathless naysaying" without any effort to really figure out this new technology. I mean, the series literally ends with "maybe I'll try to code with it."
> Yes. It ignores the "infinite nuances involved" such as maintainability, infosec soundness of the work product, the completely untested legality of "license washing" to name a few.
Not really, I did say "Is it good or bad? I don't know!" and literally mentioned the infinite nuances. I did not want this to become a tangent about those nuances (that's what I hoped would be in TFA) but I do know that being able to write or rewrite entire projects single-handedly is tremendous utility.
> Yeah, I figured that you would be unable (or unwilling) to understand this one.
Not really, just that I've already discussed all the points in that piece in past comments with way more studies on "empirical evidence on LLM coding impact" with way more nuance. If you want to follow the threads in the comment I linked, you'll come across some of those comments.
> You'll get better returns from working on strong software development fundamentals than from forcing all your programmers to use Claude for everything, and that's a repeated message in basically all the major literature.
Not really, the repeated message in all the latest reports like DORA and DX and CircleCI (which your link mentions but glosses over) very clearly indicates that using LLMs with strong software development fundamentals (what I called "discipline") is a huge force multiplier. See point 3 of this link as a representative example: https://www.thoughtworks.com/en-us/insights/blog/generative-... For these teams, productivity will literally be proportional to their tokens rather than their devs, because each dev is so highly leveraged.
> If LLMs do turn into a revolutionary world-changing silver bullet giving everyone coding superpowers, you'll be able to just adopt them fully when that happens.
Yes, but at this point it's unlikely to be a silver bullet, and I never claimed it would be. What I am saying is that it is a huge accelerant, but needs steering by skilled operators, engineers who know the discipline but also understand how to work with AI.
And in my experience it takes a surprising amount of time and practice to learn how to leverage AI effectively.
Which aphyr clearly has not done. Which is why this series is such a disappointment.
> Not really, those are exactly the things said by people who dabble with LLMs a little...
From the footnote in section 1.5:
The examples I give in this essay are mainly from major commercial models (e.g. ChatGPT GPT-5.4, Gemini 3.1 Pro, or Claude Opus 4.6) in the last three months; several are from late March. Several of them come from experienced software engineers who use LLMs professionally in their work. Modern ML models are astonishingly capable, and they are also blithering idiots. This should not be even slightly controversial.
I wonder just how Scottish the Scotsman has to be before you'll let him order a drink.
> And in my experience it takes a surprising amount of time and practice to learn how to leverage AI effectively.
Let's ignore -for a minute- the fact that people who actually use these things as part of their dayjobs were consulted, which moots this complaint.
Every six-ish months we hear "Wow. All the past commentary on LLMs is completely invalid. These new models aren't just a step change — they're a whole new way of working.".
If we consider only that datapoint, it's pretty obvious that you're not missing out on much by choosing to just work on skills that are universally applicable and "evergreen". But when you add to that the fact that every six-ish months we also hear "Wow. These new revs of the LLM products are just as stupid and nondeterministic as the old ones. They also still make the same classes of stupid mistakes, are pretty much as dangerously unreliable as they always have been [0], and -just like previous versions- have 'capability rot' that cannot be anticipated, but might be caused by inability to handle current demand, deliberate shifting of backend resources to serve newer, more-hyped LLM products, or even errors in the vibecoded vendor-supplied tooling that interfaces with the backend.", the decision to ignore the FOMO and hype becomes pretty obviously correct.
> I mean, the series literally ends with "maybe I'll try to code with it."
Well, this is how the series ends:
The security consequences are minimal, it’s a constrained use case that I can verify by hand, and I wouldn’t be pushing tech debt on anyone else. I still write plenty of code, and I could stop any time. What would be the harm?
Right?
...Right?
There's a certain subtlety to this that you missed. [2]
If we ignore that subtlety, I expect that your retort to a report that goes "Wow. They suck just as hard at coding for me as they do for everything else I've attempted to use them for. I'm not surprised because I've talked to professional programmers who regularly use these things in their dayjobs and I'm getting results that are similar to what they've been reporting to me." will be "Bro. You didn't spend enough time learning how to use it, bro!".
By way of analogy, I'll also mention -somewhat crassly- that one doesn't have to have an enormous bosom to understand that all that weight can cause substantial back pain. One can rely on both one's informed understanding of the fundamentals behind the system under consideration, as well as first-hand testimony from enormous-bosom-equipped people to arrive at that conclusion.
[1] is so bad, like the worst imaginable thing you can think of... like, if this is the possible fuckup, all bets are off as to what other fuckups you might need to deal with. I got hit with this problem several times and I was like "well, this is just impossible..." Absolutely mind-blown.
> Several of them come from experienced software engineers who use LLMs professionally in their work.
So, not from personal experience. And we don't know which examples came from which users or what they used them for. We get enough hearsay on HN and, again, there's nothing in this series that has not been discussed here. There is, however, a ton of other hearsay missing in the series, which is the utility so many people are finding (in many cases, along with actual data or open source projects).
> Every six-ish months we hear ...
I've been yelling about LLMs since early 2024 [0]! They needed much more "holding it right" back then. Now it's way easier, but the massive potential was clear way back then.
> They also still make the same classes of stupid mistakes, are pretty much as dangerously unreliable as they always have been.
Yes, and this is where a lot of the skill in managing them comes into play. Hint: people are dangerously unreliable too.
> One can rely on both one's informed understanding of the fundamentals behind the system under consideration, as well as first-hand testimony from enormous-bosom-equipped people to arrive at that conclusion.
Of course, but when faced with many contradictory opinions, I prefer data. And the preponderance of data I've looked at and discussed [0] paints a very different picture.
> There's a certain subtlety to this that you missed.
From TFA:
> I want to use them. I probably will at some point.
My complaint is that he is speaking entirely from second-hand information and provides no new insight of his own. That he has trepidations about actually getting his hands dirty with them does not change that, and it only makes it worse that he spent 10 pages going on about them! He's a technologist, not a journalist! So, I'm genuinely curious: what subtlety did I miss?
No, the rest of the quote you snipped that from talks about how some of the reports are from personal experience and some are second-hand reports from trusted, knowledgeable people.
> No, the rest of the quote you snipped that from talks about how some of the reports are from personal experience...
And those were "dabbling" as I mentioned above, which is why there is no insight.
Concrete example: the most detailed of his personal experiences reported is about generating and modifying 3D renderings of a bathroom. There is barely enough detail to comment on his approach, but this is an active area of research that people are publishing papers on, eg: https://arxiv.org/abs/2512.17459 and https://arxiv.org/abs/2511.17048 -- these are non-trivial and often involve custom models, so that Gemini got even partial results is making the opposite point of what he intended.
But if he expected good results in a few hours, that's just dabbling. It's almost as if he expected a silver bullet...
Some folks are suggesting ferrite beads, others are suggesting shorter cables.
If one has a medium-sized chunk of money to burn, one could try fiber optic cabling. I've personally had -AFAICT- perfect results from Monoprice's "SlimRun AV" fiber DisplayPort cables, and Nippon Labs' fiber HDMI cables. [0] I expect that Monoprice's fiber HDMI cables and Nippon Labs' fiber DisplayPort cables are also fine, but I've never used those, so I cannot comment.
For folks concerned about "dreadfully fragile" fiber optic cables, I do know that the Monoprice cables are durable... a vigorous misadventure caused me to torque the hell out of the monitor-side connector. The connector bent, forcing the case to split a bit at the seam. After some counter-bending of the connector and pushing its case back mostly closed, the cable works fine. Given the outward similarity in build quality, I expect that the Nippon Labs cable I have is at least as durable.
[0] Both families of cables drive my "4k" HDR monitor at 60Hz without lossy compression.
> But you don't fire a table saw because it doesn't know when to stop cutting, right?
If I purchased a table saw and that table saw irregularly and unpredictably jumped past its safeties -as we've plenty of evidence that LLMs [0] do-, then I would [1] immediately stop using that saw, return it for a refund, alert the store that they're selling wildly unsafe equipment, and the relevant regulators that a manufacturer is producing and selling wildly unsafe equipment.
[0] ...whether "agentic" or not...
[1] ...after discovering that yes, this is not a defective unit, but this model of saw working as designed...
> But that's the thing: the table saw has safeties. Someone put them there.
You noticed that I mentioned that this hypothetical table saw has poorly-designed, entirely inadequate safeties? Things like Opus treating the data it presents to the user as commands that it should execute [0] are definitely [1] a sign of solid, well-designed safety mechanisms.
You might choose to retort "Well, that's because the user isn't running the tool in the mode that makes it wait for confirmation before doing anything of consequence!". In reply, I would point in the general direction of the half-squillion studies indicating that a system whose safety requires an operator to remain vigilant when presented with a large volume of irregularly-presented decision points (nearly all of which can be safely answered with a "Yes, do it.") does not make for a safe system. [2] It -in fact- makes for a system that's designed [3] to be unsafe.
You might also choose to retort "That's never happened to me, or anyone that I know about.". Intermittent failures of built-in safeties that happen under unpredictable circumstances are far, far worse than predictable failures that happen under known ones. I hope you understand why.
[2] I would also -somewhat wryly- note that "An AI Agent that does all of your scutwork, but whose every decision you have to carefully scrutinize, because it will irregularly plan to do something irreversibly destructive to something you care about." is not at all the picture that "AI" boosters paint of these tools.
Just to drive home the "These things have poorly-designed, entirely inadequate safeties", here [0] is a report from three weeks ago of the then-latest version of Claude Code being commanded to enter into the "Don't modify anything" mode, reporting to the user that it was in the "Don't modify anything" mode, and then proceeding to modify things as if it was not actually in the "Don't modify anything" mode.
I'm sure if I dug around, I would find hundreds of reports of these tools [1] jumping over their safeties to do things that are unexpected, and not-infrequently hazardous. I expect that such reports will continue, because "building robust, effective, and reliable safeties" has very, very clearly not been a significant priority for the major LLM companies. But, I've more than proven my point, so I'll leave the small pile of evidence at this.
I'm not sure that HN vote count is a good indicator of interest? HN alerted me to the existence of the intro post. I read the intro, noticed that it was one in an ongoing series, and have been checking your blog for new installments every few days.
I suspect that if you'd not broken up the post into a series of smaller ones, the sorts of folks who are unwilling to read the whole thing as you post it section by section would have fed the entire post to an LLM to "summarize".