
> fewer automatic restarts

No automatic restarts! I understand that in our security-patching world, patching and restarting automatically is the default, fine, but there absolutely should be a dead-simple way of disabling auto restarts in settings. I'm fine if it pesters me to restart or whatever, perhaps with growing alarm the longer I wait, but it should always be optional in the end. There are just no words for how bad it can be for mission-critical workloads when your computer restarts without your consent. Please make disabling this simple.


I disagree, at least on end-user devices as opposed to servers.

If you make it possible to defer updates indefinitely, users will. Guaranteed. Doesn't matter how urgent or critical the update is, how bad the bug or vulnerability it patches is, how disastrous the consequences may be: they'll never, ever voluntarily apply them.

If you're running a server, and willing to accept the risk of deferral because 1) you're in a better position to assess the risk and apply compensating controls than a regular user is, and 2) you're OK accepting the personal risk of having to explain to your boss why you kept deferring the urgent patch until after it blew up in your face, then yes, you should have a control to delay or disable it.

But end users? No. I used to believe otherwise, but now I've seen far, far too many cases where people train themselves to click "Delay 1 day" without even consciously seeing the dialog.


The real sin is combining security updates with feature updates. An argument can be made for enforced security updates (1). There is no good argument for forcing feature updates.

Most security-only updates have a low risk of interfering with the user or causing instability. Most feature updates have a high risk of doing so.

(1) Although I think there should be some way of disabling even those, even if that way is hard to find and/or cumbersome, to keep regular users away.


The problem is that there are dozens of security updates every month, so even if you can skip feature updates, you'll have to reboot every Patch Tuesday anyway.

Even the Server Core edition, which has a much smaller "surface area", needs reboots almost every month.


To be fair, they just need to bring hotpatching out of Intune/B2B licenses.

Alright, I can buy that. Although from a dev POV I can also appreciate the not-fun of testing a combinatorial explosion of security updates vs features.

Basically, if I trust you (the dev/software maker/whatever) to not change UIs and add in bullshit, I'm okay having auto updates on. Unfortunately, I can't trust much now.

> I disagree, at least on end-user devices as opposed to servers.

And who determines what is an "end-user device" vs a "server"?

> If you're running a server, and willing to accept the risk of deferral because 1) you're in a better position to assess the risk and apply compensating controls than a regular user is, and 2) you're OK accepting the personal risk of having to explain to your boss why you kept deferring the urgent patch until after it blew up in your face, then yes, you should have a control to delay or disable it.

So it seems you do want choice after all. Who do you think should make this choice on risk vs. workload/criticality?

I would say you actually agree with me mostly based on your comments, but you have not clarified _who_ makes these choices. I'm saying as the consumer, _I_ should get to make that choice. In the enterprise, my admin will make that choice via group policy, but I do not want Microsoft determining what I'm allowed to do with my OS. They are of course free to keep doing that, but then I also have the right to keep not buying their products.


> And who determines what is an "end-user device" vs a "server"?

Someone who decides to buy a Windows 20whatever Server license for the related hardware.


No thanks. I should be able to use any copy of Windows for whatever use case I want. MS is free to disagree, and I am therefore free to keep not buying their products.

If it was kernel-level only, maybe. But why does Windows seem to need a restart after every little update?

I'm the wrong person to ask about that. I've gone ages between Debian reboots while applying regular updates, and I'm not sure what it is about the Windows model that requires a reboot after patching a few things.

Fedora also wants to reboot to install (dnf) updates offline; as I understand it, that's to prevent potential instability from running processes getting confused when their files get swapped out under their feet.

It's also good since you can't swap out the kernel without rebooting.

I assume Microsoft took the same approach, just replace everything offline then reboot into a fully up-to-date system without any chance of things in RAM still being outdated.


> It's also good since you can't swap out the kernel without rebooting.

Yeah, you can: Ksplice (ksplice.com). We got bought by Oracle, so it's in their ecosystem now, but the technology exists.


These automatic restarts are just the outcome of a bigger problem: how Windows Update was changed starting in W10. The removal of selective update installation, and indirectly the lack of QA, are the main sources of problems here.

Windows isn't macOS, which runs on a set of verified configurations - it runs on a variety of hardware with vendor drivers and other software. That combination can cause issues, but so can the lack of testing - we know that Microsoft in its wisdom dismantled QA and replaced it with this prosthetic of an enthusiast community that constantly suggests "sfc /scannow". Now they've put Charlie Bell in an "engineering quality" role, but I have no hope that anything will change for the better for users.

And users should again be allowed to avoid updates which are proven to cause issues - that's the fundamental need here. Deferring a scheduled action isn't enough.

Considering Windows' behavior, and all the telemetry that was smuggled into W7 in poorly described updates, I can see how appealing it is to Microsoft to use this big update-package format to add features and components that experienced users would surely avoid. Since W10, and maybe even partially during W7, they've been fighting their users over control of the operating system.

I'm on CachyOS now, but I still get calls from friends who struggle with all this MS circus. Recently, a friend lost data on a BitLocker-encrypted machine because she didn't have the backup keys. She's the kind of user who doesn't know what happens on the screen besides the text processor and web browser - everything is a nuisance to be quickly dealt with by the "next, next, done" tactic. Should she be more patient and read what's displayed on the screen? Sure, but I told her that years ago.

Anyway, CachyOS: arch-update renders a popup in KDE recommending a restart; sometimes the update process requires restarting services, and users can select the ones needed or restart everything listed at once. There's snapshot support for updates: https://wiki.cachyos.org/configuration/btrfs_snapshots/ and I'm pretty sure other distributions have this as an option as well.


HN is the best tech site on the web for a reason. It has a generally intelligent audience, and while there are certainly inappropriate comments, compared to what you find on social media or even other sites, it is unique and far more respectful. Due to this, you can often have better and more meaningful discussions.

Sadly, probably not. I fear new languages will struggle from here on out. As a language guy, very few things in this new AI world make me more sad than this.


I don't get the feeling this will happen. LLMs are extremely good at learning new languages because that's basically their whole point. If your new language has a standard library, and the LLM can see its source code, I am sure you can give it to any latest-generation AI and it will happily spit out perfectly correct new code in it. If you give it access to the reference docs, then it can quite reliably ensure it never generates syntactically incorrect code. As long as your error messages are enough to understand a problem's root cause, the LLM will iterate and explore until it gets it right.

Not sure if this is a good example, but I used ChatGPT (not even Codex) to fix some Common Lisp code for me, and it absolutely nailed it. Sure, Common Lisp has been around for a long time, but there's not so much Common Lisp code around for LLMs to train on... but OTOH it has a hyperspec which defines the language and much of the standard libraries so I believe the LLM can produce perfect Common Lisp based on mostly that.


What hardware do you have it running on? Do you feel you could replace the frontier models with it for everyday coding? Would/will you?


Around 20ish tokens a second with 6-bit quant at very long context lengths on my AMD AI Max 395+

I’m trying to use local models whenever possible. Still need to lean on the frontier models sometimes.


I'm getting ~30 tok/s on the A3B model with my 3070 Ti and 32k context.

> Do you feel you could replace the frontier models with it for everyday coding? Would/will you?

Probably not yet, but it's really good at composing shell commands. For scripting or one-liner generation, the A3B is really good. The web development skills are markedly better than Qwen's prior models in this parameter range, too.


That seems oddly low, a fair amount slower than what I get on my M4 (I believe it was ~45 tok/s?).

What quant are you using? How much ram does it have?


60 to 70 on a 5080, but only tinkering for now. The smaller models seem exceptionally good for what they are, and some can even do OCR reliably.


What quantization are you running on the 5080? I'm waiting to receive mine.


Thinking about getting a new MBP M5 Max 128GB (assuming they are released next week). I know "future proofing" at this stage is near impossible, but for writing Rust code locally (likely using Qwen 3.5 for now on MLX), the AIs have convinced me this is probably my best choice for the immediate term with some level of longevity, while retaining portability (not strictly needed, but nice to have). Alternatively I was considering RTX options or a Mac Studio, but was leaning towards Apple for the unified memory. What does HN think?


I've been mulling the same, but decided against (for now)

Using Claude Code Max 20 so ROI would be maybe 2+ years.

CC gives me unlimited coding in 4-6 windows in parallel. Unsure if any model would beat (or even match) that, both in terms of quality and speed.

I wouldn't gamble on that now. With a subscription, I can change any time. With the machine, you risk that some great insane model comes out but it needs 138GB, and then you'll pay for both.


We are on the same wavelength. I'm thinking maybe a pass for now.


> What does HN think?

Thermals. Your workloads will be throttled hard once it inevitably runs hot. See comments elsewhere in thread about why LLMs on laptops like MBP is underwhelming. The same chips in even a studio form factor would perform much better.


Strix Halo machines are a good option too if you are at all price sensitive. AMD (with all the downsides of that for AI work) but people are getting decent performance from them.

Also Nvidia Spark.


I have a Mac Studio with 128GB and a M4 Max and I'd recommend it. The power usage is also pretty good, but you may not care if you live somewhere where energy is cheap.


Have you used this for Rust coding by chance? I'm curious how it compares to Opus 4.6. I realize it isn't going to think to the same level, but I'm curious how the code quality is for a more straightforward task.


Quite good. I ported my codebase from Go to Rust in a fraction of the time it would have taken me to rewrite it.


> The computer science answer: a compiler is deterministic as a function of its full input state. Engineering answer: most real builds do not control the full input state, so outputs drift.

To me that implies the input isn't deterministic, not the compiler itself


You're not wrong but I think the point is to differentiate between the computer science "academic" answer and the engineering "pragmatic" answer. The former is concerned about correctly describing all possible behavior of the compiler, whereas the latter is concerned about what the actual experience is when using the compiler in practice.

You might argue that this is redefining the question in a way that changes the answer, but I'd argue that's also an academic objection; pragmatically, the important thing isn't the exact language but the intent behind the question, and for an engineer being asked this question, it's a lot more likely that the person asking has context for asking that cares about more than just the literal phrasing of "are compilers deterministic?"


> ... the important thing isn't the exact language but the intent behind the question ...

If we're not going to assume the input state is known then we definitely can't say what the intent behind the question is - for many engineering applications the compiler is deterministic. Debian has the whole reproducible builds thing going which has been a triumph of pragmatic engineering on a remarkable scale. And suggests that, pragmatically, compilers may be deterministic.


It matters a lot. For instance, many compilers will put time stamps in their output streams. This can mess up the downstream if your goal is a bit-by-bit identical piece of output across multiple environments.

And that's just one really low-hanging-fruit type of example; there are many more, for instance selecting a different optimization path when memory pressure is high, and so on.
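The timestamp point is easy to demonstrate with a toy sketch (Python, purely illustrative: `build` here stands in for any compiler that stamps its output by default, while `SOURCE_DATE_EPOCH` is the real convention the reproducible-builds project uses to pin such stamps):

```python
import hashlib
import os
import time

def build(source: str) -> bytes:
    """Toy 'compiler' that, like many real ones by default, embeds a
    build timestamp in its output."""
    # SOURCE_DATE_EPOCH is the reproducible-builds convention for
    # pinning such timestamps; honor it if set.
    stamp = os.environ.get("SOURCE_DATE_EPOCH", str(time.time_ns()))
    return f"// built at {stamp}\n{source}".encode()

src = "int main() { return 0; }"

os.environ.pop("SOURCE_DATE_EPOCH", None)
a, b = build(src), build(src)
# Two runs of the "same" build almost always differ bit-for-bit:
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())

# Pin the timestamp and determinism comes back:
os.environ["SOURCE_DATE_EPOCH"] = "315532800"
a, b = build(src), build(src)
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # True
```

Nothing about the "compiler" itself is random here; the drift comes entirely from an input (the clock) that the build didn't control.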


Like throwing dice: deterministic in theory, seemingly random in practice except under strictly controlled conditions.


Also, most real build systems build from a clean directory and checkout, so outside of a dev's machine they should be 100% reproducible, because the inputs should be reproducible. If builds aren't 100% reproducible, that's an issue!
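A clean checkout only guarantees this if the build step itself is a pure function of those checked-out inputs. A minimal sketch of what that means (hypothetical toy code, not any real build system):

```python
import hashlib

def build(inputs: dict[str, bytes]) -> str:
    """Toy build step that is a pure function of its full input state:
    only the (sorted) file names and contents feed the output."""
    h = hashlib.sha256()
    for name in sorted(inputs):  # fixed ordering: directory-walk order can't leak in
        h.update(name.encode())
        h.update(inputs[name])
    return h.hexdigest()

tree = {"main.c": b"int main(){return 0;}", "util.c": b"// helpers"}

# Same inputs, in any order, give a bit-identical "artifact":
print(build(tree) == build(dict(reversed(list(tree.items())))))  # True
# Change any input and the artifact changes:
print(build({**tree, "util.c": b"// edited"}) == build(tree))  # False
```

Real compilers fail this test whenever something outside the checkout (clock, locale, absolute paths, hash-map iteration order) leaks into the output.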


> To me that implies the input isn't deterministic, not the compiler itself

or the system upon which the compiler is built (as well as the compiler itself) has made some practical trade offs.

the source file contents are usually deterministic. the order in which they're read and combined and build-time metadata injections often are not (and can be quite difficult to make so).


I mean, if you turn off incremental compilation and build in a container (or some other "clean room" environment), it should turn out the same each time. Local builds are very non-deterministic, but CI/CD shouldn't be.

Either way it's a nitpick though, a compiler hypothetically can be deterministic, an LLM just isn't? I don't think that's even a criticism of LLMs, it's just that comparing the output of a compiler to the output of an LLM is a bad analogy.


> I mean, if you turn off incremental compilation and build in a container (or some other "clean room" environment), it should turn out the same each time. Local builds are very non-deterministic, but CI/CD shouldn't be.

lol, should. i believe you have to control the clock as well and even then non-determinism can still be introduced by scheduler noise. maybe it's better now, but it used to be very painful.

> Either way it's a nitpick though, a compiler hypothetically can be deterministic, an LLM just isn't? I don't think that's even a criticism of LLMs, it's just that comparing the output of a compiler to the output of an LLM is a bad analogy.

llm inference is literally sampling a distribution. the core distinction is real though, llms are stochastic general computation where traditional programming is deterministic in spirit. llm inference can hypothetically be deterministic as well if you use a fixed seed, although, like non-trivial software builds on modern operating systems, squeezing out all the entropy is a non-trivial affair. (some research labs are focused on just that, deterministic llm inference.)
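The fixed-seed point above can be sketched in a few lines (a toy sampler, not a real inference stack; real GPU inference also has to pin floating-point reduction order, batching, and other entropy sources to be truly deterministic):

```python
import random

def sample_tokens(seed: int, n: int = 5) -> list[str]:
    """Toy 'decoder': repeatedly samples tokens from a fixed
    next-token distribution, the way an LLM samples its output."""
    vocab = ["the", "cat", "sat", "on", "mat"]
    weights = [0.4, 0.2, 0.2, 0.1, 0.1]
    rng = random.Random(seed)  # pinned seed -> reproducible sampling
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

# Same seed, same "stochastic" output, every run:
print(sample_tokens(42) == sample_tokens(42))  # True
# Different seeds generally diverge:
print(sample_tokens(42), "vs", sample_tokens(7))
```

So the sampling is stochastic in spirit but deterministic in mechanism, which is exactly the gap between the academic and engineering answers in the compiler case.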


That sounds great, but if Opus generates 20% better code think of the ramifications of that on a real world project. Already $100/month gets you a programmer (or maybe even 2 or 3) that can do your work for you. Insanity. Do I even care if there is something 80% as good for 50% the cost? My answer: no. That said, if it is every bit as good, and their benchmarks suggest it is (but proof will be in testing it out), then sure, a 50% cost reduction sounds really nice.


If I was building an application using massive amounts of calls to the api, I’d probably go with Gemini. For a Copilot, definitely Opus.


> but nothing comes to the top of my mind for other languages

"cargo clippy --fix" for Rust, essentially integrated with its linter. It doesn't fix all lints, however.


Not sure how the UI engine itself compares, but to me it is all about the available components (as a total non-designer, although AI helps with that now). The only choice I have at the moment that would meet my needs is gpui, as gpui-component now exists.

