Hacker News | kaliszad's comments

Many people assume that companies need or want global enterprise-level management of infrastructure or 24/7 support. That's simply not the case. Many small and mid-sized companies just need their applications to run. There is no CTO on the board, and nobody else really cares where the stuff runs as long as it fits a certain budget, is available enough not to cause major disruptions and is responsive enough not to cause complaints. Some companies may care about a certain level of compliance/ security, or about whether their admins/ DevOps people seem to be in agony most of the time, but there aren't many of those. That's also a reason why the EU introduced directives such as NIS2, DORA, CRA, CER, even the now 10-year-old GDPR, and more.

Most companies I have seen have never updated the BIOS of their servers, nor the firmware on their switches. Some of them have production applications on Windows XP or older, and you can still see VMware ESXi < 6.5 in the wild. The same goes for all kinds of other systems, including Oracle Linux 5.5 with some ancient Oracle DB like 10g or something: that was the case about 5 years ago, and I don't think the company has migrated away completely to this day.

Any sufficiently old company will accrete systems and approaches of various vintages over time, only very slowly ripping out some of them. Usually what happens is that parts of old systems or old workarounds live on for decades after they have supposedly been decommissioned. I had a colleague who was still using CRT monitors in 2020, with computers of similar vintage, probably Pentium III or early Pentium 4, because he had everything set up there and it just worked for what he was doing. I don't admire it, yet that stuff works, and I do respect that people don't want to replace expensive systems just because they are out of support, when they actually work and have people taking care of them.


Totally, but then you probably don’t want SREs. If you’re okay with 99% availability (~7 hours of downtime a month assuming 24x7 goal), you can get by with much cheaper staffing and won’t have to deal with the turnover from SREs who get bored.

I hope they expand it in such a way that anybody could uncover the ships of the Russian "shadow fleet" and put more pressure on politicians and officials. Suspicious draught or erratic position changes or incorrect data upon leaving/ entering a port would be key to detecting possible circumvention of sanctions.

Many are just not that diligent with proper dental hygiene. Interdental brushes/ superfloss are used only occasionally, if at all, and not every day. There are people who brush for 2 minutes and call it a day, because they heard it's enough in some advert or because the electric toothbrush stops. Well, it turns out you need a lot longer than that, and a reasonable technique, if you want to keep your teeth clean and healthy.

The acidic, sugary drinks and food don't help at all. Drinking mineral water (no sugar, no extra acids), or rinsing the mouth with regular water or a low-concentration sodium bicarbonate/ baking soda solution to balance the pH after eating/ drinking something else, would probably help. Of course, if not dissolved completely, the baking soda could act as an abrasive, which wouldn't be that great for tooth health, so probably just use regular water.

You don't need to invest much money to keep good oral health, and it is certainly much cheaper than fixing the problems that will arise if you don't, if they can be fixed at all. It does, however, cost effort/ time.


You certainly don’t need to brush for longer than 2 minutes. Overbrushing is a concern as well.


Correct. The worst thing you can do with a tooth brush is to drink something acidic or extremely sugary (like a sports gel) and then immediately brush your teeth aggressively.

There are three main tooth diseases: Cavities caused by bacteria, loose teeth through gum disease and (permanent) erosion of enamel.

The last one is easy but annoying to prevent: Only brush your teeth 30 minutes after you got rid of food debris/sugar/acid in your mouth using your tongue and by drinking water.


If you brush for only 2 minutes, I can guarantee you have not cleaned your teeth properly, assuming you have 28-32 of them and use interdental brushes (TePe, Curaprox)/ superfloss in addition to a regular toothbrush.

If you are worried about brushing off your enamel, you should get proper toothbrushes, not use abrasive toothpastes, and not brush immediately after drinking something acidic, as another commenter has written. Some people have soft enamel as an effect of some medication/ sickness/ malnutrition during childhood, but that is relatively rare. If in doubt, consult a dental hygienist or a dentist.

Source: My wife is an established dental hygienist keeping up with the newest approaches, going to advanced courses/ master classes, visiting conferences.


Consult a passionate dental hygienist or get a second opinion from a different dentist. Either you are doing yourself no favors by biting down too hard and should probably get some kind of a mouthguard like boxers have, to prevent overloading your teeth.

Or you are one of the many people who have been told how great their teeth are yet who have periodontitis/ gum inflammation. (Source: My wife is an established dental hygienist keeping up with the newest approaches, going to advanced courses, visiting conferences.) If your gums are reddish instead of light pink, that's a good indication. If you bleed on regular use of interdental brushes/ flossing, that's another hint something might be off.


Many people seem to be running OpenCode and similar tools on their laptops with basically no privilege separation, sandboxing or fine-grained permission settings in the tool itself. This tendency is also reflected in how many plugins are designed, where the default assumption is that the tool runs unrestricted on the computer next to some kind of IDE: many authentication callbacks go to some port on localhost, and the fallback is to parse the right parameter out of the callback URL. Also, for some reason these tools tend to be relative resource hogs even when waiting for a reply from a remote provider. I mean, I am glad they exist, but it all seems very rough around the edges compared to how much attention these tools get nowadays.

Please run at least a dev-container or a VM for the tools. You can use RDP/ VNC/ Spice, or even just the terminal with tmux, to work within the confines of the container/ machine. You can mirror some stuff into the container/ machine with SSHFS, Samba/ NFS or 9p. You can use all the traditional tools, filesystems and such for reliable snapshots. Push the results separately, or don't give the agent direct unrestricted git access.

It's not that hard. If you are super lazy, you can also pay $5/month or so for a VPS and run the workload there.
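A minimal sketch of that workflow, assuming a VM reachable over SSH as "agent-vm" (host name, user and paths are all hypothetical). The commands are built as strings and echoed as a dry run, so they can be reviewed and adapted before actually running them:

```shell
#!/bin/sh
# Sketch of the VM workflow described above. "agent-vm" and the paths are
# hypothetical; everything is echoed, not executed.

VM=agent-vm

# Run the agent inside tmux on the VM so the session survives disconnects
# (-A attaches to the session if it already exists):
RUN_AGENT="ssh -t $VM tmux new-session -A -s agent"

# Mirror the VM's workspace onto the host over SSHFS for review
# (Samba/ NFS/ 9p work similarly):
MIRROR="sshfs $VM:/home/agent/project /mnt/agent-project"

# Push the reviewed results yourself instead of giving the agent git access:
PUSH="git -C /mnt/agent-project push origin main"

echo "$RUN_AGENT"
echo "$MIRROR"
echo "$PUSH"
```

The point of the split is that the agent only ever sees the inside of the VM, while review and pushing happen on the host.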


Hi.

> Please run at least a dev-container or a VM for the tools.

I would like to know how to do this. Could you share your favorite how-to?


I have a pretty non-standard setup built from very standard tools. I didn't follow any specific guide. I use ZFS as the filesystem, with a ZVOL or a dataset + raw image for each VM, and libvirt/ KVM on top. This can be done using e.g. Debian GNU/ Linux in a fairly straightforward way. You can probably do something like it in WSL2 on Windows (although that doesn't really sandbox stuff much), with Docker/ Podman, or with VirtualBox.
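A sketch of that layout, with hypothetical pool, VM names and sizes. The commands are echoed as a dry run; on a real ZFS host you would run them as root:

```shell
#!/bin/sh
# Sketch of the ZFS + libvirt/ KVM layout described above. Pool name ("tank"),
# VM name and sizes are hypothetical; commands are echoed, not executed.

VM=agent-vm

# One ZVOL per VM, plus a snapshot taken while the disk is still clean:
CREATE="zfs create -V 40G tank/vms/$VM"
SNAP="zfs snapshot tank/vms/$VM@clean"

# Define the VM on top of the ZVOL with libvirt:
DEFINE="virt-install --name $VM --memory 4096 --vcpus 2 --disk path=/dev/zvol/tank/vms/$VM --import --os-variant debian12"

# Roll the disk back after a bad agent run:
ROLLBACK="zfs rollback tank/vms/$VM@clean"

echo "$CREATE"; echo "$SNAP"; echo "$DEFINE"; echo "$ROLLBACK"
```

The per-VM ZVOL is what makes the reliable snapshots mentioned above cheap: one snapshot before letting the agent loose, one rollback if it makes a mess.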

If you want a dedicated virtualization host, Proxmox seems to be pretty easy to install even for relative newcomers, and it has a GUI that's decent for new people and seasoned admins alike.

For the remote connection I just use SSH and tmux, so I can comfortably detach and reattach without killing the tool that's running inside the terminal on the remote machine.

I hope this helps even though I didn't provide a step-by-step guide.


If you are using VSCode against WSL2 or Linux and you have Docker installed, managing dev containers is very straightforward. What I usually do is execute "Connect to Host" or "Connect to WSL", then create the project directory and ask VSCode to "Add Dev Container Configuration Files". Once the configuration file is created, VSCode itself will ask you whether you want to start working inside the container. I'm impressed with the user experience of this feature, to be honest.

Working with dev containers from the CLI wasn't very difficult [0], but I must confess that I have only tested it once.

[0] https://containers.dev/supporting
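For reference, a minimal .devcontainer/devcontainer.json along those lines might look like this (the image and the hardening flags are just an example, not a recommendation):

```json
{
  "name": "agent-sandbox",
  "image": "mcr.microsoft.com/devcontainers/base:debian",
  "remoteUser": "vscode",
  "runArgs": ["--cap-drop=ALL", "--security-opt", "no-new-privileges"]
}
```

The same file drives the CLI flow: `devcontainer up --workspace-folder .` starts the container and `devcontainer exec --workspace-folder . bash` gives you a shell inside it.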


>> Please run at least a dev-container or a VM for the tools.

> I would like to know how to do this. Could you share your favorite how-to?

See: https://www.docker.com/get-started/

EDIT:

Perhaps you are more interested in various sandboxing options. If so, the following may be of interest:

https://news.ycombinator.com/item?id=46595393


Note that while containers can be leveraged to run processes at lower privilege levels, they are not secure by default, and actually run at elevated privileges compared to normal processes.

Make sure the agent cannot launch containers and that you are switching users and dropping privileges.

On a Mac you are running a VM, which helps, but on Linux it is the user that is responsible for constraints, and by default they are trivial to bypass.

Containers have been fairly successful for security because the most popular images have been leveraging traditional co-hosting methods, like nginx dropping root etc…

By themselves, without actively doing the same, they are not a security feature.

While there are some protective defaults, Docker places the responsibility for dropping privileges on the user and the image. Just launching a container is security through obscurity.

It can be a powerful tool to improve security posture, but don’t expect it by default.
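To make the difference concrete, here is a sketch of a default `docker run` versus one that actually drops privileges ("agent-image" is hypothetical; the commands are echoed as a dry run, not executed):

```shell
#!/bin/sh
# Default vs. hardened `docker run`, per the caveats above.
# "agent-image" is a hypothetical image name; commands are echoed only.

# Default: root inside the container, default capability set:
DEFAULT_RUN="docker run --rm agent-image"

# Hardened: unprivileged user, no capabilities, no privilege escalation,
# read-only root filesystem:
HARDENED_RUN="docker run --rm --user 1000:1000 --cap-drop=ALL --security-opt no-new-privileges --read-only agent-image"

echo "$DEFAULT_RUN"
echo "$HARDENED_RUN"
```

None of the flags in the second command are on by default, which is exactly the point being made above.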


Hi. You are clearly an LLM user. Have you considered asking an LLM to explain how to do this? If not, why not?


would an LLM have a favourite tool? I'm sure it'll answer, but would it be from personal experience?


I checked with Gemini 3 Fast and it provided instructions on how to set up a Dev Container or VM. It recommended a Dev Container and gave step-by-step instructions. It also mentioned VMs like VirtualBox and VMware and recommended best practices.

This is exactly what I would have expected from an expert. Is this not what you are getting?

My broader question is: if someone is asking for instructions for setting up a local agent system, wouldn't it be fair to assume that they should try using an LLM to get instructions? Can't we assume that they are already bought in to the viewpoint that LLMs are useful?


the llm will comment on the average case. when we ask a person for a favourite tool, we expect anecdotes about their own experience - I liked x, but when I tried to do y, it gave me z issues because y is an unusual requirement.

when the question is asked on an open forum, we expect to get n such answers and sometimes we'll recognise our own needs in one or two of them that wouldn't be covered by the median case.

does that make sense?


> when we ask a person for a favourite tool

I think you're focusing too much on the word 'favourite' and not enough on the fact that they didn't actually ask for a favourite tool. They asked for a favourite how-to for using the suggested options, a Dev Container or a VM. I think before asking this question, if a person is (demonstrably in this case) into LLMs, it should be reasonable for them to ask an LLM first. The options are already given. It's not difficult to form a prompt that can make a reasonable LLM give a reasonable answer.

There aren't that many ways to run a Dev Container or VM. Not everyone is special and different; just follow the recommended and common security best practices.


In 2026? It will be the tool from the vendor who spends the most ad dollars with Anthropic/Google/etc.


Because I value human input too.


I've started a project [1] recently that tries to implement this sandbox idea. It's very new and extremely alpha, but it mostly works as a proof of concept (except I haven't figured out how to get Shelley working yet), and I'm sure there are a ton of bugs and things to work through, but it could be fun to test and experiment with in a VPS and report back any issues.

[1] https://github.com/jgbrwn/shelley-lxc


Claude asks you for permissions every time it wants to run something.


Until you run --dangerously-skip-permissions


That's why you run with "dangerously allow all." What's the point of LLMs if I have to manually approve everything? IME you only get half-decent results if the agent can run tests, run builds and iterate. I'm not going to look at the wall of text it produces on every iteration; it's mostly convincing bullshit. I'll review the code it wrote once the tests pass, but I don't want to be "in the loop".


I really like fly.io's https://sprites.dev/, which is effectively sandboxes for AI. I feel like it's really apt here (not sponsored lmao, wish I was).

Oh, btw, if someone wants to run servers via QEMU, I highly recommend quickemu. It provides SSH access, SSHFS, VNC, SPICE and all such ports bound to just your local device by default, and it also lets you install Debian or any distro (out of many, many distros) using quickget.

It's really intuitive; definitely worth a try: https://github.com/quickemu-project/quickemu
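A sketch of that workflow, using Debian 12 as an example (the release argument and port are assumptions based on quickemu's defaults; the commands are echoed as a dry run):

```shell
#!/bin/sh
# Sketch of the quickemu workflow mentioned above. quickget downloads the
# install image and writes a .conf file; quickemu then boots it with SSH/
# SPICE forwarded only to localhost. Commands are echoed, not executed.

GET="quickget debian 12"
BOOT="quickemu --vm debian-12.conf --display spice"
# quickemu forwards guest SSH to a local port (22220 by default):
SSH_IN="ssh -p 22220 user@localhost"

echo "$GET"; echo "$BOOT"; echo "$SSH_IN"
```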

I personally really like Zed with its SSH remote. I can always open up terminals in it and use Claude Code or OpenCode or anything, and it provides AI as well (I don't use much AI this way; I make simple scripts for myself, so I just copy-paste for free from the websites), but I can recommend Zed for what it's worth as well.


I also program a lot in Clojure/Script. Do you also consider thinking tokens and the number of iterations in the token efficiency?


I don't think thinking tokens are affected, as LLMs "think" mostly in plain language, with occasional code snippets.


I would assume that for certain problems LLMs have a solution readily available in JavaScript/ TypeScript or similarly popular languages, but not in Clojure/Script. Therefore my thinking was that the process of getting to a workable solution would be longer and more expensive in terms of tokens. I don't have any relevant data on this, however, so I may just be wrong.


For SMBs, Proxmox works reasonably well. We have been running it in production for 2+ years already and our customers are quite happy. We have also sent some patches to Proxmox for other, much larger clients...


If you were more polite, you could have a good entry to the discussion.

Yes, Proxmox VE is built on Debian, so anything Debian can do, Proxmox VE can mostly do as well without major issues.


Anyone who is really committed to their infrastructure will not build it on top of highly proprietary stuff where you have zero visibility into what's actually happening, so you can only hope that somebody fixes problems properly and in a reasonable time frame.

With open source, if you have the right people, you can bisect down to the exact commit and function where the problem is, which speeds up the remedy immensely. We have done exactly that with backup restores from the Proxmox Backup Server. The patches are now in Proxmox VE 9.0, because the low-hanging-fruit problem was actually in the client code, not the Proxmox Backup Server itself.
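The bisect part of that workflow is standard git; a sketch with hypothetical revisions and a hypothetical reproducer script (commands are echoed as a dry run):

```shell
#!/bin/sh
# Sketch of the bisect workflow described above: narrow a regression down to
# a single commit, given one known-good and one known-bad revision. The
# revisions and the reproducer script are hypothetical; commands are echoed.

START="git bisect start"
BAD="git bisect bad HEAD"
GOOD="git bisect good v3.0.1"
# Let git drive the binary search with an automated reproducer that exits
# non-zero when the bug is present:
RUN="git bisect run ./reproduce-restore-failure.sh"

echo "$START"; echo "$BAD"; echo "$GOOD"; echo "$RUN"
```

With an automated reproducer, `git bisect run` finds the offending commit in O(log n) builds without manual checking at each step.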


You can have a look at XCP-ng. They have the expertise: it's originally a fork of Citrix XenServer, but they stand completely on their own feet now, delivering some interesting advancements.

