Hacker News | jcgrillo's comments

> any software company should be legally responsible for not being able to match the resources of a nation-state that might want to compromise their data

No, not the company; holding companies responsible doesn't do much. The engineer who signed off on the system needs to be held personally liable for its safety. If you're a licensed civil engineer and you sign off on a bridge that collapses, you're liable. That's how the real world works, and it should be the same for software.


Define "safety".

This wasn't unlike how the U.S. did it in Vietnam. They would have a small, unarmed helicopter fly low with an observer and an M-16 to spot (or, more likely, draw fire), with some Cobra and/or Huey gunships higher up. When the little bird found some targets, the big ones would come down and lay waste to the entire area.

This feels like what happens when the selection pressure isn't there. Building for "the next war" (or more broadly "the future") is always bound to be an utter boondoggle, because despite your best intentions and the most strenuous furrowing of your eyebrows you'll have literally no fucking idea what the actual demands of that situation will be. You have to react, that's it. Trying to predict is futile. So better to try to set yourself up to react better?

My latest macOS wtf: sometimes the terminal window shrinks by 1 or 2 columns every time I wake the computer up from sleep, but only when connected to an external monitor via a Thunderbolt USB-C hub. Terrifying to imagine how that must be. By contrast, Linux/BSD desktops don't generally seem to pull this kind of weird mindfuck horror movie shit? Like it either works or it's completely, obviously, totally broken. Not some weird subtle in-between thing.

The only thing I need to do to unbreak gnome is twiddle the ctrl:nocaps thing in xkb-options. Everything else is optional.
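For reference, that tweak is a one-liner either way (a sketch; the gsettings key is the standard GNOME input-sources schema, and `ctrl:nocaps` remaps Caps Lock to Ctrl):

```shell
# under GNOME (Wayland or X11): set the xkb option via gsettings
gsettings set org.gnome.desktop.input-sources xkb-options "['ctrl:nocaps']"

# on plain X11 without GNOME, the same option via setxkbmap
setxkbmap -option ctrl:nocaps
```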

> This can be a ton of data though, so we're trying to figure out what to compress and how. We also have the challenge of figuring out how to scrub logs of any potentially sensitive information.

This is fundamentally a data modeling problem. Currently computer telemetry data are just little bags of utf-8 bytes, or at best something like list<map<bytes, bytes>>. IMO this needs to change from the ground up. Logging libraries should emit structured data, conforming to a user supplied schema. Not some open-ended schema that tries to be everything to everyone. Then it's easy to solve both problems--each field is a typed column which can be compressed optimally, and marking a field as "safe" is something encoded in its type. So upon export, only the safe fields make it off the box, or out of the VPC, or whatever--note you can have a richer ACL structure than just "safe yes/no".
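A minimal sketch of what "safe encoded in the type" could look like, using dataclass field metadata as the stand-in for a user-supplied schema (the schema, field names, and the safe/sensitive tags are all hypothetical; a real system would hang this off the serialization layer, and the tag could be a richer ACL than a binary):

```python
from dataclasses import dataclass, field, fields

# Hypothetical export annotations: "safe" fields may leave the box,
# everything else stays local.
SAFE = {"export": "safe"}
SENSITIVE = {"export": "sensitive"}

@dataclass
class RequestLog:
    # user-supplied schema: each field is typed and carries an export policy
    method: str = field(metadata=SAFE)
    status: int = field(metadata=SAFE)
    user_email: str = field(metadata=SENSITIVE)

def export_safe(record) -> dict:
    # only fields marked safe survive export off the box / out of the VPC
    return {
        f.name: getattr(record, f.name)
        for f in fields(record)
        if f.metadata.get("export") == "safe"
    }

rec = RequestLog(method="GET", status=200, user_email="alice@example.com")
print(export_safe(rec))  # {'method': 'GET', 'status': 200}
```

Because each field is typed, a columnar store can also pick a per-column codec (delta for ints, dictionary for low-cardinality strings) instead of gzipping bags of bytes.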

I applaud the industry for trying so hard for so long to make everything backwards compatible with the unstructured bytes base case, but I'm not sure that's ever really been the right north star.


Grand solutions require broad coordination, and they often devolve back into a modified-but-equivalent version of the previous problem. :(

Stream-of-bytes is a classically difficult model to escape. Many have tried.


Yeah. There are good reasons things are bad. But there's also a foolish consistency. Like, you can just do things! If you decide monitoring is important you can decide not to outsource it. Most everyone doesn't, though. Probably because they don't think it's very important, and the existing tools get it done well enough, and it's the muscle memory of the subjectively familiar (if objectively fantastically overpriced).

Well, even in the early days of infrastructure growth, when designing bespoke monitoring systems and protocols would be relatively low-cost, it's still nowhere near the highest-ROI way to spend your tech team's time and energy.

And to do it right (i.e. low risk of having it blow up with negative effects on the larger business goals), you need someone fairly experienced or maybe even specialized in that area. If you have that person, they are on the team because of their other skills, which you need more urgently.

SaaS, COTS, and open source monitoring tools have to cater to the existing customers. The sales pitch is "easy to integrate". So even they are not incentivized to build something new.

It boils down to the fact that stream-of-bytes is extremely well-understood, and almost always good enough. Infinitely flexible, low-ceremony, no patents, and comes preinstalled on everything (emitters and consumers). It's like HTTP in that way.

And the evolution is similar too. It'll always be stream-of-bytes, but you can emit in JSON or protobuf etc, if it's worth the cognitive overhead to do so. All the hyperscalers do this, even when the original emitter (web servers, etc) is just blindly spewing atrocious CLF/quirky-SSV text.
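That evolution fits in a few lines: the wire format stays newline-delimited bytes, and the structure is purely opt-in on the emitter side (field names here are hypothetical):

```python
import json

def to_line(event: dict) -> bytes:
    # structured payload, but the transport is still a plain byte stream
    return (json.dumps(event, separators=(",", ":")) + "\n").encode("utf-8")

line = to_line({"level": "info", "status": 200, "path": "/health"})
print(line)  # b'{"level":"info","status":200,"path":"/health"}\n'
```

Any consumer that only understands bytes still works (tail, grep, a socket), which is exactly why this path wins by default.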


> It'll always be stream-of-bytes, but you can emit in JSON or protobuf etc, if it's worth the cognitive overhead to do so.

This is the crux of it. That's great until you encounter a need for a schema, and then it's "schema-on-read" or some similar abomination. And the need might not manifest until you're pushing like 1TB/day or more of telemetry data with hundreds or thousands of engineers working on some >1MLoC monstrosity. Hard to dig out of that hole.
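For contrast, a tiny schema-on-read sketch: the raw bytes land as-is, and the "schema" lives in the reader, which has to coerce types and fill defaults record by record (fields are hypothetical; real systems push this into the query engine, which is where the pain scales with volume):

```python
import json

# raw telemetry as it landed: inconsistent types, missing fields
RAW = b'{"status": "200", "path": "/x"}\n{"status": 500}\n'

def read_with_schema(raw: bytes):
    # schema applied at read time: coerce types, fill defaults per record
    for line in raw.splitlines():
        rec = json.loads(line)
        yield {"status": int(rec.get("status", 0)),
               "path": str(rec.get("path", ""))}

rows = list(read_with_schema(RAW))
print(rows)  # [{'status': 200, 'path': '/x'}, {'status': 500, 'path': ''}]
```

Every reader re-pays the coercion cost, and every reader can disagree about it, which is what makes digging out of the hole so hard at 1TB/day.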

The situation is tragically optimal--we've achieved some kind of multiobjective local maximum on a rock in the sewer at the bottom of a picturesque alpine valley and declared victory. We should do better.

Or maybe I'm overly optimistic.


> The situation is tragically optimal--we've achieved some kind of multiobjective local maximum on a rock in the sewer at the bottom of a picturesque alpine valley and declared victory. We should do better.

But it's a very comfortable rock. Pointy in all the right places.


til it ain't

> if they had a coherent product vision, and trusted their engineers to use AI how they see fit, then I’m sure they would be more successful

Out of curiosity, what do you think might be a successful application for AI in Uber's business? It seems like this is the sort of thing AI applications end up being. Does it actually get better than this?


yes, only a monster would put the pickles underneath the tomato

Yes. The biasing function is that (mostly) only the less smart ones get exposed and caught.

The whole sector is still quite likely a nonsense dead end.
