Hacker News | foruhar's comments

The recent bombing of the bridge in Iran was apparently a double-tap strike by the US.

https://en.wikipedia.org/wiki/2026_Karaj_B1_bridge_attack


That's a fairly decent definition of vibecoding across multiple sessions.

Hyperbolically, I think it's one of humanity's greatest resources. I can find anything from precision machining, LLM internals, historical footage of WWI, music performances from pretty much any era, and on, and on. There are so many things that I didn't know there was any footage of, or that I didn't know a single thing about, that I find there pretty much daily.

I wish the BBC would publish their whole archive through YT. The few things that they do put up are often so mind-expanding, whether it's Bertrand Russell, The Beatles, or some cracking Scottish chap going for a bike ride with a bottle of whisky.


Worth noting that most YouTube videos can no longer be discovered through search. Search results can now only be sorted by "Relevance" and "Popularity", while you used to be able to sort by release date.

Search results are also non-exhaustive and biased towards recent videos as noted in this study https://arxiv.org/abs/2506.11727

Basically, many videos can no longer be discovered if you don't have a URL to the video or the channel and the algorithm doesn't recommend them.


The non-exhaustive thing is annoying as hell. You might as well delete old videos because there's no way to get to them if you don't remember the link. I used to be able to find this video I took in college 20 years ago. There's just no way for me to get to it anymore.

Sounds like a good opportunity for a big indexing company to add some value by using thei-

Wait...


Is that related to FreeTube removing the sort-by-age dropdown menu from a channel's Videos tab?

I don’t know. All I know is the video I’m looking for has a very unique title and used to come up as the main search result less than 3 years ago and it’s unfindable now. It’s from 2006.

I've noticed that the search is especially bad on the history tab, where even searching the exact title of a video I've seen before doesn't always display it. I've found that the best search for old or niche videos is to ask Gemini with a description of the video (I found it gives better results than GPT 5.1) but it's really unfortunate that the native search isn't more useful.

Realistically you cannot make every video discoverable, given the massive, ever-growing amount of content.

Wikipedia says there are 14.8 billion videos currently uploaded to YouTube; even at a kilobyte of title and description each, that's only on the order of 15 TB of text, so it seems technically easy to index?

The more likely explanation is that Google doesn't want YouTube to be crawled, which gives them a massive moat for AI training


You can add search filters to the search bar in YouTube.

E.g. this will return videos published in that time range with a duration longer than 2 minutes:

cat videos after:2014-01-01 before:2014-12-31 >2m

[edit] - the duration filter doesn't remove Shorts; I think there are just no Shorts published in that time range.


> some cracking Scottish chap going for a bike ride with a bottle of whisky.

I've seen that one!


Here it is for those who haven't: https://www.youtube.com/watch?v=cZk2jV5gJbM

When I looked it up, turns out I've seen it too!


I hope he is doing well, or at least had a great ride through life.

One really stunning deep dive into the bowels of weird YT content: https://youtu.be/JAALDob9Ev0?si=vooePQoQM0TURpNK

  Grady Smith published an investigative video titled "This TikTok Girl Band Ruined My Life" on October 18, 2022. The project focuses on Taylor Red, a group of triplet sisters with red hair who originally started as a traditional country band. Smith's video explores their transition from serious musicians to creators of surreal, fast-paced TikTok and YouTube content that appears specifically designed to capture the attention of toddlers and children.

There is a subreddit, https://www.reddit.com/r/DeepIntoYouTube/, which brings up channels and videos with a negligible number of views, usually in the range of 100 or fewer.

This video, https://www.youtube.com/watch?v=3emFAf3jqQQ, for example, is from 15 years ago and has 102 views right now.


It was in the pre-TikTok world (before the push to regular content updates) and before the purge. A lot of content is now gone. It's a great resource but very loosely coupled with humanity/human knowledge (and arguably a pretty poor resource for it, both theoretically (linear information with constant velocity, such as video) and practically (the content just isn't there on YouTube, search is truncated, etc.)).

> that I didn't know a single thing about that I find there pretty much daily.

Rarely (never?) have I found new knowledge on YouTube; however, it's a great source of joy/emotions/slop.


What purge?

I'm searching Google trying to figure out what you're talking about but not getting any meaningful results.


Somewhere during the 2010s YouTube became completely sanitized. It went from a general video platform for adults to some dumbed-down media company that wouldn't offend negligent mothers in Idaho who gave their kids an iPad rather than parent them.

Barely literate workers in 3rd world countries then went on a mass "moderation" spree deleting anything that might even remotely be considered controversial

Videos with millions of views were delisted overnight and the associated channels received community standards violation strikes


Apparently there was a purge of extremist content and another purge of AI slop? I wasn't aware of any major publicised purges, though I do remember Google saying a few years ago that they'd be deleting inactive Google accounts (with the exception of accounts with public Youtube videos I think).

(Edit: found a link that covers the first half of what I'm talking about. It took some digging. There is no way you'd have found it with the little info you had)

https://www.nbcnews.com/storyline/isis-terror/ads-shown-isis... )

I have de-lurked because I can actually contribute to this. I am almost positive that what this is referring to is the time ISIS/ISIL (as it was still sometimes referred to then) uploaded the first video of one of their hostages (a kidnapped journalist?) being beheaded on YouTube. It would have been between 2013 and 2017 inclusive.

Advertising was in full swing on YouTube, with household names like Pepsi and McDonald's advertising regularly. BUT ads weren't restricted to certain types of videos then... I don't know if you were paying attention to world events then, but ISIS was always in the news, and when they released the beheading video it was linked EVERYWHERE. So of course when people went to watch a gruesome beheading, before or after it played they would see "da da da da da, I'm loving it".

There was a brief but MASSIVE public outrage against any company whose advertisements were involved, because people thought these companies were endorsing ISIS and beheadings. They didn't understand that the advertisers were paying YouTube for coverage but had no say in exactly which videos received which ads. They just blamed the companies they saw in connection with the video. As damage control, these major companies of course instantly pulled all ads from running on YouTube and pointed the finger at YouTube, LOUDLY. YouTube lost a substantial amount of revenue and reputation pretty much overnight. Probably in less than 24 hrs. To repair their own reputation and become an attractive and reliable investment for advertisers ASAP, YouTube immediately took measures to prevent this occurring again. Thus was the first purge.

I do not remember what the other measures or standards originally were, but they've changed over the years since. Most of the people talking about its roll-on effects were YouTubers talking about how it affected them personally, in YouTube videos with vague or dramatic titles, which is why you would not find many results on Google. They didn't want Google to find them, see them criticising it, and take their videos down too. I do not think the cottage industry we now have around influencers and content creation, including networking and news, had really gotten off the ground then, so nobody that I can think of would have been systematically documenting it in a written, text-searchable form. Thus, no Google presence.

It's really scary to me that such a major shaping event in our online lives, and thus our culture, has gone largely undocumented except through videos which people delist, delete, or get copyright-struck, all the time.

Tldr: ISIS has a substantial share of the blame for ruining YouTube. ISIS is still going.


> Rarely(never?) have I found new knowledge on youtube,

Did you ever try? There are experts in many fields posting about all kinds of stuff in there, from professional knowledge, to the most mainstream of hobbies, to very obscure stuff.


> Rarely (never?) have I found new knowledge on YouTube; however, it's a great source of joy/emotions/slop.

I suspect you are not looking very hard. I have learned a tremendous amount about everything from stone cutting to metalworking to welding to Kalman filters to linear algebra. There is a lot out there. The main annoyance I have is keeping AI slop out of my feed so that I can instead learn from genuine experts.


Appending 'before:2024' to your search term works on YouTube and gives results from the pre-slopocene era.

Quite a lot of stuff is on iPlayer. But as always, licensing is the killer.

(Not to mention reputational risk, which is why so many episodes of Top Of The Pops are hidden)


It's one of the US's (or some future world government's) greatest future public utilities for sure.

Right now it's ruining its own content by overoptimizing for engagement slop, making the creators dumber and consumers poorer, and limiting ad growth in the long term.


"Hyperbolically, I think it's one of humanity's greatest resources."

The content is one of humani .... oh it is all of ... oh it's in the hands of ... a commercial company renowned for adverts.

Is there not a better place for human creativity than ... Google? Should my TV license fee fund Google?

Fuck off (hyperbolically)!


> oh it is all of ... oh it's in the hands of ... a commercial company renowned for adverts.

As opposed to governments renowned for colonizing half the world, destroying countless cultures, committing genocide in living memory?


> As opposed to governments renowned for colonizing half the world, destroying countless cultures, committing genocide in living memory?

Yes. Private companies are capable of the same, with the addition of having profit as the sole purpose of their existence.


Lowkey one of the best things about LLMs: finally we have truly indexed YouTube, which makes a massive amount of knowledge consumable and searchable in text format. I hate watching YouTube videos but like the information they provide; between YouTube's AI feature, Perplexity, etc., video indexing has been a life saver.

Agreed - I've never followed YouTube that closely, but apparently there was a time when everyone thought that YouTube favoured videos that were around 10 minutes in length... so everyone padded their short videos to 10 minutes.

It wasn't about favoring them. Videos needed to be 10 minutes long to get mid-roll ads.

Happy days at the ant colony.

Smalltalk too was originally a full OS running on bare metal back in the Xerox Alto days (1972-ish).

The "OS" (or rather "kernel") was actually the VM which was implemented in microcode and BCPL. The Smalltalk code within the image was completely abstracted away from the physical machine. In today's terms it was rather the "userland", not a full OS.

It's refreshing to see Oberon getting some love on the Pi. There’s a certain 'engineering elegance' in the Wirthian school of thought that we’ve largely lost in modern systems.

While working on a C++ vector engine optimized for 5M+ documents in very tight RAM (240MB), I often find myself looking back at how Oberon handled resource management. In an era where a 'hello world' app can pull in 100MB of dependencies, the idea of a full OS that is both human-readable and fits into a few megabytes is more relevant than ever.

Rochus, since you’ve worked on the IDE and the kernel: do you think the strictness of Oberon’s type system and its lean philosophy still offers a performance advantage for modern high-density data tasks, or is it primarily an educational 'ideal' at this point?


I don't know. Unfortunately we don't have an Oberon compiler doing similar optimization as e.g. GCC, so we can only speculate. I did measurements some time ago to compare a typical Oberon compiler on x86 with GCC, and the performance was roughly equivalent to that of GCC without optimizations (see https://github.com/rochus-keller/Are-we-fast-yet/tree/main/O...).

The C++ type system is also pretty strict, and on the other hand it's possible and even unavoidable in the Oberon System 3 to do pointer arithmetic and other things common in C behind the compiler's back (via the SYSTEM module features, which are not even type safe). So the original Oberon syntax and semantics are likely not at the sweet spot of systems programming.

With my Micron (i.e. Micro Oberon, see https://github.com/rochus-keller/micron/) language currently in development I try on the one hand to get closer to C in terms of features and performance, but with stricter type safety; on the other hand it also supports high-level applications, e.g. with a garbage collector. The availability of features is controlled via language levels which are selected at module level. This design can be regarded as a consequence of many years of studying/working with Wirth languages and the Oberon system.

There were a couple of PhD theses at ETH Zurich in the 90s on optimizations for Oberon, as well as SSA support. I haven't looked at your language yet, but depending on how advanced your compiler is, and how similar to Oberon, they might be worth looking up.

I'm only aware of Brandis's thesis, which did optimizations on a subset of Oberon for the PPC architecture. There was also a JIT compiler, but not a particularly optimizing one. OP2 was the prevalent compiler and continued to be extended and used for AOS, and it wasn't optimizing. To really assess whether a given language can achieve higher performance than other languages due to its special design features, we should actually implement it on the same optimizing infrastructure as the other languages (e.g. LLVM), so that both implementations have the same chance to get the maximum possible benefit. Otherwise there are always alternative explanations for performance differences.

It might have been Brandis's thesis I was primarily thinking about. Of the PhD theses at ETH Zurich on Oberon, I'm also a big fan of Michael Franz's thesis on Semantic Dictionary Encoding, but that only touched on optimization potential as a side note. I'm certain there was at least one other paper on optimization, but it might not have been a PhD thesis...

I get the motivation for wanting to use LLVM, but personally I don't like it (and have the luxury of ignoring it since I only do compilers as a hobby...) and prefer to aim for self-hosting whenever I work on a language. But LLVM is of course a perfectly fine choice if your goal doesn't include self-hosting - you get a lot for free.


I don’t like LLVM either, because its size and complexity are simply spiraling out of control, and especially because I consider the IR to be a total design failure. If I use LLVM at all, it would be version 4.0.1 or 3.4 at most. But it is the standard, especially if you want to run tests related to the question the fellow asked above. The alternative would be to build a frontend for GCC, but that is no less complex or time-consuming (and ultimately, you’re still dependent on binutils). However, C on LLVM or GCC should probably be considered the “upper bound” when it comes to how well a program can be optimized, and thus the benchmark for any performance measurement.

> However, C on LLVM or GCC should probably be considered the “upper bound” when it comes to how well a program can be optimized, and thus the benchmark for any performance measurement.

Is it? Isn't it rather the case that C is too low-level to express intent and (hence) to offer room to optimize? I would expect that a language in which, e.g., matrix multiplication can be natively expressed could be compiled to more efficient code for it.

I would rather expect that, for compilers which don't optimize well, C is the easiest language to produce fairly efficient code for (well, perhaps BCPL would be even easier, but nobody wants to use that these days).
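
To make the matrix multiplication point concrete, here is a rough, untested C++ sketch (the matmul name and row-major float layout are just my illustration): once the intent C = A * B has been lowered to loops and pointers, the compiler only sees the memory accesses below and has to prove non-aliasing and pick a schedule on its own, whereas a language with native matrices could leave blocking, vectorization, or even dispatch to a tuned kernel up to the implementation.

  #include <cstddef>

  // What "matrix multiplication" looks like after being lowered to C/C++:
  // the intent (C = A * B) is gone; only one particular loop order and a set
  // of pointer accesses remain, and the compiler must assume A, B and C may
  // alias unless it can prove otherwise.
  void matmul(const float* A, const float* B, float* C, std::size_t n) {
      for (std::size_t i = 0; i < n; ++i)
          for (std::size_t j = 0; j < n; ++j) {
              float acc = 0.0f;
              for (std::size_t k = 0; k < n; ++k)
                  acc += A[i * n + k] * B[k * n + j];
              C[i * n + j] = acc;
          }
  }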


> I would expect that a language in which, e.g. matrix multiplication can be natively expressed, could be compiled to more efficient code for such.

That's exactly the question we would hope to answer with such an experiment. Given that your language received sufficient investments to implement an optimal LLVM adaptation (as C did), we would then expect your language to be significantly faster on a benchmark heavily depending on matrix multiplication. If not, this would mean that the optimizer can get away with any language and the specific language design features have little impact on performance (and we can use them without performance worries).


Rochus, your point about LLVM and the 'upper bound' of C optimization is a bit of a bitter pill for systems engineers. In my own work, I often hit that wall where I'm trying to express high-level data intent (like vector similarity semantics) but end up fighting the optimizer because it can't prove enough about memory aliasing or data alignment to stay efficient.

I agree with guenthert that higher-level intent should theoretically allow for better optimization, but as you said, without the decades of investment that went into the C backends, it's a David vs. Goliath situation.

The 'spiraling complexity' of LLVM you mentioned is exactly why some of us are looking back at leaner designs. For high-density data tasks (like the 5.2M documents in 240MB I'm handling), I'd almost prefer a language that gives me more predictable, transparent control over the machine than one that relies on a million-line optimizer to 'guess' what I'm trying to do. It feels like we are at a crossroads between 'massive compilers' and 'predictable languages' again.
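
To make the "fighting the optimizer" part concrete, here is a stripped-down, purely illustrative C++20 sketch of the kind of hot loop I mean (the dot name and the 64-byte alignment are just assumptions for the sketch, and __restrict is a GCC/Clang/MSVC extension rather than standard C++). The aliasing and alignment facts that the data layout already guarantees have to be restated by hand at every kernel before the optimizer will use them:

  #include <cstddef>
  #include <memory>  // std::assume_aligned (C++20)

  // Hot loop from a hypothetical similarity engine: the pointers never alias
  // and the vectors are 64-byte aligned by construction, but the compiler
  // only believes it if I spell it out.
  float dot(const float* __restrict a, const float* __restrict b, std::size_t n) {
      const float* pa = std::assume_aligned<64>(a);  // promise: 64-byte aligned
      const float* pb = std::assume_aligned<64>(b);
      float acc = 0.0f;
      for (std::size_t i = 0; i < n; ++i)
          acc += pa[i] * pb[i];
      return acc;
  }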


When you call LLVM IR a design failure, do you mean its semantic model (e.g., memory/UB), or its role as a cross-language contract? Is there a specific IR property that prevents clean mapping from Oberon?

Several historical design choices within the IR itself have created immense complexity, leading to unsound optimizations and severe compile-time bloat. It's not high-level enough that you e.g. don't have to care about ABI details, and it's not low-level enough to actually take care of those ABI details in a decent way. And it's a continuously moving target: you cannot implement something which then continues to work.

To be fair, they also kind of share that opinion, hence why MLIR came to be, first only for AI, nowadays for everything; even C is going to get its own MLIR dialect (an ongoing effort).

Is anyone attempting to implement Oberon on LLVM IR? Sounds like a fun project

There are at least two projects I'm aware of, but I don't think they are ready yet to make serious measurements or to make optimal use of LLVM (it's just too big and complex for most people).

That benchmark is a great data point, thanks for sharing. The performance parity with unoptimized GCC makes sense, given how much heavy lifting modern LLVM/GCC backends do for C++.

Your approach with Micron and the 'language levels' is particularly interesting. One of the biggest hurdles I face in C++ with these high-density vector tasks is exactly that: balancing the raw 'unsafe' pointer arithmetic needed for SIMD and custom memory layouts with the safety needed for the rest of the application.

Having those features controlled at the module level (like your Micron levels) sounds like a much cleaner architectural 'contract' than the scattered unsafe blocks or reinterpret_cast mess we often deal with in systems programming. I'll definitely keep an eye on the Micron repository—bridging that gap between Wirth-style safety and C-level performance is something the industry is still clearly struggling with (even with Rust's rise).
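
For what it's worth, the closest I get to that kind of contract in C++ today is confining the unsafe part to one tiny module. A rough sketch (class and member names are just illustrative, and treating the mapped bytes as floats is the single deliberate unsafe assumption):

  #include <cstddef>
  #include <span>

  // The only class in the engine allowed to touch raw bytes; everything else
  // sees std::span<const float> and never performs a cast.
  class VectorStore {
  public:
      // 'base' points at a memory-mapped file of densely packed float vectors.
      VectorStore(const std::byte* base, std::size_t dim, std::size_t count)
          : data_(reinterpret_cast<const float*>(base)), dim_(dim), count_(count) {}

      // Caller guarantees id < size(); the rest of the code works with spans.
      std::span<const float> vector(std::size_t id) const {
          return {data_ + id * dim_, dim_};
      }

      std::size_t size() const { return count_; }

  private:
      const float* data_;
      std::size_t dim_;
      std::size_t count_;
  };

It's still the reinterpret_cast mess underneath, but at least the rest of the codebase can't reach it, which is roughly what a module-level language level would give you for free.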


You can also check the XDS Modula-2/Oberon-2 programming system; it is an optimizing compiler: https://github.com/excelsior-oss/xds

Tesla was the real power grid guy. The scope of his inventions, from the generators at Niagara Falls to the transformers to the motors, is pretty impressive. More so given that he was eventually credited with priority over the radio transmission patents originally issued to Marconi.


The fact that Edison is pervasively over-credited is really another example of the highly visible executive claiming personal credit for the labors of employees.


Two others who come to mind are https://en.wikipedia.org/wiki/Charles_Proteus_Steinmetz and https://en.wikipedia.org/wiki/Charles_F._Scott_(engineer)

Steinmetz contributed heavily to AC systems theory, which helped people understand and expand transmission, while Scott contributed a lot to transformer theory and design (I have to find his Transformer book).


My sense is that the Gemini models are very capable but the Gemini CLI experience is subpar compared to Claude Code and Codex. I'm guessing that it's the harness, since it can get confused, fall into doom loops, and generally lose the plot in a way that the model does not in Gemini Studio or the Gemini app.

I think a bunch of these harnesses are open source so it surprises me that there can be such a gulf between them.


It's not just the tooling. If you use Gemini in opencode it malfunctions in similar ways.

I haven't tried 3.1 yet, but 3 is just incompetent at tool use. In particular in editing chunks of text in files, it gets very confused and goes into loops.

The model also does this thing where it degrades into loops of nonsense thought patterns over time.

For shorter sessions where it's more analysis than execution, it is a strong model.

We'll see about 3.1. I don't know why it's not showing in my gemini CLI as available yet.


It's not just subpar, it's not even sub-sub-par.

It goes into loops and fails to complete the task 8 times out of 10 that I've used it.


Gemini-3.0-flash-preview came out right away with the 3.0 release, and I was expecting 3.0-flash-lite before a bump on the pro model. I wonder if they have abandoned that part of the Pareto/price-performance frontier.


I couldn't find anything more recent than this, but apparently it has made the streets safer for pedestrians too: "Traffic fatalities in the Congestion Pricing zone are down 40% from last year."

This is from July 2025: https://transalt.org/press-releases/new-data-from-transporta...


This video shows the systems being built and shipped with cooling, cabling, etc.

It's pretty mind-blowing what this crisis shows, from the manipulation of atoms and electrons all the way up to these clusters. Particularly mind-blowing for me, who has cable management issues with a ten-port router.

https://youtu.be/1la6fMl7xNA?si=eWTVHeGThNgFKMVG


What's mind-blowing about the video you shared is the amount of copper cable used.

I thought that with fiber we wouldn't need copper cables except maybe for electricity distribution, but clearly I was wrong.

Thanks for sharing.

