Hacker News | kriro's comments

XFCE is also my go to. But I have moved on from caring too much about desktop environments as long as they don't get in the way. I went through a phase of trying pure openbox and all kinds of things and settled on XFCE. It doesn't do everything like I want but that's fine. I mostly open a terminal, a browser, thunderbird, some programming environment and a latex editor these days.

I'd personally bet on Google and Meta in the long run since they have access to the most interesting datasets from their other operations.


Agree. Anyone with access to large proprietary data has an edge in their space (not necessarily for foundation models): Salesforce, Adobe, AutoCAD, Caterpillar.


I'm pretty sure it works very differently for different people so you have to figure out your own process. I've tried different things but at the end of the day, I simply have a notebook next to my laptop/in my laptop bag and write down everything in freeform text. No index, no bullet points and things like that. I put a date and start writing. I'll usually do some TODOs as checklists to get them out of my brain and bothering me at the start of the day but only big items, not each and every step. It's a mix of work and private things. Just writing stuff down is helpful for me, even if I never reference it again.

I do use the Feynman Technique if I come across something interesting and try to explain it on paper. So if I was using it just for work, I'd probably do that. Something like "Spec driven development (Github Spec Kit and similar toolkits) is essentially a bunch of md files that provide more context for agents. There are some scripts that provide scaffolding, having agents write the md uses a lot of tokens so writing them manually after the scaffold is generated makes more sense. Try with a small project."


A+ app, I turned on sound and was not disappointed.

Love the movie, got a spray can and sprayed my whole keyboard army green after watching it then realized I can't 10 finger type. What a golden age of interesting young people in computer security. Roughly one year later (iirc), I read "Smashing the Stack for Fun and Profit" which might have been my most influential IT related read. It's probably tied with "Man-Computer Symbiosis" :)


I'd actually say the opposite is the case. B2B (even SaaS) is probably the most robust when it comes to AI resistance. The described "in house vibe coded SaaS replacement" does not mirror my experience in B2B at all. The B2B software mindset I've encountered the most is "We'll pay you so we don't have to wrestle with this and can focus on what we do. We'll pay you even more if we worry even less." which is basically the opposite of...let's have someone inhouse vibe code and push to production. B2B is usually fairly conservative.


Reminds me of a blog post a while back saying that gigabit fiber at home would lead to everyone running their own email server.


There was no chance that everyone would be running their own email server, but if it weren't for the lack of IPv6 adoption, a plug-and-go home email server solution would probably see a decent amount of use. I'd bet we'd already be seeing it as a feature in most mid-range home routers by now.


The mail server in a router is easy to host; the problems are:

1) Uptime (though this could be partially alleviated by retries)

and most of all:

2) "Trust"/"Spam score"

It's the main reason to use Sendgrid, AWS, Google, etc. Their "value" is not the email service, it's that their SMTP servers are trusted.

If tomorrow I can just send from localhost instead of going through Google it's fine for me, but in reality, my emails won't arrive due to these filters.
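The "send from localhost" idea is mechanically trivial, which is exactly the commenter's point. A minimal sketch (assuming a local MTA such as Postfix is listening on port 25; the names and addresses here are illustrative, not from the thread):

```python
import smtplib
from email.message import EmailMessage

def build_message(sender: str, rcpt: str, subject: str, body: str) -> EmailMessage:
    """Assemble a plain-text email."""
    msg = EmailMessage()
    msg["From"], msg["To"], msg["Subject"] = sender, rcpt, subject
    msg.set_content(body)
    return msg

def send_via_localhost(msg: EmailMessage) -> None:
    # Hands the message to a local MTA on 127.0.0.1:25. Whether it then
    # lands in the recipient's inbox (rather than spam) is the trust /
    # reputation problem the comment describes, not a protocol problem.
    with smtplib.SMTP("localhost", 25) as smtp:
        smtp.send_message(msg)

msg = build_message("me@example.org", "you@example.net",
                    "hi", "sent from my own box")
```

Constructing and handing off the message is the easy 1%; the other 99% is SPF/DKIM/DMARC records, IP reputation, and staying off blocklists.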


> "Trust"/"Spam score"

See jwz's struggles with hosting his own email. (Not linking to his blog here with HN as the referrer...)

With email, the 800 lb gorillas won, and in the end it didn't even solve the spam problem.


I use a small local provider (posteo) and have 0 problems with spam.

So a 20 pound monkey can also throw some weight around. To be fair, I only use it for personal stuff; it's probably different if you need enterprise scale.


> With email, the 800 lb gorillas won, and in the end it didn't even solve the spam problem.

I have a 15+ year old Gmail account that I've used everywhere. Spam has been a non-issue that entire time.


Your experience isn’t representative. Mine isn’t either.


I've seen plenty of Gmail accounts over the years and they pretty much look the same.

The only Gmail accounts that are "overrun by spam" are those of people subscribing to lots of spammy newsletters and then not knowing how to unsubscribe from them (or figuring they'd stay subscribed in case the next newsletter is the Magical One™). But that's 100% self-inflicted, and you can't save those people with any technical solution.

Email spam isn't a day to day problem for Gmail (at least) since Bayesian email filtering was first implemented.
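For reference, the Bayesian filtering mentioned here is simple enough to sketch: count word frequencies in labeled spam and ham, then score new mail by summed log-likelihood ratios. A toy illustration only (real filters use far more features and training data; this is not Gmail's actual pipeline):

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Count word occurrences in each class."""
    spam = Counter(w for d in spam_docs for w in d.lower().split())
    ham = Counter(w for d in ham_docs for w in d.lower().split())
    return spam, ham

def spam_score(text, spam, ham):
    """Sum of per-word log-likelihood ratios, add-one smoothed; > 0 leans spam."""
    s_total, h_total = sum(spam.values()), sum(ham.values())
    score = 0.0
    for w in text.lower().split():
        p_spam = (spam[w] + 1) / (s_total + 2)
        p_ham = (ham[w] + 1) / (h_total + 2)
        score += math.log(p_spam / p_ham)
    return score

spam, ham = train(
    ["win free money now", "free pills free money"],
    ["meeting notes attached", "lunch at noon tomorrow"],
)
print(spam_score("free money", spam, ham) > 0)        # → True (leans spam)
print(spam_score("meeting tomorrow", spam, ham) > 0)  # → False (leans ham)
```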


The specific concern around uptime & reliability was baked into email systems from almost the start - undeliverable notifications (for the sender) and retries.

But yes, the “trust / spam score” is a legit challenge. If only device manufacturers were held liable for security flaws, but we sadly don’t live in that timeline.


It's not a device/MTA issue; SMTP just is not a secure protocol, and there is not much you can do to 'secure' human communication. Things like spoofing or social engineering are near impossible to address within SMTP without external systems doing some sort of analysis on the messages, or in combination with other protocols like DNS.


SMTP isn't at fault, the social ecosystem is at fault. Every system where identities are cheap has a spam problem. If you think a system has cheap identities and no spam, it probably doesn't have cheap identities — examples are HN or Reddit.


Trust / spam score is the largest one I think, second to consumer ISPs blocking the necessary ports for receiving mail.

Even if your "self hosting" is renting a $5/month VPS, some spam lists (e.g. UCEPROTECT) proactively mark any IP ranges owned by consumer ISPs and VPS hosting as potential spam. I figured paying fastmail $30/yr was worth never having to worry about it.
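The blocklists mentioned here (UCEPROTECT and friends) are plain DNSBLs, queried by reversing the IPv4 octets and appending the list's zone. A small sketch (the `zen.spamhaus.org` zone is just one well-known example; live lookups need network access, which is why `is_listed` isn't called here):

```python
import socket

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """DNSBLs are queried by reversing the IPv4 octets and appending the zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """An A-record answer means the IP is on the list; NXDOMAIN means it isn't."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True
    except socket.gaierror:
        return False

print(dnsbl_query_name("203.0.113.7"))  # → 7.113.0.203.zen.spamhaus.org
```

This is also why whole VPS and consumer-ISP ranges can be blanket-listed: the list operator simply publishes records for every address in the range.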


Not to detract from your wider point, but there's a few ISPs which own IP blocks which aren't blacklisted.

I had quite a bit of success with it and of course, DKIM and the other measures you can take some years back.

For personal emails, I don't think I had any which fed straight into spam.


For "Trust", I believe patio11 described this system as the "Taxi Medallion of Email".

e.g. you spend a lot of money to show that you are a legitimate entity or you pay less money to rent something that shows you are connected to said entity.


If everyone ran a mail server at home spam scores wouldn't be so strict


3) Upgrades suck. Admin also sucks

Maintenance is probably my number one reason for giving up on projects where I'm responsible for feeding the pet.


It's all about spam though, right?

Without some kind of federation or centralization, it seems hard to distinguish a hobbyist from a spammer if both of them are using a plug-and-go. Forcing that responsibility into the hands of Google, Zoho, and Microsoft seems like the best compromise, unfortunately.


For one, if my power goes out for an extended period of time I'd still like to be able to access my email. Communications really can't be hosted locally.


The email server bit directly correlates, too. Will everyone vibe-ops their own email server using AI? Of course not.


What a weird take. I was running my own email server 25 years ago on a 512 kbit ADSL line. No problem at all, would even be enough bandwidth today for most messages.

(Back then email still worked from residential IP addresses, and wasn't blocked by default)


I agree with you. In B2B SaaS you don't sell the software, you sell your expertise in a specific domain and the responsibility you take for owning that expertise. The fact that development costs are nearly zero will make these companies more valuable and more profitable.


Development has always been a small fraction of B2B SaaS expenses after the first couple of years anyways


B2B in a large corp is like you describe, but it's very different in SMBs, and there are many, many more SMBs.


My experience is that SMBs are generally not run by people who feel confident doing any kind of self managed IT.

No amount of LLM usage is going to change them into full stack vibe coders who moonlight as sysadmins. I just don't see it happening.

Not until, that is, a new generation, that has grown accustomed to the tech, takes over.

Until then the current SMBs will for the most part fulfill their IT needs from SaaS businesses (of which I think there will be more due to LLMs lowering the barrier for those of us who feel confident in our coding and sysadmin skills already).


What new generation? Younger generations are less accustomed to self-managed tech.


This, 100%. Most new devices are locked down, mobile, etc.


Having seen how clueless the new generation is and the amount of brain rot they get from using LLMs over honing their own skills, I'd say it's the opposite...


I assume a vibe coded agent would moonlight as the sysadmin to maintain the vibe coded LoB app.


I'm considering SaaS replacements with in house code in situations where my general thoughts are "how can this possibly be the pricing for this?" which is not uncommon.


Well before vibe coding, tons of open source software existed (and exists) to replace SaaS. With lots of features and knobs and real communities. But I still often pay for SaaS because managing it is a headache. Some human has to do it. I can pay the human or I can pay the company. I really don’t see how vibe coded toys can replace real battle tested SaaS products. A better explanation is the bubble in PE ratio is deflating and it’s happening all over, regressing to the mean. AI is a convenient explanation for everything


How many SaaS companies are public? How is that bubble deflating?

These are real risks to these companies.

Your in-house teams can build replacements, it's just a matter of headcount. With Claude, you can build it and staff it and have time left over. Then your investment pays dividends instead of being a subscription straitjacket you have to keep renting.

I think there's an even faster middle ground: open source AI-assisted replacements for SaaS are probably coming. Some of these companies might offer managed versions, which will speed up adoption.


> Your in-house teams can build replacements, it's just a matter of headcount. With Claude, you can build it and staff it and have time left over. Then your investment pays dividends instead of being a subscription straitjacket you have to keep renting.

Let's take Figma as an example. Imagine you have 1000 employees and 300 of them need Figma, so you are paying $120k per year in Figma licenses. That buys you 1 employee working on your own internal Figma: you are paying the same but getting a 100x worse experience, unless your 1 employee with CC can somehow find and copy the important parts of Figma on their own, then deploy it and keep it running through the year without issues, which sounds ludicrous.

If you have fewer than 1000 employees it wouldn't even make sense to have 1 employee doing Figma.
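The arithmetic behind this can be made explicit as a break-even calculation. These are the commenter's illustrative numbers (300 seats at roughly $400/seat/year against a ~$120k loaded engineer-year), not real pricing:

```python
def breakeven_headcount(seats: int, cost_per_seat: float,
                        loaded_engineer_cost: float) -> float:
    """How many engineer-years the annual license spend would fund instead."""
    return seats * cost_per_seat / loaded_engineer_cost

# 300 seats at ~$400/yr ≈ $120k/yr of licenses...
annual_spend = 300 * 400
print(annual_spend)  # → 120000

# ...which funds roughly one loaded engineer-year.
print(breakeven_headcount(300, 400, 120_000))  # → 1.0
```

The point survives the toy math: unless license spend funds several engineers, building in house trades a polished product for a fraction of one person's attention.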


> Let's take Figma as an example. Imagine you have 1000 employees and 300 of them need Figma, so you are paying $120k per year in Figma licenses.

I mean in an example that almost happened... "you are paying 120k per year in Figma licenses, Adobe buys it, you are paying 500k per year in Figma licenses"

At least up until vibe coding, the SaaS provider could still get away with charging as much as, if not slightly more than, the cost of doing it yourself, because most businesses weren't going to build it anyway.


By that logic, people should never use any SaaS product, because someday the price will increase. Then why even use Claude Code? Someday they will get sick of losing money and raise the price to $1000/month.


I mean, yes it is a potential company ending risk if the continuation of your business depends on said SaaS product.


I mean, for that example there's even less to do: you just put your employees directly on Nano Banana or one of the simple Nano Banana wrappers.

If you need rich outputs, there are tools for that now too.

Let me put it another way - would you want to be Adobe or Figma right now?

And applied to the original point, would you feel comfortable being a SaaS company right now?


> you just put your employees directly on Nano Banana or one of the simple Nano Banana wrappers.

So you end up spending the money elsewhere? With exploratory design you can easily spend $10k a month on these models as a company of 1000, completely losing any monetary savings. Any way you look at it, SaaS worked because costs were spread out and low enough not to be worth optimizing too hard.


Now you have an entire in-house product to manage and build features on. It could potentially work but so much of what my company pays for is about much more than the software itself. One example would be BrowserStack for very specific browser and mobile app testing edge cases. Can’t vibe code this. Another would be a VPN service with the maximum number of locations to test how our system behaves when accessing from those locations. Another would be hosted git. Another is google suite and all of its apps. How can we vibe code Google Docs and Sheets and Drive and all of the integrations and tooling? It simply isn’t going to happen.


Maybe you are right and the companies do want to pay and not worry about these problems. But now they have a lot more SaaS options to choose from. The incumbent companies like Salesforce and Atlassian have less of a moat. Maybe they'll keep the power users, but if a customer is only using 80% of the feature set there is new competition. Competition might come in the form of a startup, but it can also come from existing SaaS companies expanding into adjacent domains. Canva now does docs. Notion does email. Etc.


Also, it is my experience that exec and boards favour safe and well known B2B partners over in house. It's a more publicly defensible approach that gives them an out if things go wrong and shareholders get upset.


For big corporations at least prices of SaaS are rarely an issue. Issues are: we don’t have the time to introduce a new tool, what about our processes, we don’t have the right people.


So how much Constellation Software stock are you buying since the market seems to think they are dead in the water after a 50% drawdown?


hard disagree, several b2b categories are going extinct because AI just completely replaced them.

I mean if we want recent examples just look at tailwindui since it's technically a SaaS.


> we want recent examples just look at tailwindui since it's technically a SaaS.

This is a terrible example. Show me someone ripping out their SAP ERP or SalesForce CRM system where they're paying $100k+ for a vibe coded alternative and I'll believe this overall sentiment.


These examples are going to be lagging indicators of the underlying sentiment.

Just because it cannot be done today, doesn't mean there is not a real appetite in large enterprises to do exactly this.

Without naming names, I know of at least one public company with a real hunger for exactly this eventuality.


I have heard this from execs at public companies as well. I think a HUGE part of this appetite is that today no one has yet been subjected to doing business on a bunch of apps cobbled together by vibe coders.

They are just hearing the promise that AI will allow them to build custom software that perfectly melds to their needs in no time at all, and think it sounds great.

I suspect the early adopters who go this route are going to be in for a rude awakening that AI hasn’t actually solved a lot of hard problems in custom software development.


In the world of B2B software many of the 'hard problems in custom software development' have not been solved by human coders either - it can be an extremely grim market for anyone who cares about software quality. I'm completely unconvinced that on average a vibe-coded app is worse than the typical B2B slop.


Exactly — that’s why SaaS isn’t going away.


I too have an appetite for magic beans, but unfortunately, I'll be unable to eat them until they exist. As it stands now, it doesn't seem like AI stuff can produce anything with this large a scope.


So, do their AI devs have deep knowledge of the business processes, regulations/legal (of course in all kinds of regions), scaling, security, ... ? Because the LLMs sure as hell are lacking that knowledge (again, in depth).

Of course, once AGI is available (if it is ever) everything changes. But for now someone needs to have the deep expertise.


> I know of at least one public company with a real hunger for exactly this eventuality.

Hunger or actually putting real investment against it? I'll wait.


>> This is a terrible example. Show me someone ripping out their SAP ERP or SalesForce CRM system where they're paying $100k+ for a vibe coded alternative and I'll believe this overall sentiment.

I cannot imagine an SMB or fortune 500 ripping out Salesforce or SAP. However, I can see a point-tool going away (e.g., those $50/mo contracts which do something tiny like connect one tool to another.)


TailwindUI isn't really what I'd consider SaaS -- it was a buy-once-and-download software product.

That means to keep making money they need to keep selling to new people. According to them, their only marketing channel was the Tailwind docs, and AI made it so not nearly as many people needed to visit the Tailwind docs.

If they had gone with the subscription SaaS model, they'd probably be a little better off, as they would have still had revenue coming in from their existing users.


> I mean if we want recent examples just look at tailwindui since it's technically a SaaS.

How is it in any way B2B? At most B2C plus freelancers / individuals / really small SMEs.

It didn't have any of the things a medium/large B2B buyer would look for, e.g. SSO, SOC 2 and other security measures. It doesn't target the reusability that I as a business would want. The provided blocks never work together; there aren't reusable components.

Tailwind UI or now Tailwind Plus is more like vibe coding pre-AI.


Sorry, but TailwindUI is not a SaaS. There is no service or hosting; you buy a coded template once and then receive updates. It is totally not the same as a critical B2B SaaS that is running 24/7 on the vendor's servers, providing real support and service.


TailwindUI unfortunately sits in a position of being an easy to disrupt business with current AI.

Now attempt the same with Zoom. I suspect vibe coding will fall down on a project that complex: it's too big to fit in the mental model of a single engineer maintaining a widely used tool.


Perhaps the case for premium CSS SaaS businesses, I guess (which seems particularly primed for disruption even pre-AI), but there are many more robust B2B categories out there that aren't literal code + docs as a service.


There is a paradigm shift but personally I like to zoom out a little:

It used to be that your new b2b product has to try and displace a spreadsheet. Now it has to displace an agent.


how do people not understand? if you have a VC-funded b2b saas, you need to charge huge margins for the investors to get a return. now, small teams can vibe code a replacement and charge 90% less money. AI is going to kill saas margins.

i literally cannot understand why people keep repeating that non-tech companies will build their own software; that's not the bear case for saas


Atlassian: surviving since 2002 because no-one could previously build a kanban board or project management app


I think the value is lost on the end user, but it’s more readily apparent to everyone above them.

I’ve talked to many non engineering managers that love Jira, love the reports, the way they can see work flows, do intake etc.

Engineers and even a lot of engineering managers loathe it, largely, but I think we're the collective afterthought.

Also, FWIW, a lot of the pain people have with Jira is self-inflicted by the people who set up the instance and how it works, vs vanilla Jira.


Did vanilla Jira for a while, battled with a web app that is actively trying to make you hate it—switched our team to Linear, couldn't be happier ever since.


As far as the Atlassian suite goes I do much prefer Trello.

I only mean this all to be fair to Atlassian, that not all issues with Jira derive from anything they’re doing specifically


no the difference is 90% cost savings, which was previously impossible


How does a company charging 10% their competitor afford to compete on marketing, sales, design, user testing or customer service?

(this is even granting that AI is a 10x speedup for developers, which I don't agree with and no-one has shown)


Well for marketing and sales your bigger competitor is already doing the work of showing companies that they want the functionality at all, and the cheaper competitor's sales and marketing pitch can be: we are much cheaper.

This is pretty much what blacksmith.sh does -- GitHub Actions but it's on faster and cheaper hardware. I'm sure they spend non-trivial amounts on marketing but "X but much cheaper" doesn't sound like a difficult sale.

(edit) And the design, sadly, can be as simple as "rip-off bigger competitor" -- of course if one day you are the big competitor because you "won" in the market, you'll need to invest in design, but by then I guess you'll have the money?


they don't, which is why these companies are going to get smoked. a small team of people will compete with atlassian head on. the whole saas business model is under threat


Yeah.... The code isn't the hard part. That's not where the value is.

This hard part when you're doing in house stuff is getting a good spec, ongoing support, and long term maintenance.

I've gone through development of a module with a stakeholder, got a whole spec, confirmed it, coded it, launched it, and was then told it didn't work at all like what they needed. It was literally what they told me... I've said 'yes, we can make that report, what specific fields do you need' and gotten blank stares.

Even if you're lucky and the original stakeholder and the code are on the same page, as soon as you get a coworker's 'wouldn't it be nice if...' you're going to have a bad day, whether it's hand coded, vibecoded, or outsourced...

This has always been the problem, it's why no-code never _really_ worked, even if the tech was perfectly functional.


this is what the stock market is pricing in


The reality is anyone can generate useful code with an AI agent now. Dores in accounting can now automate all her spreadsheets in a single afternoon.

Not trying to hype AI, but we are in an interesting transitional period.


The accounting SaaS Dores presumably uses doesn't "automate spreadsheets" as its core value prop.

Related: I'm thinking these vibe coded solutions are revealing to everyone how important and underappreciated good UX is when it comes to implicit education about any given thing. Given a complex process, the UX holds your hand while educating you through a workflow. This stuff is part of software engineering, yet it isn't "code".


I, on the other hand, can't wait to fire every single B2B subscription we've got.

B2B SaaS is a VULN. They get bought out, raise prices, fail. And then you have extremely large amounts of unplanned spend and engineering to get around them.

I remember when we replaced our metrics dashboards and feature flags with SignalFx and LaunchDarkly. Both of those went sour. SignalFx got bought out and quadrupled their already insane prices. LaunchDarkly promised the moon, but their product worked worse than our in-house system, and we spent nearly a year with a couple of dedicated headcount engineering workarounds.

Atlassian, you name it - it's all got to go.

I just wish I could include AWS in this list. Compute and infra needs to be as generic as water.

If you're working at SaaS, find an exit. AI is coming for you. Now's a great time to work on the AI replacement of your product.


> And then you have extremely large amounts of unplanned spend and engineering to get around them

You get the same shocks with internal teams, just from other causes. And you have to manage them.

I'm sure you've only ever seen brilliant software created by internal software teams?


> And then you have extremely large amounts of unplanned spend and engineering to get around them.

I have no idea how you are spending "large amounts" of unplanned spend on SaaS products. Every company I worked for had SaaS subscription costs under 1% of capex. Unless you add AWS, which actually is "large amounts", but good luck vibe coding that.


Metrics at a fintech processing billions of dollars of daily GPV, plus the signals from every microservice in the constellation are enormous. Huge scale time series data.

We had an in-house system that worked, but it was a two pizza team split between time series and logging. "Internal weirdware" got thrown around a lot, so we outsourced to SignalFx for a few years. It was bumpy. I liked our in-house system better, and I didn't build it.

Splunk then buys SignalFx and immediately multiplies the pricing at a conveniently timed contract renewal. Suddenly every team in the company has to plan an emergency migration.


What agents are you using? If you stick to OpenTelemetry and open source agents and develop a collector infrastructure, you can switch across vendors with lower impact and ramp-off time.

Your supply chain is messed up. You need to sign longer contracts with price guarantees.


If you’re working in engineering, find an exit. AI is coming for you.


Some people care less about squeezing out performance and more about open standards. I like having more choices, especially open ones.

I am a user, I like to tinker, I'm fairly confident there's more than 1% of people who care about these things. If you live in a country that is threatened by export embargos and the like it also makes a lot of sense to prioritize open.


The ISA being open matters very little if the chip design isn't, and RISC-V isn't going to change much here.


The number of companies creating RISC V implementations is pretty hopeful. There's way more competition here than x64 or ARM, and that could yield some interesting results.


It matters in that it opens up competition and allows fully-open designs, which should keep prices low and products available, but you're right that having fully-open state-of-the-art chips is unlikely to happen any time soon.


exactly.

in fact, such an ISA is only going to fuel more closed ecosystems, as it let hundreds of Chinese vendors join the game for free; they all suddenly got the chance to build their totally closed platforms.


Which makes the whole ecosystem a lot more open. None of those suppliers is going to have the market power to lock you in. You can get it from the lowest cost provider until something higher value comes along.

And if you are a country, nobody can kill your RISC-V ecosystem. Worst case, you have to design your own chips, but at least all the software exists and is established. And Open Source cores exist and are getting better. They may not be bleeding edge, but they could be good enough if push came to shove. The BOOM chip just got vector extensions.


Open standards don't mean a thing; you can't execute code on a standard. There are past open ISAs like OpenSPARC, MIPS, and OpenPOWER that never gained any traction.

High performance implementations, i.e. actual chips you can buy, are going to be proprietary and that's not going to change. Engineering hardware is expensive.


This is a bold prediction, but I think "alliances" will form where industry players collaborate (like we are seeing in video codecs). And the basic core could become an Open Source project just like Linux did. Operating systems and codecs were (and are) expensive too.

But there are different levels of proprietary. Having your entire software ecosystem impossible to lock-in means something. And competition tends to breed openness.

MIPS certainly did gain a lot of traction. It was a real force at one point and the world is awash in them. But of course MIPS (the company) is RISC-V now.


An operating system can be coded on one not particularly powerful computer by one person and it costs a few pennies to compile and test. A lot of other open source projects were also initiated by one or two talented people. Software is absurdly inexpensive to develop relative to its complexity.

A cutting edge processor requires personnel across several disciplines and millions in specialized equipment to both validate the implementation of the architecture and the electrical behavior of the circuits and each time it's "compiled" (a batch of test chips fabbed and QAed), it takes a few weeks to be delivered and costs hundreds of thousands of dollars. The ISA being open and royalty-free doesn't affect any of those massive costs.

To use a famous quote: "The answer to any question starting, 'Why don't they...' is almost always, 'Money'" Nobody is offering up that kind of money without practical guarantees of success and some kind of profit at the end of it.


The idea that a chip takes more "personnel" than an operating system or a codec is wrong. An individual can make a toy version of either software or hardware. "Real" ones take dozens or hundreds of people. There are 5000 people involved in the Linux kernel. That is design, not production. Production (manufacturing) is what is free in software.

The Linux kernel may be "free" but it represents millions of man hours (or years) of engineering. Creating a viable RISC-V chip would be easier.

Creating the AV2 video codec cost money. I assure you. There is a reason that the Alliance for Open Media is a list of Fortune 500 companies and not a bunch of individual developers.

I have worked in industries dominated by a single chip supplier that made the chips that everybody used. Video surveillance is a good example. It would have been much cheaper for the major players in that industry to fund the collaborative development of chips they could all use and that could maybe be "tweaked" to add differentiated value for the largest players. It would save them money. It would give them more control (even more valuable).

I assume you know what a "chiplet" is. RISC-V is going to change things. In my view, you are focused on the wrong constraints.

We are both saying that money matters. We are simply coming to different conclusions about what that means.


> I'm fairly confident there's more than 1% of people who care about these things

If there were an economically viable number of people who cared about those things (and it would need to be significantly more than 1%), we'd be running SPARC or POWER or maybe SuperH derived systems, all of which have open source, royalty free implementations.

For example, OpenSPARC is something like 20 years old, and covers SPARC v8 through t2. SPARC LEON is a decade older, and is under a GNU license, and has been to space.

And that doesn't consider going the Loongson route: take an existing ISA (e.g. MIPS), just use it, but carve off anything problematic (4 instructions covered by patents).

It's a pretty inescapable fact on the ground that in the 'processor hierarchy of needs', an open source license is of no consequence in the actual market.


I hesitate to say this as you seem very knowledgeable but you are missing some pretty massive facts that destroy your argument here.

There are already literally billions of RISC-V chips in the wild. Qualcomm alone has shipped a billion or more. They wrote an article back in 2023 where they disclosed that they had already shipped 650 million of them by that point. Andes Technology has said that there are 2 billion chips using their IP. A recent industry report suggested that RISC-V could represent 25% of the global SoC market by 2030. That is based on growth trajectory, not speculation.

RISC-V is not some obscure ISA that cannot get any traction.

There are a dozen or more credible competitors designing modern 64 bit RISC-V CPUs. Most of them have shipped silicon. Some have shipped multiple generations. Has any ISA ever had so many independent companies independently creating core designs (not designs from a single source like ARM)?

Tenstorrent alone likely made $500 million in 2025. Easier to confirm is that they closed a $650 million funding round.

NVIDIA has announced CUDA support for RISC-V. I do not remember them doing that for SPARC, or POWER, or SuperH.

The current RISC-V standard, RVA23, includes advanced instructions for things like vectorization and virtualization. Many large, important industry players are involved in designing future extensions as well.

RISC-V is an officially supported platform in many mainstream Linux distributions including aggressively commercial ones like Red Hat Enterprise Linux but also foundational ones like Debian and its derivatives (like Ubuntu).

GCC and Clang have excellent support for RISC-V. FFmpeg just released hand-written vector optimizations for RISC-V. Again, can we say this about any of the platforms you mentioned?

It's a pretty inescapable fact on the ground that RISC-V has an absolute mountain of support in the industry. And starting this year, multiple vendors will be shipping cores faster than you can license from ARM.

Honestly, what universe are you living in?


Honestly, what universe are you living in?

The one where I actually read what I'm replying to.

I never once said RISC-V wasn't successful. I didn't even imply it. What I did say, should you ever climb off your apparently thinking-averse preconceived notions, is that its license isn't the overriding reason it's successful, because the world is full of open source ISAs that never gained any traction. Something you might be aware of if you took a brief break from furiously jerking off over RISC-V and paid attention.


> Some people care less about squeezing out performance and more about open standards. I like having more choices, especially open ones.

You need to be totally autistic to believe that Chinese vendors are going to share anything meaningful with you. They don't hate you; they want their paying customers to be happy, but the brutal competition in China doesn't allow them to be open in any sense. For products like RISC-V processors and MCUs, the moat is extremely low, and being open leads to a quick death. It is not about how much stuff they share with you as a paying customer; it is about how much they are willing to share with their competitors when there are hundreds of companies trying everything to survive.

As a developer, you just need to ask yourself one dead simple question: how are such RISC-V platforms going to be more open than a Raspberry Pi?


Have you heard of DeepComputing?

They are pushing their RISC-V products into the Linux mainline before those products even ship.

Those "autistic" Chinese also contribute a rather surprising amount of open source RISC-V work out of their academic world.


I have increasingly negative things to say about this.

There is (so far) nothing 'open' about RISC-V, and at this point I wonder if there was ever really any desire for it.

This whole "Open ISA" crap appears to be a thin veneer to funnel quite large sums of investment into an otherwise completely proprietary and locked-down environment that could never harm the incumbents in any meaningful way, while still maintaining just enough of a pretense of open source that shallow nerds and geeks (regrettably myself included) could get smitten by it.

Where is the RTL? Where are the GDSII masks? Why am I unable to look at the branch predictor unit in the Github code viewer? Or (God forbid!) the USB/HDMI/GPU IP? I reject the notion that these are unreasonable questions.

I want my SoC to have a special register containing the git SHA of the exact repository snapshot that was used to cook the masks. That, now that, is Open Source. That is Open Computing. And nothing less!

I don't care about the piece of paper with instruction encodings, the least interesting part of any computer!

Wasn't that the whole point? We're more than a quarter of a century in and we're still begging SoC vendors for datasheets. Really incredibly embarrassing and disappointing.


> Where is the RTL? Where are the GDSII masks? Why am I unable to look at the branch predictor unit in the Github code viewer? Or (God forbid!) the USB/HDMI/GPU IP? I reject the notion that these are unreasonable questions.

As you note correctly, the ISA is open, not this CPU (or board).

The important point is that using an open ISA allows you to create your own CPU that implements it. This CPU can then be open (i.e. you providing the RTL, etc.), if you so desire.

I assume it will be much more difficult (or impossible?) to provide the RTL for a CPU with an AMD64 ISA, since that one has to be licensed. I wonder if paying for the license even allows you to share your implementation with the world. Even if it does, it's less likely that you will do so, given that you will have to pay the licensing fee and make your money back.

Since there is no license to pay for in the case of RISC-V, it allows you to open up the design of your CPU without having to pay for that privilege.


My superficial understanding is that ARM does not prevent you from sharing implementation details of your own design, but most chips also license a starting implementation that does carry such limitations. So the end result is often more restricted than the ISA license alone would require.


Most ARM licensees aren't permitted to create custom implementations, only to use IP cores provided by ARM. There are a couple of companies who do have an architectural license, allowing them to create their own implementations, but there are only a few of those and they aren't likely to share. (It's also possible that the terms of their license prohibit them from making their designs public.)


The important point is that using an open ISA allows you to create your own CPU that implements it.

So? You've been able to do that since...computers. Anyone can roll their own ISA any time they want. It's a low-effort project that someone with maybe a Master's-student level of knowledge can do competently. When I was in school, we even had a class where you would cook up a (simple) ISA and implement it (2901 bit-slice processors); these days they use FPGAs.

So you got your own processor for your own ISA...that was slow, expensive (no economy of scale) and without a market. But very fun, and open source, at least. And if "create your own CPU that implements it" is what you want, go forth and conquer...everything you need is already there and has been for a long time.
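For a sense of how small that classroom exercise really is, here is a hypothetical four-instruction accumulator ISA, interpreted rather than synthesized. The opcodes and their semantics are invented for illustration and not taken from any real machine:

```python
# A hypothetical 4-instruction accumulator ISA, interpreted in Python.
# The opcodes (LOADI, ADD, JNZ, HALT) and their semantics are invented
# for illustration; a real class project would target an FPGA instead.

def run(program, max_steps=10_000):
    """Execute a list of (opcode, operand) pairs; return the accumulator."""
    acc, pc, steps = 0, 0, 0
    while pc < len(program) and steps < max_steps:
        op, arg = program[pc]
        steps += 1
        if op == "LOADI":        # load an immediate into the accumulator
            acc = arg
        elif op == "ADD":        # add an immediate to the accumulator
            acc += arg
        elif op == "JNZ":        # jump to absolute address if acc != 0
            if acc != 0:
                pc = arg
                continue
        elif op == "HALT":       # stop execution
            break
        pc += 1
    return acc

# Count down from 3 to 0 using the conditional jump.
prog = [("LOADI", 3), ("ADD", -1), ("JNZ", 1), ("HALT", 0)]
```

Getting from a toy like this to price- or performance-competitive silicon is, as the comment says, an entirely different ballgame.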

But if your goal is "I want an open source ISA that I can produce that's price and/or performance competitive with the incumbents", well, that's a totally different ballgame.

And there are open source ISAs that have been around for decades (SPARC, POWER, SuperH). These are ISAs that already have big chunks of ecosystem already in place. The R&D around how to make them competitive already exists. Some, like LEON SPARC have even gone into something like production (and flown in space).

So, yes, an open source ISA affords the possibility that we can make processors based on our own ISAs on our own terms. It has even in extremely rare occasions produced a product. But the fact remains, the market hasn't cared in the slightest to invest what's required to turn that advantage into a real competitor to the incumbent processors.


Completely wrong.

Yes, you can create your own ISA. But to run what software?

If I create my own RISC-V implementation, I can install Ubuntu on it. Maybe even Steam.

See the difference?

And, the market has responded with a tidal wave of CPU contenders. Like in the rest of the world, not all of them target the highest-end portion of the market. But some are choosing to play there. Have you checked out Ascalon?

And why did Qualcomm pay all that money for Ventana recently? Do you not expect them to release high-end RISC-V chips? I mean, they already ship many low-end ones.


> And why did Qualcomm pay all that money for Ventana recently? You do not expect them to release high-end RISC-V chips? I mean, they already ship many low-end ones.

Ventana is an extremely bad example to use here. Its acquisition price is undisclosed; it could be just a modest sum to acquire the team behind it. Secondly, Qualcomm's Nuvia acquisition was pretty huge, and there is no reason whatsoever to believe the Ventana acquisition is remotely comparable. It certainly doesn't prove anyone uses RISC-V anyway.


> no one uses RISC-V anyway

Here is an article from a company called Qualcomm from two years ago saying that they had, at that time, already shipped 650 million RISC-V cores.

https://www.qualcomm.com/news/onq/2023/09/what-is-risc-v-and...

I notice that the three benefits they flag for RISC-V are: flexibility, control, and visibility.

I wonder how they felt about "control" after ARM tried to stop them from commercializing the value of their Nuvia acquisition? I wonder if it had anything to do with their next big acquisition being RISC-V based instead?

I also wonder why, on their Oryon page, Qualcomm never mentions ARM. Not even once. Even in answer to the question "Is Oryon x86?", they do not say that it is ARM. Why not?

https://www.qualcomm.com/processors/oryon


Why don't you read what was written instead of being the unthinking RISC-V fanboi in the room. My only point was that the RISC-V license is probably not the biggest factor in its success, since there have been many, many open source ISAs that weren't successful.



Couldn't have said it better. The moment these people promise everything will be free is a massive red flag. Unfortunately it seems most people haven't learned the lesson.


It is a free, non-copyleft licence, so it is the expected result that derivatives are not similarly free.


> There is (so far) nothing 'open' about RISC-V.

With the majority of players being Chinese vendors (those whose chips you can buy, not counting those building RISC-V for their own in-house applications), RISC-V is far less open than ARM or x64.

Expecting openness from Chinese vendors is like trying to hook up with some virgin bar girls in your favourite gogo bar in Bangkok.


Get your virgin bar girls here…

https://github.com/OpenXiangShan/XiangShan


Are you joking?

If you search their public media releases, they mention that their cores are used by some unnamed vendors for undisclosed platforms. Just go and check how CLOSED that junk is: product names and models are always omitted; it is always "a certain vendor", "one AI card", no spec, no details whatsoever...

Searching their names on taobao.com returns zero hits; searching their names on the largest Chinese second-hand platform returns zero hits. Four years after they started their great open project, you can't even buy one on the OPEN market! That is VERY OPEN to me.


Holy swerving goal-posts Batman.

Ok fine. Here is a link to a completely open design:

https://github.com/tenstorrent/riscv-ocelot

And here is a high-performance evolution of it that you can license. They would be happy to take your check today. https://tenstorrent.com/ip/risc-v-cpu

Silicon will be available in a few months.

And just for you, it is not Chinese.


The RP2350 has a couple Hazard3 cores in it.



“People who care about these things” enough that they’re buying Mini ITX RV motherboards? Definitely well under 1% of the market.


Good times. I remember riding our bikes to Toys 'R' Us of all places to buy the game with a buddy. We pedaled back and played through the Orc campaign until 4 a.m. One of my all-time favorites.


Well good news, these days there's another layer. "Not even GPT4-level LLM" bots that frustrate you into giving up by circling to the FAQs over and over.


Library/API conflicts are usually the biggest pain point for me, especially breaking changes. RLlib (currently 2.41.0) and Gymnasium (currently 0.29.0+) have sent me in circles many times because they tend to be out of sync for multi-agent environments. My go-to test now is a simple hello-world-type card game like War: competitive multi-agent with RLlib and Gymnasium (PettingZoo tends to cause even more issues).

Claude Sonnet 4.5 was able to figure out a way to resolve it eventually (around 7 fixes), and I let it create an rllib.md with all the fixes and pitfalls; I'm curious whether feeding this file to the next experiment will lead to a one-shot. GPT-5 struggled more, but I haven't tried Codex on this yet, so it's not exactly fair.

All done with Copilot in agent mode, just prompting, no specs or anything.
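For what it's worth, many of those out-of-sync failures trace back to the Gymnasium 0.26 API change: reset() now returns (obs, info) and step() returns a five-tuple with separate terminated/truncated flags instead of a single done. The sketch below is dependency-free; OldStyleEnv and NewApiShim are invented names for illustration, not real RLlib or Gymnasium classes:

```python
# Illustrates the Gymnasium 0.26+ signature change that often breaks
# older RLlib integrations. Both classes are invented for illustration.

class OldStyleEnv:
    """Pre-0.26 convention: reset() -> obs, step() -> 4-tuple."""
    def reset(self):
        self.t = 0
        return 0                       # observation only
    def step(self, action):
        self.t += 1
        done = self.t >= 3
        return self.t, 1.0, done, {}   # obs, reward, done, info

class NewApiShim:
    """Adapts an old-style env to the newer five-tuple convention."""
    def __init__(self, env):
        self.env = env
    def reset(self, *, seed=None, options=None):
        return self.env.reset(), {}    # (obs, info)
    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # The newer API splits 'done' into terminated (episode really
        # ended) and truncated (time-limit cut-off); map done -> terminated.
        return obs, reward, done, False, info

env = NewApiShim(OldStyleEnv())
obs, info = env.reset()
obs, reward, terminated, truncated, info = env.step(0)
```

Whether a given RLlib release expects the four- or five-tuple shape is exactly the kind of thing the agents had to rediscover each time.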


Thank you for doing that and being a voice for liberty.

