XFCE is also my go-to. But I've moved on from caring too much about desktop environments as long as they don't get in the way. I went through a phase of trying pure Openbox and all kinds of things and settled on XFCE. It doesn't do everything the way I want, but that's fine. I mostly open a terminal, a browser, Thunderbird, some programming environment, and a LaTeX editor these days.
Agree. Anyone with access to large proprietary data has an edge in their space (not necessarily for foundation models): Salesforce, Adobe, AutoCAD, Caterpillar.
I'm pretty sure it works very differently for different people, so you have to figure out your own process. I've tried different things, but at the end of the day I simply have a notebook next to my laptop/in my laptop bag and write down everything in freeform text. No index, no bullet points, nothing like that. I put a date and start writing. I'll usually do some TODOs as checklists at the start of the day to get them out of my brain so they stop bothering me, but only big items, not each and every step. It's a mix of work and private things. Just writing stuff down is helpful for me, even if I never reference it again.
I do use the Feynman Technique if I come across something interesting and try to explain it on paper. So if I were using it just for work, I'd probably do that. Something like: "Spec-driven development (GitHub Spec Kit and similar toolkits) is essentially a bunch of md files that provide more context for agents. There are some scripts that provide scaffolding; having agents write the md files uses a lot of tokens, so writing them manually after the scaffold is generated makes more sense. Try with a small project."
A+ app, I turned on sound and was not disappointed.
Love the movie. Got a spray can and sprayed my whole keyboard army green after watching it, then realized I can't ten-finger type. What a golden age of interesting young people in computer security. Roughly one year later (iirc), I read "Smashing the Stack for Fun and Profit", which might have been my most influential IT-related read. It's probably tied with "Man-Computer Symbiosis" :)
I'd actually say the opposite is the case. B2B (even SaaS) is probably the most robust when it comes to AI resistance. The described "in-house vibe coded SaaS replacement" does not mirror my experience in B2B at all. The B2B software mindset I've encountered the most is "We'll pay you so we don't have to wrestle with this and can focus on what we do. We'll pay you even more if we worry even less.", which is basically the opposite of... let's have someone in-house vibe code and push to production. B2B is usually fairly conservative.
There was no chance that everyone would be running their own email server, but if it weren't for the lack of IPv6 adoption, a plug-and-go home email server solution would probably see a decent amount of use. I'd bet we'd already be seeing it as a feature in most mid-range home routers by now.
The mail server in a router is easy to host, the problem is:
1) Uptime (though this could be partially alleviated by retries)
and most of all:
2) "Trust"/"Spam score"
It's the main reason to use Sendgrid, AWS, Google, etc. Their "value" is not the email service, it's that their SMTP servers are trusted.
If tomorrow I can just send from localhost instead of going through Google it's fine for me, but in reality, my emails won't arrive due to these filters.
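The "trust" check the receiving side actually runs can be sketched. Below is a heavily simplified, stdlib-only illustration of SPF-style evaluation: the receiver fetches a TXT record for the sender's domain and checks whether the connecting IP is in an allowed range. The record string and IPs here are hypothetical, and real SPF (RFC 7208) has many more mechanisms (`include:`, `mx`, `~all`, etc.) plus DKIM/DMARC on top.

```python
import ipaddress

def spf_allows(spf_record: str, sender_ip: str) -> bool:
    """Toy SPF evaluation: only handles ip4:/ip6: mechanisms and treats
    anything that doesn't match as failing the trailing -all.
    Real SPF (RFC 7208) is far richer than this sketch."""
    ip = ipaddress.ip_address(sender_ip)
    for mech in spf_record.split():
        if mech.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(mech.split(":", 1)[1], strict=False)
            if ip in network:
                return True
    return False  # fell through to -all: reject / mark as spam

# Hypothetical record a receiver might fetch from DNS for example.com
record = "v=spf1 ip4:203.0.113.0/24 -all"
print(spf_allows(record, "203.0.113.7"))   # IP inside the published range
print(spf_allows(record, "198.51.100.9"))  # some other IP: fails the check
```

This is why mail from a random residential or VPS IP "won't arrive": the receiver's checks fail or score poorly, regardless of how well the mail server itself is run.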
I use a small local provider (posteo) and have 0 problems with spam.
So a 20-pound monkey can also throw around some weight. To be fair, I only use it for personal stuff; it's probably different if you need enterprise scale.
I've seen plenty of Gmail accounts over the years and they pretty much look the same.
The only Gmail accounts that are "overrun by spam" are those of people subscribing to lots of spammy newsletters and then not knowing how to unsubscribe from them (or figuring they'd stay subscribed in case the next newsletter is the Magical One™). But that's 100% self-inflicted and you can't save those people with any technical solution.
Email spam hasn't been a day-to-day problem for Gmail (at least) since Bayesian email filtering was first implemented.
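The Bayesian filtering mentioned above can be sketched in a few lines: count word frequencies per class, then combine per-word likelihood ratios with Laplace smoothing. This is a minimal stdlib sketch with a made-up four-message corpus, not what Gmail actually runs (modern filters layer reputation, ML models, and much more on top).

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts
    and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    """Naive Bayes with Laplace smoothing; returns P(spam | words)."""
    vocab = set(counts[True]) | set(counts[False])
    log_odds = math.log(totals[True] / totals[False])  # class prior ratio
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (sum(counts[True].values()) + len(vocab))
        p_ham = (counts[False][word] + 1) / (sum(counts[False].values()) + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))

corpus = [
    ("free pills buy now", True),
    ("claim your free prize now", True),
    ("meeting notes attached", False),
    ("lunch tomorrow at noon", False),
]
counts, totals = train(corpus)
print(spam_probability("free prize", counts, totals) > 0.5)       # True
print(spam_probability("meeting tomorrow", counts, totals) > 0.5)  # False
```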
The specific concern around uptime & reliability was baked into email systems from almost the start - undeliverable notifications (for the sender) and retries.
But yes, the “trust / spam score” is a legit challenge. If only device manufacturers were held liable for security flaws, but we sadly don’t live in that timeline.
It's not a device/MTA issue; SMTP just isn't a secure protocol, and there's not much you can do to 'secure' human communication. Things like spoofing or social engineering are near impossible to address within SMTP without external systems doing some sort of analysis on the messages, or in combination with other protocols like DNS.
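The spoofing point is easy to demonstrate: in SMTP, the envelope sender (`MAIL FROM`) and the `From:` header the recipient sees are completely independent fields, and nothing in the protocol forces them to match. A stdlib sketch (addresses are made up for illustration):

```python
from email.message import EmailMessage

# The "From" header is just data the sender typed in; SMTP itself never
# verifies it against the envelope sender used in the MAIL FROM command.
msg = EmailMessage()
msg["From"] = "ceo@bigcorp.example"      # what the mail client displays
msg["To"] = "victim@example.org"
msg["Subject"] = "Urgent wire transfer"
msg.set_content("Please send the funds today.")

envelope_sender = "attacker@evil.example"  # what MAIL FROM would carry

# With smtplib you would hand both over independently, e.g.:
#   smtplib.SMTP(host).send_message(msg, from_addr=envelope_sender)
# The mismatch is only caught (if at all) by SPF/DKIM/DMARC checks
# bolted on outside SMTP itself.
print(msg["From"] == envelope_sender)  # False: spoofed display sender
```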
SMTP isn't at fault, the social ecosystem is at fault. Every system where identities are cheap has a spam problem. If you think a system has cheap identities and no spam, it probably doesn't have cheap identities — examples are HN or Reddit.
Trust / spam score is the biggest one I think, with the second being consumer ISPs blocking the necessary ports for receiving mail.
Even if your "self hosting" is renting a $5/month VPS, some spam lists (e.g. UCEPROTECT) proactively mark any IP ranges owned by consumer ISPs and VPS hosting as potential spam. I figured paying Fastmail $30/yr was worth never having to worry about it.
For "Trust", I believe patio11 described this system as the "Taxi Medallion of Email".
e.g. you spend a lot of money to show that you are a legitimate entity or you pay less money to rent something that shows you are connected to said entity.
Without some kind of federation or centralization, it seems hard to distinguish a hobbyist from a spammer if both of them are using a plug-and-go. Forcing that responsibility into the hands of Google, Zoho, and Microsoft seems like the best compromise, unfortunately.
For one, if my power goes out for an extended period of time I'd still like to be able to access my email. Communications really can't be hosted locally.
What a weird take. I was running my own email server 25 years ago on a 512 kbit ADSL line. No problem at all, would even be enough bandwidth today for most messages.
(Back then email still worked from residential IP addresses, and wasn't blocked by default)
I agree with you. In B2B SaaS you don't sell the software, you sell your expertise in a specific domain and the responsibility you take for owning that expertise. The fact that development costs are nearly zero will make them more valuable and more profitable.
My experience is that SMBs are generally not run by people who feel confident doing any kind of self managed IT.
No amount of LLM usage is going to change them into full stack vibe coders who moonlight as sysadmins. I just don't see it happening.
Not until, that is, a new generation that has grown accustomed to the tech takes over.
Until then the current SMBs will for the most part fulfill their IT needs from SaaS businesses (of which I think there will be more due to LLMs lowering the barrier for those of us who feel confident in our coding and sysadmin skills already).
Having seen how clueless the new generation is and the amount of brain rot they get from using LLMs over honing their own skills, I'd say it's the opposite...
I'm considering SaaS replacements with in house code in situations where my general thoughts are "how can this possibly be the pricing for this?" which is not uncommon.
Well before vibe coding, tons of open source software existed (and exists) to replace SaaS. With lots of features and knobs and real communities. But I still often pay for SaaS because managing it is a headache. Some human has to do it. I can pay the human or I can pay the company. I really don’t see how vibe coded toys can replace real battle tested SaaS products. A better explanation is the bubble in PE ratio is deflating and it’s happening all over, regressing to the mean. AI is a convenient explanation for everything
How many SaaS companies are public? How is that bubble deflating?
These are real risks to these companies.
Your in-house teams can build replacements, it's just a matter of headcount. With Claude, you can build it and staff it and have time left over. Then your investment pays dividends instead of being a subscription straitjacket you have to keep renting.
I think there's an even faster middle ground: open source AI-assisted replacements for SaaS are probably coming. Some of these companies might offer managed versions, which will speed up adoption.
> Your in-house teams can build replacements, it's just a matter of headcount. With Claude, you can build it and staff it and have time left over. Then your investment pays dividends instead of being a subscription straight jacket you have to keep renting.
Let's take Figma as an example. Imagine you have 1000 employees, 300 of whom need Figma, so you are paying $120k per year in Figma licenses. You can afford 1 employee working on your own internal Figma. You are paying the same but getting a 100x worse experience, unless your 1 employee with CC can somehow find and copy the important parts of Figma on his own, then deploy it and keep it running through the year without issues, which sounds ludicrous.
If you have fewer than 1000 employees it wouldn't even make sense to have 1 employee building Figma.
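The back-of-the-envelope math above, made explicit (the implied per-seat price and the fully-loaded engineer cost are assumptions, not Figma's actual pricing):

```python
seats = 300
annual_license_cost = 120_000  # figure from the comment above

# Implied per-seat price: $120k/yr over 300 seats
per_seat = annual_license_cost / seats
print(per_seat)  # 400.0 dollars per seat per year

# Assumed fully loaded cost of one in-house engineer (varies widely);
# at this assumption the license bill buys roughly one engineer.
engineer_cost = 120_000
print(annual_license_cost // engineer_cost)  # 1
```

So the trade is one engineer's worth of budget against a mature product used by 300 people, which is the point being made.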
> Lets take Figma as an example, Imagine you have 1000 employees, 300 of them need Figma, so you are paying 120k per year in Figma licenses.
I mean in an example that almost happened... "you are paying 120k per year in Figma licenses, Adobe buys it, you are paying 500k per year in Figma licenses"
At least up until vibe coding, a SaaS provider could charge as much as (if not slightly more than) the cost of doing it yourself, because most businesses weren't going to do it themselves anyway.
By that logic, people should never use any SaaS products because someday the price will increase. Then why even use Claude Code? Someday they will get sick of losing money and increase the price to $1000/month.
> you just put your employees directly on Nano Banana or one of the simple Nano Banana wrappers.
So you end up spending the money elsewhere? With exploratory design you can easily spend 10k a month on these models as a company of 1000, thus completely losing any monetary savings. Any way you look at it, SaaS worked because costs were spread out and low enough to not be worth optimizing too much.
Now you have an entire in-house product to manage and build features on. It could potentially work but so much of what my company pays for is about much more than the software itself. One example would be BrowserStack for very specific browser and mobile app testing edge cases. Can’t vibe code this. Another would be a VPN service with the maximum number of locations to test how our system behaves when accessing from those locations. Another would be hosted git. Another is google suite and all of its apps. How can we vibe code Google Docs and Sheets and Drive and all of the integrations and tooling? It simply isn’t going to happen.
Maybe you are right and the companies do want to pay and not worry about these problems. But now they have a lot more SaaS options to choose from. The incumbent companies like Salesforce and Atlassian have less of a moat. Maybe they'll keep the power users, but if a customer is only using 80% of the feature set there is new competition.
Competition might come in the form of a startup but it can also come from existing SaaS companies expanding into adjacent domains. Canva now does docs. Notion does email. etc
Also, it is my experience that exec and boards favour safe and well known B2B partners over in house. It's a more publicly defensible approach that gives them an out if things go wrong and shareholders get upset.
For big corporations at least prices of SaaS are rarely an issue. Issues are: we don’t have the time to introduce a new tool, what about our processes, we don’t have the right people.
> I mean if we want recent examples just look at tailwindui since it's technically a SaaS.
This is a terrible example. Show me someone ripping out their SAP ERP or Salesforce CRM system, where they're paying $100k+, for a vibe coded alternative and I'll believe this overall sentiment.
I have heard this from execs at public companies as well. I think a HUGE part of this appetite is that today no one has yet been subjected to doing business on a bunch of apps cobbled together by vibe coders.
They are just hearing the promise that AI will allow them to build custom software that perfectly melds to their needs in no time at all, and think it sounds great.
I suspect the early adopters who go this route are going to be in for a rude awakening that AI hasn’t actually solved a lot of hard problems in custom software development.
In the world of B2B software many of the 'hard problems in custom software development' have not been solved by human coders either - it can be an extremely grim market for anyone who cares about software quality. I'm completely unconvinced that on average a vibe-coded app is worse than the typical B2B slop.
I too have an appetite for magic beans, but unfortunately, I'll be unable to eat them until they exist. As it stands now, it doesn't seem like AI stuff can produce anything with this large a scope.
So, do their AI devs have deep knowledge of the business processes, regulations/legal (of course in all kinds of regions), scaling, security, ... ? Because the LLMs sure as hell are lacking that knowledge (again, in depth).
Of course, once AGI is available (if it is ever) everything changes. But for now someone needs to have the deep expertise.
>> This is a terrible example. Show me someone ripping out their SAP ERP or SalesForce CRM system where they're paying $100k+ for a vibe coded alternative and I'll believe this overall sentiment.
I cannot imagine an SMB or fortune 500 ripping out Salesforce or SAP. However, I can see a point-tool going away (e.g., those $50/mo contracts which do something tiny like connect one tool to another.)
TailwindUI isn't really what I'd consider SaaS -- it was a buy once and download software product.
That means to keep making money they need to keep selling to new people. According to them, their only marketing channel was the Tailwind docs, and AI made it so not nearly as many people needed to visit the Tailwind docs.
If they had gone with the subscription SaaS model, they'd probably be a little better off, as they would have still had revenue coming in from their existing users.
> I mean if we want recent examples just look at tailwindui since it's technically a SaaS.
How is it in any way B2B? At most B2C + freelancers / individuals / really small SME.
It didn't have any of the things a mid/large B2B would look for, e.g. SSO, SOC 2, and other security measures. It doesn't target the reusability that I, as a business, would want. The provided blocks never work together. There aren't reusable components.
Tailwind UI or now Tailwind Plus is more like vibe coding pre-AI.
Sorry, but tailwindui is not a SaaS. There is no service or hosting. You buy a coded template once and then receive updates. It is totally not the same as a critical B2B SaaS that is running 24/7 on the vendor's servers providing real support and service.
TailwindUI unfortunately sits in a position of being an easy to disrupt business with current AI.
Now attempt the same with Zoom. I suspect vibe coding will fall down on a project too complex to fit in the mental model of a single engineer maintaining a widely used tool.
Perhaps the case for premium CSS SaaS businesses, I guess (which seems particularly primed for disruption even pre-AI), but there are many more robust B2B categories out there that aren't literal code + docs as a service.
How do people not understand? If you have a VC-funded B2B SaaS, you need to charge huge margins for the investors to get a return. Now, small teams can vibe code a replacement and charge 90% less money. AI is going to kill SaaS margins.
I literally cannot understand why people keep repeating that non-tech companies will build their own software; that's not the bear case for SaaS.
Did vanilla Jira for a while, battled with a web app that is actively trying to make you hate it, switched our team to Linear; couldn't be happier ever since.
Well for marketing and sales your bigger competitor is already doing the work of showing companies that they want the functionality at all, and the cheaper competitor's sales and marketing pitch can be: we are much cheaper.
This is pretty much what blacksmith.sh does -- GitHub Actions but it's on faster and cheaper hardware. I'm sure they spend non-trivial amounts on marketing but "X but much cheaper" doesn't sound like a difficult sale.
(edit) And the design, sadly, can be as simple as "rip-off bigger competitor" -- of course if one day you are the big competitor because you "won" in the market, you'll need to invest in design, but by then I guess you'll have the money?
They don't, which is why these companies are going to get smoked. A small team of people will compete with Atlassian head on. The whole SaaS business model is under threat.
Yeah.... The code isn't the hard part. That's not where the value is.
The hard part when you're doing in-house stuff is getting a good spec, ongoing support, and long-term maintenance.
I've gone through development of a module with a stakeholder: got a whole spec, confirmed it, coded it, launched it, and was then told it didn't work at all like what they needed. It was literally what they told me... I've said 'yes, we can make that report, what specific fields do you need?' and gotten blank stares.
Even if you're lucky and the original stakeholder and the code are on the same page, as soon as you get a coworker's 'wouldn't it be nice if...' you're going to have a bad day, whether it's hand coded, vibe coded, or outsourced...
This has always been the problem, it's why no-code never _really_ worked, even if the tech was perfectly functional.
The accounting SaaS dores presumably uses doesn't "automate spreadsheets" as its core value prop.
Related: I'm thinking these vibe coded solutions are revealing to everyone how important and underappreciated good UX is when it comes to the implicit education of any given thing. Given a complex process, the UX is holding your hand while educating you through a workflow. This stuff is part of software engineering, yet it isn't "code".
I, on the other hand, can't wait to fire every single B2B subscription we've got.
B2B SaaS is a VULN. They get bought out, raise prices, fail. And then you have extremely large amounts of unplanned spend and engineering to get around them.
I remember when we replaced the feature flags and metrics dashboards with SignalFx and LaunchDarkly. Both of those went sour. SignalFx got bought out and quadrupled their insane prices. LaunchDarkly promised the moon, but their product worked worse than our in-house system and we spent nearly a year, with a couple of dedicated headcount, engineering workarounds.
Atlassian, you name it - it's all got to go.
I just wish I could include AWS in this list. Compute and infra needs to be as generic as water.
If you're working at SaaS, find an exit. AI is coming for you. Now's a great time to work on the AI replacement of your product.
> And then you have extremely large amounts of unplanned spend and engineering to get around them.
I have no idea how you are spending "large amounts" of unplanned spend on SaaS products. Every company I worked for had SaaS subscription costs under 1% of capex. Unless you add AWS, which is actually "large amounts", but good luck vibe coding that.
Metrics at a fintech processing billions of dollars of daily GPV, plus the signals from every microservice in the constellation are enormous. Huge scale time series data.
We had an in-house system that worked, but it was a two pizza team split between time series and logging. "Internal weirdware" got thrown around a lot, so we outsourced to SignalFx for a few years. It was bumpy. I liked our in-house system better, and I didn't build it.
Splunk then buys SignalFx and immediately multiplies the pricing at a conveniently timed contract renewal. Suddenly every team in the company has to plan an emergency migration.
What agents are you using? If you stick to OpenTelemetry and open source agents and develop a collector infrastructure, you can switch across different vendors with lower impact and ramp-off time.
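The vendor-swap idea above boils down to one layer of indirection: application code records metrics against a neutral interface, and only configuration decides which backend receives them. This is a stdlib-only sketch of that pattern (the class names here are illustrative, not the real OpenTelemetry SDK API, which lives in the `opentelemetry-sdk` package):

```python
from typing import Protocol

class MetricExporter(Protocol):
    def export(self, name: str, value: float) -> None: ...

# Two hypothetical vendor backends behind the same interface.
class VendorAExporter:
    def __init__(self):
        self.received = []
    def export(self, name, value):
        self.received.append((name, value))

class VendorBExporter:
    def __init__(self):
        self.received = []
    def export(self, name, value):
        self.received.append((name, value))

class Meter:
    """App code only ever talks to the Meter, never to a vendor SDK."""
    def __init__(self, exporter: MetricExporter):
        self.exporter = exporter
    def record(self, name: str, value: float):
        self.exporter.export(name, value)

# Swapping vendors is a config change, not an application rewrite:
meter = Meter(VendorAExporter())
meter.record("payments.latency_ms", 42.0)
meter = Meter(VendorBExporter())  # contract renewal goes sour? swap here.
meter.record("payments.latency_ms", 42.0)
```

With a real collector in the middle, even the exporter swap happens outside the application, in the collector's pipeline config.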
Your supply chain is messed up. You need to sign longer contracts with price guarantees.
Some people care less about squeezing out performance and more about open standards. I like having more choices, especially open ones.
I am a user, I like to tinker, I'm fairly confident there's more than 1% of people who care about these things. If you live in a country that is threatened by export embargos and the like it also makes a lot of sense to prioritize open.
The number of companies creating RISC V implementations is pretty hopeful. There's way more competition here than x64 or ARM, and that could yield some interesting results.
It matters in that it opens up competition and allows fully-open designs, which should keep prices low and products available, but you're right that having fully-open state-of-the-art chips is unlikely to happen any time soon.
In fact, such an ISA is only going to fuel more closed ecosystems, as it let hundreds of Chinese vendors join the game for free; they all suddenly got the chance to build their totally closed platforms.
Which makes the whole ecosystem a lot more open. None of those suppliers is going to have the market power to lock you in. You can get it from the lowest cost provider until something higher value comes along.
And if you are a country, nobody can kill your RISC-V ecosystem. Worst case, you have to design your own chips, but at least all the software exists and is established. And open source cores exist and are getting better. They may not be bleeding edge, but they could be good enough if push came to shove. The BOOM chip just got vector extensions.
Open standards don't mean a thing; you can't execute code on a standard. There are past open ISAs like OpenSPARC, MIPS, and OpenPOWER that never gained any traction.
High performance implementations, i.e. actual chips you can buy, are going to be proprietary and that's not going to change. Engineering hardware is expensive.
This is a bold prediction, but I think "alliances" will form where industry players collaborate (like we are seeing in video codecs). And the basic core could become an open source project just like Linux did. Operating systems and codecs were (and are) expensive too.
But there are different levels of proprietary. Having your entire software ecosystem impossible to lock-in means something. And competition tends to breed openness.
MIPS certainly did gain a lot of traction. It was a real force at one point and the world is awash in them. But of course MIPS (the company) is RISC-V now.
An operating system can be coded on one not particularly powerful computer by one person and it costs a few pennies to compile and test. A lot of other open source projects were also initiated by one or two talented people. Software is absurdly inexpensive to develop relative to its complexity.
A cutting edge processor requires personnel across several disciplines and millions in specialized equipment to both validate the implementation of the architecture and the electrical behavior of the circuits and each time it's "compiled" (a batch of test chips fabbed and QAed), it takes a few weeks to be delivered and costs hundreds of thousands of dollars. The ISA being open and royalty-free doesn't affect any of those massive costs.
To use a famous quote: "The answer to any question starting, 'Why don't they...' is almost always, 'Money'" Nobody is offering up that kind of money without practical guarantees of success and some kind of profit at the end of it.
The idea that a chip takes more "personnel" than an operating system or a codec is wrong. An individual can make toys of either software or hardware. "Real" ones take dozens or hundreds of people. There are 5000 people involved in the Linux kernel. That is design, not production. Production (manufacturing) is what is free in software.
The Linux kernel may be "free" but it represents millions of man hours (or years) of engineering. Creating a viable RISC-V chip would be easier.
Creating the AV2 video codec cost money. I assure you. There is a reason that the Alliance for Open Media is a list of Fortune 500 companies and not a bunch of individual developers.
I have worked in industries dominated by a single chip supplier that made the chips that everybody used. Video surveillance is a good example. It would have been much cheaper for the major players in that industry to fund the collaborative development of chips they could all use and that could maybe be "tweaked" to add differentiated value for the largest players. It would save them money. It would give them more control (even more valuable).
I assume you know what a "chiplet" is. RISC-V is going to change things. In my view, you are focused on the wrong constraints.
We are both saying that money matters. We are simply coming to different conclusions about what that means.
> I'm fairly confident there's more than 1% of people who care about these things
If there were an economically viable number of people who cared about those things (and it would need to be significantly more than 1%), we'd be running SPARC or POWER or maybe SuperH derived systems, all of which have open source, royalty free implementations.
For example, OpenSPARC is something like 20 years old, and covers SPARC v8 through t2. SPARC LEON is a decade older, and is under a GNU license, and has been to space.
And that doesn't consider going the Loongson route: take an existing ISA (e.g. MIPS), just use it, but carve off anything problematic (4 instructions covered by patents).
It's a pretty inescapable fact on the ground that in the 'processor hierarchy of needs', an open source license is of no consequence in the actual market.
I hesitate to say this as you seem very knowledgeable but you are missing some pretty massive facts that destroy your argument here.
There are already literally billions of RISC-V chips in the wild. Qualcomm alone has shipped a billion or more. They wrote an article back in 2023 where they disclosed that they had already shipped 650 million of them by that point. Andes Technology has said that there are 2 billion chips using their IP. A recent industry report suggested that RISC-V could represent 25% of the global SoC market by 2030. That is based on growth trajectory, not speculation.
RISC-V is not some obscure ISA that cannot get any traction.
There are a dozen or more credible competitors designing modern 64-bit RISC-V CPUs. Most of them have shipped silicon. Some have shipped multiple generations. Has any ISA ever had so many independent companies creating their own core designs (not designs from a single source like ARM)?
Tenstorrent alone likely made $500 million in 2025. Easier to confirm is that they closed a $650 million funding round.
NVIDIA has announced CUDA support for RISC-V. I do not remember them doing that for SPARC, or POWER, or SuperH.
The current RISC-V standard, RVA23, includes advanced instructions for things like vectorization and virtualization. Many large, important industry players are involved in designing future extensions as well.
RISC-V is an officially supported platform in many mainstream Linux distributions including aggressively commercial ones like Red Hat Enterprise Linux but also foundational ones like Debian and its derivatives (like Ubuntu).
GCC and Clang have excellent support for RISC-V. FFmpeg just released hand-written vector optimizations for RISC-V. Again, can we say this about any of the platforms you mentioned?
It's a pretty inescapable fact on the ground that RISC-V has an absolute mountain of support in the industry. And starting this year, multiple vendors will be shipping cores faster than you can license from ARM.
The one where I actually read what I'm replying to.
I never one single time said RISC-V wasn't successful. Not even implied it. What I did say, should you ever climb off your apparently thinking-averse, preconceived notions, is that its license isn't the overriding reason it's successful, because the world is full of open source ISAs that never gained any traction. Something you might be aware of if you took a brief break from furiously cheerleading RISC-V and paid attention.
> Some people care less about squeezing out performance and more about open standards. I like having more choices, especially open ones.
You'd have to be totally naive to believe that Chinese vendors are going to share anything meaningful with you. They don't hate you; they want their paying customers to be happy, but the brutal competition in China doesn't allow them to be open in any sense. For products like RISC-V processors and MCUs, the moat is extremely low, and being open leads to quick death. It is not about how much stuff they share with you as a paying customer, it is about how much they are willing to share with their competitors when there are hundreds of companies trying everything to survive.
As a developer, you just need to ask yourself a dead simple question: how are such RISC-V platforms going to be more open than a Raspberry Pi?
I have increasingly negative things to say about this.
There is (so far) nothing 'open' about RISC-V, and I wonder if there really ever was any desire for it, at this point.
This whole "Open ISA" crap appears to be a thin veneer to funnel quite large sums of investment into an otherwise completely proprietary and locked-down environment that could never harm the incumbents in any meaningful way - while still maintaining just enough of a pretense of open source, that the (regrettably myself included) shallow nerds and geeks could get smitten by it.
Where is the RTL? Where are the GDSII masks? Why am I unable to look at the branch predictor unit in the Github code viewer? Or (God forbid!) the USB/HDMI/GPU IP? I reject the notion that these are unreasonable questions.
I want my SoC to have a special register that holds the git SHA of the exact snapshot of the repository that was used to cook the masks. That, now that, is open source. That is open computing. And nothing less!
I don't care about the piece of paper with instruction encodings - the least interesting part of any computer!
Wasn't that the whole point? We're more than a quarter of a century in and we're still begging SoC vendors for datasheets. Really incredibly embarrassing and disappointing.
> Where is the RTL? Where are the GDSII masks? Why am I unable to look at the branch predictor unit in the Github code viewer? Or (God forbid!) the USB/HDMI/GPU IP? I reject the notion that these are unreasonable questions.
As you note correctly, the ISA is open, not this CPU (or board).
The important point is that using an open ISA allows you to create your own CPU that implements it. This CPU can then be open (i.e. you providing the RTL, etc.), if you so desire.
I assume it will be much more difficult (or impossible?) to provide the RTL for a CPU with an AMD64 ISA, since that one has to be licensed. I wonder if paying for the license even allows you to share your implementation with the world. Even if it does, it's less likely that you will do so, given that you will have to pay the licensing fee and make your money back.
Since there is no license to pay for in case of RISC-V, it allows you to open up the design of your CPU without you having to pay for that privilege
My superficial understanding is that ARM does not prevent you from sharing implementation details of your own design, but most chips also license a starting implementation that has such limitations. So the end result is often more restricted than the ISA license alone would require.
Most ARM licensees aren't permitted to create custom implementations, only to use IP cores provided by ARM. There are a couple of companies who do have an architectural license, allowing them to create their own implementations, but there are only a few of those and they aren't likely to share. (It's also possible that the terms of their license prohibit them from making their designs public.)
> The important point is that using an open ISA allows you to create your own CPU that implements it.
So? You've been able to do that since... computers. Anyone can roll their own ISA any time they want. It's a low-effort project that someone with maybe a Masters-student level of knowledge can do competently. When I was in school, we even had a class where you would cook up a (simple) ISA and implement it (2901 bit-slice processors); these days they use FPGAs.
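For flavor, "cook up a (simple) ISA and implement it" can be as small as the sketch below: a hypothetical three-instruction accumulator ISA with a plain-Python interpreter. The opcode names and semantics are made up for illustration; this has nothing to do with the 2901 class project or any real ISA.

```python
# Toy interpreter for a hypothetical accumulator ISA (illustrative only).
# Instructions: ("LDI", n) loads immediate n into the accumulator,
# ("ADD", r) adds register r to the accumulator,
# ("ST", r)  stores the accumulator into register r.

def run(program, nregs=4):
    regs = [0] * nregs
    acc = 0
    for op, arg in program:
        if op == "LDI":    # load immediate into accumulator
            acc = arg
        elif op == "ADD":  # add register contents to accumulator
            acc += regs[arg]
        elif op == "ST":   # store accumulator into register
            regs[arg] = acc
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return acc, regs

# Compute 2 + 3: load 2, park it in r0, load 3, add r0 back in.
acc, regs = run([("LDI", 2), ("ST", 0), ("LDI", 3), ("ADD", 0)])
print(acc)  # 5
```

Specifying the encodings and semantics really is the easy part; the hard part, as the rest of this comment argues, is everything around it.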
So you got your own processor for your own ISA...that was slow, expensive (no economy of scale) and without a market. But very fun, and open source, at least. And if "create your own CPU that implements it" is what you want, go forth and conquer...everything you need is already there and has been for a long time.
But if your goal is "I want an open source ISA that I can produce that's price and/or performance competitive with the incumbents", well, that's a totally different ballgame.
And there are open source ISAs that have been around for decades (SPARC, POWER, SuperH). These are ISAs that already have big chunks of ecosystem in place. The R&D around how to make them competitive already exists. Some, like LEON SPARC, have even gone into something like production (and flown in space).
So, yes, an open source ISA affords the possibility that we can make processors based on our own ISAs on our own terms. It has even in extremely rare occasions produced a product. But the fact remains, the market hasn't cared in the slightest to invest what's required to turn that advantage into a real competitor to the incumbent processors.
Yes, you can create your own ISA. But to run what software?
If I create my own RISC-V implementation, I can install Ubuntu on it. Maybe even Steam.
See the difference?
And the market has responded with a tidal wave of CPU contenders. Like in the rest of the world, not all of them target the highest-end portion of the market, but some are choosing to play there. Have you checked out Ascalon?
And why did Qualcomm pay all that money for Ventana recently? You do not expect them to release high-end RISC-V chips? I mean, they already ship many low-end ones.
> And why did Qualcomm pay all that money for Ventana recently? You do not expect them to release high-end RISC-V chips? I mean, they already ship many low-end ones.
Ventana is an extremely bad example to use here. Its acquisition price is undisclosed; it could just be a modest sum for acquiring the team behind it. Secondly, Qualcomm's Nuvia acquisition was huge, and there is no reason whatsoever to believe the Ventana acquisition is remotely comparable, so it proves nothing about RISC-V adoption anyway.
I notice that the three benefits they flag for RISC-V are: flexibility, control, and visibility.
I wonder how they felt about "control" after ARM tried to stop them from commercializing the value of their Nuvia acquisition? I wonder if it had anything to do with their next big acquisition being RISC-V based instead?
I also wonder why, on their Oryon page, Qualcomm never mentions ARM. Not even once. Even on the question "Is Oryon x86?", they do not answer that it is ARM. Why not?
Why don't you read what was written instead of being the unthinking RISC-V fanboi in the room? My only point was that the RISC-V license is probably not the biggest factor in its success, since there have been many, many open source ISAs that weren't successful.
Couldn't have said it better. The moment these people promise everything will be free is a massive red flag. Unfortunately it seems most people haven't learned the lesson.
With the majority of players being Chinese vendors (those you can buy from, not counting those building RISC-V for their own in-house applications), RISC-V is far less open than ARM or x64.
expecting openness from Chinese vendors is like trying to hook up with some virgin bar girls in your favourite gogo bar in Bangkok.
If you search their public media releases, they mention that their cores are used by some unnamed vendors for undisclosed platforms. Just go and check how CLOSED that junk is: product names and models are always omitted; it is always "a certain vendor", "one AI card", no specs, no details whatsoever...
Searching their names on taobao.com returns zero hits; searching their names on the largest Chinese second-hand platform returns zero hits. Four years after they started their great open project, you can't even buy one on the OPEN market! That is VERY OPEN to me.
And here is a high-performance evolution of it that you can license. They would be happy to take your check today.
https://tenstorrent.com/ip/risc-v-cpu
Good times. I remember riding our bikes to Toys 'R' Us of all places to buy the game with a buddy. Pedaled back, played through the Orc campaign until 4 a.m. One of my all-time favorites.
Well good news, these days there's another layer. "Not even GPT4-level LLM" bots that frustrate you into giving up by circling to the FAQs over and over.
Library/API conflicts are usually the biggest pain point for me. Especially breaking changes. RLlib (currently 2.41.0) and Gymnasium (currently 0.29.0+) have sent me in circles many times because they tend to be out of sync (for multi-agent environments).
My go-to test now is a simple hello-world-type card game like war: competitive multi-agent with RLlib and Gymnasium (PettingZoo tends to cause even more issues).
Claude Sonnet 4.5 was able to figure out a way to resolve it eventually (around 7 fixes), and I let it create an rllib.md with all the fixes and pitfalls. I'm curious whether feeding that file to the next experiment will lead to a one-shot. GPT-5 struggled more, but I haven't tried Codex on this yet, so it's not exactly fair.
All done with Copilot in agent mode, just prompting, no specs or anything.
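To be clear, the game logic itself is trivial; the pain is all in wrapping it for RLlib. Here is a rough plain-Python sketch of a simplified "war" round loop, the kind of rules you'd put inside a multi-agent env wrapper (no rllib/pettingzoo imports here, agent names and tie handling are my own simplifications):

```python
# Simplified "war": deal 26 cards each, compare card by card,
# one point per won comparison, ties discarded. Sketch only;
# a real RLlib setup would wrap this in a multi-agent env class.
import random

def play_war(seed=0):
    rng = random.Random(seed)
    deck = [rank for rank in range(2, 15) for _ in range(4)]  # 2..14, ace high
    rng.shuffle(deck)
    hands = {"player_0": deck[:26], "player_1": deck[26:]}
    scores = {"player_0": 0, "player_1": 0}
    for c0, c1 in zip(hands["player_0"], hands["player_1"]):
        if c0 > c1:
            scores["player_0"] += 1
        elif c1 > c0:
            scores["player_1"] += 1
    return scores

print(play_war())
```

Everything version-sensitive (the env API, `reset`/`step` signatures, agent-ID handling) lives in the wrapper layer, which is exactly where the RLlib/Gymnasium mismatches bite.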