Hacker News | rldjbpin's comments

all of the grievances resulting in this move are a simple outcome of the cost of convenience. but fixing them should not require going to the full opposite end to get something good enough.

dedicated servers, as hinted by others here, address the vast majority of issues one may face for any non-enterprise needs. if you know about IOPS and care about them, odds are that running a simple open-source project [1] on top of one is all you need to do to move on with your day.

need redundancy, etc.? you can complement your box with another one in a different provider/region, or put CF in front of it. this is clearly working well enough for some of the commenters here, who are able to sell their own service on top of this approach.

[1] https://disco.cloud/
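
to make the "CF in front of your box" part concrete, a minimal sketch using cloudflare's v4 dns api; the zone id, api token, hostname, and origin ip below are all placeholders:

```python
# sketch: create a proxied dns record so traffic goes through cloudflare's
# edge instead of hitting the dedicated server's ip directly.
import json
from urllib import request

def proxied_record_payload(name: str, origin_ip: str) -> dict:
    # proxied=True is what puts cloudflare "in front"; ttl=1 means "automatic"
    return {"type": "A", "name": name, "content": origin_ip,
            "proxied": True, "ttl": 1}

def create_record(zone_id: str, token: str, payload: dict) -> dict:
    req = request.Request(
        f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST")
    with request.urlopen(req) as resp:
        return json.load(resp)

print(proxied_record_payload("app.example.com", "203.0.113.10"))
```

with the record proxied, the origin ip stays hidden and cloudflare's caching/ddos layer sits between the internet and the box.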


context: previously worked for an automobile oem

the current trend in the industry (before ai everything) has been the software-defined vehicle [1].

while the ux has been horrible, the things hidden away from us are also becoming very bloated, well beyond microcontroller-level complexity.

as a side-effect, even if you build a modern, mass-market car without screens, the cars of the future would still need to be connected to the manufacturer for ota updates to core functionality. expect supply-chain issues like the ones people faced with axios, etc.

[1] https://en.wikipedia.org/wiki/Software_Defined_Vehicle


context: using student pack's "pro" plan for a long time, with exposure to enterprise "pro" plan also.

given the recent changes that kneecapped the plan for students [1], i feel less bad after seeing this. i always had a monthly limit on premium requests shown in the extension (which i would watch creep up in dread); the daily/weekly "usage limits" part seems ambiguous at best.

using agentic workloads as the basis for this change does not sit quite right with me. if you look at the newly added debug mode, you may notice the token consumption as well as the subagent/tool calls made behind the scenes. my takeaways:

- it consumes way too many tokens for simple tasks (i had one use case where the agent burnt 16+ million tokens just to make a 50-line change in a monorepo using a plan -> agent approach)

- even when you select a model in the dropdown, the subagents/tools can be called with an entirely different model, often haiku-4.5. gpt-4o is widely used for creating summaries or titles to display for the plan.

- the new reasoning modes have exacerbated the token burning, as the agent tends to loop a whole lot. the prompt-to-plan token ratio is already minuscule, and once you add your own instruction files and skills, it goes out the window.
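
to put the 16+ million token anecdote above in perspective, a quick back-of-the-envelope sketch (the per-million-token prices are made up for illustration, not copilot's actual rates):

```python
# rough cost of a 16M-token agent run that produced a 50-line change;
# the $/M-token tiers below are hypothetical placeholders.
def run_cost(tokens: int, usd_per_million: float) -> float:
    return tokens / 1_000_000 * usd_per_million

tokens_burnt = 16_000_000
lines_changed = 50
print(f"tokens per changed line: {tokens_burnt // lines_changed:,}")  # 320,000
for price in (0.25, 3.0, 15.0):  # hypothetical $/M-token pricing tiers
    print(f"at ${price}/M tokens: ${run_cost(tokens_burnt, price):.2f}")
```

whatever the real rate, 320k tokens per changed line is hard to defend as "agentic efficiency".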

i think they offered a generous model in the past, but by kneecapping the lower tier, it no longer justifies its existence. if they want to raise prices, they can raise the floor. or better, put some work into improving their own orchestration system before putting the blame on the users vibing it out.

[1] https://news.ycombinator.com/item?id=47500445


for a company that is one of the major players in tracking similar data across the web, i don't see much wrong with this.

if they continue to share their work through open releases despite the leadership change, i hope we get to benefit from it.

i am not quite optimistic about the result, as i wonder whether, on aggregate, we all consistently interact with computers in the most efficient way possible. maybe it will help beat captcha or scraper detection through mimicry.


the chassis upgrade is welcome, and it is nice to see the company grow enough to start creating custom hardware and to support standards that should keep things modular without sacrifice.

at the same time, there is a decent level of risk in adopting "new" standards before the industry catches up. lpcamm2 is great and should allow faster memory while staying "upgradable". the issue is having only one slot, which forces you to replace memory instead of adding to it. this assumes there is indeed a single slot, which i am happy to be proven wrong on.
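
assuming the single-slot reading is right, the replace-vs-add economics can be sketched with placeholder prices (none of these are quotes for real parts):

```python
# illustrative upgrade-cost comparison: one lpcamm2 slot (replace the module)
# vs two classic sodimm slots (keep the old one, add a second).
def single_slot_cost(new_module_price: float, old_module_resale: float = 0) -> float:
    # the old module has to come out, so you pay for the full new capacity,
    # minus whatever you can resell the old module for
    return new_module_price - old_module_resale

def two_slot_cost(extra_module_price: float) -> float:
    # the existing module stays; you only buy the addition
    return extra_module_price

print(single_slot_cost(new_module_price=320, old_module_resale=60))  # 260
print(two_slot_cost(extra_module_price=160))  # 160
```

even with a decent resale price, the single-slot path costs more for the same end capacity, which is the whole complaint above.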

the current timing is a shame, but if one needs to shell out so much money anyway, might as well get better performance and hardware along with it.


phones are indeed becoming more repairable, and the legislation is working to an extent.

on the other hand, mandating easier-to-repair components is ineffective if the manufacturer does not support parts sales or use parts otherwise widely available in the market.

this goes beyond phones to other consumer electronics. in the world of laptops, which are generally more repairable, i've had my own experience with a mid-range one from lenovo, the largest vendor worldwide. [1]

the laptop was from the covid era and one of the refreshes of a popular lineup that has seen minimal changes under the hood. despite that, when i had to replace its fans and battery, i had to look for third-party sellers for the components. they are quite easy to replace, but as a regular consumer it is tricky to find the correct parts and not overspend on them.

maybe with the new silicon carbide batteries, we could have a "nokia bl-5c" moment, without the counterfeit explody part.

[1] https://www.statista.com/statistics/267018/global-market-sha...


let the analysts and news say what they want - the entire situation is artificial and is up to the manufacturers.

the current relative spike in prices misses the medium-term trend: the vast post-covid decrease in memory prices is what set up the recent surge. the cartel got another opportunity to make bank, and they will pull that lever to the max.

funnily enough, i've personally been stuck with 16 gigs since 2015, across three memory generations! but i am used to the past, when you would spend 80-100 on an 8gb stick (jedec timings, nothing fancy, but from a major brand), without accounting for inflation.
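
for scale, a rough $/GB comparison; the 2015 figure is the midpoint of the range above, while the current-price figure is an assumption for illustration:

```python
# price-per-gigabyte then vs now, nominal dollars (no inflation adjustment,
# matching the comment above). the "now" price is a made-up placeholder.
def per_gb(price_usd: float, capacity_gb: int) -> float:
    return price_usd / capacity_gb

old = per_gb(90, 8)    # midpoint of the 80-100 range for an 8gb jedec stick
new = per_gb(60, 16)   # assumed current price for a 16gb stick
print(f"2015: ${old:.2f}/GB, now: ${new:.2f}/GB")
```

even a spiked market can still be far below 2015's nominal $/GB, which is why the "prices are exploding" framing needs the medium-term context.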


from a ux perspective, all of this is quite exciting and is made possible by our growing capabilities in computer vision and language understanding.

however, the way these things are named and branded is quite interesting. calling it "computerless" or "screenless" is semantically funny: of course we need computing (often through the cloud, even) to get it to run, and the word "screen" originally described a surface to be projected on, which a lot of these solutions literally use!

i'd wager that, along the same lines, pretty much any smart voice assistant in your room could fit into this bucket, albeit with a different capability set.

this is still at a stage where it seems like a lot of work to make it do what we take for granted on a daily basis. very far from reality for those who wanted to get rid of screens altogether. for that, the focus should be more on what one does on a screen than on the hardware itself.


The moral judgement of a practice not unknown to those who handle production deployments (it takes me back to the days when I had to use a local Maven repository with dated dependencies) rests on a very shaky foundation.

We used to focus more on finding issues before a new release, and while it remains common to find bugs in older ones, not having enough users should not be used as a crutch for testing.

> (dependency cooldowns) don't address the core issue: publishing and distribution are different things and it's not clear why they have to be coupled together.

Besides some edge cases for large projects, the core issue remains code quality and maintainability practices. The rush to push several patches per day is insane to me, especially in the current AI ecosystem.
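
As a side note on the quoted cooldown idea, here is a minimal sketch of how such a check could look. The release dates below are synthetic; a real tool would pull them from the registry (e.g. PyPI's JSON API) rather than a hard-coded dict.

```python
# dependency "cooldown": only accept versions that have been published
# for at least N days, so freshly pushed (possibly compromised or broken)
# releases never reach your lockfile.
from datetime import datetime, timedelta, timezone

def eligible_versions(releases: dict[str, datetime], cooldown_days: int,
                      now: datetime) -> list[str]:
    cutoff = now - timedelta(days=cooldown_days)
    return [v for v, uploaded in releases.items() if uploaded <= cutoff]

now = datetime(2024, 6, 15, tzinfo=timezone.utc)
releases = {
    "1.0.0": datetime(2024, 5, 1, tzinfo=timezone.utc),
    "1.1.0": datetime(2024, 6, 10, tzinfo=timezone.utc),  # 5 days old: too fresh
}
print(eligible_versions(releases, cooldown_days=7, now=now))  # ['1.0.0']
```

The point of decoupling publishing from distribution is exactly this: the publish event happens immediately, while consumers opt into a delayed view of it.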

Breaking changes used to come with a long enough transition period (see Python 2 to 3), while today they are made on a whim, even by SaaS folks who should provide better DX for their customers. Regardless, open-source/source-available projects now expect more from their users, and I wonder how much of that remains reasonable.


the core thesis of the article and the comparison with non-"losers" seem vague at best. comparing model providers with hardware vendors with software ecosystems is comparing apples to oranges.

if the hardware moat were to be discussed, then compare with nvidia, amd, and google's tpu division perhaps. in-house intelligence is best left alone for apple; they are relying on their "peers" for the underlying capabilities as is. [1] [2]

outside of inference and the (pro/con)sumer space, there is little on offer for the enterprise or for the people developing the lowest end of the stack. even the recent tinygrad egpu is shockingly slow [3], which might make the gb10 look much more capable for in-house training.

regardless, most of the industry's "moat" does not appear sustainable. only time will tell how it turns out for everyone, but on a positive note, apple does not put all its eggs in this basket, which is probably wiser.

[1] https://news.ycombinator.com/item?id=40636980

[2] https://news.ycombinator.com/item?id=46589675

[3] https://www.youtube.com/watch?v=C4KWsmezXm4

