Hacker News | acchow's comments

Wikipedia says 3.6 billion Chrome users.

I get that there is undesirable real estate in Japan that gets sold "for a song".

But then how does it quickly get resold at 40x?


> But then how does it quickly get resold at 40x?

Because the new "owners" are buying the Business Manager visa, not the actual property.


Performing 40 songs in exchange for a property does seem like serious effort...

Suckers who don't realize it's not worth what they're paying.

Most people are convinced LLMs can do this.

Cal AI, which claims to generate a nutritional breakdown based on a photo, has $30 million in annual recurring revenue.


... Bloody hell. I mean that's basically fraud, surely. It is _not possible to do this even vaguely accurately_.

> senate and congress

The Senate and Congress are both elected. Their re-election is effectively jury nullification.

The people do not care about the crimes.


Between Citizens United, gerrymandering, the electoral college, winner-take-all elections, and voter suppression, I don't think we can say that "elections" in America reflect the will of the people.


Also, only ~30-40 of the 435 US House seats are competitive this cycle.


That post doesn't say anything about training for SVG generation


https://blog.google/innovation-and-ai/models-and-research/ge...

> Code-based animation: 3.1 Pro can generate website-ready, animated SVGs directly from a text prompt. Because these are built in pure code rather than pixels, they remain crisp at any scale and maintain incredibly small file sizes compared to traditional video.


> I think something like 96GB RTX PRO 6000 Blackwells would be the minimum to run a model of this size with performance in the range of subscription models.

GLM 5.1 has 754B parameters, though. And you still need RAM for context too. You'll want much more than 96GB of RAM.
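A quick back-of-envelope sketch of why 96GB falls short. The 754B figure comes from the comment above; the quantization widths are illustrative assumptions, and this counts weights only, before any KV cache for context:

```python
# Rough VRAM estimate for model weights alone, in GB (1 GB = 1e9 bytes).
# 754B parameters is taken from the comment; bit widths are assumptions.

def weight_gb(params_b: float, bits: int) -> float:
    """Memory needed to hold the weights at a given quantization width."""
    return params_b * 1e9 * bits / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_gb(754, bits):,.0f} GB")
# 16-bit: 1,508 GB; 8-bit: 754 GB; 4-bit: 377 GB
```

Even aggressively quantized to 4-bit, the weights alone are ~4x a single 96GB card, and KV cache grows on top of that with context length.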


Which one in the series, specifically?


An H-1B is tied to employment, not to the employer. You can change employers on the same H-1B.

It's not great. But this is similar to how health insurance is tied to employment, not to the employer. Both citizens and H-1B employees experience the same abuse here.


No it’s worse for them. A person on an H-1B has a ticking time bomb to find a new job or leave the country.


> asserting that LLMs will never generate 'truly novel' ideas or problem solutions

I don't think I've had one of these my entire life. Truly novel ideas are exceptionally rare:

- Darwin's On the Origin of Species - Gödel's incompleteness theorems - Buddhist detachment

Can't think of many.


> Every MCP server injects its full tool schemas into context on every turn

I consider this a bug. I'm sure the chat clients will fix this soon enough.

Something like: on each turn, a subagent searches available MCP tools for anything relevant. Usually, nothing helpful will be found and the regular chat continues without any MCP context added.
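A minimal sketch of that idea. Nothing here is from an actual MCP client; the tool names, descriptions, and the naive keyword-overlap scoring are all made up for illustration (a real implementation would likely use embeddings or a small model as the subagent):

```python
# Hypothetical per-turn tool routing: instead of injecting every MCP tool
# schema into context, score each tool against the user's message and only
# surface tools that clear a relevance threshold. Scoring here is naive
# word overlap between the message and the tool description.

TOOLS = {
    "get_weather": "Fetch the current weather forecast for a city",
    "create_issue": "Create a new issue in the bug tracker",
    "query_db": "Run a read-only SQL query against the analytics database",
}

def relevant_tools(message: str, threshold: int = 2) -> list[str]:
    words = set(message.lower().split())
    scored = [
        (len(words & set(desc.lower().split())), name)
        for name, desc in TOOLS.items()
    ]
    # Keep only tools with enough overlapping words, best match first.
    return [name for score, name in sorted(scored, reverse=True) if score >= threshold]

print(relevant_tools("what's the weather forecast for Tokyo?"))
# ['get_weather']
```

On most turns nothing clears the threshold, so no tool schemas get added to the context at all, which is exactly the behavior the comment describes.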


Absolutely.

I'll add to your comment that it isn't a bug of MCP itself. MCP doesn't specify what the LLM sees. It's a bug of the MCP client.

In my toy chatbot, I implement MCP as pseudo-Python for the LLM, dropping typing info and giving the tool info as tersely as possible, just one line: function_name(mandatory arg1 name, mandatory arg2 name): Description

(I don't recommend doing that, it's largely obsolete; my point is simply that you feed the LLM whatever you want, MCP doesn't mandate anything. Tbh it doesn't even mandate that it feeds into an LLM, hence the MCP CLIs.)
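A sketch of that one-line rendering. The schema shape follows MCP's JSON-Schema-style inputSchema convention, but this particular tool and the helper name are invented for illustration:

```python
# Collapse an MCP-style tool definition into a terse one-line signature,
# dropping type information and keeping only required argument names.

def tool_to_line(tool: dict) -> str:
    schema = tool.get("inputSchema", {})
    required = schema.get("required", [])
    return f"{tool['name']}({', '.join(required)}): {tool.get('description', '')}"

tool = {
    "name": "search_docs",
    "description": "Search the documentation index",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}, "limit": {"type": "integer"}},
        "required": ["query"],
    },
}
print(tool_to_line(tool))
# search_docs(query): Search the documentation index
```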


Yup, routing is key. Just like how we've had RAG so we don't have to add every biz doc to the context.

I agree with the general idea that models are better trained to use popular CLI tools like directory navigation etc., but outside of ls and ps etc. the difference isn't really there; new CLIs are just as confusing to the model as new MCPs.


You’re spot on. Anthropic’s blog describes a ToolSearchTool to solve this problem - https://www.anthropic.com/engineering/advanced-tool-use


> > Every MCP server injects its full tool schemas into context on every turn

> I consider this a bug. I'm sure the chat clients will fix this soon enough.

Anthropic's Claude clients manage/minimize/mitigate this reasonably.


That’s a trade-off: now you need multiple model calls for every single request.


Yes, we just need RAG applied to tools. Very simple to implement.


I don’t think so. Without a list of tools in context, the AI can’t even know what options it has, so a RAG-like search doesn’t feel like it would be anywhere near as accurate.


The RAG helps select the tool needed for the task at hand. Semantic search returns only the tools that match. Very efficient.
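A toy illustration of that semantic matching, under stated assumptions: the tools are invented, and a character-trigram vector stands in for a real embedding model, which is what a production system would actually use:

```python
# Semantic tool selection sketch: embed each tool description and the
# request, then return tools above a cosine-similarity cutoff.
# Character trigrams are a crude stand-in for learned embeddings.

from collections import Counter
import math

def embed(text: str) -> Counter:
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

TOOLS = {
    "send_email": "Send an email message to a recipient",
    "resize_image": "Resize an image file to given dimensions",
}

def select(request: str, cutoff: float = 0.2) -> list[str]:
    q = embed(request)
    return [n for n, d in TOOLS.items() if cosine(q, embed(d)) >= cutoff]

print(select("please email this report to my manager"))
# ['send_email']
```

Only the matching tool's schema would then be injected into context, keeping the per-turn overhead proportional to the task rather than to the number of installed MCP servers.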

