Hacker News | new | past | comments | ask | show | jobs | submit | matusp's comments

It is very unlikely that Russia depends on Iran for its drone production. Iran does not produce any critical components that could not be sourced elsewhere. Iranian drone exports were probably already close to zero after last year's shootout.

Russia is still selling about as much oil as before the war (7 mb/d). The price going up (Urals was $50 at the start of the year; now it's more than double, at $110) is definitely a great boon for them, as oil sales are one of their most important revenue streams.


Another interesting development is the ridiculous amount of background blurring in photos. It turns out you can locate a surprisingly large number of garages, warehouses, treelines, etc. from a single photo.

GeoGuessr-style skills can be mind-blowing: identifying a location down to the county from a random patch of sky and the corner of a power pole.

And the real punchline is that the deluge of papers barely matters: the academic field is hardly moving, and the most interesting innovations are happening on the product side.

I disagree with this. The products are usually based on published research; that's just not easily seen by the enthusiast power-user base.

Of course it's only a small fraction of all papers that end up actually being used. Most are mainly about advancing careers and strengthening CVs.


I have been in both academia and industry for years, and I don't think the model you describe is true anymore. It was definitely true 10 years ago, but the situation has flipped. Now, I see really ambitious and impactful research coming out of industry labs. Academia is often lagging behind the state of the art because they lack the resources (data, compute, and skills) to compete.

Academia is also incentivized such that everyone works on the same popular topics to secure grants and citations. Currently that's LLMs, where academia has to compete with multi-billion-dollar corporations on a technology that is notoriously expensive. In effect, many researchers work on topics that are pretty inconsequential from the get-go (such as the N+1th evaluation dataset), but it's the only way for them to stay relevant.


I recently talked with a PI from a well-known university lab, and asked why they were doing a startup, given the ML research problems they were working on.

They said a company was the only way to get access to the compute power they needed for that research.

A startup sounds like a good solution, if they get paired with the right product- and business-minded people and together find a winning collaboration. (Edit: Or if they get acquired quickly in the AI boom and negotiate the right deal to sustain their research longer-term.)


A lot of those industry papers are collaborations with an academic lab, or are even often first-authored by a PhD student interning at a big tech lab.

One key reason you're wrong is that many interesting things aren't even getting published; they're kept on the DL for years before eventually making it into public spheres and products.

Academia is just a daycare at this point, and many labs shouldn't exist or get funding. The people who move the field aren't necessarily the ones with the most citations; they're usually hard at work in places that don't publish at all.


Are you talking about just frontier LLM agent stuff or all of the scope of ICML? I wonder what your subfield is.

I am not sure I understand what it is about

It's pure armchair psychology, but this type of project always makes me think about anxiety. Who really needs this level of self-observation and control? At the same time, I really enjoy reading about it, and I find the window into somebody else's world intriguing.

> Who really needs this level of self observation and control

I liked doing similar things in the past. There's no anxiety in the equation, just pure curiosity. How many times have I done a thing a month/year? I was always curious about stuff like this, much like the OP. There's also the hacker spirit in play - designing the apps for tracking stuff.


I tried using Gemini for some light historical research. It could not stop using tech metaphors. Lords were the CEOs of their time, pope was the most important influencer, vassal uprisings were job interviews, etc. The metaphors were almost comically useless and imprecise, and Gemini kept using them even when I explicitly asked it to not do that.


> It could not stop using tech metaphors. Lords were the CEOs of their time, pope was the most important influencer, vassal uprisings were job interviews, etc.

That happens all the time if the previous discussion was about the other subject you don't want (tech in this case): LLMs (not just Gemini) go out of their way to reconcile the two topics.

As an example, at some point I asked an LLM about the little shroom people (the tiny people that users mostly hallucinate in the same way when eating a particular mushroom), forgot to start a separate discussion, and then asked about the root "-trinsic" in "intrinsic" and "extrinsic" and the city of Trinsic in the Ultima games. Oh man, the LLM went wild. I had totally forgotten I'd asked about the little shroom people hallucination, but the LLM didn't forget and went totally nuts.

I think you'll get better results if you launch a new discussion and specify "Context: history" or "Context: cooking". Once it goes off the rails, asking it to "not do that" doesn't really work: by that point it's just gone, solid gone.


I think that's Gemini trying to personalize the answer specifically for you. It really leans heavily into that to the point of being galling.

You can give it additional instructions in the settings, but you have to be careful with that too. I've put my tech stack and code preferences in there to get better code examples. A while later I asked it about binary executable formats and it started ending every answer with "but the JVM and v8 take care of that for you."

Which is both funny in an "I, Robot" kind of way, and irritating. So I told it to ignore my tech stack. I have a master's in CS and can handle a bit of technical detail.

Turns out, Gemini learned sarcasm. Every following answer in that thread got a paragraph that started with something like "But for your master brain, this means..."


The new memory feature in Gemini got turned on by default and every answer came out like this. It kept working in details from one particularly long thread. Everything was framed in terms of the common elements. Everything. I turned it off immediately.


This seems like a huge risk factor for users who are at risk for schizophrenia - if someone is using the LLM as an "AI companion", the model is likely to reinforce, or even suggest, illusory connections between events or experiences the user has described in their conversations.


How can you turn it off without turning off history ("My Activity") altogether?

I noticed the "memory" too, and it had turned Gemini into a useless sycophant for me, but so subtly that I almost didn't spot it.


https://gemini.google.com/saved-info

The toggle by "Your past chats with Gemini"


Even Gemini 2.5 was extremely snarky. I basically disable all guardrails via prompts and instructions, and it started getting snippy at me for apparently acting like a know-it-all.


Yeah, but that does not influence US politics.


I’d argue that Iran has a huge influence on US politics, as the US is currently at war with them.


The fate of Iranian civilians does not impact US politics.

A majority of Americans are completely unconcerned by the suffering of victims of the empire abroad.


The "concern" of US civilians in general is different from the results of their nation's behaviour.


Some would say that science can be valuable even when it does not produce commercially viable results. Making money is not the pinnacle of human experience.


There are plenty of scientific results that make us lose money. Un-leading our paint and gasoline, climate change, even just eating fresh fruits and veg.

The main reason even uninteresting results in science are valuable is that negative knowledge is still knowledge. Every idea that gets kicked around and tested seemed promising to someone, so knowing that it's most likely a dead end is itself worth knowing.

Long live the Ig Nobel Prize! I wish we had an "Epic Fail" equivalent to honor genuinely nonsensical, failed science experiments, because they're often still worth doing.


People continue using Airbnb because that's where the properties are listed. And owners keep listing properties because that's where the users are.


My point was that nothing stops hosts from listing their properties on Airbnb as well as on a competitor. Unless Airbnb penalizes delisting or enforces price parity, I guess?


> The "AI voice" is everywhere now.

Maybe I'm going crazy but I can smell it in the OP as well.


Yeah, the article smells extremely strongly of AI to me, but I've been told here before that that's just The Register's house style, so I have no idea.


Yeah I started seeing it too, the article is just full of AI clues.

