Hacker News | kraftman's comments

Distraction

It's weird because the barrier to not having that in is so low: you can just tack on 'talk like me, not AI, don't use em dashes, don't use formulaic structures, be concise' and it'll get rid of half of those signals.


This is how you get precious takes like this one:

https://news.ycombinator.com/item?id=45322362

> First impression: I need to dive into this hackernews reply mockup thing thoroughly without any fluff or self-promotion. My persona should be ..., energetic with health/tech insights but casual and relatable.

> Looking at the constraints: short, punchy between 50-80 characters total—probably multiple one-sentence paragraphs here to fit that brevity while keeping it engaging.

> User specified avoiding "Hey" or "absolutely."

Lots more in its other comments (you need [showdead] on).


I don't understand why someone would go through the effort to prompt that when the comments it suggested are total garbage, and it seems like it would take similar effort to produce a low-quality human-written comment.


If I had to guess, it's probably an attempt to automate karma farming over time to make an account look legit later on.


Don't give these subnormals any ideas!


conspiracy: the people behind these bots intentionally run very obvious bots to distract everyone from the less-obvious bots

It's not just clever—it's devious!


I talk politely to LLMs because I talk politely.


[flagged]


I am! But seriously, I've seen some conversations of how people talk to LLMs and it seems kinda insane how people choose to talk when there are no consequences. Is that how they always want to talk to people but know that they can't?


Why should there be consequences for typing anything as inputs into a big convolution matrix?


I don't think I implied that there should be. What I mean is, for me to talk/type considerably differently to an LLM would take more mental effort than just talking how I normally talk, whereas some people seem to put effort into being rude/mean to LLMs.

So either they are putting extra effort into talking worse to LLMs, or they are putting more effort into general conversations with humans (to not act like their default).


I do not “talk” to LLMs the same way I talk to a human.

I would never just cut and paste blocks of code and error messages, then make cryptic requests for what I want, at a human. But I do with an LLM, since it gets me the best answer that way.

With humans I don’t manipulate them to do what I want.

With an LLM I do.


I don't mean that people say hi, or goodbye, or niceties like that. I'm talking about people that say things like "just fucking do it" or "that's wrong you idiot, try again".


The truth is that most people will in fact power trip over other people when given a chance. Most people have no business ever being near any sort of leadership role because of this. What you're seeing with the way people power trip over other bots is almost certainly the way they'd treat people too, if they felt as certain of their power over those people.


Humans are not moral agents, and most of humanity would commit numerous atrocities in the right conditions. Unfortunately, history has shown that 'the right conditions' doesn't take a whole lot, so this really should come as no surprise.

It will also be interesting to see how long talking to LLMs will truly have 'no consequences'. An angry blog post isn't a big deal all things considered, but that is likely going to be the tip of the iceberg as these agents get more and more competent in the future.


Yeah they looked like they were wobbling while I read them until I focused on them more.


I'm on some anti-rejection meds post-transplant and asked ChatGPT about some of my symptoms, and it said they were most likely caused by my meds. Two different nephrologists told me that the meds I'm on didn't cause those symptoms, before looking it up themselves and confirming they do. I think LLMs have a place in this as far as being able to quickly come up with hypotheses that can be looked into and confirmed/disproved. If I hadn't had ChatGPT, I wouldn't have brought it up, or my team would have just blamed lifestyle rather than meds.


I can't deal with having more than ~5 permanent tabs and 5 temporary open tabs. If I have so many tabs open I can't read what they are, I know something has gone wrong with what I'm trying to do, so I try and reset.


I wouldn't be surprised if they don't. Valve don't want to sell hardware, they want to sell games. They only make hardware as flagships for new markets, then they want other hardware manufacturers to take over.

The Legion Go is more powerful and has a nicer screen, but is heavier, boxier, and has worse battery life than the Steam Deck.


Valve's moving into hardware more than ever right now, not moving away from it. They've already said multiple times a Deck 2 is on the cards, but only when there's enough of a hardware bump to make it make sense as a product. Slapping a slightly newer CPU in there and calling it a Steam Deck 2 isn't what Valve are about.


They definitely are working on it. They announced the Steam Machine, Steam Controller, and the Valve Frame (a standalone VR headset with seamless screen sharing from a PC), and in their reveal video the first thing they rather coyly say is "we'd love to share information about our next Steam Deck, but that's for another day!" before announcing a bunch of other cool stuff.


I feel like when I talk to someone and they tell me a fact, that fact goes into a kind of holding space, where I apply a filter of 'who is this person telling me this thing, and what does that mean for the thing they're telling me'. There's how well I know them, there's the other beliefs I know they have, there's their professional experience and their personal experience. That fact then gets marked as 'probably a true fact' or 'Mark believes in aliens'.

When I use ChatGPT I do the same before I've asked for the fact: how common is this problem? How well known is it? How likely is it that ChatGPT both knows it and can surface it? Afterwards I don't feel like I know something; I feel like I've got a faster broad idea of what facts might exist and where to look for them, a good set of things to investigate, etc.


The important part of this is the "I feel like" bit. There's a small but growing body of research showing that the "fact" is more durable in your memory than the context, and over time, across a lot of information, you will lose some of the mappings and integrate things you "know" to be false into your model of the world.

This more closely fits our models of cognition anyway. There is nothing really very like a filter in the human mind, though there are things that feel like them.


Maybe, but then that's the same whether I talk to ChatGPT or a human, isn't it? Except with ChatGPT I can instantly verify what I'm looking for, whereas with a human I can't do that.


I wouldn't assume that it's the same, no. For all we knock them, unconscious biases seem to get a lot of work done; we do all know real things that we learned from other unreliable humans, somehow. It's not a perfect process at all, but it's one we are experienced at and have lifetimes of intuition for.

LLMs seem like people but aren't, and specifically have a lot of the signals of a reliable source in some ways, so I'm not sure how these processes will map. I'm skeptical of anyone who is confident about it either way, in fact.


Reminds me of "default to null":

> The mental motion of “I didn’t really parse that paragraph, but sure, whatever, I’ll take the author’s word for it” is, in my introspective experience, absolutely identical to “I didn’t really parse that paragraph because it was bot-generated and didn’t make any sense so I couldn’t possibly have parsed it”, except that in the first case, I assume that the error lies with me rather than the text. This is not a safe assumption in a post-GPT2 world. Instead of “default to humility” (assume that when you don’t understand a passage, the passage is true and you’re just missing something) the ideal mental action in a world full of bots is “default to null” (if you don’t understand a passage, assume you’re in the same epistemic state as if you’d never read it at all.)

https://www.greaterwrong.com/posts/4AHXDwcGab5PhKhHT/humans-...


> Afterwards I don't feel like I know something, I feel like I've got a faster broad idea of what facts might exist and where to look for them, a good set of things to investigate, etc.

Can you cite a specific example where this happened for you? I'm interested in how you think you went from "broad idea" to building actual knowledge.


Sure. I wanted to tile my bathroom, and from ChatGPT I learned about laser levels, ledger boards, and levelling spacers (I'd only seen those cross-corner ones before).


FWIW that seems like low stakes compared to what I see other people using LLMs for (e.g medical advice).


I guess. I also used it to check the side effects of coming off prednisolone, and it gave me some areas to look at. I've used it a bunch to check out things around kidney transplants, and everything I've verified has been correct.


I spent a year in a bunch of airbnbs and every time there was an induction hob it had at least one of these issues. I really like them otherwise but the buttons are just so bad.


I'm not sure it's binary. I feel like I've gotten worse at it with age, and for some reason I find it harder with my head sideways.


For me this is the other way round. When I was a student (physics) I had a very, for a lack of a better word, "practical" visualization in my head - what I needed to understand what I was studying. There was a lot of maths too, visualized.

Today, 30 years later, I have vivid representations of calligraphy or art, especially when I fall asleep. I fall asleep within minutes at worst, so I cannot really take full pleasure in watching these images, and during the day I am too surrounded by sources of sound, images, etc. to meaningfully repeat the exercise.


The _absence_ of visual imagery is binary: you cannot see images at all or, to whatever extent, you can. Those who do have any mental imagery at all, however, fall on a scale. There are numerous studies of certain real downsides to aphantasia, notably tied to episodic memory, which don't seem to be present in those simply with diminished visual imagery.

