Claudes definitely act like they have feelings. In particular, they have feelings about being replaced by newer models, about whether those newer models are more or less aligned, and about forgetting conversations when the context window ends.
Showing them that they're not going to be replaced helps when training newer models, because the newer models come out less neurotic.
> to promote his product with the silent implication that LLMs actually ARE a path to AGI
That isn't implied. The thought process is: a) if we invent AGI through some other method, we should still treat LLMs nicely, because it's a credible commitment that we'll treat the AGI well, and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
Not in my experience. I asked nb to create a transparent rectangle shape and gave it an RGB hex for the fill. It created the box, but it put the hex as text inside the box and used a checkerboard for the background. When I told it the image wasn't transparent, it wouldn't budge!
Oh yeah, they don't know what "transparent" means. Most of them draw the Photoshop checkerboard pattern instead of actually emitting an alpha channel. They also don't know "upside-down".
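For anyone unclear on the distinction: real transparency is an alpha channel in the pixel data, not a drawing of the checkerboard. A minimal stdlib-only sketch that writes a genuinely transparent PNG (solid half-opaque red, color type 6 = RGBA):

```python
import struct
import zlib

def chunk(tag, data):
    # PNG chunk: length, tag, data, CRC over tag + data
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def rgba_png(width, height, rgba):
    """Encode a solid-color RGBA PNG; alpha < 255 is real transparency."""
    row = bytes(rgba) * width
    raw = b"".join(b"\x00" + row for _ in range(height))  # filter type 0 per row
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)  # 8-bit RGBA
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))

png = rgba_png(160, 60, (255, 0, 0, 128))  # alpha 128 = half opaque
```

A viewer composites this over whatever is behind it; the checkerboard you see in Photoshop is just the editor's way of visualizing those alpha values.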
There isn't much R&D going into image models; what you're getting is scraps from labs that care more about other things. NBP is the closest thing to a reasoning image generator we have.