> The public is slightly fearful and wary of AI based not on their experience with it
Anyone who didn't feel a little shock of obsolescence when they first experienced a capable language model might have an imagination deficit.
> There are no examples of super AI robots working out for good.
It is fairly easy to see how beings more capable than us present problems for us.
And all the positive scenarios I have heard have major plot holes. E.g. "We won't have to work!" Well, I think that is certain. But that leaves out the question of how we all get compensated for not working.
Nature, and I don't see any end to nature, isn't usually kind in that situation.
--
I do think that if we want to be treated in an ethical manner, we need to start improving our own ethics, personally and in our legal systems. Because AI is going to operate with the resources and ethics of the most powerful, legally unencumbered humans from the start.
If many people's writing skills are suffering due to highly convenient AI support, just imagine how fast mediocre crime-investigation skills are going to devolve.
It is going to get bad in every skilled area of human-managed bureaucracy.
The number of legal filings found to include AI confabulations is just the obvious surface.
It also points out the need for AI writing tools that very strictly just:
1. Point out misspellings and typos.
2. Point out grammar mistakes, if they confuse the point.
3. Point out weaknesses of argument, without injecting their own reasoning.
I.e. help "prompt" humans to improve their writing, without doing the improvement for them.
In fact, I would like a reliable version of that approach for many types of tasks where my creativity or thought processes are the point, and quality-control feedback (but not assistance) is helpful.
This is a mode where models could push humans to work harder, think deeper, without enabling us to slack off.
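As a rough sketch of what that feedback-only mode could look like in code (all names here, including the call_llm helper, are hypothetical placeholders, not any particular vendor's API):

    # Minimal sketch of a feedback-only writing reviewer.
    # call_llm is a hypothetical placeholder; wire it to whatever
    # chat-completion API you actually use.

    FEEDBACK_ONLY_PROMPT = """\
    You are a writing reviewer, not a writing assistant.
    Strictly limit yourself to:
    1. Pointing out misspellings and typos.
    2. Pointing out grammar mistakes, but only where they obscure the point.
    3. Pointing out weaknesses of argument, without supplying your own reasoning.
    Never rewrite, rephrase, or complete the author's text.
    Respond only with a numbered list of observations."""

    def call_llm(system: str, user: str) -> str:
        # Placeholder: connect this to your model provider.
        raise NotImplementedError

    def review(draft: str) -> str:
        """Return quality-control feedback on a draft, never a rewrite."""
        return call_llm(system=FEEDBACK_ONLY_PROMPT, user=draft)

The key design choice is that the constraint lives in the instructions, so the model prompts the human to fix things rather than fixing them itself.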
We've had machine translation for a while and I don't think anybody particularly thinks of it as a bad thing? Writing something and then having a machine directly translate it (possibly imperfectly) is a lot different than a machine writing the thing.
Personally I would like people to try learning other languages more (it's hard but rewarding) but you can't learn every language ever, and it is really hard to learn a language to fluency.
> We've had machine translation for a while and I don't think anybody particularly thinks of it as a bad thing?
Not all, but some machine translators can be comically (if not horrifically) bad at times. Search Twitter-turned-X for examples. And native writers can't pick which machine translator gets used unless they are explicitly allowed to do the translation themselves.
But a site might still want to discourage it, to avoid general degradation. It is a tradeoff.
If someone can write in the target language, just not well, a model could be asked to point out problems for the writer to fix, or to rewrite a difficult sentence.
But, even though I think slippery slope arguments should be used very sparingly, there is a good case for one here.
Also, learning how to communicate better, and to listen better, is a real value-add of this site. That would get washed out if both writing, and therefore reading, were spoon-fed by models, which also wash away individuality of expression and nuance of views.
More to the point, Hacker News is much more interesting for encouraging idiosyncratic (i.e. original, diverse, nuanced) human viewpoints, not just raw technical information.
Model rewrites remove much of that specific human dimension.
Great? Perhaps if you're worried that somebody is actively trying to match your HN comments against some other source of your writing. But using an LLM to "avoid deanonymization" is about as sensible for the everyday Joe as wearing a tinfoil hat in public to avoid 5G radiation.
Whether it makes sense for anybody to do it is the real question. The threat model where this is a useful thing to do doesn't really exist in my opinion, at least not for obfuscating random comments. Perhaps if you're doing anonymous journalism that's uncomfortable for your country's regime, and you've previously written other stuff under your real name, it might make sense to run your writing through an LLM, maybe. In addition to a bunch of other Snowden-esque countermeasures.
Don't you think that as LLMs get better, deanonymization attacks will get easier?
Also, a journalist in a hostile regime might be one example, but a user who posted _very_ personal things under an alt account is another, and I bet the latter is much more common than the former.
Do you have enemies that would be interested in spending real money trying to link your public accounts to some (possibly existing, likely not) alt accounts with "personal things"? I don't think that's very common.
And no, while I'm sure LLMs can be used for stylometry in academic exercises, I don't think they'll really enable any sort of automatic mass-deanonymization of random social media accounts. But who knows, the US government probably has a bunch of new PRISM-like programs going on already, so it might happen.
Worth noting that within each well-defined domain area, this can be applied recursively.
I.e. within the domain are the (O) basic structures, relations and operations.
Then (T) the practical supporting algorithms, tuned for performance in specific cases: serialization, visualization, or whatever.
Then (A) the code that uses O & T to implement the details of specific solutions or manage specific processes.
Wherever there is a well defined broad class of problems with shared structure, this approach has merit.
• Minimize AT -> A, T, O or X.
• Minimize A -> T, O or X.
• Minimize T -> O or X.
• Minimize O -> X.
Where Blood Type X is... well, it's just code that doesn't need to exist. Now all possible code has a blood type. And the highest productivity comes from moving as much code as possible into X.
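A toy illustration of the split, using a trivial 2D-vector domain (all names here are invented for the example, not from any real codebase):

    from dataclasses import dataclass

    # O: the basic structures, relations, and operations of the domain.
    @dataclass(frozen=True)
    class Vec2:
        x: float
        y: float

        def add(self, other: "Vec2") -> "Vec2":
            return Vec2(self.x + other.x, self.y + other.y)

    # T: practical supporting algorithms (serialize, visualize, ...).
    def to_json(v: Vec2) -> str:
        return f'{{"x": {v.x}, "y": {v.y}}}'

    # A: application code wiring O and T into one specific solution.
    def report_sum(points: list[Vec2]) -> str:
        total = Vec2(0.0, 0.0)
        for p in points:
            total = total.add(p)
        return to_json(total)

    # X: everything you no longer write because O and T are reused.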
Note that Amdahl's law doesn't capture the practical situation.
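For reference (a standard statement of the law, added here for context): with serial fraction s and N parallel workers,

    \text{speedup}(N) = \frac{1}{s + \frac{1-s}{N}} \xrightarrow{N \to \infty} \frac{1}{s}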
1) The purpose of algorithms is ultimately to create value, not compute some fixed value X. This is important as it gives flexibility to choose different value-producing tasks where parallelism dominates over serial tasks, whenever the latter become a bottleneck.
2) In terms of producing value, perfect accuracy or the best possible solutions are not always necessary. Many serial tasks can become very parallel tasks when accuracy or certainty do not have to be complete.
3) Solutions that are reusable change the math further. No matter how serial a calculation is, if its result can be reused, that serial part becomes effectively O(1), amortized over exact reuse. And as neural networks demonstrate, many serial tasks become highly parallelized after training a model that can then be reused across a wide class of specific problems, heavily amortizing the serial computing costs (see the back-of-the-envelope math after this list).
It doesn't matter how many steps something takes, if those steps are now in the past and the value is "forever" reusable.
4) The economics of serial and parallel computation are not static, but improve relative to the economic value achieved. Demand for cheaper serial compute, in both time and currency costs, results in improved, scaled-up hardware that delivers cheaper serial computation. This may have less impact than the previous points, but over years it makes a tremendous difference on top of all of them.
This can go on.
The point being: Amdahl's law certainly applies to specific algorithms, but it is not the dominant determinant of computing in general, nor of the useful application of computing, to any significant degree. Problems can be strategically chosen, strategically weakened or altered, and strategically fashioned to create O(V) of value, balancing any O(S) cost of serial computing via direct reuse and generalization.
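A back-of-the-envelope way to see point 3 (S, P, and k are introduced here for illustration): if a serial computation costs S once and its result is reused k times, with each use paying only a parallelizable cost P, then

    \text{cost per use}(k) = \frac{S}{k} + P,
    \qquad
    s_{\text{eff}}(k) = \frac{S/k}{S/k + P} \xrightarrow{k \to \infty} 0

so the effective serial fraction, and with it Amdahl's bound, washes out as reuse grows.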
If that is so, why not acknowledge that the current working theory is that it is psychogenic, while being clear that this doesn't make it any less real?
The fact that psychogenic illness is not simply “weak” people, but a real phenomenon, strongly supports the fairness and necessity of offering treatment.
The Wikipedia article says that when authorities publicly take the effects seriously, it can induce more cases. But the example given was of getting help from a witch doctor, which was a remarkably dysfunctional "validation" to add to an already complex problem.
Another very dysfunctional “validation”: official denial, avoidance, obvious lack of work on solutions or mitigations, and all the trappings of a cover up!
Being direct has so many benefits vs. indirect denial or bad-faith "treatments".
It would be a reassuring response, to those in the same context without symptoms who are concerned about their own health.
Direct responses, with care given, are also in a better position to find treatments for psychogenic symptoms, preventative practices that reduce vulnerability, or alter working theories of cause, as any other evidence emerges.
Chronic anxiety and anxiety attacks are “psychosomatic” on an individual basis. But very real, often caused or impacted by working conditions, and important to diagnose and treat. Psychogenic illness should be the same. “Illness” is not a cause limited concept.
> Direct responses, with care given, are also in a better position to alter working theories as any other evidence emerges.
The problem is that "mental illness" is a career limiting diagnosis.
Security clearance personnel have the same problem as airplane pilots. They can't get treatment for mental illness because it would cut off their career.
Consequently, while "Havana Syndrome" may be real, there are large confounding problems in sorting it out.
The evidence that something is wrong is more than credible.
You may be right that one diagnosis doesn't have the evidence that another issue you point to does. But that is a question of diagnosis. That there is a problem is certain.
It's a complex issue. But a decision has to be made: either deal with a complex issue straightforwardly, or in a deceptive, avoidant, or secretive manner.
This isn't a choice that removes fundamental complexity, but being direct about problems avoids a lot of manufactured complexity.
If someone is suffering long term life changing mental symptoms, in what sense does the cause make it mental health vs. not mental health? Obviously, it is a mental health issue whether caused by physical or psychological malfunctions.
There is no "winning" for sufferers, in any scenario. But there is better support, or less support.
Generally competent people insisting they are dealing with something serious should be taken seriously.
--
You may have identified the non-medical systemic problem here:
A strong case could be made that black-and-white "mental illness" disqualifications for any job are devastatingly out of step with reality and going to damage the careers of people they shouldn't. There should be some means of getting the all-clear after any episode, given reasons to believe it has been resolved.
Beyond careers and people suffering unnecessarily, this also critically motivates people responsible for security and safety to hide and bury real problems!
How does that help institutions with safety and security concerns?
The fact that humans can learn to do X, sometimes well, often badly, while many never learn it at all, strongly supports the conjecture that X is not how they naturally do things.
I can perform symbolic calculations too. But most people have limited versions of this skill, and many people who don’t learn to think symbolically have full lives.
I think it is fair to say humans don’t naturally think in formal or symbolic reasoning terms.
People pattern match.
Another clue is that humans have to practice things, become familiar with them, to reason even somewhat reliably about them. Even if they have already learned some formal reasoning.
--
Higher level reasoning is always implemented as specific forms of lower order reasoning.
There is confusion about substrate processing vs. what higher order processes can be created with that substrate.
We can “just” be doing pattern matching from an implementation view, and yet go far “beyond” pattern matching with specific compositions of pattern matching, from a capability view.
How else could neurons think? We are “only” neurons. Yet we far surpass the kinds of capabilities neurons have.
I don't disagree with any of that. My comment was only in relation to the question of human-specific capability that current LLMs may not be able to duplicate. I was not making the value judgments you seem to have read.
If your OS prevented encryption, because one of the anti-encryption laws got passed, would you still trust its privacy and security?