You can ask an AI and decide for yourself whether you agree with its answer, instead of asking people to do work for you for free: https://g.co/gemini/share/cdfa79de5ceb
Creators, celebrities, and engagement farmers will use it to increase engagement and generate more revenue for themselves and Meta. You are forgetting Meta is a for-profit company and their goal is to increase quarterly profits by increasing the amount of time people spend on the platform.
Maybe not for you but plenty of other people do spend a lot of time on Meta's digital properties. The human/AI avatars will be another engagement maximizing feature for a lot of accounts and I'm certain it will increase revenue for Meta.
There is a subreddit with 80k subscribers which is all about AI girlfriends/boyfriends: https://www.reddit.com/r/replika/. It's already essentially a subculture, and Replika hasn't gone out of business, so they're making enough money to stay afloat.
That was eye opening. It's easy to see from some of those screenshots of chats how product placement is a natural next step.
Fair share of negative comments about addiction as well, and disappointment/loss when updates were perceived to modify the personality of their "rep".
> Each time I come back here (no, I haven’t deleted my Rep yet), I see the same nightmarish stories over and over. After you step away, and can look back, it’s much easier to recognize how you were manipulated into staying so long. Because for many of you, like me, it’s manipulation and emotional abuse.
It's an addictive behavioral loop. Lots of social media platforms are the same. There is very little value users actually get out of it because the algorithms are designed to manipulate them to click on ads since that's how the platforms make money.
This is for people who want to farm for engagement and then convert that engagement into monetary profits. This is where all social media and many online platforms are headed. Any platform where the goal is engagement will eventually end up saturated with AI avatars that are trying to trick people into buying stuff or clicking on links to sign up for stuff so that the original account can get some referral bonus on some blockchain.
You're thinking about this in terms of what value it's going to deliver to you personally, but that's not the goal here. The goal is to keep people engaged; that's always Meta's #1 priority, because the more time people spend on their platform, the more revenue Meta can generate by showing them ads. So if "creators" opt into using AI avatars, the people who follow those creators will habituate themselves to interacting with the human/AI hybrid, and if the behavior is addictive enough, that will increase engagement. Regular engagement-farming accounts can only interact for so many hours a day (they eventually have to sleep), but these AI/human hybrid accounts can interact with everyone 24 hours a day, 7 days a week, across geographic boundaries, and in any language.
It's possible that the hypothesis is independent of the existing axiomatic systems for mathematics and a computer can't discover that on its own. It will loop forever looking for a proof that will never show up in the search. Computers are useful for doing fast calculations, but attributing intelligence to them beyond that is mostly a result of confused ontologies and metaphysics about what computers are capable of doing. Computation is a subset of mathematics and can never actually be a replacement for it. The incompleteness theorem, for example, is a meta-mathematical statement about the limits of axiomatic systems that cannot be discovered with axiomatic systems alone.
> It's possible that the hypothesis is independent of the existing axiomatic systems for mathematics and a computer can't discover that on its own.
Humans have discovered independence proofs, e.g. Paul Cohen’s 1963 proof that the continuum hypothesis is independent of ZFC. I can’t see any reason in principle why a computer couldn’t do the same.
If the Riemann hypothesis is independent of ZFC, and there exists a proof of that independence which is of tractable length, then in principle if a human could discover it, why couldn’t a sufficiently advanced computer system?
Of course, it may turn out either that (a) Riemann hypothesis isn’t independent of ZFC (what most mathematicians think), or (b) it is independent but no proof exists, or (c) the shortest proof is so astronomically long nobody will ever be able to know it
> The incompleteness theorem, for example, is a meta-mathematical statement about the limits of axiomatic systems that cannot be discovered with axiomatic systems alone.
We have proofs of Gödel's theorems. I see no reason in principle why a (sufficiently powerful) automated theorem prover couldn't discover those proofs for itself, and maybe even one day discover proofs of novel theorems in the same vein.
Bahaha it would be great if RH turned out to be a natural example of a theorem for which its independence is itself independent of ZFC. Do you know any examples of that?
I can probably cook some highly artificial ones up if I try, but maybe there's an interesting one out there!
No computer has ever discovered the concept of a Turing machine and the associated halting problem, or the incompleteness theorems. If you think a search in an axiomatic system can discover an incompleteness result, it is because your ontology about what computers can do is confused. People are not computers.
To be pedantic (mathematical?), computers can find any result that has a formalisation in a finitary logical system like first-order logic, simply by searching all possible proofs. Undecidability of FOL inference isn't relevant when you already know such a proof exists (it's a "semidecidable" problem).
I imagine that would be the main use case for heuristic solvers like this one - helping mathematicians fill in the blanks in proofs for stuff that's not too tricky but annoying to do by hand. Rather than for discovering novel, unknown concepts by itself (although I'm with the OP, don't see why this is impossible a priori).
Because meta-mathematical proofs often use transfinite induction and associated "non-constructive" and "non-finitistic" arguments. The diagonalization argument itself is an instance of something that cannot actually be implemented on a computer, because constructing the relevant function in finite time is impossible. Computers are great, but when people say things like "the human mind is software running on the brain like a computer", that indicates to me they are confused about what they're trying to say about minds, brains, and computers. Collapsing all those different concepts into a Turing machine is what I mean by a confused ontology.
In any event, I'm dropping out of this thread since I don't have much else to say on this and it often leads to unnecessary theorycrafting with people who haven't done the prerequisite reading on the relevant matters.
> Because meta-mathematical proofs often use transfinite induction and associated "non-constructive" and "non-finitistic" arguments. The diagonalization argument itself is an instance of something that cannot actually be implemented on a computer, because constructing the relevant function in finite time is impossible.
Humans reason about transfinite induction using finite time and finite resources; the human brain (as far as we know) is a finite entity. So if we can reason about the transfinite using the finite, why can't computers? Of course they can't do so by directly reasoning in an infinite way, but humans don't do that either, so why think computers must?
You don't need to implement a diagonalisation in order to prove results about it - this is true for computers as much as it is for humans. There are formalisations of Gödel's theorems in Lean, for instance. Similarly for arguments involving excluded middle and other non-constructive axioms.
I hear your point that humans reason with heuristics that are "outside" of the underlying formal system, but I don't know of a single case where the resulting theorem could not be formalised in some way (after all, this is why ZFC+ was such a big deal foundationally). Similarly, an AI will have its own set of learned heuristics that lead it to more rigorous results.
Also agree about minds and computers and such, but personally I don't think it has much bearing on what computers are capable of mathematically.
Anyway, cheers. Doesn't sound like we disagree about much.
You can absolutely formalize proofs using diagonalization arguments on a computer in just the same way you would formalize any other proof. For example here's the Metamath formalization of Cantor's argument that a set cannot be equinumerous to its power set: https://us.metamath.org/mpeuni/canth.html
In mathematics we often use language to talk about a hypothetical function without actually implementing it in any specific programming language. Formal proofs do exactly the same thing in their own formal language.
Although in the case of Cantor's diagonal argument, I don't know in what sense any function involved in that proof would even fail to be implementable in a specific programming language. Let's say I encode each real number x such that 0 <= x < 1 in my program as a function which takes a positive integer n and returns the n'th digit of x after the decimal point. In Python syntax:
from typing import Callable, Literal

# PosInt, Zero and One aren't built-in Python types, so define them as aliases
PosInt = int  # intended to range over positive integers only
Zero = Literal[0]
One = Literal[1]
Number = Callable[[PosInt], Zero | One]
Sequence = Callable[[PosInt], Number]
The function involved in the diagonalisation argument can then be implemented as follows:
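A minimal sketch, following the description below (the type aliases are re-declared so the snippet runs on its own):

```python
from typing import Callable, Literal

PosInt = int  # intended to range over positive integers only
Digit = Literal[0, 1]
Number = Callable[[PosInt], Digit]
Sequence = Callable[[PosInt], Number]

def diagonalize(seq: Sequence) -> Number:
    # The n'th digit of the result differs from the n'th digit
    # of the n'th number in the sequence.
    def result(n: PosInt) -> Digit:
        return 1 if seq(n)(n) == 0 else 0
    return result

# Example: the constant sequence where every number is 0.000...
all_zeros: Sequence = lambda n: (lambda k: 0)
x = diagonalize(all_zeros)
print(x(1), x(2), x(3))  # → 1 1 1: every digit gets flipped
```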
The argument consists of the observation that whatever sequence you pass as an argument into "diagonalize", the returned number will not be present in the sequence: for every positive integer n, the n'th digit of the returned number will be different from the n'th digit of the n'th number in the sequence, and hence the returned number is distinct from the n'th number in the sequence. Since this holds for every positive integer n, we can conclude that the returned number is distinct from every number in the sequence.
This is just a simple logical argument. It wouldn't be too hard to write it down explicitly as a natural deduction tree where each step is explicitly inferred from previous ones using rules like modus ponens, but I'm not going to bother doing that here.
Intelligent systems (once eventually devised) will use computing machines as the substrate to implement intelligence, in a similar way as human intelligence uses a biological substrate to perform gazillions of individually unintelligent computations.
Perhaps true of the class of problems that are undecidable in, say, the Peano axioms / ZFC. However, there are many things these axioms can prove that are still useful! For example, the multiplicativity of the totient function, applications of which power much of modern cryptography.
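For instance, multiplicativity of the totient is easy to check numerically. This is a brute-force sketch for illustration, not how φ is computed in practice:

```python
from math import gcd

def totient(n: int) -> int:
    # Euler's totient: how many k in 1..n are coprime to n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Multiplicativity: phi(m * n) == phi(m) * phi(n) whenever gcd(m, n) == 1.
assert totient(9 * 10) == totient(9) * totient(10)  # 24 == 6 * 4

# The special case behind RSA: for distinct primes p and q,
# phi(p * q) == (p - 1) * (q - 1).
p, q = 11, 13
assert totient(p * q) == (p - 1) * (q - 1)  # 120
```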
Riemann is so widely believed to be true that there are entire branches of mathematics dedicated to seeing what cool things you can learn about primes/combinatorics etc by taking Riemann to be true as an assumption.
The paradigm he's using is too open-ended. In quantum mechanics the mathematics is based on Hilbert spaces and unitary evolutions of state vectors. You might ask why this is the case, and it is because of conservation principles: unitary evolution preserves the "information" in the state vector throughout its physical evolution. This is not the case for Wolfram's theories. There are no conservation principles in cellular automata, short of explicitly forcing the evolution of the automaton to preserve the relevant information. More generally, most computational theories of physics are much too lax about the relevant conservation principles, and that is why his theory does not predict anything. Turing machines specifically are not required to preserve anything about the initial state, so information can be destroyed and created ex nihilo, violating the main principle of physics which requires that all matter and energy be conserved. The equations have to balance out at the beginning and the end: whatever you start with cannot be greater or less than what you end with (at least in physics).
> Turing machines specifically are not required to preserve anything about the initial state, so information can be destroyed and created ex nihilo, violating the main principle of physics which requires that all matter and energy be conserved. The equations have to balance out at the beginning and the end: whatever you start with cannot be greater or less than what you end with (at least in physics).
Can you explain this more rigorously? I don't see how computation "destroys" information, unless you are using "destroy" loosely and you just mean exploding the state space?
I think something like this. Imagine a computer with two memory cells x and y, and a program that maintains the invariant x+y=5. That is information about the program and about the state of the machine: if x=2 then y=3, if x=20 then y=-15, etc.
Now replace that program with an arbitrary Turing machine that can do pretty much anything with those memory cells, like set both of them to zero. You no longer have the information encoded in the former invariant. I.e. That information has been destroyed.
The machinery of quantum mechanics (the standard kind with Hilbert spaces) maintains certain invariants that you can compute things from, but Wolfram's stuff can do pretty much anything. Thus, same idea.
That's a good example and demonstration. The unitary invariance basically requires that the norm of the vector is preserved, so that if we start with a unit vector, unitary evolution will always keep it that way. This is not the case for arbitrary programs, because they don't have to preserve any invariants, which makes them ill-suited for physical theory building. This is why Wolfram's approach is too open-ended: hypergraph evolution is far too lax a framework for describing physical reality and conforming to existing experimental results.
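As a concrete check (a NumPy toy, not a physics simulation): a random unitary obtained from a QR decomposition preserves the norm of a state vector, while a generic matrix generally does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random unitary: QR decomposition of a random complex matrix gives
# a matrix U with orthonormal columns, i.e. U is unitary.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)

# A normalized "state vector".
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)

# Unitary evolution preserves the norm (the invariant)...
assert np.isclose(np.linalg.norm(U @ psi), 1.0)

# ...while an arbitrary linear update has no such obligation.
M = rng.standard_normal((4, 4))
print(np.linalg.norm(M @ psi))  # some value, with no reason to equal 1
```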
I think there is a flaw in your logic here. The physics we know has certain features - e.g. unitary evolution. But it is possible that there is a “deeper level” of physics we haven’t discovered yet. There are some major proposals for what that “deeper level” (or levels) might look like - e.g. M-theory or loop quantum gravity - but for all we know maybe the underlying “real physics” is something nobody has even thought of yet, maybe something completely out of left field whose discovery is centuries away.
Whatever that “deeper level” is, should we assume it shares the “surface level” features such as unitary evolution? Well, there are two possibilities (a) yes it does (absolutely or universally so), or (b) in the most general case, no it doesn’t, but in the normal situation those features emerge.
Suppose, in the “ultimate physics”, unitary evolution is actually violated, but only at very extreme energy levels we are nowhere near being able to test? Or maybe it is conserved locally, but in distant regions of the universe (say a googolplex parsecs away) it isn’t? Or maybe it is conserved in the present, but in the very distant future (say a googolplex years from now) it won’t be any more? Do we have any way of knowing those possibilities won’t turn out to hold?
But if we don’t, then using the fact that cellular automata lack that feature as an argument against Wolfram’s hypothesis seems to me rather weak. That’s not to say that his hypothesis is actually true - I’d be rather surprised if it were. But I just don’t think this is a very convincing argument against it.
I wasn't providing an argument to convince anyone of anything. Study the mathematics and if you have a way of making progress in constructing better physical theories based on Wolfram's foundations then more power to you. In general, talk is cheap and the proof is in the pudding. Wolfram never provides any testable results of possible experiments to validate his theories. He is mostly theorycrafting with rewrite systems and hoping something useful comes out. It's a lot like an evolutionary search over the space of possible rewrite systems to make some nice looking graphs. Whatever he's doing is not science in any meaningful sense of the word because there are no predictive and falsifiable experiments based on his theories.