It really isn't a problem. The analogical congruence holds.
Free will is typically defined as the capacity to act according to one's own desires, without the constraint of necessity.
In the game-theoretic framework for which Nash proved his equilibrium result, the decision the agent has to make isn't over actions. It is over strategies. That might be a bit confusing, so let's just use an example.
If you play rock paper scissors, the agent isn't modeling the problem as a choice among the actions rock, paper, or scissors. It is modeling it as a choice over probability vectors: rock corresponds to the vector [1 0 0], paper to [0 1 0], and scissors to [0 0 1]. There are infinitely many possible mixed strategies, but it turns out the rational one to pick is [1/3 1/3 1/3], playing each option with equal probability.
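A quick way to see why [1/3 1/3 1/3] is the equilibrium: against that mixed strategy, every pure action has the same expected payoff, so no deviation helps. Here is a minimal sketch in Python (the payoff matrix and names are my own illustration, not anything standard-library):

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors.
# Rows = my action, columns = opponent's action (rock, paper, scissors).
# +1 = win, -1 = loss, 0 = tie.
payoff = np.array([
    [ 0, -1,  1],   # rock
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

uniform = np.array([1/3, 1/3, 1/3])

# Expected payoff of each pure action against the uniform mixed strategy.
expected = payoff @ uniform
print(expected)  # [0. 0. 0.] -- no pure action beats any other, so there
                 # is no profitable deviation: uniform mixing is the equilibrium
```

Any deterministic deviation, by contrast, can be exploited: an opponent who knows you always play rock simply plays paper.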
Notice that here the agent is taking an action, but not one constrained by necessity, and that the choice follows from the agent's modeling of its own preferences. Compare that with the definition of free will: the same thing is happening.
So the congruence does hold; what makes it appear not to hold is that most people rejecting free will neglect computationally irreducible phenomena. In actuality, computationally irreducible functions that can serve as stochastic signals show up in cellular automata, with no need for dualism. Selection and variation are then more than enough to show that surviving agents will protect these information sources so as to keep them unobserved, because failing to do so isn't optimal, and agents that fail aren't selected for.
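The classic example of such a cellular automaton is Wolfram's Rule 30: a fully deterministic local rule whose center column is nonetheless computationally irreducible enough to be used as a pseudorandom bit stream. A minimal sketch (array width and the center-column readout are my choices for illustration):

```python
# Rule 30: each cell updates as  left XOR (center OR right).
# Deterministic dynamics, yet the center column behaves as a
# stochastic signal -- computational irreducibility without dualism.
def rule30_bits(steps, width=257):
    row = [0] * width
    row[width // 2] = 1                      # single seeded cell
    bits = []
    for _ in range(steps):
        bits.append(row[width // 2])         # read the center column
        row = [
            row[(i - 1) % width] ^ (row[i] | row[(i + 1) % width])
            for i in range(width)
        ]
    return bits

print(rule30_bits(16))
```

An observer who can't see the internal row state has no practical shortcut to predicting the next bit other than running the automaton itself, which is exactly the property a selected-for agent would want to protect.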
> Ah, this is where the problem starts.
We could try to reject, not the analogical congruence (which holds), but the use of analogical reasoning in the first place. This goes badly. The first big problem is that all of our knowledge arrives via theory-laden proxies, and removing analogical validity removes the validity of evidence in general. A stranger consequence is that compression itself is justified through analogical congruence. So you could no longer claim knowledge of a system's state and dynamics function, because what you actually have is knowledge of a compressed form of it. To claim knowledge, you would now need to physically be the system.
> Ah, this is where the problem starts.
There are many invisible octopuses. Fish encounter decision problems in which these camouflaged predators lurk on both their right and their left, yet the decision context looks exactly the same in both cases. So which side should they pick? Always left? Then they always die. Always right? Then they always die. The only winning solution is to pick both, because then the agent sometimes lives to reproduce. How do we pick both? If we do it in a computationally reducible way, the octopus observing the fish unseen can anticipate it, and so the octopus is wherever the fish decides to go. The fish has to mix over both sides, and it has to do so in a way that is unobservable. This decision problem doesn't go away when someone chants the magic word of dualism. The agent still needs to decide in an unpredictable manner or it is going to die. And when optimization processes build things? They end up approximating a solution to this problem.
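The point can be made concrete with a toy simulation (my own framing, not a standard model): a predictor watches the fish's past choices and waits on whichever side the fish favors. A deterministic fish is anticipated every time; a fish that randomizes survives about half the time.

```python
import random

def survival_rate(fish_policy, trials=10_000):
    """Fraction of encounters the fish survives against a
    frequency-tracking predictor."""
    history = []
    survived = 0
    for _ in range(trials):
        choice = fish_policy(history)
        # Predictor waits on the fish's most frequent past choice
        # (defaulting to "L" before any history exists).
        guess = max(("L", "R"), key=history.count) if history else "L"
        if choice != guess:
            survived += 1
        history.append(choice)
    return survived / trials

always_left = lambda history: "L"            # computationally reducible
coin_flip   = lambda history: random.choice("LR")  # unobservable mixing

print(survival_rate(always_left))  # 0.0 -- the predictor locks on immediately
print(survival_rate(coin_flip))    # ~0.5 -- unpredictability is the only defense
```

Any reducible pattern (always-left, strict alternation, or anything a simple model can fit) collapses to the always-caught case; only genuine mixing keeps the survival rate away from zero.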
This type of decision problem has played out billions of times, over billions of years. I don't know which instance was the first, but wherever it was, that is where the problem really starts. And from that problem onward, a filter has been killing the things that answer it incorrectly.