The rational actor model assumes that a person will behave optimally - using all information available to make and carry out the best decision possible for their goals.
I strongly suspect that a better model is that people, rather than optimizing their outcomes, optimize the ease of decision making while still getting an acceptable course of action. Most of our biases serve either to let us make decisions more quickly or to minimize the odds of catastrophically bad outcomes, which fits nicely with this model. The fact is that indecision is often worse than a bad decision, and the evolutionary forces that shaped our brains are stochastic in nature and thus don't dock points for missed opportunities.
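To make the contrast concrete, here is a minimal sketch (mine, in Python; the names and threshold are hypothetical) of the difference between a maximizer, which must score every option, and a satisficer, which stops at the first acceptable one:

    def maximize(options, score):
        # must evaluate every option before deciding
        return max(options, key=score)

    def satisfice(options, score, threshold):
        # evaluate lazily, in arrival order, and stop at "good enough"
        for option in options:
            if score(option) >= threshold:
                return option
        return None  # indecision only if nothing clears the bar

The satisficer's cost depends on how strict the threshold is, not on the size of the option space, which is exactly the "ease of decision making" being optimized.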
The idea you’re describing sounds similar to Satisficing Theory [1]. I agree this approach does a much better job of describing real life decision making than the traditional rational actor model. Unfortunately, Satisficing rarely gets discussed (at least in my experience) in mainstream economics/psychology, despite having been around since the 1950s.
Finding the best possible choice is impossible, but selecting the choice that maximizes expectation is possible. The former would be driving a different route because you know that a specific driver would rear-end you, and is impossible because it requires knowing the future. The latter would be driving a different route because you know that there's an annoying left turn.
The distinction isn't in whether you have access to all possible information for use in a decision. It's whether you use all the information available to you in a decision.
Not sure if you realize this is coming off as pedantic, but everybody realizes what you are getting at. It's just not useful or relevant.
Define "available information" as what people are able to load into working memory to make the decision. You can maximize over those factors easily.
I think the fact that you think this is pedantic rather than useful and relevant demonstrates that you don't realize what he's getting at, possibly because your definition of "information being available" is wrong; it would make type 1 thinking the same as type 2.
I can "load up" the axioms of set theory plus the necessary definitions into working memory, but I'm still not claiming any millennium prizes. I do not think that a model of a person that is only limited by information would be anything close to a person that is limited by computational ability.
True, but even execution with literally zero unforced errors with the information one does have is something that can be pursued.
Or can it? Is it even possible, or are humans so fundamentally flawed that they inevitably fail on day one? Pointing to monks is a standard example, but they tend to isolate themselves from difficult environments.
The laws of physics don't stop us, but something does.
This is "bounded rationality" [1], where people make the best decisions possible given computational constraints on how they make decisions. A lot of interesting work tries to derive human cognitive biases from this idea.
> The rational actor model assumes that a person will behave optimally - using all information available to make and carry out the best decision possible for their goals.
> I strongly suspect that a better model is that people, rather than optimizing their outcomes, optimize the ease of decision making while still getting an acceptable course of action.
This is a very profound insight that I completely agree with. I've noticed that exact phenomenon in my own life and in my peer groups. Basically disengaging, not looking for new local maxima (in fairness, because they are hard to detect as they are happening) because the current situation is good enough to keep coasting on.
> optimize the ease of decision making while still getting an acceptable course of action
This might explain some behavior, but how does this model explain why many people choose to hurt others out of spite even if it means hurting themselves? Those choices are neither easy, nor optimal, nor ultimately acceptable as many people who do stupid things like that end up regretting it. It seems to me and most of historical humanity that something is fundamentally broken in us beyond merely missing out on the optimal outcome due to stochastic acceptableness. Sometimes we deliberately choose to do something very difficult that we know is wrong because we desire the bad outcome. That is messed up.
Punishing bad behavior is critical for any social group. If people know that no matter how much they break the social contract you're not going to do anything about it, the social contract no longer exists. This goes for both future interactions with the person who made the transgression, as well as third parties who are aware of the transgression.
Now a rational actor would carefully evaluate the consequences of possible responses to come up with an appropriate option, and if the cost of their feud were greater than the likely reward they'd simply let it go. While it leads to better outcomes, this is a slow and draining process.
On the other hand, a simple "eye for an eye" response will often lead to suboptimal results, particularly when the perceived slight is very different from the actual transgression, but people will still be hesitant to mess with you all the same. While in our modern era of functional justice systems this approach is generally unnecessary, the overwhelming majority of our evolutionary history did not contain such a luxury.
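To put the cost difference in code (a toy sketch of my own, not a claim about how brains actually work): the deliberate response has to score every option, while the retaliatory reflex is a constant-time lookup.

    def deliberate_response(options, expected_value):
        # slow and draining: weigh the consequences of every possible response
        return max(options, key=expected_value)

    def eye_for_an_eye(their_last_move):
        # fast: mirror whatever they did, no evaluation at all
        return their_last_move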
I tend to optimize for the least amount of perceived effort, most benefit, least actual productive output. I will spend 2 days writing a utility so I never have to do the same repetitive 30 second task twice.
I think there cannot be a single perspective for optimal behavior. If I work, I want to be efficient; the opposite is true if I want to relax. When I want to have fun or be creative, rationality isn't necessarily good company.
I also don't want to take every opportunity I get; that would be pretty exhausting. I would have the opportunity to save some taxes if I invested a few hours into tax law. Certainly an opportunity, and pretty productive. But I just don't want to, because I hate doing taxes.
Sure, these models do not apply to individuals (although this fact is often neglected). Also, a model is always a simplification. Intrinsic to that is that it will by definition only ever be approximative. It neglects parts of reality, hopefully the less important ones, but you cannot be sure about the correctness and extent of the approximation.
For example, if I know a behavioral scientist that I just don't like for whatever reason and he suggests I should exercise more, I might go eat an extra pot of ice cream. This would render "nudging" quite ineffective, or worse, have the opposite of the intended effect.
I think it is more constructive to accept the limitations of a model. It can help with prognosis and diagnostics. Why is it, for example, that people exercise less? Probably workload, or distractions from entertainment, or whatever reason. I think the field should concentrate on trying to get answers to such questions.
Psychology is interesting and much of the content that cannot be replicated is probably still true under certain circumstances. But for generalization these circumstances need to be known.
The basic form of the rational actor model assumes nothing more than that the prediction errors an actor makes should be assumed to be unsystematic [unless proven otherwise] for the purpose of modelling. (And by extension, that some empirically observed group-level systematic deviations from theoretically optimal behaviour might be better explained by constraints on ability to act than on inability to anticipate)
Which is a pretty good null hypothesis, actually.
That's entirely consistent with people frequently optimising for ease of decision making, it's just not consistent with slavish adherence to a particular specified decision making function an economist has designed policy around exploiting. The canonical example in macroeconomics being that if a government announced its intention to increase inflation, it would be unreasonable to assume that people weren't rational enough to consider asking for a pay rise.
Epicycles were abandoned because we had a more parsimonious default model, not because we wanted to have a more complex idea of reality and handwaved about maybe being more multidisciplinary.
Also, evolution takes into account uncertainty of information. In contrast, when we reason intellectually, our first step is to clean up the data and get clear on what the question actually is - though we typically don't count that as "reasoning".
On the last point, evolution doesn't dock points for missed opportunities... provided someone else didn't miss them.
The deeper problem is modeling goal setting. We know people will hurt themselves to punish others, and economics is stuck at thinking people only wish to maximize value. People are much more complex than that.
If you apply the cognitive biases model to algorithms which have superhuman performance in various games - like AlphaZero, Deep Blue, Pluribus, and so on - the natural result is to conclude that these models are predictably irrational. The reason you get this conclusion is that it turns out to be necessary to trade off theoretically optimal answers for the sake of speed. The behavioral economic view of human irrationality ought to be considered kind of dumb in view of that result. But it is actually so much worse than that for the field, because the math shows that sacrificing optimality for speed is something that even an infinitely fast computational intelligence would be forced to do. It isn't irrational; it is a fundamentally necessary tradeoff. In imperfect information games your strategy space is continuous, EV is a function of policy, and many games even have continuous action spaces. If you thought Go had a high branching factor, you thought wrong; Go is an example of a freakishly low branching factor. It is infinitely smaller than the branching factor in relatively trivial decision problems.
If you've never looked at cognitive biases through the lens of performance optimization, you should try it. What seems like an arbitrary list from the bias perspective becomes a set of clever approximation techniques from the performance optimization perspective.
I often think about why this isn't more commonly known among people who call themselves rationalists and tend to spend a lot of time discussing cognitive bias. They seem to be trending toward a belief that general superintelligence is of infinite power, doubling down on their fallacious and hubristic appreciation for the power of intelligence.
I say this because when you apply the algorithms that don't have these biases - the behavioral economist view wouldn't find them irrational, since they stick to the math and follow things like the coherence principles for how we ought to work with probabilities, as seen in the works of Jaynes, de Finetti, and so on - they either don't terminate, or, if you force them to do so... well... they lose to humans; even humans who aren't very good at the task.
>why this isn't more commonly known among people who call themselves rationalists
Because most of these people do nothing but write blogs about rationalism. Same reason university tests are sometimes so removed from practicality compared to evaluation criteria in business: the people who make them do nothing but write tests.
I suspect if you put some rationalists into the trenches in the Donbass for a week they'd quickly have a more balanced view of what's needed to solve a problem besides rational contemplation.
The thing about continuous-space solutions is that they are typically differentiable, which means you can use gradient descent or LM (Levenberg-Marquardt) optimization rather than needing to fully explore the solution space. Typically there are large regions which are heuristically excludable, which is what you are getting at, I think, but even unbiased sampling plus gradient descent often makes problems much more tractable than discrete problems.
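As a toy illustration of why differentiability helps (a sketch of my own, assuming a smooth one-dimensional loss): gradient descent homes in on a good point without exploring the rest of the continuous space.

    def gradient_descent(df, x, lr=0.1, steps=200):
        # follow the local slope; no exhaustive search of the space needed
        for _ in range(steps):
            x -= lr * df(x)
        return x

    # minimizing f(x) = (x - 3)^2 via its derivative f'(x) = 2(x - 3):
    x_min = gradient_descent(lambda x: 2 * (x - 3), x=-10.0)  # converges near 3.0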
The type of learning problem where I agree with your point is something like learning to classify handwritten digits. My point about the continuous nature being unsearchable in practice is about recursive forms - if I choose this policy, my opponent will choose to react to the fact that I had that policy.
In your learning problem, where things were made tractable by differentiation, you have something like an elevation map that you are following, but in the multi-stage decision problem you have something more like a fractal elevation map. When you want to know the value of a particular point on the elevation map, you have to look for the highest or lowest point on the elevation map you get by zooming in on the area that results from your having chosen a particular policy.
The problem is that since this is a multi-agent environment, they can react to your policy choice. They can, for example, choose to have you get a high value only if you have entered the correct password on a form. That elevation map is designed to be flat everywhere, with another fractal zoom corresponding to a high utility or a low error term only at the point where you enter the right password.
Choose a random point and you aren't going to have any information about what the password was. The optimization process won't help you. So you have to search. One way to do that is a random search; if you do that, you eventually find a differing elevation - assuming one exists. But what if there were two passwords? One takes you to a low-elevation fractal world that corresponds with a low reward, because it is a honeypot. The other takes you to the fractal zoom where the elevation map is conditioned on you having root access to the system.
This argument shows that we would actually need to search over every point to get the best answer possible. Yet if we do that, we have to search over the entire continuous distribution for our policy. Since by definition there are an infinite number of states, a computer with infinite search speed can't enumerate them; there is another infinite fractal under every policy choice that also needs full enumeration. We have non-termination by a diagonalization argument, even for a computer with infinite speed.
Now observe that in our reality passwords exist. Less extreme - notice that reacting to policy choice in general, for example, moving out of the way of a car that drives toward you but not changing the way you would walk if it doesn't, isn't actually an unusual property in decision problems. It is normal.
I get what you're saying about recursive adversarial problems and their fractal nature, but this is exactly what GANs do to great success, despite the fact that it's hard. Yes, they have to train a lot slower, but learning general strategies and patterns in opponent behaviour still works.
Your password example, on the other hand, is a discrete, non-differentiable example. If it were differentiable - for example, if instead of a true/false you got an edit distance to the real password - then passwords would be trivial to crack.
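To see how fatally a leaked distance undermines a password check, here is a sketch (mine; Hamming distance over a fixed length stands in for edit distance, and the secret is hypothetical):

    import string

    SECRET = "hunter2"  # the attacker never reads this, only distance()

    def distance(guess):
        # an oracle that leaks how many positions are wrong
        return sum(a != b for a, b in zip(guess, SECRET))

    def crack(length):
        guess = ["a"] * length
        for i in range(length):
            for c in string.printable:
                trial = guess.copy()
                trial[i] = c
                if distance("".join(trial)) < distance("".join(guess)):
                    guess = trial  # keep any change that descends
        return "".join(guess)

    print(crack(len(SECRET)))  # length * alphabet guesses, not alphabet ** length

With only a true/false oracle, the same search degenerates into brute force, which is the point about non-differentiable landscapes.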
I am talking about decision problems; you are talking about learning problems. These are different. Skip past the idea that you need to learn something. You've finished doing so.
What happens once we learn an approximation of that landscape - a map that has error, that doesn't correspond fully with the territory?
The cognitive bias framing calls the map biased, but if you generalize from that to a more global sense of irrationality, the reasoning is in error. In a more particular situation you have a simpler game tree, because it is just the game tree under that node. The lifting of constraints produces the ability to have further insight - the map has to be an approximation.
Don't reach for edit distance; make the boolean a Maybe Boolean which needs further resolution. See that the approximation is demanded because the world isn't set up to allow all things to be learnable. My honeypot example is simpler than reality - there exist passwords for which guessing the password but getting the honeypot resolves to the learner being jailed. Generally the learner in the actual game wouldn't even get infinite guesses, but I made the problem simpler to expose the problem complexity in terms that learning theory would be more familiar with - the elevation maps of the error landscape that learners like to slide down.
Decision problems are a subset of learning problems. As soon as someone can simulate your environment there is no negative consequence to further exploring the solution space via differentiable evaluation methods which allow efficiently training an optimal player.
Your intuitions are steering you wrong. Think about this from first principles in light of some of the corrections I'm going to provide:
> which allow efficiently training an optimal player.
Training an optimal player is not possible in practice. We know, and have known for decades, the mathematics for optimal play. Since we know it, we are able to calculate the amount of space such a solution would take up in memory. Again, this is a studied thing. Here is Peter Norvig in Artificial Intelligence: A Modern Approach (page 173) telling you the same thing: "Because calculating optimal decisions in complex games is intractable, all algorithms must make some assumptions and approximations."
> Decision problems are a subset of learning problems.
This framing has some benefits - it makes generalization simpler. It has some downsides too - in complicated environments it will only approximate the solution and because of that there will be times where it gets things wrong.
In theory you have, at first, an intractable problem at initial training time. Then when the game begins and play has progressed, you have a more tractable problem, because the information available to you eliminates parts of the game tree from consideration. The result is that we actually have two learning problems, not one. One is computed prior to the game. The other is computed during the game.
This theoretical issue has been studied and found to exist in practice by DeepMind. They tried training agents that didn't use tree search and just used the learned heuristic. These lost to agents that also used tree search.
Here is a section from a talk by Noam Brown - he briefly covers your intuition and why it breaks down.
This is also something you can see without reference to theory by looking at the physical progress on optimal solutions. Chess, for example, has solutions via endgame tablebases, but only for the more specific positions you reach near the end of the game tree. It is widely understood that we don't have enough memory to store the full solution to the game.
> As soon as someone can simulate your environment there is no negative consequence
This is a non-physical claim. There is obviously a cost to computation. It consumes both energy and time. Our best understanding is that we have a finite amount of these. Your theoretical approach isn't physically real.
> As soon as someone can simulate your environment...
It doesn't become easy at this point. It remains intractable.
A very simple example of why it doesn't get easy is the halting problem from computer science.
A more complicated example that you will have to really think about in order to understand is the nature of the equilibrium adversarial strategy. It is defined with respect to an oracle - something which would be able to perfectly simulate its strategy. And it is trying not to lose to an oracle; it is assuming you have a very good map.
You've got to remember - your simulation is your map - it isn't the territory. When you play, you aren't playing on your map. You are playing in the territory via your map. The equilibrium strategies were already assuming you had a map. So they aren't trying to make it easy for your map to give you the right answer. They are trying to make some places un-mappable.
Again - remember the real world. Do I know your password? Why not? And what is my password, if it is so easy to know it?
The algorithms have this tendency: they use counterfactual reasoning and assume, when making their decisions, that their opponent is a Nash player like themselves. Sometimes they don't have a Nash opponent, but they persist in the assumption anyway. In the cognitive bias framing, this tendency is an error. In the game theoretic framing, it corresponds with minimizing the degree to which you can be exploited. You can find times where the algorithm plays against something that isn't Nash, and so it was operating according to a flawed model. You can call it biased for assuming that others operated according to that model. From a complexity perspective, this assumption lets you drop an infinite number of continuous strategy distributions from consideration - with strong theoretical backing for why it won't hurt you to do so, since Nash is optimal according to some important metrics.
- Attentional bias
The tendency to pay attention to some things and not others. One place we do this is alpha-beta pruning; you can find moves involving sacrifice that show the existence of this bias. The conceit in the cognitive bias framing is that this is stupid, because some of the ignored things might be important. The justification is that some things are more promising than others and we have a limited computational budget: better to stop exploring the unpromising branches and direct effort to where it pays off. Likewise, the cognitive bias model would turn something like an upper confidence bound tree search - which balances the explore/exploit dynamic as part of approximating the Nash equilibrium, weighting the action values from promising rollouts more highly - into erroneous reasoning, a lesser form of anchoring as it relates to attentional bias, just because it doesn't choose to explore everything. (A minimal alpha-beta sketch appears just after this list of examples.)
- Apophenia
Hashing techniques are used to reduce dimensionality. There is an error term here, but you gain faster reasoning speed. This is seen in blueprint abstraction - the poker example I gave - since we hash states down by similarity to bucket similar things together. This gives rise to things like selective attention (another bias, kind of related to this general category).
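As promised above, here is a minimal generic alpha-beta sketch (my own; children and value are placeholders for a real game). The break statements are the attentional bias in its purest form: the decision to stop looking at a subtree that provably cannot change the answer.

    def alphabeta(node, depth, alpha, beta, maximizing, children, value):
        kids = children(node)
        if depth == 0 or not kids:
            return value(node)
        if maximizing:
            best = float("-inf")
            for child in kids:
                best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
                alpha = max(alpha, best)
                if alpha >= beta:
                    break  # prune: the rest of this subtree cannot matter
            return best
        best = float("inf")
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
            beta = min(beta, best)
            if alpha >= beta:
                break  # the minimizer ignores refuted branches the same way
        return best

    # toy demo: a one-move game where the maximizer picks the best of three leaves
    tree = {"root": ["a", "b", "c"]}
    leaf_values = {"a": 1, "b": 5, "c": 3}
    print(alphabeta("root", 1, float("-inf"), float("inf"), True,
                    children=lambda n: tree.get(n, []),
                    value=lambda n: leaf_values.get(n, 0)))  # -> 5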
Jumping ahead to something like confirmation bias: the heuristics all these algorithms use are flawed in various ways. The algorithms see that they are flawed after a node expansion and update their beliefs, but they don't update the heuristic. In fact, if a flawed heuristic was working well enough to win, we would have greater rather than lesser confidence in the bias.
---
Putting all that aside, I would caution against too much specificity in understanding my point. I think approaching it in this direction - very specific examples - is horrible, because it directs attention to the wrong things; when you look at specific examples you're always in a more specific situation, and a more specific situation is more computationally tractable than the general situation the algorithm was handling. So trying to focus on examples will give you weird inversions where the rules that applied in general don't apply to the specific situation.
You need to come about it from the opposite direction - from the problem descriptions to the necessary constraints on your solution. Then it happens that the error in reasoning is a natural result of trying to do well.
It sounds like you're talking about, or at least brushing up against, prudential judgement[0]. Sometimes, the optimal move is not to seek the optimum.
An obvious class of problems is where determining the optimum takes more time than the lifetime of the problem. Say you need to write an algorithm at work that does X, and you need X by tomorrow. If it would take you a week to find the theoretical optimum, then the optimum in a "global" sense is to deliver the best you can within the constraints, not the abstract theoretical optimum. The time to produce the solution is part of the total cost. An imprudent person would either say it's not possible, or never deliver the solution in time.
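That class of problem is exactly what anytime algorithms formalize. A sketch (mine, with time.monotonic() as the clock): keep the best answer found so far and ship whatever you have when the deadline hits, rather than blocking on the true optimum.

    import time

    def best_within_deadline(candidates, score, budget_seconds):
        start = time.monotonic()
        best, best_score = None, float("-inf")
        for candidate in candidates:  # possibly a huge or lazy stream
            if time.monotonic() - start > budget_seconds:
                break  # out of time: deliver the best found so far
            s = score(candidate)
            if s > best_score:
                best, best_score = candidate, s
        return best  # good-within-budget, not provably optimal

    # e.g. scan a large range for 10 ms and return the best candidate scored in time:
    print(best_within_deadline(range(10**9), lambda x: -abs(x - 12345), 0.01))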
Yeah, that is pretty close to what I'm talking about. I'm coming at it from a different perspective - learning theory - but it seems to be the same overarching idea. I'm extending it a little, though, to something similar to anachronistic reasoning being incorrect: you can't divorce prudential decisions from their context. When you do, judgement of those decisions is flawed, because it doesn't acknowledge the actual constraints the decision was made under.
I wrote a PhD dissertation that made this point in 2013, and proposed a new "heliocentric" economic model.
The key shift is to move the utility function from evaluating a future state of the world to evaluating the utility of an opportunity for attention in the present moment.
All the "cognitive errors" that we humans make are with respect to predicting the future. But we all know what we find appealing in the present moment.
And when we look at economics from this new perspective of the present, we get an economics of attention. We can measure, and model, for the first time, how we choose how to allocate the scarce resource of the internet age: human attention.
I dropped out of academia as soon as I finished this work, and never publicized it broadly within academia, but I still believe it has great potential impact for economics, and it would be great to get the word out.
>> All the "cognitive errors" that we humans make are with respect to predicting the future. But we all know what we find appealing in the present moment.
I like to say that most human problems are a result of the conflict between short- and long-term goals. This is true at all levels, from individuals to small groups, companies, and states. Many, many "failures" can be framed this way. I would say it's not even a problem of predicting the future (though that is an issue) but of failure to prioritize the future over the present.
> epicycles were still not enough to describe what could be observed.
Epicycle-based models were far superior in practice, for example at predicting planetary conjunctions. Heliocentric models did not really catch up until Newton invented gravity and calculus.
And the centre of mass of the solar system (the barycenter, in Newtonian physics) is outside the Sun, so heliocentric models technically never gave solid predictions! Stellar parallax (the main prediction of Copernican theory) was not confirmed until the 19th century! Heliocentrism is mainly a philosophical concept!
I will stick with my primitive old thinking and biases, thank you! If I get mugged a few times in a neighbourhood, I will assume it is not safe. There is no need to overthink it!
I would normally be skeptical of an article that starts with a description of epicycles because it probably means that whatever is going to be described next is totally bullshit.
In this case I’m not so sure. As a plebeian normie, it seems like the “rational actor” model of economics has a lot of problems.
Now, I do believe that all people are, all of the time, trying to achieve their goals and meet their needs as best they can in the given situation and in the way that they best know how.
But this includes a junkie digging through trash for things to sell, a housewife poisoning her abusive husband, and a schizophrenic blowing up mailboxes to stop an international plot against her. It includes a recent widower staying in bed for two weeks. It certainly includes your exclusion of an entire neighborhood and its thousands of inhabitants from your care due to some harrowing experiences.
As I understand it, most economists, and certainly the ones that influence policy, are not really thinking of these things as "rational". To them, rational means "increasing your own wealth or exchanging your money in the most efficient and expedient way possible". And that's very good, because this is the way that corporations and rich people who hire people to manage their money effectively operate. But it doesn't really work for normal people in normal situations. Our lack of information about our surroundings and our incredibly wide array of emotional states doesn't leave a lot of room for rationality.
I won’t really expound on it because this is already so long, but having a single definition of rationality also excludes any possibility of having an informed multicultural viewpoint.
The real question for me is: do you think that a Government is different from, or in a better position than, corporations or people operating in the market when making economic decisions for an entire country?
I believe it isn't. Actually I think it's in a much worse position for the following reasons:
1) A Government is made of people (usually elected directly or indirectly by the majority, based on feelings and all the same irrationality), who in turn will likely be "irrational", or have the wrong incentives (to be elected again).
2) A Government is made of few people compared to all the people there are in the country. They can't possibly know all the details of the economy and the situations people are in, nor can they process it all.
3) Government policies can affect the entire economy. An error there can have bigger repercussions than, for instance, a company making a mistake.
Because the government isn't, (or at least shouldn't be) beholden to short term interests in the same way those other classes are.
Companies will happily destroy everything around them, poison and impoverish entire nations if not reined in, just to turn a quick buck.
People are extremely short-term, local thinkers in general. Yes, I include myself. Most struggle with delayed gratification, let alone retirement planning.
A government, under the purview of democracy, is needed to try to balance these things out and help a society actually operate. Without one, well, you'll likely end up with Grafton, NH, writ large - https://www.vox.com/policy-and-politics/21534416/free-state-...
I'm not arguing for no Government. I'm arguing against Government intervention in the economy. I disagree on many points though (I have seen the Vox article before - I don't have much respect for Vox as it is very biased, so I tend to ignore it, just like The Guardian or Fox News).
"Because the government isn't, (or at least shouldn't be) beholden to short term interests in the same way those other classes are".
A Government is made of people. "Shouldn't" doesn't mean "Isn't". You need to convince me that it "Isn't".
"Companies will happily destroy everything around them, poison and impoverish entire nations if not reined in, just to turn a quick buck."
I disagree. What's the point for an investor to make money and then die of pollution or in a fire? Yes, there are bad apples and stupid people, but you can't design a system that prevents people from doing things just because there are some bad apples. Look at big tech: they are mostly investing voluntarily to reduce fossil fuel dependency. Of course, if there is a market failure that really can be solved by the Government and that would otherwise kill us all, then I'm in favor of (indirect and market-based) intervention.
"People are extremely short-term, local thinkers in general."
Disagree; people have different plans, some shorter term, some longer term.
Also why would they have short-term thinking in the market but long term thinking at the election?
"A government, under the purview of democracy, is needed to try to balance these things out and help a society actually operate"
Very likely so. Although there are some ANCAP proposals I find interesting, like the one from David Friedman, I'm not convinced they would work in practice.
However, my ideal Government doesn't balance anything (I don't think it actually can without doing more harm than good). It instead defines the rules and what constitutes private property, but it doesn't decide how to allocate resources. That would be left entirely to the market.
> What's the point for an investor to make money and then die of pollution or in a fire?
Because then they have the money and you don't. That's all there is to it. Look at the world around you and tell me how corporate greed isn't ruining it.
Actually don't tell me, your philosophy is bankrupt.
Did you miss that I wrote I'm not convinced it would work?
Not that that's an argument anyway. I don't know if you know the version I'm talking about; it doesn't even assume the non-aggression principle.
"Because then they have the money and you don't".
Money can't buy you a planet (yet), and money only has value in a functioning economy.
"Look at the world aroudn you and tell me how corporate greed isn't ruining it"
Most very rich people actually donate most of their time to fixing what they think are the worst problems.
"Actually don't tell me, your philosophy is bankrupt"
Your assertion that people wouldn't damage the environment in the pursuit of self enrichment is possibly the most naive thing I've ever seen written down.
It also flies in the face of 'hard data' like climate change, pollution of waterways and the air, extermination of species and all the other stuff people do in pursuit of money.
But you cannot approximate a complex system like the human brain with a couple of variables. There are not hundreds but millions of biases.
Advanced epicycle models had dozens of moving parts. JPL planetary ephemerides (the modern equivalent, in polynomials) have several million parameters and terabytes of equations.
Gravity - some mystical force that attracts masses together - turns out to be a completely fictional thing. Mass curves spacetime, objects actually move in straight lines, and the fact that you can explain the results of that as an "attractive force" turns out to be a convenient invention. Summing up how all that works in terms of a simple inverse-square force is just an ingenious human observation and invention.
Prior to Newton's conception of gravity as objects attracting one another, the primary model used was the Aristotelian one, in which things tended to go to the "zone" where they belong. Things composed of earth (like a rock) tended to sink towards the center of the earth, while things composed of fire or air tended to rise towards the sky.
I thought this was kind of a lame article. The point of behavioral economics is to systematically understand the heuristics people are using. That is hard to do!
Trying to describe all the heuristics together is difficult and untestable -- i.e., not that good for the experimental research which most behavioral economists practice. Still it is well-known in the field that this is an open question worth theorizing about and I think many people do[1][2], although there is not a consensus on the "best" theory as far as I know.
The author then lauds some impressive, hard-to-conduct, large-scale interventions, which are formidable but don't really teach us about economic theory; in fact, neither was published in an economics journal. Maybe the field should move in that direction, I am agnostic on the point, but the author's argument wasn't coherent in my opinion.
My reading of the article is an application of Chesterton’s Fence to so-called cognitive biases. Not to see them as a mere defect, or proof of our fallibility. But to instead look for the objective for which they perhaps truly are the most reasonable solution.
Example from the article:
> Many costly signals are inherently wasteful. Money, time, or other resources are burnt. And wasteful acts are the types of things that we often call irrational. A fancy car may be a logical choice if you are seeking to signal wealth, despite the harm it does to your retirement savings. Do you need help to overcome your error in not saving for retirement, or an alternative way to signal your wealth to your intended audience? You can only understand this if you understand the objective.
For instance we have a neural-cognitive "bias" toward recognizing moving versus stationary objects. Our attention is prejudiced in favor of things-that-move. This is useful when it comes to detecting potential predators, prey, mates, etc. So a lack of a bias can be a defect to the economic actor.
Conspicuous consumption can be rational, or at least beneficial in the evolutionary sense.
If you are a lawyer with a good practice, you are expected to drive a nice large car. If you drove a battered old economy-class car, your clients might see it as a sign that something is wrong with you (there are several plausible ideas) and shun dealing with you. There go fat fees and investment savings.
Yes, I've heard this said many times about sales people. If they're not visibly wasting money, people are reluctant to hire them because they either aren't good (and thus have no money to waste), or won't be hungry (because they've saved the money they earned by not wasting it). So conspicuous consumption becomes a way to signal that the sales person is capable of reliably generating large incomes.
My own model is pretty good. (pun of the decade) I could fully explain it, but it says it wouldn't be understood. (lame, I know) But we can start with dismissing the rational actor. It was something I loved to pretend was true. This was irrational. There is nothing to suggest people act rationally! We all have heads full of nonsense. Just wait 1000 years and the common man will be laughing his ass off. There is no reason to think we are the exceptional generation. We ponder all these errors we believe to be facts and "rationally" arrive at erroneous conclusions. Then, now that we've figured things out, it will be a rare exception for us to implement it. Instead we do what we always did and stubbornly defend our actions even when we know it's wrong.
It is astonishing to see what we've accomplished despite these rather large shortcomings.
If you disagree, how do you know this emotion isn't triggered by what you would like to be real?
When convinced of anything one grows a bias blind spot of biblical volume. It is a tremendous struggle to look around it.
What he's describing is a marketing model more so than an economic one. This is a theme with behavioural economics. I doubt they'll find one.
The "homo economicus" model has become somewhat of a straw man for behavioural economics to disprove. Realistically the model was never claimed to apply in the types of domains where it's being disproved.
Consumers of the modern world are bombarded with choices and attempts to influence these choices. That's not a world, IMO, that can be "modelled" in the same way that a medieval village can be modelled.
If the discipline must be scientific, maybe the better model is "engineering science" where you have to try and build the thing in order to study it. Computation may exist, in various forms, in nature. But, the way to do computer science isn't observing nature. It's building computers, at least on paper.
The efficient market hypothesis ("homo economicus") model is a prime example. Of course it is wrong. It is a model.
That doesn't mean it is not useful.
Howard Marks (a highly successful investor over many decades) has this to say about it:
> if you ignore the efficient market hypothesis, you’re going to be very disappointed, because you’re going to find out that very few of your active investment decisions work. But if you swallow it whole, you won’t be an investor, and you’ll give up on active success. So the truth, if there is one, has to lie somewhere in between, and that’s what I believe.
I'm not so sure about this. I'm not an expert at all, but I can see in the world around me that biases are real. Sure, heuristics are important in the trade-off between accuracy and speed, so I see that they are necessary. However, isn't the problem that we use the same heuristics to bet on a coin flip as we would to bet on whether we make it past a lion to safety? It seems like the "right" model is only correct in a small number of cases, but we can't change our unconscious biases to fit the situation. It seems that the bias model explains why we make bad decisions in many areas of our lives.
> I will close with a belated defense of the rational-actor model.
> Evolution is ruthlessly rational.
Yes, but that doesn't mean its products or their behaviour are mostly or even partly rational, just that whatever strategy they adopted has worked to ensure survival.
If this is your definition of what's 'rational', that seems a very low bar to set.
That said I enjoyed the article. Frankly it's long seemed to me that economic models based around the idea of the human as a rational actor are fundamentally wrong, because people are not rational (and I include myself in this, more's the pity). Look out at the world, see all the self-sabotage, the unfounded hatred, the violence, the retribution, the temporarily embarrassed millionaires who vote to keep themselves in the gutter.
Can we stretch the idea of "rational" behaviour by looking at evolutionary impulses and saying "well, at X point in the past, this impulse may have helped tribal cohesion and increased the chances of group survival, even at the cost of blah blah blah". Sure we can, sure. But that behaviour is not then "rational" in the society we live in today.
So yeah, modelling of populations based on rational self interest really does look like it has a foundational error. An economics based around a much more chaotic model of human behaviour, with some rationalities built in, is probably needed.
It's not only that we can't know what the optimal choice is. We also can't know how far the benefits of our choice are from the benefits of the optimal choice.
And what would be the expense of figuring out what the optimal choice is?
I got into this way of thinking this morning, starting with the definition of "Technical Debt". I would say it is the cost of moving from the current implementation to the optimal implementation.
But we don't know what the optimal implementation would be. That means we can't even begin to estimate how much it would cost to refactor the current implementation into the optimal implementation.
Therefore I conclude that "Technical Debt" is consultant-speak. Something you can sell without clearly explaining what you are selling.
I have several problems with this. First, we seem to be assuming that we can figure out what someone's optimum choice should be. Second, the "biases" seem to come from studies of heavily contrived situations. Third, some people could be rationally inclined but just have weak reasoning ability.
In short, I'm still mildly skeptical that we've really disproven the rational actor model.
Maybe the rational actor model is the heliocentrism we're looking for. People want it to be false, in order to justify paternalism.
I once read that in the century after Newton, the French Academy offered a prize for evidence that disproved Newtonian mechanics, and they awarded it several times before finally giving up. The disproofs were all flawed.
And nowadays Newtonian mechanics is disproven, and we still apply it broadly, because it's still useful. It just doesn't paint the full picture. Same for the geocentric world view. We all know that the Earth isn't the center of the solar system, but we still measure days, years, seasons, etc. as if it were. Why? Because for most uses it is the simpler and more helpful model.
I think something similar could happen here. We come up with a more accurate model that requires fewer exceptions, but is harder to apply/work with day-to-day.
Maybe we need a better theory of rationality (I assume there is something like that). The author says "understanding objectives is important" - since objectives can drastically differ, so would the definition of "rationality" differ. So we might still have a "rational actor"; only our assumptions about rationality might be wrong.
>> If your body of knowledge is a list of unconnected phenomena rather than a theoretical framework, you lose the ability to filter experimental results by whether they are surprising and represent a departure from theory.
Sorry to make everything about AI and machine learning, but for a moment there, I thought this was precisely about AI and machine learning.
For the amount of effort put into customer tracking, big data, and using machine learning to crunch on the results, the advertising and marketing industry should have this nailed by now. If they don't, maybe it's impossible.
Even the Business Roundtable doesn't follow the "rational model". In 2019 the Business Roundtable, which consists of America's largest companies, said corporations should focus not only on making profits, but also on:
- Delivering value to our customers.
- Investing in our employees.
- Dealing fairly and ethically with our suppliers.
- Supporting the communities in which we work.
- Generating long-term value for shareholders.
What he is talking about is more of a marketing model than an economic one. This is a common theme in the study of how people act. I don't think they will.
Obligatory quote: "All models are wrong, but some are useful".
I realized about myself that I become better at decision-making the more easily I can switch to the "appropriate model" for a given situation. Not even physics can get a unified model, and the primitives in the social sciences (humans, memories, desires, education) are all as fuzzy as can be.
The article does mention that "rational-actor" might actually be the best we can come up with, but that's only if we always have to work with the same model.
We could have a newtonian/relativistic style pair of models for people, based on urgency or marginal utility threshold (different rules apply to your last dollar), but that threshold has to be subjective, and we seem to have lost all hope in anything subjective.