I think the difference between a large language model and a human intelligence is that the human may perform some extra computation to make additional connections on their own.
But other than that, aren't we all just large language models?
Not even remotely close. The difference is so big that it's almost harmful to the discussion to compare the way humans think (which we still don't have a great understanding of) with the way language models work.
I completely agree, but I do think humans have a language model, and considering how we use it to encode and decode the human experience might be useful in figuring out how to improve things like GPT-3.
Personally, I feel that embodiment of some form, in which a vector space for a 'world model' is paired with a language model, is a route forward. If you have a Boston Dynamics robot, say, that has a model for gravity, mass, acceleration, force, object manipulation, etc., and you incorporate those into a language model, you get a much richer latent space from which associations can be made between terms. If you ask GPT-3 the difference between various gaits, e.g. walk, trot, gallop, it's going to draw on other contexts and adjectives used in the vicinity of those terms. But if you enrich it with data from a Spot Mini that can actually execute those gaits, you also have information about velocity, inertia, power consumption and budget, object-detection rates, route-planning horizon, etc.
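The pairing I have in mind can be sketched very crudely: keep one vector per term from the language side and one from the robot's sensor logs, and concatenate them into a joint embedding. This is a toy illustration only; every function name and every number below is invented, and a real system would learn these vectors rather than look them up.

```python
# Toy sketch of enriching gait terms with embodied sensor data.
# All embeddings and feature values here are made up for illustration;
# a real language model / robot would supply learned or measured values.

def text_embedding(term: str) -> list[float]:
    """Stand-in for a language-model embedding lookup (invented values)."""
    table = {
        "walk":   [0.10, 0.80, 0.05],
        "trot":   [0.30, 0.60, 0.20],
        "gallop": [0.90, 0.20, 0.70],
    }
    return table[term]

def robot_features(term: str) -> list[float]:
    """Stand-in for measurements logged while a robot executes the gait:
    [mean velocity, scaled power draw, stride frequency] (invented values)."""
    table = {
        "walk":   [0.4, 0.2, 1.0],
        "trot":   [1.2, 0.5, 2.0],
        "gallop": [3.0, 0.9, 3.5],
    }
    return table[term]

def joint_embedding(term: str) -> list[float]:
    """Concatenate the textual and embodied views into one richer vector."""
    return text_embedding(term) + robot_features(term)

print(joint_embedding("trot"))  # 6-dim vector mixing both views
```

The point of the sketch is just that "gallop" now differs from "walk" not only in which words co-occur with it, but in measured quantities like velocity, so associations can form along axes that text alone never provides.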
> ... A horse trainer once said to me, "Animals don't think, they just make associations." I responded to that by saying, "If making associations is not thinking, then I would have to conclude that I do not think." People with autism and animals both think by making visual associations. These associations are like snapshots of events and tend to be very specific. For example, a horse might fear bearded men when it sees one in the barn, but bearded men might be tolerated in the riding arena. In this situation the horse may only fear bearded men in the barn because he may have had a bad past experience in the barn with a bearded man.