First, no AI that I know of has at its disposal a full-blown model of the world it operates in, whereas most human brains do; and even if that model is imperfect, it is capable of producing fairly accurate simulations (what-if scenarios).
Second, deep learning models, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting (that is, beyond the confines of the model), and are therefore far from capable of doing what humans do, and will remain so limited for a long time to come.
AGI will require the curve-fitting of deep learning, a general model of the world, and the causal inference capabilities of something like AlphaGo, but in a general setting rather than the super-limited world AlphaGo operates in.
So no, AGI will require much more than just curve-fitting abilities.
> no AI that I know of has at its disposal a full-blown model of the world it operates in
This is a field called model-based reinforcement learning, and it's already quite advanced -- there are indeed models that maintain an internal state reflecting the state of the world.
> deep learning models, however much we'd like to think they do, aren't capable of doing proper causal inference in a general setting
This is also addressed by recent models, somewhat. Once you have an abstract world model, searching for high reward can be just a matter of running Markovian simulations on it, guided by reward heuristics (given by a network, of course), much as AlphaGo does. This line of work is also very active right now; one example is the recent MuZero.
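A minimal sketch of that idea -- plan by rolling out imagined trajectories inside a world model and keeping the best first action. Everything here is illustrative: the transition and reward functions are hand-coded toys standing in for the learned networks MuZero would use, and all numbers are made up.

```python
import random

random.seed(0)  # make the toy run reproducible

# Toy stand-in for a learned world model: the state is a position on a
# number line, actions move left or right, reward peaks at position 10.
def transition(state, action):
    return state + action          # action is -1 or +1

def reward(state):
    return -abs(state - 10)        # highest reward at state == 10

def plan(state, n_rollouts=200, horizon=8):
    """Pick an action by simulating random rollouts inside the model."""
    best_action, best_return = None, float("-inf")
    for _ in range(n_rollouts):
        s = state
        actions = [random.choice([-1, 1]) for _ in range(horizon)]
        ret = 0.0
        for a in actions:
            s = transition(s, a)
            ret += reward(s)
        if ret > best_return:
            best_return, best_action = ret, actions[0]
    return best_action

state = 0
for _ in range(15):
    state = transition(state, plan(state))
print(state)  # ends near the high-reward state 10
```

The search never touches the "real" environment during planning; it only queries the model, which is the essential property the comment points at.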
Inference at its core really isn't much more than artful curve fitting (or artful model search, if you like), and it's one of the building blocks of intelligence.
It's all pretty meaningless semantics and guesswork.
Curve fitting means adaptive computation in networks of fairly simple units that allow for fairly general computation, i.e. traversing program space to find a good solution. Equivalently, intelligence is about evolving/searching a program that solves a wide array of tasks: finding programs that map from sensory space to the space of action sequences, maximizing reward.
But you need the right prior structure so that learning and producing action sequences is efficient, or even feasible/reachable. Any additional program structure that aids, e.g., generation and recall of memories, or planning (producing output targeted at solving a goal), can be seen as prior structure that limits and defines the searched program space. You can even regard a planning module as part of the curve fitting, as it simply concerns the last step of producing the output.
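The "traversing program space" view can be made concrete with a toy sketch. Everything here is made up for illustration: the primitive set plays the role of the prior structure (it defines and limits which programs are reachable), and random search stands in for any real learning algorithm.

```python
import random

random.seed(0)

# Prior structure: a fixed set of primitive operations. Programs are
# short sequences of these, so the primitives define the program space.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def fitness(program, examples):
    # Negative total error: higher is better, 0 means a perfect fit.
    return -sum(abs(run(program, x) - y) for x, y in examples)

# Target behaviour: f(x) = 2*x + 2, reachable e.g. as ["inc", "dbl"].
examples = [(0, 2), (1, 4), (3, 8)]

best, best_fit = None, float("-inf")
for _ in range(5000):
    length = random.randint(1, 4)
    candidate = [random.choice(list(PRIMITIVES)) for _ in range(length)]
    f = fitness(candidate, examples)
    if f > best_fit:
        best, best_fit = candidate, f

print(best, best_fit)  # a perfect program scores fitness 0
```

Swap in a different primitive set and the same target may become unreachable, which is the sense in which prior structure "limits and defines the searched program space".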
Therefore, intelligence is "curve fitting".
So the actual question is: How much additional structure over just a large number of simple repeated units is necessary? Nobody knows. Possibly not much. Possibly quite a bit.
But all this is curve-fitting in a more general sense -- fitting the curve of life, gene reproduction. So it is still curve fitting; it is just that a potential AGI is certainly not in the hypothesis space of current deep learning models, and those cannot reach AGI by curve-fitting.
"Causal inference in a general setting" is just your brain running an input through its existing thought and decision processes with a low threshold for a passing answer.
So an ML model running an input through a collection of other models to see if it gets a reasonable answer.
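That mechanism can be sketched in a few lines. The three "models" below are hand-coded scorers standing in for learned networks, and the threshold value is arbitrary; the point is only the accept-if-anything-passes structure.

```python
# Toy sketch: run an input through a collection of models and accept it
# if any of them clears a deliberately low bar for a "passing" answer.
def model_a(x):
    return 1.0 if x % 2 == 0 else 0.2      # prefers even inputs

def model_b(x):
    return 1.0 if x > 0 else 0.1           # prefers positive inputs

def model_c(x):
    return 1.0 if abs(x) < 100 else 0.3    # prefers small magnitudes

MODELS = [model_a, model_b, model_c]
THRESHOLD = 0.5  # low threshold for a passing answer

def plausible(x):
    """Accept x if any model scores it above the threshold."""
    return any(m(x) > THRESHOLD for m in MODELS)

print(plausible(4), plausible(-101))  # True False
```

Here `4` passes because at least one scorer likes it, while `-101` fails every scorer, mirroring the "see if it gets a reasonable answer" check described above.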