Memristors have the interesting property of adapting their resistance over time as current flows through them. Thus they can mimic Hebbian learning, a fundamental property of synapses, in a way that other electronic components cannot easily do.
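To make that concrete, here's a toy sketch of the commonly cited linear dopant-drift idealization of a memristor (an assumption for illustration; real devices are messier, and the resistance values and the constant `k` below are hypothetical):

```python
# Toy memristor under the linear dopant-drift idealization (an assumption;
# real devices are nonlinear). Resistance depends on the net charge that
# has flowed through the device -- the device "remembers" its history.

R_ON, R_OFF = 100.0, 16000.0   # hypothetical limiting resistances (ohms)

def memristance(q, k=1e4):
    """Resistance after net charge q (coulombs) has passed through."""
    w = min(max(k * q, 0.0), 1.0)          # normalized internal state, clamped
    return R_OFF - (R_OFF - R_ON) * w

print(memristance(0.0))     # no charge yet: resistance sits at R_OFF
print(memristance(1e-4))    # after charge flows: resistance drops toward R_ON
```

The point is just that conductance increases with use, which is the synapse-like behaviour the parent comment is referring to.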
No one knows what the architecture of the first AI will look like. However, our current architectures have a big problem (for creating AI) in that they have a very clean separation between data and code. Code is run in the CPU and alters data in RAM and on disk. Brains don't work this way -- there is no separation between data and code. For this reason I think memristors -- while not a silver bullet -- might represent a step in the right direction.
"However, our current architectures have a big problem (for creating AI) in that they have a very clean separation between data and code."
That's a security measure, rather than a fundamental property of transistors. We had to add that property to our architectures.
Simulating Hebbian learning isn't that difficult with transistors and conventional computation, so you don't get that big an advantage right now. If we were brushing up against the fundamental limits then it would matter, but we're a ways away from that yet.
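For example, the basic Hebbian rule ("neurons that fire together wire together") is a one-liner in conventional code. This is a hypothetical sketch, not anything from the thread; the learning rate and activity values are made up:

```python
# Minimal Hebbian learning with ordinary transistor-based computation.
# Weight update: delta_w_i = lr * x_i * y  (strengthen co-active connections).

def hebbian_step(w, x, y, lr=0.1):
    """Return weights strengthened in proportion to pre/post co-activity."""
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]          # synaptic weights
x = [1.0, 0.0, 1.0]          # presynaptic activity (second input silent)

for _ in range(5):
    # postsynaptic activity: weighted input plus a constant drive
    y = sum(wi * xi for wi, xi in zip(w, x)) + 1.0
    w = hebbian_step(w, x, y)

print(w)  # weights for the two active inputs grow; the silent one stays 0
```

No exotic hardware needed -- which is the parent's point that the advantage of memristors is about efficiency near physical limits, not about capability.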