I think the main potential of analogue computing lies in creating complex networks of feedback loops where different regions of stability correspond to different machine states. I've seen models of neural network memory where the interconnection of neurons behaves like a symmetric linear transform combined with an amplifier followed by vector normalisation. The transform maps the sensory input into a reduced-dimensional space (where each dimension corresponds to a possible memory). The reduced vector is amplified via the neural response function and then transformed back into the sensory input space through the inverse of the original transform. That creates a feedback loop where (because of the shape of the neural response function) whatever the input is, the system converges to a vector corresponding to exactly one of the memory vectors. It basically picks out and amplifies the stored memory closest to the sensory input.
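To make that concrete, here's a minimal numerical sketch of such a loop. The memory matrix, the power-law response function, and the noise level are all my own illustrative assumptions, not any particular published model:

```python
import numpy as np

# Illustrative sketch of the memory loop described above. The memory matrix,
# the response function, and the noise level are assumptions made for this
# example, not a specific published model.
rng = np.random.default_rng(0)

n_sensory, n_memories = 16, 4
M = rng.standard_normal((n_memories, n_sensory))   # rows = stored memories
M /= np.linalg.norm(M, axis=1, keepdims=True)      # unit-length rows

def response(a, gain=3.0):
    """Neural response: disproportionately amplify the largest components."""
    a = np.sign(a) * np.abs(a) ** gain
    return a / np.linalg.norm(a)                   # vector normalisation

x = M[2] + 0.2 * rng.standard_normal(n_sensory)    # noisy view of memory 2
for _ in range(50):
    a = M @ x        # map sensory input into the reduced memory space
    a = response(a)  # amplify via the neural response function
    x = M.T @ a      # map back to sensory space (transpose as the "inverse")
    x /= np.linalg.norm(x)

print("recovered memory index:", np.argmax(np.abs(M @ x)))  # expect 2
```

With memories that are roughly orthogonal, the power nonlinearity keeps amplifying whichever memory has the largest overlap with the current state, so the loop settles on a single stored pattern.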
That kind of system is a huge simplification, but similar things could be done with analogue computing. In particular, I think probabilistic computing could be done by setting up network feedback loops that correspond to underlying Bayesian networks, where the stable points correspond to the highest-likelihood parameterisations. (I may actually do some work in this direction next year, because it's pretty cool stuff.)
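Purely as a speculative illustration of what "stable points correspond to highest-likelihood parameterisations" could mean: here the feedback loop is gradient ascent on the log-likelihood of a hypothetical two-node network A → B, and the counts and all names are invented for the example:

```python
import numpy as np

# Speculative toy: the "feedback loop" is gradient ascent on the
# log-likelihood of a two-node Bayesian network A -> B, so its stable
# point is the maximum-likelihood parameterisation. The network and the
# observed counts are invented for illustration.

counts = np.array([[30.0, 10.0],   # counts[a, b]: observations of (A=a, B=b)
                   [ 5.0, 55.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_lik(theta):
    # theta = [logit P(A=1), logit P(B=1|A=0), logit P(B=1|A=1)]
    p, q0, q1 = sigmoid(theta)
    pa, q = np.array([1 - p, p]), np.array([q0, q1])
    ll = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pb = q[a] if b == 1 else 1 - q[a]
            ll += counts[a, b] * np.log(pa[a] * pb)
    return ll

# Feedback loop: repeatedly nudge the parameters up the likelihood gradient
# (finite differences keep the sketch dependency-free).
theta, eps, lr = np.zeros(3), 1e-5, 0.02
for _ in range(2000):
    grad = np.array([(log_lik(theta + eps * e) - log_lik(theta - eps * e))
                     / (2 * eps) for e in np.eye(3)])
    theta += lr * grad

p, q0, q1 = sigmoid(theta)
print(f"P(A=1)={p:.2f}  P(B=1|A=0)={q0:.2f}  P(B=1|A=1)={q1:.2f}")
# Closed-form MLE from the counts: 0.60, 0.25, and ~0.92, respectively.
```

Of course a real analogue system wouldn't compute gradients numerically; the hope would be that the physical dynamics of the network do the equivalent settling on their own.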