"Deep Learning" is not itself a candidate for anything, because it's not any single algorithm, but a category of approaches.
Deep Learning generally refers to machine learning algorithms that stack multiple layers of simpler functions to build up more complicated functions, then optimize all the parameters to best fit your training set and generalize to new samples (the hard part). Though it usually refers to neural networks, I don't think there's any reason it doesn't also apply to other layered approaches, as long as a relatively unified learning algorithm is applied across the whole system.
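To make that concrete, here's a minimal sketch of the "stack of simple functions" idea: two layers, each just a linear map plus a nonlinearity, with one unified learning rule (gradient descent via backprop) tuning every parameter at once. This is a toy illustration, not any particular published algorithm; XOR is used only because a single layer can't represent it.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward: compose the simple layer functions
    h = np.tanh(X @ W1 + b1)      # layer 1
    p = sigmoid(h @ W2 + b2)      # layer 2, stacked on top
    # backward: one learning algorithm for the whole stack
    dp = p - y                    # cross-entropy gradient at the output
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2) # chain rule back through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

The point isn't the specific architecture; it's that nothing in the training loop treats the two layers differently, which is what "relatively unified learning algorithm" means here.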
There are clearly many different deep learning algorithms, even if you just count the permutations of tricks you can choose from to improve layered NN generalization. Though to be fair, I think very good progress is being made toward developing "better" algorithms, in the sense that newer ones (e.g. RBM pretraining + dropout) usually perform better than older ones no matter what data you use them on (network architecture is another matter entirely).
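Dropout is a good example of how simple those tricks can be. A hedged sketch (the "inverted dropout" variant; names here are my own, not from any library): during training each hidden unit is randomly zeroed, and the survivors are rescaled so the expected activation is unchanged at test time.

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(h, p_drop=0.5, train=True):
    """Randomly zero units during training; pass through at test time."""
    if not train:
        return h  # inverted dropout: no extra rescaling needed at test
    mask = rng.random(h.shape) >= p_drop          # keep with prob 1 - p_drop
    return h * mask / (1.0 - p_drop)              # rescale survivors

h = np.ones((4, 6))                  # pretend hidden-layer activations
h_train = dropout(h, p_drop=0.5)     # roughly half the units zeroed
h_test = dropout(h, train=False)     # untouched at test time
```

You'd apply this to the hidden activations between layers during training; the effect is something like training an ensemble of thinned networks that share weights.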
One of the most interesting and general things about Deep Learning is that unsupervised learning approaches can be used at the bottom of the "stack" to learn more useful high-level features from the input data. This ends up making your higher-level supervised learners more effective on "Real Stuff".
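A sketch of that bottom-of-the-stack idea, under the assumption that a one-layer autoencoder stands in for the fancier options (RBMs are the classic choice; an autoencoder is the simpler cousin): it learns to reconstruct unlabeled inputs, and its hidden activations then become features for whatever supervised learner sits on top.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))   # unlabeled data: no targets needed

W = rng.normal(scale=0.1, size=(10, 4)); b = np.zeros(4)
W_out = W.T.copy(); b_out = np.zeros(10)   # separate decoder weights

def mse():
    return np.mean((np.tanh(X @ W + b) @ W_out + b_out - X) ** 2)

mse_before = mse()
for _ in range(500):
    h = np.tanh(X @ W + b)           # encode: compressed features
    X_hat = h @ W_out + b_out        # decode: reconstruct the input
    err = X_hat - X                  # reconstruction error drives learning
    dW_out = h.T @ err / len(X); db_out = err.mean(0)
    dh = (err @ W_out.T) * (1 - h**2)
    dW = X.T @ dh / len(X); db = dh.mean(0)
    W_out -= 0.1 * dW_out; b_out -= 0.1 * db_out
    W -= 0.1 * dW; b -= 0.1 * db
mse_after = mse()

features = np.tanh(X @ W + b)   # hand these to a supervised learner
```

No labels were used anywhere above, which is the whole appeal: the expensive labeled data only has to train the small model on top of `features`.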
Yeah, but will it satisfy us if we can't see it making analogies, can't see its semantics, can't identify with it? That same lack of breakdown into pieces we understand will make it hard to tweak and advance NNs beyond "good categorizers".