1. Add noise in the training to avoid overfitting
One common problem in deep learning, artificial neural networks (ANNs) and machine learning in general is overfitting. An overfitted model latches onto a few specific features of the training data and does not generalise well to new data.
As a consequence, by analysing how an ANN works, it is feasible to trick it into mistakes. Encountering, by chance, noise that makes the ANN mistake a panda for a vulture is statistically unlikely, but it shows that these models can make big, stupid mistakes. As ANNs are deployed more and more widely, this kind of mistake becomes more and more probable, and will eventually happen more and more often.
A simple solution is to add noise to the training set, so that ANNs become more resilient to noise and less prone to overfitting. Noise can take many forms, from white noise to more structured perturbations. In the particular case of ANNs used for image processing, the spatial nature of the data also allows geometric distortions, e.g. lens distortions. This is not common practice, as far as I know.
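As a minimal sketch of the idea, the following (illustrative, not from any particular library) augments a batch of images with white-noise copies before training; the function name and the assumption that pixel values lie in [0, 1] are mine:

```python
import numpy as np

def augment_with_noise(images, sigma=0.05, copies=1, seed=0):
    """Return the original images plus noisy copies.

    Adds zero-mean Gaussian (white) noise to pixel values assumed
    to lie in [0, 1]; sigma controls the noise strength.
    """
    rng = np.random.default_rng(seed)
    augmented = [images]
    for _ in range(copies):
        noisy = images + rng.normal(0.0, sigma, size=images.shape)
        # Keep pixel values in the valid range after perturbation.
        augmented.append(np.clip(noisy, 0.0, 1.0))
    return np.concatenate(augmented, axis=0)

# A toy batch of 8 "images" of 28x28 pixels:
batch = np.random.default_rng(1).random((8, 28, 28))
augmented = augment_with_noise(batch, sigma=0.05, copies=2)
print(augmented.shape)  # (24, 28, 28)
```

Training on the augmented batch instead of the clean one is what makes the network see each example under several slightly different appearances.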
2. Analyse, synthesise, generalise
Currently, ANNs perform some analysis of the data, even if only implicitly through the adjustment of their weights. That analysis is very specific, hence the overfitting.
Approaches like self-organising maps allow synthesising what has been learned from the examples, and then simplifying the ANNs by removing neurons and connections that were perhaps not so useful. This is particularly relevant for optimisation, as deep learning models can become very complex and computationally expensive.
Self-organising maps seem to be more concerned with adding new neurons and connections, though. If that capability is applied to the analysis step, nothing good will come of it, only worse overfitting. The ability to make an ANN more complex has to be used instead to generalise the network to more cases and more complex problems.
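The "removing connections that were perhaps not so useful" part can be sketched very simply. The following is an illustrative magnitude-based pruning step (my own toy example, not a self-organising map): connections whose absolute weight falls below a chosen percentile are zeroed out.

```python
import numpy as np

def prune_weights(weight_matrix, fraction=0.5):
    """Zero out roughly the smallest-magnitude `fraction` of connections.

    A crude form of network simplification: the threshold is the
    `fraction`-percentile of absolute weights, and anything below
    it is removed (set to zero).
    """
    threshold = np.percentile(np.abs(weight_matrix), fraction * 100)
    mask = np.abs(weight_matrix) >= threshold
    return weight_matrix * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))          # one dense layer's weights
pruned, mask = prune_weights(w, fraction=0.5)
print(mask.mean())  # roughly 0.5 of the connections survive
```

Real pruning schemes then retrain the smaller network to recover accuracy; the point here is only that simplification, unlike growth, pushes against overfitting rather than towards it.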
3. Learn from few examples
AlphaGo defeated Lee Sedol in a fairly consistent way (four out of five games). On the one hand, Lee Sedol learned a lot from those games, especially from the four he lost, but probably from all five. On the other hand, whatever learning AlphaGo performed on those five games is probably negligible, and it very likely learned nothing at all from the games it won.
AlphaGo learned from millions of games before facing Lee Sedol, more games than any human could ever play in a lifetime (or would probably want to play, given the chance to live just for that). Five more games are not going to make a big difference, and arguably they should not, given that extensive experience. While this has a positive side for our chances of defeating Skynet, it is disappointing when we expect AI to do more complex and general things.
There are approaches that use fewer training cases and learn more from each of them, support vector machines and case-based reasoning being two of them. In the end, connectionists and deep learning experts may find something useful in other AI approaches, if they are open to collaboration.
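To make the contrast concrete, here is a minimal case-based reasoner (a toy sketch of my own, not a reference implementation): it retains a handful of labelled cases and solves a new problem by reusing the solution of the nearest stored case, so even three examples yield a sensible answer.

```python
import numpy as np

class CaseBase:
    """A minimal case-based reasoner: store labelled cases,
    solve new problems by reusing the nearest stored case."""

    def __init__(self):
        self.cases = []      # feature vectors of past problems
        self.solutions = []  # their known solutions

    def retain(self, features, solution):
        self.cases.append(np.asarray(features, dtype=float))
        self.solutions.append(solution)

    def solve(self, features):
        query = np.asarray(features, dtype=float)
        # Retrieve the most similar case by Euclidean distance
        # and reuse its solution directly.
        distances = [np.linalg.norm(query - c) for c in self.cases]
        return self.solutions[int(np.argmin(distances))]

# Just three training cases, rather than millions of games:
cb = CaseBase()
cb.retain([0.0, 0.0], "low")
cb.retain([1.0, 1.0], "high")
cb.retain([0.5, 0.5], "medium")
print(cb.solve([0.9, 0.8]))  # high
```

A deep network given three examples would either memorise them or learn nothing useful; the case-based approach degrades gracefully precisely because it leans on each individual example.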