Model assistance in deep learning

We are in an era in which new deep neural networks are constantly being designed. Most efforts are devoted to hand-crafted engineering of a single network, including architecture design, parameter tuning, and optimization.

Although diverse deep neural networks have been developed, how to efficiently leverage the full potential of these models and make use of their collaboration remains a challenging and unsolved issue.

We have submitted a paper to IJCAI-2018, "Model Assistance with Collaborative Learning", which achieves model assistance in a win-win process.

The key idea behind collaborative learning is to share mutual knowledge among the involved models during their learning processes, so that each model provides additional supervision to the others. We realize this with a mutual knowledge base (MKB), which consists of an encoder-decoder structure, a metric learning module, and a verification part. The encoder transfers arbitrary intermediate feature maps of the involved models into unified embeddings in a mutual space; these embeddings provide additional supervision through metric learning and verification, and are also sent back to the original networks so that the whole system remains end-to-end trainable.
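To make this flow concrete, below is a minimal PyTorch sketch of the idea. The layer shapes, the gating used to re-inject decoded embeddings, and the names SmallNet, MutualKnowledgeBase, and mutual_supervision are illustrative assumptions rather than the paper's actual implementation; in particular, the metric learning module and verification part are collapsed here into a single embedding-alignment loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallNet(nn.Module):
    """One of the collaborating classifiers; its intermediate feature map is shared with the MKB."""
    def __init__(self, num_classes=10, width=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, num_classes))


class MutualKnowledgeBase(nn.Module):
    """Encoder-decoder that maps a feature map into a shared mutual space and back,
    so the embedding can both provide extra supervision and be re-injected into the
    original network, keeping everything end-to-end trainable."""
    def __init__(self, channels=32, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(channels, embed_dim))
        self.decoder = nn.Linear(embed_dim, channels)

    def forward(self, fmap):
        z = F.normalize(self.encoder(fmap), dim=1)               # unified embedding in mutual space
        gate = torch.sigmoid(self.decoder(z))[:, :, None, None]  # decode back to a per-channel signal
        return z, fmap * gate                                     # embedding + modulated feature map


def mutual_supervision(z_a, z_b):
    """Stand-in for the metric-learning / verification losses: pull the two models'
    embeddings of the same input together in the mutual space."""
    return F.mse_loss(z_a, z_b)


# One illustrative training step with two collaborating models.
model_a, model_b, mkb = SmallNet(), SmallNet(), MutualKnowledgeBase()
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

fmap_a, fmap_b = model_a.features(x), model_b.features(x)
z_a, fmap_a = mkb(fmap_a)          # embed into mutual space and send back
z_b, fmap_b = mkb(fmap_b)
logits_a, logits_b = model_a.head(fmap_a), model_b.head(fmap_b)

loss = (F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
        + mutual_supervision(z_a, z_b))
loss.backward()                    # gradients reach both models and the MKB
```

Because the mutual supervision is symmetric, both networks receive gradients from each other's knowledge, which is what makes the assistance bi-directional rather than one-way distillation.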

Collaborative learning can be applied to any deep neural network and is easily extended to multiple models. Compared with the teacher-student framework, our method enjoys bi-directional assistance and imposes no requirements on the models, such as pre-training or a difference in capability.

Experimental results on image classification tasks demonstrate that our method can efficiently improve the learning ability of all the involved models, achieving superior performance compared with strong baselines and relevant approaches.