Deep Learning on Hardware-Limited Devices

(2016-2017)

I worked on a series of papers that promote an information-theoretic view of neural network compression, energy efficiency, and speed-ups. I am currently interested in sequence and pattern compression.


Improved Bayesian Compression (2017)

Marco Federici, Karen Ullrich, Max Welling [PDF] [BIBTEX]
Accepted at the Bayesian Deep Learning workshop, Conference on Neural Information Processing Systems (NIPS) 2017, Long Beach, USA.

Compression of Neural Networks (NNs) has become a highly studied topic in recent years. The main reason for this is the demand for industrial-scale usage of NNs, such as deploying them on mobile devices, storing them efficiently, transmitting them over band-limited channels and, most importantly, doing inference at scale. In this work, we propose to combine the Soft Weight-Sharing and Variational Dropout approaches, each of which shows strong results on its own, to define a new state of the art in model compression.
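
As a rough illustration of how the two ingredients might fit together (my sketch, not the paper's implementation): keep a factorized Gaussian variational posterior over the weights, as in variational dropout, but replace the standard prior with the learnable Gaussian mixture from soft weight-sharing, estimating the resulting KL term by Monte Carlo. All names below (kl_q_to_mixture, n_samples, and so on) are illustrative.

    import math
    import torch

    def kl_q_to_mixture(mu_q, log_sigma_q, log_pi, mu_p, log_sigma_p, n_samples=8):
        # Monte-Carlo estimate of KL(q(w) || p(w)) for a factorized Gaussian
        # posterior q (as in variational dropout) and a Gaussian-mixture
        # prior p (as in soft weight-sharing).
        sigma_q = log_sigma_q.exp()
        # reparameterized samples from q: shape (n_samples, n_weights)
        w = mu_q + sigma_q * torch.randn(n_samples, *mu_q.shape)
        log_q = (-0.5 * math.log(2 * math.pi) - log_sigma_q
                 - 0.5 * ((w - mu_q) / sigma_q) ** 2)
        # log p(w) under the mixture: log-sum-exp over the K components
        w_k = w.unsqueeze(-1)                          # broadcast against K
        log_comp = (torch.log_softmax(log_pi, dim=0)
                    - 0.5 * math.log(2 * math.pi) - log_sigma_p
                    - 0.5 * ((w_k - mu_p) / log_sigma_p.exp()) ** 2)
        log_p = torch.logsumexp(log_comp, dim=-1)
        return (log_q - log_p).mean(dim=0).sum()

During training one would minimize the task negative log-likelihood plus this KL term (scaled appropriately for minibatches), learning the mixture parameters jointly with the posterior.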

Bayesian Compression for Deep Learning (2017)

Christos Louizos, Karen Ullrich, Max Welling [PDF] [BIBTEX]
Accepted paper at the Conference on Neural Information Processing Systems (NIPS) 2017, Long Beach, USA.

Compression and computational efficiency in deep learning have become problems of great significance. In this work, we argue that the most principled and effective way to attack them is by taking a Bayesian point of view, where through sparsity-inducing priors we prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed-point precision at which to encode the weights. Both factors significantly contribute to achieving the state of the art in terms of compression rates, while staying competitive with methods designed to optimize for speed or energy efficiency.
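
The second novelty can be pictured with a small numpy heuristic (a sketch of the intuition only, not the paper's exact estimator): the wider a weight's posterior, the coarser the fixed-point grid it can tolerate. The helper names assign_bits and quantize are hypothetical.

    import numpy as np

    def assign_bits(post_mean, post_std, max_bits=32, eps=1e-12):
        # Encode weights with just enough fixed-point levels to resolve
        # the posterior standard deviation: large posterior uncertainty
        # means fewer distinguishable values, hence fewer bits.
        dynamic_range = np.abs(post_mean).max()
        levels = dynamic_range / (post_std + eps)   # distinguishable steps
        bits = np.ceil(np.log2(levels + 1.0))
        return np.clip(bits, 1, max_bits).astype(int)

    def quantize(post_mean, bits):
        # Round each weight onto its assigned fixed-point grid.
        scale = (2.0 ** bits - 1) / (np.abs(post_mean).max() + 1e-12)
        return np.round(post_mean * scale) / scale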

Soft Weight-Sharing for Neural Network Compression (2017)

Karen Ullrich, Edward Meeds, Max Welling [PDF] [BIBTEX]
Accepted paper at the International Conference on Learning Representations (ICLR) 2017, Toulon, France.

The success of deep learning in numerous application domains has created the desire to run and train deep networks on mobile devices. This, however, conflicts with their compute-, memory- and energy-intensive nature, leading to a growing interest in compression. Recent work proposes a pipeline that involves retraining, pruning and quantization of neural network weights, obtaining state-of-the-art compression rates. In this paper, we show that competitive compression rates can be achieved with a version of "soft weight-sharing". Our method achieves both quantization and pruning in one simple (re-)training procedure. This point of view also exposes the relation between compression and the minimum description length (MDL) principle.
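
For intuition, here is a minimal PyTorch sketch of the complexity term that drives soft weight-sharing: the negative log-likelihood of all network weights under a learnable Gaussian mixture prior. The function name mixture_prior_nll and the trade-off weight tau are illustrative, not from the paper.

    import math
    import torch

    def mixture_prior_nll(weights, log_pi, mu, log_sigma):
        # Negative log-likelihood of all weights under a learnable mixture
        # prior p(w) = sum_j pi_j N(w | mu_j, sigma_j^2). Minimizing this
        # clusters weights around the component means (quantization); a
        # component pinned at mu = 0 with a large mixing proportion absorbs
        # prunable weights.
        w = weights.view(-1, 1)                       # (n_weights, 1)
        log_pi = torch.log_softmax(log_pi, dim=0)     # (K,)
        sigma = log_sigma.exp()
        log_comp = (log_pi - 0.5 * math.log(2 * math.pi) - log_sigma
                    - 0.5 * ((w - mu) / sigma) ** 2)  # (n_weights, K)
        return -torch.logsumexp(log_comp, dim=1).sum()

    # total objective during (re-)training; tau trades accuracy vs. compression:
    # loss = task_loss + tau * mixture_prior_nll(all_weights, log_pi, mu, log_sigma)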
