Disentanglement by penalizing correlation

Mikael Kågebäck, Olof Mogren (NIPS Workshop on Learning Disentangled Features: From Perception to Control)

An important reason for the success of deep neural networks is their ability to automatically learn representations of the data at increasing levels of abstraction, progressively disentangling the data as the internal transformations are applied. In this paper we propose a novel regularization method that actively penalizes the covariance between dimensions of the hidden layers in a network, driving the model towards a more disentangled solution. The network thereby learns linearly uncorrelated representations, which increases interpretability while maintaining good results on a number of tasks, as demonstrated by our experimental evaluation. Furthermore, the proposed technique effectively disables superfluous dimensions, compressing the representation to the dimensionality of the underlying data. Our approach is computationally cheap and can be applied as a regularizer to any gradient-based learning model.
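To make the idea concrete, here is a minimal sketch of such a covariance penalty in PyTorch. This is not the paper's reference implementation; the function name `covariance_penalty` and the weight `lambda_cov` are illustrative. It computes the batch covariance matrix of a hidden layer's activations and penalizes the squared off-diagonal entries, which vanish exactly when the dimensions are linearly uncorrelated:

```python
import torch

def covariance_penalty(h):
    """Off-diagonal covariance penalty for a batch of activations.

    h: tensor of shape (batch_size, num_units) holding one hidden
    layer's activations. Returns the sum of squared off-diagonal
    entries of the batch covariance matrix, which is zero if and
    only if the units are linearly uncorrelated over the batch.
    """
    h_centered = h - h.mean(dim=0, keepdim=True)          # center each unit
    cov = h_centered.t() @ h_centered / (h.shape[0] - 1)  # batch covariance
    off_diag = cov - torch.diag(torch.diag(cov))          # zero the diagonal
    return (off_diag ** 2).sum()

# Illustrative usage: add the penalty for each regularized hidden
# layer to the task loss, weighted by a hyperparameter lambda_cov.
# loss = task_loss + lambda_cov * covariance_penalty(hidden_activations)
```

Because the penalty is a differentiable function of the activations, it adds only a cheap extra term to the loss and trains with ordinary backpropagation alongside any gradient-based model.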

BibTeX

@inproceedings{kaageback2018disentangled,
  title={Disentangled activations in deep networks},
  author={K{\aa}geb{\"a}ck, Mikael and Mogren, Olof},
  booktitle={NIPS Workshop on Learning Disentangled Features: From Perception to Control},
  year={2018}
}
