Differentiable Unrolled Alternating Direction Method of Multipliers for OneNet

Zoltán Milacski (Eötvös Loránd University), Barnabas Poczos (Carnegie Mellon University), Andras Lorincz (Eötvös Loránd University)

Abstract
Deep neural networks achieve state-of-the-art results on numerous image processing tasks, but this typically requires training a separate problem-specific network for each task. As a step towards multi-task learning, the One Network to Solve Them All (OneNet) method was recently proposed: it first pretrains an adversarial denoising autoencoder and then uses it as the proximal operator in Alternating Direction Method of Multipliers (ADMM) solvers for multiple imaging problems. In this work, we highlight training and ADMM convergence issues of OneNet and resolve them by proposing an end-to-end architecture that trains the two steps jointly via Unrolled Optimization with backpropagation. In our experiments, our solution achieves results superior to or on par with the original OneNet and Wavelet sparsity on four imaging problems (pixelwise inpainting-denoising, blockwise inpainting, scattered inpainting and super-resolution) on the MS-Celeb-1M and ImageNet data sets, even with a much smaller ADMM iteration count.
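The core idea the abstract refers to, using a denoiser as the proximal operator of the prior inside ADMM and running a fixed, unrolled number of iterations, can be illustrated with a minimal sketch. This is a generic plug-and-play ADMM loop, not the paper's exact architecture: the linear operator `A`, the penalty `rho`, and the `denoiser` callable are placeholders (in the paper the denoiser is the pretrained adversarial autoencoder, and the unrolled loop is trained end-to-end with backpropagation).

```python
import numpy as np

def unrolled_admm(y, A, denoiser, num_iters=5, rho=1.0):
    """Sketch of plug-and-play ADMM for min_x 0.5*||A x - y||^2 + prior(x),
    with the prior's proximal operator replaced by a learned denoiser.
    Running a fixed num_iters corresponds to "unrolling" the solver."""
    n = A.shape[1]
    z = np.zeros(n)  # auxiliary variable (split copy of x)
    u = np.zeros(n)  # scaled dual variable
    # Precompute the x-update system matrix (A^T A + rho I) and A^T y.
    M = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(num_iters):
        # x-update: proximal step for the quadratic data-fidelity term
        # (a regularized least-squares solve).
        x = np.linalg.solve(M, Aty + rho * (z - u))
        # z-update: proximal operator of the prior, replaced by the denoiser.
        z = denoiser(x + u)
        # dual update: gradient ascent on the scaled dual variable.
        u = u + x - z
    return x
```

For example, with `A` the identity and an identity `denoiser`, the iterates converge geometrically to `y`; in the learned setting, backpropagating through these `num_iters` steps lets the denoiser and the solver be trained jointly rather than in two separate stages.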

DOI
10.5244/C.33.144
https://dx.doi.org/10.5244/C.33.144

Files
Paper (PDF)

BibTeX
@inproceedings{BMVC2019,
title={Differentiable Unrolled Alternating Direction Method of Multipliers for OneNet},
author={Zoltán Milacski and Barnabas Poczos and Andras Lorincz},
year={2019},
month={September},
pages={144.1--144.11},
articleno={144},
numpages={11},
booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
publisher={BMVA Press},
editor={Kirill Sidorov and Yulia Hicks},
doi={10.5244/C.33.144},
url={https://dx.doi.org/10.5244/C.33.144}
}