Low-light images require localised processing that enhances detail, improves contrast, and lightens dark regions without altering the appearance of the entire image. A range of tone mapping techniques has been developed to achieve this, with the latest state-of-the-art methods leveraging deep learning. In this work, a new end-to-end tone mapping approach based on Deep Convolutional Generative Adversarial Networks (DCGANs) is introduced, together with a data augmentation technique, and shown to improve upon the state of the art on benchmark datasets. Comparisons are carried out on the MIT-Adobe FiveK (MIT-5K) and LOL datasets, which provide benchmark training and testing data and are further enriched through data augmentation to increase diversity and robustness. The generator uses a U-Net architecture trained with a perceptually relevant loss function based on VGG features, while the discriminator is a PatchGAN. The results are visually pleasing and improve upon the state-of-the-art Deep Retinex, Deep Photo Enhancer and GLADNet methods on the widely used MIT-5K and LOL benchmark datasets, without additional computational requirements.
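The generator objective described above combines an adversarial term from a patch-based discriminator with a VGG-based perceptual term. The following is a minimal sketch of that combination, not the authors' implementation: `vgg_features` is a toy stand-in (simple average pooling) for activations from a pretrained VGG network, and the weight `lam` is an assumed hyperparameter.

```python
import numpy as np

def vgg_features(img):
    # Toy stand-in for VGG feature maps: 2x2 average pooling over the image.
    # In the real method this would be activations from a pretrained VGG.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def perceptual_loss(enhanced, target):
    # L2 distance between feature maps rather than raw pixel values,
    # which is what makes the loss "perceptually relevant".
    return float(np.mean((vgg_features(enhanced) - vgg_features(target)) ** 2))

def adversarial_loss(patch_scores):
    # Non-saturating GAN loss over the discriminator's patch-wise scores:
    # a PatchGAN outputs one real/fake probability per image patch.
    return float(-np.mean(np.log(patch_scores + 1e-8)))

def generator_loss(enhanced, target, patch_scores, lam=10.0):
    # Weighted sum of the two terms; `lam` is an assumed hyperparameter.
    return adversarial_loss(patch_scores) + lam * perceptual_loss(enhanced, target)
```

Identical enhanced and target images give zero perceptual loss, so the remaining signal comes entirely from the patch discriminator; this is the usual motivation for pairing a reconstruction-style loss with an adversarial one.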