Time-of-Day Neural Style Transfer
for Architectural Photographs

Yingshu Chen¹  Tuan-Anh Vu¹  Ka-Chun Shum¹  Binh-Son Hua²  Sai-Kit Yeung¹
¹The Hong Kong University of Science and Technology    ²VinAI Research

International Conference on Computational Photography (ICCP) 2022 (Oral)

Architectural photography style transfer aims to transfer dynamic texture and chrominance to the background, and sufficient style to the foreground, while keeping the foreground geometry intact.

Abstract

Architectural photography is a genre of photography that focuses on capturing a building or structure in the foreground with dramatic lighting in the background. Inspired by recent successes in image-to-image translation methods, we aim to perform style transfer for architectural photographs. However, the special composition of architectural photography poses great challenges for style transfer in this type of photograph. Existing neural style transfer methods treat an architectural image as a single entity, which generates mismatched chrominance and destroys the geometric features of the original architecture, yielding unrealistic lighting, wrong color rendition, and visual artifacts such as ghosting, appearance distortion, or color mismatching. In this paper, we specialize a neural style transfer method for architectural photography. Our method addresses the composition of the foreground and the background in an architectural photograph with a two-branch neural network that handles the style transfer of the foreground and the background separately. It comprises a segmentation module, a learning-based image-to-image translation module, and an image blending optimization module. We trained our image-to-image translation network on a new dataset of unconstrained outdoor architectural photographs captured at different magic times of the day, utilizing additional semantic information for better chrominance matching and geometry preservation. Our experiments show that our method produces photorealistic lighting and color rendition on both the foreground and the background, and outperforms general image-to-image translation and arbitrary style transfer baselines both quantitatively and qualitatively.
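To make the pipeline concrete, below is a minimal Python sketch of the inference flow described in the abstract, assuming the three modules exist as callable components. All names here (stylize_architecture, segmenter.predict, translator.foreground_branch, translator.background_branch) are hypothetical stand-ins for illustration, not the released API; see the Code link under Materials for the actual implementation.

	# Illustrative sketch of the three-stage pipeline: (1) segment the photo
	# into foreground/background, (2) translate each region with the
	# two-branch network, (3) blend the results. All names are hypothetical.

	def stylize_architecture(content_img, style_img, segmenter, translator):
	    """Transfer the time-of-day style of style_img onto content_img.

	    content_img, style_img: HxWx3 NumPy float arrays in [0, 1].
	    segmenter:  predicts an HxW soft foreground (building) mask in [0, 1].
	    translator: a two-branch image-to-image translation network.
	    """
	    # 1. Segmentation module: separate the building from the sky.
	    fg_mask = segmenter.predict(content_img)      # 1 = building, 0 = sky

	    # 2. Two-branch translation: the foreground branch matches chrominance
	    #    while preserving geometry; the background branch is free to
	    #    synthesize new sky texture (clouds, dramatic lighting).
	    fg_out = translator.foreground_branch(content_img, style_img)
	    bg_out = translator.background_branch(content_img, style_img)

	    # 3. Blending: simple alpha compositing along the soft mask stands in
	    #    for the paper's image blending optimization module, which refines
	    #    the composite to avoid seams at the building/sky boundary.
	    alpha = fg_mask[..., None]                    # broadcast over channels
	    return alpha * fg_out + (1.0 - alpha) * bg_out

The alpha compositing in step 3 is a deliberate simplification: the paper's blending module optimizes the composite image itself, which handles soft boundaries such as building edges against a bright sky more gracefully than direct pixel mixing.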

Qualitative Results

Comparisons among image-to-image translation baselines and our proposed method. Our results show plausible colors in both the foreground and the background, and preserve the geometry across different style transfer cases.

Comparisons among neural style transfer baselines and our proposed method. While neural style transfer methods tend to produce visual artifacts, our results show well-matched colors in the foreground and the background respectively, and preserve the geometry of the foreground while generating diverse cloud textures in the background.

For a detailed comparison, please refer to the supplementary interactive viewer page.

Materials

Paper
Supplementary Doc
Interactive Viewer
Poster
Data
Code
Slides
Conference Talk

Citation

@inproceedings{chen2022timeofday,
  title={Time-of-Day Neural Style Transfer for Architectural Photographs},
  author={Chen, Yingshu and Vu, Tuan-Anh and Shum, Ka-Chun and Hua, Binh-Son and Yeung, Sai-Kit},
  booktitle={2022 IEEE International Conference on Computational Photography (ICCP)},
  year={2022},
  organization={IEEE}
}

Acknowledgements

This paper was partially supported by an internal grant from HKUST (R9429) and the HKUST-WeBank Joint Lab.

References

DRIT++  H.-Y. Lee, H.-Y. Tseng, J.-B. Huang, M. Singh, and M.-H. Yang, “Diverse image-to-image translation via disentangled representations,” ECCV 2018; extended as H.-Y. Lee, H.-Y. Tseng, Q. Mao, J.-B. Huang, Y.-D. Lu, M. Singh, and M.-H. Yang, “DRIT++: Diverse image-to-image translation via disentangled representations,” IJCV 2020.

MUNIT  X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz, “Multimodal unsupervised image-to-image translation,” ECCV 2018.
FUNIT  M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz, “Few-shot unsupervised image-to-image translation,” ICCV 2019.
DSMAP  H.-Y. Chang, Z. Wang, and Y.-Y. Chuang, “Domain-specific mappings for generative adversarial style transfer,” ECCV 2020.
AdaIN  X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” ICCV 2017.
SANet  D. Y. Park and K. H. Lee, “Arbitrary style transfer with style-attentional networks,” CVPR 2019.
AdaAttN  S. Liu, T. Lin, D. He, F. Li, M. Wang, X. Li, Z. Sun, Q. Li, and E. Ding, “AdaAttN: Revisit attention mechanism in arbitrary neural style transfer,” ICCV 2021.
LST  X. Li, S. Liu, J. Kautz, and M.-H. Yang, “Learning linear transformations for fast arbitrary style transfer,” CVPR 2019.