Data-driven Digital Lighting Design for Residential Indoor Spaces

Published: 17 March 2023

Abstract

Conventional interior lighting design is technically complex and demands both the professional knowledge and the aesthetic discipline of trained designers. This article presents a new digital lighting design framework for virtual interior scenes that allows novice users to automatically obtain lighting layouts and interior renderings with visually pleasing lighting effects. The framework uses neural networks to learn the underlying design guidelines and principles from existing lighting designs, drawn from a newly constructed dataset of 6k 3D interior scenes created by professional designers and densely annotated with lights. Given a furniture-populated 3D indoor scene as input, the framework performs lighting design in two stages: (1) lights are iteratively placed in the room; (2) the colors and intensities of the lights are optimized by an adversarial scheme, yielding lighting designs with aesthetically pleasing effects. Quantitative and qualitative experiments show that the framework effectively learns these guidelines and principles and generates lighting designs that are preferred over a rule-based baseline and comparable to those of professional human designers.
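The two-stage pipeline described in the abstract can be sketched in miniature. Everything below (the `Light` class, `place_lights`, `refine_lights`, and the brightness-matching objective) is a hypothetical toy illustration, not the paper's actual method: stage 1 is reduced to greedy placement over furniture positions, and stage 2's adversarial color/intensity optimization is replaced by a simple feedback loop toward a target brightness.

```python
from dataclasses import dataclass

@dataclass
class Light:
    position: tuple             # (x, y) placement in the room, metres
    intensity: float = 1.0      # relative emitted power
    color_temp: float = 4000.0  # correlated color temperature, Kelvin

def place_lights(furniture_positions, max_lights=4):
    """Stage 1 (toy): iteratively place one light above each major
    furniture item until a light budget is exhausted."""
    return [Light(position=p) for p in furniture_positions[:max_lights]]

def refine_lights(lights, target_brightness, steps=100, lr=0.05):
    """Stage 2 (toy): nudge all intensities toward a target mean
    brightness; this feedback loop stands in for the paper's
    adversarial color/intensity optimization."""
    for _ in range(steps):
        mean = sum(l.intensity for l in lights) / len(lights)
        for l in lights:
            l.intensity += lr * (target_brightness - mean)
    return lights

# Usage: two furniture items -> two lights, dimmed to 80% mean brightness.
lights = refine_lights(place_lights([(1.0, 1.0), (3.0, 2.5)]), 0.8)
```

The point of the two-stage split is that discrete decisions (where lights go) and continuous ones (how bright and warm they are) are optimized separately, which is what the real framework does with learned models in place of these stand-ins.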

Supplementary Material

  • tog-22-0029-File003 (tog-22-0029-file003.mp4)
  • 3582001-supp (3582001-supp.pdf)


Cited By

  • pARam: Leveraging Parametric Design in Extended Reality to Support the Personalization of Artifacts for Personal Fabrication. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–22. DOI: 10.1145/3613904.3642083. 11 May 2024.
  • Automatic Indoor Lighting Generation Driven by Human Activity Learned from Virtual Experience. In 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), 276–285. DOI: 10.1109/VR58804.2024.00050. 16 March 2024.
  • RayProNet: A Neural Point Field Framework for Radio Propagation Modeling in 3D Environments. IEEE Journal on Multiscale and Multiphysics Computational Techniques 9 (2024), 330–340. DOI: 10.1109/JMMCT.2024.3464373.


Published In

ACM Transactions on Graphics, Volume 42, Issue 3
June 2023, 181 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3579817

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 17 March 2023
Online AM: 31 January 2023
Accepted: 22 December 2022
Revised: 26 October 2022
Received: 15 May 2022
Published in TOG Volume 42, Issue 3


Author Tags

  1. Lighting design
  2. interior design
  3. data-driven approach
  4. neural network
  5. deep learning

Qualifiers

  • Research-article

Funding Sources

  • NSFC
  • Key R&D Program of Zhejiang Province
  • Central Universities, Zhejiang Lab
  • Information Technology Center and State Key Lab of CAD&CG, Zhejiang University
