

Open Access
Research article

A Machine Learning-Based Tool for Indoor Lighting Compliance and Energy Optimization

Abderraouf Seniguer 1*, Abdelhamid Iratni 2, Mustapha Aouache 3, Hadja Yakoubi 1, Haithem Mekhermeche 1

1 LMSE Laboratory, Faculty of Science and Technology, University Mohamed El Bachir El Ibrahimi of Bordj Bou Arreridj, 34030 El-Anasser, Algeria
2 Department of Electrical Engineering, Faculty of Science and Technology, University Mohamed El-Bachir El-Ibrahimi, 34030 Bordj Bou Arreridj, Algeria
3 Telecom Division, Center for Development of Advanced Technologies (CDTA), 16303 Algiers, Algeria
International Journal of Computational Methods and Experimental Measurements | Volume 13, Issue 2, 2025 | Pages 259-271
Received: 04-26-2025; Revised: 06-19-2025; Accepted: 06-23-2025; Available online: 06-29-2025

Abstract:

Adequate indoor lighting is essential for ensuring visual comfort, energy efficiency, and compliance with architectural standards. This study presents a novel smartphone-based platform for real-time illuminance estimation and visual mapping that leverages a lightweight machine learning model. The application uses the smartphone’s built-in camera to capture images of indoor scenes and predicts the illuminance of each image patch with a trained regression model, offering a cost-effective alternative to grids of physical lux meters. The mobile application generates color-coded heat maps that visualize the spatial distribution of illuminance and assesses its compliance with established lighting norms. The advantages of the proposed system include its affordability, portability, and the prediction accuracy enabled by a machine learning model trained on image intensity features. Experimental tests in a controlled indoor setting demonstrate high prediction accuracy and low computational requirements, confirming the platform’s suitability for real-world applications. The tool enables effective and precise analysis of light and is therefore applicable to architectural diagnostics, energy audits, and spatial design optimization. In addition, its user-friendly interface benefits both professional and non-professional users, facilitating real-time adjustment and optimization of indoor lighting.

Keywords: Energy efficiency, Illuminance estimation, Indoor lighting design, Lighting compliance, Machine learning regression, Smartphone application

1. Introduction

The push for smarter and greener buildings has made indoor lighting a central aspect of energy-efficient architectural design. According to the International Energy Agency, lighting represents a major component of energy use in residential and commercial buildings, accounting for as much as 15% of worldwide electricity consumption [1]. Ensuring both efficient and comfortable lighting conditions is no longer a matter of convenience; it is a fundamental requirement for environmental sustainability and occupant well-being [2]. Traditional illuminance measurement methods, such as handheld lux meters or wall-mounted ambient light sensors, often fall short when deployed in practical, large-scale applications. First, these tools provide only single-point measurements, failing to capture spatial variability in lighting conditions, which is crucial for identifying under- or over-illuminated zones. This lack of spatial resolution makes them impractical for environments like classrooms, offices, or retail spaces, where lighting uniformity directly affects comfort and productivity. Second, the requirement of manual operation, precise sensor placement, and professional calibration limits their accessibility for non-expert users. Additionally, high-quality lux meters are typically expensive and may not be feasible for widespread deployment in low-resource settings. These limitations emphasize the need for cost-effective, user-friendly solutions that offer spatially resolved, real-time feedback without relying on specialized instrumentation or trained personnel [3].

Several recent studies have explored the capabilities of image processing techniques to overcome the difficulties faced by the traditional lux meter method. Kamath et al. [4] presented a method for predicting work-plane illuminance from low-dynamic-range, unprocessed image data. While their methodology demonstrates that camera images can be used as a stand-in for lux measurements, it is restricted to controlled testing environments and lacks the ability to produce visual illumination maps. Moreover, Abderraouf et al. [5] designed a vision-based indoor lighting estimation method primarily geared toward daylight harvesting, using image processing to classify ambient lighting conditions. However, their approach did not integrate predictive modelling or user feedback mechanisms, exhibited limited accuracy in illuminance prediction, and lacked the ability to produce interpretable illuminance overlays.

Kruisselbrink et al. [6] proposed a custom-built device for luminance distribution measurement using the High Dynamic Range (HDR) imaging method, a technique widely used in photography that is based on the principle of capturing a wider dynamic range. Their system demonstrated good indoor light estimation accuracy. Nonetheless, it was non-portable, required dedicated hardware, had high computational demands, and needed time-consuming calibration by trained personnel. Similarly, Bishop and Chase [7] introduced a low-cost luminance imaging device using the HDR technique with the goal of minimizing calibration needs. While economical, this system also relies on external imaging components and lacks the real-time, lightweight capabilities required for mobile usage.

In addition to image-processing-based strategies, several learning-based techniques have demonstrated high potential in indoor illumination estimation. For example, Wang et al. [8] proposed CGLight, which combines a ConvMixer backbone with a GauGAN-based image-to-illumination mapping framework, enabling the generation of spatially consistent and realistic lighting predictions. Similarly, in their FHLight model, Wang et al. [9] introduced enhancements in the loss function design to improve model robustness across diverse lighting distributions and indoor geometries. Zhao et al. [10] presented SGformer, a transformer-based architecture that incorporates both global context and local spatial cues through self-attention mechanisms, allowing it to accurately estimate spherical lighting parameters from single RGB images. While these methods achieve state-of-the-art accuracy in complex visual scenes, their reliance on deep feature hierarchies, large-scale annotated datasets, and GPU acceleration limits their practicality for mobile deployment. In contrast, our approach adopts a lightweight machine learning framework tailored for on-device inference, achieving a favorable trade-off between accuracy, interpretability, and computational efficiency, particularly suited for real-time illuminance analysis on smartphones.

Some researchers have also investigated the utility of smartphone-embedded ambient light sensors (ALS) for lux estimation and indoor localization tasks [11]. Although such sensors are useful for low-power applications, they typically provide single-point measurements with limited accuracy. In particular, Gutierrez-Martinez et al. [11] reported an absolute error of close to 10% when estimating illuminance. In contrast, our camera-based approach, trained via machine learning regressors, achieved a significantly lower error of around 2.4%. Additionally, the use of features extracted from images allows our method to generate spatially dense lighting maps.

This paper introduces an innovative smartphone-based mobile application that takes advantage of a high-performance, lightweight machine learning model for real-time illuminance estimation and visualization. The app uses the smartphone’s built-in camera to capture indoor scenes, segments them into localized patches, and estimates illuminance at the patch level using a trained regression model. The predictions are then used to create a color-coded heat map overlay, which provides intuitive feedback on spatial lighting distribution. The average illuminance of the captured scene is then compared with standards set by the International Commission on Illumination (CIE) and the Illuminating Engineering Society (IES) to assess whether the current lighting conditions fall within the recommended levels for typical indoor settings.
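The capture-segment-predict-visualize pipeline can be sketched in Python. This is an illustrative sketch only, not the authors' implementation: the 8×8 patch grid, the mean-intensity feature set, the synthetic training data, and the 300-500 lx compliance band are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical stand-in for the trained patch-wise regressor:
# features are the mean R, G, B, and grayscale intensities of each patch,
# trained here on synthetic lux labels purely so the sketch is runnable.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 255, size=(500, 4))            # [R_mean, G_mean, B_mean, GS_mean]
y_train = 2.0 * X_train[:, 3] + rng.normal(0, 5, 500)   # synthetic lux labels
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)

def patch_features(img, rows=8, cols=8):
    """Split an HxWx3 image into a rows x cols grid and return mean-intensity features."""
    h, w = img.shape[0] // rows, img.shape[1] // cols
    feats = []
    for i in range(rows):
        for j in range(cols):
            patch = img[i * h:(i + 1) * h, j * w:(j + 1) * w]
            r, g, b = patch[..., 0].mean(), patch[..., 1].mean(), patch[..., 2].mean()
            gs = 0.299 * r + 0.587 * g + 0.114 * b      # standard luma approximation
            feats.append([r, g, b, gs])
    return np.array(feats)

img = rng.uniform(0, 255, size=(480, 640, 3))           # placeholder for a camera frame
heatmap = model.predict(patch_features(img)).reshape(8, 8)  # per-patch lux estimates
mean_lux = heatmap.mean()
# Compare against an assumed recommended office range (roughly 300-500 lx):
compliant = 300.0 <= mean_lux <= 500.0
```

The `heatmap` array is what a color-mapped overlay would be rendered from, and `mean_lux` is the scene-level value compared against the recommended range.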

In contrast with previous studies that rely on static laboratory conditions, external hardware, or high computational power, our solution is platform-independent, cost-effective, and optimized for practical mobile use. Through the integration of visual feedback and machine learning inference, it facilitates accessible, real-time assessment of indoor lighting, offering value to architects, lighting designers, educators, and facility managers. This study is guided by two core research questions:

• What level of accuracy can be achieved using different machine learning regressors (MLP, Random Forest, Gradient Boosting) when predicting patch-wise illuminance from camera-derived features?

• Can such a system operate efficiently on mobile devices while providing interpretable, standards-based feedback aligned with lighting guidelines?

These questions drive the development, validation, and deployment of the mobile application described herein. This paper proceeds with Section 2, which details the approach used for data collection, model development, and application workflow. Section 3 presents experimental findings and model evaluations conducted under varying real-world lighting scenarios. The paper concludes with key insights and proposed directions for future work.

2. Methodology

3. Results and Discussion

4. Conclusions

This study introduced a low-cost, smartphone-based tool capable of real-time indoor illuminance estimation and mapping, leveraging deep learning and classical machine learning models to support compliance with lighting design standards.

The proposed mobile application enables users to assess lighting conditions visually and numerically, eliminating the need for costly lux meters or specialized instrumentation. Through a systematic development pipeline encompassing data collection, model training, validation, and integration into a real-world application, we demonstrated that compact devices can provide accurate and spatially resolved illuminance feedback.

Experimental results showed that, among the three tested models (Multi-Layer Perceptron, Random Forest Regressor, and Gradient Boosting Regressor), the Random Forest Regressor demonstrated the best trade-off between prediction accuracy (MAE: 21.25, R²: 0.97) and inference speed (≈ 40 ms). Furthermore, a Mean Absolute Error Percentage (MAEP) of 2.43% across multiple desk locations in real-world environments validated the consistency and spatial reliability of the predicted illuminance distribution under varied conditions. The complete application maintained a total processing time of under one second, rendering it suitable for responsive mobile-based lighting analysis. By offering real-time heatmap visualizations and context-aware lighting recommendations within a single portable platform, this application presents new opportunities for intuitive lighting diagnostics in residential, educational, healthcare, and commercial spaces. The user-friendly interface and compatibility with consumer-grade smartphones further enhance its accessibility and scalability.
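The evaluation metrics reported above (MAE, RMSE, R², MAEP) can be computed as follows. This is a generic sketch of the standard definitions, assumed to match the paper's usage; the sample values are invented for illustration and are not the study's data.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Compute MAE, RMSE, R^2, and MAEP from measured and predicted illuminance (lux)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    mae = np.abs(err).mean()                       # Mean Absolute Error, in lux
    rmse = np.sqrt((err ** 2).mean())              # Root Mean Square Error, in lux
    ss_res = (err ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot                     # Coefficient of Determination
    maep = 100.0 * (np.abs(err) / y_true).mean()   # mean absolute error as % of measured lux
    return mae, rmse, r2, maep

# Invented example: three desk-location measurements vs. predictions.
mae, rmse, r2, maep = evaluation_metrics([400, 500, 300], [410, 495, 310])
```

Each metric serves a distinct purpose: MAE and RMSE quantify absolute error in lux, R² captures how much of the illuminance variance the model explains, and MAEP expresses error relative to the measured level, which is what makes the 2.43% figure comparable across bright and dim desk locations.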

However, it is important to acknowledge that the platform was only evaluated in a limited range of indoor environments using a single smartphone model. Broader deployment scenarios involving diverse room geometries, surface reflectance, and device-specific camera characteristics should be explored to confirm the model’s robustness and generalizability.

Future work will focus on addressing the current limitations and enhancing the platform's robustness, scalability, and adaptability. A key priority will be the evaluation of system performance across a broader range of real-world indoor environments, including residential, industrial, and commercial settings, each with distinct lighting configurations, surface textures, and spatial geometries. This expanded validation will help assess the model’s generalization capabilities beyond controlled office and classroom scenarios.

To improve cross-device consistency, future versions may incorporate device-specific calibration routines or normalization layers to account for variability in camera hardware and built-in image processing algorithms. In parallel, advanced preprocessing techniques such as high-dynamic-range (HDR) fusion or exposure correction could be explored to better handle scenes with complex or uneven illumination patterns, including glare, shadows, and mixed lighting sources.

From a computational standpoint, optimizing the model for real-time inference on low-end and mid-range smartphones will be essential. This may involve quantization, model pruning, or knowledge distillation to reduce memory and processing demands without compromising accuracy. Furthermore, the integration of lightweight edge computing frameworks could ensure smooth performance while preserving offline functionality.

Finally, future iterations of the system could incorporate temporal illumination tracking and user feedback mechanisms. These enhancements would enable the platform to learn from environmental patterns and user behavior over time, ultimately supporting intelligent daylight harvesting strategies and adaptive lighting control systems that respond dynamically to both spatial and temporal context.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors thank the University of Skikda, Skikda, Algeria for providing research facilities and the technical staff members in the Technology Department.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References
[1] International Energy Agency (IEA). (2022). Energy Efficiency 2022. IEA Publications. https://www.iea.org/reports/energy-efficiency-2022.
[2] Garay, R., Chica, J.A., Apraiz, I., Campos, J.M., Tellado, B., Uriarte, A., Sanchez, V. (2015). Energy efficiency achievements in 5 years through experimental research in KUBIK. Energy Procedia, 78: 865-870. [Crossref]
[3] Bergen, T., Young, R. (2018). Fifty years of development of light measurement instrumentation. Lighting Research & Technology, 50(1): 141-153. [Crossref]
[4] Kamath, V., Kurian, C.P., Padiyar, S. (2022). Prediction of illuminance on the work plane using low dynamic unprocessed image data. In 2022 Fourth International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT), Mandya, India, pp. 1-5. [Crossref]
[5] Abderraouf, S., Aouache, M., Iratni, A., Mekhermeche, H. (2023). Vision-based indoor lighting assessment approach for daylight harvesting. In 2023 International Conference on Advances in Electronics, Control and Communication Systems (ICAECCS), BLIDA, Algeria, pp. 1-6. [Crossref]
[6] Kruisselbrink, T., Aries, M., Rosemann, A. (2017). A practical device for measuring the luminance distribution. International Journal of Sustainable Lighting, 19(1): 75-90. [Crossref]
[7] Bishop, D., Chase, J.G. (2023). Development of a low-cost luminance imaging device with minimal equipment calibration procedures for absolute and relative luminance. Buildings, 13(5): 1266. [Crossref]
[8] Wang, Y., Song, S., Zhao, L., Xia, H., Yuan, Z., Zhang, Y. (2024). CGLight: An effective indoor illumination estimation method based on improved ConvMixer and GauGAN. Computers & Graphics, 125: 104122. [Crossref]
[9] Wang, Y., Wang, A., Song, S., Xie, F., Ma, C., Xu, J., Zhao, L. (2024). FHLight: A novel method of indoor scene illumination estimation using improved loss function. Image and Vision Computing, 152: 105299. [Crossref]
[10] Zhao, J., Xue, B., Zhang, M. (2024). SGformer: Boosting transformers for indoor lighting estimation from a single image. Computational Visual Media, 10(4): 671-686. [Crossref]
[11] Gutierrez-Martinez, J.M., Castillo-Martinez, A., Medina-Merodio, J.A., Aguado-Delgado, J., Martinez-Herraiz, J.J. (2017). Smartphones as a light measurement tool: Case of study. Applied Sciences, 7(6): 616. [Crossref]
[12] Kabir, K.A., Guha Thakurta, P.K., Kar, S. (2025). An intelligent geographic information system-based framework for energy efficient street lighting. Signal, Image and Video Processing, 19(4): 305. [Crossref]
[13] LeCun, Y., Bengio, Y., Hinton, G. (2015). Deep learning. Nature, 521(7553): 436-444. [Crossref]
[14] Villasenor, A. (2023). Machine learning assisted design of mmWave radio frequency circuits. University of California eScholarship. https://escholarship.org/uc/item/5fg6r9pf.
[15] Kingma, D.P., Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint, arXiv:1412.6980. [Crossref]
[16] Breiman, L. (2001). Random forests. Machine Learning, 45: 5-32. [Crossref]
[17] Dao, N.-N., Pham, Q.-D., Cho, S., Nguyen, N.T. (2024). Intelligence of Things: Technologies and Applications. Springer, Cham. [Crossref]
[18] Hastie, T., Tibshirani, R., Friedman, J. (2010). The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed. Springer, New York. [Crossref]
[19] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E. (2011). Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12: 2825-2830. https://dl.acm.org/doi/10.5555/1953048.2078195.
[20] Chen, T., Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785-794. [Crossref]
Nomenclature

RGB: Red, Green, Blue channel intensities
GS: Grayscale intensity
ILS: Illuminance level in lux
MAE: Mean Absolute Error
RMSE: Root Mean Square Error
R²: Coefficient of Determination
MAEP: Mean Absolute Error Percentage

Subscripts

pred: Predicted illuminance
true: True (measured) illuminance
lux: Illuminance in lux units


Cite this:
Seniguer, A., Iratni, A., Aouache, M., Yakoubi, H., & Mekhermeche, H. (2025). A Machine Learning-Based Tool for Indoor Lighting Compliance and Energy Optimization. Int. J. Comput. Methods Exp. Meas., 13(2), 259-271. https://doi.org/10.18280/ijcmem.130205