
Open Access
Research article

Optimizing Electrical Energy Management in Apartment Buildings Through Ensemble Neural Network Prediction Model

Ahmad Rofii 1*, Busono Soerowirdjo 2, Rudi Irawan 3

1 Faculty of Engineering and Informatics, Universitas 17 Agustus 1945 Jakarta, Jakarta 14350, Indonesia
2 Faculty of Computer Science and Information Technology, Universitas Gunadarma, Depok 16424, Indonesia
3 Faculty of Industrial Technology, Universitas Gunadarma, Depok 16424, Indonesia

International Journal of Energy Production and Management | Volume 9, Issue 4, 2024 | Pages 255-265 | https://doi.org/10.18280/ijepm.090406
Received: 02-25-2024, Revised: 07-17-2025, Accepted: 07-29-2024, Available online: 12-30-2024

Abstract:

The global energy crisis highlights the need for energy efficiency in the management of the electricity sector. One way to contribute to electrical energy efficiency in buildings is to develop appropriate prediction models. This research seeks to optimize the use of electrical energy by applying an ensemble neural network approach, combining LSTM, GRU, and RNN models, to estimate reactive energy consumption. The study uses energy measurement data from an apartment building in Jakarta, comprising consumption during peak and off-peak periods as well as reactive energy consumption. The methodology employs an ensemble of neural network models (LSTM, GRU, and RNN, initialized with Differentiable Architecture Search, DARTS) to build adaptive prediction models capable of generalizing across various data conditions. The findings demonstrate that the DARTS-initialized ensemble achieves more accurate predictions than the individual LSTM, GRU, and RNN models in estimating energy consumption. Correlation analysis shows a significant relationship between reactive energy consumption and peak/off-peak load. More efficient and sustainable energy use in apartment buildings is expected to reduce operational costs by scheduling the operation of equipment that consumes large amounts of reactive power, increasing energy efficiency, and mitigating environmental impacts through the application of renewable energy sources.
Keywords: Predict, Energy, Ensemble, LSTM, GRU, RNN, Efficiency, Differentiable

1. Introduction

The growing global energy crisis is driven by an imbalance in electricity usage and the lack of effective regulation of available energy resources. This phenomenon is caused by ever-increasing population growth, rapid industrialization, and an unrelenting need for energy [1]. Unbalanced use of electrical energy worsens the greenhouse effect and global warming. Fossil-fuel power plants, such as coal and gas plants, are major contributors to emissions of carbon dioxide and other greenhouse gases. Without proper regulation and sufficient renewable energy, fossil energy continues to be chosen as a quick solution to meet energy needs [2].

At a smaller scale, one important aspect of regulating energy resources is the management of reactive power in high-rise residential buildings, which can be a significant determining factor during peak-load and off-peak hours. Peak load, which occurs when electricity demand reaches its highest level in an electrical system, is often the moment at which energy efficiency can be gained or lost [2].

When the load reaches its peak, the energy infrastructure faces significant pressure [3]. Rising operational costs, the risk of overloading, and the potential for power interruptions or blackouts are problems that need to be addressed [4]. Effective strategies for managing peak loads, such as load scheduling and the use of cloud computing technology, can help optimize energy use, reduce operational costs, and prevent failures or interruptions in energy supply [3]. There is also a relationship between peak load and reactive power. In electrical theory, electricity serves two needs: generating heat energy and generating mechanical energy, both of which users rely on in daily life. The apparent power drawn by users comprises active power and reactive power. When users draw reactive power for magnetization, the power plant must supply additional generating effort and therefore consumes more fuel, and these additional fuel costs are charged to electricity users whose consumption exceeds the supplier's threshold.

As a consumer of electrical power, a building requires planning for its electricity use, and such planning needs information about the future state of electricity consumption; this information is obtained through prediction. The recorded values of electrical parameters form time series data. Deep learning is a time series prediction technology that does not require manually determining which features should be extracted from the data, because a deep learning model can learn relevant features independently from the given data [5-7].

The data produced when measuring electrical power or energy use are generally recorded at the same time for all measured variables, so the measurement records take the form of time series data. In most buildings, measurement results are still recorded manually on log sheets. Because recording is manual, the volume of data currently available for a building is limited. Given this limited data volume, this research proposes building a prediction model for electrical energy use that exploits the respective strengths and weaknesses of deep learning neural network models, namely the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). Combining models in this way is referred to as an ensemble. The obstacle faced by ensembles is that they require many training iterations to produce optimal weights and biases, so an adequate computing system and a long training time are required. To overcome this shortcoming, the Differentiable Architecture Search (DARTS) initialization concept is used. DARTS, available through the PyTorch and TensorFlow ecosystems, makes it possible to automatically generate an optimal internal architecture and weights, so accurate neural network ensembles can be produced without assigning weights and biases through endless iterations. The aim of establishing a prediction model with accurate results is to give building managers direction when planning future arrangements for electrical power use.

1.1 The Type of Electrical Power Consumed by the User

Electrical power is split into two categories based on how it is used: producing magnetic flux and producing thermal energy. Reactive power (Q) is the power used to generate flux, while real power (P) is the power converted to heat. Apparent power (S) is the combination of real power and reactive power. The relationship between the three quantities is expressed mathematically as follows [8].

$S^2=P^2+Q^2$
(1)
$S=\sqrt{P^2+Q^2}$
(2)
$|S|=V \times I$
(3)
$Q=V\times I \times \sin \emptyset$
(4)
$=|S|\times \sin \emptyset$
(5)
$P=V \times I \times \cos \emptyset$
(6)
$=|S| \times \cos \emptyset$
(7)
$\begin{gathered}\cos \emptyset=\frac{P}{|S|} \\ \cos \emptyset=\text { power factor }\end{gathered}$
(8)

In this context, electrical energy is defined as the work produced by the electricity consumed, measured in kilowatt hours (kWh) over a specific time period. Peak load occurs when all equipment consumes electricity at the same time, while off-peak load covers the periods when electricity consumption differs from the peak load [9].
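To make these relationships concrete, the following minimal Python sketch computes apparent, active, and reactive power and the power factor from an assumed voltage, current, and phase angle; the numerical values are illustrative only and are not measurements from this study.

```python
import math

# Assumed example measurements (illustrative only, not data from the study)
V = 230.0                   # voltage in volts
I = 12.0                    # current in amperes
phi = math.radians(30.0)    # phase angle between voltage and current

S = V * I                   # apparent power |S| = V * I, Eq. (3)
P = S * math.cos(phi)       # active power  P = |S| * cos(phi), Eq. (7)
Q = S * math.sin(phi)       # reactive power Q = |S| * sin(phi), Eq. (5)
power_factor = P / S        # cos(phi) = P / |S|, Eq. (8)

# Check the power triangle S^2 = P^2 + Q^2, Eq. (1)
assert math.isclose(S, math.sqrt(P**2 + Q**2))
print(f"S={S:.1f} VA, P={P:.1f} W, Q={Q:.1f} VAR, pf={power_factor:.3f}")
```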

1.2 Model Prediction Design

The current predictive models offer a variety of configurations to choose from, with neural network technology emerging as a developing field. This technology is highly adaptable and can be customized to specific needs. However, not all neural network models are suitable for every dataset, as LSTM, GRU, and RNN each possess their own strengths and weaknesses. Considering this, LSTM, GRU, and RNN have been chosen for ensemble use to capitalize on their respective strengths and mitigate their limitations. A review of the literature on the advantages and disadvantages of LSTM, GRU, and RNN is summarized in Table 1 [10-13].

Table 1. Advantages and disadvantages of LSTM, GRU, and RNN

Theme/Problem | LSTM | GRU | RNN
Data series | Capable of capturing long sequences | Less able to capture long sequences | Less able to capture long sequences
Vanishing gradient | Controlled | Resolved | Resolved
Overfitting | Prone | Prevent | Prevent
Computing | Complicated | Simple | Simple
Temporal data | Adaptive and effective | Suitable for short time series and text data | Limited
Hyperparameter | Needs setup | Needs setup | Needs setup
Cell memory | Overcome long-term dependency, retain past relevant information | Overcome long-term dependency, retain past relevant information | Unable to overcome long-term dependency
Dataset | Capable of large time series datasets | Suitable for small datasets | Suitable for small data sequences
Internal architecture | Needs setup | Needs setup | Needs setup

The data used are time series data from energy measurements of an apartment building in Jakarta. The collected data are then preprocessed to adapt the time series input to the neural network models, namely into an array with three input dimensions: batch, time steps, and features. A portion of the acquired data, before preprocessing, is presented in Table 2.

1.3 Data Acquisition

The first stage of this research is data collection. The data include monthly electrical energy usage, covering energy use during peak-load and off-peak-load times, as well as reactive energy usage data related to electrical system efficiency. The energy is measured on the medium-voltage side, before distribution to the building's switchboard. In addition, supporting data such as voltage, current, and reactive current are also collected. An excerpt is shown in Table 2.

Table 2. Actual data measuring result

Kilowatt-hour meter | Reading | 01/01/2022 | 02/01/2022 | 06/12/2022 | 07/12/2022
PL | STAND | 560 | 561 | 796 | 796
PL | Use | 0.72 | 0.74 | 0.73 | 0.75
PL | FM 1600 | 1152 | 1184 | 1168 | 1200
Off PL | STAND | 2581 | 2584 | 3581 | 3584
Off PL | Use | 3.06 | 3.06 | 3.21 | 3.31
Off PL | FM 1600 | 4896 | 4896 | 5136 | 5296
Reactive | STAND | 1137 | 1138 | 1460 | 1461
Reactive | Use | 1.02 | 1.02 | 1.04 | 1.00
Reactive | FM 1600 | 1632 | 1632 | 1664 | 1600
V1 | | 58.1 | 58.57 | 58.7 | 58.54
V2 | | 58.1 | 58.87 | 58.98 | 58.77
V3 | | 58.4 | 58.97 | 59.1 | 58.89
I1 | | 0.82 | 1.02 | 0.922 | 1.118
I2 | | 0.83 | 0.89 | 0.826 | 0.972
I3 | | 0.80 | 0.90 | 0.843 | 1.049

In Table 2, PL denotes energy consumption during peak-load times, Off PL denotes energy consumption outside peak-load times, and the reactive meter records reactive energy consumption. The PL and Off PL data used for training are the rows labelled "Use", because these rows give the amount of energy used in each 24-hour period. Other variables such as current and voltage are used as additional input features to support learning in the deep learning model. The date index runs from January 1, 2022, to December 31, 2022.
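As an illustration of how such a daily log could be prepared for modelling, the following Python sketch loads the measurements into a date-indexed table. The file name and column names are assumptions made purely for illustration; they are not the actual files or field names used in this study.

```python
import pandas as pd

# Hypothetical CSV mirroring the daily log sheet described above;
# the file name and column names are assumptions for illustration.
df = pd.read_csv("apartment_energy_log_2022.csv", parse_dates=["date"])
df = df.set_index("date").sort_index()

# Keep the daily "Use" readings plus the supporting electrical features.
features = df[["peak_load_use", "off_peak_use", "reactive_use",
               "V1", "V2", "V3", "I1", "I2", "I3"]]
print(features.loc["2022-01-01":"2022-01-07"])
```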

1.4 Neural Network

A neural network is a computational model inspired by the structure and function of the human brain. It consists of interconnected nodes (neurons) organized in layers, including input, hidden, and output layers. Each neuron receives inputs, applies weights, and passes the result through an activation function to produce an output [6]. Through iterative training processes like backpropagation, neural networks learn to recognize patterns and relationships in data, enabling tasks such as classification, regression, and pattern recognition. They are used in various fields including image and speech recognition, natural language processing, finance, healthcare, and robotics for their ability to handle complex, nonlinear relationships in data [13, 14].

1.5 Long Short-Term Memory

LSTM is a type of RNN architecture designed to handle relationships between data elements in long sequences [14, 15] (see Figure 1). It features a cell state for long-term information storage, gate mechanisms to control information flow, and the Backpropagation Through Time (BPTT) algorithm for updating the model. LSTM can recognize patterns and learn long-term dependencies with its memory cells, which include input, output, and forget gates [16, 17].

Forget gate:

$f_t=\sigma\left(w_f .\left[h_{t-1}, x_t\right]+b_f\right)$
(9)

Input gate:

$i_t=\sigma\left(w_i \cdot\left[h_{t-1}, x_t\right]+b_i\right)$
(10)

Candidate gate:

$\tilde{C}_t=\tanh \left(w_c \cdot\left[h_{t-1}, x_t\right]+b_c\right)$
(11)

Output gate:

$O_t=\sigma\left(w_o \cdot\left[h_{t-1}, x_t\right]+b_o\right)$
(12)

Update cell:

$C_t= f_t \cdot C_{t-1}+ i_t \cdot \tilde{C}_t$
(13)
Figure 1. Architecture LSTM [16]
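The following NumPy sketch implements one step of an LSTM cell following Eqs. (9)-(13), using randomly initialised toy parameters; the final output step h_t = o_t * tanh(C_t) is the standard LSTM formulation and is not numbered above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w, b):
    """One LSTM cell step following Eqs. (9)-(13); w and b are dictionaries
    of randomly initialised toy parameters (illustration only)."""
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    f_t = sigmoid(w["f"] @ z + b["f"])           # forget gate, Eq. (9)
    i_t = sigmoid(w["i"] @ z + b["i"])           # input gate, Eq. (10)
    c_tilde = np.tanh(w["c"] @ z + b["c"])       # candidate cell state, Eq. (11)
    o_t = sigmoid(w["o"] @ z + b["o"])           # output gate, Eq. (12)
    c_t = f_t * c_prev + i_t * c_tilde           # cell update, Eq. (13)
    h_t = o_t * np.tanh(c_t)                     # standard LSTM output (not numbered above)
    return h_t, c_t

# Toy dimensions: 3 input features, 4 hidden units
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
w = {k: rng.normal(size=(n_hid, n_hid + n_in)) for k in "fico"}
b = {k: np.zeros(n_hid) for k in "fico"}
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), w, b)
```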
1.6 Gated Recurrent Unit (GRU)

The Gated Recurrent Unit (GRU) is a type of architecture within artificial neural networks that resembles the Long Short-Term Memory (LSTM) and is designed to handle sequential data or time series problems [17] (see Figure 2). GRU features a simpler structure compared to LSTM, with a focus on computational efficiency. It comprises two gates, namely the reset gate and update gate, which assist in controlling the flow of information within the model. With its more concise structure, GRU is often utilized in applications requiring a balance between model complexity and computational performance, such as natural language processing, time series modeling, and pattern recognition in sequential data [18-20].

Figure 2. Architecture GRU [20]

Reset gate:

$r_t=\sigma\left(w_r \cdot\left[h_{t-1}, x_t\right]\right)$
(14)

Update gate:

$z_t=\sigma\left(w_z \cdot\left[h_{t-1}, x_t\right]\right)$
(15)

Candidate hidden state:

$\tilde{h}_t=\tanh \left(w \cdot\left[r_t \odot h_{t-1}, x_t\right]\right)$
(16)

Hidden state update:

$h_t=\left(1-z_t\right) \odot h_{t-1}+z_t \odot \tilde{h}_t$
(17)
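Analogously, a minimal NumPy sketch of one GRU step following Eqs. (14)-(17) is given below; the parameters are random toy values and, as in the equations above, bias terms are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, w):
    """One GRU step following Eqs. (14)-(17); w holds random toy weights."""
    z_in = np.concatenate([h_prev, x_t])
    r_t = sigmoid(w["r"] @ z_in)                                     # reset gate, Eq. (14)
    z_t = sigmoid(w["z"] @ z_in)                                     # update gate, Eq. (15)
    h_tilde = np.tanh(w["h"] @ np.concatenate([r_t * h_prev, x_t]))  # candidate state, Eq. (16)
    return (1.0 - z_t) * h_prev + z_t * h_tilde                      # hidden state update, Eq. (17)

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
w = {k: rng.normal(size=(n_hid, n_hid + n_in)) for k in "rzh"}
h = gru_step(rng.normal(size=n_in), np.zeros(n_hid), w)
```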
1.7 Ensemble Neural Networks (ENN)

An ensemble works by combining predictions from several models with different weaknesses, following two main methods: averaging the prediction results from all models, as in Eq. (18), or weighting each model's contribution by its initial weights and biases, as in Eq. (19). Pooling models in an ENN reduces the total prediction variance that can arise when one model produces predictions that differ significantly from another. In this way, the ensemble produces predictions that exploit the strengths and compensate for the weaknesses of its member models [21-24].

$y_{\text {ensemble }}=\frac{1}{N} \sum_{i=1}^N f_i(x)$
(18)
$y_{\text {ensemble }}=\frac{1}{N} \sum_{i=1}^N w_i f_i(x)$
(19)
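The two combination rules can be illustrated with a short Python sketch; the model predictions and the weights below are hypothetical values, and the weighted form follows Eq. (19) exactly, including the 1/N factor.

```python
import numpy as np

# Hypothetical predictions from three already-trained models (LSTM, GRU, RNN)
# for the same five time steps; the values are illustrative only.
preds = np.array([
    [1.02, 1.03, 1.05, 1.01, 1.04],   # LSTM
    [1.00, 1.04, 1.06, 1.02, 1.03],   # GRU
    [1.05, 1.01, 1.04, 1.00, 1.06],   # RNN
])

# Eq. (18): simple ensemble average over the N = 3 models
y_avg = preds.mean(axis=0)

# Eq. (19): weighted combination; the weights are assumed (e.g. from validation error)
w = np.array([0.5, 0.3, 0.2])
y_weighted = (w[:, None] * preds).sum(axis=0) / len(preds)   # includes the 1/N factor of Eq. (19)
print(y_avg, y_weighted)
```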
1.8 Differentiable Architecture Search (DARTS)

In neural networks, there is a process called initialization, in which model parameters such as weights and biases are determined before training begins. In practice, both initialization and architecture selection require considerable time and computing capacity. A solution is the Neural Architecture Search (NAS) approach, which uses gradient-based optimization to automatically search for the best architecture; this approach is known as DARTS (Differentiable Architecture Search) [25-27]. Here it serves as a technique for finding the optimal combination of the RNN, GRU, and LSTM networks. DARTS adjusts parameters such as the learning rate, the number of layers, and the number of units in each layer to achieve optimal performance, automating the design of the neural network architecture without time-consuming manual architecture searches.
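The sketch below illustrates, in a highly simplified form, the gradient-based architecture mixing idea behind DARTS as applied to RNN, GRU, and LSTM candidates: architecture weights alpha are relaxed with a softmax and trained by gradient descent together with the operation weights. This is an assumption-laden PyTorch illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MixedRecurrentCell(nn.Module):
    """Simplified DARTS-style mixture: candidate recurrent ops (RNN, GRU, LSTM)
    are blended with softmax-weighted architecture parameters alpha, learned by
    gradient descent alongside the operation weights. Illustrative only."""
    def __init__(self, n_features, n_hidden):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.RNN(n_features, n_hidden, batch_first=True),
            nn.GRU(n_features, n_hidden, batch_first=True),
            nn.LSTM(n_features, n_hidden, batch_first=True),
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))  # architecture weights
        self.head = nn.Linear(n_hidden, 1)

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        outputs = [op(x)[0][:, -1, :] for op in self.candidates]   # last hidden state of each op
        mixed = sum(w * o for w, o in zip(weights, outputs))
        return self.head(mixed)

# Toy training step on random data (batch of 8 windows, 7 time steps, 3 features)
model = MixedRecurrentCell(n_features=3, n_hidden=16)
x = torch.randn(8, 7, 3)
y = torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()   # gradients flow to both the operation weights and alpha
```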

2. Methodology

In this research, the prediction method uses a deep learning ensemble of LSTM, GRU, and RNN neural networks. The goal is to produce high-accuracy predictions for planning future electricity use. To achieve this goal, a model is built that can adapt to and generalize across various data conditions. The study uses three data sets: energy use at peak loads, energy use at off-peak loads, and daily reactive energy of an apartment building. The research process is shown in Figure 3.

The research process starts from the Start stage, which marks the beginning of the analysis to increase the efficiency of electrical energy use. The first step is pre-processing, in which the raw data are cleaned, normalized, and prepared in the format used by the neural network models. Once the data are ready, the next step is to train the neural network models; these models learn patterns from historical data to understand how electrical equipment is used under various conditions. After training, the models are evaluated to measure their accuracy in the accuracy evaluation stage, and the evaluation results are stored in internal storage for further use. With the evaluated model, the final step is to produce prediction results: the model predicts energy use from new data, providing a picture of future electricity use.

Figure 3. Flow of the methodology

Figure 3 provides an overview of the flow of the research process, from pre-processing through to prediction results and data analysis.

2.1 Data Preprocessing

The actual data must be converted into time series form to capture temporal patterns, in accordance with the input format required by the LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and RNN models, as well as the ensemble model that combines them. The preprocessing steps are:

  1. Address missing data (missing values) and outliers to ensure good data quality.
  2. Normalize or standardize data so that all features are on the same scale.
  3. Time Series Arrangement: Data is organized in chronological order based on timestamp.
  4. Create a sliding window or sequence that combines several historical values as input to predict future values.
  5. Input Generation for the Model: For LSTM and GRU, the input is organized as a 3D tensor with dimensions [samples, timesteps, features] (a minimal sketch of this windowing follows the list).
  6. Dataset Division: Data is divided into training set, validation set, and testing set to ensure the model can be tested with data it has never seen before.
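A minimal sketch of steps 2-6, assuming daily data with three normalised features, is shown below; the array contents are random placeholders standing in for the actual measurements.

```python
import numpy as np

def make_windows(series, n_steps):
    """Build a sliding-window dataset: each sample holds n_steps historical
    rows and the target is the next value of the first feature."""
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps, 0])
    return np.array(X), np.array(y)

# Assumed array of daily, min-max normalised measurements with 3 features
# (peak-load use, off-peak use, reactive use); 365 days of toy data.
rng = np.random.default_rng(0)
data = rng.random((365, 3))

X, y = make_windows(data, n_steps=7)      # X has shape [samples, timesteps, features]
n = len(X)
X_train, y_train = X[: int(0.7 * n)], y[: int(0.7 * n)]
X_val, y_val = X[int(0.7 * n): int(0.85 * n)], y[int(0.7 * n): int(0.85 * n)]
X_test, y_test = X[int(0.85 * n):], y[int(0.85 * n):]
print(X_train.shape)   # (250, 7, 3) for this toy split
```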
2.2 Building a Neural Network Model

This research is based on a prediction method that uses an ensemble of LSTM, Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU) models, treating the ensemble as a model that adopts the strengths and limitations of the LSTM, RNN, and GRU models. To ensure accurate and fast learning convergence, the internal architecture of the model is initialized using a differentiation-based search, DARTS. The method for automatically searching for the model architecture with a differentiable approach is as follows:

Defining Candidate Architectures includes structure, number of layers, type of activation function, or type of connection between layers.

Determination of the Search Space includes candidate architectures including layer type, number of layers, relationships between layers, and other relevant parameters.

Defining Objective Function: Define the objective function or evaluation criteria to evaluate the quality of each candidate's architecture.

Optimization with Gradient Descent: Use optimization algorithms such as stochastic gradient descent (SGD) or other variants to find optimal architecture.

Model Implementation in PyTorch: Implement the candidate architectures in the PyTorch framework, because this Python library supports differentiable computation.

2.3 Training and Data Prediction

Each model is trained on the training data through parameter initialization, optimization, and overfitting handling. Initial parameters such as the number of hidden layers, batch size, and number of epochs are set, and an optimization algorithm such as Adam is used to minimize the loss, which is evaluated with metrics such as Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), and the coefficient of determination (R2). To prevent overfitting, techniques such as dropout and early stopping are used.
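A possible PyTorch sketch of this training setup, with Adam, dropout, and early stopping on the validation loss, is shown below. The layer sizes, hyperparameters, and the train_dl/val_dl data loaders (iterables of input/target batches) are assumptions for illustration, not the study's actual configuration.

```python
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    """Small GRU regressor with dropout; sizes are assumed for illustration."""
    def __init__(self, n_features=3, n_hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, n_hidden, batch_first=True)
        self.drop = nn.Dropout(0.2)
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, x):
        h, _ = self.gru(x)
        return self.out(self.drop(h[:, -1, :]))

def train(model, train_dl, val_dl, epochs=200, patience=10):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    best_val, wait = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(xb), yb)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():
            val = sum(nn.functional.mse_loss(model(xb), yb).item() for xb, yb in val_dl)
        if val < best_val:                 # early stopping on validation loss
            best_val, wait = val, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return model
```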

2.4 Correlation Between Variables

The distribution pattern of the data points is examined to determine whether there is a linear or non-linear correlation, using the Pearson correlation coefficient to measure the strength and direction of the relationship. Understanding the dynamics of the interaction between reactive power consumption and power consumption during peak and off-peak periods is crucial for effective operational planning and efficient electricity use. Scatter plots are used to examine the relationship patterns between variables at specific times, and the Pearson correlation coefficient measures the strength and direction of the linear relationship between actual and predicted data. The formula is [28]:

$\mathrm{R}=\mathrm{r}_{\mathrm{xy}}=\frac{\sum\left(x_a-\bar{x}\right)\left(y_i-\bar{y}\right)}{\sqrt{\sum\left(x_a-\bar{x}\right)^2 \sum\left(y_i-\bar{y}\right)^2}}$
(20)

In this formula, $x_a$ is the actual data value, $\bar{x}$ is the mean of the actual values, $y_i$ is the predicted data value, and $\bar{y}$ is the mean of the predicted values. The Pearson correlation coefficient measures the strength and direction of the linear relationship between two variables and lies in the range -1 to +1. A value close to +1 indicates a strong positive relationship, a value close to -1 indicates a strong negative relationship, and a value close to 0 indicates no linear relationship. The Pearson correlation coefficient therefore provides important information about the extent to which two variables are correlated across different data distributions.
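A small Python sketch of Eq. (20) is given below; the sample values are taken from the Table 3 excerpt purely for illustration.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient, Eq. (20)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

# Illustrative series drawn from the Table 3 excerpt
reactive = [1.02, 1.02, 1.06, 1.02, 1.04, 1.02]
peak_load = [0.72, 0.74, 0.69, 0.71, 0.78, 0.70]
print(round(pearson_r(reactive, peak_load), 3))   # same result as np.corrcoef(...)[0, 1]
```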

Correlation and scatter plots are powerful tools for understanding the relationship between two variables. Correlation provides a numerical measure, while scatter plots offer an easy-to-understand visual representation. Together, they help data analysts identify and understand patterns in the data more effectively. The negative correlation is indicated by points trending downward from the top left to the bottom right, showing a negative relationship between variables. The positive correlation is indicated by points trending upward from the bottom left to the top right, showing a positive relationship between variables. If the points are scattered randomly without a clear pattern, it indicates no correlation between the variables [29, 30].

3. Evaluation of the Performance of the Prediction Results

The building used for data acquisition is located in Jakarta, Indonesia, at 6.15343 degrees South latitude and 106.79633 degrees East longitude.

The data were acquired from energy measurements using kWh meters and other measurement instruments, taken over one year at 8:00 PM every evening on the medium-voltage side (medium-voltage substation).

Energy is measured with two kWh meters: kWh meter 1 measures energy consumption during peak loads, and kWh meter 2 measures energy consumption during off-peak loads. An excerpt of the dataset is displayed in Table 3.

Table 3. Predict result data

Date | Peak Load | Off-Peak Load | Reactive Power
01/01/2022 | 0.72 | 3.06 | 1.02
02/01/2022 | 0.74 | 3.06 | 1.02
03/01/2022 | 0.69 | 3.06 | 1.06
04/01/2022 | 0.71 | 2.99 | 1.02
…… | …… | …… | ……
07/01/2022 | 0.78 | 3.05 | 1.04
08/01/2022 | 0.7 | 2.84 | 1.02

For deep learning training, current, voltage, and reactive current data are also used in addition to the variables shown in Table 3.

However, because the time series data to be processed and predicted are energy consumption during peak load, energy consumption outside peak load times, and reactive power consumption, only these three variables are visualized.

After the models are trained and predictions are produced, they are evaluated on the validation and testing data using metrics such as RMSE and Mean Absolute Error (MAE), whose formulas are as follows [31, 32]:

$R M S E=\sqrt{\frac{1}{n} \sum_{i=1}^n\left(X_i-\hat{X}_i\right)^2}$
(21)
$M A E=\frac{1}{n} \sum_{i=1}^n\left|Y_i-\hat{Y}_i\right|$
(22)

In general, MAE is easier to interpret because it represents a simple average error, while RMSE imposes a larger penalty for larger errors by squaring the errors before averaging. Together, these two metrics provide a comprehensive assessment of the accuracy and reliability of the prediction model.
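For reference, a minimal Python sketch of Eqs. (21) and (22) is shown below, applied to hypothetical actual and predicted values.

```python
import numpy as np

def rmse(actual, predicted):
    """Root mean square error, Eq. (21)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mae(actual, predicted):
    """Mean absolute error, Eq. (22)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs(actual - predicted))

# Hypothetical actual vs. predicted reactive-energy values (illustration only)
actual = [1.02, 1.02, 1.06, 1.02]
predicted = [1.01, 1.03, 1.05, 1.02]
print(f"MAE={mae(actual, predicted):.4f}, RMSE={rmse(actual, predicted):.4f}")
```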

The values of these metrics for the predicted and actual peak-load energy consumption data are shown in Table 4.

Table 4. Performance of peak load data prediction results

Peak Load | ENSEMBLE | GRU | LSTM | RNN
MAE | 0.0037 | 0.0392 | 0.026 | 0.018
RMSE | 0.0142 | 0.063 | 0.014 | 0.039
MAPE | 0.498 | 5.863 | 3.879 | 2.629
R2 | 0.946 | 0.052 | 0.483 | 0.593

Table 4 shows that the ensemble has a small prediction error, indicating that the Ensemble model is more reliable in predicting energy use than the other models.

This model can capture data patterns more accurately, so it can provide more precise recommendations for operational planning and energy efficiency. This performance evaluation reinforces the Ensemble model's advantages in future electrical energy predictions.

Table 5. Performance of off-peak load data prediction results

Off-Peak Load | ENSEMBLE | GRU | LSTM | RNN
MAE | 0.0146 | 0.0182 | 0.0986 | 0.0720
RMSE | 0.0533 | 0.2391 | 0.1651 | 0.1516

Table 5 presents the prediction testing results for off-peak load consumption, evaluated using the MAE and RMSE metrics.

In Table 5, the Ensemble model consistently shows the best performance across all metrics, indicating a high level of accuracy in predicting electrical energy consumption during off-peak load times.

Thus, based on these evaluation results, it can be concluded that the Ensemble model performs best in predicting off-peak electrical energy consumption in the building.

Table 6. Performance of reactive power data prediction results

Metric | ENSEMBLE | GRU | LSTM | RNN
MAE | 0.005 | 0.043 | 0.043 | 0.0325
RMSE | 0.022 | 0.096 | 0.097 | 0.0914

In Table 6, the Ensemble model consistently performs best in all metrics used. The MAE value, which measures the absolute difference between predicted and actual values, shows a high level of accuracy for the ENSEMBLE model.

Likewise, the low RMSE indicates a small prediction error; the ENSEMBLE model achieves the lowest error values on both metrics.

Thus, the ENSEMBLE model has the best performance in predicting reactive energy consumption in buildings.

3.1 Visualization of Comparison of Actual Data with Predicted Data

Figure 4 compares the results of the three individual prediction models and the ensemble with actual data, confirming the ability of the DARTS-initialized ensemble model in data prediction. In the visualization of the actual reactive power data against the LSTM, GRU, RNN, and Ensemble predictions, the models other than the ensemble show less accurate results at certain data indices, whereas the ensemble predictions coincide with the actual data at every index.

Figure 4. Visualization of predicted and actual reactive power data
3.2 Visualization of Peak-Load Data and LSTM, GRU, RNN, and Ensemble Prediction Data

Figure 5 visualizes the comparison between the predicted results of all models with actual peak load energy consumption data. Figure 5 shows that the most superior prediction model is the ensemble model, characterized by the closeness of the predicted results to the actual data.

The visualization in Figure 6 shows that the ensemble model always follows the actual data movement, consistent with the evaluation metrics explained in the previous subsection.

Figure 5. Visualization of predicted and actual peak load consumption data
Figure 6. Visualization of predicted and actual off-peak load consumption data
3.3 Actual Reactive Power and GRU

Because two metered quantities drive increases in a building's energy costs, namely energy consumption during peak-load times and reactive power usage exceeding the threshold set by the electricity supplier, this part of the research focuses on the correlation between reactive power consumption and peak-load and off-peak-load consumption, both for past events and for predicted future events. These relationships are visualized using scatter plots and rolling-window methods. In the first stage, actual reactive power consumption is compared with the prediction models. Figure 7 compares the GRU prediction model with the actual data:

Figure 7. Comparison of GRU prediction model results and actual data on peak load consumption

The visualization shows that the GRU predictions still contain errors at several initial and middle data indices, so for this dataset the GRU model requires further consideration and other prediction models need to be examined. The comparison between actual reactive consumption data and the LSTM predictions is shown in the following figures.

Figure 8 shows that the predictions of the LSTM model also still contain errors at several index points, consistent with the MAE and RMSE values for LSTM shown in Table 6.

Figure 8. Comparison of LSTM prediction model results and actual data on Reactive power consumption
Figure 9. Comparison of RNN prediction model results and actual data on Reactive power consumption
Figure 10. Comparison of ensemble prediction model results and actual data on Reactive power consumption

As Figure 9 shows, like the other single prediction models, the RNN model is inaccurate at the beginning of the data index, so this model is not recommended for further analysis.

Figure 10 compares the actual reactive power consumption data with the ensemble predictions; the two curves remain close from start to finish. This shows that the ensemble predictions are very accurate, as also indicated by the evaluation metrics in Table 6.

4. Correlation Between Reactive Power Consumption During Peak Load and Off-Peak Load

Based on the prediction results obtained with the various models, the Ensemble gives the most accurate predictions compared with the LSTM, GRU, and RNN models, especially for reactive power consumption data. The next step is therefore to analyze the correlation between reactive power consumption and energy consumption during peak-load and off-peak-load times, both for past events and for what may occur in the future.

4.1 Correlation Within the Actual Dataset

Three correlation values describe the linear relationship between reactive power consumption and energy consumption during peak-load and off-peak-load times. The correlations are computed with the Pearson correlation formula (with the help of a Python library), starting with the correlation between reactive power consumption and energy consumption during peak loads.

Figure 11. Scatter diagram between peak load energy consumption and reactive power consumption

The scatter plot in Figure 11 shows a fairly linear relationship between power consumption at peak-load times and reactive power: when power consumption at peak-load times increases, reactive power also tends to increase. The correlation between reactive energy consumption and peak-load energy consumption is 0.292. This value indicates a weak positive relationship between reactive power consumption and electricity consumption at peak-load times: when reactive power increases, energy consumption during peak loads tends to increase slightly as well, but the relationship is not strong.

The correlation between reactive energy and energy consumption at off-peak load is 0.332. This indicates a weak to moderate positive relationship between reactive power consumption and electricity consumption at off-peak load: an increase in reactive power consumption tends to be accompanied by an increase in off-peak energy consumption, although this relationship is also not very strong.

Figure 12. Scatter diagram between peak load energy consumption and off-peak load consumption

The scatter of the data in Figure 12 shows that the relationship between reactive power consumption and both peak-load and off-peak-load consumption is relatively weak. This weak correlation indicates that changes in reactive power do not have a large impact on electricity consumption during peak or off-peak loads. There is still a slight positive relationship, however: as reactive power increases, electricity consumption in both periods also tends to increase slightly.

4.2 Correlation Between Ensemble Prediction Data

The ensemble prediction model initialized with DARTS outperforms the LSTM, GRU, and RNN prediction models, as shown by the MAE and RMSE values, for all data features: reactive power consumption, electrical energy consumption during peak-load times, and electrical energy consumption during off-peak-load times. The Ensemble predictions can therefore be used to support planning and energy efficiency efforts. The correlations between reactive power consumption and peak-load and off-peak-load consumption obtained from the ensemble predictions are as follows.

Figure 13 shows the predicted correlations between reactive power consumption and electrical energy consumption: approximately 0.295 and 0.334 for consumption during peak-load and off-peak-load times, indicating a low correlation between the predicted variables. Since these are Pearson correlations, values close to 0 would indicate no strong linear relationship; the fact that the values are not zero shows that there is still a correlation between reactive power consumption and consumption at peak and off-peak load.

Figure 13. Scatter diagram between peak load energy consumption and off-peak load consumption
Figure 14. Scatter diagram between peak load energy and off-peak load consumption result of ensemble prediction

Based on the ensemble predictions in Figure 14, the correlation between reactive power consumption at peak-load times and off-peak loads is stronger than in the past data. Likewise, the correlation between consumption during peak loads and off-peak loads is also stronger than in the actual (past) data, reflecting an increase in the consistency of electricity use in the future. A correlation between variables that is stronger than in the past indicates a changing pattern of electrical power use in the future.

5. Discussion

The Pearson coefficients computed from the actual dataset and from the ensemble-predicted dataset show similar values, confirming the reliability of the prediction model. With its accuracy demonstrated, the model can be used to plan more effective energy management strategies. With the prediction results, equipment that consumes large amounts of reactive energy (water pumps, blowers, and other equipment under the supervision of the building manager) can be scheduled to avoid peak-load times. Avoiding electricity use during peak-load times reduces operational costs.

In addition, understanding reactive energy consumption enables the implementation of effective reactive power compensation strategies to regulate the use of capacitor banks, which increases the efficiency of electricity distribution and reduces energy losses. Reducing active and reactive energy consumption contributes directly to reducing carbon emissions, as electrical energy production is often linked to the burning of fossil fuels. Accurate prediction models also help plan more efficient use of renewable energy, such as solar or wind power, which can replace some fossil energy consumption and reduce environmental impacts. Thus, these correlation results provide a better understanding of the relationship between reactive energy consumption and electrical energy consumption during peak load times and outside peak load times, which can be used to optimize energy use and improve energy efficiency in building load management.


6. Conclusions

The use of an ensemble of LSTM, GRU, and RNN neural networks has proven effective in improving predictions of electrical energy use in apartment buildings. The developed prediction model provides accurate results, enabling more efficient planning in energy management. The correlation between reactive energy consumption and peak and off-peak loads indicates a weak to moderate relationship between these factors. This provides important insights into electrical energy usage patterns in apartment buildings, which can be optimized to improve energy efficiency.

By understanding the relationship between reactive energy consumption and electrical energy consumption during peak load periods and outside peak load times, more efficient arrangements can be made in energy management. The use of accurate prediction models also allows renewable energy to replace fossil energy, reduce operational costs, and reduce environmental impacts. Thus, this article highlights the importance of advanced prediction technologies in improving energy efficiency in apartment buildings, as well as providing a foundation for developing more sustainable and environmentally friendly energy management strategies in the future.

References
[1] Malakouti, S.M. (2023). Prediction of wind speed and power with LightGBM and grid search: Case study based on Scada system in Turkey. International Journal of Energy Production and Management, 8(1): 35-40. [Crossref]
[2] Al-Hadeethi, R., Hacham, W.S. (2023). Reducing energy consumption in Iraqi campuses with passive building strategies: A case study at Al-Khwarizmi College of Engineering. International Journal of Energy Production and Management, 8(3): 177-186. [Crossref]
[3] Zalloom, B. (2023). Towards a sustainable design: integrating spatial planning with energy planning when designing a university campus. International Journal of Energy Production and Management, 8(2): 115-122. https://doi.org/10.18280/ijepm.080208
[4] Hans, M., Jogi, V. (2019). Peak load scheduling in smart grid using cloud computing. Bulletin of Electrical Engineering and Informatics, 8(4): 1525-1530. [Crossref]
[5] Kim, S., Kim, H. (2016). A new metric of absolute percentage error for intermittent demand forecasts. International Journal of Forecasting, 32(3): 669-679. [Crossref]
[6] Bolboacă, R., Haller, P. (2023). Performance analysis of long short-term memory predictive neural networks on time series data. Mathematics, 11(6): 1432. [Crossref]
[7] Abdulrahman, M.L., Ibrahim, K.M., Gital, A.Y., Zambuk, F.U., Ja’afaru, B., Yakubu, Z.I., Ibrahim, A. (2021). A review on deep learning with focus on deep recurrent neural network for electricity forecasting in residential building. Procedia Computer Science, 193: 141-154. [Crossref]
[8] Blasco, P.A., Montoya-Mira, R., Diez, J.M., Montoya, R., Reig, M.J. (2019). Compensation of reactive power and unbalanced power in three-phase three-wire systems connected to an infinite power network. Applied Sciences, 10(1): 113.
[9] Mbinkar, E.N., Asoh, D.A., Mungwe, J.N., Songyuh, L., Lamfu, E. (2022). Management of peak loads in an emerging electricity market. Energy Engineering, 119(6): 2637-2654. [Crossref]
[10] Hewamalage, H., Bergmeir, C., Bandara, K. (2021). Recurrent neural networks for time series forecasting: Current status and future directions. International Journal of Forecasting, 37(1): 388-427. [Crossref]
[11] Zainuddin, Z., EA, P.A., Hasan, M.H. (2021). Predicting machine failure using recurrent neural network-gated recurrent unit (RNN-GRU) through time series data. Bulletin of Electrical Engineering and Informatics, 10(2): 870-878. [Crossref]
[12] Lareno B., Swastina, L. (2018). Short-term electrical load prediction using evolving neural network. IOP Conference Series: Materials Science and Engineering, 384(1): 01201. [Crossref]
[13] Al Mamun, A., Sohel, M., Mohammad, N., Sunny, M.S.H., Dipta, D.R., Hossain, E. (2020). A comprehensive review of the load forecasting techniques using single and hybrid predictive models. IEEE Access, 8: 134911-134939. [Crossref]
[14] Jaihuni, M., Basak, J.K., Khan, F., Okyere, F.G., et al. (2022). A novel recurrent neural network approach in forecasting short term solar irradiance. ISA Transactions, 121: 63-74. [Crossref]
[15] Wen, X., Li, W. (2023). Time series prediction based on LSTM-attention-LSTM model. IEEE Access, 11: 48322-48331. [Crossref]
[16] Seddik, S., Routaib, H., Elhaddadi, A. (2023). Multi-variable time series decoding with long short-term memory and mixture attention. Acadlore Transactions on Machine Learning Research, 2(3): 154-169. [Crossref]
[17] Bolboacă, R. (2022). Adaptive ensemble methods for tampering detection in automotive aftertreatment systems. IEEE Access, 10: 105497-105517. [Crossref]
[18] Shiang, E.P.L., Chien, W.C., Lai, C.F., Chao, H.C. (2020). Gated recurrent unit network-based cellular trafile prediction. In 2020 International Conference on Information Networking (ICOIN), San Francisco, CA, USA, pp. 471-476. [Crossref]
[19] Hao, Y., Sheng, Y., Wang, J. (2019). Variant gated recurrent units with encoders to preprocess packets for payload-aware intrusion detection. IEEE Access, 7: 49985-49998. [Crossref]
[20] Jadli, A., Lahmer, E.H.B. (2022). A novel LSTM-GRU-based hybrid approach for electrical products demand forecasting. International Journal of Intelligent Engineering & Systems, 15(3): 601-613. [Crossref]
[21] Li, X., Ma, X., Xiao, F., Wang, F., Zhang, S. (2020). Application of gated recurrent unit (GRU) neural network for smart batch production prediction. Energies, 13(22): 6121. [Crossref]
[22] Tan, K.L., Lee, C.P., Lim, K.M., Anbananthen, K.S.M. (2022). Sentiment analysis with ensemble hybrid deep learning model. IEEE Access, 10: 103694-103704. [Crossref]
[23] Choi, J.Y., Lee, B. (2018). Combining LSTM network ensemble via adaptive weighting for improved time series forecasting. Mathematical Problems in Engineering, 2018(1): 2470171. [Crossref]
[24] Samson, T.K., Aweda, F.O. (2024). Forecasting rainfall in selected cities of Southwest Nigeria: A comparative study of random forest and long short-term memory models. Acadlore Transactions on Geosciences, 3(2): 79-88. [Crossref]
[25] Ali, S., Wani, M.A. (2023). Gradient-based neural architecture search: A comprehensive evaluation. Machine Learning and Knowledge Extraction, 5(3): 1176-1194. [Crossref]
[26] Jan, S.T., Hao, Q., Hu, T., Pu, J., Oswal, S., Wang, G., Viswanath, B. (2020). Throwing darts in the dark? detecting bots with limited data using neural data augmentation. In 2020 IEEE symposium on security and privacy (SP), San Francisco, CA, USA, pp. 1190-1206. [Crossref]
[27] Qin, D., Wang, C., Wen, Q., Chen, W., Sun, L., Wang, Y. (2023). Personalized federated darts for electricity load forecasting of individual buildings. IEEE Transactions on Smart Grid, 14(6): 4888-4901. [Crossref]
[28] Schober, P., Boer, C., Schwarte, L.A. (2018). Correlation coefficients: Appropriate use and interpretation. Anesthesia & Analgesia, 126(5): 1763-1768. [Crossref]
[29] Srivastav, M.K. (2001). Study of correlation theory with different views and methods among variables in mathematics. Ratio, 5(2): 21-23. https://www.ijmsi.org/Papers/Volume.5.Issue.2/E05022123.pdf.
[30] Šindelka, M. (2022). An Introduction to Scattering Theory. http://arxiv.org/abs/2204.03651.
[31] Chicco, D., Warrens, M.J., Jurman, G. (2021). The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. Peerj Computer Science, 7: e623. [Crossref]
[32] Chai, T., Draxler, R.R. (2014). Root mean square error (RMSE) or mean absolute error (MAE)?–Arguments against avoiding RMSE in the literature. Geoscientific Model Development, 7(3): 1247-1250. [Crossref]
Nomenclature

DARTS: Differentiable Architecture Search
ENN: Ensemble Neural Network
GRU: Gated Recurrent Unit
LSTM: Long Short-Term Memory
MAE: Mean Absolute Error
NN: Neural Network
RMSE: Root Mean Square Error
RNN: Recurrent Neural Network
SGD: Stochastic Gradient Descent
Tanh: Hyperbolic tangent
kWh: Kilowatt hour

Greek symbols

$\sigma$: Sigmoid function
$\emptyset$: Angle between active and apparent power
$\alpha$: Weights and biases in the ensemble model
$f_t$: Forget gate
$f_i$: The i-th model in the ensemble
$w_f$: Weight of the forget gate
$x_a$: Actual data
$i_t$: Input gate
$\tilde{C}_t$: Candidate cell state
$C_t$: Updated cell state
$O_t$: Output gate
$w_z$: Weight of the update gate
$r_t$: Reset gate
$h_t$: Hidden state
S: Apparent power, VA
P: Active power, W
Q: Reactive power, kVAR
$w_i$: Weight of the i-th model
$x_t$: Input data
V: Voltage, V
I: Current, A
R: Correlation coefficient

