22 Short-term Heating Demand Forecasting at Building Level
22.1 Rationale & Link to BEYOND Apps
Predicting heating demand in buildings is crucial for understanding and analysing their overall energy behaviour over time, across seasons and years. Within the scope of the BEYOND project, heating demand forecasting is directly linked to the simulation-based analysis of performance and metrics in electricity and district heating networks (BEPO performance monitoring tool). Moreover, heating demand forecasting can be of significant use to the FLEXOpt tool for configuring a VPP appropriately for day-ahead (electricity/flexibility) market requests.
22.2 Overview of relevant implementations
Heat energy demand forecasting is vital for optimal, efficient, and smart energy management. The authors of [1] compared ARIMA, exponentially weighted moving average (EWMA), linear regression and artificial neural network (ANN) models for thermal load forecasting and concluded that, of the models contrasted, the ANN produced the most accurate results. A number of studies have confirmed the applicability of ANNs to heat demand forecasting, both at the domestic level [2] and for whole district heating (DH) networks [3], [4]. Other methods proposed in the literature include support vector regression [5] and multiple regression [6]. In recent years, the long short-term memory (LSTM) deep learning architecture [7] has also been applied to heating load prediction in residential buildings.
22.3 Implementation in BEYOND
For the implementation in the BEYOND project we have chosen to use the LSTM method.
22.3.1 Data Inputs and Analytics Pipeline (incl. assumptions/limitations)
The dataset used in the analysis is taken from the “CU-BEMS smart building electricity consumption and indoor environmental sensor datasets”. It contains data from a seven-story, 11,700 m² office building located in Bangkok, Thailand. We selected the electricity consumption (kW) of the air-conditioning units on the second floor for the year 2019. The full datasets are available at one-minute intervals for a total period of 18 months, from July 1, 2018 to December 31, 2019.
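As an illustration of this selection step, the sketch below loads one floor file with pandas and aggregates the air-conditioning channels into a single consumption series. The file name and the column-name pattern are assumptions about the CSV layout and may need adapting to the published dataset.

```python
import pandas as pd

# Load the 2019 floor-2 file of the CU-BEMS dataset (file name and column
# naming are illustrative assumptions about the CSV layout).
df = pd.read_csv("2019Floor2.csv", parse_dates=["Date"], index_col="Date")

# Keep only the air-conditioning power channels (kW) and aggregate them
# into one floor-level consumption series.
ac_cols = [c for c in df.columns if "AC" in c and "(kW)" in c]
ac_load = df[ac_cols].sum(axis=1).rename("ac_kw")
```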
The steps of the analytics pipeline are explained below; indicative code sketches for the main steps are given after the list.
1. The measurements in the dataset are given at 1-minute timestamps, so for our use case they are resampled to 1-hour intervals.
2. Lag features from the past day (1×24 data points) and forward features for the next day (1×24 data points) are then created and provided as input to the network, with the aim of predicting the next 24 values (hours).
3. After shifting the values to create the lag features, interpolation is used to deal with the resulting NAs.
4. Training and testing split: two days are chosen as the test case, one whose values serve as the previous-day input and one for comparison with the predicted values. The remaining samples are split into training (80%) and validation (20%) sets.
5. A Min-Max scaler is used to scale the training and validation data into the interval (0, 1), both because the raw value range is wide and to facilitate the training procedure, since the activation function produces outputs in this range. The training and validation sets are first scaled together and then split, while the test set is scaled separately, so that no information is exchanged between the training and test data.
6. Network parameters such as the number of cells (e.g. 100), the activation function (e.g. tanh/ReLU), the dropout rate (e.g. 0.1 or 0.2), etc., can be decided either from previous training of the model on premises or by fine-tuning with a grid-search procedure. Adam is used as the optimizer.
7. Predictions are made on the test set, followed by inversion back to the original scale.
8. For the evaluation of the model, the mean squared error (MSE), root mean squared error (RMSE) and mean absolute percentage error (MAPE) are used.
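A minimal sketch of steps 1–3, assuming an hourly-resampleable pandas Series such as the hypothetical `ac_load` built in the loading sketch above; the column names and the choice of mean aggregation are illustrative.

```python
import pandas as pd

N_LAGS = 24   # past day used as network input
N_LEADS = 24  # next day to be predicted

# Step 1: resample the 1-minute measurements to hourly values (mean power in kW).
hourly = ac_load.resample("1H").mean()

# Step 2: lag columns hold the previous 24 hours (inputs) in chronological
# order, lead columns hold the next 24 hours (targets).
cols = {}
for i in range(N_LAGS, 0, -1):
    cols[f"lag_{i}"] = hourly.shift(i)
for i in range(N_LEADS):
    cols[f"lead_{i}"] = hourly.shift(-i)
frame = pd.DataFrame(cols)

# Step 3: interpolate the NAs introduced by the shifts (and any sensor gaps).
frame = frame.interpolate(limit_direction="both")
```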
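Continuing the same sketch, steps 4 and 5 could look as follows; the exact slicing of the two test days at the end of the series is illustrative, and the separate scaler for the test set mirrors the description above.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Step 4: hold out the last two days as the test case (previous-day inputs and
# the final day for comparison), then split the rest into train/validation.
test = frame.iloc[-2 * N_LEADS:]
rest = frame.iloc[:-2 * N_LEADS]
split = int(len(rest) * 0.8)
train, val = rest.iloc[:split], rest.iloc[split:]

# Step 5: fit one Min-Max scaler on train+validation together, then split;
# the test set gets its own scaler so the sets share no information.
scaler = MinMaxScaler(feature_range=(0, 1))
train_val_scaled = scaler.fit_transform(pd.concat([train, val]))
train_scaled, val_scaled = train_val_scaled[:split], train_val_scaled[split:]
test_scaler = MinMaxScaler(feature_range=(0, 1))
test_scaled = test_scaler.fit_transform(test)
```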
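For step 6, a single-layer Keras LSTM with the parameters quoted in the text (100 cells, tanh activation, dropout) and the Adam optimizer; the input reshaping, number of epochs and batch size are illustrative choices that would normally be tuned, e.g. with a grid search.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

def to_xy(a):
    """Split a scaled matrix into LSTM inputs (24 lags) and targets (24 leads)."""
    X = a[:, :N_LAGS].reshape(-1, N_LAGS, 1)   # (samples, timesteps, features)
    y = a[:, N_LAGS:]
    return X, y

X_train, y_train = to_xy(train_scaled)
X_val, y_val = to_xy(val_scaled)

# Step 6: one LSTM layer with 100 cells, tanh activation and dropout 0.2,
# followed by a dense layer that outputs the next 24 hourly values.
model = Sequential([
    LSTM(100, activation="tanh", input_shape=(N_LAGS, 1)),
    Dropout(0.2),
    Dense(N_LEADS),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, batch_size=32)
```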
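Finally, steps 7 and 8, still continuing the same sketch: predictions on the test inputs are inverted back to kW using the per-column minima/maxima stored by the test scaler, and the error metrics are computed with scikit-learn. Note that MAPE can become unstable when the actual load is close to zero (e.g. at night).

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_percentage_error

# Step 7: predict on the test inputs and invert the (0, 1) scaling of the
# 24 target columns back to the original kW scale.
X_test, y_test = to_xy(test_scaled)
pred_scaled = model.predict(X_test)
t_min = test_scaler.data_min_[N_LAGS:]
t_max = test_scaler.data_max_[N_LAGS:]
pred = pred_scaled * (t_max - t_min) + t_min
actual = y_test * (t_max - t_min) + t_min

# Step 8: evaluation with MSE, RMSE and MAPE.
mse = mean_squared_error(actual, pred)
rmse = np.sqrt(mse)
mape = mean_absolute_percentage_error(actual, pred)
print(f"MSE={mse:.3f}  RMSE={rmse:.3f}  MAPE={mape:.2%}")
```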
22.3.2 Analytics Libraries Employed
The libraries employed for this algorithm are:
- pandas
- NumPy
- scikit-learn (sklearn)
- TensorFlow (tensorflow.keras)
References
[1] M. Kawashima, C. E. Dorgan, and J. W. Mitchell, “Hourly thermal load prediction for the next 24 hours by ARIMA, EWMA, LR, and an artificial neural network,” American Society of Heating, Refrigerating and Air-Conditioning Engineers, 1995.
[2] V. Bakker, A. Molderink, J. L. Hurink, and G. J. M. Smit, “Domestic heat demand prediction using neural networks,” Systems Engineering, pp. 189–194, 2008.
[3] W. Schellong and F. Hentges, “Forecast of the heat demand of a district heating system,” Proceedings of European Power and Energy Systems, pp. 383–388, 2007.
[4] K. Wojdyga, “Predicting Heat Demand for a District Heating Systems,” Int. J. Energy Power Eng., vol. 3, no. 5, p. 237, 2014.
[5] L. Wu, G. Kaiser, D. Solomon, R. Winter, A. Boulanger, and R. Anderson, “Improving efficiency and reliability of building systems using machine learning and automated online evaluation,” in Proc. IEEE Long Island Syst., Appl. Technol. Conf., 2012, pp. 1–6.
[6] T. Catalina, V. Iordache, and B. Caracaleanu, “Multiple regression model for fast prediction of the heating energy demand,” Energy Build., vol. 57, pp. 302–312, Feb. 2013.
[7] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997. [Online]. Available: http://dx.doi.org/10.1162/neco.1997.9.8.1735