AJUNTAMENT D'ALCOI
Website
Generalitat Valenciana
Website
Ayuntamiento de Valencia
Website
Cicloplast
Website
Ayuntamiento de Onil
Website
Anarpla
Website
Ayuntamiento de Mislata
Website
NLWA, North London Waste Authority
Website
Ayuntamiento de Salinas
Website
Zicla
Website
Fondazione Ecosistemi
Website
PEFC
Website
ALQUIENVAS
Website
DIPUTACIÓ DE VALÈNCIA
Website
AYUNTAMIENTO DE REQUENA
Website
UNIVERSIDAD DE ZARAGOZA
Website
OBSERVATORIO CONTRATACIÓN PÚBLICA
Website
AYUNTAMIENTO DE PAIPORTA
Website
AYUNTAMIENTO DE CUENCA
Website
BERL� S.A.
Website
CM PLASTIK
Website
TRANSFORMADORES INDUSTRIALES ECOLÓGICOS
INDUSTRIAS AGAPITO
Website
RUBI KANGURO
Website
If you want to support our LIFE project as a STAKEHOLDER, please contact us at: life-future-project@aimplas.es
In this section, you can access the latest technical information related to the FUTURE project topic.
Optimal control method of HVAC based on multi-agent deep reinforcement learning
In HVAC control problems, model-based optimal control methods have been extensively studied and verified by many researchers, but they depend heavily on the accuracy of the model, large amounts of historical data, and the deployment of many different sensors. To address these problems, this paper proposes a Multi-Agent deep reinforcement learning method for building Cooling Water System Control (MA-CWSC) that optimizes the load distribution among different types of chillers, the cooling tower fan frequency, and the cooling water pump frequency, providing a model-free, online learning mechanism. Unlike traditional reinforcement learning methods, the proposed control method uses five agents that learn in parallel, each controlling a different controllable part, which greatly reduces the action space and speeds up convergence. To verify the effectiveness of the proposed method, an experimental building cooling water system model was constructed from actual building cooling water system parameters and related historical data, and the MA-CWSC method was tested against a model-based optimal control method, a rule-based baseline method, and single-agent deep reinforcement learning (deep Q-network). The experimental results show that the energy-saving performance of the proposed MA-CWSC method is significantly better than that of the rule-based control method (11.1% improvement) and very close to that of the model-based control method (only a 0.5% difference). In addition, the MA-CWSC method learns faster than the deep Q-network (DQN) control method.
» Authors: Qiming Fu, Xiyao Chen, Shuai Ma, Nengwei Fang, Bin Xing, Jianping Chen
» Publication Date: 01/09/2022
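The abstract above rests on one core idea: instead of a single agent choosing a joint action over every controllable device (whose combinations grow combinatorially), five agents each learn a Q-function over their own small action set while observing the same plant state and sharing the same reward. The sketch below illustrates that decomposition with a simplified multi-agent DQN in PyTorch. It is a minimal illustration only: the state dimension, the per-agent action counts, the placeholder environment dynamics, and the toy reward are assumptions for the example, not the authors' implementation.

```python
# Minimal multi-agent DQN sketch in the spirit of MA-CWSC: each agent controls one
# part of the cooling water system (e.g. chiller load split, tower fan frequency,
# pump frequency) with its own small discrete action space. All names, dimensions
# and the toy reward below are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 6                         # assumed: e.g. wet-bulb temp, load, flows
ACTIONS_PER_AGENT = [5, 5, 7, 7, 9]   # five agents, each with a small action set
GAMMA, LR, BATCH = 0.95, 1e-3, 64


class QNet(nn.Module):
    """Small per-agent Q-network mapping the shared state to action values."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, s):
        return self.net(s)


class Agent:
    """One DQN agent for one controllable subsystem, learning in parallel."""

    def __init__(self, n_actions: int):
        self.q = QNet(n_actions)
        self.opt = torch.optim.Adam(self.q.parameters(), lr=LR)
        self.buffer = deque(maxlen=10_000)
        self.n_actions = n_actions

    def act(self, state, eps=0.1):
        if random.random() < eps:          # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        with torch.no_grad():
            return int(self.q(torch.tensor(state).float()).argmax())

    def learn(self):
        if len(self.buffer) < BATCH:
            return
        s, a, r, s2 = zip(*random.sample(self.buffer, BATCH))
        s = torch.tensor(s).float()
        s2 = torch.tensor(s2).float()
        a = torch.tensor(a).long().unsqueeze(1)
        r = torch.tensor(r).float()
        q_sa = self.q(s).gather(1, a).squeeze(1)
        with torch.no_grad():              # simplified: no separate target network
            target = r + GAMMA * self.q(s2).max(dim=1).values
        loss = F.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()


# Five agents, one per controllable part. A real setup would replace the random
# placeholder environment below with a cooling water system simulation or plant data.
agents = [Agent(n) for n in ACTIONS_PER_AGENT]
state = [0.0] * STATE_DIM
for step in range(1000):
    actions = [ag.act(state) for ag in agents]                 # parallel decisions
    next_state = [random.random() for _ in range(STATE_DIM)]   # placeholder dynamics
    reward = -sum(actions) * 0.01                              # placeholder energy penalty
    for ag, a in zip(agents, actions):
        ag.buffer.append((state, a, reward, next_state))
        ag.learn()
    state = next_state
```

In a real deployment the shared reward would come from measured plant energy use, and a target network and replay warm-up would normally be added; the point of the sketch is only the action-space decomposition, where each agent's network outputs at most nine Q-values instead of one network covering every joint combination.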
C/ Gustave Eiffel, 4
(València Parc Tecnològic) - 46980
PATERNA (Valencia) - SPAIN
(+34) 96 136 60 40
Project Management department - Sustainability and Industrial Recovery
life-future-project@aimplas.es