A reinforcement learning-based demand response strategy designed from the Aggregator's perspective

dc.contributor.author Oh, Seongmun
dc.contributor.author Jung, Jaesung
dc.contributor.author Onen, Ahmet
dc.contributor.author Lee, Chul-Ho
dc.contributor.authorID 0000-0001-7086-5112 en_US
dc.contributor.department AGÜ, Faculty of Engineering, Department of Electrical and Electronics Engineering en_US
dc.contributor.institutionauthor Önen, Ahmet
dc.date.accessioned 2023-02-22T12:44:38Z
dc.date.available 2023-02-22T12:44:38Z
dc.date.issued 2022 en_US
dc.description.abstract The demand response (DR) program is a promising way to improve the balance between supply and demand while optimizing the economic efficiency of the overall system. This study focuses on the DR participation strategy from the perspective of aggregators, who offer appropriate DR programs to customers with flexible loads. DR aggregators engage in the electricity market according to customer behavior and must make decisions that increase the profits of both the aggregators and their customers. Customers participate through a DR program model that reports their demand reduction capabilities to a DR aggregator, which bids the aggregated demand reduction into the electricity market. The DR aggregator not only determines the optimal incentive rate to offer customers but can also serve them by formulating an optimal energy storage system (ESS) operation to reduce their demand. This study formalizes the problem as a Markov decision process (MDP) and solves it within a reinforcement learning (RL) framework, in which the DR aggregator and each customer are modeled as separate agents that interact with the environment and are trained to make optimal decisions. The proposed method was validated using actual industrial and commercial customer demand profiles and market price profiles from South Korea. Simulation results demonstrated that the proposed method can optimize decisions from the perspective of the DR aggregator. en_US
dc.description.sponsorship National IT Industry Promotion Agency (NIPA), Republic of Korea 1711151479 en_US
dc.identifier.endpage 13 en_US
dc.identifier.issn 2296-598X
dc.identifier.other WOS:000862291500001
dc.identifier.startpage 1 en_US
dc.identifier.uri https://doi.org/10.3389/fenrg.2022.957466
dc.identifier.uri https://hdl.handle.net/20.500.12573/1452
dc.identifier.volume 10 en_US
dc.language.iso eng en_US
dc.publisher FRONTIERS MEDIA SA en_US
dc.relation.isversionof 10.3389/fenrg.2022.957466 en_US
dc.relation.journal FRONTIERS IN ENERGY RESEARCH en_US
dc.relation.publicationcategory Article - International Peer-Reviewed Journal - Institutional Faculty Member en_US
dc.rights info:eu-repo/semantics/openAccess en_US
dc.subject reinforcement learning en_US
dc.subject energy storage system en_US
dc.subject demand response en_US
dc.subject aggregator en_US
dc.subject electricity market en_US
dc.title A reinforcement learning-based demand response strategy designed from the Aggregator's perspective en_US
dc.type article en_US
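
The abstract describes formulating the aggregator's problem as a Markov decision process (MDP) and training the DR aggregator and each customer as reinforcement learning agents. The following Python sketch is a minimal illustration of that kind of setup, assuming a tabular Q-learning aggregator agent that chooses an incentive level against a toy environment; the class names, state/action discretization, and reward model are hypothetical and are not taken from the paper or this record.

import numpy as np

# Minimal illustrative sketch of an RL-based DR aggregator agent.
# All names, the state/action definitions, and the reward model are
# hypothetical simplifications, not the authors' formulation.

N_PRICE_LEVELS = 5      # discretized market-price states
N_INCENTIVE_LEVELS = 4  # discretized incentive rates the aggregator can offer

class AggregatorAgent:
    """Tabular Q-learning agent that picks an incentive level per time step."""

    def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = np.zeros((N_PRICE_LEVELS, N_INCENTIVE_LEVELS))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # epsilon-greedy exploration over incentive levels
        if np.random.rand() < self.epsilon:
            return np.random.randint(N_INCENTIVE_LEVELS)
        return int(np.argmax(self.q[state]))

    def update(self, state, action, reward, next_state):
        # standard Q-learning temporal-difference update
        target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state, action] += self.alpha * (target - self.q[state, action])

def toy_environment_step(price_state, incentive_level):
    """Hypothetical environment: a higher incentive elicits more demand
    reduction from customers but also costs the aggregator more."""
    reduction = 10.0 * (incentive_level + 1)                   # kWh reduced (toy model)
    market_revenue = (price_state + 1) * reduction             # paid by the market
    incentive_cost = 0.6 * (incentive_level + 1) * reduction   # paid to customers
    reward = market_revenue - incentive_cost
    next_price_state = np.random.randint(N_PRICE_LEVELS)       # prices drawn at random
    return reward, next_price_state

if __name__ == "__main__":
    agent = AggregatorAgent()
    state = np.random.randint(N_PRICE_LEVELS)
    for step in range(5000):
        action = agent.act(state)
        reward, next_state = toy_environment_step(state, action)
        agent.update(state, action, reward, next_state)
        state = next_state
    print("Learned incentive level per price state:", agent.q.argmax(axis=1))

In this toy setting, higher market-price states make larger incentives worthwhile, so the printed policy should assign higher incentive levels to higher price states.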

Files

Original bundle

Name: fenrg-10-957466.pdf
Size: 2.87 MB
Format: Adobe Portable Document Format
Description: Article file

License bundle

Name: license.txt
Size: 1.44 KB
Format: Item-specific license agreed to upon submission
Description: