Ph.D. Proposal Oral Exam - Styliani Kampezidou

Event Details

Friday, October 4, 2019

1:00pm - 3:00pm

3rd floor conference room, CoVE


Title: Game Theory and Machine Learning Based Energy Trading

Committee:
Dr. Mavris, Advisor
Dr. Romberg, Co-Advisor
Dr. Vachtsevanos, Chair
Dr. Vamvoudakis


The objective of the proposed research is to provide an energy trading mechanism for distributed loads, enabled by a recent change in market operators' policy: with the presence of an intermediate entity, the aggregator, distributed loads can now sell energy and related services to the market. Acting as a broker, the aggregator creates a new market, which we design as a Stackelberg competition. By offering price-based incentives, the aggregator profits from selling the energy at a higher price (price arbitrage). The introduced competition enhances the overall system's operation by reshaping the demand profile, providing ancillary services to the grid, reducing network congestion risk (which is correlated with marginal price increases), and contributing to carbon dioxide emission reduction.

We propose a nonzero-sum Stackelberg game and provide guarantees for the existence of the game's equilibria. The structure of the game allows us to extract the players' strategies in closed form. Our contribution is a game-theoretic framework that allows for both purchasing and selling of energy on the market using price-based demand response. We guarantee non-negative payoffs and give customers the option to opt out. Our solution has theoretical guarantees on feasibility and optimality.

However, uncertainty in the market, such as changes in players' strategies, the introduction of new players, demand privacy, and the market-clearing mechanism, limits our offline approach. Our future research will incorporate uncertainty into these games using online adaptive methods that converge approximately to the offline solution. Because of the nature of these uncertainties, we are driven toward a model-free reinforcement learning approach that can balance exploitation of past knowledge with exploration of new strategies.
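To illustrate the leader-follower structure described above, the sketch below works through a minimal single-leader, single-follower Stackelberg pricing game with closed-form strategies. The quadratic consumer utility, the parameters `a`, `b`, and `wholesale_cost`, and the function names are all hypothetical choices for this example, not the model from the proposal.

```python
# Minimal Stackelberg (leader-follower) pricing sketch.
# Leader: an aggregator that buys at a wholesale cost and sets a retail price.
# Follower: a consumer with hypothetical quadratic utility a*d - 0.5*b*d^2 - p*d.

def follower_best_response(price, a=10.0, b=2.0):
    """Consumer demand maximizing a*d - 0.5*b*d**2 - price*d.
    First-order condition gives d*(p) = (a - p)/b, clipped at zero."""
    return max((a - price) / b, 0.0)

def leader_optimal_price(cost, a=10.0, b=2.0):
    """Aggregator maximizes profit (p - cost) * d*(p).
    Substituting d*(p) and solving gives the closed form p* = (a + cost)/2."""
    return (a + cost) / 2.0

wholesale_cost = 4.0
p_star = leader_optimal_price(wholesale_cost)   # (10 + 4)/2 = 7.0
d_star = follower_best_response(p_star)         # (10 - 7)/2 = 1.5
profit = (p_star - wholesale_cost) * d_star     # 3 * 1.5 = 4.5
print(p_star, d_star, profit)
```

Because the leader anticipates the follower's best response before committing to a price, the pair (p*, d*(p*)) is a Stackelberg equilibrium of this toy game; the closed-form strategies mirror, in miniature, the kind of closed-form extraction the abstract mentions.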

Last revised September 26, 2019