18-21 August 2020
Virtual event

Tutorial sessions

Two tutorial sessions will be held on 18 August, the day before the opening of the conference.

  • 1. Stochastic optimization for power system problems, led by Dr. Maria Vrakopoulou, The University of Melbourne (Australia)

Bio:

Dr. Maria Vrakopoulou joined the University of Melbourne as a lecturer in power systems in 2018. She received her degree in Electrical and Computer Engineering from the University of Patras, Greece in 2008, and her Ph.D. degree from ETH Zurich, Switzerland in 2014. She pursued her postdoctoral research at the University of Michigan and at the University of California, Berkeley in the USA, and ETH Zurich. She was awarded a highly prestigious 3-year Marie Curie post-doctoral fellowship. Her research is driven by the need to address climate change through reliable and equitable energy distribution using sustainable resources.

Summary:

The increasing penetration of unpredictable Renewable Energy Sources (RES) is degrading both the reliability and the economic performance of modern power systems.
However, the growing fleet of controllable distributed energy resources, including batteries and thermostatically controlled loads, offers an opportunity to mitigate the adverse impact of uncertainty on the system via appropriately designed control actions. Using day-ahead system operation as a running example, this tutorial will focus on decision-making algorithms that facilitate the integration of RES by designing control policies that regulate the system components in real time, adapting to the fluctuations caused by the stochasticity of forecast errors.

In general, uncertainty may be managed in a robust way, but this often results in very expensive system operation. On the other hand, solutions based on heuristically chosen uncertainty scenarios may not achieve the desired level of reliability. The tutorial considers decision-making algorithms based on a chance-constrained optimization framework. To solve the chance-constrained optimization problems, we will make use of data-based optimization techniques that do not rely on the underlying distribution of the uncertainty and that provide solutions accompanied by probabilistic guarantees of constraint satisfaction. Different power system operational problems will be reformulated to fit this framework, with the goal of achieving tractable and near-optimal solutions. Specifically, we will consider Stochastic Security Constrained OPF, Stochastic AC OPF, and Stochastic co-optimization of energy and reserves, including generators and battery-type load aggregations.
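As a minimal sketch of the data-based (scenario) approach described above, consider a toy dispatch problem in which sampled demand scenarios are enforced as hard constraints; the generator costs, capacity limits, and demand model below are invented for illustration only, not taken from the tutorial material:

```python
# Scenario-approach sketch for a toy chance-constrained dispatch problem.
# All numbers (costs, limits, demand distribution) are hypothetical.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Two generators with hypothetical marginal costs and capacity limits.
cost = np.array([10.0, 30.0])          # $/MWh
bounds = [(0.0, 80.0), (0.0, 100.0)]   # MW capacity per generator

# Sample N demand scenarios; in the scenario approach, the sample size N
# controls the probabilistic guarantee on constraint satisfaction.
N = 200
demand = rng.normal(100.0, 10.0, size=N)   # MW, uncertain net demand

# Enforce supply >= demand in every sampled scenario:
# -(x1 + x2) <= -d_i for each scenario i.
A_ub = -np.ones((N, 2))
b_ub = -demand

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)   # cheapest dispatch feasible for all sampled scenarios
```

Because the constraints are only enforced on sampled scenarios, the solution is cheaper than a worst-case robust dispatch while still carrying a quantifiable probabilistic guarantee, which is the trade-off the tutorial explores.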


  • 2. Reinforcement learning applications for active network management for electrical distribution networks and microgrid management, led by Prof. Anders Jonsson, Universitat Pompeu Fabra (Spain)


Bio:

Anders Jonsson is an interim tenured professor working in artificial intelligence planning and machine learning. He received his Ph.D. in computer science in 2005 from the University of Massachusetts Amherst, working in reinforcement learning under the supervision of professor Andrew Barto. His research interests centre on sequential decision problems, in which one or several agents have to make repeated decisions about what to do, as well as applying machine learning to realistic problems more broadly. Specifically, he is currently working on sequential decision problems involving multiple agents, hierarchical representations of problems, combining the strengths of reinforcement learning and planning, learning decision trees for use in biomedical research, and analyzing the computational complexity of different classes of problems.

Summary:

In reinforcement learning, an agent repeatedly interacts with a stochastic environment while attempting to maximize reward. In each decision epoch, the agent observes a state, takes an action and receives a reward before probabilistically transitioning to the next state. The agent has to accumulate experience from acting in the environment to learn a policy, i.e., a mapping from states to actions, that maximizes the expected future sum of rewards. Reinforcement learning algorithms have produced impressive results recently, e.g., achieving superhuman play in complex board games such as chess and Go. A key property of reinforcement learning is its ability to handle uncertainty and adapt to unknown environments, especially in complex decision problems for which it is difficult to program a successful strategy by hand.

This tutorial will first describe the basics of reinforcement learning, starting from the mathematical formulation of the problem and continuing with a brief description of state-of-the-art reinforcement learning algorithms. It will then continue to show how the problem of controlling a microgrid can be formulated as a reinforcement learning problem. Concretely, the best strategy for controlling a microgrid evolves over time as generation and demand fluctuate. In such cases, reinforcement learning can help adapt to new and unforeseen situations in order to maintain a high level of performance over long time periods.
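As a rough illustration of the formulation above (states, actions, rewards, and a learned policy), the sketch below runs tabular Q-learning on a toy battery-scheduling problem; the battery model, prices, and all parameters are invented for this example and are far simpler than any real microgrid controller:

```python
# Toy tabular Q-learning sketch for a simplified battery-scheduling problem.
# Dynamics, prices, and parameters are all hypothetical.
import numpy as np

rng = np.random.default_rng(1)

LEVELS = 5            # discretized battery state of charge: 0..4
ACTIONS = [-1, 0, 1]  # discharge, idle, charge
PRICE = 2.0           # revenue per unit of energy discharged
COST = 1.0            # cost per unit of energy used to charge

def step(state, action):
    """Apply an action, clip to the battery limits, and return the reward."""
    new_state = int(np.clip(state + action, 0, LEVELS - 1))
    moved = new_state - state            # energy actually exchanged
    reward = -COST * max(moved, 0) + PRICE * max(-moved, 0)
    return new_state, reward

Q = np.zeros((LEVELS, len(ACTIONS)))     # action-value table
alpha, gamma, eps = 0.1, 0.95, 0.1

state = 0
for t in range(20000):
    # epsilon-greedy exploration over the action indices
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    new_state, reward = step(state, ACTIONS[a])
    # standard Q-learning update toward reward + discounted best next value
    Q[state, a] += alpha * (reward + gamma * Q[new_state].max() - Q[state, a])
    state = new_state

policy = Q.argmax(axis=1)   # greedy action index per battery level
```

With discharging valued above charging here, the learned greedy policy charges an empty battery and discharges otherwise; the point is only to show the state/action/reward loop and the learned state-to-action mapping, which the tutorial scales up to realistic microgrid control.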


Tutorial 2 slides:

- Part 1: Reinforcement Learning

- Part 2: Microgrids



Registration for these tutorials is included in the registration fee.