2013 Poster Sessions: Stochastic Optimal Control With Dynamic, Time-Consistent Risk Constraints

Student Name: Yin-Lam Chow
Advisor: Marco Pavone
Research Areas: Information Systems
We present a dynamic programming approach to stochastic optimal control problems with dynamic, time-consistent risk constraints. Constrained stochastic optimal control problems, which naturally arise when one has to consider multiple objectives, have been extensively investigated over the past 20 years; in most formulations, however, the constraints are either risk-neutral (i.e., expressed as an expected cost) or based on static, single-period risk metrics, with limited attention to “time consistency” (i.e., to whether such metrics ensure rational consistency of risk preferences across multiple periods). Recently, significant strides have been made in developing a rigorous theory of dynamic, time-consistent risk metrics for multi-period (risk-sensitive) decision processes; however, their integration within constrained stochastic optimal control problems has received little attention. The goal of our research is to bridge this gap. First, we formulate the stochastic optimal control problem with dynamic, time-consistent risk constraints and characterize the tail subproblems (which requires adding a Markovian structure to the risk metrics). Second, we develop a dynamic programming approach for its solution, which computes the optimal costs by value iteration. Third, we discuss theoretical and practical features of our approach, as well as potential applications in transportation networks and energy systems.
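For intuition only, the sketch below illustrates the general idea of risk-constrained backward value iteration on a toy finite-horizon MDP: at each stage, an action is admissible only if a one-step coherent risk measure (CVaR, in this sketch) of a separate constraint cost stays below a fixed threshold. The CVaR metric, the threshold, and the data structures (P, c, d) are illustrative assumptions and greatly simplify the dynamic, time-consistent risk constraints and tail subproblems studied in the actual work.

```python
import numpy as np

def cvar(costs, probs, alpha=0.9):
    """Conditional value-at-risk of a discrete cost distribution at level alpha."""
    order = np.argsort(costs)[::-1]           # consider costs from worst to best
    tail_mass, value = 0.0, 0.0
    for i in order:
        p = min(probs[i], (1.0 - alpha) - tail_mass)
        if p <= 0.0:
            break
        value += p * costs[i]                  # accumulate the worst (1 - alpha) tail
        tail_mass += p
    return value / (1.0 - alpha)

def risk_constrained_value_iteration(P, c, d, threshold, horizon, alpha=0.9):
    """Backward value iteration with a per-stage risk constraint (illustrative only).

    P[a][s, s'] : transition probabilities under action a
    c[a][s]     : stage cost of action a in state s
    d[a][s, s'] : constraint cost incurred on the transition s -> s'
    An action is admissible in state s only if CVaR_alpha of its one-step
    constraint cost is at most `threshold`.
    """
    n_states = P[0].shape[0]
    V = np.zeros(n_states)                     # terminal cost-to-go
    policy = np.full((horizon, n_states), -1, dtype=int)
    for t in reversed(range(horizon)):
        V_new = np.full(n_states, np.inf)
        for s in range(n_states):
            for a in range(len(P)):
                probs = P[a][s]
                if cvar(d[a][s], probs, alpha) > threshold:
                    continue                   # action violates the risk constraint
                q = c[a][s] + probs @ V        # Bellman backup over admissible actions
                if q < V_new[s]:
                    V_new[s], policy[t, s] = q, a
        V = V_new
    return V, policy
```

States where no action passes the risk check keep an infinite cost-to-go, which is one simple way to propagate infeasibility backward through the recursion.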

Yin-Lam Chow is currently a Ph.D. student in the Institute for Computational and Mathematical Engineering (ICME) at Stanford University, specializing in Operations Research. He earned a Master of Science in Aeronautics and Astronautics from Purdue University in 2011 and a Bachelor of Engineering (First Class Honours) in Mechanical Engineering from the University of Hong Kong.
His research interests include:
1) Control theory:
Linear/nonlinear robust control, Lyapunov stability analysis, adaptive control, stochastic control, optimal control, model reduction, and networked systems.

2) Optimization methods:
Linear/nonlinear programming, convex optimization, semidefinite programming, numerical optimization, robust optimization, stochastic programming, risk-sensitive dynamic programming, and reinforcement learning.