Research

  • Deep reinforcement learning and compositional learning

I am interested in hierarchical learning methods that break goals down into solvable sub-goals for planning in deep reinforcement learning. To reduce the complexity of state-space search (e.g., in tree-search algorithms), I am currently looking into augmenting reinforcement learning techniques with external, differentiable memory systems, aiming to improve sample efficiency via online learning.
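As a toy illustration of the external-memory idea, the sketch below (a minimal example of my own, not taken from any particular published architecture) implements a differentiable soft read over a memory matrix using cosine-similarity addressing, in the spirit of Neural Turing Machine-style memories; all names and dimensions are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(memory, key):
    # Cosine-similarity addressing followed by a softmax gives
    # differentiable weights over the memory slots.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    scores = memory @ key / norms
    weights = softmax(scores)
    # The read vector is a convex combination of memory rows, so
    # gradients can flow through both the weights and the contents.
    return weights @ memory, weights

rng = np.random.default_rng(0)
M = rng.standard_normal((8, 4))  # 8 slots, 4-dimensional contents
read, w = memory_read(M, key=M[3])  # query with the contents of slot 3
```

Because every operation is smooth, the same read can sit inside a larger network and be trained end to end.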


  • Stochastic analysis, rough paths theory, Gaussian processes and optimal control

Currently, my focus is on applying rough paths theory to control problems for dynamical systems driven by Gaussian noise, such as fractional Brownian motion. By adopting a pathwise approach and comparing with the white-noise dynamics, I aim to study how the optimal control changes with increasing (or decreasing) correlation in the driving signal and, more generally, how correlated noise affects the dynamics of the controlled system. I am also exploring the use of deep neural networks to approximate the value and policy functions, which will hopefully yield more robust Monte Carlo methods for computing optimal trajectories in high-dimensional settings.
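As a concrete example of the kind of correlated driving signal involved, the snippet below simulates fractional Brownian motion exactly on a uniform grid via a Cholesky factorisation of its covariance function; the grid size and Hurst parameter are illustrative choices (H < 1/2 gives negatively correlated increments, H > 1/2 positively correlated ones, and H = 1/2 recovers standard Brownian motion):

```python
import numpy as np

def fbm_paths(n_steps, hurst, n_paths, T=1.0, seed=0):
    # Exact simulation on a uniform grid via Cholesky factorisation of
    # the fBm covariance R(s, t) = (s^{2H} + t^{2H} - |s - t|^{2H}) / 2.
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    L = np.linalg.cholesky(cov)
    z = np.random.default_rng(seed).standard_normal((n_steps, n_paths))
    return L @ z  # each column is one path; B_H(0) = 0 is omitted

paths = fbm_paths(128, hurst=0.3, n_paths=4)
```

The Cholesky method is O(n^3) in the number of grid points, so for long paths one would switch to circulant-embedding (FFT-based) schemes, but it is the simplest exact reference implementation.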

For applications, I am targeting problems in finance, e.g., portfolio optimization and optimal stopping in options trading, where there has been a resurgence in the use of fractional Brownian motion for volatility models.
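A classical Monte Carlo baseline for the optimal stopping problems mentioned above is the least-squares Monte Carlo (Longstaff-Schwartz) algorithm. The sketch below prices a Bermudan put under standard (H = 1/2) geometric Brownian motion, with a cubic polynomial basis standing in for the neural-network approximator of the continuation value; all parameters are illustrative:

```python
import numpy as np

def longstaff_schwartz_put(S0=1.0, K=1.0, r=0.05, sigma=0.2,
                           T=1.0, n_steps=50, n_paths=20000, seed=0):
    # Least-squares Monte Carlo price of a Bermudan put under GBM.
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Simulated price paths; column j is time (j + 1) * dt.
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)  # exercise value at maturity
    for i in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)           # discount one step back
        itm = K - S[:, i] > 0             # regress only in-the-money paths
        if itm.sum() > 10:
            x = S[itm, i]
            A = np.vander(x, 4)           # cubic polynomial basis
            coef = np.linalg.lstsq(A, cash[itm], rcond=None)[0]
            cont = A @ coef               # estimated continuation value
            exercise = K - x
            ex_now = exercise > cont
            cash[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return np.exp(-r * dt) * cash.mean()  # discount from first step to time 0

price = longstaff_schwartz_put()
```

The regression step is exactly where a function approximator enters: replacing the polynomial fit with a neural network is the natural extension in high dimensions.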