Bio

I am currently working at the Shanghai AI Laboratory. I obtained my PhD degree from the Machine Learning Group at the University of Cambridge, where I was co-supervised by Prof. Zoubin Ghahramani and Prof. José Miguel Hernández-Lobato, advised by Prof. Carl Edward Rasmussen, and jointly supervised by Prof. Bernhard Schölkopf and Dr. Michael Hirsch at the Max Planck Institute for Intelligent Systems under the Cambridge-Tübingen PhD Programme.

Previously, I obtained my M.Phil. degree from the Multimedia Laboratory at the Chinese University of Hong Kong under the supervision of Prof. Xiaoou Tang, the founder of SenseTime. I also worked closely with Prof. Chen Change Loy and Prof. Xiaogang Wang. Before that, I received my B.Eng. degree in Software Engineering from Nanjing University.

I am interested in a broad range of topics in causal machine learning, both in theory and in practice. Among them, causal representation learning and causal reinforcement learning strike me as increasingly promising.

Causal Representation Learning

One promising nascent field I am studying is Causal Representation Learning (CaRL): discovering high-level causal variables from low-level observations. My motivation is twofold. On the one hand, CaRL is a fundamental problem for machine learning (ML) and artificial intelligence (AI), because it plays a pivotal role in addressing two long-standing, unsolved issues in ML/AI: explainability and generalisability. On the other hand, it is also one of the biggest challenges in applying causality to ML/AI, because most work in causality starts from the premise that the causal variables are given, which is rarely the case in ML/AI.
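
To make the problem statement concrete, here is a minimal sketch of the CaRL setup in Python. The two-variable linear SCM, the tanh mixing function, and all dimensions are illustrative assumptions of mine, not part of iCaRL or any particular method.

    # Illustrative CaRL setup: high-level causal variables Z generated by a
    # simple SCM, observed only through an unknown nonlinear mixing g.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # High-level causal variables with the graph z1 -> z2 (assumed SCM).
    z1 = rng.normal(size=n)
    z2 = 0.8 * z1 + 0.5 * rng.normal(size=n)  # z2 := 0.8 * z1 + noise
    Z = np.column_stack([z1, z2])

    # Low-level observations X = g(Z), standing in for, e.g., pixels
    # rendered from the causal variables; here g is a random linear map
    # followed by tanh.
    A = rng.normal(size=(2, 5))
    X = np.tanh(Z @ A)

    # CaRL is the inverse problem: given only X, recover Z (up to
    # acceptable ambiguities) together with the causal graph z1 -> z2.
    print(X.shape)  # (1000, 5): only X is observed; Z and the graph are latent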

Logic void of representation is metaphysics.

Judea Pearl

Please refer to the Agnostic Hypothesis and iCaRL for more details, and to Talks for CaRL.

Causal Reinforcement Learning

Another promising nascent field I am currently exploring is Causal Reinforcement Learning (Causal RL). What inspires me is the philosophy behind integrating causal inference and reinforcement learning: looking back at the history of science, human beings have always progressed in a manner similar to that of Causal RL:

Humans summarize rules or experience from their interactions with nature and then exploit them to adapt better in the next exploration. Causal RL mimics this behavior exactly: an agent learns causal relations from its interactions with the environment and then optimizes its policy based on the learned causal structure.
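
As a sketch of this explore/summarize/exploit cycle, the Python snippet below mirrors the loop just described. The environment interface and the helpers discover_causal_structure and plan_with_graph are hypothetical placeholders for whatever structure-learning and planning methods one plugs in; no specific Causal RL algorithm is implied.

    def causal_rl_loop(env, discover_causal_structure, plan_with_graph,
                       n_rounds=10, n_steps=100):
        """Explore, summarize causal relations, then exploit them.

        `env` is assumed to expose reset()/step(action)/sample_action();
        the two helper functions are hypothetical plug-ins.
        """
        transitions = []
        policy = lambda obs: env.sample_action()  # begin with random exploration
        graph = None
        for _ in range(n_rounds):
            # 1. Explore: interact with the environment ("nature").
            obs = env.reset()
            for _ in range(n_steps):
                action = policy(obs)
                next_obs, reward, done = env.step(action)
                transitions.append((obs, action, next_obs, reward))
                obs = env.reset() if done else next_obs
            # 2. Summarize: learn causal relations among states, actions, rewards.
            graph = discover_causal_structure(transitions)
            # 3. Exploit: improve the policy using the learned causal structure.
            policy = plan_with_graph(graph, transitions)
        return policy, graph

Each pass through the loop refines the agent's causal model before the next round of exploration, which is where the data-efficiency argument below comes from.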

I highlight this analogy to emphasize the importance of Causal RL. In my view, Causal RL will become an indispensable part of AGI, with great potential not only in healthcare and medicine but also in any other RL scenario. Compared to standard RL, Causal RL inherits one clear advantage from causal inference: data efficiency.

Please refer to Causal RL for more details, and to Talks for Causal RL towards AGI.
