The central task of machine learning research is to predict future observations by using data to automatically discover dependencies in the world. Most machine learning methods build on statistical associations alone, which limits their applicability. One can go further and study the causal structures underlying those statistical dependences. Because causal models are more robust to changes in real-world data, we argue that causality can play a pivotal role in addressing some of the hard open problems of machine learning, such as explainability and generalisability. Machine learning algorithms equipped with the ability to reason about and learn causal relations can therefore make better predictions, and may even form an indispensable part of Artificial General Intelligence (AGI).
At present and looking forward, I am most interested in the two fields below.
Causal Representation Learning
One promising nascent field I am studying is Causal Representation Learning (CaRL), that is, discovering high-level causal variables from low-level observations. My motivation is twofold. On the one hand, CaRL is a fundamental problem for machine learning (ML) and artificial intelligence (AI), because it plays a pivotal role in addressing two long-standing open issues in ML/AI: explainability and generalisability. On the other hand, it is also one of the biggest challenges in applying causality to ML/AI, because most work in causality starts from the premise that the causal variables are given, which is rarely the case in ML/AI.
"Logic void of representation is metaphysics." (Judea Pearl)
Causal Reinforcement Learning
Another promising nascent field I am currently exploring is Causal Reinforcement Learning (Causal RL). What inspires me is the philosophy behind integrating causal inference and reinforcement learning: looking back at the history of science, human beings have always progressed in a manner similar to that of Causal RL:
Humans summarize rules and experience from their interaction with nature and then exploit them to adapt better in the next exploration. What Causal RL does is precisely to mimic this behavior: it learns causal relations from an agent's interaction with the environment, and then optimizes the agent's policy based on the learned causal structure.
The reason I highlight this analogy is to emphasize the importance of Causal RL. In my view, Causal RL will become an indispensable part of AGI, with great potential applications not only in healthcare and medicine but also in all other RL scenarios. Compared to standard RL, Causal RL has one clear advantage inherited from causal inference: data efficiency.
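The loop described above can be made concrete with a minimal toy sketch: an agent intervenes on its actions to collect data, estimates the causal effect of each action on the reward, and then exploits the learned effects as its policy. All names, the environment, and the effect values here are illustrative assumptions, not taken from any specific Causal RL method.

```python
import random

random.seed(0)

def environment(action):
    """Toy environment: reward depends causally on the action, plus noise."""
    true_effects = {0: 0.2, 1: 0.8, 2: 0.5}  # hidden causal effects (illustrative)
    return true_effects[action] + random.gauss(0, 0.1)

# Step 1: explore -- intervene on each action uniformly to gather
# interventional (rather than purely observational) data.
data = {a: [] for a in range(3)}
for _ in range(300):
    a = random.randrange(3)
    data[a].append(environment(a))

# Step 2: learn -- estimate E[reward | do(action = a)] from the
# interventional samples; here a simple sample mean suffices.
effects = {a: sum(r) / len(r) for a, r in data.items()}

# Step 3: exploit -- the policy picks the action with the largest
# estimated causal effect on the reward.
best_action = max(effects, key=effects.get)
print(best_action)
```

In a realistic setting, step 2 would involve discovering the causal structure among many state variables rather than estimating three scalar effects, but the explore, learn, exploit cycle is the same.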
- Our recent work: AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning
- Our recent work: Nonlinear Invariant Risk Minimization: A Causal Approach