Currently I am exploring a promising emerging field: Causal Reinforcement Learning (Causal RL). What inspires me is the philosophy behind integrating causal inference and reinforcement learning: looking back at the history of science, human beings have always progressed in a manner similar to that of Causal RL:
Humans summarize rules and experience from their interaction with nature and then exploit them to adapt better in the next round of exploration. Causal RL mimics exactly this behavior: an agent learns causal relations from its interactions with the environment and then optimizes its policy based on the learned causal structure.
I highlight this analogy to emphasize the importance of Causal RL. Personally, I believe Causal RL will become an indispensable part of Artificial General Intelligence (AGI), with great potential not only in healthcare and medicine but also in all other RL scenarios. Compared to standard RL, Causal RL has one obvious advantage inherited from causal inference: data efficiency.
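To make the learn-then-exploit loop concrete, here is a minimal toy sketch (my own illustration, not from any of the works below): during a randomized exploration phase, actions act as interventions, so the conditional mean reward identifies the interventional effect E[R | do(A=a)]; the agent then exploits the estimated effects greedily. The environment and all names are hypothetical.

```python
import random

def environment(action):
    """Hypothetical toy environment: action 1 yields higher expected reward."""
    noise = random.gauss(0, 0.1)
    return (1.0 if action == 1 else 0.3) + noise

random.seed(0)

# Phase 1: randomized exploration. Because actions are chosen at random,
# conditioning on A=a coincides with intervening do(A=a), so the sample
# means below estimate the interventional effects E[R | do(A=a)].
data = {0: [], 1: []}
for _ in range(200):
    a = random.choice([0, 1])
    data[a].append(environment(a))

effects = {a: sum(rewards) / len(rewards) for a, rewards in data.items()}

# Phase 2: exploit the learned effects with a greedy policy.
best_action = max(effects, key=effects.get)
print(best_action)
```

Real Causal RL methods learn far richer structure (full causal graphs, latent confounders, transferable mechanisms), but the two-phase pattern of estimating interventional quantities and then optimizing against them is the same in spirit.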
- Our recent work: AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning
- Our recent work: Nonlinear Invariant Risk Minimization: A Causal Approach