Large models face significant challenges, including limited interpretability, poorly understood mechanisms behind emergent phenomena, hallucination, safety and trustworthiness issues, and limited self-awareness. We believe these challenges fundamentally arise from a lack of causal reasoning. Our goal is to equip large models with causal reasoning capabilities so as to construct causal world models, enable automated AI scientists, and develop self-aware agents, ultimately paving a novel path toward safe and trustworthy artificial general intelligence.
We have open positions for PhD students, postdocs, interns, and full-time researchers. Feel free to contact me if you are interested in exploring Causal AI and Safe AI in the era of large models.