I am currently a fourth-year Ph.D. student at the University of Southern California, advised by Prof. Xiang Ren.

Broadly, my research interests are Natural Language Processing and Machine Learning.

  • I try to build NLP systems that can easily evolve, or continually learn, over time.
  • I have also been working on robustness and fairness, and on techniques for interpreting neural network predictions.


  • Aug. 2022: Finished an exciting summer internship on the NLP Platform team at Bloomberg!

  • Jul. 2022: Attended NAACL in person in Seattle and gave an oral presentation on our lifelong pretraining work.

  • Aug. 2021: Finished a 3-month internship at the Amazon AWS AI Lab!

  • Aug. 2021: Excited to receive the Bloomberg Data Science Ph.D. Fellowship!

  • Apr. 2021: Our paper On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning was accepted at NAACL 2021. It was also my internship project at Snap Inc.!

  • Sep. 2020: Our paper Visually Grounded Continual Learning of Compositional Phrases was accepted at EMNLP 2020.

  • Aug. 2020: Finished my 3-month internship at Snapchat!

  • Jul. 2020: Two papers accepted at the Lifelong Learning workshop and the Continual Learning workshop at ICML 2020. We studied a task-free continual learning algorithm and proposed a task setup for visually grounded continual compositional phrase learning.

  • Apr. 2020: Our paper about reducing unintended bias in hate speech classifiers by regularizing post-hoc explanations was accepted at ACL 2020.

  • Dec. 2019: Our paper on explanation algorithms for compositional semantics captured in neural sequence models was accepted as a spotlight at ICLR 2020.


  1. Dataless Knowledge Fusion by Merging Weights of Language Models. Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, Pengxiang Cheng. ICLR 2023 (To Appear).

  2. Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora. Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, Xiang Ren. NAACL 2022.

  3. Gradient Based Memory Editing for Task-Free Continual Learning. Xisen Jin, Arka Sadhu, Junyi Du, Xiang Ren. NeurIPS 2021. [code]

  4. Refining Language Models with Compositional Explanations. Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren. NeurIPS 2021. [code]

  5. Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning. Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren. Findings of EMNLP 2021. [code&data]

  6. On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning. Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren. NAACL 2021. [code&data]

  7. Visually Grounded Continual Learning of Compositional Phrases. Xisen Jin, Junyi Du, Arka Sadhu, Ram Nevatia and Xiang Ren. EMNLP 2020. [code&data] [project page]

  8. Contextualizing Hate Speech Classifiers with Post-hoc Explanation. Brendan Kennedy*, Xisen Jin*, Aida Mostafazadeh Davani, Morteza Dehghani and Xiang Ren. ACL 2020 short paper. [project page] [code]

  9. Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models. Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue and Xiang Ren. ICLR 2020 spotlight. [project page] [code]

  10. Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation. Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Eric Zhao and Dawei Yin. CIKM 2018 full paper. [code] [slides (pdf)] [slides (pptx)]

  11. Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He and Dawei Yin. ACL 2018 long paper. [code]


  • University of Southern California, 2019.8 - present
  • Fudan University, 2015.9 - 2019.7
  • National University of Singapore (exchange), 2017.8 - 2017.12

Research Internships

  • NLP Platform team, Bloomberg L.P., Summer 2022
  • AWS AI Lab, Summer 2021
  • Snap Inc., Summer 2020
  • Microsoft Research Asia, Summer 2018
  • Data Science Lab, JD.com, Winter 2018