About

I am currently a fifth-year Ph.D. student at the University of Southern California, advised by Prof. Xiang Ren.

My research interests lie broadly in Natural Language Processing and Machine Learning.

  • I try to build NLP systems that can easily evolve, or continually learn, over time.
  • I have also worked on robustness and fairness, and on techniques for interpreting neural network predictions.

News

  • Mar. 2023: I am excited to be selected for the Apple Scholars in AI/ML PhD fellowship!

  • Aug. 2022: Finished an exciting summer internship with the NLP Platform team at Bloomberg!

  • Jul. 2022: Attended NAACL in person in Seattle and gave an oral presentation on our lifelong pretraining work.

  • Aug. 2021: Finished a 3-month internship at the Amazon AWS AI Lab!

  • Aug. 2021: Excited to receive the Bloomberg Data Science Ph.D. Fellowship!

  • Apr. 2021: Our paper On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning got accepted at NAACL 2021. It was also my internship project at Snap Inc.!

  • Sep. 2020: Our paper Visually Grounded Continual Learning of Compositional Phrases got accepted at EMNLP 2020.

  • Aug. 2020: Finished my 3-month internship at Snapchat!

  • Jul. 2020: Two papers accepted at the Lifelong Learning workshop @ ICML 2020 and the Continual Learning workshop @ ICML 2020. We studied a task-free continual learning algorithm and proposed a task setup for visually grounded continual compositional phrase learning.

  • Apr. 2020: Our paper on reducing unintended bias in hate speech classifiers by regularizing post-hoc explanations was accepted at ACL 2020.

  • Dec. 2019: Our paper on explanation algorithms for compositional semantics captured in neural sequence models was accepted as a spotlight at ICLR 2020.

Preprints

  1. Demystifying Forgetting in Language Model Fine-Tuning with Statistical Analysis of Example Associations. Xisen Jin, Xiang Ren. arXiv preprint (2024). [project page]

Publications

  1. What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement. Xisen Jin, Xiang Ren. ICML 2024 Spotlight. [project page]

  2. Dataless Knowledge Fusion by Merging Weights of Language Models. Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, Pengxiang Cheng. ICLR 2023. [code&notebooks]

  3. Overcoming Catastrophic Forgetting in Massively Multilingual Continual Learning. Genta Indra Winata, Lingjue Xie, Karthik Radhakrishnan, Shijie Wu, Xisen Jin, Pengxiang Cheng, Mayank Kulkarni, Daniel Preotiuc-Pietro. Findings of ACL 2023.

  4. Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora. Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, Xiang Ren. NAACL 2022.

  5. Gradient Based Memory Editing for Task-Free Continual Learning. Xisen Jin, Arka Sadhu, Junyi Du, Xiang Ren. NeurIPS 2021. [code]

  6. Refining Language Models with Compositional Explanations. Huihan Yao, Ying Chen, Qinyuan Ye, Xisen Jin, Xiang Ren. NeurIPS 2021. [code]

  7. Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning. Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren. Findings of EMNLP 2021. [code&data]

  8. On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning. Xisen Jin, Francesco Barbieri, Brendan Kennedy, Aida Mostafazadeh Davani, Leonardo Neves, Xiang Ren. NAACL 2021. [code&data]

  9. Visually Grounded Continual Learning of Compositional Phrases. Xisen Jin, Junyi Du, Arka Sadhu, Ram Nevatia and Xiang Ren. EMNLP 2020. [code&data] [project page]

  10. Contextualizing Hate Speech Classifiers with Post-hoc Explanation. Brendan Kennedy*, Xisen Jin*, Aida Mostafazadeh Davani, Morteza Dehghani and Xiang Ren. ACL 2020 short paper. [project page] [code]

  11. Towards Hierarchical Importance Attribution: Explaining Compositional Semantics for Neural Sequence Models. Xisen Jin, Zhongyu Wei, Junyi Du, Xiangyang Xue and Xiang Ren. ICLR 2020 Spotlight. [project page] [code]

  12. Explicit State Tracking with Semi-Supervision for Neural Dialogue Generation. Xisen Jin, Wenqiang Lei, Zhaochun Ren, Hongshen Chen, Shangsong Liang, Yihong Eric Zhao and Dawei Yin. CIKM 2018 full paper. [code] [slides (pdf)] [slides (pptx)]

  13. Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. Wenqiang Lei, Xisen Jin, Min-Yen Kan, Zhaochun Ren, Xiangnan He and Dawei Yin. ACL 2018 long paper. [code]

Education

  • University of Southern California, 2019.8 - present
  • Fudan University, 2015.9 - 2019.7
  • National University of Singapore (exchange), 2017.8 - 2017.12

Research Interns

  • NLP Platform team, Bloomberg L.P., Summer 2022
  • AWS AI Lab, Summer 2021
  • Snap Inc., Summer 2020
  • Microsoft Research Asia, Summer 2018
  • Data Science Lab, JD.com, Winter 2018