Biography
I am currently a research scientist and the lead of the NLP research group at JD Explore Academy. I am also a member of the Doctoral Management Trainee (DMT) program, a top-tier talent program at JD.com, Inc.
I received my Ph.D. from The University of Sydney, supervised by Prof. Dacheng Tao (IEEE/ACM Fellow).
I was a research intern at Tencent AI Lab, advised by Dr. Zhaopeng Tu and Dr. Longyue Wang. I also worked at Cheetah Mobile as the main developer of the "real-time voice translator" [Demo].
I have published over 40 papers at NLP/AI venues, including ACL, EMNLP, COLING, NAACL, ICLR, ICML, AAAI, IJCAI, SIGIR, CVPR, IEEE TPAMI, TKDE, TASLP, TNNLS, and TMM; some of them have been applied in industrial products.
I have served as an Area Chair/Session Chair for ACL, AAAI, and SDM.
I have won many AI challenges, including the SuperGLUE and GLUE benchmarks, WMT 2022, IWSLT 2021, and WMT 2019.
My research mainly focuses on deep learning for NLP, including large language model pretraining, language understanding, generation, and translation.
More recently, my group and I have been focusing on foundation models for general NLP. Starting from data, models, objectives, optimization, and downstream adaptation, we investigate how to efficiently, sufficiently, and trustworthily transfer knowledge from large-scale data into the parameters of pretrained models.
Our models have reached GPT-3 scale, i.e., 175B parameters.
I am always open to collaborations!
📣 I have several internship positions available. Self-motivated students with experience in NLP and pretrained language models (PLMs) are welcome.
News
- Jul. 2023: Three papers about {efficient pipeline parallelism, knowledge alignment, and federated optimizers} for model training are accepted by IEEE Network, TASLP, and TPAMI, respectively.
- May 2023: 🎉 Nine papers about {training, evaluation, robustness, and downstream adaptation} of large models are accepted by ACL 2023, congrats to my interns and coauthors.
- May 2023: Two papers about GNN sparse training and a healthcare dataset are accepted by IEEE Transactions on Neural Networks and Learning Systems and Information Fusion, respectively.
- Apr. 2023: One paper about flatness-aware optimization for federated learning is accepted by ICML 2023 (oral).
- Mar. 2023: We release reports to better understand and harness the power of ChatGPT for language understanding (NLU), machine translation (MT), and MT evaluation. Enjoy!
- Mar. 2023: 🥂 I lead the R&D of the Vega series of Large Language Models (织女系列自然语言大模型), which won the 2022 Technology Golden Award ("京东集团技术金项奖", the highest tech award at JD.com, Inc.), see internal media coverage.
- Feb. 2023: One paper about knowledge-grounded multi-view learning is accepted by IEEE Transactions on Knowledge and Data Engineering, congrats to my intern Qihuang.
- Jan. 2023: Invited to serve as a Session Chair for AAAI 2023.
- Jan. 2023: One paper about federated learning is accepted by ICLR 2023.
- Jan. 2023: One paper about dynamic contrastive distillation is accepted by IEEE Transactions on Multimedia, congrats to my intern Jun.
- Nov. 2022: One paper about memory-efficient pipeline parallelism for mixture-of-experts models is accepted by IPDPS 2023, congrats to my intern Zheng.
- Nov. 2022: Invited talk at the China National Computer Congress 2022 (CNCC'22), check out the schedule.
- Nov. 2022: One paper about simultaneous translation is accepted by AAAI 2023, congrats to my intern Hexuan.
- Oct. 2022: 🏆 Our Vega v2 got 1st place on SuperGLUE, one of the most difficult general language understanding leaderboards! Check out the tech report.
- Oct. 2022: Invited talk on "Towards Efficient NLP Foundation Models -- Pretrain, Downstream Adaptation, and Beyond" at Nankai Univ. and the Univ. of Chinese Academy of Sciences.
- Oct. 2022: Two papers are accepted by EMNLP 2022, congrats to my interns Qihuang and Shwai.
- Sep. 2022: 📖 The co-authored "White Paper on Artificial Intelligence Generated Content" is published, check out the [Chinese version] & [media coverage].
- Aug. 2022: Two papers are accepted by COLING 2022, congrats to my interns Changtong and Bing.
- Aug. 2022: 🥂 Our project "Super Deep Learning of JD Explore Academy" won a Top-30 2022 SAIL (Superior AI Leader/ 卓越人工智能引领者) Award at the World Artificial Intelligence Conference, see media coverage.
- Jul. 2022: 🏆 Our Vega-MT ranked 1st (Chinese<=>English, German<=>English, Czech<=>English, English=>Russian), 2nd (Russian=>English, Japanese=>English), and 3rd (English=>Japanese) in the General Translation Task at WMT 2022.
- Apr. 2022: One paper is accepted by NAACL 2022.
- Apr. 2022: One paper is accepted by SIGIR 2022, congrats to my interns Jun and Fei.
- Mar. 2022: One paper is accepted by CVPR 2022.
- Feb. 2022: Submitted my Ph.D. thesis "Neural Machine Translation with Fully Information Transformation", covering sufficient (adequate translation) & efficient (fast translation) information transformation.
- Feb. 2022: One paper is accepted by ACL 2022.
- Jan. 2022: 🏆 Our Vega v1 got 1st place on the General Language Understanding Evaluation (GLUE) benchmark! Check out the [tech report] & [media coverage].
- Dec. 2021: Invited to serve as an Area Chair for ACL 2022.
- Dec. 2021: Our Vega (织女) achieved SOTA performance on two GLUE tasks, surpassing human performance.
- Aug. 2021: Two papers are accepted by EMNLP 2021 and its Findings.
- Aug. 2021: We organize the course "Advanced Topics of AI" at the School of Gifted Young, USTC; I am the lecturer for the NLP part.
- Jul. 2021: 🏆 Ranked 1st in the Swahili-English Speech Translation Task at IWSLT 2021.
- Jul. 2021: Two papers are accepted by IWSLT 2021.
- May 2021: Three papers are accepted by ACL 2021 and its Findings.
- Mar. 2021: Invited to serve as a Session Chair for SDM 2021.
- Jan. 2021: Two papers are accepted by ICLR 2021.
- Jan. 2021: One paper is accepted by ICASSP 2021.
- Sep. 2020: One paper is accepted by COLING 2020.
- Sep. 2020: Two papers are accepted by WMT 2020.
- Sep. 2020: One paper is accepted by EMNLP 2020.
- Aug. 2020: 🏆 Ranked 2nd in the German-English Chat Translation Task at WMT 2020.
- Apr. 2020: One paper is accepted by ACL 2020.
- Apr. 2019: 🏆 Ranked 1st in the Finnish-English News Translation Task at WMT 2019.
Publications
† indicates an intern/student under my supervision; ✉️ indicates the corresponding author.
- Communication Efficient Micro-Batch Pipeline Parallelism for MoE Models Training.
  †Zheng Zhang, Dazhao Cheng, Donglin Yang, Liang Ding, Yaqi Xia, Chuang Hu.
  IEEE Network, 2023 (IEEE Netw. 2023).
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis.
  Juhua Liu, †Qihuang Zhong, Liang Ding, Hua Jin, Bo Du, and Dacheng Tao.
  arXiv preprint, 2021 & IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023 (TASLP 2023).
- Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup.
  Yan Sun, Li Shen, Hao Sun, Liang Ding, and Dacheng Tao.
  IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023 (TPAMI 2023). (CORE Rank A*)
- Self-Evolution Learning for Discriminative Language Model Pretraining.
  †Qihuang Zhong, Liang Ding (co-first author), Juhua Liu, Bo Du, and Dacheng Tao.
  Findings of the Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
- Revisiting Token Dropping Strategy in Efficient BERT Pretraining.
  †Qihuang Zhong, Liang Ding, Juhua Liu, Xuebo Liu, Min Zhang, Bo Du, and Dacheng Tao.
  The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
- Token-Level Self-Evolution Training for Sequence-to-Sequence Learning.
  †Keqin Peng, Liang Ding (co-first author), Qihuang Zhong, Yuanxin Ouyang, Wenge Rong, Zhang Xiong, and Dacheng Tao.
  The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
  (best paper nomination)
- PAD-Net: An Efficient Framework for Dynamic Networks.
  †Shwai He, Liang Ding✉️, Daize Dong, Boan Liu, Fuqiang Yu, and Dacheng Tao.
  arXiv preprint, 2022 & The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
- Toward Human-Like Evaluation for Natural Language Generation with Error Analysis.
  †Qingyu Lu, Liang Ding (co-first author), Liping Xie, Kanjian Zhang, Derek F. Wong, and Dacheng Tao.
  arXiv preprint, 2022 & The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023 oral). (CORE Rank A*)
- CASN: Class-Aware Score Network for Textual Adversarial Detection.
  †Rong Bao, Rui Zheng, Liang Ding, Qi Zhang, and Dacheng Tao.
  The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
- Divide, Conquer, and Combine: Mixture of Semantic-Independent Experts for Zero-Shot Dialogue State Tracking.
  †Qingyue Wang, Liang Ding, Yanan Cao, Yibing Zhan, Zheng Lin, Shi Wang, Dacheng Tao, and Li Guo.
  The Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023 oral). (CORE Rank A*)
- Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training.
  †Yibin Lei, Liang Ding✉️, Yu Cao, Changtong Zan, Andrew Yates, and Dacheng Tao.
  Findings of the Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
- TransGEC: Improving Grammatical Error Correction with Translationese.
  Tao Fang, Xuebo Liu, Derek F. Wong, Runzhe Zhan, Liang Ding, Lidia S. Chao, Dacheng Tao, and Min Zhang.
  Findings of the Annual Meeting of the Association for Computational Linguistics, 2023 (ACL 2023). (CORE Rank A*)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks.
  †Chuang Liu, Xueqi Ma, Yibing Zhan, Liang Ding, Dapeng Tao, Bo Du, Wenbin Hu, and Danilo Mandic.
  arXiv preprint, 2022 & IEEE Transactions on Neural Networks and Learning Systems, 2023 (TNNLS 2023). (CORE Rank A*)
- A Perioperative Risk Assessment Dataset with Multi-View Data Based on Online Accelerated Pairwise Comparison.
  Xinyao Li, Yibing Zhan, Yanhua Zhao, Yiqiang Wu, Liang Ding, Yuanyuan Li, Dapeng Tao, and Hua Jin.
  Information Fusion, 2023 (Inf. Fusion 2023). (CORE Rank B)
- Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape.
  Yan Sun, Li Shen, Shixiang Chen, Liang Ding, and Dacheng Tao.
  International Conference on Machine Learning, 2023 (ICML 2023 oral). (CORE Rank A*)
- On Efficient Training of Large-Scale Deep Learning Models: A Literature Review.
  Li Shen, Yan Sun, Zhiyuan Yu, Liang Ding, Xinmei Tian, and Dacheng Tao.
  arXiv preprint, 2023.
- OmniForce: On Human-Centered, Large Model Empowered and Cloud-Edge Collaborative AutoML System.
  (JD Explore Academy) Chao Xue, Wei Liu, Shuai Xie, Zhenfang Wang, Jiaxing Li, Xuyang Peng, Liang Ding, Shanshan Zhao, Qiong Cao, Yibo Yang, Fengxiang He, Bohua Cai, Rongcheng Bian, Yiyan Zhao, Heliang Zheng, Xiangyang Liu, Dongkai Liu, Daqing Liu, Li Shen, Chang Li, Shijin Zhang, Yukang Zhang, Guanpu Chen, Shixiang Chen, Yibing Zhan, Jing Zhang, Chaoyue Wang, and Dacheng Tao.
  System report & arXiv preprint, 2023.
- Towards Making the Most of ChatGPT for Machine Translation.
  †Keqin Peng, Liang Ding, Qihuang Zhong, Li Shen, Xuebo Liu, Min Zhang, Yuanxin Ouyang, and Dacheng Tao.
  Technical report & arXiv preprint, 2023.
  (🎁 A present for the MT community to better understand and harness the powerful ChatGPT)
- Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT.
  †Qingyu Lu, †Baopu Qiu, Liang Ding, Liping Xie, and Dacheng Tao.
  Technical report & arXiv preprint, 2023.
  (🎁 A present for the MT evaluation community to better understand and harness the powerful ChatGPT)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT.
  †Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
  Technical report & arXiv preprint, 2023.
  (🎁 A present for the NLU community to better understand and harness the powerful ChatGPT)
- Gapformer: Graph Transformer with Graph Pooling for Node Classification.
  †Chuang Liu, Yibing Zhan, Xueqi Ma, Liang Ding, Dapeng Tao, Jia Wu, and Wenbin Hu.
  International Joint Conference on Artificial Intelligence, 2023 (IJCAI 2023). (CORE Rank A*)
- Prompt-Learning for Cross-Lingual Relation Extraction.
  †Chiaming Hsu, †Changtong Zan, Liang Ding✉️, Longyue Wang, Xiaoting Wang, Weifeng Liu, Fu Lin, and Wenbin Hu.
  IEEE International Joint Conference on Neural Networks, 2023 (IJCNN 2023). (CORE Rank B)
- Knowledge Graph Augmented Network Towards Multi-View Representation Learning for Aspect-based Sentiment Analysis.
  †Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, Hua Jin, and Dacheng Tao.
  arXiv preprint, 2022 & IEEE Transactions on Knowledge and Data Engineering, 2023 (TKDE 2023). (CORE Rank A*)
- Dynamic Contrastive Distillation for Image-Text Retrieval.
  †Jun Rao, Liang Ding (co-first author), Shuhan Qi, Meng Fang, Yang Liu, Li Shen, and Dacheng Tao.
  arXiv preprint, 2022 & IEEE Transactions on Multimedia, 2023 (TMM 2023). (CORE Rank A*)
- FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy.
  Yan Sun, Li Shen, Tiansheng Huang, Liang Ding, and Dacheng Tao.
  The International Conference on Learning Representations, 2023 (ICLR 2023). (CORE Rank A*)
- Improving Simultaneous Machine Translation with Monolingual Data.
  †Hexuan Deng, Liang Ding, Xuebo Liu, Meishan Zhang, Dacheng Tao, and Min Zhang.
  The AAAI Conference on Artificial Intelligence, 2023 (AAAI 2023). (CORE Rank A*)
- MPipeMoE: Memory Efficient MoE for Pre-trained Models with Adaptive Pipeline Parallelism.
  †Zheng Zhang, Donglin Yang, Yaqi Xia, Liang Ding, Dacheng Tao, Xiaobo Zhou, and Dazhao Cheng.
  IEEE International Parallel & Distributed Processing Symposium, 2023 (IPDPS 2023). (CORE Rank A)
- AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks.
  Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, and Dacheng Tao.
  arXiv preprint, 2023.
- Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE.
  †Qihuang Zhong, Liang Ding (co-first author), Keqin Peng, Juhua Liu, Bo Du, Li Shen, Yibing Zhan, and Dacheng Tao.
  Technical report & arXiv preprint, 2023.
- Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE.
  †Qihuang Zhong, Liang Ding (co-first author), Yibing Zhan, Yu Qiao, Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu, Bo Du, Yixin Chen, Xinbo Gao, Chunyan Miao, Xiaoou Tang, and Dacheng Tao.
  Technical report & arXiv preprint, 2022.
- Original or Translated? On the Use of Parallel Data for Translation Quality Estimation.
  †Baopu Qiu, Liang Ding, Di Wu, Lin Shang, Yibing Zhan, and Dacheng Tao.
  arXiv preprint, 2022.
- Improving Sharpness-Aware Minimization with Fisher Mask for Better Generalization on Language Models.
  †Qihuang Zhong, Liang Ding, Li Shen, Peng Mi, Juhua Liu, Bo Du, and Dacheng Tao.
  Findings of the Conference on Empirical Methods in Natural Language Processing, 2022 (EMNLP 2022). (CORE Rank A)
- SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters.
  †Shwai He, Liang Ding✉️, Daize Dong, Miao Zhang, and Dacheng Tao.
  Findings of the Conference on Empirical Methods in Natural Language Processing, 2022 (EMNLP 2022). (CORE Rank A)
- Vega-MT: The JD Explore Academy Translation System for WMT22.
  †Changtong Zan, †Keqin Peng, Liang Ding✉️ (co-first author), Baopu Qiu, Boan Liu, Shwai He, Qingyu Lu, Zheng Zhang, Chuang Liu, Weifeng Liu, Yibing Zhan, and Dacheng Tao.
  The Conference on Machine Translation, 2022 (WMT 2022).
  (Among all constrained high-resource tracks, Vega-MT won 7 first places, 2 second places, and 1 third place w.r.t. BLEU, and 8 first places and 2 second places w.r.t. COMET.)
- On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation.
  †Changtong Zan, Liang Ding✉️, Li Shen, Yu Cao, Weifeng Liu, and Dacheng Tao.
  The International Conference on Computational Linguistics, 2022 (COLING 2022). (CORE Rank A)
- A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based Sentiment Analysis.
  †Bing Wang, Liang Ding✉️, Qihuang Zhong, Ximing Li, and Dacheng Tao.
  The International Conference on Computational Linguistics, 2022 (COLING 2022). (CORE Rank A)
- PANDA: Prompt Transfer Meets Knowledge Distillation for Efficient Model Adaptation.
  †Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
  arXiv preprint, 2022.
- E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation.
  †Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao.
  arXiv preprint, 2022.
- Parameter-Efficient and Student-Friendly Knowledge Distillation.
  †Jun Rao, Xv Meng, Liang Ding, Shuhan Qi, and Dacheng Tao.
  arXiv preprint, 2022.
- Bridging Cross-Lingual Gaps During Leveraging the Multilingual Sequence-to-Sequence Pretraining for Text Generation and Understanding.
  †Changtong Zan, Liang Ding✉️, Li Shen, Yu Cao, Weifeng Liu, and Dacheng Tao.
  arXiv preprint, 2022.
- BLISS: Robust Sequence-to-Sequence Learning via Self-Supervised Input Representation.
  †Zheng Zhang, Liang Ding✉️, Dazhao Cheng, Xuebo Liu, Min Zhang, and Dacheng Tao.
  arXiv preprint, 2022.
- Redistributing Low-Frequency Words: Making the Most of Monolingual Data in Non-Autoregressive Translation.
  Liang Ding, Longyue Wang, Shuming Shi, Dacheng Tao, and Zhaopeng Tu.
  The Annual Meeting of the Association for Computational Linguistics, 2022 (ACL 2022). (CORE Rank A*)
  (Full meta-score paper)
- Where Does the Performance Improvement Come From? - A Reproducibility Concern about Image-Text Retrieval.
  †Jun Rao, Fei Wang, Liang Ding✉️, Shuhan Qi, Yibing Zhan, Weifeng Liu, and Dacheng Tao.
  ACM Special Interest Group on Information Retrieval, 2022 (SIGIR 2022). (CORE Rank A*)
- Interpretable Proof Generation via Iterative Backward Reasoning.
  Hanhao Qu, Yu Cao, Jun Gao, Liang Ding, and Ruifeng Xu.
  Annual Conference of the North American Chapter of the Association for Computational Linguistics, 2022 (NAACL 2022). (CORE Rank A)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning.
  †Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, and Lingyu Duan.
  IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022 (CVPR 2022). (CORE Rank A*)
- Improving Neural Machine Translation by Denoising Training.
  Liang Ding, Keqin Peng, and Dacheng Tao.
  arXiv preprint, 2022.
- SLUA: A Super Lightweight Unsupervised Word Alignment Model via Cross-Lingual Contrastive Learning.
  Di Wu, Liang Ding, Shuo Yang, and Dacheng Tao.
  arXiv preprint, 2021 & The International Conference on Spoken Language Translation, 2022 (IWSLT 2022).
- Improving Neural Machine Translation by Bidirectional Training.
  Liang Ding, Di Wu, and Dacheng Tao.
  The Conference on Empirical Methods in Natural Language Processing, 2021 (EMNLP 2021). (CORE Rank A)
- On the Complementarity between Pre-training and Back-Translation.
  Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu.
  Findings of the Conference on Empirical Methods in Natural Language Processing, 2021 (EMNLP 2021). (CORE Rank A)
- The USYD-JD Speech Translation System for IWSLT2021.
  Liang Ding, Di Wu, and Dacheng Tao.
  The International Conference on Spoken Language Translation, 2021 (IWSLT 2021).
  (Winning submission among 42 teams for Sw-En speech translation, exceeding the 2nd place by more than 10 BLEU points)
- Self-Guided Curriculum Learning for Neural Machine Translation.
  Lei Zhou, Liang Ding, Kevin Duh, Shinji Watanabe, Ryohei Sasano, and Koichi Takeda.
  The International Conference on Spoken Language Translation, 2021 (IWSLT 2021).
- Rejuvenating Low-Frequency Words: Making the Most of Parallel Data in Non-Autoregressive Translation.
  Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu.
  The Annual Meeting of the Association for Computational Linguistics, 2021 (ACL 2021). (CORE Rank A*)
- Progressive Multi-Granularity Training for Non-Autoregressive Translation.
  Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu.
  Findings of the Annual Meeting of the Association for Computational Linguistics, 2021 (ACL 2021). (CORE Rank A*)
- On the Copying Behaviors of Pre-Training for Neural Machine Translation.
  Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu.
  Findings of the Annual Meeting of the Association for Computational Linguistics, 2021 (ACL 2021). (CORE Rank A*)
- Bridging the Gap between Clean Data Training and Real World Inference for Spoken Language Understanding.
  Di Wu, Liang Ding, Yiren Chen, and Dacheng Tao.
  arXiv preprint, 2021.
- Understanding and Improving Lexical Choice in Non-Autoregressive Translation.
  Liang Ding, Longyue Wang, Xuebo Liu, Derek F. Wong, Dacheng Tao, and Zhaopeng Tu.
  The International Conference on Learning Representations, 2021 (ICLR 2021). (CORE Rank A*)
- Understanding and Improving Encoder Layer Fusion in Sequence-to-Sequence Learning.
  Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, and Zhaopeng Tu.
  The International Conference on Learning Representations, 2021 (ICLR 2021). (CORE Rank A*)
- Towards Efficiently Diversifying Dialogue Generation via Embedding Augmentation.
  Yu Cao, Liang Ding, Zhiliang Tian, and Meng Fang.
  IEEE International Conference on Acoustics, Speech and Signal Processing, 2021 (ICASSP 2021). (CORE Rank B)
- Context-Aware Cross-Attention for Non-Autoregressive Translation.
  Liang Ding, Longyue Wang, Di Wu, Dacheng Tao, and Zhaopeng Tu.
  The International Conference on Computational Linguistics, 2020 (COLING 2020). (CORE Rank A)
- SlotRefine: A Fast Non-Autoregressive Model for Joint Intent Detection and Slot Filling.
  Di Wu, Liang Ding, Fan Lu, and Jian Xie.
  The Conference on Empirical Methods in Natural Language Processing, 2020 (EMNLP 2020). (CORE Rank A)
- Self-Attention with Cross-Lingual Position Representation.
  Liang Ding, Longyue Wang, and Dacheng Tao.
  The Annual Meeting of the Association for Computational Linguistics, 2020 (ACL 2020). (CORE Rank A*)
- Zero-Shot Translation Quality Estimation with Explicit Cross-Lingual Patterns.
  Lei Zhou, Liang Ding, and Koichi Takeda.
  The Conference on Machine Translation, 2020 (WMT 2020).
- Tencent AI Lab Machine Translation Systems for the WMT20 Chat Translation Task.
  Longyue Wang, Zhaopeng Tu, Xing Wang, Li Ding, Liang Ding, and Shuming Shi.
  The Conference on Machine Translation, 2020 (WMT 2020).
- The University of Sydney's Machine Translation System for WMT19.
  Liang Ding and Dacheng Tao.
  The Conference on Machine Translation, 2019 (WMT 2019).
  (Winning submission for the Fi-En translation task, exceeding Microsoft Research by more than 1.1 BLEU points)
- Recurrent Graph Syntax Encoder for Neural Machine Translation.
  Liang Ding and Dacheng Tao.
  arXiv preprint, 2019.
Competitions and Shared Tasks
- SuperGLUE Benchmark, ranked 1st with an average score of 91.3 (since Oct. 8, 2022).
- WMT 2022, ranked 1st on Chinese<=>English, German<=>English, Czech<=>English, and English=>Russian; 2nd on Russian=>English and Japanese=>English; and 3rd on English=>Japanese General Translation Tasks.
- GLUE Benchmark, ranked 1st with an average score of 91.3 (since Jan. 1, 2022).
- IWSLT 2021, ranked 1st on the Swahili-English speech translation task.
- WMT 2020, ranked 2nd on the German-to-English chat translation shared task.
- Tencent AI Innovation Competition, ranked 3rd on the "input tips in human-computer interaction translation" problem.
- WMT 2019, ranked 1st on the Finnish-to-English news translation shared task.
- CWMT 2017, ranked 3rd on the Japanese-to-Chinese patent translation shared task.
Professional Services
- Action Editor: ACL Rolling Review 2021
- Area Chair: ACL 2022
- Session Chair: AAAI 2023/ SDM 2021
- Conference Committee: ACL/ EMNLP/ COLING/ NAACL/ EACL/ AACL/ KDD/ SDM/ ICLR/ NeurIPS/ ICML/ AAAI/ IJCAI/ CVPR/ ICCV/ WACV etc.
- Journal Reviewer: IEEE Transactions on Neural Networks and Learning Systems/ Artificial Intelligence/ Knowledge-Based Systems/ Computational Linguistics/ ACM Transactions on the Web (Distinguished Reviewer)/ IEEE/ACM Transactions on Audio, Speech, and Language Processing/ Natural Language Engineering/ ACM Transactions on Asian and Low-Resource Language Information Processing/ Neural Computation/ Neurocomputing/ IEEE Transactions on Multimedia etc.
- Member: ACL (2020-)/ IEEE (2020-)
Group Members
I am fortunate to work with these brilliant students (both onsite and remote; see alumni here):
- Changtong Zan (PhD student at China University of Petroleum, 07/2021-present)
- Qihuang Zhong (PhD student at Wuhan University, 09/2021-present)
- Yibin Lei (Master's student at Eindhoven University of Technology, 09/2022-present)
- Qingyue Wang (PhD student at the Institute of Information Engineering, Chinese Academy of Sciences, 09/2022-present)
- Keqin Peng (PhD student at Beihang University, 09/2022-present)
- Fei Wang (PhD student at South China University of Technology, 09/2022-present)
- Rong Bao (PhD student at Fudan University, 09/2022-present)
- Shizhan Cai (Bachelor's student at The Hong Kong University of Science and Technology, 11/2022-present)
- Ziyang Xu (Master's student at Wuhan University, 11/2022-present)
Selected Awards
- 2022-23: Technology Golden Award (京东集团技术金项奖), the highest technology award at JD.com, Inc.
- 2022: Superior AI Leader Award, The World Artificial Intelligence Conference.
- 2018: Beijing Outstanding Graduate, The Education Committee of Beijing.
- 2013: National Scholarship, Ministry of Education of P.R.China.
*Last updated: July 2023.