Peng Li (李鹏)
E-mail: pengli.ds [at] gmail.com
I obtained my M.S. degree in computer science from Fudan University in 2024, advised by Prof. Xipeng Qiu. During that time, I was fortunate to be mentored by Prof. Hongyuan Mei and Prof. Matthew R. Walter at TTIC. I earned my B.S. degree from the School of Data Science and Engineering at ECNU, where I worked with Prof. Yuanbin Wu.
My research interests lie mainly in foundation models for intelligent robots, especially humanoid robots.
MANGO: A Benchmark for Evaluating Mapping and Navigation Abilities of Large Language Models
tl;dr: MANGO is a text-game-based benchmark for evaluating the spatial reasoning abilities of LLMs.
Peng Ding*, Jiading Fang*, Peng Li*, Kangrui Wang*, Xiaochen Zhou*, Mo Yu, Jing Li, Matthew Walter, Hongyuan Mei
COLM 2024 | Project Page | Paper | GitHub
Statler: State-Maintaining Language Models for Embodied Reasoning
tl;dr: Statler proposes using LLMs as state maintainers for embodied agents in long-horizon reasoning tasks.
Takuma Yoneda*, Jiading Fang*, Peng Li*, Huanyu Zhang*, Tianchong Jiang, Shengjie Lin, Ben Picker, David Yunis, Hongyuan Mei, Matthew R. Walter
ICRA 2024 | Project Page | Paper | GitHub
MOSS: An Open Conversational Language Model
🌟 MOSS is the first ChatGPT-like LLM in China and the first plugin-augmented conversational language model in China, fully open-sourced with 11.8k+ GitHub stars.
Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, Xipeng Qiu
Machine Intelligence Research 2024 | Project Page | Paper | GitHub
CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors
tl;dr: A Code-as-Policies-style approach that recasts structured information extraction as few-shot code generation.
Peng Li*, Tianxiang Sun*, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, Xipeng Qiu
ACL 2023 | Project Page | Paper | GitHub