I obtained my PhD from Peking University under the supervision of Prof. Yunhai Tong, and my bachelor's degree from Beijing University of Posts and Telecommunications. I also worked closely with Prof. Zhouchen Lin and Prof. Dacheng Tao.
My research focuses on several directions: pixel-wise scene understanding for images and videos (such as semantic/instance/panoptic segmentation and object detection) and their zero/few-shot variants; general deep learning methods and applications (such as vision transformers, efficient model design, and neural collapse); and vision-language research (including open-vocabulary learning, visual prompting, and visual grounding), as well as foundation model tuning.
Besides, I am also very interested in aerial image analysis, since I am a fan of history and military games.
During my PhD, I conducted research on image/video semantic/instance/panoptic segmentation, as well as several related problems.
Currently, I am looking for a Research Scientist position in industry (starting in 2024).
Feel free to contact me at firstname.lastname@example.org or email@example.com.
- 2023.09: 🎉🎉 Two NeurIPS papers are accepted as spotlights: PSG4D and Point-In-Context.
- 2023.08: Gave a talk on video segmentation at VALSE; Slides.
- 2023.07: 🎉🎉 Three papers accepted at ICCV-23: Tube-Link, Betrayed Caption, and EMO-Net, plus one oral paper at an ICCV-23 workshop. See you in Paris! SFNet-Lite is accepted by IJCV.
- 2023.06: Check out our new paper on point cloud in-context learning and the first survey on Open Vocabulary Learning.
- 2023.03: Check out our new survey on transformer-based segmentation and detection. A video talk (in Chinese) is also available: Link.
- 2023.03: Please check out our new work, Tube-Link, the first universal video segmentation framework to outperform specialized video segmentation methods (VIS, VSS, VPS).
- 2023.03: One paper on Panoptic Video Scene Graph Generation (PVSG) is accepted by CVPR-2023.
- 2022.11: Two papers on video scene understanding are accepted by T-PAMI.
- 2022.09: One paper on Neural Collapse is accepted by NeurIPS-2022.
- 2022.08: 🎉🎉 Joined MMLab@NTU S-Lab! The code for all four of our works (Video K-Net, PanopticPartFormer, FashionFormer, and PolyphonicFormer, in CVPR-22/ECCV-22) is released. Check out my GitHub homepage.
- 2022.07: 🎉🎉 Our SFNet-Lite (an extension of SFNet-ECCV20) achieves the best mIoU/speed trade-off on multiple driving datasets: 80.1 mIoU at 50 FPS and 78.8 mIoU at 120 FPS. Code.
- 2022.07: 🎉🎉 Three papers are accepted by ECCV-2022. One paper is accepted by ICIP-2022.
- 2022.07: 🎉🎉 Graduated from PKU.
- 2022.03: 🎉🎉 Video K-Net is accepted by CVPR-2022 as an oral presentation.
Full publications per year can be found Here.
* means equal contribution.
Code can be found here.
🎖 Honors and Awards
- National Scholarship, Ministry of Education of China, at PKU (2019-2020, 2020-2021)
- President Scholarship of PKU (2020-2021)
- Beijing Excellent Graduate (2017, 2022)
- BUPT/PKU Excellent Graduate (2017, 2022)
- 2021.11 Winner of Track 2, Segmenting and Tracking Every Point and Pixel: 6th Workshop at ICCV-2021 (project leader and first author)
📖 Education
- 2017.09 - 2022.06, PhD, Peking University (PKU)
- 2013.09 - 2017.06, Bachelor's degree, Beijing University of Posts and Telecommunications (BUPT)
💬 Invited Talks
- 2022.05 Invited talk on Panoptic Segmentation and Beyond at the Baidu PaddleSeg Group
- 2021.12 Invited talk on Video Segmentation at the DiDi Auto-Driving Group
- 2021.10 Invited talk on Aligned Segmentation at the HuaWei Noah Auto-Driving Group
💻 Internships
- SenseTime, mentored by Dr. Guangliang Cheng and Dr. Jianping Shi.
- JD AI (remote cooperation), mentored by Dr. Yibo Yang and Prof. Dacheng Tao.
- DeepMotion (now Xiaomi Car), mentored by Dr. Kuiyuan Yang.
- Regular conference reviewer for CVPR, ICCV, ECCV, ICLR, AAAI, NeurIPS, ICML, and IJCAI; journal reviewer for IEEE-TIP, IEEE-TPAMI, and IJCV.
- During my PhD studies, I was mentored by Dr. Kuiyuan Yang, Prof. Li Zhang, Dr. Guangliang Cheng, Dr. Yibo Yang, Prof. Dacheng Tao, Prof. Zhouchen Lin, Mr. Xia Li, and Dr. Jiangmiao Pang.