YIMING LI
I am a final-year PhD student at the NYU AI4CE Lab led by Chen Feng. I am also an NVIDIA Graduate Fellow in the Autonomous Vehicle Research Group, working closely with Marco Pavone, Jose M. Alvarez, and Sanja Fidler. Before that, I worked as an intern at NVIDIA AI Research advised by Anima Anandkumar in 2022, and as a research assistant at Shanghai Jiao Tong University (SJTU) advised by Siheng Chen in 2021.
I will graduate in Fall 2024 and am actively seeking a postdoctoral or industrial position, starting in Spring 2025.
My research vision is to enable collaborative autonomous intelligence by empowering robots with human-level spatial, social, and self-awareness, allowing them to actively perceive and plan in unstructured environments, interact effectively with humans and other robots, and leverage and augment the associated memory. To this end, I draw on vision, learning, robotics, graphics, language, sensing, data science, and cognitive science. My research includes developing robust, efficient, and scalable computational models for 3D scene parsing and decision-making from high-dimensional sensory input, as well as curating large-scale datasets to train and validate these models for self-driving and robotics.
I am looking for UG/MS students to work on cutting-edge research projects with me and my collaborators at NYU/NVIDIA/USC/Stanford/Tsinghua. Please send me an email if you are interested!
- Neural Representations for Dynamic Scenes (NeRF/3DGS)
- Vision-Language Models for Spatial Robotics
- Embodied and Cognitive AI for Robotics
- Generative Models for Robotic Perception and Planning
- Dataset Curation and Autolabeling for Spatial Robotics
news
Jun 1, 2024 | We are organizing the 2nd Vision-Centric Autonomous Driving (VCAD) Workshop at ECCV 2024. We invite you to attend our workshop and submit your papers!
May 20, 2024 | I served as an Associate Editor for IROS 2024.
Dec 10, 2023 | I received the NVIDIA Graduate Fellowship (2024-2025) (<2.0% acceptance rate). Thank you, NVIDIA!
Aug 25, 2023 | NVIDIA featured VoxFormer together with FB-OCC! Here is the YouTube video: Taking Autonomous Vehicle Occupancy Prediction into the Third Dimension - NVIDIA DRIVE Labs Ep. 30.
Jul 14, 2023 | Among Us and PVT++ are accepted at ICCV 2023. See you in Paris!
Jun 19, 2023 | I am hosting the Vision-Centric Autonomous Driving (VCAD) CVPR 2023 Workshop in Vancouver, together with Vitor Guizilini, Yue Wang, and Hang Zhao!
Jun 18, 2023 | I gave an invited talk about VoxFormer at C3DV: 1st Workshop on Compositional 3D Vision@CVPR2023.
Jun 2, 2023 | Our NYU team is organizing the Collaborative Perception and Learning (CoPerception) ICRA 2023 Workshop in London, together with the UCLA Mobility Lab and the SJTU MediaBrain Group.
Apr 23, 2023 | DeepExplorer is accepted at RSS 2023. See you in Daegu!
Mar 21, 2023 | VoxFormer was selected as a highlight at CVPR 2023. Specifically, CVPR 2023 received 9155 submissions, accepted 2360 papers, and selected 235 highlights (10% of accepted papers, 2.5% of submissions).
Jun 20, 2022 | I gave an invited talk about egocentric 3D target prediction at the EPIC Workshop@CVPR2022.
Jun 5, 2022 | I gave an invited talk about collaborative and adversarial 3D perception at the 3D-DLAD Workshop@IV2022.
Jul 23, 2021 | FLAT is accepted at ICCV 2021 as an oral presentation. ICCV 2021 received a record 6236 submissions and accepted 1617 papers; ACs recommended 210 papers for oral presentation, i.e., 3% of all submissions and 13% of all accepted papers.