This is a 1-hour tutorial on an emerging direction: using Multimodal Language Models in autonomous driving.
- Why use Language models in end-2-end driving?
- Measuring LLM capabilities in driving
- Paper survey: Non-visual LLMs in self-driving
- Paper survey: VLMs for perception in self-driving
- Why is it important to go end-2-end with VLMs?
- Paper survey: End-2-end VLMs in self-driving
- Unsolved problems, conclusions, Q&A
Check out other great talks from the E2E Autonomy Tutorials session.
You can find all my other talks here.