Public talks and presentations


Talks on autonomous driving and language models

CVPR 2024: Language models in end-to-end autonomous driving

Check out other great talks from the E2E Autonomy Tutorials session.

  • Why use language models in end-to-end driving?
  • Measuring LLM capabilities in driving
  • Paper survey: non-visual LLMs in self-driving
  • Paper survey: VLMs for perception in self-driving
  • Why is it important to go end-to-end with VLMs?
  • Paper survey: end-to-end VLMs in self-driving
  • Unsolved problems, conclusions, Q&A

BMVA 2024: Trustworthy Multimodal Learning with Foundation Models

A talk at the BMVA meeting on Trustworthy Multimodal Learning with Foundation Models.

  • Why use language models in end-to-end driving?
  • Lingo-1
  • LingoQA and a trainable LLM judge
  • Trustworthiness and explainability of end-to-end models
  • Lingo-2

WandB Fully Connected: Vision-language-action models for autonomous driving at Wayve

A shorter version of the BMVA 2024 talk above, presented at the WandB Fully Connected event.

WACV 2024: Large Language and Vision Models for Autonomous Driving

Keynote talk at the LLVM-AD workshop.

  • Software 2.0, Wayve’s end-to-end approach
  • Lingo, LingoQA
  • GAIA-1

Language and video generative AI in autonomous driving

Guest lecture for the SSY340 Deep Machine Learning course at Chalmers University of Technology.

  • Autonomous driving introduction and previous systems
  • Software 2.0, Wayve’s end-to-end approach
  • Limitations of end-to-end systems
  • Lingo-1
  • GAIA-1

Sensory Fusion for Scalable Indoor Navigation

2019 Embedded Vision Summit. Slides.

An older presentation about the perception system of BrainCorp robots.