Papers, Media, Patents

Here are my citation numbers from Google Scholar.

Citations: 3118
h-index: 27
i10-index: 42

The majority of my citations come from patents on event-based (spiking) neural networks, neuromorphic hardware, robotic navigation, and SLAM. More recently, we have published several papers on Multimodal Language Models in Autonomous Driving (also see public talks).

Papers

Here is a collection of papers I contributed to.

CarLLaVA: Vision language models for camera-only closed-loop driving

SOTA on the CARLA Leaderboard 2.0 autonomous driving benchmark, from a Wayve internship project.


LingoQA: Video Question Answering for Autonomous Driving

ECCV 2024. Releases a large dataset of autonomous driving video snippets with question-answer pairs, along with a corresponding benchmark. This work became the basis of Lingo-1.

LingoQA 3min summary


Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving

ICRA 2024. Probably the first end-to-end driving with LLMs and VQA, leading to Lingo-2, although done open-loop in simulation.

Driving and answering questions


Parallel Algorithm for Precise Navigation Using Black-Box Forward Model and Motion Primitives

IEEE Robotics and Automation Letters 2019. Older work on a very fast local planner that handles arbitrary robot geometry. It is the basis of thousands of robots deployed across the world, but not well cited :)


PhD Thesis: Training of spiking neural networks based on information theoretic costs

My PhD thesis from 2011. A bunch of math to derive learning rules for spiking neurons. I wanted to set up arbitrarily connected pools of biological neurons that learn in continuous time with supervised learning or RL. Here is a pool of randomly connected neurons that draws simple pictures while respecting the timing of event generation. These patterns were trained one-shot using the derived probabilistic learning rules.


Media and posts

At Wayve, I lead or have led several projects. Here are some blog posts about them.

Lingo-2

A Video Language Model that can actually generate driving commands and drive a real car.

Lingo-1

A Video Language Model for explainable autonomous driving.

Multiagent Reinforcement Learning

Hundreds of agents trained with RL in a city-scale simulator.

Meet Dr. Oleg Sinyavskiy of Brain Corp in Sorrento Valley

An interview I gave about my story and BrainCorp.

BrainCorp tug robot


Patents

Google Scholar counts around 80 patents filed during my time at BrainCorp and Qualcomm. Unfortunately, we didn't have a chance to turn them into academic papers. Although written in legalese, many of them are pretty good: they power thousands of robots working in the real world (or, at least hopefully, cover the future of spiking neurons). Note: my previous, not-yet-refactored name is "Sinyavskiy".
