John Lambert
Research Scientist, Waymo

Bio

I am currently a research scientist at Waymo. Previously, I received my Ph.D. from Georgia Tech, where I was advised by James Hays and Frank Dellaert. Prior to joining Georgia Tech, I completed my Bachelor’s and Master’s degrees in Computer Science at Stanford University, specializing in artificial intelligence.

[My CV]

Research

My interests revolve around machine learning for robotics and autonomy. Past and present research areas include image understanding, 3D perception, SLAM, and simulation. I've been involved in research for self-driving vehicle development since 2017. Machine learning and computer vision for robot autonomy offer enormous benefits for people all over the world, with implications for safer transportation and safer workplaces.

News

  • September 2023: I will be speaking at the 2023 IROS Workshop on Traffic Agent Modeling for Autonomous Driving Simulation in October.
  • September 2023: Our paper The Waymo Open Sim Agents Challenge has been accepted to NeurIPS ’23 as a Spotlight in the Datasets & Benchmarks track.
  • May 2023: The 2023 Waymo Open Dataset Challenges have concluded. Our whitepaper describing the 2023 Sim Agents challenge is on arXiv.
  • March 2023: The 2023 Waymo Open Dataset Challenges are live. More info available here.
  • July 2022: Our paper SALVe: Semantic Alignment Verification for Floorplan Reconstruction from Sparse Panoramas has been accepted to ECCV 2022. [Project Page] [Paper]
  • March 2022: I have joined Waymo Research as a research scientist.
  • March 2022: I defended my Ph.D. thesis. Many thanks to Simon Lucey, Zsolt Kira, and Cedric Pradalier, who joined my co-advisors, James Hays and Frank Dellaert, on my committee. The title of my thesis is “Deep Learning for Building and Validating Geometric and Semantic Maps” – coming soon to arXiv.

Past News

Teaching

Aside from research, another passion of mine is teaching. I enjoy creating teaching materials for topics related to computer vision, a field which relies heavily upon numerical optimization and statistical machine learning tools. A number of teaching modules I’ve written can be found below:

Module 1: Linear Algebra
  • Foundations: Linear Algebra Without the Agonizing Pain
    Necessary background: projection, Gram-Schmidt, and the SVD
  • Fast Nearest Neighbors
    Vectorizing nearest neighbors with no for-loops (a short sketch follows below)
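To give a taste of the vectorization idea behind the Fast Nearest Neighbors post, here is a minimal NumPy sketch (my own illustration, not the post's exact code). It computes all pairwise squared Euclidean distances with broadcasting, so the nearest neighbor of every query is found without a single Python loop:

    import numpy as np

    def pairwise_sq_dists(X, Y):
        """All pairwise squared Euclidean distances, with no Python for-loops.

        Uses ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2, computed via broadcasting.
        X: (m, d), Y: (n, d) -> output of shape (m, n).
        """
        x_sq = np.sum(X ** 2, axis=1)[:, np.newaxis]  # shape (m, 1)
        y_sq = np.sum(Y ** 2, axis=1)[np.newaxis, :]  # shape (1, n)
        return x_sq - 2.0 * (X @ Y.T) + y_sq

    X = np.random.rand(5, 3)
    Y = np.random.rand(8, 3)
    nn_idx = np.argmin(pairwise_sq_dists(X, Y), axis=1)  # nearest row of Y for each row of X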
Module 2: Numerical Linear Algebra
  • Direct Methods for Solving Systems of Linear Equations
    Back substitution and the LU, Cholesky, and QR factorizations (a short sketch follows below)
  • Conjugate Gradients
    Large systems of equations, Krylov subspaces, and the Cayley-Hamilton theorem
  • Least-Squares
    QR decomposition for least-squares, modified Gram-Schmidt, and GMRES
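As a flavor of the direct-methods material, here is a hedged sketch (my own illustration, assuming SciPy is available) of solving Ax = b for a symmetric positive definite A: factor A = LL^T with Cholesky, then do one forward substitution and one back substitution:

    import numpy as np
    from scipy.linalg import cholesky, solve_triangular

    def cholesky_solve(A, b):
        """Solve A x = b for symmetric positive definite A via A = L L^T."""
        L = cholesky(A, lower=True)                   # Cholesky factor: A = L @ L.T
        y = solve_triangular(L, b, lower=True)        # forward substitution: L y = b
        return solve_triangular(L.T, y, lower=False)  # back substitution: L^T x = y

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = cholesky_solve(A, b)
    assert np.allclose(A @ x, b)

The two triangular solves are each O(n^2), so once the O(n^3) factorization is done, additional right-hand sides are cheap.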
Module 3: SVMs and Optimization
  • The Kernel Trick
    A poorly taught but beautiful insight that makes SVMs work
  • Gauss-Newton Optimization in 10 Minutes
    Derivation, the trust-region variant (Levenberg-Marquardt), and a NumPy implementation (a short sketch follows below)
  • Convex Optimization Without the Agonizing Pain
    Constrained optimization, Lagrangians, duality, and interior point methods
  • Subgradient Methods in 10 Minutes
    Convex optimization, part II
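The Gauss-Newton post walks through a NumPy implementation; the sketch below is my own minimal illustration of the same idea, not the post's code. It fits a toy exponential model by repeatedly linearizing the residuals and solving the normal equations:

    import numpy as np

    def gauss_newton(residual_fn, jac_fn, x0, num_iters=10):
        """Gauss-Newton: linearize the residuals, solve the normal equations, repeat."""
        x = x0.copy()
        for _ in range(num_iters):
            r = residual_fn(x)                       # residual vector at current estimate
            J = jac_fn(x)                            # Jacobian of residuals w.r.t. parameters
            dx = np.linalg.solve(J.T @ J, -J.T @ r)  # normal equations: (J^T J) dx = -J^T r
            x = x + dx
        return x

    # Toy problem: recover (a, c) from noiseless samples of y = exp(a * t) + c.
    t = np.linspace(0.0, 1.0, 20)
    y = np.exp(1.5 * t) + 0.3
    residual = lambda p: np.exp(p[0] * t) + p[1] - y
    jacobian = lambda p: np.stack([t * np.exp(p[0] * t), np.ones_like(t)], axis=1)
    p_hat = gauss_newton(residual, jacobian, np.array([1.0, 0.0]))  # approx. [1.5, 0.3]

Levenberg-Marquardt would replace the normal-equations solve with (J^T J + lambda I) dx = -J^T r, damping the step when the linearization is untrustworthy.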
Module 4: State Estimation
  • The Bayes Filter and Intro to State Estimation
    Linear dynamical systems, Bayes' rule, Bayesian estimation, and filtering
  • Lie Groups and Rigid Body Kinematics
    SO(2), SO(3), SE(2), SE(3), and their Lie algebras (a short sketch follows below)
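As a small illustration of the Lie-groups material (my own sketch, not the post's code), here is the SO(3) exponential map via Rodrigues' formula, taking an axis-angle vector omega in the Lie algebra so(3) to a rotation matrix:

    import numpy as np

    def hat(omega):
        """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
        wx, wy, wz = omega
        return np.array([[0.0, -wz,  wy],
                         [ wz, 0.0, -wx],
                         [-wy,  wx, 0.0]])

    def so3_exp(omega):
        """Rodrigues' formula: exp(hat(omega)) for omega in R^3."""
        theta = np.linalg.norm(omega)
        if theta < 1e-8:                 # near zero, exp(hat(w)) ~ I + hat(w)
            return np.eye(3) + hat(omega)
        K = hat(omega / theta)           # unit rotation axis, in hat form
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))            # 90 degrees about z
    assert np.allclose(R @ np.array([1.0, 0.0, 0.0]), [0.0, 1.0, 0.0])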
Module 5: Geometry and Camera Calibration
  • Stereo and Disparity
    Disparity maps, cost volumes, and MC-CNN
  • Epipolar Geometry and the Fundamental Matrix
    Simple ideas that are normally poorly explained (a short sketch follows below)
  • Visual Odometry
    The essential matrix, Nistér's 5-point algorithm, and a derivation of the epipolar constraint
  • Iterative Closest Point
    Registration, Sim(3) optimization, and simple derivations and code examples
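To make the epipolar constraint concrete, here is a small synthetic check (my own illustration, using normalized image coordinates): for two views of the same 3D point related by X2 = R X1 + t, the constraint x2^T E x1 = 0 holds with essential matrix E = [t]_x R:

    import numpy as np

    def hat(t):
        """Cross-product matrix: hat(t) @ v == np.cross(t, v)."""
        return np.array([[0.0, -t[2],  t[1]],
                         [t[2],  0.0, -t[0]],
                         [-t[1], t[0],  0.0]])

    # Camera 1 at the origin; camera 2 rotated about z and translated along x.
    c, s = np.cos(0.1), np.sin(0.1)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t = np.array([1.0, 0.0, 0.0])

    X1 = np.array([0.5, -0.2, 4.0])  # 3D point in the camera-1 frame
    X2 = R @ X1 + t                  # the same point in the camera-2 frame
    x1 = X1 / X1[2]                  # normalized image coordinates, camera 1
    x2 = X2 / X2[2]                  # normalized image coordinates, camera 2

    E = hat(t) @ R                   # essential matrix
    print(x2 @ E @ x1)               # ~0, up to floating-point error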
Module 6: Convolutional Neural Networks
  • Backprop through a Conv Layer
    Deriving backprop through a convolution, with respect to either the kernel weights or the inputs
  • Generative Adversarial Networks (GANs)
    Deriving the minimax and non-saturating losses, with a DCGAN implementation
  • PyTorch Tutorial
    PyTorch tensor operations, initializing conv layers, groups, and custom modules
  • JAX Tutorial
    Intro to JAX, optax, flax, and linen, with training loops for JAX (a short sketch follows below)
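Since the JAX tutorial covers optax and training loops, here is a minimal hedged sketch of a jitted JAX + optax training loop (a toy linear regression of my own, not the tutorial's example):

    import jax
    import jax.numpy as jnp
    import optax

    # Toy data: y = x . w_true + b_true.
    xs = jax.random.normal(jax.random.PRNGKey(0), (100, 3))
    ys = xs @ jnp.array([1.0, -2.0, 0.5]) + 0.3

    params = {"w": jnp.zeros(3), "b": jnp.array(0.0)}

    def loss_fn(params, xs, ys):
        preds = xs @ params["w"] + params["b"]
        return jnp.mean((preds - ys) ** 2)

    optimizer = optax.sgd(learning_rate=0.1)
    opt_state = optimizer.init(params)

    @jax.jit
    def step(params, opt_state):
        loss, grads = jax.value_and_grad(loss_fn)(params, xs, ys)
        updates, opt_state = optimizer.update(grads, opt_state)
        params = optax.apply_updates(params, updates)
        return params, opt_state, loss

    for _ in range(200):
        params, opt_state, loss = step(params, opt_state)  # params -> approx. w_true, b_true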
Module 7: Reinforcement Learning
  • Policy Gradients
    Intuition and simple derivations of REINFORCE and TRPO (a short sketch follows below)
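As a minimal illustration of the score-function (REINFORCE) estimator (my own toy example, not the post's code), consider a one-step Gaussian "bandit" with actions a ~ N(mu, 1) and reward r(a) = -(a - 3)^2. Since grad_mu log N(a; mu, 1) = (a - mu), the policy gradient is E[r(a) (a - mu)], which we estimate from samples:

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 0.0
    for _ in range(2000):
        a = rng.normal(mu, 1.0, size=64)            # sample a batch of actions
        r = -(a - 3.0) ** 2                         # reward for each action
        baseline = r.mean()                         # baseline for variance reduction
        grad = np.mean((r - baseline) * (a - mu))   # score-function gradient estimate
        mu += 0.05 * grad                           # gradient ascent on E[r]
    # mu converges toward 3.0, the reward-maximizing action mean.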
Module 8: Geometric Data Analysis
Module 9: Message Passing Interface (MPI)