Sawyer Merritt (@SawyerMerritt)
2025-10-23 | ❤️ 4381 | 🔁 577
A new 30-minute presentation from @aelluswamy, Tesla’s VP of AI, has been released, in which he discusses FSD, AI, and the team’s latest progress.
Highlights from the presentation: • Tesla’s vehicle fleet can provide 500 years of driving data every single day.
Curse of Dimensionality: • 8 cameras at high frame rate = billions of tokens per 30 seconds of driving context. • Tesla must compress and extract the right correlations between sensory input and control actions.
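The "billions of tokens" claim is easy to sanity-check with back-of-envelope arithmetic. The camera resolution, frame rate, and patch size below are illustrative assumptions (not Tesla's published specs); the point is that pixel-level context is billions of tokens, and patchifying is one form of the compression the talk describes:

```python
# Back-of-envelope token count for 30 seconds of 8-camera driving context.
# All numbers are illustrative assumptions, not Tesla's actual specs.

CAMERAS = 8
FPS = 36                    # assumed "high frame rate"
WIDTH, HEIGHT = 1280, 960   # assumed per-camera resolution
PATCH = 16                  # assumed ViT-style patch size (16x16 pixels)
SECONDS = 30

frames = CAMERAS * FPS * SECONDS                      # 8,640 frames
raw_pixel_tokens = frames * WIDTH * HEIGHT            # pixel granularity
patch_tokens = frames * (WIDTH // PATCH) * (HEIGHT // PATCH)

print(f"raw:   {raw_pixel_tokens:,}")    # ~10.6 billion
print(f"patch: {patch_tokens:,}")        # ~41.5 million after patchifying
```

So even this simple patch compression cuts the context by ~256x, and the model still has to distill which of those tokens actually correlate with control actions.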
Data Advantage: • Tesla has access to a “Niagara Falls of data” — hundreds of years’ worth of collective fleet driving. • Uses smart data triggers to capture rare corner cases (e.g., complex intersections, unpredictable behavior).
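A "smart data trigger" is essentially a predicate evaluated on-vehicle that decides whether a clip is rare enough to upload. The signal names and thresholds below are hypothetical illustrations of the idea, not Tesla's actual trigger definitions:

```python
# Sketch of a fleet data trigger: keep likely corner cases, drop routine
# driving. All fields and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class ClipStats:
    hard_brake_g: float             # peak deceleration in g
    disengaged: bool                # driver took over
    crossing_pedestrians: int
    intersection_legs: int          # 4-way, 5-way, ...

def should_upload(stats: ClipStats) -> bool:
    """Return True for clips worth sending back for training."""
    if stats.disengaged:
        return True                 # takeovers are always interesting
    if stats.hard_brake_g > 0.5:
        return True                 # emergency-braking event
    if stats.crossing_pedestrians >= 3:
        return True                 # busy crosswalk
    if stats.intersection_legs >= 5:
        return True                 # complex intersection geometry
    return False                    # routine clip: not uploaded

print(should_upload(ClipStats(0.2, False, 0, 4)))   # False (routine)
print(should_upload(ClipStats(0.2, False, 1, 6)))   # True (complex intersection)
```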
Quality and Efficiency: • Extracts only the essential data needed to train models efficiently.
Debugging and Interpretability: • Even though the system is end-to-end, Tesla can still prompt the model to output interpretable data: 3D occupancy, road boundaries, objects, signs, traffic lights, etc. • Natural language querying: ask the model why it made a certain decision. • These auxiliary predictions don’t drive the car but help engineers debug and ensure safety.
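The key design point is that the auxiliary predictions are read-only: only the control output reaches the actuators. A minimal sketch of that separation, with hypothetical structures and names (not Tesla's actual interfaces):

```python
# Sketch of an end-to-end planner that also exposes auxiliary interpretable
# outputs. Only the control values drive the car; the rest exists for
# engineers to debug. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class PlannerOutput:
    steering: float                          # drives the car
    accel: float                             # drives the car
    # Auxiliary heads: inspected by humans, never fed to actuators.
    occupancy_grid: list = field(default_factory=list)
    traffic_light: str = "unknown"
    explanation: str = ""                    # natural-language rationale

def plan(sensor_tokens: list) -> PlannerOutput:
    # Placeholder for the real end-to-end network forward pass.
    return PlannerOutput(
        steering=0.0,
        accel=-1.2,
        traffic_light="red",
        explanation="Slowing because the light ahead is red.",
    )

out = plan(sensor_tokens=[])
control = (out.steering, out.accel)          # only this reaches the vehicle
print(out.explanation)                       # engineers query this, the car ignores it
```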
Tesla’s Advanced Gaussian Splatting (3D Scene Modeling): • Tesla developed a custom, ultra-fast Gaussian splatting system to reconstruct 3D scenes from limited camera views. • Produces crisp, accurate 3D renderings even from few camera angles — far better than standard NeRF/splatting approaches. • Enables rapid visual debugging of the driving environment in 3D.
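For readers unfamiliar with splatting, the core idea is that each 3D Gaussian is projected to a 2D footprint and alpha-composited front-to-back onto the image. The toy below shows only that principle in 2D; it is in no way Tesla's (unpublished) ultra-fast implementation:

```python
# Toy 2D illustration of Gaussian splatting: composite Gaussian "splats"
# onto an image front-to-back, with nearer splats occluding farther ones.

import math

W, H = 16, 8
# Each splat: (center_x, center_y, std, opacity, intensity), sorted near -> far.
splats = [(4.0, 4.0, 1.5, 0.8, 1.0), (11.0, 3.0, 2.0, 0.6, 0.4)]

image = [[0.0] * W for _ in range(H)]
transmittance = [[1.0] * W for _ in range(H)]   # how much light still passes

for cx, cy, std, opacity, intensity in splats:  # front-to-back order
    for y in range(H):
        for x in range(W):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            alpha = opacity * math.exp(-d2 / (2 * std * std))
            image[y][x] += transmittance[y][x] * alpha * intensity
            transmittance[y][x] *= 1.0 - alpha  # occlusion by nearer splats

print(round(image[4][4], 3))   # brightest at the first splat's center
```

Real 3D splatting adds projection, anisotropic covariances, and depth sorting, but the compositing loop above is the operation being accelerated.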
Evaluation & World Models: • Evaluation is the hardest challenge: models may perform well offline but fail in real-world conditions. • Tesla builds balanced, diverse evaluation datasets focusing on edge cases — not just easy highway driving.
Introduced a learned world simulator (neural network-generated video engine): • Can simulate 8 Tesla camera feeds simultaneously — fully synthetic. • Used for testing, training, and reinforcement learning. • Allows adversarial event injection (e.g., adding a pedestrian or vehicle cutting in). • Enables replaying past failures to verify new model improvements. • Can run in near real-time, letting testers “drive” inside a simulated world.
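The evaluation flow above (replay a past failure, optionally inject an adversarial agent, check the new policy) can be sketched as a closed loop. The simulator here is a trivial state machine standing in for the neural video engine; all class and method names are hypothetical:

```python
# Minimal closed-loop harness around a learned world simulator:
# replay a scenario, inject an adversarial event, observe the policy.

import random

class WorldSimulator:
    """Stands in for the neural video engine; here, a toy state machine."""
    def __init__(self, seed: int):
        self.rng = random.Random(seed)   # seeded so failures replay exactly
        self.pedestrian_ahead = False

    def inject_pedestrian(self) -> None:
        self.pedestrian_ahead = True     # adversarial event injection

    def step(self, action: str) -> dict:
        # A real simulator would render 8 synthetic camera feeds here.
        return {"pedestrian_ahead": self.pedestrian_ahead, "action": action}

def policy(obs: dict) -> str:
    return "brake" if obs.get("pedestrian_ahead") else "cruise"

def replay(seed: int, adversarial: bool) -> list:
    sim = WorldSimulator(seed)
    if adversarial:
        sim.inject_pedestrian()
    obs, actions = {}, []
    for _ in range(5):                   # closed loop: act, observe, repeat
        action = policy(obs)
        obs = sim.step(action)
        actions.append(action)
    return actions

print(replay(seed=42, adversarial=True))   # policy should brake after seeing the pedestrian
```

The same loop supports reinforcement learning (reward the actions) and regression testing (assert the new model no longer repeats an old failure).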
What’s Next: • Scale robotaxi service globally. • Unlock full autonomy across the entire Tesla fleet. • Cybercab: next-gen 2-seat vehicle designed specifically for robotaxi use, targeting lowest transportation cost (cheaper than public transit). • Same neural networks will power Optimus humanoid robot. • The same video generation system is now being applied to Optimus. • The system can simulate and plan movement for robots, adapting easily to new forms.
via the International Conference on Computer Vision (ICCV).
Full presentation: https://www.youtube.com/watch?v=wHK8GMc9O5A&t=8s
Tags
Vision-3D Robotics Simulation AI-ML GenAI Dev-Tools Web-Graphics