DailyPapers (@HuggingPapers)

2026-01-30 | โค๏ธ 268 | ๐Ÿ” 46


DynamicVLA

A compact 0.4B Vision-Language-Action model that finally lets robots manipulate moving objects in real time, closing the perception-execution gap with Continuous Inference and Latent-aware Action Streaming. https://t.co/24CTWj5whA

Media

video thumbnail


Tags

Robotics VLA