DailyPapers (@HuggingPapers)
2026-01-30 | ❤️ 268 | 🔁 46
DynamicVLA
A compact 0.4B Vision-Language-Action model that finally lets robots manipulate moving objects in real time, closing the perception-execution gap with Continuous Inference and Latent-aware Action Streaming. https://t.co/24CTWj5whA