DynamicVLA: Compact Vision-Language-Action Model for Real-time Manipulation
DailyPapers (@HuggingPapers)
2026-01-30 | ❤️ 265 | 🔁 45 | 💬 5
DynamicVLA
A compact 0.4B Vision-Language-Action model that finally lets robots manipulate moving objects in real time, closing the perception-execution gap with Continuous Inference and Latent-aware Action Streaming. https://x.com/HuggingPapers/status/2017094507402318169/video/1
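The post names two mechanisms, Continuous Inference and Latent-aware Action Streaming, without giving details. Below is a minimal, hypothetical Python sketch of the general pattern those names suggest: inference keeps running on the freshest observation while a separate executor streams the most recent action chunk at the control rate, so the robot never finishes executing a stale plan before perceiving again. Every identifier here (`DummyPolicy`, `perception_loop`, `execution_loop`, the timings) is an illustrative assumption, not the paper's implementation.

```python
import threading
import queue
import time

class DummyPolicy:
    """Stand-in for a compact VLA model; infer() simulates one forward pass."""
    def infer(self, observation):
        time.sleep(0.05)  # assumed inference latency for a 0.4B model
        return [f"action({observation}, step={i})" for i in range(4)]

def perception_loop(policy, action_queue, stop):
    """Continuously re-infer on the newest frame and replace the pending
    action chunk, instead of waiting for the old chunk to finish executing
    (the 'perception-execution gap' the post refers to)."""
    frame = 0
    while not stop.is_set():
        chunk = policy.infer(f"frame_{frame}")
        # Drop stale actions so execution always follows the newest plan.
        while not action_queue.empty():
            try:
                action_queue.get_nowait()
            except queue.Empty:
                break
        for action in chunk:
            action_queue.put(action)
        frame += 1

def execution_loop(action_queue, stop, control_dt=0.02):
    """Stream whatever actions are queued to the robot at a fixed rate."""
    while not stop.is_set():
        try:
            action = action_queue.get(timeout=control_dt)
        except queue.Empty:
            continue
        print("executing", action)
        time.sleep(control_dt)

if __name__ == "__main__":
    stop = threading.Event()
    actions = queue.Queue()
    perceiver = threading.Thread(target=perception_loop,
                                 args=(DummyPolicy(), actions, stop))
    executor = threading.Thread(target=execution_loop, args=(actions, stop))
    perceiver.start(); executor.start()
    time.sleep(1.0)  # run the overlapped loops briefly, then shut down
    stop.set()
    perceiver.join(); executor.join()
```

The design point the sketch makes is that perception and execution overlap rather than alternate; how DynamicVLA actually streams latent-aware actions is specified in the paper, not here.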