📚 Sehyeon's Vault

๐ŸŒ ๋„๋ฉ”์ธ

  • ๐Ÿ”ฎ3D-Vision
  • ๐ŸŽจRendering
  • ๐Ÿค–Robotics
  • ๐Ÿง LLM
  • ๐Ÿ‘๏ธVLM
  • ๐ŸŽฌGenAI
  • ๐ŸฅฝXR
  • ๐ŸŽฎSimulation
  • ๐Ÿ› ๏ธDev-Tools
  • ๐Ÿ’ฐCrypto
  • ๐Ÿ“ˆFinance
  • ๐Ÿ“‹Productivity
  • ๐Ÿ“ฆ๊ธฐํƒ€

๐Ÿ“„ Papers

  • ๐Ÿ“š์ „์ฒด ๋…ผ๋ฌธ172
December 15, 2025 · 1 min read

  • Robotics
  • manipulation
  • perception

DailyPapers (@HuggingPapers)

2025-12-15 | โค๏ธ 301 | ๐Ÿ” 52


Microsoft just dropped VITRA-VLA, a new Vision-Language-Action model for robotics on Hugging Face.

It learns dexterous manipulation from over 1 million real-life human hand activity videos.
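As a rough mental model of what a Vision-Language-Action (VLA) system does, the sketch below fuses an image embedding and an instruction embedding, then decodes a bounded low-level action. All encoder shapes, weights, and the 7-DoF action layout are illustrative assumptions for this toy example, not VITRA-VLA's actual architecture.

```python
import numpy as np

# Toy sketch of the VLA idea: image + instruction -> action.
# Weights are random placeholders; nothing here is the real model.
rng = np.random.default_rng(0)

def encode_image(image: np.ndarray) -> np.ndarray:
    """Stand-in vision encoder: global-average-pool RGB, then project."""
    pooled = image.mean(axis=(0, 1))          # (3,) mean color
    W = rng.standard_normal((3, 8))
    return pooled @ W                          # (8,) image embedding

def encode_text(tokens: list) -> np.ndarray:
    """Stand-in language encoder: mean of toy token embeddings."""
    table = rng.standard_normal((100, 8))      # toy vocab of 100 tokens
    return table[tokens].mean(axis=0)          # (8,) text embedding

def decode_action(img_emb: np.ndarray, txt_emb: np.ndarray) -> np.ndarray:
    """Fuse both modalities and map to a 7-DoF action (xyz, rpy, gripper)."""
    fused = np.concatenate([img_emb, txt_emb]) # (16,) fused embedding
    W = rng.standard_normal((16, 7))
    return np.tanh(fused @ W)                  # bounded action deltas

image = rng.random((64, 64, 3))                # fake camera frame
instruction = [5, 17, 42]                      # fake token ids
action = decode_action(encode_image(image), encode_text(instruction))
print(action.shape)  # (7,)
```

The real model presumably learns these mappings end-to-end from the human hand videos; the sketch only shows the data flow a VLA policy implements at inference time.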

🔗 Original link

  • https://x.com/HuggingPapers/status/2000441055976595566/video/1

Media

image


Tags

Robotics AI-ML




Created with Quartz v4.5.2 © 2026

  • GitHub
  • Sehyeon Park