Haoyang Weng (@ElijahGalahad)
2026-01-20 | ❤️ 305 | 🔁 47 | 💬 10
Introducing vla-scratch: a modular, performant and efficient stack for VLAs. https://github.com/EGalahad/vla-scratch
I started it because existing codebases were either slow or hard to extend for co-training with clean data abstractions. This repo is a ground-up attempt to address both. https://x.com/ElijahGalahad/status/2013644966854357173/video/1
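The "clean data abstractions" point is the technical core of the pitch: co-training a VLA usually means mixing heterogeneous robot datasets behind a single sample schema. Below is a minimal, hypothetical sketch of that idea in Python; the `VLASample`, `SampleSource`, and `CoTrainMixture` names and the weighted-mixture design are illustrative assumptions, not vla-scratch's actual API.

```python
# Hypothetical sketch (not vla-scratch's actual API): one way a "clean data
# abstraction" for co-training could look. Every dataset adapter maps its raw
# records into a shared transition schema, and a mixer interleaves sources
# according to sampling weights.
from dataclasses import dataclass
import random
from typing import Protocol, Sequence

import numpy as np


@dataclass
class VLASample:
    """Shared schema that every dataset adapter must emit."""
    image: np.ndarray      # (H, W, 3) uint8 camera frame
    instruction: str       # language goal
    action: np.ndarray     # (action_dim,) continuous action target


class SampleSource(Protocol):
    """Anything indexable that yields VLASample items."""
    def __len__(self) -> int: ...
    def __getitem__(self, idx: int) -> VLASample: ...


class RandomToySource:
    """Stand-in adapter producing synthetic samples for this sketch."""
    def __init__(self, name: str, size: int, action_dim: int = 7):
        self.name, self.size, self.action_dim = name, size, action_dim

    def __len__(self) -> int:
        return self.size

    def __getitem__(self, idx: int) -> VLASample:
        rng = np.random.default_rng(idx)
        return VLASample(
            image=rng.integers(0, 255, (224, 224, 3), dtype=np.uint8),
            instruction=f"{self.name} task {idx}",
            action=rng.standard_normal(self.action_dim).astype(np.float32),
        )


class CoTrainMixture:
    """Picks a source by its mixture weight, then a random item from it."""
    def __init__(self, sources: Sequence[SampleSource],
                 weights: Sequence[float], seed: int = 0):
        total = sum(weights)
        self.sources = list(sources)
        self.probs = [w / total for w in weights]
        self.rng = random.Random(seed)

    def sample(self) -> VLASample:
        source = self.rng.choices(self.sources, weights=self.probs, k=1)[0]
        return source[self.rng.randrange(len(source))]


if __name__ == "__main__":
    # Co-train on a 70/30 mix of a large sim source and a small real source.
    mixture = CoTrainMixture(
        sources=[RandomToySource("sim", 1000), RandomToySource("real", 200)],
        weights=[0.7, 0.3],
    )
    batch = [mixture.sample() for _ in range(8)]
    print(batch[0].instruction, batch[0].action.shape)
```

The appeal of a pattern like this is that adding a new co-training source only requires a small adapter emitting the shared schema; the training loop and mixer stay untouched.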
Original links
- https://github.com/EGalahad/vla-scratch
- https://x.com/ElijahGalahad/status/2013644966854357173/video/1
Media
(demo video embedded in the original post; see the x.com link above)
Related
- video-models-serve-as-a-good-pretrained-backbone-for-robot · Topics: AI-ML, Dev-Tools, Robotics
- what-if-we-could-train-ai-robots-in-a-perfect-physics · Topics: AI-ML, Dev-Tools, Robotics
- what-if-we-could-model-vision-like-a-wave-moving-through · Topics: AI-ML, Dev-Tools
- what-if-sim-and-reality-were-one-this-system-keeps-them-in · Topics: AI-ML, Robotics
- do-we-really-need-an-external-world-model-standard · Topics: AI-ML, Robotics