Stephen James (@stepjamUK)
2025-11-11
**Testing robot policies in the real world is expensive, slow, and hard to reproduce. But @Columbia and @SceniXai just built a simulator that actually works.**
They reconstruct real environments as soft-body digital twins using Gaussian Splatting for photorealistic rendering. Then they evaluate robot policies in simulation, and the results correlate strongly with real-world execution.
They then tested it on tough tasks like plush toy packing, rope routing, and deformable object manipulation, the kind of stuff that's notoriously difficult to simulate.
**What makes it work is physics optimization.**
They tune the digital twin's parameters to match real-world dynamics, not just visuals. Plus, color alignment helps close the appearance gap between simulated renderings and real camera feeds.
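To make the "tune the dynamics, not just the visuals" idea concrete, here's a minimal system-identification sketch in the same spirit: roll out the twin under candidate physics parameters, measure how far its trajectories drift from a real recording, and let a gradient-free optimizer close the gap. A toy mass-spring-damper stands in for the soft-body simulator; none of this is the authors' code.

```python
"""System-identification sketch: fit a digital twin's physics parameters so its
rollouts match a 'real' trajectory. A toy 1D mass-spring-damper stands in for
the soft-body simulator; the loss structure and optimizer are the point."""
import numpy as np
from scipy.optimize import minimize

DT, STEPS = 0.01, 300

def simulate(params, x0=1.0, v0=0.0):
    """Roll out the toy twin: params = (stiffness k, damping c), unit mass."""
    k, c = params
    xs, x, v = [], x0, v0
    for _ in range(STEPS):
        a = -k * x - c * v      # spring-damper acceleration
        v += a * DT             # semi-implicit Euler step
        x += v * DT
        xs.append(x)
    return np.array(xs)

# Pretend this came from tracking the real object (ground truth k=4.0, c=0.3).
real_traj = simulate((4.0, 0.3))

def loss(params):
    """Trajectory discrepancy between the twin and the real recording."""
    return float(np.mean((simulate(params) - real_traj) ** 2))

# Gradient-free search: the simulator does not need to be differentiable.
result = minimize(loss, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("recovered params:", result.x)   # should land near (4.0, 0.3)
```

The color-alignment step can be framed the same way as a tiny regression: fit an affine RGB transform from rendered pixels to real camera pixels by least squares. Again an illustrative sketch with synthetic pixel samples, not the paper's method:

```python
import numpy as np

rendered = np.random.rand(5000, 3)                      # stand-in rendered RGB samples
true_map = np.array([[1.10, 0.00, 0.00],
                     [0.00, 0.92, 0.05],
                     [0.00, 0.03, 1.05]])
real = np.clip(rendered @ true_map + 0.02, 0.0, 1.0)    # stand-in real-camera samples

A = np.hstack([rendered, np.ones((len(rendered), 1))])  # homogeneous pixel values
M, *_ = np.linalg.lstsq(A, real, rcond=None)            # (4, 3) affine color transform
aligned = np.clip(A @ M, 0.0, 1.0)                      # renders pushed toward real colors
```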
The results show that simulated rollouts predict real performance across multiple state-of-the-art imitation learning policies.
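What "simulated rollouts predict real performance" cashes out to is a correlation check: per policy, compare the success rate measured in the twin with the success rate measured on the real robot. A sketch with made-up policy names and numbers:

```python
"""Sketch: does sim evaluation predict real evaluation?
Success rates and policy names below are illustrative placeholders."""
import numpy as np
from scipy.stats import pearsonr, spearmanr

policies     = ["policy_a", "policy_b", "policy_c", "policy_d"]
sim_success  = np.array([0.82, 0.64, 0.41, 0.73])   # measured in the digital twin
real_success = np.array([0.78, 0.60, 0.35, 0.70])   # measured on the robot

r, _   = pearsonr(sim_success, real_success)        # linear agreement
rho, _ = spearmanr(sim_success, real_success)       # does sim rank policies correctly?
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```

A high rank correlation is the practically useful bit: if the twin orders policies the same way the robot does, you can compare checkpoints in sim and only take the winners to hardware.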
Once you have this infrastructure, you can run hundreds of policy evaluations overnight.
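The "hundreds of evaluations overnight" part is then just an embarrassingly parallel loop over seeds and policies. A hypothetical harness (the rollout stub and interfaces are assumptions for illustration, not the project's API):

```python
"""Hypothetical batch-evaluation harness: many seeded rollouts per policy, in parallel.
The rollout body is stubbed with a coin flip so the harness runs standalone."""
from concurrent.futures import ProcessPoolExecutor
import random

def evaluate_once(policy_name: str, seed: int) -> bool:
    """One rollout in the digital twin; returns task success.
    Replace the stub with: reset the twin with this seed, run the policy, check success."""
    rng = random.Random(f"{policy_name}-{seed}")
    return rng.random() < 0.7

def evaluate_policy(policy_name: str, n_rollouts: int = 100) -> float:
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(evaluate_once,
                                [policy_name] * n_rollouts,
                                range(n_rollouts)))
    return sum(results) / n_rollouts

if __name__ == "__main__":
    for name in ["policy_a", "policy_b", "policy_c"]:
        print(f"{name}: {evaluate_policy(name):.0%} success over 100 simulated rollouts")
```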
Evaluation has always been one of the biggest bottlenecks in robot learning, and this is exactly the kind of infrastructure that can unlock it.
Check out the paper here: https://real-to-sim.github.io/
Video credit: @SceniXai