Stephen James (@stepjamUK)

2025-11-11 | โค๏ธ 54 | ๐Ÿ” 8


๐—ง๐—ฒ๐˜€๐˜๐—ถ๐—ป๐—ด ๐—ฟ๐—ผ๐—ฏ๐—ผ๐˜ ๐—ฝ๐—ผ๐—น๐—ถ๐—ฐ๐—ถ๐—ฒ๐˜€ ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐—ฟ๐—ฒ๐—ฎ๐—น ๐˜„๐—ผ๐—ฟ๐—น๐—ฑ ๐—ถ๐˜€ ๐—ฒ๐˜…๐—ฝ๐—ฒ๐—ป๐˜€๐—ถ๐˜ƒ๐—ฒ, ๐˜€๐—น๐—ผ๐˜„, ๐—ฎ๐—ป๐—ฑ ๐—ต๐—ฎ๐—ฟ๐—ฑ ๐˜๐—ผ ๐—ฟ๐—ฒ๐—ฝ๐—ฟ๐—ผ๐—ฑ๐˜‚๐—ฐ๐—ฒ. ๐—•๐˜‚๐˜ @Columbia ๐—ฎ๐—ป๐—ฑ @SceniXai ๐—ท๐˜‚๐˜€๐˜ ๐—ฏ๐˜‚๐—ถ๐—น๐˜ ๐—ฎ ๐˜€๐—ถ๐—บ๐˜‚๐—น๐—ฎ๐˜๐—ผ๐—ฟ ๐˜๐—ต๐—ฎ๐˜ ๐—ฎ๐—ฐ๐˜๐˜‚๐—ฎ๐—น๐—น๐˜† ๐˜„๐—ผ๐—ฟ๐—ธ๐˜€.

They reconstruct real environments as soft-body digital twins using Gaussian Splatting for photorealistic rendering. Then they evaluate robot policies in simulation, and the results correlate strongly with real-world execution.

They then tested it on tough tasks like plush toy packing, rope routing, and deformable object manipulation: the kind of stuff that's notoriously difficult to simulate.

๐—ช๐—ต๐—ฎ๐˜ ๐—บ๐—ฎ๐—ธ๐—ฒ๐˜€ ๐—ถ๐˜ ๐˜„๐—ผ๐—ฟ๐—ธ ๐—ถ๐˜€ ๐—ฝ๐—ต๐˜†๐˜€๐—ถ๐—ฐ๐˜€ ๐—ผ๐—ฝ๐˜๐—ถ๐—บ๐—ถ๐˜‡๐—ฎ๐˜๐—ถ๐—ผ๐—ป.

They tune the digital twinโ€™s parameters to match real-world dynamics, not just visuals. Plus, color alignment helps close the appearance gap between simulated renderings and real camera feeds.
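Here's a minimal sketch of what "tuning sim parameters to match real dynamics" means, on a toy 1-D damped spring instead of a soft-body simulator. Everything in it (the spring model, the grid search, the parameter names) is an illustrative assumption, not the paper's actual method:

```python
import numpy as np

def simulate(stiffness, damping, steps=200, dt=0.01):
    """Roll out a 1-D damped spring from x=1, v=0 and return the trajectory."""
    x, v = 1.0, 0.0
    traj = []
    for _ in range(steps):
        a = -stiffness * x - damping * v  # toy stand-in for soft-body physics
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# Pretend this trajectory was recorded on real hardware (ground truth k=10, c=0.5).
real_traj = simulate(10.0, 0.5)

# Grid-search the sim parameters to minimize trajectory error vs. the real rollout.
best = min(
    ((k, c) for k in np.linspace(5, 15, 21) for c in np.linspace(0.1, 1.0, 10)),
    key=lambda p: np.mean((simulate(*p) - real_traj) ** 2),
)
print(best)  # recovers (stiffness, damping) close to the ground truth
```

In the real system the "trajectory" would be deformable-object state from the reconstructed twin and the search would cover material parameters, but the loop is the same: roll out, compare against reality, adjust.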

The results show that simulated rollouts predict real performance across multiple state-of-the-art imitation learning policies.
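The correlation claim boils down to something like this check: measure each policy's success rate in sim and on the real robot, then compute the correlation. The numbers below are made up for illustration; the point is that a high coefficient means sim rankings transfer to hardware:

```python
import numpy as np

# Hypothetical per-policy success rates (sim vs. real) for five policies.
sim_success  = np.array([0.82, 0.55, 0.70, 0.30, 0.91])
real_success = np.array([0.78, 0.50, 0.66, 0.35, 0.88])

r = np.corrcoef(sim_success, real_success)[0, 1]
print(f"Pearson r = {r:.3f}")  # strongly positive for this toy data
```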

Once you have this infrastructure, you can run hundreds of policy evaluations overnight.
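A rough sketch of what overnight batch evaluation looks like once rollouts are cheap, with a dummy `evaluate_rollout` standing in for a full simulated episode (in practice you'd fan out across processes or machines rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def evaluate_rollout(seed):
    """Stand-in for one simulated policy rollout; returns success/failure."""
    random.seed(seed)
    return random.random() < 0.7  # pretend ~70% of episodes succeed

# Fan 200 rollouts out in parallel and aggregate the success rate.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(evaluate_rollout, range(200)))

print(f"success rate over {len(results)} rollouts: {sum(results) / len(results):.2f}")
```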

Evaluation has always been one of the biggest bottlenecks in robot learning, and this is exactly the kind of infrastructure that can break it.

Check out the paper here: https://real-to-sim.github.io/

Video credit: @SceniXai

๐Ÿ”— ์›๋ณธ ๋งํฌ

๋ฏธ๋””์–ด

image


Tags

3D Robotics Simulation