Yu Xiang (@YuXiang_IRVL)

2024-09-11 | ❤️ 121 | 🔁 12


Nice work! Using a motion planning expert in simulation to generate demonstrations and then learning a policy aligns with our previous work on learning a grasping policy for arbitrary objects with @LiruiWang1: https://sites.google.com/view/gaddpg https://x.com/YuXiang_IRVL/status/1833955889935733139/video/1
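The pipeline described in this tweet (a motion-planning expert produces demonstrations in simulation, and a policy is then trained to imitate them) is essentially behavior cloning. Below is a minimal, hedged sketch of that idea in PyTorch; the observation/action dimensions, network sizes, and the collect_demonstrations stub are illustrative assumptions, not code from Neural MP or GA-DDPG.

```python
# Minimal behavior-cloning sketch: learn a policy from demonstrations that a
# motion-planning expert would generate in simulation. All dimensions, names,
# and the data-collection stub below are assumptions for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

STATE_DIM = 14   # assumed: joint positions + goal encoding
ACTION_DIM = 7   # assumed: joint-space displacement command


def collect_demonstrations(num_episodes=256, horizon=32):
    """Stand-in for rolling out a motion-planning expert in simulation.

    A real pipeline would query a planner on randomized scenes and record
    (observation, expert_action) pairs; here we fabricate random tensors
    purely so the script runs end to end.
    """
    states = torch.randn(num_episodes * horizon, STATE_DIM)
    actions = torch.randn(num_episodes * horizon, ACTION_DIM)
    return TensorDataset(states, actions)


class PolicyMLP(nn.Module):
    """Small policy network mapping observations to expert-like actions."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM),
        )

    def forward(self, x):
        return self.net(x)


def train_behavior_cloning(epochs=5):
    data = DataLoader(collect_demonstrations(), batch_size=128, shuffle=True)
    policy = PolicyMLP()
    optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for epoch in range(epochs):
        total = 0.0
        for obs, expert_action in data:
            pred = policy(obs)
            loss = loss_fn(pred, expert_action)  # imitate the planner's action
            optim.zero_grad()
            loss.backward()
            optim.step()
            total += loss.item()
        print(f"epoch {epoch}: mean BC loss {total / len(data):.4f}")
    return policy


if __name__ == "__main__":
    train_behavior_cloning()
```

In a real setup the random-tensor stub would be replaced by rollouts of an actual planner over randomized scenes, and the observation would include scene geometry (e.g. point clouds or depth) so the learned policy can generalize across objects and obstacles, as the quoted thread describes.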



Quoted Tweet

Murtaza Dalal (@mihdalal)

Can a single neural network policy generalize over poses, objects, obstacles, backgrounds, scene arrangements, in-hand objects, and start/goal states?

Introducing Neural MP: A generalist policy for solving motion planning tasks in the real world 🤖 1/N https://t.co/p4V0RfUG0h


Tags

domain-simulation