Domain: Robotics (167)
manipulation (31)
- 16-ego-centric-world-models-we-introduce-egowm-a-video → [1/6] Ego-centric World Models: We introduce EgoWM, a video world model that si…
- video-models-serve-as-a-good-pretrained-backbone-for-robot → Video models serve as a good pretrained backbone for robot policies. Paper: htt…
- what-if-we-could-train-ai-robots-in-a-perfect-physics → What if we could train AI robots in a perfect, physics-accurate simulation? Re…
- do-we-really-need-an-external-world-model-standard → Do we REALLY need an external world model? 🤔 Standard approaches often rely on …
- introducing-vla-scratch-a-modular-performant-and-efficient → Introducing vla-scratch: a modular, performant and efficient stack for VLAs. htt…
- nvidia-fast-thinkact-efficient-vla-reasoning-framework-that → nvidia-fast-thinkact-efficient-vla-reasoning-framework-that
- after-years-of-research-there-is-finally-a-solution-to-long → after-years-of-research-there-is-finally-a-solution-to-long
- my-first-phd-paper-is-out-what-drives-success-in-physical → my-first-phd-paper-is-out-what-drives-success-in-physical
- large-video-planner-enables-generalizable-robot-control → large-video-planner-enables-generalizable-robot-control
- twist2-scalable-portable-and-holistic-humanoid-data → twist2-scalable-portable-and-holistic-humanoid-data
- my-new-blog-to-wrap-up-the-year-training-time-data-scaling → my-new-blog-to-wrap-up-the-year-training-time-data-scaling
- predicting-how-the-environment-changes-over-time-is-a-key-ta → predicting-how-the-environment-changes-over-time-is-a-key-ta
- microsoft-just-dropped-vitra-vla-a-new-vision-language-actio → microsoft-just-dropped-vitra-vla-a-new-vision-language-actio
- how-do-we-fuse-vlm-physical-intuition-with-runtime-adaptatio → how-do-we-fuse-vlm-physical-intuition-with-runtime-adaptatio
- vision-based-teleoperation-our-open-source-robotic-hand-mirr → vision-based-teleoperation-our-open-source-robotic-hand-mirr
- excited-to-share-our-iccv-2025-paper-gwm-towards-scalable → excited-to-share-our-iccv-2025-paper-gwm-towards-scalable
- -tracking-unseen-highly-dynamic-objects-in-contact-rich-scenes-occlusion-blur-an → -tracking-unseen-highly-dynamic-objects-in-contact-rich-scenes-occlusion-blur-an
- check-out-our-upcoming-3-finger-dexterous-hand-in-action-with-real-time-leapmoti → check-out-our-upcoming-3-finger-dexterous-hand-in-action-with-real-time-leapmoti
- grasp-detection-is-crucial-for-robotic-manipulation-but-remains-challenging-in-s → grasp-detection-is-crucial-for-robotic-manipulation-but-remains-challenging-in-s
- reimplemented-vinn-by-pari-notmahi-lerrelpinto-et-al-because-a-it-is-a-nice-way → reimplemented-vinn-by-pari-notmahi-lerrelpinto-et-al-because-a-it-is-a-nice-way
- helper-of-the-chef-romias-robotics-developed-a-robotic-system-that-helps-out-in → helper-of-the-chef-romias-robotics-developed-a-robotic-system-that-helps-out-in
- introducing-2-new-ai-systems-for-robotics-aloha-unleashed-to-perform-two-armed → introducing-2-new-ai-systems-for-robotics-aloha-unleashed-to-perform-two-armed
- if-we-train-a-robot-to-use-a-grasped-object-eg-a-hammer-its-going-to-fail-when → if-we-train-a-robot-to-use-a-grasped-object-eg-a-hammer-its-going-to-fail-when
- what-structural-task-representation-enables-multi-stage-in-the-wild-bimanual → what-structural-task-representation-enables-multi-stage-in-the-wild-bimanual
- today-well-be-presenting-rialto-at-rss2024-if-you-are-around-come-to-the → today-well-be-presenting-rialto-at-rss2024-if-you-are-around-come-to-the
- what-are-robot-singularities-a-six-axis-industrial-robot-arm → what-are-robot-singularities-a-six-axis-industrial-robot-arm
- my-masters-research-heavily-influenced-by-factory-and-indust → my-masters-research-heavily-influenced-by-factory-and-indust
- a-taste-of-teleoperation-nothing-beats-getting-hands-on-with → a-taste-of-teleoperation-nothing-beats-getting-hands-on-with
- trained-a-simple-world-model-for-my-robot-arm-it-predicts-th → trained-a-simple-world-model-for-my-robot-arm-it-predicts-th
- cyberdemo-augmenting-simulated-human-demonstration-for-real → cyberdemo-augmenting-simulated-human-demonstration-for-real
- we-tackle-robotic-manipulation-with-real-world-imitation-lea → we-tackle-robotic-manipulation-with-real-world-imitation-lea
embodied-AI (29)
- vlas-nowadays-enable-robotic-manipulation-to-perform → vlas-nowadays-enable-robotic-manipulation-to-perform
- very-excited-to-release-polaris-today-our-new-tool-for-scala → very-excited-to-release-polaris-today-our-new-tool-for-scala
- a-must-read-survey-efficient-vision-language-action-models-i → a-must-read-survey-efficient-vision-language-action-models-i
- awesome-world-models-github-newly-released-one-stop-github-r → awesome-world-models-github-newly-released-one-stop-github-r
- the-texture-quality-is-insane → the-texture-quality-is-insane
- maybe-embodied-rag-could-be-better-off-since-our-glad-to → maybe-embodied-rag-could-be-better-off-since-our-glad-to
- this-is-very-exciting-a-key-piece-in-what-will-make-home → this-is-very-exciting-a-key-piece-in-what-will-make-home
- introducing-world-in-world-the-first-platform-to-truly-test → introducing-world-in-world-the-first-platform-to-truly-test
- meet-embodied-web-agents-that-bridge-physical-digital-realms → meet-embodied-web-agents-that-bridge-physical-digital-realms
- embodied-web-agents → embodied-web-agents
- -want-to-move-many-items-fast-with-your-robot-use-a-tray-but-at-high-speeds-obje → -want-to-move-many-items-fast-with-your-robot-use-a-tray-but-at-high-speeds-obje
- fun-project-at-pi-knowledge-insulation-for-vlas-we-figured-out-how-to-train-vlas → fun-project-at-pi-knowledge-insulation-for-vlas-we-figured-out-how-to-train-vlas
- httpstcoxjwjo2j3fj-onetwovla-do-reasoning-when-necessary-do-action-output-elsewi → httpstcoxjwjo2j3fj-onetwovla-do-reasoning-when-necessary-do-action-output-elsewi
- ive-been-waiting-for-this-for-a-while → ive-been-waiting-for-this-for-a-while
- excited-to-share-our-open-source-project-on-building-world-models-through-differ → excited-to-share-our-open-source-project-on-building-world-models-through-differ
- i-just-love-that-this-matrix-city-project-is-a-massive-houdini-project-masquerad → i-just-love-that-this-matrix-city-project-is-a-massive-houdini-project-masquerad
- revolutionizing-space-robotics-one-inchworm-walk-at-a-time-apply-now-watch-our-r → revolutionizing-space-robotics-one-inchworm-walk-at-a-time-apply-now-watch-our-r
- how-do-you-effectively-scale-up-real-to-sim-data-for-robotic-learning-we-propose → how-do-you-effectively-scale-up-real-to-sim-data-for-robotic-learning-we-propose
- the-foundation-model-for-robotics-is-evolving-fast-and-its-important-to-stay-upd → the-foundation-model-for-robotics-is-evolving-fast-and-its-important-to-stay-upd
- looking-for-what-to-check-out-while-at-here-are-4-projects-14-gti-design-persona → looking-for-what-to-check-out-while-at-here-are-4-projects-14-gti-design-persona
- training-on-standard-hoi-datasets-and-executing-robots-without-any-fine-tuning → training-on-standard-hoi-datasets-and-executing-robots-without-any-fine-tuning
- this-is-what-robotics-is-all-about-robots-improving-quality-of-life-for-actual-e → this-is-what-robotics-is-all-about-robots-improving-quality-of-life-for-actual-e
- robotarena-is-live-today-led-by-sumo43-robotarena-is-an-elo-based-robot-action → robotarena-is-live-today-led-by-sumo43-robotarena-is-an-elo-based-robot-action
- can-robots-think-through-complex-tasks-step-by-step-like-language-models-we-pres → can-robots-think-through-complex-tasks-step-by-step-like-language-models-we-pres
- saas-surveillance-as-a-service-we-may-need-to-redefine-the-m → saas-surveillance-as-a-service-we-may-need-to-redefine-the-m
- humans-use-pointing-to-communicate-plans-intuitively-compare → humans-use-pointing-to-communicate-plans-intuitively-compare
- simulation-plays-a-crucial-role-in-robotics-development-deve → simulation-plays-a-crucial-role-in-robotics-development-deve
- how-do-you-make-sure-llms-are-actually-able-to-generate-usab → how-do-you-make-sure-llms-are-actually-able-to-generate-usab
- 1965106627511546357 → Autoregressive Robotic Model - Learning Control from Human Videos
AR (20)
- 1x-world-model-from-video-to-action-a-new-way-robots-learn → 1x-world-model-from-video-to-action-a-new-way-robots-learn
- i-wrote-a-4000-words-long-article-about-all-the-math-you → i-wrote-a-4000-words-long-article-about-all-the-math-you
- remember-to-have-fun-image-metadata-title-candid-charm-the-p → remember-to-have-fun-image-metadata-title-candid-charm-the-p
- most-robots-still-need-markers-checkerboards-or-long-calibra → most-robots-still-need-markers-checkerboards-or-long-calibra
- simulation-as-a-data-engine-is-treating-simulation-as-a-data → simulation-as-a-data-engine-is-treating-simulation-as-a-data
- its-finally-done-ive-finished-ripping-out-my-full-body → its-finally-done-ive-finished-ripping-out-my-full-body
- cvpr-25-vid2sim-realistic-and-interactive-simulation-from-video-for-urban-nav → cvpr-25-vid2sim-realistic-and-interactive-simulation-from-video-for-urban-nav
- weve-been-exploring-3d-world-models-with-the-goal-of-finding-the-right-recipe → weve-been-exploring-3d-world-models-with-the-goal-of-finding-the-right-recipe
- steerability-remains-one-of-the-key-issues-for-current-vision-language-action → steerability-remains-one-of-the-key-issues-for-current-vision-language-action
- can-robots-search-for-objects-like-humans-humans-explore-unseen-environments → can-robots-search-for-objects-like-humans-humans-explore-unseen-environments
- what-happens-when-vision-robotics-meet-happy-to-share-our-new-work-on → what-happens-when-vision-robotics-meet-happy-to-share-our-new-work-on
- people-grasp-objects-with-different-poses-for-different-tasks-can-dexterous-robo → people-grasp-objects-with-different-poses-for-different-tasks-can-dexterous-robo
- the-last-three-years-i-gave-a-lecture-series-on-motion-planning-together-with-pr → the-last-three-years-i-gave-a-lecture-series-on-motion-planning-together-with-pr
- collect-robot-demos-from-anywhere-through-ar-excited-to-introduce-dart-dexterous → collect-robot-demos-from-anywhere-through-ar-excited-to-introduce-dart-dexterous
- how-can-we-collect-high-quality-robot-data-without-teleoperation-ar-can-help → how-can-we-collect-high-quality-robot-data-without-teleoperation-ar-can-help
- robots-in-our-group-are-finally-moving-excited-to-share-the-first-robotlearning → robots-in-our-group-are-finally-moving-excited-to-share-the-first-robotlearning
- robots-in-our-group-are-finally-moving-excited-to-share-the-first-project-from-m → robots-in-our-group-are-finally-moving-excited-to-share-the-first-project-from-m
- looking-for-an-excavator-operator-fully-remote-remote-jobs-for-excavating-jobs → looking-for-an-excavator-operator-fully-remote-remote-jobs-for-excavating-jobs
- check-out-our-paper-also-the-best-paper-award-at-the-deformable-object-manipulat → check-out-our-paper-also-the-best-paper-award-at-the-deformable-object-manipulat
- check-out-our-rss2024-paper-also-the-best-paper-award-at-the-icra2024 → check-out-our-rss2024-paper-also-the-best-paper-award-at-the-icra2024
(no subtopic) (18)
- exploration-is-key-for-robots-to-generalize-especially-in → exploration-is-key-for-robots-to-generalize-especially-in
- spatialvla-exploring-spatial-representations-for-visual → spatialvla-exploring-spatial-representations-for-visual
- nora-a-small-open-sourced-generalist-vision-language-action → nora-a-small-open-sourced-generalist-vision-language-action
- ros2_nanollm-ros2-nodes-for-llm-vlm-vla-nanollm-optimizes → ros2_nanollm-ros2-nodes-for-llm-vlm-vla-nanollm-optimizes
- you-shouldnt-need-a-phd-to-operate-a-robot-were-unveiling-a → you-shouldnt-need-a-phd-to-operate-a-robot-were-unveiling-a
- robotics-techniques-course-videos-created-by-jay-summet → robotics-techniques-course-videos-created-by-jay-summet
- cvpr-2025-paper-alert-paper-title-rocket-1-mastering-open → cvpr-2025-paper-alert-paper-title-rocket-1-mastering-open
- teacher-student-rl-learning-for-single-view-robust-grasping → teacher-student-rl-learning-for-single-view-robust-grasping
- tired-of-slam-breaking-in-dynamic-scenes-wildgs-slam → tired-of-slam-breaking-in-dynamic-scenes-wildgs-slam
- how-can-we-collect-high-quality-robot-data-without-teleoperation-ar-can-help-int → how-can-we-collect-high-quality-robot-data-without-teleoperation-ar-can-help-int
- why-hand-engineer-digital-twins-when-digital-cousins-are-free-check-out-acdc-aut → why-hand-engineer-digital-twins-when-digital-cousins-are-free-check-out-acdc-aut
- but-robots-can-not-feel-or-can-they-Find-the-paper-here-scientists-have-develope-1 → but-robots-can-not-feel-or-can-they-Find-the-paper-here-scientists-have-develope-1
- introducing-2-new-ai-systems-for-robotics-aloha-unleashed-to-perform-two-armed-m → introducing-2-new-ai-systems-for-robotics-aloha-unleashed-to-perform-two-armed-m
- if-we-train-a-robot-to-use-a-grasped-object-eg-a-hammer-its-going-to-fail-when-t → if-we-train-a-robot-to-use-a-grasped-object-eg-a-hammer-its-going-to-fail-when-t
- looking-for-an-excavator-operator-fully-remote-remote-jobs-for-excavating-jobs-n → looking-for-an-excavator-operator-fully-remote-remote-jobs-for-excavating-jobs-n
- today-well-be-presenting-rialto-at-if-you-are-around-come-to-the-presentation-du → today-well-be-presenting-rialto-at-if-you-are-around-come-to-the-presentation-du
- vision-alone-isnt-enough-to-solve-dexterous-manipulation-the-sense-of-touch-is-n → Vision alone isn't enough to solve dexterous manipulation. The sense of touch is needed.
- scale-ego-exo-httpstcovmu4bf5wiz → Scale EGO-EXO https://t.co/VMu4bf5wiz
visionos (14)
- nice-one-lerobothf-how-in-two-years-we-went-from-20k-to-200-for-robotic → nice-one-lerobothf-how-in-two-years-we-went-from-20k-to-200-for-robotic
- so-i-was-reading-the-paper-proc4gem-from-the-deepmind-team-on-sim-to-real-and → so-i-was-reading-the-paper-proc4gem-from-the-deepmind-team-on-sim-to-real-and
- scaling-imitation-learning-has-been-bottlenecked-by-the-need-for-high-quality → scaling-imitation-learning-has-been-bottlenecked-by-the-need-for-high-quality
- in-robot-imitation-learning-perf-degrades-greatly-with-small-changes-in → in-robot-imitation-learning-perf-degrades-greatly-with-small-changes-in
- vision-encoder-upgrade-radiov25-dfn_clip-dinov2-sam-siglip-tome-multi-res → vision-encoder-upgrade-radiov25-dfn_clip-dinov2-sam-siglip-tome-multi-res
- augmented-reality-ROBOTIC-demonstrations → augmented-reality-ROBOTIC-demonstrations
- interesting-hierarchical-vla-structure-connected-by-latent-vector → interesting-hierarchical-vla-structure-connected-by-latent-vector
- i-built-something-silly-but-useful → i-built-something-silly-but-useful
- spatialcot-advancing-spatial-reasoning-through-coordinate-alignment-and-chain → spatialcot-advancing-spatial-reasoning-through-coordinate-alignment-and-chain
- dynamically-update-scene-graphs-as-an-agent-explores-and-moves-around-the-world → dynamically-update-scene-graphs-as-an-agent-explores-and-moves-around-the-world
- super-excited-to-finally-release-our-work-rekep-a-unified-task-representation-us → super-excited-to-finally-release-our-work-rekep-a-unified-task-representation-us
- robo-gs-a-physics-consistent-spatial-temporal-model-for-robotic-arm-with-hybrid-1 → robo-gs-a-physics-consistent-spatial-temporal-model-for-robotic-arm-with-hybrid-1
- introduce-open-TeleVision-we-need-an-intuitive-and-remote-teleoperation-interfac → introduce-open-TeleVision-we-need-an-intuitive-and-remote-teleoperation-interfac
- want-to-use-your-new-apple-vision-pro-to-control-your-robot → want-to-use-your-new-apple-vision-pro-to-control-your-robot
perception (10)
- what-if-we-could-model-vision-like-a-wave-moving-through → What if we could model vision like a wave moving through space? Researchers fr…
- robots-usually-need-tons-of-labeled-data-to-learn-precise-actions → robots-usually-need-tons-of-labeled-data-to-learn-precise-actions
- but-robots-can-not-feel-or-can-they-Find-the-paper-here-scientists-have-develope → but-robots-can-not-feel-or-can-they-Find-the-paper-here-scientists-have-develope
- how-do-we-represent-knowledge-for-the-next-generation-of-in-home-robots-we-want → how-do-we-represent-knowledge-for-the-next-generation-of-in-home-robots-we-want
- corl2024-accepted-theia-distilling-diverse-vision-foundation-models-for-robot → corl2024-accepted-theia-distilling-diverse-vision-foundation-models-for-robot
- our-new-paper-reflex-based-open-vocabulary-navigation-without-prior-knowledge → our-new-paper-reflex-based-open-vocabulary-navigation-without-prior-knowledge
- awesome-robotics-3d-a-curative-list-of-3d-vision-papers-relating-to-robotics → awesome-robotics-3d-a-curative-list-of-3d-vision-papers-relating-to-robotics
- openvla-an-open-source-vision-language-action-model-large-po → openvla-an-open-source-vision-language-action-model-large-po
- a-compact-04b-vision-language-action-model-that-finally-lets-robots-manipulate-m → A compact 0.4B Vision-Language-Action model that finally lets robots manipulate moving objects in real-time, closing t…
- video-models-serve-as-a-good-pretrained-backbone-for-robot-policies → Video models serve as a good pretrained backbone for robot policies.
spatial-computing (11)
- why-do-generalist-robotic-models-fail-when-a-cup-is-moved-ju → why-do-generalist-robotic-models-fail-when-a-cup-is-moved-ju
- dynamicvla → dynamicvla
- temporal-and-spatial-alignment-between-glove-and-camera → temporal-and-spatial-alignment-between-glove-and-camera
- a-great-visual-positioning-system-makes-augmented-reality-fe → a-great-visual-positioning-system-makes-augmented-reality-fe
- were-still-so-early-never-been-a-better-time-to-get-into → were-still-so-early-never-been-a-better-time-to-get-into
- imagine-a-spatially-intelligent-robot-handling-all-your → imagine-a-spatially-intelligent-robot-handling-all-your
- what-if-robots-could-learn-real-world-tasks-from-your-perspective-without-ever-t → what-if-robots-could-learn-real-world-tasks-from-your-perspective-without-ever-t
- how-do-we-represent-3d-world-knowledge-for-spatial-intelligence-in-next-generati → how-do-we-represent-3d-world-knowledge-for-spatial-intelligence-in-next-generati
- super-excited-to-finally-release-our-work-rekep-a-unified-task-representation → super-excited-to-finally-release-our-work-rekep-a-unified-task-representation
- robo-gs-a-physics-consistent-spatial-temporal-model-for-robotic-arm-with-hybrid → robo-gs-a-physics-consistent-spatial-temporal-model-for-robotic-arm-with-hybrid
- introduce-open-TeleVision-we-need-an-intuitive-and-remote-teleoperation → introduce-open-TeleVision-we-need-an-intuitive-and-remote-teleoperation
control (8)
- the-ai-paper-everyones-quietly-freaking-out-about-its → the-ai-paper-everyones-quietly-freaking-out-about-its
- introducing-dr-robot-a-robot-self-model-which-is-differentiable-from-its-visual → introducing-dr-robot-a-robot-self-model-which-is-differentiable-from-its-visual
- introducing-frank-a-whole-body-robot-control-system-for-day-to-day-household-cho → introducing-frank-a-whole-body-robot-control-system-for-day-to-day-household-cho
- show-me-the-path-im-not-talking-about-me-im-talking-about-the-robot-created-mimi → show-me-the-path-im-not-talking-about-me-im-talking-about-the-robot-created-mimi
- check-out-our-low-cost-3d-printable-exoskeleton-system-that-can-teleoperate → check-out-our-low-cost-3d-printable-exoskeleton-system-that-can-teleoperate
- i-i-follow-i-follow-you-Open-Source-this-is-a-cm6-robotic-arm-leap-motion → i-i-follow-i-follow-you-Open-Source-this-is-a-cm6-robotic-arm-leap-motion
- weve-opened-the-waitlist-for-general-robot-intelligence-development-grid-beta → weve-opened-the-waitlist-for-general-robot-intelligence-development-grid-beta
- maniskill-3-beta-is-out-simulate-everything-everywhere-all-a → maniskill-3-beta-is-out-simulate-everything-everywhere-all-a
sim2real (6)
- splatting-physical-scenes-end-to-end-real-to-sim-from-imperfect-robot-data-https → splatting-physical-scenes-end-to-end-real-to-sim-from-imperfect-robot-data-https
- robotwin-a-pioneering-dataset-that-combines-real-world-teleoperation-with-synthe → robotwin-a-pioneering-dataset-that-combines-real-world-teleoperation-with-synthe
- how-do-you-effectively-scale-up-real-to-sim-data-for-robotic-learning-we → how-do-you-effectively-scale-up-real-to-sim-data-for-robotic-learning-we
- why-hand-engineer-digital-twins-when-digital-cousins-are-free-check-out-acdc → why-hand-engineer-digital-twins-when-digital-cousins-are-free-check-out-acdc
- robot-learning-in-the-real-world-can-be-expensive-and-unsafe → robot-learning-in-the-real-world-can-be-expensive-and-unsafe
- can-we-bridge-the-sim-to-real-gap-in-complex-manipulation-without-explicit-syste → Can we bridge the Sim-to-Real gap in complex manipulation without explicit system ID? 🤔
planning (4)
- watch-as-a-simple-cone-which-does-nothing-but-get-in-the-way-demonstrates-the-cu → watch-as-a-simple-cone-which-does-nothing-but-get-in-the-way-demonstrates-the-cu
- motion-planning-for-robotics-a-review-for-sampling-based-planners → motion-planning-for-robotics-a-review-for-sampling-based-planners
- how-can-robots-incorporate-human-preferences-into-their-plans-introducing-text2i → how-can-robots-incorporate-human-preferences-into-their-plans-introducing-text2i
- nice-work-using-a-motion-planning-expert-in-simulation-to-generate-demonstration → nice-work-using-a-motion-planning-expert-in-simulation-to-generate-demonstration
VR (5)
- can-robots-learn-new-motions-directly-from-humans-meet-motio → can-robots-learn-new-motions-directly-from-humans-meet-motio
- urban-sim-is-released-a-large-scale-robot-learning-platform-for-urban-spaces → urban-sim-is-released-a-large-scale-robot-learning-platform-for-urban-spaces
- weve-combined-vr-tech-with-piper-to-create-a-dual-arm-vr-teleoperation-platform → weve-combined-vr-tech-with-piper-to-create-a-dual-arm-vr-teleoperation-platform
- highlight-simxr-real-time-simulated-avatar-from-head-mounted-sensors-controlling → highlight-simxr-real-time-simulated-avatar-from-head-mounted-sensors-controlling
- dolphins-multimodal-language-model-for-driving-paper-page-the-quest-for-fully-au → dolphins-multimodal-language-model-for-driving-paper-page-the-quest-for-fully-au
navigation (4)
- vibecode-your-robots-on-dimensional → vibecode-your-robots-on-dimensional
- droneforge-simplifying-the-process-of-training-testing-and-deploying-drone-intel → droneforge-simplifying-the-process-of-training-testing-and-deploying-drone-intel
- check-out-our-iros2024-paper-deep-visual-odometry-with-events-and-frames-the-new → check-out-our-iros2024-paper-deep-visual-odometry-with-events-and-frames-the-new
- why-do-generalist-robotic-models-fail-when-a-cup-is-moved-just-two-inches-to-the → Why do generalist robotic models fail when a cup is moved just two inches to the left? It's not a lack of motor skill, i…
web-graphics (4)
- simulation-drives-robotics-progress-but-how-do-we-close-the → simulation-drives-robotics-progress-but-how-do-we-close-the
- our-new-paper-reflex-based-open-vocabulary-navigation-without-prior-knowledge-us → our-new-paper-reflex-based-open-vocabulary-navigation-without-prior-knowledge-us
- robotarena-is-live-today-led-by-robotarena-is-an-elo-based-robot-action-model-be → robotarena-is-live-today-led-by-robotarena-is-an-elo-based-robot-action-model-be
- i-i-follow-i-follow-you-Open-Source-this-is-a-cm6-robotic-arm-leap-motion-contro → i-i-follow-i-follow-you-Open-Source-this-is-a-cm6-robotic-arm-leap-motion-contro
hand-tracking (3)
- vision-alone-isnt-enough-to-solve-dexterous-manipulation → Vision alone isn't enough to solve dexterous manipulation. The sense of touch is…
- egocentric-video-of-opening-door-with-hand-tracking-and-join → egocentric-video-of-opening-door-with-hand-tracking-and-join
- user-study-recap-of-below-post-we-tested-4-atomic-bimanual-manipulation-techniqu → user-study-recap-of-below-post-we-tested-4-atomic-bimanual-manipulation-techniqu