
New #NVIDIA Paper

February 12, 2026 · 1 min read

  • Other
  • NVIDIA

New NVIDIA Paper

We introduce Motive, a motion-centric, gradient-based data attribution method that traces which training videos help or hurt video generation.

By isolating temporal dynamics from static appearance, Motive identifies which training videos shape motion in video generation.

๐Ÿ”— https://research.nvidia.com/labs/sil/projects/MOTIVE/

1/10
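The note stops at the one-line summary above, so here is a small, hedged sketch of what "motion-centric, gradient-based data attribution" can look like in practice. Everything in it is an assumption made for illustration, not the paper's method: the toy next-frame model, the frame-difference "motion loss" used to cancel static appearance, and the TracIn-style gradient dot product used as the help/hurt score.

```python
# Illustrative sketch only: Motive's actual model, loss, and attribution
# formulation are described on the project page linked in this note.
import torch
import torch.nn as nn

torch.manual_seed(0)


class TinyVideoModel(nn.Module):
    """Toy stand-in for a video generator: predicts the next frame from the current one."""

    def __init__(self, frame_dim: int = 16):
        super().__init__()
        self.net = nn.Linear(frame_dim, frame_dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (T, D) flattened frames -> one-step-ahead predictions
        return self.net(frames)


def motion_loss(model: nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """Loss on frame-to-frame differences, so static appearance largely cancels
    and the gradient is driven by temporal dynamics (an illustrative choice)."""
    pred = model(frames[:-1])                  # predicted next frames
    pred_motion = pred - frames[:-1]           # predicted temporal change
    true_motion = frames[1:] - frames[:-1]     # observed temporal change
    return ((pred_motion - true_motion) ** 2).mean()


def grad_vector(model: nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """Flatten the motion-loss gradient w.r.t. all model parameters into one vector."""
    loss = motion_loss(model, frames)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])


def attribution_score(model: nn.Module, train_video: torch.Tensor, query_video: torch.Tensor) -> float:
    """TracIn-style score: > 0 means the training video's gradient points the same
    way as the query's (helps its motion); < 0 means it pulls the model away (hurts)."""
    return torch.dot(grad_vector(model, train_video), grad_vector(model, query_video)).item()


if __name__ == "__main__":
    model = TinyVideoModel()
    query = torch.randn(8, 16)                        # generated clip whose motion we trace
    train_videos = [torch.randn(8, 16) for _ in range(5)]
    scores = [attribution_score(model, v, query) for v in train_videos]
    ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
    print("training videos ranked by motion influence:", ranking)
```

Scoring on frame differences is just one simple way to separate temporal dynamics from appearance; refer to the linked project page for the authors' actual formulation.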

🔗 Original link

  • https://research.nvidia.com/labs/sil/projects/MOTIVE/

Media

[image]

