Anand Bhattad (@anand_bhattad)
2025-03-01 | ❤️ 65 | 🔁 9
[1/3] ZeroComp is being presented as an Oral today at WACV2025!
Session Time: 2 PM local time (generative models)
We train a diffusion model as a neural renderer that takes intrinsic images as input, enabling zero-shot object compositing (e.g., inserting an armchair) without ever seeing paired scenes with and without objects during training. This approach naturally extends to object editing tasks, like material changes (e.g., transforming a sofa into a metallic one), all in a zero-shot manner.
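To make the training setup concrete, below is a minimal PyTorch sketch (not the authors' code) of the idea: a denoiser conditioned on stacked intrinsic maps is trained with a standard noise-prediction loss on single images, so no "with object" / "without object" pairs are ever needed. Channel counts, module sizes, and the noise schedule here are illustrative assumptions.

```python
# Minimal sketch of a ZeroComp-style conditional diffusion renderer.
# All shapes, channel counts, and the schedule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CondDenoiser(nn.Module):
    """Toy denoiser: predicts the noise added to the RGB image, given the
    noisy image concatenated with the intrinsic maps."""
    def __init__(self, intrinsic_ch=3 + 3 + 1 + 1):  # albedo, normals, depth, shading
        super().__init__()
        in_ch = 3 + intrinsic_ch  # noisy RGB + intrinsic condition
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, noisy_rgb, intrinsics, t):
        # A real model would also embed the timestep t; omitted for brevity.
        return self.net(torch.cat([noisy_rgb, intrinsics], dim=1))

# One DDPM-style training step: the model only ever sees an image together
# with its own intrinsic decomposition, never paired scenes.
model = CondDenoiser()
rgb = torch.rand(2, 3, 64, 64)          # target image
intrinsics = torch.rand(2, 8, 64, 64)   # stacked albedo/normals/depth/shading
t = torch.randint(0, 1000, (2,))
alpha_bar = torch.rand(2, 1, 1, 1)      # stand-in for the real noise schedule
noise = torch.randn_like(rgb)
noisy = alpha_bar.sqrt() * rgb + (1 - alpha_bar).sqrt() * noise
loss = F.mse_loss(model(noisy, intrinsics, t), noise)
loss.backward()
```

At inference, zero-shot compositing then amounts to editing the intrinsic maps (e.g., pasting an inserted object's depth, albedo, and normals into the scene maps, or swapping a sofa's material channels) and re-rendering with the diffusion sampler.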

Summary
ZeroComp is presented as an Oral at WACV 2025. It is a diffusion-based neural renderer that takes intrinsic images as input and performs zero-shot object compositing and material editing (e.g., turning a sofa metallic) without paired training data.