Mizuho Aoki (@mizuhoaoki1998)
2025-12-24 | ❤️ 126 | 🔁 24
VLG-Loc: Vision-Language Global Localization from Labeled Footprint Maps
We present VLG-Loc, a global localization method that uses camera images and a human-readable labeled footprint map containing names and areas of visual landmarks.
Project Page: https://cyberagentailab.github.io/VLG-Loc-project-page/
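The post gives only the one-line abstract, so as a rough illustration (not the authors' implementation), here is a minimal sketch of the stated idea: a vision-language model names the landmarks visible in the camera image, and candidate poses on the labeled footprint map are ranked by how well the landmark names expected to be visible from each pose match those detections. Every name and parameter below (`Landmark`, `expected_visible`, the FOV/range defaults, the Jaccard scoring) is a hypothetical assumption for illustration.

```python
# Hypothetical sketch of the VLG-Loc idea; not the authors' code.
from dataclasses import dataclass
import math

@dataclass
class Landmark:
    name: str   # human-readable label from the footprint map, e.g. "cafe"
    x: float    # landmark centroid on the map [m]
    y: float

def expected_visible(pose, landmarks, fov_deg=90.0, max_range=15.0):
    """Names of landmarks inside a simple camera frustum at a candidate pose."""
    px, py, yaw = pose  # yaw in radians
    visible = set()
    for lm in landmarks:
        dx, dy = lm.x - px, lm.y - py
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dy, dx) - yaw)
        bearing = (bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if dist <= max_range and abs(bearing) <= fov_deg / 2.0:
            visible.add(lm.name)
    return visible

def score_pose(pose, detected_names, landmarks):
    """Jaccard overlap between VLM-detected names and map-predicted names."""
    expected = expected_visible(pose, landmarks)
    union = expected | detected_names
    return len(expected & detected_names) / len(union) if union else 0.0

def localize(detected_names, landmarks, candidate_poses):
    """Pick the candidate pose whose predicted view best matches detections."""
    return max(candidate_poses,
               key=lambda p: score_pose(p, detected_names, landmarks))

# Toy usage: the VLM reported seeing a cafe and an elevator.
landmarks = [Landmark("cafe", 5.0, 0.0), Landmark("elevator", 5.0, 3.0),
             Landmark("restroom", -8.0, 0.0)]
poses = [(0.0, 0.0, 0.0), (0.0, 0.0, math.pi)]  # (x, y, yaw) candidates
print(localize({"cafe", "elevator"}, landmarks, poses))  # -> (0.0, 0.0, 0.0)
```

A real system would need a proper VLM detection step, landmark areas rather than point centroids, and pose sampling or optimization over the map; the sketch only shows why a human-readable map with names suffices as the localization reference.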
🔗 Original links
- https://cyberagentailab.github.io/VLG-Loc-project-page/
- https://x.com/mizuhoaoki1998/status/2003757958883401823/video/1
🔗 Related
- chain-of-view-makes-vision-language-models-move-through-a
- starting-the-new-year-without-human-labeling-multimodal
- 3d-re-gen-3d-reconstruction-of-indoor-scenes-with-a
- for-those-who-thought-about-building-ai-bots-to-trade-on
- what-if-you-could-scan-a-photorealistic-moving-avatar-of