Overview
- Apple’s SHARP research model converts a single 2D image into a photorealistic 3D Gaussian representation via a single forward pass on a standard GPU.
- The paper and code are publicly available on GitHub, an uncommon move for Apple that has spurred rapid community testing and shared demos.
- Outputs preserve real‑world distances and absolute scale, reflecting training on a mix of synthetic and real‑world data.
- The system is designed for local viewpoint changes; it deliberately does not hallucinate unseen geometry or support full walk‑around scenes.
- Early users report successful local runs, Vision Pro viewing, and third‑party renders, as observers weigh SHARP’s role against rivals like SpAItial AI’s Echo.
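To make the "image in, metric Gaussians out" idea concrete, here is a minimal toy sketch of the output representation. This is not Apple's SHARP code and the function name, parameters, and fixed depth are assumptions: in the real model a network predicts depth and per‑pixel Gaussian parameters in one forward pass, whereas this sketch simply unprojects pixels through a pinhole camera to show what a metric‑scale per‑pixel Gaussian set looks like.

```python
import numpy as np

def image_to_gaussians(rgb, depth, fx, fy, cx, cy):
    """Toy per-pixel Gaussian lift (illustrative only, not SHARP itself).

    Unprojects each pixel to a 3D mean in metric units using pinhole
    intrinsics (fx, fy, cx, cy), then attaches a color, an isotropic
    scale of roughly one pixel's footprint, and a constant opacity.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth                                         # metric depth per pixel
    x = (us - cx) * z / fx                            # back-project to camera space
    y = (vs - cy) * z / fy
    means = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # (H*W, 3) metric XYZ
    colors = rgb.reshape(-1, 3)                           # (H*W, 3) RGB
    scales = (z / fx).reshape(-1, 1) * np.ones((1, 3))    # ~1-pixel footprint
    opacity = np.full((h * w, 1), 0.9)                    # constant alpha
    return {"means": means, "colors": colors,
            "scales": scales, "opacity": opacity}

# Tiny synthetic input: a 4x4 image with every pixel at 1 m depth.
rgb = np.zeros((4, 4, 3))
depth = np.ones((4, 4))
gaussians = image_to_gaussians(rgb, depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
```

Because the means are expressed in the camera's metric frame, distances between Gaussians correspond to real‑world distances, which is the property the bullet about absolute scale describes.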