Unity will launch support for the Object Capture API in its AR Companion app later in 2021.
At WWDC21, Apple previewed macOS Monterey (macOS 12). On the AR front, Apple added new APIs to RealityKit 2, the latest version of its rendering, animation, audio, and physics engine for AR. Apple says developers can create more realistic and complex AR experiences with greater visual, audio, and animation control, including custom render passes and dynamic shaders. A big part of the RealityKit 2 introduction was the Object Capture API, a photogrammetry tool that stitches together images of an object to create a high-quality 3D model. At the conference, Apple said developers such as Maxon and Unity are already using Object Capture to explore entirely new ways of creating 3D content within their content creation apps, Cinema 4D and Unity MARS. AR is becoming more realistic, accurate, fun, and above all easier.
To make AR experiences more engaging and immersive, Apple also announced supporting updates to its development tools:
- Xcode Cloud: Built into Xcode 13, Xcode Cloud automatically builds apps in the cloud and helps developers test and deliver high-quality apps more efficiently.
- App Store: With In-App Events and Custom Product Pages, the App Store now provides all-new ways for developers to promote their apps and connect with users.
- Swift: Apple’s programming language now has concurrency support built into the language, helping developers write apps that stay responsive to user input.
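As a minimal sketch of what that concurrency support looks like in practice (the function name and URL below are hypothetical, chosen only for illustration), async/await lets long-running work suspend instead of blocking a thread, so the UI stays responsive:

```swift
import Foundation

// Hypothetical helper: download data without blocking the caller.
func fetchCatalog(from url: URL) async throws -> Data {
    let (data, _) = try await URLSession.shared.data(from: url)
    return data
}

// Call site: the await suspends this task while the download runs,
// so the app keeps responding to user input in the meantime.
Task {
    do {
        let data = try await fetchCatalog(from: URL(string: "https://example.com/catalog.json")!)
        print("Received \(data.count) bytes")
    } catch {
        print("Request failed: \(error)")
    }
}
```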
Apple simplifies AR content creation for non-developers. (Source: Apple)
In a WWDC session video that includes a demonstration of 3D capture, Apple’s Object Capture team discusses the process of creating 3D models with Object Capture and best practices for object selection and image capture. Images of an object are taken from every angle, using an iPhone, iPad, DSLR, or even a drone. They need to be sharp and cover all angles around the object, because the photos are used as texture maps on the generated geometry, and there should be significant overlap between them. Apple recommends 20–200 images of an object to get good results quickly. The images are then transferred to a Mac that supports the Object Capture API. Apple says the API is supported on Intel-based Macs with 16 GB of RAM and an AMD GPU with at least 4 GB of VRAM; not surprisingly, it is supported on Apple silicon Macs too. The generated models can be viewed in AR Quick Look, added to AR scenes in Reality Composer, or integrated into further development in Xcode.
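To give a sense of what the processing step looks like in code, the sketch below is a minimal command-line example using RealityKit’s PhotogrammetrySession on a supported Mac running macOS 12. The input folder and output file paths are hypothetical placeholders, and the medium detail level is just one of the options the API offers.

```swift
import Foundation
import RealityKit

// Hypothetical paths for illustration; replace with your own image folder and output file.
let imagesFolder = URL(fileURLWithPath: "/Users/me/Captures/Sneaker", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/Users/me/Models/Sneaker.usdz")

do {
    // Configure the session: images in no particular order, default feature sensitivity.
    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .unordered
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)

    // Report progress and completion messages published by the session.
    let waiter = Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fractionComplete):
                print("Progress: \(Int(fractionComplete * 100))%")
            case .requestComplete(_, let result):
                print("Request finished: \(result)")
            case .requestError(_, let error):
                print("Request failed: \(error)")
            case .processingComplete:
                print("Model written to \(outputModel.path)")
            default:
                break
            }
        }
    }

    // Ask for a medium-detail USDZ model and start processing.
    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

    // Keep the command-line process alive while the session works.
    withExtendedLifetime((session, waiter)) {
        RunLoop.main.run()
    }
} catch {
    print("Object Capture failed: \(error)")
}
```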
Unity has been working with Apple as an early access partner on Object Capture. The feature is built into the iOS version of the Unity AR Companion app, and Unity will make it available to users later this fall. Unity also shared a guide to help users capture images in Object Capture mode. An interactive UI places a guide object over the object being captured: a green ‘pin’ dropped on the shell indicates that an angle has been captured, while a red ‘pin’ flags a low-quality image. This kind of guided photogrammetry capture helps users take good-quality images with maximum coverage, which in turn produces better 3D models of real-world objects.
As we know, photogrammetry has traditionally been a lengthy and complex process. A photogrammetry rig can employ approximately 40–170 cameras to generate detailed 3D scans of a subject, which makes it expensive for small businesses or individuals to get 3D models this way. Apple’s new API enables low-cost content creation that is accessible to beginners using their phones. That said, the process still requires patience and some skill, which can be developed with practice.
The use cases are virtually unlimited: curators, architects, medical professionals, textile manufacturers, artists, designers, and e-commerce businesses can all bring their ideas to life.
To see Apple’s press release, click here.
To see Unity’s blog post, click here.