Part 2: Spatial Computing—When Virtual and Physical Worlds Collide

By Carolyn Mathas

In Part 1, The Nuts and Bolts of Spatial Computing, we looked at what spatial computing is and what it does. Now, we’re ready to take a look at platforms, applications, and what the future holds.

Two of the top platforms enabling spatial computing today are Apple’s ARKit and Google’s ARCore. Microsoft has announced that it will not continue its HoloLens mixed-reality efforts, and so far there is no sign of a replacement.

ARKit is Apple’s augmented reality development platform for iOS devices, enabling developers to create apps that interact with the world using the device’s cameras, processors, and motion sensors. ARKit uses visual-inertial odometry, which estimates the pose and velocity of a device by combining input from one or more cameras with one or more inertial measurement units (IMUs). This allows the device to track its own movement within a room. By analyzing that sensor data, ARKit also maps a room’s layout and detects horizontal planes, such as tabletops and floors, so virtual objects can be placed on those surfaces.
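
To make that workflow concrete, here is a minimal Swift sketch of the configure-and-run pattern for plane detection. It assumes a view controller with an ARSCNView outlet named sceneView (an illustrative assumption, not something from the article) and is a sketch rather than production code:

```swift
import ARKit
import SceneKit
import UIKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!  // assumed to be wired up in a storyboard

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)

        // World tracking runs visual-inertial odometry; ask it to detect horizontal planes too.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]

        sceneView.delegate = self
        sceneView.session.run(configuration)
    }

    // ARKit calls this when it adds an anchor; an ARPlaneAnchor marks a detected surface
    // that virtual content can be placed on.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected a horizontal plane centered at \(planeAnchor.center)")
    }
}
```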

Apple introduces a new version of ARKit whenever it updates iOS, with features tied to the expanded capabilities of the operating system. The current version, ARKit 6, adds the ability to capture high-resolution video of AR experiences for professional video editing, film production, and social media apps. ARKit 6 also extends geographic anchors, which pin AR content to real-world latitude and longitude coordinates, to additional cities such as Montreal, Sydney, Singapore, and Tokyo, and improves motion capture.
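
As a rough illustration of the geographic-anchor workflow, the Swift sketch below checks whether geotracking is available and, if so, drops an anchor at a placeholder coordinate (roughly downtown Montreal, chosen only because the city appears above). The helper name and coordinate are illustrative assumptions; the configuration and anchor types come from ARKit’s geotracking API:

```swift
import ARKit
import CoreLocation

// Illustrative helper; a real app would call this from its AR view controller.
func startGeoTracking(in session: ARSession) {
    // Geotracking is supported only on certain devices and in supported locations.
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available else {
            print("Geotracking unavailable: \(error?.localizedDescription ?? "unsupported location")")
            return
        }
        session.run(ARGeoTrackingConfiguration())

        // Placeholder coordinate, for illustration only.
        let montreal = CLLocationCoordinate2D(latitude: 45.5019, longitude: -73.5674)
        session.add(anchor: ARGeoAnchor(coordinate: montreal))
    }
}
```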

ARCore, by comparison, is Google’s AR software development kit (SDK), with cross-platform APIs for Android, iOS, Unity, and the web. ARCore provides the fundamental building blocks: motion tracking, which determines the device’s position relative to the world; anchors, which track an object’s position over time; environmental understanding, which detects the size and location of surfaces; depth understanding, which measures the distance between surfaces and a given point; and light estimation, which reports the average intensity and color correction of the environment.
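
To make the depth idea concrete: a depth image supplies a distance for every pixel, and combining a pixel with the camera’s intrinsics turns it into a 3D point, from which distances between surfaces can be measured. The Swift sketch below shows that standard pinhole-camera math as a generic illustration; it is not ARCore’s API, and the intrinsic and depth values are made-up placeholders:

```swift
import simd

// Pinhole camera intrinsics: focal lengths (fx, fy) and principal point (cx, cy), in pixels.
// Placeholder values; a real app would read these from the AR framework.
struct CameraIntrinsics {
    let fx: Float, fy: Float, cx: Float, cy: Float
}

// Back-project a pixel (u, v) with a measured depth (in meters) into a 3D point
// in the camera's coordinate frame.
func unproject(u: Float, v: Float, depth: Float, intrinsics k: CameraIntrinsics) -> SIMD3<Float> {
    let x = (u - k.cx) * depth / k.fx
    let y = (v - k.cy) * depth / k.fy
    return SIMD3<Float>(x, y, depth)
}

// Example: distance between two depth samples, e.g. a point on a table and a point on the floor.
let k = CameraIntrinsics(fx: 1_400, fy: 1_400, cx: 960, cy: 720)
let tablePoint = unproject(u: 800, v: 600, depth: 0.9, intrinsics: k)
let floorPoint = unproject(u: 1_100, v: 1_300, depth: 2.1, intrinsics: k)
print("Distance between surfaces: \(simd_distance(tablePoint, floorPoint)) meters")
```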

Given that spatial computing is the virtualization of activities and interactions among machines, people, objects, and their environments, it incorporates IoT, digital twins, ambient computing, augmented reality, virtual reality, AI, and physical controls. It shifts interaction away from individual devices and screens and into the surrounding environment, and its range of applications continues to grow.

Spatial Computing Applications

Spatial computing allows factories to optimize processes and better understand and manage machine performance. In healthcare, it is transforming diagnostics, patient care, and treatment planning. Medical professionals using spatial computing can visualize complex medical data to make more informed decisions, conduct virtual surgeries, simulate rehabilitation, and visualize the effects of multiple treatments before prescribing them to patients.

In retail, AR and VR are used for personalized product recommendations and to help customers visualize products ranging from clothing to furniture.

Spatial computing is also beginning to impact education, and this should rapidly expand. Educators can create immersive learning experiences, and students can participate in virtual field trips, explore history, and conduct science experiments without leaving their classrooms. Spatial computing has the potential to reshape education by providing students with more engaging, complex, and interactive learning experiences.

Finally, entertainment is likely seeing the greatest impact via gaming and virtual experiences that harness AR and VR. Immersive gaming allows players to feel as if they are part of the game world, while virtual concerts and events let users experience the thrill of live events without leaving their homes.

For all of its advances, there are still limitations, including hardware challenges in data processing and analysis, the high cost and technical demands of high-resolution displays, and ergonomic issues in hardware design, such as potential motion sickness and eye strain. However, the industry is working hard to address these challenges.

The Future

Researchers continue to advance spatial computing with new materials, improved battery technology, lightweight frames, holographic displays, adaptive optics, and more capable GPUs, all aimed at enhancing performance while reducing cost and power consumption. Improved eye-tracking technology will allow better collection and use of gaze data, enabling more natural interactions with virtual environments.

As technology advances, spatial computing devices are becoming lighter, more cost-effective, and energy efficient, all of which will accelerate the technology’s adoption. Ultimately, it will significantly disrupt how humans interact with technology, breaking down boundaries between physical and digital worlds, and shaping how we live, play, learn, and work.
