Image credit: Jane Almon via Unsplash
A couple of weeks ago, Niantic made an announcement that didn’t dominate the headlines, but quietly marked a pretty major shift in how we should think about wearable tech and spatial computing.
In a blog post tied to their AR game Peridot, Niantic revealed that developers now have access to a new capability within their Lightship platform: 'Project Jade', a mix of camera access, AI object recognition, and geolocation. In other words, digital experiences can now understand not just where you are, but what you’re actually looking at.
For the first time, wearables are starting to see properly.
And while smart glasses and spatial apps have long been able to assist, augment, and respond to user needs, this is something different.
It represents a new layer of contextual responsiveness. A shift from utility to situational awareness. From reactive tools to experiences that can interpret and adapt in real time.
Up to now, most wearables have been pretty limited visually. They might track head movement or use basic sensors, but they haven’t had access to live camera feeds. This is largely a result of privacy concerns and platform restrictions. And those concerns are fair. Giving a device eyes is one thing. Giving it awareness of what it’s seeing is another. But it also means that, until now, even the most advanced wearables have been operating with a kind of sensory gap: knowing where you are, but not what’s in front of you.
Niantic’s update introduces a new level of spatial intelligence. By combining real-time camera data with AI and environmental understanding, their platform can now interpret what’s actually happening around you, not just where you are on a map.
Glasses can now tell whether you're in a kitchen, at a festival, or holding a coffee. They can recognise objects, spot activity, and trigger different responses depending on what they see. This is the start of experiences that react to the world itself, rather than to pre-set inputs.
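To make that pattern concrete, here’s a minimal sketch, assuming a simple pipeline: object labels from the camera, a rough location, and a rule that picks a response. The names here (DetectedObject, classifyScene, respondTo) are hypothetical stand-ins for illustration, not part of Niantic’s actual Lightship API.

```typescript
// Hypothetical sketch: fuse what the camera sees (object labels) with where the
// wearer is (coarse geolocation) and choose a response. Illustrative only.

interface DetectedObject {
  label: string;      // e.g. "coffee cup", "stage", "stove"
  confidence: number; // 0..1 from an object-recognition model
}

interface GeoContext {
  latitude: number;
  longitude: number;
  placeCategory?: string; // e.g. "cafe", "festival_grounds", "home"
}

type SceneContext = "kitchen" | "festival" | "coffee_break" | "unknown";

// Combine object detections with coarse location to guess the wearer's situation.
function classifyScene(objects: DetectedObject[], geo: GeoContext): SceneContext {
  const labels = new Set(
    objects.filter(o => o.confidence > 0.6).map(o => o.label)
  );

  if (labels.has("stove") || labels.has("cutting board")) return "kitchen";
  if (geo.placeCategory === "festival_grounds" || labels.has("stage")) return "festival";
  if (labels.has("coffee cup")) return "coffee_break";
  return "unknown";
}

// Trigger a different experience depending on what the device believes it is seeing.
function respondTo(scene: SceneContext): string {
  switch (scene) {
    case "kitchen":      return "Overlay a step-by-step recipe next to the counter.";
    case "festival":     return "Surface the set times for the nearest stage.";
    case "coffee_break": return "Offer the loyalty card for the cafe you're standing in.";
    default:             return "Stay quiet until the scene is clearer.";
  }
}

// Example: one frame's detections plus a rough location produce a contextual response.
const response = respondTo(
  classifyScene(
    [{ label: "coffee cup", confidence: 0.91 }],
    { latitude: 51.5072, longitude: -0.1276, placeCategory: "cafe" }
  )
);
console.log(response); // "Offer the loyalty card for the cafe you're standing in."
```

The specific rules don’t matter; the point is that the scene, not a button press, decides what happens next.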
That said, let’s not get ahead of ourselves. This new capability isn’t available on every device just yet. Snap Spectacles have supported developer camera access for a while now, but others, like Meta’s Ray-Bans, still keep that functionality locked down. So while Niantic’s platform now supports it, actual hardware compatibility varies.
It’s a foundational shift, not a full rollout. But it’s coming… and soon.
And it’s not just smart glasses making the leap. Earlier this year, Meta quietly unlocked developer access to the cameras on its Quest 3 headset, paving the way for a new generation of mixed reality apps that can see, understand, and respond to your environment. What we’re seeing now is a clear shift: spatial devices, whether worn like glasses or strapped on like headsets, are gaining the ability to visually make sense of their surroundings. That unlocks a whole new tier of contextual interaction.
For developers and creative teams, this opens up a whole new design toolkit. You’re no longer guessing what’s around the user; now you can see it and respond to it.
A few examples already in play:
HTC just dropped the Vive Eagle, a sleek, lightweight pair of smart glasses clearly designed to take on Meta’s Ray-Bans. It’s the latest sign that the wearable race is well underway. Google’s in the mix too, having unveiled its Android XR platform earlier this year with stylish AI-powered prototypes in partnership with Warby Parker and Gentle Monster. Throw in Snap’s Spectacles and Meta’s Quest and Ray-Bans, and it’s starting to look less like a product category and more like a battleground!
The big players aren’t experimenting anymore; they’re committing. And for developers and brands, that means one thing: the window to shape the future of wearables is wide open, but it won’t be for long.
This is one of those rare windows where the rules are still being written. Most people haven’t built for environments where visual context is part of the design. So early movers have a real chance to shape what good looks like and set the expectations that follow.
For brands in particular, it’s an opportunity to move beyond apps and banners and start crafting experiences that exist within the world, not just layered on top of it.
At Astral City, we’re already prototyping with platforms and partners to explore what this looks like. From AI-powered interactions on smart glasses to adaptive content that responds to where you are and what you’re doing, we see this as the next big leap: interaction triggered not by taps or voice commands, but by presence, context, and awareness.
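As a rough illustration of what “triggered by presence and context” could mean in practice, here’s a hypothetical sketch in which an experience subscribes to changes in the wearer’s situation instead of listening for taps. ContextStream, ContextEvent, and onContextChange are made-up names for this sketch, not any vendor’s real SDK.

```typescript
// Hypothetical sketch of the shift from input-driven to context-driven interaction:
// the experience reacts to what the wearer is doing, not to an explicit command.

interface ContextEvent {
  scene: string;           // e.g. "kitchen", "festival", "coffee_break"
  objectsInView: string[]; // labels from the perception pipeline
  timestamp: number;
}

type ContextListener = (event: ContextEvent) => void;

class ContextStream {
  private listeners: ContextListener[] = [];

  // Experiences register interest in context changes rather than button presses.
  onContextChange(listener: ContextListener): void {
    this.listeners.push(listener);
  }

  // On a real device this would be fed by the camera and AI models; here we emit manually.
  emit(event: ContextEvent): void {
    this.listeners.forEach(listener => listener(event));
  }
}

// Usage: adaptive content keyed off presence and context, with no explicit input.
const stream = new ContextStream();

stream.onContextChange(event => {
  if (event.scene === "kitchen" && event.objectsInView.includes("stove")) {
    console.log("Show a hands-free timer without being asked.");
  }
});

stream.emit({
  scene: "kitchen",
  objectsInView: ["stove", "pan"],
  timestamp: Date.now(),
});
```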
This might not be the year wearables go fully mainstream (though they’re certainly having a moment). But it might be the year they start to make sense of the world around them: understanding what’s in front of you, not just where you are.
And once they start responding in the moment, the possibilities open right up.