Translating the world around using AI & AR

CLIENT

Meta

ROLE

Product Designer

YEAR

2022

The Problem

staying focused while conversing in a foreign language

It's often a struggle to stay engaged with the world around us when relying on our phones, let alone when using them for translations. They pull us out of the moment and disrupt real-time interactions. The challenge was to design a seamless, hands-free translation experience through AR smart glasses, allowing users to translate text and speech directly in their line of sight without distraction.

How might we create an experience that allows users to translate text and speech in real time without anxiety, while staying present in the world around them?

Solution

translate visual text in the world around you and take action on it

The visual text translation feature lets users translate foreign text in real time, directly within their line of sight. Users can either point at an object (using computer vision) to translate it, or take a snapshot for more complex translations. If the translated information is actionable, like a phone number or address, users can instantly trigger a call or start navigation, all without needing to look away from their environment.
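To make the "actionable translation" idea concrete, here is a minimal sketch of the last step of such a pipeline: scanning already-translated text for entities a user could act on. The OCR and translation stages are assumed to happen upstream; the entity patterns below are simplified illustrations, not the production logic.

```python
import re
from dataclasses import dataclass

# Naive illustrative patterns; a real system would use an NLU/entity service.
PHONE_RE = re.compile(r"\+?\d[\d\s\-]{7,}\d")
ADDRESS_RE = re.compile(r"\d+\s+\w+\s+(Street|St\.|Ave|Avenue|Rd|Road)")

@dataclass
class Action:
    kind: str      # "call" or "navigate"
    payload: str   # what the glasses would pass to the phone app

def detect_actions(translated_text: str) -> list[Action]:
    """Scan translated text for actionable entities (phone numbers, addresses)."""
    actions = [Action("call", m.group().strip())
               for m in PHONE_RE.finditer(translated_text)]
    m = ADDRESS_RE.search(translated_text)
    if m:
        actions.append(Action("navigate", m.group()))
    return actions
```

In a flow like the one described above, each detected `Action` would surface as a glanceable prompt ("Call", "Navigate") next to the translated text, so the user never has to leave their line of sight.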

short-burst conversations are translated in real time

The speech translation feature enables users to translate real-time conversations or incoming audio, like announcements, directly through the smart glasses. It ensures users can engage in multilingual interactions seamlessly, without the need for external devices, keeping them fully present in the conversation or environment.

translations on demand by summoning your digital assistant

On-demand translation lets users summon the assistant to request translations on the go. Whether they need help with a specific word, phrase, or even a full sentence, the assistant provides instant translations, making it easier to navigate language barriers without interrupting their flow.

Impact

Working on the Live Translations project gave me deep insight into the complexities of designing real-time immersive experiences in a new domain. It was a whole new take on the importance of balancing technological possibilities with user needs, ensuring that the design enhances rather than disrupts the user's natural environment. It also demanded iterative prototyping and testing, even in a highly confidential and constrained setting, to refine the user experience.

Moreover, the challenges of integrating AI and ML (computer vision, OCR, ASR, etc.) into future-forward, emerging consumer tech products, particularly in creating seamless, intuitive interactions, were nerve-wracking yet exciting. These learnings gave me much-needed confidence and will play a great role in future projects within this new domain.