
Google to Launch Gemini-Powered AI Glasses in 2026

Google announces plans to release smart glasses with Gemini AI in 2026. Two models are in development: one with a display and one focused on audio, competing with Meta Ray-Ban.


Two Types of AI Glasses in Development

Google parent Alphabet announced on December 8, via an official blog post, that it is developing AI-powered smart glasses.

Two models are in development:

  • Display model: Shows information in the field of view
  • Audio-focused model: Specialized for voice assistant functionality

Both will feature Gemini AI and are scheduled for release in 2026.

Competing with Meta Ray-Ban

Meta currently leads the AI smart glasses market with the model it developed in collaboration with Ray-Ban. The Meta Ray-Ban smart glasses, which feature a camera and an AI assistant, have been well received for enabling hands-free access to information.

Google previously entered the wearable market with Google Glass (launched in 2013), but the device never achieved mainstream adoption, largely due to privacy concerns and its high price. This time, Google is re-entering the market with a more practical approach, drawing on lessons from that experience.

Leveraging Gemini’s Multimodal Capabilities

Gemini is Google’s latest multimodal AI, capable of understanding text, images, and audio in an integrated way. When combined with smart glasses, expected features include the following (a minimal API sketch follows the list):

  • Recognizing objects in view and providing information
  • Real-time translation
  • Navigation guidance
  • Hands-free Q&A
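
As a rough illustration of what such a multimodal request can look like today, the sketch below sends an image and a question to Gemini through Google's `google-generativeai` Python SDK. The model name, API-key handling, image file, and prompt are illustrative assumptions; how the glasses themselves will integrate with Gemini has not been disclosed.

```python
# Minimal sketch: one multimodal call to Gemini via the google-generativeai SDK.
# The model name, API key handling, and image path are illustrative assumptions;
# the glasses' actual on-device integration is not public.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumption: key passed directly

model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

# e.g. a frame captured by the glasses' camera
image = Image.open("landmark.jpg")
prompt = "What building is this, and what is it known for?"

# A single request mixing an image and text, returning a text answer.
response = model.generate_content([image, prompt])
print(response.text)
```

The same pattern extends to the other features listed above: translation or navigation questions are simply different text prompts paired with whatever the camera or microphone captures.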

2026: The Year of AI Wearables

Following Apple’s Vision Pro, major companies are focusing on AI-powered wearable devices, and 2026 looks set to kick off a new round of competition in the smart glasses market.