Apple’s Mobile AI Gambit Reshapes the Market After WWDC 2025


Apple's WWDC 2025 keynote unveiled the company's most radical mobile interface transformation since iOS 7: the Liquid Glass design language. This year’s changes aren’t just a visual overhaul, though – we’re witnessing a fundamental architectural shift that aims to turn the iPhone into an autonomous AI hub powering Apple's entire ecosystem.

iOS 26's year-based naming and unified design reflect Apple's mobile-first AI vision, prioritizing on-device intelligence over cloud reliance. This strategy tackles key smartphone pain points while tightening ecosystem control across iPhone, CarPlay, wearables, and beyond.

The implications are vast: this move could redefine mobile computing itself, threatening entire app categories as AI integrates deeper into the system level.

The new era of Liquid Glass and iOS 26

Alongside its new "Liquid Glass" interface, Apple's iOS 26 introduces a significant shift in mobile AI capabilities and on-device processing architecture. The update moves away from cloud dependency by leveraging the M4 chip's Neural Engine – capable of 38 trillion operations per second – to handle AI tasks locally with 0.6ms latency.

Key AI features include Live Translation for on-device multilingual processing across calls and messages, and Visual Intelligence for contextual understanding of screen content. The Foundation Models framework provides developers with direct access to these on-device AI capabilities – enabling local execution of complex tasks like image analysis and natural language processing. This approach prioritizes privacy while maintaining functionality across Apple's ecosystem through synchronized updates to iOS 26, iPadOS 26, and macOS 26.
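To make this concrete, here is a minimal sketch of what calling the on-device model looks like in Swift. The LanguageModelSession API and its respond method are taken from Apple's WWDC 2025 materials; exact names and signatures in the shipping SDK may differ, so treat this as illustrative rather than definitive.

```swift
import FoundationModels

// Minimal sketch of on-device inference with the Foundation Models
// framework as presented at WWDC 2025. Runs entirely on supported
// iOS 26 hardware – no network round trip involved.
func summarize(_ text: String) async throws -> String {
    // A session wraps the system language model and keeps
    // conversational context across calls.
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences."
    )
    let response = try await session.respond(to: text)
    return response.content
}
```

Because inference happens locally, the same call works offline – which is exactly the cloud independence the update is built around.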

Computer vision to transform mobile interactions

iOS 26 advances computer vision capabilities through system-level integration, enabling real-time 3D object tracking and scene analysis within the Liquid Glass interface framework. These enhancements support practical applications – from AR-assisted medical diagnostics to educational tools with instant object recognition – while maintaining mobile-optimized performance.

The Liquid Glass interface in particular enhances AR experiences by providing translucent, refractive elements that better integrate digital overlays with physical environments, making spatial computing more intuitive on mobile devices.

The update streamlines cross-device workflows, allowing seamless transitions between iPhone and CarPlay interfaces with preserved context. The Camera app exemplifies this approach – its simplified UI leverages on-device AI for automated adjustments, reducing manual input while improving output quality. With the iPhone acting as the ecosystem hub, iOS 26 maintains consistent functionality across Apple devices through unified Liquid Glass design principles.

Mobile-specific AI applications and developer implications of iOS 26

iOS 26 introduces mobile-specific AI applications like "Workout Buddy" – which analyzes sensor data from Apple Watch to deliver personalized fitness recommendations. While the feature demonstrates the strengths of Apple's ecosystem integration, support for third-party health devices such as the Oura Ring or Whoop remains limited.

Micro-interactions and touch interface improvements showcase AI's potential to enhance fundamental mobile experiences through the Liquid Glass design paradigm. iOS 26's exploration of the touch screen in new contexts hints at interface approaches that could redefine smartphone interaction models.

AI-powered gesture recognition and predictive touch responses integrated into the Liquid Glass interface eliminate common mobile usability friction points while creating more fluid, responsive interactions that adapt to individual usage patterns.

The new wrist-flick gesture for Apple Watch demonstrates how mobile AI enables contextual, personalized interface controls that learn from user behavior. The Safari edge-to-edge viewing experience in iOS 26 exemplifies how AI processing can optimize mobile screen real estate by intelligently adapting content presentation to focus on user priorities while maintaining navigation accessibility.

Mobile application developers face strategic decisions about competing with or complementing Apple's expanding mobile AI capabilities following WWDC 2025's announcements. Absorbing third-party app functionality into native system capabilities follows Apple's historical pattern, but the pace is accelerating in the AI era. Apps for document scanning, language translation, and photo editing face particular threats as these capabilities integrate directly into iOS 26's system functions through Live Translation, Visual Intelligence, and enhanced Camera processing.

However, mobile-specific opportunities emerge for developers who can leverage Apple's new Foundation Models framework while providing specialized functionality beyond general mobile use cases. The ability to access Apple's on-device large language model for free, offline inference opens possibilities for complex mobile productivity workflows, enterprise-specific mobile applications, and cross-platform mobile experiences that remain largely unthreatened by Apple's consumer-focused mobile AI features.

Push towards deep system integration

Apple's new Foundation Models framework marks a major shift in mobile AI development. The SDK grants developers direct access to Apple's on-device LLM, enabling sophisticated AI features with minimal code while maintaining privacy and reducing latency.

Key advantages include:

  • Deep system integration via native OS mechanisms for sensor/memory access
  • Simplified development (3 lines of Swift code for basic AI implementations – see the sketch after this list)
  • Offline functionality ensuring consistent performance
  • Enhanced personalization using device-resident data
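
As a sketch of what "deep system integration" and "enhanced personalization" can mean in practice, the framework's guided-generation feature lets the model populate a typed Swift value directly, so device-resident data can flow into and out of the model without hand-rolled parsing. The @Generable and @Guide names below come from Apple's WWDC 2025 sessions and should be read as indicative, not as a definitive API reference.

```swift
import FoundationModels

// Sketch of guided generation: the on-device model fills in a typed
// Swift struct instead of returning free-form text.
@Generable
struct WorkoutPlan {
    @Guide(description: "A short, motivating title for the plan")
    var title: String

    @Guide(description: "Three suggested exercises")
    var exercises: [String]
}

func planWorkout(from recentActivity: String) async throws -> WorkoutPlan {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Build a recovery-day plan given: \(recentActivity)",
        generating: WorkoutPlan.self
    )
    return response.content
}
```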

The framework strengthens Apple's ecosystem lock-in, with Liquid Glass design APIs enabling developers to create native-feeling mobile applications that integrate seamlessly with iOS 26's new interface paradigm through SwiftUI, UIKit, and AppKit frameworks.
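On the interface side, adopting Liquid Glass in SwiftUI is reportedly close to a one-modifier change. The sketch below assumes the glassEffect modifier demonstrated at WWDC 2025; the exact spelling and parameters are indicative.

```swift
import SwiftUI

// Sketch of adopting Liquid Glass in SwiftUI on iOS 26: the
// glassEffect() modifier renders the translucent, refractive
// material behind the view's content.
struct AIStatusBadge: View {
    var body: some View {
        Label("On-device AI", systemImage: "sparkles")
            .padding(12)
            .glassEffect(.regular, in: .capsule)
    }
}
```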

Mobile developers can now build applications that feel indistinguishable from first-party Apple apps while leveraging sophisticated AI capabilities that would require significant infrastructure investment on other platforms.

On-device AI’s limits – what iOS 26 still can’t do

While iOS 26's Liquid Glass interface and on-device AI capabilities impressed at WWDC 2025, several worrying signs emerged about Apple's mobile strategy. The company continues grappling with fundamental trade-offs – its privacy-first approach prevents unified messaging across WhatsApp, Telegram, and other platforms, despite having the technical capability.

Restrictive APIs create an uneven playing field – while developers gain access to Foundation Models, they're blocked from deeper system integrations that Apple reserves for its own apps.

Most revealing was the delayed Siri overhaul – now pushed to 2026 – highlighting Apple's challenges matching cloud-based AI competitors while maintaining its strict on-device processing paradigm. These limitations suggest Apple's walled-garden approach may struggle to deliver the seamless, intelligent ecosystem it promises as AI becomes central to mobile computing.

Competitive implications for the mobile market

WWDC 2025 marked a turning point in the mobile industry, with Apple’s iOS 26 and Liquid Glass interface challenging competitors through a privacy-focused, on-device AI approach. By leveraging the M4 chip’s neural engine and Foundation Models framework, Apple is positioning the iPhone as an autonomous AI hub free from cloud dependencies that define rivals like Google and Meta.

Apple’s emphasis on local processing directly undermines Google’s cloud-reliant services. Features like Live Translation, fully offline yet context-aware, threaten Google Translate’s dominance, while Visual Intelligence’s real-time screen analysis could diminish reliance on Google Lens. Meanwhile, Meta faces pressure as iOS 26’s built-in translation and messaging enhancements reduce the need for WhatsApp and Messenger’s standalone features.

The Foundation Models framework opens AI development to third-party apps but also accelerates the demise of single-purpose utilities. Scanning, translation, and photo-editing apps must now compete with native iOS 26 features, pushing developers toward specialized niches (medical diagnostics, enterprise tools) or cross-platform strategies to survive.

At the same time, Apple’s partnership with OpenAI introduces a strategic vulnerability. While iOS 26 champions on-device AI, its ChatGPT integration reveals reliance on external cloud-based models: a contradiction Microsoft and Google could exploit. Microsoft’s direct ChatGPT access on Mac risks bypassing Apple’s ecosystem, while Google’s rumored Gemini integration hints at a looming cloud-versus-edge showdown.

The next 18–24 months will test Apple’s mobile AI vision. As iOS 26 rolls out to millions, key questions remain:

  • Can Siri catch up? Delayed enhancements must finally deliver, or users may defect to cloud-powered alternatives.
  • Will developers embrace Apple’s walled garden? The Foundation Models framework offers powerful tools but reinforces platform lock-in.
  • Will privacy trump performance? Consumers must choose between Apple’s secure, on-device AI and rivals’ more powerful (but data-hungry) cloud solutions.

The Liquid Glass era isn’t just an interface shift – it’s Apple’s gamble that privacy and ecosystem control will outweigh raw AI capability. Success hinges on flawless execution, developer loyalty, and whether users truly prefer self-contained intelligence over cloud-powered depth. The outcome won’t just shape Apple’s future; it will define the next decade of mobile computing.
