Apple always comes through with an eventful, announcement-packed kickoff to WWDC, and yesterday was no different. The company covered a broad range of announcements and incremental improvements to existing lines, but the most important news fell into three buckets: the HomePod speaker and Siri, iOS 11 updates, and new developer tools for machine learning and vision.
HomePod & Siri
For the first time in three years, Apple announced an entirely new product line – the HomePod speaker, powered by Siri. Its functionality places it as a direct competitor to Amazon Echo and Google Home. I have to believe HomePod will sound amazing, but at $349 I'm concerned that Apple has priced itself out of real market penetration. Home assistants like this only make sense at wide scale, with deep around-the-house ubiquity, so I would expect a cheaper model or a price drop before the holidays roll around.
The keynote also made no mention of developers connecting their own services to HomePod, which may really limit consumer appeal. Google's Actions on Google and Amazon's Alexa Skills allow connections to a wide variety of services, and both support a range of options for streaming music; HomePod's only announced support is for Apple's own Apple Music service. At $200 and with partnerships in play with major content providers such as ESPN or Spotify, I'd be very optimistic about HomePod. But at $349 and siloed, I'm skeptical for now.
Even for Siri on iOS, the developer story doesn't seem to have evolved since last year. iOS 10 brought only a limited set of on-device app integrations for Siri via SiriKit – a handful of intent domains such as messaging, payments, and ride booking – and no cloud APIs. If nothing changes on this front, Apple will keep losing ground to Google and Amazon in assistant capabilities.
Apple also announced iOS 11 with upgraded Messages, peer-to-peer payments and a number of other new features. But if I had to sum up iOS 11's primary focus for end users, I would say it's "personal context." From on-device machine learning to augmented reality features to a redesigned lock screen and special tools to help drivers stay focused, the vision for iOS is a device that responds to what you're doing or even what you're looking at. This focus has trickled down to the Apple Watch as well – watchOS 4's primary feature is contextual Siri-based smarts on your main watch face. Developers will need to take advantage of this extra power and awareness, as customer expectations for timely, personal relevance are only going to increase.
ML, AR, VR
Apple also clarified its vision of privacy-focused, on-device machine learning and vision with the launch of new developer tools – Core ML for machine learning and ARKit for augmented reality applications. These tools are almost aggressively focused on on-device applications and performance; ARKit in particular is a unique new set of APIs, and it will be exciting to see what experiences developers build on the platform.
Apple also announced partnerships with VR companies on the macOS desktop side of things. But to me, its desktop VR tools felt out of place, without a clear consumer vision or connection to the company's broader computing strategy; the iOS AR tools seemed much more Apple-y. By contrast, while Google's story with Daydream VR may not be taking the marketplace by storm, it is at least cohesive with the company's Android platform approach.
Connectivity vs. Privacy
When placed against Google's I/O announcements, with their focus on large-scale cloud-based machine learning, the contrast is clear. Apple's vision is of an Apple-controlled, privacy-centric future, where your devices learn from context but guide you very selectively, in a scenario-focused way, to what Apple deems important. Google's vision is cloud-centric and unapologetically democratic (or, if you prefer, anarchistic) by comparison – a set of cloud algorithms and third-party services all connect via the cross-device Assistant to deliver content and respond to user needs.
It’ll be exciting to see these two visions play out in the market in the coming months and years. At Localytics, we’re building the tools that help developers engage with their users wherever those users are, and we’re very excited to see a vision beginning to form about what that means in the future of mobile – and beyond.
This article originally appeared in The Localytics App Analytics & Marketing Blog.