Apple has never been one to tout “AI” in a keynote or launch a chatbot just because everyone else is doing it, as Google and Microsoft have been lately. But for anyone who feels the company isn’t focusing on artificial intelligence (AI) as much as the other big technology companies, the WWDC 2023 announcements should put those doubts to rest. Apple has woven AI into the next versions of iOS, iPadOS, and macOS, as well as into multiple extensively reworked apps. Then there’s the Apple Vision Pro augmented reality (AR) headset, which also requires some neural network smarts.
The iPhone’s next operating system, iOS 17 (due later this year), reworks some of Apple’s own apps. One of them is the Phone app, particularly the voicemail feature. If you don’t already use voicemail, this might convince you – a live transcription appears for any voicemail message being left for you, and if you decide the call is important after all, you can pick up at any point while the message is being recorded and transcribed. Apple says the transcription happens on the device itself.
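Apple hasn’t published an API for the voicemail Live Transcription feature itself, but its public Speech framework already lets developers request strictly on-device recognition. A minimal sketch, assuming an audio file URL (the function name is illustrative, not an Apple API):

```swift
import Speech

// Hedged sketch using Apple's public Speech framework (iOS 13+).
// `requiresOnDeviceRecognition` keeps audio off Apple's servers,
// mirroring the on-device promise Apple makes for voicemail transcription.
func transcribeOnDevice(audioURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition is unavailable for this locale")
        return
    }
    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    request.requiresOnDeviceRecognition = true  // never falls back to the server

    recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error = error {
            print("Transcription failed: \(error.localizedDescription)")
        }
    }
}
```

Whether the voicemail feature uses this exact path internally is not something Apple has detailed; the sketch only shows that fully local speech-to-text is an established capability of the platform.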
If privacy is ever a concern there, the on-device approach extends further. iOS 17 uses transformer-based natural language models for improved word- and sentence-level autocorrect with a focus on grammar, plus a new transformer speech recognition model for dictation, with more to come. There’s also transcription of voice messages in iMessage, a new Journal app that uses context for smart suggestions, and a StandBy mode that surfaces time- and context-relevant information. Over on the upcoming iPadOS, lock screen customization uses machine learning to smooth slow-motion wallpapers by generating additional frames.
Is this the end of the “duck” era? We’ll know soon enough.
Apple’s AirPods wireless earbuds will soon add Adaptive Audio, which uses machine learning to interpret the user’s current (and often rapidly changing) environment and dynamically blend Transparency mode with Active Noise Cancellation. If you’re in a noisy setting (on public transport, for example), enough transparency will be let through at the relevant frequencies so you don’t miss any announcements.
We’ll know in due course how well Adaptive Audio performs across different noise levels and noise compositions, once the feature is released and we’ve used it extensively. In theory, if someone comes over to talk to you, their voice will filter through to you even as much of the ambient din stays blocked. At least that’s the premise.
The Apple Vision Pro headset, which runs on visionOS, relies heavily on AI and machine learning to deliver the immersion, privacy and enhanced experience it promises. Optic ID, an AR-era counterpart to the iPhone’s Face ID biometric recognition and authentication technology, requires complex algorithms to process iris data on the device. It will unlock access to apps on visionOS and authenticate App Store purchases and Apple Pay transactions.
Apple confirms that all camera data collected by the Apple Vision Pro headset is processed on the device. Think about it – that’s data collected by the headset’s 12 cameras, five sensors and six microphones.
Last but not least is the Journal app that will be released with iOS 17. Advanced algorithms will draw on the user’s contacts, photos, music, location data, and more to curate personalized suggestions – and while the app does the curating, the user remains fully in control of which suggestions to keep and what data the app can access.
Apple didn’t announce a chatbot like OpenAI’s ChatGPT or Google’s Bard at WWDC 2023, and it’s unlikely it ever will. Many people, judging by the conversations on social media, seemed to be hoping for one. But Apple’s path has always been different: one where AI adds inherent smartness to the experience of using an app or feature. AI is not the focus, but a means to an end. Going by the examples we’ve pointed out, it’s safe to say the mission was a success – at least as far as laying the groundwork goes.