The Impact of Machine Learning
November 27, 2017
It's becoming critical that phone makers start working on their own machine learning solutions now in order to remain competitive when those capabilities become core to the user experience, as they threaten to do as early as next year. Chinese companies may iterate on hardware at ludicrous speed, but the rules change when the thing you're trying to replicate is months or years of accumulated machine learning.

One of the most impressive expressions of machine learning in consumer tech to date is the camera on Google's Pixel and Pixel 2 phones. Its DSLR-like performance in low-light conditions is astonishing. Using a combination of algorithms and machine learning, Google's imaging software transcends the traditional physical limitations of mobile cameras, namely the shortage of physical space for large sensors and lenses. Google has turned a light problem into a data problem, and few companies are as adept at processing data as Google.

Time is the added dimension that makes machine learning even more exciting. Even if Google had done nothing else to improve the Pixel camera between the Pixel and Pixel 2 launches, the simple accumulation of machine learning time would have made the camera better. The more resources applied to a machine learning system, the better its output becomes, and time and processing power (both on the device itself and in Google's vast server farms) are crucial.

Google Assistant is not a differentiating feature for hardware, as Google wants Assistant running on every device possible. But the Assistant serves as a conduit funneling users into Google search and the rest of the company's services, practically all of which benefit from some variety of machine learning. What Assistant does for the mobile market is enhance Google's influence over its hardware partners: pity the manufacturer that tries to ship an Android phone in 2018 without either the Google Play Store or Assistant on board.
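The trick of turning a light problem into a data problem rests largely on burst photography: capturing many short, noisy exposures and merging them so that independent sensor noise averages away. Here is a minimal sketch of that idea; the flat test scene, noise level, and merge-by-mean are illustrative assumptions, not Google's actual HDR+ pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "true" scene: a flat gray patch in a dark setting.
scene = np.full((64, 64), 40.0)

def capture(n_frames):
    """Simulate a burst of noisy low-light exposures and merge them.

    Each frame is the scene plus independent sensor noise; averaging
    the burst reduces the noise roughly by sqrt(n_frames).
    """
    frames = scene + rng.normal(0.0, 10.0, size=(n_frames, *scene.shape))
    return frames.mean(axis=0)

single = capture(1)   # one long-suffering exposure
burst = capture(10)   # merged burst of ten short exposures

# Residual noise: standard deviation of the error against the true scene.
noise_single = (single - scene).std()
noise_burst = (burst - scene).std()
print(noise_burst < noise_single)
```

Averaging n independent frames cuts noise roughly by a factor of the square root of n, which is why a burst of short exposures can partly stand in for the large sensor a phone has no room for.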
On the Apple side, machine learning already permeates much of the software running on the iPhone, and the company's Core ML tools are making it easy for developers to add to that library. But the big highlight feature of the new iPhone X, the thing everyone notices, is the notch at the top of its display and the technology contained within it. That monobrow section houses a full array of infrared and light sensors, something akin to a miniaturized Microsoft Kinect system, which enables the new Face ID authentication method. It remains an open question how well Face ID strikes the balance between security and convenience (especially without the fallback of Touch ID's fingerprint recognition). The system is robust enough to work in the dark and, thanks to machine learning, it adapts to changes in a user's appearance. Strip away all the usual incremental upgrades and design tweaks, and the Face ID system is the iPhone X's defining new feature. And it relies on machine learning to work its technological magic.

It may still be early for machine learning enhancements to truly be the key selling point for mass-market phones. Face ID is of secondary importance to iPhone X buyers more attracted by the new, bezel-phobic design. And while Google's camera is the best reason to own a Pixel, there are still few Pixel owners, and the troubled pOLED display doesn't help.

Outside of Apple and Google, Huawei has been the biggest proponent of implementing machine learning and AI in mobile devices. The company's latest phone and processor are both marketed as having "the real AI" smarts. Huawei is moving in the right direction with this AI push. However, unlike Apple and Google, both of which have turned machine learning into tangible, obvious, and (literally) user-facing features, Huawei's approach digs into the far less marketable sphere of using machine learning to keep Android performing well over long-term use.
It's hard to imagine that being a true differentiator when people are comparing shiny new phones in a store. Huawei is also putting some marketing muscle behind "camera AI" that tries to automatically enhance images by detecting what is being photographed, but it comes nowhere close to the effectiveness of Google's Pixel camera.
Huawei's example shows that machine learning itself is not the unique selling point; the unique selling points are, and will be, built on top of machine learning. The OLED display on the iPhone X is impressive, but as pricey and exclusive as it may be, that panel is supplied by Samsung and appears on Samsung's own phones, not just Apple's. Every new hardware tweak from Apple, such as the Taptic Engine for haptic feedback, the 3D Touch interaction on iPhone displays, and the Touch Bar on the newest MacBooks, seems targeted at making the manufacturing of its devices trickier and more technical. But all of those are ultimately systems that can be reverse-engineered and replicated by others. The days of phone makers securing a major hardware advantage for longer than a few months are gone.

At this late stage of the smartphone's evolution, machine learning is the only path toward meaningful differentiation. Google's camera is widely underrated, mostly owing to Google's chronic inability to distribute Pixel devices widely enough. Face ID will be copied, likely badly, by a whole slew of aspiring competitors. But the distinguishing line between the true mobile innovators and the fast copycats, which had until recently been blurring and fading, is being redrawn by machine learning. Apple's ARKit, positioned as the forerunner to a new world of immersive computing, takes machine learning to another level.
In a recent interview, Cook overtly compared the ramp of AR to the ramp of the App Store itself: a slow starter, and underestimated, but then huge. However, there's a big "but":
AAPL needs a new dream to really get its P/E above 20X in this market.
The lowest-cost iPhone that Apple is expected to launch next year is a device with a full-face 6.1-inch liquid crystal display (LCD). The two higher-priced phones expected to debut alongside it should have 5.85-inch and 6.46-inch organic light-emitting diode (OLED) displays, respectively. Independent technology analyst Kurt Marko argued on social media that the manufacturing cost difference between the 6.1-inch LCD iPhone and the 5.85-inch OLED iPhone won't be that large, which makes the idea of a lower-end 6.1-inch LCD model questionable. What should help Apple with OLED costs is the arrival of a second and third supplier, most likely in 2019.