Google Confirms Pixel 6 And Pixel 6 Pro w/Tensor Chip
Google is getting fully invested in the Pixel line with its own bespoke SoC, the way Qualcomm has Snapdragon and Samsung has Exynos. The Pixel 6 will be released in late September/early October.
The Pixel 6 and Pixel 6 Pro announcement confirms that the earlier renders were pretty accurate. The Pixel 6 Pro is the new name for the larger variant previously called XL.
Pixel’s new Tensor chip will make Google’s existing AR software run better. Google’s recent approach to AR has largely been to treat it as a sort of visual extension to core products like Search and Maps. While new Apple features like Object Capture will make its hardware attractive to those interested in creating AR and 3D content, Google is betting that the smarter play is to focus on AR that’s more useful to everyone. Investing in better mapping data, search tools, and AI might advance that work better than having the Pixel match an iPhone sensor-for-sensor.
Google’s decision to invest in its own AI chip architecture could also have implications for future devices that exist somewhere between smartphones and true AR glasses. Project Wolverine, Google’s in-development augmented hearing device, comes to mind as something that could benefit from relying either on its own Tensor chip or one in a paired Pixel for audio processing.
- The Tensor chip has limited spec detail for now. The Verge said, “Google’s not sharing who designed the CPU and GPU, nor is it sharing benchmarks on their performance — though Osterloh says that it should be ‘market leading,’ with AI stuff that is totally differentiated.”
- Osterloh says: “We’re now able to run data-center quality models on our device,” implying some pretty powered-up algorithms. There’s some security hardware as well: a new security core and Titan M2 chip, which Google says does more than anyone else on the market, but let's see what that really means in time.
- On the ML side, that means camera improvements like real-time object detection and per-frame HDR for better-quality video, real-time on-device translation, and closer-to-live captioning (see the sketch after this list).
- Pro flagship-type specs: a 6.7-inch 120Hz OLED display with “slightly” curved edges, 4x zoom, and an under-display fingerprint sensor.
- The Pixel 6 will have a smaller 6.4-inch, 90Hz OLED display, which is flat, and no telephoto lens.
- The imaging sensor is finally moving on from the Sony IMX363 that Pixels have used since the Pixel 3.
- There’s also a bunch of new colors for the Pixel 6 series, with at least five different color styles.
- Google said it will be a "premium-priced product," verging on $1,000.
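Those ML bullets all come down to hardware-accelerated, on-device inference. Google hasn't published a developer API for the Tensor chip, so purely as an illustration, here is a minimal Kotlin sketch of what "running a model on device" looks like to an Android developer today using TensorFlow Lite with the NNAPI delegate, which routes work to whatever ML accelerator the phone exposes. The assumption that Tensor is reachable through NNAPI, plus the model file and tensor shapes, are placeholders, not Pixel 6 specifics.

```kotlin
// Minimal sketch: on-device inference with TensorFlow Lite + the NNAPI delegate.
// Assumption: the phone's ML accelerator is exposed through NNAPI; the model file
// and tensor shapes below are hypothetical placeholders, not real Pixel specs.
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File

fun classifyOnDevice(modelFile: File, pixels: FloatArray): FloatArray {
    val delegate = NnApiDelegate()                        // hand ops to the device's ML hardware
    val options = Interpreter.Options().addDelegate(delegate)
    val interpreter = Interpreter(modelFile, options)

    val input = arrayOf(pixels)                           // e.g. one flattened camera frame, shape [1, N]
    val output = arrayOf(FloatArray(1000))                // e.g. 1,000 class scores, shape [1, 1000]
    interpreter.run(input, output)                        // single synchronous inference call

    interpreter.close()
    delegate.close()
    return output[0]
}
```

Osterloh's "data-center quality models" claim is essentially about how large a model this kind of call can serve locally, without the latency or privacy cost of a network round trip.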
Google has tons of data coming from more than 3 billion Android users and billions of interactions across Google Search, Maps, Photos, Assistant (voice), and other applications and services. Google is an AI powerhouse with the ability to build accurate ML models that improve the user experience across the same properties that generate all that data. Examples include:
- Advanced ML algorithms that improve on-device photography with Night Sight-based computational photography, real-time multimodal language translation (a developer-level sketch follows this list), visual search leveraging the Multitask Unified Model trained across 75 languages, powerful Google Photos features such as the AI/ML-powered Little Patterns, intelligent app and content recommendations, and real-time collaboration in Google Docs with AI/ML features such as Smart Canvas, smart chips, and other experiences.
- Improving the overall UX helps Google accurately profile users’ behaviors and patterns, predict their needs, and build a powerful recommendation engine, which in turn gets paid top dollar by marketers for targeted advertising, driving Google’s core ad business model.
- Increasingly vertically integrated systems (à la Apple) give Google full control of the entire stack, from chip to OS to middleware to cloud.
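As a concrete, developer-facing example of the real-time translation mentioned above, ML Kit's public Translation API already runs entirely on-device once a language model has been downloaded. Whether the Pixel 6's Tensor chip accelerates this exact path is an assumption; the language pair and callback wiring below are purely illustrative.

```kotlin
// Minimal sketch: on-device text translation with ML Kit's Translation API.
// The English-to-German pair and the callback plumbing are illustrative assumptions.
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

fun translateOnDevice(text: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.GERMAN)
        .build()
    val translator = Translation.getClient(options)

    // The language model is fetched once; after that, translation runs locally.
    val conditions = DownloadConditions.Builder().requireWifi().build()
    translator.downloadModelIfNeeded(conditions)
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener { translated -> onResult(translated) }
        }
}
```

The download-once, infer-locally pattern is what makes features like live captioning and translation feel immediate, since nothing has to leave the phone.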