Google Bets That Artificial Intelligence Will Make You Buy Its New Devices


Today Google introduced eight new devices that will be released this fall – phones, home assistants, a laptop, a virtual reality headset, and more. It’s the most diverse new product lineup from a technology company in recent memory. Apple just announced three new phones and Amazon a slew of new variations on the Echo, but Google outdid both of them for sheer quantity and variety.

Google has been selling hardware for several years, but halfheartedly. Two years ago it introduced mid-priced Nexus phones but did not promote them aggressively through the carriers, seemingly content to let them remain relatively unknown. Last year Google moved upmarket with the expensive Pixel phones but apparently ran into supply chain problems; the phones were difficult to come by even if you knew about them.

This year Google looks serious. Its new Pixel 2 phones seem competitive with the iPhone and Galaxy S8, with sharp looks and unique features, and high price tags to match. Google is offering wireless earbuds that will compete with Apple’s AirPods. The Google Home lineup has expanded: in addition to the original Google Home, Google’s answer to the Amazon Echo, there is now a cheap miniature hockey puck competing with the Echo Dot and a high-end audiophile-quality speaker that will compete with Apple’s upcoming HomePod.

Google’s new Chrome OS laptop, the Pixelbook, starts at $1,000 and soars up from there for more power and storage. It is thin and sleek with high-end specs, which seems to miss the point of a Chromebook. Its appeal will be limited to true Google believers who disregard price in pursuit of the latest and greatest.

That’s part of the story. Google is trying to ascend into the rarefied air that Apple occupies, where price matters less than excellence.

The individual products are interesting. I recommend that you read about them. Here’s a good summary.

There is an underlying vision that unifies all the devices: Google thinks that it can combine hardware, software, and artificial intelligence to make its devices more useful than devices from any other company. It may take more than a year for everything to come together, but Google might be right.

Google CEO Sundar Pichai did not mention any devices for the first fifteen minutes of Google’s product launch. Instead, he explained how Google is transforming itself into a company focused on artificial intelligence and machine learning. Here are a few examples of how that work is paying off.

•  All of the devices have Google Assistant built in. It is clear that Google wants Google Assistant to become something that you reflexively reach for because it’s just so darned useful. Google Assistant is improving rapidly to handle web searches, personal events and notifications, and almost anything else you can think of (and quite a lot that hasn’t occurred to you yet).

•  Sundar Pichai showed the improvements in object recognition that allowed the Google Maps team to map hundreds of miles of streets in Lagos, Nigeria, where addresses and street signs are often obscured or missing. More about object recognition below.

•  The new earbuds, Pixel Buds, will be able to handle spoken language translation in real time. Tell Google Assistant, “Help me speak Japanese.” Your phone’s speaker will play your words translated into Japanese, and the other person’s Japanese reply will play in your ears through the Pixel Buds. We’ll see how real-world performance holds up, but it was fast and accurate for an English and Swedish conversation onstage. (Watch the video here.) Think about it! This is a genuine science fiction moment that could change the world. More details about it here.

•  The Pixel 2 phones have the best camera on any phone, according to DxOMark, a widely respected industry benchmark. Much of the credit goes to software algorithms that improve the photo on the fly in the milliseconds after it is taken, doing more automatically than you could accomplish by opening it in a photo editor. Apple achieves its portrait effects on iPhones (adjusting lighting, blurring the background) using two lenses; the Pixel 2 phones achieve the same effects with a single lens and very smart software processing.

•  Google’s high-end speaker, the Home Max, will use “smart sound” to adjust its acoustics automatically to fit your room, readjusting within seconds if you move it to a spot with different audio characteristics.

•  The Pixel 2 phones will be the first to get Google Lens, the app that evaluates what the camera sees and gives you useful information about it. Point at a painting and get the title and a bio of the artist. Point at a restaurant and get the menu and a link to call for a reservation. Point at virtually anything and Google Lens will try to give you back something helpful – and Google is constantly getting better at guessing what that might be.

Let’s go back to object recognition for a minute. Only a few years ago, no computer on earth could recognize objects in a photo in more than a rudimentary way. We went through a period where computers might be able to pick out “animals” or “faces,” but not much more.

Google’s artificial intelligence research tipped the scales. Smart programming combined with training data drawn from millions of photos led to improvement by leaps and bounds. Animals could be sorted into dogs and cats, then into individual breeds. Three years ago, Google and Stanford researchers announced AI software that could describe the content of photos with stunning accuracy, for the first time identifying entire scenes (“herd of elephants marching on a grassy plain”) in addition to specific objects.

The first result was the release of Google Photos, the service that indexes your photos automatically and allows you to access your memories in ways that were simply impossible until now. Google is constantly improving Google Photos. Now, for example, it does facial recognition automatically; if it spots any of your friends in a picture, it offers to share the photo with them.

The constant improvement is the interesting part. Today it’s photo sharing and better mapping and on-the-fly translations. Google’s plan is to use machine learning to improve everything it does. We can’t easily predict what the next thing will be but I feel pretty safe in predicting that Google is going to take us forward into a brave, scary new world faster than any other company. Buy some of its new devices if you want to ride along on the cutting edge.