Will Samsung’s S8 Launch Bring Us One Step Closer to a True Wearable Voice Assistant?

It appears the four-way competition among Amazon, Apple, Google and Microsoft in the AI voice-enabled assistant market is about to get a major new player. Samsung will launch the new versions of its flagship smartphone on March 29, and the media is buzzing about the possible inclusion of “Bixby”, Samsung’s as-yet-unreleased AI assistant. You may be thinking, “Great, yet another option that will fragment an already crowded market.” But we say the more, the merrier. Increased competition will drive all the players to take AI technology to the next level by enabling true two-way “conversations”.

Bixby is reportedly based on the technology Samsung brought in-house when it acquired Viv Labs last year. SamMobile reports that Bixby will offer next-gen capabilities such as:

a visual search and optical character recognition tool that leverages the phone’s camera to search the web for anything the user simply points the camera at.

That sounds very cool. But reading the press release Samsung issued announcing the Viv acquisition, it’s this statement from a Samsung spokesperson that we find especially exciting:

“Unlike other existing AI-based services, Viv has a sophisticated natural language understanding, machine learning capabilities and strategic partnerships that will enrich a broader service ecosystem.”

In other words, Bixby could represent the next significant step in the evolution of voice-enabled AI.

Today, whether you’re using Amazon Alexa, Apple Siri, Google Assistant or Microsoft Cortana, you’re mostly limited to one-off commands and questions. Your smartphone or smart home device provides answers and launches apps, but struggles to understand follow-up questions or requests.  

That’s not to say the other companies are not making progress. For example, when you ask Google Assistant “What’s the weather forecast for today?” it will respond with the current temperature and display a more detailed forecast. You can then tap on your phone or say the wake phrase “OK Google” and ask, “What about in New York City?”, and it will respond with the current temperature in the Big Apple.

You can also command apps to launch. For example, tell Alexa to "Play Grateful Dead radio on TuneIn", and Alexa responds with a voice confirmation and then begins playing the appropriate station.

You can use our new OV intelligent headphones to interact with Alexa, Google Assistant and Apple Siri in the same ways, and with the added convenience of not having to pull your phone out of your pocket or bag. It’s as simple as tapping a button on the OV each time you want to speak a command.

Enabling true two-way conversation would mean you could speak into the OV headphones’ microphone just once and then keep firing off a series of questions or commands, as if you were in dialogue with a living person. The technical term for that capability is “multi-turn command”.

It’s true that the market is fragmented now, and that makes adoption a bit difficult. Apple’s Siri does not work on non-Apple devices; Google Assistant is available only on newer Android-based phones, although iPhone users can fall back on the older Google Now assistant; and Microsoft Cortana is available on PCs and on both iOS and Android phones, but not on Macs. You can find yourself saying “Hey Siri”, “OK Google”, “Alexa”, “Cortana”, and possibly “Samsung Hello”, at various times throughout your day.

But we believe that the increasingly competitive landscape is a good thing. The race is on, with more companies either developing or acquiring new AI technologies. As the machine learning that powers them grows more sophisticated, the focus will shift to adding the layer of intelligence needed to enhance the end-user experience by supporting the easy flow of a natural conversation through a true wearable voice assistant.