Adobe Analytics

Computer software giant Adobe recently added new voice analytics capabilities to its Adobe Analytics Cloud tool.

Adobe knows virtual assistants are everywhere. And now, whether you talk to Amazon’s Alexa, the Google Assistant, Apple’s Siri, Microsoft’s Cortana or Samsung’s Bixby, your AI-driven helper won’t be the only one listening to your request; brands will be, too. At least, those taking advantage of Adobe Analytics for voice-enabled digital assistants will.

As people rely more and more on virtual assistants for tasks like shopping and ordering products, businesses need to find ways to analyze and take advantage of this data if they want to stay ahead of the competition. Adobe’s new set of tools aims to help businesses figure out new strategies for doing just that.

The new voice analytics tools integrate Sensei, Adobe’s artificial intelligence and machine learning platform, which gives companies more insight into what their customers are looking for.

Adobe first introduced the Sensei platform back in November 2016 in an effort to unify its existing AI technologies and deploy them across the entire Adobe platform. Sensei taps into customer data to automate work processes, assist in creative operations and ultimately help businesses better understand their consumers.

Brands today have the option of accessing basic analytics for voice-enabled platforms such as Amazon’s popular Alexa, but Adobe’s new tools take things to a whole new level by running behavioral analysis on what customers are saying across different voice-enabled platforms. According to a recent report, sales of voice-enabled devices grew by 39% year over year, which is why many brands are now looking for ways to monetize customers’ newfound reliance on these gadgets.

According to Colin Morris, director of product management for Adobe Analytics Cloud, the newly unveiled Analytics tools work by breaking voice queries into two main components: intent and parameters. The intent is what the user wants to do, e.g. buy or order a pizza, while the parameters capture the brand or location, e.g. Domino’s, or any other information needed to complete the request. Once a voice query is broken down in this fashion, Sensei can analyze it at a deeper level. Additional data, such as frequency of use and the actions taken after a voice command, is also provided.
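
To make the intent/parameter split Morris describes more concrete, here is a minimal sketch in Python of what one parsed voice command might look like as a structured record. The class and field names (VoiceQuery, follow_up_actions, and so on) are assumptions made for illustration, not part of Adobe's API; they simply show the kind of data Sensei could analyze at a deeper level.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VoiceQuery:
    """Illustrative record for one parsed voice command (hypothetical structure)."""
    utterance: str                 # raw text of what the user said
    intent: str                    # what the user wants to do, e.g. order a pizza
    parameters: Dict[str, str]     # details needed to complete the request
    follow_up_actions: List[str] = field(default_factory=list)  # actions taken after the command

# Example: "Order a large pepperoni pizza from Domino's"
query = VoiceQuery(
    utterance="Order a large pepperoni pizza from Domino's",
    intent="order_pizza",
    parameters={"brand": "Domino's", "size": "large", "topping": "pepperoni"},
    follow_up_actions=["order_confirmed"],
)
print(query.intent, query.parameters)
```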

During the official Adobe presentation, Morris offered a telling example of how businesses could end up employing the new tools. The Wynn Hotel in Las Vegas which will soon be adding an Amazon Echo speaker in each of its guest rooms might use the data gathered and processed by Sensei to anticipate guest needs and deliver personalized promotions and offers. The hotel could recognize a loyal guest and automatically unlock special discounts or serve up ideas about how to spend reward points.
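
A rule of that kind could be as simple as the sketch below. The guest fields and thresholds are invented purely for illustration; they are not Wynn or Adobe business logic.

```python
# Hypothetical personalization rule of the kind the Wynn example suggests.
def promotions_for_guest(stays_this_year: int, reward_points: int) -> list:
    """Return promotion ideas to surface through an in-room voice assistant."""
    offers = []
    if stays_this_year >= 5:        # treat frequent guests as loyal (assumed threshold)
        offers.append("Loyalty discount on spa services")
    if reward_points >= 10_000:     # enough points for a meaningful redemption (assumed)
        offers.append("Suggest redeeming points for a complimentary dinner")
    return offers

print(promotions_for_guest(stays_this_year=6, reward_points=12_000))
```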

Companies that wish to take advantage of Adobe’s new Analytics for voice-enabled digital assistants are invited to integrate the SDK for the new tool into their mobile apps. On top of delivering critical information back to companies, the new Adobe tool is also expected to help with the future development of digital assistants.
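
For a rough sense of what an app-side integration could look like, here is a purely hypothetical sketch of forwarding a parsed voice interaction to an analytics collection endpoint. The endpoint URL, header names and payload fields are placeholders, not Adobe's actual SDK calls; a real integration would use the SDK Adobe provides for the developer's mobile platform.

```python
import json
import urllib.request

# Purely illustrative: the endpoint, credential and payload shape are placeholders.
ANALYTICS_ENDPOINT = "https://analytics.example.com/collect"  # placeholder URL
API_KEY = "YOUR_API_KEY"                                      # placeholder credential

def track_voice_interaction(intent: str, parameters: dict, follow_up_actions: list) -> None:
    """Forward one parsed voice interaction as an analytics event (illustrative only)."""
    payload = {
        "event": "voice_command",
        "intent": intent,
        "parameters": parameters,
        "followUpActions": follow_up_actions,
    }
    request = urllib.request.Request(
        ANALYTICS_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # a 2xx status means the event was accepted

# Example call, matching the pizza query above:
# track_voice_interaction("order_pizza", {"brand": "Domino's", "size": "large"}, ["order_confirmed"])
```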

