nScreenMedia OTT multiscreen media analysis

You Speak, Jinni Answers in New iPad App

Last year was a busy time in the business of content discovery. The burgeoning crop of second screen apps began to alleviate the inadequacies of the on-screen TV guide, and a number of companies began to bring together the disparate pieces required for a personal media guide. For example, Digitalsmiths made great strides pulling together robust metadata, personal preferences and social media in its Seamless Discovery platform.

This year seems to be shaping up in a similar fashion. At CES, the pioneer of mood-based discovery, Jinni, announced the introduction of natural language understanding (NLU) in its innovative media guide. I spoke with Yosi Glick, the Co-founder and CEO of Jinni, last week and he explained how NLU would be employed in the guide. But before we get to that, we should understand how Jinni currently works.

The Jinni media guide allows users to look for content based on how they feel rather than forcing them to search for a particular title or genre. For example, typing “touching human spirit” into the guide yields movies such as Forrest Gump, The Blind Side and The Shawshank Redemption. Jinni also lets users establish their own movie personality and makes recommendations based on it.
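Jinni hasn't published how its engine maps a phrase like “touching human spirit” to titles, but the general idea of tag-based mood matching can be sketched in a few lines of Python. Everything here is illustrative: the catalog, the tags and the scoring are invented stand-ins, not Jinni's actual data or algorithm.

```python
# Illustrative sketch only: each title carries descriptive "mood" tags,
# and a query is matched against titles whose tags appear in the phrase.
# The catalog and tags below are made up for demonstration.
CATALOG = {
    "Forrest Gump": {"touching", "human spirit", "uplifting"},
    "The Shawshank Redemption": {"touching", "human spirit", "hope"},
    "Goldfinger": {"spy", "action", "classic"},
}

def mood_search(phrase):
    phrase = phrase.lower()
    # Score each title by how many of its tags the query mentions
    scored = [(sum(tag in phrase for tag in tags), title)
              for title, tags in CATALOG.items()]
    # Return matching titles, best score first
    return [title for score, title in sorted(scored, reverse=True) if score > 0]

print(mood_search("touching human spirit"))
```

A real engine would of course use a far richer taxonomy and ranking model, but the core trick is the same: search over descriptive metadata rather than titles.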

Yosi told me the company will be releasing an iPad app in the first quarter of 2013 that will extend the features from the web guide to the tablet platform. In addition, the iPad app adds a new feature called “watch together.” With this feature, the discovery engine can combine two movie personalities and provide a list of shows and movies that both viewers will find interesting. The app will also add an “In theaters” section that helps you plan a trip out to the movies.
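One plausible way to merge two movie personalities, sketched below in Python, is to treat each profile as a set of predicted ratings and rank shared titles by the lower of the two scores, so neither viewer ends up with something they dislike. This is a guess at the mechanics, not Jinni's documented method; the profiles and threshold are invented.

```python
# Hypothetical "watch together" merge. Each profile maps title -> predicted
# rating (0-5). Rank shared titles by the *minimum* of the two ratings so
# the list only contains things both viewers would enjoy.
def watch_together(profile_a, profile_b, threshold=3.5):
    shared = set(profile_a) & set(profile_b)
    picks = [(min(profile_a[t], profile_b[t]), t) for t in shared]
    return [t for score, t in sorted(picks, reverse=True) if score >= threshold]

# Invented example profiles
yosi = {"Skyfall": 4.5, "The Notebook": 2.0, "Up": 4.0}
wife = {"Skyfall": 3.8, "The Notebook": 4.9, "Up": 4.6}
print(watch_together(yosi, wife))  # ['Up', 'Skyfall']
```

Taking the minimum rather than the average is a deliberate choice: averaging would let one viewer's enthusiasm drag in a title the other viewer hates.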

However, the most interesting new feature in the app is natural language understanding, and Yosi demonstrated how it works. Using the new app, he asked, “What’s on TV tonight for me and my wife to watch together?” First, the app uses Nuance to convert the spoken phrase to text. Next, NLU figures out that the questioner wants a list of movies or shows combining two profiles: Yosi’s and his wife’s. Finally, the Jinni app uses the new “watch together” feature to create the list of assets that both Yosi and his wife can enjoy together.
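The three steps above can be sketched as a simple pipeline: speech to text, text to intent, intent to results. The Nuance step is stubbed out, and the intent parser is a deliberately naive keyword matcher; Jinni's real NLU is certainly far more sophisticated, and every name here is hypothetical.

```python
# Illustrative pipeline for the voice flow described above.
import re

def speech_to_text(audio):
    # Stand-in for the Nuance speech recognizer, which would return
    # a transcript of the audio. Here we just return the demo phrase.
    return "What's on TV tonight for me and my wife to watch together?"

def parse_intent(text):
    # Naive intent detection: spot a "watch together" request and note
    # the two profiles it involves; otherwise fall back to plain search.
    if re.search(r"\btogether\b", text, re.IGNORECASE):
        return {"action": "watch_together", "profiles": ["me", "my wife"]}
    return {"action": "search", "query": text}

intent = parse_intent(speech_to_text(None))
print(intent["action"])  # watch_together
```

The final step would hand that intent to the discovery engine, which builds the combined list exactly as the “watch together” feature does when invoked by touch.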

Of course, as far as a Jinni user knows she spoke and Jinni answered. What could be simpler?

Yosi tried other phrases like “What is the best Bond movie with Roger Moore?” which, despite Yosi’s thick accent, Jinni handled quite accurately. Unfortunately, I was not able to test Jinni with my Anglo-American accent since Yosi was giving the demonstration remotely. As soon as I get the app on my iPad I’ll report back here just how accurate it really is.

Voice search is certainly nothing new. At this year’s CES I saw the feature demonstrated quite successfully at the LG and Samsung booths on their current line of Smart TVs. But what sets Jinni’s approach apart is that the user can speak in full sentences and, rather than ask for a specific title or actor, ask for something that matches their mood or interest.

And if it all works as demonstrated, the Jinni app should be well worth the $2 it will cost in the iTunes Store.
