2nd Screen Apps Featuring Virtual Characters


We've been very focused on 2nd Screen applications for our platform and are in discussions with companies whose broadcast solutions we feel are exceptional (Hint: they do NOT use audio for automated content recognition).  Nothing wrong with audio ACR, but there are other approaches that are faster to integrate (and that also don't require native apps).

But, at the same time, we've been tinkering in-house with "disconnected" viewing solutions that sync a second screen during online video viewing.  We've got some working demos available and will be releasing public versions in the near future.
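As a rough illustration of the idea (not our actual implementation; every endpoint and function name here is a hypothetical stand-in), a disconnected sync can be as simple as the video page reporting its playback position and the second-screen page scheduling character chats against that timeline:

```typescript
// Sketch of a "disconnected" second-screen sync. All names are illustrative.

interface SyncState {
  videoId: string;
  positionSec: number; // playback position reported by the first screen
}

// First screen: the video page periodically reports where playback is.
function reportPlayback(videoId: string, player: HTMLVideoElement): void {
  setInterval(() => {
    void fetch(`/sync/${videoId}`, { // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ videoId, positionSec: player.currentTime }),
    });
  }, 2000); // a couple of seconds of drift is fine for chat prompts
}

// Second screen: poll the reported position and open a character chat
// when the story reaches its cue point. Call this on a timer.
interface Cue {
  atSec: number;     // video time that triggers the chat moment
  fired: boolean;    // ensures each cue fires only once
  open: () => void;  // e.g., start a text chat with the on-screen character
}

async function followAlong(videoId: string, cues: Cue[]): Promise<void> {
  const res = await fetch(`/sync/${videoId}`);
  const state: SyncState = await res.json();
  for (const cue of cues) {
    if (!cue.fired && state.positionSec >= cue.atSec) {
      cue.fired = true;
      cue.open();
    }
  }
}
```

No audio fingerprinting and no native app: plain mobile web polling is enough to keep a chat experience within a few seconds of the video.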

The premise is simple:

*  The #1 activity for people on their 2nd screen is text-chat with friends and family

*  Now, they can text chat (via mobile web or app) with on-screen characters

*  Mobile is Personal.  Therefore, ad units and story extensions need to be personalized for EVERY individual

*  Extending story, or even "ad stories," to the 2nd screen SHOULD involve the fundamental elements inherent to the first screen:  story and character.

It's truly exciting to watch a video and then have an opportunity to chat with the character on a 2nd screen.  In many respects, this is WHY we built the contentAI studios platform (we're old film and TV people at heart).  If you'd like to see a Beta demo of our online video sync, just CONTACT US.

Cheers.

Mobile Virtual Agents – Speech v. Text

Speech recognition is a wonderful technology.  SIRI uses Nuance.  iSpeech is a strong platform (we have a few other favorites as well), and Android's speech recognition is excellent.

But…

Even under optimal conditions (little or no ambient noise; microphone close to the mouth; a high-quality microphone, etc.), the best speech recognition engines run at about 90% accuracy.  Or, in a glass-half-empty world, 10% errors are introduced.  And errors compound: assuming roughly independent per-word errors, a ten-word utterance at 90% per-word accuracy comes through fully intact only about a third of the time (0.9^10 ≈ 35%).

Text, particularly with spell check and auto-fill, has a higher accuracy level – provided the user knows how to spell.

For mobile virtual agents ("mobile" being any 2nd- or 3rd-screen, non-desktop device), the primary User Interface for most applications is text.  It is what people are comfortable with, and it's private!

So, why is there a mad rush to add speech recognition to every virtual agent that’s introduced to the mobile market?

Largely, this was due to Apple's deal with Nuance for Siri's speech recognition.  I.e., if Apple has done this, everyone should follow.

We're not so sure.  We know that SIRI has mixed reviews; whether that is due to its AI/API cross-talk issues or simply to speech recognition errors, we don't know.  But if Users are only getting the correct response 50-80% of the time, that doesn't seem acceptable to us…and we look at the errors introduced via speech recognition as the likely culprit (let's say, the speech recognition eco-system, including ambient noise in the environment during use).

Will we include speech recognition with contentAI virtual agents?

Yes.

We'll probably add it to our next platform version, as an option, using a server-side engine (see the sketch below).  It adds a small cost, but we'll provide it.

But, we won’t delete the text interface.

The #1 form of written communication in the world is now text messaging on mobile (SMS or other short, conversational text messages).

It’s natural.

It has fewer errors.

It’s silent.
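If and when we do add it, the shape is simple: keep text as the default path and treat speech as a second input that resolves into the same pipeline.  Here's a minimal sketch of that idea; the /recognize and /agent/message endpoints and the promptUserToType hook are hypothetical stand-ins, not our platform's actual API:

```typescript
// Sketch only: speech is just another way to produce the text the agent
// already understands. Endpoint names below are hypothetical.

async function sendToAgent(text: string): Promise<void> {
  // The existing text path: whatever the User typed (or speech resolved to)
  // reaches the virtual agent unchanged.
  await fetch("/agent/message", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
}

// Optional speech path: ship recorded audio to a server-side recognizer,
// then feed the transcript into the same pipeline as typed text.
async function sendSpeech(audio: Blob): Promise<void> {
  const res = await fetch("/recognize", { method: "POST", body: audio });
  const { transcript, confidence } = await res.json();
  if (confidence < 0.8) {
    // A bad transcript produces a bad agent reply, so fall back to the
    // text box rather than guessing.
    promptUserToType(transcript);
    return;
  }
  await sendToAgent(transcript);
}

// Hypothetical UI hook that pre-fills the text input with the best guess.
declare function promptUserToType(bestGuess: string): void;
```

The design point: recognition errors never enter the conversation silently.  Low-confidence transcripts drop back to the text interface, which stays the primary input.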

Second Screen Apps – Extending Story & Character

Over the past 5 weeks, it seems that not a day goes by without some new study emerging on how "the audience" for television is already engaged in concurrent, multi-screen viewing (see blog posts below with links).  Just not with 2nd Screen content related to the television; email, social networks, and communication in general are the #1 uses.

So, after about 15 years of the term "convergence" being bandied about, the audience has converged.

But, the content is lacking.

Each study speaks to the massive growth opportunity for 2nd Screen apps.

We have little doubt there will be a mad-rush to fill in this gap with aggregated content apps and superfluous layers of data streamed to the 2nd Screen.

But what interests us (and where the contentAI platform is perfectly suited) is creating personalized, extended story experiences on the 2nd screen that are indigenous to the television screen content.

For drama or any fictional content, the audience is already in "suspension of disbelief" mode, emotionally engaged in characters and story.  Creating seamless experiences that are personalized and interactive, and that deepen the engagement in the television content, is where this all gets fascinating.


Prime-Time is Multi-Screen Time…Extending Story…

We were asked to summarize some thoughts on 2nd Screen experiences and extending story, and emotional engagement, between screens.

Here's the 'in a nutshell' version…

The studies are now in: audiences now participate in concurrent, multi-screen experiences.  Prime-time is multi-screen time.  But that doesn't mean the content, stories and brand stories are migrating between screens to create seamless and deeper engagement.

How do you naturally extend television engagement to the digital-device-on-the-couch?

Extend the reason television remains the First Screen into 2nd Screen content applications:  Extend the story.

Whether extending a :30 television spot, a children's animated series or a Prime Time drama, creating interactive narrative experiences taps into and deepens the audience's emotional connection to the First Screen.

2nd Screen experiences should be seen as a remarkable opportunity for television advertising and content production ventures…not to get clicks and Likes…but, to involve the audience within a personalized, conversational interaction.  This is why we built the contentAI studios’ platform.

Chat with Gossip Girl to uncover hidden clues and story material?

Chat with Astral from Yu-Gi-Oh for advice?

Chat with Mr. Clean, the Skittles Rainbow or a myriad of other Brand Characters?

Next time you talk to a television character, they should also talk back to you.

Because the Audience is doing a lot more than just listening.


UPDATE:  In addition to the Forrester Reports (Links in Post below), here is additional supporting evidence of the growing dual-screen phenomenon:  http://www.screenmediadaily.com/news-viacom-tablets-tapping-into-tabletomics-study-television-consumer-behavior-tablet-user-experiences-airplay-0014001843.shtml

Virtual Agents on Mobile – NOT the same UX as Online

We have a lot of respect and appreciation for companies who've been working on "site agents" (virtual agents) for traditional web sites; many have been in business for five years or more.  Typically, those site agents are charged with bringing up various data elements or Links, which helps the User better navigate the site (often because whoever designed the original navigation didn't really anticipate the site scaling up).  The few who are working with video-based "agents" are interesting to watch, though their production quality falls short of where we feel it should evolve (a bit like watching local mattress commercials on television compared to a national ad).

The de facto standard for "site agents" has been to include a rather simple 256-color animated character that lip-syncs to the voice (text-to-speech).  The quality, here again, is less than stellar.

While surprised it’s taken so long, we are now seeing some of those companies starting to package up their product for “mobile.”

What's really surprising is that they are porting the exact same product: delivering links or complex, dense text data, and including those same simplistic animated characters and audio (just in HTML5 instead of Flash).

Hmmmm?

At contentAI studios, where we’ve been thinking about “mobile user experiences with virtual agents” for over two years, we decided long ago that including animated visual faces and audio was counter-intuitive to the average mobile user experience.  Often, the user is not in a location where they can hear.  Also, they don’t want to have to keep their visual focus on the small screen – they are “scan/viewing” across products, the world around them, a television AND their mobile screen…not singularly focused on one screen.

In that respect, we decided to focus on delivering interactive narrative accompanied by still images that can "establish" the personality of the engagement without requiring more than a fraction of a second of User Attention.  And to deliver short, conversational engagements that are MOTIVATED by our virtual agents, not mere Q&A sessions "driven" by the End User, since mobile experiences need to be both contextual and get-to-the-point quickly.

Essentially, our virtual agents have a purpose specific to a Mobile User Experience, with anticipation of the ENTIRETY of the experience, which extends beyond the screen to the overall context of the engagement.
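To make "motivated" concrete, here is a toy sketch of an engagement modeled as agent-led beats, where the User's reply picks a branch rather than driving the whole exchange.  The names, beats and keyword matching are purely illustrative, not our platform's internals:

```typescript
// Toy model of an agent-MOTIVATED engagement: the character speaks first
// and leads; the User's replies steer between prepared story beats.

interface Beat {
  say: string;                        // short, conversational agent line
  image?: string;                     // still image that "establishes" tone
  branches?: Record<string, string>;  // reply keyword -> next beat id
  next?: string;                      // default next beat if nothing matches
}

const beats: Record<string, Beat> = {
  greet: {
    say: "You made it. I need a second opinion on something.",
    image: "detective.jpg",
    next: "clue",
  },
  clue: {
    say: "The note was typed, not handwritten. Do you trust the butler?",
    branches: { yes: "trust", no: "suspect" },
    next: "suspect",
  },
  trust: { say: "Interesting. Then someone wanted it to LOOK typed…" },
  suspect: { say: "Me neither. Watch his hands in the next scene." },
};

// The agent drives: it delivers the current beat, then routes on the reply.
function nextBeatId(currentId: string, userText: string): string | undefined {
  const beat = beats[currentId];
  if (!beat) return undefined; // unknown beat id: end the engagement
  const reply = userText.toLowerCase();
  for (const [keyword, target] of Object.entries(beat.branches ?? {})) {
    if (reply.includes(keyword)) return target;
  }
  return beat.next; // the agent moves the story forward either way
}
```

The agent always has somewhere to go next, which is the difference between a character with scene motivation and a passive Q&A bot.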

So, will we include “animated characters with voices” on mobile?

No.  There are other companies who we can recommend for that.

We don't think most Mobile End Users are seeking a duplication of static web experiences on their mobile devices.  Perhaps, in some cases, it's appropriate; but it's not what we offer here.  We also don't believe that the current state of visual animated characters adds value to the User Experience; the lack of technical and visual quality is simply too much of a negative in our opinion.  End Users will "buy into" their chat experiences based on an establishing "still frame," and they fill in the blanks on their own, without 10-frame-per-second, 256-color "visual bots."  We know this from our own research and analytics.

Because we never were in the mindset of “static web” virtual characters and have focused exclusively on “small screen” engagement, we aren’t porting over old assets to our mobile platform.  Everything is designed specifically for mobile.  To clarify, we also build for “desktop apps,” which are very similar experiences to mobile apps (small windows on the desktop; typically for extremely portable ultrabooks); but, most of our engagement is on mobile and tablets, based on our analytics.

What’s good for Brands is that they will have choices when it comes to how they approach adding a virtual agent to their mobile user experiences.

Based on price, quality and our exclusive focus on Mobile User Experience, we welcome an opportunity to present our platform in comparison to our competitors.

* Side note:  Yes, we include HTML5 audio and video on our platform too; but we use that precious (user) time and real estate for Brand elements, not for animated characters.

Hey, contentAI, where’s that Voice Recognition?

We get that question a lot (though phrased in a variety of ways).

Today’s New York Times story HERE  reminded us to bring up the topic in this post.

We could readily integrate our platform with server-side voice recognition or within native-apps – But, we don’t feel that the majority of mobile applications we produce really require it.  In fact, we believe that text-based engagement (private, personal) is preferable in most “mobile” situations.

That said, as we review the presentation slides from IgnitionWEST and other places, we are struck by how 50% or more of time spent with mobile and tablets is concurrent with television viewing.  In general, internet connectivity also runs concurrent with the evening hours of television viewing.

One place we see a real opportunity to incorporate voice recognition into our applications is the emerging space of "television to mobile" content and ad extensions.  When someone is in the privacy of their own home (on the couch), the ability to speak may be better than or equal to text (we'll always offer the option for both).  From a technical standpoint, this also means the user will be in an environment with less ambient noise (traffic, etc.)…

So, it's something we're starting to tinker with.  It's pretty straightforward; we just want to apply it to the "right" application, not do it for the sake of adding something that doesn't really add value to the End User.

Look for updates on this in Q2 2012 (soon!).

Mobile Virtual Assistants – SIRI, Lawsuits and Solutions

In the past week, concurrent with two lawsuits launched against Apple over Siri (mentioned in VentureBeat this week, and last week here as well), we've been drawn into at least a half-dozen discussions about how, why and whether this is warranted, and how the contentAI mobile virtual agents differ.

First off, "SIRI" is promoted as a "virtual personal assistant": basically, a front-end, single "voice" for reaching data from multiple web services, plus a singular "backstory/character."  Within a limited range of data sets, it does a fairly good job.  We agree that their television ad campaign in the United States set very high expectations for SIRI among consumers.  Watching those first ads, one of the Team here turned to all of us and said, "Can it really do all that?"

At contentAI studios, we do not produce “personal assistants.”  We produce virtual characters and brand agents that have knowledge and a “voice” that is specific to their customer’s needs.  Our virtual retail clerk doesn’t have a clue about how to help you manage your address book…we don’t really see “managing an address book” as a problem that needs solving…but, you may need help specific to a shopping experience at a specific retailer.

Also, our virtual characters are "motivated" to help direct a conversation.  SIRI and other virtual assistants are passive Q&A agents who respond but don't lead.

We believe that a brand needs to take the lead in a conversation.  An entertainment character needs to know its "scene motivation."

Also, to reach consumers, we don't believe in building for a single proprietary OS.  The iPhone is about 11% of the U.S. mobile market.  The contentAI virtual characters on mobile web (and native apps) reach well over 50% of the mobile population with a single build.

We do believe that pushing the AI envelope on a technical level is a wonderful thing.  We’re fond of some AI “personal assistants” that have been around since pre-SIRI days and which continue to evolve.  And, SIRI has done a good job of making “virtual characters on mobile” become something in-demand.  But, we are also believers (and practitioners) of setting and delivering realistic expectations.

Just as most of us have an expertise in one field and no knowledge about many other fields, our virtual characters are “more human” in their approach to AI knowledge.  We don’t need to tap into dozens of APIs to pull data for each bot…they are self-contained units or units that tap into specific knowledge to help consumers and audiences looking for specific experiences.

Our virtual characters were released on mobile web over a year before SIRI went to market.  We're still focused on "virtual characters," not "virtual assistants."  That's where we do something SIRI doesn't do: each of our characters is unique to its job; each has a different voice.  What we find is a high degree of User satisfaction.  People who come to our Apps are looking for a particular experience, one that can delight them and provide personalized information within the engagement.

So, in a nutshell, contentAI bots and SIRI are very distant cousins, at best.  contentAI produces motivated, brand-specific virtual agents.  SIRI is aiming to be a broad-based personal assistant.  For most brands and entertainment properties who want to reach over 50% of the mobile market, we believe that contentAI's virtual characters are a solution, today, that will deliver highly satisfying experiences to customers and interactive audiences.

Mobile – Where the Growth and Eyeballs Are…

There is an excellent Deck, presented by FLURRY during IGNITION WEST last week, HERE

The two slides that really stand out, specific to contentAI, are related to 2-screen engagement times (when the television AND the mobile device are BOTH in use) and the ratio of ad dollars to consumer time (mobile spending will increase exponentially over the coming years to play "catch up"):

[Flurry slides: 2-screen engagement times; mobile ad spend vs. consumer time]

Those two slides tell a remarkable story with regard to opportunities for extending television content, both programming and ad-units, to mobile experiences.

After all, 50% of “Location” is the couch.

UPDATE (5 MAY):  From VentureBeat and a similar Nielsen Report on 2-screen experiences:

http://venturebeat.com/2012/04/05/tablets-and-tv/

“Device owners also seem to engage with content related to the TV as well, either by looking up information related to the show or looking for deals and general information on products advertised on TV,” Nielsen said in its report.

Our “My Santa Talk” Featured on INTEL’s AppUp Store

Congrats to our contentAI and MySantaTalk team…INTEL's AppUp store has the "My Santa Talk" interactive chat with Santa on its featured banner page…you know, up there with Angry Birds…


INTEL's AppUp (Windows 32 & 64)
http://www.appup.com/applications/applications-My+Santa+Talk

Mobile is Personal — Really Personal…it’s called Love (Maybe)

While the study was small in scope, the take-away from the New York Times article here

http://www.nytimes.com/2011/10/01/opinion/you-love-your-iphone-literally.html?_r=2&emc=eta1&pagewanted=all

addresses not just an "addictive" nature to mobile engagement; it goes further, to a "love" of our mobile devices.

The subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member.

Virtual characters and agents designed for mobile engagement fulfill the 2-way communication needs associated with these devices, the raison d'être behind the deep emotion they have evolved to evoke.  We're 99.9% certain that this "love" has not come into being due to GPS sensors, mobile banner ads or even "push" notifications.

To fulfill and make “love last,” emotionally compelling mobile content experiences matter!

I’d posit that mobile devices have evolved to evoke “love” because they’ve become our most important communication channel with friends and family (other than face-to-face).

While the article focused on iPhone users and implies that it is a more “loved” device than others, we’d challenge that assertion and suspect it is a cross-device phenomenon.  Simply, iPhone users like to express their affection a little louder than the rest of us!

For those in the mobile content business, we hope the take-away here is that to keep the love flowing, you've got to deliver emotionally rewarding content, not just click-throughs.  This is NOT the static web.


CAVEAT:  Some really smart people have taken issue with the study (not just its thinness, but its level of detail), and that should be noted:  http://www.talyarkoni.org/blog/2011/10/01/the-new-york-times-blows-it-big-time-on-brain-imaging/

While the technical aspects are worth questioning, the underlying notion that mobile devices are held to be extremely personal by their owners remains fairly solid.  Just try taking one away from someone…or see how they fare when they lose their device.  It doesn't take an MRI to tell you that you are touching on emotions, not just rational thought.