‘Tis the Season…Again…Virtual Character Engagement


First, yes, it’s been a while since we’ve updated our blog here (and, wow, the entire site could use some updates!)…

Much of our focus in the past year has been on our baubleApp.com imprint – But, particularly in the lead-up to the Holiday Season, we are still managing our virtual characters and interactive narrative experiences, especially our children’s apps, which tend to see a rise in concurrent users during the November-December period.

The above clip is just one sample of the thousands of chats that take place each month with our virtual characters.  Sure, not all run 16+ minutes (that clip was pulled while a chat was in progress), but it’s not unusual to see a good percentage of User sessions run in excess of 10 minutes.

While we haven’t built a new app for our own release in quite a few months, our English as a Second Language (http://eslAI.com) apps continue to be used in more than 80 countries each month, likewise demonstrating strong engagement analytics.

But, at the end of the day, the length of the sessions and the number of messages exchanged are only indicators of the quality of the engagement – For the detail level, we review the (anonymized) chat logs themselves to refine the apps and keep them current with changes in language use.

We anticipate a fascinating 2014 for our contentAI studios platform – particularly as we converge some applications with the baubleApp product line and near field communication (#NFC) triggers.  Stay tuned…

Why We Don’t Include Video with Virtual Characters

For the past day, we’ve had articles sent to us about the launch of a new “human/bot” platform, Volio.

We think what they’re doing is pretty cool.  It’s focused on an individual’s specific knowledge base (a knowledge vertical, in our parlance) and a “motivated conversation” (the bot leads the User, though allows for some variety)…

Yes, that’s the basic construct of the contentAI studios platform, which continues to have users in 80-100 countries per month enjoying our apps.  Here are some of the posts:




What’s the key difference?   Well, other than the ability to really nuance a conversation when it’s scripted and in text, compared to having to produce a wide range of alternate video clips?  It’s also pretty clear that our code treats conversational engagement quite differently.  Which is great.  But, for most people, the key difference?

Well, it’s the video itself.

As those who’ve followed contentAI’s evolution know, we have always discouraged the use of 3-D models as visual enhancements to our platform.  Even with improvements in quality, the inclusion of a visual “character” immediately tells the End User:  “This is not real.  It is pre-recorded.”  (And that’s ignoring any real-time voice synthesis variables, which say, “This is a computer, not a person.”)

Simply, we firmly believe that with today’s technologies and user interfaces, a visual representation, other than a still image/graphic, takes away from the value of the experience. That includes a “real person” on video.

Video is also inherently more difficult and introduces new challenges.  There is a visual language to motion pictures and television.  It’s a storytelling medium, and we’ve all grown used to its visual language: cuts, cutaways, reaction shots, coverage and a slew of other features.  Some of us here at contentAI have produced motion pictures, television and even internet television.  We know that space pretty well.  When we look at Volio, as clever as it is, the video appears to be a single camera, a close-up, fixed eye-lines and, well, it feels pre-recorded, because it is.  To us, it signals “not a real conversation.”

Suspension of disbelief takes a range of bells and whistles to create and sustain, allowing the End User to go along for the ride even if they know “it isn’t real.”  The film business relies on these techniques all the time, and we’ve applied the same techniques to our contentAI platform.  The irony that we don’t include video with our platform isn’t lost on us.  We’d love to.  But we’ll only venture into that territory when we feel the video experience will enhance suspension of disbelief, make the experience more personal and let Users go along for the ride even further.


*  FINAL POINT:  Volio, clever as it is, is only available for the iPhone and iPad.  We never build native apps for a single platform.  When it comes to reaching mobile consumers, diversity is the key to happiness, along with mobile web.  We understand the need to target niche content to specific operating-system and brand demographics.  But we’re focused on content that is delivered across a wider swath of the public.




Ads that Talk Back to You – Nuance Joins the Party

Anyone who’s been following contentAI for a while knows that we’ve been offering and producing prototypes for mobile ads that can “talk back” to the consumer (via text conversations) for quite a while.

The most basic notion of talking to a character on a cereal box seems obvious to us.

And, incorporating speech-to-text or voice recognition is quite easy to achieve with our core platform – The value is in our engine that delivers the “right response,” not just random noise!
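To make the “right response, not random noise” point concrete, here is a toy sketch of rule-based response matching.  This is purely illustrative (the rules, replies and function names are our own invention, not contentAI’s actual engine): ordered pattern-response rules, first match wins, with a fallback that keeps the conversation moving.

```python
import re

# Toy pattern-matching responder (illustrative only, not a real engine):
# rules are checked in order; the first pattern that matches wins.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! Ready to chat?"),
    (re.compile(r"\bsanta\b", re.I), "Ho ho ho! Have you been good this year?"),
    (re.compile(r"\b(bye|goodbye)\b", re.I), "Goodbye! Come back soon."),
]
FALLBACK = "Tell me more!"  # motivated characters never go silent

def respond(message: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return FALLBACK
```

A real conversational engine layers context, personalization and state on top of matching like this, which is exactly where the “right response” value lives.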

Well, one of our colleagues sent us an article today from AllThingsD reporting that NUANCE are announcing an ad unit that “talks back to the customer.”


There’s additional detail over on VentureBeat:  http://venturebeat.com/2013/04/01/nuance-voice-ads-launch/

Nuance are a terrific company – We like their voice recognition technology a lot.  But, from the sound of it, their new ad units are more random than conversational – less nuanced (pun intended) than the contentAI platform; and the “push” conversational pitch at the end is what we achieve through a more natural, motivated virtual character (along with personalization and variations on the pitch, based on the User’s conversational input).

But, clearly, if NUANCE have joined the conversational ad unit party, then this space is going to get a lot busier.  Each company and conversational engine will have their own specific upsides.  We’re fascinated to see this space garner more attention and a higher profile.  A company like Nuance can raise the tide for all…we hope!

contentAI Autumn Updates

Summer was (as always) far too brief.

Here @ contentAI, we spent most of the summer working on our consumer-facing English as a Second Language (http://eslAI.com) conversational applications, especially a children’s series (the abcDog series).  We also expanded distribution, releasing all of our apps on the Chrome Web Store.

But, as the clouds return to the Pacific Northwest and temperatures drop, we’re pleased that the calendar has turned to Autumn.

So, what’s in store?

*  First, we found a “vertical scroll” issue on Android 4.0 and above devices, in both native and web apps.  That’s #1 on the list.

*  Due to the popularity of our free children’s apps, we’re opening up My Tooth Fairy Chat as a free app for the Season.  We’ve had some nice reviews but want to see more kids enjoy the app; we’re also looking into sponsorship relationships so that we can keep the app free.

*  ‘Tis the Season – Yup, after two successful Holiday Seasons of upgrading and releasing our My Santa Talk app, we’re considering what enhancements we can incorporate into this year’s iteration.  Due to the nuances of the Google Play store, we may have to set up a new listing…app stores are remarkably inconsistent in how developers can submit revisions, especially major revisions built with new tools.  More info on this to follow.

Overall, our focus for commercial work shifted over the summer as “2nd Screen” applications became more of a realistic business model.  We’re continuing to pursue and explore unique ACR relationships and engage networks’ digital departments in discussions.  Our initial tests of combining storied television content with virtual character chat show amazing potential.

NFC Tap to Chat with Virtual Character

We’re fans of NFC (near field communication) technology for creative applications (looking forward to payments as well)…

We’ve been testing and watching user interaction with beta apps where an NFC tap leads to a chat with one of our virtual characters – and comparing this to alternative triggers, such as shortcode SMS, QR codes, URL shorteners, etc.

Hands down, NFC wins.  It’s a great user experience because (a) it’s habitual and (b) it’s a mindless gesture, more instinctual than thought-driven.

These are NFC tags from TAGSTAND (we also use their Android app, TAGSTAND WRITER, for encoding).
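For the curious, here is a minimal sketch of what encoding apps like TAGSTAND WRITER actually write to a tag: an NDEF URI record per the NFC Forum URI record type definition, where the first payload byte abbreviates a common URL prefix.  (The example URL below is a placeholder; a real tag would carry one of our app URLs.)

```python
# Build a single short NDEF URI record (NFC Forum URI RTD).
# The first payload byte is an abbreviation code, e.g. 0x04 = "https://".

URI_PREFIXES = {
    "http://www.": 0x01,
    "https://www.": 0x02,
    "http://": 0x03,
    "https://": 0x04,
}

def ndef_uri_record(url: str) -> bytes:
    code, rest = 0x00, url  # 0x00 = no abbreviation
    for prefix, c in URI_PREFIXES.items():
        if url.startswith(prefix):
            code, rest = c, url[len(prefix):]
            break
    payload = bytes([code]) + rest.encode("utf-8")
    header = bytes([
        0xD1,          # MB=1, ME=1, SR=1, TNF=0x01 (well-known type)
        0x01,          # type length: 1 byte ("U")
        len(payload),  # payload length (short record, < 256 bytes)
    ])
    return header + b"U" + payload
```

Tapping a tag encoded this way simply hands the phone a URL, which is why the chat can open in the mobile browser with no native app required.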

If you’d like a sample NFC tag encoded to reach any of our applications sent to you, just CONTACT US.

2nd Screen Apps Featuring Virtual Characters




We’ve been very focused on 2nd Screen applications for our platform and are in discussions with companies we can work with on broadcast solutions we feel are exceptional (hint: they do NOT use audio for automated content recognition).  There’s nothing wrong with audio ACR, but there are other approaches that are faster to integrate (and also don’t require native apps).

But, at the same time, we’ve been tinkering with in-house “disconnected” viewing solutions for syncing screens during online video viewing.  We’ve got some working demos available and will be releasing public versions in the near future.

The premise is simple:

*  The #1 activity for people on their 2nd screen is text-chat with friends and family

*  Now, they can text chat (via mobile web or app) with on-screen characters

*  Mobile is Personal.  Therefore, ad units and story extensions need to be personalized for EVERY individual

*  Extending story, or even “ad stories” to the 2nd screen SHOULD involve the fundamental elements inherent to the first screen:  story and character.

It’s truly exciting to watch a video and then have an opportunity to chat with the character on a 2nd screen.  In many respects, this is WHY we built the contentAI studios platform (we’re old film and tv people at heart).    If you’d like to see a Beta demo of our online video sync, just CONTACT US.


Mobile Virtual Agents – Speech v. Text

Speech recognition is a wonderful technology.  SIRI uses Nuance.  iSpeech is a strong platform (we have a few other favorites as well); and, Android’s speech recognition is excellent.

But. . .

Even under optimal conditions (little or no ambient noise; microphone close to mouth; high quality microphone, etc.), the best speech recognition engines are running at about 90% accuracy.  Or, in a glass-half-empty-world, 10% errors are introduced.

Text, particularly with spell check and auto-fill, has a higher accuracy level – provided the user knows how to spell.
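That accuracy gap is conventionally measured as word error rate (WER): the word-level edit distance (substitutions, insertions, deletions) divided by the number of reference words.  A minimal sketch (the sample sentences are hypothetical, chosen only to illustrate the “1 error in 10 words” figure above):

```python
# Word error rate: Levenshtein distance over words, divided by the
# number of words in the reference transcript.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of ten ("lazy" -> "crazy") is a 10% WER,
# i.e. the "about 90% accuracy" described above.
wer = word_error_rate("the quick brown fox jumps over the lazy dog today",
                      "the quick brown fox jumps over the crazy dog today")
```

Note that a single misrecognized word can flip the meaning of a short chat message entirely, which is why a 10% WER hurts conversational agents more than the raw number suggests.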

For mobile virtual agents (“mobile” being any 2nd or 3rd screen device; or non-desktop device), the primary User Interface is text for most applications — It is what people are comfortable with — It’s private!

So, why is there a mad rush to add speech recognition to every virtual agent that’s introduced to the mobile market?

Largely, this was due to Apple’s deal with NUANCE for Siri’s speech recognition.  i.e., if Apple have done this, everyone should follow.

We’re not so sure.  We know that SIRI has mixed reviews, and whether that is due to its AI/API cross-talk issues or simply due to speech recognition errors, we don’t know.  But, if Users are only getting the correct response 50-80% of the time, that doesn’t seem acceptable to us…and we look at the errors introduced via speech recognition as the likely culprit (let’s say, the speech recognition eco-system, including ambient noise in the environment during use).

Will we include speech recognition with contentAI virtual agents?


We’ll probably add it as an option to our next platform version using a server-side engine.  It adds a small cost, but, we’ll provide it as an option.

But, we won’t delete the text interface.

The #1 form of written communication in the world is now text messaging on mobile (SMS or other short, conversational text message).

It’s natural.

It has fewer errors.

It’s silent.

Keep The Front-End (REALLY) Simple – Mobile UX

It was really refreshing to read a post today by developer Christian Heilmann entitled:


Keeping the front end simple has been a primary focus for contentAI since the earliest days of designing the platform.

A few standout quotes:

The first load sends you the shell of the app and it stays in the browser – this means it can be a very quick experience

The experience is sticky – you stay in one interface and load content into it which is what “real” apps do

All of the complexity resides in the backend, away from the customer.  A clean interface is one which never forces the User to “hunt around.”

It’s why we’ve believed that Natural Language Processing for mobile web experiences is an excellent UX.  With the mad rush to create NLP “virtual assistants” who can search across multiple APIs, we’ve remained focused on “brand agents” with specific knowledge pools and, perhaps more importantly, with brand-specific “voices” (even in text), rather than a generic spokesperson who “speaks for many.”

We’ve continued to simplify our front-end design and UX over the past half-year, continuously “removing” visible elements in order to keep the interface as direct as possible.  While we could create more hyperlinked activity with the inclusion of graphics, video and audio, we are endeavoring to keep everyone on the “single page.”  From our analytics, we see this working exceptionally well.

Virtual Agents Designed for Mobile First

The contentAI platform was specifically designed for “mobile first” engagement – With virtual agents and characters who differ greatly from “online” bots and virtual agents.

That key difference is with our “proactive” (or “motivated”) virtual agents/characters, who view each engagement as a short scene with a logical conclusion (contextual to the mobile location, even if that location is someone’s couch).

It was refreshing to read the analysis of Fred Wilson’s recent “mobile first” post on Venture Beat this morning:


With some key take-away lines:

 …mobile doesn’t reward “feature richness” but rather “light services.”


 The companies that treat the mobile and web experiences differently are likely to prosper.



Anthropomorphize Digital Engagement – Why Bots Work

There is a well-worth-reading article titled:

Virtual coaches keep overweight people on track



The core concept that was proven in the study was that humans will anthropomorphize their engagement with a “virtual coach:”

 58.1 percent of the participants using a virtual coach indicated it motivated them to be more active, and 87.1 percent reported feeling guilty if they skipped an online appointment.


That kind of guilt only arises when there is an emotional connection.

We see this kind of emotional engagement with our virtual characters and agents.  We haven’t done formal studies, but, the referenced article is consistent with how we feel human End Users respond to virtual agents — It’s an overwhelmingly personal experience.

Just as HAL 9000 was a believable “character” to movie audiences, we suspend disbelief in our direct engagement with (good) bots.