NFC Tap to Chat with Virtual Character

We’re fans of NFC (near field communication) technology for creative applications (looking forward to payments as well)…

We’ve been testing and watching the user interaction in our Beta apps, where an NFC tap leads to a chat with one of our virtual characters, and comparing this to alternative triggers such as shortcode SMS, QR codes, URL shorteners, etc.

Hands down, NFC wins.  It’s a great user experience because (a) it’s habitual and (b) it’s a mindless gesture, more instinctual than thought-driven.

The tags we use are NFC tags from TAGSTAND (we also use their Android app, TAGSTAND WRITER, for encoding).
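For anyone who would rather encode tags from their own Android code than from a writer app, the gist is simply writing an NDEF URI record that points at the chat entry URL. Below is a minimal sketch using the stock Android NFC APIs; the URL, class name, and method name are placeholders for illustration, not our production code.

```java
import android.nfc.NdefMessage;
import android.nfc.NdefRecord;
import android.nfc.Tag;
import android.nfc.tech.Ndef;

// Minimal sketch: write an NDEF URI record so that a tap opens the
// mobile-web chat URL. The URL passed in is a placeholder, not a real endpoint.
public class ChatTagWriter {

    public static boolean writeChatUrl(Tag tag, String chatUrl) {
        NdefMessage message = new NdefMessage(
                new NdefRecord[] { NdefRecord.createUri(chatUrl) });
        Ndef ndef = Ndef.get(tag);
        if (ndef == null) {
            return false; // tag is not NDEF-formatted
        }
        try {
            ndef.connect();
            if (!ndef.isWritable() || ndef.getMaxSize() < message.toByteArray().length) {
                return false; // read-only tag, or message too large for this tag
            }
            ndef.writeNdefMessage(message);
            return true;
        } catch (Exception e) {
            return false;
        } finally {
            try {
                ndef.close();
            } catch (Exception ignored) {
            }
        }
    }
}
```

In practice, the Tag object arrives via NFC foreground dispatch in your Activity (intent.getParcelableExtra(NfcAdapter.EXTRA_TAG)); once written, any NFC-enabled Android phone that taps the tag is sent straight to the chat.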

If you’d like us to send you a sample NFC tag encoded to reach any of our applications, just CONTACT US.

Anthropomorphize Digital Engagement – Why Bots Work

There is an article well worth reading, titled:

Virtual coaches keep overweight people on track

on HEALTHCARE IT NEWS:

http://www.healthcareitnews.com/news/virtual-coaches-keep-overweight-people-track

The core finding of the study was that humans will anthropomorphize their engagement with a “virtual coach”:

 58.1 percent of the participants using a virtual coach indicated it motivated them to be more active, and 87.1 percent reported feeling guilty if they skipped an online appointment.

Guilt?

Only arises when there is an emotional connection.

We see this kind of emotional engagement with our virtual characters and agents.  We haven’t done formal studies, but the referenced article is consistent with how we see human End Users respond to virtual agents: it’s an overwhelmingly personal experience.

Just as HAL 9000 was a believable “character” to movie audiences, we suspend disbelief in our direct engagement with (good) bots.

Past, Present & Future – Automating Customer Service & Social Media

The article by analyst @jowyang on Techcrunch today:

http://techcrunch.com/2012/06/07/brands-start-automating-social-media-responses-on-facebook-and-twitter/

It took us back about two years, to when we first tested automated “human’ish” responses with our platform via Twitter (and even into Facebook, via IM).

We talked with Brands about the cost and scaling of “humans” v. automated bots (as well as the consistency and instant response we provide), but, two years ago, I guess we came across sounding like Science Fiction fans, not mobile and social marketers!

So, today, it’s official: we’re in the “nascent, but growing” stage of seeing this engagement format move into the marketplace.  Our two years of building the platform and testing our “creative” and interactive narratives put us well ahead of anyone just starting to dabble in this space.

@jowyang’s “future” looks like this:

“Human-like Relationships:  While on the distant horizon, artificial intelligence agents will simulate human behavior and be a guiding agent, conversationalist, and act like a real world concierge…”

Well, that’s what our “today” looks like.  Brand Agents who guide.

Our blog posts haven’t used the CATEGORY “AI and Twitter” in a heck of a long time.  Nice to dust that off again!

Now, we do respect that there are diverging opinions on this topic, really specific to the use of automation within “social media,” as if it were a “pure” landscape that should be reserved for “humans only.”  (See the OPINIONS section of the Techcrunch article.)

We have the ability to clone or create the “best” customer service bots, who respond instantly, around the clock, and can manage high volumes of concurrent users (each receiving a personalized response).  Compare that to most “human” customer service representatives.  Perhaps most who work a Brand’s Twitter account are a bit hipper than those in the Call Centers, but, well, we can add “hip” to our bots.  Do people really care if it’s human or silicon, provided they get what they need?  Quickly?

As humans, we’ve adapted and evolved to accept Human::Machine interaction quite well.  At the grocery, if the line is shorter at the automated check-out, that’s where we go.  Yes, I like a brief chat with the human check-out person, but, nine times out of ten, I head to the automated check-out, who is always friendly and gets me out the door quickly (And I never find someone has put the fresh strawberries at the bottom of the bag).

 

Second Screen Apps – Extending Story & Character

Over the past five weeks, it seems that not a day goes by without some new study emerging about how “the audience” for television is already engaged in concurrent, multi-screen viewing (see blog posts below with links).  Just not with 2nd Screen content related to the television: email, social networks, and communication in general are the #1 use.

So, after about 15 years of the term “convergence” being bandied about, the audience has converged.

But, the content is lacking.

Each study speaks to the massive growth opportunity for 2nd Screen apps.

We have little doubt there will be a mad rush to fill this gap with aggregated content apps and superfluous layers of data streamed to the 2nd Screen.

But what interests us (and where the contentAI platform is perfectly suited) is creating personalized, extended story experiences on the 2nd Screen that are indigenous to the television screen content.

For drama or any fictional content, the audience is already in a “suspension of disbelief” mode, emotionally engaged in characters and story.  Creating seamless experiences that are personalized and interactive, and that deepen the engagement in the television content, is where this all gets fascinating.

 

Sometimes You Just Want a Person – Supplementing Mobile Virtual Agents

We have had a few queries about whether our platform could also include “live agent” chat, especially for small businesses.

In one case, this was so that an in-store shopper could instantly connect with any of the store employees who are on the Floor but whom the shopper can’t find.

The answer?

Sure.  Why not?

No apps to download.  No messaging costs.  The user’s privacy is safe.  

Instead of talking to a bot, you talk to a person, through our XMPP-to-HTTP platform and engine.

The only variable we need to include is a “push notification” to the “live agent” so they know there is an incoming message for them.
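We haven’t published the internals of our platform here, but conceptually the routing is that simple: the end user’s HTTP chat message is forwarded over XMPP to the agent, and a push wakes the agent’s device. The sketch below is purely illustrative; every class and method name in it (LiveAgentRouter, XmppGateway, PushService) is a hypothetical stand-in, not our actual API.

```java
// Conceptual sketch only: XmppGateway, PushService and LiveAgentRouter are
// hypothetical stand-ins, not the actual contentAI platform API.
public class LiveAgentRouter {

    interface XmppGateway {
        // Deliver the end user's message to the agent's XMPP client.
        void forward(String agentJid, String userId, String text);
    }

    interface PushService {
        // Wake the agent's device so they know a chat is waiting.
        void notify(String agentDeviceId, String alertText);
    }

    private final XmppGateway xmpp;
    private final PushService push;

    public LiveAgentRouter(XmppGateway xmpp, PushService push) {
        this.xmpp = xmpp;
        this.push = push;
    }

    // Called when an HTTP chat message arrives and the conversation is
    // flagged for a live agent rather than a bot.
    public void onUserMessage(String agentJid, String agentDeviceId,
                              String userId, String text) {
        xmpp.forward(agentJid, userId, text);
        push.notify(agentDeviceId, "New chat waiting from " + userId);
    }
}
```

The only addition relative to the bot flow is that second call: forward the message, then fire the push so the agent knows to pick up the conversation.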

If you’d like to get on our early Beta test list, just CONTACT US and request “live agent beta.”

Of course, for volume and high concurrent-user engagement, the personalized experiences we create with “bots” are the most cost-effective and provide a great user experience.

Heck, we’re human too…we understand…sometimes you just want to talk to a person…quickly!

Hey, contentAI, where’s that Voice Recognition?

We get that question a lot (though phrased in a variety of ways).

Today’s New York Times story HERE  reminded us to bring up the topic in this post.

We could readily integrate our platform with server-side voice recognition or within native-apps – But, we don’t feel that the majority of mobile applications we produce really require it.  In fact, we believe that text-based engagement (private, personal) is preferable in most “mobile” situations.

That said, as we review the presentation slides from IgnitionWEST and elsewhere, we are struck by how 50% or more of time spent with mobile and tablets is concurrent with television viewing.  In general, internet connectivity also runs concurrent with the evening hours of television viewing.

One place we see a real opportunity to incorporate voice recognition with our applications is the emerging space of “television to mobile” content and ad extensions.  When someone is in the privacy of their own home (on the couch), the ability to speak may be better than or equal to text (we’ll always offer the option for both).  From a technical standpoint, this also means the user will be in an environment with less ambient noise (traffic, etc.)…

So, it’s something we’re starting to tinker with.  It’s pretty straightforward; we just want to apply it to the “right” application, not add something for its own sake that doesn’t really add value to the End User.
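For the native-app side, the stock Android recognizer already gets most of the way there. Here’s a minimal sketch, assuming the transcript is simply handed to the same text-chat pipeline as typed input; sendToChat() below is a placeholder, not a real method of ours.

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

// Minimal sketch: capture speech with the stock Android recognizer and hand
// the transcript to the same text-chat pipeline used for typed input.
// sendToChat() is a placeholder for whatever actually posts the message.
public class VoiceInputActivity extends Activity {

    private static final int SPEECH_REQUEST = 1;

    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Talk to the character");
        startActivityForResult(intent, SPEECH_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK && data != null) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                sendToChat(results.get(0)); // treat the transcript like typed text
            }
        }
    }

    private void sendToChat(String text) {
        // Placeholder: post the text to the chat endpoint as usual.
    }
}
```

Server-side recognition would look different, but the principle is the same: the recognizer’s output is just another text message into the conversation.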

Look for updates on this in Q2 2012 (soon!).

Mobile Virtual Assistants – SIRI, Lawsuits and Solutions

In the past week, concurrent with two lawsuits launched against APPLE over SIRI (mentioned in Venturebeat this week, and last week here as well), we’ve been drawn into at least a half-dozen discussions about how, why, and whether this is warranted, and how the contentAI mobile virtual agents differ.

First off, “SIRI” is promoted as a “virtual personal assistant.”  Basically, it’s a single front-end “voice” for reaching data from multiple web services, plus some singular “backstory/character.”  Within a limited range of data sets, it does a fairly good job.  We agree that their television ad campaign in the United States set very high expectations for SIRI among consumers.  Watching those first ads, one of the Team here turned to all of us and said, “Can it really do all that?”

At contentAI studios, we do not produce “personal assistants.”  We produce virtual characters and brand agents that have knowledge and a “voice” specific to their customers’ needs.  Our virtual retail clerk doesn’t have a clue about how to help you manage your address book…we don’t really see “managing an address book” as a problem that needs solving…but you may need help specific to a shopping experience at a specific retailer.

Also, our virtual characters are “motivated” to help direct a conversation.  SIRI and other virtual assistants are passive Q&A agents who respond but don’t lead.

We believe that a brand needs to take the lead in a conversation.  An entertainment character needs to know its “scene motivation.”

Also, to reach consumers, we don’t believe in building for a single proprietary OS.  The iPhone is about 11% of the U.S. mobile market.  The contentAI virtual characters on mobile web (and native apps) reach well over 50% of the mobile population with a single build.

We do believe that pushing the AI envelope on a technical level is a wonderful thing.  We’re fond of some AI “personal assistants” that have been around since pre-SIRI days and which continue to evolve.  And, SIRI has done a good job of making “virtual characters on mobile” become something in-demand.  But, we are also believers (and practitioners) of setting and delivering realistic expectations.

Just as most of us have an expertise in one field and no knowledge about many other fields, our virtual characters are “more human” in their approach to AI knowledge.  We don’t need to tap into dozens of APIs to pull data for each bot…they are self-contained units or units that tap into specific knowledge to help consumers and audiences looking for specific experiences.

Our virtual characters were released on mobile web over a year before SIRI went to market.  We’re still focused on “virtual characters,” not “virtual assistants.”   That’s where we do something SIRI doesn’t do: each of our characters is unique to its job, and each has a different voice.  What we find is a high degree of User satisfaction.  People who come to our Apps are looking for a particular experience…one that can delight them and provide personalized information within the engagement.

So, in a nutshell, contentAI bots and SIRI are very distant cousins, at best.  contentAI produces motivated, brand-specific virtual agents.  SIRI is aiming to be a broad-based personal assistant.  For most brands and entertainment properties that want to reach over 50% of the mobile market, we believe that contentAI’s virtual characters are a solution, today, that will deliver highly satisfying experiences to customers and interactive audiences.

Origami Towel Creatures, Toys, Personalized Mobile Experiences & Delight

It was nice to see Portland, OR host last evening’s talk:

Jared Spool Presents: Mobile & UX – Inside the Eye of the Perfect Storm – Portland, OR

Last night at the UoO building in Old Town.

We’ll post Links to his Deck when it’s available (Now available Here).

He’s been giving this talk for the past year, and a video is here:  http://vimeo.com/25547105

While there were four elements in Spool’s presentation that create this “perfect storm,” the overriding metaphor for much of the presentation was SIX FLAGS v. DISNEYLAND: an “activity” v. “experience” paradigm.

Basically, SIX FLAGS offers a pretty straightforward, activity-based flow, while DISNEY’s design encourages a more “experiential” flow for the End User.  The parallel was basically that online web sites are data/feature driven, while mobile (when successful) is more experience driven.

The natural extension of this, while not discussed, seems to us to be that a DISNEY experience is “personal,” while a SIX FLAGS activity-laden day is more generic (everyone has nearly the same experience).

One slide in the presentation Deck showed pictures of the origami towels that magically appear in someone’s “resort room” at the end of the day, sometimes surrounded by the visitor’s children’s toys (Toy Story with origami towels).

The illusion is that this is a deeply personalized, memorable touch (even if 20,000 other rooms are nearly the same); adding the visitor’s own toys to the tableau is, in part, what makes it “personal.”

Let’s extend this to “mobile thinking and UX.”

Mobile is a far more “personal” engagement format than “online.”

It’s in someone’s pocket, purse or bag.  It’s in someone’s hand.  It’s a one-to-one EXTREME CLOSE UP engagement.

It’s not just “experiential.”

It’s personal.

And the UX, along with the programming, needs to fulfill “personal” engagement.  Whether that is through deeply complex algorithms or smoke-and-mirrors tricks (Users will suspend disbelief and go along for the ride if you do it well), “personalization” of mobile experiences is what delivers:

Delight.

Which was another theme of the evening.

The contentAI studios conversational mobile platform is predicated on personalizing each and every engagement. Sometimes deeply, sometimes lightly.  But, it’s been a Prime Directive in the development of the platform since our focus went to Mobile, nearly 2 years ago.

We’ve been thinking about personalized mobile experiences for a long time.  Which is why the idea of putting someone’s children’s toys around a bunch of origami creature-shaped towels resonated so deeply.

Mobile Virtual Characters Become Fashionable

While we’ve been quietly going through Beta releases and testing “virtual brand agents” and “mobile characters” over the past year, it’s been fascinating to see how the press has latched onto the “generalist” mobile virtual assistant attempts (also, to be fair, in Beta).

Obviously, SIRI was the big one.  Lots of press and very slick ads.  Now, along comes MAJEL:  http://technomondo.com/2011/12/14/google-working-on-its-siri-competitor-codenamed-majel-for-android/

It’s really important to differentiate between a “generalist,” which can access finite data sources and act on them (e.g., send an SMS or register an appointment in your calendar), and a “virtual brand agent,” who has a specific “voice” (even if text-based), knowledge specific to the Brand, and a personalized engagement with the User.

At contentAI, we don’t build “generalists.”   The “one bot to rule them all” just isn’t as interesting as building virtual characters who are unique to a Brand or user experience.

There’s plenty of room for many mobile virtual assistants and characters, but there will never be, in our lifetime, one bot to rule them all.  We do find it interesting that much of the “pleasure” people derive from SIRI is that it has some level of “personality,” beyond data retrieval.  People enjoy personable virtual characters…2012 looks like a very busy year…

Mobile is Personal — Really Personal…it’s called Love (Maybe)

While the study was small in scope, the take-away from the New York Times article here

http://www.nytimes.com/2011/10/01/opinion/you-love-your-iphone-literally.html?_r=2&emc=eta1&pagewanted=all

addresses not just an “addictive” quality of mobile engagement, but goes further, to a “love” of our mobile devices.

The subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member.

Virtual characters and agents designed for mobile engagement fulfill the two-way communication needs associated with these devices, the very raison d’être behind why they have evolved to evoke such deep emotion.  We’re 99.9% certain that this “love” has not come into being due to GPS sensors, mobile banner ads or even “push” notifications.

To fulfill that need and make the “love” last, emotionally compelling mobile content experiences matter!

I’d posit that mobile devices have evolved to evoke “love” because they’ve become our most important communication channel with friends and family (other than face-to-face).

While the article focused on iPhone users and implies that it is a more “loved” device than others, we’d challenge that assertion and suspect it is a cross-device phenomenon.  Simply, iPhone users like to express their affection a little louder than the rest of us!

For those in the mobile content business, we hope the take-away here is that to keep the love flowing, you’ve got to deliver emotionally rewarding content, not just click-throughs.  This is NOT the static web.

 

CAVEAT:  Some really smart people have taken issue with the study (not just its thinness, but its level of detail), and that should be noted:  http://www.talyarkoni.org/blog/2011/10/01/the-new-york-times-blows-it-big-time-on-brain-imaging/

While the technical aspects are worth questioning, the underlying notion that mobile devices are held to be extremely personal by their owners remains fairly solid.  Just try taking one away from someone…or see how they fare when they lose their device.  It doesn’t take an MRI to tell you that you are touching on emotions, not just rational thought.