Evolving Distribution Patterns in Mobile and Browsers

While the Native v. HTML5 pseudo-battle has raged over the past year, we’ve pretty much stayed out of it – We’re platform agnostic – We work with nearly all formats (including ye olde Java apps)…

But, for our own proprietary Apps, it’s been fascinating to see how one of our strongest release channels has been the CHROME STORE, as “browser based” apps (which are really fancy bookmarks in many respects).  The numbers have been truly surprising, especially for children’s content and educational content.  While (for comparison) the Amazon App Store and Kindle Fire Store are great…CHROME is hitting a 2x – 3x factor over Amazon.  Even compared to PLAY, CHROME has been exceedingly strong.

The strange thing is that we have done virtually zero promotion for our CHROME STORE apps.

Maybe we should…though they seem to be doing really well, all on their own.

Virtual Agents on Mobile – NOT the same UX as Online

We have a lot of respect and appreciation for companies who’ve been working on “site agents” (virtual agents) on traditional websites — Many have been in business for five years or more.  Typically, those site agents are charged with bringing up various data elements or Links, which helps the User better navigate the site (often because whoever did the original navigation didn’t really anticipate the site scaling up).  The few who are working with video-based “agents” are interesting to watch, though their production quality falls short of where we feel it should evolve (a bit like watching local mattress commercials on television compared to a National ad).

The de facto standard for “site agents” has been to include a rather simple 256-color animated character that lip-syncs to a text-to-speech voice.  The quality here, again, is less than stellar.

While we’re surprised it’s taken so long, we are now seeing some of those companies starting to package up their product for “mobile.”

What’s really surprising is that they are porting over the exact same product – Delivering links or complex/dense text data — And including those same simplistic animated characters and audio (just on HTML5 instead of Flash).

Hmmmm?

At contentAI studios, where we’ve been thinking about “mobile user experiences with virtual agents” for over two years, we decided long ago that including animated visual faces and audio was counter-intuitive to the average mobile user experience.  Often, the user is not in a location where they can hear.  Also, they don’t want to have to keep their visual focus on the small screen – they are “scan/viewing” across products, the world around them, a television AND their mobile screen…not singularly focused on one screen.

In that respect, we decided to focus on delivering interactive narrative accompanied by still images that can “establish” the personality of the engagement without requiring more than a fraction of a second of User Attention.  And, to deliver short, conversational engagements that are MOTIVATED by our virtual agents, not mere Q&A sessions “driven” by the End User – based on mobile experiences needing to be both contextual and get-to-the-point quickly.

Essentially, our virtual agents have a purpose specific to a Mobile User Experience…anticipating the ENTIRETY of the experience, which extends beyond the screen to the overall context of the engagement.
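For anyone who likes to see the idea in structure, here’s a purely illustrative sketch (in TypeScript) of what a single agent-motivated engagement step might look like – the names and fields below are hypothetical, not our production schema: one establishing still frame, one short agent line, and a handful of quick replies the agent uses to keep the conversation moving.

    // Purely illustrative sketch – hypothetical names, not our production schema.
    interface EngagementStep {
      stillImageUrl: string;   // establishing "still frame" that sets the personality
      agentLine: string;       // short, conversational line the agent leads with
      replies: Reply[];        // a handful of quick, tappable responses
    }

    interface Reply {
      label: string;           // what the user sees (kept short for small screens)
      nextStepId: string;      // where the agent takes the conversation next
    }

    // Example: the agent motivates the exchange rather than waiting for open-ended Q&A.
    const welcome: EngagementStep = {
      stillImageUrl: "https://example.com/agent-still.jpg",
      agentLine: "Welcome back – I have two quick things picked out for you.",
      replies: [
        { label: "Show me", nextStepId: "picks" },
        { label: "Not now", nextStepId: "goodbye" },
      ],
    };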

So, will we include “animated characters with voices” on mobile?

No.  There are other companies who we can recommend for that.

We don’t think most Mobile End Users are seeking a duplication of static web experiences on their mobile devices.  Perhaps, in some cases, it’s appropriate – but it’s not what we offer here.  We also don’t believe that the current state of visual animated characters adds value to the User Experience; the lack of technical and visual quality is simply too much of a negative in our opinion.  End Users will “buy into” their chat experiences based on an establishing “still frame,” and they fill in the blanks on their own, without 10-frame-per-second, 256-color “visual bots.”  We know this from our own research and analytics.

Because we never were in the mindset of “static web” virtual characters and have focused exclusively on “small screen” engagement, we aren’t porting over old assets to our mobile platform.  Everything is designed specifically for mobile.  To clarify, we also build for “desktop apps,” which are very similar experiences to mobile apps (small windows on the desktop; typically for extremely portable ultrabooks); but, most of our engagement is on mobile and tablets, based on our analytics.

What’s good for Brands is that they will have choices when it comes to how they approach adding a virtual agent to their mobile user experiences.

Based on price, quality and our exclusive focus on Mobile User Experience, we welcome an opportunity to present our platform in comparison to our competitors.

* Side note:  Yes, we include HTML5 audio and video on our platform too – But, we use that precious (user) time and real estate for Brand elements, not for animated characters.

Hey, contentAI, where’s that Voice Recognition?

We get that question a lot (though phrased in a variety of ways).

Today’s New York Times story HERE  reminded us to bring up the topic in this post.

We could readily integrate our platform with server-side voice recognition or within native apps – But, we don’t feel that the majority of mobile applications we produce really require it.  In fact, we believe that text-based engagement (private, personal) is preferable in most “mobile” situations.

That said, as we review the presentation slides from IgnitionWEST and other places, we are struck by how 50% or more of time spent with mobile devices and tablets is concurrent with television viewing.  In general, internet connectivity also runs concurrent with the evening hours of television viewing.

One place we see a real opportunity to incorporate voice recognition into our applications is the emerging space of “television to mobile” content and ad extensions.  When someone is in the privacy of their own home (on the couch), the ability to speak may be better than or equal to text (we’ll always offer the option for both).  From a technical standpoint, this also means the user will be in an environment with less ambient noise (traffic, etc.)…

So, it’s something we’re starting to tinker with.  It’s pretty straightforward — We just want to apply it to the “right” application, not add something for its own sake that doesn’t really add value to the End User.
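For the curious, here’s a minimal sketch (in TypeScript) of how that option could sit alongside text – recognizeSpeech is a hypothetical placeholder for whichever recognizer ends up being wired in (a native-app API or a server-side service), not an existing call on our platform.  Text stays the default; voice is opt-in; and anything that goes wrong (ambient noise, no connectivity) falls back to text.

    // Minimal sketch: voice as an *optional* input, text as the default.
    // recognizeSpeech() is a hypothetical placeholder for a native-app or
    // server-side recognizer – not an existing API on our platform.
    declare function recognizeSpeech(): Promise<string>;

    async function getUserReply(typedText: string, micTapped: boolean): Promise<string> {
      if (!micTapped) {
        // Most mobile situations: private, quiet, text-based engagement.
        return typedText.trim();
      }
      try {
        // Couch / 2-screen situations: speaking may be easier than typing.
        return await recognizeSpeech();
      } catch {
        // Recognition failed (noise, connectivity) – fall back to the typed text.
        return typedText.trim();
      }
    }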

Look for updates on this in Q2 2012 (soon!).

Mobile – Where the Growth and Eyeballs Are…

There is an excellent Deck, presented by FLURRY during IGNITION WEST last week, HERE.

The two slides that really stand out — Specific to contentAI — Are related to 2-screen engagement times (when the television AND the mobile device are BOTH in use) and the ratio of ad dollars to consumer time (mobile spending will increase exponentially over the coming years to play “catch up”):

[Flurry slides: 2-screen engagement time; mobile ad spend vs. consumer time]

Those two slides tell a remarkable story with regard to opportunities for extending television content, both programming and ad-units, to mobile experiences.

After all, 50% of “Location” is the couch.

UPDATE (5 MAY):  From VentureBeat and a similar Nielsen Report on 2-screen experiences:

http://venturebeat.com/2012/04/05/tablets-and-tv/

“Device owners also seem to engage with content related to the TV as well, either by looking up information related to the show or looking for deals and general information on products advertised on TV,” Nielsen said in its report.

Innovation in Mobile Ads – Topic Du Jour

Nice perspective from Barcelona, as well as projections for the market:

http://venturebeat.com/2012/02/29/innovative-mobile-ads-grab-attention-in-barcelona/

What’s nice to see is that some “uniquely mobile” experiences are coming to light – going beyond porting static web engagement and really thinking about the User Experience on mobile.

The future’s so bright…

Mobile is Personal — Really Personal…it’s called Love (Maybe)

While the study was small in scope, the take-away from the New York Times article here

http://www.nytimes.com/2011/10/01/opinion/you-love-your-iphone-literally.html?_r=2&emc=eta1&pagewanted=all

addresses not just an “addictive” nature to mobile engagement – it goes further, to a “love” of our mobile devices.

The subjects’ brains responded to the sound of their phones as they would respond to the presence or proximity of a girlfriend, boyfriend or family member.

Virtual characters and agents designed for mobile engagement fulfill the 2-way communication needs associated with these devices — the raison d’être behind the deep emotion they have evolved to evoke.  We’re 99.9% certain that this “love” has not come into being due to GPS sensors, mobile banner ads or even “push” notifications.

To fulfill that need and make the “love” last, emotionally compelling mobile content experiences matter!

I’d posit that mobile devices have evolved to evoke “love” because they’ve become our most important communication channel with friends and family (other than face-to-face).

While the article focused on iPhone users and implied that it is a more “loved” device than others, we’d challenge that assertion and suspect it is a cross-device phenomenon.  Simply put, iPhone users like to express their affection a little louder than the rest of us!

For those in the mobile content business, we hope the take-away here is that to keep the love flowing, you’ve got to deliver emotionally rewarding content – not just click-throughs — this is NOT the static web.


CAVEAT:  Some really smart people have taken issue with the study (not just its thinness, but its level of detail), and that should be noted:  http://www.talyarkoni.org/blog/2011/10/01/the-new-york-times-blows-it-big-time-on-brain-imaging/

While the technical aspects are worth questioning, the underlying notion that mobile devices are held to be extremely personal by their owners remains fairly solid.  Just try taking one away from someone…or see how they fare when they lose their device.  It doesn’t take an MRI to tell you that you are touching on emotions, not just rational thought.


Virtual Agents Get Closer to Customers

One of the early players in the “virtual agent” business was VirtuOZ.  We have not seen them venture into mobile (though they’ve talked about it), and our interactive narrative engagement solution differs from their engine in other key respects (contentAI was designed specifically for mobile, contextual and personalized engagement, not the static web).

That said, we like the work VirtuOZ do — And, we really like the perspective they offered in a Guest Post on VentureBeat:

http://venturebeat.com/2011/09/15/4-ways-to-bring-your-customers-closer-with-virtual-agents/#disqus_thread

The projected 400% increase in virtual agents by 2014 could be low (though it’s a great number as is).  We’d include our Mobile FAQ product in this category, and that form of application alone – tied to individual products and intelligent packaging – could drive greater adoption of IVAs on the “mobile side” of the web as mobile web use increases.


Mobile Marketing Starts To Think Conversationally

This recent mobile campaign from Quaker Oats was fascinating to look at – it’s termed a “conversation,” though in our opinion it takes a step toward conversation without jumping into actual 2-way conversation…

http://www.mobilecommercedaily.com/2011/09/06/quaker-uses-qr-codes-to-start-dialogue-for-nick-jonas-promotion

But, it’s an important step to see a major Brand move into conversation-based mobile engagement – our position that mobile devices are, at heart, communicators is clearly reflected in the thinking behind the campaign above.

Thanks @Mposada for sending the link and seeing the connection!

*****

UPDATE:  Another article this week with an excellent quote and perspective from Mike Wehrs of Scanbuy:

http://www.mobilemarketer.com/cms/news/strategy/10915.html

“It is also an empowerment tool that allows for a one-to-one conversation…”


Conversational Messaging and mHealth

This story caught our attention today, about using SMS “reminders” for mHealth in Kenya:

http://www.nytimes.com/2011/08/16/health/16global.html?scp=6&sq=kenya&st=cse

http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2811%2960783-6/abstract#

Now imagine that the health workers could query the system for instant answers to common questions.  Or be led through step-by-step simulations for the circumstances they face.

We can route via SMS, but even at the 1-cent-per-message rate, our platform would be delivering 3x or more messages than a “notification system” would.  With expanded data access, there would be no per-message costs when using the mobile web.  There is also the capability to work through “light data” engagement such as IM platforms, including pushing mobile web pages through IM widgets.
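As a back-of-the-envelope illustration of that cost point (only the 1-cent rate and the ~3x ratio come from the paragraph above; the message counts below are assumptions for the sake of the math, not measured figures), here’s the per-user arithmetic in TypeScript:

    // Back-of-the-envelope sketch of the SMS vs. mobile web cost point.
    // Message counts are illustrative assumptions, not measured figures.
    const COST_PER_SMS = 0.01;           // USD – the 1-cent rate cited above
    const NOTIFICATION_MESSAGES = 4;     // assumed one-way reminders per user, per week
    const CONVERSATIONAL_MESSAGES = 12;  // ~3x more messages for a 2-way exchange

    const notificationCost = NOTIFICATION_MESSAGES * COST_PER_SMS;     // $0.04 per user, per week
    const conversationalCost = CONVERSATIONAL_MESSAGES * COST_PER_SMS; // $0.12 per user, per week
    const mobileWebCost = 0;                                           // no per-message fee over data

    console.log({ notificationCost, conversationalCost, mobileWebCost });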

But, the natural next step is to bring conversational engagement to mHealth to deepen the value that’s offered.

We’d welcome an opportunity to discuss this with any groups working in this space.

Mobile Technology and Developers v. Average User

Each day there’s a story about one platform, or another, being the winner-take-all in the mobile universe.  Or, a story about how some new technology/platform means the death rattle of another.

Today’s story is about an image recognition feature added to LAYAR’s AR engine that sounds like Google GOGGLES, but with more brand control over directing the action of the scan:

http://www.fastcompany.com/1771451/augmented-reality-kills-the-qr-code-star

Cool!

Except, how does anyone know when and where to use this?  How many people use LAYAR to begin with?

These new tools and technologies are rushed to market before establishing user acceptance or demand.  Just when you think LAYAR is one thing, it’s now another.

Market-Fit is forgotten in the excitement of new and shiny toys.

This blog post from over at CONQUENT touched on GPS and Location Check-in tools, which reminded me I hadn’t used Foursquare in over a year either.  Do regular folks with their mobile devices care about GPS-enabled engagement, except when using GOOGLE MAPS?  I don’t think so.  Yet, GPS-based platforms are still funded and come to market almost daily.  Why?  Because Developers and the technology are cool.  Because it’s there.  Because putting things together in new combinations may come up with a killer app through serendipity, if not through smarts.

But, sometimes we come across mobile technologies that seem inherently “right.”

The TOUCHANOTE platform, which uses NFC and EVERNOTE (http://www.prweb.com/releases/2011/8/prweb8686751.htm), is one of those matches that seems really perfect.  OK, it’s way ahead of the curve in many respects (by a year?), but it seems practical, useful and maybe even fun.  It also seems that an “average user” (whatever that means) would find ways of applying it to their daily lives without a lot of complication.  We also see great value in specific mHealth-related applications built on a platform and tools like this.

We spend a lot of time thinking about how the “average user” and their mobile device want to hang out together.  How do they enjoy acting together?  We’re not so focused on the early-adopter iOS user (less than 10% of the mobile population), since they tend to love all things shiny and new.  That’s great.  But it may not indicate how the mainstream will behave.

We know that “short, text-based conversations” on mobile continue to be the #1 use of the device.  That’s where we remain focused, though we see these conversations being triggered from a range of interfaces.