1 hour write up: lessons from CES

Wo Meijer
5 min read · Jan 12, 2019

I went to CES! I promise I went alone and I didn’t take selfies, but how else would I have video footage of this?

Amazing Japan, absolutely 10/10 work

The Disappointing Realization of Your Favorite Hype Technology on the Hype Curve.

or how I know I will have to wait longer than I want for AR.

Experience 1: Uselessly adding AR when Lyft already showed us the way.

I rode around in a self driving car at CES (well, not a real car, but one of those pod cars that there seem to be millions of at CES)! It was something I had wanted to do, since I’m not one of the lucky few picked up by an autonomous Lyft, and I haven’t gotten an invite to the other invite-only self driving car tests (if you want to invite me to stuff, reach out! I’m great at parties and really critical of people’s design assumptions!).

Apparently people use these; hopefully those don’t feel like a prototype and come with a British guy holding an Xbox controller.

And it really disappointed me for three reasons:

  1. It was like getting a ride in a golf cart driven by a grandma, along a route that felt really preprogrammed (I was told I could not ask the car to go to another waypoint or to go backwards).
  2. IBM Watson is fine I guess, but with the amazing amount of Google Assistant and Amazon Alexa demos and promotions at CES, it’s weird to find it in this small car… with terrible text to speech… and it did not know what to answer when I asked it what 4 plus 5 was (I was told I could ask it simple questions).
  3. It comes with an AR app to help you identify your self driving pod car!

And this last one really bothers me, but I didn’t want to spend the 45 minutes it would take me to get to the bottom of a few things. Mainly, the idea presented to me was that there would be so many pod cars that I would not be able to find mine, and would thus hold up my phone in front of my eyes like all us millennials do!

look at all these stock photo teenagers! All holding their phones!

I hate this idea, so much.

First of all, no one wants to hold their phone uncomfortably in front of their face for 10 minutes while waiting for their ride.

Second of all, heaven forbid you make an effort to give your pod cars some distinguishing features. We know people connect with technology when it has a simple persona, which is why Google recommends you make your conversational assistant bland and boring… right? (wrong)

Third, this problem exists in real life, and there are companies that have already developed solutions to it. Let’s look at my preferred ride hailing app, Lyft.

New user experience, who dis?

I think Lyft does a good job helping people get to the right car, even in the giant stampede that was CES.

10/10 selfie, 200/10 people waiting for Lyfts/Ubers at the airport to go to CES
  1. They give you a description of the car. I mean, I forget it most of the time, but if these self driving cars came in 5 different colors it would already take a lot of the ambiguity out.
  2. They show you where the car is. If there are really so many self driving pods that I can’t figure out when mine has arrived or which one it is, just show me where in the line it is and I can walk towards it.
  3. The Lyft Amp gives you a colored sign to look for. I didn’t take screenshots of this, but the Amp lights up in a color that the app tells me to look for. It’s really simple and effective in my opinion, and there are already a million LEDs on the pod car.
  4. If only there could be a sign that has my name on it to confirm that it’s the right car.
And then it says your name.

So, in summary, they could have people identify their ride by making the cars different colors, telling people where the cars are, using the LEDs they smothered them in anyway, and just putting a sign on them. Or you could build a really complex AR system that is more difficult and awkward to use.

Experience 2: “it’s not interactive”

Sweet graphix

I went to the Augmented Reality (and Virtual Reality, and some other stuff) exhibit that Deloitte was showing in the LVCC. Now, full disclosure, I work for Accenture, which competes with Deloitte in the consulting space, and the conversations I had with the staff were nice, but…

It was the most basic, client baiting, low hanging fruit, lead-to-future-disappointment, first pass, last minute AR thing I have seen this side of a vodka promotion from 2014.

Actually that has more interaction than this.

Basically, it was a poorly made animation that placed itself on a marker on the wall. It presented some things that looked like buttons, and things that looked like I could scroll them! The eager-to-learn child in me was thrown back to the engagement and immersion of interactive exhibits at places like the Pacific Science Center (as this rejected Justin Roiland character can show you). And then I was told:

Oh, it’s not interactive.

My heart sank. Not only was this a poor demo built by modifying some marker tracking tutorials, one that missed the opportunity to give a richer, more immersive experience, it also expressed a bland, generic vision of the future in a disappointingly fitting bland and generic way…

The worst part is that it probably impressed some people, and when they realize how low effort this approach was, they will be disappointed and disillusioned with AR.

In summary

CES featured tacked on AR ideas and “really just there for buzzwords” implementations that will lead AR down into the trough of the hype cycle. At least we’ll get some exercise holding tablets and phones up in front of our eyes!

Like, comment, and subscribe! But really, if you have hints about my writing skills, ways to make me drink the Kool-Aid, or job offers to run your company’s design department, they are all appreciated!
