Future

01 Mar Does AI want to be humane?

Published on Medium at https://medium.com/@nbloom/does-ai-want-to-be-humane-1ea354e6832e#.gqhttg99i

I had this little exchange recently at a Vancouver tech conference. Here are a few reasons the fear of AI is so common: sci-fi movies, the Turing Test, and Kurzweil.

It's all too easy to overanalyze these sci-fi movies for their critique of our culture and our future:

Why does Alex Garland's 2015 sci-fi flick Ex Machina suggest that the pinnacle of AI is convincing, human-like emotion? Or that the human's flaw is their susceptibility to emotion and empathy? And did the character Ava pass the Turing Test? Well, in a condensed, feature-length film tackling big issues while trying to entertain, ambiguity is your friend.

Feelings! Ava from Ex Machina

“The Turing test is a test, developed by Alan Turing in 1950, of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.” The Turing Test is often seen as the definitive threshold for AI to be “true intelligence” and gets frequently name-dropped throughout pop culture.
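For concreteness, here's a toy sketch (my own illustration, not from the post) of how the imitation game is structured: a judge questions two hidden respondents, one human and one machine, and then guesses which is the machine. The machine "passes" when judges, over many sessions, guess no better than chance. All function names here are hypothetical.

```python
import random

def imitation_game(ask, guess_machine, human_reply, machine_reply, rounds=3):
    """One session of Turing's imitation game (a toy sketch).

    ask(transcript)            -> judge's next question
    guess_machine(transcript)  -> "A" or "B", the judge's guess
    human_reply / machine_reply: functions mapping a question to an answer
    Returns True if the judge correctly identified the machine.
    """
    # Randomly hide which respondent sits behind each anonymous label.
    respondents = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        respondents = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(rounds):
        q = ask(transcript)
        transcript.append((q, {label: r(q) for label, r in respondents.items()}))

    # The machine "passes" when, over many sessions, judges do no
    # better than this coin-flip baseline.
    return respondents[guess_machine(transcript)] is machine_reply
```

A judge who guesses blindly catches the machine only about half the time, which is exactly the chance-level baseline Turing had in mind; the interesting question is whether a real interrogator can beat it.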

Programs already beat and fool humans at chess and other games, at music composition, and even at poetry. Disappointingly, Turing Test entrants often pose as a child speaking a foreign language to slacken the judges' scoring. Sure, this confused and barely intelligible "person" probably just doesn't know very much!

I'd rather hear from voices in AI like Stuart Russell, Marvin Minsky, and Nick Bostrom, who argue that the Test is essentially worthless and a distraction from the real work of AI. The only people actually working on passing the Turing Test are doing so as a hobby (e.g., the premise of Ex Machina).

Minsky called one Turing competition “obnoxious and stupid” and even put up his own prize to shut it down.

Passing the Test was never meant as the goal of AI. It was a thought experiment to challenge people who were skeptical of intelligent machines: a machine could prove its intelligence through behaviour indistinguishable from a human's, not by being self-aware.

The players in our AI future (or rather, in machine learning and program-as-hardware) are likely to be the big guns: Google, Amazon (Lab126), Microsoft, IBM, and maybe even Andy Rubin's new venture Playground. These are not hobbyists, and the motivations are different: they are here to create value from machine learning for industry and for consumers. I'm particularly excited about the mainstream, every-single-day consumer AI bots, like the expanding Slack bots (you spend your whole day in there anyway), Messenger (slowly usurping all your social chatting), and Fin (a new contender from Sam Lessin and Andrew Kortina). These are not just cocktail party tricks.

Read More

20 Apr The state of in-car UX: it’s getting worse, and more technology is not the fix. Voice is.

Geoff Teehan of Toronto’s exquisite design firm Teehan+Lax recently posted on “The State of In-Car UX.”

It's wonderfully visual and a worthwhile read, but if you want the TL;DR: Teehan argues that no matter the price or the brand, the interfaces that adorn today's vehicles are in a bad place, and they're getting worse. Thankfully, there's hope, coming from Apple and Google.

Remember this? 1995 Nissan Pathfinder with aftermarket stereo:

1995 Nissan Pathfinder

Now, for $845,000 in the otherwise luscious Porsche 918 Spyder, you get this. Really?

Big and obtrusive and distracting.

Is this really the UI people are raving about in the Tesla Model S?

Tesla S: still too much going on visually

It's especially galling to see the electronics UI in Lamborghinis and Ferraris.

And in a $1.5M Lamborghini Reventon, what does it all mean? It often gets worse as the car gets more expensive.

Lamborghini Reventon — what does it all mean?

Jalopnik chimed in this week with "This Is The Worst New Trend In Car Interior Design":

While touring the cars on display at the New York Auto Show, I was struck by how many of them get the integration of a screen into the dash right and how many of them get it wrong. If the screen is going to be a part of the car, shouldn’t it be integrated seamlessly into the design of the dash, rather than plastered on like a cheap Garmin you bought at Walmart?

An afterthought design eyesore?

The argument here is that it's only getting worse. The bright idea of sliding your (familiar, better-designed, fully functioning) iPad mini onto the dash never saw much uptake. Car companies don't hire people who design software for a living to conceive of and build the UX. Stereos, climate control, and the rest are designed by industrial designers, car designers, or Human-Machine Interface (HMI) people, which is what the car industry calls the folks who deal with knobs and dials.

What's next? Teehan heralds the arrival of in-car platforms from Google and from Apple (CarPlay). Apparently, these software-plus-hardware giants are the panacea, and their placement of buttons on touch screens will solve our woes. While it seems reassuring at first, this is really no improvement.

I actually think this coming entrance of better software (via Google and Apple) into car UX is only a temporary stop-gap. In 10-15 years, cars (nicer cars, at least) will, and should, have super simple and far fewer controls, like how Audi puts climate controls directly on the vents.

Control built into function

Audi: Control built into function

All your interactions will be via voice and a basic head-up display (HUD). HUDWAY reflects a simple UI from your phone onto the windshield. Here's your inspiration:

Car HUD via HUDWAY

Clean HUD concept from HUDWAY

So, without all the clutter of current software offerings, the (better car) interiors will be back to classic, no frills looks — like my dream Porsche 356, beautifully simple and minimalist:

Porsche 356

Porsche 356 cockpit; beautifully simple and minimalist

Summary: it's easy to get caught up in software-ifying everything. Sometimes, especially when doing dangerous things like driving, the less interaction and the less design, the better. The solution is not the smart and brash Apple and Google swooping in to fix the issue. The solution is voice-activated control, maybe HUD (head-up display) for navigation, and a focus on the driving part. At least until we're all in driverless cars, maybe 25 years away?

Dan Reitman chimes in with this great rant:

Colin Chapman, founder of Lotus cars, famously said “Simplify, then add lightness.” Of course, he probably didn’t consider Moore’s law, nor The Law Of 7-11 (Thine steed shall contain a cup-holder big enough to hold a Big Gulp). Cars would invariably get heavier and more complicated. This did not make for a better, safer driver – which should be the priority – nor does what Teehan is proposing.
 

He talks a lot about design purity and aesthetic beauty. He is an expert. Unless I missed it, he doesn’t, however, talk about ergonomics – and I wonder if he knows anything about them. To wit: how is a more pleasing font in a Porsche 918 infotainment screen going to improve my lap time or keep me from making sweet, violent, carbon-fibre love to a telephone pole at 100mph?
 

When I’m barrelling down the highway, I give precisely zero fucks about the fonts on my stereo screen. I care about how well I can feel around for the A/C controls (3 physical dials FTW, touchscreen = FAIL) or the volume knob, and where the flappy paddles are.
 

Teehan may like cars, but I bet you he’s just like the rest of the world’s distracted drivers: clogging the left lane while fiddling with his smartphone. The fact that he thinks any kind of swipe gesture is a remotely sound ergonomic choice for a car hammers this home.
 

This leads to a bigger issue about how we use our cars. I get that driving habits are, of course, a regional thing. I was just in LA for a couple of days and was reminded how bad the traffic was there – Jeff, I am guessing the Bay Area is just as bad. California is also, of course, a massive car market. If I lived in those places and had a commute that regularly involved me stuck in crawling traffic, I would likely be interested in things like in-car internet and other infotainment options that were optimized – both aesthetically and functionally – for my use, as Teehan describes.
 

The other side of the coin is those who believe that driving, whether on the Autobahn or in gridlocked traffic, is an activity that should involve minimal distraction and interference from other pursuits. Drivers should be focused on moving as quickly and safely as possible (and staying out of the left lane) until they reach their destination, at which point they can start surfing the web or doing whatever else they need to do. Drivers in Europe seem to have this one figured out pretty well.
 

TL; DR – Teehan is barking up the wrong tree, IMHO. He’s pushing for improving in-car UX as it applies to infotainment – but the car is not yet an appliance; those things are secondary, and if they aren’t treated as such, we’re just going to have more distracted drivers who are a nuisance and hazard on public roads. Build me an interior that focuses on driver alertness, safety, and comfort – in that order – and worry about turning your car into a Bang & Olufsen or Apple showroom when the thing is parked.

Less and smarter technology; more voice-activated features. Less distraction; more safe driving. Fewer buttons; more classic, clean design. What are your favourite car interiors free of design clutter?

Read More

28 Apr The Google Glass era has begun. Will it last?

The first public Google Glass units have shipped, and they're drumming up some real emotion about the social appropriateness of the device going pervasive and mainstream. The issue is that it's not just about the person wearing Glass (for him or her, it's pure usefulness, once they get past the self-consciousness of its currently awkward appearance); it's about everyone else being always watched, up close, from the point of view of a person with whom you're interacting. Are we ready for this? Does it forever cross our comfort line, or will it, like so many other conventions of the Internet, mobile, and social eras, slowly push that comfort line further?

We just don’t know; it’s great technology, but perhaps it’s not everyday technology.

What is he looking at exactly?

Some early product thoughts:

Robert Scoble:

I will never live a day of my life from now on without it (or a competitor). It’s that significant… The success of this totally depends on price. Each audience I asked at the end of my presentations “who would buy this?” As the price got down to $200 literally every hand went up… Most of the privacy concerns I had before coming to Germany just didn’t show up.

Drew Olanoff:

Some will see this device as a fad, something that isn’t really “necessary” in today’s world, and others will see this as the beginning of an adventure for users, developers and Google, of course. I tend to lean towards the adventure side, as it’s not fully known what impact Glass will have on society, your day-to-day activities, or the future of technology and hardware.

None other than Google Executive Chairman Eric Schmidt actually said:

Talking out loud to control the Google Glasses via voice recognition is “the weirdest thing… There are obviously places where Google Glasses are inappropriate”

Some of the best behavioural insights come from Jan Chipchase, Executive Creative Director of Global Insights at frog:

His article You Lookin’ at Me? Reflections on Google Glass is a heavier read about the implications of wearing Glass in public. It makes us think more about how Glass may break the unwritten rules that govern socially appropriate behaviour.

It brings up the famous Milgram subway social psychology study from almost 40 years ago: “But Dr. Milgram was interested in exploring the web of unwritten rules that govern behavior underground, including the universally understood and seldom challenged first-come-first-served equity of subway seating.” It was a rare study on the delicate subway order.

“Milgram’s idea exposed the extremely strong emotions that lie beneath the surface,” he said. “You have all these strangers together. That study showed how much the rules are saving us from chaos.”

From Jan Chipchase’s previous research while at Nokia about actors wearing a Glass-like product in Tokyo:

[During experiments about social/tech interactions], our actors and actresses felt extremely self-conscious about wearing nonstandard glasses, and awkward about acting out the scenarios, particularly in contexts where there were others in close proximity. A number of the things we learned from this study surprised us.

What will induce an odd response to usage of Google Glass or other tech device interactions in the future?

Glass has four design principles for developers that focus on the Glass wearer’s user experience: “design for Glass,” “don’t get in the way,” “keep it timely,” and “avoid the unexpected.”
 
Two complementary principles will go some way toward accommodating the concerns of people in proximity and lower social barriers to adoption:
 
Proximate Transparency: Allow anyone in proximity to access the same feed that the wearer is recording or seeing and view it through a device of their choosing.
 
Remote Control: Allow identifiable people in proximity to control Glass’s recording functionality and have access to the output of what was recorded.

What a great way to consider how we might accommodate the privacy concerns of people nearby: let Glass usage be transparent, and let people collaborate on its created content.

One could argue that the form taken by Glass offers up a lazy futurist’s vision of what might be. Glass has a certain inevitability about it.
 
In due course, the technologies to deliver Glass’s emerging functionality will truly disappear from view — this is a window of opportunity for discussion, debate, and reflection.

Final thoughts:

Yes, we are always being watched, but we're starting to accept it. There can be value in that, like the surveillance coverage and user-generated visuals around the Boston Marathon bombing, which led to a citizen-led detective hunt for the suspects. You may disagree with how that unfolded, but isn't it incredible that we live in that sort of era?

We're still grappling with our individual privacy in a social-world-gone-online, which is only a fabrication of the last 9-12 years! Remember when we banned camera phones from locker rooms? The discomfort was recognized, reasonable guidelines went up, and social norms adjusted. But what happens when a future Glass is hidden and covert? People will have it, there will be nothing anyone else can do, and that's why we should be worried.

Even now, the product is not fully recognized in the real world, which is why Robert Scoble doesn’t get much backlash about wearing it all the time.

We ought to talk about this openly. Otherwise, could it be “too late”?

In the meantime, I'm bullish on shared mobile experiences and their inevitable evolution into always-in-view experiences involving the people around us. That's something like my company's current iPhone app, Jiber, and I'd love to hear your thoughts on all this.

Read More