10 Nov 3 startup pitch lessons (disguised as 3 lessons for a new startup investor)

Published on Medium at https://medium.com/@nbloom/3-startup-pitch-lessons-disguised-as-3-lessons-for-a-new-startup-investor-79778838cdb2

One month of wading into the exciting world of startup investing, and I have been pummelled with lessons about the pitch and the grokking that an early-stage investor must do. I can't help but accumulate a list of lessons learned. What are lessons for investors are, in fact, lessons on how to make a pitch great.

Early stage startups are risky. Investing in them leads to growth or, more likely, to zero. The signals are incomplete. The value proposition is likely still unverified. Early stage investors nonetheless aim to codify their due diligence, and there is no universal, right, or best way to do this.

The challenge is to measure, internally, the risk of the company. Once the investor finds a company past a threshold of interestingness, he either gauges it as too risky or comfortable enough, or, so often, as impossible to discern. What's most intriguing, I've found, is how to get from "unable to discern" to an investment decision: yes or no.

1) Pitch transparency of risk. I've been guilty, when pitching investors from the other side of the table as an entrepreneur, of making the opportunity seem big and certain. The entrepreneur can speak to the strengths, and dance around the weaknesses (ideally as nondefensively as possible) when probed. But the investor is inherently skeptical. Of course there are massive risks. What are the biggest ones? What is the outcome if they come to fruition? Have you proven there are alternatives to your playbook? Or how are you even thinking about this? If entrepreneurs can build transparency around the risks, preemptively, and explain what they don't know and how they are constantly thinking about it, it in fact comes off as more confident and lets the investor measure that risk factor more accurately in their head.

2) Memorable something. I've joined some larger pitch events, which are not interactive on the fly. Sometimes you leave with one memorable bit that you can't help but want to tell someone, or your partner, later that day: a problem that seems so viscerally important paired with a neat little solution, impressive traction (from launch to $1M ARR in less than a year), daily engagement with the app, even a high NPS. It is up to you to decide what you want that bit to be (rather than leaving the investor to guess, fumble, and forget). And you want to repeat it at least 2–3 times.

3) Blend of the past and the future. The great, confident founders can legitimately say that the current product makes them feel embarrassed; they already have a beloved product used by millions or paid for by thousands. Another founder may emphasize what they've done to date: substantial growth that will continue. Still, that's not enough. A less confident founder may talk endlessly about the things coming down the pipe (a hire, a funding close, an upcoming product launch, or a feature release that will be the supposed panacea). That's lacking too. But when founders strike a balance between what has brought them to this point and what drives them to soldier forward, it is tremendously powerful. On one side, what is your success to date; on the other, why are you doing this, and how can it continue to drive the team forward? Even if that vision is stupendous or lofty, it's a signal of passion.

I have about 15–20 more lessons to continue this post another day. Stay tuned!

01 Mar Does AI want to be humane?

Published on Medium at https://medium.com/@nbloom/does-ai-want-to-be-humane-1ea354e6832e#.gqhttg99i

I had this little exchange recently around a Vancouver tech conference. Here are a few reasons that there’s a common fear about AI: sci-fi movies, the Turing Test, and Kurzweil.

It is all too easy to overanalyze these sci-fi movies for their critique of our culture or our future:

Why does Alex Garland’s 2015 sci-fi flick Ex Machina show that the pinnacle of AI is convincing human-like emotion? Or that the flaw of the human is their susceptibility to their emotions or to their empathy? And, did that Ava character pass the Turing Test? Well, in a condensed, feature-length film tackling big issues and trying to entertain, ambiguity is your friend.

Feelings! Ava from Ex Machina

“The Turing test is a test, developed by Alan Turing in 1950, of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.” The Turing Test is often seen as the definitive threshold for AI to be “true intelligence” and gets frequently name-dropped throughout pop culture.

Programs are already beating and tricking humans at chess and other games, music composition, and even poetry. Disappointingly, AI programs will often pose as a foreign-speaking child to lower the judge's expectations. Sure, it's plausible that this confused and unintelligible "person" just doesn't know very much!

I'd prefer to hear from voices in AI like Stuart Russell, Marvin Minsky, and Nick Bostrom, who would argue that the Test is essentially worthless and a distraction from the real work of AI. The only people actually working on passing the Turing Test are doing so as a hobby (e.g., Ex Machina).

Minsky called one Turing competition “obnoxious and stupid” and even put up his own prize to shut it down.

Passing the Test was not meant as the goal of AI, but as a thought experiment to challenge people who were skeptical of intelligent machines: to argue that machines could prove their intelligence through their behaviour, by being indistinguishable from humans, not by being self-aware.

The players in our AI future, or rather machine learning, program-as-hardware, are likely to be the big guns: Google, Amazon (Lab126), Microsoft, IBM, and maybe even Andy Rubin’s new venture Playground. These are not hobbyists. There are different motivations. They are here to create value from machine learning for industry and for consumers. I’m particularly excited for the niche, consumer, mainstream market, every-single-day AI bots, like the expanding Slack bots (you spend your whole day in there anyways), Messenger (slowly usurping all your social chatting), and Fin (new contender from Sam Lessin and Andrew Kortina). These are not just cocktail party tricks.

28 Apr The Google Glass era has begun. Will it last?

The first public Google Glass units have shipped. It's drumming up some real emotion about the social appropriateness of it becoming pervasive and mainstream. The issue is that it's not just about the one wearing Glass (for him or her, it's pure usefulness, once they get past the self-consciousness of their currently awkward appearance); it's about everyone else being always watched, from up close, from the point of view of a person with whom you're interacting. Are we ready for this? Does it forever cross our comfort line, or will it, like so many other conventions of the Internet, mobile, and social era, slowly push that comfort line further?

We just don’t know; it’s great technology, but perhaps it’s not everyday technology.

What is he looking at exactly?

Some early product thoughts:

Robert Scoble:

“I will never live a day of my life from now on without it (or a competitor). It’s that significant… The success of this totally depends on price. Each audience I asked at the end of my presentations “who would buy this?” As the price got down to $200 literally every hand went up… Most of the privacy concerns I had before coming to Germany just didn’t show up.”

Drew Olanoff:

“Some will see this device as a fad, something that isn’t really “necessary” in today’s world, and others will see this as the beginning of an adventure for users, developers and Google, of course. I tend to lean towards the adventure side, as it’s not fully known what impact Glass will have on society, your day-to-day activities, or the future of technology and hardware.”

None other than Google Executive Chairman Eric Schmidt actually said:

Talking out loud to control the Google Glasses via voice recognition is “the weirdest thing… There are obviously places where Google Glasses are inappropriate.”

Some of the best behavioural insights come from Jan Chipchase, Executive Creative Director of Global Insights at frog:

His article You Lookin’ at Me? Reflections on Google Glass is a heavier read about the implications of wearing Glass in public. It makes us think more about how Glass may break the unwritten rules that govern socially appropriate behaviour.

It brings up the famous Milgram subway social psychology study from almost 40 years ago: “But Dr. Milgram was interested in exploring the web of unwritten rules that govern behavior underground, including the universally understood and seldom challenged first-come-first-served equity of subway seating.” It was a rare study on the delicate subway order.

“Milgram’s idea exposed the extremely strong emotions that lie beneath the surface,” he said. “You have all these strangers together. That study showed how much the rules are saving us from chaos.”

From Jan Chipchase’s previous research while at Nokia about actors wearing a Glass-like product in Tokyo:

[During experiments about social/tech interactions], our actors and actresses felt extremely self-conscious about wearing nonstandard glasses, and awkward about acting out the scenarios, particularly in contexts where there were others in close proximity. A number of the things we learned from this study surprised us.

What will induce an odd response to the use of Google Glass or other tech devices in the future?

Glass has four design principles for developers that focus on the Glass wearer’s user experience: “design for Glass,” “don’t get in the way,” “keep it timely,” and “avoid the unexpected.”
 
Two complementary principles will go some way toward accommodating the concerns of people in proximity and lower social barriers to adoption:
 
Proximate Transparency: Allow anyone in proximity to access the same feed that the wearer is recording or seeing and view it through a device of their choosing.
 
Remote Control: Allow identifiable people in proximity to control Glass’s recording functionality and have access to the output of what was recorded.

What a great way to consider how we might accommodate the privacy concerns of people nearby: let Glass usage be transparent and let people collaborate on its created content.

One could argue that the form taken by Glass offers up a lazy futurist’s vision of what might be. Glass has a certain inevitability about it.
 
In due course, the technologies to deliver Glass’s emerging functionality will truly disappear from view. This is a window of opportunity for discussion, debate, and reflection.

Final thoughts:

Yes, we are always being watched, but we’re starting to accept it. There can be value in that, like the surveillance coverage and user-generated visuals around the Boston Marathon bombing. That led to a citizen-led detective hunt for the suspects; you may disagree with how that played out, but isn’t it incredible that we live in that sort of era?

We’re still grappling with our individual privacy in a social-world-gone-online, which is only a fabrication of the last 9–12 years! Remember when we banned cameraphones from locker rooms? The discomfort was recognized, reasonable guidelines went up, and social norms swayed accordingly. What happens when the Glass of the future is hidden and covert? People will have it, there will be nothing anyone else can do, and that’s why we should be worried.

Even now, the product is not fully recognized in the real world, which is why Robert Scoble doesn’t get much backlash about wearing it all the time.

We ought to talk about this openly. Otherwise, could it be “too late”?

In the meantime, I’m bullish on shared experiences on mobile and their inevitable evolution to an always-in-view experience. In terms of people around us, that’s something like my company’s current iPhone app Jiber, and I’d love to hear your thoughts on all this.