Techstars: AI startups must be wary of ‘move fast and break things’ mantra
Techstars, one of the largest startup accelerator organizations on the planet, currently has 40 accelerators in 25 cities around the world. Some of its startup accelerators focus on helping early-stage companies grow, while others aim for specific sectors, like programs run with Amazon for Alexa startups and Target for retail companies.
This fall, Techstars will open its very first AI startup accelerator in Montreal, a city that in the past year has welcomed Facebook AI Research, Microsoft Research, Google AI, and as of last month Samsung Research.
The accelerator is being launched in tandem with Real Ventures, a prominent seed fund investor in Montreal, and includes advisors from Google, as well as Element AI and other companies that call Montreal home.
The first cohort of 10 startups will be selected next month to begin in September.
Ahead of the start of the program, VentureBeat spoke with managing director Bruno Morency to explore what it means to be an AI startup, and the kinds of challenges and opportunities that startups with an AI-first approach face. He also talked about the qualities he looked for when choosing startups to participate in the inaugural class of the accelerator.
This interview was edited for brevity and clarity.
VentureBeat: So Jeff Bezos has talked about what it takes to build a tech company, and though every business today uses some form of technology, that doesn’t make them a tech company. Andrew Ng has also talked a lot about what it takes to be an AI company. What do you think it takes to be a real AI startup? What are the elements involved there?
Morency: I think we’re still figuring out the right answer to this. There are some startups that apply to the program that basically come out of research projects and do something very interesting, but it’s not clear yet what the product is going to be or how it can turn into a business.
On the other hand, we have some companies that have a product, that already have some pretty good customers, some of them even decent revenue, and the product exists because of the AI technology. So would you describe them as an AI company? Probably not, because the mission of the company is not to push the science and ease our burdens; the mission is to solve a specific problem. So I don’t know.
I think we’re still trying to figure out what the definition of an AI startup is, but in my mind, I think it includes people who apply it as much as people who push the science part of it. And companies in two or three years may use AI but will not do any data science research themselves — they’re just going to use models that are pre-packaged and available and applicable to them — but we’re not there yet.
VentureBeat: I guess, to ask it in a different way, do you see specific indicators when you’re speaking to a company and you can tell they’re good and on the verge of being a great AI company? Are there specific indicators you see there? Because people show up with data, and there’s lots of different interesting verticals you can go into, and maybe they build some custom models. But beyond data: Is it the team? The market they chose to tackle? Are there other indicators that tell you a company is about to do some hockey stick growth?
Morency: There’s not a lot of historical data we can base that on, so yeah, we’re definitely trying to figure that out.
You’re going to judge the company the standard way that investors judge companies in these verticals. For AI specifically, the playbook hasn’t been written yet, so this is what I’m trying to figure out over the next year or two: what makes these applications succeed and what signs I need to look for. So yeah, the first cohort is going to be a good test for that.
I don’t think I have the answer yet. That’s something I try to figure out and discuss with the investment team at Techstars and Real Ventures: What makes an AI company different [from] the standard SaaS or mobile company?
VentureBeat: In your mind, how is it that Montreal basically became the envy of the world in AI?
Morency: I think it comes from the university research labs, like McGill [University]. If you look at the AI winters of decades ago, funding disappeared. But you had people in research like Yoshua Bengio and his team and the machine learning lab at McGill; you have counterparts at the University of Toronto as well who just kept working on this despite a lack of funding and general interest. A lot of what’s happening in Montreal is owed to these groups, and the collaborative nature of the academic research created a pool of researchers that has gained attention around the world as a core group of talent.
We’ll see in the next few years how much we [Techstars] can contribute on the side of practical applications and products, and I believe that’s where Techstars can be useful. We’re not going to be a research entity. We’re not about inventing the next algorithm for deep learning or neural networks. [We’re more about] applying those in commercial products; I think this is where Techstars can contribute a lot to the local ecosystem.
VentureBeat: That’s the distinction you want to draw from all the other players, because there are so many here.
Morency: Exactly. Personally, I don’t think I can add much to the research aspect. What I can add is in applying the output of that research with innovative companies, bringing them to Montreal, and being the bridge between the research and the commercial applications for it.
VentureBeat: What are the forms of applied AI in the world today that get you really excited or pumped about the possibilities? I know there’s a list of uses that come to mind for me that make me think, “Yeah, that’s crazy.”
Morency: Anything where there are a lot of repetitive tasks done by humans to try to make sense of a large amount of data — for example, financial markets, contract review, compliance — these are areas where AI can change things a lot.
In health care, I think there’s a lot to do around better diagnosis, and some of what we saw isn’t just diagnosing things; it’s before a doctor needs to decide whether to operate on someone. There are signs they use to make that decision, and I’ve been pitched ideas around using machine learning to improve those signs by an order of magnitude, where you wouldn’t need to operate on someone [for] open heart surgery because the signs were better and you were able to make a better decision. So there are some pretty interesting applications in health care, in automotive; every industry is going to be touched by it.
VentureBeat: One of the things that strikes me about AI is you’ve got these behemoth companies throwing around massive amounts of weight and pre-existing technology and datasets, and you’ve got startups that are trying to tackle niches or build on some sort of great idea. Is there any specific space you think startups can exist within or do well because of that dynamic?
Morency: They own a lot of the data, that’s for sure, and that’s the big difference between AI startups and the typical startups we’ve seen in the last 10 years. For example, Google and Facebook have a lot of users, but a startup can come in and acquire those users with a better proposition. Data is harder: an AI startup can’t just start from scratch, because the product only becomes useful once a model has been trained, and you can’t train without a large dataset.
So there’s indeed a huge challenge if you’re starting from scratch. But from the applications we’re seeing come in to Techstars, there are some datasets that the Googles and Facebooks of the world don’t have, which people are leveraging and have acquired through standard products they’ve had for a year or two. And there’s always room outside of the big companies, to be honest.
Ten years ago we saw all these large players, and you’d say, “Well, there’s no place for a small one to come in and take their users away.” Well, it has happened, and I think the same thing is going to happen with AI. On datasets, they have a lot more, but there’s a lot they don’t have, and I’m fairly bullish on startups being able to find their way around these companies.
VentureBeat: What are some other specific challenges or opportunities facing AI startups?
Morency: We talk a lot about data, but you have to be careful about how you get that data and make sure that the people who shared it are a willing party to this. I think that’s a huge challenge, because the rules about what you can do with data and how you can acquire it change from region to region. GDPR is a good example of how these rules change from country to country.
So I think that’s one of the biggest challenges: If you want people to continue sharing data, they have to see why this is useful for them — and it can’t be just getting a tool like Facebook without paying, it has to be more than that. So if you want to take people’s data and do stuff with it, it’s got to be useful to everyone.
VentureBeat: I had a conversation with a company where they were basically suggesting the idea that you can recruit your users to help supply more labeled data, and you know there’s different ways to do that. Someone gets on Instagram and starts using hashtags — guess what? That’s what you’re doing. But have you seen any companies that are doing a successful job of creating that value proposition that you just described?
Morency: Think of a company like Strava, for example, where you share where you run with other people — so besides telling your friends how good your pace is, you get a lot of good secondary things, like when you go into a new city [and learn] where the good places to run are, where the safe places to run are.
VentureBeat: Cities are using that data to create bike paths, for example.
Morency: Yeah, so as long as people feel that “If I share that information of when I run and where I run, there’s useful stuff that can make my experience as a runner better,” then that’s a good reason to share. But we have to be very careful with what we do with data. If users are surprised about what you do with their data, that’s a big strike in my mind. The second strike is now that they know what you’re actually doing with the data, they need to be cool with it; they need to accept this usage agreement you’re making with them. Otherwise, you shouldn’t be doing this.
VentureBeat: To what extent do you think events like the premiere of Google Duplex or Alexa sending a recording of a user’s voice to a person in her contact list influence, for example, the willingness of a company to deploy specific kinds of AI because they think their customers will be too freaked out by it?
Morency: I think that’s really the biggest challenge, because pretty much every AI project can be applied for good, and every single one of them can also turn out pretty badly, so how do you build the tool to make sure that it doesn’t turn bad? When a lot of startups pitch an idea and you want to invest, the question that’s always asked is, “Can you really build this?” And I think in AI there’s a second question, which is “Should you build this? And if you build this, how do you prevent it from being used for nefarious purposes?” It’s a real question that people need to ask themselves.
I mean, if you think about models for assessing risk for insurance, there’s good value in being able to detect whether someone is at risk due to their lifestyle, and to help them over time avoid the bad scenario of dying too young or getting a sickness or disease they could have avoided over 10 years with a better lifestyle. But if you use this same model to deny insurance to people who are too much of a risk, then that’s bad. So how do you use it to prevent the risk rather than cutting people off from insurance?
VentureBeat: I think of algorithms being applied to people with no previous history of credit, and initially it makes sense I guess to look at things like how often someone charges their phone. But if you have inconsistent energy supply, then that might not be a great metric, and the idea that somebody somewhere would lose a loan over that is nuts.
Morency: Yeah, and if you’re taking historical data about what it means to be a good person to insure around health or what it means to be a good person to lend money to, then historical biases about who got money and who was able to use it will inherently be applied in the model, because the model is trained on that data, so you’re just going to reinforce biases that you had in the past. So how do you make sure, as you automate this, you come in with a clean sheet in terms of biases, yet learn from what was good or bad in terms of who you lend money to?
Everyone I’ve spoken to in this field up until now — I have yet to find a researcher who doesn’t think about this. They’re really worried that what they’re doing could be used for such things, and [they ask] how do I make sure these things are prevented.
VentureBeat: Do you think it’s possible for AI startups to have that “move fast and break things” ethos when you’re dealing with things that can be so inherently impactful?
Morency: You’ve got to be more mindful about it, because you’re taking people’s data and automating decisions that will impact their lives. In a lot of cases a human would have the context, and probably — well, I mean, there are so many examples of humans making decisions that are detrimental to people, so you could argue it’s not better just because a human makes the decision. But if you’re automating at the scale that AI will, I think you have to be careful not to break — I mean, you can break things, but you can’t break lives. So how do you prevent that?