Episode 102: Even with AI, Great Is the Enemy of Good Enough
Jon Prial: Considering that I view myself as a techie and I'm definitely a bit of a nerd, the truth is I'm rarely an early adopter. I mean, perhaps I've gotten burned in the past over the years, and now I just say I'm a version three person. That means I'm not going to consider a product until the third version has come out. Well today, we want you to think about what type of person you are and what you should consider when it comes to tackling machine learning and AI projects. We'll be talking with Adam Drake. He's currently a White House Presidential Innovation Fellow and an IEEE Senior Member. Adam has a great track record in leading technical business transformations in global and multicultural environments. We'll talk with him today about technology, tech trends, data science, AI, and where you might want to be on the technology adoption curve. I'm Jon Prial, welcome to The Georgian Impact Podcast. Welcome to the show, Adam.
Adam Drake: Thanks, Jon. Good to be here.
Jon Prial: So you wrote a post last year, you called it Novel Results Considered Harmful, and I think it does a great job of grounding your audience and reminding them to focus on leveraging the right technology for business value, not just because it's the latest and the greatest. It'd be great if you could share an example with us, maybe even a good one and a bad one.
Adam Drake: Sure. So the post that you're referencing, it was the outgrowth of a lecture that I gave at the University of Toronto. And Ravi Adve from the University of Toronto was kind enough to invite me to speak there. And when I thought about an appropriate topic for the lecture, one thing that I thought of was, from my experience working with a variety of companies all over, the thing that comes up again and again is technology teams often pursue sort of the cutting edge of what's possible technologically. And there are reasons and situations when that might be appropriate, but for a lot of companies and especially in academia, the pursuit of novel results at the expense of useful results is actually something that's quite an issue, it's quite harmful, hence the name of the post. And I thought it would also be interesting to do a considered harmful lecture because every tech person should do a considered harmful paper lecture at some point. So I did that, and it was an appropriate venue because the lecture was well attended by folks from the computer science department, mathematics, electrical engineering. And it was very interesting to give a talk to folks at a university, basically saying, hey, the novel results that you have to come up with as part of the publishing model in the academic world are actually problematic in some cases in industry, because when people read these papers and they see the cutting edge of the research, they think that that's what they need to implement. And in fact, most of the companies that I work with would be well-served by having a fast, pragmatic solution to whatever problem they're currently trying to solve.
Jon Prial: It makes a lot of sense.
Adam Drake: The thing that I see pretty frequently right now is organizations that might have some type of classification problem. For example, whether it's fraud/not fraud or any classification problem you might consider, sort of the first tool that a lot of people reach for is some deep learning type of solution, some neural network based solution. That may or may not be the right approach, but often they reach for it too quickly. And so kind of my first question is, "Did you think about logistic regression?" For example, if you have a binary classification problem, that might be something to try. And so there are a lot of very, very useful results that are traditional, they're well understood, they've been around a long time, and the amount of business value that they can add is immense. Because you can have a working solution out the door in a matter of days, versus going down more of a research track and saying, okay, what's the cutting edge? How can we implement that? How can we support it? How can we interpret it? All these sorts of questions. That research track can actually be quite harmful for organizations, especially if you're a high growth startup and you have funding limitations, you have competitive limitations. And what you really need to do is focus on getting product out the door quickly, not necessarily engaging in a more extensive research operation.
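To make the kind of simple baseline Adam describes concrete, here is a minimal sketch of logistic regression trained with plain stochastic gradient descent, using only the Python standard library. The toy fraud/not-fraud data, feature values, and hyperparameters are invented for illustration; this is not code from the episode, just one way the "did you try the naive solution?" baseline might look.

```python
import math
import random

def sigmoid(z):
    """Squash a logit into a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_sgd(data, lr=0.1, epochs=200, seed=0):
    """Fit logistic regression weights with plain stochastic gradient descent.

    data: list of (feature_vector, label) pairs with labels 0 or 1.
    """
    rng = random.Random(seed)
    n_features = len(data[0][0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            p = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            err = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Classify a feature vector with a 0.5 probability threshold."""
    return 1 if sigmoid(b + sum(wi * xi for wi, xi in zip(w, x))) >= 0.5 else 0

# Hypothetical "fraud / not fraud" transactions: two made-up features each.
data = [([0.10, 0.20], 0), ([0.20, 0.10], 0), ([0.15, 0.25], 0),
        ([0.90, 0.80], 1), ([0.80, 0.90], 1), ([0.85, 0.95], 1)]
w, b = train_sgd(list(data))
```

A few dozen lines like this can be running in production in a day, which is exactly the speed-versus-novelty trade-off Adam is pointing at; a deep learning version of the same classifier would take far longer to build, tune, and support.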
Jon Prial: So you really hit the old technologists warning aphorism, great is the enemy of good enough, let's get good enough out the door.
Adam Drake: Exactly. As a technologist, that's something that I had to struggle with earlier in my career. I mean, I've been in tech since the 90s, which is early for some and a long time for others, and I've had many situations where I thought something would be a really interesting project to pursue, but it wasn't necessarily useful for the business.
Jon Prial: So you won't send people off to Kaggle right away because they're going to get the great versus the good enough.
Adam Drake: Do you mean when I send companies to Kaggle to put their problem up to solicit solutions?
Jon Prial: Yeah. Exactly.
Adam Drake: Well, at least from what I've seen, many of the solutions that folks come up with on Kaggle, they don't actually get implemented by the organizations. So they're very interesting. And for me as a technologist, as a machine learning person, they're very interesting. And I regularly read some of the after action report type of stuff coming out of the competitions. But there've been quite a few cases where the winning approaches to the Kaggle competitions were not actually implemented by the organizations that sponsored them. And in some cases that's due to performance considerations or other things, but yeah, just because it's a Kaggle solution and it's very interesting, does not necessarily mean that it has the same business value.
Jon Prial: Sure. So I think it's always fair to say that AI can work with a simple concept and lots of data, like the Netflix recommendation engine. And of course I could go on about the number of issues with AI yielding bad results, whether it's facial recognition or, I don't know, self-driving car accidents. But at the end of the day, I think you've captured this best when you urge us not just to think about artificial intelligence, but rather to focus on what you're calling IA, intelligence amplification. Talk to me about what you mean here, please.
Adam Drake: So this is not really my idea. I can't take credit for it, but it's an idea or a concept that I think is very salient now. And especially in a lot of the organizations I work with, they think they're going to go from zero to amazing AI that does automated everything kind of in one step. They think there's sort of an off the shelf solution for it. And what that leads to, is it leads to people considering problems or potential solutions in a way that is oftentimes technically infeasible. We do not have general artificial intelligence right now, it doesn't exist. And so another way to look at that problem would be to say, okay, we have people and they're currently engaging in a variety of business processes. What can we do in order to amplify the intelligence of those people or to augment their intelligence so that we can make them more productive? And what that does is when you focus on augmenting the intelligence of the people you do have, you end up solving for a different type of problem. You're not trying to find some very far future vision technology that may or may not materialize. You're looking at more concrete things that you can do right now that can be more helpful to your current workforce to help them be more productive and to help your company scale more effectively.
Jon Prial: What will be a good example then of an IA that's in common use right now?
Adam Drake: What I like to do is I like to go back and think, what are the roots of this? Where did it come from? And one of the people I think that broke a lot of ground in this area is Douglas Engelbart. If you haven't seen The Mother of All Demos, it's on YouTube, you can check it out. I mean, this was from the late 60s, if I remember correctly. And they demoed windowing systems, hypertext, the computer mouse, word processing with real-time editing, and all this stuff. And that was in the late 60s, and here we are in 2019. And what they did was they said, okay, well, if we were going to take a group of people and make them more productive, what would that look like? And I think that's a much better way to approach the question of how technology can assist us than to say, okay, well, AI is going to figure out everything, so let's just work backwards from that. I think looking at ways to extend our current capabilities is a much better approach. It keeps us more focused and it results in more tangible outcomes.
Jon Prial: It almost sounds like a classic product management discussion. Engelbart really took an outside-in view: what did users need, and then here are the answers, versus, oh, I could do this really cool thing. It's a really outside-in view, which is kind of cool.
Adam Drake: Yeah. And some of the things that we see now, I mean, I think it's referred to as Tesler's theorem, but a lot of the things we have now that you could argue make people a lot more productive, we don't really consider AI necessarily. So one common one is Siri, for example. You can talk to your phone, which is pretty cool. I mean, I remember in the 90s, the concept of talking to your phone, or talking to your watch or something like that, was sci-fi stuff, it was in the future. And now people do it all the time. And how much more productive does that make people? I'm not sure, but there are a lot of folks out there that do not consider that to be AI, because we have it already. So I think there are some interesting examples out there, and there are ways that people are getting more productive, sometimes in subtle ways. But the other side of that is once those technologies exist, they're usually not thought of as AI anymore by people generally.
Jon Prial: Interesting. I was watching a video of you somewhere, and I heard you say, when you hear the term AI, be suspicious. So I liked this Siri analogy. It's not necessarily AI; it could get better and better with AI, but it really starts as an interface.
Adam Drake: I mean, the interface discussion is interesting and the question of bandwidth and human computer interaction and how that can be improved. I mean, that's a whole research field. I mean, it's massive, but I think the point about being suspicious is important. A lot of the work that I do, is with companies that have to make these sorts of decisions. It's with investors and companies that have to say, okay, we have a portfolio that we're managing, we have a variety of investment options, what makes the most sense? And sometimes there's definitely some research focus that's needed on an AI problem, but oftentimes you don't need any let's say cutting edge state of the art research. You need some relatively normal solutions to these problems that have probably been around for a long time, but they give you efficiency gains and scalability gains that you wouldn't otherwise have. And then as a company that allows you to grow much more efficiently, to be much more capital efficient. And so you get scalability benefits that other folks don't have.
Jon Prial: You've already talked about using very successful techniques that are easy to use, like stochastic gradient descent or logistic regression, but then we have AI, we have deep learning. So you're kind of cautioning everybody: don't jump in with both feet into the latest and greatest. Back it up and really think about what you need and what value you're going to provide.
Adam Drake: Yeah, I mean, I hope that it's not really a controversial perspective. It's kind of like, just use the right tool to solve the problem. And you don't necessarily need to use the latest and greatest tool to get a pretty good solution to your problem, especially if you're in a high growth company. Maybe you've taken on funding from a VC or from a PE firm. Your objective is to continue growing the company and produce those returns and to build up your product. And if you have to choose between putting out something that does a pretty good job of solving your problem in a reasonably short time, or something that does a slightly better job of solving your problem in a much longer time, then from a business perspective, I would choose the former. And I think that kind of goes back to the novel results considered harmful concept: if you're pursuing the novel results because they're new or because they're nominally better, then that might be interesting for me as a technologist, let's say, but for me as a potential investor, I'm actually going to look at that rather skeptically, because I'm going to question your capital efficiency. I'm going to question your ability to prioritize. I'm going to question your ability to grow the business. So in some ways the novel results are actually kind of a red flag.
Jon Prial: Sure. And I love that you brought it back to the business. So, in some writing on a computing trade-off, you actually referred to another researcher, Frank McSherry, and you talked about how a distributed computing platform like Hadoop or Spark may or may not outperform a simple single-threaded implementation running on a laptop. So what should a CEO and a CTO think about as he or she looks at their tech stack?
Adam Drake: That's a really good question. So one thing that I suggest people do, and if you haven't looked at it, I definitely would go and take a look, is look at the Heilmeier Catechism. The Heilmeier Catechism was developed by George Heilmeier, and he ran DARPA. He was the director there, I think, in the late 70s. And he had a series of questions that he asked, and they were non-technical in nature, but they got to the root of the actual problem people were trying to solve. And so they were things like: what are you trying to do? Articulate what you're trying to do, but don't use any technical terms. How are you doing it right now? What are the limits of the way that it's done now? How can it be improved? Sort of these types of things, and then risks and costs and things of that nature. So if the business folks are looking to evaluate the tech stack, it's almost the wrong question, in my opinion. Because if the CEO is saying, "Hey, let's talk about the tech stack," that makes me raise an eyebrow a little bit. It's like, "Why is the CEO kind of getting in the weeds on the tech stack?" If I were in the CEO spot, what I would be concerned with is: what are we building? Is this sustainable? Can we get the right people? Is it good for the product? Those sorts of things. When we start talking about the tech stack and whether or not it makes sense, I mean, there are a lot of ways to skin a cat, and we can have a pretty lengthy discussion about different approaches to technology, whether it's tooling, like programming languages and libraries and so on, or whether it's architectures, like, should you go directly to microservices? Should you start with a monolith and then sort of carve things out over time? I mean, there are a lot of ways to solve that problem.
But I think the way I would respond to that is to say if the CEO is getting in the weeds on the tech stack, there are probably some other things that we need to talk about first.
Jon Prial: This is great. And actually to close now, this is just perfect, It's a great setup now. I'm going to ask you to bring out what you've called your AI BS detector. So what are the questions you just might be asking somebody? We kind of talked about the tech stack, but this is bigger than that now. What are the questions you might ask somebody on the other side of your AI BS detector?
Adam Drake: That's actually a great question, Jon. So one thing that I've done a couple of times over the years is a lecture that I've given called Developing Your AI BS Detector. And the goal of that lecture was to talk with kind of non-technical audiences, so mixed groups, business folks, maybe executive teams, and the idea was to give them a kind of framework or a set of tools that they could use when they are approached with investment opportunities or when they have to make decisions about projects or allocate funding relating to AI or technical projects. And essentially it's a set of questions that are pretty straightforward. And as it turns out, I kind of realized after the fact that I'd sort of redeveloped the Heilmeier Catechism, but a couple of the questions are pretty useful. And the first one that I ask is: what's the actual problem that you're trying to solve? The business problem. And the reason that I think that's important is because oftentimes we see technical solutions in search of a problem. And so I do encounter that periodically when I'm working with companies that are looking for funding or when I'm advising venture capital or PE firms and they're asking me to advise on their portfolio and so on. And we find these sorts of companies that are really pitching a very interesting technical solution that's in search of a problem. So I actually talk to the folks and they say, oh, we have this business problem, and here's the thing we want to solve, and we're using AI to do it. And I say, okay, great, well, what's the most naive solution to your business problem? And did you actually try that? For example, if they have some kind of binary classification task, they'll say, well, the easiest solution would probably be logistic regression with stochastic gradient descent. And I say, "Well, did you try that?" And they say no.
And then I say, well, maybe that's a perfectly good solution to the business problem you're trying to solve. And why are you looking for millions of dollars of funding if you don't even know whether the easy solution will work just fine? And so I think these types of questions are not just important for business folks who maybe are less technical but need to make decisions about technology investments. They're also important for technologists, because for people who love tech, and I would count myself among that set of people, it can be very easy to pursue the technically interesting solution at the expense of the best solution for the business. And I think people often lose sight of that.
Jon Prial: So: avoid the hype, pragmatism rules, simplicity rules, value to your end user customers. Adam Drake, what a fantastic discussion. I really enjoyed spending the time with you. Thanks so much for giving us the time. It's great to chat with you.
Adam Drake: Thank you, Jon.
DESCRIPTION
We’re all tempted by the newest, shiniest technology when looking for a solution to a problem. But often the best results come from proven techniques, so it’s wise to find a fast, pragmatic solution to your problem, whatever the technology. How do you know what’s the right choice for your business? And how do you select the right opportunities to pursue with AI, so that you’re augmenting your customers’ experience and ability to get their jobs done?
In this episode of the Georgian Impact podcast, you'll hear about this and more as Jon Prial is joined by White House Presidential Innovation Fellow and growth-stage advisor Adam Drake.
You’ll hear about:
- How you can identify opportunities to use AI/ML in your business
- Why you should start with proven modeling techniques
- Getting valuable products out with customers, rather than stuck in R&D
- How to develop an AI BS detector
Adam Drake leads technical business transformations to help growth-stage technology companies continue their rapid acceleration through leadership, technology, and data architecture guidance. He is currently a White House Presidential Innovation Fellow and IEEE Senior Member. Adam’s professional background spans a variety of industries including e-commerce, online travel, online marketing, financial services, healthcare and oil and gas.
Read his thoughts at adamdrake.com. Jon references these articles during this episode:
Novel Results Considered Harmful
Artificial Intelligence and The Heilmeier Catechism
To get started with AI in your business, read our Principles of Applied Artificial Intelligence and rank your organizational readiness with our maturity matrix. You can download your free copy here.