Commercializing AI with Vector Institute’s Cameron Schuler

This is a podcast episode titled, Commercializing AI with Vector Institute’s Cameron Schuler. The summary for this episode is: In this episode on commercializing AI, we speak with Cameron Schuler, a key contributor to AI's game-changing prominence. Cameron is the Chief Commercialization Officer at the Vector Institute and is dedicated to advancing the transformative field of AI.
Getting into AI
02:33 MIN
Deploying AI models
01:27 MIN
Pan-Canadian AI strategy
00:51 MIN
Components of #TrustworthyAI
06:39 MIN
Where explainability matters
01:15 MIN
Privacy and data sharing
01:45 MIN

Jon Prial: Hi, everyone, and welcome to The Impact Podcast. I'm your host, Jon Prial. There's no question that artificial intelligence has been a game changer for businesses, but this wasn't always the case. It's been the result of decades of effort from researchers and industry alike to unlock its potential, and, of course, there are many people around the world constantly finding new and exciting use cases for the technology. Today's guest is among the many people helping to make that happen. Today, we'll be talking with Cameron Schuler, Chief Commercialization Officer at The Vector Institute. The Vector Institute is dedicated to advancing the transformative field of AI with top researchers and experts from around the world. He'll be chatting with us about what it takes to build an AI ecosystem, the key to commercializing AI discoveries, and even how human behavior and progress can impact AI development. Cameron, welcome. You are also part of another research institute, Amii, A-M-I-I, and I'm really looking forward to your insights today. Just tell us what you're working on right now and how you got there.

Cameron Schuler: I originally got into this field in 2008, which was at the tail end of the AI winter, when it was considered that AI had no commercial value. I thought, "That's a great place to go to advance your career," but as it turns out, I happened to be in a pretty good place. Where I sit inside of Vector is as head of the industry innovation team, and so our role is the interface with our industrial partners, which include enterprise companies, scaleups that are leading edge in the AI domain, and SMEs, Canadian companies that may not be AI-first companies. Our role is to impact them through what we call our three Ts: technology, talent, and training. The technology piece really is experiential learning, so being able to roll up your sleeves and work on new methodologies. On the talent side, it is exclusive opportunities for our partners to recruit newly graduating talent, but we also have a highly curated list of people that are already in industry who might be looking for new opportunities, and that sits within our digital talent hub. On the training side, it really is keeping people on the leading edge of AI.

Jon Prial: Well, you've been around a while and I like that you've got this focus with the partnerships on commercialization. I mentioned AI is now ubiquitous, but there's a phrase that I've seen all too much of and I don't really like, called "take X and add AI." I mean, it's a naive view of data and business processes, but I'd like you to talk about AI and its predecessor, machine learning, where it was and where you see it today.

Cameron Schuler: Just for context, I tend to use AI and machine learning synonymously. They are different things, but I will use them synonymously, so I may not be referring to any one particular thing. The field is interesting because we were in a position where the field was so small for so long. 10 years ago, you could go by first name globally and you'd know who you were talking about in the field, and so what happened was research problems became industry problems, because there was a lot of research in very specific areas, things like overfitting. That's basically training a model on your training data, and then when you put it in the real world and run more data through it, the model doesn't work. There was a lot of research going on in those areas, but again, we went from a small group of highly talented people to industry saying, "Hey, there is value," and starting to take people out of academia, which was a problem. Then those problems, not having been solved, became real-world problems. Ultimately, when we take a look at AI right now, it does have broad industrial use, but there's also the potential for companies and individuals to use it in a naïve way that may not provide the best outcomes.
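
To make that overfitting failure concrete, here is a minimal sketch in Python (an editor's illustration using NumPy, not code from the episode): a degree-9 polynomial has enough capacity to memorize ten training points, and the gap between training and test error is exactly the "model doesn't work on real-world data" problem Cameron describes.

    import numpy as np

    rng = np.random.default_rng(0)

    # Ten noisy training points from an underlying sine curve, plus a larger
    # held-out "real world" sample from the same process.
    x_train = rng.uniform(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)
    x_test = rng.uniform(0, 1, 100)
    y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.1, 100)

    # A degree-9 polynomial can pass through all ten training points exactly.
    coeffs = np.polyfit(x_train, y_train, deg=9)

    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"train MSE: {train_mse:.5f}   test MSE: {test_mse:.5f}")
    # Typical result: near-zero training error, much larger test error --
    # the model memorized its training data instead of learning the pattern.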

Jon Prial: I often think that AI is a little different from other tech. The constant refreshing of data, the need to refresh models, is a different space than, I don't know, an algorithm that debits my bank account for a hundred dollars when I take out a hundred dollars. That's probably the same algorithm that was written in 1960. How does that work for you?

Cameron Schuler: That is a very good question. If you think about a pandemic or a financial crisis, which are both recent history, who knew that we'd have a shortage of toilet paper? That was certainly an unexpected thing. So if you think about AI models, you take historical data and you train models to then predict the future, and humans have done this forever. It was chicken bones that they'd throw around and look at: "Can I predict the future?" It's a pretty natural state for us. We recognized during the pandemic that changes can be dramatic like that or subtle: say you're a retailer and your demographic over a three-year period is now five years older and tastes have changed. You need a way to go back and actually validate your models, so that would be something called dataset shift or model drift. There are methodologies you have to deploy, so yes, it is more complex to deploy AI models, but ultimately, it has a lot of the same characteristics as any good project plan. All right, so if you think about a problem you're trying to solve, what's your baseline? Understand how it fits into the rest of your business problems, because technology is quite commonly a hammer looking for a nail, so we approach it in a very different way. What type of business problem are you trying to solve? How is AI a good methodology for doing that? Some of the things it does well are pattern recognition, or some of the language models as they've evolved have been pretty incredible, but ultimately, it is starting out with that: what problem are you trying to solve? What is the best methodology for doing so? Not just using AI for everything.
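
As one concrete illustration of checking for that kind of shift (an editor's sketch using SciPy, not Vector's methodology), you can compare a feature's distribution at training time against what the deployed model is seeing now:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)

    # Hypothetical feature (say, customer age): what the model was trained on
    # versus what production traffic looks like a few years later.
    train_ages = rng.normal(loc=40.0, scale=5.0, size=5000)
    live_ages = rng.normal(loc=45.0, scale=5.0, size=5000)

    # Two-sample Kolmogorov-Smirnov test: a tiny p-value means the live data
    # is very unlikely to come from the training distribution.
    result = ks_2samp(train_ages, live_ages)
    if result.pvalue < 0.01:
        print(f"dataset shift detected (KS statistic {result.statistic:.3f}); revalidate the model")
    else:
        print("no significant shift detected in this feature")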

Jon Prial: So it's really not a shift from research to development per se. You're actually starting with development, or commercialization, and what's required, and then backing into the tech that might be required. Is that a fair way to put it?

Cameron Schuler: Ultimately, when working with industry, it has to have a practical side. With our Vector sponsors, we have 29 large enterprises, and there's a piece of what's in it for them: how do Canadians adopt technology? You talked about our vision up front, but there's also a good-for-Canada component. It's about building an ecosystem, and both of those pieces are important, and so we do focus an awful lot on how Canadian companies take advantage of what's been built here, because that is important. But ultimately, it really is about ensuring that Canadians can get the benefit of decades of investment in AI from when nobody cared about it. How do we make sure we retain that, and that Canadian industry benefits from it? It does start out with: what are your overall objectives? Where is AI a good fit for that?

Jon Prial: Cameron, tell me about the Pan-Canadian AI Strategy, please.

Cameron Schuler: Canada was the first country to have a national AI strategy, and where this came from is that when commercial value was recognized in AI again, after decades of thinking it was a dead field, industry started making it very attractive for people to leave academia, and because the field was so small, pretty soon we were going to end up losing the advantage that we had as a country. Places like Toronto are expensive to live in, so if you're an academic, you may not be able to afford to live here. We developed the Pan-Canadian AI Strategy to ensure that we could keep that advantage. It really was around ensuring that we had the next generation of talent, growing that talent, and ensuring there was a large supply of it. That's worked in terms of companies relocating to the Greater Toronto Area to ensure that there is a supply of talent and a reason to come here.

Jon Prial: Let's talk about the end users a little bit. There's so much backdrop in this tech space: privacy, tracking, deep fakes, the proliferation of untruths. I think we'll stay focused on the end user and giving them a safe and reliable application that adds value and minimizes risks. One of the most prominent topics I saw from the Vector Institute is what you call, and I'm excited it has a hashtag, #TrustworthyAI, which seems to be both a research topic and a business practice. Before I dig into the piece parts with you, how important do you see this as a focus area?

Cameron Schuler: I think it's critical. Unfortunately, a lot of humanity is informed about AI through science fiction. I'm pretty sure your toaster's not going to grow legs, dive into the bathtub, and electrocute you. If we look at understanding that the AI world is about teamwork, it's about humans. So if you think about the positive side, things like precision medicine, something that's completely aligned to you, not something that's aligned to some other genetic characteristics that are nothing like you. If you think about learning, should you really spend a lot of time learning things you already get? Or should you spend more time learning the things you have a challenge with? Those are the types of things where that personalization, I think, will change the world in the future and is doing so already, so that's important, but that trusting AI is the hard piece. When you think about harms from AI, it can be something as simple as a model that is biased in nature because your training data was biased, and it tends to be because humans put that together. It could be something that just gives you a bad answer, or it could be something that... You know, we've seen online examples where an AI is put on Twitter and it starts to mimic some really bad traits of humanity. I don't think we get anywhere without humans saying, "This is making my life better," and part of that is ensuring that there is a trust component to this. There's been enough thought and enough effort put into it to make sure the outcomes are the right outcomes.

Jon Prial: We've got some components that you've got in #TrustworthyAI, so I want to work through them. The first one is fairness, so I'd like to start by asking if you see a difference between fairness, and you already mentioned potentially biased data, but between fairness and bias.

Cameron Schuler: There are going to be proper definitions, but let's talk about it in a practical sense. If you think about where they've found that hiring practices are strongly geared toward people that look like me, all right, that is clearly unfair and I would say that's a bias issue as well. Fairness to me would be giving everyone an equal opportunity. Bias would be discriminating against people, if that makes sense. If you look at it historically, if a consumer lender wouldn't lend to people in particular postal codes, that would definitely be a bias, but it also would be considered not to be fair. There are practical definitions around that, but when you think about what we really want, we want everyone on an even playing field.
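
One simple way to put a number on that "even playing field" is to compare a model's approval rates across groups, a check known as demographic parity. Here is a hedged illustration (the metric choice, data, and threshold are the editor's, not Cameron's):

    import numpy as np

    def approval_rate(decisions: np.ndarray) -> float:
        """Fraction of applicants the model approved (1 = approve)."""
        return float(np.mean(decisions))

    # Hypothetical lending decisions for applicants from two postal codes.
    decisions_area_a = np.array([1, 1, 0, 1, 1, 0, 1, 1])
    decisions_area_b = np.array([0, 0, 1, 0, 0, 1, 0, 0])

    gap = abs(approval_rate(decisions_area_a) - approval_rate(decisions_area_b))
    print(f"approval-rate gap between areas: {gap:.2f}")

    # A large gap doesn't prove discrimination by itself, but it is exactly
    # the kind of signal that should send you back to audit the training data.
    if gap > 0.2:  # the tolerance is an arbitrary illustration, not a standard
        print("gap exceeds tolerance; audit model and data for bias")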

Jon Prial: Do you feel a challenge to... The way you're describing that to me, there's a human side of bias and all of this was very human- oriented. It's been around well before technology, yet we're evolving to an extremely technical set of solutions. Is there ways to bridge that?


Jon Prial: You mentioned self-driving a little bit, which makes you think, and you talk about steering, where you steer. There are companies that are going to deliver a self-driving car without a steering wheel, which has to involve a great deal of trust, even more than sitting in a self-driving car where you still have a wheel in front of you. How it gets delivered is going to be interesting over these next few... let's say the next 10 years.

Cameron Schuler: Agreed.

Jon Prial: Talk a little about impact. You talked about the toaster with legs; I don't want it jumping into the bathtub. I may get a recommendation for a toaster to buy. That's probably a great AI solution. Yeah, someone says, "Here's a bunch of features and here's the toaster you want." Then again, if it gives me a bad toaster, I'll be all right. I'm not getting the toaster with the legs, of course. But then what about a medical diagnosis, perhaps? There, I get a little more concerned about the impact of getting things right for good end-user experiences. Thoughts about the impact of AI?

Cameron Schuler: There are two things I'm going to talk about in terms of that. One is that humans are imperfect. Your doctor can make a misdiagnosis. That can actually happen. But think about it in a different way. When I think about how the FDA, or Health Canada, would approve medical devices that have AI in them, that was a challenge because there's that cause-and-effect component, the transparency, how much you trust the way it works. That's a challenge. One of the first things I saw was actually around cardiac imaging, and it was neat in that it said a human would normally take an hour and a quarter to interpret this particular image. This machine can do it in 15 minutes. The hurdle wasn't that the machine needed to be perfect; the hurdle was that the machine needed to be as good as or better than a doctor. The very interesting thing about that is, do you want your doctor to spend an hour and 15 minutes diagnosing something that may or may not be correct and then spend five minutes with you? Or would you actually like the doctor to have close to an hour with you to actually talk about the next steps? That's the human aspect of it. Another example of this would be that there's stuff that's going to be fairly straightforward. A broken leg is a broken leg. Now, someone's going to probably disagree with that, but ultimately, for things that are very obvious in imaging, do you really need a doctor to look at that? Or do you need a system? Then think about augmented intelligence, or helping humans do their jobs better on the things that need human judgment because the machine can't decide. Ultimately, you could be more efficient by saying, "We really need a doctor to take a look at this because we're not sure what it is. Maybe we need some biopsies. Maybe it's a far more complicated case because nobody's seen it before." Think about using people's time more efficiently and making it more human, getting more time with the doctor.

Jon Prial: That all leads us to explainability, and I'll definitely conclude from hearing you that there are things that don't necessarily need to be explained. A broken leg is a broken leg. Image processing is very data-rich. I guess the story I've always heard is that it's almost impossible to write an algorithm, a true algorithm, to recognize a tree on the side of the road versus a person on the side of the road, yet I could feed a neural network zillions of images and it will absolutely tell the difference between a tree and a human. I don't need a lot of explanation of that, but there are times when I think we do need explainability. Does it have to do with the final impact of the decision? What are your thoughts on when explainability is more important than in other cases?

Cameron Schuler: I think you brought it up: it's risk. Let's go away from learning systems and just talk about intelligent systems. If you were going to get on an airplane and the pilot said, "You know what? The computer's not working, but let's go for it anyhow," you'd be pretty uncomfortable, and that pilot probably wouldn't be allowed to leave. What they did find is that if the pilot is too engaged and an emergency happens, they're fatigued; if the pilot is not engaged enough and an emergency happens, then they're not prepared for it. There's an ideal state in between those two, but ultimately, technology has made it safer. The number one cause of crashes where people die is still human error, but ultimately, flying, especially on commercial aircraft, has gotten safer over the years just due to protocols and making sure that there's trust, making sure they're well thought out. That doesn't always turn out the way you want it to, but ultimately, it's incredibly safe. If you think about that intelligent system, it's actually made our world a much better place in a very risky environment. Now, it comes back to: is this something that's going to operate on me as a human, or something that's going to potentially impact my life? Or is this something where I'm going to buy something and have to return it? Really, that risk profile is going to be pretty critically important as to the explainability of it.
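
For readers who want a concrete handle on explainability, here is one common, basic technique, permutation importance, shown as an editor's sketch with scikit-learn (the episode doesn't endorse any particular method): shuffle each input feature and measure how much the model's accuracy suffers, which reveals the inputs its decisions lean on.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # A built-in medical dataset keeps the example self-contained.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn; the features whose shuffling hurts
    # held-out accuracy the most are the ones the model relies on.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")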

Jon Prial: It's interesting. It makes me understand better. I never really thought about it. I know that there are rules for how long pilots are allowed to work and not work, or how long truck drivers can work or not work. It really comes down not to the driving of the truck, but to handling those edge cases in times of crisis, to make sure that they're fully prepared. Obviously, an hour on, an hour off might be optimal for that, but it's not a good business decision. I guess a little bit of subjectivity and objectivity is maybe the name of the game here. I'll think about judges issuing sentences. I mean, there are guidelines, and those are very objective, but at the same time, there's a clear subjective side of things. The judge is going to listen to what the defendant has to say, so there's an element where subjectivity is important, and I don't know that we'll ever... I think we want to maybe augment, but not replace. Same thing for maybe college applications. Are essays purely a subjective thing? Or are they becoming objective, too? What are your thoughts on how this gets used?

Cameron Schuler: Those are hard things. If memory serves me correctly, Napoleonic law was trying to have a very binary outcome, and humans just don't work that way. There are extenuating circumstances, and it depends on the country you live in. We even have two different legal systems in Canada; Quebec has a different one than the rest of Canada. But there are fundamental things that we can look at. Then it also ties into values, so I think there needs to be a case where we have an appeal system, too. If you feel a judgment wasn't fair, you go back. You could put something in that's AI-based, but then you have to go back to: is this biased? If we have an inherent bias toward a particular group of Canadians, or of any other country, who are incarcerated more often, then depending on how you train your model, it may incarcerate more of them just based on characteristics that you may not be taking into account. It doesn't mean that humans aren't biased as well, and we've seen cases of that in the Canadian legal system, so I think there's an opportunity to do that. Again, when we wrote the Pan-Canadian AI Strategy, we made sure there was an AI and society piece in it: how do you actually make sure humans are thought of when you're doing these things? I think there are opportunities for AI to participate in some of these, but taking the human out of the loop, and the judgment piece out of the loop, I think, has risk associated with it.

Jon Prial: I really like the fact that you say there's an appeals process, and I would argue, I know that if you get rejected by your insurance company, at least an American health insurance company, there's an appeals process. If we do build more autonomous-type systems and there's a mechanism in place to question them, that will force the explainability. Well, it'll force the research into biases or not. We could get there with a good, broad, holistic approach to things, so I think I'm encouraged. That's good.

Cameron Schuler: If you think about our justice system, too, we don't want people wrongly incarcerated. That's a pretty big thing, which means there are some people who are guilty that actually get away with it, and so it really comes back to values and what you really want out of the system in which you live.

Jon Prial: Another one I'm fascinated by, and I had not really spent any time cogitating on it: you talk about safety, protecting the physical, which I always thought was interesting because, to me, buying a toaster is not physical. It's a purchase. It's still a human action. Most things are actions. A self-driving car obviously implies there's something physical to it, and now I'm going to leave self-driving cars aside because I'm about to say you could have autonomous weapons systems. But protecting the physical gets kind of interesting, and I hadn't really thought much about it. Can you talk a little more about that, please?

Cameron Schuler: Yeah, so let's think of it in an industrial environment, so let's think about robotics in an industrial environment. You could have a robotic arm that could truly operate in a hemisphere, so 360 degrees around and 180 degrees, but in reality, it doesn't need to do that because there might be humans in place. The neat thing about a factory is you can reduce the number of variables. What I mean by that is you're not likely to have a helicopter crashing out of the sky, coming through the roof, and hitting the arm to cause some strange thing. Whereas if you're out in the real world, all sorts of other things can happen, so you reduce the number of variables. In a case like that, you could certainly constrain it to the point where, as long as it's operating in this particular area, it can do what it needs to. If it goes outside of this, then it needs to shut down, so there are lots of ways you can approach it. Again, it comes back to risk. If it's a welding robot, what's it doing welding a foot off of where it should be? You can put a bunch of controls in place where it would say, "This seems to be operating outside of this parameter. Therefore, you need to do something." Another example would be that you can have a very complex system, and every one of the interactions in that system is going to have parameters in which it operates. Sensors, voltage, pick whatever other thing. If you sit there and say, "All right, we know all these things together work well, so can we actually optimize within that to get a better system?" You can look at the other side of it as well: how do you take complex systems and get better outcomes from them?
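
A minimal sketch of that shut-down-outside-the-envelope idea might look like the following (the limits, names, and fallback behavior below are invented for illustration, not taken from any real controller):

    from dataclasses import dataclass

    @dataclass
    class Envelope:
        """Known-safe operating bounds for a robotic arm (all values invented)."""
        min_angle_deg: float = -90.0   # allowed arm rotation
        max_angle_deg: float = 90.0
        max_speed_deg_s: float = 30.0  # allowed joint speed

        def permits(self, angle_deg: float, speed_deg_s: float) -> bool:
            return (self.min_angle_deg <= angle_deg <= self.max_angle_deg
                    and abs(speed_deg_s) <= self.max_speed_deg_s)

    def control_step(envelope: Envelope, angle_deg: float, speed_deg_s: float) -> str:
        # The learned or optimized controller runs only inside the envelope;
        # the moment the state drifts outside it, the safe fallback is a hard stop.
        if envelope.permits(angle_deg, speed_deg_s):
            return "continue"
        return "shutdown: operating outside safe parameters"

    print(control_step(Envelope(), angle_deg=45.0, speed_deg_s=10.0))   # continue
    print(control_step(Envelope(), angle_deg=120.0, speed_deg_s=10.0))  # shutdown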

Jon Prial: We don't really need to worry about Asimov's Three Laws of Robotics yet?

Cameron Schuler: You know, I'm glad you brought that up. If somebody says that's really far off, I think that's a problem. I think we need to be able to have these conversations, and the conversations show we're thinking about these things. Humans traditionally have... It may not always feel like it, but the world gets better as time goes on. We have hiccups along the way, but go back a hundred years versus where we are today; I'd much rather be living today. Ultimately, I think it is important to have these discussions, because if you don't, people are going to come up with their own conjecture. Being part of that voice, and that was where the AI and society piece came from, we want to have a voice. We want to engage people. We want to make sure that it's well understood that we are thinking about humanity and not just the systems themselves. I think it's incredibly important.

Jon Prial: One indirect impact that I thought of that's not quite robotics, but it has an unbelievable human impact: I live in Vermont, and we end up after winter with mud season, and most of our roads in Vermont are not paved. The GPS systems that people are driving with do not realize they're putting them on unpaved roads. We have had cars stuck, and people stuck overnight, and near-death experiences, and that is not something that was written into any GPS algorithm yet. I don't know that any map says, "This is a dirt road versus a paved road, and it's after November or December, so you should not be on that dirt road," or, "It's rainy season, you should not be on that dirt road." There's a good safety case there as well, I think, for the physical.

Cameron Schuler: Well, I agree with you, and that comes back to: how do you design things to begin with? What assumptions are you making? I have a finance and economics background, so you would ask, all things being equal, what does this look like? If you're thinking, "It's a road, therefore it's like any other road," is that the right assumption? Again, there's that planning piece up front. A good project generally is well thought out. You'd spend 10% of your time building your plan, building the norms, the team, and all those other things. AI is no different. You really have to spend a lot of time thinking up front and then be able to test and check for unintended consequences.

Jon Prial: Excellent. The last one I want to hit from #TrustworthyAI is privacy, and it's been talked about before, but I just love talking to you about it. Anonymization isn't necessarily sufficient, so talk to me about why that's the case, please.

Cameron Schuler: A lot of things come from inference, and so anonymous data may not actually be anonymous. If you think about the values that we have, certainly in Canada and the U.S., privacy is way up there on the list. Other countries, not so much, but for us, it matters. With anonymization, you could actually release enough information that somebody can turn around and figure out who's behind it. Just a postal code in Canada or a ZIP code in the U.S., plus a couple of other characteristics, and it's, "I know who that is, that's Jon, or that's Cameron." Privacy has to be built in by design, and we do a lot of work on privacy-enhancing techniques. We work with companies to be able to actually extract data without needing all the other pieces. Again, these are open areas that haven't been solved, but they're critically important, so we spend quite a bit of time working on that.
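
To see why dropping names isn't enough, here is a small, hedged illustration (invented records, not a Vector technique) of the k-anonymity check: count how many records share each combination of quasi-identifiers, because a group of size one is effectively a named individual.

    from collections import Counter

    # "Anonymized" records: (postal-code prefix, birth year, gender),
    # with no names anywhere.
    records = [
        ("M5V", 1975, "M"),
        ("M5V", 1975, "M"),
        ("M5V", 1982, "F"),
        ("K1A", 1990, "F"),   # the only record with this combination
    ]

    group_sizes = Counter(records)
    for quasi_ids, size in group_sizes.items():
        if size == 1:
            print(f"re-identifiable: {quasi_ids} maps to exactly one person")

    # k-anonymity asks that every combination appear at least k times; the
    # K1A record fails even k=2, so removing names alone did not anonymize it.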

Jon Prial: Do you see some of the techniques and strategies from the finance industry bleeding into other industries now? Is everyone beginning to get the message? My argument, of course, is that finance is ahead of the game. I may be wrong. You could argue around that one as well.

Cameron Schuler: The answer is, it depends. We have things like banks actually not being able to share data, so if you think about enforcement, you can't see both sides of a transaction because they can't actually talk to each other. You have to infer: is this a nefarious transaction or not? If you had a way of not sharing the data, but sharing enough information that you could figure something out, that would be worthwhile. I do think our banks in Canada are quite progressive in terms of how they deploy and use AI, and there are lots of rules around how they can do that. We have large financial institutions who we work with. Roughly a third of our enterprise portfolio is finance: investment management, insurance, or banking. The things we do work on really are around: how do we make the world a better place?

Jon Prial: Now, when you talk about regulation in the States, across North America and Canada, you know, there are regulations or governance. As I think about maybe integrating and thinking across this space of fairness and explainability and safety and privacy, I feel like we're beginning to broaden the definition of MLOps, which is a relatively hot term. What's your sense of how that's evolving?

Cameron Schuler: Yeah, so that's a really good one. Think about how companies have approached AI, and I'm not talking about the Googles of the world that are leaders in AI, I'm talking about general industries. Some are behind. We have 29 large companies, companies that a lot of Americans and people throughout the world will recognize as well. We have roughly 20 companies that are on the leading edge of AI, on the unicorn path, and a bunch of them are at unicorn status. Then we have a number of small companies, and we talk to people outside the industry, so we get a purview across industries and sizes of company, predominantly Canadian-related, but certainly elsewhere in the world. That gives us an opportunity to see common problems, and MLOps certainly is one of them. If you think about how a lot of companies would have approached this, it's, "We're going to do proofs of concept. We're going to get lots of attention, lots of funding, and the data that we want. We're going to produce something and then we're going to throw it over the fence." It's going from that POC to something that is actually in production. There's a challenge related to that, and even some of it is structural. Do you have the right computing? Do you have data governance? Do you collect data one way today and change it tomorrow and nobody knows, so you're getting different results out of it? Do you have the right talent in place? Making AI accessible, I think, is really, really important, and what I mean by that is there are packages and libraries you can load up in things like Python to be able to use, but do you know enough not to get caught out? I talked about overfitting earlier. Do you train on a particular dataset and say you're done, but in reality, the next time somebody uses it, it doesn't work in any way, shape, or form the way it should? That MLOps piece is critical, and it includes things like what I described earlier around a pandemic, or anywhere things change over time. That's called model drift, and the changes can be either very extreme or quite subtle, but you need the processes in place to make sure that your models are still giving you the outcomes you need. Another one would just be your outcomes: can you do something with them? It's the change management with humans. Do you have the ability to actually go and influence this? I think those are incredibly important pieces that have to be a focus. MLOps is most definitely one of the next areas where there's a lot of energy going in, and it's about being able to use AI in a meaningful way.
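
As a hedged sketch of the monitoring side of that (the class, window size, and thresholds below are the editor's invention, not a described Vector system), a deployed model can keep a rolling window of live outcomes and flag when performance drops past a tolerance:

    from collections import deque

    class ModelMonitor:
        """Track live accuracy against the accuracy measured at deployment."""

        def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                     window: int = 1000):
            self.baseline = baseline_accuracy
            self.tolerance = tolerance
            self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

        def record(self, prediction, actual) -> None:
            self.outcomes.append(1 if prediction == actual else 0)

        def degraded(self) -> bool:
            if len(self.outcomes) < self.outcomes.maxlen:
                return False                      # not enough live data yet
            live_accuracy = sum(self.outcomes) / len(self.outcomes)
            return live_accuracy < self.baseline - self.tolerance

    monitor = ModelMonitor(baseline_accuracy=0.92)
    # In production: call monitor.record(model_prediction, ground_truth) for
    # each labeled example; when monitor.degraded() turns True, trigger a
    # retraining or human review rather than silently serving stale outputs.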

Jon Prial: One last question. You work with so many different partners and bring so many different skills to the table, and unless you're a large corporation, you're somewhere within the wheelhouse of our audience here. How about collaboration? How do you get companies to work with others?

Cameron Schuler: Yeah, that's a good question. Canada has more than five banks, but if you look at our five large banks, they have roughly 90% market share, and so we're not going to do anything that is competitive, for two reasons. One, nobody wants to go to jail. Two, it's a bad idea. Why would we do that? It's working on pre-competitive things, so it's, again, being able to look across industries. We find common challenges that companies have, and the analogy we would use is that we teach them to fish versus giving them a fish; replace "fish" with "AI." For us, it's that we will look at things like, as I described earlier, dataset shift because of the pandemic. Here's something that we can help with right away, so let's go do something related to that, or the privacy-enhancing techniques. There's more of a forecasting of what the future looks like: what do we think companies will need in the future? We'll have large companies working with small companies, but it's a very unique model. We have collaborative groups that get together, groups that could be competitors, to work on common techniques, and what they take back home they can then build on, and that's how they create their IP out of this. It really is a collaborative model.

Jon Prial: Not only do I want to thank you, I want to congratulate you. It's so clear that you and the Vector Institute have a true purpose, and I know we're going to hear so much more about this. Thanks for your time. It was great chatting with you.

Cameron Schuler: Thank you for the kind words, and I will say it is the Vector Institute and not me. I'm only part of the team, but thank you for the conversation.

DESCRIPTION

Artificial intelligence has been a game changer for businesses. This is the result of decades of effort to unlock its full potential. On this episode of the Georgian Impact podcast, we talk to one of the people helping make that happen: Cameron Schuler. Cameron is the Chief Commercialization Officer at the Vector Institute and is dedicated to advancing the transformative field of AI.

 

You’ll Hear About:

 

●  Cameron’s views on AI and ML, where it was and where it is today.

●  How working with AI is different from working with other tech.

●  Building trust in AI. 

●  The components of trustworthy AI.

●  The importance of building AI right in the legal system. 

●  Cases where explainability matters.

●  Protecting the physical when it comes to AI.

●  Why anonymization is insufficient in preserving privacy. 

●  How MLOps is evolving.



Today's Hosts

Jon Prial
Jessica Galang

Today's Guest

Cameron Schuler
Chief Commercialization Officer & VP Industry Innovation at Vector Institute