Tackling Digital Disinformation with Kathryn Harrison

Episode summary: It used to be you could trust what you saw. With the prevalence of deep fakes and other synthetic media, today it isn’t always so easy knowing what is real and what isn’t. Kathryn Harrison is our guest on this episode of the Georgian Impact Podcast. She is the founder and CEO of the DeepTrust Alliance and FixFake, a company with the data and expertise to find fakes and fight fraud.
The accessibility of deep fakes today, and how they are created, edited and manipulated.
04:44 MIN
The responsibility that companies in the synthetic media space have and the importance of guardrails.
02:21 MIN
DeepTrust Alliance’s criteria around the creation of synthetic media, to ensure it meets certain ethical standards.
02:36 MIN
Cheap fakes versus deep fakes, and the need for critical thinking when navigating synthetic media.
02:19 MIN
How social media companies need clarity in their policies surrounding deep fakes.
02:42 MIN
The challenges around detecting fakes with the growing number of ways to create and share content.
14:29 MIN
FixFake’s repository of known fakes, and how it helps to train their algorithms in order to identify new fakes.
07:25 MIN

Jon Prial: It wasn't that long ago that we opened a podcast with a recreation of my voice that we were calling Robot Jon. Audio only, of course, but we hope we got you thinking a bit about the potential issues with that technology. And there's no doubt that you've seen a number of different false videos. It doesn't take much to make simple modifications with cuts or speed changes, but the hard stuff can be done too, and technology is allowing people to do far more than putting a person's voice in a body and having it do all kinds of untoward things. Now, is this a consumer worry only? I think not. We've all heard the old saw that a happy customer tells one person, but an unhappy customer tells 10. And I bet if bad stuff like this happens, it's going to get shared with a lot more than 10 people. We've got quite the show for you today. I'm Jon Prial and welcome to the Georgian Impact Podcast. Today I'll be talking with Kathryn Harrison. She's the founder of the DeepTrust Alliance, as well as the founder of FixFake, a company with the data, tools and expertise to find fakes, fight fraud, and, I really like that they say this, fix the web. Bold? Yes. But is a deep fake possible in your ecosystem, maybe financial services, healthcare or more? Yes. Do you need some data to train your algorithms? She's got that. So buckle up for an amazing discussion about something that probably isn't on your radar screen yet, but the time is approaching. Kathryn, welcome. As I think about fakes, in my head I've got a spectrum from bad impact to good impact and, somewhere in the middle, some things I don't care about. So if I stay on the endpoints of that good-bad spectrum: we could do a wonderful job of educating students, and maybe they could see a fake of Abraham Lincoln giving the Gettysburg Address. Staying with the presidential theme, what if an American president said, "We have launched nuclear weapons on Russia," and it's fake, but before it gets caught, bad stuff happens? So help me understand your thoughts on the positives and the negatives of this space. And then maybe we'll do the I-don't-cares next.

Kathryn Harrison: So for the last hundred years, it was kind of expensive to create video and credible information, so you could largely believe what you saw or what you heard unless you went into the movie theater. Hollywood and sovereign governments were really the only two that had the computing power to be able to create credible fakes. Now you have that power in your smartphone. You can go on TikTok, you can become Elon Musk, or I can be Julia Roberts. On one hand, that's really cool because it allows us to communicate in new ways. On the other hand, it means that you can create pictures and videos of people doing and saying things they've never done in their lives. So there are a few different things that are necessary to help us, as a society, figure out what is real and what is not. First, there's the question of technology. Understanding where information comes from and how it has been created, edited, manipulated and distributed is critical, because if you're ever going to report that nuclear weapons have launched, you want to understand exactly where that information is coming from, how it's corroborated, et cetera.

Jon Prial: I never thought about it. That's the pedigree. That's different than the content, which may or may not be real, but you're talking about the pedigree, the metadata, so to speak. Cool.

Kathryn Harrison: So metadata is part of it, but you want to know who's creating it and why and how, because all of that is important. The second piece is the policies around how this information is used. There are lots of great ways to create entertaining content, but if it's being used to harm an individual, if the behavior is malicious, social engineering, extortion, harassment, that's a problem. There are already some laws on the books, and they need to be extended to cover the digital harms that are now possible. And then there's obviously education, right? People need to learn how to interpret and understand what they're seeing on the internet. We now all live fully digital lives, whether we like it or not, and we don't have a lot of information to help us parse what's real and what's not.

Jon Prial: There's a lot to unpack there. We talked about the source and the pedigree, and I think it's really important that you got to the harm. So I said there's the good, the bad and the I-don't-care. I don't know that I care that they make a movie with a dead actor. I don't know that I care that Carrie Fisher was in Star Wars after she passed, although by some definitions that's not a deep fake, maybe that's just CGI. Either way, I don't know that I care, right?

Kathryn Harrison: You might not care, but her estate certainly cares. Is she getting paid for the use of her likeness? There's a whole number of people that care quite a lot about whether or not she's being used even in film.

Jon Prial: I love it. That's great. Here's a question. Does anybody care, I know I don't care, if I click the button on Zoom that says touch up my appearance, which I still can't believe that exists, but is that safe or is that not safe?

Kathryn Harrison: That's absolutely safe. And one of the major ways people are using deep fake technology is on dating apps, touching up their profile pictures to make them look a little bit younger, a little bit thinner, a little bit nicer.

Jon Prial: So what's interesting is that the issue of harm is important. One of the editing tools we use for the podcast comes from a company called Descript. We give it the audio files, and actually video files as well, and you get back a transcript that I can edit. I can edit the text doc and it'll edit the audio for me, which is kind of cool. They created a piece of deep fake technology; we actually did a podcast with the founder of that tech. And what's interesting is that we created a Robot Jon for the podcast. We didn't do the full editing; it still sounded interesting, but it was kind of fake. What I liked about it were the permissions I had to grant them to do it. Not only did I have to read five or ten minutes' worth of pages to get all the phonemes they wanted from me, but I also had to give verbal authorization that I am the owner of this voice, that I recognize what could be done with it, and that I give them full authority. It was interesting. They were building what appeared to be thoughtful guardrails around this space. We talked about pedigree and we talked about people recognizing fakes, but there's a lot of responsibility on the companies themselves for how they enable this to happen.

Kathryn Harrison: One hundred percent. I think anyone in the synthetic media space, whether it's voice, audio, image, et cetera, has to have a set of rules and ethics around what data is going into the creation of these images, videos or audio. Companies like Lyrebird and Synthesia, and quite a number of others, have set up very clear rules and permissions, which is critical. But there are quite a number of open source software programs that you can basically download where there are no controls over what data is there. There is no way of auditing or watermarking whether it is actually your data, et cetera. So there are some very real risks to the synthetic media industry as a whole if they do not very clearly control the rights that go into the creation of these projects.

Jon Prial: Are you looking for some type of Good Housekeeping seal that companies could get from the DeepTrust Alliance to explain publicly that they've got the guardrails set up and they're doing this correctly?

Kathryn Harrison: Yeah. So the DeepTrust Alliance has built a set of criteria for the creation of synthetic media that meets certain ethical standards. Many of the key criteria in our framework you actually experienced when you created the fake voice with Lyrebird. First, ownership of the asset: whether it's a voice, a likeness or an image, you have to have the right to share that information. Second, you have to be able to audit and control who gets to use it down the line, so that it can't inadvertently be used against you to harm you, for social engineering, et cetera. And third, providing an audit trail of how the algorithm was developed, who has access, and how it is going to be used moving forward. Those criteria are critical so that individuals participating in synthetic media can feel comfortable about how their likeness or voice data is being used.

Jon Prial: So I'd like to just step back and get everyone to understand a little bit about the tech. It took me a while to figure all this out, and I came up with my own analogy. I can take my vinyl record, which is analog, and rip it, and I've now encoded it into digital. And of course when I play it, it decodes it and it works; I never thought about it until prepping for this podcast. So obviously we can encode my voice and we can encode Richard Nixon's voice. And I guess if we take my voice into Richard Nixon's decoder, I sort of just get Richard Nixon. I don't want to use the word simple and I don't want to use the word trivial, but is the tech that accessible for people now?

Kathryn Harrison: It's not yet that simple, but we're getting there. It still takes quite a bit of training to be able to get the source, so if you're the source of the dialogue, into Richard Nixon's actual voice. It generally takes about a dozen-plus hours to get that level of training. However, there are new technologies that have come out, even just in the last year, that have gone from days down to hours. We will be at minutes very soon. And if you think about the Snapchat and TikTok filters, those happen instantly. Now they're not as high caliber as having Richard Nixon deliver your podcast, but very quickly we're getting there.

Jon Prial: This is interesting. There are two paths I want to go with this; I think we'll stay on that high-caliber one. I saw in the paper from the alliance the idea of cheap fakes versus deep fakes, and obviously I'm hoping it would be easy to detect the cheap fakes and perhaps flag them, while the deep fakes take the higher tech, with perhaps a Good Housekeeping seal from some authority. Help me understand the differences in how you view them in terms of problems to be solved.

Kathryn Harrison: So let's just define them first so it's really clear. Cheap fakes are any sort of edit or manipulation that does not require AI to be created. Think about Photoshop and all of the tools that have existed for the last 20 years: being able to speed up, slow down, potentially miscontextualize information. Those are all cheap fakes. Deep fakes are images or video or audio that are created using AI technology. Very often people talk about something called a generative adversarial network, or GAN, which basically trains an algorithm to create these wholly new images. The problem, if you think about the 2020 election season, is that people keep saying, "Oh, we didn't see that many deep fakes." That's because people still fall for the cheap fakes without too much trouble. They're cheap and easy to make, and they represent probably still about 95% of the manipulations that you see out there. Now, they are easier to detect, but the problem is that people don't stop to really think, "What am I looking at? Where did this come from? How was this created?" They just hit share on Facebook or Instagram or Twitter, and suddenly it is out in the wild. So it's really important that we think about why people share this information. Deep fakes are certainly harder to detect, and there's more that needs to be done in order to provide the signals to identify that something has potentially been made with AI. But regardless of how you create the fake, you need to help people understand and be critical thinkers about the information they are consuming.
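
For readers who want to see the GAN idea Kathryn mentions in code, here is a minimal, hypothetical sketch in PyTorch, not something from the episode or from FixFake: a generator network learns to produce samples that a discriminator network can no longer tell apart from real data. The toy data, network sizes and hyperparameters are all illustrative assumptions.

```python
# Minimal GAN sketch (illustrative only, not FixFake or DeepTrust Alliance code).
# A generator learns to mimic samples from a simple "real" distribution
# while a discriminator learns to tell real from generated.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "real" media: points drawn from a 2-D Gaussian cluster.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # 1) Train the discriminator to separate real from generated samples.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(generator(torch.randn(5, 8)))  # samples should drift toward the real cluster
```

Real deep fake pipelines apply this same adversarial loop to images or audio rather than toy 2-D points, which is what makes the outputs so convincing.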

Jon Prial: So that's part of this issue of labeling. It's interesting. There was a very cool piece I heard where they took the speech that was written for Richard Nixon, I guess that's why he's in my head, in case the moon landing did not go successfully. Some researchers created a deep fake of Richard Nixon reading this astoundingly well-written piece about the astronauts resting in peace on the moon. Facebook would not allow that, even though it was for educational purposes; Facebook flagged it and wouldn't let it be posted because their algorithms caught it as a deep fake, however they caught it. And yet at the same time, you talked about speeding up and slowing down, they let the Nancy Pelosi video where she sounded drunk stay up there. So we have to have labeling. We've defined the guardrails, but there's another element to it: who shares, and who's in charge of curating these things.

Kathryn Harrison: We could spend a lot of time talking about social media policies as they relate to deep fakes. So far, most of the social media platforms have taken a fairly blunt instrument approach to deep fakes: they just say, "We're not allowing them." That said, you can find plenty of deep fakes with Nicolas Cage's face on every single movie, or Elon Musk put into 2001: A Space Odyssey. So their official policies often don't necessarily reflect what content can actually be shown. I think it's really important that they focus on the behavior and the intent of the video. And there are really critical questions about freedom of speech, about parody, about satire, because you don't want to shut down all of that creative outlet at the same time. So what we think is really important is that the social media platforms, and really all of the major technology companies, are very clear with their policies, and that they focus much more on behavior than on the technology that's used to create the information. They are most effective at stopping misinformation when they add friction. For example, Twitter has the messages and warnings about misinformation that may have come up in the last week or so. Labels certainly help, as do some of the flags saying, "Election results: a winner has been projected," et cetera. Each of those companies feels that they have proprietary ownership over how those policies are developed, but there needs to be a much more standard set of language, because bad actors can just put something on one platform where the policies are very lax, and then it gets shared and shared and shared, and suddenly it's on Facebook. So the public needs to get to a much simpler understanding of what these technologies are and what they are seeing.

Jon Prial: How do you view your role, probably in the alliance, or maybe just as a public speaker in this space, of helping to inform the world and sensitize people to this? This seems to be the issue, you're right. It doesn't matter if it's a deep fake or just a cheap fake. Once the cat's out of the bag and it goes viral... And you're right, Twitter is doing something, but it doesn't necessarily stop the message from getting out. It's almost, in my mind, no different than training people to stop clicking on links in their email. Like I tell my father, "Don't click on anything in email. Don't click."

Kathryn Harrison: Yes.

Jon Prial: Do we have to tell people not to believe anything? We can't do that, at the same time. What are your societal concerns, or worries, or positives for me?

Kathryn Harrison: So I think there can often be a sense among people that this is hopeless, and how are they ever going to trust anything they read again? And I say, that's not the takeaway. The takeaway is to think about what information you would share if you were going to stand in the middle of Times Square and tell the whole world, where it was clearly going to be tied to you and clearly going to get very broad distribution. You would be very thoughtful about what information you wanted to share. You would want to make sure that it was corroborated in a few different places so that it was actually true. You would be very careful about how you shared that information. We need to take some of the realities from the analog world, from our day-to-day lives, and start to bring them into the digital world. It's too easy, and there are too few consequences, for just sharing garbage willy-nilly, and it ends up spiraling into conspiracy theories and a lot of the problems that we've seen across COVID and the election season. So if people are very thoughtful before they share, you can actually stop the spread of a lot of misinformation.

Jon Prial: It's funny that you mention picturing yourself in the middle of Times Square. I've thought about that; some of this view of the public square is scary to me. There was the open comment period on net neutrality when the FCC was considering it, and 96% of the comments in support of killing net neutrality were from bots. That was an interesting data point I came across. And I'm wondering, do we have to shut down some aspects of the public square? Or maybe the way to do that is to get away from anonymity. Should we avoid anonymity? Does that change the world? Anonymous commenting, hiding behind funny Twitter handles. Do we need to be who we are? Does that help?

Kathryn Harrison: So that definitely helps. I mean, LinkedIn has a much lower misinformation problem than Twitter or any of the other platforms because it's so tied to people's actual data and actual lives and actual relationships. That said, they still have hundreds of thousands of fake accounts, which they are pulling down and taking off.

Jon Prial: Wow. I did not know that.

Kathryn Harrison: So certainly, having the identity attached can be very helpful in certain circumstances. But I want to also look at the flip side. If you think about human rights activists or whistleblowers, it's really important for them to be able to share information in an anonymous, but verified, way. So what I think is really critical is being able to show the provenance, that audit trail of how information is created, but in a privacy-protecting way, because otherwise we're going to end up in an Orwellian state where everything is completely under the control of either major technology companies or the government. So I think there need to be ways to verify without having an individual's identity tied to it.

Jon Prial: Right. I like that you show both sides of that coin, because you're right when I think about anonymity versus the safety behind it, when you're a whistleblower or a reporter in another country, perhaps. So are there legal actions that might make sense? I mean, we did get, and this is not really deep fake related, laws against revenge porn. That doesn't have to involve a deep fake, but that's a law that came into being, and maybe something happens as Section 230 gets looked at by next year's Congress.

Kathryn Harrison: So there are a number of laws that tackle deep fakes directly. Virginia, Texas and California all have a variety of different deep fake laws. Generally they focus on deep fakes in elections or politics, or deep fakes tied to sexual abuse. Now, those laws are very new; they've existed for probably the last 18 to 24 months, and there have not really been any cases tried under them. Laws around the behavior are really important; I think the behavior is the most important piece. The problem, though, is that laws as a mitigation technique are very slow, and the harm is already done. So I think we do need laws. I think that companies often will only respond when there is actually a regulation or a law that they have to adhere to. But we also have to recognize that policies and laws in and of themselves are not sufficient to stop the harm. For the average person, who doesn't necessarily have the resources to mount a lawsuit, they may actually have no impact at all.

Jon Prial: Interesting. So part of the problem, but not the whole thing. We talked about people's awareness; we have to highlight fakes somehow, and in the right way. I was excited to read somewhere, I guess it was during the UK general election, that they made deep fakes of Boris Johnson endorsing Jeremy Corbyn, and Jeremy Corbyn endorsing Boris Johnson, as a way to say, "Hey, don't believe everything you see." That was a pretty cool public service approach.

Kathryn Harrison: There is another great one of Kim Jong-un talking about voting and the role of democracy, which is hysterical. I think deep fakes are awesome; I just worry about how they can be used to harm people, and I want to make sure that individuals, organizations and society as a whole have more safeguards in place. I mean, this is like cars, right? When you first had the Model T, there were no seatbelts, there were none of those safety features. It wasn't until cars got faster and faster and people started dying that it became very clear you needed seatbelts and airbags and other things, which don't actually tie to the performance of the car but keep the people in those cars alive. We're in a very similar situation in the media ecosystem. We can now share information at light speed, but there are no safety belts. And I think that's what we are starting to see and what we need to begin to build.

Jon Prial: Excellent. So in terms of prioritization, let me give you three options; I'm not sure you'll like my categorization. One is political: driving ideas, right or wrong, that could be false. Then there's a social aspect of hurting people; there are all kinds of things you can do to hurt other people. And then there's a social aspect of helping yourself, like putting your Fitbit on your dog's leg to get a lower health insurance rate, or hacking your car data to get cheaper driving insurance. I don't know, there are a couple of aspects to this; obviously this world is so wide. What do you focus on first? You can say all of the above, but I don't know.

Kathryn Harrison: What do you focus on first? So the DeepTrust Alliance is really focused right now on the problem of deep fake porn, because it is incredibly insidious, incredibly widespread, and has the ability to significantly damage individual lives. It's also about 96% of the deep fakes that are out there. So we are focused there because it is a real, concrete problem that is impacting people today. That, I think, is really, really important. And we're going to start to see deep fakes be part of not just individual harms, but security breaches; they're going to start to create new types of risks to organizations of all types. By focusing first on how individuals are portrayed, I think we begin to open up a number of different use cases you can start to solve that have not only individual implications, but also commercial and industry implications. Politics is also incredibly important, but incredibly difficult to do effectively. There is just so much angst about this space, and getting to an agreed-upon set of facts is not straightforward at all. And the incentives are not really aligned to solve this problem in politics. So as much as democracy needs a foundation of key truths that we all hold to be true, that is struggling at the moment. I think journalism is playing a very critical role, but there are other issues with that. So it is a very important topic, but it needs a lot of different stakeholders to drive it.

Jon Prial: Yeah, for sure. You mentioned earlier on about creating things, and my mind goes to registration or some degree of invisible watermarking. I was involved in a digital library project where there was watermarking to prevent copies, and obviously some type of copy protection gets built into these things, but that may not be the case here. What are the types of things you're thinking about at the front end that might help, and does it involve a blockchain, for example?

Kathryn Harrison: Great question. So I really think that understanding the provenance of information is critical, and that means being able to start at the camera level, literally at the device which is registering the image or the video. We now have LiDAR on our phones. We have trusted execution and compute on our cameras. So you can actually validate when and where and how a photo was taken or a video was created. You don't necessarily have to tie an individual's identity to that, but being able to verify it is critical. And I see a world in which, similar to the little SSL lock symbol that tells you it's safe to put your credit card details into a website, there's a verification indicator that says, "We have the provenance of this media asset. We know where it started and how it was created." Blockchain is certainly one way you can do that if you want to do it in a decentralized way, and I think that's really important to help avoid the sort of Orwellian state I talked about. But there are lots of different ways it can be done, and individual organizations can do it through their own centralized systems if they want. But getting to those kinds of indicators, so that decision-makers can understand where this information came from, is critical.
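
To make the idea of device-level provenance concrete, here is a minimal, hypothetical sketch in Python. It is not an implementation of any real standard, of the Content Authenticity Initiative, or of FixFake's product: the capturing device signs a hash of the media bytes plus capture metadata, and anyone holding the corresponding key can later confirm the asset has not been altered. The key handling, field names and HMAC construction are illustrative assumptions; a production system would use public-key signatures backed by trusted hardware so that verifiers never need the secret key.

```python
# Hypothetical provenance sketch: a capture device signs a hash of the media
# bytes plus capture metadata, so the asset can later be checked for tampering.
import hashlib, hmac, json, time

DEVICE_KEY = b"secret-key-held-in-trusted-hardware"  # stand-in for a device key

def sign_capture(media_bytes: bytes, device_id: str) -> dict:
    record = {
        "device_id": device_id,
        "captured_at": time.time(),
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(media_bytes: bytes, record: dict) -> bool:
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["content_sha256"] == hashlib.sha256(media_bytes).hexdigest())

photo = b"...raw image bytes..."
provenance = sign_capture(photo, device_id="camera-123")
print(verify_capture(photo, provenance))            # True: content untouched
print(verify_capture(photo + b"edit", provenance))  # False: content changed
```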

Jon Prial: Now, understanding this provenance, I'm going to go back to cheap and deep fakes again. Deep fakes take a lot of resources and are expensive; you mentioned that the movie industry and governments, for example, were the ones with the resources to do something like that early on. What I actually think about, and I don't know where I read it, is that it's obviously hard to counterfeit money now. Money's got all kinds of cool things in it: colored threads, all these bands. You've got to be a real professional. You can't make a cheap fake of counterfeit money; it's got to be a deep fake.

Kathryn Harrison: Correct.

Jon Prial: And if you've got that level of tech, maybe you can do more of the provenance stuff. Yet at the same time, I think maybe there is a way for every iPhone movie, anything that gets produced by iMovie or some Adobe tool, and hopefully they're part of your alliance, to capture time and place and MAC addresses and really all the creation details. So I guess the first question is, does this have to work for both cheap and deep? And my second question is, even if you do know the provenance, and I created something, and then somebody gets it and edits it, how does my provenance get taken away? Because if I made a good video and put it on the web, somebody could take it and edit it, and now how does that next edit get caught, I guess?

Kathryn Harrison: There are lots of different efforts to standardize how content is created and what information is shared. The DeepTrust Alliance works with Adobe on the Content Authenticity Initiative; we've commented on their white paper. The W3C has a group that's focused on credibility standards. There are a number of different efforts going on to set the standards for how information gets transferred. Because today, one of the biggest problems, whether it's a deep fake or a cheap fake, doesn't matter, is that there are thousands and thousands of ways to create content. There is no single way in which information is uploaded, shared or compressed, and so detection is extremely difficult unless you know some information about how the content was created. That's one of the biggest challenges we face today. There's work going on at Qualcomm and at a number of the other hardware companies to make this verifiable at the hardware level, which is critical, and it's really important that we be able to track those pieces from the device all the way through to distribution. I think having those types of watermarks or identifiers is key. The problem today is that you can screenshot an image or a video and suddenly you've started a whole new chain of provenance, right? That becomes a fresh image. So one of the reasons I think it's important to start to have registries of content is so that you can compare it to what else is out there and what duplicates may exist, so that you can figure out: did somebody try to change just one pixel to get through some sort of detection? How semantically similar is it? Because very often, as a human, you're going to look at three images that look exactly the same, but the ones and zeros are not the same. And that's what's really critical.
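
A perceptual hash is one common way to capture this "looks the same even though the ones and zeros differ" idea. Below is a minimal, hypothetical difference-hash (dHash) sketch in Python, assuming the Pillow imaging library is installed; it illustrates the general technique, not how FixFake actually fingerprints images, and the file names in the commented usage are placeholders.

```python
# Perceptual-hash sketch (illustrative only): a simple "difference hash"
# that stays nearly identical when an image is screenshotted, recompressed,
# or has a pixel tweaked, even though the raw bytes differ.
from PIL import Image  # assumes the Pillow library is installed

def dhash(path: str, hash_size: int = 8) -> int:
    # Shrink to (hash_size+1) x hash_size grayscale, then compare neighbours.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Two files that "look the same" should have a small Hamming distance,
# while unrelated images land far apart (placeholder file names):
# print(hamming(dhash("original.jpg"), dhash("screenshot.png")))
```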

Jon Prial: This is neat. So I want to talk a little bit about FixFake. We have your nonprofit, the alliance, and then the company, which obviously is in the same space. So two questions, here we go again with a pair of questions. I kind of like that you just mentioned the screenshot. That's another fork of the content; from my open source days, that's just another fork, and all of a sudden something new is created, and some tool was used to create that screenshot. I never think for a second that a screenshot is another fork of this content, but it is, and maybe we have to get all of these companies to buy in to putting the provenance there. I guess in all cases, because you talked about three images that are not the same although they look the same to me, are there underlying technical artifacts that allow you to find the differences? And if so, is FixFake one of the companies that could do that? Are you going to offer a bunch of validation APIs? So help me understand a little bit about the content, and then how the company is going to work with the content, please.

Kathryn Harrison: Yeah, absolutely. So FixFake is working on solving this problem for images of products. Very often, if somebody is selling something counterfeit, they'll take a real image and then put it on lots of different websites. So FixFake is doing three things. First, we're helping with that verification layer, so when an image gets taken, you can verify that it is actually this person, this product, et cetera, in the image. Second, we are doing duplicate detection, so we have a way of identifying similarities across images. There are exact image matches and then there are similar ones, which is the example I talked about, and there is computer vision technology that we are using to establish a graph of how similar different images are to one another. That's critical to overcome the screenshot problem we talked about. And finally, we've aggregated a dozen forensics algorithms to identify different types of manipulation, whether it's simple things like different types of compression, copy-splice, or more sophisticated deep fake style manipulations. So what we are helping to do is really identify, at the photo level, what is real and how it is being edited or manipulated.
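
As a rough illustration of the "similarity graph" idea, and again a hypothetical sketch rather than FixFake's actual pipeline, suppose each image already has a 64-bit perceptual fingerprint like the dHash above; near-duplicates can then be linked whenever their Hamming distance falls below a threshold. The fingerprints, file names and threshold below are made-up values.

```python
# Sketch of a similarity graph over image fingerprints (illustrative only).
# Each image is assumed to already have a 64-bit perceptual hash; images are
# linked as near-duplicates when their Hamming distance is under a threshold.
from itertools import combinations

# Hypothetical fingerprints: a screenshot or recompressed copy differs by a few bits.
hashes = {
    "listing_photo.jpg":   0xF0E1D2C3B4A59687,
    "screenshot_copy.png": 0xF0E1D2C3B4A59685,  # near-duplicate of the listing photo
    "unrelated_photo.jpg": 0x123456789ABCDEF0,
}

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

THRESHOLD = 10  # bits of difference tolerated before images count as different

edges = [
    (x, y, hamming(hx, hy))
    for (x, hx), (y, hy) in combinations(hashes.items(), 2)
    if hamming(hx, hy) <= THRESHOLD
]
print(edges)  # [('listing_photo.jpg', 'screenshot_copy.png', 1)]
```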

Jon Prial: And not only do you talk about these different techniques and algorithms, you also say you have the world's largest database of manipulated, stolen and deep fake media. Talk to me about that database, and please don't ever let me look in it.

Kathryn Harrison: So part of what we have done to help build FixFake is figure out which images are real and which are manipulated, and start to build that repository of known fakes, because part of what we need to be able to do is identify much more quickly when similar types of things begin to crop up. We've done that through a variety of projects, first and foremost identifying things that have been created with deep fake technology, because that's really where we started. Now we're starting to look at a whole variety of different fake and manipulated images, not just deep fakes, and we're constantly adding to it. We see this as a repository to help train our algorithms and to help identify fakes when they come up.

Jon Prial: And it's interesting, you not only have the tech, but I see there's a lot of consulting that the company does too, which I think is great, because we've talked about how much learning there is for consumers, and obviously there's learning for CEOs to figure this out. So talk to me about some of the executive communications and training that you know you need to do.

Kathryn Harrison: So what's critical is for people to even be aware that this is an issue. If you talk to most executives at companies, it's only in the last six to nine months that they've even really heard of a deep fake and understood that this is a possibility in their ecosystems. So we spend a lot of time making sure, first and foremost, that they understand what the technology is, why it matters and how it is likely to impact their businesses. There are certain industries where it's obviously front and center; the media industry has been thinking about this and worrying about this for an extremely long time. But as it becomes very easy to create synthetic identities, which means using a fake picture with someone's real details, you're going to start to see far more attacks, breaches and problematic accounts that will impact financial services, insurance, e-commerce and healthcare. Most of these industries are not thinking about this set of emerging threats. So we've hosted a series of events, we do roundtables, we have conversations to really help with a fundamental understanding of what the threats are and what potential solutions executives and companies can start to think about as part of their overall security frameworks, because this is really a new threat that is coming out of left field for a lot of them and isn't even on their radar.

Jon Prial: Wow. So CEOs have probably figured out that security matters; they're definitely aware of that, they've got chief security officers. They recognize that their relationship with their customers is critical and should not be broken, and that you've got to establish a bond of trust between the company and the consumer. And here is another ogre under the bridge threatening another part of their business, so they really need to pay attention and should take a look at what's going on with the alliance and what FixFake can do. Kathryn Harrison, this was such a great conversation. Thank you so much for taking the time. Appreciate it.

Kathryn Harrison: Absolutely. It's been my pleasure. I really enjoyed our conversation.


Today's Hosts

Jon Prial | Jessica Galang

Today's Guest

Kathryn Harrison