Episode 107: Information Privacy for an Information Age

This is a podcast episode titled, Episode 107: Information Privacy for an Information Age.

Jon Prial: We have a very special guest on our podcast. With us today is Ari Ezra Waldman. Ari is a professor of law and the director of the Innovation Center for Law and Technology at New York Law School. He's also the founder and director of the Institute for Cyber Safety, which includes the first-of-its-kind law school pro bono clinic representing victims of online harassment. Professor Waldman is an internationally recognized thought leader on privacy and online safety. In March of 2018, he published a book, "Privacy as Trust: Information Privacy for an Information Age," and more recently, he had an op-ed posted as part of the New York Times Privacy Project. Now I'm personally excited about our conversation because a few years ago, I wrote a Georgian blog post after seeing the New York Public Theater's production of a play called Privacy. I enjoyed the play as a play, but it further expanded my mental horizons on privacy to the point that I had to sit down and write that post. Starring Daniel Radcliffe, it not only told a compelling story about his character, it wove in a great deal of real-time audience interaction, which was primarily focused on each audience member's personal data. The clever conceit was that at times, the play was stopped, and in context, the production showed how much supposedly private information could be dug up on audience members. We saw images of ourselves getting our tickets scanned, and then some audience members' homes were projected onto the backdrop of the stage. It was all very compelling stuff. So why am I excited? Because it turns out that Ari was a consultant in the production of this show. Now I'm not sure whether I should be happy to talk to him or just give him a piece of my mind. Kidding. So with his rich background on privacy, the law, and how our lives are impacted by technology, let's buckle in and start our chat with Ari. I'm Jon Prial, and welcome to the Georgian Impact Podcast. Welcome to Georgian's Impact Podcast, Ari.

Ari Ezra Waldman: Thank you so much for having me. It's an honor to be a part of the program.

Jon Prial: So Ari, let me talk about privacy a little bit, even before we get to technology, because you talk a lot about intimacy. So let's talk about intimacy between two people. I don't ask my friend to sign a nondisclosure agreement before I tell that person that I don't like someone else, but I don't expect my friend to go around blabbing it. And it's sort of the same, maybe, for a smaller group of people, perhaps a support group. What's your take on intimacy and what our expectations of people are?

Ari Ezra Waldman: I think you're exactly right. We tend to think, especially lawyers tend to think, that documents like confidentiality agreements are what make privacy, but that's really not true. All that documents like a confidentiality agreement or a non-disclosure agreement are trying to do is recreate the kind of expectation of confidence that we have with other people. And it's not just the kind of expectations that we have with people that we love or people that we know. So for example, you may share your PIN or your social security number or your salary information or your deepest sexual secrets with your partner or with your best friend or whomever, someone that you know quite well, someone that you're confident will not share your information. But think about how you also have expectations about strangers on the street. You expect that strangers are not going to stare at you or invade your personal space, even though someone that you know very well would be okay invading your personal space.

Jon Prial: Interesting. Now there are times when there are some agreements kind of supporting it. And you mentioned doctors. I have conversations with my doctor, and if I walk around a hospital or a doctor's office, you see signs up everywhere saying, "Don't speak privately, walk three steps, stand behind this yellow line." And there are HIPAA laws that protect us a little bit. So sometimes the laws seem to, I don't know, seem to work?

Ari Ezra Waldman: Well, let's say that HIPAA does work, even though HIPAA only applies to very few types of entities, like doctors and healthcare organizations. It's the kind of privacy law that most of us have the most interaction with, because we have to sign a form every time we go to a doctor's office. But think about it this way. Why shouldn't we have the same type of expectations with technology companies as we have with doctors holding our information? Doctors and lawyers and estate planners are trustees, or to use the legal term, fiduciaries of our information. We give over information to them in order for them to help us, to use their expertise in order to serve our interests. Companies like Facebook are taking our data. Under current law, they're doing it to serve their own interests on our backs and hurting us in the process. I think what the law needs to do is to treat companies like Facebook far more like the law treats doctors and lawyers with respect to their relationships to us and their clients.

Jon Prial: Interesting. So I'll come back to that, because I think that's going to be an interesting way for us to close, on how companies should be behaving. And I really do like this thought that, you're right, HIPAA is quite narrow in that sense. When I think about the examples that we've had here, you've talked about sharing something very intimate with a loved one. We didn't use to have revenge porn laws. I could share something very intimate with a partner, but then the relationship ends. Now we have laws, in 38 states I think you wrote, and those laws evolved because of a change in expectations. Why do you think we got to that point with the laws?

Ari Ezra Waldman: Well, I think that changed because of changes in technology. There's nothing new about sharing intimate information with another person. The example I give to my students every year is the published letters between Abigail Adams and John Adams, written when John was away at the Continental Congress. These are private letters that are published in a book, and they're out there for anyone to see. They were missing each other for months on end, and some of those letters were pretty salacious for their time. And you can bet that if either Abigail or John had access to a front-facing camera, they would have shared selfies with one another, because that's how they would have expressed their love and their longing for each other in the technology of the day. And we express the same kind of feelings in the technology that we have. The problem is that the same type of opportunistic and mischievous behavior that has always existed is probably supercharged by the feeling of anonymity and the distance that today's social technologies provide. So what we have here is not anything new in terms of social experience, but we have new technologies that have allowed mischievous and opportunistic behavior to harm us more. And it just so happens that victimization by things like non-consensual pornography and other forms of online harassment is gendered. Women are the overwhelming majority of victims. LGBTQ persons are victims at higher rates than everyone else. These are significant ways that people, mostly men, can keep vulnerable and marginalized populations down.

Jon Prial: Wow. Now, when you talk about these kinds of posts, these are public disclosures. So what are your thoughts if, for example, someone's on disability and posts pictures of themselves playing basketball? It's a public disclosure. I've told the government that I need to be in a wheelchair, or whatever the circumstances might be. They make a choice to do things publicly. How does that work?

Ari Ezra Waldman: Right. I think that's a really good question. The privacy of information depends on context. So there's no hard and fast rule, because as you can imagine, we can disclose information to a few people, to one person, even to a hundred people, and still have the expectation that everyone we're sharing it with is going to behave discreetly or behave confidentially. That's the example that you mentioned earlier, of something like a support group. If you go to an Alcoholics Anonymous support group, all those people are strangers to you, except for the fact that you know that they also have a difficult relationship with alcohol, and you're supposed to share intimate details in those meetings. But because of the rules of Alcoholics Anonymous, you have expectations that that information isn't going to go far. So you can't make a rule that says that if you disclose your information to even just one other person, you have no privacy interest in it, because then privacy is no different than secrecy, and the only way to have privacy is to live on an island by yourself, either literally or figuratively. However, there's a big difference between posting a publicly accessible photo or video of yourself playing basketball on Facebook while claiming disability from the government, and disclosing your HIV status to your 10 closest friends. So I think it depends on context and it depends on, as I describe in my book, privacy as trust. It depends on the expectations of trust that emerge out of the social situation. Alcoholics Anonymous, friends, even strangers in certain contexts create certain expectations of what's going to happen after that information is disclosed. That kind of expectation doesn't exist when you post something publicly on Facebook. Now, the interesting question is, does it exist when you post something on Facebook but only make it accessible to your 10 closest friends or a subgroup, which you can do on Facebook? In some ways, you could say that that's very similar to sharing it with your five closest friends. In another respect, that disclosure affects Facebook's algorithm and affects what the system shares with other people. That's how something like a young adolescent joining an LGBTQ support group, which she did privately, turns up in her father's Facebook feed, because Facebook knew that the two were related. Facebook's algorithm figured, well, they should know information about each other, even though the group was a closed, private group. So there are really tough questions that courts and fact finders have to analyze every time one of these cases comes through. The problem, though, is that many of them don't. Many of them just say, oh, well, you disclosed this information to one other person, your privacy interests are over, and then you lose the case. And I think that's wrong.

Jon Prial: Interesting. So let me take this to a higher level then, and correct me if I'm misstating you, but your argument is that privacy, which obviously can be messy to say the least, is better understood when it's looked at through this lens of trust. And maybe when we view it with trust top of mind, it changes the way you might look at things. Is that a fair way to put it?

Ari Ezra Waldman: Yeah. Privacy, or the right to privacy, the way we protect privacy, should be viewed through the lens of trust. The idea being that information disclosed in contexts of trust should still be protected as private. You should still have rights to it as private information, even though you've disclosed it to someone else. I think that's the way to look at it.

Jon Prial: And I also want to think about it, because you talked about Facebook making a decision to share this information with this young person's father, for example. So there's an element of judgment here, and rules and judgment and guidelines are all kind of variable here. A piece of artwork that has a naked man or naked woman is not the same, I would argue, as an intimate photo. And I want to tip my hat to Felix Salmon and the Slate Money podcast, because he ranted a bit recently about YouTube, and his argument was that you can't have hard and fast rules about a naked body, but rather you should step back and create a framework of guidelines, and then there can be some judgment under those guidelines. So do you see that as a way to help build trust? Can that judgment ever be algorithmic? Or do you always need humans?

Ari Ezra Waldman: The insightful point that you're getting to is really about how we do effective, fair content moderation today, whether it's YouTube taking down videos that include women breastfeeding or Facebook not taking down videos that are fake. And I think you're right, it's hard to create hard and fast rules, not just about what is appropriate for a given community, but about what should be kept private and what shouldn't, because context matters. Facebook and YouTube have a terrible history of over-censoring a woman's body and over-censoring queer and LGBT artists and LGBT art, especially if it has a sexual nature. There are tons of stories, and there's also statistical evidence, of over-censorship of women breastfeeding, even though it's entirely natural and it's not sexual at all, and it shouldn't violate community standards just because it involves a breast. And there are also notorious examples of under-censoring hate and harassment, whether it's from the so-called alt-right or racists and bigots and neo-Nazis, because the content moderation principles of a lot of these companies are at least founded upon free speech principles, because all the people that made these rules are lawyers. But you're right that a single rule is what algorithms are best at. You take a policy, you translate it into code, you go from inputs to outputs, and algorithms can do that. But when it comes to assessing and evaluating whether a particular disclosure in context is harmful, or should be held private, or is a violation of intellectual property rights, or what have you, humans have to be involved. And that doesn't mean that once you have a human involved, content moderation is going to be easy. Content moderation is a hard job. The problem, really, is that Facebook can hire 100,000 people to handle content moderation, but if its mission, if its principles, if its content moderation standards are based on the mission of protecting speech over and above any other value like safety or privacy, then you're going to get more bad content and less safety for marginalized populations.
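
Ari's split between what algorithms do well (following a single rule) and what needs human judgment (weighing context) can be made concrete with a small sketch. Everything here is a hypothetical illustration, not any platform's actual moderation system: clear-cut rules are automated, and anything context-dependent is routed to a human review queue.

```python
# A hedged, hypothetical sketch of the split described above: hard rules run
# automatically, while anything needing contextual judgment goes to humans.
from dataclasses import dataclass
from typing import Literal

Decision = Literal["allow", "remove", "human_review"]

@dataclass
class Post:
    text: str
    reported_by_users: int = 0

# Placeholder for the small set of unambiguous policy violations.
BANNED_PHRASES = {"obvious slur"}

def triage(post: Post) -> Decision:
    """Translate the easy parts of policy into code; defer context to people."""
    if any(phrase in post.text.lower() for phrase in BANNED_PHRASES):
        return "remove"  # unambiguous rule, safe to automate
    if post.reported_by_users > 0:
        return "human_review"  # context-dependent: harm, intent, setting
    return "allow"

if __name__ == "__main__":
    print(triage(Post("holiday photos")))  # allow
    print(triage(Post("is this harassment?", reported_by_users=3)))  # human_review
```

The design choice the sketch illustrates is that the review queue, not the rule set, is where values like safety and privacy get weighed, which is why the mission handed to the human reviewers matters as much as the code.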

Jon Prial: So I want to bring this up to kind of a CEO message level. In your op-ed, which we'll have links to in the show notes, titled "Queer Dating Apps Are Unsafe by Design," you really focus on the design choices being made. I'll just give three examples and have you comment on them. One app made a commitment to privacy by designing automatic deletion of all communications the moment the user deleted their account. Another app said they're going to take all complaints: you flag an issue, and within 24 hours they're going to fix it. But in your research, you've talked to lots of different people, and for the third app, you show that one individual sent over 100 complaints about harassment and got ignored. What would you tell a product team in terms of how to approach their business?

Ari Ezra Waldman: Right. So this doesn't just apply to dating apps; it applies to any company that collects information on their customers or their users. If a company wants to maintain the trust of their users, which is essential for continued and sustained disclosure and participation, you have to intimately connect safety with the function and design of the app. Those are some examples. Instead of promising to delete data after two years, just design the platform such that in two years, the data gets deleted. That's what we would call privacy by design. It's in the code and it's in the technological structure. But in addition to the technological structure, there are also administrative and organizational things that a company can do to build trust with consumers. So I've advised a lot of companies about how to even structure design teams. My research, which is going to be part of my new book that I'm writing now, hopefully coming out in a couple of years, is about how products get designed from the ground up, and why, despite company executives saying they care so much about privacy, technologies are still being designed without privacy in mind. One of the main reasons is that design happens almost entirely at the engineering level. In teams where everyone is an engineer, almost everyone is also a white male engineer, almost everyone's boss is an engineer, and their boss is an engineer, and privacy issues and safety issues have to trickle up through that filter. Only when they get through that filter do they eventually get, if at all, to a lawyer or a privacy professional. So what I've advised a lot of companies is you need a lawyer or a privacy professional inside the design meeting, inside the design room. And when people push back and say, "Well, that's going to slow down the design process," because people are going to have to explain to a non-technologist what's going on, my response is, one, I think that's exaggerated, and two, that's a good idea. Slow it down, because then you're going to avoid the multi-million dollar lawsuit afterward by designing products from the beginning that garner and develop and maintain trust with your users.
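
One way to picture "design the platform such that the data gets deleted," rather than merely promising it in a policy, is a minimal sketch like the one below. The table names, the two-year retention window, and the function names are assumptions for illustration only, not taken from any app discussed in the episode; the point is simply that expiry and account deletion are enforced by the system itself.

```python
# A minimal privacy-by-design sketch: retention and deletion live in the code,
# not in a policy document. All names and the retention window are hypothetical.
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365 * 2)  # assumed two-year retention window

def init(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY)")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id INTEGER PRIMARY KEY,
               user_id INTEGER NOT NULL REFERENCES users(id),
               body TEXT NOT NULL,
               created_at TEXT NOT NULL
           )"""
    )

def purge_expired(conn: sqlite3.Connection, now: datetime) -> int:
    """Delete messages older than the retention window on every run."""
    cutoff = (now - RETENTION).isoformat()
    return conn.execute("DELETE FROM messages WHERE created_at < ?", (cutoff,)).rowcount

def delete_account(conn: sqlite3.Connection, user_id: int) -> None:
    """Deleting an account removes the user's communications immediately."""
    conn.execute("DELETE FROM messages WHERE user_id = ?", (user_id,))
    conn.execute("DELETE FROM users WHERE id = ?", (user_id,))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    init(conn)
    conn.execute("INSERT INTO users (id) VALUES (1)")
    old = (datetime.now(timezone.utc) - timedelta(days=800)).isoformat()
    conn.execute(
        "INSERT INTO messages (user_id, body, created_at) VALUES (1, 'hi', ?)",
        (old,),
    )
    print("purged:", purge_expired(conn, datetime.now(timezone.utc)))  # purged: 1
    delete_account(conn, 1)
```

In practice a scheduled job would call purge_expired, so the two-year promise holds without anyone having to read the privacy policy.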

Jon Prial: So I love that you mentioned privacy by design. Ann Cavoukian has been a guest on our podcast and she's spoken to a number of our portfolio companies; she's been fabulous about it. The one role on this product team that you didn't mention, but that we've mentioned on a couple of other podcasts when talking about machine learning, is not only lawyers or privacy professionals but sociologists, to think about those interfaces that we're trying to get to.

Ari Ezra Waldman: I agree with that 100%. I only left that out because people tend to laugh when I say that a sociologist should be on a design team. But as a sociologist myself, I think experts on how humans interact with computers, human-computer interaction researchers, anthropologists, sociologists, even psychologists, belong there: people who understand how technology occupies space in a social world, people who understand how design patterns actually manipulate users. Companies have for too many years taken advantage of social psychology to manipulate users. Companies, for example, know that it's very difficult for individuals to appropriately and adequately discount and evaluate future risk against current reward. So when you're providing customers with convenience or with a discount, people are very willing, or seem very willing, to share their data. Not because they don't care about privacy, but because they don't understand and can't evaluate the risks that come with it. And they are resigned to the fact that, well, everyone's listening anyway, so I should just give up my privacy. It's resignation as opposed to a voluntary giving up of data in any sort of consensual way. So companies for so long have been designing their platforms with the aid of social psychologists to manipulate us. It's time that sociologists and psychologists are in design meetings to make sure that products are designed to secure our privacy from manipulative behaviors.

Jon Prial: I love it. So, on being stewards of trust, and getting CEOs to recognize that they are stewards of trust: you quickly mentioned earlier the term data fiduciaries. I'd love for you to go a little deeper on your thoughts on that, and on the other fiduciaries we deal with over our lives without even thinking about it, and yet the concept hasn't made it to this space yet.

Ari Ezra Waldman: Right. So traditionally, this idea of a trustee or a fiduciary is based on expert or special relationships. We cede control over certain aspects of our lives to a doctor or to a lawyer because of the expertise that they have. And inside that relationship, we are expected to disclose a lot of information. If we don't, we're not going to get the right treatment from a doctor, and we're not going to get the right arguments to benefit us from a lawyer. We have to disclose information, and we are vulnerable to them. We're vulnerable to lawyers and doctors, especially if they use their expertise in order to harm us. Those same factors, those same constitutive elements, are true of many data collectors in the online world. They hold themselves out as experts. Imagine a platform like OkCupid. It holds itself out as the expert matching platform, because they use this proprietary algorithm where you answer all these questions and they find your best match. They hold themselves out as experts. Facebook holds itself out as the best place to meet and connect the world. Google holds itself out as the best in search. And we are vulnerable to them, in that they have so much power. Google gets to determine everything about what we see, hear, and learn, because it is basically our access point to the internet. Facebook's algorithm determines what we see and what we don't see, on Instagram and on Facebook proper. And there is an information asymmetry. Just like doctors and lawyers, Facebook and Google know a lot about us, know a ton about us; in many ways, more than we know about ourselves, and they can predict our behavior. Yet we, very much as with lawyers and doctors, know very little about them. These are the constituent elements in law that mean that if we give information to this person, or if we give over our livelihood to this person, the rule is that they have to act in our interest. They have to act with duties of care, with duties of loyalty, and with duties of confidentiality.

Jon Prial: So I like that that model is a description of what needs to be done, and of the responsibility that comes with those duties. Now you've got to communicate that. What are your perfect terms and conditions that somebody might see when they get onto... What should that look like?

Ari Ezra Waldman: Hmm. Well, in a perfect world, I don't think we should rely so much on these long, legalese, seven-point font documents, the privacy policies and terms of service, because no one reads them, and they can't really form any sort of expectation or basis of expectation for users. What has to happen is we need to operate under a general rule that no matter what, no matter what we click on, no matter when we say yes, no matter when we move a button from left to right, companies are still limited in what they can do with our data. Right now, we have a system where protection of our privacy is our responsibility. We are burdened with reading through privacy policies and terms of service and going through all the 250 apps on our phones to toggle our privacy preferences. And we can't handle that, because everyone knows that by the time you reach app 10, you're exhausted. And companies know this. It's called the problem of over-choice and exhaustion. It's almost like going to the pharmacy and trying to buy toothpaste. You go to the toothpaste aisle and there are 300 toothpastes, so instead you just pick the one that you've always picked, right? So psychologists know that, and these companies know that, and they design their platforms in order to foster that sense of over-choice. We need to switch from that system, where the burden is on us, to a system where the law puts the burden on companies to protect our privacy, because they're actually in a better position to do so; they're designing the apps, they're designing the platforms, they're designing the data collection tools. Give them the responsibility from the ground up to build in better defaults, privacy-protective defaults, to build in data minimization, to build in limitations on what third parties can do with our data. That's going to be the wave of the future, and that's going to be how companies maintain the trust of their clients.
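
Ari's "privacy-protective defaults" and "data minimization" can also be read as concrete design decisions. The sketch below is a hypothetical illustration of that reading: every setting defaults to the protective choice, and the signup path simply refuses to store fields the service doesn't need. The field names and defaults are invented for the example.

```python
# A hedged sketch of privacy-protective defaults and data minimization.
# Every field name and default here is a hypothetical illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class PrivacySettings:
    # Defaults favor the user; any sharing requires an explicit opt-in.
    share_with_third_parties: bool = False
    personalized_ads: bool = False
    location_history: bool = False

# Collect only what the service strictly needs to function.
ALLOWED_SIGNUP_FIELDS = {"email", "display_name"}

def minimize(signup_form: dict) -> dict:
    """Drop anything outside the minimal set before it is ever stored."""
    return {k: v for k, v in signup_form.items() if k in ALLOWED_SIGNUP_FIELDS}

if __name__ == "__main__":
    raw = {"email": "a@example.com", "display_name": "A", "birthday": "1990-01-01"}
    print(minimize(raw))      # birthday is never stored
    print(PrivacySettings())  # protective by default; the burden is on the company
```

Here the burden sits with the product rather than the user, which is exactly the shift Ari describes.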

Jon Prial: They build it in. They clearly communicate it. Everyone should be happy, and we do get the win-win. Ari Waldman, that was a fantastic discussion, so thanks so much for being with us today. This has been fantastic.

Ari Ezra Waldman: Thank you so much for having me. It's been a pleasure.

DESCRIPTION

Privacy is much more than a compliance issue. It’s a way of thinking about your relationship with your users and a product design choice. 

In this episode of the Georgian Impact podcast, Jon Prial talks with Professor Ari Ezra Waldman - an internationally recognized thought leader on privacy and online safety and author of “Privacy as Trust.” They discuss how Ari thinks about privacy and what it means for the product design process. 

Listen to the full podcast episode to find out more, including:

  • How trust, intimacy, and privacy interlock
  • Where context comes into the privacy equation
  • How social media can cause harm by ignoring the fundamentals of privacy
  • Who should be involved to create the best technology design team?
  • How “Queer Dating Apps Are Unsafe by Design”

Who is Ari Ezra Waldman? 

Professor Ari Ezra Waldman is an internationally recognized thought leader on privacy and online safety and, in March of 2018, he published a book “Privacy as Trust - Information Privacy for an Information Age” and, more recently, he had an op-ed posted as part of the New York Times privacy project. Ari is a Professor of Law and the Director of the Innovation Center for Law and Technology at NYLS. He also holds a Ph.D. in sociology. His research focuses on the influence of law on technology.