Episode 100: When Algorithms Become Managers: Uber and the Future of Work

This is a podcast episode titled Episode 100: When Algorithms Become Managers: Uber and the Future of Work.

Alex Rosenblat: I've taken many hundreds of Uber rides. And since I finished the book, I've certainly taken many more. I did interview about 125 drivers, but I also observed many of them through the hundreds of rides I took across more than 25 cities in the United States and Canada.

Jon Prial: So, today our guest is a researcher from the Data and Society Research Institute. Data and Society is an institute in New York City. They focus on social and cultural issues arising from data-centric and automated technologies. And I'm excited today because with us is Alex Rosenblat. Alex is a technology ethnographer and most recently the author of Uberland: How Algorithms Are Rewriting the Rules of Work. This book really helps us all understand some of the new work paradigms in the digital age, and Uberland paints a future where any of us might be managed by a faceless boss. Now, what did you just say? Well, don't you sometimes ask your boss a question just to keep her engaged? I mean, the answer doesn't matter, but the human interaction does, and our guest really understands this. The book clearly shows how businesses can behave badly, and during our conversation we'll be getting into what you should be thinking about so that you don't. Now, please understand, nothing is simple and just black and white, but that doesn't diminish the importance of understanding the implications of the decisions you make in running your business. So stick around. I'm Jon Prial, and welcome to the Georgian Impact Podcast. Alex, welcome. We're excited to have you here today.

Alex Rosenblat: I'm excited to be here, Jon. Thanks so much for inviting me.

Jon Prial: So, tell us about your background. Like for example, what's a technology ethnographer?

Alex Rosenblat: An ethnographer is someone who studies people and cultures through qualitative observation. I'm really interested in the cultural dynamics of technology culture. And so when I observe drivers at work, and when I engage in discussions about algorithms and AI, I'm really focused on looking at how the cultural dynamics of technology are affecting how we conceptualize work.

Jon Prial: Wow. So I don't know whether my intro did it justice, but tell us a little more about the book. What led you to do your research around Uber?

Alex Rosenblat: Well, in 2014, when I started this research project, Uber was still the darling of the media and business and technology beats. It had promised to scale entrepreneurship for the masses through technology, a wildly valorized premise that helped its disruptive business model make waves across hundreds of cities, where it entered the market and said, "No, we're not a taxi company and your rules don't apply to us. And your medallions don't limit us. We're actually a technology company." And at the time, although it was quite disruptive, it was also celebrated as, we're going to all enjoy the fruits of disruption. And by the way, people were still suffering economic hardship from the legacy of the Great Recession. They'd maybe lost their homes, their businesses had gone under, and here was Uber offering to help us instrumentalize the promise of technology to create mass entrepreneurship, an especially appealing claim in the United States, where being your own boss has a lot of cultural capital. And so when I started studying Uber, I was very interested in some of these promises: what will technology deliver? And I started, alongside my research colleague at the time, Luke Stark, by examining one simple rule that Uber had, which seemed to offer a real remedy to a longstanding issue of discrimination. If you're a black man, it's difficult to hail a cab on the curb, on the side of the road; they might pass you over and pick up the nearest white passenger. But Uber's technology platform came in and said, "We have eliminated destination discrimination, because we won't let you know where the passenger is going, or much about them, before you have to accept the trip from our algorithmic dispatcher." And so I was intrigued by both the bigger promise to society and the very basic tenets of, okay, how do you achieve a goal that remedies a social problem?

Jon Prial: Wow, that's fantastic. Now, I think most of us understand two-sided markets, and in the case of Uber, one side is the customers and the other side is the drivers. Do you think Uber has balanced this well, or has one part won over the other?

Alex Rosenblat: I'm not sure that Uber would always describe it that way. Because, for example, when Uber drivers sued Uber over their classification as independent contractors, they alleged that Uber has such significant control over how they have to behave at work: Uber sets the rates at which they earn, it sets the criteria through which they get fired, it intervenes in lots of their transactions. And they said, "Well, we really should be classified as employees and therefore be entitled, for example, to a minimum wage." And Uber pivoted and said, "Well, actually, drivers are just consumers of our technology app, just like passengers are." And although that seems implausible on its face, it's quite important, because if you argue that you're in fact a two-sided transaction platform, that both drivers and passengers are consumers, you're basically arguing that Uber is Amex. And Amex just had a Supreme Court ruling where they've sort of got an immunity for the different ways that they secure agreements with their consumers.

Jon Prial: So Uber drivers are not employees according to Uber, although they are paid by Uber. Now isn't this something of a contentious issue around job categorization?

Alex Rosenblat: So, pay is something that Uber experiments with constantly, no different than the way a technology or internet company experiments and A/B tests what you see on the web, or the way Facebook plays around with your newsfeed. That happens to drivers too, only it affects their pay. And so even if Uber becomes a more trusted technology institution, which it is taking steps to become as it recognizes that it's not merely a sort of Wild West of technology anymore, but that political times are changing and technology companies are being recognized for all the power that they have. They're not neutral anymore. I think that it will still take a lot of reconciliation to figure out, "Okay, how do we create a more even dynamic between drivers and Uber, and between consumers of technology platforms and Facebook or Google, for example? What does it take to collectively bargain with an algorithm?"
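
To make the kind of experimentation Alex describes concrete, here is a minimal Python sketch of how any platform might deterministically bucket workers into experiment arms. The experiment name, variant names and rate multipliers are assumptions invented for illustration; this is not Uber's actual experimentation system.

```python
# A minimal sketch of bucketing workers into pay-experiment arms.
# Variant names, rates and the experiment name are invented for illustration.
import hashlib

PAY_VARIANTS = {
    "control": 1.00,    # baseline per-mile rate multiplier
    "variant_a": 0.95,  # slightly lower rate
    "variant_b": 1.05,  # slightly higher rate
}

def assign_variant(driver_id: str, experiment: str) -> str:
    """Deterministically map a driver to one experiment arm."""
    digest = hashlib.sha256(f"{experiment}:{driver_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(PAY_VARIANTS)
    return list(PAY_VARIANTS)[bucket]

for driver in ("driver_001", "driver_002", "driver_003"):
    arm = assign_variant(driver, "per_mile_rate_test")
    print(driver, arm, PAY_VARIANTS[arm])
```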

Jon Prial: So, I want to step back a bit and talk about the history of business, when companies behave well or behave badly. I know we've had a long, sordid history of pre-technology bad behavior. So just to set the tone, how different are we in this world than we were with snake oil salespeople, or Enron, or pick your bad behavior of the past?

Alex Rosenblat: You know what's really interesting about technology culture? It's that it provides wide cover for pretty traditional practices. So when Uber leverages higher prices for passengers based on their route, they know where you're going, they know what you might be willing to pay if you're part of a group of people that typically takes this particular route, we would think of that as price discrimination. It's not an original practice, but Uber would narrate it as an innovation of artificial intelligence. They might say, we have machine learning algorithms that help us assess the route-based price. And so what's happening is that technology culture has enjoyed these exculpatory qualities, so that when it does something, even if it looks like X, it gets narrated and widely accepted as Y.

Jon Prial: It's always fun and interesting to argue both sides of any particular point. So, here we go, tech companies and probably a slew of end- users will argue that the innovation around big data, and now artificial intelligence, is that we get to much richer and better personalization, better ads and services. So is price personalization any different?

Alex Rosenblat: Well, here's the thing: I don't think it has to be personalized. I think it's easier to take offense if you know you've been individually targeted, but you can just be identified as part of a consumer class that typically is willing to pay X for some service at a particular time, or under particular conditions. And so, although a lot of the price discrimination uneasiness, I think, occurs around personalization and is baked into the privacy debates, how much information is it fair for you to know about me? Those are valid concerns, but I'm not sure that these companies even need that much information to sort of elevate the prices or lower them, depending on the conditions that they can gauge from how people typically interact with their platform. The difference is that they get very granular data on how people interact with their platform, and they have large historical data that lets them make very good predictive assessments about what you might be willing to pay at a given time, or how much you should pay workers at a given time.
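
As an illustration of segment-level rather than personalized pricing, here is a toy Python sketch in which riders are grouped by route and time of day and quoted a fare based on what that segment has historically paid. The segments, fares and the "take the segment median if it's higher" rule are invented for illustration; they are not any company's actual pricing model.

```python
# A toy sketch of segment-level (not personalized) pricing.
# Segments, fares and the pricing rule are invented for illustration.
HISTORICAL_ACCEPTED_FARES = {
    ("airport->downtown", "weekday_am"): [24.0, 25.0, 26.5, 27.0],
    ("airport->downtown", "weekend_pm"): [18.5, 19.0, 20.0],
}

def quote_fare(route: str, time_bucket: str, base_fare: float) -> float:
    """Quote the higher of the base fare and the segment's historical (upper) median."""
    fares = sorted(HISTORICAL_ACCEPTED_FARES.get((route, time_bucket), []))
    if not fares:
        return base_fare  # no history for this segment: fall back to the base fare
    return max(base_fare, fares[len(fares) // 2])

print(quote_fare("airport->downtown", "weekday_am", base_fare=21.0))  # 26.5, segment history wins
print(quote_fare("suburb->mall", "weekday_am", base_fare=12.0))       # 12.0, no history
```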

Jon Prial: Oh, wow. So I guess, going back to both history and even current times, is this considered bad behavior, and how does it get caught or fixed?

Alex Rosenblat: Well, I know that amongst researchers it gets studied, and not necessarily from the perspective that a company's practice is bad or good, but we tend to look at the impact of a given practice. It may be that the practice was very well-intentioned, but it might be that the outcome is quite negative. Let's take, for example, facial recognition technology within the ride hail world. You want to know that the driver who's taking you from A to B is actually the person in the app. You want some verification, potentially, or the company wants verification that the person who signed up to drive is in fact the person who's driving the customer. And so perhaps they say, "Okay, we're going to do a real-time identity check on you. Could you please take a photo?" Often this type of technology is marketed as automated, but it could be that the photo is actually being sent to a worker in India or Kenya who's squinting at the image and seeing if the two of them match. Well, that might work well for a dominant group. If you are someone with characteristics that read as transgender, or perhaps you've grown a beard, you might get dinged. They might say, oh, you're not really the driver who you claim to be. And what if that means you get fired? So the intention is to provide safety and real-time verification. The effect might be, oh, this damages people on the margins.
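
A minimal sketch of the trade-off Alex describes: a single similarity threshold for a real-time identity check can work well for most drivers while wrongly flagging legitimate drivers whose appearance has changed. The threshold, scores and outcomes below are hypothetical, not any ride-hail company's actual verification pipeline.

```python
# A minimal sketch of a threshold-based identity check and its effect at the margins.
# The threshold and scores are invented for illustration.
MATCH_THRESHOLD = 0.80  # hypothetical similarity cut-off

def verify_driver(similarity_score: float) -> str:
    """Return the action taken for a given face-match similarity score."""
    return "verified" if similarity_score >= MATCH_THRESHOLD else "flagged_for_review"

# A driver whose selfie still closely matches the photo on file passes easily...
print(verify_driver(0.93))  # verified
# ...while a driver who has transitioned or grown a beard may score lower and be
# flagged, even though they are exactly who they claim to be.
print(verify_driver(0.71))  # flagged_for_review
```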

Jon Prial: Wow. That's a fascinating example in terms of where the technology can go. Now, at the same time, the earlier example you used was that every Uber driver is given a pickup and they don't know who they're picking up, so you're avoiding some potential discrimination that could have happened, where they would choose not to pick up a particular person. At the same time, in terms of different types of bad behavior, I guess, and I got this out of the book, there are times where drivers are being told, "Hey, there's big surge pricing happening right now. Go over to this airport, because surge pricing is hitting." And you've talked about examples where drivers get there because Uber needed all these drivers, and the surge pricing has been turned off because the drivers got there. How does that play out a little bit?

Alex Rosenblat: Ride hail companies like to talk a lot about how good they are at assessing supply and demand. They won't say that they are doing it exactly; they'll say, we have algorithms to tell us what supply and demand are, and we can raise the price we're willing to pay drivers so that they'll relocate to a given area of passenger demand, for example. And that's very interesting, because you're trading on an assumption there about math, and it's really about activity. You can measure supply and demand, but what's actually often happening is that Uber or Lyft might be oversupplying the number of drivers to meet a given concentration of passenger demand. And so if you're a driver, you get a notice from your algorithmic manager and it says there's really high demand, or surge pricing is in effect right now. Surge refers to an algorithm where, after Uber measures that the given passenger demand for drivers outstrips the local supply of them, an algorithm goes into effect that assesses a higher and higher price. So it might be that drivers could earn three times the base rates they would normally earn if they get a passenger in the area that is affected by surge pricing. And so Uber drivers have an Uber app, and they get tons of messages from their algorithmic manager. They get app notifications, they get emails, they get text messages, urging them with great enthusiasm and insistence to relocate to the surge pricing area. So, if you're a driver, you might say, "Okay, let me maybe even log out of my app now so that I can relocate 20 minutes away to the surge pricing area, and then I'll turn it back on, so that I don't get a ride request on my way that doesn't have surge pricing in it, and I'll get there." But for some drivers, they might be sitting in the middle of surge pricing and they're waiting and waiting, maybe 20 minutes go by, and they don't get a ride request. And so they've had lost opportunities, the gas it took to get there, and the general cost of the transaction, without benefiting from the promise. And while Uber might trade on assumptions about the neutrality of algorithms to sort of say, "Well, that's not our fault," what is happening is that this is taking place in a managerial context. Your manager tells you there's a very, very good chance you're going to be paid twice as much if you show up at this place at this time, and you get there, and you don't. That creates automatic distrust.
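
To make the surge mechanism concrete, here is an illustrative Python sketch in which a multiplier above 1.0 kicks in when measured demand outstrips measured supply in an area. The formula, cap and numbers are assumptions made for illustration; they are not Uber's actual algorithm. It also shows how the multiplier can collapse back to 1.0 once enough drivers relocate, which is the disappointment drivers describe.

```python
# An illustrative sketch of surge-style pricing driven by a demand-to-supply ratio.
# The formula, cap and numbers are invented for illustration.
def surge_multiplier(ride_requests: int, available_drivers: int, cap: float = 3.0) -> float:
    """Return a price/pay multiplier that rises with the demand-to-supply ratio."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    return min(cap, max(1.0, round(ratio, 1)))

# 60 requests chasing 20 drivers triggers the cap; once enough drivers relocate and
# supply catches up, the same area falls back to 1.0x before late arrivals get a trip.
print(surge_multiplier(60, 20))  # 3.0
print(surge_multiplier(60, 70))  # 1.0
```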

Jon Prial: Hi listeners, Jon Prial here. The Georgian Impact Podcast is part of a new generation of podcasting slash marketing slash journalism. As a company, we feel quite strongly that this medium is a great way to help us interact and share meaningful content with our portfolio and companies that might want to work with us in the future. Hence, we think about podcasting slash marketing slash journalism slash really good content. I mean, that's our singular objective. We don't have ads. We won't have ads. Like many companies, we just love what we do and we love sharing. So, this isn't a commercial break per se, but it is an opportunity for us to increase our interaction with you, our audience, on the anniversary of our 100th episode. Yep, 100. Something we're quite proud of. We'd like to hear from you so that we can get better and deliver even more appropriate content for you. And our ask is, well, we'd love to hear from you and what you think on some of the issues that were just raised in this discussion with Alex. Are faceless managers here to stay? And if they are, how do we let workers express their grievances and receive support? What problems can you foresee if we're increasingly managed by algorithms? Let us know your thoughts on Georgian's Twitter or our LinkedIn page. Just search for Georgian Partners, or email us at podcast@georgianpartners.com. If this is the first episode you've heard, there are 99 others available to you, wherever you get your podcasts. If you're a regular listener, we'd love it if you'd let others know about the show and rate and review us on iTunes. Thanks, we really appreciate your support. We make the show for you. We want to hear from you. So please do get in touch. So, now you've got drivers blindly following instructions based on the information they're provided, yes, from an algorithm. Sometimes they get paid, sometimes not. But in this case, is there anything Uber could easily do? I don't know, either human or AI-driven solutions to make this better for drivers?

Alex Rosenblat: You could intervene, right? It could be that, okay, well, we know that there's only a chance you're going to get this premium pay, but we'll give you a little bit extra just for making the effort. There are ways to try and shore up a stronger system of trust, even as automation starts to carve out different management practices.

Jon Prial: I love your comment about building up this trust, and in this particular case, it's the drivers and the algorithm that's driving them. Let me step back a little bit. Obviously, it's all data-driven, and I think everybody listening to this podcast agrees: data's a good thing, and the more the better; it's what we do with it that matters. So, let me ask you about an example. You referenced the Netflix tweet, quoted here just because it's kind of funny: "To the fifty-three people who watched A Christmas Prince every day for the past 18 days: who hurt you?" There was data. I don't know that they should have shared it. What are your thoughts on that?

Alex Rosenblat: Effectively, there's always these moments where people consume services that monitor and track them all of the time. Facebook is looking at what you do. Lots of websites might even be tracking where your mouse moves or what you're clicking on, and they'll use that to sort of do web analytics on their base of consumers and such, but most people don't see that happening visibly. It's not like someone is swabbing you: okay, I've got your data now. It's all quite invisible. And there's these moments where companies occasionally pierce through the fabric of invisibility, neutrality, innocence, and announce what they're up to. And that always is jarring for people, I think, not because what Netflix said isn't funny, it is, but because it pierces through the veil of sort of innocence and neutrality in a way that makes you feel really uncomfortable. If you were aware of all the times that your data was being collected, you might find yourself being very uneasy with many of the services that you use.

Jon Prial: And I guess we just learned a few months ago that Facebook had rooted a bunch of phones of 13 to 35 year olds and was paying them $20 a month to observe everything they did. So there was sort of a price paid for that as well, for their gathering astounding amounts of data.

Alex Rosenblat: Right. So it's not that you can't put a price on people's data. The question becomes, should you be doing that? Does a 13 year old have the full awareness of the implications to consent to this type of practice? Should you be swabbing a 13 year old's phone as a corporate entity? And that raises a lot of questions about ethics and other considerations. So even if you can put a price on it, should you be allowed to do it? That is the burgeoning question coming out of the techlash, as more and more people come to grapple with the implications of technology companies that have all of this swabbing ability.

Jon Prial: So, that's the first time I've heard the term techlash, and I love it. So, that's kind of cool. And I hear your point in terms of both transparency, which may get accepted by people and be okay, and if it's not, then there might be other remedies. So these are gray areas in terms of people liking it and not liking it. Perhaps let me take you through another gray area that kind of came out of the book. I'd love to hear your thoughts: can a company, and I'll use the phrase because it was kind of the implication in the book, hide behind being a tech company versus being a transportation company? And I know this works across a whole bunch of other industries. The Uber example was transportation; you could argue that Facebook is, or isn't, a media company versus a tech company. What are your thoughts on how these companies portray themselves?

Alex Rosenblat: I think a lot of it is technology theater. I think if you claim to be a technology company, you get to operate in a culture that understands and supports this idea that if you try and restrict or regulate technology companies too much, you're going to undermine the innovations they develop that are of service to all in really immediate ways. It's great that you can hail a ride at the touch of a button. It's great that you can stay in touch with lots of different family members through Facebook. It's great that you can have a newsfeed that delivers breaking news. But at the same time, what they're saying is that the rules and expectations and cultural norms that normally apply to their industry competitors, whether it's taxi companies or media companies, don't apply to them. And that presents a real challenge, because if you're a policymaker or a regulator, you're saying you have to follow these rules that were developed for taxi companies. Or, in Uber's case, regulators might say you have to provide accessible services; we've made a social commitment as a society to create more broadly accessible services for people with disabilities. And Uber might say, we don't have to comply with the Americans With Disabilities Act, because that applies to transportation companies, not technology companies. And so there's a bit of a bait and switch happening where, on some level, yes, it makes sense, you are importing the practices of Silicon Valley to this other area. On the other hand, is this just an act of regulatory arbitrage? And the interesting thing about this is that there are so many cascading consequences to these different types of logics at play. So, for example, Uber has been at the center of a lot of disputes over labor. One of those centers on employment misclassification. So Uber drivers are classified by Uber as independent contractors, and they are called Uber driver-partners, although a lot of drivers spurn that kind of language because it's really not a partnership. But Uber calls them independent contractors, yet it leverages a great deal of control over how they behave at work, because it can monitor them in such granular ways through the app, and because it has all these rules that drivers have to follow. So, no one's yelling at you, but at the same time, if you don't do what the app says, it gets mad at you and can fire you. So, that's become a catalyst for debates over whether Uber has misclassified drivers as independent contractors, and maybe they should be employees. But if we accept that Uber drivers are independent contractors, that means Uber is setting and coordinating the prices that passengers pay and that drivers earn, which would typically become an antitrust issue. It's price fixing for hundreds of thousands of independent contractors. And so there are always these cascading consequences: okay, if you accept this claim A, what does it do to this other area B? The problem is that means the ball is constantly kicked down to a different regulatory purview. And so it's never one tool, there's not one policy fix. It's: what is technology, and what are these arguments doing to change the culture under which we have conceptions of what it means to work, or what coordinated economic activity means?

Jon Prial: Wow, it's interesting. So, we do get to the discussion you've been mentioning, that regulations may or may not fix it. I think it's a challenge to get government officials to understand this the right way. I know there's a struggle there. We won't go into some of the things we've seen in congressional interviews with executives of these new companies. But I do like, and you reference often in the book, the myths of neutral technology, or the algorithm as a neutral manager. So I'd like to just dive into something, not transportation, but something that is heavily regulated, which is healthcare. Facebook has initiated over 3,000 wellness checks, where they might potentially call 911 and notify authorities that people are at risk of suicide. This seems very altruistic, but at the same time, it seems to cross a line a little bit, like the Netflix message a bit. What are your thoughts on how, again, a tech company could bypass a whole regulated industry and do something that's way out there on the edge?

Alex Rosenblat: Well, I think there are two components to it. Reporting on potential suicidal behavior has well-established mechanisms offline as well. So, if Facebook is simply responding to alerts, and Twitter does this too, if a user sort of says, "Oh, my friend is suicidal," and reports that to Facebook, that's just mimicking a reporting mechanism we already have in place in other parts of society. But if instead Facebook and Twitter are simply making inferences about the state of your health and then responding to it in kind, and perhaps sending police to your door, that's a different story, in part because police showing up at your door might be a prompt to actually carry the suicide out. And so there is a gray area in which these companies are trying to do good. They're intervening in a problem, but they're doing so without being clued in to, or protected by, health practices, by medicine. They're not doctors, but they're playing doctor, and that's a problem.

Jon Prial: So I'd love to follow up and talk with you about how a company can build trust, and what it takes to do that. I don't really love the hype about, "Well, what happens if an algorithm goes wrong and somebody gets hurt, and who's accountable?" I think we have to begin to think a little more about, again, transparency as a way to get to trust. Let me just talk about the financial industry and false negatives and false positives. I could make sure a fraudulent transaction never goes through my system, but more than likely I will also end up blocking legitimate transactions, or vice versa. So how do you see this discussion getting to the way a CEO should be thinking about running his or her business?

Alex Rosenblat: That's very interesting, that you could draw a line in the sand and say, well, we're only going to let the most honest people, as we assess them, transact on our platform, so we'll never have any fraud. But the flip side is you might exclude a lot of people who aren't recognized by your system, who would also honestly participate in financial transactions. That's the gist of what I understood from what you described. And that's very interesting, because Facebook did a similar thing. They want people to use their real names, and their systems didn't recognize a lot of Native American names as real. It worked well; it had a high accuracy rate for the majority of people, and if you were on the margins, you might have your account deactivated. And so there are always these interesting trade-offs between how you manage a wide-scale policy and how you also account for people on the margins. And I think that's where a CEO or a company has a real opportunity to intervene, because they can say, "Okay, we've done the assessment. Who's going to be negatively affected by this?" I think you actually discussed this a bit in a previous podcast, where you were discussing the image processing algorithm Google was using, which was mostly accurate, but sometimes identified white people as whales and sometimes identified black people as gorillas. The whales is kind of funny, but identifying black people as gorillas plays into a longstanding, negative, racist trope about black people. And so that becomes a real problem. So, it's not that your system has to be made more accurate to produce transparency; it's that you have to recognize there are always going to be edge cases, and people on the margins, and people who are negatively affected by something. You have to be sensitive to the social implications, and then you have to intervene and perhaps have an additional solution on top of what works at scale.
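
To ground the false-positive/false-negative trade-off Jon raises, here is a toy Python sketch of a fraud threshold: tightening it blocks more fraud but also blocks more honest users. The risk scores and labels are invented toy data, not a real risk model.

```python
# A toy sketch of the false-positive/false-negative trade-off at different thresholds.
# The scores and labels are invented for illustration.
TRANSACTIONS = [  # (risk_score, actually_fraudulent)
    (0.95, True), (0.60, True),
    (0.55, False), (0.40, False), (0.20, False), (0.10, False),
]

def evaluate(block_threshold: float) -> tuple[int, int]:
    """Return (fraudulent transactions missed, honest transactions wrongly blocked)."""
    missed = sum(1 for score, fraud in TRANSACTIONS if fraud and score < block_threshold)
    wrongly_blocked = sum(1 for score, fraud in TRANSACTIONS if not fraud and score >= block_threshold)
    return missed, wrongly_blocked

for t in (0.9, 0.5, 0.3):
    missed, blocked = evaluate(t)
    print(f"threshold={t}: fraud missed={missed}, honest users blocked={blocked}")
```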

Jon Prial: That's the most important takeaway here: you can always get the 80/20 rule, and you can get the 80 to work pretty well, but good tech and good testing requires a breadth of test cases. And I think it's incumbent upon CEOs and chief analytics officers and development teams to make sure they are looking at those edge cases, whether it's a minority face or a Native American name. There's so much out there, right?

Alex Rosenblat: I think that it can be hard to do it internally as well. I think that's how so much scandal accrues to a lot of these companies: researchers and journalists on the outside are looking at it. This is often in the purview of social scientists especially, but there's increasingly a movement amongst computer scientists to account for fairness and accountability and transparency. There needs to be an interdisciplinary effort to say, "Okay, how do we assess this? Can we pregame this? Can we know what's coming down the line?" And you can. Some companies, I think it's only really the bigger ones that are able to do this, will have people internally who act as guardrails, who can say, "Okay, I understand what you're doing from an engineering perspective, for example, or from a finance perspective. But let me tell you how this is going to play out in public. Let me tell you how this is going to affect certain people badly, and why you don't want to do that." So just because Netflix knows how many people watched a given show and infers that they were sad, maybe it shouldn't announce that.

Jon Prial: I do like the concept of challenging constantly, and I love your term guardrails. I think that's important. So we kind of know what CEOs need to do in terms of what they should be challenging their people on. We believe, and I think you've seen it in our trust paper that's out now, that companies can differentiate themselves on trust, and therefore they need to actually explain the edge cases. And rather than letting the reporters figure out the edge cases that embarrass the company, get out there, do the right job, and take some credit for it.

Alex Rosenblat: I think that would be a great step, and one of the ways to do that is to have people employed at your company whose job is to engage in these public debates, whose job it is to do interdisciplinary research. And I know that's a big ask, right? If you're focused on making your business profitable, and making sure that you've served your shareholders, and making sure that everything that you're delivering is actually working, then saying, "Okay, we also need people looking at the downstream consequences," seems like a line item you maybe can't afford or don't want to invest in. On the other hand, if you can become a trusted institution, because you're always engaging with those debates and you're always speaking to them, you become an authority, and that has infinite social value later, and reputational value as well.

Jon Prial: Wow. So, Alex Rosenblat from the Data and Society Research Institute, I can't think of a better way to close this. On the 100th episode of the Georgian Impact Podcast, I don't think we could have picked a better guest than you. Alex, thanks so much for being with us today.

Alex Rosenblat: Thank you so much for having me.

DESCRIPTION

Tech giants like Airbnb and Uber tear up the rule-books for their markets. They grow so fast that by the time competitors and regulators react, it’s already too late. There’s fascinating research being done into their impact and how they have reshaped society. In this, the 100th episode of the Georgian Impact Podcast, Jon Prial is joined by Alex Rosenblat, Data & Society Research Institute researcher and author of Uberland: How Algorithms are Rewriting the Rules of Work. They discuss how these companies challenge the status quo with new business models, new employment models and new ways of thinking about management.

 

You’ll hear about:

  • How tech companies are disrupting employment models and challenging our concepts of entrepreneurship
  • What this means for the future of work
  • The trust challenges of managing through algorithms


Alex Rosenblat is a technology ethnographer. A researcher at the Data & Society Research Institute, she holds an MA in sociology from Queen's University and a BA in history from McGill University. Rosenblat's writing has appeared in media outlets such as the New York Times, Harvard Business Review, the Atlantic, Slate, and Fast Company. Her research has received attention worldwide and has been covered in the New York Times, the Wall Street Journal, MIT Technology Review, WIRED, New Scientist, and the Guardian. Many scholarly and professional publications have also published her prizewinning work, including the International Journal of Communication and the Columbia Law Review.