OpenWeb’s Tiffany Xingyu Wang on making publishers sustainable with first-party data

Background on OpenWeb
00:40 MIN
The current landscape for publishers
01:38 MIN
Brand Sustainability
02:22 MIN
The importance of trust in strategic growth
02:43 MIN
Consumer demand for online trust
02:07 MIN
Shifting away from creator-led content towards a community economy
02:37 MIN
Best practices for brands and publishers
03:16 MIN
Online anonymity and identity
03:26 MIN
Identity and user safety in OpenWeb
03:04 MIN

Jon Prial: The material and information presented in this podcast is for discussion and general informational purposes only and is not intended to be and should not be construed as legal, business, tax, investment advice, or other professional advice. The material and information do not constitute a recommendation, offer, solicitation or invitation for the sale of any securities, financial instruments, investments, or other services, including any securities of any investment fund or other entity managed or advised directly or indirectly by Georgian or any of its affiliates. The views and opinions expressed by any guest are their own and do not reflect the opinions of Georgian. To be truthful, when it comes to consuming books, movies, or the news, I think I'm a snob. You see, there's no crowdsourcing for me, no Rotten Tomatoes for movies, no Goodreads for book reviews, no news aggregators. I have my set of trusted curators and news sources who are curators themselves. You see, I like to trust the source and not just my habits that are feeding an AI. Part of this is also me not paying attention to ads, or avoiding comments, which sometimes I think are toxic. So why is all this relevant today? Because I am a user of content and brands, and there should be a better way for me to learn and interact across the board. I like the idea of providers knowing me. First-party data, anyone? Welcome to Georgian's Impact podcast. I'm Jon Prial. Tiffany, welcome.

Tiffany: Thank you, Jon.

Jon Prial: Before we dive in, Tiffany, can you give us some background on what OpenWeb does?

Tiffany: So OpenWeb is a community engagement platform that helps publishers and brands host healthy conversations, build their own first-party data on their own properties, and therefore build more sustainable monetization strategies. Today we serve over 1,000 customers, including publishers and brands. And our vision for the future is that we can build what I call the community economy, where all businesses can thrive on the communities they care about, curate, and help grow.

Jon Prial: And be safe. Excellent. Well, we're so glad you're here. So where I think we are today is... We live in, for the most part, an ad-supported world, although maybe that's getting overlaid with a world that has more and more monthly recurring revenue. Maybe a little bit of both. What's your view of what's happening in terms of the publishers' view, them having an ad-supported internet versus maybe we should have had micropayments a million years ago? What's your thought on where we are today?

Tiffany: Yes. I think if you look at the past 15 to 20 years, the so-called Web 2.0, also the social web, was dominated by the social media platforms. And during that time, publishers and brands started to rely on the social media platforms to leverage their data and to monetize their content. But fast forward to where we are today: publishers and brands realize, hey, why not build our own social media layer and monetize directly to our loyal communities and readers? And that's honestly where OpenWeb is coming from, right? We want to build social media as a service for all the publishers, because there is this surging and unmet need from the publishers to say: why should we, the content generators and creators, rely on social media to monetize when we can do that ourselves? But there are different key components needed to actually build that so-called social media as a service. You need the safety layer, you need the first-party data layer, and then you need to monetize through advertising. So I think publishers and brands shifted from being partners, investors, or stakeholders in social media into potentially competitors with social media, with their own data and more sustainable growth.

Jon Prial: I take a couple of critical takeaways from that. I think you're happy that I might live in one particular news provider versus an aggregator, because if I then grow up and get comfortable and join the comments section, it's a self-selecting group. I'm in a channel already where we kind of know what the topic is. Whether I'm, I'll pick, a New York Times reader or I'm reading an article about climate, all of a sudden I'm now self-selecting into that dialogue. So you're probably... That's where the publishers want to go.

Tiffany: Absolutely. And I think it's not only a benefit for the publishers, it's also a benefit for the brands, who can now advertise directly on publishers' real estate, and it's also beneficial for the readers or users of publishers. It's a win-win situation for many reasons. One of many is that you actually have way better social and interest graphs. As you mentioned, I know exactly which column, which article, and through the comments I know exactly the sentiment toward that article. So publishers, if done well, not only curate their own communities, which can be more sustainable for their business, but also curate way better quality data for the brands who want to advertise on publishers.

Jon Prial: It's now their first-party data. They're not relying on getting it from... And as Apple blocks everything anyway, they're going to begin to grow and amass the right first-party data that they can then offer to their advertisers.

Tiffany: Absolutely.

Jon Prial: So I'm just thinking about now you've got the content providers. Whether it's right or wrong, just use the New York Times for a second, and General Motors as a car brand. General Motors doesn't do a generic advertisement on the New York Times or a generic advertisement on Twitter, but they'll do one very specific to maybe an article about EVs. That adds some sophistication that's required on the part of the brands. Do they have that level of technology yet, or is that an investment that's going to be acquired by all the brands, or does OpenWeb help them with that?

Tiffany: There's a term in the industry called brand suitability. With what's going on with Twitter and Elon, and long before that, brand safety is a very commonly used term, but under that there's really this concept of brand suitability you're talking about. Because if you understand your audience through the first-party data of whatever platform or publisher you advertise through, then you have a better idea of the intent. And many people have many controversies about what's going on with Twitter, but one of the controversial points is whether the data is quality data. And the publishers know, because they're the content creators; if they own that first-party data, they should have that data, and brands, through that data, can directly know if my advertising is suitable for the environment where it is placed. Now, to your point, it isn't easy. I think many of the listeners might be familiar with this immense Lumascape, where there are hundreds of players in the advertising and marketing tech space. On the one end you have the publishers, on the other end you have the brands, but in the middle there are so many players in between. So brand suitability and brand safety were so hard because you have an SSP, a DSP, an ad server. When you disconnect the real estate and the person who wants to buy the real estate, suitability becomes way harder. But if you skip all the middle players, you actually have the right data to tell, hey, this is how my room looks, how my swimming pool looks, and this is where my garden is. It's much easier for the buyer to actually understand the surroundings and decide where they want the furniture, where they want to be. So I think suitability is much easier in this era. To your initial question, at this inflection point publishers and brands start to build their own communities and therefore their own data.
The so-called brand safety and suitability becomes a more feasible problem to solve, which from an infrastructure perspective was technically so hard, and was never a priority either.

Jon Prial: Do you see then some of these middle layers getting cut out? Do you see a market shift that you'll be able to provide information about a specific community that the brands can use, go directly to the publishers and bypass some of those middle layers where everything really is quite opaque?

Tiffany: So I started my career in the US in the space of DMPs. I was with Salesforce, and I have two patents with the USPTO about how to understand journey insights, the touchpoints of a customer's journey, in this opaque world, to figure out what your next best action is. Now, that was the world many years ago. Fast forward: my last company and OpenWeb are playing in the space of helping our customers build their first-party data and build their own communities in a safer and more trusted way. And when I look back, my personal thesis of where the market is going is the following: what drove the past 20 years is what we call digital transformation; in the next 20 years, trust is the new digital transformation. All the companies which succeeded in digital transformation, or helped their customers drive digital transformation, are the companies we hear of today. Those are the likes of Salesforce, which help B2B businesses (I mean, they're B2B companies that help their customers drive data transformation), or they're directly B2C in e-commerce, gaming, or social media, to really centralize the data and monetize. But the future is different. The future is: if you don't place trust at the center of your strategic growth, if you don't place people at the center of your strategic growth, you won't be among the brands and platforms or the names we hear in 20 years. So within this thesis, that trust is the next digital transformation for the next 20 years, and coming back to what you were asking: I don't think any business, including publishers or brands, has a choice but to choose this new growth model. And because of that elevated awareness, the [inaudible] trust and the awakening of a new generation caring about safety and privacy, as a result you started to see this paradigm shift.
Finally we can break this opaque Lumascape and move to a world which is trust-centric and people-centric, where you actually can connect the two dots. I mean, brands and platforms directly, and brands and publishers directly together.

Jon Prial: I think about the way things were for placing ads, and I'm thinking about a Facebook ad placement interface where you click a bunch of boxes, and they got in all kinds of trouble because there were boxes they should never have allowed people to click, or Google AdWords. This allows you or your brands to work with the content providers and maybe have a different set of facets they might click on that become more important. Do you see that evolving away from that level of... I mean, maybe things like location still matter, age matters, but maybe there are things that don't matter. Do you see the brands and the content providers evolving in terms of what's going to be on that kind of set of check marks for where to place an advertisement? Do you see that evolving?

Tiffany: Yeah, I think the general principle is it has to go with the consumer behavior change. The check marks are a way to gather the data. Again, I think in the past 20 years, three things happened: people were productized, data was centralized, and growth was above everything else. So every single check mark, every single layout, every single user experience was optimized for one single thing: how much time you spend on site. Not because of your social interaction; it's about how much time you're going to spend and whether you purchase that next item through an advertiser. So I think what happened is we start to see, at least from the trust perspective you mentioned, people start to ask: how is my data actually consumed? How can I recall my data in this whole new world of GDPR, CCPA, and many other regulations? So consent management by design started to become part of the user experience. And from a safety perspective, people start to ask: why, as an underrepresented voice, do I need to be assaulted? Today over 40% of US internet users have reported being harassed or subjected to hate speech. And users are saying, why should I be in this global digital society but not feel I'm part of the society? And people start to request: well, you need to embed the safety experience, in Horizon's metaverse with its many controversies, for example, how do you actually include that in your experience? But even in the two-dimensional world we have today, how can I appeal if there was a wrong call about my behavior, and how can I be involved in the conversation when you are designing your safety enforcement and policies? So many new aspects of trust that new generations of users demand are now being considered in user experience design, which was never the case before.

Jon Prial: So you wrote a tremendous piece for the World Economic Forum. We're going to put a link to that in the show notes. It was talking about building a community economy, and there are a couple of things that we've already mentioned: kind of shifting away, giving the brands an opportunity to do more, particularly becoming a community builder, because they'll own more information about their consumers. We're moving to this different type of economy; you called it shifting away from creator-led content towards a community economy. And we talked about safe spaces. Talk to me a little more about where you see all this going. This is really interesting... This is kind of wrapping up the directional piece, and then I've got a couple more questions as we get to web three and the like.

Tiffany: So there are three legs to lift this community economy. First is the safety layer: how do you create a safety net for everyone to be involved? And I mentioned the data, about 40% of US internet users being harassed or subjected to hate speech. If you don't feel safe, you can't really be engaged. Especially, you briefly alluded to web three; there are many components to web three, and one of them is the metaverse. The immersiveness will make the impact way more impactful. So that's the first layer. That's why all the content moderation has to be in place. The second layer, once you get that covered, is really the respect for data dignity, and the flip side of that is privacy. The whole idea of the community economy is how you put people at the center. So that's why I mentioned you have to respect consumers' data dignity. And the way to do that is to have a privacy-preserving first-party data approach. So that's the second layer. And for businesses it's very important to think about that: the more you actually respect privacy, the more you structure data in a very clean, consent-first way, the more you actually can sustain your community. And on top of that, when you have people who feel safe and engaged, and you have people who are willing to give you the data to allow you to understand their social behaviors and interest graphs, now you actually can monetize once your foundational layers are covered. It can be advertising, which is one way; it can be e-commerce; it can be subscription. So there are many. The summit is diversification of monetization. But at the end of the day, if you look at that, it's no different from a [inaudible] pyramid.
People want to feel safe first, and people want to actually have that interaction, which translates to the data, and then you can build the summit of monetization, for businesses and for people to really consume the content and commerce on the platform. So this is the idea and these are the pillars of the community economy. The last thing I will say is it's really a pendulum swing from what we used to call the attention economy, where you centralize all the data, centralize all the people, you centralize everything. And this is an awakening and a pendulum swing back to decentralization, where you put people back at the center, put community back at the center.

Jon Prial: As I think about this community: we can have a community-led, let's say a piece of news is out, or something is out, and a hundred people get engaged with it. Do you want the hundred people to just have at it? Would you need some moderation? So if General Motors is running an ad, does a General Motors communications or marketing person stay engaged in those interactions? They could obviously do more selling and the like. Or do you kind of set it up and move away? How do you see it really? What's the optimal way, as if I were a brand?

Tiffany: Yeah, I think we are all learning the best practices for brands and publishers to create their communities, for two reasons. One is we already learned the lessons of somebody who has already done that, which is the social media platforms. If you look at the social media platforms, what was done was to start by just driving growth, with no moderation. And then suddenly you see this proliferation, and when that happens, you already have toxicity online. It's kind of a big snowball rolling very fast; it's very hard to brake at that point. And since 2016, the election, there was an uptick of attention to that. It was accelerated by the cultural movements, like Black Lives Matter, Stop Asian Hate, then the Capitol insurrection, and now with what's going on with Twitter. So it's accelerated and accelerated. So what we have learned, lesson one, is: if you can put moderation tools and policy enforcement in at the get-go, then do it at the get-go, because it always starts with what you want the brand to be, what the identity is, what you want a community to believe and how to act in your community, and then set policies to enforce that. It's much easier to do in the beginning, which is the term we call safety by design. So that's lesson one that brands and publishers can learn. The second one we can learn from what's happening in kind of the version two and evolution of social media, what people call web 2.5, or just the cusp moment between old social media and the new generation of business models and revenue models, with companies like Roblox and Discord. They're very community-driven platforms, going from one-dimensional to multi-dimensional, less broadcast in many ways. And on those platforms, another thing we can learn is they always had facilitators and moderators on each server and in each community, and it was a necessity.
If you ask anyone at Discord or at Roblox, they will tell you, you almost cannot do without it. Because, first of all, in many cases the policy wasn't clear and the enforcement was not there, so you have to facilitate. But even if you get policies right and you get enforcement right, you still need a person to keep an eye on things before things go wild, and to give people... to facilitate that conversation. So that is still happening; people are still learning the best practices there. So that's something we can learn and keep an eye on. So I think the benefit for anybody who is starting to build their own communities today, the good thing is you're not starting from scratch. There are so many mistakes that were made and learned from. I taught a class two years ago just to gather all the best practices in the space, so that people can just learn and take shortcuts. Because we made many mistakes, why repeat those? I think the good news is web three builders should get things right at the get-go, or at least avoid the mistakes that we've seen in web two.

Jon Prial: Two things coming out of web three. One, obviously you made the point, it used to be centralized data controlled by some large behemoth back in the cloud, and not only are there security risks, but there's the what-are-they-doing-with-this-data risk. So web three pushes it back out into the hands of the users. So I've got a couple of questions for you. My first one is: if I've got my data, I can give you my name or I can give you my avatar. Does anonymity change in this world? Is it still not a good thing to allow anonymous content? Clearly if there's a moderator there, maybe it doesn't matter as much. So I'm curious what your thought is. Moderation versus people still having to have validated identities, and let us not go down the blue check mark.

Tiffany: Yeah. Gosh. In our space, once it gets to the point of identities, it gets murkier, I have to say. So the way I look at this is you really have three levels. You have message-level moderation, you have conversation-level moderation, and you have identity-level moderation. And probably there is one layer added to it: do you moderate within your platform or do you moderate across platforms? It gets even trickier, right? And very quickly it gets to the controversy about censorship if you go cross-platform. So the industry standard and practice right now is you definitely moderate at the conversation level and the message level. At the message level, people usually use what we call keyword-based technologies to do moderation, and usually that doesn't suffice; I will explain why. And at the conversation level, what we call contextual AI, you use the context and metadata to determine if a message is safe or not based on different use cases. So contextual AI is usually kind of the gold standard right now for moderation. At the identity level, it's less important to know exactly who that person is, except when it's about age verification, especially for the underage-users use case. But beyond that, more and more companies, platforms, and brands are looking at reputational scores: how you behave, what the history of that identity on my platform is. Has the person been pretty toxic so far? And based on that reputational score and how that person just acted in that incident, the platform or brand or publisher decides what action to take. Is it a warning, is it a ban, is it a permanent ban? Now, cross-platform moderation is much harder, but you can't avoid talking about it if you really talk about the metaverse, right? The whole idea of the metaverse is I can jump from a gaming platform to an e-commerce platform to another gaming platform to a dating platform.
You kind of have an identity across different platforms. But the question is, do you carry the same reputational score from this game to the next game and to the e-commerce and dating platforms? Maybe not, right? And do you carry this reputation from this publisher to that publisher and the other publisher? Maybe. One thing I do want to mention, one thing I see across many industries: because digital trust and safety is a pretty nascent industry, what I encourage a lot is cross-disciplinary collaboration. This is an area where you need not only technologists; you need people with social science and behavioral science backgrounds to actually collaborate on it, which is often what is lacking when you zoom only into the Silicon Valley technology builders and forget the broader societal elements. But we are [inaudible] and technology serves the society. So you have to think about those aspects.
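The three moderation levels Tiffany describes (keyword checks on single messages, contextual checks on the surrounding conversation, and reputation-based enforcement tied to an identity's history) can be sketched in code. This is a minimal, hypothetical illustration, not OpenWeb's actual implementation: every name, the toy blocklist, the strike thresholds, and the "contextual" heuristic (a stand-in for real contextual AI) are made up for the example.

```python
from dataclasses import dataclass

# Illustrative blocklist; a real system would use a maintained keyword list.
BLOCKLIST = {"slur1", "slur2"}

@dataclass
class UserReputation:
    """Identity level: tracks behavioral history for graduated enforcement."""
    strikes: int = 0

    def record_violation(self) -> str:
        # Warning first, then temporary bans, then a permanent ban.
        self.strikes += 1
        if self.strikes == 1:
            return "warn"
        if self.strikes <= 3:
            return "temporary_ban"
        return "permanent_ban"

def message_level_flag(message: str) -> bool:
    """Message level: simple keyword-based check on one message."""
    return bool(set(message.lower().split()) & BLOCKLIST)

def conversation_level_flag(message: str, context: list[str]) -> bool:
    """Conversation level: toy stand-in for contextual AI that flags an
    all-caps reply landing in an already heated (all-caps-heavy) thread."""
    heated = sum(1 for m in context if m.isupper()) >= 2
    return heated and message.isupper()

def moderate(message: str, context: list[str], rep: UserReputation) -> str:
    """Combine the levels: any flag escalates via the reputational score."""
    if message_level_flag(message) or conversation_level_flag(message, context):
        return rep.record_violation()
    return "allow"
```

A clean message from a clean user returns "allow"; repeated violations from the same identity escalate from a warning to temporary bans to a permanent ban, mirroring the graduated actions described above.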

Jon Prial: I like the idea of, even, let's get it right on a context basis. There's going to be AI and a layer of moderation on top of that, but let's get it right. So if it happens to be an article about climate and EVs, or happens to be a section about the New York Knicks, or whatever it might be, you could begin to manage that. And it does make sense that if there are a bunch of sports providers... I'm just staying with the Knicks thing for a second. A bunch of sports providers, and I am a troll doing bad things on one particular site... It would be great if, like you said, the platforms began to recognize a list of bad people, which means identity has to play a bigger role in it. So you co-founded the Oasis Consortium, and it's really the first set of user safety standards for web three. This is kind of what we've been talking about. I don't need you necessarily to describe it unless you think you need to, but let's see if we've covered the issues you're trying to figure out in this consortium.

Tiffany: I founded it in 2020, when big brands like Unilever and Procter & Gamble boycotted platforms like YouTube and Twitter due to hate speech. And very quickly I realized the problem is not only technology; the problem is that the people on the brand side don't necessarily appreciate how the platforms and publishers do moderation, what they can do, and what they should do. So there is this expectation and visibility gap. Very quickly I said to myself, the best way to move the industry forward is to get the people on both sides together. The Oasis Consortium was founded on that basis: let's get brands, agencies, publishers, and platforms all together and discuss what has been done, what is expected, and what should be done next. And very quickly we arrived at the conclusion that there wasn't a definition of good. People know when bad things happen, but people really don't know what good is. How much should I do to not end up in a New York Times headline for scandalous news because I didn't do my moderation properly? So very quickly we said, let's build the user safety standards for the upcoming web three, learning from all the mistakes we made, so that they serve as the best practices and the golden metrics you can start with, and the idea is we can evolve them every year. And we are also bringing in more academia and other nonprofits to look at the standards. Because when you look at user safety, for example, user safety is the basis for brand safety, right? At OpenWeb, the first thing we think about is how we keep users safe, and then, as a result, you find the foundations are there to keep brands safe. I read a very interesting article where people asked: are we really doing brand safety to keep people safe or to keep brands safe? We need to take a pause to think about that. The whole hypothesis of why we need web three is to give the power back to the people, because people demand it. So you can't keep thinking the old way.
And for brands to grow, they want to partner with the platforms and technology vendors who actually understand how to make people feel safe. So coming back to what you were saying: user safety, for OpenWeb and for Oasis, is the foundation of brand safety. So we build user safety standards, and then we bring in all the people who care about brand safety to look at them and say, hey, this is what you can actually do to keep users safe. And then, if you keep users safe, very naturally you actually can keep brands safe, and not the other way around.

Jon Prial: And I have this vision now of a Venn diagram with users and brands and publishers all together and it's really this safety radiating out and in. It's just terrific. Tiffany, this was a great discussion. Thank you so much for spending the time with us. We really appreciate it.

Tiffany: Thank you, Jon. I love the conversation. Really great questions to spark this wonderful conversation with you.


In this episode of the Georgian Impact Podcast, we talk to Tiffany Xingyu Wang, OpenWeb’s first-ever Chief Marketing Officer. 

With a mission to “save online conversations,” OpenWeb wants to improve the quality of conversations online while enabling conversation-based advertising, which allows brands to connect with their most active audiences. Through connecting with audiences, publishers can garner first-party data for ad targeting — a valuable tool as publishers prepare for the disappearance of third-party cookies. 

“We're building an internet where content creators of every kind are empowered to truly own their audience relationships and thrive independently,” OpenWeb says in its mission statement. “We do this the best way we know how: by building innovative technologies that turn content creators—publishers, brands—into the hosts of thriving, healthy communities.”

Tiffany supports OpenWeb's rapid expansion to new publishers, new advertisers, and applications — all while amplifying the importance of digital safety and engaged community building. Listen to the podcast to learn more about how OpenWeb works and what makes a healthy internet. 


You’ll Hear About:


●  Background on OpenWeb.

●  The current landscape for publishers.

●  How ads can become more sophisticated on OpenWeb.

●  The importance of trust in strategic growth.

●  Shifting away from creator-led content towards a community economy.

●  Best practices for brands and publishers.

●  Identity in OpenWeb.

●  User safety standards for Web3.

Today's Host


Jon Prial


Jessica Galang


Today's Guests


Tiffany Xingyu Wang

Chief Marketing Officer, OpenWeb