Adapting teams to GenAI with Marketing AI Institute's Paul Roetzer
Jessica Galang: The material and information presented in this podcast is for discussion and general informational purposes only, and is not intended to be and should not be construed as legal, business, tax, investment advice, or other professional advice. The material and information does not constitute a recommendation, offer, solicitation or invitation for the sale of any securities, financial instruments, investments, or other services, including any securities of any investment fund or other entity managed or advised directly or indirectly by Georgian or any of its affiliates. The views and opinions expressed by any guests are their own and do not reflect the opinions of Georgian. Hi everyone, and welcome to The Impact Podcast. I'm your host and Georgian's content editor, Jessica Galang. When we last had today's guest on the podcast, back in 2020, we were talking about AI and marketing and how to use things like automation tools to make our jobs easier. Now, in 2023, generative AI tools are basically the biggest topic of conversation in marketing, and it feels like so much has changed. While there may be some anxiety around the ways generative AI will impact our work as marketers, I think a lot of marketers are also really excited about the new opportunities it will bring. So, we are here to break that down with Paul Roetzer. Paul is the author of several books on marketing and AI, including Marketing Artificial Intelligence, and he's the creator of the Marketing AI Conference. So I'm really excited to have Paul on the show to talk about how generative AI will impact marketers and also how it'll impact organizations. So, welcome Paul.
Paul Roetzer: I can't believe it's been three years. I feel like we've lived a lifetime of AI progress in three years.
Jessica Galang: So, Paul would love to hear about your journey working with generative AI and some of these tools. Is there anything that you've discovered and anything that stands out to you, especially in the last few months as so much has changed?
Paul Roetzer: I think for a lot of people, ChatGPT was their first real hands-on experience with artificial intelligence, and so language generation is really where a lot of people's minds go with generative AI. Obviously there was DALL·E 2 prior to that with image generation, and now we have Midjourney and Stable Diffusion. So image and language generation are the two main things. But I think with language, the assumption is that it's a writing tool. In a lot of cases, people think of it as a writing replacement tool. I actually don't think of it that way. I don't use AI to write anything. I'm a writer by trade though, so I actually enjoy the process of writing. I find it therapeutic and helpful to think things through. It's how I think and analyze things. That being said, I use GPT-4 all the time for ideation and summarization. We use other tools for transcription. We develop outlines for things. So I'm always using it, in many ways, more as a strategy tool than a writing tool. A lot of the stuff I use GPT-4 and other AI writing tools for is stuff that no one would ever actually see publicly. The stuff I write and put on LinkedIn and other places, I don't use writing tools for. So, strategy tool, ideation tool, that's the thing that to me is undervalued right now. And then, in the very near future, a data analysis tool. A lot of marketers and business professionals are going to use it for data analysis: in the same way you would give a text prompt to get an output, it'll analyze your data for you.
Jessica Galang: It's interesting you say that, because I think we're both former journalists, so I really understand that creative process and that need to put in your 10,000 hours in order to actually get good at this thing. And my perspective as well is that in order to prompt the gen AI tool properly, or be able to create something good, you need to know what good actually is in the first place. I'd love for you to break that down a little bit. You mentioned that you use it as a strategy and ideation tool. In what ways are you doing that specifically, especially on the strategy side?
Paul Roetzer: Yeah, so I'll do it with ideas around business models. I was thinking about launching a separate company, so I go in and I say, "Okay, what are the fundamental elements I need to build a startup?" as an example. I can go Google search that and spend an hour pulling the latest list, or I can sit there and think back on the different companies I've launched over the last two decades. But in five seconds I can get a full-blown checklist from GPT-4 to launch a company and say, "Make sure to include all the financial to-dos, all the legal to-dos," and it'll actually generate a tactical list of things I need to do to build a company. So really anything where I have to think about a strategy, you can apply this. I did another one for a friend of mine who runs a dental practice. This was the prompt: "Costs are going up because of inflation, and pricing is controlled by the insurance companies. What are ways that dental practices can drive or maintain profitability despite the increase in costs?" And it spit out 10 things immediately that dental practices could do to drive efficiency. Not being the expert, I just sent them to my friend and said, "Here, you're the dentist. I don't know if these are any good." And he came back and said, "Man, three of these are absolutely worth exploring." So to your point, he's the expert, so he can assess the output and know if it's any good, but I know how to go in and prompt the system to get something out of it. So if you have domain expertise, you're way better at prompting the machine and then figuring out if what it gave you is any good. You still need the expertise and the knowledge to assess the AI.
Jessica Galang: I think there's a lot of questions about how gen AI is redefining creativity and what that means when you can prompt a thing and it could make something that closely resembles a human and is trained on human expertise as well. What do you think about gen AI's role in enabling creative work and how that's redefining creativity?
Paul Roetzer: If you allow it to be, it definitely is an assistant in creation. You can use it to enhance your own creativity, to stimulate ideas when you can't get the ideas going and you're staring at the blank page. So I do think that people have to really think about these language, image, and video generation tools as ideation engines, as a true assistant there to help you along: take an idea and expand it out, or write a thing and then make it more empathetic. Really think about how it can help you improve and enhance the output and expand your own creativity. I think so many creative people immediately assume it's just going to replace them, and so they're hesitant to even dive in. They don't even want to experience it. And my feeling has been the opposite is true. My friends who are graphic designers can get way more value out of an image generation tool than I can, because I don't even know what to tell the thing. I'm not a visual person. I can't explain what I'm trying to get out of it. Just think about a logo design process, for example. If I was going to work with a graphic designer, I would try to put into words what's in my head of what I want that logo to look like, and I wouldn't be very good at it. Whereas I can go into DALL·E 2 or Midjourney and start playing around with prompts to create imagery. That starts to inspire me. It's like, "Yeah, that's kind of what's in my head." And now I take that and give it to my designer and say, "Here you go. This is what I'm envisioning. Now go do your thing." Now they have a starting point that's more than my words. That's the biggest challenge I've found working with designers, and, my wife's a painting major, working with artists: they just see things. I do not have that ability.
So in many ways, AI can enhance my creative ability, because I can now actually get my words into something that gives me a starting point to go work with the actual artist to create something.
Jessica Galang: When you were last on the podcast in 2020, you mentioned that adoption of AI tools was low at the time. Are you seeing a change now since it's a little bit more accessible?
Paul Roetzer: Oh, yeah. So we do an Intro to AI for Marketers class every few weeks. We've done 25 of them now, starting back in November of 2021. And since the start of 2023, so we've done six so far this year, we ask, "Have you experimented with ChatGPT?" The first time, in January, about 63% of people said yes. The last one we just did was 86%. And now when I ask that question when I'm giving talks, it's basically a hundred percent of the room. Everyone has at least tried ChatGPT. Now, adoption in terms of infusing it into your daily workflows, if we want to consider that adoption, my guess is it's still low overall, but ChatGPT and generative AI have certainly advanced adoption rates and infusion into processes. I think image generation and language generation became the very obvious gateway for people to start using AI on a daily basis that they just didn't have back in 2020.
Jessica Galang: Are there ways that marketers or even yourself are thinking about using gen AI as part of the broader content strategy? Because I feel like right now we're in a place where it's really helpful for ideation and just ad hoc, getting a vibe check if you're doing some work. But are there ways to tangibly, at this point, include it as part... Like, this is part of our research, for example. This has to be part of it. There are certain prompts that we use. Are we there yet when it comes to actually making gen AI part of the formal content or marketing strategy?
Paul Roetzer: Yeah, definitely. I'll just use our podcast as an example. In our podcast, there are between 18 and 21 steps we go through, from the curation of topics each week to then selecting the main ones. Our format is three main topics and then rapid fire. So me and Mike, my co-host, basically share links back and forth throughout the week. He then takes those, synthesizes them, and puts them into an outline. So there's not much AI involved there. But then we record it, and then we do a transcription. So AI does the transcription. Then we can take chunks of the transcription, drop them into GPT-4, and do a summarization of those transcriptions. Those summarizations become blog posts, so we turn each main topic into a blog post, and then the whole thing. So that podcast just created four blog posts. We also use AI to help split each of those segments up into videos, so we create four YouTube videos as a result of it. Then we use AI on the transcript to create social media shares for LinkedIn, Facebook, and Twitter. We use image generation technology to create the images for each of the blog posts and the social shares. So we've infused AI into a process that was already happening and drove massive efficiency in the entire production and promotion of each podcast, using those tools.
Jessica Galang: So I guess it's just a matter of understanding the different points throughout your own process and integrating it where it fits for you.
Paul Roetzer: Yeah.
Jessica Galang: So, actually I wanted to ask as well, a lot of what we cover is actionable advice, of course, for founders on our show who are thinking about how to integrate it into their business. Do you have any advice for leaders who are getting their teams used to using gen AI tools? Because, as you mentioned, there's some hesitation to even adopt these tools, or people can sometimes give up easily when it comes to prompting and it doesn't turn out how you want it to. So, what was your journey in getting your team to embrace gen AI?
Paul Roetzer: It starts with education. You can't just assume the team's going to figure this stuff out. Yes, they're maybe using ChatGPT already, and maybe they don't even want to tell you because they're not sure if they're allowed to, things like that. So you have to start with education within the team, so everyone's on a level playing field of what exactly this technology is and how it works. We then advise people to build an AI council: have a few people internally, or a cross-functional team if you're a bigger organization, who are thinking about this technology on a regular basis and how it impacts the business moving forward. We always recommend having responsible AI principles and generative AI policies. The responsible AI principles are bigger picture: how do we think about the application of AI in our organization? How do we put humans at the center of that application? Generative AI policies are: here are our policies for how and when we use generative AI technologies. We do or we do not create blog posts with it. We do or do not use image generation tools. So you're setting those guidelines. From there, I would look at your team and do an AI exposure assessment: how likely is AI to automate portions of these people's jobs over the next one to two years? So you start to think about the impact it's going to have on your team. And then the last thing I tell people is build an AI roadmap. Set a three-year vision for becoming a smarter organization. We don't know what the tech's going to look like in three years, but you can put the policies and processes in place to stay at the forefront of this stuff and figure out how to infuse it into your own company.
Jessica Galang: Okay. There's a lot of helpful advice there. I want to touch on the responsible AI bit later in this podcast, but did the AI council help inform some of the AI principles that you adopted? What were some of the things you were thinking about when it came to creating that policy around generative AI?
Paul Roetzer: So for us, we're inventing all of this in flight, because there aren't really models to go look at. Our responsible AI principles came from years of thinking about the fact that the industry needed them, and that most people who had them were the SaaS companies building the tech; the average marketer and brand didn't have them. So I literally just wrote 12 principles one morning, published them, and put them out into the world and said, "Here, it's Creative Commons. If you don't have one at your brand, which most likely you don't, use this as a starting point. Think of it as a template." For us, the council was never really a formal thing we created, because we were doing all of this every day; it was what we did running the institute. We had to stay at the forefront of all of it. But we're starting to see some people within our community who are doing this. There's one big technology company in particular that has 15 people on their AI council, formed by people raising their hands that they're interested in being a part of it. They got to the point where they're doing weekly meetings, developing processes for how to share information back and forth, and then building action items based on that. So the AI council, to me, is a relatively new thing. I'm hearing of organizations that are starting to build them, but the simplest way I think about it is just formalizing information sharing. You may have three or four people on your team who are obsessing over this AI stuff right now. They're listening to podcasts, reading articles, and maybe on Slack or email they're just sending each other stuff. That's great, but that's not scalable. So the council helps start to put guardrails around this: okay, here are the five people today who are really interested in this.
Let's bring them together, create the council, and have some weekly or monthly meetings. And the meeting agenda is going to be: what are the latest things? Is anyone in our company running pilot tests we need to know about? What are the core tech companies we're working with, and what did they announce about their AI initiatives this month? So you can start to build your own framework of what that should look like.
Jessica Galang: And diving into those responsible AI principles, trust and data privacy are really at the forefront of the conversation. For example, do you know how companies are using the data that you input, and are you sharing sensitive information? Or even just that this technology is evolving so quickly, what does it mean in the future for jobs? And given that these models are trained on so much of other people's work and information, what are the ethics around using them? So I'm curious, what are the responsible AI principles that you adopted? And what is top of mind for you when it comes to responsible AI?
Paul Roetzer: So, we talk about believing in a human-centered approach that empowers and augments professionals, and that technologies should be assistive, not autonomous; again, you start to get a sense of the human-centered approach to this. Humans remain accountable for all decisions and actions, even when assisted by AI. The human has to remain in the loop in all AI applications. We believe in the critical role that human knowledge, experience, emotions, imagination, and creativity play, and we will promote those capabilities as part of future career paths. There are elements that get into the fact that the law isn't going to catch up anytime soon. The regulatory bodies aren't going to fix this for everyone, so brands need to have a moral compass, and they need to make decisions that align with the values and principles of their organization and the culture they're building. We talk about the need to not dehumanize your customers by just turning them into data points, because data trains all this stuff, and to not make decisions that go against the best interests of the humans on the other side, who are those data points. And then a big one is the commitment to upskilling and reskilling team members who may have larger portions of their jobs intelligently automated in future years. Your first instinct isn't "let's save costs"; the first instinct is: how do we redistribute their time to other, more fulfilling activities? So those kinds of things, where it's just very human-centered and always: how does this benefit the human?
Jessica Galang: I think a big principle coming out of that is having a moral compass as well. I feel like tech historically has not had the greatest reputation on this. So, what does it mean for a business that's looking at this and wants to adopt this technology ethically and responsibly? What does it mean to integrate a moral purpose or a moral compass into that work? Just for background, at Georgian we have a thesis called product-led purpose, where we believe that companies that have a purpose can use it to drive growth opportunities and strategies while making a positive impact in the world, which is why I'm particularly interested in this area. So, long question short, how does a moral compass tangibly tie into building an AI strategy?
Paul Roetzer: My instinct is you either have one already or you don't. And if you're working at a company, you already know whether or not they have a strong moral compass. So, the moral compass isn't being created because of AI. It's you're good people doing good things in the world, and so you're going to apply AI in the most responsible way possible. If you're working for an organization that generally takes shortcuts on data privacy and policies and doesn't value their people or their customers, then you probably already know that they're likely going to take shortcuts with AI too. So, some of this is just common sense building on what great companies already have; great culture, great people, great missions. And saying we're going to follow the same patterns. Even though AI is going to enable a bunch of really interesting things, we're going to have to make some tough decisions not to use all of the capabilities of AI, because sometimes it's going to cross lines for us that, yeah, it's not against the law, but we're just not going to do it, ethically or morally.
Jessica Galang: You also mentioned the need to not just look at your customers as data points. Have you thought about what guardrails companies can use to ensure that they're protecting their customers' privacy and considering the human on the other side of that data?
Paul Roetzer: I think, again, it really just comes down to how you treat people now. It's not like we didn't have the ability to do personalization and capture a bunch of data and buy third-party data and blend that in. We've had the ability to do a lot of these things for the last few years through basic machine learning capabilities. Generative AI wasn't there yet, but machine learning was, where we could dump in all this data and make a bunch of predictions about people. And you could go get all kinds of data about people. So some companies probably had guardrails, saying, "We're not going to know that about that person. Yes, we could buy that data and know it, but we don't need to know that to do what we're doing." Again, I think this comes down to the data practices and policies the organization has probably already thought through, and making sure that if they have been ethical about that to date, they don't go down the path of feeling tempted to break some of that. Because what has happened in tech is that some of the ethical AI teams at these big companies have been let go. Not whole teams in every case, but certainly some high-profile cases, and some others that probably weren't as high profile.
Jessica Galang: For sure. And do you think in the future there are going to be editorial policies or disclaimers for work that uses AI? Because, for example, when you're reading a news article, if a journalist is connected to a subject, they have to disclose that. And I know that I've seen some organizations put out policies about how they're using AI, and as you've mentioned, you'll never use it to just straight-up write blog posts and things like that. Do you think being transparent about that is going to become more popular among organizations as well?
Jessica Galang: Yeah, so you're the head of the Marketing AI Institute. What role do organizations like yours or even just tech have in coming together to keep this top of mind and create some of these policies or best practices?
Paul Roetzer: For me, putting out the Responsible AI Manifesto and making it basically open source, Creative Commons, was an effort to make a bigger play into this space. Because what happened was, before ChatGPT, nobody cared about any of this. The ethicists cared, but the general business world didn't, because they didn't understand it. They didn't realize why it even mattered, because they didn't understand what a language model was. They still don't. There's more awareness now about generative AI. And as that awareness level has skyrocketed, smart, moral people are starting to say, "Well, hold on a second. There's more to the story here." And it's like, "Yes. Okay, thank you." We're now at the point where we can talk about the important stuff. But going back to 2019, at the first Marketing AI Conference we did, we had a panel on ethics, and what I told my team was, "It is a general session panel. I'm going to force the people coming to this conference to hear this conversation from day one." And that's how I feel about it now: anytime I go give a talk, I make sure to infuse the Responsible AI Principles into that talk. Whether it's in front of 300 people or 3,000, doesn't matter, we're going to have this conversation. And that's my way of trying to advance it as much as possible: make sure it's always part of the conversation, whether it's an event we're running, online courses, presentations, whatever it may be.
Jessica Galang: Okay. And looking a little bit towards the future and how we're using some of these tools, are there any trends that you're really excited about? It can be about marketing, but if you want to expand that broadly in terms of how it'll change businesses and how it'll make people's lives potentially easier.
Paul Roetzer: There are definitely things I'm excited about, and there are things I worry about. I think the next major breakthrough is these action transformers, the ability for the machines to not only generate things but take actions on your behalf. Think about it booking your flights for you, or planning a trip and then actually going through and scheduling the reservations. Doing everything for you, not just giving you a "here's what to do." We're seeing the early signs of that with these AutoGPTs. They're not good yet, but they're going to improve really quickly. So I don't know that I would say I'm excited about those; I worry about them probably more than I'm excited. But in terms of the things I do look forward to and that give me hope, I believe we're going to enter a rapid phase of innovation and entrepreneurship. I think you're going to see amazing companies built that disrupt industries and create lots of jobs along the way. Not as many jobs as a traditional company would have, but you're starting from scratch, so any job is a job. I feel like we're going to see entrepreneurship take off. I feel like we're going to see people emerging within companies at every level who raise their hand and want to help figure this stuff out and lead. And since most organizations don't have people internally who understand the business side of AI, there's lots of opportunity for people to advance their careers and do really interesting things very quickly. So I'm excited by that. I'm excited about the idea of enhancing creativity. I think it's going to be a little messy, and I think it's going to be disruptive to creators and artists, but I think there's going to be a lot of positive that comes out of that.
And I just think, at a broader level for humanity, there are going to be massive scientific breakthroughs, incredible things we just didn't think were going to be possible in our lifetimes. Probably in the next 5 to 10 years, you're going to start hearing about these massive breakthroughs in biology and astronomy, just amazing, inspiring things for humanity, and that gives me hope. I think a lot of good will come from this in the end, but it won't be a straight line and it won't be all good along the way.
Jessica Galang: Amazing. That's all from me, but was there anything that we didn't cover, whether it's related to your work or just with generative AI and marketing? Or the world? I feel like we got into some pretty deep topics here.
Paul Roetzer: It goes way deeper. I'll tell you, in some of the conversations I end up having after these talks, or at private dinners, people are scared. I think the biggest thing for me is: if this topic is abstract to you, if it feels overwhelming, if it scares you a little bit, you are not alone. The majority of the world feels that way. I've been thinking about this stuff and working on it for 12 years. I have had time to come to grips with our future and what this stuff means, and I spend most of my time now trying to help other people understand it and figure out what it means to them and how they can leverage it. But being real, it's going to hurt sometimes. My wife, like I mentioned, is an artist, and the first time I had to show her DALL·E 2, it wasn't an easy thing for me to do. And I had to show my 10-year-old daughter, who wants to be an artist, that AI can create images. That was a weird experience. And then we held an AI for Writers Summit in March. We thought we were going to get a thousand people. We had 4,200 registered, because writers are scared. They think they're going to be replaced by these things. I would encourage you to keep learning. Don't be afraid to ask questions, and don't feel like you're alone in fearing that this is going to take your job. A lot of people are in the same spot. So I think the more people who are talking openly about this, the better off we're going to be as a society, and certainly as an industry.
Jessica Galang: Awesome. I think even before gen AI started making headlines, this idea of how to adapt with AI and how to showcase your value as a marketer or as a creative has been top of mind for a lot of us. So it'll definitely be a learning curve, but to your point, I think it helps people to know that they aren't alone in this journey. So Paul, thanks again for coming on the podcast and for diving deep into AI adoption and how to build teams and values around AI adoption. It's so appreciated, and thank you again.
Paul Roetzer: Happy to do it.
On this episode of the Georgian Impact Podcast, we're talking to a guest we last had on in 2020, when we were talking about AI and marketing and how to use things like automation tools to make our jobs easier. Now, in 2023, generative AI tools are basically the biggest topic of conversation. So, we're here to break that down with Paul Roetzer.
Paul is the author of several books on marketing and AI, including Marketing Artificial Intelligence, and he's the creator of the Marketing AI Conference.
You'll Hear About:
- The evolution of generative AI tools in marketing.
- The role of AI in ideation and strategy, rather than writing.
- The adoption and integration of AI tools into marketing workflows.
- The impact of generative AI on content creation and strategy.
- Building an AI council within organizations.
- Developing responsible AI principles and policies.
- The importance of a moral compass in AI application.
- Transparency and disclaimers for AI usage.
- The future of AI in entrepreneurship, creativity and scientific breakthroughs.
- Addressing fears and concerns related to AI in the workplace.