How Georgian's AI team supports companies in adopting GenAI

Generative AI is redefining businesses with its capacity to write text, generate code, execute tasks, create images, and more. GenAI is fundamentally changing how companies have to build their products.

This is the first in a series of podcasts featuring our AI team, where they share their experiences on the generative AI work they've already done with more than 20 of our portfolio companies. In this episode, we are joined by two technical leaders of Georgian's R&D team, Parinaz Sobhani and David Tingle. Parinaz is the Head of AI at Georgian and David is the team's Engagement Manager for our work with our customers.

You'll Hear About:

  • How generative AI is reshaping businesses by excelling in text, code, task execution, and image generation.
  • The importance of a cross-functional collaboration approach and a top-down problem-solving strategy in technology development.
  • An overview of the historical focus of Georgian's AI team on data and machine learning.
  • Exploration of the starting points for companies, including the use of foundational models from big tech companies and the role of first-party data in differentiation.
  • Discussion on the crawl, walk, run stages of generative AI adoption, highlighting the importance of finding a golden use case and the need for a "trust-first" approach for future-proofing.
Series kickoff with Georgian's AI team
00:40 MIN
Cross-functional collaboration in technology development
00:39 MIN
The evolution of Georgian's AI team
01:03 MIN
Leveraging diverse teams to solve GenAI challenges
01:06 MIN
Navigating the current landscape of foundational models
00:59 MIN
The role of first-party data in differentiation
01:33 MIN
Finding impactful use cases for GenAI
01:24 MIN
Insights from our first GenAI bootcamp
01:34 MIN
Finding differentiators
00:47 MIN
Georgian's "crawl, walk, run" framework
02:22 MIN

Jon Prial: The material and information presented in this podcast is for discussion and general informational purposes only and is not intended to be and should not be construed as legal, business, tax, investment advice, or other professional advice. The material and information does not constitute a recommendation, offer, solicitation, or invitation to the sale of any securities, financial instruments, investments, or other services, including any securities of any investment fund or other entity managed or advised directly or indirectly by Georgian or any of its affiliates. The views and opinions expressed by any guests are their own and do not reflect the opinions of Georgian. Welcome to the Impact Podcast. I'm Jon Prial. Generative AI is redefining businesses with its capacity to write text, generate code, execute tasks, create images, and more. GenAI is fundamentally changing how companies have to build their products, and many companies are working hard to keep up with generative AI. This is our first in a series of podcasts featuring our AI team, where they share their experiences on the generative AI work they've already done with more than 20 of our portfolio companies. With me today are two technical leaders of Georgian's R&D team. We have Parinaz Sobhani and David Tingle. Parinaz is Georgian's head of AI, and David is the team's engagement manager for our work with our customers. They'll talk about how companies are building with GenAI, how we've worked with our customers, and how finding differentiation has changed with GenAI. Pari, we often talk about how technology development needs to basically be done across an entire company to some degree, no longer just by an AI team or a data science/AI team. Is this still something you agree with? Has anything changed?

Parinaz Sobhani: I do agree, and I believe even before the GenAI excitement, we were recommending exactly the same principles: data science and AI teams shouldn't work in isolation. It should be a cross-functional collaboration. We should treat it as a top-down problem, starting with the actual problem, the customer's pain points. We should have more collaboration between engineering and data science teams, and in terms of the skill set that you need, it's not only people with a science background. You also need people with an engineering background. You also need to make sure that you have the right data foundation and the right data pipelines, so it's not garbage in, garbage out. To some extent, I think all these principles are still valid. What is easier, maybe, is that building proofs of concept is easier now, because these technologies are very accessible, as is the level of performance that you can quickly get to using those pre-trained models or foundational models as a base. I guess that's also a big difference, because you can easily get to a good level of performance even without bringing your own data in. Of course, you can bring your data in and then build on top of that.

Jon Prial: Interesting. Of course, it's not just the teams, but the data and how companies build around all that. But I think we should go back. I'd like a little bit of history on how our AI team operates and how we've evolved over the past few years. Can you give me some background, please?

David Tingle: I think data and machine learning have always been at the core of Georgian's investment thesis. We developed a thesis around data and machine learning very early on in the Georgian journey. So a lot of the early hires were technical folks who had interesting perspectives on the market and on investment opportunities from a technical perspective, specifically in the area of data and machine learning. People like Mads, who's our head of R&D, or Parinaz. Both of them have PhDs in machine learning. Over time, Georgian built around that core with engineers and research scientists to form a team that is focused on helping our customers build and deliver software and code, specifically in the area of machine learning. That's what we still do in the generative AI world, but a lot of the tools that we're using have changed to incorporate these new large language models and other innovations in the space.

Jon Prial: So do the skill sets needed look totally different, or only slightly different, for companies that are trying to adopt GenAI? And of course, the question is even more important now.

Parinaz Sobhani: Yes. The number of use cases that these technologies can solve has expanded. What does that mean? It means that there might be more and more automation and less and less human in the loop, and it's very, very hard to actually evaluate and monitor the performance of these models. At the end of the day, that's a very, very hard problem, so many organizations are working out what the right framework for quality assurance would be. We know some of the challenges with these large language models, like hallucinations: they can make things up, and they are not very consistent. You might ask a similar question and get very different answers. Diverse teams have always been important, but now it's even more important to have diverse teams thinking about some of these challenges because of the number of use cases, because of more adoption, and because of more automation.

Jon Prial: Okay, interesting. But just in case any of our listeners are not familiar with hallucinations: it's a well-known technical challenge with large language models. Because they generate output by predicting the next word, that output can be fabricated, factually incorrect, or nonsensical, because they're not tied to a corpus of facts. They're just tied to a corpus of lots and lots of text that makes sentences. But enough of that. It's too easy to just spend time on cool examples that we could talk about over dinner with friends and family. Yeah, I guess I really am a nerd, but let's talk about these foundational models. In many cases, companies are going to start with a model from one of the big tech companies, like Amazon or OpenAI, but aren't the competitors of these same companies also working with these big tech models? So what's the starting point, and where do things evolve to from there?
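Jon's explanation can be made concrete with a toy next-word predictor. The sketch below is a bigram model, vastly simpler than a real LLM, but it illustrates the same predict-the-next-word principle: it chooses continuations purely by frequency, with no notion of truth. The corpus and code are illustrative assumptions, not anything Georgian or its portfolio companies use.

```python
from collections import Counter, defaultdict

# Tiny training text; the model only ever sees word sequences, not facts.
corpus = (
    "the model writes text . the model writes code . "
    "the model invents facts ."
).split()

# Count how often each word follows each other word.
next_words = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev][nxt] += 1

def generate(start: str, length: int = 4) -> list[str]:
    """Greedily extend `start` with the most frequent next word."""
    out = [start]
    for _ in range(length):
        candidates = next_words[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return out

# Produces a fluent-looking phrase chosen only by frequency, which is
# exactly why purely predictive generation can state things confidently
# without any grounding in facts.
print(generate("the"))
```

Scaled up by many orders of magnitude, with neural networks in place of frequency tables, this is the failure mode behind hallucinations: the objective rewards plausible continuations, not true ones.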

David Tingle: Like a lot of things, the starting point is where the company is currently at in terms of infrastructure: what they've already adopted and what partners they already work with. Most of the large platform providers that offer services and technology in other domains also offer solutions in this space. So that's the starting point, and a lot of the time it's quick adoption of what's closest at hand. But I think what we're seeing is that many of the partners we work with are monitoring things like the leaderboards for model performance and new releases of both open-source and closed-source models, and they're really staying on top of the ecosystem. I think there's a desire to be able to move between foundational models, and also between the technologies that layer on top of them.

Jon Prial: It will be interesting to see how startups begin to move between open-source models and closed-source models to build their products. For our listeners, open-source models are publicly available models that companies can build on top of and modify, while closed-source models are more tightly controlled by the companies that created them. There are valid business models on both sides, by the way. So you actually both mentioned models, and we know model selection is a focus area for companies, but I'd like to step up a level here and stay with differentiation. I also want to bring in data. Of course, there's the data that built these foundational models, but there's also the first-party data that companies have acquired through their applications and by working with their customers. So when it comes to where companies put their focus in finding differentiation, when and how do these factors enter the picture?

Parinaz Sobhani: I would highlight David's point that it depends on where their current status is. Have they deployed machine learning models before or not? Have they used non-deterministic systems on their own data before or not? Have they built the muscle of deploying these models and monitoring their performance over time or not? If they have, then yes, they can start by bringing their own data in and fine-tuning some of these models, or start with some sort of vector database and retrieval augmentation. But if they are just starting, most likely they shouldn't start with fine-tuning or with building large language models from scratch. Most likely, they have to start with some sort of integration with one of these APIs or large language models. They can start with a Google, OpenAI, or Cohere of the world and, over time, bring their own data in. Then they can come up with the right design, getting more feedback from their customers and collecting the right data, and for the next version of the product, they can actually fine-tune these large language models and customize them for a specific problem or specific domain. Our recommendation is normally crawl, walk, run.

Jon Prial: That's exactly what I was thinking about: you're running with your own data, but you're not ready yet. You've got to have a strategy. You have to have the base knowledge. I love that you talked about where they are today. I think what's interesting is, we talked about it affecting everybody. I also love that you talked about the rise of vector databases. I know it's a hot topic in GenAI, and I thought I'd explain it just a bit before we move on to David. Large language models like ChatGPT have a limit on how long prompts can be. So if you're bringing a lot of information in, feeding it into ChatGPT, and asking it to do some work with it, context can be lost just because of the length. A vector database can store large amounts of information, broken into smaller chunks, and then, based on the question or the prompt, the vector database retrieves the most relevant chunks (let's call them paragraphs) that contain the answer somewhere. It then feeds that information to ChatGPT along with your question and prompt. The result: you get higher-quality answers because of the question and context that have been provided to ChatGPT. It's always been the case that AI adoption is a process. Companies define the right teams and the right use cases. They choose the technologies. With GenAI showing up everywhere, there has to be even more urgency to get this right. So, David, we recently ran an AI bootcamp to help our customers learn together, and we were able to share our perspectives and experiences. Can you talk about what you saw during the bootcamp and how it helped companies that were maybe at this early crawl stage?
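The retrieval flow Jon describes can be sketched in a few lines. This is a minimal illustration that uses a toy word-overlap "embedding"; real systems use learned embedding models and an actual vector database, but the shape is the same: embed the chunks, find the ones most similar to the question, and prepend them to the prompt. The chunks and the prompt template are illustrative assumptions.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts (real systems use learned vectors)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The office is closed on public holidays.",
]
question = "How many days do I have to return an item?"
context = retrieve(question, chunks)[0]

# The retrieved chunk is prepended to the question before it goes to the model,
# so the model answers from the supplied context instead of guessing.
prompt = f"Context: {context}\n\nQuestion: {question}"
```

Swapping the toy `embed` for a real embedding model and the `sorted` call for a vector-database query gives the production version of the same pattern.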

David Tingle: I think across the board, we saw really high levels of interest. Just to give a little bit of context: Georgian organized and ran a bootcamp in June, mostly for companies in our portfolio, and it was split between educational elements, where we went through some of the foundational technical expertise that the teams would need to build, and time for each team to develop their own proof of concept, whatever they wanted to develop. So that was the bootcamp, and I think there was a really strong reception to the educational component, because the baseline understanding of how to work with these models, and how to integrate them into a semi-production-grade system from a data perspective or from a model input-output perspective, wasn't there across the board. It goes to Pari's point: when you're starting off at that crawl stage, you don't necessarily need to be fine-tuning on your own data or to have the really robust instrumentation and pipelines that previously were really important for training these models or working with machine learning. We saw a lot of teams in the bootcamp, and in all of the work that we do with portfolio companies, get a lot of mileage out of good customer knowledge, a good understanding of what the customer pain point is, and then very targeted ways of working with these models to help them learn the context and provide results that made a difference for the end user.

Jon Prial: Now, what do you think? In terms of the bootcamp, did you see one challenge more than another?

David Tingle: We had 22 participating teams. I think at the end, 19 of those built technical projects or proofs of concept that they demoed at the end of the week. So an amazing amount of participation, and across that group, a lot of breadth in terms of what they worked on and what challenges they faced. I do think there are a couple of themes that probably apply to a lot of the different teams. One is that these large language models can be really good at getting to 80% quality, where they're handling most of what you throw at them pretty well. But really getting to an excellent user experience or customer experience, where you're tackling that long tail of edge cases or problematic questions or contexts that they might not be ideally suited for, can take a lot of work. Solving that long tail of edge cases is one thing.

Jon Prial: Since we recorded this, we've completed our second bootcamp, where we dived deeper into the GenAI development process and brought different sets of skills to it. Stay tuned for more details on that in a future episode. Now, you've got to have your eye on the prize and be thinking about what those differentiators are. So as you work with customers, how do you help them find and think about what might be a differentiator for them?

Parinaz Sobhani: Thinking about differentiation starts with finding the golden use case. There are so many opportunities now to use generative AI, both in your operations and in your product offerings, but it's very important to find the one use case that can help your customers the most, the one that is most aligned with your core value proposition.

Jon Prial: It's funny that you say that, because if you go back to some of the early theses that we wrote, we talked about evaluating new technology in terms of impact to your customers and revenue to your business. There are nice-to-haves and there are needs, but when you want to get down to business, you wanted the upper right-hand quadrant, where you could really help. It sounds like that's exactly where your head is at.

Parinaz Sobhani: That's why I said it's kind of similar. Still, most of the principles are valid.

Jon Prial: So, Pari, the thought of prioritization and finding the golden use case really resonates with me. If you look at our bootcamp participants, what roles were particularly important for keeping this kind of focus, and how did that play out?

Parinaz Sobhani: We had product managers, and as David mentioned, one of the goals for that bootcamp was helping our companies build the muscle of leveraging these technologies. So we didn't push so hard to get to that level of strategic thinking, to really pick the best use case. Our goal was helping them pick a valuable use case, helping them experiment and run some experiments with these technologies, and helping them build that muscle of how to leverage them. We also thought that as they go back to their companies and showcase and demo what they have built, it's going to help the strategic thinkers understand what that golden use case might be. Because if you don't know what is possible with these technologies, it might be harder for you to identify that golden use case.

Jon Prial: Before we started recording, we talked about children and grandchildren, like we always do, so I want to stay with this crawling, walking metaphor. How do you define the different milestones of the crawl, walk, run stages of GenAI adoption that we use at Georgian?

David Tingle: I think there are a lot of learnings, as we work through this new paradigm, about what the different milestones are. Generative AI is obviously a pretty nascent field. There's been a lot of innovation over the last several months, and things are still changing rapidly. But at the same time, we're trying to form a perspective on some of the patterns we're seeing in how companies pursue these initiatives and what types of things they're trying to build, or activities they take on, to support this work. The maturity framework that we're starting to coalesce around has three stages. We're referring to them as the crawl stage, the walk stage, and the run stage of generative AI maturity. At the crawl stage, we're primarily referring to the necessary work of thinking about the most valuable use case you might apply generative AI to, or thinking about the strategy that you need to develop to support incorporating this type of functionality or technology into your existing product or environment. Really, the goal here is to identify the most impactful way of adding value for customers using some of this new technology, like large language models. So that's the crawl stage: use case identification. The walk stage is where you've started to build technical solutions, products, or features with this technology, and you're starting to put that into production. So maybe you start with a proof of concept that illustrates that you can solve the use case you want to solve with this technology, and then we see companies at this walk stage starting to put it into production. Often, these are relatively simple solutions, where they're doing single- or few-shot prompting and tuning the prompts that they supply to the underlying foundation models so that they can iterate really fast. That's the walk stage, and we're seeing a lot of examples of different use cases at this stage.
The run stage is something we're still trying to wrap our heads around, because it's evolving quite quickly, but here we're talking about more cutting-edge or more sophisticated applications that still try to solve a core use case, but where maybe now we're fine-tuning the foundation models ourselves or thinking about the challenges of deploying these models at scale within an enterprise software environment. So the run stage is really the highest level of maturity from a generative AI perspective that we're seeing at the moment.
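The walk-stage pattern David mentions, single- or few-shot prompting, can be sketched simply: a handful of worked examples are prepended to the user's input so the foundation model imitates the format. The ticket-classification task, the examples, and the `call_llm` stand-in below are illustrative assumptions, not a specific company's implementation.

```python
# Worked examples the model will imitate (the "few shots").
FEW_SHOT_EXAMPLES = [
    ("The app crashes when I upload a photo.", "bug"),
    ("Could you add a dark mode?", "feature_request"),
]

def build_prompt(ticket: str) -> str:
    """Assemble a few-shot prompt for a support-ticket classifier."""
    lines = ["Classify each support ticket as 'bug' or 'feature_request'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines += [f"Ticket: {text}", f"Label: {label}", ""]
    # The new ticket ends with a bare "Label:" so the model completes it.
    lines += [f"Ticket: {ticket}", "Label:"]
    return "\n".join(lines)

prompt = build_prompt("Exports fail with a timeout error.")
# response = call_llm(prompt)  # hypothetical call to whichever provider API the team adopted
```

Because the examples and instructions live entirely in the prompt, teams can change behavior by editing text rather than retraining, which is why this stage supports such fast iteration.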

Jon Prial: So what do you think about finding differentiators and how you help our companies find that differentiation? It sounds to me, and I loved your answer, Pari, like when you're further along and you're running, you can begin to really do fine-tuning and bring more data to it. So David, Pari talked about the need to build muscle memory for leveraging these technologies. How do we partner with our customers in a way that supports them as they start to adopt GenAI tech, but then leaves them in a place where they can move between the crawl, walk, and run stages on their own?

David Tingle: Each challenge is new, and we work very closely with the team to scope things in a way that makes sense for them. We're doing development work, we're doing technical work with these teams, learning with them, helping them contribute code, and the ultimate goal is to make sure that they can do that in the future on their own. But the way to do that is get your hands dirty with the materials, I would say, and the technical work. So very much a player- coach type model.

Jon Prial: We clearly feel the sense of urgency, and clearly our customers do as well, but this technology is moving so fast. Is this like any other technology where the work is good and sticks around for a while, or is there a different thought on how we must future- proof what we all do together?

Parinaz Sobhani: By taking a trust-first approach. What does that mean? It means taking a proactive approach to quality, quality assurance, and the trust challenges of these large language models and GenAI technologies. We have our trust principles: be very proactive in communicating what kinds of problems you're approaching, what data you use and how you use it, how you protect your customers' data, how you mitigate bias, how you prevent hallucinations, and how you can bring more consistency and reliability to these kinds of technologies. By building on top of that, putting guardrails around the output of these models, and making sure that you earn your customers' trust, you can build on that trust over time and introduce more of these technologies into your workflows and core offerings.
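The guardrails Parinaz mentions can be as simple as validating model output before it reaches a customer. The sketch below is a minimal illustration of that idea; the specific checks, the banned phrase, and the fallback message are illustrative assumptions, not Georgian's actual implementation.

```python
BANNED_PHRASES = {"as an ai language model"}
FALLBACK = "Sorry, I couldn't produce a reliable answer. Escalating to a human."

def guard(answer: str, source_text: str, max_len: int = 500) -> str:
    """Return the model's answer only if it passes basic trust checks."""
    a = answer.strip()
    # Reject empty or suspiciously long answers.
    if not a or len(a) > max_len:
        return FALLBACK
    # Reject boilerplate phrases that signal an unhelpful response.
    if any(p in a.lower() for p in BANNED_PHRASES):
        return FALLBACK
    # Crude groundedness check: require some word overlap with the source
    # the answer is supposed to be based on.
    if not set(a.lower().split()) & set(source_text.lower().split()):
        return FALLBACK
    return a

print(guard("Returns are accepted within 30 days.",
            "Our policy: returns accepted within 30 days."))
```

Production guardrails would use stronger checks, such as PII detection, factual-consistency scoring, or a second model grading the first, but the shape is the same: a deterministic layer between a non-deterministic model and the customer.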

Jon Prial: I'm glad that you brought trust into this. One really can't effectively run a business without understanding risks and rewards, and we're seeing companies that are building their products and driving their companies with purpose. But I really appreciate the work that you are doing to help companies build products for differentiation that leverage these new technologies. This is an exciting time. So much more to come. Thanks to you both. For Georgian's Impact Podcast, I'm Jon Prial.



Today's Host


Jon Prial


Jessica Galang


Today's Guests


David Tingle

| Engagement Manager

Parinaz Sobhani

| Head of AI