Olivia Hack
Welcome to Innovators and Ideas, an audio series from RBC Capital Markets, where we engage with visionary leaders shaping the future of technology. Today, we're coming to you from RBC’s Global TIMT conference in New York City, speaking to Peter Guagenti, President and CMO of Tabnine, an AI code assistant that accelerates and simplifies software development. Peter, glad to have you here. Why don't you tell us a little bit about your company, what you do and the audiences that you serve?
Peter Guagenti
Tabnine is the originator and creator of what is now called the AI code assistant category. The way I'd think about this is, these are generative AI tools and agents that help you across the software development life cycle. What's interesting about Gen AI is that we all believe it will change how everyone works, right? I think that's a generally accepted thing. What's really interesting is that LLMs were first very, very good at the most highly structured, repeatable languages, and code is the most highly structured, repeatable language there is. So as a category, I think we're seeing the earliest innovations in how work is changing through the software developer, and we're the canary in the coal mine for what's probably going to happen over the next three to five years in other knowledge worker jobs, which is super exciting.
Olivia Hack
Yeah, that's amazing. That makes sense. So it's becoming a kind of crowded market for AI-assisted coding platforms. What do you think Tabnine’s biggest competitive advantage is?
Peter Guagenti
Look, I think it's really important to say it's cluttered from a who-got-funded perspective; it's actually not cluttered from a who-has-code, who-has-actual-agents-and-applications perspective. I think there's really still only a handful of us who are actually operating and doing this at scale. We're the originator of the category. We're still number two behind Copilot, right? I don't have Microsoft's distribution, otherwise I'd be much larger. But we actually hold our own, and the reason why people pick us is what we like to describe as the three P's: privacy, personalization and protection.

The privacy side is actually more critical than you might realize. Tabnine was built from the beginning to be deployed privately on our customers' infrastructure, including being completely air-gapped. We have our own private models, and we can use your private models, in a way that you don't even have to be connected to the internet. So we have military use cases, government use cases, DoD contractors, that sort of thing, and that's been a big differentiator for us. We do have a SaaS version, but even that is single tenant, so it's all separated. The privacy also leads into the security, compliance and protection side. We have our own model that was exclusively trained on permissively licensed open source code, so there is no possibility whatsoever of license pollution, and that makes a big difference if you're a software company, right?

But the personalization side is where things really get interesting. We believe that the AI code assistant market is already stratifying. There's a bunch of folks at the bottom who are really focused on individual developers building new applications, and they're trying to be like low code, no code; that's really what they're replacing. At the top end of the market, we're looking at opportunities where it's really about helping individual large enterprises, and that's where we're focused. For large, mature engineering organizations, the AI code assistant can't behave like a software engineer off the street. It has to behave like your employee of the month. It has to know the ins and outs of your code, your processes, your procedures, your standards and practices. That's what it takes for AI-generated code to get past the pull request. We've got really advanced techniques and technology to do this, and that personalization has been the reason why the large enterprises pick us.
Olivia Hack
Yeah, I love that. So, looking a little beyond privacy, what other hurdles do you think developers need to overcome when it comes to implementing these AI-powered solutions? You’ve talked about AI software testing, for example.
Peter Guagenti
You know, the adoption curve right now is actually mostly being driven by understanding, and then adapting to, how this changes your actual processes. The SDLC will fundamentally change with these tools. It will. We're already seeing it, right? We have fully autonomous testing. We have fully autonomous documentation. We just launched a comprehensive code review agent that can scan every pull request and identify deviations from security standards, performance standards, compliance standards, your corporate policies. But think about what that does to how you work. We have these very well defined processes as engineering teams that are really thoughtfully constructed, and now we drop a bomb in the middle of one piece of it. So what we see with the largest customers we work with is that they're not adopting it all at once, right? They're picking a cohort, seeing how that changes their processes, adapting their processes, then adding another cohort, and another. And so I think the things that an engineering team and developers and IT decision makers should be thinking about are: how do I acknowledge and accept that this is going to change my process? How do I make sure it's changing it for the better? How do I adapt what I'm doing? And then I do think there's a change management thing here that we're not fully acknowledging. People are afraid that AI is going to take their job. They are. So acknowledge it. I don't think that's actually going to happen; the backlogs in most of these organizations are so ridiculous. What it's actually going to do is increase our velocity and then up-level our work, right? I really believe that, but I think you need to acknowledge that people are concerned, and then make sure that as you're rolling these tools out, they're part of the process and they're adapting the work with you.
Olivia Hack
Yeah, that's interesting. I want to talk a little bit about something that has been a major theme for AI, and that's, you know, ethics. What steps would you say Tabnine is taking to ensure ethical use of AI in software development?
Peter Guagenti
So for us, ethics takes a couple of shapes, and we actually pride ourselves on grounding the business in some pretty strong cultural expectations of ethical use of AI in software, right? We've done a few things differently than others from the very beginning. Before the other LLMs had emerged that were strong in this, we built our own LLMs, and from the beginning we focused only on that permissively licensed open source code, because we personally believe that unless you've explicitly given permission, it is not appropriate, right? So when the large LLM providers were scraping every public repo, every piece of code they could get access to, we didn't agree with that and we would not have done that. We understand why others disagree, but we really like being able to say that for the stuff we're creating, we're following explicit permissions.

The other thing, from the very beginning, is we always agreed on a zero data retention policy. We will never collect your proprietary information, your user data, your code; we will never collect any of it, because we think, at the end of the day, privacy is paramount, right? We have people who are looking at this and saying, 'Wait a minute, you're exfiltrating my code, my requirements, my user data to another system.' There's not a lot of trust there, and for good reason; this information has been abused in the past. So from our perspective, when we built these things, part of that ethics was the social contract we have with our customers: we're going to collect that data only because we need it to run against the inference server, and it's only stored in memory. As soon as the inference server responds and the result goes back to you, it's wiped from memory. Nothing gets stored. Any improvements we make to the product, you must explicitly opt into. And even when you do, all we collect is what was the prompt or the specific request you were making, and then did you accept it or not? Because if you didn't accept it, we can look at that acceptance rate and say, 'Okay, there's a pattern here; we need to do a better job at this, we need to do a better job at that,' and that makes a huge difference. And once again, we're still trying to not collect anyone's proprietary information in that process.
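As a rough illustration of the zero-data-retention contract described above, here is a minimal Python sketch. It is not Tabnine's implementation; `run_inference`, `handle_completion`, and `TelemetryEvent` are hypothetical names. The idea is that the prompt exists only in memory for the duration of the inference call, and the opt-in telemetry records nothing beyond the request that was made and whether the suggestion was accepted.

```python
from dataclasses import dataclass

@dataclass
class TelemetryEvent:
    """Opt-in only: the request that was made and whether it was accepted."""
    request: str
    accepted: bool

def run_inference(prompt: str) -> str:
    """Stand-in for a call to a private, single-tenant inference server."""
    return "suggested code"  # placeholder for the model's response

def handle_completion(prompt: str) -> str:
    # The prompt and code context live only in memory for this call;
    # nothing is persisted, and the variables go out of scope on return.
    return run_inference(prompt)

def record_outcome(event_log: list[TelemetryEvent], request: str,
                   accepted: bool, telemetry_opt_in: bool) -> None:
    # Recorded only if the customer explicitly opted in, and limited to the
    # request and the accept/reject signal, which feeds the acceptance-rate
    # analysis mentioned above.
    if telemetry_opt_in:
        event_log.append(TelemetryEvent(request, accepted))
```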
Olivia Hack
Yeah, that's fantastic. I heard something recently, that something like 70% of people are using AI tools that aren't permitted by their enterprise organization.
Peter Guagenti
Oh, there's definitely shadow AI. We saw shadow IT, by the way, at the explosion of the web. And now we're seeing shadow AI in the same way, because people are hungry for these tools. They really want them. And so they're just doing end runs around corporate policy.
Olivia Hack
Yeah, exactly. And there's a huge danger to that.
Peter Guagenti
People are copy and pasting proprietary code into ChatGPT and asking it questions, yeah. And the terms of service on the public services are not the same as the enterprise ones. What's the old saying? If the software is free, you are the product.
Olivia Hack
So I want to talk a little bit about your chat agent. This seems to be a really significant shift towards natural language interfaces for developers. What has the response been since you introduced this?
Peter Guagenti
Incredible. I mean, we were already doing this. If you had coding questions, you were asking your peers on Reddit, or you were going and searching Stack Overflow. I think the one thing that we forget about software engineering is that it is not a lonely process. You do work with others. When we originally started talking about these tools, we used to call them AI pair programmers, right? I think we've moved beyond that now, where these are actual AI agents and workers that are doing these things. You're offshoring to silicon now as much as you're getting an angel on your shoulder, but it was a natural progression for people, right? And it actually goes both ways. One of our most popular AI agents is what we call the onboarding agent. When you open a new project, if you've not been onboarded to that project before, you can click a button and Tabnine will explain what the project does: this is what it does, this is what it's connected to, here are its dependencies, these are its other requirements, and it gives you follow-on questions you might want to ask. So we were already conversational; we've just moved that conversation out of 'I've got to go to Reddit, I've got to go to Hacker News, I've got to go ask my buddy at the water cooler.' Now I just ask Tabnine. And since Tabnine is aware of your code base and your requirements, it gives you really thoughtful answers that are based on the company's knowledge.
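To make the onboarding agent idea concrete, here is a hypothetical sketch, not Tabnine's actual agent: it gathers basic project context (a Node-style `package.json` and README, since Node comes up later in the conversation) and turns it into a prompt asking for an overview, dependencies, and follow-on questions.

```python
import json
from pathlib import Path

def gather_project_context(repo_root: Path) -> dict:
    """Collect a few high-signal facts about an unfamiliar project.
    Illustrative only; a real assistant indexes far more of the codebase."""
    context = {"name": repo_root.name, "dependencies": [], "readme": ""}
    manifest = repo_root / "package.json"
    if manifest.exists():
        pkg = json.loads(manifest.read_text())
        context["dependencies"] = sorted(pkg.get("dependencies", {}))
    readme = repo_root / "README.md"
    if readme.exists():
        context["readme"] = readme.read_text()[:2000]  # keep the prompt small
    return context

def build_onboarding_prompt(context: dict) -> str:
    """Ask for what the project does, what it connects to, its dependencies,
    and follow-on questions, mirroring the onboarding agent described above."""
    return (
        f"You are onboarding a new engineer to the project '{context['name']}'.\n"
        f"Dependencies: {', '.join(context['dependencies']) or 'unknown'}\n"
        f"README excerpt:\n{context['readme']}\n\n"
        "Explain what this project does, what it is connected to, and its key "
        "requirements, then suggest three follow-on questions to ask."
    )
```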
Olivia Hack
Yeah, that's fantastic. It kind of de-risks knowledge transfer issues, and it's obviously a time saver.
Peter Guagenti
Well, actually, the thing I love about the onboarding agent is the thing we never admit: we onboard new developers when we hire them, we take them through all of these things as part of that drinking-from-a-fire-hose onboarding to a new company, yeah? And then the minute they open the project up, they've forgotten everything. And then we don't give them enough time to re-onboard, or they feel stupid and they don't want to ask the question again. So I think there's tons of lost productivity in either senior engineers having to go through the process again, or new engineers not wanting to impose, so they have to go and pore through the code and figure it out. The AI agents are pretty remarkable for this.
Olivia Hack
So I know you're probably always gathering user feedback when it comes to refining these tools. Are there any new or upcoming features or capabilities that you're excited about as you've been working on building these products?
Peter Guagenti
Yeah, so the code review agent is actually the one I think we're most proud of and most excited about. This code review agent is really remarkable in that it ingests your corporate policies, so it is not our view of software best practices. The way you set up Tabnine’s AI code review agent is you give us, in plain language, what your engineering policies and standards are. And because Gen AI reads it like a human, we read through it and generate rules off of it, and then we expose those rules and say, 'Okay, did we get it right?' Then you can change the language and fine-tune it, and you can turn each of them on and off based on what your use case is, and all that stuff. It's really remarkable. And that's v1. V1 is doing this, and it's catching stuff that none of the static code analysis tools ever did. But then it's super personalized, because if you actually look at the code review process at, say, three different banks (you work for RBC; I've worked with all the major banks), their code review policies couldn't be more different. They're looking for similar things and there's some overlap, but their expectations are unique to the business. And so I love that this thing can actually behave like an employee, and I think we're going to see many waves of that.

The other thing I'm really excited about is all of the agentic work that we're doing right now; that's where we're getting real value. Most of the productivity gains we've gotten from AI before have been just in the chat interface. The agents now are offloading whole pieces of work, right? So offshoring to silicon is the mental model: I'm not giving it to another worker, I'm giving it to the AI to solve. The first part of the SDLC that I think will probably fully automate as a category, not just Tabnine, is testing. We are already on version three of our testing agent. With that testing agent, you grab code, it generates a unit test, it generates a test plan, and Tabnine will iterate until it reaches the appropriate amount of test coverage before it even comes back to you and gives you the tests. The next version we're working on then just runs the tests and comes back with the results for you. So that's really amazing. And let's be clear, no developer wants to spend their time doing that; it's not a good use of their time. I think we forget sometimes that software engineers are actually creative people. They're makers, right? They want to make things. They don't want to sit there and write tests and run the tests.
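The testing agent's iterate-until-coverage behaviour can be sketched as a simple loop. This is an illustrative outline under assumptions, not Tabnine's agent; `generate_more_tests` and `measure_coverage` stand in for the model call and a coverage tool such as coverage.py.

```python
def generate_more_tests(source_code: str, existing_tests: list[str]) -> str:
    """Stand-in for a model call that proposes additional unit tests
    targeting paths the existing tests do not yet cover."""
    return "def test_placeholder():\n    assert True\n"

def measure_coverage(source_code: str, tests: list[str]) -> float:
    """Stand-in for running the suite under a coverage tool and
    returning the fraction of lines or branches covered."""
    return 0.0 if not tests else 0.85

def testing_agent(source_code: str, target_coverage: float = 0.8,
                  max_rounds: int = 5) -> list[str]:
    """Generate tests, measure coverage, and keep iterating until the target
    is reached (or a round limit), before handing the suite to the developer."""
    tests: list[str] = []
    for _ in range(max_rounds):
        tests.append(generate_more_tests(source_code, tests))
        if measure_coverage(source_code, tests) >= target_coverage:
            break
    return tests
```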
Olivia Hack
So what do you see as the next big evolution in AI-driven development tools? I know we've talked a little bit about your features and capabilities, but bigger picture, I suppose.
Peter Guagenti
I think two things are going to emerge very quickly here. First off, just wrap your head around the state of the art. State of the art a little over two years ago was autocomplete as you type. That was state of the art. State of the art 18 months ago was a chat agent I can ask questions. State of the art now is agents that do whole tasks. Like, we rolled out an agent for JIRA that literally will read a JIRA ticket and write all of the code for you, or, the other way around, you can select code in the IDE and say, 'Did this solve the ticket?' And if it didn't, it'll tell you what you're missing and write the code with you. So I think the innovations around the agents are only going to accelerate. As you think about the SDLC today, each piece of it, from planning through code generation, testing, refactoring, fixes, break-fix in an outage, we're chunking each of those pieces up and building out agents. That is the journey towards an AI engineer, right? It's going to be the consolidation of all of those things, and we're making really rapid progress, like really, really rapid progress. And once again, it's not just a generic answer; it solves it for your company in a thoughtful way.

The other thing that I think it's going to unlock, which I'm really excited about and which will happen, is on the generation side, right: actually being able to just have an interaction with the AI code assistant where it's iterating on the actual feature and function, and you're seeing a new version, then an updated version, then a new version. We're already seeing this. We've got customers who are using us to build really simple applications, then saying, 'Make it red, make it blue, add this feature,' and literally seeing the application change in real time. Node is a good example, a front-end application: you see the Node application changing in real time as you're actually making it. That's amazing, yeah, absolutely amazing.
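The 'did this solve the ticket?' check could look something like the sketch below. It is purely illustrative: the function and field names are assumptions, and the assistant call is a stub rather than a real Tabnine or JIRA API.

```python
from dataclasses import dataclass, field

@dataclass
class TicketCheck:
    solved: bool
    missing: list[str] = field(default_factory=list)  # unmet requirements

def ask_assistant(prompt: str) -> str:
    """Stub for the code assistant call; a real agent would return a structured
    answer and could go on to write the missing code with you."""
    return "MISSING: handle the empty-input case"

def check_against_ticket(ticket_text: str, selected_code: str) -> TicketCheck:
    """Ask whether the selected code satisfies the ticket and, if not,
    report what is still missing."""
    prompt = (
        "Ticket:\n" + ticket_text + "\n\n"
        "Selected code:\n" + selected_code + "\n\n"
        "Does this code satisfy the ticket? If not, list each gap on a line "
        "starting with 'MISSING: '."
    )
    answer = ask_assistant(prompt)
    missing = [line[len("MISSING: "):] for line in answer.splitlines()
               if line.startswith("MISSING: ")]
    return TicketCheck(solved=not missing, missing=missing)
```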
Olivia Hack
That's very cool. You have this recent Series B funding. What areas within Tabnine are you prioritizing for investment? Are there any specific goals or milestones that you want to achieve with this capital injection?
Peter Guagenti
So we're still super early stage, right? I think that's really important. It's a Series B, but we didn't raise a ton of money when we started; the earliest rounds of funding were before CIOs realized this was real. So the profile of the business, intentionally, at the time, was to be heading towards cash-flow breakeven quickly. It was basically supposed to be an almost bootstrapped dev tools company. And then all of a sudden the world woke up and realized this was real, and our demand went up 100x. So the funding now is going exactly where you'd think it is. We've had great financial success; we're only in our sixth quarter of selling the enterprise product, and we're already doing incredibly well. So we need more sales and marketing, right? We absolutely do. I mean, I got to the numbers that I have today on three ramped reps, three, right? So I need a lot more. But we also think we're truly in chapter one, chapter two of an evolution of the technology, so we want to continue investing in the technology. We believe we've been punching above our weight as a team, partially because so many people on our team are computer scientists and data scientists. We need a lot more people like that. We need a lot more people who are passionate about reinventing the SDLC, and so we'll pour as much money as we can into that.
Olivia Hack
That's cool. I love that. You know, you have led teams across Cockroach Labs, NGINX, Acquia, and now Tabnine. What leadership principles do you think have been most effective in helping to scale high-growth companies?
Peter Guagenti
I really love that you asked this question. Part of it is that I am a proud entrepreneur. Many times in my life I could have just gone and been inside of a large company; half of those companies were acquired. But I literally dropped out of college to start my first company and sold it a year and a half later, and then went and did it again, because I really do love this. And the leadership skills that I bring are all tied to what I believe is the real cultural contribution of entrepreneurship. Most of us do this because we want to solve a significant problem for a set of customers, or even the world, in a thoughtful way, and we want to enjoy the process. So a lot of what I focus on in leadership is building that passion and ownership and entrepreneurialism in every person on the team.

When I was at Acquia, our CEO at the time, Tom Erickson, would always talk about our cultural values, and one day he said, 'Look, actually, it seems like we're all in alignment. It's P triple I.' I was like, okay, what does that mean? Passion, initiative, intelligence and integrity. He goes, 'You know what? We're all aligned on this mission, and we're all passionate about it. It's not just a job.' That passion and, you know, the entrepreneurialism, that initiative, is super important. So is intelligence; you have to be a first-principles thinker. If you're not a first-principles thinker, it doesn't really work, because if you try to pattern-match from the past while you're inventing the future, it doesn't work that way. But what I loved about it was that last I, integrity. He said you can find the first three almost anywhere and still have toxic environments. Integrity, this whole idea of being able to show up to work every day, say what I mean and mean what I say, and build trust, that to me is the thing that has been the secret to success in most of the businesses I've been in that have done really well. And so that's what I try to imbue.

And then part of that, for me, is that I believe mentorship is everybody's job. I'm very proud to say that in my time I think I'm up to, like, a dozen executives that I've groomed from their first job. And I'm really proud of that, because as a leader, if you can build that leadership among your own team, it makes your job a million times easier. I don't want to be setting everything myself. And then you also get to have this social impact that goes well beyond what you do for your own financial benefit.
Olivia Hack
Yeah, that makes a lot of sense. I've got one other leadership question for you. If we're thinking further out, say the next five to ten years, what major challenges do you think are facing software firms, and what advice would you have for, you know, CEOs, CMOs, leaders in this position who are going to be dealing with this changing landscape?
Peter Guagenti
Yeah. I mean, there's so much. I think we are in the midst of radical and dramatic change at scale as a society. I really believe this, and I think it's been building for a while. I want to just grab onto AI, though. I think we're at a pivotal moment in technology, and as big as digital transformation was, this moment will dwarf it. And actually, for the software companies out there: I was at the birth of the web, I was at the birth of open source, I was at the birth of cloud, and I'm now at the birth of AI, and I really love being right up front. I will tell you, the pattern I'm seeing across all of this is that this change is going to be as dramatic for technology companies as the change from client-server to web, yeah? Those who lived through that will understand what I'm saying. For those of you who are young and didn't see any of these things: whole companies literally broke. If you think of who the leaders were in technology in 1990 and who they were in 2005, how much was even left? And then look at the impact it had on the broader ecosystem of business. Who were the 10 most valuable companies in 1990, 2000, 2010? Radical change.

So here's my advice. This is not yet another technology wave. All of the technology work we've done to date has been deterministic software: if this, then that. Even when we did digital transformation, we took paper processes and made them a lot more adaptable and approachable and easy to use, but we kept the same decision-making process. That is not what AI is. AI is a brain, a simplistic pattern-matching brain. My CTO will kill me for saying that; he doesn't agree with me. But it is, it's a monkey brain, right? It works like a brain. So what we've discovered in making AI successful is treating it like you're coaching a human, and I think technology executives have to really embrace that. What they know and what worked in the past is not going to work going forward. It is a different structure, a different way of working through this stuff. Of all the amazing things we learned in writing great software, some of it stays: I understand user intention, I shape that user intention into an interaction model with my software, I then return results. I still do all those things, but instead of running it through an if-this-then-that, I'm figuring out how to prompt the brain.
Olivia Hack
Yes, this has been fantastic.