AI + a16z Podcast: Vibe Coding, Security Risks, and the Path to Progress
Socket CEO Feross Aboukhadijeh and a16z partner Joel de la Garza discuss vibe coding, AI-driven software development, and how the rise of LLMs, despite their risks, still points toward a more secure and innovative future.
Sarah Gooding
July 25, 2025
In a world overflowing with AI-generated “slop,” from low-effort YouTube content to questionable AI-written code, it’s easy to think that technology is pulling us backward rather than propelling us toward a more thoughtful, human-centered web. But every technological era is punctuated by its own unique growing pains, and the rise of large language models (LLMs) is no different.
In this episode of the AI + a16z Podcast, Socket CEO Feross Aboukhadijeh joins Joel de la Garza, investment partner at Andreessen Horowitz, to explore the optimism hidden in this transition. From “vibe coding” to securing the AI-assisted software supply chain, they discuss why, despite the noise and risks, this moment in AI development ultimately bends toward progress.
Joel: You were one of the first, I think, security founders that I talked to that was actively engaging in using copilot and then cursor and then really leaning into this technology. And so maybe it'd be great to, for first start off with a discussion or just an understanding of kind of what has your experience been with Vibe coding, what are you using it for, what have you observed?
And then we'd love to talk about maybe the other side of the equation that we don't typically talk about so much, which is. What are the issues that it's raising?
Feross: Yeah. like most companies that I think are doing it right, we're embracing this stuff as fast as we can and putting it into the hands of our team and, folks are using it to write large parts of our applications today.
like you said, we were pretty early on in terms of looking at, these LMS and like how we could use them in enterprise applications like Socket. When we got our hands on GPT-4, like that was the magical moment for us was like, what can we do with this to go out and scan every open source package and have this like human-like intelligence doing [00:01:00] this task that would've been, too much scale for a human to handle. and then obviously all the way now through to like agents and agent decoding and all this stuff. it's super, super powerful. there are the skeptics, there are the curmudgeons that are like, and, I think this is true in every company, there's always some folks and certain use cases where it, it doesn't work the best today, but that's all gonna get better, I think long term.
But, certain types of, In writing certain manifest files, like Kubernetes things, certain low level code, we see it struggle. But, by and large, I mean in, a JavaScript or a TypeScript application, it's just like super good. UI code, it, it does, it really, well.
so yeah, we're using this stuff as much as we can. but obviously, like as a security company, like being concerned that we have is okay, how do we make sure that we're actually writing secure code? and there's there's multiple dimensions on which you, care about this. There's like the code itself that's generated, and making sure that people are not, taking it [00:02:00] for granted.
Like what comes out of these models. one of the things when you write code, it's similar, like when you're in school and you're taking notes in class versus when you're just listening, you, remember, you pay more attention when you're actually writing something yourself. And so one of the things we, we have seen is that when people use these tools, they, it's like easier to just Go with it and not scrutinize every single line of code and every conditional and every, Yeah. Piece of the code perfectly. as if you were writing it yourself. Yeah. And so that's a big, scary thing for, us, is, and, and I think that's one of the main challenges with these tools and the other big one is, is third party code that's being brought in by these tools.
Yeah. 'cause they often will just like, reach for dependencies and say. oh, you wanna build a feature? I know how to do that. let's, NPM install this and that and the other thing. And then that's how they get going. And then, how do you know that the code they're choosing is good?
Joel: I just did it. I just did a, I was helping my daughter with a project last night and I did a panicked NPM install last night, and it was like, oh wait, I don't know if he used the right library for that. [00:03:00] luckily no, nothing got Trojan. But so I'm, really curious. So the, sirens call for this stuff and what we've consistently heard is that Engineers just become way more productive. And I think when Copilot came out, we heard that engineers were like 20% more productive. And I think over the last two years we've now heard with things like Cursor, it's three X productivity. I'm curious like. Are you seeing that level of productivity with these tools?
Is it, that much more efficient for your engineering teams?
Feross: It's harder for me to say because like we were native to this. we're not, I guess we started as a company in 20 20, 20 21 timeframe. I guess for things that are one-offs. Yeah. Like absolutely. It's 10 x. Yeah. It's actually more, way more than three x.
Yeah. if you just need a script to go and pull this data or that data, At least for me, like I find it's, it's 10 x because it'll just, it'll one shot it correctly. Yeah. So I'm doing something that would've taken me like four hours to write, or maybe one hour if I was really good, really fast.
and it's gonna do it in As long as it takes me to write the prompt and wait for the, the text to appear on the screen. Yeah. So that's really [00:04:00] good. And then, but then I think when, you're dealing with a, a larger code base and there's like a lot of things you gotta get right, like system design, things like that.
Yeah. That's when you end up having to work with a lot more. And I would
Joel: say 'cause you are the pinnacle of, engineer. not to flatter you, but just as a statement of fact,
Feross: thank you.
Joel: You are, a technical expert and you are, the father of a lot of this stuff.
you still review your code, you still spend a lot of time looking at the code. That's generated. It's not the case where vibe coding to you isn't, Hey, I'm gonna just type a prompt in. And commit whatever. Yeah. Whatever comes out.
Feross: It's more it's the same way that probably a lot of people use, chat g PT for writing emails.
Yeah. you would never just send an email without reading it. You would use it to draft the first draft and then you're gonna go in and tweak some things, right? That's how like I'm using it. Gotcha. and I think I, I think a lot of the people on our team are using it that way.
It's not, whereas I think that other use case where, you know, like you just let it generate something and you run it without even looking at it. that's the true vibe coding, right? Yeah, And like that, I, do that [00:05:00] sometimes and I kinda almost don't wanna say it on camera. it's not good, but it's okay.
It's usually for one-offs. This is a safe space. It's for one-offs though. Yeah. It's like stuff where I know that code's going to, I'm not, I'm gonna use it once and I'm gonna throw it away. Oh, totally. So I don't really care as long as it's not gonna lead to me being owned in some way. Yeah. Like I'm okay with that.
Joel: Yeah. Yeah. if I need to generate some sort of a script to find something in a data set. I'm running it locally. And I don't put any API keys, but Exactly. So it sounds it's really funny, right? Because I think we're seeing this sort of strange bimodal world emerge like. The most productive, skilled people are more productive and more skilled.
And then on the other end of that, barbell is like the least knowledgeable in this space when using this stuff become perhaps the most destructive. it, seems like there, there's a lack of guardrails, possibly. Like for folks. even like myself, where I did CS in college and I've done it before and I've always been horrible, but, now I'm like, now it makes me good enough to be dangerous, right?
I think that seems to be what's happening with some of this stuff.
Feross: Yeah. you at least [00:06:00] know, you, if you studied cs, you actually know like, how to prompt it correctly and describe what you want properly. Yeah. And you have some, idea of but you're, but I think the point is a good point though.
like somebody who. Because that's the positive side of all this, is that, okay, more people now are writing code Yeah. Absolutely. Than ever before. That's great. And a lot of those people are gonna find their way into engineering in some way. Yeah. Like they're gonna end up, like learning about software engineering and like actually find this stuff fascinating.
And their, gateway is that, they were able to just vibe code something. and then, and then of course, like I think the more software in the world, the better. I think just people like being able to create bespoke apps for their own use cases. Yeah. Like this is stuff is really, exciting.
but, But yeah, to go into production and like with a purely vibe coded app, as a beginner that doesn't actually know how to assess like the design of the app? Yeah. are you doing validation of the password, like on the client side so that it doesn't do anything? are they gonna know that?
Yeah. so do you know what a
Joel: client secret is? [00:07:00] Exactly.
Feross: yeah. So I think that's where you get this sort of, Do you get different takes on what the future is? Some people think that the AI is gonna get good enough that you can just, it's gonna solve all these problems eventually.
Yeah. It's gonna be smart enough. And then others, think that it's never gonna get to that level. And just a human, where you always have code review, no matter how good, if you're like a god to your program or like you still do code review because it's the right thing to do.
It's the best practice. And even those people miss things, right? Yeah. is it gonna be more like that where, where I just hit the mic. That's fine. Is it gonna be more like that? Where, you. Where you always need like guardrails, security checks, security tools, totally. Other, maybe other AI assessing that ai, that type of stuff.
Joel: when you talk to people that are really close to this stuff, and I mean like the PhD, the folks where it is actually rocket science, like they're mostly pretty adamant that the AI long-term solves this. And I think just my casual looking over kind of the problems that we've seen emerge with this stuff like.
You don't see a lot of SQL injections or cross site scripting. [00:08:00] Or cross site requests for it. Like a lot of this stuff where if you had a bug bounty program, you got 50 of them a week. Like it seems like these classes of vulnerability are disappearing with this, but It's, foregrounding a bunch of other vulnerabilities that are much more like DevOpsy, like key management or Authorization stuff or weird race conditions with, AU Z or off n. I'm curious, is that kind of what you've picked up or,
Feross: yeah, for, sure. I think with stuff like SQL injection, I don't, I wouldn't, I would hesitate to say that like it's perfect at it, but where, it's a matter of.
Remembering rules. I think it's similar to a really good human. If you're an experienced programmer, you'll remember these rules and apply them pretty consistently and generally avoid the, mistakes, I think where you get more, More risk is, like you said, key management.
So with M ccp, surfers, people, everyone's just putting these files on their system. cp do js ON or Yeah. Or whatever. And they just contain all the tokens to everything to their Gmail [00:09:00] account. Yeah, totally. Yeah. Because they want, they want, the power of hooking all this stuff together, but.
I think like in the design of a lot of this stuff, because it's been such a rush to get things to market and everybody wants to move quickly and be like, part of this exciting movement. All good stuff. Yeah. I'm not complaining we're, like part of it too, but, there's this, this just afterthought. Security is an afterthought, right? Like, it, was in all the other, it's always an afterthought, just, like with cloud, right? yeah, It's okay, shoot. This thing got really big and it's really important. Now let's figure out how to bolt on security.
Joel: Security is always, it's, security is always a concern when there is a, potential impact
Feross: when there's something to protect and there isn't, yeah. There isn't an
Joel: impact if no one's using it, right? yeah. it's You can think of all the dead programming languages that were hyper secure, but nobody used them, so it doesn't matter.
that's true. Yeah. and it, just seems like with technology, like there's always gonna be mistakes and our job is to, we're trying to do bumper bowling, right? Keep people out of the gutter, try to get people go a strike. It's always been the role of security, but, but you
Feross: gotta [00:10:00] worry when you just have MCP servers that are just random GitHub repositories that people are finding and honing down.
No one's reading that code. no one's like assessing what it does before they run it. that's, a broader problem with all of open source usage. Yeah. That's the, yeah.
Joel: Yeah. That, and I, that, that reminds me of the old days where the, with the, free tools to rip CDs or DVDs.
had a, you find it on it,
Feross: sketchy site that's full of ads and that's where you download from. Yeah. Do you remember, what was that? Site? Source Forge, I think. Source Forge. Yeah. I think they're still around, but yeah. One
Joel: packet, storm and like all the old school hacker ones, right?
Feross: yeah.
There's just. yeah, so like you're, taking a chance every time you run one of these things Yeah. That you're not, that it's, hasn't been compromised, or popped in some way. and then you're com you're then you're taking this code that again, you haven't assessed, that you found on GitHub that was published like a week ago.
And then you're connecting it to all your most sensitive data sources. yeah. Like your email, your Google drive, your, all your company documents and and then okay, so that's bad enough, but then now [00:11:00] it's, you're also then even if that was all secure, like even if you did vet the code and you connected it well and all that stuff, and you made sure it only had access to what it needed to have access to, you fundamentally have this problem that, at the core of this, you have an, a black box.
LLM. Yeah. That and this may get better again, like I'm, I'm not the like, expert on I model development and what's gonna come down in the future. But at least today, like it's fundamentally a black box. Yeah. And as good as it gets, like it's still Can make mistakes and you're letting this thing basically take, commands from here and then act on it and then push data there and pull data from here and yeah.
And it's just No one really knows like what it's gonna do. And then, and the thing is, you look at, you just poke these systems a little bit, and that no one's checking for this stuff. So like in Gmail for instance, I saw a recent example where somebody got an email and they clicked the little summarize button.
Yeah. And then like in the email there was like white text on white background. That said, your Gmail account's been hacked. Yeah. call this one 800 number. And that [00:12:00] was the summary of the email that came up. and so these really basic attacks, oh yeah. we're moving so fast.
Like people aren't even, trying to stop it, Yeah.
Joel: and like prompt injection and all, there's that whole class of AI attacks where you feel that, this is, just an we're in this inner, we're in this in-between period. Between like really exciting, super fast moving technology and it becoming something that's like trustworthy and stable for the enterprise, right?
and, it's interesting that you bring up MCP servers 'cause it seems like MCP servers like solved a bunch of problems not a lot of people are having and created a whole new class of them, right? And like when you talk to engineer engineering managers and people who are responsible for big development groups like the.
The consistent, it was interesting 'cause I was, I've been pinging folks being like, Hey, is Vibe coding causing security problems for you? And the answer was consistently like, not really. people are more productive. We review more code. But it's not it's not like we're yellowing this [00:13:00] stuff right into prod.
there's still a code review process, there's an SDLC, we have tools, we do linting, blah, blah, blah, blah. But what they said is, where we're actually getting problems is around the MCP servers. Because people. Don't want to go through the headache of getting a new service stood up in production.
Because that involves design review and there's a whole process around it. So they're basically just like on their home, PCs taking credentials home. Writing MCP servers locally, doing stuff, and then seeing some way that they can get that back into production.
Feross: That scares me so much and that, that seems to be like that's where there's a
Joel: potential problem.
Yeah.
Feross: No. just this week there were, a bunch of NPM packages that were compromised. And like the first thing they do, the first thing they did was like, search the entire system for these tokens just sitting around on disc. for someone to pick up. Totally. Totally. Yeah. they're just like sitting there for yeah.
for the taking.
Joel: Yeah. and, so that's, like I think it's interesting, right? Because, and this is if you want a really [00:14:00] great example of how to not get invited to speak at a security conference. Lead with saying that things are getting better 'cause it just doesn't sell security products.
But I think in general, 'cause you've, been in this ecosystem for a long time. and I've been in it longer than I'd like to admit and like it does actually seem like we've made a lot of progress. if you're using reputable cloud service providers, if you're building on languages that have a lot of security already built in, if you're using the litany of tools that are out there, including socket.
It feels And if you're doing vibe coding in a way where it's still part of your STLC, there is code review And it's going through your review process, like it still feels like you're pretty secure Compared to the way it used to be. Yeah. so I'm curious 'cause you, live at the forefront of Y your customers are all doing that. they're all secure. They're all doing the best. You, live at the, top end. Like how are hackers working around that? It seems like vibe coding becomes [00:15:00] a bit of like an asymmetric benefit to someone who's trying to intrude. Right now you've got infinitely many monkeys with infinitely many typewriters.
and they have the ability to launch these new attacks to get around this stuff. I'm just curious, what are you seeing happening on that ecosystem?
Feross: Yeah. That's interesting. it's honestly not as much the, so the, what do I wanna say here? I think it's the, vibe coating stuff is part of it, but it's also just the bread and butter, like security Yeah.
Best practices. to bring it a little bit back to socket, somewhat, there was a recent, NPM supply chain attack that I just mentioned, where you basically had a phishing email come in. pretending to be NPM and a bunch of people just clicked the link Yeah. And put in their password.
And, gave it to the attackers. Yep. The attackers then now have credentials to publish, supply chain attacks into those packages, and then it just spread virally and there was a whole bunch of other packages they've been compromised since. Yeah. And [00:16:00] how does mc, how does MCP servers play into that, how do agents play into that?
Not really yeah. like they're, they made it a little easier to steal certain tokens that were just sitting around on disc, like we talked about. But, but no, fundamentally it's just if you have a, like you said, if you have a secure SDLC, if all your commits and your code review is happening and it's going through the same pipeline Yeah.
Then that you're doing a pretty good job. The only thing I'll say is, there's these, ways that people can bypass and go around it. And like you said, it's when they're running MCP servers locally Yeah. They're taking tokens home. Yeah. Or when they, I think, about the end point a lot actually.
So when, when people let these agents loose on their local system, they're often like installing a bunch of third party components Oh, yeah. And running stuff that like, hasn't been vetted and, by the time you do get into, a GitHub workflow or you do run your, linting and your scanning and all that kind of stuff, your local systems already been compromised and you can do quite a lot of [00:17:00] damage with just that.
I think that. it's yes and no, right? Yeah. yeah, if, prod will be protected in that case, but who knows, like what sensitive stuff's on that developer machine. And so there's still quite a lot of damage you can do in some cases.
Joel: Totally. I was talking to a very paranoid founder, probably more paranoid than most security people.
building a really interesting company, blah, blah, blah, blah. Like I'm, it's very important company. and he was just convinced he had to turn off email for everyone, but like a few people that needed outbound access. Wow. Because it's just so consistently the case that like it's email. It's someone clicking on a link and then that just unwinds your whole thing, right? Like it's just, and it's just, always funny that always seems to be like where everything falls down is where you have like humans making this judgment. Yeah. Like the veracity of a message.
Feross: But that's the interesting part, because the agents, the more this stuff gets rolled out, like they're gonna be serving that role of the human Oh yeah. Making the judgment call and they're like. they are fallible in the same way that humans are [00:18:00] in some sense. I
Joel: don't know. I found 'em to be really good at spotting fraud.
Feross: Huh.
Joel: Like I've, taken like more sophisticated phishing messages and posted them into chat GPT, for example. Oh yeah. I'll spot and just been like, is this, and that's a good point. Yeah. That's a good, and they're just like a hundred percent like, oh yeah, it's malicious. And it's just I feel like there's, I know this is weird to say, but like at some point these things will be little guardian angels, right? Like I feel like these things will sit on our shoulders and just be like, no, that's stupid. Yeah. That's the hope, right? That's the hope, It's hopes, right? Yeah. And I, think you're starting to see that materialize.
Yeah. At some level, the vibe coding stuff very much feels like the beginning of that. And like of course, like security always comes after bad stuff happens, right? You don't get cars. Cars, you don't get seat belts until after a car accident. but we're getting enough stuff now where it feels like we're starting to get better on it.
Maybe. I dunno. Yeah. This is not a talk accepted at Defcon, by the way. No, I, yeah. No,
Feross: I think, you're, I think you're right. but I also, I just worry the more we like, take [00:19:00] ourselves outta the loop. Yeah. I don't know, this is maybe more general point, but the people in my life are starting to.
They just put stuff into, ask a question to an AI Yeah. Tool, and it's the kind of question that you know it's not going to do a good job of that answer. Yeah. it involves like, pulling in a, like a ton of data, doing a bunch of math on it, and they're just asking like the free chat GPT tier, you're like, they're like, and then just blindly trusting it.
Yeah. And I'm like, they're
Joel: like day trading stocks or something. Yeah, They're like, oh, it said, it's gonna go up. it's totally but
Feross: it, but just think a little bit about what you know. So yes and no. I feel like, I don't know, but. But I, think, broadly, you're right.
this stuff is gonna get better and then, we're gonna have, we're gonna have, especially if you can have, if, it becomes cheap enough to deploy a ton of these, agents that just watch other agents, Yeah. And you can have almost like a, bureaucracy, like a an org chart of the pull bureau of agency.
Yeah. that's that's what we actually do today. Yeah. in, in socket for, cost reduction, [00:20:00] believe it or not, like you, You don't wanna scan everything with the most expensive model right away, because that's, cost prohibitive. And, it's gonna, you don't wanna spend like 50 bucks analyzing like every each package.
Yeah, totally. So you have like a, group of five or 10 cheaper models. Yeah. the dumber models. Yeah. That, you have them all assess, the situation and then if enough of them raise their hands, then it like gets escalated to the smarter wow. To their manager basically. Yeah. So you see this kind of architecture quite a lot. and so I don't know. maybe that's the future.
Joel: Yeah, it could be. I think as the cost of inference goes down, I think those dumb bots get smarter, right? and eventually at the top probably sits some sort of thing that represents a GI when we get there, who knows?
Yeah. you talk to
Feross: all AI companies, a lot more than I do. And I'm curious, do you, buy into the, it's gonna get good enough that it solves security, or do you think it's, we're always gonna need the, seat belt? [00:21:00]
Joel: I think that security is always gonna be a challenge.
I think it solves, I think eventually the long-term solve is that the technical security issues get solved far and away. Like I think, these things stop making coding mistakes that have historically plagued us, combined with languages are just more secure now. You don't have to do memory management memory safe, right?
so I think that solves it. But the, thing with security is that there's a, there's the other side of the equation, which is the attacker. And I think as long as you have people looking to find ways to exploit these systems, there will always need to be some kind of security.
Now, I think the role will change as it has over the last 20 years. and I think the role probably will shrink. And I think over time, if you take a long enough view of this, like security's still very much an immature market. You can generally measure maturity of a market by whether or not the insurance markets function for it.
And like largely cybersecurity [00:22:00] insurance became a wealth transfer mechanism for ransomware authors for a while there, and it seems like it's correcting. so I think as that market matures and cybersecurity, cyber, insurance becomes something that's mature, that's when security becomes mature. It becomes a standards game.
I think you probably see lawyers taking more forefront. as long as there will be attackers that can be successful in what they do, there will always need to be a role for some, someone in the security side. And it's just, it just, you just don't have, it's the only technology market where you have that sort of an adversarial relationship, and that's the engine that's been driving all the growth all these years.
so I think, largely the really nasty problems get solved, but like people clicking on the wrong link or humans making the wrong decision. And it's gonna essentially become a game of, to your point, with the white text on the white background, it will become tricking the machines. but I, think that for the foreseeable future, at least for the extent that which I [00:23:00] see myself working, there will be humans in the world. It
Feross: won't be your problem after that, right? Yeah.
Joel: it will probably 'cause I'd be a victim of the crime, but I think for the foreseeable future there's humans in the loop and I think it's, it seems to be a really incredible thing that we're building. I'm generally very positive on
Feross: it.
Joel: and I think that, we, it's just, I, think of the, for me, the vibe coding stuff is just having, expanding the number of developers in your organization. And some of them are probably a little bit more junior.
And so you need to just do a little bit better job of guardrails around that sort of stuff. I think that's what happens. Yeah. But who knows? They could find a GI next week and we're all out of a job.
Feross: Yeah. That's, actually why it makes it, it so fun. Yeah. To be doing this right now.
Joel: Oh yeah. Totally. it's funny you were saying that when you saw G PT four, you were like, that was the moment, right? That flipped. And I was like, I think that was around the time you raised your round, right? Yep,
Feross: actually, we got it. we tried it. We actually tried, our kind of whole approach with GPT-3 0.5.
Yeah. And we're like, oh, this is not good. [00:24:00] this doesn't work. It doesn't know it, it thinks everything's malicious because it's, it's. It's being led too much by the prompt. Totally. It's oh yeah, that seems bad. That seems bad. Yeah. And then, and then we, got early access to GT four and that was the moment because it was like, oh, I get it now.
Yeah. Okay. This is okay. And you can see Yeah. The step change and you're like, okay, yeah, we're gonna bet on this.
Joel: And it's funny because we were having, I'd say as a team, similar to you guys, very much involved with a lot of this stuff, and we see a lot of stuff before a lot of other people do.
And it's just as we were seeing the stuff coming out of OpenAI. and then eventually stuff coming outta cursor, it was just like, wow, okay, this is a game changer. And then And then you engage in these discussions with the broader community and people are like, curmudgeonly.
yeah. Oh, it's not real. It's just, and you're just actually yeah. Stuff's kind of amazing.
Feross: They come around. Yeah. Soon enough. Yeah.
Joel: So obviously you've been building socket for a while now. and you, live in this interesting, space, which is [00:25:00] While code is being developed, while all these things are happening, you are in the middle of this transformation. And I'd love to get your thoughts on, it feels like even if these systems solve the underlying technology problems around coding, and they, make safe code, which I think is a safe bet to make, there's still security problems, right?
Because there are attackers that are adulterating packages, people are trying to sneak stuff into your supply chain. Like, how do you see that? Like from a product perspective, where do you see this kind of charting out and mapping to?
Feross: Yeah. today the main way that Socket helps companies that are using Vibe coding to actually do that securely is, they use Socket as a realtime source of information about like the state of the third party dependencies that they're bringing in.
So if you think about it like this. models are trained every three to six months on, a bunch of data, and then they're like baking in their knowledge at that point in time. And yes, they can, reach out and do searches and things like that, [00:26:00] or use tools or use MCP servers to, to do different tasks.
but outta the box, you take, one of these vibe coding tools and they don't really have any concept of what is the security status of this component or that component. So if something was backdoored. Yesterday. Do they know that like the answer today is no. And generally this, is, not a problem most of the time because stuff doesn't get backdoored that often.
But when it does right, that's when that real time information can be really valuable. So socket's out there, like going and getting that info, cataloging all the open source code, and, then, you can get that hooked in directly to the vibe coding workflows for, developers. And then you don't have to worry about when the model was trained or like what they know about, the package.
It also, by the way, it also helps not just with security, but like code quality. one thing we see is, something gets deprecated. it's not recommended to be used anymore. You're supposed to use a different package models. Don't necessarily know that. they just, trained on a whole bunch of examples.
From Stack Overflow or GitHub or whatever, [00:27:00] and so they'll just keep using that old code. And so you end up with some tech debt and some other kinds of, just like longer term, just crusty code in there that doesn't, that could, wouldn't necessarily be there if, you had, use socket to go and pull that info in real time.
Joel: Change control will still be an issue even when the AI is writing the code, right?
Subscribe to our newsletter
Get notified when we publish new security blog posts!
Try it now
Ready to block malicious and vulnerable dependencies?