Transcript
00:00:02
Hello and welcome to AGTP Daily, where we cover the most significant tech stories in the US and abroad. Today is Tuesday, the 10th of March, 2026. Let's get right to it. I want to cover two topics today. Is Apple still outsourcing part of Siri's brain to Google? It kind of feels like it to me. That's the first thing I want to cover, and then we will be joined later by Itzik Alvas, co-founder and CEO at Entro Security, to discuss identity governance for AI agents. This is going to be really important: secrets
00:00:30
management at scale, and how security operations shift when autonomous systems dominate your infrastructure. Okay. Is Apple's Siri really just Google's Gemini? For what feels like forever, Apple sold the world a very specific idea of itself: that the most important technology should be designed in-house, integrated vertically, and controlled end to end. Hardware, software, silicon, services, security, privacy, all of it working together in one carefully managed system. This is what Apple has always sold. That was the
00:01:02
whole concept of the Mac and the Mac OS: that we're going to build everything. Now they build their own silicon. They wanted to build everything. They made an announcement, I don't know, two years ago, that AI actually stood for Apple Intelligence and not artificial intelligence. But that's why we now have to talk about Siri, because Apple has effectively conceded something in public that it's never conceded before. When it comes to the core intelligence layer of the next generation of Siri, Google currently has
00:01:30
a better answer, which makes a certain amount of sense, right? Reuters reported last month that Apple will use Google's Gemini models for a revamped Siri under a multi-year deal. I mean, remember, Apple never built its own search engine either; they've always outsourced that to Google. And Google's own statement said the next generation of Apple Foundation Models will be based on Gemini models and Google Cloud technology. Now, to be precise, Apple's not giving Google the keys to all of
00:01:58
Siri. It's not going to do that, for sure, but it feels like Gemini may be the brain of this thing. Apple says that Apple Intelligence (I love the way they do this; luckily for them, their company name begins with an A, which is just funny to me) will still run on device and through Private Cloud Compute, which Apple has positioned as its privacy-preserving architecture for more advanced AI tasks. In other words, Apple is still controlling the product surface. It's still obviously controlling the
00:02:28
operating system. It wants to maintain and control that user relationship as well, and the privacy framework. But the cognitive engine that sits underneath Siri, the model layer that actually does most of the reasoning, is now, in a meaningful sense, external. And that's really a gigantic story, because what Apple is outsourcing here is not just a feature. Remember, Apple said that Box and Dropbox were just features, and they built their own sort of internal file-sharing system, right?
00:02:58
So they go a long way to vertically integrate things. What they're saying is that this outsourcing is not just a chatbot add-on. It is not just a convenience integration like bolting ChatGPT onto the side for occasional hard questions. Reuters noted that Apple had already integrated ChatGPT in late 2024 for complicated opt-in queries, but under this new arrangement with Google, Gemini becomes the default foundation for the revamped Siri and future Apple Intelligence features. This is a big
00:03:30
deal, and it moves Google from being an optional helper to being much closer to the system's default intelligence substrate. This is a really, really profound strategic shift. There's a lot going on, as you can tell, with this competition between OpenAI, Anthropic (there was some Anthropic news as well, though we're not going to cover that today), and Gemini, because in the old Apple model, intelligence would have been something the company considered way too central to its core to
00:03:58
outsource, right? But we've spoken about this before. Is Apple really good anymore at building sophisticated new-style software? Well, I think the answer is obviously no. And I think this is indicative of what's happening with Apple building Gemini into Siri, right? Apple could certainly outsource components. And remember, they used to do this in the chip space; now they don't anymore. It could use other suppliers for things. It could partner around distribution. It's done this in some
00:04:23
cases, right? But the defining layer, the layer that determines how the product thinks, responds, reasons, and improves, that at some level was supposed to be core to Apple. But now it does not belong to Apple, at least not entirely. And that tells us at least two things. First, it tells us that Apple's internal AI progress was not moving fast enough. Apple had already admitted, around this time last year, that some of the more advanced Siri improvements it
00:04:54
promised would be delayed until 2026. Apple never does this. They had never in the past announced a product and then said it would be delayed. Then Reuters reported in January of this year that Apple planned to turn Siri into a built-in AI chatbot and that this new version would run on a higher-end custom Google model. And by late February, Reuters was also reporting that Apple was fighting a shareholder lawsuit, tied in part to claims that it overstated Siri's capabilities. I mean, I didn't know you
00:05:22
could get sued about this. Sure, you can get sued for anything in the USA, but we've always known that Siri's capabilities have been overstated. Now, let's put this a little bit bluntly: Apple had promised an AI-infused Siri future, struggled to deliver it on time, and reached outside the company for the model layer. Second, it tells us where value is concentrating in the AI stack. And actually, this may be the more important point we're making here. In consumer tech, we used to think that
00:05:52
the most durable power sat in the device, the operating system, and the app ecosystem. And those still matter, but generative AI has added a new choke point. Whoever owns the foundational reasoning layer can begin to shape the user experience even when they don't own the device. We've talked about this, right? Artificial intelligence at some level is going to become the UI and the UX. It just is, because it's becoming so dominant that there's no way you can escape the fact that your
00:06:20
user interface is going to be driven by a layer that sits above what you actually thought it was going to be. And again, we've talked about this. You see many, many applications, many apps, being written where the UI seems to be a second thought, maybe even a third thought. And I think the reason why is that they're expecting AI to interact with their API, and that the user interface itself, the visual representation of that UI, is going to turn out to be meaningless. Look, this is a gigantic
00:06:46
win in this case for Google. Google already has Android; they sell more Android devices than Apple sells iOS devices. Apple makes more profit; it doesn't matter, right? Google already has Chrome, and now, through Gemini, Google gets influence inside the intelligence layer of Apple's ecosystem as well. This is important. Reuters described the deal as a major vote of confidence for Google and said it gives Google access to Apple's installed base of more than 2
00:07:14
billion active devices. This is not simply a product partnership. It is Google becoming harder to route around in the AI era. It's so interesting to me. They owned search. They owned ads, along with Facebook. And now they're going to sit on two billion Apple devices. Is Google going to quietly, slowly but surely, also own artificial intelligence? If you've been following Google for the right amount of time, you know that they've been working on their AI stuff
00:07:43
forever. They made a commitment to this over a decade ago. They've been talking about it quietly forever. And yeah, there could be a potential antitrust shadow hanging over all of this, but for my money, I don't think that's going to be a problem, at least not in the United States. You may have the European Union come after them, but isn't that the European Union's job, to go after Google? Anyway, Apple and Google were already bound together by the enormously controversial default search
00:08:08
relationship. We talked about that earlier, right? Reuters has previously reported that Google's search deal with Apple could be worth $20 billion annually to Apple and sits at the center of scrutiny in the United States. Google pays Apple almost $20 billion a year to be its default search engine. Just think about how significant that is and why that matters. So now add a deeper AI dependency on top of an already sensitive search dependency, and the question becomes super obvious: are we watching two gigantic tech companies
00:08:42
extend an old platform bargain into the new computing era? I mean, this is the big question. Google owns the search business. It built the ad business, and it basically owns that with Facebook. Now, is it going to own AI? Because if you think about it, it's now going to be included in every iOS device. What does that mean? We'll think about this over the next few days, but what does that mean for ChatGPT and for OpenAI? And what does it mean for
00:09:07
Anthropic as well? Right? Because they're kind of getting squeezed out of the Apple device. And the more interesting question is not necessarily legal; it's philosophical. What does it mean for Apple if the company that once prided itself on building the whole ecosystem, its whole full-stack ecosystem, now has to source core cognition, the real thinking behind Siri, from Google? Just think about the implications of that as well. Search was one thing, right? But Apple was never going to be
00:09:37
in the search business, and they were getting paid $20 billion a year to use it. Fair enough. One answer to this question is that it may not signal weakness at all; it's just pure pragmatism. And I don't want to go back to this idea that Tim Cook is really different from Steve Jobs and so on. But let's be fair: as we get further and further away from the Steve Jobs era, Apple has become way more pragmatic. Apple may have decided that users don't care whose
00:10:06
model sits underneath Siri. I mean, I don't care at all. And to be fair, I don't use Siri that much; I use the Gemini interface way more. So as long as Siri becomes useful, people will be happy, I think. And in that sense, Apple is doing what strong, smart, pragmatic companies do: abandoning ideology for execution. They want to make sure stuff works. And to be fair, this has always been Apple's modus operandi. Sure, they wanted to build it
00:10:38
themselves, but more importantly, the idea around Apple was that it just works, right? You don't have to be a technological genius to figure out how to use an Apple product. You don't have to customize it for yourself. Better to ship a world-class assistant built on external intelligence than to defend some kind of dogma while still falling behind. But there is a cost here. Once the intelligence layer is externalized, differentiation becomes harder, right? Because if you're getting the same
00:11:08
results by going directly to Google's Gemini, then what are you doing, right? You could just use Google's own assistant, because Google makes devices too: "Hey Google," and you can ask it plenty of questions, just like you can ask Siri. I don't want to do that, because every device in my house will just go off. But if Google can supply the underlying reasoning, then Apple's advantage has to come from orchestration, all the other things that Apple does around Siri: privacy,
00:11:35
interface design, hardware optimization, cross-device continuity. Are you on your phone? On your speaker? On your laptop? On your computer? That cross-device continuity is super important. And trust. Those are real strengths, but they are different strengths. And Apple is no longer saying, "We built the smartest thing." They can't, because they didn't. It is saying, "We built the best environment in which somebody else's smart thing can operate safely." This is a subtle but important
00:12:04
repositioning, and it also changes the competitive map. I mentioned this earlier, around OpenAI and Anthropic and Perplexity and the other big LLM labs, right? Because they're not sitting directly on two billion iOS devices. Reuters also noted that the Google deal pushes OpenAI into more of a supporting role, which is not shocking, with ChatGPT remaining for more complex opt-in queries rather than serving as the default intelligence layer, which is what
00:12:32
Google's going to do. So the battle here is no longer just about model quality in the abstract. It is about who becomes the default cognitive infrastructure embedded in the products that people already use every single day. Just keep repeating it to yourself: two billion iOS devices. And in that context, default placement matters as much as benchmark scores. We've already seen this in search, right? Again, Apple never really built its own search business. They just integrated Google
00:13:02
search into their devices, and it seems to have worked fine. So the headline here is not merely that Siri is getting smarter, though it demonstrably will get much smarter, because it was pretty terrible before, right? The headline is that Apple, the company that made vertical integration into dogma, into a religion, has decided that in artificial intelligence the winning move may be to control the interface while licensing the mind. And that may turn out to be
00:13:36
one of the defining truths of this phase of the AI economy. And remember, we're still early here, right? We don't know where we're going, but we know where we are. The companies that own distribution do not always own the intelligence. And the companies that own intelligence do not always own the customer. Apple still owns the customer. And again, two billion devices out in the wild are now basically going to have Gemini on them. It's going to feel like Siri. It's going to look like Siri. But at the back end,
00:14:01
the brain is going to be Gemini. Now, Google owns more of this cognition, and that's going to be important. And Siri sits at the center of this new bargain. Okay, we're going to welcome Itzik Alvas, co-founder and CEO at Entro Security. We'll be right back. Okay, let's welcome into the show Itzik Alvas, co-founder and CEO at Entro Security. Let's bring him in. It's great to have you here. How are you doing today? >> I'm great. Thank you. Thank you for having me.
00:14:33
It is my pleasure. Look, we wanted to talk to you about identity governance, AI agents, secrets management, and a bunch of other stuff, particularly when autonomous systems dominate infrastructure. But I think it's more important to start with a little bit of an introduction about Entro, if that's okay, so people can understand and get a little bit of context. >> Sure. First of all, I'm Itzik Alvas.
00:14:54
I'm the CEO of Entro Security. I've been in the cyber industry for almost 20 years now. I started in an offensive security role. Prior to Entro, I was responsible for the internal security of one of Microsoft's clouds, Microsoft Defender for Cloud, and before that I was a chief information security officer. >> So that's a bit about myself. We started Entro Security about three years ago, and we started from the non-human identity side, so we've been
00:15:28
protecting service accounts, API keys, connection strings: basically, identities that applications or machines use in order to access or authenticate against the resources those applications need. And in the past year or so, we've also been protecting AI agents. >> Okay, look, this is a super important topic, and just to give people a little bit more context, here's the thing: I think most people understand human security, right? Lock the doors, close the
00:15:58
windows, turn the lights off, all that kind of stuff. I think that's really straightforward, right? >> Um, yeah. >> And don't give them a card they can swipe to get into the office, right? And don't give them passwords. But maybe you can talk a little more about what exactly non-human identity is, in a little more detail, and not just what it is, but why it's become the fastest-growing identity class inside enterprises. >> All right. So, non-human identities: you can
00:16:23
think about them as credentials that applications use in order to access or authenticate against the resources an application needs. So if an application needs to use storage, or needs to use a database, or needs to connect to another application, and so forth, then in order to connect to that database or that storage it needs some sort of credential, and that is the role of non-human identities. So non-human identities are essentially credentials that
00:16:52
applications use in order to connect to, or better yet authenticate against, the resources those applications need. Now, yes, they are growing fast. More applications in use, and more services that those applications need, means more non-human identities. In our latest research, we saw that for every human identity in an enterprise, we're seeing an average of 144 non-human identities. So the scale is insane. >> But it's 140-something to one for human
00:17:24
identities, not vice versa, right? NHI to human identities. How has it gotten so big so fast? >> We're adopting a lot of new applications, right? And also, microservices became a thing. Before, we had one monolith, one server, if you will, that ran an application, and that server maybe ran a few applications, or even one. Now we're breaking it down into a lot of mini servers, if you will, and all of them are serving the same application, and all of them need to connect to one another and to
00:17:59
other resources, again, like databases or storage accounts and so forth, right? So the more applications we're adopting, and we're adopting a lot of new applications now within an enterprise, the more non-human identities we need. So yes, we're seeing that number, 144. I think the report before that was one to 80 or so, and that was a report conducted by CyberArk >> Okay. >> from four or five years ago. So yeah, again, the scale of them is unmanageable.
00:18:31
Yeah, it sounds a little scary to me too. And just in my mind, the way things used to work, structurally, and tell me where I'm wrong, because I'm not an expert on this, is that you'd kind of have this perimeter, and on the perimeter you'd have these applications, and every time you wanted to use an application, it would log in with one of these credentials you've just described, and when it was done, it would log out and leave. But agentic AI seems to me
00:18:54
to be something that sits inside the perimeter, right? You give it access to stuff, and then it sits inside. So if it starts doing things that are a little bit off, it's already inside the system. Is that a fair representation in most cases? >> Um, I think it's kind of fair. >> Tell me I'm wrong. It's okay. I'm happy to be wrong. >> I wouldn't say you're wrong. I think that's one use case, where you have AI agents or an application
00:19:22
that lives within your perimeter. That's one scenario. And then there's another scenario where you have applications or AI agents that live outside of your perimeter but need to access your data or your resources. So I'll give you an example. Think about, maybe, Booking.com, right? Booking.com needs to connect to so many other providers. They need to connect to Marriott, and they need to connect to JetBlue, and so forth and so on. That means there's an application from
00:19:51
outside Booking that is trying to connect in. If you work for Marriott, then Booking is trying to connect to your resources, right? So something from outside is connecting in. And then for the AI agents, you know, you have many agents that live in the cloud or in SaaS. You can have Microsoft Copilot or Salesforce Agentforce or OpenAI agents living in the cloud, and they need to connect to your resources. Maybe OpenAI is managing your calendar, right? You
00:20:25
give it access to manage your calendar. Now those agents, those AI agents, are actually leveraging non-human identities in order to access resources. >> So at the end of the day, you can have both scenarios: you have AI agents that live within your perimeter and communicate with your data and services, and then you have AI agents that live outside of your perimeter that also need to communicate with your resources, and that means they are coming in an inbound way to your
00:20:56
data and perimeter. >> I just feel like things are getting so complex. Look, we talked earlier about human identity, and I like to use your home, because it's something people can understand, as a metaphor or an analogy for security, right? I can know who lives in my neighborhood. It's easy. I can just look outside the window, or open my door and look outside, and see who's there. So identifying who could be a potential
00:21:20
perpetrator is not that hard. But if it's 144 to one, NHI to human identity, and it's probably just going to keep growing, how are we even keeping track of what's interacting with what? And how do we even know if somebody new comes in? Do you know what I mean? It's getting so complex. How do we track what's out there? >> And that's before the ephemeral ones, the ones that only do one action and then disappear. So yes,
00:21:49
it's complex, you know. When you're comparing human identities to non-human identities, and definitely AI agent identities, it's a completely different realm. >> Yeah. >> Um, while all of them are identities that grant access, non-human identities grant access to applications, human identities grant access to humans, and then AI agent identities grant access to AI agents. So while all of them are getting access to your system,
00:22:16
they are completely different. Think about it: when you're onboarding a new human identity, you pretty much know everything about them; it's probably one of your employees, right? The more employees you onboard, the more human identities you create, but you know everything about your employees: their name and position and where they live, probably their SSN, and so forth. Non-human identities are a bit different. Usually they are created by developers, or someone from your IT or
00:22:43
DevOps, and so forth. They create them, permission them, and usually also scatter them around. So even knowing how many we have and where they are is a huge, huge problem, >> right, >> and also, they often have way higher permissions than human identities, because they can access databases and they can access our infrastructure. So that's a huge problem. And then with agentic AI, everybody in the organization, not only the IT team, is able to subscribe to a
00:23:10
new AI agent, permission it, and use it. So, you know, the 144-to-one will only increase in the agentic AI world that we're living in. So yes, you're touching on the main problem. That's the main problem for organizations nowadays: trying to understand how many non-human identities they have in the environment, where they are, how they're being utilized, what permissions they have, and so forth. And the same set of problems for agentic AI: how many agentic
00:23:38
AI agents we have in the environment, where they are, who owns them, how they are being utilized, what permissions they have, and so forth. So yes, you're touching exactly on the problem. >> And how do we track this? Right? Because you mentioned this: if I onboard a new employee, I know who she is. I know where she lives. I know her family. I know all this stuff about her. Right? >> Right. >> And I can also tell by her activity, just on
00:24:02
a day-to-day basis at work, by the way she logs in, even if she's working remotely, if she's starting to do some strange stuff. But that's one person. >> How do I monitor all these agents that are out there? Because you're right: frontline employees are now getting access to create their own agentic AI. And if that happens, I don't know what the personality, it sounds strange, right, but I don't know what the personality of that non-human identity is, >> and I don't know what they're going to
00:24:30
do. How do I track all that stuff? What should enterprises actually be thinking about when a new agent, or a new NHI, I guess, is the right word, gets onboarded? Because they know what to do when a human gets onboarded. But what should they do when a new NHI gets onboarded, if that makes sense? >> It does, yes. So basically, what we've done for human identities, securing them that way, completely breaks for non-human identities and AI agents.
00:25:04
That's not the right approach. And while we're kind of used to that approach, the joiner-mover-leaver approach, if you're familiar with it, that's the main identity approach for humans: you know, tracking and safely onboarding them, changing their permissions if they move roles within the company, and then when they leave, safely offboarding them and removing all access and so forth. That approach completely breaks. >> Yeah. >> Um, so what you want to
00:25:33
do for AI agents: first of all, you want an inventory. You want to understand, to know, who's there, right? >> How many, exactly. >> You want a comprehensive inventory, to understand how many AI agents we have and where they are. Once you have that inventory, you need to classify them, because unlike human employees, where you know everything about them, for your AI agents you kind of don't. You don't know why they were created, who's using them, who's invoking them, and so forth. So
00:26:03
you want to get that classification piece. You want to create a lineage map: which human user invokes which AI agent, which has permission to which resources in my environment, and how it's being utilized. You want to create that map. And once you have the inventory and then the classification, the business context for every one of your AI agents, now you can start taking action. Now you can start catching misconfigurations. Maybe that AI agent
00:26:31
has admin permission over all of your infrastructure, and that's unneeded. Maybe it's an AI agent of the finance team but is trying to authenticate against developer resources, like code repositories. That's something you probably want to know about, right? And then you can do abnormal-behavior detection. So what we're doing, at least at Entro: we're finding all of the agents, we're understanding all of the identities that grant them permissions, and then we're tracking the
00:27:00
activities of those identities. We're tracking how they're being utilized by the AI agent, and then we can create a baseline of activities and monitor for any deviation from that baseline. >> Yep. >> Or any risky scenarios. So let's say your AI agent is suddenly downloading a lot of sensitive information, right? That's something we're going to be able to see and call out. So you can detect abnormal behavior. Maybe the AI agent is trying to encrypt some data.
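As an illustration of the baseline-and-deviation idea being described here (profile each identity's normal activity, then flag departures from it), here is a minimal sketch. The function names, event fields, and the count-based heuristic are hypothetical illustrations, not Entro's actual implementation:

```python
from collections import Counter

def build_baseline(events):
    """Count how often each (agent, action) pair occurs in historical activity."""
    return Counter((e["agent"], e["action"]) for e in events)

def is_anomalous(baseline, event, factor=10):
    """Flag an action the agent has never performed before, or one occurring
    far outside its historical frequency (a naive count-based heuristic)."""
    seen = baseline[(event["agent"], event["action"])]
    if seen == 0:
        return True  # e.g., a finance agent suddenly touching code repos
    return event.get("count", 1) > factor * seen

# Hypothetical activity log for one agent identity.
history = [
    {"agent": "finance-bot", "action": "read_invoice"},
    {"agent": "finance-bot", "action": "read_invoice"},
    {"agent": "finance-bot", "action": "send_report"},
]
baseline = build_baseline(history)

# A never-before-seen action (mass download, encryption) is flagged:
assert is_anomalous(baseline, {"agent": "finance-bot", "action": "bulk_download"})
# Routine activity within the baseline is not:
assert not is_anomalous(baseline, {"agent": "finance-bot", "action": "read_invoice"})
```

A production system would baseline richer features (volume, time of day, target resource) and use statistical or learned models rather than raw counts, but the shape is the same: inventory the identities, baseline their activity, then alert on deviation.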
00:27:27
That's another thing you should probably be aware of and prevent. So you can start doing all of those things once you have the inventory and the classification in place. You can start managing them. You should also consider zero standing permissions for AI agents, which basically means you don't grant the AI agent any permissions, and you create identities for the AI agent on the fly, according to policy. So if we're going to take the example that I talked about earlier,
00:27:58
where you have a finance-team AI agent, utilized by the finance team, that is trying to access a developer resource: if that AI agent has zero permissions to do anything, and you're only creating permissions according to policy, you can set up that policy and say AI agents of the finance team are only allowed to access finance resources, and if the AI agent tries to access something else, we're not going to create an identity for it. So basically, what I'm saying is, you can
00:28:27
create identities on the fly for those AI agents, and you can control that, and only create those identities if they're trying to access things you allow them to access according to your own enterprise policy. >> Have we reached the point, though, where compute and throughput are so seamless right now that you can create these agents on the fly and give them permissions on the fly in a way that doesn't slow them down? Because in my
00:28:53
mind, before you started saying this, I was thinking: oh, if they have zero permissions, which makes sense, right? In other words, >> right, >> you would never, and we talk about this all the time, right, you'd never give access to a day-one intern. You'd never give them your credit card to go buy lunch for everybody, because you don't know what they're going to buy. So if they come to you and say, let's do this thing, then you just give them manual access to it. But with
00:29:15
automated, credentialed NHIs, non-human identities, you kind of can't approve them one by one. So you have to create this infrastructure that gives them approval for the things they can do. It just seems like there's a lot of work going on in the background to create this identity structure and this permission structure to be able to do that. Do I have that right? >> Yeah. Um, so you can use vendors like Entro that can build that for you
00:29:46
easily, off the shelf. But basically, all the enterprise should do is create those policies. At least at Entro, we have predefined policies like the one I just mentioned: AI agents of developers are only allowed to access developer stuff, like AWS or code repositories, and maybe Jira tickets and so forth. If they try to access Coupa or any finance system, they will be blocked. We're not going to create those identities for them. We're going to say
00:30:20
a manual approval is needed and a human in the loop should approve that. Yeah, >> but yes, the technology is there to create those identities and permissions on the fly without causing any delay for the AI agent. >> In your experience up until now, what level of, and it's going to be a tricky word, right, what level of autonomy are enterprises giving their human employees to create agents on the fly? Do you know what I mean? Like, does everybody have access to these tools? What is
00:30:51
the governance internally? Can every line employee just say, "I want to automate this task, I'm going to create an agent to do it," or are there permissions? Are they going slower now than they would maybe two or three years from now? If that makes sense. >> So, you know, it's funny. [laughter] We had an interaction with a very large bank, one of the top five banks globally. >> Okay. >> And he said, we're preventing
00:31:20
everybody from using AI agents, and we need a solution like yours in order to allow adoption, right, >> of AI agents, right. We want to set up boundaries, we want to set up policies, and we want to say stuff like what you just mentioned. We want to block AI agents from doing specific stuff, and we don't want to allow them to access certain stuff, and so forth. And, you know, we got off the call at my office over here in Cambridge, and we were like, we're
00:31:49
taking bets. He has so many AI agents that he isn't even aware of. And when we started the proof of concept, the POC, it was a Christmas tree that lit up. >> Yeah, he had thousands of AI agents. I don't see a technical scenario where a company can prevent an employee from running an AI agent. AI agents rely on something called MCP (Model Context Protocol) in order to connect to data, and MCP servers can be blocked, but AI
00:32:27
agents can also use other methods, like APIs that are open >> for pretty much everybody. And therefore, you're really unable to prevent them from, you know, subscribing to an AI agent. Even if you prevent them from installing a new AI agent on their machine, you're really unable to prevent them from subscribing to an AI agent, giving it permissions, and letting it run and connect to your resources. Unless you're preventing everybody from doing everything, including the human
00:33:02
employees, it’s technically kind of impossible. So if somebody is saying, “Yeah, I I I’m waiting. I’m trying to prevent everybody from adopting an AI agent. While it’s also probably a bad business decision, >> right? >> It’s also kind of impossible. Uh so, so yes, you know, um everybody’s adopting an AI agent nowadays. When we started with AI agent security solution that was um an year ago, almost an year ago, we saw an average back then and that wasn’t
00:33:33
a thing. It wasn’t in the news. It wasn’t in the headlines. one year ago agent wasn’t in the headlines and we saw that every seventh employee >> in the in the average organization every seventh employee is already using an AI agent. So >> um >> so here’s but here’s the question though right so you mentioned this you have to map all this stuff out you have to understand the inventory you have to classify everything but you say you go into this big bank and they don’t know
00:33:56
that there are thousands of these things running out there, right? But when you map it out for them and you do the things you said, you build this inventory and you classify them all, I'm sure you go back to these big institutions and say, this is what it actually looks like inside of your security architecture, and you're running all these agents you didn't know about. Are they, like, shocked when you present this to them? >> So they can look at, yeah, so first of all, yes, but we have a dashboard, so
00:34:23
it's all open, so >> sure, but I'm saying before, like, before you install the dashboard, you know what I mean? Like, when you go to them >> and build this, like, oh my god, they were shocked. They were shocked for sure. They thought they had it under control. They thought nobody's >> using it. So yes, they were shocked for sure. And I just think, you know, it created some urgency to close the deal and start using Entro as a solution to
00:34:51
secure and allow them to safely adopt those AI agents. >> Yeah. >> And they were adopting AI agents even before they realized. So yes, definitely it's a shock for everybody. The scale is also a shock for everybody. >> I mean, it must be, right? Because they have no concept of how many people are actually using this. And I bet if you told them this 144-to-1 number, they would lose their minds. How do you think this changes over the
00:35:18
next, I don't know, four to six years? And, this is so funny, the chief information security officer, I've had people pronounce this two different ways, "siss-o" and "see-so," so I wasn't sure which one to use. Which one do you prefer? >> I'm using "see-so." >> "See-so." That's what I thought. It makes more sense to me. But what does this look like four to six years from now, when that number is not 144 to 1, but it could literally be, like, 10 times higher
00:35:41
than that? >> Yeah, and probably way more. >> Yeah, factors higher, right? >> Yeah, probably way more. And we're only at the beginning, right? >> Agreed. >> Right now we're at the point where you have human employees that are using or leveraging an AI agent in order to do more efficient, or maybe more correct, work, >> but then quickly enough it will be agent-to-agent, right? You're going to have a fleet of agents that are working with
00:36:10
each other, and so yes, we're only at the start of it. And stuff will get way more complicated very, very fast. So what do I suggest organizations should do, or how should a CISO think about this? You know, I used to be a CISO, I had several roles as the chief information security officer for multiple companies, and what you really want to do is enable the business to do whatever the business wants to do, and enable it seamlessly. You don't want them to be delayed, you
00:36:48
don't want the business to be blocked or delayed in doing anything, but you also want to make it secure, right? Because otherwise you're going to go to jail, and that's not your goal in life. So you want to make it as secure as you can, right? So from one end you want to enable the business, from the other end you want to make it secure, >> and that's the approach. That's the approach. You want to find a good
00:37:13
partner, a good solution, maybe build it on your own, but you want to find something that allows you to seamlessly create those boundaries, >> Yeah. >> to seamlessly find everything, and to seamlessly block stuff you don't want people to use. So, I'm not sure if you're aware, but there was the OpenClaw breach, like, two weeks ago. >> Yeah. >> So there was, like, a rogue AI agent, if you will, two weeks ago, and the organization, or the CISO, should have the ability to understand who's
00:37:43
using that AI agent, right? One that may have vulnerabilities or may be misbehaving. >> Yeah. >> The ability to understand who's using it, what they have access to, and to block that access. Those are capabilities the organization will need to adopt, and the key over here is doing that seamlessly, without interfering with anybody who's trying to do their work. So that's the security mentality that I had. And
00:38:16
also, you know, I used to prioritize my projects according to the IBM Cost of a Data Breach report. Like, that's the main report out there. IBM Cost of a Data Breach, the Verizon report, Gartner, Forrester, all of them are saying that non-human identities today are the second most frequent attack vector, >> Yeah. >> and the number one most costly attack. So that's why I think we're seeing such rapid growth and maturity in that market, although it's fairly new. Because that's how CISOs
00:38:44
prioritized and explained it to the board. You know, what's the most frequent attack vector? It's phishing. I have something for that. Good, okay, I'm good to go over there. What's the second one? Non-human identities. Do I have anything? No, I don't. Okay, that's my next project. And [snorts] that's kind of how I used to present it to the board and get allocated budget and resources. >> So you've just triggered this thought in my head, and if you don't mind, I'll ask
00:39:08
you this, and then maybe I'll let you go, because I've already spent a lot of time with you and I don't want to take up more of your time. >> That's sad. >> But give me just one more second. Yeah. If someone tries to phish me, right, as a human, or if someone tries to break in, or if I see somebody try to log into my stuff, right, I can go to my chief security officer, I can go to my CISO and say, "Hey, just so you know, here's an attack
00:39:34
vector that's being used to try to attack me. It did not work, but I'm just letting you know, just in case, so we can tell everybody else that if this happens, don't do X, because if you do X, then we're going to have a breach." Right? >> Right. >> Can we build the agents, these non-human identities, as well so that they can kind of, like, tell on external actors? Do you know what I mean? Where it's like, uh-oh, someone's trying to break in, I prevented it from happening, but...
00:39:59
>> I understand what you mean. But is it the smart approach to try to develop every agent to be self-aware, or >> I don't know, that's what I'm asking. You understand what I mean, though, right? >> I do. And you can try to build every agent to be self-aware and try to tell on bad stuff, or you can have something from the outside looking in, seeing who's using those non-human identities, who's using those AI agents. So let's say somebody from North Korea
00:40:27
is using an AI agent within that bank that I mentioned earlier, right? And they are probably not doing business with North Korea. That's something you want to be aware of. >> Right. >> So again, a product, a platform like Entro can look from the outside at all of your agents, no matter what agents are there, understand how they should behave, and then catch any deviation from that. We're going to be the telltale there. >> Okay, look, this is a fascinating topic.
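[Editor's note: the outside-in monitoring described here, baseline each agent's normal behavior, flag any deviation such as an unexpected geography, and block the agent's access, can be sketched roughly as below. All names and data shapes are illustrative assumptions, not Entro's actual product API.]

```python
# Rough sketch of outside-in agent monitoring: keep a baseline of each
# agent's normal behavior, flag any event that deviates from it (e.g. an
# unexpected source country), and revoke the agent's access when one is
# found. Illustrative only; identifiers and structures are assumptions.

BASELINE = {
    # agent id -> what "normal" looks like for that agent
    "agent-payments": {"countries": {"US", "GB"}, "resources": {"ledger"}},
}

ACTIVE_SCOPES = {"agent-payments": {"ledger:read", "ledger:write"}}

def check_event(agent_id: str, country: str, resource: str) -> bool:
    """Return True (and revoke access) if the event deviates from baseline."""
    base = BASELINE.get(agent_id)
    deviates = (
        base is None                        # unknown agent: always flag
        or country not in base["countries"]
        or resource not in base["resources"]
    )
    if deviates:
        ACTIVE_SCOPES[agent_id] = set()     # block the agent's access
    return deviates

print(check_event("agent-payments", "KP", "ledger"))   # True: unexpected geo
```

A real platform would build the baseline from observed traffic rather than a hand-written dict, but the shape of the decision, compare each event to a learned profile and cut credentials on deviation, is the same.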
00:40:55
I'm going to let you go. That's Elvis, co-founder and CEO at Entro Security. I hope you enjoyed this. We'd love to have you back. Thank you again so much for doing this today. >> Thank you for the time. >> Okay, man. Bye for now. >> Okay, that was super interesting. And we also spent some time earlier today talking [clears throat] about how Apple is interacting with Google, and whether Google owns the brain of Apple's Siri. That's going to be it for today for AGTP
00:41:23
Daily. Thank you again for joining us. As I already mentioned, give us a thumbs up on the video, subscribe to the channel on YouTube, and follow us on all of our socials: X, Instagram, and LinkedIn. And let us know in the comments if there's anything else you want us to talk about, if you agree with us or if you disagree with us. My name is Michael Weights. Bye for now.