Kitecast
Howard Holton: Weighing AI Cyber Hype and Risks
Unveiling AI, Data Security, and Innovation
Howard Holton, the Chief Technology Officer of GigaOm, explores some of the most pressing topics in technology today. With over two decades of experience spanning roles as CTO, CISO, CIO, and consultant, Howard brings a wealth of knowledge to the conversation. His background includes leadership positions at Rheem Manufacturing, Hitachi Vantara, and Precision Discovery, where he honed his expertise in digital transformation, data science, and operational strategy. At GigaOm, Howard combines his technical acumen with a passion for helping organizations navigate the complexities of modern technology landscapes.
Generative AI: Hype vs. Reality
The conversation delves into the rapid rise of generative AI (GenAI) and the realities beyond the hype. Howard explains how businesses are grappling with this transformative technology, which, while promising, is rife with complexities. Many organizations rushed into adopting AI without fully understanding its implications, leading to inefficiencies and unexpected risks. He points out that generative AI is a powerful tool but cautions against treating it as a catch-all solution. The conversation highlights how improper use can lead to issues like misinformation, inaccurate outputs, and even legal challenges, underscoring the need for deliberate strategy in deploying AI tools.
Tackling AI Governance and Risks
Howard also provides an unvarnished look at AI governance and its associated risks. With generative AI being a relatively young technology, governance frameworks are still in their infancy. Organizations often lack cohesive tools to manage the risks associated with AI deployments. This leads to challenges in ensuring compliance with data privacy regulations and safeguarding sensitive information.
Shadow AI: The Hidden Risk
Shadow AI emerged as another critical topic in the discussion. Howard describes Shadow AI as the unauthorized use of AI tools by employees, often without the knowledge or approval of management. While employees leverage these tools to improve productivity or efficiency, this practice introduces significant risks to data security and compliance. Sensitive company data may unknowingly be exposed to public large language models (LLMs), creating vulnerabilities and potential regulatory breaches.
Advice for the Tech Community
Closing the episode, Howard offers invaluable advice for professionals navigating the ever-changing tech landscape. He underscores the importance of mentorship, curiosity, and collaboration in driving innovation. “It’s our job to help people,” he says, emphasizing the need for tech leaders to share their knowledge and foster growth within their communities. Howard also encourages organizations to adopt a mindset of continuous learning, particularly as emerging technologies like AI continue to evolve.
LinkedIn: https://www.linkedin.com/in/howardholton/
GigaOm: https://gigaom.com/
Check out video versions of Kitecast episodes at https://www.kiteworks.com/kitecast or on YouTube at https://www.youtube.com/c/KiteworksCGCP.
Patrick Spencer (00:01.98)
Hey, everyone. Welcome back to another Kitecast episode. I'm your host for today's show, Patrick Spencer. We have a real treat. Joining me for today's podcast is Howard Holton. He is the chief technology officer over at GigaOm. Howard has over 20 years of experience in technology, digital transformation, and data science. Check out his LinkedIn profile, which we'll include in the links for today's podcast. It's quite impressive.
Prior to coming to GigaOm in 2022, Howard held numerous positions in tech, including serving as the CTO at Rheem Manufacturing and Hitachi Vantara, as well as the CIO at Precision Discovery. And we won't go any further into his career beyond those; it's quite lengthy and quite impressive. Howard, thanks for joining me today.
Howard Holton (00:51.266)
Thanks for having me, Patrick.
Patrick Spencer (01:28.412)
Well, maybe a good starting point is to talk a bit about your role. You've been over there a couple of years now. Talk a bit about GigaOm; you guys are an analyst firm, what you guys do. And then your role: you do a lot of interaction with customers as well as work with your internal team. It's an interesting mix.
Howard Holton (01:48.987)
I do. So I came to GigaOm after being recruited by the CEO over here, and I started as a consultant right at the end of 2020. I thought it was interesting, right? Just being a little bit self-serving, I'm like, wait a minute. So if I'm an analyst, then your job is to promote my brand. That seems like something that'd be smart for my career overall. So okay, all right, I'll have that conversation. And then I went to work for Rheem, and I still consulted at GigaOm during that time. And then when I was leaving Rheem, Ben said, hey, you should come over here full-time. And so I took on a full-time role. It's awesome. I recommend it to everyone, right? When I talk to other analysts, we all kind of say the same thing, which is: this is not a real job. This is the most fun you could possibly have and work in tech.
You're a member of the press, everyone treats you well, and no one ever calls at one o'clock in the morning and says, hey, this, that, or the other thing went down. Right? I have an engineering team, and my engineering team does real engineering, but none of it is enduring. It's all benchmarks and tests on behalf of tech companies, testing technology as an independent third party. It's freaking awesome. It's absolutely amazing. That being said,
Patrick Spencer (02:52.54)
Yep.
Howard Holton (03:11.246)
I came here to try to help improve communications, basically, with tech companies. Oftentimes tech companies are great at talking about their technology. If they're good at anything, they're good at talking about their technology. They're not good at understanding how a customer actually uses it, why they use it, what they want to do with it, and what they should want to do with it. And so the goal really is to help bring more of that to the market,
and really help customers understand, not a look back, not "what did you do, did you do the same thing most of your peers did," but really: what should you be doing? How should you look at this? How should you see the world? Where should you make mental investments? I don't care about dollar investments. Where should you make mental investments? And that's really some of the things that we're trying to help with here at GigaOm.
Patrick Spencer (03:58.606)
Interesting. So you get all different types of technology companies that are coming to you or is it beyond technology when it comes to clients?
Howard Holton (04:07.754)
No. Well, so I have two different types of clients, right? On one side, I have the vendors that appear in our reports, and those are all technology. We haven't moved into industry for those reports. We're still fairly small, so I kind of feel like I should be good at one thing before I start doing 50. And then from the end user side, right, the organization side, that's all over the map, right?
So I do a lot of consulting with end user organizations, and those are, you know, finance, healthcare, telecom, retail, CPG, manufacturing, government; small, large, medium. It's all over the map. So, you know, it's interesting. It's a ton of fun, and I get exposed to a lot because of it.
Patrick Spencer (04:47.216)
Interesting.
Patrick Spencer (04:51.354)
Yeah, you're advising them on their technology deployments, what they have and what they could have. Are these like one-week engagements, or what does a typical engagement look like?
Howard Holton (05:01.799)
Yeah, so most of my engagements are really designed to be kind of workshop-focused. Unlike other analyst firms, you can't buy a block of hours from us. That's not a SKU that I have available. Everything that we sell has a discrete purpose to it, because I don't have any interest in just building billable hours, right? I'm not scoped for it. I'm not designed for it. And I don't want to do it. And it always bothered me as a customer. So instead,
we come in with a specific purpose, and we help build for that purpose. Most of my time is: can I help you build a strategy that makes sense for your company, for your organization, and for where you want to go? And if you have a strategy, can I help you build a plan that's actually possible to execute within the context of who you are and the organization that is built? Right? It doesn't do me any good to recommend, hey, I think you need to be a GenAI leader,
when you haven't figured out the first five data science projects. So it's really, how do I help build something that works for you? And we do those through a series of workshops. And then, like I said, they're very, very discrete, pointed, specific things.
Patrick Spencer (06:04.06)
Yeah.
Patrick Spencer (06:18.33)
Do they involve both virtual as well as on-site when you do engagements, or what are those?
Howard Holton (06:23.704)
Yeah, for sure. I push really hard for on-site, right? The logic being, they're not overly expensive, right? I try to make them as fairly priced as I can. But really, I want to make sure everybody's got skin in the game. No one has skin in the game on a Zoom call. Right? So we need to do them on-site whenever possible. That means you're gonna have to make an investment. That means you're gonna have to get people together.
But the other thing that happens is, you might invite 50 people as stakeholders to a Zoom call where you'll put six in the room in a physical meeting. And in the one that's in the room, that's physical, you'll get 10 times as much done. You'll actually make decisions. You'll make progress. You'll move things forward. When you have 40 people, you won't.
Patrick Spencer (07:08.976)
Yeah, very true. There's a sense of ownership that occurs and you're able to build a relationship with the folks you're consulting with at the same time. So I'm curious, because we've seen the AI transformation take hold in the marketplace. Certainly even two years ago when it first started, people were dipping their toes in the water. They didn't know what they were doing. I suspect those engagements two years ago looked different than those over the past year.
What do those look like? Are you getting a lot of your inquiries related to AI or is it still sort of in the minority of the projects that are coming in? I'm kind of curious what the companies are talking about.
Howard Holton (07:51.618)
So it's funny, when people want me to talk to the board, which I do, or people want me to do these large conversations, it's almost always an AI conversation. Because it's generating a lot of noise, and so they kind of want to know what's reality, what's not, what's here to stay, what's not, how should we be thinking about it? They're almost universally of the opinion that they're not ready,
Patrick Spencer (08:02.577)
Hmm.
Howard Holton (08:18.51)
and they really want someone to help them determine what makes ready, what makes real, and they're being bombarded on all sides by companies saying, we can do this for you. Right? The reality is, being successful with generative AI is extremely complex, right?
There's a lot of ways you can be unsuccessful, and unsuccessful can, in fact, be: I'll never get an ROI from the money I spent, because there's just not enough value in the thing that I did. It doesn't have to be risky. It doesn't have to be dangerous. But it can also be: look, we just made the news because our AI chatbot just sold our biggest, most expensive product for a dollar. Right? Or our AI chatbot was convinced it worked for a competitor and then went on a sales rant, selling my customer
Patrick Spencer (08:58.47)
No.
Howard Holton (09:06.722)
you know, the competitor's product. Or it could be even more risky: like, we went to court based on our AI chatbot, we lost, now we have to pay, and we pulled the chatbot off, right? Hundreds of thousands of dollars, millions of dollars, potentially tens of millions of dollars lost because companies don't really understand what they're doing. They don't really understand the risk, and they hire consultants that do neither of those things, right?
Patrick Spencer (09:17.222)
Yeah.
Howard Holton (09:32.834)
I don't know if you remember, during the early aughts there was this whole thing around motivational posters. We put them up everywhere; Office Space even made fun of them. But there was a company in 2000 called despair.com that made demotivational posters. My absolute favorite, and I'm trying to remember the picture, I may not have it right, but it was an old-style phone, you know, with a separate handset, that was covered in dust and spiderwebs, and it said: consulting.
Patrick Spencer (09:39.493)
Yep.
Patrick Spencer (09:47.194)
Hahaha
Patrick Spencer (09:56.668)
Yeah.
Howard Holton (10:00.578)
We don't have any answers, but there's a lot of money to be made off the problem. I kind of feel like that's where we were with GenAI, even the large analyst firms, right? Like the other one that starts with a G. They spent all of last year talking about GenAI until the middle of this year, when they pushed it back to the trough of disillusionment. So I went to their big conference, and the message was basically: GenAI is still here. It's still here to stay. It's still something you should do. However,
we're going to spend some time talking about the risks, because apparently we didn't make that clear the last time we talked about it. And so that's why it's in the trough of disillusionment. There's nothing wrong with it. It's not going anywhere. It's still going to change the world, and it's still going to eat a ton of other kinds of technologies and enable new things. But we all, as an industry, right, as a group of people working with technology, we all kind of lemminged right off the cliff.
And we didn't stop and go, hey, we actually need to be a little bit reasonable here. Right? So my message hasn't changed since this stuff came out, right? We are in the experimentation phase, which means you need to start learning it. It's not going anywhere. It will be part of your job. So you really need to get ahead of it, so you kind of understand, you know, what a prompt is, what tuning looks like, not tuning a model, but tuning within the context of a prompt and a conversation, that you understand it's not a search tool. It's not
Google, right? Every search you do in Google is a discrete search. GenAI, within the same conversation, is a conversation. But at the same time, we probably should also be aware that the risk contained within the data set that you apply GenAI to is magnified. It is not minimized.
Patrick Spencer (11:54.844)
Are you finding that most organizations are thinking about doing a risk assessment of their use of AI after the fact? Is this the old "oh, shit" moment where, gee, we should have thought of that a year ago? Or are you finding that they're doing that proactively? Or does it depend on the client?
Howard Holton (12:14.126)
It depends on the client, it depends on the organization. I think most organizations that I talk to have either looked at it initially and went, we're not ready for that, and didn't really do much. And then the ones that did something have clawed back and effectively are right back at that trough of disillusionment, going, well, we probably need to think more about it, and are looking for what tools look like to help us. We did an AI governance Radar,
with a Sonar, which is our version of kind of like the Magic Quadrant kind of stuff. And looking at that, there is no cohesion, right? That's going to end up being either one category that is this kind of amalgamation of three or four different kinds of technologies, or it's going to be four or five categories. Because anyone that says "we do AI governance," they don't, actually. They do a piece of what we would consider a cohesive AI governance platform. Right?
It's just not there yet. It's just not mature enough, right? GenAI is effectively two years old as a market technology, right? A technology available on the market. And so you've only had a year to 18 months to build something that would govern that, right? So we just don't have the maturity. And yet the pace of motion, the pace of innovation, the pace of adoption is higher than we've ever seen in a technology, right? So it's a little bit tail wagging the dog right now.
We don't even have a fully formed dog and companies are really trying to see how do we leverage this, how do we turn this into value. And this is all generic. When we get into cyber, holy crap, this conversation changes an awful lot.
Patrick Spencer (13:58.697)
Well, speaking of that, we had a lot of shadow IT and so forth back in the day, going back 10, 15 years; it was a big topic. You'll remember those days as well. Now we're starting to hear about shadow AI, because you have all these employees who are using AI tools that are not necessarily ordained and approved by their management teams, but they still are using them to get their job done. But that can create
risk exposure in various ways; you named a few. Based on our business model, we would say that you have employees who are exposing sensitive content, confidential data, ingesting it into these public LLMs, and that creates risk in terms of that information itself, but also compliance issues. Because once you load it, it's in the public sphere; you've essentially violated some of these data privacy regulations.
Howard Holton (14:53.186)
Yeah, yes, let's be really clear, right? The data that you load into the LLM is used exactly the way that the privacy policy and the EULA of the LLM describe, right? And as a rule, if you're not paying for the product, then you are the product. Right? These things are really, really expensive to run. And so every piece of data you put into the free tier, you need to assume
is part of the product, is now part of the learning, is now part of the training, is now part of the information and data contained within that model. It's not a guarantee. But it doesn't matter, because you have to hope for the best and plan for the worst. And some of that planning is knowing: ooh, I didn't pay for this, I didn't check the privacy policy, I didn't change the default settings, and therefore it's learning from me.
Right? So stop it. At the same time, organizations need to be wise enough to know: if people can find a way to be more efficient, more effective, especially at the things they don't like doing or the things they perceive to be a weakness, they're going to do it. So knock it off, right? Your GenAI policy as a company can't be "GenAI is not allowed." Well, that's stupid. Correct.
Patrick Spencer (16:18.874)
Yeah, ostrich in the sand, right?
Howard Holton (16:21.89)
You've effectively said, well, we've got air cover. No, you don't have any cover at all. What you've done, instead of providing support for employees to use tools that are available to them, is you've said they have to shadow-AI it. Right? It's gays in the military in the 90s. Don't ask, don't tell. Our policy is we don't have gays in the military. Yeah, you do. You've got millions. Knock it off. Grow up, embrace reality, get ahead of these things.
Patrick Spencer (16:42.64)
Yeah.
Howard Holton (16:49.666)
Have an official program, an official platform, have some tooling, right? Have a policy that people can understand. The second they don't understand it, they're not gonna read it. Right? If people have to put your policy into GenAI to understand that your policy says they can't use GenAI, you've written the wrong policy.
Patrick Spencer (17:07.6)
No, it's like the days when these were around, right? We probably worked in a couple of the organizations where these were prohibited, but we know that didn't work. They were still used by employees.
Howard Holton (17:20.258)
Correct. Smartphones, you couldn't bring your own device. Or cameras, you couldn't have cameras. Do you remember? I worked for a telco years ago, and there were specific models of phones that were maintained way longer than they should have been, because they were among the only models you could buy that didn't have a camera, so that enterprises could buy them for sites where they would not allow cameras. And cameras were there all the time.
Patrick Spencer (17:47.546)
Yeah.
Howard Holton (17:49.432)
This is ridiculous. It's like, come on, guys, you've got to get your head out of the sand. This stuff does not work.
Patrick Spencer (17:56.208)
So all this AI stuff, when it comes to shadow AI, data being put at risk and so forth, where's the CISO on that? We can talk about how cybercriminals are leveraging AI to make their attacks more sophisticated, to accelerate their attacks and so forth. But then you have employees, and third parties for that matter, doing dumb things with AI that create risk exposure. Where is that in terms of the radar of the CISO?
Is she or he focused on that, or is it just coming to the forefront because of some of the board meetings or e-staff meetings taking place? Where are we at on that front?
Howard Holton (18:36.556)
I think they're attentive to it. It's probably not the number one thing at the forefront of their mind, but they're definitely attentive to it. And we are seeing attackers, right? The offense is definitely leveraging generative AI in much the same kind of ways we are, right? Can I make this seem like it's better written? Can I make this more convincing? Can I make this more real? Some organizations are seeing quite a large number of
GenAI-based attacks through things like WhatsApp, right? With impersonation. Video, audio, text impersonation. There's some funny conversations that I have seen where people are like, well, your CEO shouldn't say anything externally, because all of that can be used to impersonate your CEO using GenAI. No, no. We should just recognize that at no point will the CEO of a 35,000-person company
call a random employee and say, hey, I really need you to go out and grab some gift cards. If you've never communicated with the CEO before, they're not gonna start with that as the first volley. They have an executive assistant. If you're not their executive assistant, they're not going to contact you that way. Right? And the executive assistant will know if that's reasonable or not. Additionally, the executive assistant will have their cell phone number saved or stored.
So when it's a new number, they also know that it's BS. We spend so much time training, and the reality is the CEO will never contact you like that. So what are you thinking?
It should just be that simple, right? The reality is, GenAI should make us question everything, not trust everything.
Patrick Spencer (20:28.092)
What do you see happening in terms of security in GenAI, the improvements? Obviously, everyone says they're using GenAI today. Just about anybody that has a website, they're using GenAI somehow, at least they think they are. From a security standpoint, there's obviously a huge growth path on that front for maturation of the technology. Where are we at today? Have you seen some cool and interesting use cases that you think our audience might be interested in hearing?
Howard Holton (20:59.034)
I've seen some interesting use cases. I'm not sure that they're good yet. Yet, yet being the operative word. So let's look at companies that have security tools like EDR, XDR, CDR, whatever-DRs, right? These are investigative tools that collect and log a ton of data. The traditional methodology for doing your investigation is effectively SQL or SQLite scripting.
Right? You query the database using something that looks kind of like SQL. If it's got a low-code front end, the output is still SQL, right? You're still just plugging things in and effectively running these SQL scripts. I have seen demos, live demos in person, where the technology company has added a generative AI component to the front, where you can chat with it and get a result. And let's say 70% of the time it works beautifully. It's nice and fast. Right?
30% of the time it returns bullshit. It returns some level of response that is simply untrue. It's not trying to be untrue, and it is based on data that does exist, but it doesn't understand what real is versus not real. And here's the fundamental flaw with generative AI. We assume, because we're humans and we like shortcuts.
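[Editor's note: a minimal sketch of the pattern Howard describes here, a natural-language front end that emits SQL for an investigative tool, not any vendor's actual implementation. llm_generate_sql is a hypothetical stand-in for the real model call, and the guardrails shown (a read-only check, a dry-run query plan, and surfacing the SQL for review) catch only some of the "30%" failures; a syntactically valid but semantically wrong query still needs a human eye.]

import sqlite3

# Hypothetical stand-in for the generative AI front end; a real tool
# would send the analyst's question to an LLM and get SQL back.
def llm_generate_sql(question: str) -> str:
    return ("SELECT host, COUNT(*) AS failures FROM auth_events "
            "WHERE outcome = 'fail' GROUP BY host ORDER BY failures DESC")

def run_investigation(conn, question):
    sql = llm_generate_sql(question)
    # Guardrail 1: investigations are read-only, so refuse anything else.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("refusing non-SELECT query: " + sql)
    # Guardrail 2: dry-run the plan; this rejects invalid SQL and
    # references to tables or columns that don't exist.
    conn.execute("EXPLAIN QUERY PLAN " + sql)
    # The query can still be semantically wrong (the 30% case), so show
    # the generated SQL next to the results for analyst review.
    print("generated SQL (verify before trusting):", sql)
    return conn.execute(sql).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_events (host TEXT, outcome TEXT)")
conn.executemany("INSERT INTO auth_events VALUES (?, ?)",
                 [("srv1", "fail"), ("srv1", "fail"), ("srv2", "ok")])
print(run_investigation(conn, "Which hosts have the most failed logins?"))
# -> [('srv1', 2)]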
Patrick Spencer (22:00.753)
Yeah.
Howard Holton (22:22.442)
We assume these things are trustworthy. We assume these things are intelligent, which they're not, right? Because intelligence requires reason. These are not reasoning machines. These are mathematical models. They are nothing more than math, right? And so it does not know what the question is you should be asking.
Right? So it's going to return an answer to the question that it thought you asked, not the question you should have asked. It does not know how to ask clarifying questions. It does not know how to make a decision and go, ooh, right? The intent algorithm just returns, here's your intent. It doesn't return, here's your intent and a confidence level, and then go, well, if the confidence level is too low, I need to ask a follow-up question. It doesn't do those things. Right? It's like a seven-year-old.
Right? It's exactly like my daughter when she was like two and a half and we sent her to school, and the teacher called and said, hey, you need to come pick up your kid, she bit someone. And we go, you don't bite. She was like, okay, dad, I got it. And the next day she kicked someone. Honey, we don't kick. Okay, no biting, no kicking, so I can hit them, I can pull their hair, I can... you know what I mean? They're not reasoning machines. They just take an instruction and return a response. It's just like a three-year-old. It's fine. Someday it'll be like a 25-year-old.
We're some number of years away from that. But somehow, because the response contains a 10th-grade education, because the response can be at that PhD level, because the response can be a very high-level, very detailed, very thorough response, it's just information. It's just spitting out information. We've ascribed it details that it doesn't have. And so we also have to be smart enough to turn up the BS meter and go, hey,
that's wrong, hey, that's not even close, and really learn how to work with these things. And if we just accept the response, we're never going to get there. We have to be smarter and we have to be more critical than we've ever been before. And if we can do that, then our acceleration curve in leveraging these tools and getting value from them will hockey-stick. And today it's kind of doing this, right?
Patrick Spencer (24:38.948)
Yeah. What do you think we can do about the hallucination factor in using these tools? It varies. It depends on the person who's writing the prompt, one; it depends on the question you're asking it, two; and what you're feeding into it from an information standpoint, I guess, three, right? Where do you think we're at? There are some times when we'll generate content or ask it to analyze data and it's right spot on.
I did a case study the other day. I ingested the details about the client and it spit out a case study. Like you said, it sounded great. The whole piece sounded accurate. I didn't cross-reference. I sent it over to my product marketing manager and he looked at it and said, this is great, but these two paragraphs, I have no idea where it got this information, because it was talking about a 30 to 40%. It sounded correct contextually, and if you didn't know the original story, you would have thought it was correct.
Where do you think we're at on that hallucination front, where we can begin to shave more and more off so it becomes more and more accurate and you don't have those types of aberrations take place?
Howard Holton (25:44.518)
Sure. So there are a few things you can do. The first thing is you have to go into it understanding that these are large language models. They're not large math models. They're not large architectural models. They're not large anything else. They are large language models. That means they're a predictive engine that tries to determine, in order to answer the prompt, what is the next, technically, part of a word that drives to that answer. Not:
what knowledge do I have? It doesn't know anything. It doesn't have any knowledge. It understands how words go together. All of the data that's fed to it is designed to predict what is the word that would follow in a response that matches the question. And so if it's heard the question a lot and it's seen the response a lot, it's really good. When it hasn't, it's just not good. So the first thing that I always do, any time there's a statistic,
especially if I have the stats in an easy-to-find way, I simply disregard all of them, and I let my team know those are just placeholders, please replace them with the right stat. Do not use what AI gives you. Just assume that it's wrong and replace it manually. The second thing you can do, let's say you don't want to go through all that work, then you just go through and ask it: fantastic, go back through what you just gave me, and for each one of the statistics, I want you to give me a reference
right here in the response. I want you to link it back to the document, but I also want you to give me the full context of the sentence or paragraph or section it came out of. That gives you the ability to do a Ctrl-F, copy, paste, find, and make sure that it actually exists. Right? And you can always call it out: I can't find that reference, can you go back through and double-check? If you give it the ability, like if you iterate, it's actually pretty good.
But those three things combined tend to give me really good results. But again, we go back and forth. Like, I did a really massive project. It ended up resulting in a really detailed document, about 125 pages. And I went back today through all the prompts, and it was, God, I don't know, a hundred prompts easily. My work was easily a hundred prompts. Right.
Howard Holton (28:05.294)
Probably quite a lot more and we exceeded the context window several times. And so I had to redo things. I had to feed things back in. I had to copy and paste and move, you know, in order to get this really good result that ultimately now I have to spend a whole bunch of time backing out, putting the logic back together to put that together into something that makes it reusable. Because ultimately, reusability should be the thing that we aim for. This stuff isn't super easy.
Patrick Spencer (28:18.908)
Hmm.
Howard Holton (28:34.648)
but the ability to think in a structured fashion, and not assume that it has knowledge, not assume that it's anything more than a five-year-old. It's just a five-year-old that has access to all of the world's information. Right? That puts you in a really good position to create really good things.
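[Editor's note: the "Ctrl-F" verification Howard walks through above can be automated. A minimal sketch, assuming the draft and the source material are plain strings; the sample text, the percentage-only regex, and the pass/fail messages are illustrative, not part of any product.]

import re

# Illustrative source material and model draft; in practice these come
# from your documents and the LLM's response.
source_doc = "Revenue grew 12% year over year, and churn fell to 3.1%."
draft = ("The client saw 12% revenue growth and churn fell to 3.1%. "
         "Support costs dropped 35%.")

# Pull every percentage claim out of the draft; a fuller pipeline would
# also cover currencies, counts, and dates.
stats = re.findall(r"\d+(?:\.\d+)?%", draft)

# The Ctrl-F check: a figure survives only if it appears verbatim in
# the source. Anything flagged goes back to the model (or a human) to
# cite, correct, or delete.
for stat in stats:
    if stat in source_doc:
        print(stat, "- found in source")
    else:
        print(stat, "- UNSUPPORTED, replace or verify manually")
# 12% and 3.1% check out; 35% is flagged as a likely hallucination.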
Patrick Spencer (28:53.55)
I agree. I think those more complex prompt sequences, if you have them structured the right way, get you more substantive and more accurate outputs using the AI tools. I've also found that having Claude check ChatGPT, or vice versa, or Gemini, will actually identify sometimes some of those hallucinations or instances where algorithms weren't calculated correctly and so forth.
I've benefited on that front. I generated a risk exposure index report a month or two ago using AI, and it obviously included a series of prompts, but I also used one AI tool to check the accuracy of the other one. Being a paranoid person like I am, that had to be the case, particularly since we're reporting specific data points. So yeah, it's an interesting landscape. We talked a bit about all those that end in
DR. CDR, DLP and so forth. DLP doesn't end in DR, but when I'm talking about risk management and measurement, I would think AI, when you look at the cybersecurity space, that's probably one of the starting areas where it's gonna be most beneficial and probably the lowest-hanging fruit. Or that's my sense.
Howard Holton (30:14.638)
So when we look at any program dealing with data and data quality and data risk and data governance, we're late to the game. We've been creating data since machines were created. Well, technically way before machines, but we'll just talk about the data that's contained within machines, right? And so we have an immense amount of data. DLP platforms didn't exist for most of that time, right? Data governance platforms didn't exist for most of that time. So we've been creating data in
this great open Wild Wild West, and we're now saying, cool, now we've got to govern it. Now we've got to regulate it. Now we've got to do these things. It's very complex to do. For organizations that haven't gone way down that path and been forced to make that investment and been forced to get their arms wrapped around it, AI offers you the opportunity to do that. Don't do it for everything. Do it for the data that's going into the AI. And when you're like, hey, this data is too complex for us to govern: great, no AI for that. Move on.
Pick a different product, a different AI, pick a different outcome with a different data set that you can in fact govern, that you can regulate, that you can bookend with the prompt for onboarding and offboarding to manage the AI. And when you're like, well, nothing has value for us because we can't use any of our data, cool, then you're not mature enough to do that yet. Pick something that you can, pick something you can wrap your arms around, go through the heavy work.
Patrick Spencer (31:34.758)
Hmm.
Howard Holton (31:38.802)
Once you've done it once and you kind of realize what the systems are and how to work with them and you've built an operating model around it, then the next one that's more complex becomes easier, and then the more complex and more complex. And before long, that will actually be as easy to you as any other process inside your organization, as easy as running the coffee machine. Right? And you can really start accelerating and governing the data that goes in. And before long, you know, you'll have
at least managed that risk to a reasonable level.
Patrick Spencer (32:11.58)
Data classification, that seems to be a challenge when I talk to organizations, and our clients face the same issue, and that's sometimes why they come to us: understanding where that data is, and moreover what type of data it is. Not all data requires the same level of governance and security controls, obviously. Where are we at on that? Do you think AI will help? And what does quantum computing do to that paradigm at the same time?
That's four questions in one.
Howard Holton (32:44.147)
Yeah, so data classification is very hard to do. And it's really hard to do because of edge cases. Right, it's really easy to find a phone number that's framed like we would expect a phone number, but you replace the dashes with dots, it's no longer as easy. Right? You replace the dashes and dots with nothing, again, now it's just a 10-digit number. The advantage that generative AI has is generative AI can take context into consideration,
which regular expressions cannot. Right? And so that's a huge value. It's very, very, very expensive to use generative AI to do data classification. It's ridiculously expensive. Right? And so what I'm expecting to see is smart platforms do something like a near-field analysis: hey, any time you have a 10-digit number or a nine-digit number that doesn't exactly match this pattern,
send that to generative AI and let the generative AI look at the full context of the data stream, right? Whether that's just a message or a document or whatever, and try to use context to determine the data classification. The potential is there for it to do that, and I think that's going to be really positive. As we move through data classification, I think that stuff is really valuable.
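[Editor's note: a minimal sketch of the near-field analysis Howard is predicting, not a shipping feature. The cheap regex pass handles well-formed phone numbers; only bare 9- or 10-digit strings are escalated, with a window of surrounding text, to llm_classify, a hypothetical stand-in for the per-token-priced generative AI call.]

import re

# A cheap regex pass catches well-formed phone numbers; bare 9- or
# 10-digit strings are ambiguous and get escalated.
PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")
AMBIGUOUS = re.compile(r"\b\d{9,10}\b")

# Hypothetical stand-in for a generative AI call. This is the expensive
# step, so only the ambiguous hits, with a little surrounding context,
# are ever sent to it.
def llm_classify(snippet: str) -> str:
    return "phone" if "call" in snippet.lower() else "other"

def classify(text: str):
    labels = [(m.group(), "phone") for m in PHONE.finditer(text)]
    for m in AMBIGUOUS.finditer(text):
        # Near-field analysis: hand the model the 20 characters on
        # either side of the hit so it can use context.
        window = text[max(0, m.start() - 20):m.end() + 20]
        labels.append((m.group(), llm_classify(window)))
    return labels

print(classify("Call me back at 555-123-4567."))
print(classify("Call me back at 5551234567, invoice number 1234567890."))
# -> [('555-123-4567', 'phone')]
# -> [('5551234567', 'phone'), ('1234567890', 'other')]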
Patrick Spencer (33:56.964)
Interesting.
Howard Holton (34:10.328)
When we get to quantum, it's going to make everything infinitely faster. That's the goal of quantum within that kind of space. I don't know that quantum is going to help much with data classification other than speed. I don't think it's going to unlock some new variable. It may, a decade from now, make generative AI substantially cheaper, right? Because you have more processing power per square inch. However, right now,
that square inch is kind of like DNA storage. I don't know if you've looked at DNA storage recently; we made a huge advancement in DNA storage, and effectively, right, you can store a petabyte in a thing the size of a fingernail. It's now read-writable. You can actually roll your own. But a petabyte of storage is like a billion dollars. The unit economics are so broken, it's ridiculous, right? And so you just kind of have to go, well, while this is neat,
Patrick Spencer (35:02.042)
Hmm.
Patrick Spencer (35:05.979)
Wow.
Howard Holton (35:09.774)
this is not sustainable for anyone. And quantum is kind of there. Where quantum poses really, really, really big issues is in cybersecurity. Right. Where, you know, if that...
Patrick Spencer (35:11.932)
It's staying more peaceful right now.
Patrick Spencer (35:25.946)
Organizations are gathering all this data today, waiting for quantum so they can unlock it, right? Among other things.
Howard Holton (35:29.944)
Correct. Correct. Correct. And here's the bummer about it, right? All encryption is designed not to be impossible to crack, because it's not. It's designed to be so overwhelmingly costly to crack and take so much time that it's effectively impossible. Well, quantum has the potential to change that to where neither of those things is true. It doesn't take all that much time, and that changes the cost. Now, the bummer about it is...
Even if we came up with anti-quantum encryption, and there are certainly companies working on that, all of the data that's being transmitted today doesn't have to be cracked now; it can be captured, saved, and stored to be cracked at a later date. Right. And then that data becomes available and that encryption becomes open. Right. Now, the good news is there's no large organization that can really afford quantum that can also afford
the lawsuit from the intellectual property theft, right? So you're really down to nation-states. So now the question is, because you can't prevent nations from owning this technology, not reasonably: what nation-states will own quantum? What nation-states are gathering what data, and what is our risk and exposure there? I don't spend a whole lot of time thinking about that, simply because there is literally nothing that can be done about it today.
You cannot stop it. You cannot really prevent it. You need to know that it exists. You need to understand that it exists. You need to be paying attention for when we have a solution. But also, if everything that we've transmitted that anyone saved could eventually be cracked using quantum, there's nothing you can do about that. You can't somehow go back in time and not transmit that data. The data has been transmitted.
Patrick Spencer (37:20.678)
Yeah, it's too late. The genie's out of the bottle, per se.
Howard Holton (37:24.959)
Well, even if the genie is not out of the bottle, the bottle is no longer in your hands. Right? And you can't get the bottle back.
Patrick Spencer (37:31.433)
Yeah. Unfortunately, it's across the ocean. We won't say in which direction. So, yeah, your background, and I'll wrap things up here, I want to be sensitive to time, but I thought it might be interesting to hear you talk about your background, because it is unique. Like you said, you have CISO experience. You've been a CIO, you're a CTO. Not everyone has that opportunity to
Howard Holton (37:35.052)
Right, right. So, so.
Patrick Spencer (37:58.608)
hold all those and wear all those different hats. How did that happen? I assume you had to do something to get those opportunities.
Howard Holton (38:08.018)
Yeah, so my dad was a law professor and owned the only law office in our little town. He definitely skewed the results in the wrong way for attorney income, because he never made any money. But he had a problem when I was young, and I wrote a piece of software to solve that problem. I wrote templates and watermarks for WordPerfect in an application called Plead Perfect, and it was specific to law offices.
He liked it so much, and it saved so much time, that when he shared it with his law firm buddies at other firms, they were like, we want that. Can we buy that? How do we get that? He closed the law firm. We started selling that. I was 11 when that happened. I took over my school district's network, which was not a thing that was done at the time. This was the early nineties. I became a Novell engineer, became a Windows engineer, became a very early Microsoft Windows
Patrick Spencer (38:44.732)
well.
Howard Holton (39:05.484)
Active Directory MCSE. My curiosity has really led the charge here. And it was the right place, right time. This was the Wild Wild West. The first thing I did in security that was really, seriously, security was I wrote a custom piece of code that did logging, that logged the actions people took inside a Novell network, because I was certain we had an insider threat, and I used that to catch an insider
Patrick Spencer (39:35.59)
Hmm.
Howard Holton (39:35.914)
and, you know, execute the policy effectively. There were no tools to do it at the time, so I wrote one. I didn't realize it would have any applicability, so I just wrote it for that thing and moved on with my day. And, you know, that's kind of been what I have done ever since, just that level of curiosity and the willingness to kind of stick with it and go deep and do what needed to be done. And I've had very, very good mentors that have taken the time
to make sure that I'm on the right track.
Patrick Spencer (40:06.734)
Interesting. That's a great story. WordPerfect, I remember those days, and it took a long time for me to move from WordPerfect to Word. I don't think you had a choice, because it was dying and you had to move. So how are...
Howard Holton (40:18.904)
You did not. Yeah, unfortunately, the CEO messed it up. The CEO said, Microsoft is in our sights and we want to see Microsoft destroyed, effectively. And what he missed was: the operating system your stuff sits on is made by that Microsoft company. That's not really going to work. So anyways.
Patrick Spencer (40:36.71)
Yeah, yeah. Destroyed the chicken that was laying all the eggs, right? Per se. Well, Howard, for folks who would like to get in touch with you, LinkedIn, I suspect, is a good place to start. For GigaOm, they should go to the website. Any recommendations for our audience who want to engage with you directly or find out more information about the firm?
Howard Holton (41:03.586)
Sure, so I'm on LinkedIn all the time. All the time. Anyone can ask me any question except, would you buy my stuff? Don't ask me that question, because I'm not gonna buy your stuff. But reach out to me about anything, I'm happy to help. I really do believe that it is our job to help people, to mentor whenever possible, to help all of us be better. I don't have any sacred knowledge that I'm not contractually obligated to keep through an NDA. GigaOm is
gigaom.com. We've been around for a good long time; sign up for a free account. I took over and reduced the price to less than one-tenth of its previous price, because I want everyone to have access to analyst research. Everyone should be able to invest in themselves. At this point, I think we're cheaper than a Netflix subscription: it's $12 a month if you buy the year.
Patrick Spencer (41:54.702)
wow.
Howard Holton (41:55.496)
Black Friday is like a week away, so I'm sure we'll have a Black Friday and a Cyber Monday sale that will last through the year. There'll be a coupon code somewhere. You know, yeah, so reach out. I am the easiest person in the world to find. And if I'm going to be at a conference and you want to say hi, please, please, please hit me up and say hi.
Patrick Spencer (42:12.528)
That's great. All right, audience, you've got to take Howard up on his offer. Go take advantage of that Black Friday deal. Well, Howard, thanks for your time today. We hope to have you on the show again in the future. This has been a very interesting conversation. For our audience: check out other Kitecast episodes at kiteworks.com/kitecast.