How AI is Reshaping National Security

Episode Description:

What happens when artificial intelligence meets the intricate demands of national security? This episode brings you an exclusive conversation with Hillary Coover, a seasoned expert in national security technology. We delve into the responsible use of AI in government contracting, unraveling the complexities of technology adoption policies and the ethical use of public data. Join us as Coover reveals her journey in harnessing AI for critical security missions and discusses the future of AI in transforming government operations. From analyzing large-scale data for security insights to addressing the ethical implications of AI in privacy, this episode is packed with groundbreaking discussions on AI's role in shaping the future of national security and governance.

Full Transcript:

Andrew Miller:

[0:00] Welcome to another episode of AI Unboxed. Today's guest is Hillary Coover.

Now, Hillary is an accomplished growth executive in the national security technology sector.

She has a track record of sourcing and executing strategic technology and business partnerships for software and services companies in the $4 million to $400 million range.

With a proven track record and deep relationships in the national security community, she has captured and designed innovative technology solutions for federal customers and led many technology integrations, OEM agreements, complex implementation projects, and teams.

Now, given her extensive background as a business development professional for government contractors, she conducts workshops, training staff on the responsible adoption and use of AI tools for government proposals and capture efforts.

Additionally, she designs and implements guidelines for AI usage that prioritize safety and security for government contractors.

Now, if that wasn't enough, she is also a host with the podcast It's 505 and gives audio updates on cybersecurity and open source news.

Wow. So, so great to have you here, Hillary. Welcome to the show.

Hillary Coover:

[1:12] Thanks so much for having me.

I'm excited to be here and to chat about my favorite topic of the day, AI.

Andrew Miller:

[1:18] Awesome. Well, could you briefly, I know I did a bit of an introduction, but in your own words, could you briefly introduce yourself and your area of AI knowledge? Anything that maybe I missed?

Hillary Coover:

[1:29] I got my MBA and I focused on data analytics a few years ago.

This was before you could learn how to code through ChatGPT.

So I learned how to leverage Python and some other data analysis languages and tools during my MBA.

And that kind of propelled me into the national security technology space away from the professional services areas and got me into the product space.

And I got, as a business development executive, I wanted to better understand the types of tools and services that our products were providing to our customers.

My area of expertise begins with an understanding of national security stakeholders and their problem sets, and extends to government contractors and their relationships with those stakeholders.

And so the biggest gap that I saw earlier this year was government contractors coming out and saying, everybody stop, nobody use ChatGPT, it's banned.

[2:39] Except it didn't seem to work.

I think there were several surveys that went out that showed anywhere from 50 to 75% of employees of government contractors were actually using it just on their personal devices.

And so that disconnect between, you know, embracing new technology coming out and trying to ban it, it was pretty incredible.

So I came in to a couple of firms earlier this year to say, hey, let me do a workshop and show your employees how LLMs work and how to use this new generative AI technology in the context of dealing with often sensitive proposal information and how to create governance policies around the adoption of those technologies.

And not just throw blanket bans that nobody's going to listen to.

And so it changes every week.

And so I find myself following up even just, I think, three days after I gave that workshop, saying, hey, guess what? This has totally changed, everyone.

Just so you know, stay up to date on X, Y, and Z.

And it's an incredibly rewarding, fast-paced technology to stay on top of.

And over the last year, getting to know all the best resources and tools to leverage it and maximize its value has been really fun.

Andrew Miller:

[4:08] I'd like to dig into one of the things you just said a little bit further. You said that the government came out and said, no, don't touch this. Why do you think they were so adamant about that?

Hillary Coover:

[4:19] Not the government, actually, but a lot of government contracting companies. For some additional context, and I won't call anyone out, many of the government contractors still use very archaic systems for their business development process and information tracking. Very archaic.

And there's a huge opportunity there for anybody that wants to go in and start to really reinvigorate and disrupt that.

I mean, it's changing all the time, but-

Andrew Miller:

[4:56] Right, of course.

Hillary Coover:

[4:57] That being said, it's a big gap to have to manually search through often thousands of digital artifacts to try to find what you're looking for when you're putting together a solution for a government problem. What I like to tell people is that when I'm engaging with an LLM, I treat it the way I would treat engaging with a browser. Anything you put into a Google search, or any search for that matter, is not private. So if you're going to engage with an LLM, the best way, from a security perspective, to explain the risk and have it resonate, at least with the government contracting crowd, which tends to be very risk averse, is to say, hey, would you put that in a browser? Most of them look at me like, oh, I didn't realize that. I thought this was totally safe, totally private, and different from browsers. Drawing that parallel, at least for this community of users, was quite helpful.
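Coover's "would you put that in a browser?" test can even be operationalized as a pre-flight check before any text leaves the building. A minimal sketch, purely illustrative: the marker list and the `screen_prompt` helper are hypothetical, not any firm's actual policy, and a real policy would cover far more (CUI labels, classification banners, customer names).

```python
import re

# Hypothetical markers of sensitive proposal content.
SENSITIVE_MARKERS = [
    r"\bproprietary\b",
    r"\bsource selection\b",
    r"\bCUI\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # SSN-like pattern
]

def screen_prompt(text: str) -> bool:
    """Return True if the text looks safe to send to a hosted LLM.

    The interview's rule of thumb: if you wouldn't paste it into a
    public search engine, don't paste it into an external LLM.
    """
    return not any(re.search(p, text, re.IGNORECASE)
                   for p in SENSITIVE_MARKERS)
```

A check like this only catches the obvious cases; the point of the analogy is the habit of asking the question at all, not the regex.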

Andrew Miller:

[6:19] Yeah, no, that makes a lot of sense. I mean, you have to use something that people are familiar with, right?

Create that associative heuristic there that says, they already know this, I can connect it to that.

And then they're like, oh, I understand it, without getting into all the details about, well, I mean, it is called OpenAI for a reason.

And it has disclaimers now, but you know, for a while there, when it first launched, it's like, just throw everything in there.

Right. It's totally fine.

They didn't give you the message like they do now, where if you put in somebody's personal information it says, whoa, you know, you shouldn't do that.

Um, but at the beginning that that wasn't there.

Hillary Coover:

[6:56] Yeah, I agree. And the beginning is where there was the most demand. Like, how do we control our company information and our customer government information in the context of this really fast-evolving technology?

And so just being passionate about, you know, the government stakeholder and very passionate about emerging technology.

I kind of stepped in and set up a few workshops to guide people through not just where it was at that point in time, but how to stay on top of it.

Andrew Miller:

[7:30] Makes a lot of sense. I mean, speaking of being passionate about it, what attracted you to AI?

Hillary Coover:

[7:38] I worked for a company that leveraged publicly and commercially available information for national security purposes.

And for me, that was the most eye-opening: to be able to see the difference that widely available data can make in law enforcement investigations, in counterterrorism missions, and in cases of missing children.

And I realize the privacy implications of those technologies are, you know, their own separate thing.

And I've taken that with me. For the last couple of years, I've been on the commercial side, having leveraged a lot of that data and information on the public sector side of things.

It was really interesting to see how it's used in marketing, and how sophisticated it is in the commercial sector as well.

Andrew Miller:

[8:43] Yeah, no, that makes sense.

Hillary Coover:

It made me say, wow, you can do a lot with all this data.

Like, the insights you can pull from it are incredible, and just force multipliers for really important missions.

Andrew Miller:

[8:58] Absolutely. Absolutely. I love that. I know we spoke a little bit about government contractor risk averseness, but specifically on adopting this and seeing it being utilized, how do you think it's maybe like revolutionizing the industry?

Hillary Coover:

[9:17] I think every company is quickly scrambling to come up with an AI proposition or acquire an AI company as soon as possible in the government contracting context.

It's been that way, I think, for the last two years.

There are a lot of compelling technologies being invested in by organizations like In-Q-Tel and non-traditional government funding sources like the Defense Innovation Unit and some other really interesting national security technology investment arms.

So I don't see that as necessarily a gap there.

I think they're moving very fast for the government and getting around some historically challenging acquisition regulations.

Andrew Miller:

[10:10] Yeah, absolutely. Well, maybe you could jump into a real-world example of a traditional process that's been improved by AI that you've seen or built?

Hillary Coover:

[10:21] Leveraging publicly and commercially available data is inherently controversial, and especially when the government is doing it.

I don't know why people get their feathers so ruffled over the government using it, and yet they don't care that companies have it.

The corporate surveillance thing doesn't bother your average person.

But anyway, with that in mind, I will say: locating criminals.

You can collect massive amounts of data with temporal parameters and geographic parameters to discover what an on-the-ground situation is like at any given point.

[11:13] If you want to look and see what's happening on the other side of the world, in a sensitive area like a border between two countries at war, you can leverage satellite imagery data, you can leverage RF data, you can leverage mobile device location data even, and a number of other types of sources, to get an on-the-ground picture. It's potentially delayed, depending on the data source and how much you're willing and able to pay for all that data; it's not necessarily going to be real time. But you get an understanding of where high-traffic areas are in certain parts of the world.

That's a loose example, you know, talking about a border area, but there are activity-based intelligence research projects ongoing on the Russian-Ukrainian border where they're trying to identify places where there is lots of traffic.

So they know, okay, there are likely not very many IEDs on the road, lots of people are traveling through there.

These are the kinds of insights you can derive from massive amounts of information, insights that no human, and no basic computational approach, is going to be able to extract.
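The pattern Coover describes, collecting observations bounded by temporal and geographic parameters and looking for concentrations of activity, can be sketched very crudely. This is a toy illustration only: the observation tuples and the grid-binning helper are invented for the example, and real activity-based intelligence pipelines fuse far messier, far larger data.

```python
from collections import Counter
from datetime import datetime

# Toy geotagged observations: (timestamp, latitude, longitude).
# In practice these could come from satellite, RF, or device-location feeds.
observations = [
    (datetime(2023, 6, 1, 8), 48.51, 35.02),
    (datetime(2023, 6, 1, 9), 48.52, 35.03),
    (datetime(2023, 6, 1, 22), 48.52, 35.02),
    (datetime(2023, 6, 2, 7), 49.90, 36.20),
]

def high_traffic_cells(obs, start, end, cell_deg=0.1):
    """Bin observations inside [start, end] into lat/lon grid cells
    (integer grid indices) and rank cells by activity count."""
    counts = Counter()
    for ts, lat, lon in obs:
        if start <= ts <= end:
            cell = (int(lat // cell_deg), int(lon // cell_deg))
            counts[cell] += 1
    return counts.most_common()

# Rank cells observed during the first two days of June.
ranked = high_traffic_cells(observations,
                            datetime(2023, 6, 1), datetime(2023, 6, 3))
```

The highest-count cell is the "lots of people are traveling through here" signal she mentions; everything past this counting step (deconfliction, source weighting, latency handling) is where the real systems do their work.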

[12:38] Another one, similarly, would be a border. You're trying to monitor cross-border activity to say, okay, what's going on?

And leveraging this technology, you can actually identify tunnels potentially used for smuggling.

So I'm not saying there is a specific use case or border in mind, but just sharing that it is a use case, because you get so much data from so many different sources. The world is your oyster.

There's no limitation to the amount you can collect that provides that level of insight. And then maritime data as well.

There are a lot of providers out there that have nailed the ability to identify anomalous behavior and identify potential sanctions violations by certain companies and countries. And so it's a really, really cool space that AI has revolutionized.
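One concrete maritime anomaly signal that providers in this space look for is the "dark period": a long gap in a vessel's transponder pings that may indicate the transponder was switched off, a common red flag in sanctions monitoring. A minimal sketch, with invented timestamps and a hypothetical `dark_periods` helper (real providers use richer models than a single gap threshold):

```python
from datetime import datetime, timedelta

# Toy AIS-style position-report timestamps for one vessel.
pings = [
    datetime(2023, 3, 1, 0, 0),
    datetime(2023, 3, 1, 1, 0),
    datetime(2023, 3, 1, 2, 0),
    datetime(2023, 3, 2, 4, 0),  # 26-hour silence before this ping
    datetime(2023, 3, 2, 5, 0),
]

def dark_periods(timestamps, max_gap=timedelta(hours=6)):
    """Return (start, end) pairs where consecutive pings are further
    apart than max_gap -- candidate 'went dark' intervals."""
    ts = sorted(timestamps)
    return [(a, b) for a, b in zip(ts, ts[1:]) if b - a > max_gap]

gaps = dark_periods(pings)
```

The flagged interval is only a candidate; analysts would still correlate it with location, cargo, and ownership data before calling it a violation.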

Andrew Miller:

[13:52] Yeah, I'd say, you know, before we had big data processing and large language models, with manual boots on the ground just sitting there looking at all the data, you're going to miss so much. It's human error, right? We can only process so much information that's coming in front of us. But when you can train a model, it's not going to get it right the first time, but you keep improving on those iterations, and the amount of data that can be processed in, like, a second, I mean, we're living in the future right now. And it's amazing how much has changed in just the past 20 years.

And now, with AI becoming more open and available to businesses and consumers, they're just, I think, 10x-ing the amount of things that we can do with these machine learning algorithms.

Hillary Coover:

[14:48] Yeah. And there is another, there's a particular stakeholder.

I won't name the actual government customer, but they have engaged with a company that basically provides AI in a box.

They go in, they collect and clean all of your data.

They put people on site to make sure that they can oversee the implementation.

And they make it so easy to be able to extract insights from potentially decades of data that have been in all sorts of different forms.

A lot of companies have done a really good job in the government contracting space of taking existing data in all sorts of forms and drawing insights from it in a productized fashion, which is fantastic.

Incredibly impressive to me. Because think of taking data from classified systems, with all of the crazy requirements around navigating that, and something from, like, 1955. Being able to make it all work together, and usable across agencies, is really something.

Andrew Miller:

[15:57] Absolutely. And making it easy. I haven't really shared this much, but in a past life, you know, I was director of 9-1-1 for South Texas.

And so working within different governmental entities, the tools that we had were, I mean, you mentioned archaic. I mean, they were a little archaic, but they were also so hard to use.

They were not easy to just jump in and click and drop.

And this is how you do it. It was difficult to get a lot of our processes and systems working in a streamlined fashion.

So for an organization to come in, and that's the opportunity: make it so simple for the government, or government contractors or entities, that you can jump in there, like with a consumerized product, and be like, I'm used to using Microsoft Word.

I'm used to using Google. I just drag and drop. It's easy. Making it that simple, from a UI and UX perspective, is by itself revolutionary.

And then layering on the actual machine learning and neural networks and AI, you know, that's incredible.

Hillary Coover:

[17:06] Yeah. And things are a little bit crazy with open AI right now, obviously.

But I would say the next area of disruption in the government contracting and government community is going to be in the everyday use, and really widespread enterprise use, of AI by adopting something like Copilot, whatever that looks like in the future.

I know there's a minimum of 300 licenses at this point that you have to buy if you want to try it out right now.

But Microsoft is used so widely throughout the government contracting and government community that once that rolls out, especially if there are on-prem offerings of it, I think people are really going to start adopting it.

It's probably going to put some data businesses out of business.

Andrew Miller:

[18:00] Sure. But it's all moving us forward and making it, you know, more impactful, you know, the work that everybody's doing.

So that makes a lot of sense.

Could you describe, and you kind of touched on this with this org, but there might be another one.

Could you describe maybe a fascinating AI application that you've recently worked on?

Hillary Coover:

[18:20] If you've ever put together or heard about putting together a proposal to win a government contract, you know it's probably a pretty rough process.

I spent years writing and managing proposals, and that can be pretty tedious.

I mean, sometimes you have like a thousand page proposal that has to go in and get produced and all that.

So for me, the easiest application of that was to go to a company and say, okay, let's get your entire library of proposals, let's categorize them, and let's make everything searchable by what was technically acceptable, and integrate feedback from the government customers when possible; they don't always provide it.

[19:11] Take all of that information, because many of these people have these massive libraries of proposal and pricing content that they've never actually derived any insights from, outside of manually going through it and adding their own kind of flawed human intuition, right?

And so being able to say, hey, it looks like the last 10 winning proposals for this particular government customer had all of these things in common, or it looks like the ones that we lost had all of these things in common and were missing the things the winners had. Being able to extract insights very quickly, leveraging on-prem data, is incredible. You can do that in a safe way, and it's a two- to three-month consulting project, tops. I think people don't realize that you can do it in a safe and secure way, that there are a million consultants out there that do that and do it well, and that it's really accessible for the small and mid-sized business market.
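The "what did the winners have in common?" question can be sketched as a simple win/loss term comparison over a categorized library. This is a deliberately crude proxy: the four-entry library and the `distinctive_terms` helper are invented for illustration, and a real engagement would work over thousands of documents with far better text analysis than word counting.

```python
from collections import Counter

# Toy proposal library: (outcome, text) pairs. In a real engagement
# these would come from the contractor's on-prem proposal archive.
library = [
    ("win",  "dedicated transition plan with key personnel retention"),
    ("win",  "transition plan and small business subcontracting goals"),
    ("loss", "aggressive pricing with minimal staffing detail"),
    ("loss", "minimal staffing detail and aggressive pricing assumptions"),
]

def distinctive_terms(lib, outcome, top_n=3):
    """Rank terms that appear in proposals with one outcome but never
    in the other -- a crude 'what the winners had in common' signal."""
    ours = Counter()
    theirs = set()
    for result, text in lib:
        words = set(text.split())
        if result == outcome:
            ours.update(words)
        else:
            theirs |= words
    return [(w, c) for w, c in ours.most_common() if w not in theirs][:top_n]

winners_edge = distinctive_terms(library, "win")
```

Even this toy version surfaces the kind of pattern Coover describes (here, that winning texts consistently mention a transition plan); the consulting work is in categorizing the library and doing the analysis safely on-prem.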

Andrew Miller:

[20:18] That's definitely revolutionary. Because you try to bring in, and I've had proposals that I've looked at, you know, back in the day, right, where the people working on them are following a process that, in their minds, has always kind of worked.

You know, I think if I put it this way and I phrase it this way, you know, that's worked, you know, before.

And then they are always surprised when, you know, new regulations or rules or anything have been published that changes the RFP requirements, which impact the response.

And you have to reject it and say, sorry, you needed to follow this process or this is like new.

And they're like, I've always done it this way.

I thought this was right. OK, back to the drawing board, which they've spent months or a good amount of time working on.

Hillary Coover:

[21:06] Yeah, it's true. And I advocate for the on-prem because of the security concern.

I'm probably more risk-averse than most people, I feel like, because of my experience.

Andrew Miller:

[21:20] You're a privacy expert, so that makes sense.

Hillary Coover:

[21:22] There are companies out there that are creating really compelling technologies for the government contracting proposal community by saying, hey, upload an RFP and we'll spit out a draft for you.

And then, oh, by the way, we'll create your library and integrate insights from those.

They're not very sophisticated at this point because they're not customized.

Like, they're not very customized. Whereas at this point, if you hire a consultant for an on-prem solution, you're going to get a safer and better product and outcome.

But the stuff that's coming out on the software-as-a-service subscription side for this community is still compelling and worth following and looking at.

Andrew Miller:

[22:08] Yeah. What do you think were some of the challenges in bringing that type of application to life?

Hillary Coover:

[22:14] Confidential but unclassified data. When you're layering different pieces of information into one document, those individual pieces of information could be unclassified.

But when you start to layer all those things together, you approach challenging territory and potentially violating security policies.

And so I think you need a human in the loop to monitor that and make sure nothing slips through. If you're using a cloud product, have a human in the loop to determine what prompts go in and what documents get uploaded.

I think that's the biggest barrier to success in that context, in my opinion, and that could change tomorrow.

And again, I consider myself a little more risk averse, I think, than any other sales and marketing professional you'll probably ever meet.

But yeah.
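The human-in-the-loop gate Coover describes, where nothing reaches a cloud LLM until a person signs off, can be sketched as a tiny review queue. All names here (`UploadRequest`, `ReviewGate`, the reviewer title) are hypothetical illustrations, not a real product's API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UploadRequest:
    document: str
    approved_by: Optional[str] = None  # set only by a human reviewer

class ReviewGate:
    """Queue documents for human review before any cloud upload."""

    def __init__(self) -> None:
        self.pending: List[UploadRequest] = []
        self.released: List[UploadRequest] = []

    def submit(self, document: str) -> UploadRequest:
        """Queue a document; it stays pending until reviewed."""
        req = UploadRequest(document)
        self.pending.append(req)
        return req

    def approve(self, req: UploadRequest, reviewer: str) -> None:
        """A named human reviewer releases the document for upload."""
        req.approved_by = reviewer
        self.pending.remove(req)
        self.released.append(req)

gate = ReviewGate()
req = gate.submit("past-performance summary (unclassified)")
```

The design choice is simply that the upload path reads only from `released`, so the approval step cannot be bypassed in code, which mirrors the governance policies she builds for contractors.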

Andrew Miller:

[23:21] Speaking of being risk averse, I think this takes us to a core piece of your expertise, and that's the ethical considerations behind AI.

You know, what would you say are some of those considerations that are essential in AI-related work?

Hillary Coover:

[23:39] So in the national security context, the most hotly debated one is what happens when we collect information on U.S. persons, and violating the privacy of American citizens abroad, in the context of the legislation that's being debated right now.

And what to do with that. And my response to that is, I mean:

This technology is so valuable that as long as it's available to everyone, including our adversaries and corporations, I mean, it should absolutely be available to the government.

And from what I've seen, a lot of the government folks take an enormous amount of responsibility when they do encounter that.

And there are, depending on the agency, very specific guidelines and standard operating procedures for when you do encounter data on a U.S. citizen overseas or something.

[24:48] There will always be a human in the loop for that reason, because you have to be careful. But a lot of folks are calling on the government to stop collecting the data, period, and it's like, whoa, whoa, whoa. If we do that, can we maybe also make it so that China can't get it on American citizens, and make it so that corporations can't use it to target and sell to American citizens?

So I would say that's a big ethical consideration is making sure that there's a human in the loop to be able to adhere to standard operating procedures, to respect privacy of all people.

It doesn't matter if it's an American citizen, a European, or whoever, right?

And so that would be my biggest area that I think the most about because I do take privacy very seriously.

But I also think that as long as everyone else can access it, there's no reason the government shouldn't be able to as well.

Andrew Miller:

[26:00] You touched on a lot of very hot points there, and I don't know if we'll dive down some of those rabbit holes.

But I think you also brought up some really, really important parts that I think are applicable to the larger AI community as a whole.

And that's, we can't 100% rely on this data to be accurate.

I mean, it's still early and it's in its infancy. And even as it evolves, there has to be a human aspect built into it. I mean, you look at it and I know you talked about, microsoft's co-pilot so i mean that that's a good term to use here you can look at these tools as a co-pilot a way to like 10x your your efforts but you have to validate the information you have to make sure that it it doesn't you know get delusional visions on like some of the responses i mean we all know that early chat gpt and other similar you know language learning models would would kind of make up responses you know they they like I forgot the exact term there's there's a word that's used I don't know it's hallucinating yes it's like yeah hallucinating on their responses and so I mean that's something to seriously be you have to be considered yeah.

Hillary Coover:

[27:18] And I think on the development of those tools, I remember I gave a workshop to one customer and I was demonstrating, like, okay. And by the way, at the time, ChatGPT didn't have any sort of trail or citations of where it got its information. But Bard at the time did.

And, you know, it goes back and forth. I feel like it changes all the time.

And you have all of these different tools that offer attribution to the results.

And at the time, I was like, oh, and here's Bard, and they do it.

And it was within the 24 hours between me prepping for this workshop and then me giving it, there were no longer any citations.

And I was like, well, it looks like they changed that.

Let's go to a different one that does it, you know? And it was like, oh, that was fast.

That's something I've seen done really well in an on-prem capacity: to say, okay, here's where we got that information and how we did that.

And it's pretty easy to understand on a dashboard, right?

Andrew Miller:

[28:24] Going back a little bit to the ethical considerations, is there a specific ethical dilemma that you've gone through that you'd be open to sharing?

Hillary Coover:

[28:35] The bias that those tools come with inherently, based on what goes in. It's really hard for me to know at this point, in the context of the use cases I've dealt with, what the implications for bias are going to be.

And I haven't personally struggled with a situation where there hasn't been enough human control to moderate a dangerous decision being made.

So yeah, obviously, if we were dropping bombs based on AI-developed intelligence, that would be kind of terrifying.

But I mean, don't quote me.

And I'm not, you know, that's not my area of expertise by any stretch of the imagination.

But I think there's a lot more to it.

That's where I would start to be like, okay, that's kind of scary.

Andrew Miller:

[29:37] Definitely, definitely. And hopefully there's always that human aspect built into it to make sure that, you know, that's handled appropriately and goes all through the proper, you know, SOPs.

Hillary Coover:

[29:47] Good job, Microsoft, for getting that co-pilot name.

Andrew Miller:

[29:51] Maybe that's why they're so connected and invested in OpenAI, right? 45, what is it, 49% ownership in it? So it's great.

In regards to innovation and limitations, you know, we talked about a few companies and innovations that it's making in your space.

Would you say that there's any groundbreaking yet underutilized AI technology in the government contracting field?

Hillary Coover:

[30:17] I would say in bids and proposals, yes. That's absolutely the biggest gap, just operationally, that could save government contractors the most amount of money in the shortest amount of time. Point blank.

I mean, that's my area of expertise.

That's something that I see as a huge opportunity.

Outside of that, I'm probably not the best to answer in terms of adopting AI technology in the context of solving complex problems.

Because what I've seen so far has been really impressive and advanced.

I am not a data scientist.

Andrew Miller:

[30:59] Where would you say that AI maybe falls short in your field?

And how could they overcome those limitations?

Hillary Coover:

[31:06] As far as security goes, US government contractors are a target for cybercrime, for all sorts of attacks.

And so I can understand the hesitation to adopt new technology quickly because of that.

But given the inevitable, widespread adoption of AI by many, if not all, of their employees, I think there needs to be an industry-wide cultural shift to figure out how to embrace it, how to govern around the existence and inevitable use of this technology, and how to prevent abuse of it.

Andrew Miller:

[31:54] Yeah, absolutely. If we look out to the future now, right?

So we're looking at innovations and limitations right now, but looking into the future, what emerging trends are you most excited about?

Hillary Coover:

[32:07] I am most excited about the prospect of having a co-pilot on my machine or on my device, even my phone, where I can quickly ask a question and it'll pull insights from my information.

I mean, yes, I have that currently in a hodgepodge kind of manner, but once that becomes very streamlined and accessible, that's just going to change everything, because you're not just asking the internet questions.

You're saying, hey, from this library of, you know, vetted data, I want instant insights.

And so I know that that exists, but it's very expensive.

Once it becomes something that you can pay $30 a month for, I think that's going to be crazy.

It exists today and it does a great job, but it is expensive.

It's not $30 a month to do.

Andrew Miller:

[33:00] How do you see the relationship between AI and human expertise evolving?

Hillary Coover:

[33:06] Yeah, I think the term co-pilot sums it up. I see it as, you know, people are nervous.

Professors and academics are nervous about, you know, students using it to cheat.

But at the same time, as soon as spellcheck came out, we had the same conversations, whenever that was: oh, you know, the students are never going to learn how to spell.

And guess what? Maybe that was partially true.

But like, I see a similar parallel in those two things.

And it's not going anywhere, so the question is how you leverage it to empower the students they're worried about overusing it.

And I think the emergence of the internet was another huge thing.

I mean, it's not going anywhere, and it's only going to grow. And so figuring out how to stay safe on it and benefit from it is critical.

Andrew Miller:

[34:10] Yeah, I couldn't agree more. And we've seen those changes over time, right?

It's always the same: a new technology pops up and then, oh no, we're going to forget how to do it this way, or that's going to change everything for the negative, and we're not going to be able to learn anything anymore.

Your example of students, I think, is on point. I remember back in college going to the library, even though we had the internet.

You go to the library because you only have certain resources there, in those books, to respond to the essay you're working on.

So you sit there all night with 30 books on this huge table, flipping through: okay, that's a reference I can use, that's useful here, that's useful there.

And sure, it's a good learning experience to use the library search function and the Rolodex index, go find the book, and sit there for hours sifting through to find what's helpful.

But if you can expedite all of that with an AI, where you just type in, "Hey, I'm looking for this," and it gives you some of the sources, you're still learning the information. You're just reducing the time you spend hunting for that one paragraph, which...

Hillary Coover:

[35:28] Maybe makes you a little less invested in retaining the information. Because if you go through, like, eight hours of work trying to learn something that you can now learn in 15 minutes, you're probably going to be less motivated to keep it.

Because if I spend that much time learning something, I'm like, oh, I better not forget this ever.

It's going in that part of my brain that like, it's not going anywhere.

Andrew Miller:

[35:49] Hopefully. Unless you were just writing it the night before, not saying I was, for the essay that's due in a couple of hours, and you're not going to remember it after. But you know.

Hillary Coover:

[35:59] Never did that.

Andrew Miller:

[36:00] Never did that in my life. Oh man. Let's jump into some personal insights and maybe some lessons.

So what's a lesson you've learned in your career that you wish you knew earlier?

Hillary Coover:

[36:12] Probably the biggest thing, it's sort of a mantra I use: you are tougher than you think you are. You can do more than you think you can.

Reminding myself of that on a regular basis, I wish I had learned to do that long ago. And I wish I had known I was tougher than I thought I was a long time ago.

I would say that's probably the biggest. It's very broad, and I think you can apply it to anything, but when I'm faced with really difficult challenges, that's what helps me the most.

Andrew Miller:

[36:50] Yeah, I love that. And I guess it's the decisions we make during these trying times that build our character, right?

And we get stronger from them. Maybe not in that moment; maybe we fail in that moment. But you do learn from them, and ideally you don't repeat them.

And if it doesn't kill you... I think my dad said that, which is probably taken from a lot of other people, too. If it doesn't kill you, it makes you stronger, right? Everybody's heard that.

So hopefully life isn't at risk here, but you are moving on and you can grow from it. So I love that mantra. Yeah.

Hillary Coover:

[37:24] I also think that as a salesperson, I've always believed that if you're selling technology, you should be able to deeply understand the product, speak to it, pitch it, help implement it, and be a part of those processes so you know what you're selling.

I think that's a pretty large gap in what I've seen in the sales and marketing spaces. Absolutely.

Andrew Miller:

[37:53] So if we jump into AI-related learnings, I know you have vast knowledge around privacy and security.

For those listeners on here, are there any industry journals, books, or websites you'd recommend people check out?

Hillary Coover:

[38:18] I like to stay on top of what's coming out, and I don't make my security determinations based on this yet.

But there is a website and newsletter called There's An AI For That, at theresanaiforthat.com.

I'm sure you've heard of it.

And maybe our listeners have already heard of it. I like seeing capabilities that come out that have, you know, thousands of reviews on them.

So I feel like, okay, this many people are using it.

I can go through and extract insights from some of those reviews and say, okay, do I want to try this?

Is it, you know, what's the back end of it and how is it created and who's behind it?

You know, there are a lot of companies out there that have business record data, so if there's a company behind a specific AI capability that you as a consumer are interested in, you can go and ask: where is this entity based out of?

And sometimes it'll be, like, an Iranian entity. In that case, you might not want to use it. I mean, I don't know.

I think your average person doesn't really think about those things.

But I like to know who I'm giving my data to, even if I am careful in my prompts and in my use of it.

Andrew Miller:

[39:43] Well, and that's why, on the enterprise side, most companies go through something like SOC 2 compliance.

It shows clients they've gone through all the proper channels: that they're housing the data using secure methods, and that they've actually spent the time to train all their staff on what phishing, hacking, and all these things look like and how to handle them, so that it doesn't compromise our customers' data.

This is a question that I like to ask everybody, and it can come in different ways.

Some people are building a new business and have a moonshot idea for it; others may have a moonshot project within their current efforts, and they describe some of the things they're building there.

So if I asked you, Hillary, what's your moonshot AI project for the future, what would that be?

Hillary Coover:

[40:42] Moonshot project. I know there are some nonprofits and companies working on this, but: building something that informs consumers, just your average person, about who has their data, what it's being used for, and how much money these companies are making off of it.

I think that building out some kind of data-efficacy platform to inform individual users would be incredible.

I think we would see some very positive change, and, I mean, I don't know realistically if that will happen, given the fact that it's a multi-billion-dollar industry.

But I love the idea of being able to articulate the supply chain of my individual data: okay, I went on this website, they took these "necessary" cookies. Where did that go? Who got it? What's being used?

Which advertisers were able to take that and try to target me, and what information were they given?

So, you know, that would be pretty cool.
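[Editor's note: the per-site provenance trail Coover describes could be sketched as a simple data structure. Everything below, the `DataEvent` schema, the site names, and the advertiser names, is a hypothetical illustration, not an existing platform or API.]

```python
from dataclasses import dataclass, field

@dataclass
class DataEvent:
    """One observed hand-off of a user's data (hypothetical schema)."""
    site: str                                        # site the user visited
    cookies: list                                    # cookie names the site set
    shared_with: list = field(default_factory=list)  # advertisers the data went to

def advertisers_for(events, cookie_name):
    """Every advertiser that received data tied to a given cookie."""
    out = set()
    for e in events:
        if cookie_name in e.cookies:
            out.update(e.shared_with)
    return sorted(out)

# A toy trail of two site visits:
events = [
    DataEvent("news.example", ["_ga", "session"], ["ad-network-a"]),
    DataEvent("shop.example", ["_ga"], ["ad-network-a", "ad-network-b"]),
]
print(advertisers_for(events, "_ga"))  # ['ad-network-a', 'ad-network-b']
```

The hard part of such a platform would not be this bookkeeping but collecting the events in the first place, which is exactly the visibility gap being discussed.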

Andrew Miller:

[41:51] Yeah, that would bring so much transparency to those efforts and the behind-the-scenes work that's going on.

I know there are platforms and search engines like DuckDuckGo that have launched to keep your information private, so you do have that truly anonymous searching and browsing interaction.

From a marketer's standpoint, we're like, oh no, oh no, because now we can't give you that targeted ad we wanted to show you.

But being able to opt in or opt out, and being conscious about where your information is going, is, I think, one of the next steps in our digital evolution. If a company brought that to the forefront, there would definitely be a lot of challenges in getting it all connected, but it could be amazing. And it would solve a lot of the pain points of people saying, stop stealing my data, I don't know where my data is. This gives you that tracker, you know? Yeah.

Hillary Coover:

[42:50] Or you see that your data has been breached by a website that you never consented to, never gave it to, and you're like, what? Thanks. Thanks a lot, you know? And then to be able to backtrack and see: okay, where did they get that, when did they get it, and how? I think that would just be such a cool thing to illuminate for all users.

Andrew Miller:

[43:12] Oh, absolutely. I mean, I'm sure we have all received that email from, like, LifeLock or a credit company saying, hey, your email and password are accessible on the dark web. It's like, oh, which one? Okay, that one. In general, how did that happen? And that's all the information you get. It's like, great.

So I guess I just change my password, which hopefully you've been doing on a regular basis anyway.

But having something to be able to track it also brings a lot of accountability for the companies out there protecting us and taking our data.

It's like, where was the leaky funnel?

And I mean, that would make us all better on the business side: we can track this better, we're being held accountable.

So we hold ourselves to even higher standards, which I think is phenomenal.
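[Editor's note: the "is my password exposed?" check mentioned above can be done without sending the password anywhere, using the k-anonymity scheme popularized by Have I Been Pwned's Pwned Passwords API: hash locally, send only the first five hex characters of the SHA-1 hash, and match the returned suffixes on your own machine. The sketch below shows only the local half; the network call is omitted and the `sample` response body is made up.]

```python
import hashlib

def hash_parts(password):
    """SHA-1 the password locally; only the 5-char prefix would ever be sent."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix, range_response):
    """Scan the 'SUFFIX:COUNT' lines returned for a prefix; 0 means not found."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_parts("password123")
# A real check would GET https://api.pwnedpasswords.com/range/<prefix>;
# `sample` stands in for that response body here.
sample = suffix + ":42\n0018A45C4D1DEF81644B54AB7F969B88D65:1"
print(breach_count(suffix, sample))  # 42
```

Because the server only ever sees a five-character prefix shared by thousands of unrelated hashes, it cannot tell which password was actually being checked.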

Hillary Coover:

[44:04] Yeah. And marketers would innovate around it. Yes, there would be some pain points initially, but the technology would innovate quickly and find a way to target advertisements more ethically.

Andrew Miller:

[44:18] Absolutely. I mean, there are all these recent cookie changes, you know, on Facebook tracking, and on the Google side the Google Ads network has reduced what you can actually see. With all these changes that have happened, there are still ways for us to market to those individuals and bring them the right thing at the right time. Because if you take the Netflix experience, you want that personalization to some extent. So people will still opt in on certain things because they want that information.

I think it just brings more transparency: only show me what I really want to see, and make it relevant.

And that just makes us have to be better on the marketing side.

Hillary Coover:

[44:57] Yeah, I agree. A hundred percent.

Andrew Miller:

[44:59] Cool. It's been a pleasure having you here, Hillary. How can our listeners follow your work?

Hillary Coover:

[45:07] LinkedIn is the only social platform that I'm active on. But yes, LinkedIn, and 505updates.com is our podcast that we put out every day at 5:05 p.m. to say, hey, here's what's going on in the cybersecurity and open source space.

Andrew Miller:

[45:23] Perfect, perfect. Well, I have learned a lot in this conversation.

I have a lot of notes right here, sheets and sheets of notes that I've taken.

And I'll be sharing those in the show notes once this goes live.

But I really appreciate you taking the time to be on AI Unboxed. I'm looking forward to getting this out into the wild. Thank you so much again.

Hillary Coover:

[45:51] Thank you so much for having me.