Episode 35

Published on:

20th May 2025

Is AI Your Business Partner or a Risk?

How Is AI Reshaping the Future of Business Strategy?

In this episode of the Las Vegas IT Podcast, Arnaud shares his journey from tech enthusiast to tech leader, unpacking the transformative role of generative AI across industries—and what it means for the future of business.


What to Expect in This Episode:


🚀 The Rise of Generative AI – Explore how emerging AI tools are evolving and what businesses must understand about their capabilities and limitations.

🧠 AI as a Strategic Tool – Discover why AI should serve your goals—not define them—and how to avoid the trap of treating it as a silver bullet.


⚠️ Risks of AI Hallucinations – Learn about the dangers of inaccurate AI outputs and the importance of verifying results with human oversight.


⚙️ Smart Integration Strategies – Arnaud shares how to thoughtfully implement AI into business processes while managing innovation and risk.


📚 Leadership in the AI Era – Get Arnaud’s take on what it takes to lead effectively in a tech-driven world, including mindset shifts and resources that shaped his approach.


If you're navigating AI adoption in your career or company, this episode is packed with actionable insight, hard-won lessons, and a clear-eyed look at what’s next.


Let’s Connect with K&B Communications!

If you enjoyed this episode, let’s keep the conversation going:


📱 Follow us on social media for updates, insights, and behind-the-scenes content:

Facebook – https://www.facebook.com/kandbcom

Instagram – https://www.instagram.com/kbcomm/

LinkedIn – K&B Communications

▶️ Subscribe to our YouTube channel for exclusive video highlights and engaging content:

https://www.youtube.com/@kbcommunications


Are you in need of professional data cabling or low-voltage solutions?


At K&B Communications, we specialize in network infrastructure, fiber optics, security systems, and more.

📞 Schedule your consultation today:

https://kandbcom.com/schedule-las-vegas-commercial-lowvoltage-consultation/


Together, we can build the future of technology.


The views and opinions expressed in this episode are those of the guest and host and do not constitute professional, legal, or financial advice. Listeners are encouraged to consult appropriate experts before making business decisions based on the content discussed.

Transcript
When I started my career in the US, the big thing was Y2K, preventing the break of the year 2000 rollover. And so obviously now that's not a concern anymore. The topic of the day has been more gen AI for the last couple of years, which is very exciting too, a very different tool. But ultimately, one thing with technology that's consistent is that it always changes. You have to keep in touch with what's happening there and, more importantly, how it can help you.

Welcome to the Las Vegas IT Podcast. Today I have the pleasure of speaking to Arnaud. I am super excited to get to know him a little bit better. Can you just share with us a little bit about who you are?

Hello, my name is Arnaud. I'm a technical leader; I've been doing technology leadership now for many years. As you can hear from my accent, I'm originally French, but I've been in the US now for about 27 years. The first eight years were in Texas doing startups, and then I moved to the Boston area, which is where I still am. So I've been here for a while. In Boston I've worked for different companies: R&M, TripAdvisor, Wayfair, small startups. And now I'm at Cambridge Mobile Telematics. So that's me.

Awesome. And how long have you been at the new company now?

Very recently.

No, that's pretty exciting. And it sounds like you've worked for some other very familiar companies. I believe you mentioned Wayfair, companies that we all hear about online. So, which is awesome.

I believe you said you've been in IT leadership now for about 25 years. Did I hear that correctly?

Uhhh... 27? I don't count anymore, I guess.

You said 27 years, and I'm sure within the last 27 years there have been a lot of changes when it comes to technology.

Yes, that's right. There are always changes.

Would you say your career has changed within the last 27 years? Like, what you're currently doing has changed?

Of course, but I would argue it has changed all the time, and that's by choice, I would say. I've always wanted to take on new challenges, technology-wise, organization-wise, leadership-wise, and the reality is that it keeps on changing. I remember when I started my career in the US, the big thing was Y2K, preventing the break of the year 2000 rollover. And so obviously now that's not a concern anymore. The topic of the day has been more gen AI for the last couple of years, which is very exciting too, a very different tool. But ultimately, one thing with technology that's consistent is that it always changes, and you have to keep in touch with what's happening there and, more importantly, how it can help you. For me, technology is a tool. So what's important is to have a good handle on what you want to do as a business, what you're trying to achieve, and then use technology to help with that. Not the other way around.

Understood.

And I believe you mentioned AI, and AI has been a huge topic within this podcast. A lot of people are using AI, and I feel like AI has been out for a while, but people are really starting to know about it now. Can you just share with us a little bit about how you're involved with AI?

So it's funny, because AI is not a new thing. I remember my minor in college was actually in AI, and that was a long time ago. So that tells you. But obviously, AI has changed a lot, and that's because when people talk about AI nowadays, they talk about generative AI, mostly based on LLMs, large language models. And obviously that's very different from the AI I did back in college, which was mostly Prolog and Lisp, effectively, and some other ways to have artificial intelligence. So it's a very different thing now.

I do believe that what we're doing now with generative AI didn't come out of the blue. It's based on things we have been doing, for example, in machine learning and deep learning models, and it's kind of an extension of that. Now we take that and put it on steroids. We're not just talking about looking across 20 attributes; now we use billions of attributes to look at the problems we want to solve. And that's very different in terms of scale. I think the big difference between generative AI and what we used to do before is the scale of the input and the scale of the attributes the models are looking at to make predictions, effectively. That's a big change. It changed the name of the game, because machine learning before, and that's what I've been using in my career, was about making recommendations and personalizations in different ways to our users. When you're in e-commerce: you like these shoes, therefore you're going to like these other shoes, because based on certain attributes like size, style, and other things, we believe you'll like these other things as well. But now with gen AI, because it takes in so much input, it's not just a thing for companies that need to develop specific recommendation and personalization engines. It's something that everybody wants to use, and it can be useful to them in one way or another. I use gen AI like everybody else. I use it in my day-to-day work for different purposes and also in my personal life. So I guess I'm very much a user of the technology, in different ways.

Got it.

Now, that totally makes sense. And you mentioned that you're currently using it as a tool. What are some of the current tools you're using when it comes to AI?

So it depends for what. That's something you discover once you start playing with it. Even though a lot of the gen AIs that are exposed today are meant to be general models, not built to do one thing in particular but to answer all kinds of things and do all kinds of things, even in that context, different models are better than others at different things. For example, for coding, which is one thing I do, even code reviews, I've found that a general AI like Claude, which is Anthropic's LLM, is much better than competitors at that. And for things related to text, dealing with text in different ways, like summarizing a document, I feel like ChatGPT usually does a better job than competitors. That's just my personal experience, right? And then you have specialized LLMs that are very targeted to special needs. For example, for code reviews, there are specialized models that focus on trying to find bugs in your code. So those exist as well. Same thing for image generation: the generic tools can do it, but there are also tools that only do image generation. Same thing for sound, with the ability to generate music or generate voice. Those are specific models as well; ChatGPT by default won't be able to generate a song for you, for example, but there are models that can do that.

You know, it is fun. I mean, there are a lot of different things currently coming out. And one of the questions I have here is about AI hallucinations: what are they, and why are they a concern when it comes to AI or IT environments?

Yeah, I mean, the problem with hallucinations is that, at the end of the day, they're part of the way AI works. What I mean by that is that with an LLM, a large language model, its whole purpose is to predict what should come next. You give it a prompt, you say, tell me something about this, and the point of the model is to determine what word should come after that, what tokens should line up, to effectively give you an answer. It's like those game shows where they play a bit of a song and you have to come up with the lyrics for the rest of it. It's the same kind of mechanism. As a result, LLMs have been trained to want to give you what you want. But there's no notion of truth with LLMs. They don't understand truth; they don't understand reality. What they do know is that they've been fed a bunch of data, mostly from the internet, and based on that, they're trained to figure out, OK, based on this prompt, what should line up as an answer. And because of that model, they're very bad at giving you truth. If you want factual information, for example, don't use generative AI for that; it's going to be very bad at it. So factual hallucination is a real thing. Sometimes you get it right, but sometimes they get confused very easily, especially if you ask something like, when was this company established? In my experience, they often get it wrong; they pick a date from somewhere else that has nothing to do with when the company was actually established. So that's one type of hallucination. There are also stylistic hallucinations: how to deliver code in a certain format or a certain style, for example, which can be difficult for anybody, especially for gen AI. And there are more contextual situations, when the model generates output that conflicts with the instructions or the context you're giving it. It ignores part of what you're asking for, sometimes for good reason, meaning that what you're asking for isn't true or doesn't match your input data. But even there it's problematic, like when you try to summarize a text and the model invents things that were not even mentioned in the text, for example.

So all these kinds of hallucinations exist, and they can have big consequences, especially as you let AI do more of the work on its own. For example, a user of your website could say, I bought a ticket from your website, now I want a refund. And depending on how the user asks, the AI could be very susceptible to saying, yeah, I'll give you a refund. Right? Even if maybe that's a scam, a fraud the user is trying to pull, and the AI has not been taught properly how to deal with fraud in the first place. These things can happen to anybody, not just AI; even a customer service person can be fooled by somebody trying to commit fraud, effectively. So you have to have a default for that.

Same thing, there is a problem with the input data these massive AIs have been trained on: it can come from anywhere, and you don't know exactly where it comes from either. It can come from, for example, copyrighted data, data that's supposed to be protected. The problem is that you ask gen AI a question, gen AI gives you an answer, and you say, OK, I can use that answer. Maybe now you're using copyrighted information that you should not have access to, and potentially you could be sued for copyright infringement, for example. So there's all this legal ambiguity about information you get from AI and use for your business that you have to be worried about.

So there are all these different kinds of cases. And then my favorite case is actually none of those. It's the fact that, what I've discovered when people use AI, is that they're kind of missing a bit of critical thinking, meaning they're not critical of the answers they get from gen AI. That can cause problems, because gen AI, like I said, has no intention necessarily to give you the truth, and now you take that information and pass it along as if it were the truth. That's a problem. So for example, my daughter asked me to solve a seventh grade math problem. I said, OK, I'll solve it. But then she put the answer into the online tool she had to enter the answer in, and the tool said, oh, the answer is wrong. I was pretty sure it was right, but I said, let me ask the AI what it thinks. And what was funny about the AI is that it gave the right step-by-step instructions for solving the problem, but the result it gave was wrong. The actual calculation was wrong, because AI doesn't calculate; it doesn't do math. So all the steps were correct, but the calculation was wrong. And my double check: maybe I'm wrong, I'm going to double check. No, it's the wrong number. Then my daughter asked her friends, what is the right answer here? The tool keeps telling me I'm wrong. And one of her friends said, oh no, no, this is the right answer, and it was the answer she got from ChatGPT as well. I don't know where that answer came from; I know it's wrong as well, because I did my own check and could see it's wrong. But that friend really believed it was the right answer. I think for me the biggest danger here is taking whatever the AI gives us at face value, saying, yeah, this is the right answer, and not thinking critically. That's where the danger is.

I agree with that. I have experiences of my own when it comes to using AI, using it more as a tool rather than just assuming it's right, which I think is probably where a lot of people make their mistake. What strategies or frameworks have you found effective in mitigating these hallucinations, especially for IT teams relying on AI-assisted systems?

So there are multiple strategies that exist, lots of them. But ultimately, for me, there are lots of things you can do on your own without having to do any coding or anything specific. A few years ago there was this job title that got very famous, prompt engineer, which has nothing to do with engineering. But the point is that there are specific ways you can ask AI for help, and if you use some of those ways, you're going to get better results than if you just ask a question. A lot of that is providing the right context. The more context you provide, the more explicit you are about what you want, the better the AI is able to help you. There are multiple ways to do that. One way is called few-shot learning, where you say, I want you to do this, and if I ask this question, this is the answer I expect; if I ask that question, this is the kind of response I expect. Then when you ask your actual question, the model knows, OK, these are the kinds of things you expect as answers, so I'm going to format my answers to be in line with the Q&A you already gave me. So that's one trick you can use.
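To make the few-shot idea Arnaud describes concrete, here is a minimal sketch of assembling a few-shot prompt as plain text before sending it to any model. The example Q&A pairs and function names are purely illustrative, not any vendor's API.

```python
# Minimal sketch of few-shot prompting: lead the prompt with example Q&A
# pairs so the model mimics their style. All example content is made up.

def build_few_shot_prompt(examples, question):
    """Assemble a prompt that opens with example Q&A pairs."""
    parts = ["Answer in the same style as these examples.", ""]
    for q, a in examples:
        parts.append(f"Q: {q}")
        parts.append(f"A: {a}")
        parts.append("")
    parts.append(f"Q: {question}")
    parts.append("A:")   # the model continues from here
    return "\n".join(parts)

examples = [
    ("Is port 443 open by default for HTTPS?", "Yes. HTTPS uses TCP port 443."),
    ("Does DNS use port 53?", "Yes. DNS uses UDP/TCP port 53."),
]
print(build_few_shot_prompt(examples, "Does SSH use port 22?"))
```

Because the prompt ends with a bare "A:", the model's next-token prediction naturally continues in the terse yes/no style the examples established.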

There are other tricks, for example promoting chain-of-thought reasoning. And that's actually very easy: you just have to add to your question, step by step. Say you ask, tell me, for example, what is the best address to send this email to for processing. Most likely, especially if it's fraud or spam, the AI will just say, oh yeah, just send that to the CEO, which obviously is a problem. But if you say, and tell me step by step, it kind of forces the AI to rethink, to decompose the problem into multiple steps and find a solution for each of the steps. Usually you get a much better answer. The answer becomes much more verbose, but you're going to get a better quality answer. So that's another technique you can use, which is very effective.
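As Arnaud says, the chain-of-thought trigger is literally a suffix appended to the prompt. A tiny helper (the function name and wording are my own, not a standard API) might look like:

```python
# Chain-of-thought prompting: appending an explicit "step by step" request
# nudges the model to decompose the problem before committing to an answer.

COT_SUFFIX = "\n\nThink through this step by step, then give your final answer."

def with_chain_of_thought(prompt: str) -> str:
    """Return the prompt with a step-by-step instruction appended."""
    return prompt.rstrip() + COT_SUFFIX

print(with_chain_of_thought("What is the best address to forward this email to?"))
```

The trade-off he mentions holds: responses get longer, so expect more tokens per answer in exchange for the quality gain.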

Another aspect is to better structure your prompt. You can say, I want the answer to be exactly in this format, using this template. Forcing the AI to answer in a certain way doesn't just format the output; because the whole mechanism is predicting which words should come next, it pushes the LLM to think differently so it can fit the answer into the template. The template makes it think about the problem differently, effectively. So that's another one.

And then obviously you can do other things. You can leverage different models, since some models are better at some things than others. You can do something that IT teams have been using for a while now, retrieval-augmented generation, or RAG, which is teaching the AI with your own data on top of the general corpus of input data. Based on that, it can draw not only on everything it has been trained on but also on your own company data, and give you better answers that fit your company. That's another way to do it. Fine-tuning works in a similar spirit.
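The RAG pattern Arnaud outlines can be sketched in a few lines: retrieve the company documents most relevant to the question and prepend them as context. Real systems rank documents with embedding vectors; plain word overlap stands in for that here, and all document text is made up for illustration.

```python
# Toy sketch of retrieval-augmented generation (RAG): find the company
# documents that best match the question and prepend them to the prompt.

def score(doc: str, query: str) -> int:
    """Count lowercase words shared between a document and the query."""
    return len(set(doc.lower().split()) & set(query.lower().split()))

def retrieve(docs, query, k=2):
    """Return the k documents with the highest overlap score."""
    return sorted(docs, key=lambda d: score(d, query), reverse=True)[:k]

def build_rag_prompt(docs, query):
    """Prepend the retrieved documents as grounding context."""
    context = "\n".join(retrieve(docs, query))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refund policy: tickets are refundable within 24 hours of purchase.",
    "Office hours: support is available 9am to 5pm Pacific.",
    "Cabling jobs require a site survey before any quote is issued.",
]
print(build_rag_prompt(docs, "Can I get a refund for my ticket?"))
```

Instructing the model to use "only this context" is what ties its answer to your own data rather than whatever it absorbed from the open internet.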

Then you can do even more complex things like model routing, which builds on the fact that different models are better at doing different things. The idea is to identify, for a given query, the best model to use, and then use different models in order to actually solve that query. That potentially means going step by step: the first step is to do this, the second step is to do that; this model is better at the first step than the others, so run the first step there and get the answer; the second step suits this other model better, so bring the answer there, and so on. And I think that's where we're moving from a general perspective: this notion of agentic AI, AI being its own agent. A lot of that is how to chain agents together, each agent using specific models that are better for certain actions, chained together to ultimately deliver the bigger thing you're trying to accomplish. And that's honestly the way I use it as well.
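The routing idea can be sketched as a dispatcher: classify the query, then hand it to whichever model handles that kind of task best. The keyword classifier and model names below are placeholders, not real model identifiers.

```python
# Sketch of model routing: classify a query, then dispatch it to the
# model best suited for that task. Model names are invented placeholders.

ROUTES = {
    "code":    "code-specialist-model",
    "image":   "image-generation-model",
    "default": "general-text-model",
}

def classify(query: str) -> str:
    """Crude keyword-based task classifier (a real router would use a model)."""
    q = query.lower()
    if any(w in q for w in ("bug", "function", "code", "review")):
        return "code"
    if any(w in q for w in ("draw", "image", "picture", "logo")):
        return "image"
    return "default"

def route(query: str) -> str:
    """Return the name of the model that should handle this query."""
    return ROUTES[classify(query)]

print(route("Review this function for bugs"))      # code-specialist-model
print(route("Summarize this meeting transcript"))  # general-text-model
```

An agentic pipeline is this same dispatch applied per step: decompose the task, route each step, and feed each answer forward to the next.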

I think it's important that you come prepared with some type of direction you're looking to go in, and not just follow exactly what AI has laid out for you. I think that's very important.

Yeah.

And then when it comes to balance, how do you currently balance innovation with risk when integrating AI with business-critical systems?

For me: don't necessarily trust the response of gen AI. There are ways to make the responses much more precise and to limit hallucinations. For example, we know it's garbage in, garbage out: if the input you train the AI on, or the input in the prompt you're giving it, is better, more sanitized, higher quality, you're going to get better responses. That's a given. But ultimately, especially with the current models, because we train them on so much data that some of it is wrong, it's very difficult to control the input and therefore the result.

So what I would promote typically, for now at least, because things are evolving very fast, is to use a model called HITL: human in the loop. What that means is using AI as a tool that complements humans, not as a way to replace humans. You and I can use gen AI to help us with our work, right? It can summarize a discussion, for example; it can generate slides; it can do all these things for you. But you are the one that checks on it: oh yeah, that looks fine; no, there is something wrong here, and you can fix it. That's what HITL is all about: keeping the human in the loop to validate what the AI is doing. And I think we are still at the stage where I would suggest we keep doing that, versus letting gen AI do things on its own that you don't control. I think that becomes very dangerous with the current technology. Later on, maybe we'll get to a point where you can ask it to do X, Y, Z for you and it's no big deal. For now, I would not trust it even for internal purposes. So we really keep a human looking at what the AI is doing, and I think that's the most important thing: keeping the human.
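The human-in-the-loop gate Arnaud recommends can be sketched as a simple approval checkpoint between the AI's draft and anything customer-facing. The draft function here is a stand-in for a real model call, and the reviewer rule is invented for illustration.

```python
# Minimal human-in-the-loop (HITL) gate: the AI drafts, but a human
# reviewer must sign off before anything is released.

def ai_draft_reply(request: str) -> str:
    """Stand-in for a model call that drafts a customer reply."""
    return f"Draft reply to: {request}"

def hitl_send(request: str, human_approves) -> str:
    """Release the AI draft only if the human reviewer approves it."""
    draft = ai_draft_reply(request)
    if human_approves(draft):
        return f"SENT: {draft}"
    return "HELD for human rewrite"

# Example reviewer policy: anything mentioning a refund gets held,
# echoing the refund-fraud scenario discussed earlier.
reviewer = lambda draft: "refund" not in draft.lower()

print(hitl_send("Please reset my password", reviewer))
print(hitl_send("I want a refund now", reviewer))
```

The point is structural: the AI never holds the "send" authority itself, so accountability stays with the person who approved the output.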

No matter what you're doing, making sure that you understand that being human is the most important thing. That's something that AI does not currently offer.

Well, that's a good point as well. And it's not just because of hallucinations, right? If something goes wrong, who should take responsibility? Are you going to blame the AI for that? No, it's not a person, it's not an entity, right? So ultimately, you're going to be blamed for it. So you have to be careful about that. The second thing I've seen is that even when you have customers you're trying to sell products to, if you drop that human touch, if you just let the AI generate everything for you, at some point you're going to lose that connection with your customer. Customers will not be in touch with what you're doing anymore. As a company, I think that's a very dangerous place to be. You want to keep that human contact, that human trust, and gen AI at this point is not able to do that for you. Taking the human out of how your customers talk to you is going to be very dangerous; you lose the trust they have in your brand, in your company.

It's true. And one of the things you did mention when you filled out your form was leading like a gardener. Can you explain what that leadership style means to you?

Yeah, that's super important for me, not just because I like gardening, but because for me it's a story of growing and adapting. When you think about a garden, I think of a garden as an organization. I have five different raised beds, and in each raised bed I grow a different type of plant, and each type of plant has different needs. And yet you have similar concerns as in an organization: you want to create these different teams, you want each team to be successful, you want them to have different functions so that the team can own things, just like plants. You want to let the team do their job, but you don't want to be micromanaging them; just like plants, you're not going to be watching them all the time. Why not? You let them grow. But you want to be watchful. For example, if a plant has a disease, it has to be treated very closely, very quickly, because otherwise it spreads to all the plants of the same type and everything will die, effectively. You don't want that either. So you want to be very careful about treating problems as soon as they happen, and it's the same thing in an organization. So you have all these things, from an individual standpoint, from a team standpoint, from an organization standpoint, about how you want to be a leader, and I believe there's a lot of parallel with how I want to be a gardener as well, how I want to be tending my garden.

Yeah, and I totally understand everything that you mentioned. I think as leaders it's something that we have to learn. When it comes to the mindset, how do you show up in your day-to-day approach with technical teams?

Using that same mindset of growing and adapting, I would argue that when I hire people, it's not to tell them what to do. Software engineers are very expensive, so you don't want that; you pay them to think. What that means is that you want to delegate and give them proper ownership to effectively get them to think. You give them the right context, you give them something they can own fully and really think about fully, in a way that lets them come up with the right answers. So that's what I always say: you want to ask questions, not just give answers. However, I'm sorry, but I'm getting kicked out.

Oh, is it time for you to go?

Okay, no worries. Is there anything you would like to add before we go?

No, I think that was a great discussion. Like I said, I really want to promote that growing and adapting mindset, right? Not just learning, but enabling others and myself to grow and adapt to different companies and needs. So that's how I do it.

And if someone's looking to reach out to you, how do they do so?

LinkedIn is the best.

I will be sure to put that in the description. Today was a pleasure. Thank you so much. Have a good day.

Thank you.


About the Podcast

The Las Vegas IT
Weekly Insights from IT Experts
Welcome to the Las Vegas IT Podcast, hosted by K&B Communications with our host Shaytoya Marie. Your go-to source for weekly insights and expert advice from top IT professionals in Las Vegas. Each week, we delve into the dynamic world of information technology, exploring the latest trends, challenges, and innovations shaping the industry. Join us as we interview seasoned IT experts who share their knowledge, experiences, and practical tips to help you stay ahead in the ever-evolving IT landscape. Whether you're an IT professional, business owner, or tech enthusiast, our podcast offers valuable perspectives and actionable insights to enhance your understanding and success in the IT world.

About your host


Shaytoya Marie

Shaytoya Marie, the host of the Las Vegas IT Management Podcast, has been with K&B Communications for almost 10 years. Throughout her time with the company, she has taken on many roles, including sales, marketing, accounting, and recruiting. Shaytoya’s hard work behind the scenes has been essential to the company's success.

Inspired by her diverse experience and dedication, Shaytoya started the Las Vegas IT Management Podcast to share valuable IT insights and connect with local experts. Her passion for technology and helping businesses thrive makes her the perfect host to bring you expert advice and practical tips each week. Tune in to learn from Shaytoya and her network of top IT professionals in the Las Vegas valley.