Is AI the Future of IT in Healthcare?
In this engaging conversation, Wally Lee, Vice President at LGC Group, joins host Shaytoya on the Las Vegas IT Management Podcast to share insights into his role overseeing IT for healthcare diagnostics, the core values of LGC Group, and the impact of AI on cybersecurity. He discusses the challenges posed by AI in the hacking landscape, the importance of implementing robust AI policies in businesses, and how AI can both enhance and threaten cybersecurity defenses. Wally emphasizes the need for education and training in using AI responsibly while maintaining data privacy and security.
This episode of the Las Vegas IT Management Podcast is brought to you by K&B Communications. The views and opinions expressed on this podcast are those of the hosts and guests and do not necessarily reflect the official policy or position of K&B Communications.
Transcript
Hello, my name is Shaytoya with K&B Communications and the Las Vegas IT Management Podcast. And today I'm super, super excited to be speaking with Wally Lee. He is a vice president at LGC Group. How are you doing today, Wally?
Doing wonderfully today.

Good. I'm so glad to hear that. I'm super excited to get to know you a little bit better. Can you tell us a bit about your role as a vice president at LGC Group and how your journey led to this profession?
So I'm actually responsible for all IT for urinalysis and toxicology, from the infrastructure on up to the ERP system. Everything related to information flows in the business, the computers and the network, and also the cybersecurity and keeping it all secure, is under my responsibility, for two locations, one in Long Island, New York, and one in Garden Grove, California, and then some remote workers.
Got it. And did you say a VP for IT systems?

Exactly.

What is that?

So, VP for IT systems.

Got it. Information technology.

Yes. Traditional title.

No, you're fine. I just ask because sometimes our audience may not know exactly what that is. You know, we may have different terms being in the industry, and other people may not understand.
So it's basically anything that has information technology involved, right? It could be SCADA systems that control the lyophilizers; we have freeze dryers to freeze-dry our products, so there's a PC that runs the SCADA system. From that on up to the business system, which would be the financial system, and then the order-to-cash system, the shipping system, the warehouse system, all together. So, the ERP system. And anything ancillary that's related to those systems to allow the business to run. So operationally, we look at all the information that's going in and out, the communications, and look at that in terms of how we make sure that it's running correctly and as efficiently as possible.
Got it. That's pretty cool. And then I guess, how did you become the vice president at LGC Group?
Yeah. So basically I started out as an engineer way back when, close to 30 years ago. I got into supercomputing in the early-to-mid 90s, became an IT architect for a large enterprise of 10,000 people, five different groups, for a large defense company. From there I went on to various roles, became a CIO for a larger company, and then became a director for a 99-cent wholesale company that moved a lot of packs in and out, before coming into health manufacturing. So in the health and medical manufacturing area, I came in as a director and then moved up to vice president.
Got it. Thank you for sharing. And what are some of the core values and missions of LGC Group, and how does your role contribute to these?
Yeah, so the core value for us is to provide the quality controls for the healthcare industry, for the medical devices. For example, we provide urinalysis controls. We've been known for 45 years to have the best controls in the industry. And to maintain that quality, you have to have all the processes in place, and you have to ensure that you're meeting the ISO standards, so compliance for that, and the FDA standards, so that all of our controls are not deviating from what the standard should be. Quality is kind of the main goal of LGC in all of our products, because we are clinical diagnostics. Our products have to work so that when you go as a patient to the hospital and they do your lab work, for example blood, urine, or whatever it is, the machine is giving you the right information so that the doctors can do the right diagnostics.
Got it. And what are some ways that you've found to help with these systems? I know, especially being in the medical field, there are a lot of HIPAA regulations you have to follow.
Yeah. So HIPAA is more about patient privacy and the portability of the healthcare information that you have control of. That's HIPAA. We don't do HIPAA, because we don't know the patient. Ours is more diagnostic control, so it's separated from patient information. But our controls have to meet a certain quality standard, and that's driven by our ISO standards and our FDA standards. So it's a little bit different.
That's perfect. Got it. Understood. And then, you know, the big topic right now that we've been talking about: how has the rise of AI impacted the field of cybersecurity?
So it's become a lot faster in terms of the hackers getting access to your systems. You know, account takeover is still about 90% of the hacking that's going on, right? They take over an account that may be dormant or belong to an ex-employee. That's what happened with Colonial Pipeline. They take that over, and then all of a sudden they take over the OT, the operational technology that controls the valves and all that, and then they shut down your pipeline, for example. That's what happened with Colonial. The same thing happened with MGM, you know, local to Las Vegas. There was a flaw in the Okta single sign-on, but they took advantage of customer service by sending a "hey, my 2FA is not working, can you send an email to my Gmail account?" That's how they got to the account, and then they elevated themselves to admin, shut down everybody, and did ransom. Basically a big hit; they lost probably a hundred million dollars in the two weeks that they were down.
So because of AI, the large language model gives you the code. Now, I'm not saying the code is accurate; it does still hallucinate. But it makes the hackers a lot faster than before. And that's what you have to watch out for. The other thing you have to watch out for is quantum computing. There were Chinese researchers who actually broke RSA encryption at a lower key size with quantum computing. But imagine what they're doing on the military side, for military encryption. So there's a lot going on with quantum computing and AI that's increasing the speed at which the hackers can get to your systems.
Got it. And I know MGM has been a huge topic on our podcast; we've talked about that a few times. Cybersecurity is huge right now. So what specific cybersecurity challenges has AI introduced, you did mention some, and how can organizations address them?
Yeah. So, I mean, AI is going to be out there. People are going to start hacking with it. The entry level for hacking is very low, so anybody with a computer, or not even their own computer, they might go to a computer rental place in Pakistan or wherever they are. And you have the advanced persistent threats from nation states, like North Korea and China; they're going to come after you. So it's like a layered process for cybersecurity, right? It's the Swiss cheese model. You've got to try to plug up the holes and layer in defense. But with the advent of AI, these hackers will leverage it to get to the holes in your environment faster, right? So that's what we have to watch out for. It's evolving.
Is it where you think it's going to be, like Terminator? No. You know, we're not even at AGI, which is artificial general intelligence. That's not threatening. When you get to artificial superintelligence, ASI, that's when we've got to get concerned. But I think we're way far away from that at this point.
Got it. You know, on a podcast that I had last week, we were talking about that. And from what you're speaking about, I know he was saying we're probably looking at more of like...

Right, beyond that.

Beyond.
Because remember, it's about this: if the AI can understand the first principles of physics and then build something by itself, that's when we've got to be concerned. So if the AI is as smart as Elon Musk, that's when we've got to get concerned, right? Until they understand first principles and how to manipulate the physics of the environment, that's when you get superiority. It's not just language models, right? Right now, a language model is only as good as the training data that you have. And one thing you have to be concerned about is when you're using somebody else's language model that's out in public. Well, a hacker might have poisoned that. So how do you know that's safe for your environment? And how do you know that your training data is safe? Did anybody tamper with the training data for your AI? Those are the things you've got to think about when you're using AI and developing it for your own business, you know.
Got it. And so how can a business protect themselves from these poisoned training models?

Yeah.
So you've got to have an AI policy, right? That's the first thing. First of all, you've got to figure out your crown jewels. Okay, I have this business and I'm making money. I'm making money because I do something really well, right? So what is that crown jewel of information that you need to keep protected? One policy is that you can't use ChatGPT and put company-sensitive information into it, where it could end up in public. That's kind of the first policy. The second policy you have to have is, if you're building AI, you have to have a safety rule in there. The AI cannot make safety decisions for the company. It cannot affect the health of any individual in the company, or the customer. And that has to be a policy if you're going to implement AI in your company. So those are the things you have to consider: what are the policies that you can implement as a company to protect not only your employees, but your customers? That has to be very top of the line. It's not just using AI and how to use it and all that kind of stuff; it's, what are the policies to make sure that it's being used safely, and also protecting the company information that's proprietary?
Got it. And could you also explain how AI is used to improve cybersecurity defenses?
Well, AI has been around since Alan Turing defined it, right? Since the 40s. And then in the 90s, remember, they had this thing called expert systems: if you had a very specific domain, the software would actually use a neural network to give you solutions, right? For healthcare, whatever it is. Remember Deep Blue, the IBM supercomputer that beat the world chess champion? That was very much an expert system in the specific domain of chess, right? So we've had this for a long time. AI has been there. Neural networks have been there a long time. Machine learning has been there a long time. So the hype is way ahead of where the technology is today. But, you know, when you have a lot of hype, what happens is people imagine a movie kind of scenario, with Terminators, for example.
But it is a reality. The reason the large language model became so prevalent is that the price of GPUs dropped, so it became much cheaper to have all these GPUs that you could use to create the large language models. And you're going to have to have that continue to drop and have it be affordable; you're still talking billions of dollars of GPUs to make OpenAI work, right? So the next AI is going to be more compute-intensive, like what Tesla is doing with Dojo, for example. As we get further with these AI resources, it's going to get better and better as we go. Is it where it needs to be? No. I'm not too worried about it killing us. Well, it could, I mean; right now the Ukrainians are developing new weapons systems, and who's to say they're not using a neural network and putting a machine gun on a robot dog, like a Boston Dynamics dog, and sending it out to kill people, right? Those kinds of ethical questions will be in place coming soon. But for where we are in general, I don't think there should be a large concern. It's that you've got to be careful of how you use AI, or how you're going to implement AI in your local environment.
Yeah. And it does give an advantage to the bad people, right? They're going to use it to get what they want. But also on the good side, we're using AI today, right? We have a lot of things monitoring our network that used to be done by humans. The typical thing a company would have is a SOC, a security operations center. You would have people maintaining that with a logging system, like Splunk, that brings in all the log data to see if there was a hacker in your environment. Well, now you have a lot of AI companies that offer that service without having the human there, right? So you can take advantage of that. There's a lot of endpoint protection that's AI-based. You have, of course, CrowdStrike; everybody knows CrowdStrike because we had that blue screen of death for a while, but they're one of those that does the endpoint. Darktrace, for example, is AI-based. So a lot of the vendors are AI-based and providing that kind of service.
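The kind of log review a SOC automates can be boiled down to a small example. One signal Wally mentions is a dormant account suddenly logging in; a minimal sketch of flagging that pattern, where the field names and the 90-day threshold are illustrative assumptions:

```python
from datetime import datetime, timedelta

DORMANT_AFTER = timedelta(days=90)  # illustrative threshold for "dormant"

def flag_dormant_logins(events):
    """Flag logins on accounts that had been idle for a long time.

    `events` is a time-ordered list of dicts with "user" and "time" keys,
    the kind of record a Splunk-style log pipeline would collect.
    A long-idle account suddenly logging in matches the dormant-account
    takeover pattern described above.
    """
    last_seen = {}
    alerts = []
    for ev in events:
        user, t = ev["user"], ev["time"]
        prev = last_seen.get(user)
        if prev is not None and t - prev > DORMANT_AFTER:
            alerts.append((user, t))
        last_seen[user] = t
    return alerts

events = [
    {"user": "alice", "time": datetime(2024, 1, 1)},
    {"user": "alice", "time": datetime(2024, 1, 2)},
    {"user": "ex_employee", "time": datetime(2023, 6, 1)},
    {"user": "ex_employee", "time": datetime(2024, 5, 1)},  # ~11 months idle
]
print([user for user, _ in flag_dormant_logins(events)])  # ['ex_employee']
```

Commercial AI-based tools learn these baselines statistically rather than from a fixed threshold, but the underlying idea, comparing each event against an account's history, is the same.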
It's kind of, you know, remember the Spy vs. Spy cartoons? It's kind of like that. They're going to use AI, and you're going to use AI to protect yourself. The hackers are going to use it, and we're going to use it to counter it. It's kind of a global war, in a sense, that just keeps moving forward as the technology advances. So you've just got to keep up with it. People have to keep up with it.
:Got it.
218
:And how does someone keep up with it?
219
:You gotta either get an advisory service that will help you or somebody that's in the
field that are focused on that, or even in small businesses, they have to take advantage
220
:of the lower cost AI tools that are available to them to protect their environment.
221
:But you do have to have somebody that knows what they're doing to help you.
222
:do that, especially if you're really small.
223
:most small business that I'm looking at is mostly 20 million and up mid-sized companies.
224
:They do need to invest in counter AI technologies to prevent hacking coming.
225
:Plus you have to do your usual stuff, pen testing and all that to make sure that you're up
to snuff.
That doesn't mean the hacker can't get in, right?

Right.

They can take over somebody's Gmail account, ask for a 2FA reset, counter the 2FA, get the account information, and then log in and go wherever they want to go. Or they can use deepfakes: a voice deepfake, a video deepfake, to pretend to be the CEO and wire money out. So there's a lot of things that they can do. And it's more education. The two things you have to do are not only counter with AI, but also educate your employees about what's happening, what is real, right? And then you have procedures in place to make sure that you're not sending a $500,000 wire to some guy who's flying across to Japan and pretending to be the CEO.

Right. That's very important.
And then, from a cybersecurity standpoint, how can AI tools balance innovation with data privacy and security concerns?
It can balance, or it cannot balance; it depends on how you use it. It's always like this. I used to be in defense, so we would have, you know, weapon systems, and it would be the operator; it's how you train the operator that uses the weapon system that makes that weapon system effective, for example. So for AI and everything we're talking about, it's how the employees are using it, and how they're trained to use it. If we can have a procedure of training people to be familiar with AI, what it can do, what it cannot do. It's a whole process of training and education for the operators, right? AI is just a technology. It's just something that's coming, and it's kind of prevalent now. Everybody's using ChatGPT. For example, I use ChatGPT to develop some of my coding, but it's not always correct. So how do you know whether the information it's giving you is correct or not, right? You have to have somebody that's experienced to know that it's not correct, and then you have to correct it. You can't just take it as given. I think there was a case where a lawyer cited ChatGPT in a case, and ChatGPT gave him a false citation, and the judge fined them $5,000 and threw out the case. You know, that is happening at this point. So you can't trust what it is today.
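Wally's point about not trusting generated code has a direct practical form: wrap any AI-suggested function in human-written checks before relying on it. A minimal sketch; the suggested function and the test cases are illustrative, not from the conversation:

```python
# Suppose a chatbot suggested this helper. Before trusting it,
# exercise it against cases whose answers you already know.
def is_strong_password(pw: str) -> bool:
    """Chatbot-suggested check (illustrative): length plus mixed classes."""
    return (
        len(pw) >= 12
        and any(c.islower() for c in pw)
        and any(c.isupper() for c in pw)
        and any(c.isdigit() for c in pw)
    )

# Human-written cases encode what *we* require, independent of the bot.
cases = {
    "Tr0ubadour&Friend": True,   # long, mixed case, has a digit
    "alllowercase123": False,    # no uppercase letter
    "Short1a": False,            # too short
}
for pw, expected in cases.items():
    assert is_strong_password(pw) == expected, pw
print("all checks passed")
```

If the generated code had hallucinated the logic, one of the assertions would fail, which is exactly the experienced-human review step Wally describes, made mechanical.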
Got it. And so just the importance of using ChatGPT, or whatever platform you are using, more as a starting point?

Just as a starting point, as a tool, but with the caveat that the human needs to make the decisions and to make sure that it's correct. You can't just trust it 100% at this point.
Awesome. Well, thank you. As I said, I really enjoyed today, and I'm excited for our listeners to have an opportunity to hear from you. Thank you so much, Wally.

Thank you very much.

Bye.