Many of us have done it: opened up ChatGPT to get a quick summary of a complex issue or to whip up an email in seconds. But are generative AI platforms like ChatGPT or Gemini safe enough from a cybersecurity perspective to be used in the workplace?
Cybersecurity experts say these AI tools can be used with reduced risk to a business, so long as the appropriate precautions are taken.
“From a pure security standpoint, every AI model that exists has some risk associated with it,” said Walter Schilling Jr., Ph.D., a computer science and software engineering professor at the Milwaukee School of Engineering.
When you put information into a generative AI platform, that information can be incorporated into the system’s training data. This means there is always the potential for a bad actor to access that information with the right prompting, Schilling explained.
Smaller companies might be at a disadvantage when implementing AI, as they likely don’t have the same privacy policies in place as larger corporations. That means small companies need to start from square one, rolling out strong policies before training employees on the responsible use of AI platforms.
“From the standpoint of cybersecurity training, a lot of what is needed are the good corporate governance pieces on what you do with your data,” said Schilling.
Another risk associated with generative AI platforms is the lack of transparency and information when these tools are breached. For example, the Chinese platform DeepSeek made headlines in early 2025 after it suffered a significant data leak that exposed more than one million lines of sensitive data, including user chat histories. The exposure was first discovered and reported by the New York-based cybersecurity firm Wiz.
“With these generative AI tools, not much is shared when attacks are happening when it comes to what information has been leaked,” said Schilling.
Even with the possibility of a breach, generative AI remains an important way to boost worker productivity and efficiency, and businesses shouldn’t aim to stop employees from using the technology.
“Your staff are going to utilize it regardless, right? It’s such a powerful tool, and it can help them with their jobs,” said Brad Lutgen, founding partner at Madison-based cybersecurity firm Ghostscale. “Banning it completely is not the answer.”
Instead of banning the use of generative AI tools outright, Lutgen says educating both employees and vendors on responsible use of the technology is the best approach.
While AI can make employees’ everyday tasks easier, it also makes attacks from cybercriminals more effective. Data poisoning, a type of cyberattack that feeds AI models false information to corrupt their outputs, is becoming more prevalent; hackers use it to spread misinformation.
“We now have different types of social engineering attacks because you can put out all this misinformation and get employees to think things or do things they may not have otherwise,” said Lutgen. “We need to train for that.”
On the corporate side, this means making sure companies update their security awareness training. Employees need to have a foundational knowledge of what AI is and how it can be used against them, Lutgen explained.
From a policy standpoint, businesses need clear guidelines for acceptable and unacceptable uses of AI, written with an eye toward any scenario that could affect security.
“It’s the knowledge of sensitive data that should never go into a public-facing AI engine,” said Lutgen. “Some of the standard examples are if you’re in finance, you shouldn’t be putting financials into ChatGPT.”
AI-focused cybersecurity training should be tiered and tailored by department. Within an organization, an employee in HR would use generative AI differently than an IT specialist would.
If a company is mature enough to have developed its own large language model for internal use, employees should be trained on the proper use cases for the technology and on the company’s goals, so that the company ultimately sees a return on its investment in building it.
“You need to combine training on the benefits of AI with training on the risks,” said Lutgen. “You need to say, if I teach you cool stuff you can do with AI to make your job easier, employees are more excited about that and more likely to listen to that.”
Both Lutgen and Schilling say that providing specialized cybersecurity training and keeping your company’s corporate governance and data usage policies up to date are the two best ways to prevent a possible breach. While the technology used to facilitate social engineering schemes, like phishing, has become more sophisticated, the methods used to prevent them mostly remain the same.
“Each company and individual entrepreneur needs to know what their area of expertise is and how much risk they’re willing to tolerate related to cybersecurity,” said Schilling. “It’s a business innovation versus stagnation type of decision.”