For all the hype artificial intelligence has generated in recent months, it has also stoked plenty of fear – even among the tech industry’s top minds.
Take Geoffrey Hinton, for instance, also known as the “Godfather of AI.” In May, the 75-year-old computer scientist left his role overseeing a research team at Google to devote his time to speaking out about the risks AI poses to society.
Most urgent among Hinton’s concerns are potential job loss and the prevalence of disinformation. He’s also worried about what might happen if humans – especially so-called bad actors – give robots and other AI machines too much control, whether it’s on the battlefield during war or in a corporate office setting.
“If you give one of these superintelligent agents a goal, it’s going to very quickly realize that a good sub-goal for more or less any goal is to get more power,” Hinton said during an interview for The New York Times’ The Daily podcast. “We love to get control. And that’s a very sensible goal to have, because if you’ve got control you can get more done. But these (AI systems) are going to want to get control, too, for the same reason, just in order to get more done. And so that’s a scary direction.”
The interviewer, Cade Metz, a technology correspondent for The New York Times, cited a possible scenario in which a human asks an AI system to make money for them.
“Remember, these are machines. Machines are psychopaths. They don’t have emotions. They don’t have a moral compass. They do what you ask them to do. Make us money? OK, we’ll make you money. Perhaps you break into a computer system in order to steal that money,” said Metz. “If you own oil futures in Central Africa, perhaps you foment a revolution to increase the price of those futures to make money from it.”
Of course, as Metz points out, an AI system like ChatGPT does not have the ability to take over the world or “destroy humanity,” but the rise and mainstream usage of large language models is already having real-world consequences that can’t be ignored.
Since early May, thousands of Hollywood screenwriters represented by the Writers Guild of America have been on strike over concerns about working conditions and compensation. Among its various objectives, the union wants to “regulate the use of artificial intelligence” in projects, thus preventing studios from replacing human writers with AI. “AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI,” according to the WGA’s campaign.
A March study by researchers at ChatGPT parent company OpenAI and the University of Pennsylvania found that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of large language models.
The threat of disinformation generated by AI is even more urgent.
New York City-based attorney Steven Schwartz recently made international headlines for using ChatGPT to research legal cases to cite in an affidavit. It turned out that the six cases Schwartz included in the filing were “hallucinations,” or entirely fabricated by the AI chatbot. As a result, the attorney now faces possible sanctions by a federal judge, but claims he was unaware the cases were false and was not acting in bad faith.
While no real harm resulted from the attorney’s use of AI-generated disinformation, that’s not always the case.
In the world of cybersecurity, AI is increasingly being used by bad actors for nefarious purposes. Often in the form of digitally manipulated audio, photos, videos or email messages – also known as deepfakes – disinformation plays a key role in social engineering tactics designed to lure targets into sharing personal information or performing tasks such as installing malware or downloading viruses.
Many workplaces are all too familiar with email phishing scams, in which, for instance, an employee receives what appears to be a legitimate email from their boss or head of the company asking them to transfer funds, make a purchase or update payroll information.
“There’s now voice phishing scams with people’s voices,” said Keegan Bolstad, sales manager at Menomonee Falls-based managed IT company Ontech Systems Inc. “You receive a call from what looks like your boss’ phone and it’s your boss’ voice telling you to go do something. It’s very, very difficult for you as a user to interpret that, and that’s where AI is getting extremely scary.”
What’s more, the rise and public availability of generative AI has “evened the playing field” among cybercriminal groups, Bolstad said. With programs like ChatGPT, would-be cybercriminals no longer need high-level skills like code writing to create malware or ransomware.
But as much as AI has heightened cyber threats to the business world, it has also allowed IT companies like Ontech to sharpen their defense tactics. Bolstad likens it to a game of cat and mouse.
“We respond to the bad actor, and we’ll develop tools or technologies or systems that negate that, and then they’ll pivot and do something new,” he said. “They do something, we react. As we do something, they react. And it’s just a never-ending cycle.”