Artificial intelligence is the ultimate double-edged sword.
It is advancing medical technology at an astonishing rate and improving the quality of life around the world, but it is also already being used for nefarious purposes.
“When we talk about bad actors, things are now available to a lot of people who might not otherwise be considered technically sophisticated,” JS Nelson, a cybersecurity expert and visiting fellow in business ethics at Harvard Law School, told The Post.
“It’s happening on a global scale,” Lisa Palmer, chief AI strategist at consultancy AI Leaders, told The Post. “This is not just something that is happening in the United States. It is a problem in several countries.”
Through AI, people’s facial data has been used to create pornographic images, while others have had their voices cloned to trick close family and friends over the phone, often into sending money to a scammer.
Read on to learn more about the terrifying ways AI is being used to exploit and rob people, and how it’s likely to get worse.
Generative AI and deepfakes
Popular photo apps where users submit snapshots of themselves and have the AI transform them into a sci-fi character or Renaissance artwork have a very dark side.
When MIT Technology Review’s Melissa Heikkilä tested the hit app Lensa AI, it generated “tons of nudity” and “overtly sexualized” images without her consent, she wrote in late 2022.
“Some of those apps make it very clear in their terms of service that you are sharing your face with their data storage,” said Palmer, who gave a lecture Wednesday on the potential benefits and drawbacks of AI for an information management company.
And, in the wrong hands, the theft of a person’s biometric facial data could be catastrophic.
She continued: “That’s a terrible case scenario where someone could potentially breach a [military or government] facility as a result of having someone’s biometric data.”
Easy-to-create deepfake and generative AI content is also emerging, such as fake footage of Donald Trump’s arrest. Palmer is “exceptionally concerned” that this will be a problem in the next election cycle.
In particular, she fears unethical, but not illegal, uses that some politicians might see as “just clever marketing.”
Nelson, who preaches “how dangerous it is for AI to just make things up,” also fears that easy access to generative AI could lead to fake news and mass panic, such as a computer-generated extreme weather event being widely shared on social media.
She said: “It’s going to keep going off the rails. We are starting to see all of this happen.”
AI is bringing a high degree of sophistication to fraudulent emails and robocalls, experts warn.
“It’s very convincing,” Palmer said. “Now they can create these phishing emails at [a massive] scale, and they are personalized,” she said, adding that phishers will include convincing pieces of personal information taken from a target’s online profile.
ChatGPT recently introduced Code Interpreter, a plugin that can ingest and analyze large data sets in a matter of minutes, a capability that can make a scammer’s life substantially easier.
“You [could] have someone who has access to a complete list of political donors and their contact information,” she added. “Maybe you have demographic information like, ‘We really appreciate your latest donation of X amount of dollars.'”
AI is also making faked phone calls more convincing. Cloning a voice takes as little as a three-second recording of the person speaking; 10 to 15 seconds will produce an almost exact match, Palmer said.
Last month, a mother in Arizona was convinced her daughter had been kidnapped after hearing the girl’s cloned voice over the phone, with the scammers demanding a $1 million ransom, an incident the FBI publicly addressed.
“If you have it [your info] public, you’re allowing yourself to be ripped off by people like this,” said Dan Mayo, assistant special agent in charge of the FBI’s Phoenix office. “They are going to search for public profiles that have as much information about you as possible, and when they find out about that, they are going to dig into you.”
Employees, especially in technology and finance, may also receive calls with the fake voice of their boss on the other end, Nelson predicted.
“You are dealing with a chatbot that literally sounds like your boss,” she warned.
But ordinary citizens aren’t the only ones being deceived.
In late April, pro-Putin Russian pranksters tricked Federal Reserve Chairman Jerome Powell into thinking he was talking to Ukrainian President Volodymyr Zelensky.
The hoaxers then broadcast the resulting lengthy conversation with Powell on Russian television.
AI’s ability to improve malware, which researchers recently put to the test with ChatGPT, is also raising alarms.
“Malware can be used to give bad guys access to data you store on your phone or in your iCloud,” Palmer said. “Obviously, it would be things like your passwords on your banking systems, your passwords on your medical records, your passwords on your children’s school records, whatever the case may be, anything that’s protected.”
Specifically, AI can strengthen malware by creating instant variants, “which makes it increasingly difficult for those who are working on securing systems to keep up with them,” Palmer said.
Beyond ordinary people, Palmer predicts that high-profile individuals, especially those with access to government systems, will be targets of AI-assisted hacking efforts aimed at stealing sensitive information and photos.
“Ransomware is another prime tool for bad actors,” she said. “They take over your system, change your password, lock you out of your own systems, and then demand a ransom.”