ChatGPT Agent shows that there’s a whole new world of AI security threats on the way we need to worry about

Security News


If you watched the launch of OpenAI’s new ChatGPT Agent, or you’re a Plus, Pro or Teams user who has had a chance to try out the new ‘agent mode’ in the tools drop-down list, it’s hard not to be impressed with the latest AI innovation.

ChatGPT Agent is designed to do complex real-world tasks for you. Think about things like planning a wedding, booking your car in for a service, making an app to solve a problem, or planning and booking a holiday.

Just like OpenAI’s previous agent called Operator, ChatGPT Agent acts like a real person who works for you, performing tasks as if they had their own computer. In fact, you can watch what’s happening on its ‘desktop’ as it performs these tasks – you can see it dragging windows around, and entering data into forms on websites, for example.

The entire concept is a unified agent that can handle the legwork, make informed decisions about which websites to use, and navigate the web independently. ChatGPT Agent can do it all, and you can even watch it work if you want to, but there is a catch…

A new world of threats

It's the powerful abilities of ChatGPT Agent that open you up to a whole new world of security threats:

“As we know, the Internet can be a scary place,” said Casey Chu in the ChatGPT Agent launch presentation. “There are all sorts of hackers trying to steal your information, scams, phishing attempts, and Agent isn’t immune to all these things.”

Well, that’s worrying. He went on:

“One particular thing we’re worried about is a new attack called ‘prompt injections’. Agent might stumble upon a malicious website that asks it to enter your credit card information here because it will help you with your task, and Agent, which is trained to be helpful, might decide that’s a good idea.”

It sounds like in the future we’re going to have to worry not only about getting phished ourselves, but about our AIs getting phished as well!

“We’ve done a lot of work to try to ensure that this doesn’t happen,” continued Chu. “We train our model to ignore suspicious instructions on suspicious websites. We also have layers of monitors that peer over the agent’s shoulder and watch it as it’s going, and stop the trajectory of anything that looks suspicious.”
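To make the idea of a monitor that “stops the trajectory” concrete, here is a toy sketch in Python. This is purely illustrative and not OpenAI’s actual implementation: real systems use trained classifiers and layered safeguards rather than a keyword list, and the pattern list and function names here are invented for the example.

```python
import re

# Illustrative red-flag patterns a monitor might look for. A real
# system would use a trained model, not a hand-written list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |your )?previous instructions",
    r"enter (your|the) (credit card|password|card number)",
    r"do not tell the user",
]

def flag_suspicious(page_text: str) -> bool:
    """Return True if the page text contains an instruction that
    looks like an attempted prompt injection."""
    text = page_text.lower()
    return any(re.search(pattern, text) for pattern in SUSPICIOUS_PATTERNS)

def next_action(page_text: str) -> str:
    """Halt the agent's 'trajectory' when a page is flagged,
    otherwise let it continue."""
    if flag_suspicious(page_text):
        return "STOP: possible prompt injection detected"
    return "CONTINUE"
```

The point of the sketch is the shape of the safeguard, not the detection logic: a check sits between the web page and the agent's next step, and anything suspicious halts the run instead of being obeyed.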

My first thought upon hearing this was that I would never give ChatGPT Agent my credit card information anyway, and I definitely wouldn't now. The only reason my credit card details reside with Amazon and Apple is that they seem like secure places to me, so the convenience is worth it. But all it would take is a hint that they weren't safe and I, probably along with millions of other people, would stop storing my credit card information with them.


The OpenAI team launching ChatGPT Agent. (Image credit: OpenAI)

Trust is everything

With online security, trust is everything. The idea that an AI agent, no matter how many checks are running in the background, is autonomously deciding what I spend my money on already fills me with dread. And when you add in the possibility of malicious sites using prompt injections to trick my AI into giving away information, it scares me enough not to want to trust it.

It should be noted that there is a ‘takeover mode’ with ChatGPT Agent where you input the sensitive information directly into the browser yourself, instead of handing it over to ChatGPT Agent to control. That would seem like a better way to use an agent to me. I don’t think I’m quite at the stage yet where I’m ready to give my AI the power to spend my money as it sees fit, and I bet I’m not the only one.

OpenAI seems quite upfront about the risks involved in using ChatGPT Agent with sensitive information, and as CEO Sam Altman said in the presentation, this is emerging technology, and we don’t even know what all the threats will be yet. We’ll just have to see what happens as people start to use it.

But that’s what’s got me the most worried – what happens when people start using AI to beat AI? I’m sure the hackers won’t be shying away from using AI to circumvent our security protocols, and AI will probably come up with a number of attacks we haven’t even thought of yet.

You might also like