Research Reveals Adversarial Images Designed for AI Can Trick Human Perception Too

Get this: those subtly edited images designed to mess with AI? Turns out they can actually trick humans too!

Researchers have been creating these “adversarial images” to try to fool computer vision systems. We’re talking subtle changes – barely nudging some pixel values – that can make an AI classify a cat as a truck. It’s wild that such small tweaks can derail a model, and the sketch below shows roughly how they’re made.
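
If you’re curious what “barely nudging some pixel values” looks like in code, here’s a minimal sketch in the spirit of the classic fast gradient sign method (FGSM). To be clear, this is my own illustration, not the study’s actual attack: the PyTorch setup, the `model`, and the pixel budget `epsilon` are all assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, target_class, epsilon=4 / 255):
    """Targeted FGSM sketch (illustration only, not the study's method):
    move every pixel at most `epsilon` toward whatever makes `model`
    favor `target_class` (e.g. "truck")."""
    image = image.clone().requires_grad_(True)  # shape (3, H, W), values in [0, 1]
    logits = model(image.unsqueeze(0))
    # Cross-entropy against the *target* class; lower loss = more "truck".
    loss = F.cross_entropy(logits, torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient's sign so the prediction drifts toward the
    # target, while the picture still looks unchanged at a casual glance.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The key design point is the `epsilon` cap: each pixel moves by only a few intensity levels out of 255, which is exactly why the edit is so hard to spot.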

But the latest research shows a similar effect on people! In experiments, participants were shown pairs of these perturbed images and asked questions like “Which looks more cat-like?” Even though neither image showed a cat, people consistently picked the one that had been altered to fool the AI. (A toy version of how you’d measure that bias is sketched below.)
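
Here’s a toy sketch of the kind of forced-choice analysis behind a claim like “consistently picked”: tally how often people chose the adversarially perturbed image and check whether that rate beats 50/50 chance. The `picked` data below is made up purely to show the shape of the computation; none of these numbers come from the study.

```python
import random

def choice_bias(choices, n_boot=10_000, seed=0):
    """Fraction of trials where the adversarially perturbed image was
    chosen, plus a simple bootstrap 95% interval. Chance level is 0.5."""
    rng = random.Random(seed)
    n = len(choices)
    rate = sum(choices) / n
    boots = sorted(
        sum(rng.choices(choices, k=n)) / n for _ in range(n_boot)
    )
    return rate, (boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)])

# Hypothetical trials: 1 = picked the image perturbed toward "cat", 0 = other.
picked = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1]
rate, (lo, hi) = choice_bias(picked)
print(f"chose the perturbed image {rate:.0%} of the time (95% CI {lo:.0%}–{hi:.0%})")
```

If the interval sits clearly above 50%, the perturbation is biasing choices, even when participants can’t articulate any difference between the images.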

The crazy part is that this happened even with super subtle changes – perturbations small enough that people normally wouldn’t consciously notice them. Somehow those tiny adversarial tweaks still biased what people reported seeing.

It shows human vision isn’t fooled as easily as the AI’s, but we’re definitely not immune: the changes created a subtle nudge in perception toward what the computer vision system saw.

This is big stuff for AI safety research. It means adversarial attacks don’t just mess with the machine – they can subtly manipulate human perception and decision-making too.

It also emphasizes why we have to keep studying how AI systems and human brains interact. We don’t want these adversarial images causing problems without anyone realizing it. For this tech to be safe and secure, we need to understand how it affects both humans and machines.

So, in summary: edited images can deceive more than just AI – human vision is affected too! It’s a tricky effect we’re still figuring out, but understanding how our brains perceive this tech is key to developing it responsibly. The goal is AI that doesn’t subtly manipulate us without our knowledge. A little pixel editing shouldn’t make anyone see cats as trucks!