Why Replacing Your Employees with VLM NPE Bots Won’t Defeat Social Engineering

(Scammed bot finger picture from Imagen 3)

Your cybersecurity firm can provide the most amazing protection software to your clients, and the clients still won’t be safe.

Why not? Because of the human element. All it takes is one half-asleep employee to answer that “We received your $3,495 payment” email. Then all your protections go for naught.

The solution is simple: eliminate the humans.

Eliminating the human element

Companies are replacing humans with bots for other rea$on$. But an added benefit is that when you bring in the non-person entities (NPEs) who are never tired and never emotional, social engineering is no longer effective. Right?

Well, you can social engineer the bot NPEs also.

Birthday MINJA

Last month I wrote a post entitled “An ‘Injection’ Attack That Doesn’t Bypass Standard Channels?” It discussed a technique known as a memory injection attack (MINJA). In the post I was able to sort of (danged quotes!) get an LLM to say that Donald Trump was born on February 22, 1732.

(Image from a Google Gemini prompt and response)

Fooling vision-language models

But there are more serious instances in which bots can be fooled, according to Ben Dickson.

“Visual agents that understand graphical user interfaces and perform actions are becoming frontiers of competition in the AI arms race….

“These agents use vision-language models (VLMs) to interpret graphical user interfaces (GUI) like web pages or screenshots. Given a user request, the agent parses the visual information, locates the relevant elements on the page, and takes actions like clicking buttons or filling forms.”

Clicking buttons seems safe…until you realize that some buttons are so obviously scambait that most humans are smart enough NOT to click on them.

What about the NPE bots?

“They carefully designed and positioned adversarial pop-ups on web pages and tested their effects on several frontier VLMs, including different variants of GPT-4, Gemini, and Claude.

“The results of the experiments show that all tested models were highly susceptible to the adversarial pop-ups, with attack success rates (ASR) exceeding 80% on some tests.”

Educating your users

Your cybersecurity firm needs to educate. You need to warn humans about social engineering. And you need to warn AI masters that bots can also be social engineered.

But what if you can’t? What if your resources are already stretched thin?

If you need help with your cybersecurity product marketing, Bredemarket has an opening for a cybersecurity  client. I can offer

  • compelling content creation
  • winning proposal development
  • actionable analysis

If Bredemarket can help your stretched staff, book a free meeting with me: https://bredemarket.com/cpa/

Leave a Comment