The title of this post uses acronyms for brevity, but the full version is “Using Large Language Models for Know Your Customer. What Could Go Wrong?”
Biometric Update links to a TrendAI post that demonstrates how the use of a large language model to analyze document data is a vulnerability to prompt attacks.
“In a real-world stack built with FastAPI, Claude Code, and a SQLite MCP backend, his team embedded malicious instructions inside a passport so that the AI agent followed them and leaked other customer records directly into the verification page.”

What does this mean?
“The takeaway here is that if your AI can read documents and call tools, your documents can potentially become executable attack surfaces even when guarded with strict schemas.”
Something a human wouldn’t do.
