Can I Trust AI? Scams and Trust in the Age of Artificial Intelligence
How could you be so stupid?
For years, scams were clumsy. Bad grammar. Odd phrasing. Suspicious formatting. They relied on volume and probability: someone, somewhere, would reply to the Nigerian prince offering to share his fortune.
Today, scamming is an international, trillion-dollar industry powered by artificial intelligence (AI). Criminal networks share tactics and refine their methods across global online communities.
Generative AI tools like ChatGPT now refine scammers’ scripts the way marketers refine advertising campaigns, measuring responses and adjusting their patter. AI lowers costs and raises the quality of scams, seamlessly imitating authority while creating urgency with a perfect tone.
Deception on a scale that is both economically vast and morally corrosive.
Vulnerable people cannot adopt defensive tools at the same speed.
Why Brain Injury Increases Vulnerability
Scams work not because people are stupid, but because people are reactive.
Those living with brain injury are especially vulnerable.
When cognitive bandwidth narrows, we default to shortcuts. We yield to authority, respond to urgency and search for familiar logos.
These reactions are human. They are how our brains conserve energy.
But scammers understand these shortcuts very well, and AI exploits them with extraordinary precision.
The Hidden Cost of Being Scammed
The damage caused by scams is often discussed in financial terms, but the emotional cost can be far greater.
The shame of being scammed is a corrosive burden. For people living with brain injury, whose competence may already be quietly under scrutiny, a single mistake can feel like compelling evidence.
For some, the risk is not just monetary loss; it is the potential loss of independence.
A scam can quietly become evidence in a case nobody intended to open: the case against someone’s competence.
Trust Is Becoming Procedural
For generations, trust was something we felt.
We looked for clues. Tone of voice. Familiar language. Subtle warning signals that something was not right.
AI is eroding those signals.
Messages can now perfectly imitate banks, government agencies, family members and colleagues.
Trust is no longer something we simply feel.
It is becoming procedural.
We check email addresses; verify phone numbers; interrogate callers; sometimes we stop answering the phone altogether.
The Real Question About AI
The real question is not whether AI can be trusted. AI has no conscience, no loyalty and no instinct for truth.
The question is whether humans can adapt to a world where deception is automated.
For many people, especially the vulnerable, trust once functioned as a safe place.
But trust is no longer something we feel. It is something we must corroborate.
Losing that sense of safety may be the most unsettling change of all.