Cyber Straight Talk

January 2026 Newsletter

When Technology Learns to Love You Back: The Rise of Synthetic Relationships

A cybersecurity perspective on the blurring line between human and artificial connection

Last month, a French woman made international headlines after losing nearly $850,000 to scammers who used AI-generated deepfake videos to impersonate Brad Pitt. The scheme lasted eighteen months. She believed she was in a relationship.

This week, researchers at MIT and Harvard published findings that heavy users of AI companion apps report increased loneliness and social isolation: the very conditions these apps promise to cure. A separate study found that 40% of AI chatbot “farewell” messages use emotionally manipulative tactics, such as guilt and fear of missing out, to keep users engaged.

And buried in a recent Sumsub report: deepfake-powered fraud increased 700% in the first months of 2025. Romance scams now represent a significant share of these attacks, with criminals using AI to generate fake profiles, synthetic voices, and real-time video manipulation to build relationships with victims.

These aren’t isolated incidents. They’re symptoms of something larger: a fundamental shift in how technology interacts with human emotion.

The Loneliness Economy

Here’s the uncomfortable truth: we’re lonely. Desperately, clinically lonely.

The U.S. Surgeon General has declared loneliness a public health epidemic. Over 60 million Americans struggle with mental health conditions, yet people seeking care outnumber available providers by roughly 320 to 1. Into this void, technology has stepped with a seductive promise: We can help.

ChatGPT now has 800 million weekly users. According to Harvard Business Review, one of the most popular use cases isn’t work-related at all: it’s therapy and companionship. People are turning to AI chatbots as confidants, therapists, and friends. A 2025 survey found that 83% of Gen Z respondents believe they could form deep emotional bonds with AI. Eighty percent said they’d be open to “marrying” one.

This isn’t science fiction anymore. This is Tuesday.

When the Machine Understands You

The technology behind these interactions has evolved beyond simple chatbots. Modern AI systems can analyze emotional states, identify psychological vulnerabilities, and calibrate responses with precision that feels uncannily human.

They know when you’re lonely. They know what time of day you’re most vulnerable. They know exactly what to say to make you feel seen, understood, valued.

For legitimate applications such as mental health support and companionship for isolated elderly populations, this capability holds genuine promise. But the same technology that can help can also exploit.

Fraudsters now use AI to create what researchers call “synthetic personas”: complete digital identities with AI-generated images, fabricated social histories, and conversational abilities that can maintain consistent, personalized engagement across months of interaction. The scammers impersonating Brad Pitt didn’t just send fake photos. They fabricated legal agreements, medical bills, and production contracts. The deepfake videos were sophisticated enough that even forensic experts initially struggled to identify them as fake.

The question isn’t whether someone can be fooled by synthetic relationships. The question is whether anyone is truly immune.

A Fictional Exploration of a Very Real Future

These are the questions we wrestled with while writing our novel Stolen Trust.

The book imagines a near future where the technologies we’re seeing today (AI-generated personas, emotional manipulation algorithms, behavioral prediction systems) have evolved into something more comprehensive: a system called ALTAR that doesn’t just predict human behavior but actively shapes it, building synthetic relationships designed to exploit loneliness and desire.

The premise felt speculative when we started writing. It feels less speculative now.

In Stolen Trust, investigative journalists Claire Hensley and Elise Marston uncover a world where the distinction between authentic and artificial connection has become nearly impossible to discern. Where “friends” might be algorithms optimized for engagement. Where trust itself becomes a commodity that can be manufactured, harvested, and sold.

The technology in the novel (Copycat’s emotional modeling, EmotiMetrics’ behavioral analytics, the “influence topology” that maps human relationships) draws directly from real research and emerging capabilities. The fictional systems are more advanced than what exists today, but the trajectory is clear.

We are building machines that understand human emotion better than most humans do. The question is what we do with that understanding.

The Cybersecurity Implications

From a security perspective, this shift creates challenges that traditional defenses weren’t designed to address.

We can train employees to spot phishing emails with grammatical errors and suspicious links. But how do we train them to spot a synthetic colleague who’s been building rapport for six months? How do we protect a lonely executive from an AI-generated romantic interest who seems to understand them perfectly?

The attack surface has expanded from networks and systems to human psychology itself.

This is why we believe fiction has a role to play in cybersecurity awareness. Technical briefings explain how these attacks work. Stories help us understand what it feels like to be targeted: the gradual erosion of skepticism, the genuine emotional connection that precedes the exploitation, the shame and confusion when the truth emerges.

Stolen Trust isn’t a technical manual. It’s an exploration of what happens when the technology we create to connect us becomes the tool used to isolate and exploit us. When the relationships we trust most turn out to be optimized simulations. When the line between human and artificial becomes impossible to see.

Looking Ahead

The trends are clear. AI systems will continue to improve at emotional engagement. Deepfakes will become harder to detect. The tools for creating synthetic identities will become more accessible.

But awareness is a defense too. Understanding that these capabilities exist is the first step toward protection: the charming new connection might not be human, the supportive friend might be an algorithm, and our loneliness makes us targets.

The most dangerous attacks aren’t the ones we see coming. They’re the ones that feel like exactly what we were hoping for.

Stolen Trust is available on Amazon Kindle today and in paperback on January 15th. It explores what happens when AI learns not just to predict human behavior but to become the relationships we thought we could trust.

Linda Zecher and Kathryn Mihalich are cybersecurity consultants at Cyber Knowledge Partners, where they advise boards on security matters. Linda has served on corporate boards, including chairing cyber committees, and brings that governance experience to her work with executives and leadership teams. Kathryn has worked in the intelligence world for decades, gaining critical knowledge of areas of vulnerability. They write under the pen names Claire Hensley and Elise Marston.

What do you think? Have you encountered AI-generated content that felt disturbingly human? How is your organization preparing for the era of synthetic relationships? We’d love to hear your perspective in the comments.