When Everything Starts to Feel Fake: Your Data Might Already Be Full of AI Imposters
You know the old joke - if someone much hotter than you, like ridiculously out of your league, walks up to you in a bar and starts flirting, there’s a very high chance you’ll wake up in a bathtub without a kidney.
It’s funny, but it speaks to something real: when something looks too perfect, too polished, too improbable, your brain immediately goes, “There’s a catch.”
That instinct, “Is this real?”, is starting to apply to almost everything now.
A lot of people I know who are still dating keep telling me the same thing: one of the biggest frustrations now is not knowing if the person they’re talking to is even real. Dating apps are full of faces you can’t quite trust. Is this a real person, an AI-generated image, or just a filter doing half the work? Years ago, when you fancied someone, it was because you saw them across a bar or, in my case, the laboratory, and your reaction was to an actual human being in front of you. There was no hesitation or quiet doubt about whether the person even existed.
That baseline assumption of reality, the idea that we’re interacting with actual people, is quietly fading.
The uncomfortable fragility of our data ecosystem
I read an article in The Economist this week that genuinely stopped me. Not because it was dramatic, but because it revealed something very quiet and very serious: AI can now impersonate survey respondents so convincingly that researchers simply can’t tell the difference.
Not “in the future.” Not “if we’re not careful.” Now.
The piece explained how large language models can fill out surveys, mimic human tone, copy demographic patterns, and slip through every normal quality check. They can sometimes even replicate the small errors that real people make, the very things researchers once relied on to detect fraud.
In at least one experiment, just a handful of AI-generated answers were enough to shift the results of a political poll. And if political researchers, with all their safeguards and decades of rigour, struggled to catch it, then the rest of the insight world doesn’t stand a chance.
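To make the scale of that concrete, here is a toy back-of-the-envelope sketch in Python. It is my own illustration, not the experiment the article describes, and every number in it is invented: it simply shows the arithmetic of how a few percent of coordinated synthetic respondents can flip a close poll.

```python
import random

random.seed(42)  # reproducible toy run

# A toy illustration, not the experiment described in The Economist:
# all numbers here are invented. It shows how a small share of
# coordinated synthetic respondents can flip a close poll.

N_REAL = 1000        # genuine respondents
N_FAKE = 30          # synthetic respondents, ~3% of the final sample
TRUE_SUPPORT = 0.49  # genuine support for Candidate A: a narrow loss

# Genuine answers: each respondent backs A with probability TRUE_SUPPORT.
real = [random.random() < TRUE_SUPPORT for _ in range(N_REAL)]

# Synthetic answers: a coordinated batch that always backs A.
fake = [True] * N_FAKE

clean = sum(real) / N_REAL
polluted = (sum(real) + N_FAKE) / (N_REAL + N_FAKE)

print(f"Clean poll:    {clean:.1%} support for A")
print(f"Polluted poll: {polluted:.1%} support for A")
# With these invented numbers, ~3% pollution is typically enough to
# push a losing candidate over 50%, and nothing in the data flags it.
```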
What struck me was how vulnerable our entire feedback ecosystem actually is. We’ve spent years moving everything online, automating data collection, and boasting about volume: more responses, faster responses, cheaper responses. It all sounds efficient until you realise that your dataset can now be quietly polluted by machines pretending to be people, and that you would have no idea.
When the respondent isn’t real, the insight isn’t real. It’s as simple as that.
And this is where things get interesting. Because it turns out the most trustworthy method we have, the one thing AI can’t impersonate, is the part of insight work everyone assumed we’d eventually automate away: a real-time human conversation.
The future of insight may look a lot like the past
A conversation forces accountability. A conversation requires presence. A conversation has texture: pauses, hesitations, contradictions, emotion. You can’t fake your way through twenty minutes of talking about damp in your flat, or how safe you feel in your building, or how a service made you feel undervalued.
A trained human hears the things that never make it into survey boxes: embarrassment, fear, pride, irritation, resignation. Machines can simulate the words, but not the weight.
This is the part no one wants to admit: the more sophisticated AI becomes, the more valuable genuine human listening becomes. Not because humans are perfect (we’re not), but because authenticity is becoming scarce. It’s becoming something you have to deliberately protect.
I don’t think this is the “death of the survey,” and I would never say that - surveys still matter. But the era of blind trust in digital feedback is over. We’re going to have to be more thoughtful. More rigorous. More human. We’re going to have to ask not just “What do people think?” but “Who is actually speaking?”
And the answer can’t be an algorithm.
Ironically, the future of insight might look a lot like the past. Not because technology failed, but because it succeeded so well that it blurred the line between real and synthetic experience. Now the organisations that will make the best decisions are the ones who understand one simple truth: you can automate data, but you can’t automate trust.
And trust still comes from people.
If you want to understand what your customers, residents, or communities truly think, not what machines are guessing on their behalf, that is exactly the work we do at RealService.
We speak to people directly. We listen properly. We build the kind of trust that no algorithm can fake, and we surface the human truths that actually drive behaviour, loyalty, frustration, and change.
At a time when digital feedback is becoming noisier, cheaper, and less reliable by the day, our work is intentionally the opposite: slow where it needs to be, rigorous where it matters, and grounded in real conversations with real people. That’s why organisations come to us: they don’t just want data; they want clarity. They want meaning. They want to make decisions based on insight they can actually stand behind.
If this resonates with you, or if you’re starting to question whether your feedback is telling you the whole story, let’s talk.
For more insights like this, follow me: Chenai Gondo, PhD