Bots and humans are coexisting in the space we call the internet without any clear rules of engagement.
I have a growing concern that even as someone deeply embedded in the tech landscape, I can no longer reliably recognize AI-generated content.
At the very least, most of us rely on signals like:
- “People think”
- “Experts say”
- “Users report”
But people, experts, and users could all be bots.

It’s not just me. In a 3,000-person survey, Checkr found that 88% of respondents say it’s hard to tell what’s real online.
The ramifications aren’t agreed upon. They depend on who you are. If you’re Sam Altman, this is upside. If you’re an elderly person trying to protect your retirement fund, this starts to feel like an existential problem.
Even AI struggles here. Models cannot consistently recognize AI-generated content.
So in many ways, we’re left to figure this out ourselves.
I want to document a few ways to recognize AI-generated writing, images, video, and social media behavior. Not because it’s possible to be perfectly accurate, but because it’s becoming necessary to be a little more skeptical than we used to be.
The Spectrum of “Authorship”
It can be difficult to distinguish AI-generated writing because modern AI is trained on human language.
Research from MIT shows that, on average, content generated or finalized by AI is perceived as slightly higher quality than human-generated content. That alone makes this harder than most people expect.
The study breaks content into four categories:
| Rank | Category | Description |
|---|---|---|
| 1 | AI only | Fully generated by AI |
| 2 | Augmented human | Human decides, AI assists |
| 3 | Augmented AI | AI decides, human assists |
| 4 | Human only | Fully human-created |
I tend to focus on categories 1 and 3, where AI is making most or all of the decisions and is given creative liberty.
In practice, this shows up in subtle ways. You might read a blog post that is clean, well-structured, and easy to follow, but when you finish it, nothing really sticks. Or you might write something yourself, run it through an AI tool to “polish it,” and realize afterward that while it reads better, it no longer sounds like you.
That tradeoff is where this starts to matter.
Em Dash
AI-generated content tends to overuse the em dash — sometimes to the point where it becomes noticeable.
If you were someone who used em dashes before generative AI, there’s nothing wrong with continuing. But it is interesting that prior to AI, most online writing used them sparingly, whereas now they show up much more frequently.
A simple example would be a sentence like:
“Success isn’t about working harder — it’s about working smarter — and using the right tools — to maximize output.”
There’s nothing technically wrong with it, but the repetition feels unnatural. Most people don’t structure sentences this way multiple times in a row.
It’s a small signal, but one that appears often enough to be worth noticing.
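As a toy illustration of that signal, here is a sketch that counts em dashes per sentence. This is my own made-up heuristic, not a detector: plenty of human writers use em dashes heavily, and plenty of AI text contains none.

```python
import re

def em_dash_density(text: str) -> float:
    """Rough heuristic: em dashes per sentence.

    Illustrative only. A high score is a nudge to read more
    carefully, not proof of AI authorship.
    """
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if not sentences:
        return 0.0
    return text.count("\u2014") / len(sentences)

sample = ("Success isn't about working harder \u2014 it's about working "
          "smarter \u2014 and using the right tools \u2014 to maximize output.")
print(em_dash_density(sample))  # 3 em dashes in 1 sentence -> 3.0
```

Anything much above one em dash per sentence, sustained across a whole post, is the pattern described above.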
No One Behind the Words
The biggest tell, at least in my experience, is a lack of genuine emotion.
Working with tools like ChatGPT, I’ve noticed that models are good at reflecting emotions that are already present in the prompt, but they struggle to generate emotion from lived experience.
For example:
“Losing my job was a challenging experience that taught me resilience and adaptability.”
This is correct. It reads well. But it doesn’t feel like anything.
Compare that to something more specific:
“I got laid off on a Tuesday morning, and I remember sitting at my desk for a few minutes before it even registered what just happened.”
The difference is subtle, but important. One is constructed. The other is remembered.
AI does not have experiences. It cannot listen, smell, feel, or recall moments in the way humans do. As models improve and incorporate more modalities, this gap may narrow, but for now, emotionally flat writing is still one of the more reliable signals.
Corporate Nothingness
Another pattern is the use of elaborate but empty language.
LLMs sometimes default to a style that sounds sophisticated, almost like political speech, but doesn’t actually say much.
You might read something like:
“In today’s rapidly evolving digital landscape, leveraging scalable solutions is critical to driving long-term value creation.”
At first glance, it sounds insightful. But if you pause and ask what it actually means, it becomes harder to answer.
In contrast, a human version of the same idea might be less polished but more direct:
“Most teams are just trying things and hoping something works.”
One sounds better. The other communicates more.
When reading, it helps to occasionally stop and ask: what is this actually saying? If the answer is unclear, that’s usually a signal.
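That "what is this actually saying?" check can be caricatured in code. The phrase list below is my own subjective assumption, not a standard; it just shows how dense the filler can get in a single sentence.

```python
# Hypothetical filler-phrase list; any such list is subjective
# and will flag some perfectly human corporate writing too.
FILLER = [
    "rapidly evolving",
    "digital landscape",
    "leveraging",
    "scalable solutions",
    "long-term value",
    "driving",
]

def filler_score(text: str) -> int:
    """Count filler phrases: a crude proxy for 'sounds good, says little'."""
    lowered = text.lower()
    return sum(lowered.count(phrase) for phrase in FILLER)

corporate = ("In today's rapidly evolving digital landscape, leveraging "
             "scalable solutions is critical to driving long-term value "
             "creation.")
print(filler_score(corporate))  # 6

direct = "Most teams are just trying things and hoping something works."
print(filler_score(direct))  # 0
```

Six filler phrases in one sentence versus zero in the plain version is the whole point of this section in two numbers.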
Almost Real
Images come with their own set of signals.

Some are as simple as a watermark in the bottom-right corner. Most, though, are not obvious at first glance, which is what makes them effective.
Common patterns include skin that appears too smooth, lighting that feels slightly inconsistent, or backgrounds that don’t fully make sense when you look closely.
For example, you might notice hands with an unusual number of fingers, or subtle distortions where objects blend into each other. In group photos, faces in the background can start to look less defined the longer you look at them.
Sometimes objects simply appear in a frame with nothing to explain them. These details are subtle and easy to dismiss, so it pays to look closely.
The Fake Crowd
Social media is where this becomes more operational.
Bots are not just generating content. They are maintaining accounts, building audiences, and interacting with real people. Because platforms actively try to detect and remove them, bot operators have to obfuscate their behavior.

That creates patterns.
For example, usernames often include randomized text like jan_doe_w3oisers98df. These are generated at scale to guarantee uniqueness, but no human would choose a handle that awkward to share with friends and family. You might also see relatively new pages with a surprisingly large number of followers, or accounts that follow far more people than follow them.
Content patterns matter too. Posts can feel repetitive, images can look like variations of the same template, and engagement often lacks depth. Sometimes bot farms will have all their accounts interact with each other. So jan_doe_w3oisers98df is commenting on mark_johnson_sdjf09w234234.
A common example is an account that posts frequently about a single political issue, engages aggressively in the comments, but never really deviates from a narrow set of talking points.
The practical takeaway: when a post is divisive, spend 5-10 seconds on the poster’s profile before deciding whether you’re dealing with a person or a bot.
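Those profile checks can be approximated with a rough script. The regex and the follower-ratio threshold below are assumptions I made up for illustration; real platform bot detection is far more involved, and real humans will trip these checks too.

```python
import re

# Matches handles ending in an alternating letter/digit run,
# e.g. "w3oisers98df" -- the shape of names generated at scale.
# This pattern is an illustrative assumption, not a platform rule.
MIXED_RUN = re.compile(r"(?:[a-z]+\d+){2,}[a-z]*$")

def suspicious_username(name: str) -> bool:
    """Flag usernames that end in a randomized letter/digit suffix."""
    return bool(MIXED_RUN.search(name.lower()))

def suspicious_ratio(followers: int, following: int) -> bool:
    """Flag accounts following far more people than follow them back.

    The 10x threshold is arbitrary; tune to taste.
    """
    return following > 10 * max(followers, 1)

print(suspicious_username("jan_doe_w3oisers98df"))     # True
print(suspicious_username("jan_doe"))                  # False
print(suspicious_ratio(followers=12, following=2400))  # True
```

Either signal alone proves nothing; together with repetitive, single-issue posting, they are worth those 5-10 seconds of attention.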
Now What?
I’m on high alert.
And the combination of everything I just mentioned will earn you a report from me.
We are entering a phase where AI is not just shaping what we read online, but increasingly influencing how we interpret reality. There is no clear line between human and machine-generated content anymore, and there likely won’t be one.
You won’t catch everything. I don’t either.
But developing a habit of asking simple questions goes a long way: What is this actually saying? Who is behind it? Does it feel real?
If you do some of the things above, you might be mistaken for a bot.
Or, at the very least, you might start to understand how easy it is to be.
If you found this useful, feel free to share it with someone who might enjoy it too.