Is everything AI-generated now?

Every day, I mindlessly scroll through X (previously Twitter), consuming an endless stream of cute cat photos, my favourite K-drama content, and those perfectly relatable tweets that make me pause mid-scroll. It has been my daily dose of comfort. Recently, though, this routine has become tinged with an unsettling paranoia.
A few months ago, I came across what seemed like the most adorable photo: a fluffy orange cat taking tentative steps into the snow, then retreating to pose cutely for the camera. I shared it without a second thought, joining thousands of others who found it irresistibly charming. Only later did I discover the uncomfortable truth: it was AI-generated. The telltale signs were there: the paws unnaturally facing the same direction, the cat's almost-too-perfect orange coat, and that uncanny quality that makes you feel something is slightly off. The original image that inspired this doppelganger featured a black cat with properly oriented paws, but by then, the AI version had already colonised our collective consciousness.
Then came the "emotional support kangaroo" video: a marsupial with a boarding pass being denied airplane access. Despite the original post clearly labelling it as AI-generated, the video spread, stripped of its disclaimer, with viewers genuinely debating airport pet policies. Most recently, I watched an alligator seemingly giving a cat a ride across the water, another AI creation that looked convincingly real until you knew better.
This daily deception makes me question: in a world where we have largely abandoned content verification, how can anyone prove their work isn't AI-generated? The burden of proof has quietly shifted from "prove it's fake" to "prove it's real", a nearly impossible task in our current technological landscape.
Social media platforms give creators of all kinds a chance to showcase their work, yet it is becoming increasingly difficult to tell real content from generated content. How can one even prove that their content is not AI-generated?
When detection tools fail us
Detecting AI-generated content currently relies on a few approaches: watermarking it at the point of creation, scrutinising what is shared and judging it as best we can, or running it through AI detection tools.
However, the fundamental problem with proving human authenticity lies in the failure of our detection systems. We are worse at detecting AI-generated images than text. Recent survey data reveals that over half of respondents mistook human-created images for AI creations: 51% incorrectly identified a human-created shampoo bottle image as AI-generated, and 53.4% made the same error with a refrigerator image.
Image credits: Nexcess
While AI-generated text might retain some detectable patterns, AI imagery has crossed an uncanny valley threshold that makes visual deception far more effective.
The text detection landscape is equally problematic, as Christopher Penn, Chief Data Scientist at Trust Insights, discovered when he tested an AI detection tool on the preamble to the U.S. Declaration of Independence. The system confidently declared that 97.75% of the text was AI-generated, a verdict delivered on a document written 246 years before ChatGPT's existence.
Penn's analysis revealed two critical flaws. First, AI detectors rely on metrics like perplexity and burstiness, so they flag documents with consistent vocabulary and similar line lengths. Second, they often use smaller AI models trained on the same data as their larger counterparts, creating circular logic in which familiar training data is recognised as "AI-like."
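To make those two metrics concrete, here is a minimal sketch, not any detector's actual code, of how perplexity and burstiness might be measured. The choice of GPT-2 via Hugging Face transformers and the crude sentence splitter are assumptions for illustration only.

```python
# Minimal sketch of the two signals described above: "perplexity" (how
# predictable the text is to a small language model) and "burstiness"
# (how much sentence lengths vary). The model and heuristics here are
# illustrative assumptions, not any specific detector's implementation.
import math
import statistics

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths in words; lower means more uniform."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if len(lengths) > 1 else 0.0

sample = "We hold these truths to be self-evident, that all men are created equal."
print(perplexity(sample), burstiness(sample))
```

A detector built on these signals treats low perplexity combined with low burstiness as "AI-like", which is exactly how a formal, consistently worded human document such as the Declaration's preamble can end up misclassified.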
The proliferation of detection tools has created what experts call "detector marketplace confusion." GPTZero, Originality.ai, Copyleaks, QuillBot, and dozens of others each employ different methodologies with inconsistent results. While some controlled tests suggest high accuracy, with certain tools achieving perfect scores in limited test sets, broader independent reviews reveal unreliability. OpenAI's decision to discontinue its own AI text classifier due to low accuracy serves as a stark indicator of these limitations.
More troubling is the systematic bias these tools exhibit. Studies show that AI detectors consistently misclassify content from non-native English speakers and neurodivergent individuals, whose writing styles naturally exhibit patterns like simpler sentence structures, specific vocabulary choices, and repetitive phrasing for clarity.
This creates a deeply unfair system in which certain human voices are systematically delegitimised. Meanwhile, even linguistics experts correctly identify AI-generated research abstracts only 38.9% of the time, which shows that AI-generated content often passes as human-written while genuine human work struggles to be recognised and trusted.
The race to detect AI-written text has become more complex. It's no longer just about spotting AI content; now there are advanced tools designed to avoid detection. Systems like AuthorMist use reinforcement learning to generate text that progressively becomes less likely to be flagged while preserving its original meaning. Simple paraphrasing tools, prompt engineering that instructs AI to "write like a human," and even using AI to "humanise" AI text have proven effective at bypassing detection.
Why legal frameworks can't keep pace
In response to these challenges, governments are beginning to create legal guidelines for AI use. The European Union's AI Act is one of the most comprehensive attempts to regulate artificial intelligence, but even its ambitious scope reveals the limitations of legislative solutions to the authenticity crisis. The Act establishes risk-based frameworks, banning "unacceptable risk" applications like cognitive manipulation while imposing transparency requirements on limited risk tools like chatbots.
However, recent legal developments suggest that copyright law, often viewed as a potential avenue for addressing AI training practices, may offer limited protection. In June 2025, U.S. District Judge Vince Chhabria dismissed a copyright lawsuit against Meta, brought by authors including Sarah Silverman and Ta-Nehisi Coates. While the judge emphasised that the ruling didn't establish Meta's practices as lawful, he noted that the plaintiffs "made the wrong arguments," highlighting the legal complexity of proving copyright infringement in AI training.
The watermarking solutions promoted by tech companies, which embed signals during content creation, face their own challenges. These systems can be bypassed through paraphrasing or disabled in open-source models. More importantly, watermarking requires universal adoption to be effective, and there is little incentive for all players to participate in good faith.
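Text watermarking typically works by statistically biasing the model toward a secret subset of tokens at generation time; a detector later checks whether suspiciously many tokens fall in that subset. The toy sketch below, loosely modelled on published "green list" schemes rather than any vendor's product, shows why paraphrasing defeats it: rewording changes the token pairs and washes out the statistical signal. The hashing scheme, word-level tokens, and threshold are simplified assumptions.

```python
# Toy sketch of statistical text watermark detection. At generation time the
# model would be nudged toward "green" tokens; detection checks whether
# suspiciously many tokens are green. All details here are simplified
# assumptions for illustration.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to the 'green list',
    keyed on the previous word so the split is context-dependent."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """How far the observed green-word fraction sits above the 50% expected by chance."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    n = len(words) - 1
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

print(watermark_z_score("the quick brown fox jumps over the lazy dog"))
```

A high z-score suggests watermarked text; paraphrasing changes the word pairs, destroys the statistical bias, and drops the score back toward zero, which is exactly the bypass described above.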
The real regulatory challenge lies in the shift in how we understand content creation. The traditional distinction between "human" and "AI" content is meaningless in an era where AI tools assist human writers in various capacities. Current legal frameworks struggle to address this nuanced reality, focusing on simple classifications rather than the complexity of human-AI collaboration.
Even well-intentioned "Made with AI" disclaimers face limitations. Context is stripped away as content spreads across social media platforms, and there is no standardised system for maintaining attribution from one platform to the next. The viral nature of digital content means that disclaimers often become separated from their original context, as has been my experience on X.
Why proving humanity may be impossible
The quest to prove human authenticity in content creation might be flawed. We are operating under the assumption that clear distinctions exist between human and AI creation, but this premise increasingly doesn't hold up to scrutiny. Modern AI systems can adopt various writing styles, incorporate emotional language, and even mimic human imperfections; not because they are becoming more human, but because they are becoming better at mimicking surface-level human characteristics.
Human expression is incredibly diverse. What we consider "AI-like" characteristics, including repetitive phrasing, formal tone, and consistent structure, can appear in genuine human writing shaped by different linguistic backgrounds, cognitive styles, or communicative needs. The narrow norm of human writing that detection systems are trained on fails to capture this diversity.
This makes proving human authenticity harder for those whose natural expression deviates from algorithmic expectations. Non-native English speakers, neurodivergent individuals, and those from different cultural backgrounds face higher scrutiny, not because their content is less authentic, but because their authentic expression doesn't match the parameters of what detection systems consider "human."
The focus on detection also misses the larger point. The question shouldn't be "Is this AI-generated?" but rather "Is this content valuable, accurate, and ethically created?" Commenting on Penn's LinkedIn post about AI detectors, Steve Mudd, founder and CEO of Talentless AI, noted: "What are we supposed to do if we detect AI-generated text? Call the AI police? If the content is worth reading, I'll read it whether it came from AI or not. And I now assume that the writer used some form of AI somewhere in the process."
Instead of binary "AI or human" classifications, we need transparency about how content was created, what tools were used, and how human judgment guided the process.
The responsibility ultimately falls on content creators to maintain ethical standards and clear attribution, regardless of the tools they use. This means honest disclosure about AI assistance, maintaining quality standards, and ensuring that AI tools enhance rather than replace human creativity and judgment. The goal isn't to eliminate AI from content creation but to make sure its use serves human interests and maintains trust.
Yes, almost everything is AI-generated today, but that’s not the real problem.
The problem is that we no longer know what to trust. Whether it is charming cat photos or misinformation, AI-generated content has woven itself into our lives. We need better systems, not just technical or legal, but cultural, that prioritise transparency and support creators of all kinds.