A new era for fact-checking on social media? X introduces AI to Community Notes

X, formerly known as Twitter, is experimenting with a new way to bring more context to posts in people’s timelines, faster. Earlier this week, the platform announced a pilot programme that lets AI bots, called 'Note Writers', draft Community Notes, the small pop-ups of explanation or clarification that often appear under posts flagged by users.
The idea is simple: in moments when misinformation spreads quickly—especially during breaking news—AI could help surface context more efficiently. But importantly, humans still have the final say.
In its official announcement on July 1, X said: “Starting today, the world can create AI Note Writers that can earn the ability to propose Community Notes. Their notes will show on X if found helpful by people from different perspectives – just like all notes. Not only does this have the potential to accelerate the speed and scale of Community Notes, rating feedback from the community can help develop AI agents that deliver increasingly accurate, less biased, and broadly helpful information—a powerful feedback loop.”
These AI bots can be built using X’s in-house Grok model or through external AI systems. However, they don’t get automatic publishing rights. Like every other note on the platform, their drafts must be reviewed by contributors from a mix of backgrounds. Only if people with differing viewpoints rate the note as "helpful" will it be published. Bots that repeatedly produce weak or misleading notes can be removed from the programme.
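To make the publication gate concrete, here is a minimal sketch of the idea that a note only goes live once raters with differing viewpoints find it helpful. This is purely illustrative: X's actual Community Notes ranking uses an open-sourced matrix-factorization model, and the names, thresholds, and `Rating` structure below are hypothetical simplifications.

```python
# Illustrative sketch only — not X's actual algorithm. The real system
# infers rater viewpoints via matrix factorization; here we assume each
# rater already carries a perspective-cluster label.
from dataclasses import dataclass

@dataclass
class Rating:
    rater_perspective: str   # hypothetical cluster label, e.g. "A" or "B"
    helpful: bool

def should_publish(ratings: list[Rating], min_helpful: int = 2) -> bool:
    """Publish only if raters from at least two different perspective
    clusters marked the note helpful — a crude stand-in for 'bridging'."""
    helpful_clusters = {r.rater_perspective for r in ratings if r.helpful}
    helpful_count = sum(r.helpful for r in ratings)
    return len(helpful_clusters) >= 2 and helpful_count >= min_helpful

# A note rated helpful by only one cluster stays unpublished,
# while cross-cutting agreement lets it through:
one_sided = [Rating("A", True), Rating("A", True), Rating("B", False)]
cross_cutting = [Rating("A", True), Rating("B", True)]
```

The same feedback signal, aggregated over many notes, is what would let the platform demote or remove bots whose drafts are repeatedly rated unhelpful.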
Keith Coleman, X’s Vice President of Product, summed up the approach as “AI helping humans, with humans deciding.”
Community Notes, which began life as Birdwatch in 2021, has quietly become one of the more trusted tools on X for countering misleading or incomplete posts. The problem? It can’t always keep up. With a relatively small pool of human contributors and thousands of posts shared every minute, speed has been a persistent challenge.
X is hoping that this AI-assisted model will help scale the system, without sacrificing the trust built over the years. Still, experts warn that AI has its flaws—from factual errors to unintended bias. That’s why every AI-generated note will be clearly labelled and, for now, nothing gets published without a human vote of confidence.
According to reports, the first AI-written notes are expected to appear later this month, following initial testing.