Twitter is looking to improve the value of its Community Notes, with a new feature that will enable Community Notes contributors to add a contextual note to an image in the app, which Twitter's system will then attach to any matching re-shares of the same image across all tweets.
From AI-generated images to manipulated videos, it's common to come across misleading media. Today we're piloting a feature that puts a superpower into contributors' hands: Notes on Media
Notes attached to an image will automatically appear on current & future matching images. pic.twitter.com/89mxYU2Kir
— Community Notes (@CommunityNotes) May 30, 2023
As you can see in this example, when a Community Notes contributor marks an image as questionable and adds an explanatory note to it, that same note will now be attached to all other tweets using the same image.
As explained by Twitter:
“If you’re a contributor with a Writing Impact of 10 or above, you’ll see a new option on some Tweets to mark your notes as ‘About the image’. This option can be selected when you believe the media is potentially misleading in itself, regardless of which Tweet it’s featured in.”
Community Notes attached to images will include an explainer clarifying that the note is about the image, not the content of the tweet.
The option is currently only available for still images, but Twitter says that it’s hoping to expand it to videos and to tweets with multiple images soon.
It’s an update that, as Twitter notes, will become increasingly important as AI-generated visuals spark new viral trends across social apps.
Images like this:

This AI-generated image of the Pope in a puffer jacket prompted many to question whether it was real, which is a more light-hearted example of why such alerts could be of benefit in clarifying the actual origin of an image within the tweet itself.
More recently, we’ve also seen examples of how AI-generated images can cause harm, with a digitally created picture of an explosion outside the Pentagon sparking a brief panic online, before further clarification confirmed that it wasn’t actually a real event.
That incident has likely prompted Twitter to take action on this front, and using Community Notes for this purpose could be a good way to address misleading AI-enhanced images at scale.
Although Group Notes, for all its advantages, stays a flawed system too, with regard to addressing on-line misinformation. The important thing challenge with Group Notes is that they will solely be utilized after these visuals have been shared, and Twitter customers have been uncovered to them. And given the real-time nature of tweets, that delayed turnaround – with regard to making use of a Group Observe, having it permitted, then seeing it seem on the tweet – may imply that tweets just like the Pentagon instance will proceed to realize vast publicity within the app earlier than such notes might be appended.
It will possible be quicker for Twitter itself to tackle the moderation in excessive circumstances, and take away that content material outright. However that goes towards Elon Musk’s extra free speech-aligned method, wherein Twitter’s customers will determine what’s and isn’t right, with Group Notes being the important thing lever on this respect.
That ensures that content material selections are dictated by the Twitter group, not Twitter administration, whereas additionally lowering Twitter’s moderation prices – a win-win. The method is smart, however in utility, it may result in numerous traits gaining traction earlier than Group Notes can take impact.
Either way, this is a good addition to the Community Notes process, which will become more important as AI-generated content continues to take hold and spark new forms of viral trends.