Twitter cracks down on AI-faked images, will add user-generated fact-check notes to pictures

Shortly after a fake image of an "explosion near the Pentagon" went viral on the platform, Twitter announced that it is expanding its user-generated fact-checking program to cover images, Kim Ten reported on May 31. The program, called "Community Notes," lets contributors attach context to potentially misleading tweets, with the notes appearing beneath the tweet. Contributors will now be able to write notes about images as well, and those notes will be shown under "recent and future matching images." Community Notes contributors will be able to indicate whether their note adds context to the tweet itself or to an image within the tweet. For now the feature only works with tweets containing a single image, though the company says it is working to extend it to videos and tweets with multiple images.
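Twitter has not disclosed how it decides that two images "match." Systems of this kind commonly rely on perceptual hashing, where visually similar images produce nearby hash values, so a note attached to one copy of an image can surface under near-duplicates. The sketch below is a minimal illustration of that general idea using the open-source imagehash library; the function names, note store, and distance threshold are hypothetical and not Twitter's actual implementation.

```python
# Minimal sketch of perceptual-hash image matching, assuming the Pillow and
# imagehash libraries. This is NOT Twitter's disclosed method, only an
# illustration of how a note could follow "matching" copies of an image.
from PIL import Image
import imagehash

# Hypothetical store mapping a perceptual hash to a Community Note ID.
noted_images: dict[imagehash.ImageHash, str] = {}

def register_note(image_path: str, note_id: str) -> None:
    """Record a note against the perceptual hash of an image."""
    noted_images[imagehash.phash(Image.open(image_path))] = note_id

def find_note(image_path: str, max_distance: int = 8) -> str | None:
    """Return a note ID if a visually similar image has already been noted."""
    candidate = imagehash.phash(Image.open(image_path))
    for known_hash, note_id in noted_images.items():
        # Subtracting two ImageHash objects gives the Hamming distance
        # between the 64-bit perceptual hashes.
        if candidate - known_hash <= max_distance:
            return note_id
    return None
```

A smaller threshold makes matching stricter (fewer false positives on unrelated images), while a larger one catches more recompressed or lightly edited copies.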
