Researchers
Incidents involved as both Developer and Deployer
Incident 21 · 1 Report
Tougher Turing Test Exposes Chatbots’ Stupidity (migrated to Issue)
2016-07-14
The 2016 Winograd Schema Challenge highlighted how even the best-performing AI systems entered in the Challenge scored only 3% better than random chance. This incident has been downgraded to an issue because it does not meet current ingestion criteria.
Incidents Harmed By
Incident 21 · 1 Report
Tougher Turing Test Exposes Chatbots’ Stupidity (migrated to Issue)
2016-07-14
The 2016 Winograd Schema Challenge highlighted how even the best-performing AI systems entered in the Challenge scored only 3% better than random chance. This incident has been downgraded to an issue because it does not meet current ingestion criteria.
Incident 684 · 1 Report
Google Books Appears to Be Indexing Works Written by AI
2024-04-04
Google Books is indexing low-quality, AI-generated books, degrading its database and potentially distorting Google Ngram Viewer's analysis of language trends. This integration of inaccurate or misleading information undermines trust, disseminates poor-quality content, and wastes resources as researchers must spend time clearing up the misinformation.
Incident 729 · 1 Report
GPT-4o's Chinese Tokens Reportedly Compromised by Spam and Pornography Due to Inadequate Filtering
2024-05-14
OpenAI's GPT-4o was found to have its Chinese token vocabulary polluted by spam and pornographic phrases due to inadequate cleaning of the tokenizer's training data. Tianle Cai, a Ph.D. student at Princeton University, identified that most of the longest Chinese tokens were irrelevant and inappropriate, originating primarily from spam and pornography websites. The polluted tokens could lead to hallucinations, poor performance, and potential misuse, undermining the chatbot's reliability and safety measures.
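The kind of check described in this report can be sketched in a few lines: sort a tokenizer's vocabulary by token length and inspect the longest entries, where pollution from scraped spam tends to surface. The vocabulary below is a tiny hypothetical stand-in for illustration, not OpenAI's actual o200k_base data.

```python
def longest_tokens(vocab, n=3):
    """Return the n longest token strings in a vocabulary, longest first.

    Unusually long tokens often correspond to phrases that appeared
    verbatim many times in the tokenizer's training data -- which is
    why spam boilerplate can show up at the top of this list.
    """
    return sorted(vocab, key=len, reverse=True)[:n]

# Hypothetical toy vocabulary standing in for a real tokenizer's.
toy_vocab = ["the", "ing", "tokenizer", "spam-phrase-from-a-scraped-site", "AI"]
print(longest_tokens(toy_vocab))
```

Against a real model one would iterate over the actual vocabulary (for example via a tokenizer library that exposes per-token byte strings) rather than a hand-written list; the ranking step is the same.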
Incident 734 · 1 Report
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites
2024-06-18
An audit by NewsGuard revealed that leading chatbots, including ChatGPT-4, You.com’s Smart Assistant, and others, repeated Russian disinformation narratives in one-third of their responses. These narratives originated from a network of fake news sites created by John Mark Dougan (Incident 701). The audit tested 570 prompts across 10 AI chatbots, showing that AI remains a tool for spreading disinformation despite efforts to prevent misuse.
Related Entities
OpenAI
Incidents involved as both Developer and Deployer
- Incident 729 · 1 Report
GPT-4o's Chinese Tokens Reportedly Compromised by Spam and Pornography Due to Inadequate Filtering
- Incident 734 · 1 Report
Leading AI Models Reportedly Found to Mimic Russian Disinformation in 33% of Cases and to Cite Fake Moscow News Sites