AI Incident Roundup – January ’23
Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.
This month we are diving deeper into ChatGPT and other Large Language Models (LLMs), whose incidents are annotated with the 💬 icon.
Estimated reading time: 6 minutes
🗞️ New Incidents
Emerging incidents that occurred last month:
Incident #446: ShotSpotter Failed to Alert Authorities of Mass Shooting in North Carolina
- What happened? ShotSpotter failed to detect gunshots from a drive-by shooting in Durham, North Carolina, and did not alert Durham police; the New Year's Day shooting left five people hospitalized.
- About the AI: ShotSpotter uses sensor audio data to classify sound events as potential gunshots, triangulates each event's location, and alerts authorities.
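ShotSpotter's actual algorithms are proprietary, but the triangulation step can be illustrated with a minimal time-difference-of-arrival (TDOA) sketch. Everything below is made up for the demo: the sensor coordinates, the shot location, and the brute-force grid search are illustrative assumptions, not the deployed system's method.

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # metres per second in air at ~20 °C

def locate(sensors, arrival_times, grid_step=1.0, extent=60):
    """Brute-force TDOA multilateration: scan a grid of candidate
    points and keep the one whose predicted pairwise time differences
    of arrival best match the observed ones (least squares)."""
    pairs = list(itertools.combinations(range(len(sensors)), 2))
    best, best_err = None, float("inf")
    coords = [i * grid_step for i in range(-extent, extent + 1)]
    for x in coords:
        for y in coords:
            # Predicted arrival time at each sensor from candidate (x, y).
            t = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND
                 for sx, sy in sensors]
            # Compare predicted vs. observed time differences pairwise,
            # so the unknown emission time cancels out.
            err = sum(((t[i] - t[j]) - (arrival_times[i] - arrival_times[j])) ** 2
                      for i, j in pairs)
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Hypothetical sensors and a shot at (30, 40); arrival times are derived
# from straight-line distances purely for the demo.
sensors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
shot = (30.0, 40.0)
times = [math.hypot(shot[0] - sx, shot[1] - sy) / SPEED_OF_SOUND
         for sx, sy in sensors]
print(locate(sensors, times))  # recovers (30.0, 40.0) on this grid
```

Using pairwise differences rather than absolute arrival times matters: the sensors never know when the shot was fired, only when its sound reached them.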
💬 Incident #452: ChatGPT-Written Bug Reports Deemed "Nonsense" by White Hat Platform, Prompted Bans
- What happened? ChatGPT-generated responses submitted to the smart contract bug bounty platform Immunefi reportedly lacked the detail needed to diagnose technical issues, wasting the platform's time and prompting bans for the submitters.
Incident #453: Twitter's AI Moderation Tool Misidentified Rockets as Pornography
- What happened? Twitter's automated content moderation misidentified images of rocket launches as pornographic content, prompting incorrect account suspensions.
Incident #459: Firefighters Smashed Cruise AV's Front Window to Stop It from Running over Fire Hoses
- What happened? Local firefighters could only stop a Cruise AV from driving over fire hoses in use at an active fire scene by shattering its front window.
📎 New Developments
Older incidents that have new reports or updates.
| Original incident | New report(s) |
|---|---|
| Incident #112: Police Departments Reported ShotSpotter as Unreliable and Wasteful | |
| 💬 Incident #339: Open-Source Generative Models Abused by Students to Cheat on Assignments | |
| Incident #430: Lawyers Denied Entry to Performance Venue by Facial Recognition | |
| Incident #434: Sudden Braking by Tesla Allegedly on Self-Driving Mode Caused Multi-Car Pileup in Tunnel | |
| Incident #436: Tesla Driver Put Car on Autopilot Before Falling Asleep in Germany | |
| Incident #440: Louisiana Police Wrongfully Arrested Black Man Using False Face Match | |
| 💬 Incident #443: ChatGPT Abused to Develop Malicious Softwares | |
| Incident #448: AI-Powered VTuber and Virtual Streamer Made Toxic Remarks on Twitch | |
| 💬 Incident #449: Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support | |
| 💬 Incident #450: Kenyan Data Annotators Allegedly Exposed to Graphic Images and Text for OpenAI Contract | |
🗄 From the Archives
In every edition, we feature one or more historical incidents that we find thought-provoking.
In January, we received an influx of incidents involving OpenAI’s newly released ChatGPT. We compiled and analyzed the distinct ChatGPT incidents and derived categories of harm, including:
- Academic dishonesty
- Malware development
- Jailbreaking with ease
- Labeler content exposure
- Fake citations
- Quality assurance
A detailed analysis from one of our editors will be published shortly on our blog.
👇 Diving Deeper
- All new incidents added to the database in the last month, grouped by topic:
  - Generative models (e.g. ChatGPT): #443, #449, #450, #452, #455, #457
  - Social media: #447, #448, #453
  - Impactful errors: #444, #445, #446
  - Privacy & surveillance: #438, #441
  - Facial recognition: #439, #440
  - Robots & AI companions: #456, #458
  - Consumer fairness: #435, #437
  - Autonomous vehicles: #436, #459
  - Bias & discrimination: #454
  - AI-generated images: #451
- Explore clusters of similar incidents in the Spatial Visualization
- Check out the Table View for a complete view of all incidents
- Learn about alleged developers, deployers, and harmed parties on the Entities Page
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook
- Submit incidents to the database
- Contribute to the database’s functionality