AI Incident Roundup – March ’23
Read our month-in-review newsletter recapping new incidents in the AI Incident Database and looking at the trends.
🗄 Trending in the AIID
In March, the deluge of deepfake-related incidents continued as the public applied generative AI in creative and sometimes destructive ways. In one recent example, high schoolers created a deepfake video of a principal making violent and racist threats, seeding fears in the community and damaging the principal’s reputation.
Another trend is a series of enforcement actions related to privacy, including several arising from the European Union's General Data Protection Regulation (GDPR). Recently, the Italian Data Protection Authority issued bans on ChatGPT and Replika’s AI-powered chatbot, citing the lack of age-verification mechanisms and other concerns about personal data collection. In another privacy-related case, a court ruled in favor of Uber drivers who alleged that the platform violated their data rights in several instances.
Know of a privacy-related AI incident not yet covered by the AI Incident Database? Help us cover the world of AI harms by submitting incidents yourself!
🗞️ New Incidents
LLMs & Chatbots
- #498: GPT-4 Reportedly Posed as Blind Person to Convince Human to Complete CAPTCHA
- #503: Bing AI Search Tool Reportedly Declared Threats against Users
- #504: Bing Chat's Outputs Featured in Demo Video Allegedly Contained False Information
Deepfake
- #488: AI Generated Voices Used to Dox Voice Actors
- #492: Canadian Parents Tricked out of Thousands Using Their Son's AI Voice
- #493: TikTok User Videos Impersonated Andrew Tate Using AI Voice, Prompting Ban
- #494: Female Celebrities' Faces Shown in Sexually Suggestive Ads for Deepfake App
- #495: High Schoolers Posted Deepfaked Video Featuring Principal Making Violent Racist Threats
- #496: Male College Freshman Allegedly Made Porn Deepfakes Using Female Friend's Face
- #499: Parody AI Images of Donald Trump Being Arrested Reposted as Misinformation
Generative AI
- #490: Clarkesworld Magazine Closed Down Submissions Due to Massive Increase in AI-Generated Stories
- #500: Online Scammers Tricked People into Sending Money Using AI Images of Earthquake in Turkey
Bias & Discrimination
- #489: Workday's AI Tools Allegedly Enabled Employers to Discriminate against Applicants of Protected Groups
- #502: Pennsylvania County's Family Screening Tool Allegedly Exhibited Discriminatory Effects
Other
- #491: Replika's AI Experience Reportedly Lacked Protection for Minors, Resulting in Data Ban
- #497: DoNotPay Allegedly Misrepresented Its AI "Robot Lawyer" Product
- #501: Length of Stay False Diagnosis Cut Off Insurer's Payment for Treatment of Elderly Woman
👇 Diving Deeper
- Explore clusters of similar incidents in the Spatial Visualization
- Check out the Table View for a complete view of all incidents
- Learn about alleged developers, deployers, and harmed parties on the Entities page
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook
- Submit incidents to the database
- Contribute to the database’s functionality