Assuming Bias and Responsible AI
There are plenty of examples of artificial intelligence and machine learning systems that made it into the news because of biased predictions and failures.
Here are a few examples of AI/ML gone wrong:
- Amazon built an AI recruiting tool that favored men over women for technical jobs
- Microsoft's chatbot "Tay" turned racist and sexist rather quickly
- A doctor at Jupiter Hospital in Florida called IBM's AI system for recommending cancer treatments "a piece of sh*t"
- Facebook's AI got someone arrested after incorrectly translating a post

The list of AI failures goes on…