A recent CNN report criticized Microsoft’s use of AI for news aggregation on MSN, citing questionable editorial decisions.
The CNN report highlights instances where the AI selected stories that were either inaccurate or used inappropriate language.
Examples include a story that falsely claimed President Joe Biden fell asleep during a moment of silence for wildfire victims, and an obituary that referred to an NBA player in a derogatory manner.
The report argues that Microsoft’s automated system continues to display or generate content containing offensive language and false information without clear accountability.
This is hardly new: in 2020, Microsoft began replacing MSN’s human news editors with automated systems that rewrote headlines and swapped images, with questionable results. The company nevertheless planned to increase automation, and apparently has.
MSN takes content from other publishers and republishes it on its own site. Exactly how Microsoft selects this content and these publishers is unclear, as is the extent to which automated systems are involved.
But it’s not just MSN.com where Microsoft’s AI information tools are failing in ways that threaten society: A recent study by AlgorithmWatch and AI Forensics, in collaboration with Swiss broadcasters SRF and RTS, found that Microsoft’s Bing Chat gave incorrect answers to questions about upcoming elections in Germany and Switzerland.
The AI chatbot provided misleading information, such as inaccurate poll results and incorrect names of party candidates. A Microsoft spokesperson responded to the study by saying that the company is committed to improving its services and has made significant progress in the accuracy of Bing Chat’s responses. However, this does not address the fundamental structural issues with large language models that lead to such errors.
For a company that claims to be committed to the responsible and safe use of AI, this is a pretty poor track record.