Microsoft allegedly fires its AI Ethics Team

While Microsoft has stated that it is "committed to creating AI products and experiences in a safe and responsible manner," critics argue that self-regulation is insufficient to keep AI technology from causing social harm. "Self-regulation was never going to be sufficient," said Emily Bender, a professor of computational linguistics who studies ethical issues in natural language processing, "but I believe that internal teams working in concert with external regulation could have been a highly useful combination."

The increased use of AI technology has raised concerns among experts, who believe it requires regulation to avoid causing social problems. ChatGPT, a chatbot that lets users converse with an AI in natural language, was recently released by OpenAI and has quickly generated buzz in tech circles. Microsoft invested $10 billion in OpenAI, valuing the company at approximately $29 billion. Despite Microsoft's decision to lay off its AI ethics team, the company's Office of Responsible AI remains operational, as do two other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering.