Microsoft is sharing the methods they use when building artificial intelligence systems, detailed in a framework they call the Responsible AI Standard. The Standard is meant to serve as a guide for more responsible AI development.
As artificial intelligence continues to grow and evolve, so too must the procedures and guidelines that govern its development. AI is becoming more advanced, more versatile, and more volatile, and Microsoft understands this well. While the law may not be keeping pace with this useful yet potentially dangerous technology, Microsoft, as a leader in its advancement, recognizes they must set their own rules if AI is to remain an asset rather than a liability. And Microsoft is sharing with the world their plan for how AI should be developed going forward: a systematic checklist they call the Responsible AI Standard.
The Responsible AI Standard is a framework that guides Microsoft in producing AI responsibly. It is a living document, first drafted in 2019 and evolving ever since. The most recent iteration, the second version, is the product of the combined efforts of Microsoft's researchers, engineers, and policy experts. Microsoft is sharing it in the hope that other developers will not only learn from their experience but also provide feedback to shape further iterations.
The Standard is built around six core values: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Each of these values carries its own set of goals that AI development teams at Microsoft must meet before a product is considered safe for public use. These goals include, but are not limited to, impact assessment, disclosure of AI interaction, quality of service, failures and remediations, Privacy Standard compliance, and Accessibility Standards compliance. Together, these values and goals are what protect users from a wide range of failures.
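As a purely illustrative sketch, the Standard's structure described above, where each value carries goals a team must sign off on before release, can be modeled as a simple checklist. The value and goal names come from this article; the grouping of goals under values and all code names are assumptions, not Microsoft's actual tooling.

```python
# Toy release checklist modeled on the Standard's structure: each core
# value carries goals that must all be met before a product ships.
# The pairing of goals with values below is illustrative only.
from dataclasses import dataclass, field

@dataclass
class Value:
    name: str
    goals: dict[str, bool] = field(default_factory=dict)  # goal -> signed off?

    def met(self) -> bool:
        return all(self.goals.values())

standard = [
    Value("accountability", {"impact assessment": False}),
    Value("transparency", {"disclosure of AI interaction": False}),
    Value("fairness", {"quality of service": False}),
    Value("reliability and safety", {"failures and remediations": False}),
    Value("privacy and security", {"Privacy Standard compliance": False}),
    Value("inclusiveness", {"Accessibility Standards compliance": False}),
]

def release_gate(values: list[Value]) -> bool:
    """A product is cleared only when every goal under every value is met."""
    return all(v.met() for v in values)
```

In this toy model, a single unmet goal under any value blocks the release gate, mirroring the article's point that every goal must be reached before a product is considered safe for public use.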
When technical difficulties do arise, Microsoft has been quick to act. In March 2020, an academic study showed that some speech-to-text systems produced roughly twice as many errors for Black and African American speakers as for white speakers. In response, Microsoft hired a sociolinguist to help resolve the issue. The challenge also allowed Microsoft to collect more data on how to improve future speech-to-text programs, lessons that were fed back into the Standard.
User abuse is another factor Microsoft must account for. While the features of Azure AI's Custom Neural Voice are impressive, Microsoft understands that they can also be easily misused. Microsoft has set up strict policies, in the form of a Transparency Note and a Code of Conduct, to ensure that users creating a synthetic voice are using their own, so that no one else's voice can be replicated for impersonation. The same has been done for facial recognition systems.
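As a hypothetical illustration of that kind of policy gate, a synthesis service might refuse any request for a voice without verified consent on file. None of the names below are Azure AI APIs; this is a sketch of the safeguard's logic, not Microsoft's implementation.

```python
# Hypothetical consent gate for voice synthesis. All names are
# illustrative; they are not part of Azure AI's actual API.
class ConsentError(Exception):
    pass

def synthesize_speech(text: str, voice_id: str,
                      consent_registry: dict[str, bool]) -> bytes:
    """Synthesize `text` in the voice `voice_id` only if consent is on file."""
    if not consent_registry.get(voice_id, False):
        raise ConsentError(f"No verified consent on file for voice '{voice_id}'")
    return render_audio(text, voice_id)

def render_audio(text: str, voice_id: str) -> bytes:
    # Stand-in for a real text-to-speech engine.
    return f"[audio:{voice_id}] {text}".encode()
```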
Microsoft remains committed to following their Standard for responsible AI.