Microsoft says it did a lot for responsible AI in inaugural transparency report
Microsoft released its first Responsible AI Transparency report explaining the steps it’s taken to put up guardrails around its AI products.


In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements around safely deploying AI products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House in July last year. Microsoft and other companies promised to establish responsible AI systems and commit to safety.
Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.
The company says it’s given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. This includes new jailbreak detection methods, expanded in March this year to cover indirect prompt injections, where malicious instructions are embedded in the data an AI model ingests.
It’s also expanding its red-teaming efforts, including both in-house red teams that deliberately try to bypass safety features in its AI models and red-teaming applications that allow third-party testing before new models are released.
However, its red-teaming units have their work cut out for them. The company’s AI rollouts have not been immune to controversies.
When Bing AI first rolled out in February 2023, users found the chatbot confidently stating incorrect facts and, at one point, teaching people ethnic slurs. In October, users of the Bing image generator found they could use the platform to generate photos of Mario (or other popular characters) flying a plane toward the Twin Towers. In January, deepfaked nude images of celebrities like Taylor Swift made the rounds on X, reportedly originating from a group sharing images made with Microsoft Designer. Microsoft ended up closing the loophole that allowed those pictures to be generated. At the time, Microsoft CEO Satya Nadella said the images were “alarming and terrible.”
Natasha Crampton, chief responsible AI officer at Microsoft, says in an email to The Verge that the company understands AI is still a work in progress, as is responsible AI.
“Responsible AI has no finish line, so we’ll never consider our work under the Voluntary AI commitments done. But we have made strong progress since signing them and look forward to building on our momentum this year,” Crampton says.
