A new report from Microsoft outlines the steps the company took last year to deploy its AI products responsibly.
Microsoft says it did a lot for responsible AI in inaugural transparency report
Microsoft released its first Responsible AI Transparency report explaining the steps it’s taken to put up guardrails around its AI products.


In its Responsible AI Transparency Report, which mainly covers 2023, Microsoft touts its achievements around safely deploying AI products. The annual AI transparency report is one of the commitments the company made after signing a voluntary agreement with the White House in July last year. Microsoft and other companies promised to establish responsible AI systems and commit to safety.
Microsoft says in the report that it created 30 responsible AI tools in the past year, grew its responsible AI team, and required teams making generative AI applications to measure and map risks throughout the development cycle. The company notes that it added Content Credentials to its image generation platforms, which puts a watermark on a photo, tagging it as made by an AI model.
The company says it has given Azure AI customers access to tools that detect problematic content like hate speech, sexual content, and self-harm, as well as tools to evaluate security risks. These include new jailbreak detection methods, which were expanded in March this year to cover indirect prompt injections, where malicious instructions are embedded in data ingested by the AI model.
It is also expanding its red-teaming efforts, both through in-house red teams that deliberately try to bypass safety features in its AI models and through red-teaming applications that allow third-party testing before new models are released.
However, its red-teaming units have their work cut out for them. The company’s AI rollouts have not been immune to controversies.
When Bing AI first rolled out in February 2023, users found the chatbot confidently stating incorrect facts and, at one point, teaching people ethnic slurs. In October, users of the Bing image generator found they could use the platform to generate images of Mario (or other popular characters) flying a plane toward the Twin Towers. Deepfaked nude images of celebrities like Taylor Swift circulated on X in January, reportedly originating from a group sharing images made with Microsoft Designer. Microsoft eventually closed the loophole that allowed those pictures to be generated. At the time, Microsoft CEO Satya Nadella called the images “alarming and terrible.”
Natasha Crampton, chief responsible AI officer at Microsoft, said in an email to The Verge that the company understands AI is still a work in progress, and so is responsible AI.
“Responsible AI has no finish line, so we’ll never consider our work under the Voluntary AI commitments done. But we have made strong progress since signing them and look forward to building on our momentum this year,” Crampton says.