Every US federal agency will now be required to have a senior leader overseeing all the AI systems it uses, part of the government's effort to ensure that AI in the public service is used safely.
Every US federal agency must hire a chief AI officer
The Office of Management and Budget will require all federal agencies to hire chief AI officers to oversee the use of AI as part of new policies released Thursday.


Vice President Kamala Harris announced the new Office of Management and Budget (OMB) guidance in a briefing with reporters and said that agencies must also establish AI governance boards to coordinate how AI is used within each agency. Agencies will also have to submit an annual report to the OMB listing all the AI systems they use, the risks those systems pose, and how they plan to mitigate those risks.
“We have directed all federal agencies to designate a chief AI officer with the experience, expertise, and authority to oversee all AI technologies used by that agency, and this is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government, who are specifically tasked with overseeing AI adoption and use,” Harris told reporters.
The chief AI officer does not necessarily have to be a political appointee, though it depends on the federal agency’s structure. Governance boards must be created by the summer.
This guidance expands on policies previously outlined in the Biden administration’s AI executive order, which required federal offices to create safety standards and to bring more AI talent into government.
Some agencies began hiring chief AI officers even before today’s announcement. The Department of Justice announced Jonathan Mayer as its first CAIO in February. He will lead a team of cybersecurity experts in figuring out how to use AI in law enforcement.
The US government plans to hire 100 AI professionals by the summer, according to OMB Director Shalanda Young.
Part of the responsibility of agencies’ AI officers and governance committees is to monitor their AI systems frequently. Young said agencies must submit an inventory of the AI products they use. If an AI system is considered too “sensitive” to include on the list, the agency must publicly explain why it was excluded. Agencies also have to independently evaluate the safety risks of each AI platform they use.
Federal agencies also have to verify that any AI they deploy meets safeguards that “mitigate the risks of algorithmic discrimination, and provide the public with transparency into how the government uses AI.” The OMB’s fact sheet gives several examples, including:
- When at the airport, travelers will continue to have the ability to opt out of TSA facial recognition without any delay or losing their place in line.
- When AI is used in the federal healthcare system to support critical diagnostic decisions, a human being oversees the process to verify the tools’ results and avoid disparities in healthcare access.
- When AI is used to detect fraud in government services, there is human oversight of impactful decisions, and affected individuals have the opportunity to seek remedy for AI harms.
“If an agency cannot apply these safeguards, the agency must cease using the AI system, unless agency leadership justifies why doing so would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations,” the fact sheet reads.
Under the new guidance, any government-owned AI models, code, and data should be released to the public unless they pose a risk to government operations.
The United States still has no comprehensive federal law regulating AI. The AI executive order provides guidelines on how to approach the technology for government agencies under the executive branch. While several bills addressing aspects of AI have been introduced in Congress, there has been little movement toward actually legislating the technology.
Correction March 29, 2024, 11:50 AM ET: The White House updated its fact sheet after publication to note that travelers will “continue to” be able to opt out of TSA facial recognition.