According to the latest guidance from the US Office of Management and Budget, all federal agencies must appoint a senior officer and establish governance boards to oversee the AI systems they use.
Agencies will also be required to submit an annual report listing all artificial intelligence systems they use, related risks, and plans to mitigate them.
"We have directed all federal agencies to appoint an AI leader with experience, knowledge, and authority to oversee all artificial intelligence technologies they have to ensure responsible use," - quotes Vice President of the Office of Management and Budget Kamala Harris The Verge website.
The chief AI officer does not necessarily have to be a "political appointee," although that will ultimately depend on the agency's structure. All governance boards must be established by the summer of 2024.
These guidelines expand on the previously announced policy outlined in the Biden administration's executive order on artificial intelligence, which required federal agencies to establish security standards and increase the number of AI experts in government.
Some agencies began hiring even before the guidelines were announced: in February, the Department of Justice introduced Jonathan Mayer as the head of its AI division; he will lead a team of cybersecurity experts in determining how artificial intelligence can be used in law enforcement.
According to Shalanda Young, head of the Office of Management and Budget, the US government plans to hire 100 AI specialists by summer.
Federal agencies must also ensure that any AI system they deploy complies with precautions that "reduce the risks of algorithmic discrimination and provide transparency to the public about how the government uses artificial intelligence."
The office's press release provides several examples:
- People at the airport will be able to opt out of facial recognition without any delays or loss of place in line.
- If AI is used in the federal healthcare system to support critical diagnostic decisions, a human oversees the process and verifies the tool's results.
- When AI is used to detect fraud in government services, a human supervises significant decisions, and affected individuals can seek redress for harm caused by AI.
"If an agency cannot apply these guarantees, it must cease using the artificial intelligence system if the head cannot justify why it will increase security risks or rights in general, or create an unacceptable barrier to the agency's critical operations," the press release said.
According to the new guidelines, any government AI models, code, and data must be made public if they do not pose a risk to government activities.
Apart from targeted guidance like this, there are still no laws regulating artificial intelligence in the US, while the EU has already voted on its own rules, which are expected to come into force in May.