AI in Government: Balancing Innovation with Risk and Accountability

Narrowing the National Security Exception to Federal AI Guardrails

OMB’s 2025 AI Integration Memo: A Shift Towards Expedited Government AI Adoption

In an effort to prioritize the integration of artificial intelligence into governmental functions, the Office of Management and Budget (OMB) issued a set of memorandums in April 2025. These memos replace the prior 2024 guidelines from the Biden administration, shifting the approach from cautious implementation to accelerated, ambitious adoption of AI technologies. This change aligns with the Trump administration’s objectives to reshape federal operations through AI, encompassing areas such as layoff identification, communication monitoring, and adjustments to government contracts.

Despite these differences, the 2025 memos retain key risk management procedures from the 2024 guidance, underscoring that innovation should not come at the expense of oversight. Agencies are still required to conduct AI impact assessments, test systems before deployment, allow for public input, and offer appeals for adverse impacts. Transparency measures first initiated under the Trump administration, such as maintaining inventories of AI use cases, also remain in place. The 2025 memo expands the inventory criteria to include evaluating whether the underlying data is fit for purpose and whether the use case yields cost savings.

However, the 2025 memo has notable omissions. It no longer directs agencies to forgo AI use when the risks outweigh the benefits, and it removes references to mitigating bias and assessing impacts on minorities and underserved communities. This change, coupled with the administration’s rollback of diversity, equity, and inclusion (DEI) initiatives, may hinder efforts to assess AI’s societal impacts. The memo narrows risk monitoring to privacy, civil rights, and civil liberties, excluding broader discriminatory impacts, and it preserves agency discretion to waive risk management protocols under certain conditions.

AI Oversight and National Security: Bridging the Regulatory Gap

The 2025 OMB memo sets some foundational standards for AI adoption, yet discrepancies with national security protocols could pose challenges. Under Biden, AI regulations were divided between general government use and national security systems, the latter subject to separate, less stringent oversight. The Trump administration’s ongoing review of the National Security Memorandum (NSM) presents a chance to harmonize these standards with the 2025 OMB memo.

Transparency remains a concern, particularly for national security applications. While the OMB memo requires publication of use-case inventories and risk management waiver justifications, the NSM does not mandate public disclosure. This lack of transparency obscures the extent of AI’s role in national security and compliance with safeguards.

The NSM’s limited guidance on AI’s negative impacts is also problematic. Unlike the OMB memo, which offers a framework for appealing AI decisions, the NSM remains silent, missing opportunities for accountability when privacy and civil rights are infringed. Similarly, broad waiver provisions under both sets of rules could allow agencies to sidestep compliance altogether, making stricter congressional oversight necessary.

Congressional Role in Ensuring AI Accountability

Intelligence Community Directive 505 exempts certain publicly available AI uses from key risk management practices, creating potential risks to both security and individual rights. This inconsistency underscores the need for legislative action to establish comprehensive AI regulations for national security, ensuring proper authorization, evaluation, and transparency.

Congress should enhance scrutiny over AI expenditures and enshrine transparency and accountability measures within intelligence and defense budgets. Aligning the standards across different domains while ensuring robust oversight will be crucial to responsibly harness AI’s potential in government operations.