How AI Could Play a Role in Making Government Work

Thirty years ago, policymakers and pundits debated how the internet might transform government. Today, those same questions are being asked of artificial intelligence (AI)—a technology that is both rapidly evolving and undeniably here to stay.

Public discourse around AI has largely centered on its impact on the workforce, its potential to drive both economic growth and inequality, and its environmental footprint. Yet one crucial dimension has received comparatively little attention: how AI could strengthen governance.

Can AI ensure federal funds reach the right recipients? Will it enhance accessibility and responsiveness of public services? Could it yield significant cost savings? Or introduce expensive new complexities? These questions are all part of the conversation about program integrity—the technical name for efforts that cut back on waste and fraud and make sure government programs work.

This report explores how AI can improve program integrity and public sector performance, examines the trade-offs between cost and efficiency, and identifies the key challenges that governments must navigate to realize AI’s full promise.  

Four Ways AI Can Make Government Work

While AI technology is evolving by the day, there are already numerous ways that it can improve government functions. For example:

1. Preventing Improper Payments and Detecting Fraud

Right now, the government often uses a reactive “pay and chase” model, where officials work to catch fraud after it happens. By identifying suspicious or risky transactions, AI can help the federal government shift to a pre-payment detection model for combating fraud, stopping bad actors before money goes out the door.1

To do that, AI can help agencies build layered defenses that integrate broad datasets and behavior-monitoring tools. Predictive analytics can uncover hidden trends and anomalies, allowing agencies to detect fraud and make payments more efficient.2
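As a toy illustration of the pre-payment anomaly screening described above, a simple statistical check might hold payments whose amounts deviate sharply from a payee's history. The threshold, data, and single-feature design here are hypothetical; real systems would combine many behavioral signals, not just amount:

```python
from statistics import mean, stdev

def flag_anomalies(history, new_payments, z_threshold=3.0):
    """Flag payments whose amount deviates sharply from historical norms.

    A stand-in for the richer predictive models the report describes.
    Flagged payments would be held for review before money goes out.
    """
    mu = mean(history)
    sigma = stdev(history)
    flagged = []
    for payment in new_payments:
        z = abs(payment - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            flagged.append(payment)  # hold for pre-payment review
    return flagged

# Hypothetical payee history: routine payments near $1,000
history = [980, 1010, 995, 1020, 1005, 990, 1015, 1000]
print(flag_anomalies(history, [1002, 9800, 1011]))  # only the $9,800 outlier is flagged
```

A production screen would replace the z-score with a trained model over many features (payee behavior, timing, counterparties), but the shape is the same: score each transaction before payment, and route high-risk ones to a human reviewer.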

The private sector has already demonstrated what’s possible. J.P. Morgan, for instance, has used AI for payment screening since at least 2021.3 AI-driven behavioral monitoring has also been used to complement traditional identity verification—a model the public sector could adapt and expand.4

2. Transforming Tax Administration

AI is already playing a role in US tax administration, from audit selection and call center routing, to automated data retrieval and information entry.5 And there’s room for AI to do even more. According to the Government Accountability Office, AI could help reduce the nearly $600 billion annual tax gap. It would do this by improving the process for who gets audited, identifying recipients of refundable tax credits who are more likely to owe additional taxes, and improving business partnership tax compliance, especially among complex business entities.6 For example, the IRS has piloted AI tools to target audits for large partnerships and earned income tax credit recipients.7 These early efforts suggest a broader opportunity for AI to assist in navigating fast-evolving tax rules and emerging asset classes.8

Yet, innovation must be matched by accountability. Under the Department of Government Efficiency (DOGE), the IRS explored using AI, not only to select taxpayers to audit, but also to manage the audits—without adequate oversight.9 Questions around transparency of these efforts and potential bias underscore the need for careful governance of AI deployment in such high-stakes areas.10

3. Enhancing Customer Service in Public-Facing Programs

Rapidly advancing AI-powered virtual assistants and chatbots could ease the burden on strained agencies like the IRS and Social Security Administration. Other countries offer compelling examples: in Australia, an AI assistant launched in 2016 resolved 88% of taxpayer inquiries on first contact and helped reduce call center volume by nearly 10%.11

These tools offer promise, but they cannot solve everything. Many benefit programs serve people who may be less comfortable with digital interfaces, such as seniors. AI solutions must be carefully tested to ensure they actually improve the user experience rather than replace human support where it is most needed.

4. AI Can Help Identify Outdated or Duplicative Regulations

Reports suggest DOGE is deploying an AI tool to analyze over 200,000 federal regulations, with the goal of eliminating half by next January.12 Already, more than 1,000 regulations at the Department of Housing and Urban Development (HUD) have reportedly been reviewed through this approach. Previously, in 2020, the Trump administration used AI tools to review legacy regulations at the Department of Health and Human Services (HHS).

However, while AI can highlight candidates for revision, it cannot substitute for human legal and policy expertise. Ultimately, decisions about regulatory changes must comply with existing law—and in many cases, require congressional input.

What Are the Potential Budgetary Savings and Costs?

In short: benefits could be in the tens to hundreds of billions of dollars.

The federal government loses an estimated $233 billion to $521 billion annually due to fraud.13 Even a modest 10% reduction in fraud could translate into tens of billions in yearly savings—surpassing the entire budgets of many federal agencies. While such gains may be optimistic in the short term, the long-term savings are likely substantial.
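The arithmetic behind that claim is straightforward; the 10% reduction is an illustrative assumption, not a measured result:

```python
# GAO's estimated range of annual federal fraud losses, in billions of dollars
fraud_low, fraud_high = 233, 521

# Illustrative 10% reduction, as discussed above
reduction = 0.10
savings_low = fraud_low * reduction
savings_high = fraud_high * reduction

print(f"Potential annual savings: ${savings_low:.1f}B to ${savings_high:.1f}B")
# Potential annual savings: $23.3B to $52.1B
```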

In 2024, the Treasury Department reported saving $4 billion by preventing and recovering fraudulent payments.14 Much of this success was due to advanced technologies and data-driven strategies, including AI.15 Notably, AI-enabled identification of Treasury check fraud led to an estimated $1 billion in recoveries.

Another federal agency, participating in an intergovernmental roundtable on AI, reported using sophisticated AI algorithms to analyze large datasets and uncover fraudulent patterns. The agency identified roughly one million potentially fraudulent events and prevented more than $10 billion in improper payments.16 If similar outcomes were replicated across dozens of agencies, the cumulative savings could reach into the hundreds of billions.

The private sector is seeing similar returns. J.P. Morgan reported saving $1.5 billion—around 1% of its annual revenue—through AI-powered fraud prevention, operational efficiencies, and related improvements.17

It’s important to note, however, that AI deployment will require upfront investment and sustained human involvement. Technology, infrastructure, and training all require capital. The GAO has emphasized that fighting improper payments with AI requires high-quality data and a workforce prepared to engage with AI systems.18 Notably, federal data collection is uneven across agencies, and improvements have been slow and inconsistent.19 AI-related training might include instruction on automation tools, data handling, algorithm development, and robust verification processes.20 In the private sector, this kind of training can range from a few hundred dollars to tens of thousands per employee.

Importantly, AI is not a substitute for human oversight. The GAO warns that keeping a “human in the loop” is essential, not only to correct inevitable errors, but also to maintain trust in an imperfect system.21

Recent legislative efforts have aimed to fund AI tools for the federal government. The House-passed version of the One Big Beautiful Bill Act included $25 million for deploying AI to reduce improper payments in Medicare Parts A and B.22 The provision, however, did not make it into the final bill that was signed into law. The final bill did include approximately $1 billion for AI tools within the Department of Defense, including $200 million earmarked for financial audit capabilities.23

Five Challenges to Using AI in Government

While AI can significantly help government functions and save potentially hundreds of billions of dollars, there are still numerous challenges. For example:

1. The Volume and Quality of Federal Data

AI systems thrive on curated, standardized information, but the sheer scale and uneven quality of federal data pose significant challenges. With trillions in spending and millions of transactions processed annually, deploying AI to combat improper payments requires not only advanced tools but a deliberate, strategic effort to curate and standardize data. A verifiable information chain from source to model is essential for interoperability, allowing data to flow seamlessly and reliably into an AI system. Without it, models will be trained on flawed information: if undetected fraud is labeled as legitimate, for instance, the system may learn to ignore real threats. Complicating matters further, data standards and quality vary widely across agencies.
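A minimal sketch of the standardization step described above, assuming a hypothetical common record schema (the field names and rules are illustrative only): records that fail validation would be corrected at the source rather than fed to a model.

```python
from datetime import date

# Hypothetical cross-agency schema: every payment record must carry
# these fields, correctly typed, before it reaches an AI model.
REQUIRED_FIELDS = {
    "payee_id": str,
    "amount_cents": int,
    "payment_date": date,
    "agency_code": str,
}

def validate_record(record):
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"bad type for {field}: expected {expected_type.__name__}")
    return problems

good = {"payee_id": "P-001", "amount_cents": 125000,
        "payment_date": date(2024, 6, 1), "agency_code": "TREAS"}
bad = {"payee_id": "P-002", "amount_cents": "1250.00"}  # wrong type, two fields missing

print(validate_record(good))  # []
print(validate_record(bad))
```

In practice each agency would map its own formats into the shared schema; the point is that validation happens upstream, so the model never trains on malformed or inconsistent records.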

2. Privacy and Data-Sharing Constraints

A 2024 report by the House Bipartisan AI Task Force flagged privacy protections as a major barrier to AI adoption, particularly for sensitive areas like Social Security and taxpayer records.24 Some experts suggest that agencies share AI algorithms or detection methodologies instead of raw data to safeguard privacy while still collaborating.25 Others have suggested updating the Privacy Act of 1974 to reflect modern data-sharing needs—though any such reform would require a careful balancing of civil liberties and operational effectiveness.26

3. Balancing Identity Security with User Experience

Agencies face the dual challenge of securing access to federal systems while ensuring a smooth experience for legitimate users. Overly burdensome authentication protocols risk excluding those who most need assistance—such as low-income individuals, older adults, or those with limited digital literacy.27 Effective AI tools must therefore strike a balance: enhancing identity verification without creating barriers that undermine trust or access to services.

4. Human Oversight to Guard against Error and Bias

AI systems are not immune to mistakes. False positives can delay or deny rightful benefits—such as Social Security payments or small business loans—while false negatives allow fraud to slip through.28 Human oversight is essential to verify cases flagged by algorithms, though it adds cost and complexity. Additionally, AI models may inadvertently perpetuate bias. International bodies, such as the EU Agency for Fundamental Rights, emphasize the need for transparency about data used to train AI systems to prevent discriminatory outcomes and uphold fundamental rights.29

5. Political Headwinds in the Wake of DOGE

Recent political backlash surrounding DOGE may cloud broader AI efforts. In our recent poll, a majority of voters were unhappy with DOGE’s execution, even though they supported its goals.30 Skepticism surrounding DOGE could discourage lawmakers from pursuing further AI integration, even in areas like fraud prevention where bipartisan support might otherwise exist.

Conclusion

Artificial intelligence is fast becoming integral to how consumers and businesses operate. To harness AI effectively in government functions, agencies must invest in data quality, safeguard privacy, ensure equitable user access, and retain human oversight. These program integrity efforts, by their very nature data-intensive and oversight-driven, could serve as a proving ground for responsible AI adoption—offering a path to reduce fraud, improve efficiency, and strengthen trust in public institutions.