Analyzing AI Use in Government Agencies
At the 2024 TAF Coalition Conference, panelists discussed the potential impact of artificial intelligence (AI) on fraud against the government. Experts in the field have warned for years that AI will power new and more costly fraud schemes by scaling up existing methods of perpetrating fraud and by making new schemes possible. However, even as fraudsters have scaled up their operations, government agencies have been exploring how they can use AI in the fight against fraud. In the early 2000s, an official at the Government Accountability Office lauded AI as a “powerful tool” for strengthening the government’s detection and monitoring of fraud because AI programs can flag abnormal patterns, helping agencies identify fraud as it happens and making it easier to recover damages.
In 2010, less than 1% of government fraud identification occurred because of tech-based oversight. Now, emerging and established technology enables government agencies to take advantage of the wealth of incoming data in electronic records, contracts, emails, and more to prevent and detect fraud. Using technology to identify fraud has been a daunting task because fraud is a latent variable, meaning that it is not directly observable and can only be detected through the presence of other, indirect variables. AI makes that task easier. Using a data-up approach, in which a model is fed thousands of data points and taught to identify relevant patterns, government agencies can use their data to train algorithms to recognize the indirect variables that may signal the presence of fraud. Further, AI tools that focus on natural language processing in administrative review processes could help employees at the SEC, CFTC, and other agencies sort through whistleblower tips and identify the most promising cases much more quickly.
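To make the data-up idea concrete, here is a minimal, purely illustrative sketch in Python. The feature names (claim_amount, vendor_age_days, duplicate_invoices) and the tiny dataset are invented for illustration and do not reflect any agency's actual system; the point is simply that a model learns indirect indicators of fraud from labeled historical records and then scores new records for human review.

```python
# Minimal sketch of a "data-up" fraud model: learn indirect indicators from
# labeled historical records, then score new records for human review.
# All feature names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: each row is a claim or transaction,
# and "is_fraud" is the label produced by past investigations.
history = pd.DataFrame({
    "claim_amount":       [1200, 85000, 430, 67000, 900, 72000],
    "vendor_age_days":    [2100, 14, 3600, 30, 1800, 21],
    "duplicate_invoices": [0, 3, 0, 2, 0, 4],
    "is_fraud":           [0, 1, 0, 1, 0, 1],
})

X, y = history.drop(columns="is_fraud"), history["is_fraud"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Score incoming records; the highest-risk ones would be routed to reviewers.
new_claims = pd.DataFrame({
    "claim_amount":       [950, 69000],
    "vendor_age_days":    [2500, 18],
    "duplicate_invoices": [0, 3],
})
risk = model.predict_proba(new_claims)[:, 1]
for claim_id, score in enumerate(risk):
    print(f"claim {claim_id}: fraud risk {score:.2f}")
```

In practice an agency would rely on far more data and features, and flagged cases would feed into existing review workflows rather than being treated as conclusions on their own.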
Many agencies have plans for future AI use, but some, like the SEC and EPA, already take advantage of the technology to make their work more efficient. For example, the Treasury Department recovered $375 million through AI-powered fraud detection in fiscal year 2023 alone. The Centers for Medicare & Medicaid Services approved a human-reviewed AI that aids in making coverage determinations and detecting healthcare fraud based on past billings, and the IRS recently unveiled an AI program that focuses on “detect[ing] tax cheating, identify[ing] emerging compliance threats, and improv[ing] case selection tools to avoid burdening taxpayers with needless ‘no-charge’ audits.” The SEC has implemented a suite of AI programs similar to those in use at CMS and the IRS; the components of the suite work in tandem to decrease the number of whistleblower claims that require manual review.
While using AI and other technology to monitor for fraud has the benefit of increased operational efficiency, the issues that haunt private-sector AI development also plague the public sector. Some key challenges include:
Retaining talent to develop and maintain AI. As AI becomes more common in daily life, private-sector companies recruit AI developers at higher rates than government agencies, making it difficult for the government to find the talent to develop AI models. Some argue that model development should be outsourced to the private sector, while others assert that keeping AI development in-house keeps the algorithms better aligned with the goals of the agency that creates them. The difficulty of recruiting AI professionals also highlights the need for the government to stretch its resources to ensure employees are trained to use and update AI programs once they are in place.
Preventing obfuscation of government decisions. When government agencies and public officials make decisions, they are typically required to explain their reasoning. Officials relying on AI need to understand algorithmic processes well enough to evaluate and explain the choices the AI makes. If the government cannot explain those choices, public support for tech-savvy leadership may erode.
Clearly communicating accuracy rates or errors in algorithms. Government agencies will need to pay close attention to the variables going into their models. Unintended consequences and biased training data can lead to disastrously discriminatory effects, especially as more officials turn to AI to make decisions. Each AI program should have checks in place to flag when the model is falling into a feedback loop. AI feedback loops occur when a model learns from its own outputs, which can lead to “filter bubbles” or to a model that perpetuates and amplifies bias with every subsequent output. A specific example is a fraud-detection model that misses new schemes because it associates fraud only with previously identified ones, as in the sketch below.
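The following toy sketch is an assumed, simplified setup rather than any agency's actual pipeline. It shows how that blind spot can persist: a detector trained only on past schemes never flags a new one, and naively retraining on the model's own flags reproduces the same gap.

```python
# Toy illustration of an AI feedback loop in fraud detection.
# A detector trained on past schemes misses a new one; retraining on the
# model's own flags just reinforces the blind spot. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One synthetic feature: known fraud clusters near +5, legitimate activity
# near 0, and a hypothetical new scheme near -5 the model has never seen.
known_fraud = rng.normal(5, 1, size=(200, 1))
legit       = rng.normal(0, 1, size=(400, 1))
new_scheme  = rng.normal(-5, 1, size=(200, 1))

X0 = np.vstack([known_fraud, legit])
y0 = np.array([1] * 200 + [0] * 400)
model = LogisticRegression().fit(X0, y0)
print("new-scheme detection rate:", model.predict(new_scheme).mean())  # ~0.0

# Feedback loop: retrain using the model's own flags as labels. The unflagged
# new-scheme cases are labeled "not fraud", so the blind spot persists.
X1 = np.vstack([known_fraud, new_scheme, legit])
model_v2 = LogisticRegression().fit(X1, model.predict(X1))
print("after naive retraining:", model_v2.predict(new_scheme).mean())  # still ~0.0
```

A basic safeguard is to audit what the model does not flag, for example by sampling unflagged cases for manual review instead of letting the model's own outputs become its next round of training labels.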
In the future, the bad guys won’t be alone in leveraging AI; government agencies will continue to implement AI programs to help them in the fight against fraud. Major questions remain about the use and scope of these programs, and many of the problems are the same as those that face the private sector. Government AI programs are quickly becoming a critical component of fraud detection and prevention. Those in the whistleblower community must monitor the progress of these programs and their role in the public-private relationship that makes whistleblower programs successful.
This piece was written by Rosie Tomiak, the Public Interest Advocacy Fellow at The Anti-Fraud Coalition, and edited by MaryAnne Hamilton, an Attorney at Miller Law Group.