

US Government Accountability Office’s Testimony on Data Quality and AI, Explained



The Government Accountability Office (GAO) testimony before the Joint Economic Committee highlights a critical challenge facing the federal government: how to leverage artificial intelligence to combat fraud and improper payments while ensuring data quality and workforce readiness. This analysis examines the intricate relationship between data quality, skilled personnel, and AI implementation in government settings, drawing insights from the GAO's extensive research and recommendations.


The Magnitude of the Problem: Fraud and Improper Payments


The federal government faces staggering financial losses due to fraud and improper payments. According to GAO estimates, fraud costs taxpayers between $233 billion and $521 billion annually, based on fiscal year 2018 to 2022 data. Since fiscal year 2003, cumulative improper payment estimates by executive branch agencies have totalled approximately $2.8 trillion. The scale of this problem demonstrates why innovative solutions like AI are being considered.


Improper payments—defined as payments that should not have been made or were made in incorrect amounts—are concentrated in five key program areas that account for about 75% ($121 billion) of all estimated improper payments reported for fiscal year 2024. These figures underscore the urgent need for more effective detection and prevention mechanisms.


How Data Quality Drives AI Effectiveness


At the heart of the GAO testimony is a fundamental principle in artificial intelligence implementation: the quality of data directly determines the effectiveness of AI systems.


This relationship is encapsulated in the common phrase among AI developers: "garbage in, garbage out".
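
To make the principle concrete, the following minimal Python sketch (which uses synthetic data and the scikit-learn library, neither of which appears in the testimony) trains the same model twice: once on clean labels and once after randomly flipping 30% of the training labels. The drop in test accuracy is "garbage in, garbage out" in miniature.

  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Synthetic stand-in for labelled payment data; no real records are used.
  X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
  X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

  # Simulate poor data quality by flipping 30% of the training labels.
  rng = np.random.default_rng(0)
  noisy = y_train.copy()
  flip = rng.random(noisy.shape[0]) < 0.30
  noisy[flip] = 1 - noisy[flip]

  clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
  noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)
  print("test accuracy, clean labels:", round(clean_model.score(X_test, y_test), 3))
  print("test accuracy, noisy labels:", round(noisy_model.score(X_test, y_test), 3))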


Data Quality Requirements for AI Success


For AI systems targeting fraud and improper payments, several data quality factors emerge as essential:

  1. Labelled Training Data: Machine learning systems require properly categorised historical payment data (accurate, improper without fraud, or fraudulent); a brief labelling sketch follows this list. Incorrect labelling leads to unreliable results.

  2. Data Integrity: The testimony highlights concerns about "data poisoning," a process whereby malicious actors alter training data to compromise AI system behaviour.

  3. Ground Truth Data: The GAO emphasises that "solid, reliable 'ground truth' data" is necessary for AI tools to function effectively. Without this foundation, even sophisticated algorithms cannot produce trustworthy results.

  4. Data Standardisation: Government data varies significantly in quality and standards across agencies, creating challenges for unified AI applications.
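
As a rough sketch of what properly labelled data involves in practice, the snippet below encodes the three label categories described above and a simple record-level validation check. All field names (payment_id, amount, paid_on) and label strings are hypothetical, not drawn from any agency schema.

  from dataclasses import dataclass
  from datetime import date

  # Hypothetical label vocabulary mirroring the testimony's three categories.
  VALID_LABELS = {"accurate", "improper_non_fraud", "fraudulent"}

  @dataclass
  class PaymentRecord:
      payment_id: str
      amount: float
      paid_on: date
      label: str

  def validate(record: PaymentRecord) -> list:
      """Return a list of data-quality problems found in one record."""
      problems = []
      if record.label not in VALID_LABELS:
          problems.append(f"unknown label: {record.label!r}")
      if record.amount <= 0:
          problems.append("non-positive amount")
      if record.paid_on > date.today():
          problems.append("payment dated in the future")
      return problems

  # A deliberately misspelt label is caught before it can pollute training data.
  print(validate(PaymentRecord("P-001", 1250.00, date(2024, 3, 1), "fraudulnt")))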


Consequences of Poor Data Quality


The testimony outlines specific risks when AI systems operate with inadequate data quality:


  1. False Negatives: Systems may fail to detect actual improper payments, allowing fraud to continue undetected.

  2. False Positives: Legitimate payments might be erroneously flagged as improper, potentially delaying or denying rightful payments to beneficiaries and small businesses.

  3. Efficiency Loss: If errors become too numerous, agencies could spend more time identifying AI's mistakes than they save compared to traditional detection methods (a worked example follows).
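
A back-of-the-envelope calculation shows how quickly false positives can dominate when improper payments are rare. All rates below are assumed purely for illustration; the testimony provides no such figures.

  # Illustrative rates only; none of these numbers comes from the GAO testimony.
  payments = 1_000_000
  base_rate = 0.01            # assumed share of payments that are improper
  sensitivity = 0.95          # assumed share of improper payments the model flags
  false_positive_rate = 0.05  # assumed share of legitimate payments wrongly flagged

  improper = payments * base_rate
  legitimate = payments - improper
  true_positives = improper * sensitivity             # 9,500
  false_positives = legitimate * false_positive_rate  # 49,500

  flagged = true_positives + false_positives
  print(f"payments flagged for review: {flagged:,.0f}")                   # 59,000
  print(f"share of flags that are real: {true_positives / flagged:.1%}")  # ~16%

Under these assumed rates, even a seemingly modest 5% false positive rate means roughly five out of six flagged payments are legitimate, which is exactly the efficiency loss the GAO warns about.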


Workforce Challenges in AI Implementation


The second critical pillar for successful AI deployment is having personnel with appropriate skills and expertise. The GAO identifies several significant workforce challenges:


  1. Expertise Gap: There is a "severe shortage of federal staff with AI expertise" despite the growing need for these skills.

  2. Compensation Issues: Current federal digital staff compensation is not competitive with private sector offerings, making recruitment difficult.

  3. Hiring Process: Many qualified candidates "may not be willing to endure the lengthy federal hiring process," creating recruitment bottlenecks.

  4. Advanced Skills Requirement: Agencies particularly need staff with advanced skills, suggesting a focus on graduate-level education for the AI workforce.


The testimony notes that these workforce challenges have been persistent, with GAO having "identified mission-critical gaps in federal workforce skills and expertise in science, technology, engineering, and mathematics" for decades.


Governance Frameworks for Responsible AI Use


To address both data quality and workforce challenges, the GAO has developed a comprehensive AI Accountability Framework with 31 key practices. This framework emphasises:


  1. Organisational Governance: Entities should define clear goals, roles, and responsibilities while demonstrating values that foster trust.

  2. Data Governance: Five key practices help entities use appropriate data for developing AI models, including documenting data sources and ensuring appropriate data incorporation into models.

  3. Human Oversight: Maintaining a "human in the loop" is vital for ensuring oversight of data and processes. AI should complement, not replace, human judgment in fraud detection (a routing sketch follows this list).

  4. Stakeholder Engagement: Organisations should "engage stakeholders with diverse perspectives to mitigate risks" in AI implementation.
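
One way to read the "human in the loop" practice is as a triage rule: the model prioritises cases, but a person always makes the final adverse decision. The thresholds and queue names in this sketch are hypothetical, not part of the GAO framework.

  # Hypothetical triage: the model scores a payment, a human decides.
  # Thresholds are illustrative and would need agency-specific calibration.
  def route_payment(risk_score: float) -> str:
      if risk_score >= 0.90:
          return "hold for priority human review"  # never an automatic denial
      if risk_score >= 0.50:
          return "queue for routine human review"
      return "pay now; sample for post-payment audit"

  for score in (0.97, 0.62, 0.10):
      print(f"{score:.2f} -> {route_payment(score)}")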


Strategic Recommendations for Improvement


The GAO testimony offers several concrete recommendations to enhance AI capabilities for fraud prevention:


  1. Legislative Action: Congress should consider making permanent the Social Security Administration's authority to share full death data with Treasury's Do Not Pay system.

  2. Analytics Center: Establishing a permanent analytics center of excellence focused on fraud and improper payments could centralise AI expertise and tools.

  3. Digital Services Academy: Creating an academy similar to military academies could help train future workers with the necessary AI skills.

  4. Data Access Improvement: Agencies need better access to data for fraud indicator analysis, as identified in GAO's 2023 survey on fraud risk management.


Balancing Technology with Human Expertise


A recurring theme throughout the testimony is the need to balance technological capabilities with human expertise. The GAO notes that "AI does not replace the professional judgment of experienced staff in detecting potentially fraudulent activities".


While AI can process large volumes of data rapidly, human intelligence remains essential for:


  1. Contextual Understanding: Interpreting patterns within their proper context

  2. Judgment Application: Determining appropriate actions based on AI findings

  3. Technology Selection: Choosing the right tools for specific fraud detection scenarios

  4. Ethical Oversight: Ensuring AI systems operate within ethical and legal boundaries


Conclusion


The GAO testimony provides a framework for understanding the dual foundations necessary for successful AI implementation in government fraud prevention: high-quality data and a skilled workforce. Without addressing both elements, AI initiatives risk falling short of their potential or, worse, creating new problems through false positives and missed fraud.


As agencies continue exploring AI applications for fraud detection and prevention, the principles outlined in this testimony offer critical guidance for ensuring that these technologies serve their intended purpose while maintaining public trust in government systems.
