Agentic AI is rapidly becoming a top priority for industry leaders working with the federal government to modernize large-scale health programs. Two leaders at the forefront of exploring agentic AI's potential are Corinna Dan, Managing Director of Federal Health at Maximus, and Karen Hay, Population & Public Health Industry Advisor at Salesforce. They recently joined the Clickthrough podcast to unpack the opportunities ahead, sharing actionable insights on how agentic AI can reshape federal health program delivery and considerations agencies can put to use now to succeed in the agentic era.
Embrace the art of the possible to streamline population health programs
Federal health agencies have unique opportunities to leverage agentic AI to transform population health program operations. Dan noted specific applications across major health agencies where federal healthcare solutions that include autonomous AI systems can drive significant improvements:
- At the CDC, agentic AI can accelerate response to emerging health threats through real-time outbreak surveillance, autonomous data ingestion from labs and hospitals, and automated grant reviews.
- For CMS, AI agents offer potential for boosting program integrity by autonomously detecting and preventing payment errors or fraud, validating claims, and streamlining eligibility processes.
- At NIH, agentic platforms have the potential to accelerate biomedical research through autonomous literature reviews and automated grant application triage.
Both experts are inspired by what becomes possible as multi-agent systems enable collaborative AI that can be put to work across programs and agencies. In particular, Hay envisions AI agents orchestrating more coherent, responsive systems through agentic orchestration and data fabrics, driving standardization and improvement across the federal health landscape.
Prioritize quality data to fuel agentic AI workflows
Drawing on clinical and epidemiological expertise, the experts agree: for agentic AI to deliver meaningful results, federal health programs need diverse, high-quality, and securely integrated data. Hay noted crucial data considerations:
- Data integrity and accuracy are paramount—information must be accurate, complete, and free from bias to ensure AI-driven recommendations are both reliable and equitable.
- Security and compliance remain critical, especially with personal health information (PHI) and personally identifiable information (PII), requiring processing and storage on secure platforms like Salesforce Government Cloud that meet HIPAA and FedRAMP standards.
By addressing these considerations, agencies will be better able to utilize rich data to fulfill multiple operational goals:
- Operational data from contact centers can drive process automation and quality assurance.
- Case management and beneficiary data support eligibility verification, proactive outreach, and gap identification.
- Policy and regulatory data present a unique opportunity for automating compliance checks across countless policy manuals and legislative updates.
- Training and feedback data help ensure AI accuracy through human-in-the-loop validation, allowing for model fine-tuning to maintain trustworthy recommendations.
Prepare federal health agency teams to successfully leverage agentic AI tools
Even with operational goals defined and data quality and governance standards in place, health programs need the know-how to succeed with agentic AI. Dan outlined a structured, five-part framework to ensure workforce readiness for AI-enabled tools:
- Establish foundational governance and compliance documentation with clear policies aligned with OMB, NIST, and FedRAMP standards, plus role-based access controls and audit trails.
- Implement comprehensive workforce training and upskilling, starting with AI literacy programs and progressing to hands-on simulations for complex multi-agent applications.
- Create human-in-the-loop protocols with human oversight of critical AI-driven actions and clear escalation paths.
- Build a culture of trust and transparency through communication, feedback loops, and program-level AI ethics officers.
- Maintain continuous monitoring through annual compliance audits, AI tool certification, and ongoing staff upskilling.
Ensure alignment between agentic AI policy and implementation
Hay pointed out that effective agentic AI deployments ensure that federal or agency-level AI policies are reflected in implementation. A proactive approach brings together policy experts, program managers, technology staff, and AI developers for cross-functional collaboration. Hay recommends:
- Joint policy-technology working groups that meet regularly to translate policy requirements into technical specifications, helping to ensure AI models adhere to regulations from the outset
- A laser focus on mission outcomes, with governance frameworks establishing clear metrics that tie AI performance to program goals
- Human-centered design and explainable AI, so program staff understand why the AI makes specific recommendations, trust its outputs, and can defend decisions during audits
- Iterative, phased rollouts with embedded policy validation, allowing real-world testing and course corrections before scaling to minimize risk while ensuring optimal AI utilization
Use agentic AI to enhance, not replace, human capabilities
Finally, the experts emphasized that federal health agencies will be most successful by prioritizing a human-centered focus for agentic AI deployments. Dan noted that Maximus' approach follows industry best practices, ensuring AI solutions prioritize data governance, privacy, and security while incorporating appropriate human oversight throughout the AI lifecycle. This human-led approach proves especially critical as federal programs face staffing challenges and increased demand for automation, ensuring dedicated civil servants can continue meeting the needs of people who rely on vital federal health programs.
Learn more
Clickthrough to hear the full podcast conversation.