Defense organizations are increasingly exploring how artificial intelligence (AI) can boost mission performance, improve decision-making, and reduce manual processes. However, a key insight stood out during the Disruptive by Design: Empowering Emerging Leaders to Transform Government with AI panel at the 2025 AFCEA Alamo ACE conference: technology is advancing faster than the infrastructure, culture, and processes needed to use it effectively.
The session, moderated by Lieutenant General (Ret.) Mary O’Brien, Air Force, brought together technologists and defense leaders to examine the realities of AI adoption across missions. Jericho Gregory, Lead Architect for Cyber and Research & Development (R&D) at Maximus, joined fellow panelists to highlight where agencies are making progress and where readiness gaps are slowing momentum.
Elevating technical expertise in emerging leaders
Lt Gen (Ret.) O’Brien opened the discussion by emphasizing the importance of early-career talent in modernizing mission systems. She shared an example from Project Maven, recalling how overlooking a junior officer’s recommendation to use graphics processing units (GPUs) slowed progress. Her message to the room was clear: listen early, listen often, and create space for rising leaders to contribute.
“Imagine if we had listened to him more closely, how much further along we would be today,” she noted.
She invited each panelist to share guidance for early-career technologists. Gregory encouraged emerging talent to build small demonstrations using whatever tools they had, even if it was just PowerShell on a government-issued machine. He noted that working examples move stakeholders from “I don’t think this works” to “If you can do that, can you also do this?” thereby turning doubt into momentum.
Panelists also emphasized:
- “Finding a way to yes,” by working through policy or procedural barriers.
- Staying focused on warfighter needs to anchor innovation to mission-specific tasks.
- Starting small and supporting communities where experimentation is encouraged.
Improving data readiness for AI
Panelists agreed that fragmented, outdated, and hard-to-access data remain major barriers to AI adoption.
Across the panelists’ experience, several challenges stood out:
- Siloed systems with inconsistent data dictionaries
- Platforms built with stability as the primary goal rather than balancing it with timely data access
- Slow cycles for cleaning, approving, and securely sharing data
- Limited interoperability across systems
Even the most advanced algorithms cannot deliver reliable results if they cannot access clean, structured, and contextualized data. This point reinforced the discussion’s central takeaway: organizational readiness, not technical capability, is the real determinant of AI success.
Exploring the next wave of AI capabilities
Gregory highlighted several capabilities that have the potential to impact decision speed, accuracy, and mission-level automation.
One development he highlighted was the Model Context Protocol (MCP), a standard for how large language models (LLMs) interact with data sources and tools. In his words, MCP is essentially “a REST API for REST APIs.” He explained that this provides models with a consistent way to access the information and applications they need, so they can gather context autonomously rather than relying solely on user-provided inputs.
Gregory noted that this change supports several mission advantages:
- Greater trust and fewer hallucinations as models access authoritative data rather than relying on training-set assumptions
- More sophisticated automation, including form completion, dataset retrieval, or procedural checks
- A foundation for agentic AI, capable of taking well-bounded actions in parallel, cutting manual, spreadsheet-based tasks
Gregory also explained how natural language interfaces could let operators of cyber and mission systems “talk to their DNS logs” or request threat summaries conversationally, made possible by pairing MCP-enabled data access with intuitive, speech-based interaction.
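The tool-access pattern described above can be illustrated with a minimal sketch. This is not the real MCP SDK: the names (`ToolRegistry`, `query_dns_logs`) and the mock DNS data are hypothetical, and a real MCP server would also handle discovery, schemas, and authorization. The sketch only shows the core idea: a model gathers context by calling registered tools rather than depending solely on user-pasted inputs.

```python
# Illustrative sketch of an MCP-style tool registry (all names are
# hypothetical, not part of any real MCP implementation).
from dataclasses import dataclass
from typing import Any, Callable, Dict


@dataclass
class Tool:
    name: str
    description: str          # a model reads this to decide when to call the tool
    handler: Callable[..., Any]


class ToolRegistry:
    """Exposes data sources to a model through one uniform call interface."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, tool: Tool) -> None:
        self._tools[tool.name] = tool

    def call(self, name: str, **kwargs: Any) -> Any:
        # A real MCP server would also validate arguments against a schema.
        return self._tools[name].handler(**kwargs)


# Mock authoritative data source standing in for DNS logs.
MOCK_DNS_LOGS = [
    {"domain": "example.mil", "count": 412},
    {"domain": "suspicious.example", "count": 7},
]


def query_dns_logs(min_count: int = 0):
    """Return log entries at or above a query threshold."""
    return [row for row in MOCK_DNS_LOGS if row["count"] >= min_count]


registry = ToolRegistry()
registry.register(Tool("query_dns_logs", "Filter DNS log entries", query_dns_logs))

# Given the tool description, the model could issue this call on its own
# to answer a conversational question about high-volume domains:
result = registry.call("query_dns_logs", min_count=100)
print(result)  # [{'domain': 'example.mil', 'count': 412}]
```

The design point is that the model never needs bespoke glue code per data source; each system exposed through the registry becomes reachable through the same call convention.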
Building responsible and trustworthy AI systems
As the discussion turned to risk, O’Brien highlighted why responsible deployment is non-negotiable. She shared examples where confidence scores misled senior leaders, and only human validation revealed inaccuracies.
Gregory expanded on how to build trust into AI-driven systems:
- Ground models in deterministic truth, including math, physics, procedures, and domain rules.
- Use the right tool for the job, combining smaller, specialized algorithms with LLMs instead of relying on one model to do everything.
- Adopt a “mixture-of-experts” approach, where models handle different components of a task in a transparent, auditable sequence.
Panelists stressed the importance of human-in-the-loop safeguards, transparent data provenance, and cross-model verification as defense agencies transition to more autonomous systems.
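The grounding and human-in-the-loop ideas above can be sketched in a few lines. This is an assumption-laden illustration, not a described implementation from the panel: the 500 km range rule, the function names, and the mock model claim are all invented for the example. It shows one pattern, checking a model's numeric claim against a deterministic domain rule and keeping an audit trail, with failures routed to a human reviewer.

```python
# Hypothetical sketch: validate a model's numeric claim against a
# deterministic rule before it reaches a decision-maker. All names,
# thresholds, and the mock claim are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Verdict:
    claim: str
    passed: bool = False
    audit: List[str] = field(default_factory=list)  # transparent, reviewable trail


def deterministic_check(value_km: float) -> bool:
    # Stand-in for math/physics/procedural rules: in this mock scenario,
    # a valid sensor range must fall between 0 and 500 km.
    return 0 <= value_km <= 500


def verify_model_claim(claim: str, value_km: float) -> Verdict:
    verdict = Verdict(claim=claim)
    verdict.audit.append(f"checking {value_km} km against range rule")
    verdict.passed = deterministic_check(value_km)
    verdict.audit.append(
        "passed deterministic check" if verdict.passed
        else "FAILED: escalate to human review"
    )
    return verdict


# Mock LLM output containing a numeric claim.
verdict = verify_model_claim("sensor range is 750 km", 750.0)
print(verdict.passed)  # False -> route to a human reviewer, not to the commander
```

Routing each component of a task through small, inspectable checks like this, rather than trusting one opaque model end to end, is one way to realize the auditable sequence the panelists described.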
Looking ahead
The panel reinforced that the future of AI in defense will be built by people. Experienced mentors, mission-focused technologists, and rising leaders who see possibilities others may overlook will all play a key role in creating a responsible AI environment for defense missions.
As a Platinum sponsor, Maximus joined discussions across AFCEA Alamo ACE. The Empowering Emerging Leaders to Transform Government with AI panel’s themes reflect the modernization challenges we support, including preparing data for AI, supporting cyber operations, and helping teams adopt new tools responsibly.
Interested in learning more about AI and data readiness for defense? Read “What defense leaders need to know about AI-ready data” from Maximus Data Services lead Duncan McCaskill.