With the 2027 Zero Trust Strategy implementation deadline approaching and artificial intelligence (AI) adoption expanding across federal agencies, defense leaders confront an important transition. They need to upgrade traditional systems and move decades of siloed data into infrastructure that can power AI at the speed today’s missions and warfighters demand.
I recently joined Daniela Fayer at GovExec TV to discuss this challenge and how defense agencies are aligning these ambitious data modernization efforts with mission outcomes. The conversation centered on how making AI operationally effective, not just experimental, means taking a different approach to how we think about data.
Here are some takeaways from our discussion.
Missions define technology decisions
Defense agencies have a data problem that is more difficult to fix than most people realize: systems that do not talk to each other, decades' worth of data that was never labeled consistently, and mission priorities that have shifted over time. You cannot flip a switch and make that data AI-ready.
What has changed is the approach. Instead of trying to modernize everything at once, defense agencies are now disaggregating their most critical missions into component workflows. They are asking which missions matter most, what data powers those missions, where that data lives, and which missions are most amenable to AI modalities. This is not limited to large language models, either. We are seeing natural language processing for one mission, computer vision for another, and clustering algorithms somewhere else.
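To make the modality point concrete, here is a minimal sketch of one of those modalities, clustering, applied to synthetic data. The features and the "sensor report" framing are invented for illustration, not drawn from any actual mission dataset.

```python
# Minimal sketch: clustering as one AI modality among several.
# All data here is synthetic and the feature framing is hypothetical;
# a real mission workflow would start from operational data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
# Synthetic "sensor report" features, e.g., signal strength and duration.
reports = np.vstack([
    rng.normal(loc=[0.2, 5.0], scale=0.1, size=(50, 2)),  # pattern A
    rng.normal(loc=[0.8, 1.0], scale=0.1, size=(50, 2)),  # pattern B
])

# Group reports into candidate activity patterns without any labels.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(reports)
print(model.labels_[:10])  # cluster assignment for the first 10 reports
```

The same mission-decomposition exercise might point another workflow toward natural language processing or computer vision instead; the point is that the modality follows from the data and the operational need, not the other way around.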
The agencies making real progress are matching the AI modality to the operational need, a principle that became clear at the National Defense Industrial Association’s (NDIA) Global Defense Technologies Hackathon that Maximus sponsored earlier this year. Defense officials came in with actual mission problems, and nearly 500 people worked on real use cases with real data. The teams that did well understood the mission context before they started coding.
Traditional systems call for data-level transformation
One of the most persistent obstacles to AI readiness is how traditional systems store and control data. As I shared in the discussion, traditional system challenges intersect directly with zero trust architecture (ZTA) requirements and broader government cybersecurity solutions. On the surface, these might seem like separate problems, but they are deeply connected.
Traditional IT security controls access to data at the system boundary. The new model flips this by enforcing controls at the object level, so data can move securely across networks without relying on perimeter defenses.
This approach aligns directly with zero trust principles while making the same data more useful for AI. A properly labeled data object matters for coalition operations, where data sharing with partners needs to happen securely and efficiently. It also matters for AI development, because algorithms trained on well-labeled data perform better than those trained on ambiguous or inconsistent datasets.
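To make the object-level model concrete, here is a minimal sketch of an attribute-based check against a labeled data object. The labels, clearance levels, and releasability rule are illustrative assumptions, not any specific DoD or coalition standard.

```python
# Minimal sketch of object-level access control: each data object carries
# its own labels, and a policy check runs per object rather than at a
# network perimeter. Labels, clearances, and the rule itself are all
# illustrative assumptions, not a specific DoD or coalition standard.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class DataObject:
    payload: str
    classification: str                  # e.g., "UNCLASSIFIED", "SECRET"
    releasable_to: frozenset = field(default_factory=frozenset)

@dataclass(frozen=True)
class Subject:
    clearance: str
    nationality: str

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2}

def can_access(subject: Subject, obj: DataObject) -> bool:
    """Grant access only if clearance dominates the object's label
    and the subject's nationality is on the releasability list."""
    return (LEVELS[subject.clearance] >= LEVELS[obj.classification]
            and subject.nationality in obj.releasable_to)

report = DataObject("sensor summary", "SECRET", frozenset({"USA", "GBR"}))
print(can_access(Subject("SECRET", "GBR"), report))        # True
print(can_access(Subject("CONFIDENTIAL", "USA"), report))  # False
```

Because the policy travels with the object rather than the network, the same labels that gate coalition sharing can also filter and curate training data for AI pipelines.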
Architecture reflects operational reality
What is becoming clear is that the department should adopt a "yes, and" mentality rather than an "either/or" approach. Different mission contexts rely on different data architectures, including data lakes, warehouses, lakehouses, fabrics, and mesh, and each has its place. The key is not choosing one architecture but integrating multiple approaches in ways that balance security, interoperability, and speed. This is especially important in joint and coalition environments, where different organizations operate under varying constraints but need to share information in real time.
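One way to picture that "yes, and" integration in code: a thin access layer that fans a single mission-level query out across several backends. The interface and the stand-in sources below are hypothetical, not a reference to any particular platform.

```python
# Minimal sketch of a "yes, and" access layer: several backends (standing
# in for a lake, a warehouse, a mesh domain, etc.) exposed through one
# query interface. The Protocol and sources are hypothetical stand-ins.
from typing import Iterable, Protocol

class DataSource(Protocol):
    def query(self, mission_tag: str) -> Iterable[dict]: ...

class LakeSource:
    """Stand-in for raw files in a data lake."""
    def query(self, mission_tag: str) -> Iterable[dict]:
        return [{"source": "lake", "mission": mission_tag, "raw": True}]

class WarehouseSource:
    """Stand-in for curated tables in a warehouse."""
    def query(self, mission_tag: str) -> Iterable[dict]:
        return [{"source": "warehouse", "mission": mission_tag, "raw": False}]

def federated_query(sources: list[DataSource], mission_tag: str) -> list[dict]:
    """Fan one mission-level query out across every backend."""
    results: list[dict] = []
    for source in sources:
        results.extend(source.query(mission_tag))
    return results

print(federated_query([LakeSource(), WarehouseSource()], "logistics"))
```

The design choice that matters is the shared interface: each organization keeps the architecture that fits its constraints, while mission users see one coherent query surface.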
Federal agencies see better results when they work with partners who understand that solutions should work across technical boundaries and support day-to-day operations without disruption.
The importance of balanced metrics for success
Measuring progress in data modernization cannot rely on a single metric. Defense agencies are developing balanced scorecards that track inputs, outputs, and outcomes across multiple dimensions.
How much mission-critical data meets minimum metadata standards? How many mission threads have been fully mapped down to the data level? Which organizations have adopted the standards you have published? These questions give defense programs a much better picture than asking whether you have hit some arbitrary percentage of “data modernized.”
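As a toy illustration of the first question, the sketch below computes what share of sample records meets a minimum metadata standard. The required fields and the records themselves are invented for illustration.

```python
# Toy sketch of one scorecard input: what share of mission-critical
# records carries every required metadata field? Field names and the
# sample records are invented for illustration.
REQUIRED_FIELDS = {"classification", "origin", "mission_tag", "timestamp"}

records = [
    {"classification": "SECRET", "origin": "sensor-7",
     "mission_tag": "logistics", "timestamp": "2025-01-03"},
    {"classification": "UNCLASSIFIED", "origin": "manual-entry"},  # incomplete
]

def metadata_coverage(records: list[dict]) -> float:
    """Fraction of records meeting the minimum metadata standard."""
    complete = sum(1 for r in records if REQUIRED_FIELDS <= r.keys())
    return complete / len(records) if records else 0.0

print(f"{metadata_coverage(records):.0%} of records meet the standard")
```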
Industry partnerships solve the innovation gap
Defense agencies are partnering with tech companies and AI specialists to develop national defense solutions that drive meaningful impact. The hackathons happening across the department and with industry partners are an effective form of that partnership. When you give teams real problems and real data, the solutions that emerge are often strong enough to earn continued funding and move into operational use.
At Maximus, we see defense mission support progressing rapidly as agencies work to operationalize AI. Our work focuses on helping defense organizations address these transitions through data strategy, systems integration, and partnerships that deliver measurable results.
Discover more
For more insight into how defense agencies are making AI operationally ready, not just experimentally interesting, watch the full interview.