
From AI policy to public value: why procurement matters more than most governments realize


March 23, 2026

Artificial intelligence is moving quickly into government operations. Across U.S. state governments, agencies are exploring AI for tasks such as document handling, vendor and user Q&A, cost and revenue analysis, traffic monitoring, and anomaly detection. The conversation often focuses on the technology itself: what AI can do, how fast it can be deployed, and what efficiencies it might generate. Our latest research, published in the International Journal of Physical Distribution and Logistics Management, points to a different issue that deserves more attention. The real challenge is not only adopting AI. It is governing AI in ways that protect public value.
That is where procurement becomes central.
Too often, procurement is treated as the final step in the process, the function that buys the tool after others have decided what is needed. In practice, procurement plays a much broader role. It is one of the main mechanisms through which public agencies can translate broad policy aspirations into operational requirements, supplier obligations, contract terms, and monitoring processes. In other words, procurement is not just supporting AI adoption. It is governing it.
Our study shows that this governance role starts with policy, but it cannot end there. State AI policies often emphasize well-established principles of transparency, accountability, fairness, privacy, security, and human oversight. Those principles matter, of course, but they only become meaningful when they are converted into things that agencies can actually specify, evaluate, and enforce. Without governance and monitoring mechanisms, policy remains just another paper document full of written aspirations. With them, it becomes actionable.
For practitioners, one implication is clear: AI procurement should not be managed as a "conventional" technology purchase. Traditional sourcing often assumes that specifications can be defined up front and performance can be assessed largely before award. But AI does not work that way! This technology is evolving: AI systems can be updated, reconfigured, retrained, or repurposed over time. That means governance has to extend across the lifecycle of the system, not just the moment of contract award. So, agencies need contractual rights and internal routines that allow them to review changes, require re-testing, examine logs and documentation, and respond when incidents or risks emerge. 
A second implication is that responsible AI adoption depends strongly on cross-functional capabilities. Procurement cannot do this alone, and neither can IT (or any single agency!). Our findings point to six capability areas that need to work together: (i) data governance and security; (ii) bias management and explainability; (iii) human oversight and workflow integration; (iv) workforce readiness; (v) procurement-stakeholder coordination and compliance; and (vi) technology and supplier-ecosystem readiness. So what happens if agencies lack these capabilities? They may still buy AI, but they will struggle to govern it well (and to obtain its full benefits!).
A third implication is that governments should pay close attention to evidence. Public value in AI is not demonstrated through a vendor's sales pitch alone. It is demonstrated through auditable traces, including logs, documentation, testing records, incident reports, and review artifacts, that show whether transparency, fairness, accountability, privacy, and security are actually upheld in use. This is especially important in the public sector, where legitimacy depends not only on outcomes but also on the ability to explain and justify decisions.
So, what is the main takeaway? Governments do not create responsible AI simply by issuing policy statements or piloting new tools. They create it when procurement, IT, legal, security, and program leaders work together to convert public values into enforceable governance arrangements. That work is less visible than the technology itself, but it is what makes AI adoption defensible, oriented toward value creation, and credible over time.
If you want to read the full study, you can access it at the following link.
