In a previous post, Part I: 3 Things Governments Should Consider Before Adopting AI, I discussed the importance of considering accountability, transparency, and privacy in government procurement of Artificial Intelligence (AI). While these are critical elements, governments should weigh two additional elements in their AI procurement strategies: regulatory frameworks and procurement policy.
Regulatory oversight is not often considered in the same breath as AI. However, because of the matters identified in my previous post, the impact of AI usage on public welfare becomes a critical issue in which the role of government should be considered. AI regulatory issues are not well understood, and it is important to fully understand the 'tone' and the impact of any potential regulatory framework. For example, should it apply to how data is collected and used, not just in government but more broadly across the private sector? Alternatively, should such a framework be built around transparency and accountability principles? Since AI is not static but evolves continuously with the data, should testing systems and requirements be defined as part of this framework, both to ensure that biases do not creep into the data over time and to understand more completely how the algorithms themselves are evolving? Separately, regulatory frameworks that ensure the fair and equitable use (and availability) of AI may be important to consider.
In addition, there is the broader question of who would design and implement the regulatory framework. Careful consideration of regulatory frameworks is important to ensure that development is not overly hampered, that AI implementation is effectively encouraged, and that the broader objectives of openness and transparency are also achieved.
Procurement oversight and transparency are critical roles within government. AI can be expected to infuse most of the systems that governments procure in the near future. As a result, the development of methods and procurement policies to understand the implementation of AI, specify its requirements, certify its use, and facilitate accountability, transparency, and privacy is important to consider.
Specification of AI requirements is a particularly challenging area because it is the performance or capabilities of the system, not necessarily the algorithms themselves, that must be the focus of the specification. In addition, since any embedded AI changes over the lifetime of the procured system, the assessment of performance requirements and any certifications of use must be 'renewed' or re-validated periodically throughout that lifetime.
There are two primary pathways to achieving these objectives: the first is vendor certification as part of the procurement terms and conditions and life cycle support requirements; the second is proactive government (or third-party) testing and validation. Neither approach need exist in isolation, and most procurements would likely require a combination of the two.
Given the nature and implications of AI, there are undoubtedly many challenges to address during procurement, many of which apply to both the public and private sectors. In government procurement, regulatory frameworks and procurement policies sit at the top of this list. To begin and execute the procurement process effectively, governments must explore and initiate conversations on what they would like regulatory frameworks to address and how. Similarly, governments must consider the appropriate procurement policies to apply.
Clarity of expectations and simplicity of implementation will be important to ensure that vendors fully understand the approach and potential impact.
The government procurement system is already highly complex, and AI procurement oversight must be carefully managed to limit any increase in that complexity.