A McKinsey Global Institute study simulating the impact that Artificial Intelligence (AI) will have on the world economy found that
by 2030 “some 70 percent of companies might have adopted at least one type of AI technology”.
Given this projected scale of adoption, we all have a vested interest in ensuring AI is developed and implemented responsibly. This is particularly true for government procurement, which directly affects citizens’ safety and security.
For example, the recent White Paper on Responsible Artificial Intelligence in the Government of Canada identified the three objectives below for the application of AI for Smarter Government. Though articulated for Canada, these objectives are not country-specific and are likely on the minds of government officials worldwide.
- Applying AI to the delivery of services to the public
- Applying AI to help design policy and respond to risk
- Applying AI to the internal services of government
There are several areas of consideration in achieving these objectives, which I will cover in a two-part blog post. This first part addresses the three broad themes below: accountability, transparency, and privacy. The second part will discuss regulatory frameworks and procurement policy.
Accountability
Accountability, transparency, and privacy are overarching considerations for the use of AI by any organization, but they are particularly critical for public institutions. Government has a broader responsibility to be accountable to the public for its use of AI and for the frameworks, methodologies, and processes that underlie its implementation and monitoring. AI is not static; the best AI solutions constantly evolve as learning algorithms continuously process new data. To be accountable, government must be able to clearly articulate the algorithms it uses, how they function, and on what basis they evolve. This is a particularly challenging problem: algorithmic complexity often defies understanding, and the most advanced machine learning systems self-adjust and self-generate, producing new algorithms and code structures that are even harder for a human to understand.
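One plausible building block for this kind of accountability is an append-only audit trail that ties every model update to the data and code that produced it, so an agency can later explain what changed, when, and why. The sketch below is a minimal, hypothetical illustration; the `ModelUpdateRecord` structure, its field names, and the example values are my own assumptions, not any standard or mandated format.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical sketch: one audit record per model update, so an agency
# can later articulate what changed, when, and on what data.
@dataclass
class ModelUpdateRecord:
    model_name: str            # which deployed system was updated
    model_version: str         # version identifier after the update
    training_data_sha256: str  # fingerprint of the exact training data used
    code_commit: str           # revision of the training code
    rationale: str             # human-readable reason for the update
    timestamp: str             # when the update occurred (UTC, ISO 8601)

def fingerprint(data: bytes) -> str:
    """Content hash that ties a record to the exact data that trained the model."""
    return hashlib.sha256(data).hexdigest()

def log_update(record: ModelUpdateRecord, path: str = "model_audit.log") -> None:
    """Append the record as one JSON line; an append-only log supports later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage with made-up values:
record = ModelUpdateRecord(
    model_name="benefits-triage",
    model_version="2024.03.1",
    training_data_sha256=fingerprint(b"training data bytes would go here"),
    code_commit="a1b2c3d",
    rationale="Quarterly retrain on new intake data",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
log_update(record)
```

An audit trail like this does not make a complex model interpretable, but it does give oversight bodies a factual record of how and on what basis a system evolved.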
A case in point is the design of language translation systems by Google researchers. The translation algorithms appeared to have automatically developed an internal intermediary ‘language’ to assist in the translation process. Whether this accurately describes the process or outcome is itself in dispute, because the complexity of the evolving algorithms makes it very difficult to understand what is actually going on. For reference, see:
- Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System
- Google’s AI translation tool seems to have invented its own language
- No, Google Translate did not invent its own language called ‘interlingua’
Transparency
Transparency goes hand in glove with accountability. AI should be viewed as a capability, not a thing. As a capability, it will infuse virtually all software and hardware systems over time, often without the user’s knowledge. While the public may be less concerned about the extent, use, and implementation of AI in consumer devices and apps, a different standard applies to government. As public awareness of AI grows, so will the call for government to identify where, and in what systems and processes, it is using AI. It is important that government get ahead of this wave and develop the systems, processes, and policies needed to identify to the public which government initiatives and services are AI-infused.
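One way such disclosure could be operationalized is a machine-readable public register of AI-infused services. The sketch below is purely illustrative; the `AIRegisterEntry` fields and example values are my assumptions, loosely inspired by emerging algorithmic transparency registers, not any mandated schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sketch: one entry in a public register of AI-infused services.
# Field names are illustrative assumptions, not a mandated schema.
@dataclass
class AIRegisterEntry:
    service: str        # public-facing service or process
    uses_ai: bool       # whether AI is part of the service
    purpose: str        # what the AI component does
    decision_role: str  # "advisory" vs. "automated decision"
    human_review: bool  # whether a human reviews outcomes
    contact: str        # where the public can ask questions

entries = [
    AIRegisterEntry(
        service="Permit application triage",
        uses_ai=True,
        purpose="Ranks applications by estimated processing complexity",
        decision_role="advisory",
        human_review=True,
        contact="ai-register@example.gov",
    ),
]

# Publishing the register as JSON makes the disclosure machine-readable,
# so journalists, auditors, and citizens can query it directly.
print(json.dumps([asdict(e) for e in entries], indent=2))
```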
Privacy
Data is the foundation of AI. In the government environment, principles of accountability and transparency must clearly identify what data is being used, how it is sourced, how it is maintained, and how it is protected. Three primary buckets of data for public policy and services can be envisioned:
- Data knowingly provided to government by people;
- Open data gathered by scraping public internet sources (such as social media); and
- Third-party private data collection.
Data provided by the public to government must be treated with an extremely high level of privacy and, for the purposes of AI, subjected to unattributable aggregation. The public must also be made aware that their data may be used in AI-based systems. In addition, options should be made available to opt out of AI data use, and policies should be put in place to enable data to be ‘forgotten’. Data is not of equal quality across these three sources, and the data engineering processes used to increase data equivalency and remove bias must be clearly described and communicated. Data engineering is an often overlooked part of the AI pipeline; given how critical it is to ensuring the data is valid, data engineering techniques and processes should be subject to the same rigour, transparency, and accountability as the resulting AI implementation.
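To make ‘unattributable aggregation’ concrete, here is a minimal sketch of one common disclosure-control technique: releasing only grouped counts and suppressing any group smaller than a threshold k, so published statistics cannot be traced back to individuals. The function name, the threshold k=5, and the example records are my assumptions; production systems would set thresholds by policy or adopt formal methods such as differential privacy.

```python
from collections import Counter
from typing import Iterable

# Hypothetical sketch of unattributable aggregation: publish only grouped
# counts, and suppress any group too small to hide an individual.
# The threshold k=5 is an illustrative assumption, not a recommendation.
def aggregate_for_release(categories: Iterable[str], k: int = 5) -> dict:
    counts = Counter(categories)
    # Drop small cells: a count below k could single out individuals.
    return {category: n for category, n in counts.items() if n >= k}

# Example with made-up records: each entry is the region of one applicant.
records = ["north"] * 12 + ["south"] * 7 + ["island"] * 2
print(aggregate_for_release(records))  # {'north': 12, 'south': 7}; 'island' suppressed
```

Even a simple rule like small-cell suppression illustrates the broader point: the privacy protections applied to data must be explicit, documented, and reviewable, just like the AI systems built on top of them.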
Ultimately, AI remains a highly specialized knowledge domain, evolving at an extremely rapid pace. Developing techniques and processes that demonstrate accountability, transparency, and privacy for AI systems, evolution, and outcomes is critical. Though these elements are key to ensuring public faith in and acceptance of associated government systems and processes, they are not all-encompassing. In Part II of this blog post series, I’ll explore two additional roles for government in the consideration of AI: regulatory frameworks and procurement policy.