By Jigyasa Sharma & Dr. Brandie Nonnecke
In the midst of the pandemic, over a million Californians’ unemployment claims remained unprocessed as of October 2020.[1] The main cause of this backlog was the time-consuming process of verifying claimants’ identities. To address this challenge, the California Employment Development Department (EDD) partnered with ID.me, a federally certified identity verification platform, to apply computer vision and biometrics to speed up identity verification and claims processing.[2] As a result of using artificial intelligence (AI), the EDD has drastically increased its ability to process claims.
The California state government isn’t the only government entity interested in harnessing the power of AI. A survey by MeriTalk found that 76 percent of government and industry executives and IT decision makers agreed the pandemic has escalated the importance of AI for government. However, as government agencies ramp up their AI adoption, it will be crucial to ensure they have the appropriate technical and governance infrastructure in place. This ties closely to an agency’s AI readiness, which can be viewed as the set of building blocks that allow an agency to advance to higher levels of AI maturity. While AI readiness refers to an agency’s preparedness to adopt and implement AI, AI maturity pertains to the stage an agency has reached in its ability to implement AI to optimize workflows at scale.
Why is it important to assess AI maturity and readiness when governments can just focus on rapidly building and scaling AI models? Rushing to build AI solutions and overlooking AI maturity and readiness will inevitably lead to system failure at best or significant harm to staff and customers at worst. Disregarding AI maturity and readiness could negatively affect governments in the following ways:[3]
- Fairness and Bias: Governments must safeguard AI-enabled tools from performing in ways that are discriminatory, biased, or prejudicial. Failure to do so may cause serious social, legal, and ethical harms that undermine benefits achieved through the use of AI-enabled tools.
- Mistrust and Withdrawal from AI: Governments that do not engage in experimentation and incremental development of AI-enabled tools before deployment risk harming the very people they intend to protect. Such failures could lead staff and the public to mistrust AI and withdraw from its use.
- System Decay: Governments that do not build internal expertise and instead depend excessively on external vendors as they mature to more advanced stages are at high risk of system decay, since AI systems require regular retraining and external evaluation to maintain optimal performance.
While some government agencies may have only begun to understand and explore the potential of AI, others may be rapidly piloting and scaling its use across a variety of use cases. In this article, we provide a model for determining a government’s maturity and readiness as it progresses in its journey to adopt AI at scale.
Stages of AI Maturity
Microsoft, Element AI, and Ovum have created prominent frameworks for determining AI maturity. We combine the most relevant elements of the three to define four stages of AI maturity for governments (see Figure 1):
- AI Explorer: Governments that are exploring the economic value that AI technologies can unlock and are assessing the opportunities and risks of AI-enabled tools. They have no AI models in testing or production.
- AI Experimenter: Governments that have identified the use of AI-enabled tools to solve specific business use cases and are experimenting with conducting a proof of concept, piloting, and testing specific use cases.
- AI Optimizer: Governments that have AI solutions in production and are in the process of optimizing and scaling AI solutions to a variety of use cases.
- AI Exemplar: Governments that have used AI to transform day-to-day operations and are successfully using AI solutions at scale across a variety of use cases.
Figure 1. Stages of AI Maturity
Adapted from: Tom Pringle and Eden Zoller, “How to Achieve AI Maturity and Why It Matters,” Ovum, June 2018, https://www.amdocs.com/sites/default/files/filefield_paths/ai-maturity-model-whitepaper.pdf.
The AI Readiness Model
Assessments of a government’s AI readiness can better ensure that it has the technical infrastructure (such as data, computing power, and software) as well as the social infrastructure (such as leadership, ethics review, and governance) in place to advance to the next stage of AI maturity if it makes sense to do so.[4] It is critical to ensure that a government’s ecosystem is aligned with its AI strategy.[5] As with any real-world project that deals with complex factors and must operate at scale, the success of an AI project hinges on the success of its components, such as the data, security, software, and people. Furthermore, it is important to build in-house expertise and train staff before scaling up the use of AI. Intel’s AI Readiness Model provides a framework for thinking strategically about the factors that determine AI readiness. Governments can evaluate their readiness based on the following three parameters, illustrated in the sketch after the list:
(1) Foundational Readiness: Is the technical infrastructure established to support AI-enabled tools?
(2) Operational Readiness: Is the social infrastructure established to support AI-enabled tools?
(3) Transformational Readiness: Does the government embrace the use of AI-enabled tools to alter its day-to-day operations?
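To make the assessment concrete before examining each dimension in turn, the sketch below models the three parameters as a simple scoring rubric in Python. It is a minimal illustration only, not a validated instrument: the criteria names, and the yes/no answers in the example, are hypothetical placeholders that an agency would replace with its own assessment questions, weights, and thresholds.

```python
# Illustrative AI readiness rubric. All criteria are hypothetical
# placeholders, not a validated assessment instrument.

READINESS_CRITERIA = {
    "foundational": [  # technical infrastructure
        "data_pipelines_in_place",
        "adequate_compute_and_storage",
        "integration_with_existing_software",
    ],
    "operational": [  # social infrastructure
        "in_house_ai_expertise",
        "staff_trained_on_ethical_ai_principles",
        "governance_and_risk_mitigation_processes",
    ],
    "transformational": [  # organizational change
        "leadership_buy_in",
        "staff_awareness_and_acceptance",
        "tco_or_roi_analysis_completed",
    ],
}

def readiness_scores(answers: dict[str, bool]) -> dict[str, float]:
    """Return the fraction of criteria met in each readiness dimension."""
    return {
        dimension: sum(answers.get(c, False) for c in criteria) / len(criteria)
        for dimension, criteria in READINESS_CRITERIA.items()
    }

# Example: a hypothetical agency with solid technical infrastructure
# but little staff training or organizational buy-in.
answers = {
    "data_pipelines_in_place": True,
    "adequate_compute_and_storage": True,
    "integration_with_existing_software": True,
    "governance_and_risk_mitigation_processes": True,
    "leadership_buy_in": True,
}
print(readiness_scores(answers))
# {'foundational': 1.0, 'operational': 0.33..., 'transformational': 0.33...}
```

A fuller rubric would weight criteria by importance and map scores onto the maturity stages in Figure 1; the point here is only that readiness can be assessed dimension by dimension rather than as a single yes-or-no question.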
Foundational readiness pertains to the technical infrastructure required to implement an AI-enabled tool. Governments may not have the technical infrastructure to effectively build and scale the use of AI. Governments that lack in-house technical infrastructure (e.g., computing power, storage) can turn to private sector cloud services, which are well suited to supporting model training and rapid testing because they offer low barriers to entry and pay-per-use pricing.
In order for a government to appropriately assess, develop, and use AI-enabled tools, it must consider compatibility and adaptability with its existing software systems, associated processes, and data quality. By integrating new AI-enabled tools into existing software and processes, governments will reduce inefficiencies and redundancies from duplicative systems and will be able to glean new insights emerging from connected systems. The quality of data is a critical factor in building effective AI models. Factors affecting data quality include, among others: accuracy, consistency, representativeness, timeliness, and usability. Governments should proceed to experimentation with AI-enabled tools (i.e., the AI Experimenter stage) only after they have established foundational readiness.
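As one concrete illustration of how some of these data quality factors could be monitored, the sketch below computes simple indicators of completeness, consistency, and timeliness. It is a hypothetical example assuming Python and the pandas library, with made-up column names and records; accuracy and representativeness generally require domain review against ground truth and the served population, so they are not automated here.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, timestamp_col: str) -> dict:
    """A few illustrative data quality indicators; not an exhaustive audit."""
    return {
        # Completeness: share of missing values in each column.
        "missing_share": df.isna().mean().to_dict(),
        # Consistency: fully duplicated records can distort model training.
        "duplicate_rows": int(df.duplicated().sum()),
        # Timeliness: age in days of the most recent record.
        "days_since_last_record": (
            pd.Timestamp.now() - pd.to_datetime(df[timestamp_col]).max()
        ).days,
    }

# Hypothetical claims records with a duplicate row and a missing amount.
claims = pd.DataFrame({
    "claim_id": [1, 2, 2, 3],
    "amount": [450.0, None, None, 300.0],
    "filed_at": ["2020-09-01", "2020-09-15", "2020-09-15", "2020-10-01"],
})
print(basic_quality_report(claims, timestamp_col="filed_at"))
```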
Sustaining the success of AI-enabled tools is contingent on operational readiness. On the one hand, operationalization requires agile delivery and management of AI-enabled tools as both the data and the model evolve. On the other hand, this evolution must be accompanied by the development of in-house skills and expertise before use can be scaled. Operational readiness can be achieved only if staff understand the interconnections between the technical and ethical aspects of AI-enabled tools. This means that staff should be trained to become familiar with the five ethical AI principles laid out in this report and with how those principles can be operationalized.
Adequate governance, including strategies to support legal and policy compliance and risk mitigation, must take center stage as governments prepare to deploy AI across a wide variety of use cases. Paramount among these strategies is addressing risks to security and privacy. As governments move beyond proof of concept, they must address existing vulnerabilities in security and privacy protection before deploying at scale. Governments should therefore prioritize building a robust cybersecurity resilience plan as part of the foundational and operational readiness needed to move from the AI Experimenter to the AI Optimizer stage.
Procurement is another important aspect of achieving AI foundational and operational readiness. A robust procurement process should be established to ensure that governments are able to adopt AI without wasting resources and time in identifying the right technology and vendors. Responsible use of AI should include an ethics-driven procurement process to mitigate potential risks. Such a process should be multi-staged, allowing governments to screen vendors by evaluating the adequacy of vendors’ technical and governance capabilities, including their application of ethical AI principles to inform AI development and use, and by validating proposed solutions in sandboxes or testbeds.[6]
Finally, transformational readiness is imperative to transitioning to the AI Exemplar stage. This requires embracing changes in the day-to-day operations of the government driven by use of AI-enabled tools. These changes should be two-fold to address: (i) how tasks are performed, and (ii) awareness and acceptance of AI-enabled tools in the workplace. Changes in work culture can be achieved in two ways. First, employees may be persuaded of the technology’s value by providing clarity regarding the benefits vis-à-vis costs in time, resources, and risks of using AI-enabled tools. Governments in the AI Explorer or AI Experimenter stage may focus more on total cost of ownership (TCO)—the direct and indirect cost of technology. To decide whether to replace the status quo with an AI-enabled tool, governments must consider whether a single point solution (use case) would provide adequate value or return to the government. At the AI Optimizer and AI Exemplar stages, governments must consider the return on investment (ROI)—net profit over the cost—as these governments can benefit immensely from economies of scale by deploying AI-enabled tools across multiple services. Second, strategic leadership is required to support buy-in and acceptance from across the organization and to identify opportunities for further expansion of the use of AI-enabled tools.
Figure 2: Return on Investment at Different Stages of Maturity
Source: Authors
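To put rough numbers to the TCO and ROI framing above, the sketch below compares a single point solution with deployment across many services. All dollar figures are hypothetical placeholders; the point is only that fixed costs amortize across use cases, which is why ROI tends to improve with scale, as Figure 2 suggests.

```python
def roi(benefit: float, cost: float) -> float:
    """Return on investment: net benefit divided by cost."""
    return (benefit - cost) / cost

# Hypothetical figures. TCO here is a fixed platform cost (data
# infrastructure, procurement, training) plus a marginal cost per
# use case (integration, monitoring, retraining).
fixed_cost = 500_000
cost_per_use_case = 100_000
benefit_per_use_case = 250_000

for n in (1, 5, 20):
    tco = fixed_cost + cost_per_use_case * n
    benefit = benefit_per_use_case * n
    print(f"{n:>2} use case(s): TCO=${tco:,}  ROI={roi(benefit, tco):+.0%}")

# Output:
#  1 use case(s): TCO=$600,000  ROI=-58%
#  5 use case(s): TCO=$1,000,000  ROI=+25%
# 20 use case(s): TCO=$2,500,000  ROI=+100%
```

Under these illustrative assumptions, a single use case does not recoup its total cost of ownership, while scaled deployment does, mirroring the suggestion above that governments at earlier stages weigh TCO while those at the Optimizer and Exemplar stages focus on ROI.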
Main Takeaways
In order for governments to capture value from AI-enabled tools, they must first determine their AI maturity and readiness. In doing so, they can better understand the technical and social infrastructures that are in place, or still needed, to support successful development and deployment of complex use cases. Moreover, since the assessment involves exploring use cases appropriate to an entity’s maturity level and preparedness, governments can more easily identify low-risk, high-impact applications to prioritize. While riskier applications require more careful consideration, governments early in their maturity journey are well positioned to implement low-risk AI that brings high impact. For example, the Department of Defense (DoD) AI strategy prioritizes AI-enabled tools that augment personnel by automating and offloading repetitive and mundane tasks. Such tools automate routine tasks that pose limited to negligible risk and, when executed at scale, are likely to result in significant gains.[7]
AI maturity and readiness must not be viewed as static, since the technology itself and its potential application areas evolve constantly; both should be reassessed continually as governments evolve. The ROI from development and deployment of AI-enabled tools is growing exponentially. Governments can maximize ROI by partnering with organizations at higher levels of AI maturity to share knowledge, expertise, and resources. As governments advance to higher stages of maturity and readiness, they can expect to earn higher ROI (see Figure 2).
Jigyasa Sharma is a Master of Public Policy 2021 candidate and a D-Lab/Discovery Graduate Fellow at the Goldman School of Public Policy and a Consultant at the CITRIS Policy Lab. She is an experienced professional with expertise in AI policy, privacy, data governance, cybersecurity and emerging threats, internet policy, and smart cities. Her experience includes projects reporting to C-suite executives and high-level government officials in the US, Singapore, and India.
Brandie Nonnecke, PhD is Founding Director of the CITRIS Policy Lab at UC Berkeley where she supports interdisciplinary tech policy research and engagement. She is a Technology and Human Rights Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School. She served as a fellow at the Aspen Institute’s Tech Policy Hub and at the World Economic Forum on the Council on the Future of the Digital Economy and Society. Brandie was named one of the 100 Brilliant Women in AI Ethics in 2021. Her research has been featured in Wired, NPR, BBC News, MIT Technology Review, PC Mag, Buzzfeed News, Fortune, Mashable, and the Stanford Social Innovation Review. Brandie has expertise in information and communication technology (ICT) policy and internet governance. She studies human rights at the intersection of law, policy, and emerging technologies with her current work focusing on issues of fairness and accountability in AI.
[1] Adam Beam, “California’s Unemployment Backlog Expected to Last Until January,” KCRA Channel 3, October 7, 2020, https://www.kcra.com/article/edd-oversight-hearing-california-capitol/34304079#.
[2] Robert Prigge, “Are Agencies Unintentionally Contributing to Unemployment Fraud?” GCN, last modified September 11, 2020, https://gcn.com/articles/2020/09/11/digital-identity-verification.aspx.
[3] Eric Charran and Steve Sweetman, “AI Maturity and Organizations,” Microsoft, 2018, https://clouddamcdnprodep.azureedge.net/gdc/gdcX8j35c/original.
[4] David Freeman Engstrom et al., “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies,” Administrative Conference of the United States, February 2020, https://www-cdn.law.stanford.edu/wp-content/uploads/2020/02/ACUS-AI-Report.pdf.
[5] AI strategy essentially refers to an agency’s roadmap for the adoption, development, and deployment of AI technologies.
[6] Testbeds and sandboxes are controlled testing environments to test software and innovations. These may be used both at the proof-of-concept and testing stages.
[7] “Summary of the 2018 Department of Defense Artificial Intelligence Strategy,” Department of Defense, 2019, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.