Collaborative Networks

As data continues to proliferate across organizations, enterprise-wide collaboration must both expand and strengthen. The same is true for AI governance, particularly because most AI solutions are inherently developed for end users who may be less familiar with the supporting data systems, the policies and procedures in place, or the parameters of effective use.

Similar to data governance, an AI governance program needs engaged governing bodies that develop the rules outlining the program overall. This can be a single group that covers all AI governance needs or a number of smaller groups focused on different elements, and it should comprise executive sponsors, domain leaders, representatives from the Center of Excellence (CoE) including the AI governance lead, representatives from the AI development team, an AI process analyst, and project managers involved with any AI projects. Some processes this team will need to develop include AI use case submission and prioritization, solution lifecycle and sunsetting policies, terms and notifications around use, ethics requirements, and model quality parameters. Depending on organizational need there may be other requirements or participants, and all decisions by the governing bodies should be regularly audited to ensure they remain relevant and up to date.

All AI solutions require defined business (subject matter expert, or SME) owners and technical owners. Responsibilities across the two roles may vary based on skill sets, availability, and project involvement, and in certain instances the roles may be filled by the same individual. Overall, these should be the individuals who make decisions about the design and construction of the AI solution and who are intimately knowledgeable about both.

Unlike data governance, there are no domain AI stewards. Enterprise AI stewards exist in the Center of Excellence (CoE) and are responsible for maintaining the integrity of the AI governance platform, advising various governing bodies, delivering AI governance training, and occasionally operating as a governance project manager.

Instead, AI governance relies heavily upon partnerships between business and technology to work effectively. Each domain or area of deployment will be assigned one data scientist and one domain SME to operate as a team to steward their assigned AI products. The data scientist brings technical knowledge to the table and is responsible for developing, iterating upon, and productionizing a holistic solution. Conversely, the domain SME will outline the use cases or business needs and bring industry knowledge, an understanding of regulatory limitations, and the ability to prioritize based on company priorities. The relationship between data scientists and business users ensures development is aligned with tangible outcomes and business users are aware of current capabilities.

Instead of focusing solely on the features needed to support AI development, AI governance projects should include both an AI solution (use case) and governance elements to support that use case. This ensures that tangible outcomes can be demonstrated, time-to-value is significantly diminished, and processes are built on reality, not theory.

As a desired end state, AI use cases and solutions are developed and designed through the partnership of the data scientist and the domain SME in a given area, reviewed by the business and technical owners for validation, and then presented to the governing bodies for approval and prioritization. Initially, a less formal process may be used: a governing body may not yet have been defined, ownership may be loose or nonexistent, and partnerships may be unofficial. As the program grows, so will the structure of these processes, but even projects at the outset should be defined through use cases.

Use cases provide small slices of the end-to-end AI solution development process and will allow the program to grow organically without significantly delaying any existing or on-deck projects. As more use cases are deployed, project teams will become more knowledgeable about the processes and gain insight into how best to augment or update those processes for best fit. This also gives those teams the opportunity to provide input and feedback on various elements of governance instead of the governing bodies working in a vacuum. Finally, this approach ensures AI projects align with company priorities and are scoped properly to ensure an effective and useful outcome.

While new features are being developed, use cases need to be mapped to the feature development roadmap. This should become one of the elements in prioritization discussions and will allow use cases and AI governance features to develop organically in tandem without one blocking the other. For example, if a data quality remediation process is a feature on the roadmap, it may be paired with a use case that requires data sets with poor data quality scores. This creates space for that remediation process to be developed and acted upon to identify potential improvements (essentially beta testing the process while it is being deployed) while also improving the data set itself so it can be used for the AI solution development. There may be use cases with existing models whose outputs vary somewhat and that require greater drift-monitoring capabilities, or use cases where bias is in question and significant, ideally automated, testing is required.
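The pairing just described can be sketched in code. The following is a minimal, hypothetical illustration (the feature and use case names are invented, not taken from any real backlog) of matching roadmap features to backlog use cases that can exercise them during development:

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A governance capability on the feature development roadmap."""
    name: str

@dataclass
class UseCase:
    """A backlog use case and the governance capabilities it needs."""
    name: str
    needs: set = field(default_factory=set)

def pair_features_with_use_cases(features, use_cases):
    """Return (feature, use_case) pairs where a use case can beta-test
    a roadmap feature while that feature is being built."""
    pairs = []
    for f in features:
        for uc in use_cases:
            if f.name in uc.needs:
                pairs.append((f.name, uc.name))
    return pairs

# Hypothetical backlog: a churn model needing quality remediation, and a
# pricing model with variable outputs needing drift monitoring.
features = [Feature("data_quality_remediation"), Feature("drift_monitoring")]
use_cases = [
    UseCase("churn_model", {"data_quality_remediation"}),
    UseCase("pricing_model", {"drift_monitoring"}),
]
pairs = pair_features_with_use_cases(features, use_cases)
print(pairs)
```

In practice this mapping would live in a backlog tool rather than code, but the structure is the same: each pairing becomes an input to prioritization.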

Use case backlog construction and an AI governance feature roadmap should be the first elements of any AI governance program after the definition of a working team or CoE. Building these will then allow for mapping between the two, and this will influence prioritization of use cases, what the pilot looks like, and what stakeholders need to be involved at the outset versus who can be brought in later. Quick, early wins with this style of alignment will improve chances of the program gaining traction and visibility in addition to continued funding.

The Innovation Cycle outlines the path of any data or AI product from development through productization and end user availability. This cycle can be seen through a variety of lenses, including data engineering, report and dashboard development, or infrastructure improvements. Through an AI governance lens, this cycle originates where the data governance cycle ostensibly ends: the Marketplace. The Marketplace is an area where end users can browse products, review metadata including quality, ownership, and lineage, and ideally directly request access to those products. For the AI cycle to begin, data products need to already be available within the Marketplace so data scientists can use those products as the foundation for AI solutions in development.

Once the data scientist has identified the product needed for development, there should be a user-friendly way to request access to, or "check out," the desired product. Governance intervenes here, ensuring that the user has proper access rights, that the data set is verified for AI development use, that the user will be developing in the proper platform, and that proper approvals have been granted. Each of these steps can be, and ideally is, deployed automatically or with little human intervention, and once all access request checks have passed, the provisioning itself should also be automated.
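A minimal sketch of that gate might look like the following. The check names and request fields are hypothetical, standing in for whatever a real access-management platform exposes; the point is that provisioning only fires when every automated check passes:

```python
def approve_checkout(request):
    """Run the automated governance checks for a Marketplace check-out.

    `request` is a dict of booleans representing the outcome of each
    (hypothetical) upstream verification. Returns whether to provision
    automatically and which checks failed.
    """
    checks = [
        ("access_rights",  request.get("user_has_access", False)),
        ("ai_verified",    request.get("dataset_ai_verified", False)),
        ("platform_ok",    request.get("target_platform_approved", False)),
        ("approvals_done", request.get("approvals_granted", False)),
    ]
    failed = [name for name, passed in checks if not passed]
    # Provision automatically only when every check passes; otherwise
    # surface the failures for follow-up rather than blocking silently.
    return {"provision": not failed, "failed_checks": failed}

result = approve_checkout({
    "user_has_access": True,
    "dataset_ai_verified": True,
    "target_platform_approved": True,
    "approvals_granted": False,  # approvals still pending
})
print(result)
```

Defaulting every missing field to `False` keeps the gate fail-closed: an incomplete request never provisions by accident.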

As part of automated continuous integration/continuous delivery (CI/CD) processes and additional AI operations, the steps through development and evaluation need governance baked in directly. This should include automatic branching during provisioning to development spaces; collaboration access for team members; automated ethics, bias, drift, purpose, user acceptance, and quality testing; and proper manual approval pause points. There will also be some manual tests and reviews that should not be automated, like certain ethics reviews and specific elements of quality assurance (QA) testing. These parameters also characterize the sixth phase, productionization.
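One way to picture such a pipeline gate is below. This is a sketch, not a reference to any specific CI/CD product: the automated test names come from the list above, and the manual review names are assumptions illustrating pause points that block promotion until a human signs off:

```python
# Automated governance tests that must all pass before promotion.
AUTOMATED_TESTS = ["ethics", "bias", "drift", "purpose",
                   "user_acceptance", "quality"]

# Manual pause points that cannot be automated away (hypothetical names).
MANUAL_REVIEWS = ["ethics_review", "qa_signoff"]

def can_promote(auto_results, manual_signoffs):
    """Decide whether a build may be promoted toward production.

    auto_results: dict mapping test name -> bool (pass/fail).
    manual_signoffs: set of manual reviews a human has completed.
    Returns (promote?, list of reviews still pending).
    """
    auto_ok = all(auto_results.get(t, False) for t in AUTOMATED_TESTS)
    pending = [r for r in MANUAL_REVIEWS if r not in manual_signoffs]
    return auto_ok and not pending, pending

# All automated tests pass, but the ethics review is still outstanding,
# so the pipeline pauses rather than promoting.
ok, pending = can_promote(
    {t: True for t in AUTOMATED_TESTS},
    {"qa_signoff"},
)
print(ok, pending)
```

The essential design choice is that the manual reviews are a separate input: no amount of green automated tests can substitute for an outstanding human sign-off.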

The final phase before moving to production is the definition of the new AI product. This requires an analysis of the desired audience and what needs to be included in the product for different use cases. For example, an overall AI solution may have an end user product that solely consists of an output dashboard or the model or agent itself while a developer product may not contain those elements and instead include the supporting tables, functions, and code for that solution. Once these products have been defined, all required assets can be moved to production.
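The asset-partitioning step above can be illustrated with a small sketch. The asset names and audience labels here are hypothetical, simply echoing the dashboard/model versus tables/functions/code split described in the example:

```python
# One AI solution's assets, each tagged with its intended audience.
SOLUTION_ASSETS = {
    "output_dashboard":  "end_user",
    "model":             "end_user",
    "supporting_tables": "developer",
    "functions":         "developer",
    "source_code":       "developer",
}

def define_product(audience):
    """Return the sorted list of assets making up one audience's product."""
    return sorted(a for a, aud in SOLUTION_ASSETS.items() if aud == audience)

print(define_product("end_user"))   # the dashboard and model only
print(define_product("developer"))  # the supporting assets instead
```

Once each audience's asset list is defined this way, the move to production becomes a mechanical copy of exactly those assets.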

After deployment to production, these new assets will be picked up and ingested into the catalog through regularly scheduled scans. They then need to be properly curated by data stewards and the AI partnerships, depending on the types of ingested assets. Finally, the AI partnerships need to define the previously agreed-upon products in the Marketplace, ensuring that other users can find those products and request access. From here, the process begins again with the next user browsing the Marketplace.

CTI Data’s approach to AI governance identifies and addresses the needs of the organization overall as well as the needs of individual contributors and business units. This balance allows for a functional rollout that considers all stakeholders and truly abides by the feature and use case prioritization methods outlined above. We believe there is no “one size fits all” mentality that can be applied to these efforts; instead, we focus on our principles, framework, and best practices to define a plan that caters to specific needs and outcomes.

AI development is in full swing at many organizations already. At CTI Data, we believe that without effective AI governance, however, those efforts will not and should not be deployed to production or ever reach their full potential value. Nor should AI governance be developed in a vacuum; governance for the sake of governance is an expense without a tangible outcome. Pairing both the development and the governance of that development together will lead to a high-value AI program that stands on solid ground. Efficiency and consistency are created through automated testing and deployment, governing bodies ensure policies are defined and enforced, and end products are more trustworthy, transparent, and replicable. Driving progress through use cases ensures these two elements map together seamlessly, and proper prioritization allows both the solution development and feature development roadmaps to move forward at pace.

Amanda Darcangelo is a Lead Data & Analytics Consultant at CTIData.


© Corporate Technologies, Inc.