How to Set Up and Run a Workable AI Council to Govern Trustworthy AI

Written by Stephen Boyer
Co-founder & Chief Innovation Officer

As at many companies around the world, Bitsight leadership believes that adopting and innovating with artificial intelligence (AI) capabilities is crucial to the future of our company. From the top down, our employees are continually on the hunt for ways to leverage AI to improve business outcomes and customer productivity.

Given that we work at a cybersecurity risk management firm, we are also all more than a little circumspect about how we use AI. We want to innovate, but we also need to remain focused on the potential risks that adopting new AI capabilities could pose. Because we provide services to cyber risk management professionals, the most obvious risk for us is that rushed or poorly implemented AI could expand the attack surface of our business, its systems, and its products.

We also recognized early on that new AI tools need to be applied where appropriate. Not every problem is a nail that needs the hammer of AI. Using AI simply for the sake of innovation risks failed projects and missed targets; successful AI adoption and rollout demand that our AI solutions be trustworthy, purposeful, and ultimately improve an experience or outcome for our customers, both internal and external. Additionally, when we embed AI capabilities into our products and services, we must be transparent with customers about how we choose to implement those capabilities.

If you want to design trustworthy AI products and services, you need to start with sound business risk decisions. AI risk management needs to be in place from the moment a need for applied AI is identified all the way through the full execution of a plan and the implementation of the AI technology. Balancing the speed of innovation with proper governance is often tricky, so each organization will need to ensure that deliberation about risks doesn't stifle the innovation process with too many gates and unnecessary delays.

Striking that balance is something we've been working on for several years now at Bitsight, and we've achieved our success through a governance-first model for AI. The execution of our governance-first AI risk management process depends on the careful stewardship of our AI Council.

Our council is a cross-disciplinary and highly collaborative group that helps make decisions about policy, protections, and direction around the deployment of AI projects. The council establishes governance standards, reviews proposals, and tracks AI use cases across the company. The role of the AI council is not to pick winning or losing proposals but to accelerate AI experimentation and adoption by assuring that the risks of each new proposal are tracked and managed within the established policies and frameworks. Good governance should be an accelerant, not an inhibitor. The AI council sometimes causes us to go slow so we can ultimately go fast.

Below is a breakdown of the components of the Bitsight AI council and how it functions:

Who is on the council?

We currently have 10 people on the council, with representatives from across the business. The stakeholders include our Chief Risk Officer, Chief Innovation Officer, CISO, field CISO, CTO, lead engineer on AI technology, Chief Legal Officer, and operational counsel, along with staff from engineering, data science, and IP. We also include a strategic executive from marketing.

What does the council do?

The AI council sets the tone and direction for AI usage at Bitsight. This starts and ends with our AI use policy. The AI council helped develop the initial iteration of the policy and, through regular collaboration, ensures that the policy is continually updated to account for the new use cases, risks, and technologies that are constantly arising. The council also approves new use cases and assures compliance with the policy.

When does the council meet?

The council meets regularly, at least monthly, to discuss action items or cases submitted to it by AI owners proposing new implementations or changes. The goal is to close these cases swiftly. When action items are completed, representatives from the council get back to the respective business stakeholders to let them know what has been decided.

How are decisions made?

The AI council has a system where Bitsight employees can essentially submit tickets requesting a decision or approval from the council on a new AI use case. The process is not fully automated, but it provides centralized tracking to guide collaboration during the council's regular meetings. The council deliberates on whether the request meets current policy requirements. If a policy change is necessary, the council makes those adjustments. All decisions are logged and communicated to stakeholders and across the business. The AI use policy is published internally so that everyone in the company can reference it.
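
To make that intake and decision flow more concrete, here is a minimal sketch of what a centrally tracked request record and decision log could look like. The structure and names below (AIUseCaseRequest, Decision, record_decision) are illustrative assumptions for this article, not Bitsight's actual system, which in practice may be a ticketing tool rather than code:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class Decision(Enum):
    APPROVED = "approved"
    APPROVED_WITH_CONDITIONS = "approved_with_conditions"
    NEEDS_POLICY_CHANGE = "needs_policy_change"
    REJECTED = "rejected"


@dataclass
class AIUseCaseRequest:
    # One "ticket" submitted by an AI owner for council review (illustrative fields).
    request_id: str
    owner: str                        # the AI owner proposing the implementation or change
    description: str                  # what the AI will do and for whom
    data_sources: list[str]           # where the data comes from and who owns it
    customer_facing: bool             # will the capability be embedded in a product?
    submitted_on: date = field(default_factory=date.today)
    decision: Optional[Decision] = None
    decision_notes: str = ""          # logged and communicated back to stakeholders


def record_decision(request: AIUseCaseRequest, decision: Decision, notes: str) -> None:
    # Log the council's decision so it can be communicated across the business.
    request.decision = decision
    request.decision_notes = notes


# Example: a hypothetical request reviewed at a monthly council meeting.
ticket = AIUseCaseRequest(
    request_id="AIC-042",
    owner="jane.doe",
    description="Summarize third-party risk reports for internal analysts using an LLM",
    data_sources=["internal risk reports"],
    customer_facing=False,
)
record_decision(ticket, Decision.APPROVED, "Complies with the current AI use policy.")
print(ticket.request_id, ticket.decision.value)
```

The point of a structured record like this, however it is implemented, is that every request carries the same fields, so decisions can be compared, logged, and shared across the business.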

While there are quite a few stakeholders on the council, the council works to act swiftly, with a bias towards approval. The council is not deciding whether a proposal is good or bad, but rather whether there is assurance that the proposed idea would comply with the policy and best practices.

A diverse representation of business perspectives ensures that the group can do a comprehensive risk assessment. The breadth of the team’s business understanding facilitates quick identification of issues and limits analysis paralysis.

AI use case considerations

Some of the considerations that the council helps to weigh as we decide whether to try new AI use cases are listed below (a rough sketch of how they might be captured as a checklist follows the list):

  • Data management, including where data is sourced from, who owns the data, and how it's managed over time.
  • AI model security, including who owns the model and how it’s trained.
  • Is AI the right tool for the job, or can it be achieved cheaper, faster, better with other available technology?
  • If AI capabilities will be embedded in product features, can the AI be disabled if the customer chooses?
  • Does the use case open the company to legal liability or compliance concerns?
  • Who owns the quality of the output? The answer cannot be “it is what the AI said” if there is a problem.
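
As a rough illustration only (the structure below is our own sketch for this article, not a format the council prescribes), these considerations can be treated as a checklist that travels with each request and must be answered before a decision is logged:

```python
# Illustrative checklist: the question text mirrors the considerations listed above.
REVIEW_CHECKLIST = [
    "Data management: where is the data sourced, who owns it, and how is it managed over time?",
    "Model security: who owns the model and how is it trained?",
    "Fit: is AI the right tool, or can other technology do it cheaper, faster, or better?",
    "Product embedding: can the AI capability be disabled if the customer chooses?",
    "Legal: does the use case create liability or compliance concerns?",
    "Accountability: who owns the quality of the output?",
]


def unanswered_items(answers: dict) -> list:
    # Return the checklist questions that still have no recorded answer.
    return [q for q in REVIEW_CHECKLIST if not answers.get(q, "").strip()]


# Example: a submission that has only addressed the first question so far.
partial = {REVIEW_CHECKLIST[0]: "Sourced from internal telemetry; owned by the data team."}
print(len(unanswered_items(partial)))  # 5 questions remain open
```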

This kind of governance and consideration is critical, especially for an organization providing cybersecurity solutions, where trust is paramount. As more and more companies embed AI into their solutions and services offerings, they will need to consider how they manage AI risk and how they communicate that risk management to their customers.

We hope that by sharing our learnings and process, we will accomplish the following:

  1. Provide transparency and assurance to our customers about the risk management approach that we take toward AI governance. 
  2. Even more importantly, model this as a best practice for all enterprises to learn from. Forming an AI council is a great way to improve security culture and start thinking critically about managing AI risks without slowing down the innovation cycle.

We encourage you to consider doing something similar no matter what business you work in.