How AI impacts RABET-V™ verification

Twitter Facebook


One of the challenges of running a successful verification program is staying apprised of emerging trends in technology development and analyzing the benefits and risks introduced into the final product. Unsurprisingly, artificial intelligence (AI) has been on our radar for a while, and we’ve been brainstorming the best way to integrate it into the RABET-V process. This post outlines some of our thoughts.

For those who are new here

If you’re tuning into our work on RABET-V for the first time, we’ve already written extensively about the program. Initially piloted in the elections ecosystem, RABET-V verifies technology products through three core activities:

  1. Organizational Assessment: This assessment evaluates the technology provider's organizational processes to determine the maturity level of their software development.
  2. Architecture Assessment: This assessment evaluates the product software architecture to assess the security and quality of the product design and the level of risk presented by changes to the product.
  3. Product Verification: This is a point-in-time and compliance test of the product. It uses the results from the previous assessments to prescribe different levels of testing rigor based on the type of change and the product's maturity.

RABET-V also performs functional testing on the product, but the type and scope of tests change depending on the product and state requirements.

The word "AI" is superimposed over a bluish-hued cityscape with illustrations to denote technological futurism
AI can feel pervasive these days — Shutterstock

What is AI?

Broadly, AI refers to computer systems that perform tasks that typically fall within the realm of work purely done by humans, as they require human intelligence. Creating art from a description, making decisions based on several conflicting factors, and weighing the pros and cons of a particular course of action are all human tasks that AI can accomplish—with varying degrees of success.

How does AI affect a technology product?

Our favorite response to this question is: It depends. Is the AI being directly integrated into the product? Is there an API call to an AI? Is AI used in product development? These are just a few scenarios that we must consider in our verification. How do they impact each verification activity in RABET-V?

AI in the Organizational Assessment

The RABET-V Organizational Assessment examines the software development organization’s approach to Governance, Design, Implementation, Verification, Operations, and Human Factors. It’s a slightly augmented implementation of the OWASP’s Software Assurance Maturity Model (SAMM). AI could play a significant role in Governance, Design, Implementation, Verification, and Implementation.

In Governance, the following should be assessed:

  • Does a software development organization have policies around the use of AI?
  • What is the scope of AI usage (e.g., brainstorming design, assistance in writing code)?
  • What are the restrictions on which AIs can be used (e.g., does the company pay for a specific AI, are employees allowed to experiment with non-company-sponsored AI, and are there any controls around when and how AI can be used)?

In Design, we will review the following:

  • Are AI tools used in architecture and design processes? If so, in what way?
  • Is AI tooling used to determine security requirements?
  • Are AI-related libraries, APIs, or other dependencies used in the product?

Implementation is one of the most obvious areas where AI is likely to be heavily used, necessitating evaluation of the following:

  • Are developers using AI to write or help write code? If so, what is the process?
  • Is AI assisting with the validation of the written code?
  • Is AI used to document the written code?

For Verification, we need to explore the following:

  • Are you using automated testing tools with AI integration or similar?
  • For manual testing, is AI used in some capacity?

In the area of Operations, we will look at the following:

  • Is AI used for incident detection and response? If so, how?
  • Are there AI integrations for configuration management and validation?
  • How does AI factor into patch management?

While this won't affect the baseline scoring or a product’s final score, we will change the Organizational Assessment to gather data on whether these policies exist.

AI in the Architecture Assessment

There are four distinctions for how AI is used in product development:

  1. AI was used to design the product
  2. AI was used to build the product
  3. AI is exposed at runtime (e.g., chatbot, decision support)
  4. AI is used as a self-modifying system

In the architecture assessment, we focus on AI within the implemented product (i.e., scenarios two and three).

At the system architecture level, we’ll identify and assess the usage of AI integrations internally or externally to the company. The same level of rigor and analysis of security controls will be leveraged to determine the risk of integrations to the system.

RABET-V never reviews source code. Instead, we use a series of tools to understand how a product was designed and what libraries were used. Using our introspection tools, RABET-V can identify if AI is part of the existing application, whether called via an API or embedded into the product. The analysis and information we collect in the Architecture Assessment will inform the Product Verification.

AI in Product Verification

Currently, there is no way to test AI thoroughly, and any use of AI is an increased risk due to its unpredictable nature and lack of maturity. Any tests developed to test the AI will never be completely exhaustive, especially as the AI continues to train on interactions and data within the product.

Moving forward with initial recommendations

Running a verification program is an exercise in continuous improvement. We monitor new threats and technologies and determine whether RABET-V verification will positively affect products in the emerging environment. Data collection and analysis before changing the program are essential to ensure we respond thoughtfully to the threat environment and market. We’ll continue to work with states and technology providers to inform how we approach our work.

The following are the key points the RABET-V team recommends to technology providers:

  • The use of AI, especially third-party AI companies and their training policies, should be clearly stated and discussed with potential clients.
  • Define clear internal policies on using AI, specifically which systems can and cannot be used.
  • AI integration should consider whether the interactions and data exposed through the product are used to train third-party systems and the potential risks of exposing sensitive information.
  • Like any other development, AI-based code should have a thorough test-driven development pipeline and include static and dynamic application security testing.

For organizations looking to procure new products and experiment with the introduction of AI, here are some additional considerations:

  • Define your own boundaries. If you’re an organization open to experimenting with emerging technology, determine where AI could be incorporated into your operations and where it should not be.
  • Ask direct questions. If you’re looking to procure new technology, you should be aware of technological aspects over which the developers have little control.

AI adoption has advanced rapidly, but it remains an emerging technology and should be approached with reasonable caution. Using RABET-V, we’ll help monitor the use of AI as far down the development chain as possible to ensure that the products on the market incorporate AI responsibly at this stage in its evolution.

We welcome all thoughts on this topic. Don't hesitate to contact us at team@rabetv.org, especially if you feel RABET-V can help you improve your security posture.

Thank you to John Dziurłaj, Grace Gordon, Brian Glas, and Michelle Shafer for their expertise, suggestions, and comments on this piece.

RABET-V™ is a trademark of the Center for Internet Security Inc.

Current page