
Supplying AI components? Your client's AI governance challenges are now yours, too!

When you supply AI components to enterprise clients, their governance requirements become your responsibility. Understanding compliance frameworks is now essential for AI suppliers.

Vincent Nunan
3 minute read

Let me tell you a true story. Last week I was speaking with a prospect for Waivern who was building a voice AI chatbot business. They were looking for someone to help certify their business as EU AI Act compliant in general terms, because a potential client in the EU was demanding this of them.

They saw their business as relatively straightforward. Their service was designed to be sold to large organisations, helping them scale and repurpose their call centre operations more cost-effectively. They achieved this by providing a high-quality, voice-driven interaction with an LLM, sounding as human as possible while providing accurate information. In and of itself, a voice-driven interaction with a general-purpose AI is just like a textual interaction with one. For the provider of the voice chatbot service, the obligations from August 2025 include:

  • educating their staff on the opportunities, risks and legal obligations around AI use in the EU, at a level of detail appropriate to their professional use of AI
  • providing a publicly available copyright policy statement declaring how creators' rights are respected in the LLM training process
  • ensuring that proper GDPR legal bases exist for the collection and use of any personal data in model training (whether voice data = personal data is a whole other article!)
  • creating and publishing a publicly available summary of the data used to train the model
  • creating and maintaining technical documentation, including on how the model was trained and tested
  • defining "usage guardrails" for their AI chatbot, to ensure that clients understand the limits on the bot's intended use cases

So far, so relatively straightforward, so long as the model has been trained using data that they have the rights to use.
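To make the "usage guardrails" obligation a little more concrete, here is a minimal, entirely hypothetical sketch of what a runtime check against a declared list of permitted use cases could look like. The categories, keywords and function names are my own illustrative assumptions, not anything the EU AI Act prescribes; a real system would use a classifier rather than keyword matching, but the principle is the same: declared limits on intended use, enforced at runtime.

```python
# Hypothetical sketch of a usage-guardrail check. Categories and the
# keyword-matching approach are illustrative assumptions only.

# Use cases the supplier has declared in scope for the chatbot.
PERMITTED_USE_CASES = {"order status", "opening hours", "returns"}

# Topics the supplier has declared out of scope.
OUT_OF_SCOPE_KEYWORDS = {
    "diagnosis": "medical advice",
    "credit": "credit decisions",
    "loan": "credit decisions",
}

def guardrail_check(user_text: str) -> tuple[bool, str]:
    """Return (allowed, reason) for one user utterance."""
    lowered = user_text.lower()
    for keyword, topic in OUT_OF_SCOPE_KEYWORDS.items():
        if keyword in lowered:
            return False, f"declined: {topic} is outside the declared use cases"
    return True, "within declared use cases"

print(guardrail_check("Can you approve my loan?"))
# A declined request like this would also be recorded, so the client
# can see where the bot's declared limits were hit in practice.
```

The value of even a crude check like this is contractual as much as technical: it turns the supplier's statement of intended use into something both parties can observe and audit.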

The key question I asked was about the purposes their clients would use the model for. Their answer showed that it was going to be used, amongst other sectors, within healthcare. It was also potentially going to be used in contexts where credit cards were to be offered to the public. Both healthcare and creditworthiness assessment sit in the EU AI Act's high-risk categories, and supplying into those contexts brings a whole new set of obligations down onto the supplier.

AI Risk Levels

For the chatbot supplier, those new obligations might include:

  • Integrating with the client's quality, risk management and post-marketing monitoring systems for their AI, helping them to identify unpredicted risks
  • Creation and sharing of detailed log files and usage records with their "high risk AI" clients
  • Ensuring the model's inputs are available to the client, to support the client's explainability obligations regarding information that the voice chatbot's LLM may have included in, or excluded from, a communication
  • Engaging with the consent management/privacy policy of the client in terms of the purposes for which end user/customer data can be used by the chatbot provider
  • Participating in the data governance obligations of their client. For example, if the client is feeding its customer data into a RAG pipeline or other adaptation of the core LLM, then that data sharing, storage and processing will likely need to adhere, in a specific and agreed way, to the client's data governance policies. If the data never leaves the client's own premises, that may be very simple; if data is shared across different operating environments and countries, it can get complicated very quickly.

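The log-sharing obligation above can also be sketched in code. Below is a minimal, hypothetical example of the kind of structured usage record a chatbot supplier might emit per interaction and hand over to a high-risk client; the schema and field names are my own assumptions, since the EU AI Act requires logging but does not prescribe a format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record schema; field names are illustrative assumptions,
# not a standard mandated by the EU AI Act.

@dataclass
class InteractionRecord:
    session_id: str            # client-side conversation identifier
    timestamp: str             # ISO 8601, UTC
    model_version: str         # which LLM build handled the request
    input_transcript: str      # what the caller said (after speech-to-text)
    output_transcript: str     # what the bot replied
    guardrail_triggered: bool  # did a declared usage limit intervene?

def make_record(session_id: str, model_version: str,
                user_text: str, bot_text: str,
                guardrail_triggered: bool = False) -> str:
    """Serialise one interaction as a JSON line for the client's audit log."""
    record = InteractionRecord(
        session_id=session_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_transcript=user_text,
        output_transcript=bot_text,
        guardrail_triggered=guardrail_triggered,
    )
    return json.dumps(asdict(record))

line = make_record("sess-42", "voicebot-1.3",
                   "What is my balance?",
                   "I can't access account data.",
                   guardrail_triggered=True)
print(line)
```

Emitting one JSON line per interaction keeps the supplier's side simple, while giving the client something machine-readable to fold into its own risk management and post-market monitoring systems.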
This was an eye-opening discussion, both for me/Waivern and for that client prospect. And it neatly illustrates the challenge that many AI suppliers will face going forward.

Waivern is working on tools to automate as much of this as possible, while creating transparency and accountability for every company and consumer involved. We believe that is the only way this will work in practice going forward.