When you supply AI components to enterprise clients, their governance requirements become your responsibility. Understanding compliance frameworks is now essential for AI suppliers.
Let me tell you a true story. Just last week I was dealing with a prospect for Waivern who was building a voice AI chatbot business. They were asking for someone to help certify their business as broadly EU AI Act compliant, because a potential client in the EU was demanding this from them.
They saw their business as relatively straightforward. Their service was designed to be sold to large organisations to help them scale and adapt their call-centre operations more cost-effectively. They achieved this by providing a high-quality, voice-driven interaction with an LLM: engaging with users, sounding as human as possible while providing accurate information. In and of itself, a voice-driven interaction with a general-purpose AI is just like a textual interaction with one. For the provider of the voice chatbot service, the obligations from August 2025 broadly include: making clear to users that they are interacting with an AI system rather than a human, maintaining technical documentation about the system, and, to the extent they train or fine-tune the underlying model, respecting copyright in the training data and being able to summarise what that data was.
So far, so relatively straightforward, so long as the model has been trained using data that they have the rights to use.
The key question that I asked was about the purposes their clients were going to use the model for. Their answer showed that it was going to be used, amongst other things, within the healthcare sector. It was also potentially going to be used in contexts where credit cards were to be offered to the public.
Healthcare applications and creditworthiness assessment can both fall within the EU AI Act's high-risk categories, which bring a much heavier set of obligations than those for a general-purpose chatbot. Those new obligations might include:

- a documented risk management system maintained across the system's lifecycle
- data governance requirements on the datasets used for training, validation and testing
- detailed technical documentation and automatic record-keeping (logging)
- meaningful human oversight of the system's outputs
- accuracy, robustness and cybersecurity requirements
- a conformity assessment before the system is placed on the market or put into service
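The screening question I asked the prospect can be thought of as a first automated pass: given a client's declared use cases, flag any that may fall into a high-risk category and therefore need deeper legal review. The sketch below is purely illustrative; the context names and the mapping to risk tiers are hypothetical placeholders, not the Act's actual taxonomy.

```python
# Illustrative sketch: screen a client's declared use cases against a
# hypothetical, non-exhaustive set of potentially high-risk contexts.
# The category names here are assumptions for illustration, not the
# EU AI Act's legal definitions.

POTENTIALLY_HIGH_RISK = {
    "healthcare",         # e.g. triage or diagnosis support
    "creditworthiness",   # e.g. deciding whether to offer a credit card
    "employment",         # e.g. CV screening
}

def screen_use_cases(use_cases):
    """Partition declared use cases into those flagged as potentially
    high-risk and those needing ordinary review."""
    report = {"flag_high_risk": [], "standard_review": []}
    for use_case in use_cases:
        if use_case in POTENTIALLY_HIGH_RISK:
            report["flag_high_risk"].append(use_case)
        else:
            report["standard_review"].append(use_case)
    return report

report = screen_use_cases(["healthcare", "creditworthiness", "general faq"])
```

In practice a real screening tool would need far richer inputs than a keyword match, but even this shape makes the point: the supplier's obligations depend on questions only the client can answer.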
This was an eye-opening discussion, both for me and Waivern and for the prospect. And it neatly illustrates the challenge that many AI suppliers will face going forward.
Waivern is working on tools to automate as much of this as possible, while creating transparency and accountability for every company and consumer involved. We believe it's the only way this will work in practice going forward.