Tucked into Rubrik’s IPO Filing: A Glimpse at the Company’s Approach to AI Governance
Among the figures and statistics in Rubrik’s IPO filing this week lies a noteworthy nugget that speaks volumes about the company’s approach to the fast-evolving world of AI. Rubrik has quietly formed an AI governance committee to oversee how artificial intelligence is used across its business, a move that underscores the risks and considerations that come with adopting the technology.
According to the company’s Form S-1, the committee is made up of managers from Rubrik’s engineering, product, legal, and information security teams. Together, they will evaluate the potential legal, security, and business risks associated with using generative AI tools, as well as deliberate on ways to mitigate these risks.
Rubrik may not be an AI company at its core, but its interest in the technology, evident in the chatbot it launched in 2023, mirrors a broader trend of businesses exploring what AI can do for them. With that in mind, it’s worth considering what it would mean for AI governance to become the new normal in the corporate world.
Addressing regulatory scrutiny
While some companies have taken it upon themselves to establish AI best practices, others may find themselves mandated to do so by regulations such as the EU AI Act.
Dubbed “the world’s first comprehensive AI law,” the landmark legislation, expected to be formally adopted across the EU later this year, prohibits certain AI applications deemed to pose an “unacceptable risk” and sets out requirements for high-risk ones. Companies that fall within the law’s scope could face significant consequences for non-compliance, which makes AI governance a vital part of their operations.
Eduardo Ustaran, a privacy and data protection lawyer and partner at Hogan Lovells International LLP, believes that the EU AI Act will magnify the need for AI governance, leading to the establishment of more committees like the one at Rubrik. “Aside from its strategic role in devising and overseeing an AI governance program, a committee like this can effectively anticipate and address any potential risks before they manifest,” Ustaran said. “In a way, an AI governance committee could serve as a foundation for all other governance efforts, giving businesses the reassurance needed to avoid compliance gaps.”
Katharina Miller, a compliance and ESG consultant, makes a similar case for AI governance committees: in a recent policy paper on the EU AI Act, she recommends that companies set up such committees as a compliance measure.
The legal consequences
Even where no regulation explicitly requires an AI governance program, companies have a strong legal incentive to put one in place, because the cost of getting it wrong can be steep. The EU AI Act, for instance, carries fines of up to €35 million or 7% of global annual turnover for the most serious violations.
Moreover, the act’s reach extends beyond Europe: companies based outside the EU can still fall under its provisions if their AI activities involve EU users or data. And given how the GDPR (General Data Protection Regulation) reshaped data practices worldwide, it’s reasonable to expect the EU AI Act to have a similarly global influence, particularly as the EU and the US continue to collaborate on AI policy.
That said, AI governance isn’t only about legal exposure. In Rubrik’s case, the committee is also charged with evaluating a wide range of risks and concerns, including confidentiality, personal data protection, customer data, contractual obligations, open-source software, copyright, and the accuracy and reliability of AI output. The breadth of that list points to a proactive effort to stay compliant and head off problems before they arise.
Rubrik’s efforts to cover all legal bases could also be motivated by previous experiences with data leaks, hacks, and intellectual property litigation – all of which highlight the risks associated with AI tools.
A matter of perception
But legal implications aside, companies also care about how they are perceived and whether the public trusts them. As Adomas Siudika, privacy counsel at OneTrust, put it, “We’re at a critical point in the evolution of AI, where its future depends on the public’s trust in AI systems and the companies behind them.”
Establishing an AI governance committee may be one way for companies to build that trust. They don’t want to miss the opportunities AI presents, but they also need to assure clients and stakeholders that they are taking the necessary steps to minimize the risks. That balancing act will likely push more businesses to form such committees, making them a new norm in the corporate world.