AI governance is here—what your organization needs to know
Commentators have long speculated on how nation-states' governance of AI will affect corporate adoption of these tools. The debate has intensified since the explosion of generative AI tools in late 2022, but the AI governance conversation has been going on for years.
“AI governance” refers to the framework for ensuring humanity is using AI ethically and responsibly. There are numerous AI governance paradigms and policies already on the books across the globe, and new policies are being proposed and ratified all the time.
The EU has proposed the AI Act, the first comprehensive law on AI from a major regulator anywhere, which assigns applications of AI to risk levels: unacceptable risk, high risk, limited risk and minimal risk. For example, using AI to manipulate the behavior of a vulnerable person or group would be deemed an unacceptable risk, whereas applications of AI in critical infrastructure would fall into the high-risk category.
As a product leader and co-founder who has navigated hype cycles and technological breakthroughs to achieve growth, I have seen the optimal (if delicate) balance required between heeding, and even staying ahead of, regulatory requirements and embracing innovation.
Integrate AI governance into product strategy.
Balancing risks and benefits is a key first step in finding ways to bring AI into your product. Let's be clear: In today's environment, hesitating to incorporate AI into your product strategy introduces its own risk to your business, including delayed decision-making and missed revenue opportunities. Product leaders should know how to galvanize the rest of the C-suite to drive efficient action based on collaborative, strategic decision-making.
Here’s how product leaders can think through this process:
Develop a clear understanding of the compliance landscape and implications for AI.
Product leaders shouldn’t rely solely on their legal departments to know what regulations already apply and whether there are any regulations in the works that could impact the incorporation of AI into their product strategy. They need to have this baseline knowledge as they move forward with development.
For example, companies with EU operations will have to understand which risk level their proposed product updates fall into and how to plan around regulatory requirements.
Implement robust data management practices to maintain data privacy and security.
Ensuring data privacy is always of the utmost importance, but especially so in the early days of AI implementation, when not all of its data privacy implications are known. Regulations like the EU's General Data Protection Regulation (GDPR), along with U.S. state privacy laws such as the California Consumer Privacy Act (CCPA), must guide your data privacy controls as you productize AI.
You may find that your organization is already equipped for this: Many organizations have existing functions that can serve as natural partners in this implementation. Gathering stakeholders from Data Governance, Privacy, Security and Enterprise Risk Management is the starting point.
From there, rather than reinventing the wheel, you can set up an AI governance committee that sets policies to be folded into the existing corporate governance structure.
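To make this concrete, here is a minimal sketch, in Python, of the kind of data privacy control such a committee might mandate before user text ever reaches an external AI service. The patterns and the send_to_model function are hypothetical placeholders for illustration; a real deployment would use vetted PII-detection tooling and whatever policies your committee defines.

```python
import re

# Illustrative patterns only; a production control would rely on a
# vetted PII-detection library and committee-approved policies.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable personal data with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

def send_to_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to an external AI service.
    return f"(model response to: {prompt})"

if __name__ == "__main__":
    user_input = "Contact Jane at jane.doe@example.com or 555-123-4567."
    print(send_to_model(redact(user_input)))
```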
Stay on top of documentation.
As companies seek to implement generative AI without incurring undue risks, we're seeing some of them get too bogged down in the "what-ifs." The pace of innovation is simply too fast to wait for an ironclad mitigation plan covering all possible risks at the outset. I suggest making decisions based on sound assumptions about the current state of things and documenting the decision-making process, allowing for revisions and updates.
The reality is, things will change. AI’s risks, and our mitigation strategies, will evolve at a rapid pace in the coming years. By documenting your decision-making process, you’ll be able to implement a strategy that works for today and more quickly evolve it for tomorrow.
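One lightweight way to capture that process is a structured decision record. The sketch below, in Python with hypothetical field names, shows the kind of information worth logging for each AI decision: what was decided, the assumptions it rests on, the risks knowingly accepted and a date to revisit them.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIDecisionRecord:
    """One entry in a running log of AI-related product decisions."""
    decision: str              # what was decided
    assumptions: list[str]     # current-state assumptions it rests on
    risks_accepted: list[str]  # known risks and why they were accepted
    owner: str                 # who is accountable for the outcome
    review_by: date            # when the assumptions get revisited

# Example entry; the details are illustrative, not a recommendation.
record = AIDecisionRecord(
    decision="Use a third-party LLM to draft marketing copy",
    assumptions=["Vendor does not train on our prompts under current terms"],
    risks_accepted=["Output requires human review before publication"],
    owner="VP of Product",
    review_by=date(2024, 6, 1),
)
```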
Involve key stakeholders from throughout the company.
Once a product strategy that takes AI governance into account is in place, leaders from across the organization need to band together to implement a broad strategy that can be integrated into corporate operations and enforced company-wide.
Establish a cross-functional AI governance team.
Engineering and product teams will be the ones developing and implementing AI tools, but the implications of AI governance extend far beyond your technological organizations.
Marketing teams will have to develop positioning and messaging frameworks around your AI capabilities, then equip sales teams to communicate them accurately and effectively. Legal, HR, Security and Finance will also need to be involved to guide what these policies look like and how they are implemented, both within the company and externally. As such, your AI governance team should include representatives from all of these functions.
Foster a culture of transparency and accountability around the AI decision-making process.
To keep members of your organization from using AI in ways that expose you to risk, leaders should foster a transparent AI implementation process in which decision-makers are accountable for both its benefits and its drawbacks.
Provide ongoing training and education for relevant employees on AI ethics, regulations and best practices.
All major risk areas require ongoing training; it’s easy to envision a future where AI management will require certifications of the type that financial advisors and lawyers currently need.
In the meantime, and as AI regulations intensify, stay ahead of the curve by regularly training employees on every dimension of AI risk, including loss of intellectual property, exposure of customer data and increasingly sophisticated impersonation tactics.
In part two, I’ll discuss how, once these policies are in place, leaders across the executive team can cascade them to stakeholders, both internal and external.
This post was originally published on Forbes.