In the course of our work, we see a lot of references to Artificial Intelligence (AI) governance, and to the importance of having strong AI governance. As a company that has been at the forefront of data governance for more than two decades, we are of course in favor of effective governance of any tool, technology, or process that consumes, utilizes, or otherwise impacts data.
Effective governance, in our opinion, is governance that helps people use tools better, that reduces friction in the course of accessing tools and resources, and that supports improved operations and strategic planning. So any approach to AI governance needs to start with questions like: what are we trying to accomplish by using AI?
Many of our clients hope to use AI to supercharge their analytics efforts. They would like to generate more insights from their data, they would like to generate those insights more quickly, and they would like that process to be more responsive to the schedules of those who want to act on those insights!
It seems fair to say that there are many promising avenues for applying AI to this purpose, among them the ability to ask questions in plain English (or at least business language), the speed with which large data sets can be explored in search of potential trends, and a broad range of output formats for whatever results come back.
However, before the widespread availability of AI, many organizations had teams of people who were asking questions, organizing and analyzing data, interpreting the results, and packaging up those interpretations into a variety of data products.
And while we've all heard the lament that business people and data people don't always speak the same language, it seems naive at best, and possibly highly detrimental to your organization at worst, to believe that conversations with GenAI are always cutting through the fog of miscommunication. One thing we know for sure is that large language models hallucinate, and that without the right prompting their default behavior looks a lot like telling you what they "think" you want to hear.
It turns out that having analytical insights and taking meaningful action based on those insights are not the same thing, and based on what we've observed over the years, the gap between them will probably not be bridged overnight simply by adding AI to the equation. For more thoughts on this topic, check out our series of posts on AI and analytics from last summer, including our "Demystifying Analytics and Artificial Intelligence?" post.
A quick look at some of the arguments in favor of AI governance reminds us of arguments made over the years in favor of data governance: AI governance builds trust in AI, much the same way data governance builds trust in data; governance helps organizations avoid (or at least better manage) risk; governance makes it possible to increase the use of organizational assets; governance supports making better decisions, complying with regulations, and protecting reputation (don't get sued); and so on.
One of the things we emphasize when discussing data governance is that it's a system for formalizing accountability. Another aspect we highlight is the central role of data stewards, and in particular data stewards acting collaboratively. When you steward data, you make decisions about its use, and you make those decisions with the good of the organization in mind. A lot of different tasks go into data stewardship, but in our opinion successful data stewards exercise authority so that high-quality data is available, is easily understood, and is accessible at appropriate places and times.
If this is what data stewardship looks like at your organization, then it's probably fairly clear where AI fits into your data management and usage procedures. If data quality is assured, if you have a robust business glossary, if your data sets are curated and access is granted systematically, and if you have standards and reviewers in place to manage the development and release of data products, then you're probably ready to test AI's capabilities to help your organization realize additional value from its data assets.
Many organizations aspire to this model of stewardship, but have yet to attain it. (We discussed one set of barriers in our post on reluctant data stewards just last month.) When we hear these organizations talk about using AI in their data environment, we are reminded of the old proverb about running before walking. This may just be a muscle-memory reaction on our part, since we've seen too many migrations, data warehouses, and BI tool deployments go south due to weak data governance foundations, often composed of a patchwork of outdated and sometimes contradictory policies, variably understood and enforced regulations, and an unacknowledged reliance on the heroic efforts of dedicated employees.
AI already has a number of applications that are not primarily data-related. Employees will use AI to help create slide decks, to summarize activities, to generate sales and marketing materials, and to produce any number of similar text (or text-and-image) outputs. AI is undoubtedly being used to help develop and enhance training materials, certifications, fillable forms, quizzes and exams, and the like. While some of this work could well have an impact on an organization's data, such as A/B/C testing of marketing or training materials, the creation process might not consume or otherwise be exposed to your data.
Still, your organization is probably looking to manage the use of AI in tasks like these, and for the same reasons noted above: assuring quality work, avoiding unnecessary risk, maintaining brand reputation, and so on. We don't expect that most employees will really understand how large language models work, or that they will create and deploy agents, but we do expect employees to use tools responsibly, to do work that benefits the organization, and to be cost-effective and collaborative where possible.
Our approach to data governance is driven by the same impulses. Employees encounter data in many different circumstances, and a one-size-fits-all approach to managing those encounters often ends up being an empty technocratic exercise. We observe organizations devoting countless hours to creating policies few will read and many will ignore, convening meetings that too many attendees find irrelevant, and focusing on security concerns and risks that are foreign to most users' experience.
Our pragmatic approach, which is easily supported using the Data Cookbook, our data intelligence solution, involves engaging users when they have questions about data. If there are discrepancies in terminology, the affected parties work to build out a business glossary. If there are questions about accuracy, then data quality issues can be reported and investigated. Aggressively cataloging data makes it clear what data is available, who is responsible for it, what its lineage is, and so on. Requests for access, or for data products, or for expanded data sets, don't go into a vortex from which they may never emerge.
The focus in our approach to data governance is recognizing that data is a business asset, and that when data is leveraged, when it is used both widely and wisely, employees and users can do better work that benefits them, their department, and the whole organization. Shouldn't similar principles guide your approach to governing AI?
We hope you found this blog post useful. Also check out our data governance spotlight resources located at https://www.datacookbook.com/spotlights. IData has a solution, the Data Cookbook, that can aid employees and the organization in their data governance, data intelligence, data stewardship, and data quality initiatives. IData also has experts who can assist with data governance, reporting, integration, and other technology services on an as-needed basis. Feel free to contact us and let us know how we can assist.

