IData Insights Blog

Data Foundations for AI-Enhanced Analytics

Written by Aaron Walker | Sep 12, 2025 9:53:02 PM

We’ve devoted a few words in this space over the past few months to some thorny issues facing organizations that want to do more with their data, and that are thinking of employing generative artificial intelligence (AI) as part of their analytics efforts. One issue we noted was a failure to think clearly and critically about what analytics can and cannot do. Another was the challenge of organizing or curating high-quality data sets for analytical purposes. Still another was the difficulty data consumers and data providers have in making and fulfilling requests for and about data. In this blog post we will discuss how we got here, where we got off course, and how a better map might help.

How Did We Get Here (Wherever That Is)?

How did we get to the point where so many organizations struggle to take advantage of the massive amounts of data they collect, and where so many hopes are now placed on AI to help? We might summarize the past quarter century of data management as follows.

  • First, we needed a BI tool with a logical model or a semantic layer. That tool was going to allow users to extract data specific to their needs without having to understand a whole data model, join tables together, know the names of database columns, or really do much more than drag and drop data objects around. What happened? Well, the semantic layer was usually pretty thin, and often didn’t amount to much more than giving a full name to a column abbreviation. There was certainly no business-oriented terminology in this layer. And even authoring queries against a somewhat curated data set was no small matter, so in practice users would just dump thousands of rows and dozens of columns to a local spreadsheet.

  • When legacy BI tools didn’t solve the problems by themselves, we then needed a data warehouse or other centralized data store for reporting and analysis. It’s true that we wanted improved performance, and that we had multiple data sources we wanted to centralize in one location—it’s not that there weren’t good reasons for this engineering feat. But we also have to admit that our expensive BI tools were not providing us with reliable, digestible information in a timely or consistent manner. Alas, we still gave short shrift to user-facing documentation, and once we opened the Pandora’s box of adding tables on request, we were always behind the eight ball when it came to making data available to users. Moreover, we were still using our legacy BI tools to query the warehouse, and in practice we often ended up injecting yet another layer of complexity and impenetrability into the data request process.

  • Now, some of the time we did improve performance, and we did make more information available to at least some consumers. But the output still required massaging and analysis, and simply having a data warehouse didn’t magically upskill any of our users. Next, we put our eggs in the visualization and dashboard basket. And, well, in some cases, surely there was additional incremental progress. Charts of aggregated data, color coding to separate populations, and other features of visualization tools allow for easier understanding, and even some level of interactivity, assuming the consumer is up for it, of course!

  • Still, while the investment in dashboards may mean that more people are looking at data more frequently, if our clients are any indication, this work has generated as many problems as it has solutions. Most of the analytics being produced are operational in nature, and their support for strategic planning and decision making is minimal. More data consumption means more opportunities for misunderstanding and, if not misuse, then at least questionable applications of so-called insights to business problems. And the profusion of SaaS tools with embedded analytics means that the enterprise dashboard application may not be the only tool providing visualizations shared with leadership.

Today it seems we’re hoping AI can generate the magical, mythical insights we all want in order to solve our most vexing business problems. And while there is surely a lot of hype around AI, we have definitely seen some very impressive displays of AI as an analytics tool. However, given the trajectory we just outlined of wave after wave of the next big thing, all of which failed to move the needle significantly, we think it’s wise to remain skeptical.

Where Did We Go Off Course?

Our story is obviously an exaggeration and an oversimplification, but we’ve seen it enacted by too many clients. In our experience, these tools were not deployed or implemented as thoroughly or as consistently as we would have advised. Training was skimped on, access to valuable data continued to be limited, and staff to support training and access were in short supply. Equally importantly, these were sold and rolled out as technical solutions to what were at heart cultural and procedural problems. Finally, even in those rare cases where sufficient resources were allocated to deployment, these investments still didn’t necessarily reflect the kind of paradigm shift that was needed to recognize data as an asset and to utilize it as one.

From our vantage point as consultants and vendors, and given the benefit of hindsight, it doesn’t seem that controversial to suggest that the reason these investments didn’t pay off as well as they could have was mainly that they did not solve the central data problems our clients were facing. What are the central data problems organizations face? Some of them have to do with data intelligence: for example, not knowing what kind of data is captured, where it is stored, and/or who has access to it. A big issue is data stewardship: is data defined consistently, are those definitions made public, are quality standards for the collection and maintenance of data in place (and adhered to), etc. A broad issue, of course, is data governance: who’s responsible for data, what does that responsibility look like, how do data governance policies translate into data management activities, and so on. Over the past half decade, if not longer, data literacy, or the lack thereof, has been a whipping boy in this discourse, although we would argue that low data literacy actually manifests in each of the issues named in this paragraph.

There are plenty of commonly understood reasons why analytics as a practice fails to take off at organizations.

  • The low-hanging fruit gets picked first. So the insights that seem most appealing are the most obvious ones, and the leaders who act on them are often the most data-savvy. As you dig deeper for marginal gains, and as you have to explain those insights to people who are, at least initially, less equipped to understand them, you may find yourself falling short of the heights you first attained!

  • The analytics stack doesn’t support quick and easy output—this often goes hand in hand with our first point. The data structure may be insufficient, the tools themselves may perform poorly, the cost of building user-friendly visualizations may be excessive, etc. Even if initial returns are promising, it can be difficult to maintain momentum.

  • The people who produce analytics may not understand the business all that well, so their products may not be as relevant as they could be. These products may also not be optimized for understanding by lay viewers, and the producers themselves may lack the skills to convey their insights to other actors in ways that spur action.

  • The people who consume analytics don’t understand the data. While in some cases data consumers lack the necessary quantitative reasoning and/or critical thinking skills, in our experience the issue usually boils down to this: the people who consume analytics don’t see a path to making use of the data as presented. It’s too detailed, or perhaps not detailed enough; the presentation is too busy, or unfocused, or simply lacks context. Regardless of the specifics, the data provided ultimately doesn’t do enough to answer the question originally asked.

  • Most importantly, there is no path for engineers, analysts, and consumers to get on the same page with respect to terminology, access, availability, data’s role in the organization, and expectations. Our colleague recently shared with us some thoughts about making effective requests, and throughout that conversation we kept thinking that this is exactly why organizations can’t get the value they want from their data: ineffective requests! We dove into this in some detail recently.

A Better Map Might Help

There may be good reason to believe that AI will be able to take natural language questions, or even jargon-heavy formulations, turn them into effective data queries, generate interesting visualizations, and perhaps suggest thought-provoking insights to be drawn from those outputs.

For this to succeed, great prompts are going to be needed. Our track record of making effective requests is not wholly encouraging, but AI is infinitely patient and available on demand, so there may be some new structural advantages.

For this to succeed, we’ll finally have to take data quality seriously, and we’ll have to stop relying on our internal data heroes who can take a quick look at output and tell us where something’s off. (Unless you think you can train AI to do this for you.)

For this to succeed, we’ll have to get beyond the technical minimalism of semantic layers, and instead build enough of a business glossary and data product catalog for AI to translate a data request into a potentially useful data product.
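To make that a little more concrete, here is a minimal sketch, in Python, of what a single business-glossary entry and the context it could hand to an AI query generator might look like. Everything in it is hypothetical: the term, tables, columns, and steward are invented for illustration, and no particular tool or API is implied.

```python
# Hypothetical example: a governed business-glossary entry and the plain-text
# context it could contribute to an AI prompt. All names here are invented.

GLOSSARY = {
    "retention rate": {
        "definition": (
            "Percentage of first-year students who enroll again the "
            "following fall term."
        ),
        "steward": "Office of Institutional Research",
        "columns": {
            "cohort term": "enrollment_fact.cohort_term_code",
            "enrolled next fall": "enrollment_fact.next_fall_flag",
        },
        "quality_notes": "Excludes transfer students; refreshed nightly.",
    }
}


def glossary_context(question: str) -> str:
    """Gather glossary entries whose terms appear in the question, formatted
    as plain-text context an AI query generator could use alongside a schema."""
    lines = []
    for term, entry in GLOSSARY.items():
        if term in question.lower():
            lines.append(f"Term: {term}")
            lines.append(f"Definition: {entry['definition']}")
            lines.append(f"Steward: {entry['steward']}")
            lines.append("Columns: " + ", ".join(
                f"{name} -> {col}" for name, col in entry["columns"].items()))
            lines.append(f"Notes: {entry['quality_notes']}")
    return "\n".join(lines) or "No governed terms matched this question."


if __name__ == "__main__":
    # In practice this context would be passed to whatever AI tool generates
    # the query; here we simply print it.
    print(glossary_context("What was our retention rate for the fall 2023 cohort?"))
```

Without entries like this behind it, the AI has little more to work with than the thin, column-renaming semantic layers we described above.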

And for this to succeed, we’re going to need a much better understanding of how our data is an asset, and what we’re trying to use that asset to help accomplish in our business.

Do all those requirements sound like data governance to you? That’s what it sounds like to us. We’ve long thought that the time is always right to govern data, but maybe the age of AI is the rightest time of all! Ready to get to work? We can help. We'd love to help.

IData has a solution, the Data Cookbook, that can aid employees and the organization in their data governance, data intelligence, data stewardship, and data quality initiatives. IData also has experts who can assist with data governance, reporting, integration, and other technology services on an as-needed basis. Feel free to contact us and let us know how we can assist.

 

Photo Credit: StockSnap