Self-Service Analytics Discussion About Gatekeeper Model and Data Governance Framework

Many clients come to us when they are planning, or after they have embarked on, an overhaul of their business intelligence (BI) stack. They have often determined that their legacy tools are not providing the data insights they need. It is not always the case that the tools are not good enough; rather, the tools have been in use so long that their use has calcified, and the structure around them has become encrusted with bloated and underperforming reports, poorly documented data models, and no good path to a single version of the truth. There is also often a push from leadership for the organization to start doing self-service analytics.

Analytics, obviously, reflects a different orientation to data insights than business intelligence or operational reporting, and self-service takes that orientation even further from traditional practices. Self-service is probably as much a description as it is a prescription, pushed to the forefront by multiple factors. A few thoughts regarding these analytics tools:

  1. The newest generation of visualization and analytics tools makes it easy to deliver data in an instant, and for even the most casual of data consumers to interact with that data.
  2. These same tools make authoring and sharing data products a less onerous process than in the past, which allows business and data analysts to increase throughput, and it expands the ranks of people doing some form of analytics authoring.
  3. Software that includes improved in-application analytics both empowers certain users and simplifies reporting for them. They no longer have to submit requests for data, or go through the back-and-forth of requirements gathering, as they can see the information they need right in the report in front of them.
  4. In many verticals, data is the coin of the realm, so to speak, and users at all organizational levels expect to use, or are expected to use, data in the course of their regular work.

It is a kind of system shock, after so much time and energy spent over the years in a gatekeeping model, to transition to some version of open access. Gatekeeping in this context, for what it is worth, is not the same as having sentinels or castle guards whose task is to deny entry. But for nearly all of our clients, the method for data delivery has involved a centralized portal for requesting data, a template for gathering data requirements and testing outputs, and a limited and easily managed set of options for delivery. 

In many cases, of course, users find this model inefficient at best. So it comes as no surprise that some of them find ways to get around the formal data request process, whether that is data shopping (making the same request of multiple analysts in the hopes of faster turnaround or friendlier data), simply pulling rank (to get their requests responded to sooner, or to avoid following protocols altogether), or even developing their own shadow analytics data sets and tools.

The gatekeeper model helps manage workload and prioritize requests, and since the IT group is usually charged with enforcing data security and access regulations, it is a natural enough extension of those duties to develop and maintain a security model around reporting and BI. However, we have been in this business a long time now, and in our experience, IT has almost always been desperate to speed up access to data, allow users to author their own deliverables, and make data available in multiple formats so as to enhance understanding and utilization.

This vision for self-service analytics looks a lot like the most recent attempt to fulfill a goal that has been in place for quite some time. Whatever its limitations, the gatekeeper model was animated to some extent by data governance principles, even if those principles were insufficiently articulated, and even if they tended to fall more heavily on data security than on making data meaningful. Whatever your journey to self-service analytics looks like, we would like to propose the following considerations.

Does self-service analytics mean that any user can access any piece of organizational data?
While it may be the case that, as you examine data sets and develop new data models, you end up taking a more expansive view, users still must be authorized by the appropriate data stewards or managers to examine data and use tools.
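To make that concrete, here is a minimal sketch of what steward-mediated authorization could look like, assuming a simple in-memory registry of grants. The DataSet class, the steward and user names, and the grant/can_access methods are all hypothetical illustrations, not features of any particular tool.

```python
# A minimal sketch of steward-mediated access: only the designated data
# steward can authorize a user. All names here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class DataSet:
    name: str
    steward: str                                   # person accountable for this data
    authorized_users: set = field(default_factory=set)

    def grant(self, granted_by: str, user: str) -> None:
        # Only the designated steward may authorize a new user.
        if granted_by != self.steward:
            raise PermissionError(f"{granted_by} is not the steward of {self.name}")
        self.authorized_users.add(user)

    def can_access(self, user: str) -> bool:
        return user in self.authorized_users

enrollment = DataSet(name="enrollment_census", steward="registrar")
enrollment.grant(granted_by="registrar", user="analyst_ana")

print(enrollment.can_access("analyst_ana"))   # True
print(enrollment.can_access("intern_bo"))     # False
```

The point is not the code itself but the principle: open access still flows through an accountable steward, not around one.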

Does self-service analytics mean that any user can generate any analysis?
You are still curating the data sets they get to work with, by which we mean among other things providing a simplified data model and a so-called intuitive user interface. Ideally you are providing users with a strong, informative semantic layer, and you are continuously improving data literacy capabilities across your organization. 
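As a rough illustration of what a semantic layer buys you, here is a minimal sketch in which a governed business term carries both its plain-language definition and the query logic a casual consumer never has to write. The term, table, and column names are hypothetical examples, not any specific product's model.

```python
# A minimal sketch of a semantic layer: each governed business term bundles
# its definition with the query logic behind it. Names are hypothetical.
SEMANTIC_LAYER = {
    "Enrolled Student Count": {
        "definition": "Students registered for at least one credit-bearing course",
        "source": "warehouse.enrollment",
        "expression": "COUNT(DISTINCT student_id)",
        "filter": "credit_hours > 0",
    },
}

def build_query(term: str) -> str:
    """Translate a governed business term into SQL the end user never sees."""
    t = SEMANTIC_LAYER[term]
    return f"SELECT {t['expression']} FROM {t['source']} WHERE {t['filter']}"

print(build_query("Enrolled Student Count"))
# SELECT COUNT(DISTINCT student_id) FROM warehouse.enrollment WHERE credit_hours > 0
```

When the definition and the logic travel together like this, self-service users can pick the term and trust the number.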

What about "rogue" analysts (or citizen data scientists)?
There are users who create their own variables, join their own data sets, and author their own deliverables without any oversight. We mentioned that shadow analytics are endemic to a gatekeeping BI model, where people use spreadsheets to create extracts, join data, develop variables, and publish the output. Self-service analytics moves these satellite analytics out of the shadows into a realm where work can be validated.
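One hedged example of what "validated" could mean in practice: reconcile an analyst-authored figure against the certified source before the deliverable is signed off. The figures and the tolerance below are made up for illustration.

```python
# A minimal sketch of validating a formerly "shadow" data set once it enters
# a governed space: compare the analyst's figure to the certified source.
# All figures and the tolerance are hypothetical.
certified_headcount = 10_412   # value from the certified, governed data set
analyst_headcount = 10_398     # value from the analyst-authored extract
tolerance = 0.005              # allow 0.5% variance before flagging for review

variance = abs(analyst_headcount - certified_headcount) / certified_headcount
if variance <= tolerance:
    print(f"Validated: variance of {variance:.2%} is within tolerance")
else:
    print(f"Flagged for review: variance of {variance:.2%} exceeds {tolerance:.1%}")
```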

For yours to be an organization where self-service analytics actually works, we would argue that the data available for analysis must be governed and documented throughout its lifecycle, and the process by which analytics are developed, tested, published, and evaluated must also be part of the data governance framework. This framework will include people (data stakeholders), training, policy revision and process improvements, and quite possibly new tools (or at least better use of existing tools).

The Data Cookbook solution can be an integral piece of your journey to self-service analytics. It allows for an inventory of data assets, including data systems, curated data sets, dashboards, and visualizations. Its business glossary and data quality attribute features help to demystify data, and help your data stakeholders explain and document data in the language of your organization. The Data Cookbook's approval workflows allow you to certify data deliverables, sign off on official data definitions, and work collaboratively and asynchronously to increase your organization's collective data knowledge.

We hope this blog post gave you some things to think about regarding your BI analytics and your move to self-service analytics.

The Data Cookbook can assist an organization in its data governance, data intelligence, data stewardship, and data quality initiatives. IData also has experts who can assist with data governance, reporting, integration, and other technology services on an as-needed basis. Feel free to contact us and let us know how we can assist.

Photo Credit: StockSnap_LW51P4H4Y6_gatelock_selfservice_analytics_BP #B1240

Aaron Walker
About the Author

Aaron joined IData in 2014 after over 20 years in higher education, including more than 15 years providing analytics and decision support services. Aaron’s role at IData includes establishing data governance, training data stewards, and improving business intelligence solutions.
