Analytics reflects a different orientation to data insights than business intelligence or operational reporting, and self-service takes that orientation even further from traditional practices. Self-service is as much a description as it is a prescription, pushed to the forefront by multiple factors. A few thoughts regarding these analytics tools:
It is a kind of system shock, after years of time and energy invested in a gatekeeping model, to transition to some version of open access. Gatekeeping in this context, for what it is worth, is not the same as posting sentinels or castle guards whose task is to deny entry. But for nearly all of our clients, data delivery has involved a centralized portal for requesting data, a template for gathering data requirements and testing outputs, and a limited, easily managed set of delivery options.
In many cases, of course, users find this model inefficient at best. So it comes as no surprise that some of them find ways to get around the formal data request process, whether that is data shopping (making the same request of multiple analysts in the hopes of faster turnaround or friendlier data), simply pulling rank (to get their requests responded to sooner, or to avoid following protocols altogether), or even developing their own shadow analytics data sets and tools.
The gatekeeper model helps manage workload and prioritize requests, and since the IT group is usually charged with enforcing data security and access regulations, it is a natural enough extension of those duties to develop and maintain a security model around reporting and BI. However, we have been in this business a long time now, and in our experience IT has almost always been desperate to speed up access to data, allow users to author their own deliverables, and make data available in multiple formats so as to enhance understanding and utilization.
This vision for self-service analytics looks a lot like the most recent attempt to fulfill a goal that has been in place for quite some time. Whatever its limitations, the gatekeeper model was animated to some extent by data governance principles, even if those principles were insufficiently articulated and even if they tended to fall more heavily on data security rather than on making data meaningful. Whatever your journey to self-service analytics looks like, we would like to propose the following considerations.
Does self-service analytics mean that any user can access any piece of organizational data?
While it may be the case that as you examine data sets and develop new data models you end up taking a more expansive view, users still must be authorized by the appropriate data stewards or managers to examine data and use tools.
Does self-service analytics mean that any user can generate any analysis?
You are still curating the data sets they get to work with, by which we mean among other things providing a simplified data model and a so-called intuitive user interface. Ideally you are providing users with a strong, informative semantic layer, and you are continuously improving data literacy capabilities across your organization.
What about "rogue" analysts (or citizen data scientists)?
There are users who create their own variables, join their own data sets, and author their own deliverables without any oversight. We mentioned that shadow analytics are endemic to a gatekeeping BI model, where people use spreadsheets to create extracts, join data, develop variables, and publish the output. Self-service analytics moves these satellite analytics out of the shadows into a realm where work can be validated.
For yours to be an organization where self-service analytics actually works, we would argue that the data available for analysis must be governed and documented throughout its lifecycle, and the process by which analytics are developed, tested, published, and evaluated must also be part of the data governance framework. This framework will include people (data stakeholders), training, policy revision and process improvements, and quite possibly new tools (or at least better use of existing tools).
The Data Cookbook solution can be an integral piece of your journey to self-service analytics. It allows for an inventory of data assets, including data systems, curated data sets, dashboards, and visualizations. Its business glossary and data quality attribute features help to demystify data and help your data stakeholders explain and document data in the language of your organization. The Data Cookbook's approval workflows allow you to certify data deliverables, sign off on official data definitions, and work collaboratively and asynchronously to increase your organization's collective data knowledge.
We hope this blog post gave you some things to think about regarding your BI analytics and your move to self-service analytics.
The Data Cookbook can assist an organization in its data governance, data intelligence, data stewardship and data quality initiatives. IData also has experts that can assist with data governance, reporting, integration and other technology services on an as needed basis. Feel free to contact us and let us know how we can assist.
Photo Credit: StockSnap