At LigaData we’ve been fortunate to work with a number of global enterprises over the last 10 years in the areas of digital transformation and data system modernization. This is one of a series of occasional thought pieces capturing our learnings and observations (plus the odd pointer to interesting research).

Today we’ve been musing on the world of Big Data and its practical impact on decision making in organizations, particularly at the executive level.

There’s no doubt that the availability of transactional data has transformed the speed and effectiveness of operators at the business-unit level:

  • Customer Service agents can provide a much more efficient and personal response to customers when they can immediately access their purchase history or credit scores;
  • Engineers can decide on the priority of network faults;
  • Retailers can price and promote competitively on the fly…

However, these benefits have not always carried through to decision making at senior management or boardroom level. Ironically, it’s the very features of big data (Volume, Velocity, Variety and Veracity) that are also the source of the problems.

Firstly, volume & velocity.

There’s just so much of the stuff, and it never stops coming (that’s the velocity). By next year the digital universe (the amount of data produced and copied across the globe) should reach 44 zettabytes (44 trillion gigabytes) (1). Simply put, before Big Data we could not capture every signal of customer behavior or intent; now we can, and at a much higher frequency. While this sounds great, it can also be overwhelming – both conceptually and practically. In large companies with legacy systems, each generating its own data sets, this tends to push managers toward a DIY approach to reporting and the creation of silos of information, pulled together with inconsistent processes. We’ve seen organizations with hundreds of people dedicated solely to creating management reports.

Additionally, even when reliable near-real-time data is available, its value can be wasted if the management culture is less than agile and struggles to respond to its own insights fast enough. This happens.

Secondly, variety.

With so many different data sets available, the challenge lies in establishing which data points are critical to the particular decision to be made (and to monitoring its ultimate success). Top-level Key Performance Indicators (KPIs) that were defined months ago and baked into a corporate dashboard may not be the best metrics. New systems might have been deployed somewhere in the organization that output just the data needed, but that information hasn’t filtered up to the top. Without enterprise-wide data governance and good communication, people may have different understandings of data definitions (what, for example, is a telco’s “active user”? One who has just switched on their phone, one who has topped up their pay-as-you-go account, or one on a contract who has not made a call for a month?). Identifying the correct data points to use in models, and managing those going forward, can be a nightmare.
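To make the “active user” ambiguity concrete, here is a toy sketch (hypothetical event data and definitions, not any telco’s real logic) showing how three plausible definitions of the same metric produce three different numbers from the same records:

```python
from datetime import date, timedelta

# Hypothetical subscriber events: (user_id, event_type, event_date).
today = date(2024, 6, 30)
events = [
    ("u1", "phone_on", today - timedelta(days=2)),
    ("u2", "topup",    today - timedelta(days=10)),
    ("u3", "call",     today - timedelta(days=45)),  # contract user, idle > 1 month
    ("u3", "phone_on", today - timedelta(days=1)),
]

def active(events, kinds, window_days):
    """Users with at least one qualifying event inside the window."""
    cutoff = today - timedelta(days=window_days)
    return {u for u, kind, d in events if kind in kinds and d >= cutoff}

print(len(active(events, {"phone_on"}, 7)))   # "switched on their phone this week"
print(len(active(events, {"topup"}, 30)))     # "topped up pay-as-you-go this month"
print(len(active(events, {"call"}, 30)))      # "made a call this month"
```

Same events, three different “active user” counts – exactly the inconsistency that enterprise-wide data definitions are meant to prevent.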

Thirdly, veracity.

In research published by KPMG last year (2), only 35% of IT decision-makers had a high level of trust in their organization’s analytics, a statistic we’ve seen borne out in many organizations we’ve worked with. This lack of confidence is not just about underlying data quality (accurate definitions, no gaps, no double-counting, etc.), but also about the data being compliant, securely processed, and correctly modeled and analyzed.
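The basic quality problems – gaps and double-counting – are mechanical enough to check automatically. Here is a minimal sketch, using a hypothetical daily revenue feed, of the kind of check that helps rebuild that trust:

```python
from datetime import date, timedelta

# Hypothetical daily revenue feed: (day, revenue).
records = [
    (date(2024, 6, 1), 1200.0),
    (date(2024, 6, 2), 1150.0),
    (date(2024, 6, 2), 1150.0),  # a double-counted load
    (date(2024, 6, 4), 1300.0),  # 3 June is missing
]

def quality_report(records):
    """Flag duplicated days and gaps in what should be a daily series."""
    dates = [d for d, _ in records]
    dupes = {d for d in dates if dates.count(d) > 1}
    span = {min(dates) + timedelta(days=i)
            for i in range((max(dates) - min(dates)).days + 1)}
    gaps = span - set(dates)
    return {"duplicates": sorted(dupes), "gaps": sorted(gaps)}

report = quality_report(records)
print(report["duplicates"])  # the double-counted day
print(report["gaps"])        # the missing day
```

Real pipelines would add range checks, schema validation and reconciliation against source systems, but the principle is the same: trust grows when quality is measured, not assumed.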

Indeed, with analytics comes a whole new challenge.

Going AI on the analysis

Given the scarcity and cost of data scientists to label, clean, model and analyze data according to business requirements (the objective, of course, being the delivery of “actionable insights”), attention is turning to machine learning and the field of ‘Augmented Analytics’. This is described by Gartner (3) as:

an approach that automates insights using machine learning and natural-language generation, [that] marks the next wave of disruption in the data and analytics market.

Indeed, Gartner places it at number one of their top 10 trends in data and analytics technology with significant disruptive potential over the next few years (it’s worth a read) (4). The point is that Augmented Analytics not only automates the tedious cleaning and labeling of large volumes of varied information but, using a knowledge base of historic business events, starts to put context around the information it’s working on, and even provides recommendations and action points (which it tracks and then uses for further learning).
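At its simplest, the pattern behind such tools is: detect something statistically unusual in a KPI, then explain it in plain language. The following is a toy illustration only (hypothetical churn figures, not any vendor’s product), sketching that detect-then-narrate step:

```python
from statistics import mean, stdev

# Hypothetical weekly churn rates (%); the last value is the latest reading.
weekly_churn = [2.1, 2.0, 2.3, 2.2, 2.1, 3.4]

def summarize(series, kpi_name, threshold=2.0):
    """Flag the latest value if it deviates sharply from recent history,
    and phrase the finding the way a dashboard might surface it."""
    history, latest = series[:-1], series[-1]
    mu, sigma = mean(history), stdev(history)
    z = (latest - mu) / sigma
    if abs(z) > threshold:
        direction = "above" if z > 0 else "below"
        return (f"{kpi_name} is {latest:.1f}%, {abs(z):.1f} standard deviations "
                f"{direction} its recent average of {mu:.1f}% - worth investigating.")
    return f"{kpi_name} is within its normal range."

print(summarize(weekly_churn, "Weekly churn"))
```

Production systems layer far more sophistication on top (seasonality, causal context, learning from which alerts were acted on), but the automation of the “notice and narrate” loop is the core idea.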

However, while the application of Artificial Intelligence (AI) in this type of solution can clearly bring countless benefits to the data and analytics teams – and leadership teams – of organizations, the one element that keeps us awake at night is getting the data governance right.

Even if you’re not yet ready to leap on the AI bandwagon, we cannot stress enough how important it is that due care and attention is paid to your data management now: its definitions and quality, its access and lineage, its controls and security.

So often we’ve seen responsibility for this fall between organizational lines, and it’s only when a major problem occurs (an incorrect or misunderstood algorithm at one end, a security breach at the other) that the question of who’s responsible and accountable – the CIO, the CFO, the CMO, the CEO… – suddenly comes up for debate.

The potential of big data for enabling informed decisions at board level is there, and growing daily. Getting the governance right can happen now. How far do you feel your organization has travelled along this path? How confident do you feel in the insights and data you use to make your decisions?

The following links are a good read on these topics, and we’d love to hear your thoughts and experiences.


  1. The Digital Universe of Opportunities (2014) IDC:
  2. Guardians of Trust (2018) KPMG:
  3. Gartner report reference: