Introduction

Business intelligence (BI) plays an integral role in every organisation, turning data into meaningful information for making the right decisions. Increasing regulatory scrutiny and evolving business needs, plus the relentless focus on cost and efficiency, are challenging businesses to modernise their processes and decision-making methodology. Business decisions can no longer be based on gut feeling alone; they must follow a sensible, calculated, and insight-driven approach grounded in data.

Thinking differently, and more strategically, about integrating data and communicating insights can help any organisation improve its top line and bottom line. Often, companies trying to modernise their business processes rely on ready-made solution suites or data visualisation tools like Tableau or QlikView to deliver results.

However, the majority of organisations that take this approach end up disillusioned due to the poor adoption of BI tools and technology on account of the complexities arising from people, process, and implementation issues. Some of the common challenges faced by organisations are:

  • Lack of awareness and change management
  • Metrics are either not useful, or there are too many metrics that do not provide the required insights
  • Tool incompatibilities with legacy systems
  • Lack of proper infrastructure required for efficient data management
  • Complex and time consuming data integration and preparation steps
  • High technical skill requirements limit business user adoption

Sub-optimal outcomes are caused by the lack of a comprehensive, structured, and tailored strategy that starts with a focused assessment of key issues and gaps in data processes, and aligns processes with people and technology.

EXL’s Framework for Enhancing Business Intelligence

By adopting certain best practices which form the core of a BI solution and following guiding principles for transforming existing BI architecture, organisations can ensure they get the most out of their business intelligence solutions.

EXL’s framework looks at the entire landscape of business intelligence holistically and achieves the desired business outcomes by:

  1. Identifying the right KPIs and metrics
  2. Selecting the most appropriate data architecture and integrating the required data sources
  3. Choosing a future-ready analytical process
  4. Adopting a visualisation toolset that fits the enterprise requirements
  5. Getting the right talent for the job

[Figure: EXL’s high-level framework for enhancing business intelligence]

A. Setting Requirements and High-Level Design

1. Setting Objectives

Setting common business objectives for business intelligence operations is one of the first and most important steps in designing a state-of-the-art ecosystem. It is very important to set objectives that are aligned with the company’s goals and core values. For instance, if the goal of the company is to provide affordable insurance services to all, one of the objectives in the marketing domain would be increasing the reach of the company’s policies to a wider audience, while the pricing team would focus on accurately pricing policies based on different types of risk.

2. Defining and Selecting Key Performance Metrics

A metric refers to a direct numerical measure that depicts a business concept. Successful business intelligence transformation requires choosing the right metrics for measuring success. These metrics should be planned in collaboration with the leadership and business units to effectively check the health of a company.

A good metric should align closely with the objective set forth. For instance, revenue growth may be a good metric for any insurance company, but if a company’s objective is to increase market share, then most metrics should relate to factors affecting market reach, including month-on-month customer acquisition and cross-sell opportunities.

All metrics should be predictable, achievable, and actionable. They should also be easy to track over time. Where possible, a good metric should also be comparable to competitors’ metrics, enabling a company to identify areas for improvement.

KPIs can be defined at various levels depending on how granular they need to be.



Metrics can be classified into leading or lagging indicators based on how they impact the decision making process.

  • Leading indicators: Measure the activities necessary to achieve a company’s goals
  • Lagging indicators: Measure the actual results to show whether or not a company hit its goals

Ideally, a combination of these metrics should be used. For example, if a company wants to acquire one million customers, the cumulative number of customers acquired would be a lagging indicator, while month-on-month customer acquisition activity should be tracked as a leading indicator.
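As a minimal illustration of the distinction, the Python sketch below (with hypothetical figures and field names) derives both kinds of indicator from a monthly acquisition log:

```python
# Hypothetical monthly acquisition counts for the one-million-customer goal.
monthly_acquisitions = {
    "2024-01": 55_000,
    "2024-02": 62_000,
    "2024-03": 71_000,
}
GOAL = 1_000_000

# Leading indicator: month-on-month acquisition activity (are we on pace?).
months = sorted(monthly_acquisitions)
mom_growth = {
    later: monthly_acquisitions[later] - monthly_acquisitions[earlier]
    for earlier, later in zip(months, months[1:])
}

# Lagging indicator: cumulative customers acquired versus the stated goal.
total_acquired = sum(monthly_acquisitions.values())

print(f"Month-on-month growth: {mom_growth}")
print(f"Total acquired: {total_acquired:,} ({total_acquired / GOAL:.1%} of goal)")
```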

Choosing the right metric is an iterative process; no one gets it right the first time. Metrics should be revisited when there is a change in business goals.

3. Creating Business Rules

Business rules are used to define entities, attributes, relationships, and constraints. Usually, they are used to explain a policy, procedure, or principle for an organisation that stores or uses data.

Once the metrics are set, business rules need to be defined and agreed upon by the different business units. These rules are guided by a variety of elements including regulatory agencies, industry standards, business acumen, and common sense. They often vary from country to country, industry to industry, and even business to business. Domain experts define and manage these business rules within an organisation.

For example, insurance organisations may define cross-selling as getting a customer to add new products to their portfolio while also renewing all previously purchased products. This definition may vary across geographies and companies. Therefore, different business units and geographies need to collaborate to agree upon and document business rules.

There are three types of business rules, illustrated in the code sketch after the list:

  • Derivation rule: The derivation rule transforms information received into values returned, such as calculating average revenue per user (ARPU).
  • Constraint rule: The constraint rule is used to determine whether the values of a transaction or operation are consistent. For example, a rule could require that the state and ZIP code match before a customer can proceed with a transaction.
  • Invariant rule: Invariant rules look at multiple changes and ensure that the composite results are consistent. For example, the total received premium for an account must equal the previous premium balance plus the new premium received. If the numbers do not reconcile, the system is leaking money, and the issue must be escalated.
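A minimal sketch of how the three rule types might be encoded, assuming hypothetical record fields and a toy state/ZIP lookup; in production, such rules would more likely live in a rules engine or as database constraints:

```python
from decimal import Decimal

# Hypothetical lookup of valid ZIP-code prefixes per state (constraint rule data).
STATE_ZIP_PREFIXES = {"NY": ("10", "11"), "CA": ("90", "91", "92")}

def arpu(total_revenue: Decimal, user_count: int) -> Decimal:
    """Derivation rule: transform information received into a returned value (ARPU)."""
    return total_revenue / user_count

def state_zip_consistent(state: str, zip_code: str) -> bool:
    """Constraint rule: the transaction may proceed only if state and ZIP match."""
    return zip_code.startswith(STATE_ZIP_PREFIXES.get(state, ()))

def premium_balanced(previous_balance: Decimal, new_premium: Decimal,
                     total_received: Decimal) -> bool:
    """Invariant rule: total received premium must equal the previous
    balance plus the newly received premium; otherwise, escalate."""
    return total_received == previous_balance + new_premium

print(arpu(Decimal("120000"), 4000))                                     # 30
print(state_zip_consistent("NY", "10001"))                               # True
print(premium_balanced(Decimal("500"), Decimal("120"), Decimal("620")))  # True
```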

B. Data Architecture and Technology

Data architecture and technology solutions must align with the BI strategy of an organisation. Some of the key steps in getting the right data ready for subsequent stages are as follows:

1. Sourcing the Right Data

  • Structured internal data: Frontend applications capture information entered by an agent, customer, or other user. This data is stored in the source systems in its raw format. There are typically multiple source systems with a variety of data structures and complex interlinkages. Whenever there is a change in requirements, the source systems need to be updated accordingly to capture the correct data elements, and ensure that the data is flowing into the data lake.
  • Unstructured data: With the evolution of ML and AI techniques such as natural language processing, text mining, speech analytics, and image and video analytics, it has become possible to convert unstructured data into structured formats. A lot of information previously locked away in unstructured formats like claim notes and agent notes can now be incorporated into the decision-making process.
  • External data: Apart from capturing internal data, it has become increasingly important to bring in relevant data from multiple external data sources. This helps enrich models with variables directly or indirectly impacting the target variable selected to make more informed strategic decisions.

2. Data Pre-Processing and Data Loading:

The staging area is an intermediate storage area used for data processing during the extract, transform, and load (ETL) process. Raw data is collected from multiple source systems and stored in the staging area. This is the initial stage of the database, where tables are loaded without applying any transformations or business rules.
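A minimal sketch of this landing step, assuming a pandas/SQLAlchemy stack and a hypothetical source extract and connection string; the key point is that rows arrive in staging unchanged, with only load metadata added:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection and source-system extract.
engine = create_engine("postgresql://user:pass@warehouse/bi")
raw = pd.read_csv("policies_extract.csv", dtype=str)  # keep values raw and untyped

# Add load metadata only; no transformations or business rules yet.
raw["load_timestamp"] = pd.Timestamp.now(tz="UTC")
raw["source_system"] = "policy_admin"

# Land the data as-is in the staging schema.
raw.to_sql("stg_policies", engine, schema="staging",
           if_exists="append", index=False)
```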

3. Data Transformation and Conformance:

The data goes through a number of data management steps, such as data cleansing, outlier treatment, data mapping, and data roll-ups, to enhance data quality and make it ready for further analysis. Important business rules are also applied to the data to make it more robust.
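Continuing the staging sketch above, a hypothetical transformation step might cleanse the staged data, cap outliers, and apply a simple business rule before writing to a conformed layer (column and schema names are illustrative):

```python
# Read the staged table back for transformation.
df = pd.read_sql_table("stg_policies", engine, schema="staging")

# Data cleansing: enforce types, drop incomplete rows and duplicate policies.
df["premium"] = pd.to_numeric(df["premium"], errors="coerce")
df = df.dropna(subset=["policy_id", "premium"]).drop_duplicates("policy_id")

# Outlier treatment: cap premiums at the 1st and 99th percentiles.
low, high = df["premium"].quantile([0.01, 0.99])
df["premium"] = df["premium"].clip(low, high)

# Apply a derivation-style business rule: flag high-value policies.
df["high_value"] = df["premium"] > 10_000

# Write the conformed output for downstream analysis.
df.to_sql("policies", engine, schema="conformed",
          if_exists="replace", index=False)
```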

4. Data Lake Creation and Data Storage:

Data lakes are repositories of integrated data from one or more disparate sources that act as a single source of truth. They store current and historical data in one place and serve as the central repository for all data. The data required for various processes and analyses can be drawn from the data lake, streamlining the data-gathering process.

The data lake can be hosted on on-site servers or in the cloud. Each method comes with its own set of pros and cons.



5. Data Cube:

The data cube is used to represent data along with some measure of interest. It is a data abstraction for evaluating aggregated data from a variety of viewpoints. Even though it is called a cube, a data cube can be one-, two-, or three-dimensional, or track even more dimensions as needed. Each dimension represents a new measure, whereas the cells in the cube represent the facts of interest.

Data cubes fall into two main categories:

  • Multidimensional Data Cube: Most Online Analytical Processing (OLAP) products are built on a structure in which the cube is patterned as a multidimensional array. These multidimensional OLAP (MOLAP) products usually offer better performance than other approaches, mainly because they can index directly into the structure of the data cube to gather subsets of data. As the number of dimensions grows, the cube becomes sparser: many cells representing particular attribute combinations contain no aggregated data. This in turn inflates storage requirements, sometimes to undesirable levels, making a MOLAP solution untenable for huge data sets with many dimensions. Compression techniques might help, but their use can damage the natural indexing of MOLAP.
  • Relational OLAP: Relational OLAP (ROLAP) makes use of the relational database model. Compared to a multidimensional array, the ROLAP data cube is implemented as a set of relational tables (approximately twice as many as the number of dimensions). Each of these tables, known as a cuboid, represents a particular view.

The data cube acts as a golden source for all reporting, avoiding data duplication and different numbers being reported for the same metric. If a visualisation tool like Tableau is used, the data needs to be aggregated and fed into the cube, as such tools cannot perform heavy data manipulation. For example, if a user wishes to see a customer’s balance across multiple products, the data cube would store this information at an aggregated level, rolled up against measures like product, geographic location, and month, as in the sketch below. From there, the data can easily be pulled into Tableau and manipulated as required.
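As a minimal illustration (hypothetical column names), the aggregated extract feeding a tool like Tableau could be produced with a roll-up such as:

```python
import pandas as pd

# Hypothetical transaction-level data from the conformed layer.
txns = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "product":     ["auto", "home", "auto", "life"],
    "region":      ["UK", "UK", "EU", "EU"],
    "month":       ["2024-01", "2024-01", "2024-02", "2024-02"],
    "balance":     [1200.0, 800.0, 950.0, 400.0],
})

# Roll balances up against the cube's dimensions: product, region, and month.
cube = (txns.groupby(["product", "region", "month"], as_index=False)
            .agg(total_balance=("balance", "sum"),
                 customers=("customer_id", "nunique")))

# This pre-aggregated table is what the visualisation tool would consume.
print(cube)
```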

C. Data Analysis and Insight Generation

Analytics helps in better understanding data and provides the insights necessary to form business strategies. It can drive innovation and operational effectiveness. Analytical processes should be designed keeping the following points in mind:

  • Align with the final objective: The design of the entire process should be governed by an objective. This helps in deciding the target variable to be used and thus plays a critical role in variable selection and feature engineering. For example, this could include gaining a 360-degree view of the customer to provide the best tailored services at the lowest cost and increase overall customer profitability.
  • Design the analytical architecture: Analytical processes need to be designed to run on different timelines and cycles. Full automation may or may not be the best solution for a given business problem. For each process, planners need to decide what role each participant should play, such as serving as a subject matter expert or building the model architecture.
  • Finalise and perform the analytical intervention required: Deciding on the type of analysis, tools, and techniques to be used is a very important consideration for any analytical intervention. The choice depends on a number of factors, such as the type and amount of data, the required outcome, and the technology and human resources available.


D. Visualisation Layer

Data visualisation is both an art and a science. Effective visualisation helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable, and usable. Users may be performing specific analytical tasks, such as making comparisons or understanding causality, and the design principles of the graphic should follow the task.

There are various visualisation tools available. A careful analysis of each tool’s pros and cons is needed to understand which one best suits an organisation, based on the type, size, frequency, and architecture of its data, and the business insights that need to be derived from it. This understanding is driven by knowing whether the need is strategic, tactical, or operational, and the data visualisation skills available in the organisation. An overview of the currently popular visualisation tools and their salient features can help organisations choose the right tool for their purposes.

Before investing in a visualisation tool, the BI planning team should have a clear idea of the requirements and ensure that the same tool is used across the organisation. This will help in the enterprise-wide adoption of the tool and allow for better dissemination of information and insights across the entire organisation.

E. People

There is a strong need for talent across multiple dimensions. It is critical to have the right talent for the right job to bring efficiency and effectiveness to a BI team. There are three key considerations for the people element of the framework.

1. Sourcing People with the Required Skillset:

BI teams perform best when staffed with people who have a specialised skillset in one of the key areas of requirement, such as business understanding and design, data engineering, statistical modelling, or visualisation. Having the same person perform all of these activities leads to sub-optimal performance in one or more areas. Talent should be sourced both internally and externally, and chosen carefully based on strategic priorities and the organisation’s long-term BI roadmap.

2. Alignment of BI Strategy and Business Unit Collaboration:

Leadership must ensure that the BI strategy is communicated across the business units and that everyone is aligned to a common goal. Good strategic alignment has a marked effect on organisational performance. People perform better when they fully understand and accept the purpose and goals of their organisation, and they develop a stronger sense of ownership when they understand the difference they make in achieving those goals.

3. Training, Awareness, and Adoption

Adoption is another major challenge facing most organisations. People are used to seeing numbers and charts in certain formats, making it hard to break old habits. This can present challenges when introducing a new BI or data visualisation tool:

  • Comfort zone: People are often unwilling to change their habits, even if there is a more efficient way of working.
  • Unclear ownership: People are unclear as to who is responsible for the adoption process.
  • Measuring success: As the saying goes, it’s impossible to manage what isn’t measured. Success parameters need to be clearly defined.
  • Communicating the change: There should be enough internal marketing of the change and its benefits. Share success stories and the impact of transforming BI strategies.
  • Incentivising change: Provide people with the proper incentives to change.

Adoption can also be facilitated through the following ways:

  • Training: Conduct workshops to upskill colleagues, walk through any new dashboards, and educate end users on how they benefit from using the tools
  • Handover documents: Create easy-to-use handbooks on how users can use the dashboards
  • Self-serve capabilities: Empower people to create and use their own BI reports through simple, intuitive user interfaces and ready-made rich datasets

Conclusion

Irrespective of the business vertical, the best practices above are fundamental to successfully implementing any BI transformation programme. By assessing the current BI architecture in terms of these five critical areas, enterprises can gain more clarity about their current reporting mechanisms and the benefits they can expect from business insights at different hierarchy levels in the organisation. This leads to a streamlined BI workflow where relevant insights are easily accessible to all roles across the enterprise.


Authors:

Swarnava Ghosh
Senior Engagement Manager (Analytics)
Swarnava.Ghosh@exlservice.com

Aditya
Assistant Project Manager (Analytics)
Aditya@exlservice.com

Vaishnavi Subramanian
Engagement Manager (Analytics)
Risk and Compliance, UK & EU
Vaishnavi.Subramanian@exlservice.com
