What does data proliferation in the post-pandemic world mean

Data has become crucial for organisations’ business sustainability. According to the International Data Corporation (IDC), the global datasphere is expected to grow to 163 zettabytes by 2025, ten times the amount recorded in 2016.

For organisations, this exponential growth in data can prove a challenge, one they must learn to navigate if they are to sustain their business.

Today’s largest and most successful organisations, like Google, Starbucks, and Amazon, know well the impact of data and have used it to their advantage when making high-impact business decisions.

For businesses, deriving insights from data is no longer a choice but a necessity. Gartner has also predicted that data fabric, the latest term used to describe data nirvana, will be one of the top technology trends for 2022.

According to Gartner, data fabrics could reduce data management efforts by up to 70 per cent as organisations get to grips with data literacy and democratisation across multiple departments, platforms, and applications. 

Issues arising from data proliferation

Data has not only exploded in volume but has also been scattered across a myriad of locations, from multiple public cloud environments and data centres to remote offices and the edge, often with minimal global oversight.

At each location, data is isolated in specialised infrastructure or functions, like backup, disaster recovery, and network storage, to name a few, and more often than not sourced from multiple vendors.

The situation is only made worse by silos within silos: a single backup solution, for example, may require various dedicated infrastructure components, such as backup software, master and media servers, target storage, and deduplication appliances, each of which may hold a copy of a given data source.

In addition, each infrastructure component may come from a different vendor, with its own user interface and support contract. As such, these infrastructure silos have a knock-on impact on operational efficiency.

With typically no data sharing between functions, storage tends to be overprovisioned for each silo instead of pooled. Multiple copies of the same data are also propagated between silos, thus taking up unnecessary storage space.
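
To make the waste concrete, here is a minimal Python sketch of content-hash deduplication, the general technique behind the deduplication appliances mentioned earlier. The silo names and file contents are invented for illustration; this is not any vendor’s implementation.

```python
import hashlib

# Toy example: three silos each hold their own copies of some files.
# Hashing the contents shows how much of the pooled storage is redundant.
silos = {
    "backup":        {"orders.csv": b"order-data", "logs.txt": b"log-data"},
    "dr":            {"orders.csv": b"order-data"},                 # same bytes again
    "network_store": {"orders_copy.csv": b"order-data", "hr.db": b"hr-data"},
}

unique_chunks = {}   # content hash -> size in bytes (stored once, whoever owns it)
total_stored = 0

for silo, files in silos.items():
    for name, content in files.items():
        total_stored += len(content)
        digest = hashlib.sha256(content).hexdigest()
        unique_chunks[digest] = len(content)

deduped = sum(unique_chunks.values())
print(f"Stored across silos: {total_stored} bytes")
print(f"Needed after dedup:  {deduped} bytes "
      f"({100 * (1 - deduped / total_stored):.0f}% saved)")
```

Pooling storage and deduplicating across silos, rather than within each one, is exactly what the siloed model described above prevents.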

Despite the issues arising from data proliferation, however, organisations with self-service data infrastructure in the cloud have been able to benefit from the data they gather.

These organisations have gained more insight into their customers’ behaviour than they had before the pandemic, enabled by the real-time predictive and prescriptive analytics supported on cloud data lake platforms.

These organisations are setting themselves apart from the competition, particularly when implementing communication and customer retention strategies.

For organisations that have yet to implement a self-service data infrastructure, an action plan built firmly around maximising the use of data is needed if they are to catch up with their competitors.

The upcoming trends and technologies in data post-pandemic

To stay competitive, organisations need to understand the upcoming trends and technologies in data, given how essential data is to operational and strategic effectiveness.

Some of the top 12 strategic technology trends predicted by Gartner include data fabric, decision intelligence, and hyper-automation. According to Gartner, growth in overall data volume and diversity will drive organisations towards new compute and storage technologies.

That growth in data volume and diversity will also drive hyper-automation, which is defined as automation that is data-driven rather than process-driven, enabled by a combination of AI, ML, natural language processing, and predictive analytics technologies.

Hyper-automation has been regarded as a ‘level-up’ from standard automation: organisations implement these technologies to free employees from the monotony of repetitive tasks, enabling them to concentrate on higher-value work that is more stimulating and rewarding. The technology utilises data obtained from every process and piece of equipment.
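
To make the data-driven versus process-driven distinction concrete, here is a minimal, hypothetical Python sketch. The sensor readings and thresholds are invented; in practice an ML model would score events, but the contrast between a fixed rule and a trigger derived from the data itself is the same.

```python
from statistics import mean, stdev

readings = [41.2, 40.8, 42.1, 41.5, 40.9, 41.7, 43.6]  # e.g. machine temperatures

# Process-driven automation: a fixed rule, hand-coded once and never adapted.
def process_driven(temp: float) -> str:
    return "raise maintenance ticket" if temp > 50.0 else "no action"

# Data-driven automation: the trigger adapts to what the data itself shows
# (here a simple three-sigma anomaly test on the recent history).
def data_driven(temp: float, history: list) -> str:
    mu, sigma = mean(history), stdev(history)
    return "raise maintenance ticket" if abs(temp - mu) > 3 * sigma else "no action"

latest = readings[-1]
print("process-driven:", process_driven(latest))              # no action (below 50)
print("data-driven:   ", data_driven(latest, readings[:-1]))  # raises a ticket
```

The fixed rule misses the drift because 43.6 is still under its hard-coded threshold, while the data-driven check flags it as abnormal relative to the machine’s own history.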

Gartner believes that 85 per cent of companies will increase or sustain their hyper-automation investment strategies in 2022, and Deloitte has termed the technology the next frontier for organisations globally.

However, hyper-automation will be a slow and complex process because the technology is still in its early days. Securing long-term adoption will require organisations to invest significant amounts of time and energy.

It is therefore crucial that organisations take the time to understand the necessary steps before setting out on their hyper-automation journey, to give the implementation the best chance of success.

Engaging the right partner is also vital. The right partner gives organisations a better understanding of hyper-automation, reducing the time and resources needed to begin the journey and enabling them to hyper-automate that much faster.

For organisations, aside from hyper-automation, data fabric will be vital in modernising their data management and integration.

A data fabric consists of multiple systems and data flows, together with a mesh of human roles and processes, all of which must be coordinated. The goal is an architecture that encompasses all forms of analytical data for any kind of analysis, with seamless accessibility and shareability for everyone who needs it.

A data fabric continuously identifies and connects data from disparate applications to discover unique, business-relevant relationships between data points. These insights support re-engineered decision-making, providing more value through rapid access and comprehension than traditional data management practices.

To ensure that their data fabric architecture delivers business value, organisations need to start by providing a solid technology base, identifying the required core capabilities and evaluating existing data management tools.

There are four key pillars to data fabric architecture (a brief sketch of the second and third follows the list):

  • Collect and analyse all forms of metadata
  • Convert passive metadata to active metadata
  • Create and curate knowledge graphs
  • Have a robust integration backbone
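
As a deliberately toy illustration of the second and third pillars, the Python sketch below enriches passive metadata with usage signals and builds a tiny knowledge graph from them. All dataset names and relations are invented; real data fabrics do this at scale with dedicated catalog tooling.

```python
# Passive metadata: static descriptions that sit in a catalog.
passive = {
    "sales_orders":    {"owner": "finance",   "format": "parquet"},
    "web_clicks":      {"owner": "marketing", "format": "json"},
    "customer_master": {"owner": "crm",       "format": "table"},
}

# Active metadata: the catalog also records how data is actually used.
usage_log = [
    ("sales_orders", "joined_with", "customer_master"),
    ("web_clicks",   "joined_with", "customer_master"),
    ("sales_orders", "queried_by",  "churn_dashboard"),
]

# Build a small knowledge graph (adjacency list) from the usage signals.
graph = {}
for subject, _relation, obj in usage_log:
    graph.setdefault(subject, set()).add(obj)
    graph.setdefault(obj, set()).add(subject)

def related(dataset):
    """Datasets one hop away: the business-relevant relationships a fabric
    would surface automatically rather than wait for someone to document."""
    return graph.get(dataset, set())

for d in sorted(related("customer_master")):
    if d in passive:
        print(d, "- owned by", passive[d]["owner"])
```

The point of the active layer is that the graph grows from observed usage, so the fabric can recommend related data without waiting for manual curation.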

Conclusion

Given the increased focus on and investment in hyper-automation across the board, it is crucial that organisations that have already begun their hyper-automation journey continue to build on the technology.

Organisations must resist the temptation to settle for standard automation, which would only provide them with short-term improvements.

For organisations, the successful implementation of hyper-automation will not only give them an edge over their competitors but will also drive more significant benefits for their employees and clients: the technology streamlines operations, freeing employees to focus on more stimulating and rewarding tasks and to provide exceptional customer service.

Organisations implementing data fabric in their data management must ensure that their data fabric architecture rests on all four key pillars if they are to derive the full business value data fabric can deliver.
