Five Data Management Trends to Keep an Eye on In 2022


With most enterprises on their digital transformation journey already, the volume of data they manage is astronomical. Dave Langton, VP of product at Matillion, shares the five most pressing trends he foresees heating up and dominating the rest of this year when it comes to data management and integration.

The speed of growth often outpaces the rate at which enterprises figure out concrete ways to manage, integrate and act on their data. Some trends become staples (like Salesforce.com), and others go down in history as just a fad (like Netezza and other data warehouse “appliances”).

Now, as we have settled into 2022 and have a clearer picture of what enterprises are prioritizing this year in data management, five emerging trends show potential to last. Data teams will need to catch the wave early and effectively to stay competitive. 

Top Five Data Management Trends for 2022

Predicting and preparing for trends is necessary in a world that evolves as fast as ours. Here are the key trends that we’ll see playing out over the course of this year. 

Trend #1: Demand for real-time capabilities fueling the return of change data capture

As data teams deal with the distribution, diversity and dynamic nature of the expanding data universe, they need to understand what changed, when it changed, and the order in which it changed on each update. We see more teams turning to log-based, queryable historical changelogs to follow every update.

The focus is not just on the speed of replication but also on the precision to feed use cases like AI model training, fraud detection and real-time marketing campaigns. With change data capture, data teams have visibility into all the changes laid out in chronological order. This opens up more compelling use cases than simply copying a snapshot of the data from point A to point B. Change data capture is best deployed on large source databases where updates, whether frequent or sporadic, touch only a small fraction of the overall table. Because only the incremental changes are captured and applied, both the latency and the processing time per update stay minuscule.
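The idea of replaying an ordered changelog instead of copying snapshots can be sketched in a few lines. This is a minimal illustration, not any particular CDC tool: the event format and `apply_changelog` function are hypothetical stand-ins for the richer envelopes real connectors emit.

```python
# Hypothetical log-based CDC: an ordered changelog replayed against a replica.
changelog = [
    {"op": "insert", "id": 1, "row": {"name": "Ada", "plan": "free"}},
    {"op": "update", "id": 1, "row": {"plan": "pro"}},   # only changed columns
    {"op": "insert", "id": 2, "row": {"name": "Grace", "plan": "free"}},
    {"op": "delete", "id": 2},
]

def apply_changelog(replica, events):
    """Apply insert/update/delete events in chronological order."""
    for event in events:
        key = event["id"]
        if event["op"] == "insert":
            replica[key] = dict(event["row"])
        elif event["op"] == "update":
            replica[key].update(event["row"])  # merge incremental change
        elif event["op"] == "delete":
            del replica[key]
    return replica

table = apply_changelog({}, changelog)
```

Because the replica sees every intermediate state in order, downstream consumers such as fraud-detection models can react to each change, not just the final snapshot.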

Trend #2: Operational analytics moving into the cloud data warehouse

The scale and elasticity of the cloud data warehouse have made it the backend system of the business. Enterprises are finding new, creative avenues to take advantage of the rich data in these platforms and fit different data pieces together. Data sharing is making this increasingly common.

As more data teams choose the cloud data warehouse as their playground, it will become more common to see operational analytics performed there. Operational analytics, supported by reverse ETL, enables better day-to-day operations by giving business users the data they need in the applications they use every day. While it’s now almost trivial to load data into a warehouse, the complexities of joining together data from multiple disparate systems have to be navigated to derive value. Once new insights emerge, data teams see an increasing need to get them back into operational systems for automated and assisted decision making rather than just into business intelligence and reporting systems. 

These capabilities in modern data systems make it possible to deliver unique experiences to end-users in the form of personalization. Once customers have tasted this level of personalization, they expect it whenever they visit a website or use an application, and real-time data is the key ingredient. Analytics tools and customer insight platforms allow you to collect, process and deploy this information.
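A reverse ETL flow like the one described above can be sketched simply: join data from disparate systems inside the warehouse, derive an insight, and sync it back to an operational tool. The table names, the segmentation rule, and the `crm` record store below are all illustrative assumptions, not a real vendor API.

```python
# Sketch of reverse ETL: warehouse-side join -> insight -> operational system.
# All data, rules, and the CRM stand-in are hypothetical.
orders = [
    {"customer_id": 1, "total": 120.0},
    {"customer_id": 1, "total": 80.0},
    {"customer_id": 2, "total": 15.0},
]
support_tickets = [{"customer_id": 2, "open": True}]

def customer_segment(customer_id):
    """Join order and support data to derive a segment for one customer."""
    spend = sum(o["total"] for o in orders if o["customer_id"] == customer_id)
    has_open_ticket = any(
        t["customer_id"] == customer_id and t["open"] for t in support_tickets
    )
    if has_open_ticket:
        return "at_risk"
    return "vip" if spend >= 100 else "standard"

crm = {}  # stand-in for an operational system, e.g. a CRM record store

def sync_to_crm(customer_ids):
    """Push derived segments back into the operational tool (reverse ETL)."""
    for cid in customer_ids:
        crm[cid] = {"segment": customer_segment(cid)}

sync_to_crm([1, 2])
```

The point is the direction of flow: insights computed in the warehouse land back in the application where business users act on them, rather than only in a dashboard.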


Trend #3: DataOps taking a front seat

Many data teams are still fuzzy on what “DataOps” actually means, even though they already implement many of its practices to automate their data pipelines and analytic workloads. It’s often mistakenly viewed as just DevOps for data, but because data carries inherent state that must be managed, DataOps goes beyond building and deploying code.

DataOps is the automation of data change management, sitting at the intersection of integration, automation and testing. Taking that extra step to solve business problems through a formalized DataOps lens is the next step many enterprises are taking to make their data more agile and iterative in the moment, rather than six months down the road. For example, through DataOps, data teams can use automation to improve quality and introduce changes without disrupting other areas of the environment. In 2022 and beyond, companies will start to adopt these practices widely across their mainstream data ecosystems.
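One concrete form of the automated quality improvement mentioned above is a validation gate that blocks a pipeline from publishing bad data. The checks, table, and `validate` function below are hypothetical; teams typically wire tests like these into CI alongside their deployment automation.

```python
# Illustrative DataOps-style quality gate: a pipeline step refuses to
# publish a table that fails basic checks. Rules here are assumptions.
def validate(rows, required_columns, key_column):
    """Return a list of quality problems; an empty list means safe to publish."""
    errors = []
    seen_keys = set()
    for i, row in enumerate(rows):
        missing = [c for c in required_columns if row.get(c) is None]
        if missing:
            errors.append(f"row {i}: missing {missing}")
        key = row.get(key_column)
        if key in seen_keys:
            errors.append(f"row {i}: duplicate key {key!r}")
        seen_keys.add(key)
    return errors

rows = [
    {"id": 1, "email": "a@example.com"},
    {"id": 1, "email": None},  # duplicate key AND missing email
]
problems = validate(rows, required_columns=["id", "email"], key_column="id")
# Publishing is blocked until `problems` is empty.
```

Running such checks automatically on every change is what lets teams introduce changes without disrupting the rest of the environment.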

Trend #4: The data mesh vs. data fabric debate raging on

Proponents of data mesh and data fabric ideologies have been battling over which approach is best for getting subject matter experts closer to – and taking more ownership of – data transformation. While the winner is to be determined, it’s clear that analytics shouldn’t happen in a vacuum – it needs to be driven by business logic.

While there are some philosophical differences in the mesh vs. fabric debate, both aim to tackle the challenge of having multiple data lakes and warehouses. Since both data mesh and data fabric try to make sense of a complex soup of underlying systems, an organization likely needs to commit to one style over the other long-term or risk not fully addressing the problems these ideologies purport to fix. Which one wins out may vary from company to company.

Trend #5: More holistic cost optimization for the cloud 

Cost optimization is frequently misunderstood as cost-cutting. This can result in needless resource limitations, failure to exploit digital business opportunities and weakened competitive advantage. Enterprises are looking beyond spending cuts to increase efficiency among their people, practices and technology. So far, this is revealed in efforts like tool consolidation, cloud migration, exploration of mature open-source solutions and investigation of adaptive pricing strategies.

The DBMS market continues to thrive on the cloud, with many new offerings and enhancements to existing offerings reflecting a cloud-first, or even a cloud-only, mindset. Any cost optimization initiative must be undertaken in light of this. A proactive approach to financial governance and cost control includes aligning workloads with appropriate pricing models, financial tracking and observability, and policy setting and enforcement.

Decision-makers should also be looking to align workloads with appropriate service offerings. The key metric to consider as part of any migration is price/performance. Different offerings may provide better results for workloads with different characteristics. They should assess open-source alternatives or subscription pricing options that may provide cloud-like benefits.
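The price/performance comparison described above can be made concrete with a back-of-the-envelope calculation: what does one run of a representative workload actually cost on each offering? The vendors, rates, and runtimes below are illustrative assumptions only.

```python
# Hypothetical price/performance comparison: cost of one workload run
# = hourly rate x hours the workload takes on that offering.
offerings = {
    "vendor_a": {"hourly_rate": 4.00, "workload_hours": 2.0},
    "vendor_b": {"hourly_rate": 2.50, "workload_hours": 3.5},
}

def workload_cost(offer):
    """Total cost to complete the representative workload once."""
    return offer["hourly_rate"] * offer["workload_hours"]

best = min(offerings, key=lambda name: workload_cost(offerings[name]))
# vendor_a: 4.00 x 2.0 = 8.00; vendor_b: 2.50 x 3.5 = 8.75
```

Note that the offering with the lower hourly rate loses here: a cheaper rate applied to a slower runtime can still cost more per workload, which is why price/performance, not list price, is the metric to track.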

Having a full view of the entire data ecosystem – rather than just looking at individual systems and processes – can clarify whether services paid for are truly needed and worth the spend. Enterprises are pickier about pricing models, and cloud vendors are starting to feel the pressure to make options more transparent and flexible.


Riding Trends

Time will tell if these trends will last beyond 2022 or be replaced with more creative solutions. Ultimately, the solutions of tomorrow are often built on the trends that stick today, so data teams would do well to stay ahead of the game by viewing the above trends as competitive opportunities. When implemented thoughtfully, these practices and paradigms may hold the key to gaining control of an organization’s exponentially growing data.  

Which of the above trends do you already see taking shape at work? Tell us on LinkedIn, Twitter, or Facebook. We love it when you share with us!
