Need for Data Fabrics Rises as IT Becomes More Distributed
- by 7wData
Data is the fuel that drives digital business processes, but most organizations today lack an efficient way to manage it across all the platforms on which they have deployed applications.
At its core, a data fabric architecture loosely describes any platform that reduces the friction associated with accessing and sharing data in a distributed network environment. As such, vendors that have historically positioned themselves as providers of everything from storage systems to data management platforms now claim, to varying degrees, to provide data fabrics that span multiple computing platforms.
The level of urgency driving the need for data fabrics has increased for two primary reasons: applications are now deployed across a more distributed mix of platforms than ever, and organizations are aggregating the data those applications generate in cloud-based data lakes.
Those data lakes are also the foundation upon which organizations are training the artificial intelligence (AI) models that many of them rely on to automate their digital processes.
The challenge IT teams face is that it’s unlikely there will be just one data lake, notes Howard Dresner, founder and chief research officer for Dresner Advisory Services. Each business unit within an organization often launches its own data lake initiative.
As a consequence, IT organizations will need to employ some type of data fabric to move data not just from on-premises IT environments into data lakes residing in the cloud, but also between data lakes that will reside in multiple clouds.
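To make that friction concrete, the sketch below shows what moving a single object between data lakes in two clouds looks like when the application has to do it itself, juggling each provider’s storage protocol directly. The bucket names and paths are hypothetical, and the fsspec library is used here purely as a generic illustration; it is not part of any vendor’s data fabric.

```python
# A minimal sketch of cross-cloud data movement without a data fabric:
# the application itself must address each provider's object store.
# Bucket names and paths below are hypothetical.
import fsspec  # pip install fsspec s3fs gcsfs

SRC = "s3://onprem-export/sales/2021/part-0001.parquet"    # hypothetical source
DST = "gcs://analytics-lake/sales/2021/part-0001.parquet"  # hypothetical destination

# Open the source object in one cloud and stream it to the other.
with fsspec.open(SRC, "rb") as src, fsspec.open(DST, "wb") as dst:
    while chunk := src.read(8 * 1024 * 1024):  # copy in 8 MiB chunks
        dst.write(chunk)
```

A data fabric aims to hide exactly this kind of per-provider plumbing behind a single access layer.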
The latest generation of data fabrics takes advantage of Kubernetes clusters that can run anywhere to make it simpler to deploy them across a heterogeneous environment. Hewlett Packard Enterprise (HPE), for example, has launched an HPE Ezmeral data fabric based on technologies it gained by acquiring MapR Technologies in 2019. That data fabric creates a global namespace, accessible via application programming interfaces (APIs), that both containerized and non-containerized applications can use. A data mirroring capability makes it possible to move data within or between clusters using bi-directional, multi-master table or event stream replication.
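As a rough illustration of what that global namespace buys applications, the sketch below assumes the fabric is mounted as a POSIX-style filesystem under /mapr, a path convention inherited from MapR; the cluster, directory, and file names are hypothetical. Because every cluster appears under one namespace, plain file I/O is all an application needs, whether or not it runs in a container.

```python
# A minimal sketch, assuming the fabric's global namespace is mounted
# at /mapr (a MapR-derived convention); cluster and path names are
# hypothetical.
from pathlib import Path

EDGE = Path("/mapr/edge-cluster/sensor-data/readings.csv")    # hypothetical
CLOUD = Path("/mapr/cloud-cluster/analytics/readings.csv")    # hypothetical

# Write at the edge using ordinary file operations...
EDGE.parent.mkdir(parents=True, exist_ok=True)
EDGE.write_text("timestamp,value\n2021-06-01T00:00:00Z,42.0\n")

# ...and read the replicated copy in the cloud cluster. Keeping the two
# locations in sync is the fabric's job (via its mirroring capability),
# not the application's.
if CLOUD.exists():
    print(CLOUD.read_text())
```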
HPE recently announced it is making its data fabric available as a standalone offering in addition to being an integrated component of the HPE Ezmeral Container Platform and HPE Ezmeral Machine Learning Operations (MLOps). The goal is to make it easier for organizations to consistently employ a data fabric from the edge to the cloud, says Anil Gadre, vice president of Ezmeral for HPE. “It’s about reducing the friction,” he adds.