Implementing Data Mesh for Your Enterprise

By Jon Loyens, Co-Founder and Chief Product Officer at data.world, and Paul Gancz, Partner Solutions Architect at Snowflake

Almost every leader knows that it’s critical to be data-driven, and almost every leader knows that their organization has a data problem. Despite significant advancement in data tools and infrastructure over the past decade, it has, in many ways, become harder to be data-driven in recent years. Modern tools give us leverage, and the three V’s of data — volume, variety and velocity — continue to accelerate. This makes it harder to govern, trust and deploy data while remaining agile and resilient. The cultural problem of being data-driven becomes increasingly daunting as the technological pace of innovation increases. All the data in the world doesn’t matter if you can’t make a quick and trustworthy decision with it. A paradigm shift in how enterprise data is managed is the only way to move businesses forward – and that’s where data mesh comes in.

Since Zhamak Dehghani introduced the idea of data mesh in 2019, there has been a lot of talk about what data mesh is and how it could help enterprises solve this problem. While it’s important to discuss how data mesh works and flesh out its principles, three years after its introduction, it’s time for enterprises to start acting on data mesh rather than just talking about it.

For the uninitiated, data mesh is a socio-technical approach to data that marries product thinking with a move toward domain-driven ownership in the data environment. It has myriad benefits for enterprises, including eliminating barriers associated with scaling data, ensuring that data is actionable within organizations, and decreasing time to value.

Addressing the “socio” in socio-technical

By its nature as a socio-technical approach, data mesh is as much about people and culture as it is about technology. The first key component to implementing data mesh is not data- or technology-driven at all. Enterprises must first tackle the cultural shift toward viewing data as a product. Much like when we rolled out agile, DevOps or product thinking with our technology teams, this begins by identifying the individuals who are willing to buy in to this mindset and be thought leaders within the organization. Looking for individuals who can champion this methodology, mobilize resources and shape a team will help ensure success.

Another key factor in the framework is ensuring that there are individuals willing to truly take ownership of the data in their domain. These leaders commit to managing the processes of gathering data from existing systems, making it available within the enterprise for analysis, documenting it, and creating contracts around things like schema, availability and quality. Starting with one domain will also keep the shift manageable and provide other benefits, such as lessons learned and best practices that can be applied to future domains. One of the most significant benefits of data mesh is its scalability, so growing from one domain to five to 20 will become easier and easier.
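To make the idea of a domain-owned data contract concrete, here is a minimal sketch in Python. The contract fields (schema, availability target, null-rate threshold) and all names are hypothetical illustrations of the schema/availability/quality commitments described above, not a standard format.

```python
from dataclasses import dataclass

@dataclass
class DataContract:
    """A domain's promise to its data consumers (illustrative sketch)."""
    domain: str            # owning domain, e.g. "sales"
    schema: dict           # column name -> expected Python type
    availability_pct: float  # promised availability, e.g. 99.5
    max_null_rate: float   # quality bar: max tolerated fraction of nulls

    def validate_record(self, record: dict) -> list:
        """Return a list of contract violations found in one record."""
        problems = []
        for column, expected_type in self.schema.items():
            if column not in record:
                problems.append(f"missing column: {column}")
            elif record[column] is not None and not isinstance(record[column], expected_type):
                problems.append(f"wrong type for {column}")
        return problems

# The sales domain publishes a contract for its "orders" data product.
orders_contract = DataContract(
    domain="sales",
    schema={"order_id": str, "amount": float},
    availability_pct=99.5,
    max_null_rate=0.01,
)

print(orders_contract.validate_record({"order_id": "A-1", "amount": 12.5}))  # []
print(orders_contract.validate_record({"order_id": "A-2"}))  # ['missing column: amount']
```

The point of the sketch is that the contract lives with the domain team that understands the data, while consumers get a machine-checkable promise rather than tribal knowledge.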

Long-term, as a business’ investment in data mesh progresses, it may also make sense to introduce data product managers — the “leaders” mentioned above — to support the larger system. Data product managers understand the domains they manage and where they fit within a data ecosystem, and help increase collaboration and closeness between data consumers and data producers. In particular, data product managers continuously seek to understand data consumer requirements and satisfaction in order to improve the data product quality for the benefit of the consumers. For a full-scale data mesh operation, they’re critical in reducing the burden of data management and ensuring data is consistent, accessible, and accurate across the organization.

Ultimately they are measured on the adoption of and leverage provided by the data products they manage for the data domains they’re working in.

Balancing decentralization and centralization

To reap the full benefits of data mesh, enterprises must establish a documented method of federated computational governance that balances both centralization and decentralization. Data mesh is unique in that it spreads ownership of data across the domain teams that know and understand the data best. While this is the ideal state of decentralization for data mesh to be successful, in practice, current processes and cultures within organizations may limit the ability, or the speed, of transitioning to this decentralized approach.

However, when decentralizing any process, you need consistent rules of the road that everyone follows to ensure coordination and interoperability. Contracts, SLAs and “definitions of done” are key elements that prevent chaos while increasing agility and resilience. A balanced approach should include experts who work directly with the data providing input, while enterprise leaders develop company-wide governance policies.

Getting an entire enterprise on the same page while incorporating so much input can seem like a daunting task. We recommend beginning by defining the basic criteria that are essential to every data product across all domains. Moving incrementally from this position also limits the miscommunication and misalignment that could cause silos and other hurdles down the line.

Transforming data into knowledge

Of course, once you have your people-driven framework in place and a path for governance, things do get technical. New tools are needed to support adopting and coordinating around data mesh, and the modern data stack supplies many of the key components. Having tools in place to ensure that both data producers and consumers have what they need to succeed – producers gaining autonomy and flexibility, and consumers being able to find and understand the data they’re looking for – supports the broader people-driven goals of the data mesh and the accessibility goals within the organization.

Also, when thinking about data as a product, enterprises should strongly consider knowledge graph- and cloud-based tools as the means of making data accessible and analyzable. A knowledge graph can connect data, metadata, and all users across the data and analytics ecosystem of an organization, automatically creating relationships between data products, increasing speed for data consumers, and making data instantly understandable, queryable and deployable.
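The automatic relationships a knowledge graph surfaces can be sketched as subject–predicate–object triples. The data products and column names below are made up for illustration:

```python
# Toy knowledge graph over data products, stored as (subject, predicate, object)
# triples. In practice this lives in a graph database, not a Python set.
triples = {
    ("orders", "produced_by", "sales"),
    ("orders", "has_column", "customer_id"),
    ("customers", "produced_by", "marketing"),
    ("customers", "has_column", "customer_id"),
}

def related_products(column: str) -> set:
    """Find every data product that shares a column - the kind of
    relationship a knowledge graph can infer without manual curation."""
    return {s for (s, p, o) in triples if p == "has_column" and o == column}

# Both the sales and marketing domains publish products keyed by customer_id,
# so a consumer browsing one product is automatically pointed at the other.
print(related_products("customer_id"))
```

The value for the data consumer is discovery: relationships like “these two products join on `customer_id`” emerge from the graph rather than from asking around.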

Utilizing cloud-based platforms that can surface your organization’s internal data, as well as data from external business units and organizations, alongside the catalog also connects organizations and individuals to the most relevant data without traditional hurdles like silos and burdensome complexity. And a data catalog built on a knowledge graph captures and organizes relationships between real-world concepts, bridging the gap between how your data consumers understand their business and how your company stores data; it empowers your data consumers to search for and understand the data they need as easily as if they were searching Google.

And the ability to define and manage governance and access policies, both at the organizational level and at the individual domain level, is critical for supporting true federated governance. These access policies should be pushed as close to the data as possible, ensuring that the data is properly governed and secure.
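A minimal sketch of what “policies at both levels, pushed close to the data” might look like, assuming hypothetical roles, domains, and row shapes:

```python
# Organization-wide policy: only employees may read anything.
def org_policy(user: dict) -> bool:
    return user.get("employee", False)

# Per-domain policies: the finance domain hides sensitive rows from
# anyone who is not an analyst. (Illustrative rule, not a real product's API.)
DOMAIN_POLICIES = {
    "finance": lambda user, row: user["role"] == "analyst" or not row.get("sensitive"),
}

def read_rows(user: dict, domain: str, rows: list) -> list:
    """Evaluate the org-level policy, then the domain's own policy,
    per row, right at the point where data is returned."""
    if not org_policy(user):
        return []
    domain_ok = DOMAIN_POLICIES.get(domain, lambda u, r: True)
    return [row for row in rows if domain_ok(user, row)]

rows = [{"amount": 10, "sensitive": False}, {"amount": 99, "sensitive": True}]
intern = {"employee": True, "role": "intern"}
print(read_rows(intern, "finance", rows))  # [{'amount': 10, 'sensitive': False}]
```

Because the filter runs at read time rather than in each downstream report, the central rule and the domain’s rule are enforced consistently no matter which tool the consumer uses.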

Ultimately, as being data-driven becomes more and more of a necessity for businesses, executives will struggle with choosing the right approach to implement. The discussion around data mesh over the past few years has outlined the significant benefits it provides and the long-term ability it gives businesses to fully leverage their data. By starting with the true foundation of data mesh – people – beginning with just a few domains, and building up to the full-scale technologies that support data mesh, businesses can adopt a data architecture that truly prepares them for the future.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.