If you are a Chief Information Officer, Chief Data Officer or a VP/Senior Director of Data Engineering, this post will change your life (unless you already know about Data Mesh, that is).
When I learned about Snowflake two years ago, I thought: "Wow, from the perspective of a data guy like me, this is close to a miracle product." No wonder it is doing so well. I think I have found something else like it. This time, however, it is not a technology.
I created a hashtag this year, #datavaluehacking. It captures one question: how can we think differently and find new ways to remove whatever prevents us from getting value out of data faster than we do today?
Now, like many in data, I was raised on structured data, operational data stores, data warehouses, data marts and data lakes. We may have the technology today to do a lot with data; some would argue that there is too much. But how do we organize the data within the data lake, fabric or platform(s), and then organize ourselves efficiently, to unlock the potential of the data-driven enterprise? Add agile, DevOps and DataOps to that, and we have a real puzzle on our hands.
Much like my Snowflake moment two years ago, I had the same reaction about a year ago when I heard about something called the "Data Mesh" from a very smart lady, Zhamak Dehghani, of ThoughtWorks.
Data Mesh deserves to be stamped with #datavaluehacking.
Data Mesh is a concept, a set of ideas. It is a mix of product thinking, decentralized data management and domain-oriented data architecture, combined with a way of working that recognizes and solves many of the problems ("failure modes") we have with engineering data for business value.
I will even go so far as to claim that it offers many of the solutions organizations need to become data-driven in a real, concrete sense.
Instead of explaining everything to you myself, I will let her do it better than I can. Here are the links:
Articles
Podcasts