Blog | Data Intelligence | 3 min read

What is the Difference Between a Data Architect and a Data Engineer?


The growing importance of data in organizations undergoing digital transformation is redefining the roles and missions of data professionals. Among these key profiles are the Data Architect and the Data Engineer. To many people, the two functions are unclear: although their roles can seem similar, their purposes and missions are quite different. 

Because extracting value from data is a complex task, organizations must work with the right people: specialists who can create a data-driven culture. It is recommended to hire both a Data Architect and a Data Engineer within the data department. Although these two key roles overlap and often cause confusion, they each fulfill different missions. To decide whether you should hire a Data Architect, a Data Engineer, or both, it is important to understand their scopes of work and how they create data synergy.

The Wide Range of Skills of a Data Architect

A Data Architect’s main mission is to organize all the data available within the organization. To do so, they must be able to not only identify and map the data but also prioritize it according to its value, volume, and criticality. Researching, identifying, mapping, prioritizing, segmenting data…the work of a Data Architect is complex and these profiles are particularly sought after. And for good reason. Once this inventory of data has been completed, the Data Architect can define a master plan to rationalize the organization of the data.

A Data Architect intervenes in the first phases of a data project and must therefore lay the foundations for exploiting data in a company. As such, they are an essential link in the value chain of your data teams. Their work is then used by data analysts, data scientists, and, ultimately, by all your employees.

What are the Essential Skills of a Data Engineer?

A Data Engineer follows a Data Architect in this vast task of creating the framework for researching and retrieving data. How do they do this? With their ability to understand and decipher the strengths and weaknesses of the organization’s data sources. As a true field player, they are key to identifying enterprise-wide data assets. Highly qualified, a Data Engineer is an essential part of any data-driven project.

If a Data Architect designs the organization of the data, the Data Engineer ensures its day-to-day management, applying best practices in processing, modeling, and storage. As part of their mission, a Data Engineer must constantly ensure that all of the processes linked to exploiting data in an organization run smoothly. In other words, a Data Engineer guarantees the quality and relevance of the data while working within the framework defined by the Data Architect, with whom they must act in concert.

Data Architect vs. Data Engineer: Similar…but Above All, Complementary

A Data Architect and a Data Engineer often follow similar training and have comparable skills in IT development and data exploitation. However, a Data Architect, with their experience in database technology, brings a different value to your data project. Because their contributions are more conceptual, a Data Architect needs to rely on the concrete vision of a Data Engineer. The combination of these key profiles will allow you to fully exploit enterprise data. Indeed, a Data Architect and a Data Engineer work together to conceptualize, visualize, and build a framework for managing data.

This perfect duo will allow any organization to maximize the success of its data projects and, above all, create the conditions for a sustainable, rational, ROI-driven exploitation of its data.


Blog | Data Management | 5 min read

What’s an Edge Data Fabric?



A data fabric combines data architecture, management practices, and policies to deliver a set of data services that span all of an organization’s domains and endpoints. A data fabric essentially serves as both the translator and the plumbing for data in all its forms, wherever it sits and wherever it needs to go, regardless of whether the data consumer is a human or a machine.

Data fabrics aren’t brand new, but they are suddenly getting a lot of attention in IT as companies move to multi-cloud and the edge. That’s because organizations desperately need a framework to manage their data: to move it, secure it, prepare it, govern it, and integrate it into IT systems.

Data fabrics got their start in the mid-2000s, when computing started to spread from data centers into the cloud. They became more popular as organizations embraced hybrid clouds, and today data fabrics are helping to reduce the complexities of data streams moving to and from the network’s edge. But the goalposts have moved: the network’s edge is now made up of mobile and IoT devices, collectively labeled “the edge.”

What’s different is where the data will emanate from and how fluid it will be. In other words, mobile and IoT, the edge, will drive data creation. Further, processing and analysis will happen at various points: on the device, at gateways, and across the cloud. Perhaps a better term than Big Data would be Fluid Distributed Data.

Regardless, more data ultimately translates to more viable business opportunities – particularly given that this new data is generated at the point of action from humans and machines. To take full advantage of the growing amounts of data available to them, enterprises need a way to manage it more efficiently across platforms, from the edge to the cloud and back. They need to process, store, and optimize different types of data that come from different sources with different levels of cleanliness and validity so they can connect it to internal applications and apply business process logic, increasingly aided by artificial intelligence and machine learning models.

It’s a big challenge. One solution enterprises are pursuing now is the adoption of a data fabric. And, as data volumes continue to grow at the network’s edge, that solution will evolve further into what will more commonly be referred to as an edge data fabric.

How Data Fabric Applies to the Edge

Edge computing presents a unique set of challenges for data generated and processed outside the network core. The devices operating at the edge are getting more complex: networked PLCs manage the solenoids that control process flows in a chemical plant, pressure sensors determine the weight of a cargo container, and active RFID tags track its location.

The vast majority of processing used to take place in the data center, but that has shifted to the point where a larger portion of it takes place in the cloud. In both cases, the processing happens on one side of a gateway. The data center was fixed, not virtual, but the cloud is fluid. If you consider the definition of cloud, you can see why a data fabric is needed there. Cloud is about fluidity and removing locality, but, like the data center, it’s about processing data associated with applications. We may not care where the Salesforce cloud, the Oracle cloud, or any other cloud is actually located, but we do care that our data must transit between various clouds and persist in each of them for use in different operations.

Because of all that complexity, organizations have to determine which pieces of the processing are done at which level. There’s an application at each level, for each application there’s a manipulation of the data, and for each manipulation there’s processing and memory management.
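As a hypothetical illustration of that level-by-level decision, the tiering logic might be sketched as a simple routing rule: latency-critical manipulations stay on the device, small aggregations happen at the gateway, and heavy analytics go to the cloud. The function name, fields, and thresholds below are all invented for illustration, not taken from any real product.

```python
# Hypothetical sketch: routing each processing step to a tier.
# Field names and thresholds are illustrative assumptions.

def route_processing(step: dict) -> str:
    """Decide where a processing step should run: device, gateway, or cloud."""
    if step["max_latency_ms"] < 10:      # hard real-time: keep it on the device
        return "device"
    if step["data_volume_mb"] < 1:       # small aggregates: a gateway can handle it
        return "gateway"
    return "cloud"                       # heavy analytics and long-term storage

steps = [
    {"name": "valve_control",   "max_latency_ms": 5,      "data_volume_mb": 0.001},
    {"name": "hourly_rollup",   "max_latency_ms": 60000,  "data_volume_mb": 0.5},
    {"name": "fleet_analytics", "max_latency_ms": 300000, "data_volume_mb": 500},
]

for s in steps:
    print(s["name"], "->", route_processing(s))
# valve_control -> device, hourly_rollup -> gateway, fleet_analytics -> cloud
```

A real fabric would make this decision from much richer metadata (bandwidth, cost, governance policy), but the shape of the problem is the same: every manipulation gets pinned to a tier.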

The point of a data fabric is to handle all of that complexity. Spark, for example, would be a key element of a data fabric in the cloud, as it has quickly become the easiest way to support streaming data between cloud platforms from different vendors. The edge is quickly becoming a new cloud, leveraging the same cloud technologies and standards in combination with new, edge-specific networks such as 5G and Wi-Fi 6. And, like the core cloud, the edge runs richer, more intelligent applications on each device, on gateways, and on what would once have been the equivalent of a data center: a rack in a coat closet on the factory floor, in an airplane, on a cargo ship, and so forth. It stands to reason that you will need an edge data fabric analogous to the one now solidifying in the core cloud.

Edge Data Fabric’s Common Elements

To handle the growing number of data requirements edge devices pose, an edge data fabric has to perform several important functions. It has to be able to:

  • Access many different interfaces: HTTP, MQTT, radio networks, manufacturing networks.
  • Run on multiple operating environments: most importantly, POSIX-compliant ones.
  • Work with key protocols and APIs: including more recent ones such as REST APIs.
  • Provide JDBC/ODBC database connectivity: for legacy applications and quick connections between databases.
  • Handle streaming data: through standards such as Spark and Kafka.
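To make the “translator” role in the first requirement concrete, here is a minimal, hypothetical sketch in Python: edge payloads arriving over two different interfaces (an HTTP JSON body and a compact MQTT-style binary reading) are normalized into one common record shape before being handed to a streaming layer such as Kafka. Every name, topic structure, and wire format here is an invented assumption for illustration.

```python
import json
import struct
from datetime import datetime, timezone

# Hypothetical normalizer: two edge interfaces, one common record shape.

def from_http_json(body: str) -> dict:
    """Normalize an HTTP JSON payload from a gateway (assumed field names)."""
    doc = json.loads(body)
    return {
        "device_id": doc["id"],
        "metric": doc["metric"],
        "value": float(doc["value"]),
        "ts": doc.get("ts") or datetime.now(timezone.utc).isoformat(),
    }

def from_mqtt_binary(topic: str, payload: bytes) -> dict:
    """Normalize a binary sensor reading from an MQTT-style topic.

    Assumed wire format: a single 4-byte big-endian float; assumed
    topic layout: site/device/metric (e.g. "plant1/plc-7/pressure").
    """
    (value,) = struct.unpack(">f", payload)
    _site, device_id, metric = topic.split("/")
    return {
        "device_id": device_id,
        "metric": metric,
        "value": value,
        "ts": datetime.now(timezone.utc).isoformat(),
    }

records = [
    from_http_json('{"id": "plc-7", "metric": "pressure", "value": "4.2"}'),
    from_mqtt_binary("plant1/plc-7/temp", struct.pack(">f", 21.5)),
]
# A real edge data fabric would now publish these uniform records to a
# stream (e.g. a Kafka topic); here we just print the common shape.
for r in records:
    print(r["device_id"], r["metric"], round(r["value"], 1))
```

The value of the fabric is exactly this uniformity: downstream consumers (JDBC clients, Spark jobs, ML pipelines) see one record shape regardless of which interface produced the data.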

Conclusion

A data fabric is not a single product, platform, or set of services, and neither is an edge data fabric. An edge data fabric is an extension of a data fabric, but, given the differences in resources and requirements at the edge, managing edge data requires meaningful changes to the model. In the next blog, we’ll discuss why an edge data fabric matters, and why now.