Know your options

Master data made simple

What is the best way to achieve your data management objectives? Our expert tells you what to do, how to do it, and exactly where the pitfalls lie.

No guts, no glory 

An ambitious IT manager might raise his or her profile in the company by managing the successful implementation of an ERP system, by introducing digital processes in a strategically important part of operations, or by overseeing the worldwide rollout of logistics software.

Establishing a system of high-quality master data management is probably not one of the more promising routes to fame and glory. And that’s a crying shame. Because master data management is not only a highly complex undertaking – it also brings critical added value to the company.

Consistent master data is the underpinning of the entire digital transformation, including next-gen industry and logistics, the internet of things, and other digital business models.

“Digital transformation projects without high-quality data are doomed to failure” – that’s the harsh verdict reached by the market research and consulting firm Lünendonk in its study “Revival of master data: Is poor data quality slowing the digital transformation?”.

A survey of 155 companies in the manufacturing and commercial sectors and beyond revealed that only 15 percent consider themselves well positioned in their master data management, while another 60 percent rate themselves as mediocre at best.

Export controls and digitization drive change

What is the best way to achieve your data management objectives? It starts with awareness. That may sound obvious – and indeed, it is. Without an understanding of the legal consequences and economic disadvantages that result from poor master data management, it’s not possible to implement the appropriate processes.

In global trade, this awareness has taken hold in recent years. Developments around export controls and digital transformation are certainly one key driver. The legal consequences of unclassified or misclassified goods can be significant.

Moreover, the growing trend toward electronic management of customs and global trade processes requires that data be universally and centrally accessible. The same is true of the integration of internal systems – such as global trade and logistics solutions – and of integration with the external systems of service providers and partners. Entering data separately in each system at the transaction level is a thing of the past.

The digital transformation is pushing the integration of processes, making it increasingly important to provide centralized access to data of the appropriate quality and scope.

Take export management, for example: Businesses with a larger volume of exports often seek a high level of automation. The only way to achieve this is if the export declaration can be auto-completed with the appropriate master data in addition to the logistics data. If the quality of this data is good, the process of completing export declarations can be fully automated. This yields a significant advantage in efficiency and performance.
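To make this concrete, here is a minimal sketch in Python of how such auto-completion could look: transaction data (quantity, destination) is enriched with classification, weight, and origin data from a master data store. The field names and the in-memory store are assumptions made purely for illustration.

```python
# Hedged sketch: auto-completing an export declaration from master data.
# All field names and the in-memory "master data store" are illustrative.

# Master data keyed by material number (would normally come from the ERP/MDM system)
MASTER_DATA = {
    "MAT-1001": {
        "description": "Hydraulic pump, 24 V",
        "commodity_code": "84136031",      # 8-digit CN code
        "net_weight_kg": 12.4,
        "country_of_origin": "DE",
    },
}

def complete_export_declaration(order_item: dict) -> dict:
    """Merge transaction data (quantity, destination) with master data."""
    material = MASTER_DATA.get(order_item["material_number"])
    if material is None:
        # Missing master data breaks automation: the item needs manual handling.
        raise ValueError(f"No master data for {order_item['material_number']}")
    return {
        "destination_country": order_item["destination_country"],
        "quantity": order_item["quantity"],
        "commodity_code": material["commodity_code"],
        "description": material["description"],
        "net_weight_kg": material["net_weight_kg"] * order_item["quantity"],
        "country_of_origin": material["country_of_origin"],
    }

print(complete_export_declaration(
    {"material_number": "MAT-1001", "quantity": 3, "destination_country": "US"}
))
```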

But which master data is important for global trade?

The following is a selection of typical master data:

  • Address data of customers, suppliers, service providers, etc.
  • Materials, products, item/product descriptions
  • Product classification data (commodity codes, etc.)
  • Classifications (EU dual-use numbers, Export Control Classification Numbers, etc.)
  • Origins (preferential and non-preferential)
  • Coded documents
  • Weight data
  • Legal basis/conditions (authorizations, bans, restrictions, etc.)
  • EORI/customs numbers

This list is only partial, which hints at the scope and thus the complexity of master data management. That’s why it’s important to have an appropriate process in place for entering and approving master data – one that involves a wide range of stakeholders in the company. Only once all this data has been maintained can it be used to trigger transactions and reports: to enter a customer order, submit a customs declaration, or file an Intrastat declaration.
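To give a sense of the scope, here is a hedged sketch of how such a global trade master data record might be modeled as a simple data structure. The field names are illustrative examples, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TradeMasterData:
    """Illustrative global trade master data record (field names are examples)."""
    material_number: str
    description: str
    commodity_code: str                        # e.g. 8-digit CN code
    eccn: Optional[str] = None                 # Export Control Classification Number
    eu_dual_use_number: Optional[str] = None
    preferential_origin: Optional[str] = None
    non_preferential_origin: Optional[str] = None
    net_weight_kg: Optional[float] = None
    coded_documents: list[str] = field(default_factory=list)
    restrictions: list[str] = field(default_factory=list)   # authorizations, bans, etc.

item = TradeMasterData(
    material_number="MAT-1001",
    description="Hydraulic pump, 24 V",
    commodity_code="84136031",
    preferential_origin="DE",
    non_preferential_origin="CN",
)
print(item)
```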

Centralized or distributed? Two factors for consideration

A key organizational question is whether the data should be managed in a more “centralized” model by the customs department or in a more “distributed” model by the product management team, for example.

Greater complexity = more distributed

When products are very technologically sophisticated (such as complex chemical compounds), the product management team typically handles the classification. For outsourced parts, the supplier’s input is helpful. In this type of situation, the customs department typically assumes overall responsibility for the processes and performs additional quality assurance measures, but operational responsibility lies elsewhere.

Larger company = more centralized

Larger enterprises are more likely to have their global trade master data managed centrally. This type of centralization is also dominant in corporate groups, where a shared service center is often set up to manage this data. The challenges here should not be underestimated, however, since some master data may require extensive knowledge of the legal context in a particular jurisdiction. The very same article may be classified one way under the EU’s TARIC nomenclature and another way under the Swiss nomenclature, for example. To make things even more complicated, product classification can be subject to varying legal interpretations even within a single jurisdiction such as the EU. Binding Tariff Information (BTI) can help here.

Data consistency and ERP limitations: Two options

A key challenge in master data management is the need for consistency. Consistency ensures that the same (8-digit) commodity code is used in the export declaration, in the Intrastat report, and in the invoice document.

The problems begin with the fact that many ERP systems provide only a reduced dataset for global trade master data. Some systems accommodate only 8-digit commodity codes, not the longer variants (such as 11-digit codes) used in some national tariffs. Another example: Most ERP systems provide only one field for origin. If you want to record both the preferential and the non-preferential origin, you’re out of luck.

The solution to these IT limitations is usually one of two approaches:

First, you can extend the ERP system to accommodate the necessary functionality. This requires a certain investment of time, however, and you have no guarantee that your customization will continue to work when a new version is released.

Second, you can sync master data between the various systems. This involves automatically transmitting the master data from the central ERP system to the various specialized applications (for customs management, etc.). The application-specific data is then used as needed in the appropriate application.
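A minimal sketch of this second approach, with the systems and field lists reduced to plain Python structures for illustration: the leading ERP system pushes to each specialized application only the subset of fields that application actually needs. In practice, these would be API, IDoc, or file interfaces rather than dictionaries.

```python
# Hedged sketch of master data synchronization from a leading ERP system
# to specialized applications. The "systems" here are plain dictionaries.

erp_master_data = {
    "MAT-1001": {
        "description": "Hydraulic pump, 24 V",
        "commodity_code": "84136031",
        "preferential_origin": "DE",
        "non_preferential_origin": "CN",
        "net_weight_kg": 12.4,
    },
}

# Each target application declares which master data fields it needs.
TARGET_SYSTEMS = {
    "customs_management": ["commodity_code", "non_preferential_origin", "net_weight_kg"],
    "preference_calculation": ["commodity_code", "preferential_origin"],
}

def sync_master_data(source: dict, targets: dict) -> dict:
    """Push the relevant subset of each record to every target system."""
    synced = {name: {} for name in targets}
    for material, record in source.items():
        for system, fields in targets.items():
            synced[system][material] = {f: record.get(f) for f in fields}
    return synced

for system, data in sync_master_data(erp_master_data, TARGET_SYSTEMS).items():
    print(system, data)
```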

Automation to the rescue: Three tangible tips

Caution is generally advised when entering master data, especially product classification data. The complexity can vary greatly. Most companies have a fixed selection of primary materials and commercial goods, but these items need to be reclassified continually – when changes are made to the procurement process, for example.

Modern algorithms can help here by providing good suggestions based on previous product classifications: machine learning, in other words (see the sketch below). This makes it possible to automate simple classifications, so your employees can focus on the tougher cases. In addition, the following three tips may help you keep other important aspects on your radar when you start to tackle the challenge:
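As a rough illustration, here is a minimal sketch (in Python, using scikit-learn) of how such suggestions could work: previously classified product descriptions are vectorized with TF-IDF, and the codes of the most similar existing items are proposed for a new description. The descriptions, codes, and library choice are assumptions for illustration; a production setup would be considerably more elaborate.

```python
# Hedged sketch: suggesting commodity codes from previously classified items.
# TF-IDF text similarity stands in for a full machine-learning pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Historical classifications (descriptions and codes are illustrative)
descriptions = [
    "hydraulic pump 24 V",
    "hydraulic hose assembly",
    "steel bolt M8 zinc plated",
    "control unit for hydraulic pump",
]
codes = ["84136031", "40093100", "73181595", "84139100"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(descriptions)
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(matrix)

def suggest_codes(new_description: str) -> list[str]:
    """Return commodity code suggestions for a new product description."""
    query = vectorizer.transform([new_description])
    _, neighbor_ids = index.kneighbors(query)
    return [codes[i] for i in neighbor_ids[0]]

print(suggest_codes("hydraulic gear pump, 24 volt"))
```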

Document changes

It sounds obvious, but this is all too often neglected in practice. If you are classifying your products and discover that a similar material was misclassified in the past, for example, you should first document which user entered the incorrect data, and when. IT systems generally capture this information automatically in log files. Then, you should also document any actual use of the obsolete or incorrect classification.
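As an illustration, such a correction could be documented in a small structured record like the following sketch. The fields and values are assumptions; in practice this information would live in the MDM or customs system itself, fed by its log files.

```python
# Hedged sketch: documenting a classification correction and where the
# incorrect value was actually used. Field names are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ClassificationChange:
    material_number: str
    old_commodity_code: str
    new_commodity_code: str
    changed_by: str
    changed_at: datetime
    entered_by: str                   # user who entered the incorrect code (from logs)
    entered_at: datetime
    affected_declarations: list[str]  # documents filed with the old code

change = ClassificationChange(
    material_number="MAT-1001",
    old_commodity_code="84136039",
    new_commodity_code="84136031",
    changed_by="j.doe",
    changed_at=datetime.now(timezone.utc),
    entered_by="a.miller",
    entered_at=datetime(2023, 4, 12, tzinfo=timezone.utc),
    affected_declarations=["EXP-2023-0142", "EXP-2023-0198"],
)
print(change)
```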

Use tools to help with year-end changes

Many commodity code changes take effect at the start of a new year, especially when the comprehensive HS revisions come along every five years. At such times, the various authorities and content providers offer “correlation tables” that let you automatically update all products for which the old commodity code is simply replaced with a new commodity code (1:1 relationship). Correlation tables can also be used for automatic updates when multiple commodity codes are consolidated into a single new commodity code (n:1 relationship). If, however, one commodity code is split into multiple new commodity codes (1:n relationship), the user must decide which commodity code to apply. These changes should also be documented, of course.
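A hedged sketch of how applying a correlation table might look: 1:1 and n:1 mappings are updated automatically, while 1:n splits are collected for manual review. The table format here is an assumption for illustration; actual correlation tables from authorities and content providers have their own structures.

```python
# Hedged sketch: applying a year-end correlation table to product master data.
# Mapping format (illustrative): old commodity code -> list of new codes.
CORRELATION_TABLE = {
    "84136031": ["84136032"],                 # 1:1 - replaced automatically
    "84136039": ["84136032"],                 # n:1 - several old codes merge into one
    "40093100": ["40093110", "40093190"],     # 1:n - requires a manual decision
}

products = {
    "MAT-1001": "84136031",
    "MAT-1002": "40093100",
    "MAT-1003": "73181595",                   # not affected by the change
    "MAT-1004": "84136039",
}

def apply_correlation_table(products: dict, table: dict) -> tuple[dict, dict]:
    """Return automatically updated products and those needing manual review."""
    updated, review = {}, {}
    for material, code in products.items():
        new_codes = table.get(code)
        if new_codes is None:
            updated[material] = code           # unchanged
        elif len(new_codes) == 1:
            updated[material] = new_codes[0]   # 1:1 or n:1 - safe to automate
        else:
            review[material] = new_codes       # 1:n - user must choose
    return updated, review

updated, review = apply_correlation_table(products, CORRELATION_TABLE)
print("updated:", updated)
print("needs review:", review)
```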

Establish collaborative master data management

As companies become more digitally connected, there is an increased need to share certain master data with external partners. This is usually done for reasons of efficiency, but it can also be necessary for liability reasons. Suppose, for example, that a company hires a service provider to clear its imports through customs. Above a certain volume of declarations, it might make sense for the importer to share its master data (product classification data, etc.) with the service provider so that the provider can use this data to clear the imports.

To conclude for today: The digital transformation won’t work without smart master data management. It’s quite obvious, really. Please let us know if you have any questions – or contact us to discuss the topic in more detail.