Marketing Geeks

Unpacking tangible data and digital strategies

September 5, 2022
9 mins read


Do you want to unpack tangible data and digital strategies? Begin here. Standards, collaboration and reuse are well-understood ideas across departments in most companies. Last time out, we mentioned defining a “true north” for your business’s data and digital transformation: a direction that both IT and the business could agree to follow.

It’s crucial to remember that most development teams clearly understand systems architecture and code reusability, whereas the business side has deeper knowledge of business processes and management methods.

Unfortunately, applying these same ideas to data, to improve its accuracy, access, sharing and reuse, is still foreign to most departments. The point of developing a data strategy is to position all data resources so they can be used, shared and moved quickly and efficiently, delivering a tangible benefit.

Data isn’t simply a byproduct of business processes anymore – it has transitioned into a fundamental business asset that enables streamlined processing and decision-making. 

A well-put-together data strategy ensures that data is managed, used, and treated like an asset. An effective strategy establishes the essential methods, practices, and processes to collect, manipulate and share data across the business repeatedly. 

It provides a standard set of goals and objectives across projects to ensure data can be used effectively and efficiently across the business.  

Most enterprises have multiple data management initiatives, with most efforts focusing on solutions that address a specific project or organizational need. A data strategy defines a road map that compels the business to align all of these initiatives across each data management discipline, enabling them to build on one another to deliver far more significant benefits.

IT organizations have primarily defined data strategies with a focus on storage. They’ve built comprehensive plans and sophisticated methods for handling all aspects of retaining data. While unquestionably important, this only addresses the tactical side of storing content; it does not plan how improvements will be made in data acquisition, management, sharing and use.

An effective data strategy must go beyond data storage and also consider how data is identified, accessed, shared and used. Only when the strategy covers all aspects of data management will it solve the issues of making data accessible and easy to use for today’s endless processing and decision-making activities.

Five critical elements of a data strategy work together to form the foundation for data management across the entire organization:

  1. Identification
  2. Storage
  3. Provisioning
  4. Processes
  5. Governance

Creating a data strategy aims to improve data acquisition, storage, management, sharing, and use. 

Identification for your data and digital strategies is a must!

The first and arguably most important element is identifying data and understanding its meaning irrespective of its structure, origin or location. An essential part of using and sharing data within a company is establishing a consistent way to name the data and represent its content.

Whether the content is structured or unstructured, manipulating and processing data isn’t feasible unless each data value has an assigned name, a defined format and an agreed representation. Creating a consistent naming and value convention is critical to the use and sharing of data.

Being able to reference and access your data (its origin, values, and so on) is another essential step, and this information should be independent of how the data is stored (database, file system, etc.) or the system where it resides. Think of how a library’s card catalogue helps a reader find a book: successful data usage depends on metadata in the same way.

Metadata is critical once you factor in the impossibility of knowing the origin, storage place, and meaning of the thousands of data elements across a company’s data. A common way of dealing with this problem is to consolidate the business terminology and definitions into a business data glossary. 
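
To make the idea of a business data glossary a little more concrete, here is a minimal sketch of what a glossary entry could look like if you captured it in code. The fields (term, definition, owner, source system, format) are illustrative assumptions rather than a prescribed standard:

```python
from dataclasses import dataclass

@dataclass
class GlossaryEntry:
    """One business term in the glossary (the 'data card catalogue')."""
    term: str           # business name, e.g. "Customer Lifetime Value"
    definition: str     # agreed business meaning
    owner: str          # accountable business owner
    source_system: str  # system where the authoritative value originates
    data_format: str    # expected representation, e.g. "decimal(12,2) USD"

# A tiny glossary keyed by a technical name, so any team can look up
# what a data element means and where it comes from.
glossary = {
    "customer_lifetime_value": GlossaryEntry(
        term="Customer Lifetime Value",
        definition="Projected net revenue from a customer over the whole relationship",
        owner="Finance",
        source_system="CRM",
        data_format="decimal(12,2) USD",
    ),
}

print(glossary["customer_lifetime_value"].definition)
```

Even a simple structure like this gives every team the same answer to “what does this value mean and where does it come from?”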

If data is truly a corporate asset, then a data strategy must ensure that all data is identifiable. Without a developed and maintained data glossary and metadata (i.e., the “data card catalogue”), many companies will ignore some of their most prized data assets by not knowing they exist. 

Storage of data is essential to any data and digital strategy!

Data storage is an essential capability in a company’s technology portfolio, but it is incredibly complex. From a requirements point of view, most IT organizations have methods for sizing storage for individual systems, so that each system receives enough capacity to support its processing and retention needs.

When dealing with processing applications, analytical systems or even data storage for general purposes, most organizations use sophisticated methods to forecast the capacity requirements and allocate storage to the various systems. The downside to this approach is that it only reflects a data creation point of view and does not factor in the sharing and usage of data.

There’s rarely a plan, however, for managing storage once data needs to be shared. The most visible data sharing in the IT world is transactional: details are transferred between applications to complete a business process. Bulk data sharing isn’t nearly as well understood.

Thanks to the meteoric rise in popularity of big data, the rapid growth of business analytics and the ever-increasing sharing of information between companies, it is becoming much more common to share large volumes of data. Most of this shared content falls into two categories: data created internally, including customer details and purchase details, and content created externally, such as cloud applications and third-party data.

Storing data in a single location has proven infeasible as organizations have evolved and data assets have continued to grow. It’s not that we lack the expertise to build a system large enough to hold the content; the problem is that, given the size and distributed nature of our organizations, loading all data into a single platform becomes impractical.

Not everyone needs access to all of the company’s data; people only need the specific data that supports their individual functions. The key is a practical means of storing everything that is created so it remains easily accessible and shareable. The data doesn’t have to live in one centralized place: it only needs to be stored once, with a way for people to find and access it, because once created it will be shared with numerous other systems. A good data strategy ensures that any data created remains accessible in the future without requiring copies.
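
As a rough illustration of the “store it once and let people find it” idea, the sketch below registers each dataset’s single authoritative location in a small catalogue that consumers query instead of keeping their own copies. The catalogue layout and storage paths are hypothetical:

```python
# Hypothetical catalogue: each dataset is stored once, and consumers look up
# its authoritative location rather than keeping their own copies.
catalog = {
    "orders":    {"location": "s3://company-data/sales/orders/", "owner": "Sales Ops"},
    "customers": {"location": "s3://company-data/crm/customers/", "owner": "CRM Team"},
}

def resolve(dataset: str) -> str:
    """Return the single authoritative storage location for a dataset."""
    entry = catalog.get(dataset)
    if entry is None:
        raise KeyError(f"Dataset '{dataset}' is not registered in the catalogue")
    return entry["location"]

# Any downstream system reads from the same location instead of copying the data.
print(resolve("orders"))
```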

Provisioning

Packaging data to be reused and shared, and providing rules and access guidelines for that data, is imperative. In IT’s early days, most applications were built as stand-alone data processing engines containing all the data they needed to perform their duties. Organizations paid little attention to sharing data across applications. Traditionally, data was organized and stored for the convenience of the application that collected, created and kept the content.

When the occasional request for data did arrive, an application developer would dump the data into an extract file or build a one-off program to feed another application, with no thought of providing data on an ongoing basis. Data sharing was a rarity then. It isn’t rare today, yet the logic and rules required to decode the data for others are still seldom documented or even known outside the development team.

Even today, most IT organizations don’t dedicate resources to data sharing that holds no transactional value; it’s treated as a courtesy between teams. When data is shared, it’s usually packaged in whatever way is convenient for the application developer. That approach is utterly impractical in a world where IT manages dozens of systems that rely on data from multiple sources to support individual business operations and processes.

Data sharing is no longer a specialized technical capability that only certain application developers, architects and programmers attend to; it has become a production necessity. Businesses now depend on the sharing and provisioning of data to support operations and analytics. The critical message here is to package and provision data so that the teams managing tens of downstream systems can actually access it.
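
One practical way to package and provision data is to publish a lightweight “data contract” alongside each shared dataset, so downstream teams aren’t left reverse-engineering the producing application. The sketch below uses assumed field names purely for illustration:

```python
import json

# A hypothetical data contract published alongside a shared dataset, so
# downstream systems know its shape, refresh cadence and access rules
# without asking the producing team.
contract = {
    "dataset": "customer_orders_daily",
    "producer": "order-management-system",
    "schema": [
        {"name": "order_id", "type": "string", "required": True},
        {"name": "customer_id", "type": "string", "required": True},
        {"name": "order_total", "type": "decimal(12,2)", "required": True},
    ],
    "refresh": "daily at 02:00 UTC",
    "access": {"request_via": "data platform team", "contains_pii": False},
}

print(json.dumps(contract, indent=2))
```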

A company’s data is only an asset if packaged and prepared for sharing. To treat data as an asset instead of a burden, a data strategy has to address data provisioning as a standard business practice.

Processing of data is a key component for data and digital strategies

Moving and combining data that reside in separate systems and providing a unified and consistent view is what is meant by processing. The boundless amounts of data generated from applications are nothing short of an absolute treasure chest of knowledge – but data is raw at the time of creation. 

Processing in a data strategy refers to the activities required to take data from its raw state to a finished product. Think of the data acquired from a system as a raw ingredient in manufacturing: no preparation, transformation or correction has yet made it ready to use or of any value. We must now develop a process to extract value from this raw data.

In most companies, data comes from both inside and outside the company. Internal data is generated through dozens (maybe even hundreds) of application systems, while contrastingly, external data is delivered from multiple sources like cloud applications or business partners.

Most organizations have started onboarding teams focused on cleansing data, standardizing it, transforming it, and integrating it into the data warehouse or storage space. While this data often contains essential information, it has not been provisioned and packaged in a way that integrates seamlessly with the unique combination of sources and systems in each company. Making data ready for use requires a series of steps to transform, correct and then format it. This process creates a set of correlated data sets that a data user can combine, with data preparation tasks tailored to their individual needs.
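
As a toy example of that raw-to-ready journey, the sketch below cleanses, standardizes and reformats a single raw record. The field names and rules are assumptions chosen only to show the shape of the process:

```python
def cleanse(record: dict) -> dict:
    """Drop obvious noise: strip whitespace and remove empty values."""
    return {k: v.strip() for k, v in record.items() if v and v.strip()}

def standardize(record: dict) -> dict:
    """Apply agreed conventions, e.g. upper-case country codes."""
    if "country" in record:
        record["country"] = record["country"].upper()
    return record

def transform(record: dict) -> dict:
    """Reshape the record into the agreed, ready-to-use format."""
    return {
        "customer_id": record.get("id"),
        "country_code": record.get("country"),
        "email": record.get("email", "").lower(),
    }

raw = {"id": " C-1042 ", "country": "ca", "email": "JANE@EXAMPLE.COM", "fax": ""}
ready = transform(standardize(cleanse(raw)))
print(ready)  # {'customer_id': 'C-1042', 'country_code': 'CA', 'email': 'jane@example.com'}
```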

Unfortunately, many have learned that most data users require ready-to-use data and therefore take the preparation work in-house, which presents its own challenges: developing code that can identify and match records across individual applications and sources can be highly complicated.

A team of developers will spend an enormous amount of time building the logic to match and link values across multiple sources. Unfortunately, each new team that requires access to those sources ends up reconstructing the very same logic.
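
Here is a deliberately simplified sketch of that matching logic: it links records from two hypothetical sources on a normalized email address. Real record linkage involves far more (fuzzy matching, survivorship rules), and the field names here are assumptions:

```python
def normalize_email(value: str) -> str:
    """Normalize an email address so the same person matches across sources."""
    return value.strip().lower()

# Two hypothetical sources that both describe customers.
crm = [{"crm_id": "C-1", "email": "Jane@Example.com"}]
billing = [{"account": "A-9", "email": "jane@example.com ", "balance": 120.0}]

# Index one source by the normalized key, then link the other against it.
crm_by_email = {normalize_email(r["email"]): r for r in crm}

linked = []
for row in billing:
    match = crm_by_email.get(normalize_email(row["email"]))
    if match:
        linked.append({"crm_id": match["crm_id"], **row})

print(linked)  # one linked record joining the CRM id to the billing account
```

If this logic lived in a shared, governed library instead of each team’s project, the reconstruction problem described above would largely disappear.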

Many organizations have multiple initiatives addressing code reuse and collaboration for application development; however, they have not focused the same effort on delivering ready-to-use data that promotes sharing and reuse. Making data ready to use means offering tools and establishing processes that let individuals produce and consume data without the involvement of IT.

Governance

Data is still widely perceived as a byproduct of application processing. As a result, few organizations have fully developed methods and processes to manage data outside the context of an application and across the enterprise. Although many have invested in data governance initiatives, most of these are still in their infancy.

Most data governance initiatives start by addressing specific tactical issues like terminology standards and data accuracy; however, these efforts tend to be confined to a particular department or project.

With the growth of governance awareness and the increased visibility of data sharing and usage issues, most governance initiatives have no choice but to broaden in scope.

As these initiatives expand, organizations can establish policies, rules and methods that make data usage, manipulation and management uniform.

Adoption is the biggest challenge with data governance, because governance is a set of information policies and rules that everyone must respect and follow. The big misconception is that it is a rigour that applies only to users and the analytics environment; in reality, data governance applies to all applications, systems and staff members. The primary reason for establishing solid governance processes is to ensure that once data decouples from the application in which it was created, the rules and details of that data are known and respected by all.

Data governance rules and policies dictate how data will be processed, manipulated and shared. The role governance plays within an overall data strategy is to ensure consistency across the company’s data management: an effective policy makes data management, manipulation and access consistent, whether the subject is data correction logic, data naming standards, or new data rules.
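
To show how governance rules can be made checkable rather than purely aspirational, here is a minimal sketch that validates a dataset definition against a few assumed policies (naming convention, named owner, PII retention). The policies themselves are placeholders, not recommendations:

```python
import re

def check_governance(dataset: dict) -> list[str]:
    """Return the list of policy violations for a proposed dataset definition."""
    violations = []
    # Assumed naming policy: dataset names are lower_snake_case.
    if not re.fullmatch(r"[a-z][a-z0-9_]*", dataset.get("name", "")):
        violations.append("name must be lower_snake_case")
    # Assumed ownership policy: every dataset needs an accountable owner.
    if not dataset.get("owner"):
        violations.append("dataset must have a named owner")
    # Assumed privacy policy: PII datasets must declare a retention period.
    if dataset.get("contains_pii") and not dataset.get("retention_days"):
        violations.append("PII datasets must declare retention_days")
    return violations

print(check_governance({"name": "CustomerOrders", "contains_pii": True}))
# -> all three assumed policies are violated for this example definition
```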

It should come as no surprise, then, that a data strategy must include data governance. Data governance aims to make data easier to access, use and share, and its rigour should not become a burden on those who need the data. In the initial stages, while developers are adjusting to governance policies, there may be some loss of productivity; however, the productivity gains for downstream teams will dramatically outweigh that initial cost.

It is impractical and illogical to move forward with a data and digital strategy without integrating a data governance initiative into the road map, which should highlight how you capture, store, manage and use information. 

We’ll unpack common problems and misconceptions regarding data and digital strategies in our next blog. For now, we hope that you’ve found your answers here, and if you need assistance to delve deeper into developing your digital and data strategy across your platforms, be sure to contact us here.
