Poor-quality data cost the average organization an estimated $9.7 million in 2017. Was your business part of that loss?
Those costs manifested in many forms, all equally destructive to the businesses that incurred them: erroneous strategic and tactical decisions, lost operational efficiency, dissatisfied customers, and even employee burnout. Without accurate, clean data, businesses are repeatedly steered in the wrong direction.
Based on the observations of Portside Technology’s team of resident data experts, poor-quality data can strike at any stage of a database or other data platform’s lifecycle: initial design and implementation, migrations to other data platforms, and even day-to-day use of the data.
The first common cause of “bad data” is generic database design. Too often, database and programming projects fail to prioritize what makes each business’s data unique. Instead, the database development company’s profit is prioritized, and generic, off-the-shelf data models are mass-produced and forced to fit extremely diverse data at thousands of businesses. A better solution, and the one we practice at Portside Technology, is to customize data models to each customer’s unique needs. Data should never be distorted to fit into a generic data model; instead, the model should be adapted to fit the data.
“Bad data” can also be caused by careless errors during integration and migration projects. For example, one of Portside’s resident data experts once repaired a pharmaceutical cancer research database in which a previous (non-Portside) migration team had overwritten every patient’s “diagnosis date” with “today’s date,” that is, the date of the migration. Due to that one careless mistake, thousands of patients suddenly and irreparably lost a meaningful date of diagnosis, one of the most important metrics in the entire cancer research database! This may seem like an extreme example, but the shocking truth is that approximately 88 percent of data integration projects fail at least partially!
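A mistake like this is preventable: a simple post-migration sanity check can flag a date column whose values have suspiciously collapsed onto a single date, before the change becomes irreversible. The sketch below is only illustrative; it assumes a SQLite database with a hypothetical patients table and diagnosis_date column, which are stand-ins rather than details from the actual project.

```python
# Illustrative post-migration sanity check. The table and column names
# ("patients", "diagnosis_date") are hypothetical stand-ins, not the
# schema from the project described above.
import sqlite3

def check_date_column_not_collapsed(conn, table, column, threshold=0.5):
    """Flag a migrated date column if too many of its values collapsed
    onto a single date (for example, the date of the migration itself)."""
    row = conn.execute(
        f"SELECT {column}, COUNT(*) AS n FROM {table} "
        f"WHERE {column} IS NOT NULL GROUP BY {column} ORDER BY n DESC LIMIT 1"
    ).fetchone()
    if row is None:
        return  # no non-null values to check
    most_common_value, count = row
    total = conn.execute(f"SELECT COUNT({column}) FROM {table}").fetchone()[0]
    share = count / total
    if share > threshold:
        raise ValueError(
            f"{share:.0%} of {table}.{column} holds the single value "
            f"{most_common_value!r}: possible overwrite during migration"
        )

# Example usage against a migrated copy of the database:
# conn = sqlite3.connect("migrated.db")
# check_date_column_not_collapsed(conn, "patients", "diagnosis_date")
```

Run against the migrated copy before cutover, a check like this would have stopped the overwritten diagnosis dates at the door.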
Finally, and perhaps most insidiously, simply using data can lead to its decay: the quality of data at a typical enterprise is estimated to decay at a rate of 2 percent per month, which adds up to roughly a quarter of data quality lost in just one year! Every time a datum is touched, there is an opportunity for a mistake to be made and an error to be introduced. Even when no one touches the data at all, it drifts out of date as the world changes and records are not updated to reflect those changes. Time itself is an enemy of data quality!
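To put that figure in perspective, here is the arithmetic behind it, taking the 2-percent-per-month estimate cited above as given: compounded over twelve months it works out to about 21.5 percent of quality lost, and summed linearly about 24 percent, hence “roughly a quarter” in a year. The snippet is just a worked calculation of that assumed rate, not a measurement.

```python
# Worked calculation of the assumed 2%-per-month decay rate cited above.
monthly_decay = 0.02

# Compounded: each month erodes 2% of whatever quality remains.
retained = (1 - monthly_decay) ** 12                                 # ~0.785
print(f"Lost after a year (compounded): {1 - retained:.1%}")         # ~21.5%

# Linear: simply summing 2% per month.
print(f"Lost after a year (linear):     {12 * monthly_decay:.1%}")   # 24.0%
```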
The good news is that all of these causes of data quality degradation can be prevented and remedied with vigilant, intentional precautions: proper data backups for restoring data to the last known “good” state; thoughtful standard operating procedures that establish processes for protecting and updating data; reporting tools for evaluating current data quality and identifying areas for improvement; and responsible, experienced data engineers who are careful to avoid mistakes and who make things right when mistakes do occur. Without tools and standards like these, however, calamity can, and does, strike vulnerable data all too often.
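As one small, hypothetical illustration of the “reporting tools” mentioned above, a basic profiling script can surface completeness and uniqueness problems at a glance. This sketch assumes a tabular dataset loaded with pandas, an assumption made for illustration rather than a prescription for any particular stack.

```python
# Minimal data-quality profile for a tabular dataset, assuming pandas.
import pandas as pd

def data_quality_report(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize per-column completeness and uniqueness as a starting
    point for spotting columns that need attention."""
    return pd.DataFrame({
        "non_null": df.notna().sum(),
        "null_pct": (df.isna().mean() * 100).round(1),
        "distinct": df.nunique(),
        "dtype": df.dtypes.astype(str),
    })

# Example usage (the file name is hypothetical):
# df = pd.read_csv("customers.csv")
# print(data_quality_report(df))
```

A report like this, run on a schedule, turns data quality from a vague worry into numbers you can track month over month.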
The challenge of data quality is that protecting it is not a one-time fight; it is an ongoing, unending battle against entropy, the natural tendency of all order to descend into chaos. Portside Technology offers end-to-end solutions for all of your data quality needs, from custom database design and development, to integrations and migrations, to ongoing support and maintenance. We walk alongside you every step of the way, constantly fighting to protect your invaluable data quality. So trust Portside Technology: we’ll ensure that your business isn’t part of the next report on poor-quality data casualties!
Contact us to get started!