10 Jul 2013 03:35pm

The unseen cost of bad data

It seems most organisations are jumping on the significant opportunity that big data presents, and accounting practices are no exception. But there are numerous practices out there relying on flawed information that is set to cost them a fortune – and why? Because the old adage rings true: garbage in, garbage out. Or, if the truth be told: bad data in… errors to the power of ten out.

Not only does inaccurate and irrelevant data in your business systems cost you money, its effect multiplies tenfold at each stage it flows downstream – this is known as the 1-10-100 rule.

Too often insights are based on invalid data, which can lead to a negative version of this payoff

The ideal payoff for accumulating data is rapidly compounding returns. By gaining more data on your own business, your clients and your prospects, the idea is that you can make more informed decisions about your business and theirs, based on clear insight. Too often, however, these insights are based on invalid data, which can lead to a negative version of this payoff – to the power of ten.

In much the same way that the modern-day internet 'filter bubble' can create a false picture of reality, bad or inaccurate data can lead you down an expensive blind path.

By its very nature accounting is heavily data-driven and, as practices have scaled in the cloud, the amount of data being processed from invoices, emails, accounts payable and so on has increased exponentially. For many this has meant that errors have started to creep into the information entered into business systems. At face value these mistakes may not appear to be much of an issue, but when accountants and practice managers try to act on inaccurate or flawed data, the practice suffers.

Mailings are sent to non-existent prospects, clients don't learn about a new service being offered, invoices go unpaid and the wrong advice is given to clients. The results are clear: damage to your firm's reputation, missed growth targets and regulatory compliance risk. It all adds up to lost productivity, lost revenue and lost clients.

The 1-10-100 rule quantifies the output cost of bad data fairly accurately. Estimates suggest that if the cost of fixing a data error at the time of entry is £1, the cost of fixing it an hour after it has been entered escalates to £10.

Once trust in your data is undermined it becomes a difficult position to recover from

Fix it several months later and you’re looking at over £100. Wait longer and the sky’s the limit. Once trust in your data is undermined it becomes a difficult position to recover from.
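The escalation described above is simple arithmetic: each later stage of detection multiplies the fix cost by ten. A minimal sketch, using hypothetical stage labels and an assumed error volume of 1,000 purely for illustration:

```python
# Illustrative sketch of the 1-10-100 rule: the cost of fixing a data
# error multiplies by ten at each later stage of detection.
# The stage labels and the 1,000-error volume are hypothetical examples.

COST_AT_ENTRY = 1  # £1 to fix an error at the time of entry


def fix_cost(stage: int) -> int:
    """Cost in £ to fix one error caught at a given stage.

    stage 0 = at entry (£1), stage 1 = an hour later (£10),
    stage 2 = months later (£100).
    """
    return COST_AT_ENTRY * 10 ** stage


# Total cost if the same 1,000 errors are caught at each stage:
errors = 1000
for stage, label in enumerate(["at entry", "an hour later", "months later"]):
    print(f"{errors} errors fixed {label}: £{errors * fix_cost(stage):,}")
```

The point of the multiplier is that the same batch of errors costs £1,000 to fix at entry but £100,000 if left for months – which is why the article argues for catching bad data before it enters your line-of-business systems.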

The most frustrating thing for industry observers is that this is entirely avoidable. It is actually quite easy to address the main drivers of bad data and to stop your 'source code' becoming expensive: adopt a coherent integration strategy, make sure you have a comprehensive set of data sources, and ensure your data strategy evolves with your business. But the prime directive must always be: garbage in, garbage out.

So if you’re investing in a big data programme, define and implement an advanced data quality strategy and take a hard look at the way you treat data more generally. Once you’ve done this, get the full ROI you deserve by partnering with a service that can detect and protect your organisation from the risks and costs of inaccurate and irrelevant data entering your line of business systems.

This will allow you to automate your daily inbound information streams, including email and paper sourced data, whilst recognising and extracting key data from any unstructured or structured documents.

A UK financial services company recently achieved an increase of over 25% in overall revenue after fully cleaning its data – proving that, with the right level of real-time visibility, any garbage coming your way can also turn to gold on the way out.

Andrew Anderson is CEO of Celaton

