Opinion
Michael Izza 7 Sep 2017 12:38pm

Artificial intelligence: learning through error

Making the most of the opportunities presented by the growth of artificial intelligence requires a greater tolerance of error and a recognition that mistakes are integral to innovation

One of the characteristics of successful entrepreneurs – along with passion, grit and perseverance – is the ability to learn from mistakes. Take Paul Allen and Bill Gates, co-founders of Microsoft, whose first joint venture was Traf-O-Data, a microprocessor-based machine for analysing traffic data; it failed commercially, but as a result the pair learned to write software and the rest is history. In 2011, Allen told Newsweek that since that time he had made his fair share of business mistakes, but Traf-O-Data had always been his favourite, “because it confirmed to me that every failure contains the seeds of your next success”.

Closer to home, Virgin founder Richard Branson believes strongly in the importance of failure as one of the building blocks of successful entrepreneurialism. “Every person, and especially every entrepreneur, should embrace failure with open arms,” he says. “It is only through failure that we learn. Many of the world’s finest minds have learned this the hard way.” His favourite quotation on failure, by the way, comes from Thomas Edison, the entrepreneur who developed the first commercially practical incandescent light bulb (among other great innovations) and who said: “I have not failed. I’ve just found 10,000 ways that won’t work”.

This approach to risk taking and innovation, and our acceptance of it, is of fundamental importance if we are to take full advantage of technological change. We are, I believe, at one of those stages in the development of the human race where we are about to take an exponential leap forward thanks to artificial intelligence. AI will transform the way the accountancy profession works, although at this stage the pace of change and the breadth of adoption are still unclear. Some have suggested that it will take up to a decade for us to understand and apply AI to improve efficiency, with more radical change to follow. Others think the transformation will come more quickly, sketching a scenario in which larger firms adopt AI swiftly, leaving small firms to catch up if they can. Alternatively, smaller businesses could be quick adopters, as AI is simply integrated into the software they already use.

At a roundtable held at ICAEW in the summer, with Professor Moshe Vardi, one of the world’s leading experts on AI, as guest speaker, it was apparent that there was optimism among chartered accountants about the benefits AI could bring to the profession. For example, using AI to interpret data and automate processes would free them up to focus on more valuable tasks such as decision-taking, strategic advice and relationship building. They saw AI as opening up new opportunities to apply accountancy’s discipline and measurement to anything from the UN sustainable development goals to intangible assets on balance sheets.

All this will necessitate a great deal of experimentation and innovation, and, of course, mistakes will inevitably be made. Indeed, if we want to end up with the best outcomes, they need to happen. But what worries me is that those innovators could end up being pilloried and their creativity stifled because of society’s changing attitude to mistakes. In the past few years we seem to have developed a very low tolerance of error and an unforgiving attitude that lingers long after the event. And it’s not just in the UK: it’s happening globally. You only have to look at the lambasting that politicians have received, often unfairly, in recent months. If society, regulators and, yes, even professional bodies do not allow a culture in which people can innovate, experiment and fail along the way, then we will not be in a position to exploit all the potential that AI has to offer.

Many of the conversations I have been having recently have revolved around the question of what accountability means today, particularly for the 21st-century professional, in the light of changing attitudes to failure. There has to be some level of accountability where mistakes are made, but the degree should depend on their nature. If they involve gross negligence or recklessness, then clearly there has to be some form of serious sanction to protect the public interest. But our growing blame culture has begun to blur the distinction between simple mistakes or errors of judgement and real misconduct.

What we need over the coming years is a safe space in which entrepreneurs and innovators can experiment with AI without fear of retribution. I would urge governments, regulators and professional bodies to come together as a matter of urgency to create it.

Michael Izza
ICAEW chief executive