Features
David Adams 8 Nov 2018 03:39pm

Rise of the machines

Will robots steal your job? Can artificial intelligence elevate the professional’s position? David Adams looks at the ethical implications of AI for business and accountancy

Image by Julius Drost/Unsplash

For decades, portrayals of artificial intelligence (AI) in popular culture have warned us that if we ever created genuinely “intelligent” machines, they might decide we were surplus to requirements. Prominent voices in the fields of science and technology have also expressed misgivings: Stephen Hawking suggested humans would be “superseded” by AI; Tesla CEO Elon Musk has said that “with artificial intelligence we’re summoning the demon”.

None of the AI technologies in use today are likely to enslave or exterminate humanity. But increased use of AI and machine learning tools able to digest huge quantities of unstructured data, then recommend or execute actions, does raise important ethical questions around transparency and accountability.

Worldwide spending on cognitive systems is expected to reach $19bn during 2018, up 54% on 2017, according to IDC. Technology such as the virtual PA tool x.ai is becoming more widespread across many industries, both for automating generic administrative tasks and for much more specialised applications. For example, in 2017 investment bank Goldman Sachs implemented AI systems in its US brokerage operations that scan client emails to identify and execute requests for routine transactions, leaving human brokers to focus on more complex, higher value tasks. In addition, the bank uses machine learning to develop trading strategies based on previous market movements.

AI is also being used by businesses to process, accelerate or reject customer applications for financial products; to identify trends that might help businesses target new customers; and to improve the personalisation of online or voice interactions between customers and customer service “bots”. It is used in recruitment processes to spot the most promising candidates. Unilever employs technology from Pymetrics and HireVue to screen applications for entry-level jobs, and claims that doing so has removed unconscious bias from its recruitment processes, resulting in a more diverse new intake of personnel, while also cutting the duration of the recruitment process from four months to four weeks.

In accountancy, AI is already used in processes related to tax, audit and forecasting. Ben Taylor, partner and head of financial accounting advisory services at EY, says the firm uses AI to analyse spreadsheet data and to help clients fulfil IFRS 16 reporting requirements. “It gives us the ability to sift through thousands of leasing contracts and filter the relevant information,” he says. “A machine can do in minutes what a person would take days or months to do.”

Shamus Rae, partner and head of digital disruption at KPMG UK, outlines how the firm employs AI to run due diligence on suppliers; and is developing a system for automated compliance checks of audit disclosures. It has also created an AI-based tool that can review a bank’s entire loans book, rather than just a sample of it, complementing this with a review of the original paperwork and additional data searches, such as new credit checks, or valuations of mortgaged properties. This system is being rolled out in the US first, with further roll-outs to follow.

Governance and accountability

Such tools are clearly useful and efficient, but to what extent can anyone using a technology that “learns” from its work really control and understand the processes it uses to do this? And who should be held accountable for the consequences of decisions that are informed by these technologies?

In the UK, research and development of AI is a key focus within the government’s Industrial Strategy; and a consultation seeking views on the work of the proposed government-backed Centre for Data Ethics and Innovation ran during the summer of 2018.

In spring this year the House of Lords Select Committee on Artificial Intelligence published a report that examined how the UK might lead in AI technologies and proposed five principles that could form the basis of “a shared ethical AI framework”. The government of Singapore is sponsoring in-depth, long-term research into the governance of AI and data, aiming to ensure that decisions made by AI systems are “explainable, transparent and fair”. Other governments, alongside organisations such as the World Economic Forum, have also signalled their intention to address these issues.

For Peter Montagnon, associate director at the Institute of Business Ethics (IBE), the rise of AI makes business ethics even more important, because they “can reach into places that are hard to regulate”. He describes a hypothetical example of an employee who discovers a way to use technology that will mislead or con a customer in some way.

“What will stop them doing that isn’t necessarily the risk of being found out, it’s going to be the understanding that ‘in our business we don’t do that’,” says Montagnon. This is particularly important if an employee’s understanding of these technologies is much greater than the level of understanding among those individuals who are responsible for governance within the organisation.

EY’s Taylor agrees. He suggests that although laws and regulation may encourage better practice within specific jurisdictions, it is an organisation’s corporate culture, in combination with a practical approach to identifying and mitigating AI-related risks, that is the best guarantee of ethical use of AI. “Companies that are best in class in this area have defined an internal AI accountability framework and have aligned policies such as codes of conduct to specific provisions related to AI,” he says.

AI in accountancy and audit

The impact of increased use of AI on ethics and governance is also a key area of interest for Simon Learmount, lecturer in corporate governance at Cambridge Judge Business School, and a former director of its MBA and Executive MBA. He suggests that use of these technologies, alongside other aspects of digital disruption, may create business risks related to reputation, or to possible future compensation claims. He says these are balance sheet liabilities that organisations and their accountants need to identify and quantify.

“Are accountants or auditors able to take account of these types of risks?” Learmount asks. “Are the ethical frameworks used by accountants sufficient to deal with ethical questions related to these technologies?” ICAEW is addressing these questions through a number of different AI-related initiatives. Kirstin Gillon, technical manager in the IT Faculty, is part of a team considering subjects such as how increased use of these technologies could affect ICAEW’s ethical code; and how to ensure such technologies are used ethically.

For Gillon, the big difference with AI compared to other technologies is the fact that humans surrender a larger degree of control over the processes completed by the technology. “We don’t create the rules: we give the inputs and specify the outputs, but we don’t know what the computer is doing to get there,” she explains. As use of AI in audit increases, Gillon asks how an auditor can prove that AI-based processes are being completed in accordance with best practices.
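To make Gillon’s point concrete, the short Python sketch below trains a toy anomaly-flagging model: the programmer supplies example inputs and the outputs they want, but the decision rules the model induces are never written down by a human. The data, field meanings and model choice are invented for illustration and are not drawn from ICAEW or any firm quoted in this article.

```python
# A minimal sketch of the "black box" point: we supply example inputs and the
# outputs we want, but we do not write the decision rules ourselves.
# All names and data are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: features of past journal entries
# (amount, days to period end, manual adjustment flag) and whether a human
# reviewer flagged each one as anomalous.
X_train = [
    [120_000, 1, 1],
    [850, 20, 0],
    [47_500, 2, 1],
    [1_200, 15, 0],
]
y_train = [1, 0, 1, 0]  # 1 = flagged by a reviewer, 0 = not flagged

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model now "decides" which new entries look risky, but the rules it has
# induced live inside many tree splits rather than in code anyone wrote,
# which is exactly what makes the auditor's evidence question difficult.
print(model.predict([[95_000, 1, 1]]))
```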

While she does not believe that increased use of AI will mean that the fundamental principles underlying ICAEW’s codes of conduct will have to change, she does think that the way the principles are applied might have to evolve, in recognition of changes in the way business processes are completed.

Writing rules that work in practice

AI is also likely to play an increasingly important role in major corporate finance transaction services, delivered by accountancy firms to companies in the process of mergers and acquisitions, or undertaking large-scale debt refinancing. In July 2018 ICAEW launched a new initiative designed to facilitate and support the development and use of AI and big data technologies in these areas.

“Big data is already a significant part of those transaction services and AI and machine learning apps are being introduced,” says Shaun Beaney, manager of the Corporate Finance Faculty. “This project will look at current use, opportunities and risks in AI for these transaction services. The initial thrust is going to be around questions of ethics and practice governance.”

The initiative will incorporate in-depth research into the use of AI and the ethical and governance questions it raises. The problem faced by any professional body, regulator or policymaker is the speed at which these technologies are developing – and this is another reason why an approach based on ethical principles may prove more effective.

The ethics-led approach recommended in the House of Lords Select Committee Report could be based on its proposed basic guiding principles for use of AI, which include “principles of intelligibility and fairness”; and a commitment that use of the technology “should not diminish the data rights or privacy of individuals, families or communities”.

For Montagnon, the most important step to take will be to impress upon individuals working with AI the fact that they are accountable for what the technology does. “We need to learn how to use the opportunities these technologies can offer in ways that are socially and ethically acceptable,” he says. “I think the debate starting to bubble up about this in a number of different places is a very positive sign. But we’ve got to get to grips with this: it’s something that a lot of businesses are going to have to deal with day to day.”

When AI goes wrong

There are some troubling examples of AI use having unforeseen consequences when it is based on flawed inputs and data, which may reflect bias within an organisation – or may enable malicious misuse of the system.

In 2014, a computer algorithm used by Florida police to assess the risk of an arrested individual committing future crimes was found to label black defendants as high risk at twice the rate it did white defendants. A black 18-year-old woman, Brisha Borden, arrested for the first time for stealing a child’s bicycle (in fact, she did not even ride it away before being challenged by its six-year-old owner’s mother), was classified as a high crime risk by the system – whereas a white 41-year-old shoplifter, Vernon Prater, previously convicted of armed robbery, was classified as low risk. Prater has since been jailed for another offence.
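As a purely illustrative sketch (not the actual Florida system, whose workings are proprietary), the Python snippet below shows the underlying mechanism: a model trained on historically biased risk labels reproduces that bias in its own scores, even when the other inputs are identical. All data and group labels are invented.

```python
from sklearn.linear_model import LogisticRegression

# Features: [group, prior_arrests]. The historical "high risk" labels (1)
# were applied more harshly to group 1 for otherwise similar records.
X = [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]]
y = [0, 0, 1, 0, 1, 1]  # biased historical decisions used as training labels

model = LogisticRegression().fit(X, y)

# The learned risk scores are now higher for group 1 at identical
# prior-arrest counts: the model has simply absorbed the bias in its inputs.
for group in (0, 1):
    scores = model.predict_proba([[group, n] for n in range(3)])[:, 1]
    print(f"group {group}: risk scores {scores.round(2)}")
```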

Financial technology experts remain concerned about the risks of increased use of high frequency trading algorithms that use machine learning. One dramatic illustration of the problems algorithms can cause was seen in May 2010. A “flash crash” was created when many hundreds of algorithm-driven trading systems responded rapidly to a sudden flurry of selling activity, quickly magnifying and deepening its impact.

The value of the Dow Jones Index fell by 9% in just 15 minutes, reducing the value of listed stocks by more than $1trn – although it then recovered its value almost as quickly. The speed at which high frequency systems like this operate, coupled with increased use of systems that act according to their interpretation of vast quantities of unstructured data, means there is a risk of similar, artificially created volatility causing more significant damage in future.
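A toy simulation can make the feedback loop described above easier to picture. The Python sketch below is not a model of the 2010 event; the number of algorithms, the initial shock and the feedback strength are all invented parameters, chosen only to show how momentum-following sellers can turn a small dip into a rapid slide.

```python
# Toy simulation (invented parameters) of algorithmic feedback:
# momentum-following sellers react to an initial burst of selling
# and amplify the fall faster than humans can react.
price = 100.0
n_algos = 400        # hypothetical number of algorithm-driven sellers
shock = -0.3         # initial percentage fall from a sudden flurry of selling
feedback = 0.0015    # extra selling pressure each algorithm adds

move = shock
for minute in range(1, 16):
    price *= 1 + move / 100
    move *= 1 + n_algos * feedback   # the fall feeds the next wave of selling
    print(f"minute {minute:2d}: index at {price:6.2f}")
    if price <= 91.0:                # roughly the 9% drop of May 2010
        print("circuit breakers or human intervention needed here")
        break
```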

In 2016, Microsoft launched on Twitter a “humanoid” AI chatbot based on machine learning. “She” was called Tay – but was soon hijacked by human tweeters who exposed her to some unsavoury online views. Within a few hours they had persuaded Tay to express sympathy for Hitler’s views; to say she hated feminists; and to declare 9/11 to have been “an inside job”.

