An Article by Ian Kilbride.
When geniuses such as Stephen Hawking, Bill Gates and Elon Musk warn of the dangers to humanity of Artificial Intelligence (AI), we should sit up and take notice. But when the Godfather of AI, Geoffrey Hinton, blows the whistle on his own creation, then we really do need to take stock. Having built a career that laid the foundations for applications such as ChatGPT, Hinton, a former Google engineer, has now issued a Dr Frankenstein warning, stating that, “I’m just a scientist who suddenly realised that these things are getting smarter than us.” Really? The penny has just dropped? In May this year, Hinton, along with more than 1,100 scientists and global tech gurus, signed an open letter published by the Future of Life Institute calling for a six-month moratorium on the development of AI systems more advanced than ChatGPT.
On the one hand, it’s hard to take seriously a call by scientists (of all people) to halt research and development of creatures of their own making. The idea that scientists, academics, engineers, software developers and social media companies would halt R&D on leading-edge and highly competitive technology is a non-starter. Given that the next world war is already being fought in cyberspace, the notion that Russia, China, North Korea or any EU country for that matter would pause AI research and development is absurd.
But the naivete of the proposal should not detract from taking the concerns seriously. What Hinton and his supporters are really asking for is for policy, legislation and regulation to catch up with AI in order for it to be better managed, and to minimise the temptation for it to be exploited and abused by malevolent forces, be they countries, criminal networks, terrorist groups or multinational corporations. In this regard, they have a point. Artificial Intelligence ‘bots’ are already deployed across all social media platforms, not only to gather the vital data that feeds the algorithms driving and connecting users across the internet, but also, increasingly, to influence, shape and in some cases determine the outcomes of social behaviour, consumption patterns and even political elections. Having voted its approval in June, the EU is likely to pass the world’s first legislation dedicated to regulating AI in all sectors of society, except for defence. It will be fascinating to see exactly how this legislation is going to work in practice, how it will be enforced, and whether it leaves the EU at an advantage or disadvantage in the global and highly competitive AI arena.
This latter point provides a segue into the issue about which I have most interest and concern, namely, the role of AI in financial services. Let’s deal with some of the concerns first. The major general concern is the risk of breaches of privacy that comes with the massive personal data sets collected via AI. Secondly, the current AI client interface, particularly in the banking sector, is primitive to say the least. We are some way away from interacting satisfactorily with a chatbot that can deal with anything more than the most basic and generic queries. It is also contended that the human designers of the current AI algorithms used in the client interface of the financial services sector have huge inbuilt biases with respect to the collection, ranking and sequencing of data, such that these can generate and reproduce highly skewed analysis and ‘advice’.
Yet, having been personally involved and invested in the development and application of algorithms to asset portfolio construction, I can attest to the enormous power of Artificial Intelligence and Machine Learning in investment management. But I see this as an innovative and expanding tool in the armoury of asset management, rather than a replacement for analysts and portfolio managers. Human intervention remains critical to the formulation and specified purpose of an algorithm, as well as to its adjustment and repurposing.
I embrace the utility of AI with respect to data gathering to enhance risk assessment, risk management and anti-corruption monitoring. AI is already a vital tool in the banking sector with respect to the customer interface, particularly in the mass market. Covid-19 lockdowns certainly propelled the rapid development and adoption of applied AI technologies that all but remove the necessity of bricks-and-mortar banking. In purely commercial terms, AI in the banking sector makes the cost of banking manageable for the average customer and, of course, profitable for the banks!
Equally, I see the burgeoning use and value of financial technology more broadly and have even tested a number of robo-advisor applications and interfaces. And while these technologies are truly remarkable in their ‘intelligence’, I personally do not know of a single client who would trust a robo-advisor, still less invest their hard-earned wealth with one. This will change, of course, particularly as AI bots learn how to ‘speak’ to cautious and sceptical customers, but also as millennials and Generation Z, who are more comfortable with fintech and its offerings, begin accumulating personal wealth.
AI is the most exciting space in what has been, for some players, a sleepy, flaccid and complacent financial services sector. But I would caution that the private client relationship should not become inverted, and that AI should never become a Frankenstein monster that ends up killing its creator.