Feature

The Generation Game


15 November 2023

Generative AI is the latest development to get everyone talking. Where could it have the most impact in the industry, and are fears around the new technology justified?

Image: immimagery/stock.adobe.com
Artificial intelligence (AI) might bring with it suggestions of anthropomorphised robots, or maybe a sinister Skynet ready to take over our jobs, minds and lives, but it’s safe to say that we’re not there yet.

Conversation around the technology has ramped up over the past year, with companies across the board introducing new use cases and finding different ways to incorporate AI into their operations. The latest element of AI to catch the attention of market participants and the general public alike, sparking both enthusiasm and trepidation, is generative AI (GenAI). Earlier this year, ChatGPT hit headlines worldwide. Although OpenAI released its first generative pre-trained transformer in 2018, it is the fourth iteration of the large language model (LLM) that has opened numerous new doors.

Ashmita Gupta, senior vice president and head of analytics at Linedata, affirmed on a panel earlier this year that programmes like ChatGPT and other LLMs could have a “magnificent impact” on the industry. The exact form this impact will take is not yet clear, and with developments progressing at pace, companies face a range of opportunities and challenges as they work out how to capitalise on the emerging technology, improve their operations and bring new innovations to market.

“With GenAI, we have a new opportunity to interact with technology differently,” says Emily Prince, group head of analytics at LSEG. As the technology continues to develop at pace, tools such as LLMs have “huge potential to create greater efficiencies and enable customers to do more, faster,” she affirms.

What comes next

During a panel discussion at this year’s Sibos, Kevin Blair, senior vice president of asset servicing at Northern Trust, broke AI technology down into two broad categories: tools mimicking human actions, and tools mimicking human behaviour. Examples of the former are already widespread, seen in the automation of various operations in asset servicing and beyond. The latter, GenAI, is at an earlier stage of development. This type of AI can learn to detect patterns and act on them, running more complex algorithms.

This next stage of AI will be able to take in pictures, videos and speech as prompts, says Cathy Sui, global head of AI engineering at TS Imagine. “We will see knowledge workers who do not have formal technical training starting to envision and create things that, historically, would have been complex and time consuming,” she says. An analytics dashboard, for example, could be conjured “simply by drawing a picture or taking a screenshot.” She goes on to suggest that bringing more knowledge workers into the technological development side of things will exponentially increase opportunities for innovation.

While traditional machine learning models can be trained and deployed to execute specific tasks, this is a long process, Melanie McGrory, director and EMEA tech and customer solution manager at Amazon Web Services, explained during the company’s ‘Business Transformed with AI and Data’ conference in London. Using foundation models that are pre-trained on large datasets and then ‘fine tuned’, or customised, to suit a particular use case is a far more efficient process. It also requires less data: rather than creating a model for each individual task, one big model can be adapted to fit many purposes.
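
To make the distinction concrete, here is a minimal Python sketch of that ‘fine tuning’ step, assuming the open-source Hugging Face transformers and datasets libraries; the model and dataset names are illustrative stand-ins, not anything McGrory or AWS specifically endorses.

```python
# Illustrative sketch: adapting one pre-trained foundation model to a
# specific task, rather than training a task-specific model from scratch.
# Model and dataset are stand-ins chosen for brevity.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased"  # the pre-trained foundation model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# A comparatively small labelled dataset suffices, because the model
# already carries general language knowledge from pre-training.
data = load_dataset("imdb", split="train[:1000]")
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=data,
)
trainer.train()  # customises ('fine tunes') the one big model for this task
```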

Use cases

Timothée Raymond, head of innovation at Linedata, believes that while GenAI is primarily seen as a decision-making or question-answering tool, it has untapped potential around providing access to existing data. “It can patch a business need by accessing the right data within a specific maze of data models,” he says, expediting the document crunching process. LSEG’s Prince agrees that the technology’s ability to drive enhanced data discoverability, generate insight and create reports is a key strength.

This type of use case may only cut five minutes from the task, “but we believe that it’s the sum of these little things that make the industry more efficient, and help our clients generate more value for their clients,” Raymond says.

Tools like Microsoft Copilot are being employed across the industry for this purpose, with small time-saving operations freeing up employees and boosting overall efficiency. Copilot is “an amazing tool” and an “exceptional new feature” for the industry, says Amicorp CEO Kin Lai. By removing the need for repetitive tasks to be completed manually, staff focus can be shifted to value-adding, higher-level tasks. No matter how small a time-saving benefit may seem, over time the impact can be transformative.

Firms are also developing their own programmes in this vein, with SteelEye launching a compliance copilot to accelerate the communication surveillance alert review process. The solution learns from user feedback and will therefore improve over time, a prime example of Blair’s second category of human behaviour-mimicking technology.

Another popular use of copilot-style programmes is in the development of code or test material, accelerating operations, improving efficiency and facilitating differentiated decision making.

GenAI also has huge potential in the customer experience space, providing a highly customised service for each client. “This can be, for example, an automatically personalised view in a banking app, a client message that is tailored to the client’s situation and preferred communication style, or a call centre worker whose skills are augmented by GenAI,” says Satu Kiiski, consulting director for global banking at CGI.

Furthermore, “huge efficiency and quality improvements can be found when applying GenAI for processes related to compliance, security, and privacy,” Kiiski continues, highlighting fraud detection as an area of opportunity for the technology. LSEG’s Prince adds that GenAI can enhance the performance of existing models, determining key themes and related investor sentiment by mining earnings releases, filings and transcripts.
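
As a hedged illustration of the sentiment-mining use case Prince describes, the sketch below scores an invented earnings-release excerpt with an off-the-shelf model via the Hugging Face pipeline API; it is not LSEG’s implementation.

```python
# Illustration only: scoring investor sentiment in filing text with a
# general-purpose, pre-trained model. The excerpt is made up.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
excerpt = ("Revenue grew 12 per cent year on year, although margins were "
           "compressed by higher funding costs.")
print(sentiment(excerpt))  # e.g. [{'label': 'NEGATIVE', 'score': 0.98}]
```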

Misuse cases

While GenAI has the potential to be a major disruptor in the industry — and is already proving itself to be so, in several cases — many have raised concerns that the technology is being treated as a one-size-fits-all solution, whether or not it offers any actual benefit to users. “Everyone’s trying to figure out ‘what can I do, what problem can I solve with AI?’ And there’s a lot of [solutions] out there that just don’t make much sense,” says Anthony Caiafa, chief technology officer at SS&C.

As well as being seen as the first port of call for fixing existing problems, several market participants have expressed worries about GenAI-based replacements being brought in for systems that already work efficiently and do not need to be changed. There is a sense that some are attempting to jump on the bandwagon and implement GenAI simply so they can say they are using it, a tactic that will deliver short-term benefits at best. Firms must “define smart use cases and be very explicit with [their] prompts,” TS Imagine’s Sui affirms.

Fears

While many cry fearmongering, there are numerous genuine, credible concerns about the advent of GenAI in the financial services industry. “Although AI has the potential to drive growth and innovation across numerous sectors, it is true that businesses and policymakers must not neglect their responsibility to protect against current emerging threats caused by the rise of AI technologies,” says Tony Petrov, chief legal officer at verification technology firm Sumsub. An avid supporter of AI, he advocates for strong regulation in the space, urging firms to see the developing frameworks as “an opportunity to shape the future of business in a more controlled way”.

One concern that many have about GenAI is that it will become unexplainable, making decisions and running operations on a logic that its human counterparts and overseers cannot understand. This fear has echoes of the general panic that AI will take over the world and overthrow its creators, but theatrics aside, explainability is nonetheless an essential consideration for those developing AI programmes. Transparency is key, allowing users to see how the programme’s decision-making process works and why it has arrived at a particular action or recommendation. If a model is unable to explain how its results are produced, “it can be difficult to assign responsibility or liability for the decisions and resulting consequences,” says TS Imagine’s Sui. If an inanimate machine makes a mistake, it can’t be held accountable for its actions — but it’s important to know who should be.
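
One way teams approach this kind of transparency is with post-hoc explanation tools. The sketch below, on synthetic data, uses the open-source shap library to surface which inputs drove each prediction; it illustrates the principle rather than any firm’s actual practice.

```python
# Sketch of explainability in practice: per-feature attributions show why
# a model arrived at a given recommendation. Data here is synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # four synthetic features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # known decision rule

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

# Contribution of each feature to each of the first five predictions: an
# overseer can trace a decision back to the inputs that drove it.
print(explainer.shap_values(X[:5]))
```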

This clarity will also help reduce suspicious or incorrect results, and can allow potential bias to be identified. As AI is human-made, there is an inherent risk that the technology will mimic its creators’ behaviours — both conscious and unconscious — and develop biases, subjectively interpreting data and reducing a programme’s reliability and efficiency. In other fields, the primary manifestation of this could come in the form of racial or gender bias. In asset management, it may appear in the level of risk-taking behaviour or decision-making patterns that a system demonstrates.

“If previous corporate credit pricing decisions are used as reference in making new decisions, the model might become biased towards a specific industry or region,” Kiiski explains. “It’s crucial to address this challenge right from the start,” using suitable and comprehensive data to train models and ensuring that a human is in place to keep the model in check. At CGI, “we assist our clients in incorporating responsible AI by designing [programmes] with humans in the loop”, she says, with model bias routinely checked and ethics, trustworthiness and robustness built into solutions from the get-go.
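
A routine bias check of the kind Kiiski describes can be as simple as comparing a model’s decisions across groups and escalating outliers to a human reviewer. The sketch below uses invented data and an arbitrary threshold, purely to illustrate the ‘human in the loop’ pattern.

```python
# Illustrative bias check: compare approval rates by region and flag
# divergence for human review. Data and threshold are invented.
import pandas as pd

decisions = pd.DataFrame({
    "region":   ["EMEA", "EMEA", "APAC", "APAC", "AMER", "AMER"],
    "approved": [1, 1, 0, 1, 0, 0],
})

rates = decisions.groupby("region")["approved"].mean()
overall = decisions["approved"].mean()

# Escalate any region whose approval rate diverges sharply from the
# overall rate -- the human in the loop keeps the model in check.
flagged = rates[(rates - overall).abs() > 0.25]
if not flagged.empty:
    print("Human review needed for:", list(flagged.index))
```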

The data issue

As in all areas of the industry, data is a significant challenge in the GenAI space. An oft-repeated phrase in AI circles is ‘garbage in, garbage out’: if a prompt is poorly formed, or the underlying data is incomplete or unreliable, the results will be unrewarding.

“GenAI is only as good as the quality of data it’s trained on,” LSEG’s Prince says; if the data fed into the LLM that a GenAI programme learns from is insufficient or unreliable, “the output will not be useful”, Sui confirms. Prince highlights the need for high-quality, wide-reaching data in GenAI initiatives, ensuring ‘responsible AI by design’ and the production of reliable results.
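
‘Garbage in, garbage out’ implies checking data before it ever reaches a model. The sketch below shows a minimal, assumed pre-training validation pass in pandas; real pipelines would apply far richer rules.

```python
# Minimal data-quality gate: catch obvious 'garbage' before training.
# The checks and sample data are illustrative, not a production standard.
import pandas as pd

def quality_problems(df: pd.DataFrame) -> list[str]:
    """Return a list of basic data-quality problems found in df."""
    problems = []
    if df.isna().any().any():
        problems.append("missing values present")
    if df.duplicated().any():
        problems.append("duplicate rows present")
    return problems

training_data = pd.DataFrame({"text": ["filing A", "filing A", None]})
print(quality_problems(training_data))  # unreliable input, unreliable output
```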

“Inaccurate or incomplete data can introduce risks of bias and misinterpretation,” says Niamh Kingsley, head of product innovation and AI at Delta Capita, “potentially leading to regulatory or security vulnerabilities and unforeseen ‘black swan’ events.” Caiafa advocates for the use of closed datasets in the training of AI models, commenting that “AI itself is just a service; data is the core.”

Kingsley reassures that “significant strides” are being made to address these issues. One approach is ‘red teaming’, whereby researchers identify and exploit a system’s potential vulnerabilities. Google has even turned this into an incentivised programme, rewarding employees who find bugs in GenAI programmes.

There are also concerns around data security. Users need to know where data is coming from, and that it is both secure and reliable, says LSEG’s Prince. The firm “aims to deliver the industry’s highest quality data, combined with verified content credentials such as provenance and auditability,” she explains. “This will allow customers to ascertain the origins of their data with confidence, understand associated intellectual property rights and meet their regulatory and compliance standards.”

Looking at security from a different angle, Delta Capita’s Kingsley references the “potential controversial applications” of generative AI in advanced due diligence — biometric identification and categorisation practices, for example. “We anticipate that the EU Artificial Intelligence Act will classify such practices as either ‘high risk’ or ‘unacceptable risk’,” she says, meaning they will either be banned outright or closely regulated and assessed. However, the majority of banking-applicable GenAI use cases — chatbots, text, image and audio generation — “are likely to be classified as ‘limited risk’”, meaning they will need to meet transparency obligations but their use will not be so closely interrogated.

Ensuring that a responsible AI framework is in place is essential to keep the industry and its participants safe. Standards and best practices often develop after a technology becomes widespread, but waiting for them to emerge organically is a riskier approach with GenAI, given the security concerns associated with it. “It’s important to consider all factors relating to data privacy and data protection, both relating to the organisation using the technology but also clients, employees and other parties where personal data is affected,” Marie Measures, chief digital and information officer at Apex Group, attests. “Ethics need to be a strong consideration when selecting use cases.”

A waste of energy?

Discussing graphics processing units (GPUs), the hardware driving GenAI, SS&C’s Caiafa reflects on their high energy consumption. “If you’re using AI in all of these contexts that aren’t actually going to work, you’re wasting electricity. We have to be cognisant that when we’re building these tools, building these infrastructures, that AI is a blatantly wasteful consumption of energy.”

The crypto industry recently came under fire for its excessive environmental costs, but AI is yet to face the same backlash — despite the fact that “the hardware that crypto miners use is no different from the hardware that’s needed for GPUs. We say, ‘you can’t mine crypto’, but we use the same compute for AI.”

“There’s a line that we still have to figure out,” Caiafa says, and one that will certainly be the source of many heated debates as time goes on.

Human in the loop

Of course, one of the biggest worries around AI is that its technical skill and business benefits will lead to humans being pushed out of their jobs. This is a reasonable concern — we’ve been told for years that this is an inevitability. However, what’s often left out of the narrative are the benefits that AI can bring to existing jobs, and the new roles it can create.

In many cases the technology can be used to enhance and assist, simplifying cumbersome tasks. “AI has the potential to turn the average knowledge worker into a superstar,” Sui maintains. With TS Imagine already employing AI in its operations, development and data practices, “we plan to enhance our overall productivity by leveraging AI across more functions,” she affirms.

New technology always induces fear. Phil LeBrun, enterprise strategist at Amazon Web Services, affirmed during the company’s conference that while risk and opportunity are always present, “the technology alone does nothing”. AI won’t make humans obsolete, he states, addressing perhaps the most widespread concern about the emerging technology. He predicts that work will and must be “reinvented” with the rise of GenAI in order for its benefits to truly be felt.

In fact, maintaining the human component is essential. As everyone hops onto the GenAI train, several market participants have warned of an overreliance on this new technology. “Whilst AI offers immense potential in solving complex problems, it is crucial to remember that it is a tool, and not a complete solution,” says Delta Capita’s Kingsley. Human oversight is needed for ethical decision-making, working through complex situations and preventing biased results.

“Whilst [GenAI] can support individuals to reduce manual activities in their roles, it cannot replace their knowledge in ensuring the right data is being used to make key decisions,” Apex’s Measures comments.

Part of changing the attitudes towards, and culture around, AI involves more effectively educating employees, explaining what AI integration into a business will truly mean for their roles. Rather than losing their jobs, employees will develop new skill sets and take on newly created roles that are inimitably “human in nature”.

These roles tend to centre on critical thinking, imagination and tasks that rest on a complex understanding of the business, LeBrun explains. A culture of continuous learning, and a willingness to adapt, must be embedded in the workplace. AI needs humans, and as companies invest in their technology they must invest equally in their workforce.

“The jobs that we have today are not necessarily the ones we will have tomorrow,” McGrory agrees. But, as with the rest of the AI space, this should perhaps be seen as a source of opportunity rather than a cause for trepidation.