30 May 2024

Roy Saadon
AccessFintech

I have seen you previously refer to data as “the new gold”. Has the value and importance of data changed over the last 10 years, or is this more to do with its dissemination and use in modern systems?

The value of data has changed by orders of magnitude, and there are several underlying reasons for that. One is the industry’s willingness to collaborate when it comes to data, which has not always been there. We are in a regulated market that historically was not focused on sharing data, but instead built walls around every organisation to protect access. That willingness to share data is a fundamental shift in mindset, and an important one for the industry.

The technology has evolved dramatically over the last 10 years with regard to handling large quantities of data, normalising data, and governing data. Artificial intelligence (AI) is a data-hungry environment, and we are applying it in an opaque market, so we are forced to use bad or limited datasets to try to build models.

The reasons for using data, and its availability, have changed, and what can be generated from that data has also changed. It is a completely different conversation now compared to how it was, and we just happen to be right in the eye of the storm. I would say we are lucky to be in a data age, and to be a data company, right now.

Do you see the use and importance of data increasing even further in the future within the finance industry? What will be the factors that impact this?

Yes, certainly. Historically, data was used in a very myopic way. Access to data would lead to companies building pricing streams and front office strategies. That was basically what data was about — speed and pricing.

Today, companies look at data as a fundamental way of managing risk, driving the direction of travel, and optimising efficiency or the use of capital. All of these things can be improved with data.

As you have more participants who know how to extract the value of data, its uses grow in scope. Now data is used in everything from operations and finance to cybersecurity, where companies want to understand how to protect themselves from cyber risk. Everyone is now making better decisions, and reducing effort, by looking at data.

What I think has really changed is that the industry focus has moved from a ‘let me fix my own data and use my own data to be better’ attitude to a collaborative ‘let me use industry or network data to be better’ one. It can be seen as a supply chain issue.

For a single transaction, there is a buy side, an executing broker, a prime broker, an agent bank, a custodian and outsourced middle offices: six different firms, all with some role in the same transaction. They will all have their own ‘version’ of it, or opinion of it, and likewise their own view of any issues that arise with the transaction. No single entity really has the ability to look across the lifecycle of the transaction, and they definitely do not have the ability to look at trends over time. There comes a point where there is a real reason for firms, even ones in potential competition, to collaborate, because they can help each other get better.

Because of that, they are willing to share data, albeit with a lot of governance, control and security. But the principle is that, by sharing data with each other and so increasing the dataset they can use, all parties improve. In effect, firms reach the ceiling of what they can do as single entities, so the next frontier is developing as an industry.

But for the industry to get better, it has to look at these supply chain issues and ask ‘what is outside my wall?’, which is the area AccessFintech focuses on. It is the place we find interesting because it is much more challenging: you have to consider the ownership and security of data. No single firm holds a full dataset, so firms have to work with others to assemble the full problem set. The value is in getting significantly larger amounts of data, and correlated data, which has a direct impact on the value firms can get out of it.
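
To make that concrete, here is a minimal, purely hypothetical Python sketch of the cross-party correlation Saadon describes: each firm holds its own ‘version’ of the same trade, and a break only becomes visible when the views are compared side by side. The parties, field names and values are invented for illustration.

```python
# Hypothetical views of one transaction held by three of the parties.
# Parties, fields and values are invented for illustration only.
views = {
    "buy_side":         {"quantity": 100, "settle_date": "2024-06-03"},
    "executing_broker": {"quantity": 100, "settle_date": "2024-06-03"},
    "custodian":        {"quantity": 100, "settle_date": "2024-06-04"},
}

def find_breaks(views: dict[str, dict]) -> dict[str, set]:
    """Return every field whose value disagrees across the parties' views."""
    fields = {f for view in views.values() for f in view}
    breaks = {}
    for f in fields:
        values = {view.get(f) for view in views.values()}
        if len(values) > 1:
            breaks[f] = values
    return breaks

print(find_breaks(views))
# {'settle_date': {'2024-06-03', '2024-06-04'}}
```

No single firm’s dataset contains the discrepancy; it only exists in the correlated, shared view.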

AI has been a hot topic across most industries in the past year or two. What value and use do you currently see AI having within your own, or your clients’, businesses? How do you think this will change in the future?

I am old enough to have seen multiple hype cycles come and go. Over time you get a bit more experienced, so I am trying to avoid that same hype now. It is fairly common, when a new technology emerges, for people to think it is going to fix everything. They throw the most complicated problems at it, rather than fixing what is obvious, which would build trust and momentum with the technology and actually create value.

There are arguably two extremes. On the one hand, AI is being used to handle repetitive tasks. These can be areas that companies actually struggle to staff, because people do not want to do those roles. There is concern about AI taking the jobs of humans, but in these cases there is zero interest in the positions.

On the other extreme is what I call ‘the move from 98 to 100 per cent’. You can achieve 98 per cent efficiency with standard tooling, but it is the last two per cent that is really hard to crack. This is where the data may not be correlated, or the pattern recognition requires too much ‘compute’ power to do it in your head or on a spreadsheet.

These are the two areas where AI operates now: either high-volume, repetitive tasks, or high-end, complicated scenarios.

There is a lot AI can do in the ‘middle’, but it is about building confidence and working on things that are not too controversial, and where immediate value can be had. This means the resistance to change is not as dramatic, because emotions and perceptions are less involved. The counter to this has always been using the technology for the more dramatic, big things: ‘I’ve got this thing that I’ve been trying to solve for 30 years, and I couldn’t; maybe AI is going to do it.’ But I think we should focus on the things that will return value.

The term ‘AI’ is a massive umbrella. Almost anything automated gets called AI these days. But it really means focusing on the things that get better with data and get better with pattern recognition. There is a lot happening around documentation conversion, extracting value out of contracts, for example.

There are a lot of new firms popping up in this space, because the technology is really suited to that work, and it would be very difficult to hire, say, fresh graduates to do it.

Are there any notable relationships or potential impacts you see AI and data having on repo specifically?

At the highest level, we are moving a tonne of assets around the globe for no reason, because the tools available do not allow easy discovery of efficiencies. In the repo world, when you want to roll positions over, you have to close out the maturing trade and send the assets back, only to deliver them again. You can actually compress that into a pure cash movement, cutting about 90 per cent of the asset movement just by being more efficient.
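
As a rough illustration of that compression, the Python sketch below nets a maturing repo leg against its replacement so that only the difference in collateral moves, alongside a net cash payment, rather than the full collateral travelling in both directions. The numbers are invented and the model is deliberately simplified; it is not a description of real settlement mechanics.

```python
from dataclasses import dataclass

@dataclass
class RepoLeg:
    collateral_value: float  # market value of securities delivered
    cash_amount: float       # cash lent against that collateral

def assets_moved_full_roll(maturing: RepoLeg, new_leg: RepoLeg) -> float:
    """Full roll: collateral is returned at maturity, then delivered again."""
    return maturing.collateral_value + new_leg.collateral_value

def assets_moved_compressed(maturing: RepoLeg, new_leg: RepoLeg) -> float:
    """Compressed roll: collateral stays put; only the difference moves."""
    return abs(new_leg.collateral_value - maturing.collateral_value)

old_leg = RepoLeg(collateral_value=10_000_000, cash_amount=10_000_000)
new_leg = RepoLeg(collateral_value=10_200_000, cash_amount=10_200_000)

print(assets_moved_full_roll(old_leg, new_leg))   # 20,200,000 in motion
print(assets_moved_compressed(old_leg, new_leg))  # 200,000, plus net cash
```

Both counterparties have to agree to settle this way, which is the ‘coordinated dance’ described next.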

No firm can do this alone, though; they need the other side to agree for a ‘coordinated dance’ to happen, which takes time and has been a manual process. We are now at a point where a machine can identify the opportunity, which puts repo compression in a position to help firms better manage inventory.

High interest rates make this very painful. Repo was a quiet, dormant market, but now it is at the heart of so many things.

What, if any, do you think are or will be the limitations of AI within the industry?

I think there are two things. There is the socio-economic limitation, which arises when, at scale, AI does displace a human. This is not just about saving money, though. These tasks are often fraught with human error, and those errors cost a lot of money, potentially more than the work itself.

I look at it as ‘noise reduction’. If you reduce the amount of this noise, it can be worth pursuing. This then raises the question of job security, and of how human workers and AI coexist. Personally, I think there are ways to address this, and it may not be as great a concern as some think.

The other area is supervision, and the kind of data that models are being trained on. There is a natural tendency to assume that if AI tells you something, it is the truth. If people assume this, it could then effectively become the ‘truth’.

Questioning and challenging the model or its results, agreeing or disagreeing with its conclusions, and knowing when not to use it will take human judgement. Of course, there is plenty of bias in humans as well.

The barriers right now are more the social ones, as well as the access to data.

Think of the car industry as an example. AI and automation already design and build the car, and now we are at a point where they can drive it as well. You can explain all of the statistics about reducing accidents to people, but the moment an AI-driven car has an accident it will be front-page news, while all the other benefits go unmentioned.

Consider innovations like ‘assisted driving’, where there is an attempt to address these fears. Again, we go back to what I said earlier: we should start with more accessible activities that bring immediate value and do not trigger this emotional resistance.

With this evolving relationship, and the growing significance of AI and data, what do you believe are some of the concerns surrounding ownership, privacy and security for the industry?

I think people now understand the value of data and treat it as an asset, which is different from the past, when data was perhaps seen more as a knock-on side effect of a product.

Once that ‘side effect’ is actually a valuable asset, you have to treat it as such, which means you have to be very clear on ownership.

We position ourselves as custodians of data. My job is to help ‘Bank A’ be on the cloud and be able to collaborate with other parties, while also maintaining ownership of the data and total control of who has access to what. This enables them to make use of their data, whether for the benefit of clients, for working collaboratively with competitors, or for regulatory oversight.

If the infrastructure to manage your data like this does not exist, you end up entrenched again. When I talk about custody of data, I mean I am here to protect your data and make your data work for you, in a way that you are comfortable with and without you ‘letting go’ of it.

I use the idea that we are putting a GPS on a piece of data, meaning that at any point in time you know where your data is, and who does and does not have access to, or permission for, it.

That level of granularity is what we have built. It was less about understanding finances than about understanding data, and who holds the ownership rights. There is also the question of who owns, or controls, the data in the transitional phase. That is really where the complexity lies.
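
As a minimal sketch of what a ‘GPS on data’ could look like in code, the hypothetical Python structure below records an owner, explicit access grants, and an audit trail for a single data asset. A production entitlement system would be far richer; this only illustrates the principle of owner-controlled, fully traceable access.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataAsset:
    """Hypothetical record: one dataset, its owner, its grants, its history."""
    owner: str
    grants: dict = field(default_factory=dict)    # party -> permission level
    audit_log: list = field(default_factory=list) # (timestamp, party, event)

    def grant(self, party: str, permission: str) -> None:
        self.grants[party] = permission
        self.audit_log.append((datetime.now(timezone.utc), party,
                               f"granted {permission}"))

    def revoke(self, party: str) -> None:
        self.grants.pop(party, None)
        self.audit_log.append((datetime.now(timezone.utc), party, "revoked"))

asset = DataAsset(owner="Bank A")
asset.grant("Bank B", "read")
asset.revoke("Bank B")
print(asset.grants)     # {} -- Bank B no longer has access
print(asset.audit_log)  # full history of who had access, and when
```

The owner can answer, at any point in time, who has access and who had it, which is the granularity described above.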

Automation is another hot topic for many firms in the industry. How do you see this being used, both now and in the future, and do you think the US move to T+1 (and potentially the UK and other countries) will accelerate the shift?

The basic nature of T+1 is, effectively, that tasks which previously could take two days must now be done in one. Working with clients in Asia and Europe, this might even mean getting tasks done in half a day.

You could dramatically increase your workforce to do this, but you are not always able to speed up a process by dividing it among more people. Automation may be the way forward.

Another approach is to eliminate the variables that cause idle time or dependence on others for task completion, and to automate these processes with machines. This automation not only facilitates collaboration but also generates valuable intelligence. The key benefit is reaching the decision-making stage more quickly, while allowing investigations to be conducted automatically.

I believe the industry has evolved by leveraging data sharing, which reduces the effort required for investigations and allows machines to handle them. By utilising collaborative datasets rather than individual ones, people began receiving valuable insights on a T+0 basis, which previously would have taken one to three days. The urgency of time constraints has pushed people to adopt technologies that were always available but underutilised.

Ending with a broad topic, could you provide an overview of trends you believe the industry will see in the next 10 years? Are there any predictions about areas of importance that are perhaps not being looked at or discussed much at the moment?

We are undertaking many initiatives that we now know can be scaled effectively. For example, we are focusing on repo compression, which was possible before but could not be scaled due to its reliance on human, email-driven processes.

We are looking at regulatory reporting, which has arguably got out of control. We could flip the whole thing on its head. Many aspects have simply grown in a linear fashion, but if you were able to go back and redesign regulatory reporting from scratch with today’s tools, a much better, data-driven solution would be the way forward.
