The importance of an ethical AI implementation in the financial industry | NTT DATA

Mon, 02 August 2021

With the new developments in artificial intelligence, ethical concerns such as algorithm bias or the fairness and transparency of results are rising. In fact, in April 2021 the European Commission announced a groundbreaking policy dedicated to regulating how businesses and governments can use artificial intelligence in their activities. Under this new regulation, companies that use AI in high-risk areas, such as developing medicine, underwriting insurance policies or evaluating a client’s creditworthiness, will need to prove the fairness and safety of their decision-making processes.

The concerns around the ethical use of AI are particularly important in the financial sector, given the sensitivity of the information entities collect. The European Banking Authority has made clear its concern that ethics should be considered in new AI-based systems, and since more and more entities are designing and implementing their strategy using data analysis and AI, it is more important than ever that they address these ethical concerns. Clearly, an evolution towards an ethical integration of AI in the world of finance is a great challenge for the industry, both from a technical perspective and from the point of view of the necessary investments, so it is vital to understand the importance of ethics in the financial industry and prioritize it.

So where should financial entities start when addressing the ethical use of AI? The first step is acknowledging issues such as bias, the trustworthiness of algorithms, concerns with the fairness of results and a lack of transparency when designing and implementing a strategy based on data analysis and AI. The strategy then needs to be built on a framework which incorporates tactical actions related to AI systems and algorithms, governance, the organization and society.

Artificial Intelligence Algorithms and Systems

The rapid development of technology in recent years, and its foreseeable acceleration in the immediate future, has made AI’s capacity to transform processes and generate new business models, services and experiences grow exponentially. Financial institutions that want to keep up with market needs have begun incorporating new technologies such as machine learning, which improves their decision-making thanks to its ability to process large amounts of data. But how does a financial entity implement ethical guidelines at every step of this elaborate process? We have identified four key factors that help companies face the challenges of an ethical implementation of AI:

1.   Transparency - there is a growing interest in knowing and understanding the decisions that artificial intelligence systems make, as they often condition access to products such as credit or investment plans. Explainability is particularly important for developing and implementing a transparent and ethical AI process: it makes it possible to quickly trace back and understand how an algorithm reached a result, and to communicate that reasoning to the different business areas. Traceability matters because it requires that all the data that is captured and processed be documented. This is essential for reducing ethical concerns, for example by tracking information that may contribute to the financial exclusion of specific groups back to its source. Communication is another key to increasing transparency, given that users must always be informed that they are interacting with AI systems.

2.   Fairness is another important and controversial component of ethical AI. Any artificial intelligence technology should not only guarantee accessibility to all individuals, regardless of their age, gender, race, sexual orientation, or political or religious views, but should also guarantee that its decision-making processes are not influenced by any type of bias. For risk mitigation, an ethical AI system must ensure that its data is representative and can be generalized, and that its design does not include variables, traits or structures without proper justification. Furthermore, the results must not generate discriminatory situations for the individuals involved, and the implementation of a bias-free AI system should be carried out by professionals trained in making systems operate responsibly.

3.  Security - wherever they are implemented, artificial intelligence technologies need to rely on secure internal models that protect entities from cyberattacks. Establishing proper safety protocols is critical to maintaining the integrity of the system under adverse circumstances. The system must be calibrated rigorously and remain reliable within expectations, regardless of unexpected changes, anomalies or disturbances. Its safety and robustness depend on constant testing, validation and re-evaluation.

4.   Privacy - personal data protection is at the core of many ethical issues with AI. It can be approached from different points of view (technical, research, regulatory), but the common denominator is the obligation to preserve the protection and security of personal data. An important concept to keep in mind is privacy by design: when creating a system, privacy needs to be an inherent and fundamental component throughout the system’s lifecycle.
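The explainability idea described above can be made concrete with a small sketch: for a simple linear scoring model, each feature's contribution to the final score can be reported alongside the decision. The feature names, weights and applicant values below are illustrative assumptions, not a real credit-scoring model.

```python
# Minimal sketch of per-feature explanation for a linear scoring model.
# Weights, bias and applicant data are illustrative assumptions.

def explain_decision(weights, bias, applicant):
    """Return the final score plus each feature's contribution to it."""
    contributions = {
        feature: weights[feature] * value
        for feature, value in applicant.items()
    }
    score = bias + sum(contributions.values())
    return score, contributions

weights = {"income": 0.4, "debt_ratio": -0.9, "years_employed": 0.2}
applicant = {"income": 3.0, "debt_ratio": 1.5, "years_employed": 4.0}

score, contributions = explain_decision(weights, bias=-0.5, applicant=applicant)

# Report contributions sorted by magnitude, so the dominant factors
# behind the decision can be traced and communicated.
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{feature}: {c:+.2f}")
print(f"score: {score:.2f}")
```

For more complex models the same goal is usually pursued with dedicated attribution techniques, but the principle is identical: every automated decision should come with a record of what drove it.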
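One simple, hedged way to operationalize the fairness point above is to compare outcome rates across demographic groups and flag large gaps. The group labels, sample decisions and the four-fifths (0.8) threshold below are illustrative assumptions; real fairness audits use richer metrics.

```python
# Minimal bias-check sketch: compare approval rates across groups and
# compute a disparate-impact ratio. Data and threshold are illustrative.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest approval rate."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

rates = approval_rates(decisions)
ratio = disparate_impact(rates)
print(rates)
if ratio < 0.8:  # the "four-fifths rule" heuristic from employment law
    print(f"warning: disparate impact ratio {ratio:.2f} below 0.8")
```

A check like this would run before deployment and on live decisions, feeding back into the traceability and reporting processes described in this article.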

Ethical concerns for Governance

In the wake of the explosion in popularity of Big Data, we have witnessed the growing importance of data governance and the incorporation of regulatory principles to ensure its proper management. The goal of AI governance is to integrate the ethical framework that guides the principles and values for its management within organizations.

Human supervision brings value and trust to the AI system. The ideal model is Human-Machine Interaction, or Augmented AI, where decisions are made by combining human perspective with automated suggestions. Human supervision can take the form of limited involvement that oversees automated decisions in real time (Human on the Loop), or a higher level of control in flexible automated systems, where a human decides when to apply or ignore AI judgments (Human in Command). When human involvement happens at every step, we have a Human in the Loop situation. Aside from human supervision, there are another two important aspects:

1.     Accountability is vital for AI governance. Ethical principles need to be accounted for in every phase of the lifecycle. Organizations must be able to audit the system and locate responsibility throughout the whole cycle. All participants (creators, designers, developers, etc.) are responsible for the system’s reach as well as for any malfunction that may happen.

2.     Reporting all the information, actions and decisions that influence the outcome of the system is a necessary requirement for an AI model. Identifying, evaluating, notifying about and monitoring possible negative end results of the AI system is essential for those affected by it.
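One common way to implement the supervision modes described above is confidence-based routing: the system decides automatically only when the model is confident, and escalates ambiguous cases to a human reviewer. The 0.9 threshold and the sample scores below are illustrative assumptions, not a prescribed policy.

```python
# Sketch of confidence-based routing between automation and human review.
# Threshold and scores are illustrative assumptions.

def route_decision(ai_score, threshold=0.9):
    """Auto-decide only on high-confidence scores; otherwise escalate
    to a human reviewer (human-in-the-loop for the ambiguous middle)."""
    if ai_score >= threshold:
        return "auto_approve"
    if ai_score <= 1 - threshold:
        return "auto_deny"
    return "human_review"

cases = [0.97, 0.05, 0.55, 0.91]
routed = [route_decision(s) for s in cases]
print(routed)
```

Logging every routing outcome alongside the model score also serves the accountability and reporting requirements above, since it records who (or what) made each decision and why.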

The Organization

Consumers, as well as employees, are more aware than ever of whether companies are committed to upholding ethical principles. It is therefore management’s obligation to create awareness in the organization of the importance of AI ethical integrity and to prioritize solutions that anticipate issues arising from unethical practices. It is necessary to create a structure that supports these values and integrates them into the company culture.

Creating an AI Ethics Committee, formed by individuals from within and outside the organization, makes it possible to evaluate internal practices and, in turn, support the ethical use of artificial intelligence. Its effectiveness depends on establishing its action framework and its processes for review and communication, as well as on its influence on institutional policies, particularly when launching a new product.

The Society

Ethical AI must serve the interests of the community and generate tangible long-term benefits. Following principles such as inclusion, diversity, justice, sustainability, prevention and progress, the people at the forefront of any system should use AI responsibly and with the goal of making a positive impact. The COVID pandemic has confronted millions of people with a sudden disruption in their lives, and financial institutions need to adapt to new ways of assessing risk, customizing their decision-making guidelines to new social contexts and credit needs. Organizations now have the responsibility of executing ethical AI principles, using this technology for the overall benefit of society.

Get more information and download our whitepaper Artificial Intelligence in the Financial Sector, Open Innovation and Ethical Commitment.
