
How Veriff is tackling bias in identity verification

Bias in technology must be acknowledged and countered at every turn, or genuine customers will be denied access to goods and services in a discriminatory fashion. Veriff's Senior Product Manager, Liisi Soots, explains why removing bias from our products matters and how we go about it.

Chris Hooper, Director of Content at Veriff.com
May 11, 2023
Onboarding
Analysis

In an increasingly online world, it is imperative that the digital tools people use every day actively locate and remove bias. Biased technologies have negative real-world consequences: people may be unfairly denied access to services or face discrimination, and over the years bias has been exposed as embedded in sectors from employment to criminal justice. There are also substantial financial, legal, and reputational risks for businesses found to have bias in their technology.

Bias can be present in the way a person interacts with a system. If a process requires understanding or following complex instructions, it can disadvantage people with limited cognitive abilities. If it requires physical movement, like manipulating a device, it can disadvantage people with limited physical mobility, often the elderly. And if a system only works on new, often the most expensive, technology, it disadvantages people with limited financial means.

Thanks to its usability, capacity to scale, and effectiveness at preventing fraud, remote identity verification (IDV) is increasingly being adopted by businesses for customer onboarding. As an IDV industry leader, Veriff is committed to creating equal access to goods, products, and services for people across the world.

Acknowledging and tackling bias in IDV is essential for creating fair, accurate systems that do not discriminate. To outline how we're committed to tackling bias, we sat down with our Senior Product Manager, Liisi Soots, to hear her views on what bias is, how it can appear in technology, and the actionable steps we take to reduce it.

Can you define the concept of bias and how it arises?

We see bias in everyday life when a person unfairly favors or disfavors a person, idea, object, or other matter, displaying a skewed perspective. It can be influenced by factors such as upbringing, daily environment, social group, or culture. For example, for many people in Western cultures the term ‘wedding dress’ evokes a white dress and veil, but not all cultures share that association.

When talking about bias in machine learning, we have to consider whether systems are biased towards or against people because of factors including age, race, gender, or disability. A system could work well for children but poorly for older people, or vice versa. The existence of bias puts people at real risk of discrimination based on traits they have no control over, so it must be taken seriously.

What are the different types of bias that we see in the IDV space and why are they problematic to our customers?

In addition to the biases of age, race, and gender, in IDV we have to be mindful of specific situations where bias can occur, like what kind of identity document or personal device an individual is using. For example, not everyone can afford the newest high-spec mobile phones that can take a high-resolution image. At Veriff, we already support a wide variety of identity documents and devices, and continue to expand the breadth so that more genuine people can get verified online.

Veriff operates in many different countries and industries, which means our IDV solutions have to work for a wide spectrum of people. As part of our quality assurance processes, we regularly monitor how our solutions perform for different groups of people, for example, whether we perform similarly for men and for women.


What we are trying to mitigate is any difference in performance, for example based on gender: making sure that women get through our IDV flow at the same rate as men, and that we are not disproportionately blocking them or people of different races. The aim is a system that treats everybody equally, no matter where they are from or what they look like.
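
As a concrete illustration, this kind of performance parity can be tracked by comparing approval rates across groups and alerting when the gap grows too large. Below is a minimal Python sketch of that idea; the record fields, groups, and alert threshold are illustrative assumptions, not Veriff's actual schema or internal tooling.

```python
# Minimal sketch of fairness monitoring: per-group pass rates and their gap.
# Field names ("gender", "approved") and the 5% threshold are hypothetical.
from collections import defaultdict

def pass_rates_by_group(records, group_key):
    """Compute the share of approved verifications per demographic group."""
    totals = defaultdict(int)
    passes = defaultdict(int)
    for r in records:
        group = r[group_key]
        totals[group] += 1
        passes[group] += int(r["approved"])
    return {g: passes[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Demographic parity difference: gap between best- and worst-served groups."""
    return max(rates.values()) - min(rates.values())

records = [
    {"gender": "female", "approved": True},
    {"gender": "female", "approved": False},
    {"gender": "male", "approved": True},
    {"gender": "male", "approved": True},
]

rates = pass_rates_by_group(records, "gender")
gap = parity_gap(rates)
if gap > 0.05:  # alert threshold is an illustrative choice
    print(f"Investigate: pass rates {rates} differ by {gap:.1%}")
```

The same comparison can be run for any monitored attribute, such as age band or document country, by changing the group key.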

For companies today, it's vital to be able to onboard genuine customers from around the world. Failure to do so limits a company's ability to scale and expand into new markets, and creates the risk of financial or reputational harm if customers are affected by bias.

How would you say that bias affects machine learning?

Systems, including machine learning models, are built by humans, so the biases humans hold naturally propagate into the systems we build. Machine learning models have to be created using the best practices possible. We train our models on a wide range of data, making sure we gather both regular and edge cases.

If any of these inputs has built-in bias, for example a limited dataset drawn from a narrow population, or one created by individuals with exposure to only a few cultures, there is a risk of introducing bias into the algorithms. Veriff is constantly aware of this risk and consciously mitigates it by monitoring our performance and acting when problems arise.

Bias in machine learning is even more consequential than bias in individual humans, because models make thousands of decisions daily; no single human does. With machine learning, we propagate bias to a much wider context. We don't want our systems to be biased or to discriminate against anyone, so we measure how we perform for different racial groups and genders, and we have to make sure that we perform equally.


What can businesses do to eliminate bias or mitigate the risk of bias?

There are multiple methods for removing bias. The first concerns the data we train with: what data we put into the model and how we source it. It's really important to make sure that the data itself is not biased, that we don't have data only from white men, but from different racial groups, equally distributed.
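
One simple way to approach that equal distribution is to audit group counts in the training set and resample underrepresented groups. The sketch below shows the idea in Python; the dataset, group labels, and upsampling strategy are illustrative assumptions, not Veriff's actual data pipeline.

```python
# Minimal sketch: rebalance a training set so every group is equally represented.
# The "region" attribute and file names are hypothetical placeholders.
import random
from collections import defaultdict

def rebalance_by_group(samples, group_key, seed=0):
    """Upsample each group to the size of the largest one (with replacement)."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for s in samples:
        buckets[s[group_key]].append(s)
    target = max(len(b) for b in buckets.values())
    balanced = []
    for bucket in buckets.values():
        balanced.extend(bucket)
        # Draw extra samples with replacement until the group reaches the target
        balanced.extend(rng.choices(bucket, k=target - len(bucket)))
    return balanced

samples = (
    [{"region": "EU", "image": f"eu_{i}.jpg"} for i in range(8)]
    + [{"region": "APAC", "image": f"apac_{i}.jpg"} for i in range(2)]
)
balanced = rebalance_by_group(samples, "region")
print({g: sum(s["region"] == g for s in balanced) for g in ("EU", "APAC")})
# {'EU': 8, 'APAC': 8}
```

In practice, collecting genuinely new data from underrepresented groups is preferable to resampling, but the audit step, counting what you actually have, is the same.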

It's also important to have a diverse range of professionals, like data scientists and annotators, from different backgrounds working with the data, because they raise different questions and ensure the data is prepared well. Lastly, we can mitigate bias by staying aware of it: measuring how much bias our models exhibit and making changes to keep quality high.

Veriff uses manual document checking alongside machine learning in the IDV process. Does this help remove bias?

Combining the automated and manual flows helps ensure that we provide the best service. If we are not sure about an automated decision, our highly trained in-house specialists can double-check it. If that investigation identifies any errors in the algorithm's decisioning, the outcome is fed back into the system as part of a process of continual improvement.
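
This human-in-the-loop pattern is commonly implemented by routing low-confidence automated decisions to a review queue and logging any disagreements for retraining. Here is a minimal Python sketch of that pattern; the confidence threshold, function names, and stubs are hypothetical, not Veriff's implementation.

```python
# Minimal sketch of human-in-the-loop decisioning: confident automated
# decisions pass through, uncertain ones are escalated to a specialist,
# and corrections are logged so the model can be improved.
REVIEW_THRESHOLD = 0.90  # illustrative cut-off for escalation

feedback_log = []  # disagreements to feed back into model training

def decide(session, model, specialist_review):
    label, confidence = model(session)
    if confidence >= REVIEW_THRESHOLD:
        return label  # automated decision stands
    human_label = specialist_review(session)
    if human_label != label:
        # Record the disagreement for the continual-improvement loop
        feedback_log.append((session, label, human_label))
    return human_label

# Stubs standing in for the real model and review queue
model = lambda s: ("approved", 0.72)
specialist_review = lambda s: "declined"
print(decide({"id": 123}, model, specialist_review))  # -> "declined"
```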

How close is Veriff to achieving “zero bias”?

In mathematical terms, no system can ever have zero bias. The most important thing is that we, as a leading technology company, continually monitor our systems for any evidence of bias. If it is ever found, we have a process to promptly mitigate and rectify the issue. Additionally, new laws are coming into force that regulate machine learning bias and how it is measured.

Get more details

Contact our experts to discover how we’re working to create access to goods and services for genuine customers online.