14 April 2022

Technology with Ethics

Facial Recognition

How do you teach your mobile phone to recognize a human face? How does it distinguish a dog from a tennis racket, or an apple from a person? Does it identify a woman and a man, or people of different races, with the same degree of accuracy?

It’s all about having a huge amount of training data: images that contain faces and others that don’t. But who is training the machine to learn? What information are we passing on? Will that information be inclusive and diverse? Can biases cross over into technology? Can algorithms distort information?

When preparing the data that will feed machine learning models, we are usually told to be careful in this phase: otherwise we get garbage going in and, consequently, garbage coming out. I’ll add that, unfortunately, we also have biases going in and biases coming out!
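To make this concrete, here is a minimal sketch (in Python with scikit-learn, using entirely synthetic data) of how a detector trained on a dataset that under-represents one group ends up less accurate for that group. The groups, feature offsets, and sample sizes are invented purely for illustration.

```python
# A minimal sketch of "biases in, biases out": train a face/no-face
# classifier on data that under-represents one group, then measure how
# accuracy differs between groups. All data here is synthetic and the
# group-dependent feature shift is invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n, is_face, group_shift):
    """Fake 'image' feature vectors; faces sit at a group-dependent offset."""
    center = (2.0 + group_shift) if is_face else 0.0
    X = rng.normal(center, 1.5, size=(n, 8))
    y = np.full(n, int(is_face))
    return X, y

def make_group(n, group_shift):
    Xf, yf = make_samples(n // 2, True, group_shift)
    Xn, yn = make_samples(n // 2, False, group_shift)
    return np.vstack([Xf, Xn]), np.concatenate([yf, yn])

# Training set: 90% group A, only 10% group B (the "bias going in").
Xa, ya = make_group(900, group_shift=0.0)
Xb, yb = make_group(100, group_shift=-1.0)   # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Equal-sized test sets reveal the accuracy gap (the "bias going out").
for name, shift in [("group A", 0.0), ("group B", -1.0)]:
    Xt, yt = make_group(1000, shift)
    print(name, "accuracy:", model.score(Xt, yt))
```

Run it and the under-represented group scores noticeably worse, even though nothing in the model mentions groups at all: the imbalance in the data is enough.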

Results of studies evaluating the performance of facial recognition systems, as presented in the Netflix documentary ‘Coded Bias’:

             Darker Males   Darker Females   Lighter Males   Lighter Females
Amazon       98.7%          68.6%            100%            92.9%
Kairos       98.7%          77.5%            100%            93.6%
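Audits like the one behind these numbers boil down to a simple computation: compare a system’s outputs against ground truth, broken down by demographic subgroup. A hedged sketch, with made-up placeholder records rather than the documentary’s data:

```python
# Per-subgroup accuracy audit. The records below are fabricated
# placeholders, not the data from the studies cited above.
import pandas as pd

df = pd.DataFrame({
    "skin":   ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
    "gender": ["male",   "female", "female", "male",    "female",  "male"],
    "true":   [1, 1, 1, 1, 1, 1],   # ground-truth label
    "pred":   [1, 0, 1, 1, 1, 1],   # what the system returned
})

df["correct"] = (df["true"] == df["pred"]).astype(int)
report = df.groupby(["skin", "gender"])["correct"].mean().mul(100).round(1)
print(report)  # accuracy (%) per subgroup; large gaps flag a biased system
```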

Surveillance/Privacy

In London, the police started using facial recognition on people without their permission, which at a legal level is like taking a DNA sample or a fingerprint, something that in principle would only happen if the person in question had a criminal record. They created a database of people’s biometric data.

Added to this is the fact that they use systems that are quite imprecise and that carry a huge racial bias, namely a much higher rate of identifying black individuals as alleged or possible criminals.

There are also several examples of this kind in the USA. We might be led to believe that the rich get the most sophisticated tools first. However, in many places the most punitive and surveillance-oriented tools are applied to the poorest communities first. If they work in these environments, where little respect for people’s rights is expected, they will then be rolled out in other, more affluent communities.

What is happening is that we are increasingly being cornered, whether through the growing control exercised through images and surveillance, or through the big technology companies that closely monitor every like we leave on Instagram, every tweet we share, every comment we make on LinkedIn.

What is the limit to the information we should share? Is there a limit? Can they take the information we consent to, and the information we do not, and use it to manipulate, to discriminate, or to serve the self-interest of large corporations?

How much privacy and individual freedom should a person have? These days we can’t even have a private conversation anymore, because the phone is always listening!!

Recommendation Systems/Advertising

What outfit will I wear? What content will I follow on YouTube? What series will I start watching? Which politician will I vote for?

More and more people, especially young people, are influenced by social networks and by the paths that applications and programs push us down. Recommendation engines and the huge amount of advertising can leave a person very limited in the number and diversity of subjects they become interested in.

In advertising you compete for views, but what you actually compete for is the views of rich people!

And the poor? Those who compete for their views tend to be predatory industries: lenders trying to convince you to take out a loan, for-profit colleges, casinos and bookmakers encouraging you to make your first deposit.

We currently volunteer a lot of data to a very specific set of companies. This data, cross-referenced with all the profiles we have on the Internet, and with our online activity in terms of posts and purchases, even a simple like on a post from a certain political party…all of this results in very detailed information about each individual.

What happens is that the algorithms that work with this data will be able to predict very accurately what decisions we will make in the future! And that should sound a little bit scary!!

One way or another, they’re going to find a gambling addict, and they’re going to show him a 50% discount if he goes to Vegas and makes a $1,000 deposit.

In 2010, during the US elections, Facebook ran an experiment on 61 million people. On their profiles, people either saw a simple message saying it was election day, or saw the same message with a variation in the lower left corner, where images of their friends who had already clicked the “I already voted” button appeared. They showed this message only once!!

They then matched the names of these people against the voter rolls and found that they had moved 300,000 people to the polls!!

The 2016 elections were decided by 100,000 votes. A Facebook message shown just once had moved three times that many people, easily enough to change the course of the elections that year. Suppose one of the candidates had announced during the campaign that he wanted to regulate Facebook more tightly. Facebook could, without us even realizing it, advertise on a large scale so that the votes went to the opposing candidate.
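The arithmetic the last two paragraphs rely on is simple enough to check in a few lines (the figures are the ones quoted above, not independently verified):

```python
# Back-of-the-envelope arithmetic from the two paragraphs above:
# one message moved ~300,000 extra voters out of 61 million users,
# while the 2016 margin was ~100,000 votes.
users_shown  = 61_000_000
extra_voters = 300_000
margin_2016  = 100_000

lift = extra_voters / users_shown
print(f"per-user lift: {lift:.3%}")                            # ~0.49% of those shown
print(f"times the margin: {extra_voters / margin_2016:.0f}x")  # 3x the 2016 margin
```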

Don’t even get me started on recommendation engines. What is free will? Could it be that, having watched one or two MMA videos on YouTube, I now want my entire feed flooded with videos of fights, fighters, martial arts, and other content that may well become more violent? Could it be that, having watched a war movie on Netflix, I now want my suggestions to revolve entirely around that theme?

How many of my decisions are really mine? Or are they the result of days, weeks, months, and years of influence by these companies? Just because I searched for a trip to Barcelona doesn’t mean I now want my browser filled with hotels, restaurants, and entertainment in Barcelona.
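As an illustration of how fast this narrowing can happen, here is a toy simulation of a “rich-get-richer” recommender: it suggests topics in proportion to past clicks, so a single early click can end up dominating the feed. The rule is invented for illustration; real recommenders are far more sophisticated, but the feedback loop has the same shape.

```python
# A toy feedback loop: an invented "recommend proportionally to past
# clicks" rule stands in for a real recommender. One click on a topic
# early on makes that topic dominate every future suggestion.
import random
from collections import Counter

random.seed(0)
topics = ["cooking", "travel", "mma", "music", "science"]
clicks = Counter({t: 1 for t in topics})   # start with no preference

for step in range(50):
    # Recommend in proportion to past clicks (rich-get-richer).
    feed = random.choices(topics, weights=[clicks[t] for t in topics], k=10)
    # The user watches what is shown most -- one MMA video early on...
    watched = "mma" if step == 0 else Counter(feed).most_common(1)[0][0]
    clicks[watched] += 1

print(clicks)  # after 50 steps the feed has collapsed heavily onto one topic
```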

Decision Makers

Which CVs will be chosen? Will I have my credit approved?

Amazon developed a program to automate the screening and selection of CVs, just as it was already successfully automating work inside its warehouses and decisions about prices.

The goal was a system that would receive 100 resumes and say which were the top 5 to hire. But in 2015 they began to realize that their system discriminated by gender.

This is because the models were trained to screen candidates by looking for patterns and keywords in CVs submitted over the previous 10 years. You’ve already figured it out: most of the CVs in the training dataset belonged to men, a gender that clearly dominates the tech industry.

The model discriminated whenever it found words like ‘women’s’, as in ‘captain of the women’s water polo team’, and it wouldn’t even consider a resume that mentioned a women’s college.

Amazon’s team still tried to rework the model and make it less discriminatory, but the machine found other ways to rank candidates, which led the company to shut down the program.
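Nothing about Amazon’s internal system is public, but the failure mode is easy to reproduce in miniature. The sketch below trains a toy bag-of-words screener on fabricated “historical decisions” in which gendered words happen to correlate with rejection, then inspects what the model learned; the CVs and labels are invented purely for illustration.

```python
# A minimal sketch of detecting this bias after the fact: train a toy
# CV screener on invented "historical decisions" where gendered words
# correlate with rejection, then inspect the learned weights. The CVs
# and labels below are fabricated for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer python linux",           # hired
    "captain chess club java backend",          # hired
    "python developer cloud systems",           # hired
    "captain womens water polo team python",    # rejected in the history
    "womens college graduate java developer",   # rejected in the history
    "linux kernel contributor c developer",     # hired
]
hired = [1, 1, 1, 0, 0, 1]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Words with the most negative weights are what the model penalizes.
weights = sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1])
print(weights[:3])   # gendered tokens like 'womens' surface at the top
```

Inspecting learned weights like this is one of the simplest audits available, which makes it all the more telling when biased systems ship without it.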

One day my girlfriend and I were at the mall and saw a marketing campaign encouraging the purchase of the new iPhone XR. We entered the store determined to buy one for each of us; after all, the campaign announced the possibility of paying in installments over 2 years, interest-free.

After choosing the color, we each had to run a credit simulation through the store’s own algorithm. The Fnac employee herself did not know how the program worked: she simply collected our data individually, entered it into the program, and waited for the algorithm’s response authorizing or declining the credit.

 

                 Teresa         Tiago
Gender           Female         Male
Age              27             28
Salary           1900           1150
Contract type    Permanent      1-year contract

Anyone with a minimal understanding of how credit works can guess who got the credit and who didn’t, right? Teresa is younger, earns more, and has a permanent contract. So who actually had the credit approved?

Well, it was me!!

Because I have something very important in my favor: I’m a man. And the machine clearly considered that being a man would magically give me a greater ability to repay the loan.

Purposely or not, the algorithm is running with a bias that a society fighting for equal rights regardless of race, gender, and religion cannot accept.

Is this a case where the biases of the person who built the machine learning model passed into their program? We can accept that men have a much longer history of borrowing, going back to a past when a low percentage of women worked and borrowed money from banks, and that it is the use of this data that fooled the system.
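One basic audit that would have caught this behavior is a counterfactual check: score the same applicant twice, changing only the gender field, and flag the model if the decision flips. Below is a hedged sketch; `credit_model` and `toy_model` are hypothetical stand-ins, since the store’s actual scoring program is opaque.

```python
# A counterfactual fairness check: score the same applicant twice,
# changing only the gender field, and flag the model if the decision
# flips. `credit_model` is a hypothetical stand-in for the store's
# opaque scoring program.
def audit_gender_flip(credit_model, applicant):
    flipped = dict(applicant)
    flipped["gender"] = "male" if applicant["gender"] == "female" else "female"
    a, b = credit_model(applicant), credit_model(flipped)
    if a != b:
        print(f"BIAS: identical profile approved={b} as {flipped['gender']}, "
              f"approved={a} as {applicant['gender']}")
    return a == b

# Toy biased model, invented to mirror the table above.
def toy_model(p):
    return p["gender"] == "male"   # approves men regardless of salary/contract

teresa = {"gender": "female", "age": 27, "salary": 1900, "contract": "permanent"}
audit_gender_flip(toy_model, teresa)   # prints a BIAS warning
```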

Nonetheless, we need to make a really big effort to avoid this bias and to train our machine learning models better. The technology is evolving right in front of our eyes, and it has huge potential and many advantages, but we have to be very responsible in using it: we must respect people’s privacy, we must be ethical at all times, and we must give the same opportunities to everyone, no matter whether the person is a man or a woman, black, white, yellow or pink, Catholic, Muslim or Buddhist, poor or rich, straight or gay.

Until the next article!

Tiago Valente
