The new digital divide is between people who opt out of algorithms and people who don’t

Every aspect of life can be guided by artificial intelligence algorithms – from choosing what route to take for your morning commute, to deciding whom to take on a date, to complex legal and judicial matters such as predictive policing.

Big tech companies like Google and Facebook use AI to obtain insights on their gargantuan trove of detailed customer data. This allows them to monetize users’ collective preferences through practices such as micro-targeting, a strategy advertisers use to narrowly target specific sets of users.

In parallel, many people now trust platforms and algorithms more than their own governments and civil society. An October 2018 study suggested that people demonstrate “algorithm appreciation,” to the extent that they rely on advice more heavily when they think it comes from an algorithm rather than from a human.

In the past, technology experts have worried about a “digital divide” between those who could access computers and the internet and those who could not. Households with less access to digital technologies are at a disadvantage in their ability to earn money and accumulate skills.

But, as digital devices proliferate, the divide is no longer just about access. How do people deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives?

The savvier users are navigating away from devices and becoming aware of how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions.

Should you stay connected – or unplug? pryzmat/shutterstock.com

The secret sauce behind artificial intelligence

The main reason for the new digital divide, in my opinion as someone who studies information systems, is that so few people understand how algorithms work. For a majority of users, algorithms are seen as a black box.

AI algorithms take in data, fit them to a mathematical model and put out a prediction, ranging from what songs you might enjoy to how many years someone should spend in jail. These models are developed and tweaked based on past data and the success of previous models. Most people – sometimes even the algorithm designers themselves – do not really know what goes on inside the model.
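
As a rough illustration of that pipeline – data in, model fit, prediction out – the sketch below trains a simple model on invented past commute data and asks it for a travel-time prediction. The numbers, features and choice of a linear model are all assumptions made for illustration, not a description of any real system.

```python
# A toy "data in, prediction out" pipeline: fit a model to past
# observations, then predict an outcome for a new case.
# All numbers here are made up for illustration.
from sklearn.linear_model import LinearRegression

# Past data: [departure hour, raining (0/1)] -> commute minutes
X_past = [[7, 0], [8, 1], [9, 0], [8, 0], [7, 1]]
y_past = [28, 45, 25, 33, 38]

model = LinearRegression()
model.fit(X_past, y_past)          # "tweak" the model to fit past data

# Predict a new case: leaving at 8 a.m. in the rain
print(model.predict([[8, 1]]))     # the model's guess, in minutes

# Users only ever see the prediction; the fitted coefficients below
# are the part of the "black box" most people never look at.
print(model.coef_, model.intercept_)
```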

Researchers have long been concerned about algorithmic fairness. For instance, Amazon’s AI-based recruiting tool turned out to penalize female candidates. Amazon’s system had learned to favor implicitly gendered words – words that men are more likely to use in everyday speech, such as “executed” and “captured.”
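
The sketch below is a toy illustration of that failure mode, not Amazon’s actual system: a text classifier trained to imitate past hiring decisions ends up attaching weight to words that merely correlate with who was hired before. The résumés, labels and word weights are all invented.

```python
# Toy illustration of learned word bias (not Amazon's system).
# A classifier trained on past hiring decisions can reward words
# that simply correlate with who was hired before.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed migration and captured market share",
    "captured requirements and executed rollout",
    "organized community outreach and mentored students",
    "mentored new hires and organized training",
]
hired = [1, 1, 0, 0]  # past decisions the model learns to imitate

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# Inspect which words push the hiring score up or down
for word, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{word:12s} {weight:+.2f}")
```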

Other studies have shown that judicial algorithms are racially biased, recommending longer sentences for poor black defendants than for others.

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

Meanwhile, some AI researchers have pushed for algorithms that are fair, accountable and transparent, as well as interpretable, meaning that they should arrive at their decisions through processes that humans can understand and trust.
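
One common notion of interpretability is a model whose decision rules can be read directly. The sketch below, using invented loan-style data, prints the learned rules of a small decision tree in plain if/else form that a person could inspect and contest; the features and labels are assumptions for illustration only.

```python
# An "interpretable" model: its learned rules can be printed and read.
# The loan-style features and labels below are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 0], [40, 1], [35, 1], [22, 0], [50, 1], [30, 0]]  # [age, has_collateral]
y = [0, 1, 1, 0, 1, 0]                                       # loan approved?

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# export_text prints the decision rules as readable if/else conditions
print(export_text(tree, feature_names=["age", "has_collateral"]))
```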

What effect will transparency have? In one study, students were graded by an algorithm and offered different levels of explanation about how their peers’ scores were adjusted to arrive at a final grade. The students who received more transparent explanations actually trusted the algorithm less. This, again, suggests a digital divide: Algorithmic awareness does not lead to more confidence in the system.