Humans and machines can improve accuracy when they work together

Whether artificial intelligence systems steal people’s jobs or create new work opportunities, people will need to work alongside them.

In my research I use sensors and computers to monitor how the brain itself processes decision-making. Together with another brain-computer interface scholar, Riccardo Poli, I looked at one example of possible human-machine collaboration: situations in which police and security staff are asked to keep a lookout for a particular person, or people, in a crowded environment, such as an airport.

It seems like a straightforward request, but it is actually really hard to do. A security officer has to monitor several surveillance cameras for many hours every day, looking for suspects. Repetitive tasks like these are prone to human error.

Some people suggest these tasks should be automated, as machines do not get bored, tired or distracted over time. However, computer vision algorithms tasked with recognizing faces can also make mistakes. As my research has found, machines and humans working together could do much better than either alone.

Two types of artificial intelligence

We have developed two AI systems that could help identify target faces in crowded scenes. The first is a facial recognition algorithm. It analyzes images from a security camera, identifies which parts of the images are faces and compares those faces with an image of the person being sought. When it identifies a match, the algorithm also reports how sure it is of that decision.
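
The sketch below illustrates that general pipeline using the open-source Python face_recognition library. It is purely an illustration, not the system used in the study; the file names, matching threshold and confidence formula are assumptions.

```python
# Illustrative sketch only -- not the system used in the study.
import face_recognition

# Encode the face of the person being sought (placeholder file name).
target_image = face_recognition.load_image_file("target_person.jpg")
target_encoding = face_recognition.face_encodings(target_image)[0]

# Load one surveillance frame and locate every face in it (placeholder file name).
frame = face_recognition.load_image_file("camera_frame.jpg")
locations = face_recognition.face_locations(frame)
encodings = face_recognition.face_encodings(frame, locations)

# Compare each detected face with the target and report a rough confidence score.
for location, encoding in zip(locations, encodings):
    distance = face_recognition.face_distance([target_encoding], encoding)[0]
    confidence = 1.0 - distance          # crude proxy: smaller distance = surer match
    if distance < 0.6:                   # 0.6 is the library's usual default tolerance
        print(f"Possible match at {location}, confidence {confidence:.2f}")
```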

The second system is a brain-computer interface that uses sensors on a person’s scalp to detect neural activity related to how confident the person is in a decision.
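
As a rough illustration of how such a confidence signal might be extracted, the sketch below averages EEG activity over a post-stimulus time window and maps it to a score between 0 and 1. The sampling rate, time window and scaling are assumptions made for demonstration, not the pipeline used in the study.

```python
import numpy as np

SAMPLING_RATE = 256                        # Hz -- assumed for this example
WINDOW = slice(int(0.3 * SAMPLING_RATE),   # 300-600 ms after the image appears
               int(0.6 * SAMPLING_RATE))   # (assumed window of interest)

def confidence_from_epoch(epoch: np.ndarray) -> float:
    """epoch: EEG data of shape (channels, samples) recorded around one decision."""
    evoked = epoch.mean(axis=0)            # average over channels
    amplitude = evoked[WINDOW].mean()      # mean amplitude in the chosen window
    # Map the amplitude to a 0-1 "confidence" score; the scaling here is arbitrary.
    return float(1.0 / (1.0 + np.exp(-amplitude / 5.0)))

# Example with simulated data: 32 channels, one second of signal.
rng = np.random.default_rng(0)
fake_epoch = rng.normal(0.0, 10.0, size=(32, SAMPLING_RATE))
print(confidence_from_epoch(fake_epoch))
```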

People and computers were asked to look at images like this briefly and then identify whether they had seen a particular face. ChokePoint data, NICTA

We conducted an experiment with 10 human participants, showing each of them 288 pictures of crowded indoor environments. Each picture was shown for only 300 milliseconds – about as long as it takes an eye to blink – after which the participant was asked to decide whether or not they had seen a particular person’s face. On average, they correctly identified whether the target was present in 72 percent of the images.

When our entirely autonomous AI system performed the same task, it correctly classified 84 percent of the images.
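
One simple way the two kinds of judgment could be combined – shown here only as an illustration, not as the exact method used in the study – is a confidence-weighted vote, in which each decision counts in proportion to how sure its maker is.

```python
# Illustrative confidence-weighted fusion of one human and one machine decision.
def fuse_decisions(human_decision: int, human_confidence: float,
                   machine_decision: int, machine_confidence: float) -> int:
    """Decisions are +1 ("target present") or -1 ("target absent");
    confidences lie in [0, 1]. The combined answer is the sign of the
    confidence-weighted sum."""
    score = human_confidence * human_decision + machine_confidence * machine_decision
    return 1 if score >= 0 else -1

# An unsure human says "absent" (confidence 0.55) while a confident
# algorithm says "present" (confidence 0.90): the pair answers "present".
print(fuse_decisions(-1, 0.55, +1, 0.90))   # -> 1
```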