We’ve got the next generation of artificial intelligence on our hands.
A team of researchers from Oxford University has developed an artificial neural network that can recognise and understand human faces and facial expressions.
This artificial intelligence system can also recognise human speech, and even the sound of your voice.
The team, which includes Oxford PhD researcher Nicholas Wade, created a machine learning system based on a type of neural network known as a convolutional neural network (CNN).
CNNs also underpin the image recognition used in Google DeepMind’s AI systems.
In addition to recognising faces and speech, the system can learn to identify emotions such as happiness and sadness.
“The CNN has the capacity to learn from the world around it and recognise human emotions,” says co-author of the study Dr Nicholas Wade.
“This is an area where the computer is limited in many ways, because it is much more constrained in how it can learn from data.”
CNNs are a type of deep learning system. They can be thought of as stacks of learned filters: early layers scan an image for simple patterns such as edges, and later layers combine those patterns into progressively more complex features. This kind of learning is particularly useful when the data comes from images and video.
For example, a CNN could be trained to distinguish a human face from a face-like object such as a flower or a bird.
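To make the filtering idea at the heart of a CNN concrete, here is a minimal plain-Python sketch of a single convolution step. This is an illustration of the general technique, not the Oxford team’s code: the image, kernel values, and function names are all invented for the example, and in a real CNN the kernels would be learned from data rather than written by hand.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel across the image and
    take a dot product at each position, producing a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    return [
        [
            sum(image[y + i][x + j] * kernel[i][j]
                for i in range(kh) for j in range(kw))
            for x in range(iw - kw + 1)
        ]
        for y in range(ih - kh + 1)
    ]

# Toy 5x5 "image": a bright vertical stripe on a dark background.
image = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# A hand-written vertical-edge filter; a trained network would learn
# many such kernels, and later layers would combine their responses.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

feature_map = conv2d(image, kernel)
# Each row reads [3, 0, -3]: strong responses where the stripe's edges sit.
print(feature_map)
```

A full CNN stacks many such convolutions with nonlinearities between them, which is what lets it build up from edges to eyes, mouths, and eventually whole expressions.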
“These are all the things that we need in order to learn about a human being,” says Wade.
He adds that the system was designed to recognise a core set of human emotions, including anger, disgust, and happiness.
For instance, CNNs could be used to recognise faces showing a slight smile or a look of surprise.
“You would be able to recognise that as an angry face, a smiley face, or an upset face, as you know in psychology,” says Dr Wade.
The new AI system, which was trained on the facial expressions of around 10,000 people, can analyse hundreds of thousands of images of human faces and learn to recognise the emotions behind those expressions.
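A trained classifier of this kind typically outputs one raw score per emotion class, which is converted into probabilities before a label is reported. The article does not describe the team’s actual output stage, so the sketch below is a generic, hypothetical example of that final step, using the emotion categories mentioned in the article.

```python
import math

# Emotion categories mentioned in the article; the real system's
# label set is not specified.
EMOTIONS = ["happiness", "sadness", "anger", "disgust", "surprise"]

def softmax(scores):
    """Convert raw per-class scores into probabilities that sum to 1."""
    # Subtract the maximum score for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_emotion(scores):
    """Map raw network scores to the most likely emotion label."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return EMOTIONS[best], probs[best]

# Hypothetical scores from the network's final layer for one face image.
label, confidence = predict_emotion([2.0, 0.1, -1.0, 0.3, 0.5])
print(label, round(confidence, 2))  # happiness is the top-scoring class
```

The softmax step matters because downstream uses (such as flagging ambiguous expressions) need calibrated probabilities, not just the winning label.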
It is not the first time a CNN has been used to understand emotions.
In 2012, scientists at the University of Southern California used CNNs to recognise emotions in people of colour.
The AI system also has the capability to identify the facial expression of people with epilepsy.
It can track the eye movements of people who are having seizures and can distinguish expressions such as a relaxed smile from the dilated pupils of someone who has epilepsy.
The researchers said that it was the first AI system to understand facial expressions from an image and that it could help in the development of treatments for epilepsy.
“We’re really excited to be part of this exciting future,” says lead author of the new study Dr Jonathan Bae, of Oxford University’s Department of Computer Science and Artificial Intelligence.
Dr Wade added that the system could fully identify and understand only a small subset of human faces, but was still able to extract emotion from the remaining faces.
“It was the most comprehensive picture that we’ve seen of human emotion,” he says.
“What’s really exciting is that it can understand more than just a subset of facial expressions.”
The researchers say that their work could be a boon to the field of artificial neural networks.
They say that the network can also potentially be used in medical applications such as helping to identify individuals with epilepsy, and in facial recognition and emotion recognition in other fields.
“In order to really develop applications for artificial neural nets, we need to start to understand how to combine them with existing systems,” says Bae.
He added that the work could lead to the development, testing, and commercialisation of an AI system that learns to work alongside existing neural networks.
A further development is to use this AI system in other areas.
“Our hope is that this could be integrated into machine learning and could be combined with existing artificial neural net systems to make it a better machine learning framework,” says James Pritchard, Professor of Computer Vision at the Institute of Cognitive Neuroscience at Oxford.
The research was published in the journal Nature Communications.
The article was originally published on New Scientist.