Forget handheld virtual reality controllers: a smile, frown or clench
will suffice
Facial recognition tech taken to the next level in virtual reality
Date:
February 18, 2022
Source:
University of South Australia
Summary:
An international team of researchers has taken facial recognition
technology to the next level, using a person's expression to
manipulate objects in a virtual reality setting without the use
of a handheld controller or touchpad.
FULL STORY ==========================================================================
Our face can unlock a smartphone, provide access to a secure building
and speed up passport control at airports, verifying our identity for
numerous purposes.
An international team of researchers from Australia, New Zealand and
India has taken facial recognition technology to the next level, using
a person's expression to manipulate objects in a virtual reality setting without the use of a handheld controller or touchpad.
In a world-first study led by the University of Queensland, human-computer
interaction experts used neural processing techniques to capture a
person's smile, frown and clenched jaw, using each expression to
trigger specific actions in virtual reality environments.
One of the researchers involved in the experiment, University of South Australia's Professor Mark Billinghurst, says the system has been designed
to recognise different facial expressions via an EEG headset.
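The release does not spell out the underlying signal processing, but facial
muscle activity such as a jaw clench typically shows up in EEG recordings as
a broadband electromyographic (EMG) artifact, which makes a simple amplitude
detector plausible. The Python sketch below is purely illustrative; the
sampling rate, frequency band and threshold are assumptions, not details
taken from the study.

    # Illustrative sketch only -- not the study's actual pipeline.
    # A jaw clench tends to produce a burst of high-frequency EMG in
    # frontal EEG channels; band-pass the window and threshold its
    # rectified amplitude. FS, band edges and threshold are assumed.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 256  # assumed headset sampling rate in Hz

    def emg_envelope(eeg_window: np.ndarray) -> np.ndarray:
        """Band-pass 20-100 Hz (where muscle activity dominates), then rectify."""
        sos = butter(4, [20.0, 100.0], btype="bandpass", fs=FS, output="sos")
        return np.abs(sosfiltfilt(sos, eeg_window))

    def detect_clench(eeg_window: np.ndarray, threshold_uv: float = 40.0) -> bool:
        """Flag a clench when the mean rectified amplitude exceeds the threshold."""
        return float(emg_envelope(eeg_window).mean()) > threshold_uv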
"A smile was used to trigger the 'move' command; a frown for the 'stop'
command and a clench for the 'action' command, in place of a handheld controller performing these actions," says Prof Billinghurst.
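In software terms, that mapping is essentially a small dispatch table from
the classified expression to a VR command. A minimal Python sketch follows;
the command strings and the send_command call are hypothetical stand-ins,
since the article does not describe the team's actual interface.

    from enum import Enum, auto

    class Expression(Enum):
        SMILE = auto()
        FROWN = auto()
        CLENCH = auto()

    # Mapping as quoted in the article; the command names and the
    # vr_session.send_command call are hypothetical placeholders.
    EXPRESSION_COMMANDS = {
        Expression.SMILE: "move",
        Expression.FROWN: "stop",
        Expression.CLENCH: "action",
    }

    def on_expression(expression: Expression, vr_session) -> None:
        """Translate a classified facial expression into a VR command."""
        vr_session.send_command(EXPRESSION_COMMANDS[expression])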
"Essentially we are capturing common facial expressions such as anger, happiness and surprise and implementing them in a virtual reality
environment." The researchers designed three virtual environments --
happy, neutral and scary -- and measured each person's cognitive and physiological state while they were immersed in each scenario.
By reproducing three universal facial expressions -- a smile, a frown and a
clench -- they explored whether changes in the environment triggered one
of the three expressions, based on emotional and physiological responses.
For example, in the happy environment, users were tasked with moving
through a park to catch butterflies with a net. The user moved when they
smiled and stopped when they frowned.
In the neutral environment, participants were tasked with navigating a
workshop to pick up items strewn throughout. The clenched jaw triggered
an action -- in this case picking up each object -- while the start and
stop movement commands were initiated with a smile and frown.
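As described, the smile and frown latch movement on and off, while the
clench fires a one-shot action. A control loop along those lines might look
like the following sketch; the scene object and its methods are
hypothetical, not part of the published system.

    class AvatarController:
        """Expression-driven movement with a latched state (illustrative only)."""

        def __init__(self, scene):
            self.scene = scene   # hypothetical VR scene object
            self.moving = False

        def on_expression(self, expression: str) -> None:
            if expression == "smile":      # latch: start moving forward
                self.moving = True
            elif expression == "frown":    # unlatch: stop moving
                self.moving = False
            elif expression == "clench":   # one-shot action, e.g. pick up an item
                self.scene.pick_up_nearest()

        def tick(self, dt: float) -> None:
            """Advance the avatar each frame while movement is latched."""
            if self.moving:
                self.scene.move_forward(dt)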
The same facial expressions were employed in the scary environment,
where participants navigated an underground base to shoot zombies.
"Overall, we expected the handheld controllers to perform better as
they are a more intuitive method than facial expressions, however people reported feeling more immersed in the VR experiences controlled by facial expressions." Prof Billinghurst says relying on facial expressions in
a VR setting is hard work for the brain but gives users a more realistic experience.
"Hopefully with some more research we can make it more user-friendly,"
he says.
In addition to providing a novel way to use VR, the technique will also
allow people with disabilities -- including amputees and those with motor
neurone disease -- to interact hands-free in VR, no longer needing to
use controllers designed for fully abled people.
Researchers say the technology may also be used to complement handheld controllers where facial expressions are a more natural form of
interaction.
==========================================================================
Story Source:
Materials provided by University of South Australia. Note: Content may be edited for style and length.
==========================================================================
Journal Reference:
1. Arindam Dey, Amit Barde, Bowen Yuan, Ekansh Sareen, Chelsea Dobbins,
Aaron Goh, Gaurav Gupta, Anubha Gupta, Mark Billinghurst. Effects
of interacting with facial expressions and controllers in
different virtual environments on presence, usability, affect, and
neurophysiological signals. International Journal of Human-Computer
Studies, 2022; 160: 102762. DOI: 10.1016/j.ijhcs.2021.102762
==========================================================================
Link to news story:
https://www.sciencedaily.com/releases/2022/02/220218100717.htm