Everyday objects like smartphones and laptops could be equipped with a ‘bat-like sense’ of their surroundings, according to a groundbreaking paper published today.

Researchers from the University of Glasgow have found ways to create detailed images through potentially any device equipped with microphones and speakers or radio antennae.

In their report, the computing scientists and physicists outline how the technique could be used to monitor the movement of vulnerable care home patients.

The technique uses a machine-learning algorithm which measures echoes to generate images of the shape, size and layout of the immediate environment, similar to the way bats navigate using echolocation.

Bats, which find their prey using echolocation, inspired the development of the tool. They emit high-frequency sound waves that bounce back to their ears, enabling them to detect objects in total darkness.

The researchers’ algorithm measures the time it takes for blips of sound emitted by speakers or radio waves pulsed from small antennas to bounce around inside an indoor space and return to the sensor.
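The underlying delay-to-distance relationship can be sketched as follows. This is a minimal illustration of the round-trip timing principle, not the team's code; the speed constants are standard physical values, and the function name is ours:

```python
# Convert a measured round-trip echo delay into a distance estimate.
# A pulse travels out to a surface and back, so the one-way distance
# is speed * delay / 2.

SPEED_OF_SOUND = 343.0        # m/s in air at ~20 degrees C
SPEED_OF_LIGHT = 299_792_458  # m/s, relevant for radio-frequency pulses

def echo_distance(delay_s: float, speed: float) -> float:
    """One-way distance to the reflecting surface for a round-trip delay."""
    return speed * delay_s / 2.0

# A 20 ms acoustic echo corresponds to a surface roughly 3.43 m away.
print(round(echo_distance(0.020, SPEED_OF_SOUND), 2))

# Radio waves cross a room in nanoseconds, so RF echoes are timed
# on a far finer scale: a 10 ns delay implies a surface ~1.5 m away.
print(round(echo_distance(10e-9, SPEED_OF_LIGHT), 2))
```

In a real room the sensor records a superposition of many such echoes, one per reflecting surface, which is why a learned model rather than a single formula is needed to untangle them.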

By analysing the results, the algorithm can deduce the shape, size and layout of a room, as well as pick out the presence of objects or people. The results are displayed as a video feed which turns the echo data into three-dimensional vision.

There is one key difference between the team’s achievement and the echolocation of bats. Bats have two ears to help them navigate, while the algorithm is tuned to work with data collected from a single point, like a microphone or a radio antenna.

Dr Alex Turpin and Dr Valentin Kapitany, of the University of Glasgow’s School of Computing Science and School of Physics and Astronomy, are the lead authors of the paper.

Dr Turpin said: “Echolocation in animals is a remarkable ability, and science has managed to recreate the ability to generate three-dimensional images from reflected echoes in a number of different ways, like RADAR and LiDAR.

“What sets this research apart from other systems is that, firstly, it requires data from just a single input – the microphone or the antenna – to create three-dimensional images. Secondly, we believe that the algorithm we’ve developed could turn any device with either of those pieces of kit into an echolocation device.

“That means that the cost of this kind of 3D imaging could be greatly reduced, opening up many new applications. A building could be kept secure without traditional cameras by picking up the signals reflected from an intruder, for example. The same could be done to keep track of the movements of vulnerable patients in nursing homes. We could even see the system being used to track the rise and fall of a patient’s chest in healthcare settings, alerting staff to changes in their breathing.”

The paper outlines how the researchers used the speakers and microphone from a laptop to generate and receive acoustic waves in the kilohertz range. They also used an antenna to do the same with radio-frequency waves in the gigahertz range.

In each case, they collected data about the reflections of the waves taken in a room as a single person moved around. At the same time, they also recorded data about the room using a special camera which uses a process known as time-of-flight to measure the dimensions of the room and provide a low-resolution image.

By combining the echo data from the microphone and the image data from the time-of-flight camera, the team ‘trained’ their machine-learning algorithm over hundreds of repetitions to associate specific delays in the echoes with images.
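The paper's actual network architecture is not reproduced here, but the supervised setup it describes, pairing each recorded echo trace with a ground-truth image and fitting a model to map one to the other, can be illustrated with a toy least-squares stand-in. All data here is simulated and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the training data: each sample pairs a 1-D echo trace
# (what the single microphone records) with a flattened low-resolution
# depth image (what the time-of-flight camera captured at the same moment).
n_samples, trace_len, img_pixels = 500, 256, 16 * 16
true_map = rng.normal(size=(trace_len, img_pixels))   # hidden echo-to-image relation
echoes = rng.normal(size=(n_samples, trace_len))      # simulated echo traces
images = echoes @ true_map + 0.01 * rng.normal(size=(n_samples, img_pixels))

# "Training": fit a linear map from echo traces to images by least squares,
# the simplest possible analogue of the paper's learned model.
learned_map, *_ = np.linalg.lstsq(echoes, images, rcond=None)

# Once trained, a new echo trace alone is enough to predict an image,
# mirroring the inference stage described in the article.
new_echo = rng.normal(size=trace_len)
predicted_image = (new_echo @ learned_map).reshape(16, 16)
print(predicted_image.shape)
```

The real system replaces this linear map with a deep network trained over hundreds of repetitions, but the structure of the problem, echo in, image out, is the same.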

Eventually, the algorithm had learned enough to generate its own highly accurate images of the room and its contents from the echo data alone, giving it the ‘bat-like’ ability to sense its surroundings.

Dr Turpin added: “We’ve now been able to demonstrate the effectiveness of this algorithmic machine-learning technique using light and sound, which is very exciting. It’s clear that there is a lot of potential here for sensing the world in new ways, and we’re keen to continue exploring the possibilities of generating more high-resolution images in the future.”

The team’s paper, titled ‘3D imaging from multipath temporal echoes’, is published in Physical Review Letters.

The research was funded by the Royal Academy of Engineering and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).
