University of Lincoln, UK, aims to embed a smart vision system in mobile devices


A new era of technology aims to help blind people "see" through their smartphone or tablet. Specialists in computer vision and machine learning based at the University of Lincoln, UK, funded by a Google Faculty Research Award, are aiming to embed a smart vision system in mobile devices to help people with sight problems navigate unfamiliar indoor environments.

“This project will build on our previous research to create an interface that can be used to help people with visual impairments,” said project lead Dr. Nicola Bellotto, an expert on machine perception and human-centered robotics from Lincoln’s School of Computer Science.


The team plans to use the color and depth sensor technology inside new smartphones and tablets to enable 3D mapping and localization, navigation, and object recognition. They will then develop the best interface to relay that information to users, whether through vibrations, sounds, or spoken words.
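To make that pipeline concrete, here is a minimal Python sketch of how a navigation cue produced by a mapping and localization stage might be relayed through vibration, sound, or spoken words. Everything here is an assumption for illustration: the names `NavigationCue`, `FeedbackMode`, and `relay_cue`, and the mappings from distance to pulse length or heading to pitch, are hypothetical and not taken from the Lincoln project.

```python
from dataclasses import dataclass
from enum import Enum, auto


class FeedbackMode(Enum):
    """Hypothetical feedback channels matching those named in the article."""
    VIBRATION = auto()
    SOUND = auto()
    SPEECH = auto()


@dataclass
class NavigationCue:
    """One instruction from an assumed localization/navigation stage."""
    heading_degrees: float   # direction to turn, relative to the user
    distance_metres: float   # distance to the next waypoint or obstacle
    label: str               # e.g. "doorway", "stairs", "corridor"


def relay_cue(cue: NavigationCue, mode: FeedbackMode) -> str:
    """Translate a navigation cue into the chosen feedback channel.

    Returns a text description of the output; a real device would instead call
    the platform's vibration, audio, or text-to-speech APIs.
    """
    if mode is FeedbackMode.VIBRATION:
        # Illustrative mapping: the closer the target, the longer the pulse.
        pulse_ms = max(50, int(500 - 100 * cue.distance_metres))
        return f"vibrate for {pulse_ms} ms (turn {cue.heading_degrees:+.0f} deg)"
    if mode is FeedbackMode.SOUND:
        # Illustrative mapping: pitch encodes direction (left lower, right higher).
        pitch_hz = 440 + int(2 * cue.heading_degrees)
        return f"play tone at {pitch_hz} Hz"
    return (f"say: '{cue.label} in {cue.distance_metres:.1f} metres, "
            f"turn {cue.heading_degrees:+.0f} degrees'")


if __name__ == "__main__":
    cue = NavigationCue(heading_degrees=-30, distance_metres=2.5, label="doorway")
    for mode in FeedbackMode:
        print(relay_cue(cue, mode))
```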

“There are also existing smartphone apps that can, for example, recognize an object or speak text to describe places. But the sensors embedded in the device are still not fully exploited.”

The research team also includes Dr. Oscar Martinez Mozos, a specialist in machine learning and quality of life technologies, and Dr. Grzegorz Cielniak, who works in mobile robotics and machine perception. Together they aim to develop a system that will recognize visual clues in the environment.

This data would be detected through the device’s camera and used to identify the type of room as the user moves around the space.
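As an illustration of one way such frame-by-frame evidence could be combined, the sketch below takes a majority vote over per-frame room-type guesses and only commits once a label is sufficiently dominant. The function `identify_room` and its threshold are hypothetical, not a description of the Lincoln system.

```python
from collections import Counter
from typing import Iterable, Optional


def identify_room(frame_predictions: Iterable[str],
                  min_confidence: float = 0.6) -> Optional[str]:
    """Aggregate per-frame room-type guesses into a single decision.

    `frame_predictions` would come from an image classifier running on frames
    from the device camera (not shown here); this sketch simply takes a
    majority vote and answers only once one label is dominant enough.
    """
    votes = Counter(frame_predictions)
    if not votes:
        return None
    room, count = votes.most_common(1)[0]
    return room if count / sum(votes.values()) >= min_confidence else None


# Example: guesses gathered while the user moves around the space.
frames = ["kitchen", "kitchen", "corridor", "kitchen", "kitchen"]
print(identify_room(frames))   # -> "kitchen" (4 of 5 frames, 0.8 >= 0.6)
```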

A key aspect of the system will be its capacity to adapt to individual users’ experiences, modifying its guidance as the machine ‘learns’ from its surroundings and from human interaction.
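Purely as an illustration of that idea, and not the project’s actual method, the following sketch keeps a per-user score for each feedback channel and nudges it up or down depending on whether the user successfully followed the guidance; all names and the update rule are assumptions.

```python
class GuidancePreferences:
    """Toy per-user adaptation: one score per feedback channel,
    updated whenever a piece of guidance succeeds or fails."""

    def __init__(self, channels, learning_rate: float = 0.1):
        self.scores = {channel: 0.5 for channel in channels}
        self.lr = learning_rate

    def update(self, channel: str, success: bool) -> None:
        # Move the channel's score towards 1.0 on success, towards 0.0 on failure.
        target = 1.0 if success else 0.0
        self.scores[channel] += self.lr * (target - self.scores[channel])

    def preferred(self) -> str:
        # The channel the user has responded to best so far.
        return max(self.scores, key=self.scores.get)


prefs = GuidancePreferences(["vibration", "sound", "speech"])
prefs.update("speech", success=True)      # speech -> 0.55
prefs.update("vibration", success=False)  # vibration -> 0.45
print(prefs.preferred())                  # -> "speech"
```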

Such a system could be a boon for visually impaired users. “If people were able to use technology embedded in devices such as smartphones, it would not require them to wear extra equipment, which could make them feel self-conscious.”

“We aim to create a system with ‘human-in-the-loop’ that provides good localization relevant to visually impaired users and, most importantly, that understands how people observe and recognize particular features of their environment,” said Bellotto.

 
