New Facebook AI lets blind users “see” photos
Facebook is flooded with photos, videos, and other visual media. Recently, however, the company came across a Cornell University study which found that blind people often feel frustrated and excluded because they cannot engage with that content.
In an attempt to include all of its users, Facebook has created a new feature that lets those with vision problems see what’s going on in pictures.
The solution, called automatic alt text, provides visually impaired and blind people with a text description of a photo using object-recognition technology.
How it’s made
During the 10-month creation process, Facebook engineers ran a series of data and performance analyses and had to decide which features of a photo matter most, since interpretation can be a very personal thing. For instance, people probably care about the subject of a photo, while the background may matter less.
The platform in its current form provides a visual recognition engine that can see inside images and videos to understand what’s in them. For example, the engine would know if an image contains a cat, was taken at the beach, or includes the Eiffel Tower. The platform can also learn new visual concepts within minutes and start detecting them in new photos and videos.
According to Facebook, the object-detection algorithm used in the software can identify each of these concepts with a minimum precision of 0.8 (some as high as 0.99), and the company hopes to push that precision even higher over time.
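To illustrate how a confidence cutoff of this kind might work in practice, the sketch below filters a set of detected concept tags so that only those scoring at or above a threshold are reported. The `filter_concepts` function, the tag names, and applying the 0.8 figure as a per-tag confidence threshold are assumptions for the example, not Facebook's actual implementation.

```python
# Hypothetical sketch: keep only concept tags the model is confident about.
# `raw_predictions` stands in for the output of an object-recognition model;
# the 0.8 cutoff mirrors the minimum precision Facebook cites, applied here
# as a simple per-tag confidence threshold (an assumption for illustration).

CONFIDENCE_THRESHOLD = 0.8

def filter_concepts(raw_predictions: dict[str, float]) -> list[str]:
    """Return concept tags whose confidence meets the threshold, highest first."""
    confident = [
        (tag, score)
        for tag, score in raw_predictions.items()
        if score >= CONFIDENCE_THRESHOLD
    ]
    confident.sort(key=lambda pair: pair[1], reverse=True)
    return [tag for tag, _ in confident]

if __name__ == "__main__":
    predictions = {"tree": 0.97, "sky": 0.93, "sunglasses": 0.85, "dog": 0.41}
    print(filter_concepts(predictions))  # ['tree', 'sky', 'sunglasses']
```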
After the software detects the major objects in a photo, it reports the number of people in it (including their facial expressions), the objects detected, and the scenery (indoor, outdoor, selfie, etc.). It then composes a sentence such as “Image may contain: two people, smiling, sunglasses, sky, tree, outdoor.”
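A minimal sketch of how such a description might be assembled is shown below; the `build_alt_text` function and its input fields are hypothetical and only illustrate joining a people count, expressions, objects, and scenery tags into a single “Image may contain” sentence.

```python
# Hypothetical sketch: turn detected attributes into an alt-text sentence.
# The categories (people count, expressions, objects, scenery) follow the
# article's description; the exact structure is an assumption.

NUMBER_WORDS = {1: "one", 2: "two", 3: "three"}

def build_alt_text(people: int, expressions: list[str],
                   objects: list[str], scenery: list[str]) -> str:
    """Join detected attributes into an 'Image may contain' description."""
    parts: list[str] = []
    if people > 0:
        word = NUMBER_WORDS.get(people, str(people))
        label = "person" if people == 1 else "people"
        parts.append(f"{word} {label}")
    parts.extend(expressions)
    parts.extend(objects)
    parts.extend(scenery)
    return "Image may contain: " + ", ".join(parts) + "."

if __name__ == "__main__":
    print(build_alt_text(2, ["smiling"], ["sunglasses"], ["sky", "tree", "outdoor"]))
    # Image may contain: two people, smiling, sunglasses, sky, tree, outdoor.
```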