OrCam MyEye Helps Patients with Low Vision Read Text and Recognize Objects in Everyday Life

Approximately 2% of Americans have a visual disability (vision that cannot be corrected even with the strongest prescription), and in developing countries, where infectious disease or untreated cataracts are more common, the percentage is often higher. Many different diseases and conditions can cause low vision, including age-related macular degeneration, diabetic retinopathy, and cone dystrophy (a genetic mutation affecting the cone cells of the retina).

People with low vision find everyday activities more challenging. They may not be able to decipher small type, especially text on busy or colored backgrounds; see a plastic toy or other trip hazard left lying on the sidewalk; distinguish faces from more than a few feet away; or read street signs or the route number on a bus to help them get around town.

Increasingly, however, technologies and devices are making their way to market to help visually impaired individuals do all the little things that fully sighted people take for granted, according to experts at the University of Michigan (UM) Kellogg Eye Center, a facility with 100 clinical and research faculty that in 2015 alone handled more than 163,000 patient visits and performed nearly 7,700 surgical procedures.

The OrCam MyEye assistive technology device mounts on the frame of a pair of eyeglasses and connects to a pocket-size computer. The user aims the device at something to be read, such as a street sign, newspaper, or book, and the device’s camera reads the type aloud. If a user inputs images of supermarket products or human faces, the device can also announce the identity of the product or person.
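
To make the text-reading feature concrete, the minimal sketch below strings together the same three stages the description implies: capture a frame, run optical character recognition, and speak the result aloud. This is not OrCam’s proprietary software; it is a stand-in built from off-the-shelf open-source components (OpenCV for capture, the Tesseract engine via pytesseract for OCR, and pyttsx3 for offline text-to-speech).

# A hedged sketch, not OrCam’s software: read visible text aloud using
# open-source building blocks. Assumes a webcam at index 0 stands in for
# the glasses-mounted camera.

import cv2          # camera capture and image preprocessing
import pytesseract  # Python wrapper for the Tesseract OCR engine
import pyttsx3      # offline text-to-speech

def read_text_aloud():
    camera = cv2.VideoCapture(0)
    ok, frame = camera.read()
    camera.release()
    if not ok:
        raise RuntimeError("could not capture a frame")

    # Grayscale plus Otsu thresholding helps OCR cope with busy or
    # colored backgrounds, one of the problems noted above.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    text = pytesseract.image_to_string(binary).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)  # speak the recognized text
        engine.runAndWait()

if __name__ == "__main__":
    read_text_aloud()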

Figure 4: With OrCam, users can also upload images of people or products to be recognized, and the system will then identify those products or people. (Photo courtesy of OrCam.)

eSight

Figure 5: The eSight system includes a headset (inset), which fits over a special prescription lens frame, and a controller. A camera in the headset sends a video stream to the controller, where it is processed, enhanced, and then transmitted to LED screens in front of the user’s eyes. With this system, the user can customize zoom, contrast, color, and brightness settings to improve overall vision, whether it’s watching a movie at a theater or reading a book. The headset can flip up and out of the way when users prefer to use their peripheral vision. (Images courtesy of eSight.)

eSight is a wearable headset that captures the user’s field of view, processes it, and almost instantly displays it on LED screens in front of the user’s eyes. “I know people who are wearing the headset all the time, because the glasses are able to tilt up and down. That means they can tilt the glasses up and out of the way when they want to use their own peripheral vision, and then bring the glasses back down to read addresses or street signs or to identify faces,” Wicker says, noting that the Kellogg Eye Center will soon begin clinical trials of eSight.
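
As a rough illustration of the per-frame processing eSight describes, the sketch below applies user-adjustable zoom, contrast, and brightness to live camera frames with OpenCV. The function name, parameter defaults, and webcam stand-in are all illustrative assumptions; eSight’s actual pipeline is proprietary.

# A hedged sketch of eSight-style frame enhancement using OpenCV; the
# zoom, contrast, and brightness defaults are arbitrary illustrative values.

import cv2

def enhance_frame(frame, zoom=2.0, contrast=1.5, brightness=30):
    """Return a zoomed, contrast- and brightness-adjusted copy of a BGR frame."""
    h, w = frame.shape[:2]

    # Digital zoom: crop the central 1/zoom of the frame, then scale it
    # back up to full size.
    ch, cw = int(h / zoom), int(w / zoom)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    zoomed = cv2.resize(frame[y0:y0 + ch, x0:x0 + cw], (w, h),
                        interpolation=cv2.INTER_LINEAR)

    # Linear contrast (alpha) and brightness (beta) adjustment.
    return cv2.convertScaleAbs(zoomed, alpha=contrast, beta=brightness)

if __name__ == "__main__":
    # A webcam window stands in for the headset’s camera-to-LED-screen path.
    cam = cv2.VideoCapture(0)
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        cv2.imshow("enhanced", enhance_frame(frame))
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cam.release()
    cv2.destroyAllWindows()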

Argus II Retinal Prosthesis System

Figure 6: Second Sight’s Argus II Retinal Prosthesis System consists of (a) a retinal implant and (b) a glasses-mounted camera that sends an image to a computer, which processes the information and transmits it wirelessly to the implant. The implant relays the information to retinal cells, and the information ultimately passes along to the brain. (Photo courtesy of Second Sight Medical Products, Inc.)

The Argus II Retinal Prosthesis System is a surgically implanted retinal prosthesis made by Second Sight, of Sylmar, California. Designed for individuals who have severe to profound retinitis pigmentosa, the device processes an image from a glasses-mounted camera and sends the information wirelessly to the retinal implant. The implant relays the information to retinal cells, and the information ultimately passes along to the brain. “The implant doesn’t provide vision like a sighted person has, so it won’t replace a cane or a guide dog,” explains Wicker. “With rehabilitation, however, patients can do things like making out different shapes and objects so that they can avoid obstacles while they’re walking.”
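
Under simplifying assumptions, the sketch below shows the coarse image reduction such a system requires: collapsing a camera frame to one brightness level per electrode of the Argus II’s 60-electrode (6 × 10) array. The real system’s image processing and stimulation encoding are Second Sight’s proprietary designs; only the downsampling idea is illustrated here.

# Illustration only, not Second Sight’s encoding: collapse a camera frame
# to one 0–255 brightness level per electrode of a 6 x 10 array.

import cv2
import numpy as np

ROWS, COLS = 6, 10  # the Argus II electrode grid has 60 electrodes

def frame_to_electrode_levels(frame):
    """Map a BGR frame to a ROWS x COLS array of brightness levels."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # INTER_AREA averages each image region down to a single value,
    # one per electrode.
    return cv2.resize(gray, (COLS, ROWS), interpolation=cv2.INTER_AREA)

if __name__ == "__main__":
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if ok:
        print(frame_to_electrode_levels(frame))  # 6 rows x 10 columns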

Smart Cane

The smart cane is part of a larger University of Michigan program that promotes projects for the low-vision population, according to program coordinator Lauro Ojeda, an assistant research scientist in the UM Department of Mechanical Engineering. “The idea for the smart cane is to equip it with a three-dimensional camera that can provide range feedback from different angles and signal to the user in specific ways, depending on where it identifies an obstacle,” he explains. One issue the team has yet to solve is how best to communicate with patients. “These independent 3-D range sensors can take single shots 20–30 times per second and provide that information in a digital format, but how do you convey that information to the patient through vibration, sound, or electronic stimulus so patients can understand it?” he asks. “That’s where we’re focusing a lot of attention now.”
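
One hypothetical answer to that open question, sketched below, splits each depth frame into left, center, and right zones and maps the nearest obstacle in each zone to a vibration intensity between 0 and 1. Everything here (the zone layout, the 2-m alert range, the simulated depth data) is an illustrative assumption, not the UM team’s design.

# A hypothetical depth-to-haptics mapping, not the UM team’s design.
# Depth frames are assumed to arrive as 2-D NumPy arrays of ranges in
# meters, 20–30 times per second; vibration is simulated with print().

import numpy as np

ZONES = ("left", "center", "right")
ALERT_RANGE_M = 2.0  # assumed threshold: ignore obstacles farther than this

def depth_to_vibration(depth_frame):
    """Map the nearest obstacle in each horizontal zone to a 0.0–1.0 intensity."""
    h, w = depth_frame.shape
    cues = {}
    for i, zone in enumerate(ZONES):
        strip = depth_frame[:, i * w // 3:(i + 1) * w // 3]
        nearest = float(np.nanmin(strip))  # closest range reading in the zone
        if nearest < ALERT_RANGE_M:
            cues[zone] = 1.0 - nearest / ALERT_RANGE_M  # closer -> stronger
        else:
            cues[zone] = 0.0
    return cues

if __name__ == "__main__":
    # Simulate one sensor shot: a 240 x 320 depth frame with an obstacle
    # 0.8 m away on the user’s right.
    frame = np.full((240, 320), 5.0)
    frame[:, 220:] = 0.8
    for zone, strength in depth_to_vibration(frame).items():
        print(f"{zone}: vibration {strength:.2f}")

Scaling intensity inversely with distance is one simple design choice; the team’s question of which channel (vibration, sound, or electronic stimulus) best carries such cues remains open.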

Figure 7: Lauro Ojeda (left), who coordinates low-vision projects at the University of Michigan, and student Tim Wesley use a Google Tango device and its 3-D camera to get real-time range measurements of the surrounding environment. (Photo courtesy of Lauro Ojeda.)

Currently, the program includes three teams, each with six to seven students from different engineering disciplines and from other fields, such as computer science. Each team takes on a separate project. “With all of these projects, we hope to make something beneficial for the patients and also give the students a real-life problem to solve,” Ojeda says.

Visual disability is a field ripe for innovation, in part because it doesn’t draw the kinds of research funding that more common health issues do. In addition, patients often have different needs depending on the specific cause of their vision problem. “The problem may not be huge when compared to something like cancer or diabetes,” Ojeda remarks, “but we still have 1–1.5 million Americans who are legally blind [with vision that cannot be corrected to better than 20/200]. This could be one of my kids. This could be my parent. Somebody has to work on low-vision technology not from the economic point of view but from the social point of view, and that’s what we’re doing.”