A recent paper published in Translational Vision Science & Technology argues that the type of testing used to demonstrate the effectiveness of prosthetic vision devices is inadequate.1 Eli Peli, OD, of the Schepens Eye Research Institute at Massachusetts Eye & Ear, suggests this system, which is based on clinical vision tests designed to measure functioning visual parameters, is ill-suited to determining whether prosthetic devices actually restore sight.

Dr. Peli identifies two main problems that prosthetic vision device testing tends to face: 1) nuisance variables provide “spurious cues that can be learned in repeated training, which is common in prosthetic vision, and thus defeat the purpose of the test,” and 2) even properly performed tests may not measure the correct factors, resulting in incorrect interpretation of results. He also notes that confounding factors in how the tests are administered present a further limitation of the way these devices are currently evaluated.
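The first problem can be made concrete with a minimal simulation. This sketch is not from Dr. Peli's paper; the cue, its reliability, and all names are illustrative. It models a four-alternative forced-choice test in which a nuisance variable (say, a device sound or timing artifact) happens to correlate with the correct answer. A simulated subject with no vision at all, who has merely learned the cue through repeated training, scores well above the 25% chance level, defeating the purpose of the test.

```python
import random

random.seed(0)
ALTERNATIVES = ["up", "down", "left", "right"]  # 4AFC: chance = 25%

def run_trials(n, cue_reliability):
    """Simulate n forced-choice trials for a subject with NO vision.

    On each trial a nuisance cue matches the correct answer with
    probability `cue_reliability`; the subject simply follows the cue,
    a strategy learnable through the repeated training common in
    prosthetic vision testing."""
    correct = 0
    for _ in range(n):
        answer = random.choice(ALTERNATIVES)
        if random.random() < cue_reliability:
            cue = answer                          # cue leaks the answer
        else:
            cue = random.choice(ALTERNATIVES)     # uninformative this trial
        response = cue
        correct += (response == answer)
    return correct / n

print("no usable cue:", run_trials(10_000, 0.0))  # near chance, ~25%
print("learned cue:  ", run_trials(10_000, 0.8))  # far above chance
```

The point of the sketch is that above-chance performance alone cannot distinguish restored vision from a learned nonvisual strategy; only a task design that removes or controls such cues can.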

“If a patient who is known to be a dichromat passes a color vision test using a device, does that mean that the device restored normal trichromatic vision?” Dr. Peli asks in his paper. “The answer is clearly no. Similarly, passing a vision test with a prosthetic vision system does not prove that the type of vision assumed by the test actually exists.”

To illustrate, he offers the example of the Ishihara Color Vision Test. “A dichromat wearing tinted glasses may pass the test but does not have his/her color vision restored to normal trichromatic vision,” he explains. “In fact, the user’s color vision is neither restored nor improved.”

Another example he cites is head tracing. The limited field of view of retinal implant systems (typically about 20°, dictated by the retinal device’s size) is compounded by low-resolution electrode displays, forcing users to rely on the ineffective method of head tracing to locate objects of interest.

Dr. Peli notes that research currently under review found that “all eight studies of retinal prostheses that evaluated the orientation of a spatial grating and applied multiple-alternative forced-choice (MAFC) testing were flawed.”2 Subjects performing head tracing relied solely on integrated light perception from a single sensor, rather than on spatial vision, to discriminate grating orientation. In this way, head tracing “masqueraded as improved vision,” Dr. Peli writes.
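A minimal simulation can show how this works. The sketch below is not from either paper; the grating sizes, sensor aperture, and threshold are all illustrative. A single non-imaging light sensor is swept horizontally across a square-wave grating, as a head movement would sweep a one-pixel "eye." The sensor's reading fluctuates over time when the bars cross the sweep path and stays flat when they run along it, so grating orientation can be reported correctly with no spatial vision whatsoever.

```python
import numpy as np

def grating(orientation, size=200, period=40):
    """Square-wave grating: 'vertical' bars vary along x, 'horizontal' along y."""
    stripes = (np.arange(size) // (period // 2)) % 2  # alternating 0/1 bars
    if orientation == "vertical":
        return np.tile(stripes, (size, 1))            # columns alternate
    return np.tile(stripes[:, None], (1, size))       # rows alternate

def single_sensor_sweep(image, row=100):
    """Luminance seen by one non-imaging sensor during a horizontal head sweep.

    The sensor integrates all light in a small aperture, modeled here as the
    mean luminance of a patch centered on each sweep position."""
    readings = []
    for x in range(0, image.shape[1], 2):             # sweep left to right
        patch = image[max(0, row - 5):row + 5, max(0, x - 5):x + 5]
        readings.append(patch.mean())
    return np.array(readings)

def guess_orientation(readings, threshold=0.01):
    """If luminance fluctuated during the sweep, the bars must cross the sweep
    path, so report 'vertical'; a flat signal means 'horizontal'."""
    return "vertical" if readings.var() > threshold else "horizontal"

for true_orientation in ("vertical", "horizontal"):
    readings = single_sensor_sweep(grating(true_orientation))
    print(true_orientation, "->", guess_orientation(readings))
```

The simulated observer passes the orientation task every time, yet resolves no spatial detail at all; temporal integration from a single sensor is doing the work that the test attributes to spatial vision.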

He points out that the three approved retinal implants are no longer being implanted. “It’s apparent that they don’t provide the expected or desired level of vision restoration,” he says. “Yet they were approved by the regulatory agencies based on clinical trials that used what this paper argues are inadequate tests.”

Dr. Peli concludes that the clinical MAFC tests (where one presentation may result in multiple possible responses) commonly used to evaluate prosthetic vision devices are inadequate. “They are considered to be superior in the sense that they are free of bias,” he says. “However, typical MAFC tests such as tumbling E are not free of bias. Multi-interval testing with multiple stimuli may not be free of bias either.” He adds that even an ideal psychophysical testing paradigm, perfectly implemented, “will not work if the task assigned to the subject does not reveal if the subject has or (re)gained vision.”

Disclosures: Dr. Peli has two patents and a patent application on image processing for visual prostheses, all of which are assigned to the Schepens Eye Research Institute.

1. Peli E. Testing vision is not testing for vision. Transl Vis Sci Technol. 2020;9(13):32.
2. Hallum LE, Dakin SC. Retinal electrode array implantation to treat retinitis pigmentosa: systematic review. Transl Vis Sci Technol. Under revision.