Check out Dr. Michael Schlitt’s newest blog post
There’s a joke by the comedian Brian Regan about people who only wear glasses when they’re driving. “Why don’t you just get a car window with lenses in it?” he says. While this sounds goofy, something like it may soon become reality. Scientists at UC Berkeley are developing computer algorithms that compensate for an individual’s visual impairment, creating vision-correcting displays that allow users to see text and images clearly on a computer screen without eyeglasses or contact lenses. This technology could help the hundreds of millions of people who currently need glasses to use their smartphones, tablets, and computers. One common problem, for example, is presbyopia, a type of farsightedness in which the ability to focus on nearby objects gradually diminishes as the aging eye’s lens loses elasticity.
More importantly, the displays being developed could eventually aid people with more complex visual problems, known as high-order aberrations, which cannot be corrected by eyeglasses. In a world where computer and smartphone screens are a daily fact of life, many of us take for granted the ability to use such devices without any visual aid. Many people with high-order aberrations have irregularly shaped corneas, which makes it very difficult to fit them with contact lenses. In some instances this can be a barrier to holding certain jobs or functioning in society, since so much work now involves looking at a screen. If successful, this research could transform the lives of such people.
UC Berkeley researchers teamed up with Gordon Wetzstein and Ramesh Raskar, colleagues at MIT, to develop the latest prototype of this vision-correcting display. The setup adds a printed pinhole screen, sandwiched between two layers of clear plastic, to an iPod display to enhance image sharpness. The research team will present this computational light field display in August at the International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH) in Vancouver. According to the study’s lead author, Fu-Chung Huang, the project is significant because, instead of relying on optics to correct vision, it is one of the first attempts to use computation: a very different, non-intrusive class of correction. The algorithm adjusts the intensity of each direction of light emanating from each pixel of the image, based on the user’s specific visual impairment. The image is first pre-distorted in a process known as deconvolution; the adjusted light then passes through the pinhole array in such a way that the user perceives a sharp image.
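To give a rough sense of the prefiltering idea, here is a toy sketch in Python. It models the defocused eye as a simple Gaussian blur and uses classic Wiener deconvolution to pre-distort an image so that, after the blur, it comes out close to the original. Every name and parameter here is an illustrative invention; this is not the Berkeley team’s actual algorithm, which also exploits the pinhole light-field optics described above.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """A Gaussian point-spread function standing in for a defocused eye."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    """Zero-pad the PSF to the image size and shift its center to (0, 0)
    so that frequency-domain multiplication matches circular convolution."""
    big = np.zeros(shape)
    big[:psf.shape[0], :psf.shape[1]] = psf
    big = np.roll(big, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(big)

def blur(image, otf):
    """Simulate the defocused eye: convolve the displayed image with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

def wiener_prefilter(image, otf, k=0.01):
    """Pre-distort the image (Wiener deconvolution) so that, after the eye
    blurs it, the perceived result is close to the original. The small
    constant k regularizes frequencies the blur nearly destroys."""
    filt = np.conj(otf) / (np.abs(otf) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * filt))

# Demo: a smooth test pattern viewed directly vs. through the prefilter.
n = 64
x = np.arange(n)
target = 0.5 + 0.4 * np.outer(np.sin(2 * np.pi * 3 * x / n),
                              np.sin(2 * np.pi * 4 * x / n))
otf = psf_to_otf(gaussian_psf(15, sigma=2.0), target.shape)

seen_plain = blur(target, otf)                         # what the blurry eye sees
seen_corrected = blur(wiener_prefilter(target, otf), otf)

err_plain = np.mean((seen_plain - target) ** 2)
err_corrected = np.mean((seen_corrected - target) ** 2)
# The corrected view should be much closer to the intended image.
assert err_corrected < err_plain
```

In practice a simple prefilter like this boosts high frequencies at the cost of contrast, which is exactly the limitation of earlier vision-correcting displays; the combination with light-field optics is what lets the new prototype keep both sharpness and contrast.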
In the experiment, researchers displayed images to a camera set to simulate a farsighted person; viewed directly, the images appeared blurred. When shown on the new prototype display, the same images appeared sharp through the camera lens. This latest approach, which combines light field display optics with a new algorithm, improves upon earlier vision-correcting displays that produced low-contrast images. The research prototype could easily be developed into a thin screen protector, and continued improvements in eye-tracking technology would make it easier for the displays to adapt to the position of the user’s head.