Einstein or Marilyn? How this optical illusion hides two faces in one portrait

An optical illusion created by an MIT research group shows Marilyn Monroe from far away but appears to change into Albert Einstein up close. The illusion offers clues about how our brains process the details of images and scenes.

April 3, 2015

You’re probably familiar with classic optical illusions, such as this image titled “My wife and my mother-in-law,” drawn in 1915 by cartoonist W. E. Hill. Depending on how you interpret certain features, the figure appears as either an old woman looking to the side or a young woman looking over her shoulder.

A recent optical illusion created by an MIT research group takes things several steps further, and provides insight into how the human brain processes images.

“Marilyn Einstein,” shown in the video above, appears first as a small, blurry picture of actress Marilyn Monroe. But then, as the image zooms in, it appears to transform into a photo of physicist Albert Einstein. Monroe’s features are blurry and indistinct, while Einstein’s are finely drawn. Those fine details are visible only at closer distances, so the image appears to change as it zooms in (or as the viewer moves closer to it).
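The basic recipe behind a hybrid image like this one can be sketched in a few lines of code. The sketch below is only an illustration of the idea, not the MIT team’s own method or software: it assumes two same-sized grayscale portraits with hypothetical file names, and it uses a simple Gaussian blur as the low-pass filter, with an arbitrarily chosen blur radius.

```python
# Illustrative sketch of a hybrid image, not the MIT team's actual code.
# Assumes two grayscale portraits of the same size; the file names and the
# blur radius below are placeholders.
import numpy as np
from PIL import Image, ImageFilter

SIGMA = 6  # blur radius; decides which details count as "fine"

einstein = Image.open("einstein.png").convert("L")
marilyn = Image.open("marilyn.png").convert("L")

# Low-pass: keep only Monroe's coarse shapes (what survives blur or distance).
low_pass = np.asarray(marilyn.filter(ImageFilter.GaussianBlur(SIGMA)), dtype=float)

# High-pass: Einstein minus his own blurred copy leaves only the fine lines.
einstein_arr = np.asarray(einstein, dtype=float)
einstein_blur = np.asarray(einstein.filter(ImageFilter.GaussianBlur(SIGMA)), dtype=float)
high_pass = einstein_arr - einstein_blur

# Add the two bands: up close the high-pass Einstein lines dominate what you
# perceive; from far away only the low-pass Monroe remains visible.
hybrid = np.clip(low_pass + high_pass, 0, 255).astype(np.uint8)
Image.fromarray(hybrid).save("marilyn_einstein.png")
```

Viewed at full size, the sharp Einstein lines dominate; shrink the result in any image viewer, or step back from the screen, and the blurred Monroe takes over.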


The optical illusion can highlight vision problems – people who need glasses are often unable to pick out the fine details of Mr. Einstein’s face and are left seeing only an image of Ms. Monroe – but it also points to a quirk in how the human brain processes visual information.

The MIT team that created “Marilyn Einstein” performed a series of experiments in which they showed participants the hybrid image for different lengths of time. When people saw the picture in just a brief flash of 30 milliseconds, they could only see Monroe – their brains simply didn’t have time to pick out the fine details of Einstein’s face, no matter how close to or far from the image they were. When they saw the picture for 150 milliseconds, they saw Einstein but not Monroe.

The experiments suggest that our brains prioritize different details within an image or scene. If we see a picture only very briefly, we’re left with “low spatial resolution” information – the overall shape of what we saw. If we see that same picture for a slightly longer period of time, we’re able to pick up on finer details. The MIT team believes our brain processes low spatial resolution information first, before it moves on to details.
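One rough way to see what that “low spatial resolution” information looks like is to blur the hybrid image progressively, which stands in, very loosely, for moving farther away or for catching only a brief glimpse. This is again just an illustrative sketch; the file name refers to the hypothetical output of the earlier example.

```python
# Rough preview of the "far away / brief glimpse" percept: blurring the hybrid
# removes the fine detail, leaving only the coarse, low-resolution content.
# "marilyn_einstein.png" is the hypothetical output of the sketch above.
from PIL import Image, ImageFilter

hybrid = Image.open("marilyn_einstein.png").convert("L")

# Larger blur radius stands in for greater viewing distance (or a briefer
# glimpse): Einstein's fine lines disappear first, leaving Monroe's coarse form.
for radius in (2, 4, 8):
    hybrid.filter(ImageFilter.GaussianBlur(radius)).save(f"hybrid_blur_{radius}.png")
```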

This research could be used by companies that want to control how their advertisements or logos appear at different sizes or distances from potential customers. You might spot a billboard from far away and notice only certain colors or shapes, then take in more detailed information as you get closer. Manufacturers might also use the research to mask text or other information that must be printed on their devices so that it is legible only from close up.