Monday, February 15, 2010

Extracting thoughts from brain scans

The ability to see a person's thoughts sounds like something from science fiction, but last year at the Society for Neuroscience meeting, Jack Gallant, a leading neural decoder at the University of California, Berkeley, presented some impressive results. His lab has developed a computational model that uses functional MRI (fMRI) data to decode information from an individual's visual cortex - the part of the brain responsible for processing visual stimuli. He and colleague Shinji Nishimoto showed that they could create a crude reproduction of a movie clip someone was watching just from their brain activity.

They used fMRI to measure visual cortex activity in people looking at more than a thousand photographs. This allowed them to develop a computational model and "train" their decoder to understand how each person's visual cortex processes information.

Next, participants were shown a random set of just over 100 previously unseen photographs. Based on the patterns identified in the first set of scans, the team was able to accurately predict which image each person was looking at.
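The identification step described above can be sketched in a few lines. This is a simplified illustration, not the team's actual model: it assumes a linear encoding model (here simulated with random weights) that predicts the voxel pattern each candidate image should evoke, then picks the candidate whose prediction best correlates with the observed scan. All dimensions and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 50 image features, 200 voxels.
n_features, n_voxels = 50, 200

# Stand-in for an encoding model learned from the training scans:
# predicted voxel pattern = image features @ W. Simulated here.
W = rng.normal(size=(n_features, n_voxels))

def predict_voxels(image_features, weights):
    """Predict the voxel activity pattern an image should evoke."""
    return image_features @ weights

def identify(observed, candidate_features, weights):
    """Return the index of the candidate image whose predicted voxel
    pattern correlates best with the observed fMRI pattern."""
    scores = [np.corrcoef(observed, predict_voxels(f, weights))[0, 1]
              for f in candidate_features]
    return int(np.argmax(scores))

# Simulate 100 novel candidate images and a noisy scan of image 42.
candidates = rng.normal(size=(100, n_features))
observed = predict_voxels(candidates[42], W) + 0.5 * rng.normal(size=n_voxels)

print(identify(observed, candidates, W))
```

Even with noise added to the "scan," the correct image wins easily, which is the basic reason identification from a small candidate set is a much easier problem than free-form reconstruction.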

Scientists may one day be able to extract dreams, memories and imagery.

(Journal reference - Nature DOI: 10.1038/nature06713)

(Video: Japanese Mind Reading Technology by NTDWorldNews)

Lisa Katayama (whose IGNITE presentation last year on Japanese gadgets is well worth watching) has written an excellent piece in Popular Science that spells out much of the current work in this field. She even participated in an experiment with the mind-reading technology herself.
Ten minutes feels like an eternity, but finally the fMRI announces the conclusion of its program with another loud beep. The researchers remove me from my bind and escort me to the control room, where a giant monitor is displaying 30 scanned images of my brain from different angles. I see bunches of white squiggly lines and light gray V shapes inside rows of gray circles. “That’s it? That’s my brain?” I ask, my head foggy from having tried so hard to stay still. It surprises me that all the goings-on in my mind can be reduced to a bunch of geometric shapes. Gallant tells me that brain activity is basically just a bunch of neurons firing—an estimated 300 million in the primary visual cortex alone, according to the latest research.
To help make sense of the shapes, the brain scanner divides them up into a grid of three-dimensional cube-like structures called volume pixels, or voxels. To me, each voxel looks like a random mix of whites, grays and blacks. But to Gallant’s computer model, which can see more-precise data in those shades, the voxels are a meaningful matrix of zeroes and ones. By crunching this matrix, it can transform the shapes back into a remarkably accurate rendering of the Einstein Guy or the grazing sheep. Gallant and his team didn’t have time to generate enough scans of my brain to make their algorithm work, but they showed me some convincing results from other volunteers. “It’s not perfect,” says Shinji Nishimoto, one of Gallant’s postdocs, “but we’re getting pretty close.”
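The voxel idea in the passage above is easy to picture in code: the scanner reports one value per small cube of brain tissue, and the decoding model flattens that 3D grid into one long vector, a single row in its data matrix. The grid dimensions below are made up for illustration.

```python
import numpy as np

# Hypothetical scan: the brain volume divided into a 64 x 64 x 30 grid
# of voxels, each cell holding one activity value.
scan = np.zeros((64, 64, 30))
scan[32, 32, 15] = 1.0  # activity recorded in a single voxel

# The model ignores the spatial layout and treats the whole scan as
# one long vector of voxel values.
voxel_vector = scan.reshape(-1)
print(voxel_vector.shape)  # (122880,)
```

Every scan then becomes a point in a very high-dimensional space, which is what lets standard matrix methods "crunch" the shapes Katayama describes.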

For this technology to work, someone's brain has to have already been scanned multiple times, and even then it works only in certain circumstances. Still, there are some interesting possibilities for how this might be used: imagine communicating with the severely disabled, or other therapeutic applications. This is some fascinating stuff!

Now, this may not actually be what people think of when they consider mind reading, but John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences is working on a project, "Decoding of conscious and unconscious mental states," that might be closer to reading thoughts as we actually imagine it. By showing images of food to a group of people that included some with eating disorders, Haynes's team could determine who suffered from an eating disorder just from activity in one of the brain's reward centers.

Another interesting focus of neural decoding is language. Marcel Just and his colleague Tom Mitchell of Carnegie Mellon reported that they could predict which of two nouns - such as "celery" and "airplane" - a subject was thinking of, at rates well above chance. They are now working on two-word phrases. Turning brain scans into short sentences may still be a long way off, but getting my thoughts into a tweet appears to be a fairly complicated scientific process.
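The two-noun prediction can be sketched as a simple pattern-matching problem. This is a toy illustration, not Just and Mitchell's actual method: it assumes we already have a mean voxel pattern for each word from earlier scans, and classifies a new scan by correlation with those stored patterns. All data here is simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 100

# Hypothetical average voxel patterns for the two nouns, learned from
# earlier scans in which the subject thought about each word.
centroids = {
    "celery": rng.normal(size=n_voxels),
    "airplane": rng.normal(size=n_voxels),
}

def predict_noun(scan, centroids):
    """Pick the noun whose stored pattern correlates best with the scan."""
    return max(centroids, key=lambda w: np.corrcoef(scan, centroids[w])[0, 1])

# A noisy new scan taken while the subject thinks of "celery".
scan = centroids["celery"] + 0.8 * rng.normal(size=n_voxels)
print(predict_noun(scan, centroids))
```

With only two candidates, even a noisy correlation beats chance comfortably; the hard part, as the article notes, is scaling from a forced choice between two words to open-ended phrases and sentences.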