A preprint study published in November demonstrated that artificial intelligence (AI) can interpret an individual’s thoughts and generate images from them. Researchers collected brain-activity data from several participants using functional magnetic resonance imaging (fMRI) while the participants viewed images. The participants were then asked to continue visualising those images while their brain activity was recorded. An AI model known as Mind-Vis was then fed this data, enabling it to reconstruct images from the participants’ thoughts while they were in the fMRI machine. The resulting AI-generated images linked specific patterns of brain activity with visual features of the images, such as colour, shape and texture.
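The core idea of the pipeline described above, learning a mapping from fMRI voxel activity to visual-feature descriptors, can be illustrated with a toy sketch. This is not the Mind-Vis model itself; it is a minimal, entirely hypothetical stand-in using ridge regression on simulated data, with made-up dimensions and feature labels chosen only for illustration.

```python
import numpy as np

# Hypothetical illustration (not the actual Mind-Vis code): decoding as
# a learned mapping from fMRI voxel activity to visual-feature scores
# (here imagined as colour, shape and texture dimensions).

rng = np.random.default_rng(0)

n_trials, n_voxels, n_features = 200, 500, 3   # invented sizes
W_true = rng.normal(size=(n_voxels, n_features))

X = rng.normal(size=(n_trials, n_voxels))       # simulated fMRI responses
Y = X @ W_true + 0.1 * rng.normal(size=(n_trials, n_features))  # feature targets

# Ridge regression: W = (X^T X + alpha*I)^-1 X^T Y
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Y)

# "Decode" the visual features for a new, unseen brain scan
x_new = rng.normal(size=(1, n_voxels))
decoded = x_new @ W
print(decoded.shape)  # (1, 3): one score per assumed feature dimension
```

In practice, systems like the one reported pair such decoded representations with a generative image model to produce pictures, rather than stopping at feature scores.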
The study was conducted by a team of researchers from Stanford University, the National University of Singapore and the Chinese University of Hong Kong. The AI generated images that matched the attributes of the reference images with an 84% success rate. However, the technology is not yet perfect and requires further training before it can decipher an individual’s precise thought patterns.
Although the research team’s focus is on understanding activity from human brain scans, the findings could inform treatments for neurological disorders and enable communication devices for those who cannot communicate verbally. The researchers anticipate it could also be used in medicine, psychology and neuroscience. In the future, people could potentially control a computer just by thinking of a command rather than typing on a keyboard, and send messages purely by thought, according to doctoral student Zijiao Chen from the National University of Singapore.
Similar experiments have been conducted by researchers at Osaka University and by a team in Russia. Despite these findings, the technology still requires further development before it can be used widely.