Our minds may no longer be a safe haven for secrets. Scientists are working toward building mind-reading algorithms that could potentially decode our innermost thoughts through memories that act as a database.
For most, this probably sounds like an episode of Netflix’s hit series Black Mirror. The dystopian sci-fi thriller showcased a chilling episode called “Crocodile” in which memory-reading techniques are used to investigate accidents for insurance purposes. The eerie episode is set in an AI-driven world of driverless vehicles and facial recognition technologies. The plot of “Crocodile” centers on the icy crimes of a witness, which investigators uncover with the help of intelligent technology.
The insurance agent uses a memory recaller (known as a “corroborator” in the episode) that comes with a surveillance chip. Once connected to the user, the device lets insurance agents access engrams and displays a corroborative picture of the witness’s memories on a screen, replaying the entire accident from the witness’s point of view.
The agent recreates a similar atmosphere to jog the subject’s memory (in this case employing a song and beer). While insurance tech in the real world may not be quite this sophisticated, technology that reveals a subject’s innermost thoughts could actually be a reality one day.
Experts are currently mapping sections of the brain to collect data that helps them understand human interactions using language, sentences, images, thoughts, and even dreams.
In a 2016 study funded by the National Science Foundation, neuroscientist Alexander Huth of UC Berkeley and a team of researchers built a “semantic atlas” to decode human thoughts.
The atlas displayed how the human brain organizes language through vivid colors and multiple dimensions. The system also helped identify areas in the brain that correspond to words with similar meanings.
Researchers conducted the brain-imaging study by asking subjects to remain inside an fMRI scanner while they listened to stories on Moth Radio Hour. Functional magnetic resonance imaging (fMRI) measures neural activity by detecting subtle changes in blood flow in the brain. The experiment revealed that at least one-third of the brain’s cerebral cortex was involved in language processing, including areas dedicated to high-level cognition.
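The Berkeley work is an example of a voxel-wise encoding model: a regression from features of the stimulus (here, the semantics of the words being heard) to each voxel’s response. The sketch below illustrates the idea on synthetic data; the dimensions, noise level, ridge penalty, and variable names are assumptions for illustration, not the study’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (all sizes illustrative): 300 fMRI time points,
# 50 semantic features per stimulus, 1,000 cortical voxels.
n_time, n_feat, n_vox = 300, 50, 1000
X = rng.standard_normal((n_time, n_feat))            # stimulus features
W_true = rng.standard_normal((n_feat, n_vox))        # hidden semantic tuning
Y = X @ W_true + 0.5 * rng.standard_normal((n_time, n_vox))  # voxel responses

# Ridge regression: fit one linear model per voxel in closed form.
alpha = 10.0
W_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# Voxels with similar fitted weight vectors respond to words with similar
# meanings -- grouping those vectors is one way to draw a "semantic atlas".
pred = X @ W_hat
Yc, Pc = Y - Y.mean(0), pred - pred.mean(0)
r = (Yc * Pc).sum(0) / (np.linalg.norm(Yc, axis=0) * np.linalg.norm(Pc, axis=0))
mean_r = float(r.mean())
print(f"mean prediction correlation across voxels: {mean_r:.2f}")
```

In practice such models are fit per subject on hours of data, and held-out prediction accuracy (as computed above) is the standard way to validate them.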
Such data-driven techniques could give a voice to those who cannot speak, especially those with motor neuron diseases like ALS or victims of brain damage or stroke.
In 2017, a team from Carnegie Mellon University (CMU) led by Marcel Just developed a way to identify complex thoughts like “The witness shouted during the trial.” Researchers used machine learning algorithms and brain imaging technology to show that different areas of the brain formed the mind’s building blocks to construct complex thoughts.
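One way to picture the building-blocks idea is to model a sentence’s neural pattern as a combination of its component concepts’ patterns, then identify a thought by matching an observed pattern against candidate sentences. The signatures, sentences, and simple additive model below are invented for illustration only and are far cruder than the CMU team’s actual methods.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical neural signature for each concept (300 features each).
concepts = {w: rng.standard_normal(300) for w in
            ["witness", "shouted", "trial", "judge", "listened", "hearing"]}

# Model each sentence's pattern as the sum of its concepts' signatures.
sentences = {
    "The witness shouted during the trial.": ["witness", "shouted", "trial"],
    "The judge listened during the hearing.": ["judge", "listened", "hearing"],
}
templates = {s: sum(concepts[w] for w in ws) for s, ws in sentences.items()}

# A noisy "observed" activation pattern for the first sentence.
observed = (templates["The witness shouted during the trial."]
            + 0.5 * rng.standard_normal(300))

# Decode by picking the sentence whose template best matches the observation.
best = max(templates, key=lambda s: np.corrcoef(observed, templates[s])[0, 1])
print(best)
```

Template matching of this kind only works because different concepts evoke distinguishable patterns, which is the core finding the CMU study reported.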
In 2014, CMU introduced BrainHub, an initiative that focuses on modern brain research, linking neuroscience to behavior through machine learning applications, statistics, and computational modeling. BrainHub continues to examine ways we could use neural interventions to help people with neurological conditions and developmental disorders.
In 2014, a research group led by Alan S. Cowen, a former undergraduate student at Yale University, accurately reconstructed images of human faces based on how the study subjects’ brains reacted to the images.
Researchers mapped the subjects’ brain activity while showing them a range of face images, building a statistical library of how each subject’s brain responded to individual faces. When the researchers then showed the subjects new faces, they used the library to reconstruct the face each subject was viewing. According to Yale News, Cowen predicts that as the accuracy of facial reconstruction improves, such research tools could help study how autistic children respond to faces.
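In spirit, the reconstruction step is a decoding problem: learn a mapping from brain responses back to a low-dimensional face description (such as eigenface components), then apply it to responses evoked by faces the model has never seen. A toy sketch on simulated data follows; the component count, voxel count, noise level, and linear model are assumptions, not the study’s actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: 400 training faces, each described by 20 "eigenface"
# components, evoking responses across 100 voxels.
n_train, n_comp, n_vox = 400, 20, 100
faces = rng.standard_normal((n_train, n_comp))      # face components
B = rng.standard_normal((n_comp, n_vox))            # hidden neural code
resp = faces @ B + 0.3 * rng.standard_normal((n_train, n_vox))

# "Statistical library": least-squares decoder from responses to components.
D, *_ = np.linalg.lstsq(resp, faces, rcond=None)

# Reconstruct a brand-new face from its brain response alone.
new_face = rng.standard_normal(n_comp)
new_resp = new_face @ B + 0.3 * rng.standard_normal(n_vox)
recon = new_resp @ D

corr = float(np.corrcoef(new_face, recon)[0, 1])
print(f"correlation between true and reconstructed face: {corr:.2f}")
```

The decoded components would then be rendered back into an image, which is why reconstructions look like blurry approximations of the viewed face rather than photographs.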
In 2013, Japanese scientists managed to “read dreams” with 60 percent accuracy by decoding some aspects of dreams in an early stage of a dream cycle.
Researchers used MRI scans to monitor test subjects as they slept. The team built a database to group objects into broad visual categories. During the final sleep round, the researchers could identify what the volunteers were seeing in their dreams by monitoring their brain activity.
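The Kyoto approach amounts to training a classifier on brain activity evoked by broad visual categories while subjects are awake, then applying it to activity recorded just before waking. The nearest-centroid decoder below is a minimal stand-in for that idea; the categories, voxel counts, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: 3 broad visual categories, 20 labeled waking
# trials each, 200 voxels per activity pattern.
categories = ["person", "car", "food"]
n_per, n_vox = 20, 200
codes = rng.standard_normal((3, n_vox))          # hidden category codes

X, y = [], []
for c in range(3):
    X.append(codes[c] + 0.8 * rng.standard_normal((n_per, n_vox)))
    y += [c] * n_per
X, y = np.vstack(X), np.array(y)

# "Database": the mean waking pattern per category.
means = np.stack([X[y == c].mean(0) for c in range(3)])

def decode(pattern):
    # Pick the category whose waking template is closest to the pattern.
    return categories[int(np.argmin(np.linalg.norm(means - pattern, axis=1)))]

# Simulated pre-awakening activity resembling the "car" category.
dream = codes[1] + 0.8 * rng.standard_normal(n_vox)
predicted = decode(dream)
print(predicted)
```

Real decoders of this kind are validated against the subjects’ own dream reports collected immediately after waking, which is where the reported 60 percent accuracy figure comes from.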
In 2014, Millennium Magnetic Technologies (MMT) NeuroTech became the first company to commercialize “thought recording” sessions. Using its patented and proprietary Rosetta Technology, MMT identifies Cognitive Engrams that represent the patient’s brain activity and thought patterns. The technology uses fMRI patterns and biometric analysis of video images to interpret facial recognition, object recognition, truth vs. deception during interrogation, and dream sequences.
To be fair, memory tech comes with many limitations.
For one, brain mapping is a lengthy and expensive process. For the researchers in Kyoto, dream reading took 200 test rounds per participant. Moreover, even if companies and organizations were to deploy mind-reading tech, doing so would violate several human rights: reports have already identified at least four rights that unauthorized mind reading would infringe if our brains were connected to computers.
Unlike in “Crocodile,” real-world mind-reading AI will face many limitations, and plenty of pushback, before public officials approve it for use in investigations. Even then, regulation could temper enthusiasm for the technology.
Deena Zaidi is a Seattle-based contributor for financial websites like TheStreet, Seeking Alpha, Truthout, Economy Watch, and icrunchdata.
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn more about membership.