Your brain is a lot like your DNA. It is, arguably, everything that makes you uniquely you. Some types of brain scans are a lot like DNA tests. They may reveal what diseases you have (Parkinson’s, certainly; depression, possibly), what happened in your past (drug abuse, probably; trauma, maybe), or even what your future may hold (Alzheimer’s, likely; response to treatment, hopefully). Many people are aware, and properly protective, of the vast stores of information contained in their DNA. When DNA samples were collected in New York without consent, some went to great lengths to have their DNA expunged from databases being amassed by the police.
Evan D. Morris, Ph.D., is a professor of radiology and biomedical imaging at Yale. He uses PET and fMRI to study drug abuse and drug action in the brain. In August 2019, he was a visiting scholar at the Hastings Center to study the ethics of brain imaging.
Fewer people are aware of the similarly vast amounts of information in a brain scan, and even fewer are taking steps to protect it. My colleagues and I are scientists who use brain imaging (PET and fMRI) to study neuropsychiatric diseases. Based on our knowledge of the technologies, we probably ought to be concerned. And yet we rarely discuss the ethical implications of brain imaging. Looking closely, we can observe parallel trends in science and science policy that are refining the information that can be extracted from a brain scan and expanding who will have access to it. There may be good and bad reasons to use a brain scan to make personalized predictions. Good or bad, wise or unwise, the research is already being conducted and the brain scans are piling up.
PET (Positron Emission Tomography) is commonly used clinically to identify sites of altered metabolism (e.g., tumors). In research, it can be used to identify molecular targets for treatment. A recent PET study of brain metabolism in patients with mild cognitive impairment predicted who would develop Alzheimer’s disease. In our work at Yale, we have used PET images of a medication binding to an opioid receptor to predict which problem drinkers would reduce their drinking while on the medication.
fMRI (functional Magnetic Resonance Imaging) detects local fluctuations in blood flow, which occur naturally. A key discovery of the 1990s was that fluctuations in different brain regions occur synchronously. These networks of synchronized regions have been shown repeatedly to encode who we were from birth (our traits) as well as long-term external effects on our brains (from our environment). fMRI analysis techniques are becoming so powerful that the networks can be used like a fingerprint. fMRI networks may be even richer in information than PET, but they are also more problematic. The networks (sometimes called “functional connectivity” patterns) have been used to predict intelligence. They have been used to predict the emergence of schizophrenia and future illicit drug use in at-risk adolescents. Functional connectivity is being used to predict which adult drug abusers will complete a treatment program and who is likely to engage in antisocial behavior. Some predictions are already 80 to 90 percent accurate or better. Driven by AI and ever-faster computers, the predictive ability of the scans will improve.

Most medical research using brain imaging is funded by the NIH (National Institutes of Health). At least one institute (the National Institute of Mental Health) requires that its grant recipients deposit all of their grant-funded brain scans into an NIH-maintained database. This database and similar ones around the world are available for other “qualified researchers” to mine.
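To make the fingerprint analogy concrete, here is a minimal sketch, not drawn from any actual study and run entirely on synthetic data, of how connectivity-based identification works in principle: each scan is reduced to a vector of correlations between brain regions, and a person is “re-identified” by finding whose stored vector most resembles a new one. Every name and number below is an illustrative assumption.

```python
import numpy as np

def connectivity_fingerprint(timeseries):
    """Reduce one fMRI session to a functional-connectivity vector.

    timeseries: array of shape (n_timepoints, n_regions).
    Returns the upper triangle of the region-by-region correlation
    matrix as a 1-D "fingerprint."
    """
    corr = np.corrcoef(timeseries, rowvar=False)   # regions x regions
    rows, cols = np.triu_indices_from(corr, k=1)   # unique region pairs
    return corr[rows, cols]

def reidentification_rate(session_a, session_b):
    """Fraction of subjects whose session-A fingerprint is most
    similar to their own session-B fingerprint and nobody else's."""
    fps_a = np.array([connectivity_fingerprint(ts) for ts in session_a])
    fps_b = np.array([connectivity_fingerprint(ts) for ts in session_b])
    n = len(fps_a)
    # Pearson correlation between every A fingerprint and every B fingerprint.
    sim = np.corrcoef(fps_a, fps_b)[:n, n:]
    best_match = sim.argmax(axis=1)
    return (best_match == np.arange(n)).mean()

# Synthetic demonstration: each "subject" gets a stable mixing matrix
# (their intrinsic connectivity) plus independent noise in each session.
rng = np.random.default_rng(seed=0)
n_subjects, n_regions, n_timepoints = 10, 30, 200
mixings = [rng.normal(size=(n_regions, n_regions)) for _ in range(n_subjects)]

def simulate_session(mixing):
    latent = rng.normal(size=(n_timepoints, n_regions))
    noise = 0.5 * rng.normal(size=(n_timepoints, n_regions))
    return latent @ mixing + noise

session_a = [simulate_session(m) for m in mixings]
session_b = [simulate_session(m) for m in mixings]
print(f"re-identified: {reidentification_rate(session_a, session_b):.0%}")
```

Real studies use hundreds of brain regions and carefully preprocessed scans, but the core matching logic is no more elaborate than this, which is part of what makes the privacy question so pressing.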
Some uses of brain imaging would seem to have only upsides. They might provide certainty for patients and their families who desperately need help planning for their colliding futures. They could avoid unnecessary and costly treatments that are destined to fail. But other uses of brain imaging lie in an ethical gray area. They foretell behaviors and conditions that could be stigmatizing or harmful. They generate information that an individual may wish to keep private, or at least manage. In the right circumstances, the information may even be of great interest to the police or the courts.
As the New York Times recently reported, the police in New York City tricked a child into leaving his DNA on a soda can. I recognize that fMRI networks cannot be captured surreptitiously by enticing a 12-year-old to drink a soda. The police will not use fMRI fingerprints solely as identifiers. It would be too much trouble. But many questions arise. Could a court order someone to undergo fMRI or PET? Could a prosecutor subpoena a brain scan that a suspect consented to in the past as a research volunteer? Forensic genealogists tracked down the Golden State Killer without ever taking a sample of his DNA. They triangulated using DNA markers he shared with unacquainted third cousins who had uploaded their DNA sequences to a public database. Could a forensic brain imager identify you as unlikely to complete drug treatment and thus a bad candidate for diversion? What if we could predict your future behavior by similarities that your fMRI networks share with those of psychopaths who had been analyzed and whose data now reside in a database? Even now, it seems plausible that a qualified scientist working with police could download the data. If that didn’t work, the police might get a warrant. Will the NIH relent and share their databases of images when the police come calling?
These are questions that brain imagers, legal experts, ethicists, and the public should be debating. Scenarios that may seem far-fetched right now raise troubling questions that ought to be anticipated. Today’s genetic testing controversies can serve as models for how we think about the potential uses and misuses of brain imaging. Thorough debate should lead to guidelines or policies. A National Academies report on the ethics of brain imaging may be needed.
What is at stake? The integrity of the scientific enterprise. As scientific researchers, we are obligated to obtain “informed consent” from our research subjects. Volunteers must be apprised of the risks they may incur by agreeing to participate in our studies. Scientists generally do a good job of explaining risks to volunteers: “The radiation exposure you will receive is comparable to the natural radiation you would get from three round-trip transcontinental airplane rides.” But brain scanning may be moving toward new uses and abuses that come with risks we have not yet considered. The principle of “autonomy” establishes the right of volunteers to control how their brain scans will be used, scientifically or otherwise. The public who funds research studies, and the volunteers who participate in them, must have confidence that brain scanning is being conducted ethically and that the far-reaching personal information being generated is used only as intended.