The future of medicine | A conversation with Alfred Sandrock

Imagine a future where you don’t have to wait for symptoms to know something isn’t right with your body. A future where diagnosis and prognosis aren’t informed by one or two forms of measurement, but by multiple streams of data, providing a more comprehensive and accurate view of what to expect and how to manage your disease. Most importantly, imagine a future where you can predict a disease and its trajectory, addressing it before it manifests and treating people before they become patients.


A new routine

A visit to the doctor typically includes a physical examination, review of any test results and discussion of treatment options. This experience, from examination and measurement to diagnosis and prognosis, is one we’ve come to rely on. But according to Biogen’s Executive Vice President, Research and Development, Alfred ‘Al’ Sandrock, M.D., Ph.D., our growing understanding of genetics, coupled with the broader use of biomarkers and measurement tools, will provide insights that could revolutionize the field of medicine and drug development.

“Ultimately, I predict that the future of medicine will look much different. And it will come down to our ability to integrate data from four key measurements: initially with genetics, blood tests, input from mobile devices, and if further evaluation is indicated, imaging,” Al says.

Alfred Sandrock, M.D., Ph.D., Biogen’s Executive Vice President, Research and Development

Similar to the routine annual exams most of us are accustomed to, Al foresees a future where everyone could be genotyped at birth and receive a core set of quantifiable measurements, such as blood tests and digital measurements, that are fairly simple and inexpensive and that expand with age. With multiple streams of data collected over time across large populations, it will be possible to establish a range of what is normal and what requires further investigation through imaging, and perhaps ultimately other sophisticated and even more invasive testing.

If we reach a point when everybody gets measured, even people who are not sick, the data collected will be key to understanding the range of “normal,” and what constitutes a red flag.

He points to the Framingham Heart Study as an example of the impact of a longitudinal study, one that has led to a better understanding of risk factors for cardiovascular disease, such as high cholesterol, and how managing them can prevent disease1. “I believe that neuroscience is probably the final frontier of our industry. There’s nothing more complex than the human central nervous system (CNS), yet we try to fix it only after it’s diseased. But despite the complexity, I believe that we are entering a new era for major therapeutic advances,” he says.


A new era of possibilities  

“Our understanding of the molecular pathophysiology of disease has grown exponentially in recent years, largely due to human genetics pointing to causal biological pathways,” he says. “This is key when working on developing a disease-modifying drug.” Al also points to new measurement tools Biogen uses, such as imaging and fluid biomarkers, that track biological changes in the CNS.

PET scanner, Biogen’s molecular imaging lab

And as new digital measurement tools emerge, we can now imagine a future that holds many more possibilities. This is especially important in the field of neurology, where the journey to diagnosis is complex and often includes some trial and error.

Earlier this year, Biogen created the Biogen Digital Health unit, dedicated to personalized and digital medicine in neuroscience.

Al argues that the advent of digital technologies will also help detect and quantify even subtle changes in neurological function that patients themselves may not notice.


Beyond the diagnosis

Integrating these different data points will be key and could lead to a better understanding of disease and how to manage it, especially in neurology, where disease can present differently from patient to patient.

In multiple sclerosis (MS), for example, inflammation is caused by overactive immune cells, which damage areas of the brain and spinal cord. These damaged areas, called lesions, are identified by MRI. Although MRI has been used for decades, only recently has the scientific community begun analyzing and quantifying these MRI images with machine learning. And just in the last few years, we discovered that a blood test measuring serum neurofilament can indicate when nerve fibers in the brain have been damaged2. “Serum neurofilament is just the tip of the iceberg,” Al says. “As we continue to integrate and quantify our various measures, we will be able to develop a whole host of other tests that tell us very precisely what is going on in the CNS, even potentially tracking progression of the disease, giving us signals of abnormality long before people get sick.”

What puts some people at more or less risk of developing a disease? And why do some people experience more severe symptoms than others? Al argues that if we can leverage artificial intelligence to sift through multiple streams of data across large populations, we can potentially find these answers, advancing our understanding of the molecular and cellular pathophysiology of disease and ultimately leading to more disease-modifying treatments.

Robotic arm used at Biogen Digital Health to test different devices and validate algorithms.


Understanding genetics

Understanding how certain genes contribute to disease development is another important hurdle for scientists to overcome. And although the scientific community has made significant headway, especially for diseases caused by single-gene mutations, there is still much we don’t know about so-called “sporadic” diseases. Many of these diseases result from the combined effects of variants in multiple genes, as well as behaviors and environmental triggers that may affect how those genes are expressed.

“Many of these ‘sporadic’ diseases will almost certainly be composed of multiple diseases (subtypes) but with such similar clinical expression (phenotypes) that we currently call them one disease, like Alzheimer’s disease, Parkinson’s disease or ALS. Continuing to advance our measurement capabilities may help us sort all this out, define better drug targets and develop precision therapies.”

The value of data collected from large populations is not a novel idea. Large biomedical databases, such as the UK Biobank, are considered instrumental in advancing scientific discovery. In fact, in a study published in the journal Nature, researchers examined brain imaging from 10,000 individuals in the UK Biobank and found that genetic influences on brain structure and function correlate with a number of neurodegenerative, psychiatric and personality traits.


Are we there yet?

Over the past few decades, we have seen many measurement tools established as reliable detectors of disease, particularly in cardiology and oncology. This has led to early intervention and even prevention, as in the case of the BRCA gene test in breast and ovarian cancer. According to Al, this is just the beginning of what’s possible, especially in the field of neurology.

“Almost everyone already receives annual blood tests, almost everyone has a digital device, and our imaging capabilities continue to advance. Similarly, our understanding of genetic causes is increasing, and genetic testing is not as uncommon as it was a few years ago,” he says.

But beyond the effectiveness and accuracy of measurement tools, the real game changer will be integrating this data using artificial intelligence, and we are already seeing evidence of what’s possible.

Using a machine learning algorithm called SuStaIn, scientists from multiple institutions analyzed cross-sectional MRI data from the Genetic Frontotemporal Dementia Initiative (GENFI) and identified four subtypes of frontotemporal dementia (FTD). Remarkably, the study (Young et al., 2018) found that these subtypes, identified from single-time-point MRI images alone, segregated by genotype, suggesting that each subtype reflects a distinct biological underpinning of disease.

“As we continue to advance these methodologies and tools, the power and value of measurement is becoming clearer,” he says.

It will take years to get there, but we are starting to see a path being formed already, and we have an opportunity to help create it and make sure that we do this right.

Of course, the promise of this future has implications that go beyond science, medicine and the biotechnology industry. On a societal level, it poses important privacy and ethical questions, as well as drastic changes to healthcare systems and payers.

“As a physician, I see this integration of measurements revolutionizing medicine, treating people before they become sick,” he says. “As a drug developer, I see it revolutionizing the way we determine the efficacy of a drug.”

Being monitored all the time, no matter how discreet or noninvasive the devices become, raises important questions around privacy: where is the data held, and who has access to it? Indeed, this information is not just about an individual’s health; it can also affect insurance costs and, potentially, employment.

“The citizens of the world will have to decide collectively that the potential benefits outweigh these concerns,” Al says. “We will have to be ever watchful of the potential for abuse in the wrong hands. But my hope is that with every advancement, with every careful step in the right direction, we will see that the benefits outweigh the risks. To be clear, the potential benefit is that we may be able to preserve health and stave off the major diseases of our time—cardiovascular, metabolic, neoplastic, psychiatric and neurodegenerative diseases. But there is a lot to be done, and it will require that the entire ecosystem of industry, academia, ethicists and government works together. I believe that we should take the right actions and start having these conversations now to ensure that we can advance toward this future state in a way that benefits humanity.”