BIG DATA, BIG PHARMA – How Big Data is Transforming the Healthcare Industry

Society is being quickly reshaped by the increasing use of digital devices.

The technological wave of the past decade has created an unstoppable, invisible force called “Big Data” that is transforming human lives. Big Data enables us to measure and understand aspects of our existence that were previously out of reach, such as climate change, food security, education or disease. In the latter regard, the pharmaceutical industry is awakening to the potential of data analytics in modern healthcare.

Although the field has been slow to change the way it organises its business, growing competition and increasing regulation are forcing big pharmaceutical companies to adapt. These companies are now pushed to implement new analytical infrastructures to extract information from disparate data sources. A deeper reliance on data science could help big pharma companies identify new drug targets, improve patient stratification and develop better predictive models for drug development.

 

What is Big Data?

The term “Big Data” was coined by John R. Mashey back in 1998 to describe the large and complex datasets generated by human activity, yet the concept did not receive a precise definition until later. The “3 Vs” framing, introduced by analyst Douglas Laney in 2001 and since adopted by Gartner, is still used today to explain the term:

Big Data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.

Within these three dimensions, Big Data encompasses a massive amount of useful information buried inside complex datasets. Recent advances in information technology have made it possible to capture and analyse these data to gain insights into different aspects of our lives. The Big Data movement spans every field of knowledge, including science, sociology, engineering and art, and its influence can be understood in economic terms: the market for Big Data technology and services is predicted to reach $203 billion in 2020, up from $57 billion in 2010, a compound annual growth rate of roughly 14 percent.

 

Is Big Data essential for big pharma success?

Big Data is making enormous strides in every market across the globe, and the pharmaceutical industry is no exception.

However, this was not the case until recently. Traditionally, big pharmaceutical companies pursued blockbuster medicines to generate the bulk of their revenue, enjoying high profit margins thanks to the limited competition in the healthcare industry. This model, however, is now in sharp decline due to expiring patents and surging competition. Approximately 25% of today's best-selling drugs will come off patent by 2025, and generic medicines are expected to capture around 50% of the market. The situation is compounded by low research and development (R&D) productivity: the industry spends twice as much on R&D as it did ten years ago while producing only two-fifths as many new medicines. As Jean-Pierre Garnier, then CEO of GlaxoSmithKline, argued:

The blockbuster model is highly profitable in the first 8-10 years, but beyond that period you are guaranteed to lose your entire book of business… fundamental changes are necessary. 

Since the classical business model is no longer sustainable, pharma companies are now searching for more productive ways of organising their workflows. Although rapid change has not been the industry's forte, often hindered by the technical complexity and tight regulation of the field, big pharmaceutical companies are now leveraging Big Data analytics to this end.

 

Big Data analytics in pharmaceutical research

Currently, big pharma suffers from declining returns on investment and stagnant pipelines, and the main goal for these companies is to overcome these challenges at the source of the workflow: R&D. The industry spends $150 billion a year on R&D globally and still fails to find alternatives to replace its ageing blockbusters. However, unlocking the tremendous value behind Big Data could be a key ingredient of the much-needed cure.

Big pharma is therefore starting to take advantage of Big Data-driven techniques to build decision-making tools that could help it discover new treatments. Instead of following the traditional “trial-and-error” approach, data analytics is shifting the industry's R&D towards an “inform yourself before you try” strategy: predictive computer modelling that estimates whether a new drug molecule is likely to work in patients before it is ever tested. Many drug discovery projects are implementing bioinformatics tools to identify promising molecular structures and model their interactions with target molecules in the human body. This can ultimately have a great impact on the revenue and safety of the developed drug.
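
To make this concrete, below is a minimal, purely illustrative sketch of such a predictive model in Python. The descriptor table, activity labels and model choice are hypothetical placeholders rather than anything from a real pipeline; industrial models are trained on curated assay data and chemistry-aware features.

```python
# Illustrative sketch only: a toy "inform yourself before you try" activity model.
# All data below are synthetic placeholders, not real molecular descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical descriptor matrix: one row per candidate molecule, one column
# per descriptor (e.g. molecular weight, logP, polar surface area, ...).
X = rng.normal(size=(500, 8))
# Hypothetical binary label: 1 = active against the target, 0 = inactive.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A simple classifier standing in for the "will this molecule work?" model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank unseen candidates by predicted probability of activity, so only the
# most promising molecules move on to expensive synthesis and lab testing.
scores = model.predict_proba(X_test)[:, 1]
print("Hold-out ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```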

 

Big Data and personalised medicine

Today, publicly available databases can be mined to extract meaningful information. When it comes to understanding disease mechanisms and predicting clinical outcomes, the main focus has recently been placed on genomic data. Large sequencing efforts, such as the 1000 Genomes Project and the Human Genome Project, are making genetic information available that helps identify new drug targets by linking particular genes and their products to individual diseases. In addition, real-world evidence is an even bigger source of patient information. This refers to data collected beyond traditional randomised clinical trials, such as electronic health records and other patient-related information held in registries, hospital administration systems and payer databases.
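
As a purely illustrative example of this kind of mining, the Python sketch below joins a hypothetical gene–disease association table with a hypothetical drug–target table to flag disease-linked genes that have no drug yet. The records are invented placeholders; real analyses would query curated resources rather than hand-written tables.

```python
# Illustrative sketch only: linking genes to diseases and existing drugs to
# surface potential new targets. All records are invented placeholders.
import pandas as pd

gene_disease = pd.DataFrame({
    "gene": ["PCSK9", "BRCA1", "IL6", "HTT"],
    "disease": ["hypercholesterolaemia", "breast cancer",
                "rheumatoid arthritis", "Huntington's disease"],
})

drug_target = pd.DataFrame({
    "gene": ["PCSK9", "IL6", "EGFR"],
    "drug": ["evolocumab", "tocilizumab", "gefitinib"],
})

# Genes with a disease link but no marketed drug are candidate new targets.
merged = gene_disease.merge(drug_target, on="gene", how="left")
print(merged[merged["drug"].isna()][["gene", "disease"]])
```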

Data from genome projects and real-world evidence can help pharma companies understand their customers through personalised medicine. Social media platforms and apps such as Apple's HealthKit, paired with biosensors, are building a closer relationship between online data and healthcare. Pharmaceutical companies can gather daily evidence from smartphones, wearable monitors and other technologies, including patient parameters such as blood pressure, energy consumed per day or daily routines, which are used to build a personalised profile. Analysing these complex trait networks enables companies to develop new medicines tailored to smaller patient cohorts, with greater efficacy and fewer side effects.
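
The sketch below illustrates, in a deliberately simplified way, how such profiles might be grouped into smaller cohorts. The parameters, patient numbers and cluster count are hypothetical; real stratification would combine far richer clinical and genomic features.

```python
# Illustrative sketch only: clustering hypothetical patient profiles built from
# wearable and app data into smaller cohorts. All values are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical daily-average parameters for 300 patients:
# [systolic blood pressure (mmHg), energy consumed (kcal/day), steps per day]
profiles = np.column_stack([
    rng.normal(125, 15, 300),
    rng.normal(2200, 400, 300),
    rng.normal(7000, 2500, 300),
])

# Put the parameters on a common scale before clustering.
scaled = StandardScaler().fit_transform(profiles)

# Group patients into a handful of cohorts; a drug's efficacy and side effects
# could then be assessed separately within each cohort.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=1)
cohorts = kmeans.fit_predict(scaled)

for c in range(4):
    print(f"Cohort {c}: {np.sum(cohorts == c)} patients")
```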

 

Pharma holobionts open doors for data sharing

Big Data is also modifying the traditionally closed operating model of the pharma industry, creating a shift towards open data availability. The secrecy surrounding the pharmaceutical pipeline was meant to protect companies from any threat to their intellectual property. This closed approach worked well during the rise of the blockbuster model; Big Data, however, is pressuring pharma companies to democratise their clinical development data. For this reason, GSK began posting detailed data from its clinical trials online in 2013, making it widely available to outside researchers. As Dr. Sowder, Pfizer's external medical communicator, argued:

No single actor in the healthcare ecosystem can hope to master the zettabytes of information flowing in from numerous sources. Companies have to build collaborations with other organisations to gain insights from many streams of real-world information and comparative effectiveness studies, with the stated needs of patients in mind.

External collaborations thus involve more players than simply the company's stakeholders, including academic researchers, contract research organisations, and even companies typically seen as fierce rivals. Breaking down the silos between these organisations can ultimately produce a pharmaceutical holobiont, or collaborative ecosystem, that is likely to improve business outcomes. A recent example is the partnership between GSK, the European Bioinformatics Institute and the Wellcome Trust Sanger Institute to form the Centre for Therapeutic Target Validation, which aims to mine combined genomics, proteomics and chemistry data to find new therapeutic targets. An open collaboration scheme between companies can bring valuable information to the boundaries of the pharma industry, allowing a more in-depth understanding of customers.

However, open data sharing raises ethical questions about data ownership and patient privacy, including debates on the degree of corporate access to personal data as tech giants such as Apple, Amazon and Google seek to carve out a healthcare niche. Many academics worry that drug firms will use the data from these companies merely as a commercial tool, intruding on patients' privacy. Stricter legislation is still needed to ensure patient privacy, although restricting the information pharma companies can obtain could undermine the drug development process.

 

Concluding perspectives

More data has been created since 2003 than in all previous recorded history, and healthcare systems will have to embrace this data-driven shift in strategy. As PwC predicted:

Without such a change of strategy, no country will be able to meet the healthcare needs of its inhabitants by 2020.

Managing and analysing the influx of Big Data is essential for pharma companies to address the issues undermining their traditional business models. Success in the industry is no longer measured by the number of blockbuster drugs on the market; what ultimately creates business value is the ability to draw analytical insight from real-world evidence. Big Data analytics therefore represents a needed change in pharmaceutical R&D, patient focus and external collaboration schemes, and major players in the field are already inching towards this digital transformation.

 

Read Next

The Promise and Challenge of Big Data for Pharma

MIT Tech Review on Big Pharma and Data Sharing

 
