Biopharma Data: Chaos or Harmony?
When the key to your biologic’s success is locked in its data
To get the most out of recent innovations in biopharmaceutical development and QC such as multi-attribute monitoring (MAM), it’s time to better connect the systems that manage biopharma data. This series explores the what, the why, and the how of better biopharma data.
Imagine a biopharma nightmare scenario: You are developing a large-molecule drug, a therapeutic monoclonal antibody (mAb). With strong phase three clinical trial results, your molecule looks promising. Simply getting this far means your molecule has already beaten the odds, succeeding past a point where hundreds of others have failed. To cross the finish line, you’re preparing for commercial production. Tens or hundreds of millions of dollars are on the line.
Then an artifact shows up in your QC data.
One product attribute is unexpectedly out of range. Your drug sits on the launch pad while you must find out why. Is it the product itself? A raw material? An assay or instrument used in your quality system? Weeks or even months can pass as you investigate the answer. If your drug has blockbuster potential, every day wasted could amount to $10 million or more of lost sales.
Time and money are of the essence, and you’re losing both. Since mAbs are inherently complex molecules, and the biological processes used to produce them have so many variables, how do you avoid being tripped up by this variability as you race to the finish?
The answer is in your data. If data can throw your development program into chaos, it can also help move you forward.
Let’s see how.
Terabytes of data, all for one drug
With biologics, the process is the product. In other words, since it’s challenging to fully characterize as large a molecule as a mAb, you control the quality of your drug by controlling how you make it. And you control that process through the data you gather about it—ultimately, understanding process parameters and their effect on the product’s critical quality attributes (CQAs).
Comprehending these factors meaningfully means generating gigabytes of analytical data per day and terabytes per month, all of it accessible to regulators and in line with their requirements. And, since drug development takes place over years, this large, constantly growing body of data spans multiple touchpoints:
- Analytical instruments, each with its own software platform
- Contractor organizations, which may have different data management practices
- Development phases, with different teams and goals
- Production phases, any of which can lead to meaningful changes in the structure, function, and/or stability of sensitive proteins
From the many, many different data points produced in a development effort, there must emerge one coherent safety, efficacy, and quality profile to present to your regulator and take to market. And bringing these points together poses a challenge that’s only becoming more critical.
Bringing multiple attributes together
Given the multiple data points involved, the need for efficiency is high. Mass spectrometry (MS) instruments offer an important tool for the collection of complex analytical data. As they have gone from room-sized, highly complex instruments to benchtop, more user-friendly units (such as the ACQUITY QDa Mass Detector), their integration into later development phases such as QC has increased.
A look at recent Biologics License Applications shows that, over the last few years, the use of MS to analyze CQAs has risen from 20% to 80%.1 Additionally, the number of CQAs a single method can analyze has grown into the double digits. Such approaches, known as multi-attribute monitoring (MAM) methods, allow a single MS instrument to analyze multiple CQAs quickly and to scale with the workflow as more complex data is collected.
Yet for all their efficiency, multi-attribute methods are only part of how biopharma can better harmonize its data.
The threat of chaos, the promise of harmony
Let’s go back to the thorny scenario that began this blog: A QC parameter comes back out of range due to an unknown cause, and the resulting investigation holds up commercialization. The fact that your quality system gives you such a result may be a good thing, if it does in fact signal a problem with a batch or the process.
What makes the difference between a short investigation and a long one may be how well connected your data system is. A connected system gives you the insight and the ability to analyze and understand—and thereby control—the connections between process parameter changes and product CQA changes.
The pressures on drug makers to close the gaps in data management and analysis, and better control process and quality, are only increasing. They include pressures to:
- Develop more complex innovator biologics, such as antibody-drug conjugates (ADCs), which multiplies the data involved (an ADC needs data for the antibody, the small molecule, the linker, and the conjugate itself)2
- Develop niche drugs for smaller populations, such as those suffering from orphan illnesses or from highly specific genetic variants of cancer
- Develop biosimilars, which require fast development schedules in the face of heavy competition
- Use continuous manufacturing with real-time monitoring and automation, all of which require a tighter feedback loop between data and process
In short, the data needs of biopharma are huge—but so are the opportunities.
Next in this blog series, we’ll take a look at how data fits into the ultra-competitive, rapidly emerging field of biosimilars development.
Visit waters.com/tamethechaos for more information.
1. Xu W, Jimenez RB, Mowery R, Luo H, Cao M, Agarwal N, Ramos I, Wang X, Wang J. A Quadrupole Dalton-based multi-attribute method for product characterization, process development, and quality control of therapeutic proteins. mAbs. 2017;9(7):1186-1196.
2. Arnaud CH. Mass spec weighs in on protein therapeutics. Chemical & Engineering News. May 2016:30-34.