Eighty-five. That’s the number of warning letters the FDA issued in 2018 alone. And 49% of those letters cited data integrity.
This highlights a serious issue: there are data integrity deficiencies in the lab environment. With so many warning letters issued to drug manufacturers, it was time for some guidance on how to improve.
It starts with warning letters and Form 483 citations. With companies under intense pressure to produce medicines faster and at lower cost, a data integrity issue can push them over the edge, with costly fines affecting their ROI. In today’s competitive landscape, it could mean a company’s downfall.
But it’s not only about the pharma and biopharma companies. Complying with regulatory standards for data integrity preserves product quality, so medicines are safe and available for patients when they need them.
The FDA’s recently released guidance on data integrity has set the bar; now all that remains is to raise lab standards to meet it. In a previous blog post, we saw how digitization and software can help close the gap. In this post, we’ll take a broader view on the topic, and see what the future holds for data integrity in the lab.
Striving towards data integrity
Technology has changed every aspect of our lives and continues to shape them as we work towards a better future. Scientific research is no different: we conduct research and develop drugs very differently than we did just a few years ago. Technology has touched everything from overhauling manual processes to streamlining workflows and modernizing the lab. Now it works alongside scientists at the bench, increasing efficiency in highly repetitive tasks and reducing the chance of errors.
But there are still gaps in the standards, especially when it comes to data integrity. Automation has sped up research and manufacturing, churning out reams of data. But then what? Scientists need an effective system in which to categorize, organize and store the data.
There should be no missing data. Say you spill some reagent on your paper lab notebook by accident, and the ink runs a little. Is that a 5 or a 9? Or say you’d like to compare your notes to some taken last week, but you can’t find them, so you redo the work, wasting both your time and reagents, and future projects get pushed back.
In the lab, missing or wrong data could translate to bad decisions, costly repeats, failed clinical trials, and, as was the case for at least 42 labs last year, rejected medications or studies coming to an abrupt halt. It could set a company back millions of dollars, and waste years of research.
Software in the laboratory
The first step is automating how data is captured and managed, which speeds up workflows and has the added benefit of reducing errors. A good data management system should provide a way to find data from present and past experiments quickly. Software bridges the gaps by providing a platform to collate all relevant data to be stored and used efficiently.
An electronic lab notebook (ELN) is one such piece of software. Unlike manual processes and paper lab notebooks, an ELN offers a way to automatically capture, organize and analyze data with consistency. Removing data silos helps teams make better decisions in current and future experiments. Every action is tracked and recorded, ensuring no data is lost, falsified or deleted.
Data integrity is enforced with full audit trails. With software like an ELN, organizations can build in standards and regulatory guidelines, such as 21 CFR Part 11, and sign-off functions during peer review ensure these are being followed.
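As a rough illustration of the idea (not how E-WorkBook or any specific ELN is implemented), an audit trail can be modeled as an append-only log: every edit records who changed what, when, and what the previous value was, before the visible value is overwritten. The class and field names below are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One immutable record of who changed what, and when."""
    timestamp: str
    user: str
    record_id: str
    field_name: str
    old_value: str
    new_value: str

class AuditedRecord:
    """A record whose every change is appended to an audit trail.
    Edits overwrite the visible value, but the history is never lost."""

    def __init__(self, record_id: str, values: dict):
        self.record_id = record_id
        self.values = dict(values)
        self.trail: list[AuditEntry] = []  # append-only; nothing is ever removed

    def update(self, user: str, field_name: str, new_value: str) -> None:
        # Log the change first, then apply it, so the old value survives.
        self.trail.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            user=user,
            record_id=self.record_id,
            field_name=field_name,
            old_value=self.values.get(field_name, ""),
            new_value=new_value,
        ))
        self.values[field_name] = new_value

# A correction to a reading: the original value stays in the trail.
record = AuditedRecord("EXP-042", {"concentration": "5 mM"})
record.update("asmith", "concentration", "9 mM")
```

Because the trail is append-only, a reviewer can always distinguish an honest correction from a silent overwrite, which is exactly what a falsified paper notebook cannot guarantee.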
Once the connection has been made between the data capture, analysis, metadata and records, data integrity is maintained. The data is proof that procedures in the lab meet the regulatory standards. This validation is necessary before the data can be used further down the line.
Understanding data integrity
Data integrity is the first step, then comes data reliability, and finally, science’s goal: data quality. But you must start at the beginning.
Many labs use the services of IT professionals to build workflows into their procedures and set checks according to what the data integrity guidelines require. But there is a catch: ‘data integrity’ means something different in software engineering standards than it does in pharmaceutical and biopharma industry standards. When IT hears ‘data integrity’, they typically think of data security, while to scientists, it is about changes to records.
It’s important to set the definition and expectations early and synchronize standards. That way, the network of procedures and programs will cover all the bases that pharma needs.
Data integrity in the future
Updates to the FDA guidance have labs adhering to a set of qualifiers that mark data integrity: data should be attributable, legible, contemporaneous, original, accurate, consistent, available, enduring and complete (ALCOA+). Data remains temporary until someone saves it, at which point it becomes ‘enduring’.
This is vital, as temporary data can be changed and manipulated without a record, either to correct a mistake or to falsify data to meet requirements. Regulatory bodies are now interested in temporary data and records as well as enduring ones to see the whole story. Data integrity standards today require all captured data and changes made to be tracked and recorded. This feature has been built into many informatics platforms, including IDBS’ E-WorkBook.
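To make the ALCOA+ qualifiers concrete, here is a minimal sketch (the field names and helper functions are hypothetical, not taken from any platform) of capturing a reading with the metadata those qualifiers imply, and checking that a saved record is complete before it is treated as enduring:

```python
import hashlib
from datetime import datetime, timezone

def capture(user: str, instrument: str, value: float) -> dict:
    """Capture a reading together with the metadata ALCOA+ expects."""
    raw = f"{instrument}:{value}"
    return {
        "value": value,
        "user": user,                                            # attributable
        "captured_at": datetime.now(timezone.utc).isoformat(),   # contemporaneous
        "source": instrument,                                    # original
        "checksum": hashlib.sha256(raw.encode()).hexdigest(),    # accurate, enduring
    }

def is_alcoa_complete(point: dict) -> bool:
    """A record missing any required metadata field is not yet complete."""
    required = {"value", "user", "captured_at", "source", "checksum"}
    return required <= point.keys()

reading = capture("asmith", "HPLC-01", 4.72)
```

A check like this, run at save time, is one way software can refuse to promote a bare number into an enduring record until the who, when and where are attached to it.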
As the world moves from one type of tech to another, so does the lab. Adopting new technologies including wireless tech, the internet of things (IoT), and Big Data in the lab is exciting, but governance over data integrity must keep up.
Manual processes that satisfied regulators in the past no longer cut it. The FDA guidance is an excellent step in the right direction. Now all we have to do is implement it effectively to ensure products for consumer health are safe, effective and efficient.