IDBS Blog | 10th June 2015
Next-generation informatics – the next step in effective process development
Process development groups face challenges on several fronts: developing and transferring robust processes quickly, optimizing them continually, cutting down on errors, and saving time and money. But with research and development (R&D) spend in many parts of the world hitting a plateau, the pressure is on.
Teams must develop new products faster with fewer resources. The job of process development scientists is made trickier still by the blend of functionality they require – structured and unstructured data storage, analysis and traceability capabilities – which traditional informatics solutions do not offer.
Is next-generation tech the answer?
The requirements of process development scientists are two-fold: the flexibility to make process changes and try new ideas, and the ability to capture and analyze complex data. Neither is supported effectively by manual processes. Development scientists are already leveraging automation technology to increase throughput and design of experiments (DoE) software to plan efficient experiments. So why are many still using Excel and paper-based processes for documentation and analysis?
When systems exist that can efficiently capture and report on run data, integrate with existing instruments, and allow for ad hoc query and analysis, perhaps the barrier is cultural. Moving to a fully electronic environment is certainly a shift: there are standard operating procedures to consider, as well as access within the labs. But the benefits of converting far outweigh the effort to get there. Such systems move the user from a personal tool to a collaborative environment where runs can be compared and analyzed across groups, and material tracked across labs.
One size does not fit all
Your needs for a data management system will depend on the stage of development you are in, your regulatory overhead and your general lab culture, and will differ from those of other scientists in the same space. This is where the flexibility and configurability of a system is crucial. While the overall workflow may be very similar – cell lines to bioreactors to protein purification to final formulation – the data and needs can vary significantly.
In all cases, traceability is key. When things go wrong, the ensuing investigations can take weeks of digging through notebooks, binders and files before an issue is resolved. This is especially difficult in a development environment where there is often a complex web of materials, solutions, cell lines, process intermediates and final products. Ultimately, automatically linking entities together in a system allows clear data insight when it is needed most. This saves weeks of valuable time searching for an issue recorded via manual processes – time which can be invested in re-running the experiment with the necessary changes made.
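As a rough illustration of what automatically linking entities buys you, the sketch below models process entities as records that know their upstream inputs. The `Entity` class and the names in it are hypothetical – not any particular product's data model – but the idea is the one described above: a final product can be traced back to every contributing material in a single query rather than weeks of manual digging.

```python
# Minimal sketch of entity linking for traceability.
# All names here (Entity, trace_back, the example lots) are illustrative,
# not the API of any real informatics system.

class Entity:
    def __init__(self, name, kind, inputs=()):
        self.name = name
        self.kind = kind            # e.g. "cell line", "intermediate"
        self.inputs = list(inputs)  # upstream entities this one was made from

    def trace_back(self):
        """Return every upstream entity, however indirect."""
        seen = []
        stack = list(self.inputs)
        while stack:
            entity = stack.pop()
            if entity not in seen:
                seen.append(entity)
                stack.extend(entity.inputs)
        return seen

# A simplified chain: cell line -> bioreactor run -> purified pool -> product
cell_line = Entity("CHO-K1 clone 7", "cell line")
broth = Entity("Bioreactor run 12", "intermediate", [cell_line])
pool = Entity("Purified pool A", "intermediate", [broth])
product = Entity("Formulated lot 001", "final product", [pool])

# One query recovers the full lineage of the final product.
upstream = product.trace_back()
```

Because each entity records its inputs at the moment it is created, the lineage graph is a by-product of normal work in the system, which is what makes the investigation step nearly free.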
While everyone can benefit from streamlining their data management processes, there is no ‘one-size-fits-all’ solution. Different groups have different needs, which makes it vital to establish requirements and goals up front. What’s clear is that next-generation informatics – tailored to the end user – will be critical to the process development lab of the future.