Whitepaper: Opportunities in Preclinical Development

Finding ways to optimize the holistic preclinical development process


Preclinical development combines numerous specialized scientific domains. It often encompasses:

  • Pharmacology (PD) – both in-vitro (e.g. plate-based) and in-vivo (e.g. in-life) studies
  • Drug metabolism (DM) – what happens to the drug when it is metabolized in the body
  • Pharmacokinetics (PK) – what happens to the drug in the body: where it goes, at what concentration, and for how long
  • Bioanalysis (BioA) – the specific analytical techniques used to estimate drug and metabolite concentrations in specific tissues
  • Formulations (Form) – how best to take the API and formulate it into a drug product (a pill, injectable, or inhaled aerosol)
  • Safety/toxicology – analysis of the effects of the drug on specific organs and of safety concerns, e.g. cardiac safety, liver toxicity

When it comes to informatics and data management, each of these domains has its own specific requirements, driven by its distinct scientific needs, which we will discuss later in this white paper. However, they all produce critical data and supporting information that are required for investigational new drug (IND) and new drug application (NDA) submissions and further regulatory approval.

Every domain produces critical data, and this data, together with its contextual information, should be captured and managed in the most effective and robust manner possible. Because this data is required for regulatory submission, the speed and efficiency with which it is collected is also of great importance. Data needs to be accessible as quickly as possible, without compromising its quality. This is what we describe as a 'data value chain': it links all of the domains together at the data level.

These key drivers are tractable problems that can, to a degree, be addressed with informatics and data management applications. However, they are not solvable with software and data management alone – the complementary laboratory and scientific processes need to be optimized at the same time. Why? Because automating an inefficient lab process adds far less value to the business than automating an efficient, optimized one.

So, when thinking about optimizing the holistic preclinical development process, we must address both the data management facets and the laboratory/scientific processes at the same time. This is not new information, but it needs to be kept in mind when deciding what to do, and when, with respect to an informatics/lab process optimization strategy. Many of our customers have been through this process – but many will honestly say that, with hindsight, they could have done things better.

Many suggest that they:

  • tried to go too big, too fast – driving change into labs and science areas that could not, or did not want to, deal with it at that time
  • started with a domain that was too difficult – tackling the most complicated domain first because it was seen as the biggest prize in terms of ROI (return on investment) or business impact
  • did not anticipate the nuances of the science and oversimplified the problem
  • failed to understand the data inputs and outputs in enough detail to deliver on the expectations of ancillary groups

What can be done to reduce the risk that informatics and project oversight errors become significant issues, causing project delays or outright failure? The next sections discuss the specific things our customers believe are important for each of the preclinical areas.

Each section describes the problems for that domain and the areas of focus that are solid, proven starting points for defining and executing laboratory and scientific process re-engineering projects supported by data management and scientific software applications.

Pharmacology (PD)

This area is typified by many one-off or single-point experiments. The experimental diversity is enormous, and many scientists work in both the in-vitro and in-vivo domains.

They need to gather data from a diverse range of instrumentation and data sources, and use a wide range of applications to transform the data and perform statistical analysis and curve fitting, owing to the noisy nature of real-world data. Experiments and studies are very dynamic in the early stages as scientists try to understand the effect of a drug on the subject – for example, what response do they get, and can they measure it robustly and consistently?
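
To make this concrete, here is a minimal sketch – with illustrative data and parameter values, not drawn from any specific assay – of the kind of curve fitting a PD workflow applies to noisy dose-response data, using a four-parameter logistic (Hill) model:

```python
# Minimal sketch: fitting a four-parameter logistic (Hill) model to
# noisy dose-response data, as a PD workflow might. Values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / dose) ** hill)

# Illustrative plate data: a serial dilution and the measured responses
dose = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # uM
response = np.array([2.1, 4.8, 11.5, 30.2, 55.9, 78.3, 90.1, 95.4])  # % effect

# Initial guesses: response floor, ceiling, mid-point dose, slope
p0 = [response.min(), response.max(), 1.0, 1.0]
params, covariance = curve_fit(four_pl, dose, response, p0=p0)

bottom, top, ec50, hill = params
stderr = np.sqrt(np.diag(covariance))  # rough parameter standard errors
print(f"EC50 = {ec50:.3g} uM (+/- {stderr[2]:.2g}), Hill slope = {hill:.2f}")
```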

When providing critical data to other areas, such as pharmacokinetics (PK), pharmacology data needs to be supplied with all of its context (accuracy, method of capture, statistical robustness) so it can be combined with other data to produce critical reports, such as PK/PD reports. Here, scientists can see the effect of a given dose on response and understand the actual concentration of drug in the various tissues of interest.

Drug Metabolism (DM)

This area is characterized by a very common set of assay types that are used to build a profile of new drug candidates.

Over the past few years, the science has become increasingly automated. Many organizations use a 'cascading' business logic system to automatically progress drugs through tests. This results in better efficiency and better cost control – why, for example, continue testing a compound that has a poor profile in the cell absorption tests? As with pharmacology, the data this area produces is integral to later decisions, but the most important area for data management and software is 'in process': ensuring it supports automated decision-making, entity progression, and sample logistics.
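
As an illustration of the cascading idea, here is a schematic sketch; the assay names and thresholds are invented for illustration and do not reflect any particular organization's rules:

```python
# Schematic sketch of 'cascading' progression logic for DM assays.
# Assay names and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class AssayResult:
    compound_id: str
    permeability: float       # e.g. Caco-2 Papp, 1e-6 cm/s
    microsomal_t_half: float  # metabolic stability, minutes

def next_assays(result: AssayResult) -> list[str]:
    """Return the downstream assays a compound should progress to."""
    queue = []
    # Gate 1: drop poorly absorbed compounds before spending more budget
    if result.permeability < 1.0:
        return queue  # fails the cell absorption gate; no further testing
    # Gate 2: only metabolically stable compounds go on to CYP profiling
    if result.microsomal_t_half >= 30.0:
        queue.append("CYP inhibition panel")
    queue.append("plasma protein binding")
    return queue

print(next_assays(AssayResult("CMPD-001", permeability=8.2, microsomal_t_half=45.0)))
# -> ['CYP inhibition panel', 'plasma protein binding']
```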

Many labs do not yet have a fully automated DM environment and still rely on manual entity progression and manual data collection from robotic systems. The data and results produced in DM are crucial in trimming the number of candidates that go on to further, more in-depth and expensive, downstream PK and safety testing.

Pharmacokinetics (PK)

PK is characterized by a small number of candidate drugs subjected to very in-depth, expensive, long-term studies into how the drug behaves in the body and at what concentration it should be dosed.

It involves complicated statistical design, subject tracking, extended study timescales, and very dynamic analysis parameters with robust statistical methods. Pharmacokinetic development is one of the most complicated areas of the science and bridges into clinical development with human subject testing. It is further complicated by the need to run in a compliant, regulated laboratory setting (GLP).

As mentioned previously, PK produces the data used to define dosing regimens, and it therefore needs to be captured and managed in a manner that supports this. PK groups work hand in hand with bioanalysis (BioA) groups, as it is there that the analytical testing of samples produced in the PK studies is done; sometimes, however, the PK group does its own BioA.
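
For illustration, here is a minimal sketch of two standard non-compartmental calculations that sit behind such dosing decisions – AUC by the trapezoidal rule and terminal half-life – using invented concentration-time values:

```python
# Minimal sketch of two standard non-compartmental PK calculations:
# AUC by the linear trapezoidal rule and terminal half-life from the
# log-linear elimination phase. Time/concentration values are illustrative.
import numpy as np

time = np.array([0.5, 1, 2, 4, 8, 12, 24])               # hours post-dose
conc = np.array([12.0, 18.5, 15.2, 9.8, 4.1, 1.9, 0.4])  # ng/mL

# Exposure: AUC(0.5-24h) by the linear trapezoidal rule
auc = np.sum(np.diff(time) * (conc[:-1] + conc[1:]) / 2.0)

# Terminal half-life: log-linear regression over the last few time points
slope, _intercept = np.polyfit(time[-3:], np.log(conc[-3:]), 1)
lambda_z = -slope                 # elimination rate constant (1/h)
t_half = np.log(2) / lambda_z

print(f"AUC = {auc:.1f} ng*h/mL, terminal t1/2 = {t_half:.1f} h")
```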

Bioanalysis (BioA)

This area is typified by sample management and automated testing. Wherever possible, sample handling is automated, and the analytical techniques are validated and robust.

The measure of a BioA lab is how quickly and robustly it can process a sample from receipt to data publication. These labs frequently run in a GLP regulatory setting and produce data used in both preclinical and clinical development. BioA has been revolutionized over the past decade by the advent of lab automation combined with robust data management, driving measurable tenfold gains in lab efficiency.
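
One way to picture the receipt-to-publication flow is as an explicit state machine, which makes turnaround measurable per step; the states below are a hypothetical simplification, not a standard:

```python
# Schematic sketch: modeling the BioA sample lifecycle as an explicit
# state machine so turnaround time can be measured per transition.
# The states and transitions are illustrative only.
from enum import Enum, auto

class SampleState(Enum):
    RECEIVED = auto()
    IN_PREP = auto()
    ON_INSTRUMENT = auto()
    UNDER_REVIEW = auto()
    PUBLISHED = auto()

# Allowed transitions: each state may only advance along the validated path
ALLOWED = {
    SampleState.RECEIVED: SampleState.IN_PREP,
    SampleState.IN_PREP: SampleState.ON_INSTRUMENT,
    SampleState.ON_INSTRUMENT: SampleState.UNDER_REVIEW,
    SampleState.UNDER_REVIEW: SampleState.PUBLISHED,
}

def advance(state: SampleState) -> SampleState:
    """Move a sample to its next lifecycle state, rejecting invalid jumps."""
    if state not in ALLOWED:
        raise ValueError(f"{state.name} is terminal")
    return ALLOWED[state]

state = SampleState.RECEIVED
while state != SampleState.PUBLISHED:
    state = advance(state)
    print(state.name)
```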

The data and application requirements here are grounded in quality and robustness. The data from BioA is used by PK for producing study reports that are often included in regulatory submissions. Manual data capture and management is clearly a major risk for this area and should be avoided.

Formulations (Form)

Sometimes this domain is described as pre-formulations, but the essence of the scientific demands is common across all areas – and, indeed, across many industries.

Formulations requires the capture and management of every element that goes into a formulation: all the ingredients, all the unit operations used, and the combination and order of each element. This recipe concept is just as important as the ingredients. Changes in formulation can change the behavior of the drug product at the PK and PD level – so ensuring that all the critical aspects of how a new formulation is developed are captured and matched to the results of PK and PD testing is of utmost importance.
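
As a sketch of what recipe capture can look like at the data level – with illustrative field names, not any particular system's schema – consider:

```python
# Minimal sketch of a formulation 'recipe' model capturing ingredients,
# ordered unit operations, and step parameters for traceability.
# Field names are illustrative, not any specific system's schema.
from dataclasses import dataclass, field

@dataclass
class Ingredient:
    name: str
    lot_number: str      # raw-material traceability
    amount_mg: float

@dataclass
class UnitOperation:
    step: int                     # the order matters as much as the inputs
    action: str                   # e.g. "blend", "granulate", "compress"
    parameters: dict[str, float]  # e.g. {"speed_rpm": 250, "time_min": 10}

@dataclass
class Recipe:
    formulation_id: str
    ingredients: list[Ingredient] = field(default_factory=list)
    operations: list[UnitOperation] = field(default_factory=list)

recipe = Recipe(
    formulation_id="FORM-0042",
    ingredients=[Ingredient("API-X", "LOT-7781", 50.0),
                 Ingredient("lactose monohydrate", "LOT-1203", 150.0)],
    operations=[UnitOperation(1, "blend", {"speed_rpm": 250, "time_min": 10}),
                UnitOperation(2, "compress", {"force_kN": 12.0})],
)
```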

These demands make precise tracking and process-step capture, coupled with traceability, the critical requirement for the data management and scientific applications used by formulators. This allows changes in formulation to be correlated with changes in PK and PD. Here, we step into the realm of in-vitro in-vivo (IV/IV) correlation reporting.

Analytical development and execution (Analytical Dev)

There are two parts to the analytical services domain. One revolves around the development of new analytical assays that produce repeatable, robust and precise data. The other is the systematic execution of this analytical test at volume, usually in regulated (GLP/GMP) settings.

Analysts require very deep integration with analytical instruments and robust sample management – as a service organization, analytical execution is a process- and efficiency-driven domain. Other integral elements include how easily methods are validated and moved into an execution environment, and the speed at which new tests can be developed.

Process execution and instrument integration are critical, along with data context and accessibility for easy reporting to other departments. These examples typify the requirements in this domain.

Bringing it all together

Each area, as described above, has its own requirements for scientific domain support and data management. However, the trick is knowing what to do first and how to build towards a 'big vision' in which the whole preclinical domain is joined up and an organization can leverage the transformational impact of doing so. Each domain can be treated as an island, as long as the connection and data requirements for cross-domain collaboration are well understood and mapped (at the data level).

The question, then, is how to start: big or small? This is a matter of both perspective and available funding. But the key aspects of what will work for each area must be considered:

PD – demands a flexible data management layer, simple instrument data capture, and integrated curve fitting and statistics. Traditional spreadsheets are the predominant data management tool in pharmacology, so ease of use and the ability to deliver new point-assay templates quickly are key. However, the data needs to be accessible and re-usable by the other domains.

DM – demands a defined, limited catalog of assays with defined inputs and outputs. Spreadsheets are unsuitable here because they are difficult to automate. The data analysis needs automation, and the results need to be made available to automated decision tools so that automated progression of candidates is straightforward.

PK – demands flexible study setup and definition coupled with robust subject management. Data needs to be captured easily and integrated with data from BioA. In-study changes to the design and data capture are also common requirements. Study reporting should be as automated as possible, along with validated lab support (GLP).

BioA – demands a validatable (GLP) data capture and instrument integration environment. Sample management, analytical instrument integration, and links to analysis tools are critical, coupled with data provisioning to PK groups in both preclinical and clinical development for study reporting purposes.

Form – requires a validatable (GLP) recipe development and raw-material component data capture and management environment that can capture process steps and the analytical data associated with a given step/sample. The instrument integration elements are important, but full automation can perhaps be deemed of lower importance. The associated areas of solid-state sciences and ancillary 'formulation' domains all require good data capture for image/spectral and numerical data, and provisioning of this data with context for aggregate formulations reporting.

Analytical Dev – requires a validatable (GLP/GMP) analytical development and execution environment. Deep integration with analytical instrumentation and very robust sample management are both capabilities that need to be married in order to support both development and execution. Data needs to be provisioned to other systems and domains with context – spectra, numerical data, etc.

Best practices for choosing what to do and when

This is a tricky area – there is no one-size-fits-all approach to rolling out applications and data management tools in preclinical development, because each domain has its own specific requirements.

Pharmacology (PD), for example, can be introduced to data management easily. Most PD workflows are single-point experiments, and the incumbent data management solution is likely to be spreadsheet-based. Delivering capabilities that streamline experiment write-ups and data approval can quickly deliver business benefits and make lives easier, saving time on the order of hours per experiment.

Once this is complete, more intricate data importing from instruments and automated data analysis can be introduced, perhaps followed by automated data publication, which can deliver significant improvements in experiment time-to-completion and big improvements in cross-domain data reporting, e.g. PK/PD reporting. Both of these deliver hard ROI and also provide other important benefits around the quality of decision-making.

This start-simple approach will not work in BioA, where the lab process and the intricacies of data manipulation and reporting mean that you have to 're-implement' the process from the get-go; otherwise, lab productivity will suffer. BioA scientists do not just need to document their experiments – they also need to run and report on them.

This is true for many of the preclinical domains – simple experiment and study reporting is not going to deliver benefits to them or the organization – so it is critical to map out what will have an impact and then deliver that capability. However, this needs to be done with the holistic data value chain in mind, to ensure that the data silo problem does not persist in the 'to-be' state.
