The Shapeshifting Landscape of Scientific Informatics
How Legacy Technology Can Block the Road to Scientific Discovery
In nearly every work environment around the globe, digital transformations are changing the way individuals produce, interact, and engage with their work. Scientific research laboratories are no different. Big or small, laboratories are now creating, transforming, and leveraging data in unprecedented ways.
Scientific organizations need to keep up with this changing business landscape, and are under pressure to go to market faster with new, improved, and compliant products, whilst still conducting innovative research that delivers brilliant and impactful outcomes.
Mastering data and information is an essential part of any organization’s research. When the task is to find the next ‘big thing’ – from pharmaceuticals through to new food flavors and improved plastics – past practices often aren’t enough, meaning new approaches and technologies must be brought into the fold. The technology that empowers experiments and projects must be able to supply a solution for the problems of today, while also being capable of growing and enabling the promises of tomorrow.
Market and business factors
Changing market and business factors are affecting the entire dynamic of the research and development (R&D) process. In the pharmaceutical industry, for example, it’s easy to see the change that has taken place over the past decade with the rising dependence on contract research organizations (CROs), collaborative academic partnerships, and smaller specialized biotechnology players. To remain competitive, the big pharma and life sciences organizations have forged collaborations and strategic partnerships with these new players to identify and work on cutting-edge products.
This shift isn’t as new in the food and beverage industry, where collaborations with third-party suppliers, customers, and other speciality partners have been happening for years. Whilst collaboration or externalization in these industries is at different maturity stages, change is always being driven by complex, evolving business demands.
At IDBS, we have been working with scientists and researchers for nearly three decades – and with customers spanning across pharmaceuticals, biotechnology, agricultural sciences, chemicals, consumer goods, energy, engineering, food and beverage, and healthcare sectors – we’re well positioned to explain how new technology can be used to unblock the road to scientific discovery across these different industries.
Problems facing scientists today
Scientists today face a range of external and internal challenges, from outside market pressures through to the continuous technological evolution of scientific tools. The global enterprise laboratory informatics market has responded to new customer demands by extending its core competencies and deepening its point solutions. Despite this, many laboratories are now looking for additional help. Why?
Because adding more specialized depth to a singular point solution might not solve challenges that are bigger than, say, a LIMS, SDMS, ELN, or LES system. What we’re seeing is a landscape spanning different scientific markets riddled with pain points, but ultimately there’s one common thread: scientists are not being empowered with their own data.
The number of siloed tools in use (like those mentioned above) has created a vast network of disconnected data, causing inevitable headaches for CIOs and CTOs. At a time when there is pressure to reduce the number of vendors in their technology stacks – particularly while information technology (IT) resources are stretched thin – CIOs and CTOs are increasingly having to manage business-critical data across hundreds of standalone applications.
This challenge is so prominent that it has given rise to the ‘data lake’ concept. Data lakes are essentially free-standing data repositories where all applications can virtually deposit information. Data lakes are valuable because both the input and the output of data is generally application agnostic and can be made accessible across an organization.
While data lakes make data available quickly and efficiently to researchers, they don’t add value to scientific discovery efforts or bring harmony between technologies. Even with a data lake, those expected to discover and innovate are challenged by data inaccessibility, gaps in contextual workflow capture, and an absence of systems integration. The lack of unified technologies that talk to one another and allow for workflows across domains bogs down efficiency.
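The data-lake idea, and its limitation, can be illustrated with a minimal sketch (Python, with hypothetical names – no real product API is implied): any application can deposit records into a common store, but a record is only discoverable later if contextual metadata was attached at deposit time.

```python
from datetime import datetime, timezone

class DataLake:
    """Minimal application-agnostic store: any app deposits records;
    the contextual metadata attached at deposit time is what makes
    them discoverable later."""

    def __init__(self):
        self._records = []

    def deposit(self, source_app, payload, **context):
        # Context (experiment id, project, instrument) is optional here,
        # which is exactly why raw lakes often lack scientific meaning.
        self._records.append({
            "source_app": source_app,
            "deposited_at": datetime.now(timezone.utc).isoformat(),
            "context": context,
            "payload": payload,
        })

    def query(self, **context):
        # Return records whose context matches every requested key.
        return [r for r in self._records
                if all(r["context"].get(k) == v for k, v in context.items())]

lake = DataLake()
lake.deposit("LIMS", {"sample": "S-001", "purity": 98.2}, experiment="EXP-7")
lake.deposit("ELN", {"notes": "ran assay twice"})  # deposited with no context
hits = lake.query(experiment="EXP-7")
print(len(hits))  # only the contextualized record is found
```

The second record is safely stored yet effectively invisible to an experiment-level search – the storage problem is solved, but the discovery problem is not.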
Other challenges blocking scientific discovery include:
Old technology holding research back
We regularly see customer technology stacks assembled with best-in-breed point solutions, but these stacks are often created without a broader vision around data flow and workflow continuity. This creates both a significant amount of specialization and limitation all at the same time. We always end up asking our customers these questions:
- What parts of your laboratory have been built for future needs, integration, and scalability?
- Do you have the confidence that your systems and processes will be able to grow with you?
- How will your technology empower your ability to make scientific discoveries and capitalize on them?
Why do we ask? Because singular point solutions are usually designed to support only a single part of the R&D cycle. These legacy systems are mostly deployed on-premises with limited scalability and static features (until a big upgrade). It’s difficult to discover insights across workflows unless the point solutions can engage with one another to drive that next level of value. Another difficulty lies in the period after implementation, when a point solution has been built for purpose without the ability to adjust or change to meet an organization’s specific needs.
As researchers will know, the discovery processes of the past will not always be right for the future. It’s simply inefficient for chemists to do stoichiometry rendering in a specialized system, only to then create a .PDF file which can be uploaded to an ELN. These tools should not work outside of a scientist’s workflow.
Around the lab, research teams are expected to both capture data in a structured manner and collect it from the third-party research organizations they partner with. Scientists have become project managers, and the tools they use must allow for the creation and use of new science and technology.
Isolated silos of data
Data silos can be a challenging problem for both the users and IT support teams of enterprise laboratory informatics systems – particularly in relation to data accessibility and a lack of workflow interruption. Consider a researcher focused on developing a new beverage flavor. They are likely to be overseeing the development of hundreds of different flavor samples, using thousands of different raw materials and ingredients, and potentially creating hundreds of thousands of different iterations to track, along with an equal amount of scientific data.
This R&D work may be supported by a range of tools, like a corporate-wide inventory system, ELN, LIMS, LES, SDMS, and PLM – along with any internally built applications. Frequently, our customers are telling us these systems, which aren’t talking to one another, are creating isolated silos of data that cause extra work steps. This conundrum isn’t limited to food and beverage researchers. All science-based organizations are creating huge, critical sets of unstructured data collected by individual systems and tools, across a wide range of industries.
Repeated experiments and lost work
Have you ever been about to submit a new drug to a regulatory body and been unable to locate complete data from your experiments? Maybe, at the time of the experiment, it wasn’t a priority to capture and store certain aspects of the experimental data – or the technology to store it wasn’t commonplace and storage didn’t seem necessary. It might sound like naivety, or even carelessness, but it’s a very common problem.
One of our customers estimated that up to 20% of their experiments are repeated due to lost or incomplete data. Many organizations can’t even track the amount of repeated work that occurs. Depending on the size of your organization, this repeated work could be costing you millions.
Challenges to point solutions and singular domain tools
The failure of lab tools to keep pace with R&D challenges is causing organizations to look at new platforms that support a unified approach rather than specific applications to solve a specific problem.
While tools like Laboratory Information Management Systems (LIMS) can target certain operations to capture efficiencies, introducing a LIMS will not solve all your lab efficiency challenges. A LIMS will always have a place in the lab for managing data around specific workflows – but, as our customers tell us, there is a limit to their operational productivity. That cap is a problem. Many LIMS also use outdated technology platforms, cannot integrate with other systems, and bring a range of other usability and supportability issues.
Electronic Laboratory Notebooks (ELNs), another popular enterprise informatics platform, have long been a staple of the laboratory environment for researchers. It’s a technology that pulls organizations away from paper and supports them as they go digital with the collection and management of all of their scientific data. But like a LIMS, an ELN also has its limits. On a spectrum of laboratory informatics tools, a LIMS would sit at one end as the most rigid, inflexible, and specialized type of technology. An ELN would sit at the opposite end as the most open, untailored, and general. That openness brings its own set of problems, which can make it difficult for researchers to conduct specific procedures and feel confident capturing data for complex workflows.
It’s common to find large pharmaceutical organizations with many standalone applications, such as a handful of ELNs and tens of systems that provide LIMS-like capabilities. Disconnected data sources throughout the product research lifecycle do not encourage innovation and collaboration. Running a lab entirely with these singular tools can quickly turn into a disjointed nightmare for integration and slow the future growth of the lab – especially when the changing demands of the lab require data to be shared and leveraged as quickly and efficiently as possible.
Our vision of a solution and its key characteristics
Our deep domain knowledge as scientists has helped us understand that data mastery goes beyond management and storage. Data mastery is a vision of grasping complex workflows involved in producing valuable data, enhancing the workflows themselves, and knowing when to automate the junctures so data can be most intuitively surfaced to users when it’s most valuable. Data mastery should include:
- Intelligent data capture and entry throughout a workflow and experiment, by digitally capturing sample information and tying it to a specific experiment and project. This additional contextual and procedural information can be incredibly valuable at a later time
- Data traceability in a structured, consumable form that can be queried and discovered in seconds. Being able to review a developed cell line and trace the entire development cycle to understand which teams conducted the experiments, what samples were used, and what compliance was adhered to
- Reporting and surfacing of actionable, scientific data for everyone to leverage. A platform should be able to surface information intelligently for a lab director in the United States, for example, even when their organization has labs worldwide
- Automation in the laboratory between different instruments and electronic systems to save users from manual data entry or redundant actions – consequently improving compliance and data validity
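The last point – automation between instruments and electronic systems – can be sketched in a few lines of Python. This is an illustrative sketch only: the CSV layout, field names, and plausibility limit are hypothetical, since real instrument exports vary widely. The idea is that parsed, validated ingestion replaces manual retyping, and out-of-range values are flagged rather than silently recorded.

```python
import csv
import io

# Hypothetical plate-reader export; real instrument formats vary.
RAW = """well,absorbance
A1,0.512
A2,0.498
A3,9.999
"""

def ingest(raw_csv, limit=3.0):
    """Parse an instrument export into structured records, flagging
    values outside a plausible range instead of trusting retyped data."""
    records, flagged = [], []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        value = float(row["absorbance"])
        record = {"well": row["well"], "absorbance": value}
        # Route suspicious readings to a review queue for compliance.
        (flagged if value > limit else records).append(record)
    return records, flagged

records, flagged = ingest(RAW)
print(len(records), len(flagged))  # 2 valid rows, 1 flagged outlier
```

Even this toy version shows the compliance benefit: the suspect A3 reading is separated for review automatically, with no manual transcription step where errors could creep in.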
Lonza Biologics, a leading contract biologics development organization
Collaboration and externalization
The landscape of scientific research is changing. The days of all R&D activities being conducted in-house are ending. The number of partner research firms is growing, from contract research organizations (CROs) in the pharmaceutical industry to flavor and raw ingredient companies in the food and beverage area. Dependence on these research partners is becoming the new normal, and the technology tools involved must support it.
When an outsourcing partner provides experimental results, the contextual and procedural data often isn’t provided with them – either because it’s not considered critical, or because the technology in use doesn’t easily support outsourced collaboration. Our vision is for a comprehensive solution that empowers collaboration and externalized research as part of an overall lab informatics solution, not just another standalone system.
Any solution should:
- Enable consistent and complete data capture, regardless of the source. It should be able to define experiment and data models through templates that get pushed out to CROs
- Provide a secure, seamless way to push work requests out to partners, allowing them to conduct their science and push the results back structured and in the format needed
- Capture the contextual and procedural data needed for greater analysis of the experiments by your own researchers, but still within a secure, IT-compliant process
- Reduce set-up time for partners to begin their work, alongside a way to work with you to schedule and deliver as you need it
- Provide a better way to communicate and interact than traditional email and an ability to view experimental data in real time
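The first two points above – templated experiment models pushed out to partners, with results pushed back in an agreed structure – can be sketched as a simple validation step. Everything here is hypothetical (template name, field names, types); it illustrates the pattern, not any particular product’s API.

```python
# Hypothetical experiment template agreed with an external partner;
# results returned are validated against it before acceptance.
TEMPLATE = {
    "name": "solubility-screen-v2",
    "required_fields": {
        "compound_id": str,
        "solvent": str,
        "solubility_mg_ml": float,
    },
}

def validate_result(template, result):
    """Return a list of problems; an empty list means the partner's
    result matches the agreed structure and types."""
    problems = []
    for field, ftype in template["required_fields"].items():
        if field not in result:
            problems.append(f"missing field: {field}")
        elif not isinstance(result[field], ftype):
            problems.append(f"wrong type for {field}")
    return problems

good = {"compound_id": "C-42", "solvent": "DMSO", "solubility_mg_ml": 1.8}
bad = {"compound_id": "C-43", "solvent": "water"}

print(validate_result(TEMPLATE, good))  # []
print(validate_result(TEMPLATE, bad))   # ['missing field: solubility_mg_ml']
```

Because the template travels with the work request, incomplete results are caught at the point of return rather than months later, when the missing context forces a repeated experiment.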
NuSkin, a global personal care and nutrition products company
Growth and scalability
A robust, comprehensive solution must consider both growth and scalability. Some capabilities may not make sense for a smaller company today, but they should still be considered from the start: as a research organization evolves, introducing them at a later date may be impossible with an inflexible solution.
Solutions should also scale with the organization as it grows, from a technology standpoint, easily handling organic and inorganic growth such as an increase in the number of users, labs, sites, or countries.
Any solution should provide:
- Plug-and-go capabilities that can be activated quickly
- Scalability to match organizational growth and deployment demands
- Evolution to grow with your lab’s requirement complexity as scientific processes and innovation expands
Operationalizing scientific discovery
Making and capitalizing on scientific discoveries is the ultimate goal of any science-based organization. The technologies and tools used by these organizations should, above all else, help researchers accomplish that goal. Too often, scientific practices and systems get in the way of discovery and become a burden to work with. Innovation should be enabled and supported by modern lab technology and informatics tools.
The ideal solution handles data inputs, management, and outputs in a way that allows scientific discovery to occur and innovation to be realized by researchers. It should provide:
- An informatics data backbone that speeds up a scientist’s process and workflow
- Context captured alongside results at the point they are generated, improving the reproducibility of outcomes
- A view across the organization for individuals of all levels to understand the progress of scientific work, surfacing relevant information along the way
Top five global pharmaceutical organization
Our platform is built for the Lab of The Future
Our solution differs from our competition’s in that it is not a set of ELN features or a defined, rigid LIMS. It is a comprehensive data management and workflow solution, designed for scientists and researchers, with powerful capabilities and the flexibility to meet the unique, innovative demands of our customers as those demands grow in complexity.
The E-WorkBook Cloud is a scientific informatics platform that can scale from five licenses up to the tens of thousands with its cloud and hybrid deployment capabilities.
With it, a firm mastery of data can be reached for the capture, management, sharing, reporting, and surfacing of the scientific information needed to innovate and discover. Its capabilities are driven by our product modules, which are plug-and-play, grow with customer demands, and scale instantly.
Our modules work with one another to amplify the solution beyond their individual functionality, with the most value resulting from the full platform deployed in the lab. This approach puts scientists in a position to make discoveries and capitalize on their scientific work.
The E-WorkBook Cloud product modules include the Electronic Laboratory Notebook (ELN), Advance, Inventory, Request, Integrations, Connect, Biology, and Chemistry.
- The ELN is a best-in-class enterprise lab notebook that lets laboratories ditch paper-based practices, pushing scientists to go digital and capture, manage, share, query, and report scientific data on a common cloud platform
- Advance takes the ELN to the next level with its proprietary spreadsheet technology, providing a centrally managed ontology and standardization of data, as well as the ability to create scientific assay and workflow templates
- Inventory simplifies lab and sample management processes, empowering experiments to occur faster, letting inventory be tracked and reviewed quickly and efficiently, and improving the traceability and compliance of inventory
- Request enables teams to prioritize, schedule, and fulfill complex, multi-step internal and external work requests, ensuring efficient research and development operations, especially with large and externalized research teams
- Integrations provides an extensive set of out-of-the-box integrations and APIs, enabling easy integration with other scientific systems in the lab and cloud
- Connect provides a single environment that brings together scientific task management, research content submission and review, and communication between researchers
- Biology adds biomolecular drawing, visualization, and searching, providing enhanced biologics functionality and integrated biologic registration and inventory capabilities
- Chemistry adds chemistry drawing, molecular stoichiometry modeling, and chemistry search capabilities
The E-WorkBook Cloud empowers scientists.
The modular design of The E-WorkBook Cloud means new capabilities can be added as required and organizations can scale according to their needs, cut their support costs, launch new products faster and more efficiently, and focus on product innovation.
More importantly, the cloud platform is designed to encourage – and let organizations capitalize on – the scientific discoveries and advances that occur both internally and externally through partnerships and collaborations.