Times have changed from the heady days when sequencing a genome was a global endeavour that cost $3 billion. The latest NGS systems can do it for $10,000 in a few weeks – well, almost. The challenge in the move towards a better understanding of the genetic basis of disease and a more personalised delivery of medicines is now squarely centred on the associated informatics and data management. The Omics data is out there. We simply have to use it better.
I am lucky to visit many pharmaceutical, biotech, diagnostics and academic medical research centres and see data challenges play out in different ways. Any group working in translational and biomarker research has a mixture of gene expression, proteomics, GWAS, NGS and arrayCGH (to mention a few) capabilities that are used to analyse samples (or specimens for our US colleagues) for molecular insights.
What I see are scientists and clinicians creaking and groaning (all but literally) under the weight of data and expectations, because their infrastructure isn’t designed to support such an influx of information.
Most labs have a LIMS to track their samples but rely on file structures, emails and paper notebooks to track the exhaustive normalisation, QC, analysis and annotation required to get value from the raw data. Few systems register data files and results. Fewer still make them searchable and sharable to support more effective genomic collaboration.
What’s needed is a way to take genomic collaboration to the next level: a way to identify, store, manage, update and access all relevant Omics data – both inside and outside your organisation. That’s the only way that everyone (the patients, that is) will benefit from all the fantastic advances in molecular instrumentation and sequencing.
If you’d like to know what we’re doing to improve the management and use of Omics data for translational medicine, take a look at our recent webinar for the IDBS Biomolecular Hub.