Recent years have seen automated technologies, such as machine learning and AI, gain real momentum. But as these technologies grow in popularity, we need to keep our expectations realistic.
In an ideal world, a voice-activated computer could tell us within seconds whether a newly invented molecule will be able to cure Parkinson’s disease. An AI-enabled lab of the future could look like this:
“Will this molecule work against Parkinson’s?” the scientist would ask. The computer would respond: “This molecule has a 62% chance of succeeding against Parkinson’s in the female population with a missense mutation.”
Machine learning and deep learning are on the rise across the R&D space, including product development and manufacturing. However, achieving the scenario described above would require huge advances in technology.
So, how far is AI from fulfilling a scientist’s dream? Putting the advertised potential aside, we must carefully consider our expectations of AI and the role it can fulfil in our labs today.
We can already see advanced uses of AI in the life sciences: it can even support disease diagnosis by analysing images, test results and other data sets.
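To make the kind of prediction imagined above a little more concrete, here is a toy illustration only: a logistic-regression-style scorer that maps patient features to a probability. The feature names, weights and inputs are entirely hypothetical, invented for this sketch; a real diagnostic model would be trained on large, curated clinical data sets.

```python
import math

# Hypothetical, illustrative weights - NOT a real diagnostic model.
WEIGHTS = {"biomarker_level": 1.2, "age_normalised": 0.4, "mutation_present": 2.1}
BIAS = -3.0

def predict_probability(features: dict) -> float:
    """Logistic-regression-style score: a weighted sum of the input
    features passed through a sigmoid to yield a value in (0, 1)."""
    score = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical patient record
patient = {"biomarker_level": 1.5, "age_normalised": 0.8, "mutation_present": 1.0}
print(f"Estimated probability: {predict_probability(patient):.0%}")  # prints "Estimated probability: 77%"
```

The point of the sketch is that the confident-sounding "62% chance" in the imagined dialogue is just the output of a trained model, and is therefore only as trustworthy as the data and training behind it.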
However, AI and other automated tools need to be taught, and that can only be achieved with access to vast amounts of pertinent data: good-quality, well-governed data that can be accessed, analysed and applied within a trusted data infrastructure. For dreams to become reality there is still work to be done, relevant questions need to be asked, and the importance of data management should not be overlooked.