The externalisation train has left the station. Whether we know it or not, we are all on it. Academics, biotechs and SMEs have laid the tracks for major companies to travel through multiple international collaborations, accessing talent and making sure that the best and most cost-effective hands are on the job.
The world is now manufacturing R&D data for everyone else, all the time: an enlarging ecosystem of the big and small, young and old.
“So what?” you say. It’s all going to be fine. We take the best resources working in parallel and all with one aim. Then we look at the reality and reach for the Blue Mountain. What is being created is a wonderful, complex new internet of R&D, but without the detailed data and rich context required to communicate that science effectively.
Externalisation 1.0 was the concept of large companies reaching outside their walls to others for help. V 2.0 is about everyone effectively communicating scientifically and being able to access their entire data landscape. V 3.0 will be all about pre-competitive collaboration and truly open innovation. But that’s for another day. How do today’s groups communicate through the collaboration jungle right now?
The truth is that it’s very patchy, the equivalent of hand signals and tree markings. Tossing documents – or even (shock horror!) PowerPoint decks – into network shares is the lowest form of data communication. These files are subject to subsequent transcription errors and cannot be drilled down into for that vital establishment of scientific credibility. This is miles away from the expressive language that defines clear, challengeable scientific debate, and way below the standards set for internal scientific communication.
Data is the asset, not the structured report – which can be generated ‘just-in-time’ if you have all the data available. With the underlying data you can make better decisions and secure collaboration IP more effectively. The more data you can work with and analyze, the better the insight; the more you can dig into it, the more you trust it.
As Michael Palmer from ANA put it: “Data is just like crude oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic and chemicals to create a valuable entity that drives profitable activity. Data must be broken down and analyzed to have value.”
That’s precisely why a data-centric approach to collaboration is so vital. Wherever, however and with whomever you choose to share it, it’s vital to be able to thoroughly challenge and drill down into data to establish real credibility.
Unfortunately, the data – the assets generated by the collaboration – are often an afterthought in the negotiation process, and the groups needed to enable effective data sharing often learn of the partnership after the event, leaving email, secure network shares and other document-driven solutions as the fast-to-implement choice.
Thankfully, hostable Platform-as-a-Service collaboration solutions such as our E-WorkBook Suite are already showing that they can eliminate on-site data storage, dramatically improve the quality of collaboration, improve IP capture and reduce bottlenecks for all parties. Add to that a sophisticated, highly granular security model that allows the right people access to the source data and the ability to manage multiple collaborations in one place and you’re really getting somewhere.
Even if a collaboration terminates, having your data accessible in a secure collaboration cloud – rather than scattered throughout your internal warehouses – can save huge headaches.
That solid bedrock of secure, partitionable experimental data and interpretation can only lead to better communication and trust. It also drives improved decision-making and more effective project control. And that – in any language – is surely what good collaboration with great data is all about.