One of the goals of Data in Biotech is to build a community of professionals interested in how data can impact biotech organizations, and to create a space for discussion. Only one month in, we are already seeing that community grow: one of our previous guests, Markus Gershater from Synthace, put us in touch with Jesse Johnson so we could get his take on how biotechs can make the most of their data.
Jesse Johnson began his career as an academic mathematician before transitioning to software engineering at Google. He then moved into the biotech world, working at startups like Cellarity and Dewpoint Therapeutics in roles that combined data science and software engineering. Currently, Jesse is an independent consultant, helping early-stage biotech startups better manage their data.
In the podcast, we spoke with Jesse about the challenges faced by biotech startups in managing their data, the potential of automation and machine learning, and the role of software and hardware vendors in the biotech industry. Here are our top highlights:
Further reading: As is our Data in Biotech tradition, we asked Jesse for his reading recommendations. He suggested Kaleidoscope's blog post on the different phases that biotech startups go through, and Benn Stancil’s recent newsletter on why big solutions can be hard to adopt and why organizations often need only 10% of what a solution offers. In addition, Jesse publishes a weekly ‘Scaling Biotech’ newsletter that you can subscribe to here.
One of the most interesting points Jesse raised in the podcast was the cultural difference between wet lab and dry lab teams, and how it drives their differing perspectives on data capture throughout the experimentation process.
Biologists, by nature, are used to following complex processes through their interaction patterns and using experiments to discern which mechanisms drive those interactions. They are very much in the weeds of determining what is happening to a particular cell in a particular environment, with expectations about what will happen under different conditions. They tend to view each experiment in isolation rather than seeing how it relates to experiments conducted by other team members.
Data teams, by nature, are used to integrating information from multiple sources and converting the data into a format that can be easily explored, analyzed, aggregated, and presented to different stakeholder groups. They approach problems by looking for the elements that can be represented efficiently across all experiments and all categories of experimental metadata. They typically see each experiment as one instantiation of a larger experimental data model, into which every experiment should fit neatly.
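To make that mindset concrete, here is a minimal, hypothetical sketch of what such a shared experimental data model might look like in Python. The field names, assay types, and example values are illustrative assumptions on our part, not a schema Jesse prescribed; the point is simply that every experiment becomes one record in a common structure that can be aggregated across the whole team.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, illustrative schema: each wet lab experiment becomes one
# instance of a shared model, so results can be compared and aggregated.
@dataclass
class Experiment:
    experiment_id: str
    scientist: str
    run_date: date
    assay_type: str                 # e.g. "viability", "expression"
    conditions: dict[str, str]      # experimental metadata, e.g. {"cell_line": "HEK293"}
    measurements: dict[str, float]  # readouts keyed by metric name


def mean_readout(experiments: list[Experiment], assay_type: str, metric: str) -> float:
    """Aggregate one metric across every experiment of a given assay type."""
    values = [
        e.measurements[metric]
        for e in experiments
        if e.assay_type == assay_type and metric in e.measurements
    ]
    return sum(values) / len(values) if values else float("nan")


if __name__ == "__main__":
    runs = [
        Experiment("EXP-001", "Ana", date(2023, 5, 2), "viability",
                   {"cell_line": "HEK293"}, {"percent_viable": 82.0}),
        Experiment("EXP-002", "Ben", date(2023, 5, 9), "viability",
                   {"cell_line": "HEK293"}, {"percent_viable": 76.5}),
    ]
    print(mean_readout(runs, "viability", "percent_viable"))  # 79.25
```

A wet lab scientist, by contrast, would naturally focus on everything about EXP-001 that this schema cannot capture, which is exactly where the tension described below comes from.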
In the context of those differences in culture, mentality, and analytical approach, it is easy to understand how conflicts emerge between wet lab and dry lab scientists. Yet these teams need each other to be successful. A wet lab scientist can see an anomaly and immediately generate hypotheses about its origin. A data team member can devise efficient computational approaches to measuring phenomena in a dataset that would take a wet lab scientist far longer to work through by hand. Having an expert who can navigate these cultural differences, and help each stakeholder group feel their concerns and needs are being addressed, enables R&D organizations to get out of their own way and collaborate on their common goals.
If you're interested in discovering how your organization can unlock the value of data and maximize its potential, get in touch with CorrDyn for a free SWOT analysis.
Want to listen to the full podcast? Listen here: