Design, Build, Test & Learn workflow
The Biofoundry ensures effective development and implementation of strain design procedures in our design-build-test-learn (DBTL) workflow using big data and data science.
Designing, building, testing and ultimately learning from microbial cell factory research requires the right tools for choosing the best strategies. Our goal with the Biofoundry is to optimise the development of high-performing, industrial-grade microbial cell factories for the production of sustainable products for sustainable lifestyles.
The Biofoundry also aims to develop the tools to make custom-designed genomes using big data. Amongst other things, the Biofoundry develops a software suite that will include genome reconstruction tools, including gene deletion predictions, proteome calculations and enzyme state calculations, as well as sequence testing and prediction tools.
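To make the idea of gene deletion prediction concrete, the toy sketch below (hypothetical, not the Biofoundry's actual software) uses flux balance analysis: maximise a "biomass" flux subject to steady-state mass balance, then re-solve with each gene's reactions forced to zero to predict the growth impact of the knockout. The model, gene names and capacities are invented for illustration.

```python
# Toy gene-deletion prediction via flux balance analysis (illustrative only).
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix: rows = metabolites (A, B), columns = reactions.
# R1: -> A (uptake)   R2: A -> B (geneX)   R3: A -> B (geneY, low capacity)
# R4: B -> (biomass)
S = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0, 1.0, 1.0, -1.0]])
bounds = [(0, 10), (0, 10), (0, 5), (0, 1000)]
gene_to_reactions = {"geneX": [1], "geneY": [2]}

def max_growth(bounds):
    # linprog minimises, so negate the biomass column (R4) to maximise it,
    # subject to steady state: S @ v == 0.
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

wild_type = max_growth(bounds)
for gene, rxns in gene_to_reactions.items():
    # Knock out a gene by closing the bounds of the reactions it encodes.
    ko = [(0.0, 0.0) if i in rxns else b for i, b in enumerate(bounds)]
    print(f"{gene} knockout: growth {max_growth(ko):.1f}"
          f" (wild type {wild_type:.1f})")
```

In this toy network, deleting geneX halves growth (the isozyme geneY has lower capacity), while deleting geneY has no effect; real genome-scale predictions follow the same pattern with thousands of reactions.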
The Biofoundry, formally established in its current form in 2020, has evolved from the DBTL iterative workflows developed at the Centre during 2011-2019. The Biofoundry is the Centre's largest field of activity, in terms of cross-organisational involvement. It integrates ‘wet’ and ‘dry’ biology, big data, machine learning and artificial intelligence to address basic research questions aimed at improved understanding of relevant genetic, biochemical, and physiological characteristics of production strains. It also addresses translational research on optimal production strain procedures and their use.
The Biofoundry’s working organisation is formally categorised into five sets of activities:
1) Design (Genome Design): deploy computational tools towards prospective design and analysis of platform strains
2) Build (DNA Foundry and Adaptive Laboratory Evolution (ALE)): generate, screen and validate DNA and cell constructs
3) Test (Big Data Engine): generate large, comprehensive datasets to support the big-data-driven Learn and Design steps
4) Learn (Genome Analytics): develop genome-scale analytical methods and tools
5) Informatics Platform: the four domains above are underpinned by a common Informatics Platform that manages data all the way from collection and processing to analysis and knowledge management
Explore our workflow in the boxes below.