03 March 2026 | Tuesday | Expert Opinion
As artificial intelligence becomes increasingly embedded in formulation science and early clinical planning, the industry faces a dual challenge: accelerating timelines while safeguarding scientific rigor, patient safety, and regulatory trust. In this BioPharma Boardroom feature on The Integration of AI into Formulation Development, Andy Lewis and Christine Allen explore how collaborative AI-driven platforms are reshaping formulation design, extracting more value from sparse early-stage datasets, and reducing API consumption — all while maintaining human oversight at every critical decision point. From data integrity and bias mitigation to evolving global regulatory frameworks, they outline how AI can serve as a powerful decision-support tool that enhances — rather than replaces — scientific expertise in early drug development.
As AI becomes more embedded in formulation development and early clinical planning, how is Quotient Sciences ensuring transparency, accountability, and ethical oversight in how AI-generated insights are used to guide scientific and clinical decisions?
Our work with our partner Intrepid Labs has demonstrated that their proprietary machine learning (ML) platform accelerates formulation development while also generating deeper insight into the relationships between composition and performance. Because the model learns across a broader formulation design space, it enables exploration of options that may be overlooked by traditional, experience-led approaches, thereby reducing the impact of human bias.
This approach can reduce consumption of API, which is often in short supply in early development, strengthen decision-making, and provide additional confidence as molecules transition into and progress through clinical development. This collaboration reinforces an important principle: AI is a decision-support tool and not a replacement for subject matter experts.
To ensure transparency, accountability, and ethical oversight, we use AI within clearly defined constraints, with comprehensive documentation of inputs, assumptions, and outputs. Subject matter experts remain in the loop at every decision point to ask the right questions, interpret results within context, and maximize real-world impact.
Machine learning platforms rely heavily on historical and experimental data. What measures does Quotient take to ensure data quality, minimize bias, and maintain scientific rigor when using AI to support formulation and early-stage clinical development?
We use machine learning across several areas of our business. In formulation and early clinical development, we utilize the approach developed by Intrepid Labs. This approach performs well even with relatively sparse datasets, which is critical at this stage when API is limited and available data on the drug substance, drug product, and in vitro and in vivo performance are still emerging.
The type and quality of data used to train the model are paramount. We place strong emphasis on collecting the most informative dataset. Additionally, it is essential that the model is trained on both positive and negative results to effectively learn how to optimize toward the desired target.
This was a key lesson for us: our formulators had to adapt to a different way of working, allowing the AI to learn while knowing when to redirect it toward the problem at hand.
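The learn-from-every-result principle described above can be illustrated with a toy closed-loop optimizer. This is purely an illustrative sketch, not Intrepid Labs' Andromeda platform: the kernel-regression surrogate, the two-component "formulation" space, and the simulated dissolution response are all assumptions invented for the example. The point it demonstrates is that poor results inform the model just as much as good ones, steering the search away from unproductive regions of the design space.

```python
import numpy as np

def surrogate_predict(X_train, y_train, X_query, length_scale=0.2):
    """RBF-kernel regression: each prediction is a distance-weighted average
    of ALL observed results -- failures included -- so bad outcomes push the
    search away from poor regions just as successes pull it toward good ones."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * length_scale ** 2))
    pred = (w @ y_train) / w.sum(axis=1)
    density = w.sum(axis=1)          # low density = poorly explored region
    return pred, density

def propose_next(X_train, y_train, candidates, explore=0.3):
    """Choose the candidate balancing predicted performance against novelty."""
    pred, density = surrogate_predict(X_train, y_train, candidates)
    score = pred + explore / (1.0 + density)
    return candidates[np.argmax(score)]

# Toy "experiment": a simulated response peaking at composition (0.3, 0.6).
def run_experiment(x):
    return float(np.exp(-10 * ((x[0] - 0.3) ** 2 + (x[1] - 0.6) ** 2)))

rng = np.random.default_rng(0)
X = rng.random((4, 2))               # a few seed formulations, good and bad
y = np.array([run_experiment(x) for x in X])
candidates = rng.random((500, 2))    # pool of candidate compositions

for _ in range(15):                  # closed loop: propose, test, learn
    x_next = propose_next(X, y, candidates)
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

best = X[np.argmax(y)]               # best formulation found so far
```

In practice the surrogate, acquisition rule, and experiment would be far richer, but the loop structure — propose, manufacture, measure, and feed every outcome back into the model — is the workflow change the formulators had to adapt to.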
AI has the potential to accelerate timelines significantly. How does Quotient balance speed with responsibility, particularly in ensuring that AI-supported formulation decisions meet safety, validation, and ethical standards before progressing into clinical trials?
We use AI to identify formulation compositions and/or processing parameters that yield drug products meeting defined quality specifications, and there is always human oversight of any drug product designed using AI. Once identified, these drug products are scaled up, manufactured to cGMP, and subjected to the same QC checks as any other drug product to ensure they are suitable for clinical evaluation in patients.
With regulatory agencies increasingly scrutinizing AI use in drug development, how is Quotient preparing for evolving regulatory expectations, and what role do explainability and auditability play in ensuring compliance and sponsor confidence?
AI has so many potential applications to accelerate and/or improve drug development that it is very difficult to predict where it will be applied and the direction the field will take. It is therefore understandable that detailed guidance on its use is not yet available, and that regulations will evolve to address each use case.
In the UK, the MHRA considers it most appropriate to apply regulatory rules for AI in line with those for medical devices and has established a National Commission to further define how it should be regulated. The EMA and FDA have also recently jointly published 10 guiding principles for the use of AI in medicines development, providing an overarching framework focused on human centricity, safety, ethics, data integrity, and risk management – themes common across much of medicines and medical device regulation.
How these principles will be applied for individual use cases will no doubt need to be determined on a case-by-case basis as they emerge. However, it is clear that AI is on the radar of regulators around the world, and that early, open dialogue during the development of AI tools will support both the emerging technologies and the regulations that govern them.
Protecting sponsor data and intellectual property is critical when using AI-driven platforms. Could you explain how Quotient’s secure, project-specific AI environments maintain confidentiality, data integrity, and ethical data stewardship?
The security of our clients’ data is paramount, whether it is processed using AI or not. The AI solution we are developing with Intrepid is embedded within a secure IT infrastructure, with each client’s project firewalled. The ML model (Andromeda) is entirely naïve (untrained) at the start of each project and learns by “doing”. There is therefore no risk of cross-contamination between clients. However, if a client places multiple projects with us, we can securely store their data and train the model across multiple programs to potentially improve predictability.
From your perspective, how will AI reshape early-phase clinical trials and formulation strategies over the next five years, and what ethical principles should guide its adoption to ensure it strengthens trust, patient safety, and scientific decision-making?
I am incredibly excited by the potential of AI in drug development. There have been some great successes in drug discovery, clinical trial design and patient recruitment, for example, but there are many more opportunities to improve what we do to bring better medicines to patients faster.
In early development, I am already seeing the potential to reduce drug product development timelines, decrease API demands, and improve preclinical decision-making. I anticipate this will extend into translational medicine, building on established in silico tools to help bring innovative medicines through proof-of-concept Phase 2 trials more rapidly, while improving decision-making and de-risking later development, from both a CMC and clinical perspective.
© 2026 Biopharma Boardroom. All Rights Reserved.