Advanced manufacturing and industrialisation of the CGT sector – What’s next?
We sat down with Katy, our Chief Scientific Officer, to discuss the recent ARM event, what the mood was like and the hot topics of discussion. Katy shared her thoughts on why robust analytics are key to the next generation of CGT, how automation has the potential to change CGT process development to better aid patient outcomes, and how the technology is finally catching up with the sector's ambition.
What were your takeaways from the ARM meeting?
Given that the ARM meeting was a workshop focused on industrialisation and advanced manufacturing in the CGT space, it was really exciting to get a tour of the test beds over at the [CGT] Catapult. There was a lot of really cool tech on display, and it was nice to see our prototype, the Development Engine, featured alongside all of that really disruptive technology.
I think it's clear that our ability as an industry to meet the growing demand for delivering CAR-Ts to the patients that need them is absolutely going to rely on embracing the technology that's becoming available. It's only going to be achieved by the adoption of all of these different technologies into the space. That wasn't just evident in the new equipment that was available, but also in the discussion around AI and big data. The good news is that machine learning and AI are absolutely going to revolutionise the way we do things, but they also need a personal touch. We're not just going to hand it over to the machines and let them have at it. It always needs a person reviewing, asking the right questions, and making sense of what's happening.

What was obvious is that the advances are only going to be as good as the data feeding them. So we need good, robust, reproducible data feeding in if we're going to make any sense of all this. We need that alignment and organisation of that data, and the more data we can get [the better], especially in the autologous setting, where you've got so much variability being driven just by the patient cells themselves. Then there's the ability to have so many more process runs and data points being fed into these algorithms. That's how we really start making advances.
And that happily fits with the vision of MFX, which is obviously about starting to generate those advanced data sets at the small scale that translate directly to the large scale. This means you can really rely on those data and start generating them early in your process development journey.
We know that it's been a bit difficult for cell and gene therapy over the last couple of years. How was the morale at ARM?
It was acknowledged that there have been recent setbacks and that market conditions are clearly tough. I think the mood was reflective, but I generally think there was optimism. Clearly, with all the amazing technology on display and the talk of the future, there's still a lot of success happening in the sector, especially with the successes of the CAR-T trials and the advancement of those going forward. Everyone felt that lessons can be learned from some of the recent failures that can still move the industry forward as a whole, and in a positive way. So, people felt that there were reasons to be optimistic.
On the potential for automation to take us into the next band, or the next scale, of generation of these therapies – the technology is finally catching up with the ambition, so now we're able to implement it. It's not just the cool kit that was on display; it's clear that the behind-the-scenes infrastructure must be there as well: the data collection, the integration of the data, and the output of all the different machines involved. Pulling that all together into an EBMR (electronic batch manufacturing record) to be able to release those products in a timely fashion with a low threshold for involvement from the QP (Qualified Person). Otherwise, we're just going to get to a point where we've got all these products waiting to be released and they can't be.
So, there were some very positive discussions with the regulators, including about the potential for hub-and-spoke type mechanisms and decentralised manufacturing solutions. Everybody was pulling together, knowing that there is going to be this huge demand, and asking how we are going to meet it. I think even given the current landscape, the view was very optimistic about being able to make these products available to the patients that really need them.
We see a big push for manufacturing automation, both for an integrated solution that can do the whole process or for robotisation of legacy processes, as well as a lot of general manufacturing automation technology. Where do you see this fitting in and what are the pros and cons of each approach?
I think the benefits of automation are clear. For so long we’ve had these magicians who go into the lab, and everyone’s got a quirky little thing that they do slightly differently. Even following the SOP and with all the training in the world, there is still going to be that variability between different operators. And it’s not good or bad, everyone can make the product. It’s just going to be done a slightly different way each time. And obviously you layer that onto the variability of the incoming material and each of those things just compounds. So, automation is really going to solve quite a lot of that.
There’ll be a lot less digging down if a product does fail – going back and seeing “was a mistake made, was the SOP followed” etc. You don’t need to do that anymore because if you’ve validated your automation, it’s going to do the same thing every time. So, then it comes down to the starting material, which is a much easier investigation, less time consuming, and everybody can just move on.
That increase in product consistency is obviously going to lead to an improvement in the number of products that can be released, a reduction in deviations and a reduction in investigations. So not only is there a reduction in manpower in the actual suite, there's also a reduction in the manpower needed to run around when things go wrong. That's why we want to move to automation.
Ten years ago, everybody thought you'd have a black box: you'd stick cells in at the beginning, you'd take your cell product off at the end, and we're all happy and we can go home. I think we're all realising that's not the reality, and the ability to connect those different unit operations with robotics technology is an exciting advancement. The benefit of that is that you can add in new technology as it becomes available. So, you've got something upfront for the processing of the blood product when it comes in, and then you've got something else that you want to grow your cells on, or something else that you want to use to select your cells, or something else at the end when you want to freeze the cells. Technology is constantly advancing, and I think if you just built all that technology into your black box and pressed go, then you'd never change it, because you'd have to revamp that whole black box. And because everybody does things slightly differently, there is not one black box provider that's going to work for everyone.
A drug developer isn't going to develop their own technology; they're there to develop a drug. So, I think we've ended up in a world where we've gone, "okay, now we have that module and it's linked by the robotics". And that's great, because it means that if I find a better way of expanding my cells, I can validate my new piece. I don't have to do everything else. It's just that piece that comes out and goes back in. I retrain my robot and then away we go. It's still a big endeavour to change your process, but as innovators in the space you want the ability to use new technology as it comes along. You don't want to lock into a certain way of doing things and still be doing it 30 years later, because then you're not going to advance.

So as the chief scientific officer in charge of process development at a biotech company, what are some of the major pain points that you’ve encountered in your job?
It was always our ability to have relevant small-scale runs that enabled optimisation of the process. Being in charge of process development, you don't rest on your laurels. You always think there's going to be a better way of doing something and you want to investigate that. There are certain very manual systems that do enable you to do those small-scale runs. But if you wanted to go into any kind of automated large-scale system, you absolutely had to reoptimise because it was different. Therefore, you didn't always know that any optimisations you made at the small scale were going to translate into any of those larger scale systems.
Having to reoptimise obviously takes time and resource, and these very large-scale runs are just very expensive to run. Very expensive in terms of cell number, so you can't do many runs side by side because you use all of your donor cells – the donor being a huge driver of variability in the system – and in terms of the amount of data that you can generate. If you do three runs, that's your validation. You've done it, you've got the cells that you want to have at the other end, but you can never use those for full optimisation of all the parameters and for that exploration of "are you really making the best product you could make?"
There was also no ability to monitor what was happening. You had things go in, you looked at the products that came out the other side, and that is a perfectly acceptable way to optimise your products. But if you want to start modelling the process and looking at the key decision points within that process, or the key parameters that are important and related to outcomes, you need to know more about what's happening during your process, not just what you put in and what you got out the other end. Things like monitoring pH, oxygen, lactate and glucose. To know what's going on in that pot over time: how quickly are your cells dividing at different time points? Would it be better to feed at different times within your process? Those are things that we always knew were important, but now we're getting to the point where we want to be able to refine our processes in order to make the right decisions for the cells.
In an ideal world, what would you want to be able to do in PD that you cannot do currently with the existing technology?
For me it was certainly to gain those insights about what's happening between seeding the cells and harvesting the cells. It's the ability not only to see that but to control it. At the large scale now, you can control most things. But how do you test which of those parameters are the most important for the outcome of the cells at the other end? What's happening during that cell process? The oxygen availability, the pH of your cells, the lactate and glucose availability and production. I think all those things will give us a much better insight into the kinetics of what's happening during the process, allow us to generate enough data to allow intervention, and make sure [we've identified] the key parameters. Because all these parameters do not operate in a vacuum. They're not independent and they will interact with each other. So, having the ability to do more parallelised experiments will then help you apply more advanced mathematics than my small brain can compute, to unpick those things that really react with each other within the whole cake mixture that you've got going on when you're expanding your cells. I think that will move us into the next phase of understanding what's happening in our process.
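As a rough illustration of the "more advanced mathematics" Katy is pointing at, the sketch below fits a simple main-effects-plus-interaction model to made-up parallel run data. The factor names (oxygen set point, feed interval) and all of the numbers are hypothetical placeholders, not MFX or process data.

```python
# Minimal sketch: fit main effects plus a two-way interaction to
# hypothetical parallel small-scale run data (all values illustrative).
import numpy as np

# Each row is one parallel run: [oxygen set point (%), feed interval (h)].
runs = np.array([
    [5.0, 24], [5.0, 48], [20.0, 24], [20.0, 48],
    [5.0, 24], [20.0, 48], [12.5, 36], [12.5, 36],
])
# Hypothetical response: fold expansion at harvest for each run.
expansion = np.array([42.0, 35.0, 55.0, 61.0, 40.0, 63.0, 50.0, 52.0])

# Centre and scale the factors so the coefficients are comparable.
scaled = (runs - runs.mean(axis=0)) / runs.std(axis=0)
oxygen, feed = scaled[:, 0], scaled[:, 1]

# Design matrix: intercept, two main effects, and their interaction.
X = np.column_stack([np.ones(len(runs)), oxygen, feed, oxygen * feed])
coef, *_ = np.linalg.lstsq(X, expansion, rcond=None)

for name, value in zip(["intercept", "oxygen", "feed", "oxygen x feed"], coef):
    print(f"{name:>14}: {value:+.2f}")
```

A real study would use proper DOE software, replication and diagnostics; the point is only that interaction terms, not just single parameters, become estimable once you can run many conditions in parallel.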
In an ideal world, what would you do with that data? If you were able to generate that in-process data in high quantity and quality, what would be the use for it in process development?
The use of those data is that you can apply them when you scale up. So, if you are scaling up to any of the larger-scale runs, you now know which parameters are important and which boundaries to stay within for any given analyte that you're looking at. You've measured it at small scale; you know what impact it has on the outcome of your process. Therefore, you're at a much better starting point when you begin those large-scale runs: you know what you are aiming for and what effect each one of those parameters has on your process.
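To picture "knowing which boundaries to stay within" in a small way, here is a minimal, hypothetical sketch of checking in-process readings against ranges derived from small-scale runs. The analytes, limits and readings are invented for illustration, not recommended values.

```python
# Minimal sketch: compare large-scale in-process readings against boundaries
# derived from small-scale development runs (all values illustrative).
SMALL_SCALE_BOUNDS = {
    "glucose_g_per_L": (1.0, 4.5),
    "lactate_g_per_L": (0.0, 2.0),
    "pH": (6.9, 7.4),
    "dissolved_oxygen_pct": (30.0, 80.0),
}

def check_reading(analyte: str, value: float) -> str:
    low, high = SMALL_SCALE_BOUNDS[analyte]
    if low <= value <= high:
        return f"{analyte}: {value} within [{low}, {high}]"
    return f"{analyte}: {value} OUTSIDE [{low}, {high}] - review before proceeding"

# Hypothetical day-3 readings from a larger-scale run.
day3_readings = {"glucose_g_per_L": 3.2, "lactate_g_per_L": 2.4, "pH": 7.1}
for analyte, value in day3_readings.items():
    print(check_reading(analyte, value))
```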
Often before you start a clinical trial, you have a set of product quality attributes that you think are going to be driving the outcomes for the patient. Then the clinical trial starts, and you realise that maybe some of those are not so important, while others that you didn’t think about may be more important. How does this data help you when you are honing your product attributes?
Going back to that early data – you have obviously optimised around one thing: I want more cells, or my cells need to be able to proliferate when I restimulate them, or my cells need to be able to make cytokine to X extent. Are those things important? Yes. Clearly the cells all must do that when they get in, but which one of those is the most important? Only testing it in an actual person is going to tell you that. And then to be able to go back and say, it wasn't X, Y, Z; it was A and B that were the two things that were very much associated. Now, that obviously relies on you having a translational program where you're measuring X, Y, Z, A, B, and C in your cell product, which I'm a big advocate of. Definitely test everything, even if it's not a release criterion. But have that list of exploratory endpoints for your product, not just for the patients, but for your product. It's about looking at your product from all the different angles, but then being able to go back to that wealth of data – and it must be well characterised and well organised. That's again where automation comes in, and being able to have those runs saved in a way that you can analyse them at a later date. Then you can go back and see, "Okay. Yes, we did these rounds of experiments to drive it in this direction. But we also know that when you change this, it seems to get more of this other thing, and now that turns out to be more important". So, we can go back to those small-scale experiments and drive our process development in the direction of what we now know is more important to the efficacy of the product in the patient.
What are the advances in cell and gene that excite you the most?
I don't want to keep banging on about it, but I think it is the automation and the data that we can generate now. Obviously, as drug developers, we will have these integrated tools that really allow us to fully optimise our process earlier and quicker, when it's always a race against time to get to the clinic with these products. Knowing that you're going in with the best possible process, given the data that you have at the time, is going to make all the difference to hopefully making these things work first time, or without much refinement. That's going to be important for the sector, it's going to be important for confidence, and it's going to be important to move forward.
I also think the advances in all the support systems matter – the EBMR data integration, being able to pull in all the data from the different machines that you've used during the process. And especially starting to think about that in the setting of QC, which always seems to get left behind a little bit, but that's just as important for being able to release your product. So, all those process analytics, both in-process and at the end of the process, need to be integrated. Having EBMRs that can pull those things in, with a layer of AI or something on top that can highlight where things are out of spec and where the QP needs to come in to double-check things that look like outliers – almost moving to a release-by-exception model. Those things are going to make a huge difference, because otherwise we're going to end up with lots of products waiting to be released. We've made them, but now we just don't have the infrastructure to release them.
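As a toy picture of what the flagging step in "release by exception" could look like, the sketch below compares batch results against specifications and surfaces only the out-of-spec items for QP review. The attribute names, specification limits and batch values are all invented for illustration.

```python
# Minimal sketch of release-by-exception: flag only out-of-spec results
# for QP review. Specifications and batch results are illustrative only.
from dataclasses import dataclass

@dataclass
class Spec:
    name: str
    low: float
    high: float

SPECS = [
    Spec("viability_pct", 70.0, 100.0),
    Spec("viable_cell_dose_x1e6", 50.0, 200.0),
    Spec("endotoxin_EU_per_mL", 0.0, 0.5),
]

# Hypothetical results pulled from an electronic batch record.
batch_results = {
    "viability_pct": 91.0,
    "viable_cell_dose_x1e6": 48.0,
    "endotoxin_EU_per_mL": 0.1,
}

exceptions = [
    (spec.name, batch_results[spec.name])
    for spec in SPECS
    if not (spec.low <= batch_results[spec.name] <= spec.high)
]

if exceptions:
    print("QP review required for:", exceptions)
else:
    print("All results within specification - candidate for release by exception")
```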
So, thinking more broadly, what does it take to get a new generation of medicine to a whole population of patients?
I think affordability – it must be affordable. Therefore, you need that layer of automation. We moved away from the grade A and B cleanroom mentality to "let's be in grade D as much as humanly possible", and now we're moving again to "the more we can automate as part of this process, the fewer interventions people are required to make". That's going to increase our throughput, increase consistency, and free those people up to be doing something else.
I don't think it's going to wipe out all the jobs in the CGT space; I just think those people will be able to make more. Because just taking what we've got now and amplifying it over and over again isn't scalable, it won't make things more affordable, and it certainly wouldn't allow us to treat, and give accessibility to, as many patients as possible who could really benefit from this. Ultimately, that's why we're all here: because we want to make people better from things for which there's otherwise no solution. So, I think that affordability is achieved through quicker development timelines, bringing things through quicker, and bringing through optimisations or next generations within a pipeline quicker or more robustly. I think that will also lower costs. We talked about the automation piece, but I think the logistics also become very important. There was a lot of discussion [in the workshop] about the potential for decentralised manufacturing, which I think can certainly help in the early phases in being able to get that high throughput for clinical trials.
But we're going to need to take another look at the QC burden and what it will take to release these therapies, because I don't think that will be sustainable when you reach full GMP at a BLA-type level. It all comes down to affordability. So again, that back-room piece of having everything integrated and pulled together in an automated fashion, and easing that paperwork burden – that's where we need to go. It's nice to see that there are so many solutions coming out, and everybody was really jumping on that bandwagon and embracing these technologies. While there is an investment in time and money upfront, it's going to pay off down the line.
What excites you most about MFX?
In my previous life as a drug developer, I obviously tested the MFX technology in-house and we saw the potential it had, even just on a manual basis, to expand our cells. We saw and understood the vision, which was to parallelise experimentation through the application of DOE (design of experiments): to explore the interactions between the different parameters we have going on in our growing cell cultures, while being able to control some of those variables like gassing, the amount of oxygen and the frequency of its delivery, agitation of the cells, feeding frequency, the media composition and any supplementation, and to look at how all those things interact with each other to get you to a much happier space for your cells.
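To give a flavour of what such a design might look like, here is a minimal sketch that enumerates a small full-factorial design over a few of the variables mentioned above. The factor names and levels are placeholders chosen purely for illustration, not recommended settings.

```python
# Minimal sketch: enumerate a two-level full-factorial design over a few
# illustrative process parameters. Levels are placeholders, not settings.
from itertools import product

factors = {
    "oxygen_pct": [5, 20],
    "agitation_rpm": [0, 60],
    "feed_interval_h": [24, 48],
}

# Every combination of factor levels becomes one parallel small-scale run.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(design, start=1):
    print(f"run {i}: {run}")
print(f"{len(design)} runs cover all combinations of {len(factors)} two-level factors")
```

Parallelised runs of this kind are what make the interaction analysis sketched earlier possible, because every combination of levels is actually observed rather than inferred.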
The sky's the limit on how quickly you could advance your knowledge of what the best process could be. So, layering on top of that the ability to generate data around glucose and lactate content, and then being able to see cell number and monitor the oxygen and pH the cells are being exposed to on a consistent basis, is really going to give those important insights into how we can control and respond to the process – especially against the backdrop of patients being so variable.
What is your hunch about how much more improvement there is to be found, in both process and product quality, through that systematic, data-led approach as opposed to the more trial-and-error approach that we've had so far?
Who knows, right? I'm excited to find out. We have a lot of very intelligent people in the space, but we have problems pulling in data from many different sources and churning it through. So, the value is in being able to sit down and look at it systematically, in a non-biased way, because we all have our pet favourites and you want to be as non-biased as possible. The fact is you test your favourite more often, because you think it's going to work, or you want it to work because you want to be right; what matters is the ability to really explore the space. That cytokine combination might work well in the space over here, but in the context of low oxygen, or something else happening over there, it doesn't work as well. So, it's about unpicking that and seeing which interactions are important, not just which single parameters are important. That's where we need both bigger data – because an N of five runs just isn't going to cut it, especially when you're looking at autologous therapies and that layer of variability you factor in – and input from AI or machine learning, or whatever else you want to call it, to come in and really unpick that in a systematic, non-biased way, to actually see if there is a step change to find.
Conclusion
There's no doubt that rapid advancements in technology will play a crucial role in CGT manufacture over the coming years. Automation in particular, Katy believes, will be the biggest development – not just for process development, but for the whole supply chain, to ensure that patients receive treatments more quickly and cheaply than before.
After a trying few years for CGT, it's refreshing to hear that there's optimism and momentum building, not just from manufacturers but from regulators too. It really highlights that if we want big changes to happen, the industry needs to work together. As technology continues to evolve, the opportunities for accelerated development and improved patient access have never been greater. It may be a bumpy road ahead, but with the right tools, talent and vision, it's one the community appears ready to face.
The Alliance for Regenerative Medicine and CGT Catapult Workshop: Advanced Manufacturing and Industrialisation of the CGT Sector was held on the 12th June 2025 in Stevenage.