Multi-Sensory Surgical Robots — With Philipp Fürnstahl

 

BUILDING THE NEXT GENERATION OF SURGICAL ROBOTS

 
 

October 14, 2025—First-generation surgical robots such as the Da Vinci set new benchmarks for precision and accuracy. With A.I. and robotics developments racing ahead at breakneck speed, what new capabilities are in the pipeline? How might next-generation surgical robots impact the future of healthcare?

I visit Prof. Philipp Fürnstahl, a global leader in this field, to unpack how his robots are going beyond vision to listen and feel and apply other senses as they operate. He compares orthopaedic and soft-tissue systems, explains why preop plans must be supplemented by real-time context, dives into spinal surgery as an early use-case for his next-gen robots, and explains the systems challenges of integrating the new innovations with teams, operating room workflows, telemedicine and training.

And as you will hear, Philipp gets me thinking much bigger about which patients will benefit and why. With procedure demand rising and surgeons in short supply, the opportunity is more than safer and more precise surgery: it's scalable surgery.

Prof. Fürnstahl has authored more than 150 publications in computer-assisted surgery. As well as heading the lab at Balgrist, he is Professor for Research in Orthopedic Computer Science (ROCS) at the University of Zürich. He invited me to visit him at the Computer-Aided Surgery Lab at Balgrist University Hospital where we toured the full-scale surgical theatre used to test the robots. 

Click below to listen.

As always, scroll down for some further reflections and a full transcript, and don’t forget you can listen to more of these wonderful insights from leading scientists, experts and future-makers by subscribing to “FutureBites With Dr Bruce McCabe” on Spotify, Apple, or on your favorite podcast platform.

My thanks to Philipp and his colleagues for so generously sharing their insights and answering my many questions.

Cover picture credit: Balgrist University Hospital

 
 

PATHWAYS TO A BETTER FUTURE

SURGICAL PLAN VERSUS REALITY

Fascinating to hear Philipp’s research ‘origin story’: he began by researching how to translate a medical imaging-based plan into execution in surgery, then realised that plans don’t account for the variations frequently encountered once surgery is underway (necessitating the intervention of a human surgeon), and switched his interest to helping robots adapt to new information encountered while surgery is taking place.

New Sensory Modalities

Adding sensory capabilities will help surgical robots detect and adapt to new information and unanticipated developments that routinely occur mid-surgery, just as skilled human surgeons adapt using their senses and years of experience. As soon as you hear that, you realise it must happen. New senses, after all, are being added to other kinds of robots, so why not the surgical variety? The benefits are obvious.

But which senses? The modalities likely to be useful are fascinating, and definitely not the same as what human surgeons use. Philipp and his colleagues are prioritising contact microphones to capture structure-borne sounds so that a strike on bone, for example, can be recognised and localised even when the camera cannot see it. Conductivity sensors help differentiate tissues so the robot knows instantly, for example, when a drill breaks into soft tissue. Haptic sensors relay firmness or softness to the touch of the robot’s instruments – further invaluable feedback when operating. Ultrasound, well established in diagnostics, is an exciting new modality when it comes to monitoring subsurface anatomical structures while surgery is underway.

Target: Orthopaedic Surgery

The well-known Da Vinci surgical robot is only suited to soft-tissue surgeries. The early target for multisensory surgical robots is orthopaedic surgery, and spinal surgery in particular, where complex operations demand very high precision, for example in the pinpoint placement of multiple screws in close proximity to the spinal cord. A new generation of multisensory orthopaedic robots would make spinal surgeries more precise, less invasive, and much safer.

O.R. Digital Twins

I learned that testing, training, and optimizing real-world procedures are big challenges. Bigger, it seems, than the technologies themselves.

Philipp and his colleagues built OR-X, a one-to-one replication of a real surgical environment, to provide a realistic testbed. OR-X incorporates every team member, every piece of equipment and every working procedure as it is in the real world.  

The same environment is used to train surgeons how to work with robots.

The same environment will be used to capture interactions between patient anatomy, surgical instruments and surgeon, as well as all interactions between people and equipment in the room, to create O.R. “digital twins” to analyse and improve workflows.

Ultimately, this should help bring robot-assisted surgical procedures into standard surgical training in medical schools.

Equality Of Access

A surprise takeaway came near the end of the conversation, when Philipp predicted the biggest long-term impacts will be in under-resourced countries. He pointed out that (human) surgical standards in places like Switzerland are already very high. But in developing countries, where demand for procedures is rising sharply, the number of surgeons is limited and the capacity to train many more doesn’t exist, high-quality robot assistance could be transformative in expanding patient access to high-quality surgery.

MORE TO EXPLORE

Prof Philipp Fürnstahl bio & publications

https://rocs.balgrist.ch/en/prof-dr-philipp-fuernstahl/

2024 University of Zurich news article “Surgical Robots That Hear, Feel and Act”

https://www.news.uzh.ch/en/articles/news/2024/faros.html

Research in Orthopedic Computer Science (ROCS) Project

https://rocs.balgrist.ch/en/

FAROS (Functionally Accurate RObotic Surgery)

University Medicine Zurich pages: https://umzh.uzh.ch/en/project/faros#read

Balgrist University Hospital pages: https://www.balgrist.ch/en/research/research-units/research-orthopaedics/faros/

Operating Room X (OR-X) Links

Homepage: https://or-x.ch/en/translational-center-for-surgery/

Research Projects: https://or-x.ch/en/activities/research-projects/

Research Library: https://or-x.ch/en/activities/research-library/

Resident Courses: https://or-x.ch/en/activities/assistenzarztkurse/

Surgical Courses: https://or-x.ch/en/activities/surgical-courses/

 

INTERVIEW TRANSCRIPT

Please note, my transcripts are AI-generated and lightly edited for clarity and will contain minor errors. The true record of the interview is always the audio version.

Bruce McCabe: Welcome to FutureBites, where we look at pathways to a better future. My name is Bruce McCabe. I'm your host, the Global Futurist, and my special guest today to talk about the future of surgical robots, this amazing future ahead of us with robots in healthcare, is Professor Philipp Fürnstahl. Welcome Philipp, to the podcast.

Philipp Fürnstahl: Thank you, Bruce. Very excited to be here.

BM: I'm excited to be here! I'm here at the lab. It's wonderful to see you in person here in Zurich. And I just want to quickly give people a sense of your background. But you're the head of the Computer-Aided Surgery Lab. So we actually have a lab called that here at Balgrist University Hospital. You've been here since 2012, so some time now. And you're also the professor for research in orthopaedic computer science at the University of Zurich. This is correct, is it not? And you've got quite a history of inventing devices in this space. So you're a perfect person to talk about the subject.

I wanted to open, before we get into the robots, just quickly, what inspired you or who inspired you to take this line of research in your career?

PF: It was not the particular technology or robotics. It was more the field of medicine. I'm a computer scientist, and what motivated me is that, as a computer scientist, I still have the possibility to create a real impact on patients by using or developing technologies. And this is what inspired me.

BM: Okay, so the healthcare side came first. And you gradually got into devices.

PF: I actually did my master's thesis, quite a while ago now, in the field of surgical planning with augmented reality.

BM: Really?

PF: So 20 years ago, very bulky devices compared to the current devices. And this was my start in the field of surgery. And as you said, I gradually shifted into surgical robotics. My primary focus in the beginning was on preoperative planning. So how can we carefully plan and simulate procedures on the computer? Orthopedic surgery is an elective field, meaning that we have time to plan carefully and to prepare. But then the question comes: how can we transfer this plan to the surgery to accurately execute it? And there are some ways to do that. But during my time as a researcher, I realized that a plan often remains just a plan, and the surgery happens differently. So we need to find ways to capture the situation in the surgery and create computer models that are more context-aware and can adapt to the situation in the surgery.

BM: Which must be the hardest part, the adaptation. Is how you're saying like we can have the plan, we can use the scanning, the radiology to model something. We transfer that into activity with the robot, but the robot also needs to be able to adapt to new information while the surgery is taking place.

PF: The robot needs sensors in order to understand the environment. We call this robotic perception.

BM: Yeah. Okay, fantastic. So take us through just the history, because people, I think certainly the people listening to this podcast are really aware that one of the leading applications of robots has been in assistive surgery. So it is an exciting space. People realize this potential. And in some geographies, the idea of remote surgery is very exciting to people. So how far have we come so far in the capabilities of robots and how much automation, as opposed to assistance, do we have now?

PF: The history of robotics is different in orthopedics compared to other disciplines. You may know the most famous robot is the Da Vinci robot. It is a soft tissue robot, completely remote-controlled. In orthopedics, the central anatomy is the bone. As I said, we have the possibility to plan in advance. Orthopedic robots were, from the beginning on, connected with cameras and navigation systems with the goal to execute a plan. They were a bit more autonomous, I have to say, but the capabilities are much more limited compared to a Da Vinci where you have 20 or 30 instruments that you can use. So this is the difference. You also mentioned that in the meantime, we hear a lot about remote surgeries and autonomy of surgeries. And also here, this discrepancy between orthopedics and Da Vinci is visible. Da Vinci has a big advantage. So you can execute complete robotic surgeries from the beginning on without autonomy because the surgeon is doing the entire procedure. This allows us to learn how the robot executes the surgery. In orthopedics, this is not possible at the moment because we do not have the possibility of remote control. And this is something which is a long-term goal of my research to come into this direction.

BM: Okay, well let's now move into your research area. So I came to this because I was reading about your work online and the thing that stood out in particular was this idea of adding more sensory capabilities. Is this the principal line of research, the most important one?

PF: Yes. So a big drawback of the current systems is that they stupidly follow a surgery plan without adapting to the current situation. And in order to address this, we have to add real-time sensors that allow us to create the surgical context in real time, as the surgeon does. The surgeon constantly re-evaluates and adapts during the surgery based on his senses: the eyes, the ears, and also the haptic feedback. And by integrating these kinds of sensors, we hypothesize that we could improve the intelligence and the autonomy of surgical robots.

BM: So historically, up until now, most of the sensory input has come visually?

PF: Yes.

BM: And by comparing visual, I guess there's also, there's not really live scanning. It's not like using ultrasound or other types of... It's only camera vision or is it other types of vision, if you know what I mean? Is there other types of radio sensing today or is it just vision-based? Because now we're talking...

PF: It is primarily vision-based but often also in combination with medical imaging, either diagnostic imaging coming from the preoperative phase but also intraoperative fluoroscopy. So you could take intraoperative fluoroscopy shots to create a better picture of the intraoperative anatomy. But that's it more or less. So this is what you have.

BM: So that's the current state today?

PF: Yes.

BM: And you've mentioned two sensors and these are the principal ones. You've mentioned sound but also haptic, the touch.

PF: Yes.

BM: So can we deal with those separately and understand how you're adding those capabilities?

PF: So in the project which you were referring to, we added multiple non-visual sensors. One was different microphones, contact microphones to capture also the structure-borne sound, not only the airborne sound. We used ultrasound, you mentioned it before. Ultrasound is a very interesting modality, we can speak later about it. And we used force sensors integrated into the end effector of the robot. And finally, one very interesting modality was coming from a company, the company is called SpineGuard. And they developed conductivity sensors that can be integrated into the tip of the drill.

BM: Conductivity?

PF: Yes, because you leverage the fact that different tissues exhibit distinct conductivity characteristics. And by measuring the conductivity during the drilling, you can make an assumption in which kind of tissue you are. For instance, whether you are still in the bone or whether you are already breaking into soft tissue, which is dangerous.

BM: Right, okay. And I guess the audio side is similar in that you're looking at sound based on the interaction between the robot and the patient. Is that the sound you're trying to pick up or is that another type of... I mean, there's ultrasound, I get that. But then you initially mentioned sound, are there other types of sound inputs?

PF: Yes, we primarily looked into the sound that is generated during the interaction between the patient anatomy and the surgical instrument: drilling, for instance, or sawing.

BM: Cutting, yeah.

PF: And by doing this, we can, for instance, detect whether a drill breaks into soft tissue. But sound is also a complementary modality to the visual modality, which means that we can consider use cases where standard visual guidance is difficult to perform or even fails.

BM: And we still have additional data, additional input data. What about the haptic side of things? When I imagine haptics, I think of haptic gloves and things and I think of prosthetics and how we're trying to introduce sensors in prosthetics that will interface with nerves. So do I imagine the robots as having hands with pressure sensors on them? What do I imagine here as to the addition of haptics?

PF: The haptic factor is much more important in the manipulation of soft tissue where you really have to be careful. When it comes to bone anatomy, then we are interested in the forces that are generated, but it is not required to provide haptic feedback.

BM: Okay. Where are you at with that? You're already doing experimentation with all of these modalities?

PF: Yes. So we had a four-year European Union project and at the end of the project we made the final validation here in the OR-X where not all of those sensor modalities were integrated in the final prototype, but several. So ultrasound, conductivity measurements, and we made also side experiments with other modalities. Primarily ex vivo.

BM: So it gets a lot onto how you test. It really is interesting because we were talking on the way in about how AI and robotics is moving very rapidly, but in medicine there's obviously other layers of care and due diligence and governance required. How do you test these robot capabilities? What's the method?

PF: We are sitting now in the OR-X, which is a translational center for surgery, and one of the main objectives of having such infrastructure is to provide a realistic testbed for new technology. In the old days it was not so common to have a one-to-one copy of a real operating room, so for us researchers it was very difficult to develop a technology which could actually be tested only in the later stages. So the development phase was relatively long and the testing phase came more at the end. On the one hand it was okay because the computer methods were much simpler and more straightforward. We say algorithms, so step by step, compared to the methods now, where we speak about deep learning and have to collect realistic data already during development. Due to this development, the validation and testing part also shifted, so we go much earlier into a realistic environment, but the environment needs to be safe. Therefore we replicate the real environment. We work with ex vivo models, sometimes with in vivo animal models, but outside of patient treatment, of course.

BM: So, you have a full-size facility, so you're replicating the entire environment of the surgical environment?

PF: Yes, it's a one-to-one copy of a real surgery room.

BM: And do you bring in surgeons from different disciplines to work with the robot? How do you do this from the human-robot interfacing point of view? Do you test it with people who are never... Surgeons who have never worked with robots before?

PF: We have different settings. So, in the research setting, of course, the surgeons are part of the team and they know how to operate the robots. But we offer also courses in the OR-X to train surgeons. So, we have two parallel use cases here. This training is conventional resident training with plastic bones or ex vivo anatomy, but we also offer robotic courses where particularly the young residents can get early in touch with new technologies.

BM: So, let's talk about the use cases a little bit. What are the early ones, the ones where you feel we can expect to see these capabilities first in the operating environment? What sort of surgical procedures would be the first and which ones would be more aspirational that we might expect to see in five or ten years or longer?

PF: So, if you see it from the perspective of the surgical use cases, one of our target procedures or target anatomy is spine.

BM: Spine.

PF: The spine requires high precision in the surgical execution because the bone is surrounded by vital structures. You have the nerves, you have the spinal cord, et cetera. It requires high precision, but it is also a very complex anatomy. It's not a single bone. So, essentially, you have multiple bones that can move relative to each other. It is influenced by breathing. And spine is also very interesting for us because minimally invasive spine surgery is just beginning and evolving. So, compared to other disciplines where arthroscopy is already very established, this is now coming to spine surgery. And, of course, this is a great opportunity for robotics, because robotics is always connected with minimally invasive surgery. One big benefit of robotics is that you can perform a procedure less invasively.

BM: Yes, and I can relate to this very strongly. I've had a discectomy.

PF: You had one?

BM: Yes, I've had spinal surgery and it went very well.

PF: Minimal invasive?

BM: No, it was some time ago and it was fairly invasive. But, at some point, I'm sure I will have to have some additional correctional work because ...

PF: Could be. Yeah,

BM: Yeah, I did not have fusion, so there's some movement. And this was all part of the plan to retain flexibility. There's wear and tear. One day they're going back inside. So, I hope it's one of your robots assisting in that process!

So, that's the near-term, the most exciting target area would be spinal. And I also understand that straight away from the planning perspective, it works perfectly. You need so much scanning to plan spinal surgery, to get through all the ... To make sure you don't cut in the wrong place. The complexity factor is very high. I get it. Any other things we can expect to see? I guess, you know, if you were listening to this and you were in a major hospital and you were in surgery, this is the sort of thing you'd expect within the next five years or sooner than that.

PF: We should also see it from the technology side. I think the next modality that will come in the context of robotic surgery is ultrasound. Ultrasound is well established in diagnostics, we know that, but interventionally it is not really used routinely. But it is already out of the research phase, so many research groups are already working on translational and applied research to bring this into clinics, in collaboration with industry. So I think this can be expected in the next three years.

BM: Fantastic. And is there more aspirational goals? Like if I say 10 or 20 years from now, are there things you're imagining might be possible that you're hoping your research will bring?

PF: Yes. So I have to come back to the different histories of orthopedics and soft tissue surgery. I really believe that orthopedics would greatly benefit from Da Vinci-like robots that can be remote-controlled, which is the first step before speaking about autonomy. And over the midterm I conduct research in these directions, making Da Vinci-like robots for orthopedic procedures. Another topic is digital twins. We are interested in creating a high-fidelity representation of the surgery over time, ideally in real time, which opens many, many opportunities. We can speak here about remote participation, remote assistance, but also the training and educational aspect is very important.

BM: Yes. When I think about augmented reality, just as a field, the thing that excites me most is medical training. It seems that you can add haptics and augmented reality. This is before we get to robotics, but there's a joke, ‘no one wants to be a nurse's first catheterization.’ We want them to practice. You add haptics and augmented reality, they can practice and practice. But I guess in your saying, once we combine these elements, this is for training surgeons as well. Just tell me about the digital twin concept a bit more, because what are we twinning here? We're twinning which aspect? There's the patient, there's the surgeon, there's the procedure. What are you thinking?

PF: We divide the digital twin into two areas. One is the near field. In the near field, we are really interested in the interaction between the patient anatomy, the surgical instrument and the surgeon, because this is of course influencing the treatment. Here you can optimize the treatment. But there is also the far field, capturing what is happening in the entire room. The interaction between the people with equipment. For instance, the fluoroscopy device has to come in in order to acquire pictures and so on. This is interesting for finding ways to optimize workflows and to make surgeries more effective, more efficient.

BM: Fewer errors.

PF: Yes, and also shorter time.

BM: Higher quality. Fantastic. One question I had is a little bit strange, but clearly all of this work improves patient quality outcomes. Fewer errors, more precision, better surgery outcomes. Is there also an outcome economically? Do you think robots will make procedures quicker, cheaper, faster, that sort of thing? Is that part of this or not really?

PF: So I think I have to give a longer answer here. Even the generation of better surgical outcomes is not really clearly visible in the literature. So, if you see it globally, there is little evidence that robotic surgery brings a really big impact in terms of outcomes. And I think we have to look into the countries where the studies are made. So here in Switzerland, or in Europe and also in the US, we have very, very good surgeons that are excellently trained. We have specialized centers. In particular, those specialized centers do use technologies. And the technologies cannot yet compete with a well-trained surgeon. But where we can generate an impact already now is in lower-resource settings, be it a small hospital or a developing country.

BM: Very interesting.

PF: And here we can generate an impact on the outcome. And you were mentioning cost saving.

BM: As well as quality.

PF: I mean, if you look into the development of surgery, we will have a drastic increase of procedures. The number of procedures will roughly double by 2050. And the number of surgeons will decrease and the costs will rise. Right. So if we look into the future, we have a really big problem. And actually technology should address this problem. I would say the pressure here in Europe and Switzerland is again not so high as in other countries. If you look to China or India, where you have billions of people that must somehow be treated, compared to Europe, where you have many, many hospitals for, let's say, a few million people, you have to find other solutions, because you cannot scale up the number of surgeons. You cannot scale up the education to such a level that you can maintain the highest quality of care in these countries. And I have discussions also with researchers and industry from these countries, and the motivation to utilize a robot to make procedures more effective is much higher than here.

BM: That's really interesting. You've helped me think a lot bigger about the impact. Once you start thinking that way, it's a very big footprint, potentially. Not only, it's also about equality of access. It's potentially giving high quality surgical access to people who would never get that access without that assistance …

PF: Yes.

BM: Brilliant, brilliant. And I guess, as we finish, but the other thing in my head was your adding of sensory capabilities has to also have utility for other industries beyond healthcare, I imagine. Do you work with other industries as well, or is it just primary focused on healthcare? I mean, haptic sensory capabilities, one imagines it would be useful in every general purpose robot.

PF: Exactly, but we work mainly in the other direction that we try to learn from other fields and transfer these experiences and learning to the surgical field. So we also use haptic devices, et cetera, as you say.

BM: So is there anything else that you wish more health leaders knew about your work here? What would your other messages be if you wanted people to know more about your research and this field in general?

PF: I think it would be important to speak also about the challenges that we face if we want to bring these technologies really into clinical practice. And we see at the moment that one of the big challenges is related more to the hospital infrastructure than to the research. So on the one hand, we need to collect a lot of data, also real data, for training our models, but also for testing our models with real-world data. And we can capture those real-world data only in the real operating room, but the OR infrastructure is not ready to host complicated IT infrastructure. This is one important aspect. And what we also see is that we need to think about training and education, because when those new technologies come into patient treatment, we will need, on the one hand, surgeons who can operate these devices, but we will also still need surgeons who can do the surgery conventionally if such devices do not work or fail, right? And this creates big problems in the educational system.

BM: It's new.

PF: It's new, and it will not be possible over the long term to have two parallel pathways of education, so one conventional plus the robotics. So we need to find either ways to train differently or we should rethink how to assemble a surgical team, yeah, maybe.

BM: So, yeah, when you are specializing, you're at university, you're doing your postgraduate work in surgery as a clinician, it needs a whole new pathway now, a whole new training regime to integrate all these capabilities. Okay, and the last question. Is there anything else I've missed on those challenges?

PF: No, no, it's fine.

BM: Yeah, the last question, I ask everyone the same thing, but in the context of your field with robotics and everything, what do you think the most important thing is that health leaders can do to create a better future? It's a more general question, but is there anything that stands out that you just wish, if we did more of this, health care would be better in a surgical context?

PF: I think I have to repeat the topic of low-resource settings a bit. What I think is very important is the democratization of technologies and access to technologies, in two ways. One is the innovation viewpoint. At the moment, surgical robots are developed by a few medtech companies. The robots are proprietary systems, always connected with implants, because the cash cows of the industry are implants and not the robots, right? And this hinders innovation a bit. It is already starting that some startups create more generic solutions that are compatible with different implants, et cetera, but this has to be increased, in my opinion, and health care leaders could do that by creating more open standards or maybe incentives in the reimbursement system. And the other aspect of democratization is that we want to empower low-resource countries to use this technology in order to create a big impact there as well.

BM: Yeah, perfect. That's a wonderful answer and a wonderful way, I think, to finish this interview. Professor Philipp Fürnstahl, thank you so much for your time today.

PF: Sure, thank you. Bye-bye.

 
Previous

WILL A.I. BOIL THE PLANET? — With VLAD COROAMA

Next

WILL WE WORSHIP A.I.? — WITH BETH SINGLER