INSECT-LIKE DRONES THAT SENSE, LEARN AND SWARM — WITH GUIDO DE CROON
FUTURE OF BIO-INSPIRED DRONES AT TU DELFT MAVLAB
November 11, 2025—Insect-like drones that tend crops, perform aircraft inspections and find gas leaks are here … and they’re only the beginning!
Today I’m diving into the future of bio-inspired drones with the one and only Professor Guido de Croon, head of the Micro Air Vehicle Laboratory (MAVLab) at Delft University of Technology (TU Delft) in the Netherlands.
Guido shares the latest insect-inspired research developments in vision systems, energy-efficient intelligence, height sensing, autonomous navigation, swarming, energy harvesting and speedy decision-making. We discuss early commercial applications and spinout companies, and how the MAVLab team is building on its victory in the 2025 autonomous drone racing championship in Abu Dhabi to take agility to a whole new level.
It’s the stuff of sci-fi, except … it isn’t. MAVLab creations flap their wings like birds and dragonflies. They carry sensors that process information and make ‘event-based’ decisions much as eyes, ears and brains do in the natural world. They emulate the navigation ‘algorithms’ of bees and ants. They swarm to accomplish collective goals, just as insect colonies do. Will robot insects one day harvest energy from their environment, thereby extending their missions indefinitely? Listen to find out!
As you’ll hear, I’m thrilled to be catching up with Guido. He’s one of my favorite scientists. He’s happy, positive, and bursting with ideas to help industry — and it’s infectious! Ever since we first connected in 2021 I’ve enjoyed featuring MAVLab creations in my presentations. Why? Because these tiny drones ‘push the limits’ of what’s possible by drawing inspiration from nature, an approach that will continue to unlock new opportunities for decades, and because (despite being tiny) they inspire audiences to think BIGGER about the future!
Thank you, Guido, for your passionate, joy-filled insights on the future of tiny drones. And special thanks also to Dequan Ou, for showing me around the lab and patiently answering my many questions!
Click below to listen.
Afterwards, scroll down for some personal reflections, links to start-up companies and drone examples mentioned in our conversation, plus a full transcript. And as always, don’t forget to share this with others who might be interested, and subscribe to “FutureBites With Dr Bruce McCabe” on Spotify, Apple, or wherever you get your podcasts, to hear more future-makers like Guido. Or drop us a line if you’d like to book a keynote on game-changers, opportunities and the future for your industry event, anywhere in the world.
BIO-INSPIRED AT EVERY LEVEL
The biggest takeaway for me, by far, was that bio-inspiration is driving the future of drones not just at one level, but at EVERY level.
Morphology
Flapping-wing configurations make drones more agile, safer to be around, and able to glide and soar on updrafts to fly further, just as birds and insects do. Insects’ knack for accomplishing tasks with fewer physical assets (the way bees sense height using their eyes alone, for instance) has also inspired simpler, pared-down drone configurations.
Sensors
Onboard cameras, microphones and other sensors are steadily becoming more ‘neural-like’ (neuromorphic), because in nature powerful sensing outcomes are achieved using vastly less energy. Instead of processing repeated snapshots of all the information that hits them, neuromorphic sensors process only the information that has changed. As in life, their sensing is analog rather than digital, and asynchronous rather than slaved to a clock.
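To make that concrete, here is a minimal sketch of the event-pixel principle in Python. It’s my own illustration with made-up threshold values, not MAVLab or vendor code: each pixel stays silent until the scene changes enough, then emits a signed, timestamped event.

```python
import math

CONTRAST_THRESHOLD = 0.15  # log-brightness change needed to trigger an event


class EventPixel:
    """One pixel of an event camera: reports changes, not absolute frames."""

    def __init__(self, initial_brightness: float):
        self.ref = math.log(initial_brightness)  # brightness at the last event

    def observe(self, brightness: float, t: float):
        """Return (+1 or -1, timestamp) if the change is big enough, else None."""
        delta = math.log(brightness) - self.ref
        if abs(delta) >= CONTRAST_THRESHOLD:
            self.ref = math.log(brightness)
            return (1 if delta > 0 else -1, t)
        return None  # static scene: nothing sent, almost no energy spent


pixel = EventPixel(100.0)
for t, b in enumerate([100.0, 101.0, 102.0, 130.0, 130.0, 90.0]):
    event = pixel.observe(b, t)
    if event:
        print(f"t={t}: event {event[0]:+d}")  # fires only when brightness jumps
```

An event camera is essentially an array of these pixels firing independently, which is why a static scene costs almost nothing to watch and a fast-moving obstacle registers within microseconds.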
Processing
On-board information processing and decision-making is becoming neuromorphic for all the same reasons.
Together, neuromorphic sensing and processing yield the following:
Lower energy use, which translates to longer range and more complex, more useful missions.
Smaller drones that can do more with less. How small? This is the lab that in 2008 made the DelFly Micro, which weighed 3 grams including a camera and a transmitter, and in 2013 made a version of the DelFly that could fly completely by itself and weighed only 20 grams. Neither used neuromorphic components. So, smaller than those!
Ultrafast event-triggered responses. This is a significant benefit and it doesn’t get enough airplay. Guido’s description of the partially neuromorphic decision ‘pipeline’ that gave his drones the agility and speed to win the drone racing championships paints the picture: “in the old days the pipeline would be, ‘we know where we are, we plan a path, we need to make this path optimal.’ So that would be very computationally intensive. And then we need to track the path. [Whereas today] we basically say, you are here, this is your velocity, and then the neural network directly sends commands to the motors, but in a super agile way. [And now we are] translating these networks to neuromorphic on our own devices.”
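To make that pipeline concrete, here is a skeletal sketch in Python. Every name, layer size and weight below is my own placeholder rather than the MAVLab code; the point is the shape of the solution, with the ‘plan an optimal path, then track it’ stage replaced by one small learned mapping from the state estimate straight to motor commands.

```python
import numpy as np

# Placeholder weights for a tiny control network (illustrative values only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 10)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(4, 64)) * 0.1, np.zeros(4)   # 4 motor outputs


def control_net(state: np.ndarray) -> np.ndarray:
    """Tiny MLP: state estimate in, motor commands out. No path planner."""
    hidden = np.tanh(W1 @ state + b1)
    return np.tanh(W2 @ hidden + b2)  # normalized commands for the 4 motors


# Assumed state layout: position (3), velocity (3), attitude (3), next gate (1)
state = np.array([1.0, -0.5, 2.0, 4.0, 0.0, 1.0, 0.1, 0.0, 0.2, 3.0])
print("motor commands:", control_net(state))
```

One cheap forward pass per control step is exactly the kind of workload that translates naturally to neuromorphic hardware.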
Watching the MAVLab team testing their latest drone creations — this one able to hover and write on a whiteboard!
Algorithms
Nature’s super-efficient ‘algorithms’ are there for the finding for every task and challenge – how to determine height, how to land, how to perch, how to navigate, how to return the way you came – because bees, ants and other insects must solve the same problems!
Particularly inspiring in our conversation was the honeybee method of sensing height using sight alone: keep optical flow (the apparent motion you see as you move) constant as you descend, then exploit the small oscillations that arise near the ground to judge distance. Equally inspiring was the ‘snapshot + odometry’ way of navigating large distances (inspired by ants) as a substitute for the usual robot approach of generating 3D maps with cameras, LiDAR and so on. As Guido pointed out, biologists are pretty sure insects don’t make detailed maps of their surroundings!
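For the mathematically inclined, the constant-optical-flow landing rule has a neat closed form. This is a standard derivation I’ve added for clarity, not something worked through in the episode:

```latex
% Let h(t) be height above ground and v(t) = -dh/dt the descent speed.
% A downward-looking eye sees an optical-flow divergence of
\[
  D \;=\; \frac{v(t)}{h(t)} \;=\; -\frac{\dot{h}(t)}{h(t)} .
\]
% Holding D constant turns the descent into a first-order ODE:
\[
  \dot{h} = -D\,h
  \quad\Longrightarrow\quad
  h(t) = h_0\, e^{-D t}, \qquad v(t) = D\, h_0\, e^{-D t}.
\]
```

Speed decays in proportion to height (‘as you go lower, you go slower’), so the touchdown is asymptotically soft, and there is no height sensor anywhere in the loop.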
Energy Harvesting
Guido confirmed that some of his colleagues in this field are working on drones with solar cells (though the ‘wait time’ to harvest useful amounts of solar energy is a challenge), and that others are making drones that find and ride updrafts, both to gain ‘free’ height for range and to charge batteries by switching propellers into generators. Will robot insects one day harvest enough energy from their environment to extend missions indefinitely? In my fantasy future, drones ‘eat’ and metabolise chemical energy from leaves while they rest. Surely a huge challenge technologically, but equally surely insects already do it, so in the spirit of bio-inspiration, don’t rule it out!
Cooperation
Jumping from the individual to the collective, strategies for cooperation across many drones will always find inspiration in nature’s colonies and communities. Ants and bees accomplish vastly more as a team than they ever could individually. MAVLab drones are already working in swarms for gas detection, exploration, and search and rescue.
Especially interesting was Guido’s comment that industry people will need to open their minds to new ways of doing things, because ‘nature’s way’ will sometimes be different but equally effective. His example: firemen walking in to find drones hovering around the gas leak, rather than waiting for the drones to fly back out with a map of where to find it.
THINKING BIGGER: END-TO-END
Soon MAVLab will be connecting up all the bio-inspired parts to squeeze the maximum out of small drones. Guido: “One of the main things I'm doing now is a project in which I want to make a fully neuromorphic autopilot for these drones … it should do attitude control, so very low level, like you don't crash into the ground, velocity control, like ego-motion control, obstacle avoidance, but also navigation, and even applications, so like try to detect ripe fruit or something like that. I want to put this all in a single network, and yeah, the cool thing is, like you say actually, you have different levels of hardware, all of which will be now lining up to be bio-inspired [and] we expect these big advantages in terms of latency, energy efficiency, and the fact that we can make such tiny drones fully autonomous in the first place.”
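Structurally, ‘all in a single network’ might look something like the sketch below: a purely illustrative Python placeholder of mine, not the MAVLab design, with one shared trunk of features feeding a head for each level of the autopilot Guido lists.

```python
import numpy as np

rng = np.random.default_rng(1)


def layer(n_out, n_in):
    """Random placeholder weights for one linear layer (illustration only)."""
    return rng.normal(size=(n_out, n_in)) * 0.1


trunk = layer(128, 32)                  # shared features from the sensor input
heads = {
    "attitude": layer(3, 128),          # low level: don't hit the ground
    "velocity": layer(3, 128),          # ego-motion control
    "avoidance": layer(2, 128),         # steer-away commands
    "navigation": layer(2, 128),        # heading toward the goal
    "ripe_fruit": layer(1, 128),        # application-level detection
}


def autopilot(sensor_input: np.ndarray) -> dict:
    features = np.tanh(trunk @ sensor_input)  # one pass, shared by all tasks
    return {name: weights @ features for name, weights in heads.items()}


outputs = autopilot(rng.normal(size=32))
print({name: out.shape for name, out in outputs.items()})
```

Sharing one trunk across every task is part of what keeps the compute, and therefore the energy budget, small enough for a tiny drone.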
BIGGER AGAIN: A BIO-INSPIRED FUTURE FOR ALL ROBOTS
Future drones will increasingly come to resemble, and behave like, living biological organisms.
It’s a theme across just about every robot lab I’ve visited. It applies to Brad Nelson’s medical micro-bots, Philipp Furnstahl’s multi-sensory surgical robots, Hod Lipson’s goals for robots that self-metabolize and reproduce and ‘evolve’ by upgrading themselves, and so on.
Furthermore, it's been my longstanding position that the transition to neuromorphic computing architectures is fundamental to the future of artificial intelligence. See my recent conversations with 2025 Turing Laureate Prof Rich Sutton (“Limitless A.I. Is Coming”) and Prof Alex Marcireau (“Neuromorphic Computing And The Future of A.I.”), and, going back further, “Exponential A.I.”
But stepping out of the MAVLab into the freezing Delft wind, a bio-analogous robot future suddenly feels much more certain, almost to the point of being revelatory. I think that’s because it has never been articulated to me quite so comprehensively, component by component, layer upon layer, as on this visit.
SPINOUT COMPANIES MENTIONED IN OUR CONVERSATION
Three TU Delft spinouts were mentioned or relate to what we discussed:
Flapper Drones – manufacturer of robust flapping-wing drones for all kinds of commercial and creative applications. The drone Guido and I are playing with in the photo on this page is a Flapper Drones product.
PATS Delft – automated insect control platform for greenhouse agriculture. Insects flying through the camera’s view are counted and differentiated. Insects identified as pests are set as targets for palm-sized drones that intercept and kill them without the need for pesticides.
Emergent Swarm Solutions – fully autonomous swarms of small drones for scalable data gathering, spanning disaster response, law enforcement, and other civilian and military purposes. An early demonstration, which I understand was the original student project that inspired the spinout, used swarms to perform aircraft inspections. See also the company’s LinkedIn page.
You can find out about other TU Delft spinouts here.
The UK company Guido mentioned doing cutting-edge work on insect-inspired intelligence for robots is University of Sheffield spinout Opteran Technologies.
INSPIRING VIDEO CLIPS
Watch MAVLab’s winning runs at the 2025 Abu Dhabi Autonomous Drone Racing Championships
Explanation of ant-like snapshot-plus-odometry navigation in tiny drones.
MAVLab demonstrations of swarm exploration, swarm search-and-rescue with onboard cameras, autonomous gas detection and source localization in a factory.
Joint MAVLab project with Royal Brinkman and start-up Mapture on palm-sized drones for monitoring greenhouse diseases and pests. I particularly like the ‘drone in a box’ approach and the way drones launch/return to their charging platforms from 3:34 onwards!
More videos can be found on MAVLab’s YouTube channel.
INTERVIEW TRANSCRIPT
Please note, my transcripts are AI-generated and lightly edited for clarity and will contain minor errors. The true record of the interview is always the audio version.
Bruce McCabe: Welcome to FutureBites where we look at pathways to a better future and I'm here at the Micro Air Vehicles Lab at the Delft Technical University, TU Delft, with Professor Guido de Croon. Welcome to the podcast.
Guido de Croon: Yeah, thanks.
BM: Thanks for hosting us. We're surrounded by little drones that we'd just started talking about before we were even recording, so now that we're finally recording we can talk about them in more detail. So I'm really excited to be here. I think you've been doing micro air vehicles for, well, since 2008 at least.
GC: Yeah, definitely.
BM: And you got famous particularly, I remember seeing, this is well over six, seven years ago, the DelFly 20 gram, the DelFly Micro, is that what it was called? The flapping wing one?
GC: Yeah, so in 2013 we made a version of the DelFly that could fly completely by itself and weighed, in total, 20 grams. But it was actually not even our smallest one, because the DelFly Micro is older than that: it was made by colleagues here in 2008, weighed 3 grams, and could carry a camera and a transmitter.
BM: Unbelievable.
GC: Yeah.
BM: And flapping wings, right?
GC: Exactly, yeah, flapping wings. So the project was started by a group of students and a few other colleagues in 2005, drawing inspiration from nature to make a new type of drone that used flapping wings. And yeah, we've been active on it ever since.
BM: Fantastic. So I really wanted to get into the bio-inspired stuff that you're doing because if we look at the future of robotics and drones in particular, there's so much more potential. There's so much more we can do with small robots. And your lab, I think, more than any other has demonstrated some of these frontiers. Soft flapping wing drones that can work in greenhouses around people and that are getting more and more sensory capabilities to do things not just for agriculture, but I remember years ago you started sending me videos and one in particular was detecting gas leaks in an industrial environment. These tiny little drones flying around. So just tell us a bit about some of the potential, I guess, industrially, and then maybe we can get into some of the projects to improve their capabilities.
GC: Yeah, so our point of departure is that drones should be safe for humans, and that is why we focus a lot on really lightweight drones. Of course, the flapping wing ones are particularly safe because even if they touch you, it doesn't even hurt. But also indeed the very small propeller-type drones, when they're small enough and light enough, if they touch you it will hurt a little bit, but it will not pose a real threat, let's say. And so we start from that point that they have to be safe and then we work on making them autonomous because economically, but also for applications like search and rescue, you just cannot have a human that needs to control them all the time, especially because in many applications you would like to have many of them.
So the greenhouse you already mentioned: there's a great demand from the sector for help detecting diseases and pests early, so that they can treat them on time, for example with natural enemies, instead of having to use pesticides or even throw away crop. And there, what you actually want is a large group of these very light and safe drones. If you're the owner of the greenhouse, you don't want to worry about controlling them or whatever, you just want the data. And that makes for a super interesting challenge, because they're so lightweight and safe it also means they cannot carry a lot of sensing and processing.
BM: Yes, which gets into this sort of capability thing. And the other thing about greenhouses is they're basically indoors, they're out of the wind, right? So it's kind of a perfect environment, would you say, in that sense?
GC: Yeah, so indeed with the very light ones I think a major application area is indoors because obviously outdoors, if there's too much wind, yeah, they will typically not fly very well, which is the reason actually why many insects also just choose to stay inside.
BM: [laughter] Right, okay, let's get into the bioinspiration stuff. We were just talking about navigation capabilities and sensing height and perhaps you could just tell us a short summary of this story of how bumblebees inspired you to think differently about how to get drones to look at their height control.
GC: Yeah. So what we see is that in order to make these drones autonomous by themselves, we cannot really rely on the state of the art in AI for like self driving cars and like other devices, because state of the art AI requires lots of computation. For robots, it typically means that in order to navigate, they need to make highly accurate three dimensional maps of their environment, which takes a lot of computation and memory.
BM: They wouldn't be able to fly anywhere if they're using all that energy for computation.
GC: I typically show like if you need this kind of computation and memory, you need such a big computer. It's actually heavier than the drone that we envisage. And yeah, the only way out is to look at nature and see how nature does things. And over time, yeah, what we then do is we look at, okay, what do biologists think that honeybees are doing? Like you said, for example, for landing, engineers would typically say we need a height sensor. But honeybees do that purely based on vision, which in their case is a tiny, very lightweight sensor. And I've worked on that and I've looked at what biologists thought they were doing, which is to keep optical flow constant. And optical flow is basically the motion you see when you move. And if you keep it constant while you go down, then basically as you go lower, you go slower. And the funny thing then is you can implement it on a drone. So that worked, but it only worked partially because close to the ground, it would start to oscillate. And when I studied that in more detail, I found out that this is actually a fundamental property of optical flow control and that you can actually leverage this to see distance.
And yeah, this was a new insight also for biologists, which you can actually only get when you're trying to build the system you're trying to understand.
BM: You now also understand nature better by doing it that way.
GC: Yeah. And I think that, so many of the things insects do are super efficient. And I think one of the most exciting things at the moment is that with AI, with the deep neural networks and all, it runs on graphical processing units. And this is the kind of hardware that takes quite some energy to run.
BM: Absolutely.
GC: And if you look at nature, also our human brain, we use around 20 watts to do amazing things. And it's much more efficient than what you see basically with AI at the moment, the conventional hardware. And that's why we're focusing a lot on a new type of technology that they call neuromorphic AI.
BM: Yeah. We've covered that a lot on this podcast because of the exciting possibilities for all AI, and even eventually the data centers, as we get more neuromorphic. But with you, it's about if you use a neuromorphic processor for a camera, a camera chip, it's less energy to do all of those capabilities, isn't it? Again, it can fly longer, there's less on board. You could use a neuromorphic microphone, I guess, in all of these capacities.
GC: Yeah. So indeed, the sensors, if you make them neuromorphic, it means that they're asynchronous. So for example, with a normal camera, you always record a certain number of frames per second. And basically the first pixel is only registered again when you're done with the last one. Yeah, that's not how our own eyes work or animal eyes work. It works asynchronously, which has this advantage that it's actually also quicker. So as soon as something changes, so that's another big difference. So yeah, our eyes register change much more than the absolute brightness. So as soon as something changes, signals are sent. And this makes it quicker, but also more energy efficient because, for example, if you take a neuromorphic camera and you make it static and it looks at a fixed scene, it will basically send nothing because nothing is changing. And as soon as you move, then things start to be sent. So the neuromorphic sensing is super exciting.
And what we're working a lot on now is to also do neuromorphic processing in which neural networks that are used in AI are made slightly more similar to the neurons in our brain in the sense that they have some internal dynamics like a membrane voltage. And then as the voltage goes over a threshold, they send a spike. And the spike is like a binary signal. So it's really easy to compute with. And this makes it also cheaper in terms of compute.
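[NOTE:- for the technically curious, here is a minimal sketch of the ‘membrane voltage plus spike’ neuron Guido describes: a leaky integrate-and-fire model, the standard abstraction in neuromorphic computing. The parameters are illustrative, and this is my sketch, not MAVLab code.]

```python
THRESHOLD = 1.0   # spike when the membrane voltage crosses this level
LEAK = 0.9        # fraction of voltage retained each timestep


def lif_neuron(input_currents):
    """Leaky integrate-and-fire: yield 1 (spike) or 0 per input current."""
    voltage = 0.0
    for current in input_currents:
        voltage = LEAK * voltage + current  # integrate the input, with leak
        if voltage >= THRESHOLD:
            yield 1        # a binary spike: cheap to transmit and compute with
            voltage = 0.0  # reset after firing
        else:
            yield 0        # below threshold: stay silent


print(list(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.0, 0.9, 0.3])))
# -> [0, 0, 1, 0, 0, 0, 1]
```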
BM: Fantastic. So I guess this relates to something else that we interacted on recently. You won the Drone Racing Championships, the World Championships in Abu Dhabi, or your postdocs did, or your team did recently. And looking at the footage of these tiny drones racing under and over obstacles and changing direction really fast. I imagine the neuromorphic processing is part of the agility, part of the speed of decision-making. Would that be the case?
GC: That's interesting, because this championship... So drone racing we've been doing since 2016. We got into the game because we saw that autonomous flight solutions were slow, also because they needed a lot of computation. And we thought, yeah, if you use bio-inspired intelligence, you can be much quicker. And in the case of this competition, the organizers actually create and construct the drone. And they put a conventional rolling-shutter camera on there and a conventional GPU, like an NVIDIA Orin NX.
BM: So you have to work with the standard hardware that they give you? It's all about the software?
GC: Yeah, it's all about software. And in terms of sensors, there's the camera and there's an IMU, which is actually not the best IMU, let's say. And all the teams have to work with this hardware, so the differences between the teams are only in software. And we do make elaborate use of neural networks. So we call it the pipeline when you put all the software together, and the pipeline that ran in April actually beat three human world champions in FPV drone racing. So the images are interpreted by a deep neural network to detect gates. And then we have some traditional, what we call state estimation: okay, so we know where the gate is in the image, and then we can find out where we are on the track, and then we can kind of filter over time, also based on a model of the drone that we have. Like, where are we? And then the cool thing is that control is yet again entirely a neural network.
So in the old days, the pipeline would be, we know where we are, we plan a path, we need to make this path optimal. So that would be very computationally intensive. And then we need to track the path, right?
But what we do is we basically say, you are here, this is your velocity. And then the neural network directly sends commands to the motors, but in a super agile way, as you saw. And what we're doing now, but that's not within the competition, is that we're now translating these networks to neuromorphic on our own devices. And until now, we see that for the control, for example, we really expect similarly agile flights.
BM: Fantastic. So you won this on the basis of superior algorithms and neural networks behind the scenes. Now you can couple them with superior hardware and see how far you can go.
GC: Yeah, yeah, because indeed, if we do this with neuromorphic camera, neuromorphic processing, it should be quicker.
BM: Oh, man.
GC: It's so stunning now. It was already like going almost 100 kilometers an hour. So I don't know how much quicker you can get.
BM: It's like watching a science fiction movie, watching these drones chasing each other around. It's unbelievable.
GC: Yeah, it's pretty cool.
BM: So did I read somewhere also that you had used ants, you'd looked at ants and how they navigate back to a source to do a more streamlined version of navigating back, getting a drone back to where it came from? Is that true as well?
GC: Yeah, because to me, so one of the computationally heavy components that typically is used in robotics is this making of this 3D map. If you look at insects, they actually also navigate over large distances and the biologists are pretty convinced that they're not using such highly detailed maps.
BM: Yes, ants probably don't have GPU-sized … Yeah, anyway.
GC: Exactly. But how do they do it? And there's different strategies. The one you saw, I guess, is for ants. Yeah, when they're in a cluttered environment, they typically learn their route when they go outbound, like to find food. And then they come back along the same route. So they use visual memory for this. And what we did on a tiny drone is that we put an omnidirectional camera on it, because what insects do have, and which you don't see yet so much in robotics, is actually quite a wide field of view. They don't have a lot of resolution. So it could be sometimes 10,000, let's call it pixels, but for insects they are called ommatidia, like the photoreceptors. So there's not a lot of pixels, let's say, but they do have a broad field of view. [NOTE:- “ommatidia” = individual photoreceptor units that make up the compound eyes of insects and other arthropods]
And what we did is actually combine it with odometry. So it's based on an actually quite old biological theory called the snapshot theory. So while you move out, from time to time you make a snapshot, like a picture of your environment. And then you move, and then make a snapshot again.
And before, what people were doing is they were trying to make quite many snapshots, so that when you go back, you don't know where you are exactly, but you can compare with the snapshots you made. And then you minimize the difference, and then you should be in the same spot, and then they wanted to move to the next snapshot. But what we did is we said, okay, with each snapshot, we also store something like ‘now go two meters to the front,’ right? And then make another snapshot. And so when we go back, we use odometry, which is never perfect. So odometry is like path integration, or like a human counting their steps with their eyes closed. And you can imagine, if you have your eyes closed and you say, ‘I'm going two meters to the front,’ yeah, you're not 100% sure that you did those exact two meters and that you didn't deviate a little bit to the right or something like that. So again, the robot uses odometry, and then it makes a picture and compares it with the snapshot. So basically, then it...
BM: Corrects.
GC: Yeah, it corrects, yeah, yeah,
BM: Yeah. And it uses far less information to achieve the same outcome.
GC: Exactly, so because we could now space these snapshots apart much further, yeah, we could really travel like a hundred or more meters with a few kilobytes.
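[NOTE:- for readers who want the mechanics, here is a bare-bones sketch of the snapshot-plus-odometry idea in Python. It is my simplification of what is described above, not the MAVLab implementation: the ‘snapshot’ stands in for a real omnidirectional image, all numbers are made up, and the correction step is the whole trick, because odometry drift never accumulates beyond a single leg of the route.]

```python
import numpy as np

rng = np.random.default_rng(2)


def take_snapshot(position):
    """Stand-in for an omnidirectional image captured at `position`."""
    return position + rng.normal(scale=0.05, size=2)


# Outbound: at each waypoint, store (snapshot, move-to-next-waypoint).
# A handful of these pairs costs kilobytes, versus a dense 3D map.
route, pos = [], np.zeros(2)
for step in (np.array([2.0, 0.0]), np.array([2.0, 1.0]), np.array([0.0, 2.0])):
    route.append((take_snapshot(pos), step))
    pos = pos + step  # the drone ends up at (4, 3)

# Inbound: replay the route backwards. Dead reckoning drifts, so at each
# stored waypoint the drone compares its current view with the snapshot
# and corrects itself before flying the next leg.
drone = pos.copy()
for snapshot, step in reversed(route):
    drone = drone - step + rng.normal(scale=0.2, size=2)  # drifty odometry
    seen = take_snapshot(drone)
    drone = drone - (seen - snapshot)  # minimize the snapshot difference
print("back near the start:", drone)  # close to (0, 0), despite the drift
```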
BM: So we've got the bio-inspiration on the physical side with the wings and the flapping and the dynamics there, we've got it on the hardware in terms of processing and trying to be more neural-like in our processing, and now also at the algorithmic level we're trying to say what are the strategies, so we're really doing it at every single level in the flying robot.
GC: That's super cool that you say this, because one of the main things I'm doing now is a project in which I want to make a fully neuromorphic autopilot for these drones. And yeah, so the autopilot, what does it do? So it should do attitude control, so very low level, like you don't crash into the ground, velocity control, like ego-motion control, obstacle avoidance, but also navigation, and even applications, so like try to detect ripe fruit or something like that. I want to put this all in a single network, and yeah, the cool thing is, like you say actually, you have different levels of hardware, all of which will be now lining up to be bio-inspired, and it makes sense, so it's not just to be bio-inspired, I mean we expect these big advantages in terms of latency, energy efficiency, and the fact that we can make such tiny drones fully autonomous in the first place.
BM: And nature's good at accomplishing tasks, so it comes together, right? And if I take you up one level, a meta level, what about bio-inspiration from swarms and the hive mind, if you like? Give us a sense of where you're going with that?
GC: Yeah, so the interesting thing, also if you just look at real-world applications, is that you typically need more than one drone. If you need more, they will be sharing the same space, and they need to do something with that, because at the very least you don't want them to fly into each other or something. And then you can also basically optimize the way in which they collaborate. And in nature this was done by evolution, and yeah, that's super interesting. I think the new thing that we'll be bringing to the table is that in the area of swarming, people always say, like, yeah, you have super simple units, but together they can do something more complex. So that's true, that's the concept of swarming, but the units do not have to be super simple. Like, if you look at honeybees, of course they're limited, you know, they won't play chess or something, I don't know. But they're actually very intelligent compared to our drones, right?
BM: And they will communicate with one another about the source of pollen.
GC: Exactly, and for example, if you don't have this quite advanced intelligence to be able to navigate, it doesn't make sense to communicate about this flower bed and where it is. So in that sense, I actually think that we're making more intelligent swarm members, and this will have an impact on what you can do at the swarm level, because they can still, of course, go beyond their individual capabilities, but this will probably be even more impressive if your individual robots are more intelligent.
BM: Well, I saw a video, I think from some of the postdocs in your lab, of a swarm of tiny flying robots going around an aircraft to do an inspection. So they all reposition themselves around this aircraft and did a collaborative inspection of the aircraft. And it translates into time and money and everything because it's faster and more efficient, and then they all pack themselves away, I guess, at the end. So you can see how swarms have an effect. Or even your gas detection. Having 10 of these things going to every room in a factory and then, I don't know, one communicating with others ‘you know it's somewhere over here,’ and then the other nine reconfigure and slowly converge.
GC: Yeah, what I think is interesting is that both with the things I do, and also with this company in the UK, Opteran, who work on insect-inspired intelligence for robots, what you see is that if you develop this kind of intelligence, it's actually very robust, efficient, etcetera. At the same time, it does need a kind of change in the way that companies and end users think. So the gas leak, for example: typically firemen will think, okay, you fly in and then you fly out and you give me a map and you show me where it is. But like I said, I don't think our drones, like insects, will be building maps. And so our concept there was that they actually don't even need to come back. You send them in, they sense the gas, avoid obstacles, and they basically find the leak. And then, ten minutes or five minutes later, you walk in as a fireman and you see them in the corner, you know, that's where the leak is. You know what I mean? Of course you could do a beep, or even make a chain to bring you there. But I think actually just... it's enough. I mean, yeah, what do you want? Do you want a map, or do you just want to know where the gas leak is?
BM: Yeah, yeah, yeah. So interesting.
GC: Yeah, yeah. So that's a different way of thinking and we're not used to that. And it also still needs some time, obviously, for companies that bring this to the real world. But yeah, it has so many advantages.
BM: Well, you know, the bio-inspired thing that I fantasize about, what I'd like to see, is the ability to metabolize. I mean, you've got these little dragonfly-sized things flying around. You can make them perch and stare, so they're not using energy, I guess, so the cameras can watch from a fixed location. I don't know if that is possible now. But if they could eat something and then refuel themselves like an insect, then you have potentially unlimited range. And it's a whole different ballgame. I'm just fantasizing, but you look in the future and you go, well, hang on, how do we fuel it for longer? Or recharge, you know …
GC: It's, of course, one of the main challenges, indeed, and where nature really is ahead. The fruit fly sucks a bit, I don't know, the stuff in your banana, and then it can fly for another hour, right? And you're like, okay, how does it do that? And the way that it's done, that's interesting. Could it live off something on plants? Already, ethically, that would be a bit easier to sell. Of course, you have little solar panels, but with the current efficiency, they would have to rest quite long. So, yeah, harvesting energy from the environment. One of the things we do is that we actually, with fixed-wing drones, we imitate what birds do by leveraging updrafts, for example, around buildings or dunes, so that you can basically fly indefinitely. If the wind is there, you're really hardly spending energy.
BM: Can you even harvest energy from the updraft as well? Use the wings to charge?
GC: Yeah, so a colleague of mine, Bart, he also works on that idea to kind of have propellers that, if they're spun by the wind, they also...
BM: I'm coming to work with you! This is just... It's endless. It's endless.
GC: Yeah, it's really quite cool, yeah.
BM: It just shows how much more future potential there is, which is, I think, really the main purpose. Now, have we still got time to go and have a look at the lab?
GC: Yeah. A little bit of time!
BM: Well, let's wrap this up. I guess just the last question: is there anything else that you wish more people knew about your work? We'll put lots of links together with this on the podcast, but is there anything you wish more decision-makers knew about small drones?
GC: Yeah, so I think one of the things... So there's, like, a worry about AI as well. Of course, you know, we should really be careful, but what I want to say is that you can make the hardware such that it cannot harm people, and even the intelligence, to be honest. So if we make the AI for a tiny drone, it's going to be efficient, it's going to be quite a small network, it's going to be quite limited in the end, even though, you know, I say insects are much more intelligent than people typically think. You know, it's not going to all of a sudden then take over the world with some devious scheme.
BM: Or be Minority Report or something like this.
GC: Exactly, exactly. You can also take care that you make it really safe from the point of view of, like, cyber attacks. Because there's this Black Mirror episode on these robotic bees, but there, yeah, they set the system up such that a hacker can hack into a central point and control all the bees. Now, first, like I say, with this drone, yeah, even if you take it over, you cannot harm someone. But still, you can easily set up your system so that such a takeover is not possible. And I think we should really think about this when we start deploying robots in the real world, so that we make it safe for humans, and I think that's possible. So if we work towards that together, then that's actually something we can achieve.
BM: Well, you started this conversation with safety as your starting point, and we finish with safety. So it's a good way to end it.
GC: Yeah, it's a good circle.
BM: Professor Guido de Croon, thank you so much for your time today.
GC: You're welcome. Thanks.