Professor Schramm, what does CAPTN Fjord Area do?
23 October 2024

Prof. Dr. Hauke Schramm has been a professor of computer science at Kiel University of Applied Sciences since 2007 and a secondary member of Kiel University since 2014. He has been actively involved in CAPTN Fjord Area since the beginning of the project. His field of work is sensor technology, and his research group focuses on object recognition, a crucial aspect of autonomous shipping. In this interview, he discusses the project and its future beyond the current funding phase.
What is the focus of CAPTN Fjord Area?
Our goal is to build a ship that can, in the foreseeable future, autonomously cross the fjord or navigate on it. To achieve this, we must be able to perceive the ship’s surroundings and plan routes accordingly: the ship must travel safely from point A to point B, avoiding obstacles and performing the necessary maneuvers.
What technical tools does such a self-driving ship require?
First, we need environmental perception, meaning we must understand what is happening around the ship within a relevant area. For example, which other maritime objects do we see—buoys, ships, swimmers, and so on? That is the task of my research group. We pass this information to another research group, which integrates it into a nautical chart and combines it with other data. There are systems that allow ships to share information about their direction, speed, and type. We merge this data with ours to ensure that the ship can plan a legally compliant route.
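To make that data merging concrete, here is a minimal sketch in Python of how perceived objects might be associated with incoming AIS reports before being handed to route planning. All class and field names are hypothetical; this is an illustration, not the project’s actual pipeline.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Detection:          # an object found by the perception system
    x: float              # estimated position in a local chart frame (m)
    y: float
    label: str            # e.g. "sailboat", "buoy"

@dataclass
class AisReport:          # a message received via AIS
    x: float
    y: float
    speed: float          # knots
    course: float         # degrees
    ship_type: str

def merge(detections: list[Detection], ais: list[AisReport],
          max_dist: float = 50.0) -> list[dict]:
    """Associate each detection with the nearest AIS report, if close enough."""
    merged = []
    for det in detections:
        nearest = min(ais, key=lambda r: hypot(r.x - det.x, r.y - det.y),
                      default=None)
        if nearest and hypot(nearest.x - det.x, nearest.y - det.y) <= max_dist:
            merged.append({"label": det.label, "speed": nearest.speed,
                           "course": nearest.course})
        else:  # object seen by the sensors but not transmitting AIS
            merged.append({"label": det.label, "speed": None, "course": None})
    return merged
```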
Does that mean CAPTN Fjord Area is quite extensive?
It is enormously extensive! It requires several scientific research groups and companies to contribute their expertise.
Who is involved in CAPTN Fjord Area?
It is a collaboration between various university research groups working on topics like route planning and data-source integration. We also have industry partners, such as Anschütz, which focuses on remote control of the ship, and Addix GmbH, which ensures the data connection between the ship and the land station.
How long has the project been running?
Since 2021, meaning for about three years. We are currently in the second funding phase, which runs until mid-2025, and we are applying for a third phase to continue immediately afterward.
What does your research group work on?
We focus on environmental perception, trying to determine what is happening around the ship. Our 21-meter-long research vessel, MS Wavelab, is equipped with optical cameras and LiDAR sensors. We are currently experimenting with additional sensors, such as a special SWIR (Short Wave Infrared) camera. Since it operates in a different wavelength range than LiDAR, we hope it will allow us to detect objects even in poor weather conditions.
How does the SWIR camera differ from LiDAR?
They are two completely different sensors. LiDAR is a laser that scans the surroundings and provides a point cloud, giving us the exact distance to each object. The SWIR camera, on the other hand, works like a regular optical camera but operates in a different wavelength range, which allows it to see through certain conditions, such as haze, better than visible light.
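The difference can be illustrated with a small, purely illustrative Python snippet: LiDAR returns 3-D points with direct range information, whereas a camera image (optical or SWIR) contains only pixel intensities, from which distance must be inferred indirectly.

```python
import numpy as np

# Hypothetical LiDAR return: three points as (x, y, z) in metres, sensor at origin.
points = np.array([[12.0,  3.5, 0.2],
                   [12.1,  3.6, 0.3],
                   [45.0, -8.0, 1.0]])

ranges = np.linalg.norm(points, axis=1)   # direct distance of every return
print(f"closest return: {ranges.min():.1f} m")

# A SWIR or optical camera image carries no such ranges; distance has to be
# estimated indirectly, e.g. from known object sizes or stereo geometry.
```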
LiDAR and SWIR—are these the sensors an autonomous system needs for navigation?
We are trying to determine the optimal sensor combination. Some research initiatives, like Zeabuz with its autonomous ferry Estelle in Stockholm, rely on LiDAR, radar, and AIS (Automatic Identification System), which allows ships to identify each other and share speed and direction data. However, Estelle does not use cameras, which we consider problematic because the system then cannot detect, for example, a swimmer in the water. We believe a combination of cameras, AIS, LiDAR, and radar is essential.
How does the ship learn to recognize what the sensors detect?
The system receives training data that is annotated, meaning it is explicitly labeled with what is present in a given area—such as a sailboat, a buoy, or a motorboat—and its size. We then draw a box around the sailboat, assign an object class, and label it as a “sailboat.” This method is called supervised learning. The system learns based on the annotated data we provide.
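As an illustration of what such an annotation might look like, here is a sketch in Python following the common convention of a class label plus a bounding box; the schema and file name are hypothetical, not the project’s actual format.

```python
# One annotated training example: which objects are in the image and where.
annotation = {
    "image": "fjord_2024_06_21_1432.jpg",   # hypothetical file name
    "objects": [
        {"class": "sailboat", "bbox": [412, 220, 180, 95]},  # x, y, width, height in pixels
        {"class": "buoy",     "bbox": [890, 310,  24, 30]},
    ],
}
# In supervised learning, many thousands of such (image, labels) pairs are fed
# to the network so that it learns to predict the boxes and classes itself.
```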
Is such a system faster than a human at recognizing objects?
It depends on the system. We use neural networks with different numbers of fundamental units, or neurons, which are interconnected much like in our brains. The more neurons in the network, the longer the processing takes, but the more accurate it becomes. The system is scalable: if we need fast recognition that does not have to be extremely precise, we use fewer neurons; for the most precise recognition, the system needs more time per object.
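This trade-off can be sketched in a few lines of Python with PyTorch: two networks that differ only in width (neurons per layer). The numbers are illustrative and not from the project.

```python
import time
import torch
import torch.nn as nn

def make_net(width: int) -> nn.Sequential:
    """A small fully connected network whose size is set by `width`."""
    return nn.Sequential(nn.Linear(1024, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, 10))

x = torch.randn(1, 1024)              # one dummy input
for width in (64, 2048):              # "few neurons" vs. "many neurons"
    net = make_net(width)
    start = time.perf_counter()
    with torch.no_grad():
        net(x)
    print(f"width {width}: {(time.perf_counter() - start) * 1e3:.2f} ms")
# The wider network is slower per input but, given enough training data,
# typically reaches higher accuracy.
```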
Does that mean you need an enormous amount of data?
Yes, absolutely. That is why we have now installed our sensors at various locations on the fjord. Currently, they are on a ship from the Kiel Tug and Ferry Company (SFK), on the measurement tower at the Marina Arsenal, and, of course, on our research catamaran, MS Wavelab. Soon, a ferry on the Kiel Canal will also be equipped with cameras, LiDAR, and other sensors. We collect as much data as possible, covering different weather conditions, various traffic scenarios, and all possible types of traffic participants. Initially, for example, we had trouble recognizing submarines because the system had seen only a single one; when the perspective changed, it no longer recognized the vessel. Now that we have seen many submarines, we can identify them fairly reliably. We are in the process of annotating tens of thousands of images, which is a huge effort at the beginning.
Do you do this manually?
Yes, initially. We must manually integrate every object captured by the cameras into our framework and explain to the system what it is seeing. That is a lot of work—especially for images from Kiel Week, where there are many objects. But over time, it gets easier. We now have trained systems that automatically annotate most of the objects in an image. We are mainly interested in objects the system has not yet recognized, which we manually add or correct.
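A minimal sketch of such model-assisted annotation, assuming a trained `detector` that returns detections with confidence scores (both the function and the threshold are hypothetical):

```python
CONFIDENCE_THRESHOLD = 0.8   # assumed cut-off, not a project value

def pre_annotate(image, detector):
    """Split detections into accepted pre-labels and cases for manual review."""
    auto_labels, needs_review = [], []
    for det in detector(image):   # det: {"class": ..., "bbox": ..., "score": ...}
        if det["score"] >= CONFIDENCE_THRESHOLD:
            auto_labels.append(det)       # trusted: stored as an annotation directly
        else:
            needs_review.append(det)      # uncertain: a human confirms or corrects
    return auto_labels, needs_review
```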
What results do you expect from the project?
Our goal is to autonomously cross the Kiel Fjord during Kiel Week and the Windjammer Parade, and we are well on the way there. The key is that we have a real research vessel to conduct experiments and test our systems under real conditions. This is incredibly valuable! Without such research equipment and infrastructure, building an autonomous system would not be possible. We must integrate the subsystems, from environmental perception to route planning and network-capacity forecasting, under real conditions to ensure they function correctly. We are confident that soon, perhaps in a few months, all systems will work together and autonomous operation under simple conditions will be possible, meaning the ship will recognize and avoid obstacles while following a designated route.
Have programmed maneuvers already been developed?
Yes, we have worked on route planning, and basic avoidance maneuvers have already been programmed.
What is still missing for fully autonomous operation?
System integration. There are still a few remaining bugs to fix, but the basic functionalities are in place, and we hope that by the end of the year we will be able to operate partially autonomously.
Are there further challenges?
Yes, many—especially in environmental perception. What works well in good weather might not work as well in rain, fog, or rough seas. Another challenge is system integration. For example, our LiDAR sensor is currently being disrupted by the ship’s radar, causing electromagnetic compatibility (EMC) issues. We need to solve this, possibly by repositioning the sensor. Issues like these only become apparent in practice when integrating systems. That is why having a physical infrastructure and a research vessel for experimentation is so crucial.
Thank you for the interview.
Find the interview on YouTube (German).
(Translated with the assistance of ChatGPT.)