CoreLab - Projects

Research in Robotic Sorting on a Cobot Demonstrator

VLM Cobot

This AI-powered demonstrator is designed to enable research in Human-Robot Interaction (HRI). It provides a versatile platform for evaluating different instruction modalities such as text, sketches, or speech. The setup features a robotic manipulator and an object sorting station. The primary objective of the system is to enable complex sorting tasks through natural communication, eliminating the need for traditional manual programming. By using AI-driven interpretation of user input, the platform allows even non-technical users to define workflows intuitively via their chosen instruction method.

The cobot HRI demonstrator serves as a testbed for investigating how intelligent robotic systems interpret human intent. The hardware setup consists of a robotic manipulator integrated with a specialized sorting station and a digital or physical instruction interface. The core innovation of this platform lies in its ability to perform autonomous sorting and organization without any manual programming by the user. Instead of writing code or defining coordinate-based paths, users interact with the system through high-level inputs that Large Vision-Language Models (VLMs) translate into executable robotic trajectories and logic.
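To make this pipeline concrete, the following sketch shows how a high-level instruction could be turned into a structured sorting plan that the manipulator then executes step by step. It is an illustration only: query_vlm, the JSON action schema, and the object names are hypothetical placeholders for whatever VLM backend and action format the demonstrator actually uses.

    import json

    # Stand-in for any local or cloud VLM backend; it returns a canned response
    # here so the sketch runs without external services.
    def query_vlm(instruction: str, image_path: str) -> str:
        return json.dumps({
            "actions": [
                {"op": "pick_and_place", "object": "red cube", "bin": "A"},
                {"op": "pick_and_place", "object": "blue cube", "bin": "B"},
            ]
        })

    def plan_from_instruction(instruction: str, image_path: str) -> list[dict]:
        """Ask the VLM for a structured sorting plan and validate it."""
        plan = json.loads(query_vlm(instruction, image_path))
        if any(a["op"] != "pick_and_place" for a in plan["actions"]):
            raise ValueError("unexpected action type in VLM response")
        return plan["actions"]

    if __name__ == "__main__":
        for action in plan_from_instruction("Sort the cubes by colour", "scene.jpg"):
            # A real system would translate each action into a robot trajectory here.
            print(f"pick {action['object']} -> bin {action['bin']}")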

The platform is specifically designed to integrate the latest hardware for VLM workloads, including the NVIDIA DGX Spark, which enables dedicated research into the performance of locally hosted LLMs and VLMs. This capability is particularly relevant for industrial applications in which companies require high-performance AI solutions that comply with strict data protection regulations.

To illustrate the platform's utility, a specific research question regarding the effectiveness of different instruction modalities was evaluated. The results of this study highlighted a significant advantage for visual input: the sketch-based interface achieved a 94.5% task completion rate, compared to 71.7% for text-based prompts. Furthermore, the average time required to instruct the system was reduced from 128 seconds to just 27 seconds, representing a 5.5-fold increase in efficiency.

Additionally, a separate study utilized the demonstrator to investigate user trust and to benchmark performance by comparing local VLMs against cloud-based API solutions. The findings indicated that the latest local VLMs can achieve performance levels and user trust ratings comparable to their API-based counterparts within the investigated sorting application. While the local models typically required more time to generate results, the study confirmed that they provide a viable, high-trust alternative for environments where data sovereignty is required. By automating the transition from human idea to robotic execution, the demonstrator showcases a future where complex automation tasks are accessible to a broader range of users while maintaining the security and performance standards required by modern industry.

See also: Publication "Visual Instruction as an Intuitive Interface for Robotic Sorting"

AI-based Airhockey Robot

Robot air hockey table

Our air hockey system combines a physical table with an AI-controlled robot that has been trained using reinforcement learning. The training takes place initially in a realistic simulation before the strategies are applied to the real table.

A key area of research is the Sim2Real gap – the challenge of reliably transferring skills learned in simulation to the real world.
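A common technique for narrowing this gap is domain randomization: perturbing physical parameters of the simulation during training so that the learned policy does not overfit to one exact physics model. The sketch below illustrates the idea with a generic Gymnasium wrapper; the attribute names and ranges are illustrative assumptions, and Pendulum-v1 merely stands in for the air hockey simulation.

    import numpy as np
    import gymnasium as gym

    class DomainRandomizationWrapper(gym.Wrapper):
        """Resample selected physics attributes of the simulation at every reset."""

        def __init__(self, env, ranges):
            super().__init__(env)
            self.ranges = ranges  # {attribute name: (low, high)}

        def reset(self, **kwargs):
            for name, (low, high) in self.ranges.items():
                setattr(self.env.unwrapped, name, np.random.uniform(low, high))
            return self.env.reset(**kwargs)

    if __name__ == "__main__":
        # Pendulum-v1 stands in for the air hockey simulation; a real setup would
        # randomize quantities such as table friction, puck mass or mallet inertia.
        env = DomainRandomizationWrapper(
            gym.make("Pendulum-v1"),
            ranges={"m": (0.8, 1.2), "l": (0.9, 1.1)},  # pendulum mass and length
        )
        obs, info = env.reset()
        print(obs)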


AI-powered Autonomous Screw Sorter

Screw sorter

This innovative screw sorter uses state-of-the-art AI technologies to precisely identify and sort screws using a camera. Through the use of unsupervised learning, the screws are automatically clustered based on their visual characteristics – without the need for any prior manual classification.
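As a rough sketch of this clustering step, the example below groups feature vectors with k-means, assuming that each vector is an image embedding of a single screw (for instance from a pretrained vision model). Random vectors stand in for real embeddings so the example is self-contained, and the cluster count is an assumption.

    import numpy as np
    from sklearn.cluster import KMeans

    # In the real system each feature vector would be an image embedding of one
    # screw; synthetic vectors are used here so the example runs stand-alone.
    rng = np.random.default_rng(0)
    features = np.vstack([
        rng.normal(loc=0.0, scale=0.3, size=(30, 16)),   # e.g. short screws
        rng.normal(loc=2.0, scale=0.3, size=(30, 16)),   # e.g. long screws
        rng.normal(loc=-2.0, scale=0.3, size=(30, 16)),  # e.g. bolts
    ])

    # Group visually similar screws without any manual labels.
    kmeans = KMeans(n_clusters=3, n_init="auto", random_state=0).fit(features)
    print(np.bincount(kmeans.labels_))  # number of screws assigned to each bin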

Currently a work in progress.

Autonomous E-Scooter & Simulator

Autonomous E-Scooter with different sensors for environment perception

Research projects on developing an autonomous E-Scooter, as well as an accompanying simulation.

Automated e-scooters could offer an efficient and sustainable solution to urban mobility challenges. They could move autonomously to charging stations or high-traffic areas, thereby improving availability and reducing operating costs. Furthermore, they could respond flexibly to demand, thereby enabling new mobility concepts in urban areas.

E-Scooter Simulator

Sustainable transport is key to building smart and environmentally friendly cities. The use of e-scooters in urban transport is becoming increasingly widespread. It is therefore important to recreate scenarios that help us understand how e-scooters fit into urban transport settings, so that participants' riding behaviour can be studied in a safe and controlled environment. To this end, research and development of a simulator is ongoing.

Project SAFES - Sustainable AI For Energy-efficient Systems

Energy measurement test bench

The SAFES project at Heilbronn University develops precise measurement methods and open-source energy models to capture the actual power consumption of AI hardware. By analyzing devices ranging from edge computing to GPU servers, the initiative aims to provide a reliable foundation for "Green AI" in industrial and automotive sectors. Supported by the Carl Zeiss Foundation, the project promotes sustainable technology development by making the environmental costs of AI transparent and manageable.

Research Project SAFES: Sustainable AI For Energy-efficient Systems

The research project SAFES (Sustainable AI For Energy-efficient Systems) at Heilbronn University of Applied Sciences (HHN) addresses one of the most pressing challenges in modern technology: the high energy consumption of Artificial Intelligence. While AI systems continue to revolutionize industries such as mobility and manufacturing, their ecological footprint remains a critical concern. SAFES aims to bridge the gap between technological performance and environmental sustainability by fostering the development of "Green AI."

Core Objectives and Methodology

The primary goal of SAFES is to develop precise methods for measuring the actual power consumption of AI systems. Current studies indicate that integrated measurement tools provided by chip manufacturers often deliver inaccurate data, which does not provide a reliable basis for sustainable development. To solve this, the research team, led by Professor Marco Wagner, is establishing a specialized measurement laboratory. This lab evaluates the real-world power requirements of various hardware components, ranging from high-performance GPU servers and workstations to edge devices and embedded systems used in industrial robots or autonomous driving.
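For orientation, the snippet below sketches the kind of baseline such measurements are compared against: it samples the GPU's self-reported power draw via nvidia-smi and integrates it into an energy figure. This covers only the vendor-reported side, not the lab's external test-bench method; it assumes an NVIDIA GPU with nvidia-smi on the PATH, and the duration and sampling interval are arbitrary choices.

    import subprocess
    import time

    def sample_gpu_power_watts() -> float:
        """Read the GPU's self-reported power draw via nvidia-smi (in watts)."""
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            text=True,
        )
        return float(out.strip().splitlines()[0])

    def measure_energy_wh(duration_s: float = 10.0, interval_s: float = 0.5) -> float:
        """Average sampled power over the window and convert to watt-hours."""
        samples = []
        start = time.time()
        while time.time() - start < duration_s:
            samples.append(sample_gpu_power_watts())
            time.sleep(interval_s)
        mean_power_w = sum(samples) / len(samples)
        return mean_power_w * duration_s / 3600.0

    if __name__ == "__main__":
        print(f"Self-reported GPU energy: {measure_energy_wh():.3f} Wh")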

Impact and Open Science

The project is designed to be highly practical and collaborative. The data collected from these measurements are used to create realistic energy models, which are then released as open-source tools. This allows companies and research institutions to evaluate and optimize their own AI applications regarding energy efficiency. By providing these transparent datasets, SAFES enables the industry to develop systems that are not only powerful but also resource-efficient.

Funding and Timeline

SAFES is funded by the Carl Zeiss Foundation (CZS) as part of the "CZS Forschungsstart" program. The project is scheduled to run until August 2027, serving as a vital link between science, society, and industry to ensure a more responsible digital transformation.

You can find more detailed information in the official press release: https://www.hs-heilbronn.de/de/safes

Autonomous Maze

Autonomous maze

The autonomous maze uses AI to steer a ball through a maze by adjusting the tilt of the platform. It allows us to explore how reinforcement learning systems can learn through interaction within a simulated environment and how a trained policy can be transferred to a physical device.

A central area of research in this context is the Sim2Real gap – the challenge of reliably transferring skills learned in simulation to real-world systems.

In this work, we investigate a motion‑control task in which a ball is steered through a maze mounted on a tilting plate. We developed a simulation environment to train reinforcement learning models and then transfer the resulting policies to a physical setup consisting of two servo motors and a camera that provides state observations. Our research focuses on how the fidelity of the simulation affects the transferability of various reinforcement learning approaches. Specifically, we compare model‑free methods such as Soft Actor-Critic and Proximal Policy Optimization with model‑based approaches like DreamerV3.
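A minimal version of such a training environment can be sketched in the Gymnasium API, with the plate tilt as the action and the ball state as the observation. The dynamics, limits and reward below are simplifying assumptions (walls and maze geometry are omitted), not the lab's actual simulation; a policy such as SAC or PPO could then be trained on it before transfer to the servo-and-camera setup.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class TiltMazeEnv(gym.Env):
        """Simplified ball-on-a-tilting-plate environment (illustrative sketch)."""

        def __init__(self, dt=0.02, g=9.81, max_tilt=0.15):
            self.dt, self.g, self.max_tilt = dt, g, max_tilt
            # Action: plate tilt around the x and y axes (radians).
            self.action_space = spaces.Box(-max_tilt, max_tilt, shape=(2,), dtype=np.float32)
            # Observation: ball position (x, y) and velocity (vx, vy).
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
            self.goal = np.array([0.4, 0.4], dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.state = np.array([-0.4, -0.4, 0.0, 0.0], dtype=np.float32)
            return self.state.copy(), {}

        def step(self, action):
            tilt = np.clip(action, -self.max_tilt, self.max_tilt)
            pos, vel = self.state[:2], self.state[2:]
            vel = vel + self.g * np.sin(tilt) * self.dt  # gravity component along the incline
            pos = np.clip(pos + vel * self.dt, -1.0, 1.0)
            self.state = np.concatenate([pos, vel]).astype(np.float32)
            dist = float(np.linalg.norm(pos - self.goal))
            terminated = dist < 0.05
            reward = -dist + (10.0 if terminated else 0.0)
            return self.state.copy(), reward, terminated, False, {}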

Model Cars for Research in Autonomous Driving

Model car

Our model vehicle is equipped with state-of-the-art sensor technology – including LiDAR, cameras and a wide range of other sensors. A high-performance GPU enables complex algorithms to be processed directly on board.

This system provides a safe and cost-effective platform for developing and testing autonomous driving technologies and driver assistance systems on a small scale – without any of the risks associated with real vehicles.
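As a flavour of the driver-assistance logic that can be prototyped on such a platform, the sketch below checks a LiDAR scan for the nearest obstacle in front of the vehicle. The scan format, field of view and distance threshold are illustrative assumptions rather than the platform's actual software stack.

    import numpy as np

    def nearest_obstacle_ahead(ranges, angles, fov_deg=30.0):
        """Return the closest LiDAR return within a frontal field of view.

        ranges are distances in metres, angles are beam angles in radians
        (0 = straight ahead); both format and threshold are assumptions.
        """
        mask = np.abs(np.degrees(angles)) <= fov_deg / 2.0
        return float(np.min(ranges[mask])) if np.any(mask) else float("inf")

    # Synthetic 360-beam scan standing in for a real sensor message.
    angles = np.linspace(-np.pi, np.pi, 360)
    ranges = np.full(360, 5.0)
    ranges[175:185] = 0.6  # simulated obstacle roughly 0.6 m straight ahead

    if nearest_obstacle_ahead(ranges, angles) < 1.0:
        print("Obstacle within 1 m: trigger emergency stop")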
