Robotics Research Labs

The Visual Analysis and Perception lab conducts research on how to build systems that automatically sense people and infer information about them. The sensing primarily comes from cameras.

This lab has a Vicon motion tracking system capable of tracking objects with very high accuracy. Because it is located next to the Autonomous Vehicles Lab, control and estimation experiments can be carried out with a variety of autonomous vehicles without attaching anything but small reflective markers to them. This allows rapid prototyping for a range of projects and experiments.

The Biomechanics Research Group is focused on computer simulation of musculoskeletal systems, which can be humans actuated by muscles, robots actuated by motors, or combinations of these, such as exoskeletons. The group has developed the AnyBody Modeling System and works intensively on further development of simulation technology as well as its applications.

The Robotics, Vision and Machine Intelligence (RVMI) lab performs research and development of adaptive robotic systems that can naturally communicate with humans, perceive their environment, and act autonomously in it. Research in the lab covers ways of intuitively teaching robots to perform new tasks without expert robotics knowledge, efficient visual and multimodal perception, and sensor fusion and machine learning capabilities.

This lab maintains a range of autonomous vehicles for indoor as well as outdoor use. The focus is on helicopters and wheeled robots, which the lab supports by providing test and operation facilities such as base stations, sensor suites, and GPS and telemetry equipment.

The focus of the Robotics & Automation Group at the Department of Mechanical and Manufacturing Engineering, Aalborg University, is on the design and implementation of model- and sensor-based control systems, human-machine interfaces, automatic and real-time shop-floor control, and mobile robots (co-workers).

The Social Robots Lab aims at developing the core technologies needed to make service robots socially intelligent and capable of establishing durable relationships with their users. Its research focuses on multimodal sensing (including speech and vision), human-robot interaction, and user modeling. Multimodal sensing enables the robots to locate, track, and identify persons, to detect emotions, and to recognize users' intentions, including speech content. The lab possesses two self-constructed social robots, which have been demonstrated at a number of high-profile public events, as well as one Nao robot and several TurtleBots.

The Human Robot Interaction Lab is an interdisciplinary and cross-departmental research group that focuses on the challenges that arise when robots are envisioned to work side by side with humans in dynamic environments, both in production contexts and in societal contexts such as health care, education, or commerce. This will require robots to become socially accepted, able to analyze human intentions in meaningful ways, and proactive, which in turn calls for new methods and success criteria for interaction with single or multiple users in the pursuit of common goals relevant to the given context.