
Department of Architecture, Design and Media Technology

PhD defence by Stefan Hein Bengtson

On Thursday, February 16, Stefan Hein Bengtson will defend his PhD thesis: "Semi-Autonomous Control of an Exoskeleton using Computer Vision".


Seminar Room: 4.521
Rendsburggade 14, 9000 Aalborg, Denmark.

  • 16.02.2023 11:00 - 15:00

  • English

  • On location


Program

11:00 – 11:05 Moderator David Meredith welcomes the guests

11:05 – 11:50 Presentation by Stefan Hein Bengtson

11:50 – 12:30 Break

12:30 – 14:30 (latest) Questions

14:30 – 15:00 Assessment

15:00 Reception and announcement from the committee

Assessment committee

Associate Professor Claus Brøndgaard Madsen
Department of Architecture, Design and Media Technology, Aalborg University, Denmark

Professor Tal Oron-Gilad
Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Israel

Professor Norbert Krüger
The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Denmark

Supervisors

Professor Thomas B. Moeslund
Department of Architecture, Design and Media Technology, Aalborg University, Denmark

Professor Thomas Bak
The Technical Faculty of IT and Design, Aalborg University, Denmark

Information

The defence will be conducted in person.

If you wish to participate in the reception, please sign up via Doodle.

Abstract

This PhD thesis was carried out as part of the EXOTIC project, funded by Aalborg University from 2018 to 2021. The shared goal of this interdisciplinary project was to research an intelligent, tongue-controlled upper-limb exoskeleton for persons with tetraplegia. The main focus of the work presented in this thesis is the application of computer vision for semi-autonomous, intelligent control, with the aim of making the exoskeleton easier to operate.

A review of existing work on using computer vision for semi-autonomous control of assistive robotic manipulators revealed a tendency towards a clear-cut division of control between the human and the system. Such a clear division is easy to understand, easy to implement, and often improves the objective performance of the system, for example by completing predefined tasks faster. However, other studies indicate that such clear-cut schemes may be less satisfying to use, especially for persons with movement impairments, as it can feel like a loss of control when the machine takes over completely.

A semi-autonomous control scheme with an adaptive level of autonomy was therefore proposed, so that the user always retains a sense of control. This scheme was evaluated against a manual control scheme and a semi-autonomous control scheme based on a more clear-cut division of control. The three schemes were compared across two studies, the latter of which included only persons with movement impairments in their arms. Both studies indicated a statistically significant improvement across multiple scenarios when using the adaptive scheme instead of the other two, especially in more complex tasks where the hand of the exoskeleton had to be both oriented and positioned in a specific way.
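
The abstract does not detail how the adaptive level of autonomy is realised, but a common way to implement such shared control is to blend the user's command with an autonomously generated one according to a weighting factor. The Python sketch below illustrates that general idea only; the function names, the velocity commands, and the distance-based weighting rule are illustrative assumptions, not the control law used in the thesis.

```python
import numpy as np

def blend_commands(user_cmd, auto_cmd, autonomy_level):
    """Blend a user-issued velocity command with an autonomously generated
    one. autonomy_level in [0, 1]: 0 = fully manual, 1 = fully autonomous.
    Illustrative only -- not the control law used in the thesis."""
    a = float(np.clip(autonomy_level, 0.0, 1.0))
    return (1.0 - a) * np.asarray(user_cmd) + a * np.asarray(auto_cmd)

def autonomy_from_distance(distance_m, max_level=0.8):
    """Hypothetical rule: assist more as the detected target gets closer,
    but never take over completely (capped at max_level)."""
    return max_level * float(np.exp(-distance_m))

user_velocity = np.array([0.05, 0.00, 0.02])   # m/s, e.g. from the tongue interface
auto_velocity = np.array([0.03, 0.01, 0.04])   # m/s, towards a detected object
blended = blend_commands(user_velocity, auto_velocity, autonomy_from_distance(0.3))
print(blended)
```

Because the blend never reaches a weight of 1 in this sketch, the user's input always influences the motion, which is one simple way to avoid the feeling that the machine has taken over completely.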

The computer vision applied in the two studies of the semi-autonomous control schemes relied on classical methods, such as detecting objects by color thresholding. This was a deliberate choice to ensure reliable detection of the objects in the studies, as the main purpose was to test the semi-autonomous control rather than the computer vision. The applied computer vision algorithms would therefore not work outside the restricted environment of these two studies.
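
As a concrete illustration of such a classical method, the sketch below detects a single colored object by thresholding in HSV space with OpenCV. The HSV range, the assumption of one object per frame, and the file name are placeholders; the abstract does not specify the actual colors or parameters used in the studies.

```python
import cv2
import numpy as np

def detect_colored_object(bgr_image, hsv_lower, hsv_upper):
    """Return the pixel centroid of the largest blob within the given HSV
    range, or None if nothing is found. Illustrative sketch of classical
    color thresholding, not the exact pipeline used in the studies."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower), np.array(hsv_upper))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# Hypothetical HSV range for a red object in a grasping scenario.
frame = cv2.imread("frame.png")   # placeholder file name
if frame is not None:
    centroid = detect_colored_object(frame, (0, 120, 70), (10, 255, 255))
```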

Research on computer vision in less restrictive environments was also conducted as part of this thesis, namely pose estimation of objects from RGB images, where the pose information would be useful for automating the grasping of such objects with, for example, an exoskeleton. An existing state-of-the-art approach to pose estimation was extended to alleviate many of its shortcomings, resulting in improved pose estimation performance and a significant reduction in memory usage, while maintaining an inference speed suitable for real-time use. As part of the solution, a custom loss function was proposed that inherently handles symmetric objects, which can otherwise be an issue in pose estimation.
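
The exact form of the proposed loss is not given in the abstract, but a common way to make a pose loss symmetry-aware is to take the minimum error over the set of rotations under which the object looks identical. The NumPy sketch below shows that general idea; the point-based error and the example symmetry set are assumptions, not the thesis's formulation.

```python
import numpy as np

def pose_point_error(R_pred, t_pred, R_gt, t_gt, model_points):
    """Mean distance between the object's model points transformed by the
    predicted pose and by the ground-truth pose."""
    pred = model_points @ R_pred.T + t_pred
    gt = model_points @ R_gt.T + t_gt
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

def symmetry_aware_loss(R_pred, t_pred, R_gt, t_gt, model_points, sym_rotations):
    """Take the minimum error over all rotations that leave the object's
    appearance unchanged, so equivalent poses are not penalised.
    Illustrative only -- not the loss proposed in the thesis."""
    return min(pose_point_error(R_pred, t_pred, R_gt @ S, t_gt, model_points)
               for S in sym_rotations)

# Example: an object that looks identical after a 180-degree rotation about z.
sym_set = [np.eye(3), np.diag([-1.0, -1.0, 1.0])]
```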

Finally, the above approach was extended further by using a single shared model for all objects instead of multiple object-specific models. This reduced memory consumption even more, while fine-tuning parts of the shared model also boosted pose estimation performance.