
Staff scientist with a focus on AI trustworthiness modeling in human-robot interactions
- Umeå, Västerbotten
- Temporary
- Full-time
Project description

The purpose of the research project is to identify factors that may undermine the trustworthiness of data and models used in human-robot interaction, and to further investigate common and specific trustworthiness issues related to fairness and safety in human-robot interaction applications. An AI trustworthiness model will be developed and validated to ensure that both data and models of human interaction are robust, especially in the selected industry use cases.

This staff scientist position is linked to the research group Deep Data Mining in the Department of Computing Science, which focuses on fusing data science and artificial intelligence and on developing AI trustworthiness (e.g., fairness, privacy, safety) models. The project is part of the EU project XSCAVE, whose ambition is the large-scale deployment of autonomous heavy mobile machines in the earthmoving, forestry, and urban logistics industries. The XSCAVE consortium involves eleven partners from across Europe.

To meet these needs, we are now looking for a staff scientist. The position is temporary, full-time, for 12 months, and is expected to start in February 2026 or by agreement.

The project offers opportunities for:
- Participating in pioneering research and innovation initiatives, gaining interdisciplinary knowledge.
- Internal and international collaborations with academic research groups and industrial companies in the field of AI robotics, large language modelling, simulation, mobile robotics, and off-road heavy equipment.
Duties:
- Investigate factors that may undermine the trustworthiness of data and models used in human-robot interaction.
- Develop and validate an AI trustworthiness model that addresses bias and ensures safety to promote transparency.
- Collaborate with partners to enhance the trustworthiness model by mitigating identified risks associated with large language models (LLMs).
- Dissemination of research results (e.g., peer-reviewed scientific publications, presentations).
Qualifications:
- A PhD degree in computer science, mathematics, or engineering physics, preferably with a focus on data analysis, AI trustworthiness, or human-robotics development.
- Hands-on experience in the field of data science.
- Communication skills in English.
- Practical experience in AI trustworthiness-related topics (e.g., fairness, privacy, explainable AI, or safety).
- Experience in machine learning, causal inference, image processing, human-robot interaction, or large language models.
- Experience in analyzing multimodal data (e.g., text, sensor data, images).
- Ability to work independently, with effective problem-solving skills and a collaborative mindset.
- Ability to communicate and document your work.
- Ability to disseminate research achievements through peer-reviewed publications.
Your application should include:
- A cover letter with a brief description of your research interests and a statement of why you are interested in the position (no more than two A4 pages).
- CV with publication list.
- Verified copy of doctoral degree certificate and other relevant degree certificates.
- Copy of doctoral thesis and up to five relevant articles.
- Other documents that the applicant wishes to be considered.
- Contact information for two persons willing to act as references.