The Department of Creative Technologies is intensifying its research in the field of generative AI. The aim is to integrate new technologies seamlessly into the world of work and everyday life and to sustainably optimise existing processes. The department develops practical, user-oriented and versatile algorithms, ranging from improving AI security and data protection to combating disinformation and supporting the education sector. Find out more about our pioneering projects and how we are advancing the trustworthiness of generative models.
GenAIedTech: Content Creation with Generative Artificial Intelligence for Interactive Educational Technologies
The GenAIedTech project investigates the use of generative AI to create educational content for digital learning technologies. The aim is to efficiently generate customised multimedia content and to develop human-AI co-creation processes. The research focuses on the creation, analysis, planning and implementation of learning experiences. The project's outcomes include a generative AI pipeline, a benchmark dataset and demonstrator use cases that show EdTech companies the potential of AI; an illustrative sketch of such a pipeline follows the project details below. In addition, ethical and regulatory aspects will be taken into account to ensure compliance with applicable standards.
Funding: FFG COIN Aufbau - FH-Forschung für die Wirtschaft 2024
Duration: 05/2025 – 04/2029
Partners: TU Graz, Paris Lodron Universität Salzburg, PH Salzburg, Österreichisches Institut für angewandte Telekommunikation (ÖIAT)
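The human-AI co-creation idea can be pictured as a minimal pipeline in which an LLM drafts a piece of learning content and an educator reviews and approves it. The Python sketch below is purely illustrative and not part of the project's published work; the QuizItem structure, the prompt and the injected llm callable are assumptions made for this example.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class QuizItem:
    question: str
    options: List[str] = field(default_factory=list)
    answer_index: int = 0
    approved: bool = False  # set by a human reviewer in the co-creation step

PROMPT_TEMPLATE = (
    "Create one multiple-choice question with four options about the "
    "following learning material and mark the correct option.\n\n{material}"
)

def generate_quiz_item(material: str, llm: Callable[[str], str]) -> QuizItem:
    """Drafting stage: ask an LLM to propose a quiz item from source material."""
    raw = llm(PROMPT_TEMPLATE.format(material=material))
    # A real pipeline would request and parse structured output (e.g. JSON);
    # for brevity the raw model text is stored as the question.
    return QuizItem(question=raw)

def human_review(item: QuizItem) -> QuizItem:
    """Co-creation stage: an educator inspects and approves the draft."""
    item.approved = True  # placeholder for an interactive review interface
    return item

if __name__ == "__main__":
    dummy_llm = lambda prompt: "Q: What does 'co-creation' mean here? (A) ... (B) ..."
    draft = generate_quiz_item("Human-AI co-creation pairs model drafts with expert review.", dummy_llm)
    print(human_review(draft))
```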
Guardrails for RAG: Enhancing Privacy and Security (GREP)
GREP is a Google-funded project that addresses privacy and security risks in Retrieval Augmented Generation (RAG) systems. It develops targeted security measures to minimise the risk of unintentional disclosure of sensitive data through RAG architectures. The research covers the integration of guardrails into open-source solutions, the investigation of threat scenarios and the implementation of data anonymisation techniques; a minimal example of such a guardrail is sketched after the project details below. The aim is to create secure, privacy-compliant AI systems in line with regulatory requirements.
Funding: Google Privacy Faculty Award 2024
Duration: 07/2025 – 06/2026
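To illustrate where such a guardrail could sit, the sketch below redacts e-mail addresses and phone numbers from retrieved passages before they are assembled into a prompt, i.e. between the retriever and the generator. It is a minimal, assumption-laden example (the regex-based redaction, the PII_PATTERNS table and the function names are illustrative) and does not reflect the project's actual implementation.

```python
import re
from typing import List

# Illustrative patterns only; production systems typically combine
# NER-based detection with rule-based redaction.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before prompt assembly."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def build_prompt(question: str, retrieved_chunks: List[str]) -> str:
    """Guardrail step between the retriever and the generator."""
    safe_context = "\n\n".join(redact(chunk) for chunk in retrieved_chunks)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{safe_context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    chunks = ["Contact Jane Doe at jane.doe@example.org or +43 660 1234567."]
    print(build_prompt("Who is the contact person?", chunks))
```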
FAIR-AI: Fostering Austria's Innovative Strength and Research Excellence in Artificial Intelligence
FAIR-AI analyses the challenges of complying with European AI law (the EU AI Act) in the development and application of AI technologies. The project aims to identify and minimise risks at the technical, managerial and social levels. A bottom-up strategy is used to develop practical use cases that highlight typical pitfalls in AI projects. A key result is a recommendation system that supports companies in implementing AI in a risk-conscious way; a simplified illustration of the underlying idea is sketched after the project details below.
Duration: 01/2024 – 12/2026
Partners: AIT Austrian Institute of Technology GmbH, Fachhochschule St. Pölten ForschungsGmbH, Brantner Österreich GmbH, Fabasoft R&D GmbH, DIBIT Messtechnik GmbH, onlim GmbH, Fachhochschule Technikum Wien, Research Institute AG & Co KG, Siemens Aktiengesellschaft Österreich, Österreichische Akademie der Wissenschaften, Fachhochschule Salzburg GmbH, JOANNEUM RESEARCH Forschungsgesellschaft mbH, Wirtschaftsuniversität Wien, eutema GmbH, Semantic Web Company GmbH, Women in AI Austria, WAI Austria, MD meinDienstplan GmbH, Austrian Standards International - Standardisierung und Innovation, Technische Universität Wien, Hochschule für Angewandte Wissenschaften Burgenland GmbH, SCHEIBER Solutions GmbH, Software Competence Center Hagenberg GmbH.
Program: AI AUSTRIA Initiative, FFG (Artificial Intelligence Mission Austria)
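To give a sense of what a recommendation system of this kind might do, the sketch below maps coarse use-case attributes onto indicative risk tiers inspired by the EU AI Act. It is a simplified illustration under strong assumptions (the UseCase attributes, the HIGH_RISK_DOMAINS set and the returned recommendations are invented for this example) and is not the FAIR-AI system; a real assessment requires legal and domain expertise.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    name: str
    domains: List[str] = field(default_factory=list)  # e.g. "recruitment", "chatbot"
    interacts_with_humans: bool = False

# Coarse, illustrative shortlist; the AI Act's high-risk categories are far broader.
HIGH_RISK_DOMAINS = {"recruitment", "credit_scoring", "critical_infrastructure"}

def assess(use_case: UseCase) -> str:
    """Map coarse use-case attributes to an indicative risk tier with a recommendation."""
    if HIGH_RISK_DOMAINS & set(use_case.domains):
        return "high-risk: conformity assessment, risk management and logging required"
    if use_case.interacts_with_humans:
        return "limited risk: transparency obligations (disclose AI interaction)"
    return "minimal risk: voluntary codes of conduct recommended"

if __name__ == "__main__":
    print(assess(UseCase("CV screening assistant", domains=["recruitment"])))
    print(assess(UseCase("Customer support chatbot", interacts_with_humans=True)))
```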