FAQs on the topic of AI at the University of Innsbruck

Notes on these FAQs
These FAQs are intended to provide an orientation framework for students, lecturers and researchers at the University of Innsbruck on questions relating to artificial intelligence (AI). They do not replace legal advice in individual cases. The FAQs are continuously updated. (Status April 2024)

Generative AI produces new content, such as text, images, audio and video, in response to prompts and on the basis of the data it was trained on; the automated origin of this content may not be obvious to a human. Examples of generative AI are AI chatbots such as ChatGPT, Bard or Bing. Other examples include DALL-E, Murf, Simplified and Midjourney.

Today's generative AI is based on probability distributions generated from training data. These model natural language based on the statistical relationships between words (or parts of words), images based on the statistical relationships between pixels, and so on. The generative process consists of generating content that is highly probable under such a probability distribution, conditional on the user input (prompt). In particular, today's generative AI has no explicit understanding (through lexicons, ontologies, interaction with the physical world, ...) of the words it reads or generates. Image generators have no understanding of the physical structures they depict (roads do not end behind bushes, spokes hold the hub centred in the rim under tension, ...). The statistical basis ensures far-reaching coherence of the generated content, but the restriction to statistics limits the reliability of generative AI. Output that appears plausible but is incorrect occurs frequently.
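The statistical principle described above can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus and then samples continuations from that empirical distribution. Real systems model far richer contexts, but the mechanism, generating what is probable rather than what is understood, is the same. The corpus and function names here are purely illustrative.

```python
import random
from collections import defaultdict

# Tiny toy corpus standing in for real training data (illustration only).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count word-pair frequencies: an empirical conditional distribution P(next | current).
counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def sample_next(word):
    """Draw the next word in proportion to how often it followed `word` in the corpus."""
    candidates = list(counts[word].keys())
    weights = list(counts[word].values())
    return random.choices(candidates, weights=weights)[0]

# Generate a short sequence conditioned on a starting word (the "prompt").
random.seed(0)
word, output = "the", ["the"]
for _ in range(6):
    word = sample_next(word)
    output.append(word)
print(" ".join(output))
```

The output is always locally coherent (every adjacent word pair occurred in the corpus), yet the model has no notion of what a cat or a mat is, which mirrors why larger generative systems can produce fluent but factually wrong text.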

Currently, text-based dialogue systems in particular are being supplemented by specialised modules that, for example, solve mathematical tasks or query facts from databases. This development will increase the reliability of such dialogue systems in certain areas. However, it cannot solve the general problem, as translating queries for such modules generally requires a minimum level of linguistic understanding, and not every type of query can be delegated in this way.
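The routing idea can be sketched as follows: queries that look like arithmetic are handed to an exact calculator module instead of being answered statistically, while everything else falls back to the generative model (represented here by a stub). All names and the regex-based routing are illustrative assumptions, not how any particular product works.

```python
import re

def calculator_module(expression):
    """Exact arithmetic on simple 'a op b' expressions -- a stand-in for a specialised tool."""
    match = re.fullmatch(r"\s*(\d+)\s*([+\-*])\s*(\d+)\s*", expression)
    if match is None:
        raise ValueError("unsupported expression")
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    return {"+": a + b, "-": a - b, "*": a * b}[op]

def answer(query):
    """Route queries containing arithmetic to the exact module; otherwise fall
    back to the (statistically generated, hence less reliable) model output."""
    match = re.search(r"(\d+\s*[+\-*]\s*\d+)", query)
    if match:
        return str(calculator_module(match.group(1)))
    return "<generated text>"  # stand-in for the generative model's output

print(answer("What is 17 * 23?"))  # routed to the calculator, exact result
print(answer("Tell me a story."))  # falls back to generation
```

Note how the router itself must already "understand" which fragment of the query is arithmetic; this is the general limitation mentioned above.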

The following current training programmes on AI are recommended:

General overview: KI-Campus | The learning platform for artificial intelligence

Foundations of Artificial Intelligence I (an in-depth course specifically on the history of AI; the entire course series I-VI offers an in-depth treatment of the topic): Kurs AI Foundations

AI and Sustainable Development Goals (specifically delving into the 17 SDGs and the role of AI in their implementation): AI and SDGs course

Further courses can be found at imoox.at

Internal training courses at the UIBK are available, for example, in the training programme of the Human Resources Development department. If required, the Human Resources Development team can be contacted to request special training courses on specialised topics.

Further training courses on digital tools etc. are available at eCampus, the training programme of the ZID's Digital Media and Learning Technologies department. There is also the self-study course Crash Course E-Didactics, which provides up-to-date content on AI.

A regularly updated collection of recommended further education programmes, especially on AI, can be found at Community-Kurs DMLT Mentoring on OpenOlat.

There are currently no scientifically substantiated figures on the resource and energy consumption of AI tools, and the many figures in circulation sometimes vary widely. It is clear, however, that queries to generative AI are many times more resource-intensive than queries to conventional search engines. The university's mission statement obliges all members of the university, students and employees alike, to take active responsibility for contributing sustainably to university and social development; it therefore expects them to examine the use of AI tools critically from both an ecological and a social perspective. The University of Innsbruck contributes to environmental sustainability in the area of energy and has sourced 100% of its electricity from certified renewable sources since 2018.

The fact that content is AI-produced does not change the fact that the person who subsequently uses this content publicly is responsible for it. Anyone who publishes AI content therefore makes the content their own to a certain extent by publishing it.

The source and training data naturally have a formative influence on the AI-generated output. However, no responsibility can be assigned to the AI system itself, as it has no legal personality. The person who compiles the source and training data cannot sufficiently foresee which output the AI would generate for which input (the so-called black box problem), so the output cannot be attributed to them in terms of content either. An exception could exist if the collection of source and training data violated due diligence, e.g. if a medical AI was trained on non-peer-reviewed texts.

Works that are freely and lawfully accessible on the Internet may be downloaded for text and data mining purposes and used for training, provided that the rights holder has not prohibited this in machine-readable form at the place where the works are retrieved and the user deletes the works immediately as soon as they are no longer required for text and data mining (Section 42h (6) öUrhG).
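One machine-readable mechanism by which rights holders commonly signal such a reservation is the site's robots.txt file (whether a given signal satisfies Section 42h (6) öUrhG is a legal question this sketch cannot answer). The following illustration, using Python's standard robotparser, checks a robots.txt body before a work would be downloaded for text and data mining; the user-agent name and file contents are invented for the example.

```python
from urllib import robotparser

def tdm_allowed(robots_txt, url_path, user_agent="tdm-crawler"):
    """Parse a robots.txt body and check whether `user_agent` may fetch `url_path`.

    robots.txt is only one possible machine-readable reservation mechanism;
    passing this check alone does not establish legal compliance."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url_path)

# Hypothetical robots.txt that reserves one directory against all crawlers.
robots = """\
User-agent: *
Disallow: /private/
"""
print(tdm_allowed(robots, "/articles/paper.html"))  # True
print(tdm_allowed(robots, "/private/draft.html"))   # False
```

A real pipeline would also honour the statute's deletion requirement, i.e. remove the downloaded works as soon as they are no longer needed for text and data mining.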

AI-generated outputs are generally free from copyrights of the system provider or the system itself. It currently remains to be clarified in which cases the copyright of persons whose works were used as training data for the system is 'continued' in the AI output, provided that the output is sufficiently similar to their works. It is very likely that such a continuation exists if the output of the system reproduces the original works one-to-one, as for example the New York Times claims in its legal dispute against OpenAI. If there is no such continuation, the outputs are at least copyright-free. Nevertheless, they may infringe other intellectual property rights, such as trade mark rights. It is also conceivable that the system manufacturer may exclude the use of content created with its system for certain, e.g. commercial, purposes.

Firstly, the output of a system does not give rise to a copyright for the system itself, as European copyright laws are based on a human act of creation, the realisation of which results in the creation of a work worthy of protection. An AI system is not human and therefore not a sufficient creator. In principle, the system manufacturer also does not establish any copyright in the system output, as it is not sufficiently creative in its creation process. Finally, the user may be entitled to a copyright in the output if they have merely used the AI system as a tool in their creative process. It is disputed whether this is already the case if the user selects a particularly creative prompt, or whether the user must also use the output, such as creating a collage from various AI-generated outputs.

Yes, image generators can be used to create royalty-free images, but with certain restrictions:

  1. Copyright status: Although in many cases the images generated by AI tools can be considered royalty-free, the actual copyright status depends on the specific terms of the service used. Some platforms or tools may have their own terms of use stating that the generated images are subject to certain restrictions or that the platform itself retains rights to the images.
  2. Protected content: When generating images based on or containing copyrighted works, there is a risk of infringement of third-party copyrights. It is important to ensure that the generated images do not contain any recognisable elements that are protected by copyright and whose use could constitute an infringement. The extent to which a "style" is worthy of protection is controversial. This is currently the subject of several proceedings.
  3. Commercial use: The conditions for the commercial use of AI-generated images vary depending on the platform and tool. Some providers may allow the commercial use of their generated images, while others may impose restrictions.
  4. Ethical aspects: In addition to legal aspects, there are also ethical considerations when using AI-generated images, especially when it comes to the depiction of people or sensitive topics.

In general, it is advisable to familiarise yourself with the specific terms of use of the image generation service used and seek advice if necessary to ensure that the use of the generated images does not cross any legal or ethical boundaries.

In principle, the processing of personal data by entering it into an AI system may also constitute a use of personal data that requires authorisation.

There is also a risk that an evolving system will learn certain correlations from prompts that contain personal data. Various companies already prohibit the entry of sensitive information in prompts. Against this background, it is advisable to formulate prompts as free of personal data as possible and, in case of doubt, to refrain from using the system. Anonymisation alone is often not enough.
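As a minimal technical precaution, obvious personal identifiers can be stripped from a prompt before it is sent to an external system. The sketch below uses simple regular expressions for e-mail addresses and phone numbers; these patterns are illustrative and deliberately incomplete (names, addresses and contextual clues will slip through), which is exactly why anonymisation alone is often not enough.

```python
import re

# Heuristic patterns for obvious personal identifiers (illustrative, not exhaustive).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d /-]{7,}\d"),
}

def redact(prompt):
    """Replace obvious personal identifiers with placeholders before a prompt
    leaves the user's machine. A heuristic pre-filter, not full anonymisation."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} removed]", prompt)
    return prompt

print(redact("Please summarise the mail from jane.doe@example.org, tel. +43 512 5070."))
```

In case of doubt, the safer option remains not to enter the data at all.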

Careful handling is necessary not only for personal data, but also for confidential data in general: everything that is entered into AI generators can potentially be read, stored and used by the providers. This means that whatever you enter in AI generators - or on the internet in general - can no longer be considered confidential.

At the moment, the University of Innsbruck does not recommend any specific AI tools. Under this link you will find AI tools for teaching and research that have been tested by the Department of Digital Media and Learning Technologies.

There is currently no centralised assumption of costs for the use of AI tools. However, licence fees can be paid from institute, project or department budgets.

The use of AI has already found its way into the academic field (in research and teaching) and offers many enriching possibilities. It therefore seems all the more sensible to address the critical and reflective use of AI as well as the possibilities and limitations of such tools in teaching and to try them out with students.

If the respective faculty has not formulated rules for dealing with AI, lecturers can assign AI the significance that they consider to be expedient or enriching in the respective course.

Regardless of the use of AI by the teacher, many students already work with AI tools on an ongoing basis. It is therefore recommended to discuss their use with the students and set rules for the course. For more information, see "To what extent can I as a teacher determine which AI tools are permitted and which are prohibited in my teaching and examinations?"

Information on dealing with AI at your faculty can be found on the homepage of your faculty.

Dealing with AI at the Faculty of Business Administration

Competences such as a reflective use of AI, a critical approach to sources, and knowledge of the functions and limitations of AI tools are important for working with them. Students should be able to acquire these skills during their degree programme as part of the methodological and research skills taught.

For teachers, this means a targeted engagement with AI tools, which leads to a greater understanding of them and knowledge of didactic possibilities for the use of AI tools in teaching.

We highly recommend the website KI-Campus | The learning platform for artificial intelligence, which offers a wide range of courses, videos and podcasts.

It is not yet possible to estimate the extent to which AI will change the examination culture at the University of Innsbruck. In previous exchange formats on "AI and university" as well as in the individual working groups dealing with this topic, attendance examinations, oral examinations or reviews of the writing process were often mentioned as a likely scenario in the future.

In any case, students are responsible for any errors generated by AI (including missing citations).

Current recommendations for the use of AI tools in courses and examinations can be found under "To what extent can I, as a teacher, decide for myself which AI tools are permitted and which are prohibited in my teaching and examinations?"

From the current perspective, a general revision of certain learning outcomes is not considered immediately necessary. Mentioning AI literacy/AI competence in the learning outcomes of a module would restrict the freedom, outlined above, of the teachers of this module to decide whether and how to address or use AI tools. In principle, however, it is up to each curriculum commission to examine and, where appropriate, implement possible adjustments.

In the previous exchange formats on "AI and university", the idea of introducing an oral defence of written work was raised. This would entail curricular changes.

AI tools increasingly intervene in the creation process of presentations or written work, for example. Different creation processes require different competences, even if the end product is the same. Those responsible for a course must define certain competences as learning outcomes and align teaching and examination methods accordingly. In courses with continuous assessment and final examinations, for example, the focus will often be on the development process, whereas in academic theses it is mainly on the end product.

Unless specifically regulated by the faculty, responsibility lies with the course director. They can decide what is and what is not permitted in their course and at what time. In the case of courses with examinations, the use of AI could be permitted for certain work assignments, for example. In any case, students are responsible for any errors generated by AI (including missing citations).

In any case, clear and transparent communication with students is important. Students should also be able to talk about their previous experiences of using AI tools in their studies in courses and thus contribute to the discourse and an informed approach.

If students are asked to use AI in courses, care should be taken to ensure that the recommended tools are accessible and usable for everyone (free of charge).

Further information at the University of Innsbruck is available from the following organisational units:

Digital Science Center DiSC

Institute of Computer Science

Institute for the Theory and Future of Law

Information page of the University of Innsbruck on the topic of AI

Austria-wide working group on AI in university teaching

The Vice-Rectorate for Digitalisation and Sustainability is available to assist with further questions and suggestions on the topic of AI and more complex issues - if required, also in coordination with the Vice-Rectorates for Research and Teaching and Students:

Vice-Rectorate for Digitalisation and Sustainability

digital-sustainable
