AI Statement - Dealing with AI

AI at the Department of Clinical Pharmacy (DCP)

Embracing AI will never involve replacing the human touch in education; it means augmenting our abilities and optimising our processes. The department is committed to ensuring that AI integration serves as a safe and supportive augmentation that enables teaching, research, and administration to be delivered sustainably. We believe that artificial intelligence has the potential to enrich many aspects of academic life, and our objective is to inform and signpost staff, students, and researchers towards practical information and resources.

The Office of the Vice-Rector for Digitalization and Sustainability hosts a collection of FAQs relating to artificial intelligence (AI) that provide an orientation framework for students, lecturers and researchers at the University of Innsbruck. Viewing these FAQs is a recommended first step for those wishing to familiarise themselves with AI at the university.

[Image: the UIBK AI website]

View the university AI FAQs

Additional advisories from department staff

Students should be aware that they need to become competent practitioners of their chosen science profession without relying on AI tooling. Such tools will be common in their future careers; however, students must not allow a dependency to form if they wish to be fully competent. Even where a lecturer permits AI contributions, students should limit the total AI 'assisting' contribution to a maximum of 20% of any piece of work, against an 80% human proportion.

It is increasingly common for students to use AI tools to help start a piece of scientific review or reporting work, for example through outline planning or abstract drafting. Those doing so should understand that AI tooling cannot take responsibility for what is produced.

Responsibility always sits with the user of the AI model. A model cannot fail an assignment; it can only fail to please the user. Students should take full responsibility for any facts a model suggests and treat its output as unverified claims until they have checked them.

A recognised danger of being a consumer of conversational AI tooling is that we can unwittingly become conditioned to outsource our critical thinking practice to those tools. To help protect against this, consider using the tools as opponents to positions you hold.

In close adherence to the central recommendations, the department advocates the safe adoption of AI-powered tools to help fulfill its goals and to equip students and staff for the AI-supported labor market of the future. Adoption is suggested across teaching, learning, research, and task automation.

Department members are invited to:

  • Utilise AI content creation tools to plan lecture materials, where the majority of the creative contribution is human,
  • Utilise agentic tools that provide simulation, modelling, and summarising for the purpose of skill development; and
  • Develop case studies of AI applications to inspire curriculum innovation and interdisciplinary collaboration.

Content created to a significant degree using AI tools should always be declared as such or, ideally, avoided entirely. Department members should also be able to make sound judgements about whether generated content violates the copyright of known original works.

The department encourages student use of AI-powered tools that can:

  • Personalise learning experiences by adapting content and pace to individual needs,
  • Offer real-time support in the form of virtual assistants that can guide the learning pathway,
  • Enhance accessibility and engagement through interactive chat platforms that can break down complex subjects; and
  • Increase the practice of skills that contribute to core competencies, such as critical thinking.

Students should be invited to experiment with these technologies thoughtfully and critically, ensuring that AI-powered learning complements traditional pedagogical methods. Educators should define and discuss acceptable or unacceptable use in their subjects with their students. There must be a shared understanding that students retain authentic competencies in a world where AI use is increasing.

The department advocates for research initiatives that involve:

  • Leveraging machine learning and data analytics to extract insights from large datasets in various fields,
  • Implementing AI-enhanced research co-pilots that speed up literature reviews, hypothesis testing, and data visualisation; and
  • Exploring the ethical, societal, and technical implications of AI to contribute to the dialogue on responsible technology use.

Researchers are encouraged to integrate AI methods into their work where relevant, harnessing AI's potential while engaging critically with the challenges it brings. Researchers should not use AI-powered tools for topics where they are not competent to judge the validity of the outcome.

In the context of task automation, department members are invited to investigate the inclusion of AI to:

  • Reduce repetitive administrative tasks, freeing up time for more creative and strategic endeavors,
  • Improve the efficiency of data management, scheduling, and resource allocation across departments; and
  • Improve the quality of status monitoring of university systems, assets, or resources.

Pilot projects are essential to ensure that AI-powered solutions are implemented responsibly. Data security, transparency, and fairness must be guaranteed.
