The Department of Theory and Future of Law opens a Trustworthy AI Lab
Given the challenges of implementing the AI Act, especially as generative AI becomes increasingly important, the Department of Theory and Future of Law has opened a Trustworthy AI Lab.
The Trustworthy AI Lab at the Department of Theory and Future of Law is affiliated with the Z-Inspection® Initiative.
Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios.
The Z-Inspection® process is distributed under the terms and conditions of the Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) license.
The process is published in IEEE Transactions on Technology and Society.
Z-Inspection® is listed in the new OECD Catalogue of AI Tools & Metrics.
The following Labs are affiliated with the Z-Inspection® Initiative:
The Laboratory for Trustworthy AI at Arcada University of Applied Sciences (Helsinki, Finland)
The Ethical and Trustworthy AI Lab at the Illinois Institute of Technology (Chicago, USA): https://www.iit.edu/center-ethics/research/ethical-and-trustworthy-ai-lab
Trustworthy AI Lab Venice at Venice Urban Lab (Venice, Italy)
Trustworthy AI Lab at the University of Copenhagen (Copenhagen, Denmark)
The Trustworthy AI Lab at the L3S Research Center, Leibniz University Hannover (Hannover, Germany)
Trustworthy AI Lab at The Center for Bioethics and Research (CBR) (Ibadan, Nigeria)
Trustworthy AI Lab at the Imaging Lab, University of Pisa (Pisa, Italy)
Trustworthy AI Lab at the Goethe University Frankfurt (Frankfurt, Germany)
Trustworthy AI Lab at the Graduate School of Data Science, Seoul National University (South Korea)
Trustworthy AI for Healthcare Lab, Tampere University (Finland)
Trustworthy AI Lab at ICube (CNRS, University of Strasbourg, ENGEES, INSA), Strasbourg, France
The Trustworthy AI for Healthcare Lab at the ITP Foundation (Poznań, Poland)
The Trustworthy AI in Practice Lab at the DY Patil College of Engineering, Akurdi Campus (Pune, India)
The Trustworthy AI Lab at the Philipps-University Marburg (Marburg, Germany)
Trustworthy AI Lab at Østfold University College (Fredrikstad, Norway)
The Trustworthy AI Lab @ TIM, Carleton University (Ottawa, Canada)
The Trustworthy AI Lab at the Open University (Heerlen, the Netherlands)
The Trustworthy AI Lab at CeADAR (Dublin, Ireland)
The Trustworthy AI Lab at the Universitat Politècnica de Catalunya-BarcelonaTech (Barcelona, Spain)
The TRustworthy AI Lab (TRAIL) at the University of Brescia (Brescia, Italy)
The Trustworthy AI Lab at Helmut Schmidt University (HSU/UniBwH) (Hamburg, Germany)
__________________________________________________________________________________________________________________________________________
The following Institutions are affiliated with the Z-Inspection® Initiative:
Center for Interdisciplinary Studies of Law and Policy (CISLP), Kyoto University, Japan
The Visual Analytics Lab at the Centre for Research & Technology Hellas (CERTH), Greece
The Information Management Unit of the National Technical University of Athens (Athens, Greece)
Digital Living Lab at Laurea University of Applied Sciences, Leppävaara-campus, Espoo, Finland
ShodhGuru Innovation and Research Labs, Uttar Pradesh, India
Q-PLAN International, Thessaloniki, Greece
__________________________________________________________________________________________________________________________________________
The following Projects are affiliated with the Z-Inspection® Initiative:
XAI project and KDD Lab (Pisa, Italy)
Literature on the Z-Inspection® process:
Articles and Reports
Lessons Learned in Performing a Trustworthy AI and Fundamental Rights Assessment
Marjolein Boonstra, Frédérick Bruneault, Subrata Chakraborty, Tjitske Faber, Alessio Gallucci, Eleanore Hickman, Gerard Kema, Heejin Kim, Jaap Kooiker, Elisabeth Hildt, Annegret Lamadé, Emilie Wiinblad Mathez, Florian Möslein, Genien Pathuis, Giovanni Sartor, Marijke Steege, Alice Stocco, Willy Tadema, Jarno Tuimala, Isabel van Vledder, Dennis Vetter, Jana Vetter, Magnus Westerlund, Roberto V. Zicari
This report shares the experiences, results and lessons learned in conducting the pilot project “Responsible use of AI” in cooperation with the Province of Friesland, Rijks ICT Gilde (part of the Ministry of the Interior and Kingdom Relations, BZK), both in the Netherlands, and a group of members of the Z-Inspection® Initiative. The pilot project took place from May 2022 through January 2023. During the pilot, the practical application of a deep learning algorithm from the province of Fryslân was assessed. The AI maps heathland grassland by means of satellite images for monitoring nature reserves. Environmental monitoring is one of the crucial activities carried out by society for several purposes, ranging from maintaining standards on drinkable water to quantifying the CO2 emissions of a particular state or region. Using satellite imagery and machine learning to support decisions is becoming an important part of environmental monitoring. The main focus of this report is to share the experiences, results and lessons learned from performing both a Trustworthy AI assessment using the Z-Inspection® process and the EU framework for Trustworthy AI, and combining it with a Fundamental Rights assessment using the Fundamental Rights and Algorithms Impact Assessment (FRAIA), as recommended by the Dutch government for the use of AI algorithms by Dutch public authorities.
Comments: On behalf of the Z-Inspection® Initiative
Subjects: Computers and Society (cs.CY)
Cite: arXiv:2404.14366 [cs.CY] (or arXiv:2404.14366v1 [cs.CY] for this version)
…………………………………………………………………………………………………………………………………………………………………………………………………………………
Lessons Learned from Assessing Trustworthy AI in Practice.
Dennis Vetter, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Eleanore Hickman, Elisabeth Hildt, Sune Holm, George Kararigas, Pedro Kringen, Vince Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Magnus Westerlund, Renee Wurth, Roberto V. Zicari & Z-Inspection® initiative (2022)
Digital Society (DSO), 2, 35 (2023). Springer
Link: https://link.springer.com/article/10.1007/s44206-023-00063-1
…………………………………………………………………………………………………………………………………………………………………………………………………………………
Assessing Trustworthy AI in times of COVID-19. Deep Learning for predicting a multi-regional score conveying the degree of lung compromise in COVID-19 patients.
Himanshi Allahabadi, Julia Amann, Isabelle Balot, Andrea Beretta, Charles Binkley, Jonas Bozenhard, Frédérick Bruneault, James Brusseau, Sema Candemir, Luca Alessandro Cappellini, Subrata Chakraborty, Nicoleta Cherciu, Christina Cociancig, Megan Coffee, Irene Ek, Leonardo Espinosa-Leal, Davide Farina, Geneviève Fieux-Castagnet, Thomas Frauenfelder, Alessio Gallucci, Guya Giuliani, Adam Golda, Irmhild van Halem, Elisabeth Hildt, Sune Holm, Georgios Kararigas, Sebastien A. Krier, Ulrich Kühne, Francesca Lizzi, Vince I. Madai, Aniek F. Markus, Serg Masis, Emilie Wiinblad Mathez, Francesco Mureddu, Emanuele Neri, Walter Osika, Matiss Ozols, Cecilia Panigutti, Brendan Parent, Francesca Pratesi, Pedro A. Moreno-Sánchez, Giovanni Sartor, Mattia Savardi, Alberto Signoroni, Hanna-Maria Sormunen, Andy Spezzatti, Adarsh Srivastava, Annette F. Stephansen, Lau Bee Theng, Jesmin Jahan Tithi, Jarno Tuominen, Steven Umbrello, Filippo Vaccher, Dennis Vetter, Magnus Westerlund, Renee Wurth, and Roberto V. Zicari
in IEEE Transactions on Technology and Society
* Publication Date: December 2022
* Volume: 3, Issue: 4
* On Page(s): 272-289
* Print ISSN: 2637-6415
* Online ISSN: 2637-6415
* Digital Object Identifier: 10.1109/TTS.2022.3195114
Link: https://ieeexplore.ieee.org/document/9845195
Link to .PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9845195
…………………………………………………………………………………………………………………………………………………………………………………………………………………………………….
How to Assess Trustworthy AI in Practice.
Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee, Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert, Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Georgios Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth
On behalf of the Z-Inspection® initiative (2022)
Abstract
This report is a methodological reflection on Z-Inspection®. Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies at different stages of the AI lifecycle. It focuses, in particular, on the identification and discussion of ethical issues and tensions through the elaboration of socio-technical scenarios. It uses the general European Union’s High-Level Expert Group’s (EU HLEG) guidelines for trustworthy AI. This report illustrates for both AI researchers and AI practitioners how the EU HLEG guidelines for trustworthy AI can be applied in practice. We share the lessons learned from conducting a series of independent assessments to evaluate the trustworthiness of AI systems in healthcare. We also share key recommendations and practical suggestions on how to ensure a rigorous trustworthy AI assessment throughout the lifecycle of an AI system.
Cite as: arXiv:2206.09887 [cs.CY] (v1, Mon, 20 Jun 2022 16:46:21 UTC, 463 KB)
The full report is available on arXiv.
Download the full report as .PDF