Track Introduction by Kevin Baum, Thorsten Helfer, Markus Langer, Eva Schmidt, Andreas Sesing-Wagenpfeil, and Timo Speith: Normative Perspectives on and Societal Implications of AI Systems

Normative Perspectives on and Societal Implications of AI Systems

As AI systems continue to permeate various sectors and aspects of our lives, ensuring their trustworthy and responsible design and use becomes critical. This conference track aims to bring together interdisciplinary research from philosophy, law, psychology, economics, and other relevant fields to explore the normative perspectives on and societal implications of developing and employing AI systems. It will examine the challenges of setting domain-dependent minimal requirements for the verification, explanation, and assessment of ML systems, with the aim of ensuring their trustworthy and responsible use in real-world applications.

Program

Topics

The topics of interest include but are not limited to:

  • Legal and Regulatory Challenges
    • Understanding the legal consequences of using AI systems
    • Robust legal frameworks to support responsible AI adoption, including methods for and challenges to compliance with the future EU AI Act
  • Ethical and Psychological Challenges
    • Balancing the benefits and risks of AI systems
    • Ethical design and value alignment of AI systems
    • Addressing AI-related fears, misconceptions, and biases
  • Economic and Societal Challenges
    • Assessing the impact of AI on the labor market and workforce
    • Evaluating the trade-offs of AI implementation
    • AI for social good: opportunities and challenges in driving economic growth and addressing societal issues
  • Responsibility and Accountability
    • Attributing blame and answerability between AI systems and their users
    • The role of human oversight in AI decision-making
    • Legal responsibility (e.g., civil liability) in case of malfunctioning AI systems
  • Ownership and Attribution in AI-Generated Content
    • Determining the originator of AI-generated text, code, and art
    • Intellectual property rights and revenue sharing for AI-created works
    • The impact of AI-generated content on education, grading, and academic integrity
  • Bias, Discrimination, and Algorithmic Fairness
    • Understanding and mitigating biases/promoting fairness in AI systems
    • Ensuring fairness and preventing discrimination in AI applications
    • Legal obligations arising from anti-discrimination laws with regard to AI systems
    • The role of privacy issues and data protection laws for bias mitigation
  • Perspicuity in AI Systems
    • The role of transparency, explainability, and traceability in societal risk mitigation
    • Suitability of methods for enhancing AI system perspicuity and user understanding with respect to societal expectations and desiderata
    • Transparency in socio-technical ecosystems
  • Holistic Assessment of AI Models
    • Evaluating AI models within larger decision contexts
    • Evaluating normative choices in AI system design and deployment
    • Investigating consequences of implementing AI systems for groups, organizations, or societies

By exploring these topics, this track will contribute to a deeper understanding of the normative perspectives and societal implications surrounding AI. It aims to promote the development of responsible, trustworthy, and beneficial AI technologies while addressing the ethical, legal, psychological, economic, and more generally societal challenges they pose in real-world applications.

This track is co-organized by the Lamarr Institute and EIS.

Submission

Please submit your contributions via EquinOCS.

Submitting an Abstract

Deadline for abstract submissions: July 31st, 2023

Please submit an abstract of up to 500 words.

Talks will be up to 45 minutes long, and there will be plenty of time for discussion.

(Following the conference, there will also be an opportunity to submit related articles to double-blind reviewed (post-)proceedings. We are still organising the details.)

Track Organizers

Kevin Baum, German Research Center for Artificial Intelligence, DE
Thorsten Helfer, Saarland University, DE
Markus Langer, Philipps-University of Marburg, DE
Eva Schmidt, TU Dortmund, DE
Andreas Sesing-Wagenpfeil, Saarland University, DE
Timo Speith, University of Bayreuth, DE