AISoLA provides a forum for discussing how to deal responsibly with this new potential. It is structured in tracks addressing the following six concerns:

  • A) The nature of AI-based systems, i.e., their specific profiles in terms of strengths, weaknesses, new opportunities, and (future) implications such as potential threats. This track focuses mainly on neural network models, but discussion of other machine learning or statistical methods is also welcome. The goal of this track is not to discuss whether AI systems should be promoted, but rather their unique strength/weakness profile and how best to deal with them, as they will inevitably influence our reality. Just recall how the question of Cambridge Analytica's influence on the Brexit referendum showcased the need to regulate AI systems; and this was only a beginning, as indicated by ChatGPT, which has the potential to radically change our entire educational system and our professional lives. This influence is independent of the fact that large-scale AI-based systems, by their nature, will never be fully understood. Corresponding track:

  • B) Ethical, economic and legal implications of AI systems in practice. If autonomous AI systems are to be employed in practical applications, responsible standards need to be set. This includes setting domain-dependent minimal standards for the verification, explanation and testing of ML systems that are deemed trustworthy and responsible enough for real-world application. In particular, it is important to know whom to ‘blame’: the DL-based system or its user? This concerns not only potential error handling, but also the attribution of ownership: Who is the originator of, e.g., ChatGPT-generated text and code? Who should be paid for paintings drawn by generative image models? Already today, teachers receive texts where these questions arise, with a clear impact on grading. More complex is the situation with generated code, which is typically based on open-source libraries under different licenses. What (legal) consequences does, e.g., a copyleft requirement of some included artefact have for an unaware user of the generated code? Corresponding track:

  • C) Ways to make controlled use of AI via the various kinds of formal-methods-based validation techniques. Important here is the distinction between statistical guarantees, which ensure that failures are rare, and verification, which excludes errors altogether (see the sketch after this list). The point is to push the border of reliability forward: Can, e.g., DL-based systems for autonomous driving be made reliable enough for responsible use? AISoLA explicitly addresses ways to make controlled use of AI in the following corresponding tracks:

  • D) Dedicated application scenarios which, depending on their criticality, may allow certain levels of assistance, up to cases where full automation is uncritical. It is important to systematically explore the new potential of DL via minimal viable scenarios in order to better understand the future role of the new technology and its societal and economic impact. One example is autonomous driving, whose current technological level seems economically and socially acceptable only in controlled environments. To guarantee a level of concreteness, AISoLA will specifically address the application potential of DL-based technologies in the following corresponding tracks:
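
To make the distinction drawn in track C concrete, the two kinds of guarantees can be contrasted formally. The following is a minimal sketch, not part of the track descriptions; the symbols (a model f, an input distribution D, a failure rate ε, a confidence level δ, and a specification spec) are illustrative assumptions:

  % Statistical guarantee (e.g., a PAC-style bound obtained by testing):
  % failures are rare, but not excluded.
  \Pr_{x \sim \mathcal{D}}\left[ f(x) \not\models \mathrm{spec} \right] \le \varepsilon
  \quad \text{with confidence at least } 1 - \delta

  % Formal verification: errors are excluded for every input.
  \forall x \in X:\; f(x) \models \mathrm{spec}

The first statement tolerates failures at a quantified rate and can be established by sampling; the second rules out failures entirely, which is why it is so much harder to obtain for large DL-based systems.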