Bridging the Gap
Between AI and Reality

30.10. - 03.11.2024

Crete, Greece

Visit: https://2024-aisola.isola-conference.org

AI impacts our lives

What does this mean for us?

A Utopian Dream

"The advancement of machine learning brings many exciting possibilities, from improved efficiency and decision-making to new breakthroughs in various industries. It has the potential to drive innovation and improve quality of life for people. It's an exciting time as we continue to explore the capabilities and benefits of this technology."

A Dystopian Nightmare

"The rapid advancements in machine learning technology, while promising, come with a host of risks. The loss of privacy, mass unemployment, and the potential for perpetuating societal biases are just the tip of the iceberg. As machine learning becomes more integrated into our lives, the consequences of not addressing these issues could be dire, leading to a future where we have lost control over our own lives and destinies."

Or Ramblings of a Chatbot?

Both of the preceding texts were generated by ChatGPT, prompted to write out utopian and dystopian visions of a future with AI.

Modern AI is too
powerful to ignore.


Natural Language Processing

AI has made remarkable strides in recent years. Large Language Models (LLMs) like GPT, PaLM, and BERT have brought natural language processing to a viable level for everyday usage.

Computer Vision

AI has become a hallmark technology in computer vision, solving tasks that used to be impossible with traditional, hand-written programs.

Playing Games

AI systems have played at a top-human to superhuman level in games far too complex for traditional approaches. This ranges from ancient, well-studied board games such as Chess and Go (e.g., AlphaZero) to modern real-time strategy games (e.g., AlphaStar and OpenAI Five). Through self-learning, AI systems attain a surprisingly strong ability to plan long-term strategies and to adapt quickly to changes in their environment.

And Much More

AI systems are getting increasingly better at generating artwork (e.g., DALL·E and Imagen) and music (e.g., MusicLM and Jukebox). AI has also driven progress in drug discovery, written program code, animated computer graphics, compressed video files, and much more.

AI needs to be
ready for us.


Societal Biases

As AI often learns from humans or human-generated data, it tends to replicate existing societal biases. Even specifically fine-tuned systems such as ChatGPT remain liable to reproduce them. For AI to be used in practice, these issues must be mitigated.

Safety Issues

As AI systems are increasingly employed in safety-critical domains, their uncertain nature becomes problematic. Repeated reports of self-driving cars crashing due to faulty AI, as well as misdiagnoses by medical AI applications, show that AI still has to make major strides with respect to safety.

Threats to Democracy

Due to the complexity and opacity of AI systems, public knowledge of and trust in AI are low. To ensure that the people affected by AI can make informed decisions about it, explainability of AI becomes key.

We need to be
ready for AI.


Copyright and Authorship

As AI takes over creative tasks, a rethinking of copyright and ownership is required. With image-generation software on the rise (e.g., DALL·E and Imagen), a discussion about who should own or profit from generated images is unavoidable.

Education

Text-generation software such as ChatGPT is challenging our education system in completely new ways. Text-generating AI makes cheating trivial. Do we try to ban students from using it, or do we embrace it as a new way of teaching?

Responsibility

AI systems pose a novel challenge for our legal systems. Questions of error attribution in self-driving-car accidents are as old as the field itself and remain unresolved to this day. Finding answers to these questions is key to getting AI adopted in critical domains.

AISoLA

Join us in discussing how to responsibly deal with the huge potential of AI technology. AISoLA is loosely structured along six concerns, but welcomes an open discussion.


The Nature of AI Systems

This concern addresses the strengths and weaknesses of AI systems. What do they offer beyond what was traditionally possible? How do we extend the technology to cover its weaknesses? Any submission adhering to the AISoLA theme that does not explicitly belong to any other track is welcome here.

Ethical, Economic and Legal Implications of AI Systems in Practice

Any discussion about ethical or legal matters in regard to AI is addressed here. This includes discussing necessary safety standards for AI and how they can be achieved, as well as questions of error attribution and copyright.

Ways to Make Controlled Use of DL Technology

Due to the uncertain nature of AI systems, formal-methods-based validation techniques are key to ensuring that they are used safely and responsibly. This includes mathematical verification techniques as well as statistical methods and shielding at runtime.
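To make the notion of runtime shielding concrete, here is a minimal, purely illustrative sketch in Python: a shield wraps an untrusted learned controller and overrides any proposed action that would violate a simple safety predicate. The names (is_safe, fallback_action, shielded_step) and the toy speed-limit property are assumptions made for this example only; they do not refer to any specific AISoLA tool or track deliverable.

    # Illustrative sketch of runtime shielding (hypothetical names; not an AISoLA tool).
    SPEED_LIMIT = 30.0  # toy hard safety bound on vehicle speed

    def is_safe(speed: float, acceleration: float) -> bool:
        """Toy safety predicate: the next speed must stay at or below the hard limit."""
        return speed + acceleration <= SPEED_LIMIT

    def fallback_action(speed: float) -> float:
        """Trivially safe default: never accelerate."""
        return 0.0

    def shielded_step(speed: float, learned_policy) -> float:
        """Query the learned controller, but override any action that
        the safety predicate rejects."""
        proposed = learned_policy(speed)
        return proposed if is_safe(speed, proposed) else fallback_action(speed)

    # Example: an untrusted policy that always accelerates hard.
    if __name__ == "__main__":
        policy = lambda speed: 5.0
        print(shielded_step(28.0, policy))  # shield intervenes, prints 0.0

The design point is that the shield, not the learned component, carries the safety argument: the learned policy can be arbitrarily complex, while only the small, analyzable predicate and fallback need to be verified.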

Dedicated Application Scenarios

This concern comprises four tracks, each addressing the use of AI in one of these problem domains: healthcare, assisted programming, autonomous driving, and publishing. Core questions are domain-specific safety standards and how humans and AI can cooperate in these fields.

Education in Times of Deep Learning

As increasingly powerful AI systems become available, human-written text becomes much harder to distinguish from AI-generated text. This poses a new challenge for education: how do we design tests so that they measure human ability, and, perhaps more importantly, how can we use this technology for good?

Scalability of Validation Methods

AISoLA hosts a competition for DNN validation tools in order to compare approaches and tools and to explore the strengths and limitations of the applied technology.

Join AISoLA here!

Visit https://2024-aisola.isola-conference.org for AISoLA 2024

News

AISoLA 2023 Conference Proceedings available

Free access for conference participants will be granted until 31 January 2024.



Pi interviewed by Barbara Steffen

Welcome to AISoLA’s mission to better understand the world of Artificial Intelligence. I’m Barbara Steffen, and I engaged in a series of insightful conversations with interdisciplinary experts. The mission? To reveal the complexities of AI and its transformative impact on our daily lives, organizations, and society at large.



AISoLA Interdisciplinary Research Survey


Interdisciplinary collaboration is key to groundbreaking discoveries. We’re reaching out to tap into your collective insights. Your input will illuminate future research interests and uncover potential synergies between disciplines. Let’s co-create the research landscape! Your contribution shapes AISoLA’s future endeavors.



Airport Transfers

The conference organisers have negotiated competitive rates for AISoLA 2023 participants. Bookings can be made directly with the taxi company.



Aims and Scope

AISoLA provides an open forum for discussions about recent progress in machine learning and its implications. It is our core belief that this topic must be explored from multiple perspectives to establish a holistic understanding. That is why AISoLA invites researchers with backgrounds in computer science, philosophy, psychology, law, economics, and social studies to participate in an interdisciplinary exchange of ideas and to establish new collaborations.

Artificial Intelligence (AI), Machine Learning (ML), and, in particular, its subfield of Deep Learning (DL) have become perhaps the most hyped and popular branches of computer science. The reasons for the hype are rooted in a long history of impressive success stories:

  • In natural language processing, Large Language Models (LLMs) such as GPT, PaLM, GLaM, LaMDA, and Gopher routinely boast impressive results in both syntactic and semantic tasks.
  • In game-playing, AI systems have achieved top-human to superhuman performance in complex games where outperforming humans was long thought impossible. This ranges from board games such as Go (e.g., AlphaGo) to modern, real-time video games (e.g., OpenAI Five and AlphaStar).
  • In industrial applications, AI has been a driving force behind innovations in autonomous driving (e.g., Tesla Autopilot), logistics, recommender systems, and healthcare (e.g., AlphaFold), promising safer driving, more efficient supply chains, more relevant advertising, and better medical care.

Most strikingly, these milestones were achieved with minimal human supervision: DL systems, be they for language, games, or traffic, are learned from samples! On the flip side, AI-based systems make, from a human perspective, idiotic mistakes that may well have fatal consequences.

Due to this unsafe nature of AI systems, their responsible use is currently restricted to non-critical and low-risk scenarios, e.g., drawing pictures with DALL·E 2, Imagen, and Stable Diffusion, or using one of the many LLMs to write decently convincing articles. Even solving small-scale computing tasks or documenting user-provided pieces of code is not critical (cf. GitHub Copilot), as long as, e.g., the generated code is reviewed before deployment. However, lawsuits around GitHub Copilot and Tesla’s automated driving mode, as well as sustained concerns regarding social fairness issues, show: AI is not quite ready for practice. And practice is not quite ready for AI.