AISoLA provides an open forum for discussions about recent progress in machine learning and its implications. It is our core belief that this topic must be explored from multiple perspectives to establish a holistic understanding. That is why AISoLA invites researchers from different backgrounds such as computer science, philosophy, psychology, law, economics, and social studies to participate in an interdisciplinary exchange of ideas and to establish new collaborations.
Aims and Scope
Artificial Intelligence (AI), Machine Learning (ML), and in particular its subfield of Deep Learning (DL) have become perhaps the most hyped and popular branches of computer science. Given a long history of impressive success stories, the reasons for the hype seem clear:
- In natural language processing, Large Language Models (LLMs) routinely achieve impressive results in both syntactic and semantic tasks (e.g., GPT, PaLM, GLaM, LaMDA, Gopher, and others).
- In game-playing, AI systems have achieved top-human to superhuman performance in complex games where outperforming humans was long thought impossible. This ranges from board games such as Go (e.g., AlphaGo) to modern, real-time video games (e.g., OpenAI Five and AlphaStar).
- In industrial applications, AI has been a driving force behind innovations in autonomous driving (e.g., Tesla Autopilot), logistics, recommender systems, and healthcare (e.g., AlphaFold), promising safer driving, more efficient supply chains, more relevant advertising, and better medical care.
Most strikingly, these milestones were achieved with minimal human supervision: DL systems, be they for language, games, or traffic, are learned from samples! On the flip side, AI-based systems make, from a human perspective, idiotic mistakes that may well have fatal consequences.
Due to this unsafe nature of AI systems, their responsible use is currently restricted to non-critical and low-risk scenarios: for example, drawing pictures with DALL·E 2, Imagen, or Stable Diffusion, or using one of the many LLMs to write decently convincing articles. Even solving small-scale computing tasks or documenting user-provided pieces of code is not critical (cf. GitHub Copilot), as long as, e.g., the generated code is reviewed before deployment. However, lawsuits around GitHub Copilot and Tesla's automated driving mode, as well as sustained concerns regarding social fairness issues, show: AI is not quite ready for practice. And practice is not quite ready for AI.