Technology and Democracy 🔗

Track: B2) Democracy in the Digital Era
Moderator: Edward A. Lee, UC Berkeley, US
Date: Monday 23rd, 16:30
Room: 1

Name | Institution
James Larus | École Polytechnique Fédérale de Lausanne, CH
Viola Schiaffonati | Politecnico di Milano, IT
Moshe Y. Vardi | Rice University, US

Idealized, democracy is based on the principle that a population makes choices among clearly defined alternatives that affect how that same population is governed. The alternatives presented for a vote become clearly defined through public discourse, where freedom of expression ensures that all viewpoints can be presented, considered, and debated. The rise of the Internet and social media, however, has changed the concept of public discourse, and the rise of generative AI promises further drastic changes. John Stuart Mill argued for the freedom to express even the most offensive and misleading ideas, because putting these ideas in public enables debate. But the line between public and private has blurred beyond recognition. Historically, public discourse was a broadcast to a large group of strangers, while private discourse meant messaging acquaintances and friends. Today, we have filter bubbles, microtargeted ads, and, most recently, finely tuned AI-generated customized content. Ideas become visible only to those who are unlikely to debate them. The result is a population where one person’s vote has a completely different meaning than another person’s vote. Can democracy survive this transition? Are there technology innovations that can improve public discourse? Are there policy mitigations? This panel will consider these questions.

Beyond Chat-GPT: The Impact of AI on Academic Research 🔗

Track: B4) Research and Education
Moderator: Viola Schiaffonati, Politecnico di Milano, IT
Date: Tuesday 24th, 17:30
Room: 1
Pdf: DigHum Panel. Beyond Chat-GPT: The Impact of AI on Academic Research

Name | Institution
James Larus | École Polytechnique Fédérale de Lausanne, CH
Edward A. Lee | UC Berkeley, US
Eva Schmidt | TU Dortmund, DE
Moshe Y. Vardi | Rice University, US

The widespread diffusion of Artificial Intelligence (AI) tools, such as conversational agents powered by Large Language Models, has popularized the debate on AI and its impact on different fields. Discussions about the consequences of adopting these tools in the job market, or their impact on creativity, are now common in public discourse as well.

One context in which the impact of AI has been relatively less discussed, at least for a broader audience, is academic research. However, the use of AI tools for research, and the corresponding discussion of their impact, has been common since the early days of AI in the second half of the last century. This debate now seems to have been revived by the adoption of the latest generation of AI tools, some of which are freely available not only to researchers but also to the wider public. Is this a genuinely new perspective, with revolutionizing results, as many claim, or is it rather the continuation of a development that started decades ago?

This panel will address these and similar questions, starting from the first-hand experience of scholars working in computer science and engineering, who will offer their perspectives on this relatively undebated issue. The questions will be discussed within the framework of Digital Humanism: the pursuit of supporting people through digital technologies, especially AI, while protecting them from the adverse effects of these technologies.

What would Turing say? 🔗

Track: A1) The Nature of AI-based Systems
Moderator: Mike Hinchey, University of Limerick, IE
Date: Friday 27th, 10:00
Room: 1

Of course, we cannot know what Alan Turing would have said. Still, let us imagine.

LM-Based Publication-Support: What should be Desired, Allowed, Forbidden 🔗

Track: D3) Publishing
Moderator: Bernhard Steffen, TU Dortmund, DE
Date: Friday 27th, 17:00
Room: 2

Track: A1) The Nature of AI-based Systems
Moderator: Bernhard Steffen, TU Dortmund, DE
Date: Saturday 28th, 10:00
Room: 1

The first AI hype concerned automated deduction and logic programming, which are clearly theoryful algorithmic approaches according to Leslie Valiant’s classification. In contrast, he would attribute today’s amazing success stories, in particular those around large language models, to what he calls (theoryless) ecorithms, which are characterized by learning from observation. In fact, it seems that conceptual structure can largely be neglected here, as long as we are fast enough to scale to huge dimensions and vast data repositories. Is this the end of the story?