Deliberative processes are often seen as a possible remedy to reconcile citizens with democracy. This promise falls short, however, in the face of issues such as unbalanced participation and the cost of organizing large-scale consultations. While the prospect of AI-enhanced deliberative and collective decision-making processes has not yet fully materialised, the emergence of large language models is perceived as an opportunity to broaden participation, facilitate deliberation, suggest common ground, or infer unvoiced preferences. Conversely, the use of AI technologies in such sensitive settings also poses risks, through the inherent biases in their responses and their potential exploitation for manipulating decisions, which calls into question fundamental assumptions of classical frameworks.

The aim of this project is thus to explore how post-generative AI systems can enhance deliberation and improve collective decision-making, in order to promote fair and collectively acceptable policies. The objective is to design AI mechanisms and tools for deliberative democratic processes, and to evaluate them experimentally, assessing their actual benefits at various scales and in different contexts (e.g. civic assemblies, online deliberative platforms, international or local negotiations). To achieve this objective, we envision a combination of: (i) descriptive approaches, based either on large-scale quantitative methods (digital-trace analysis) or on local, situated studies (questionnaires or ethnographic work), whose aim is to analyze and visualize how people actually deliberate and to raise early warnings about potential risks of manipulation or coercion (controversy mapping, dynamics of coalitions, diagnosis of influence among participants, etc.); (ii) descriptive-normative approaches based on multiagent simulations, which allow complex systems grounded in empirically validated behaviors to be run in silico and counterfactual scenarios to be tested; (iii) normative approaches based on idealized models, which explore the properties of deliberative and collective decision mechanisms. Specifically, computational social choice, formal argumentation, multicriteria decision aiding and preference learning offer a range of new techniques for designing mechanisms with theoretically guaranteed desirable properties. We intend to test such AI-augmented deliberative settings in various real-life situations.
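To give a flavour of the multiagent simulations envisioned in approach (ii), the following is a minimal sketch of a bounded-confidence opinion-dynamics model (in the style of Deffuant et al.), not the project's actual simulation framework. All names and parameter values here are illustrative assumptions: agents hold opinions in [0, 1] and, when two randomly paired agents are closer than a confidence threshold, they move toward each other.

```python
import random

def deffuant_step(opinions, mu=0.5, threshold=0.3):
    """One interaction: two random agents move toward each other
    if their opinions differ by less than the confidence threshold.
    mu=0.5 means both converge to their midpoint."""
    i, j = random.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < threshold:
        shift = mu * (opinions[j] - opinions[i])
        opinions[i] += shift  # symmetric shifts conserve the mean opinion
        opinions[j] -= shift
    return opinions

def simulate(n_agents=100, n_steps=20000, seed=0, **kwargs):
    """Run repeated pairwise interactions from uniformly random
    initial opinions and return the final opinion profile."""
    random.seed(seed)
    opinions = [random.random() for _ in range(n_agents)]
    for _ in range(n_steps):
        deffuant_step(opinions, **kwargs)
    return opinions
```

With a low threshold such a model typically fragments into several opinion clusters, which is the kind of counterfactual question (e.g. how does a facilitation mechanism affect consensus formation?) that in silico experiments can probe.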
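As a small illustration of the computational social choice techniques mentioned in approach (iii), here is a sketch of the classical Borda rule, a voting mechanism known to favour broadly acceptable candidates over polarizing ones. The candidates and preference profile are invented for the example; this is not one of the project's proposed mechanisms.

```python
def borda(profile, candidates):
    """Borda count: each voter's ranking awards m-1 points to its
    top candidate, m-2 to the next, and so on (m = #candidates)."""
    scores = {c: 0 for c in candidates}
    m = len(candidates)
    for ranking in profile:
        for pos, c in enumerate(ranking):
            scores[c] += m - 1 - pos
    return scores

# Hypothetical profile: 2 voters prefer a, 2 prefer b, 1 prefers c,
# but c is everyone's acceptable second or first choice.
profile = [
    ("a", "c", "b"), ("a", "c", "b"),
    ("b", "c", "a"), ("b", "c", "a"),
    ("c", "a", "b"),
]
scores = borda(profile, "abc")
# Borda scores: a=5, b=4, c=6 -> the compromise candidate c wins,
# whereas plurality (first choices only) would give a 2-2-1 near-tie.
```

Rules of this kind come with formally provable properties (e.g. monotonicity), which is what makes mechanism design with theoretical guarantees possible.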
AIAD is a 5-year project (2025-2030) involving several academic labs from Sorbonne Université, Sciences Po and Université Technologique de Compiègne, as well as two industrial partners (Make.org and Nukke.AI), with the support of the Conseil Economique et Social.