A center for AI safety in Paris, established in May 2024 by EffiSciences, aiming to promote the responsible development of AI. The center will focus on:
Advocacy: raise awareness about AI safety.
R&D: conduct technical projects in partnership with organizations responsible for implementing the EU AI Act, as well as other key players within the ecosystem.
Field-building: train researchers and engineers, and support policymakers.
Our main mission is to shape public and policy discussions around AI safety by engaging with the ecosystem in several complementary ways:
Policy outreach and support: engaging with key policymakers, sharing insights, writing policy briefs, and building collaborations with relevant institutions.
Public awareness: writing op-eds and articles, organizing events, and giving interviews.
R&D: engaging in technical projects, such as developing an open-source platform for benchmarks in partnership with startups and public institutions.
Education and field-building: nurturing our AI safety talent pipeline through university courses, bootcamps, our textbook, online programs, and mentoring.
Budget for 12 months of operations:
$212k: Salaries for 6 FTEs (5 full-time staff and 2 part-time):
Executive director (already 50% funded)
Head of research (already funded)
Head of policy (already funded)
Head of operations
Head of strategy
Scientific director (part-time)
Media and communications expert (part-time)
$74k: Programs
$50k: R&D grants and internships
$24k: Talks, round tables and workshops
$52k: General expenses
Offices
Subscriptions, equipment, transport, compute
Charbel-Raphaël Ségerie: Executive director. Charbel has been coordinating most of EffiSciences’ AI activities, teaches an accredited AI safety program at a top French research university, kickstarted and facilitates the ML4Good bootcamps, and creates content such as articles and an AI safety textbook. (LessWrong profile)
Alexandre Variengien: Head of research. Alexandre is an independent researcher who previously interned at Redwood Research as a research manager for the REMIX program, and completed his master’s thesis at Conjecture. He was the second author on the Circuit for Indirect Object Identification paper. (LessWrong profile)
Florent Berthet: Head of operations. Florent is currently EffiSciences’ executive director, and previously co-founded and ran EA France.
Manuel Bimich: Head of strategy. Manuel has been involved with EffiSciences' AI division since its early days.
Vincent Corruble: Scientific director. Vincent is an Associate Professor at Sorbonne University and a regular visiting researcher at CHAI.
Track record:
We have been doing AI safety field-building in France for two years with good success, reaching 1,000+ students and orienting more than 30 people toward AI safety careers. Our ML4Good bootcamps have now been replicated in several countries, and our textbook is already being used by several groups. You can find more details in our LessWrong post from last year.
We have recently started building collaborations with multiple organisations to develop tools that might eventually be used to implement the AI Act. These organizations have shown strong interest in our work, and collaborating with them will help us gain credibility among key private and public stakeholders.
While public advocacy was not a priority for us previously, it will be one of our core activities moving forward. We are rapidly acquiring experience in this area, and have already begun establishing partnerships with leading AI journalists in France. For example, we recently published an op-ed supported by Yoshua Bengio in a major French newspaper.
Likely causes of failure:
Insufficient engagement from, or resistance by, key AI actors and policymakers, due to ideological differences or misaligned economic incentives.
Inability to secure adequate funding and talent, which is essential to reach a critical mass that would, in turn, attract additional resources and skilled people. Being able to attract people with sufficient experience is especially important for our policy-focused work, but it is challenging to find candidates who are both deeply knowledgeable about the subject and well-connected within the policy ecosystem.
Potential outcomes:
Limited impact on shaping public and policy discourse on AI safety, potentially resulting in France adopting positions that undermine international coordination efforts.
Polarizing the public discourse. The fields of AI ethics and AI safety are somewhat divided, and we are seeing sparks of this division in France. By inviting experts from different AI fields and with differing views to discuss together (e.g. during round tables and panels, as we are currently doing), we aim to promote a healthier debate and foster positive relationships between AI actors in France.
We have already raised $150k for this project. To see how we will use that budget, check the "already funded" mentions in the "How will this funding be used?" section.