Horizon Events is a non-profit dedicated to advancing AI safety research and development through events and related efforts.
This proposal seeks funding for the project in 2025. It comprises the AI Safety Unconference, the Guaranteed Safe AI Seminars, AI safety events in Montréal, the weekly AI Safety Events & Training newsletter, specific collaborations, and potentially more.
Technical events
Guaranteed Safe AI Seminars: 12 seminars from field experts with community Q&A (2024 speakers included Yoshua Bengio, Steve Omohundro, Rafael Kaufmann, Evan Miyazono, and others).
AI Safety Unconference 2025 (successor to VAISU)
Three days, online, with a custom event app enabling session creation and review, poster sessions, schedule voting, matchmaking, chat, awards, collective intelligence tooling, and more. Collaboratively created with the community. Focused on x-risk/catastrophic risk, but open to all AI safety work.
Target: >500 relevant registered participants
Montréal AI Safety R&D Events: two-hour events with a technical talk followed by mingling.
These will complement the Mila and CAISI ecosystem, which has a growing AI safety community.
Knowledge distribution, community support
AI Safety Events & Training newsletter: weekly newsletter on upcoming events and training opportunities, sometimes with notes from past events. As of November 2024, it has >950 subscribers. We have produced it in partnership with alignment.dev since 2024 Q1.
Collaboration with partners (potentially AIGS, Apart Research, Mila/CAISI, etc.) and support for events-related projects in AI risk reduction, including those with low budgets.
The funding is to employ and provide tooling for the core team to pursue the above projects.
On the lower end ($3K plus founder self-funding), we will maintain the established projects: AI Safety Unconference 2025, 12 Guaranteed Safe AI Seminars, and the AI Safety Events & Training newsletter. This can be achieved by the founder working 1.5 days/week, with the grant providing partial support.
On the higher end (>$35K), all of the above projects will be covered, and more team members will be able to work on them.
In 2024, the following team members were also involved: Diego Jiménez (AI strategist and events ops), Arjun Yadav (generalist and events), Linda Linsefors (advisor, events and AI safety), Pascal Huynh (event and interaction design), Nicolas Grenier (advisor, worlding), and Manuela García (graphic design).
This grant proposal focuses on enabling Orpheus Lummis to work on these projects, and may enable further team members (existing or new) to be active.
Organizational track record
AI Safety Unconference at NeurIPS (2018, 2019, 2022): an event series that gathered many prominent AI safety researchers
Virtual AI Safety Unconference 2024 (VAISU): 400 registered participants and 24 talks published; in our survey, 95% of respondents reported net positive value, and we estimate 30-100 new professional connections were formed (see report)
Guaranteed Safe AI Seminars featuring prominent speakers such as Yoshua Bengio and Steve Omohundro
Growth of the AI Safety Events & Training newsletter to >950 subscribers and ~27K post views
AI safety reading group at Mila (2018) with David Krueger and others
Events consulting for Atlas Computing
Some testimonials from previous events:
Stuart Armstrong at AISU NeurIPS 2018 – "A great way to meet the best people in the area and propel daring ideas forward."
Adam Gleave at AISU NeurIPS 2018 – "The event was a great place to meet others with shared research interests. I particularly enjoyed the small discussion groups that exposed me to new perspectives."
Haydn Belfield at AISU NeurIPS 2022 – "This was a fascinating event that was helpful for keeping up with the cutting edge of the field, and for launching collaborations."
Aaron Tucker at AISU NeurIPS 2022 – "The 2018 event was extremely helpful for meeting people working on AI Safety, and played a large role in my decision to go to graduate school to work on AI safety."
The project may fail because of:
Lack of funding
Single point of failure given the small team, e.g. burnout or accident risk
Poor event participation/engagement
Potential outcomes if the project fails:
Disruption to established event series
Reconfiguration or loss of knowledge-sharing platform (950+ newsletter subscribers)
Reduced AI safety event capacity in Montréal (counterfactual impact on Mila/CAISI ecosystem)
Gap in technical AI safety community building
No funding has been secured for 2025.
In 2024, most work was self-funded. We also gratefully received funding from the LTFF for 6 months of Guaranteed Safe AI Seminars and for VAISU. Lastly, VAISU participants collectively donated hundreds of dollars to the project.
We would like to give all funders the option to be thanked publicly on the Acknowledgement page of our website.
We are happy to discuss. Reply on Manifund, or reach out via email at team@horizonevents.info.
Horizon Events is incorporated as HΩ, a tax-exempt non-profit in Canada (#15845360). We also support donations via credit card, wire, etc.
Chris Leong
1 day ago
I'll vouch for the quality of the AI Safety Events & Training newsletter.
I guess the main point I'd like clarity on is their plan for increasing distribution of this newsletter.
Orpheus Lummis
about 6 hours ago
Thanks for your comment and question.
Our strategy to increase distribution of the newsletter:
- Increase the quality of the newsletter such that the organic growth rate increases
- Contact members of the safety teams at major AI labs and academic centers (supernodes), asking them to share it on their internal communication channels
- Share it on the AI Alignment Slack (#events), and potentially other popular events-related channels
- Nudge maintainers of onboarding resources for new AI safety researchers to include it as a recommended resource
This is tentative and open to feedback. We've grown to ~1K subscribers so far through organic growth, but the resource would likely benefit at least 2-3x more people. We will pursue proactive growth in 2025.
Steve Omohundro
2 days ago
Horizon Events has put on several important AI safety events that I've been involved with, which brought people together and led to an exchange of ideas.
Rafael Kaufmann
3 days ago
Horizon serves a critical knowledge dissemination function for the AI safety community and I appreciate the high bar set by Orpheus and team.
Aditya Arpitha Prasad
7 days ago
I know Orpheus and am excited about these projects, since he already has a proven track record. I have also personally used his projects, like AI Safety Events - https://www.aisafety.com/events-and-training