Catalyze aims to run a 3-month program to incubate new AI Safety research organizations. In our pilot cohort, we want to focus on incubating technical research organizations and host up to 20 participants in London (ideally at the LISA offices) for 2 of those months in Q1/Q2 2024.
The program would bring together strong AI Safety researchers and engineers with talented entrepreneurs/generalists who have a strong track record. The program would help them:
Find a complementary cofounder.
Gather additional knowledge and skills they need through an educational component.
Get access to our support network of expert mentors and advisors, a seed funding network, and many networking opportunities.
Why this program?
After prioritizing among dozens of AI safety field-building interventions, we found this intervention particularly promising. We believe it has the potential to address an important bottleneck to increasing the much-needed output of high-quality AI safety research: there seem to be too few open positions to work on AI safety research. As a result, many talented researchers and engineers with an AI safety focus resort to either independent research or non-AI safety jobs. Furthermore, senior talent from other fields is not attracted into the AI safety field as much as it could be if there were good job openings (independent research is not a good option if you have a mortgage and a family). Existing organizations do not seem to be scaling up quickly enough to address these problems, and we shouldn't expect that to change in the near future. We gather that this is often because scaling is hard and not always the best move for a given organization.
We are in conversation with a number of promising individuals who would like to set up an organization or already have, and we have asked them about their hurdles and which support structures they'd want to see. With this program, we aim to address these issues and help create more AI safety organizations.
This is a very condensed version of our reasoning; feel free to reach out via alexandra@catalyze-impact.org if you'd like to discuss things further.
Program overview
When: Starting Q1 2024, a 3-month program.
Where: 1 month online, 2 months in person, ideally at the London Initiative for Safe AI (LISA).
Who: up to 20 outstanding technical and/or entrepreneurial individuals who deeply care about AI safety and have a strong track record (e.g. founding/scaling a previous successful organization, having published well-regarded research).
What:
Month 1: online lectures & workshops, including sessions from expert guest speakers (e.g. leaders of research organizations). The goals are to:
Teach crucial skills for building AI safety research organizations.
Provide a basis for assignments that help participants 1) further develop their research organization proposals, and 2) test cofounder fit with the other participants by working together.
Month 2 & 3: in-person in London. Participants finish up the formation of cofounder teams and start building their organizations whilst receiving support and opportunities.
Continued workshops & testing of cofounder fit.
Ongoing support from us and referrals to our network of mentors and advisors (e.g. business advice, scientific advice, legal advice).
Networking opportunities within the AI safety ecosystem.
Fundraising opportunities: towards the end of the program, participants get to present their proposals to our network of seed funders.
Theory of Change - summary graphic
Please take a look at our Theory of Change graphic for this program for a more visual overview of the goals we have for this project and the actions we plan to take to get there.
Theory of Change - description
Our goals: ultimately, we aim to reduce the risk of severely negative outcomes from advanced AI. We hope to do this by increasing the number of new AI safety research organizations with strong founding teams, a promising approach to AI safety, and the right resources. Such organizations create opportunities for large-scale, well-coordinated, and well-funded collaborative research, and bring greater diversity to the approaches taken to the AI alignment problem.
To achieve this, the outputs we strive for are:
A cohort of strong participants, some of whom bring promising research directions to set up an organization around.
Participants test their cofounding fit through various collaboration projects.
Participants have the necessary skills and knowledge to start an AI safety research organization.
Participants have a strong AI safety network.
They have access to a network of advisors and mentors they can easily interact with.
They have access to a seed funding network.
They are integrated into the London AI safety ecosystem.
Concrete inputs that we hope will lead to these outcomes are:
Outreach to encourage suitable potential participants to apply to the program.
Thorough vetting to identify the most promising applicants and assess the strength of the research directions they propose.
Selection criteria for participants:
For all participants: ambitiously altruistic, value-aligned, easy to work with.
Technical/researcher participants: good academic or engineering track record, promising proposal for type of research organization to start (as determined by an external selection board), aptitude for research management, strong understanding of AI safety.
Entrepreneurial generalist participants: project management experience (~ 2+ years), strong problem-solving abilities, ideally a good entrepreneurship track record (e.g. started a successful company).
Selection of proposed research directions:
We will make a pre-selection of promising applications which we show to a panel of experts with diverse opinions.
We plan to judge proposals on the following dimensions:
Clear path to impact.
Scalability.
How promising our panel judges the proposal to be.
We offer participants a training program which helps prepare them for founding their AI safety research organization. This includes assignments which help to test working together.
E.g. workshops from experts on building a theory of change, best practices of research management, fundraising for AI Safety research 101, setting up a hiring pipeline to find great researchers, etc.
Outreach to potential advisors and mentors for the participants.
Outreach to potential funders to join a seed funding network which the participants can meet with towards the end of the program.
We bring participants into the London AI safety ecosystem throughout the program.
This ecosystem includes Conjecture, Apollo, LISA, SERI MATS, ARENA, BlueDot Impact, and the UK Task Force.
Indicators of success
The pilot program attracts at least 8 of our target researchers and 8 of our target entrepreneurs as participants.
At least 30% of incubated organizations will each have raised > $1M in funding within a year.
Incubated organizations publish/conduct good AI safety research within the first two years - e.g. at least 4 research papers presented at a top-tier ML conference such as ICML, ICLR, or NeurIPS.
>80% of program graduates would recommend others to join the program (i.e. an average score above 8 on the 0-10 Net Promoter Score question).
At least 3 in 5 participants find a cofounder through the program whom they still work with 4 months after the program ends.
Through the program, participants meet on average at least 20 new contacts in the AI safety space whom they'd be comfortable asking for a favor.
If you would like to learn more details on how we plan to approach certain things or have input, feel free to reach out via alexandra@catalyze-impact.org.
High-level breakdown
$ 149 K: Accommodation costs for participants (2 months in London)
$ 146 K: 3-month stipends for 20 participants
$ 139 K: Catalyze team salaries (8 months runway, 4 FTE salaries)
$ 41 K: Renting office and venue space for the program (2 months)
$ 24 K: Travel costs for participants
$ 20 K: Travel & admin costs for the Catalyze team (8 months)
$ 18 K: Daily lunch during the in-person part of the program (2 months)
$ 15 K: Costs related to guest lecturers
$ 83 K: Buffer of 15%
Total: $ 635 K
Please look here for a more detailed breakdown.
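As a quick sanity check on the arithmetic above, the short sketch below (illustrative only; all figures in $K, rounded to the nearest $1K) sums the line items and applies the 15% buffer, reproducing the rounded numbers in the breakdown:

```python
# Illustrative check of the budget arithmetic above (all figures in $K, rounded).
line_items = {
    "Accommodation for participants (2 months, London)": 149,
    "3-month stipends for 20 participants": 146,
    "Catalyze team salaries (8 months, 4 FTE)": 139,
    "Office and venue space (2 months)": 41,
    "Participant travel": 24,
    "Catalyze team travel & admin (8 months)": 20,
    "Daily lunch during the in-person part (2 months)": 18,
    "Guest lecturer costs": 15,
}

subtotal = sum(line_items.values())   # 552
buffer = round(0.15 * subtotal)       # 15% buffer -> 83
total = subtotal + buffer             # 635

print(f"Subtotal: ${subtotal}K, buffer: ${buffer}K, total: ${total}K")
```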
What difference will different levels of funding make?
Receiving the first $15 - 60K in funding would provide us with a crucial runway, enabling us to continue to fully dedicate our attention to this project. This increases the chance of us being able to fundraise enough to run a proper version of this program. It would also create space for us to execute on more Minimum Viable Products (MVPs) that can help us build towards the more comprehensive pilot.
In the spirit of cost-effectiveness, we have considered which trade-offs we expect are fine to make vs. which methods of cutting costs would seriously reduce the expected impact of the program. The version of the program tied to the budget above reflects these judgments. If we receive less total funding than our fundraising goal, but at least $260K, we would try to run a version of the program that might differ in one or several of the following ways:
Running more of the program online rather than in-person.
Though we believe having more of the program in person would aid in assessing cofounder fit and in collaborating more efficiently while building the organization, and would help tremendously with building participants' AI safety networks through exposure to the ecosystem.
Reducing the number of participants (e.g. 16 instead of 20).
Though reducing the number of participants would likely decrease the percentage of participants who find a cofounder because there would be less choice. We think this could have a strong negative effect on the number of successful organizations coming out of the program.
Running the program for 2 months instead of 3.
Though we think this gives participants a very tight runway to prepare for their first fundraising round, which makes joining the program a more uncertain move. This might deter participants who need more financial certainty (such as more senior people with a mortgage and family).
We could ask participants to arrange and pay for their own accommodation.
Though, given that the stipends we have in mind are already quite low ($2,400 per month), we think that asking participants to pay for their own accommodation in London could make a portion of them unable to join the program. Additionally, finding affordable accommodation in London can be challenging and time-consuming, again possibly stopping some of our preferred participants from joining.
Our core team currently consists of three members. All three have an entrepreneurial background and are highly value-driven and impact-focused.
Kay Kozaronek has past experience in business consulting and technical AI Safety research, having participated in SERI MATS and cofounded the AI Safety research group Cadenza Labs. He is currently also a part of the Incubation Track of the Future Academy.
Gábor Szórád brings 15 years of operational and entrepreneurial experience to the organization, having led teams ranging from 5 to 8,200 people in leadership roles across Europe, Asia, and the Middle East. Since the beginning of 2023, he has focused full-time on upskilling and volunteering in AI Safety.
Alexandra Bos initiated Catalyze, and has done AI Safety field-building prioritization research over the past months to figure out which interventions could have an outsized impact in the field. She has set up and led a non-profit in the past (TEDxLeidenUniversity), and was amongst the top ~3% in Charity Entrepreneurship's selection process last year (from a pool of >1,000 applicants).
We intend to include more people on the executive team in preparation for the program.
In addition to the executive team, there are also board members of the Catalyze Impact Foundation who co-decide on important matters:
Stan van Wingerden is the COO/CFO of Timaeus, a recently founded AI safety research organization dedicated to making breakthrough fundamental progress on technical AI Alignment. Their research focuses on Developmental Interpretability and Singular Learning Theory. Previously, Stan was the CTO of an algorithmic trading fund and an ML researcher, and he studied Theoretical Physics & Philosophy.
Magdalena Wache is an independent technical AI Safety researcher, previously at MATS. She has a Master's degree in Machine Learning, has organized the AI Safety Europe Retreat, and was involved in founding the European Network for AI safety.
In addition, we are putting together an advisory board. The current members of our advisory board are:
Ryan Kidd is the co-director of MATS, a board member and cofounder of the London Initiative for Safe AI (LISA), and a Manifund regrantor. Previously he completed a PhD in Physics at the University of Queensland (UQ), where he ran UQ’s Effective Altruism student group for three years.
Jan-Willem van Putten is the cofounder and EU AI Policy Lead at Training For Good. He is an impact-focused serial entrepreneur, having founded and led several other enterprises as well. He leverages his management consulting and for-profit experience to create positive societal change through innovative training programs.
[ Medium - high likelihood ]
A significant portion of the incubated organizations fail to have a big impact, for example because:
Cofounder fit is not good enough.
-> To reduce this risk, one of our selection criteria will be for participants to be ‘easy to work with’. Furthermore, we will pay close attention in our selection process to bringing together a group of complementary participants. For better odds of finding a match, we will also try our best to fundraise for a larger cohort rather than a very small one.
Their research agendas/plans for the organizations do not turn out to be as useful as expected.
Lack of (AI safety) ambition in the cofounders.
-> To reduce this risk, one of our selection criteria will be for participants to be value-aligned, i.e. care strongly about reducing the worst risks from AI.
Incubated organizations do not find enough funding, either immediately after the program, or a few years after being founded.
-> To reduce this risk, we are putting together a seed funding circle and are already in touch with several potential members about this. We are also considering assisting incubatees with responsibly tapping into VC funds.
[ Medium likelihood ]
We are not able to fundraise enough for an ambitious version of this program, which could lead to us:
1) Not being able to attract the high-caliber, more senior talent that we expect would create the best organizations through this program.
2) Not being able to create an environment with the highest odds of success for participants (e.g. running a larger part of the program online rather than in-person).
The money spent on this program has less impact than it would have had if spent on other AI Safety field-building projects or existing AI safety organizations (which we could consider the counterfactual).
With the program, we will have taken up time from a number of talented and important individuals, time that could have been better spent elsewhere.
We attract fewer than 7 people who meet our standards for suitable technical participants with a promising proposal for research directions.
-> To reduce this risk, we are already on the look-out for and in conversation with promising prospective participants.
[ Low likelihood ]
We attract fewer than 7 suitable participants with an entrepreneurial background to join the program.
-> To reduce this risk, we are already on the look-out for and in conversation with promising prospective participants.
We do not manage to select the most promising research organization proposals to incubate.
-> To reduce this risk, we’re working on setting up an external board of trusted experts to help us evaluate these applications.
We inadvertently incubate organizations that end up fast-tracking capabilities research over safety research.
-> To reduce this risk, we will:
Select for a strong impact focus & value-alignment in participants.
Assist the founders in setting up their organization in a way that limits the potential for value drift to take over (e.g. a charter for the forming organization that would legally make this more difficult, helping them vet whom they take on as investors or board members, or suggesting ones we know have a strong AI safety focus).
Lightspeed awarded Catalyze $30K in seed funding in August of this year.
Fundraising plans
We are in the process of applying to LTFF, SFF, and Open Philanthropy.
We are planning to lower accommodation and office costs through potential partnerships and sponsorships.
We are working on acquiring funding from a number of High-Net-Worth Individuals.
We are planning to approach the UK’s Foundation Model Taskforce / AI Safety Institute for funding as well.
Whenever possible, our goal is to fundraise money that counterfactually would not have been spent on x-risk-focused AI safety and/or EA-aligned projects.
Application status across various funders