I left a full-time job to focus on the space for slowing or stopping frontier AI development. This money would support me as I get oriented, provide support in the space, and possibly found an org in the next six months.
I want funding to be an organizer in this space: to organize a people movement for an AGI development moratorium. This is the most neglected angle of AI x-risk regulation. The actors already in the space are invested in different strategies and are generally reluctant to engage a popular audience with requests for action; some are ideologically opposed to popular messages because of their lack of nuance. I believe the public not only can be engaged responsibly on this issue, but must be if we are to have any hope of getting legislation through to slow or stop capabilities progress. AI safety doesn't yet have a class of people like the average environmentalist: not experts, but people who grasp the high-level issues and form the backbone of the orgs and voter blocs that give environmentalism its political power. Alignment is not a topic I want to hand to that group, but "pause until it's safe" is a simpler, more fail-safe message that I feel good about spreading to voters and politicians. By staying connected to groups involved in "inside game" efforts, I hope to provide the sorts of popular support that are most helpful to policy efforts.
I'm asking for six months' salary (Bay Area cost of living) plus travel expenses and miscellaneous event and supply expenses. With this, I want to provide community infrastructure and support for the various AI x-risk regulatory efforts, particularly in the Bay Area; help them coordinate; pursue public outreach to shift the Overton window in favor of a moratorium on frontier AI projects; and help create a base of medium-information voters dedicated to the moratorium position.
Using the environmentalist and anti-nuclear-proliferation movements as a guide, I am exploring activities that cultivate the medium-information voter: a forum dedicated to the moratorium, writing for the press, in-person demonstrations that attract media attention, local political action, and (for lack of a better word) "stunts" like suing AI labs for building weapons of mass destruction. By staying close to the inside-game political efforts, I hope to be able to provide the kinds of pressure or permission that are needed for policymakers to move in favor of shutting it down.
My desired impact on the world is to shift the Overton window in favor of a moratorium, reframing the issue from "do we have the right to interfere with AI progress?" to "AI labs have to show us their product will be safe before we allow them to continue." Because taking the argument for a moratorium to the public is still highly neglected, and because the moratorium position is rather safe and robust (not easily confused with advocacy for harmful actions), I think that even my first flailing steps in this direction will be helpful.
I want this shift in the public frame because it will lead to action to stop AGI. Politicians respond to voter opinion. Right now, the AI labs have much easier access to politicians than we do, but public support for an AI pause is already astonishingly high, and politicians take that kind of support seriously. By harnessing and increasing it, we gain another path to influence, one I do not believe the AI companies could replicate. Ultimately, my goal is to create the public pressure necessary to get laws passed to slow or stop AGI development.
(As part of staying connected to other policy efforts, I expect to provide general aid and capacity to many others in the space in my people organizer role. I expect to have direct impacts through that work as well, though money received from Manifund would not go toward funding political campaigns or lobbying directly.)
6-month salary, Bay Area cost of living: $50,000 (including ~$10,000 self-employment tax)
Travel: $10,000
Events, transportation, bureaucracy fees, supplies: $10,000
~20% buffer: $15,000
Total: $85,000
My most relevant experience is as an organizer at Harvard EA from 2014 to 2020. I attempted many projects: some succeeded, including a podcast and a career advising fellowship, while others fizzled out, including an attempt to partner with Harvard's practical ethics center. Overall, I think my most important impact was just being there to advise people on their own individual trajectories (I got Chris Bakerlee into biosecurity, for instance) and for when important opportunities arose, such as when CEA needed someone to liaise with Harvard administration for EAG Boston. I think others at Harvard EA would be most grateful for my steadiness and willingness to stay on top of paperwork and activities fairs to keep the various Harvard EA organizations alive even when programming was light. I see myself in a similar anchor role for the moratorium movement.
I performed a generalist role (in addition to doing my own research) in several labs I was part of in my previous life as an evolutionary biologist. In labs without technicians or managers, I handled tasks such as ordering, shipping, scheduling safety evaluations, updating software and equipment, and arranging repairs. Just figuring out what needed doing was the biggest part of the job, and it is what most reminds me of the task at hand: organizing popular sentiment in favor of AI x-risk regulation.
My PhD at Harvard was undertaken very independently and in the face of multiple external setbacks (two advisors left unexpectedly, forcing me to change labs and projects, and later I was very sick for a year). I did not receive much guidance and was disappointed that so much of my department's program seemed to be not subject-matter learning, but sinking or swimming in academia and the hidden norms of the field. By the end, I was uncertain that I wanted to continue in academia, so I convinced my committee to accept a "minimal dissertation" so I could leave sooner and still get the degree. You could count this as a failure at what I set out to achieve, or as an appropriate response to changing circumstances. I might have been better off quitting so I could start something new sooner, but I thought the authority of the PhD would be useful, and so far it has been. I think being "Dr. Elmore" is likely to help in organizing a people movement where I interface with lots of new people and need symbols of credibility.
At Rethink Priorities, I researched wild animal welfare (WAW). I discovered that the most productive direction for the research was more sociological and journalistic than biological, which was not anticipated and not what I was trained for. Nonetheless, I think I figured it out and produced some good thinking about how to implement the most palatable wild animal welfare interventions. (However, through this job I also came to believe that WAW was not a promising cause, so when pausing AGI progress burst into the Overton window and I felt really inspired, I was ready to pivot.)
I am a published scientific author with an h-index of 3 (not bad for an incoming assistant professor) and a well-regarded popular writer. I have maintained a blog since 2016, where I occasionally share thoughts about organizing. I even won one of the EA Forum's Decade in Review Prizes for this essay. I foresee a lot of expository and persuasive writing in this project, and I feel well-qualified to handle it.
What I lack most is expertise in ML and AI safety. I don't think that is necessary to do the people organizing well, since (the hardware overhang issue perhaps notwithstanding) I don't believe the question of whether to wait to develop AGI hinges on technical details. But I am working hard to rectify my ignorance, and I believe there is no better way to get caught up than being on the ground trying to solve actual problems. Fortunately, I also have access to the world's leading experts on AI safety when I am out of my depth.
The most serious technical objection to a moratorium is hardware overhang. If a moratorium stopped compute scaling but did not stop algorithmic progress, there could be unexpected outcomes when the more powerful algorithms were eventually applied to train models with more compute, perhaps overshooting our ability to respond. Although I take this possibility seriously, I don't see a better overall solution than a moratorium at this time, as I do not believe we are on track to solve the alignment problem without buying more time. Although policing algorithmic progress is a harder technical challenge, it is not impossible, and it will become much more conceivable as the basic idea of a moratorium becomes more mainstream.
Some people think that advocating for more "extreme" asks will harm the chances of moderate asks being adopted, but this is not my model of social change. When a flank is willing to push on the Overton window, it generally makes comparatively moderate proposals seem more reasonable.
Others have hesitated to appeal to the public on AI x-risk because of the possible corrosive effects of "public relations" thinking on our epistemics. It is possible to be very honest and upfront when presenting the moratorium issue, which is why I consider public outreach promising here but not on the more complex topic of alignment. Still, there may be many toxic effects of bringing this issue into the political realm.
I received a one-time gift from Greg Colbourn to cover two months of expenses, which I am currently using. I have applied elsewhere for funding to cover the same 6-month period once that money is gone, but so far I have no other funders. I will update this if that changes. I've also launched a GoFundMe (https://www.gofundme.com/f/pause-artificial-general-intelligence) to see how well that goes.
Change log:
Edited 7/7/23 to propose only activities that can be funded by a 501(c)(3) organization. However, some of the activities still listed might potentially count against Manifund's allowable "portion" of lobbying activities (as described here: https://nonprofitquarterly.org/advocacy-lobbying-501c3-charitable/).
7/8/23: corrected year range I organized at Harvard EA. I began my degree in 2013, but didn’t begin organizing Harvard EA until 2014.
7/10/23: Added Greg's name and the GoFundMe to Other Funding section.
App status across various funders