Google Docs Link: [v1.0] Grant Proposal for STARA
Q: Why so soon? What’s the rush?
A:
The urgency is deliberate. First, we want to capitalize on the summer break (June 2025), when undergraduates are free. Miss that window, and we won't be able to run another cohort like this until December or even next year.
Second, it’s a credibility play. Singapore’s nascent AI safety scene needs wins to generate momentum and credibility. We’re in a window where early visible wins—like a bootcamp that produces actual talent—can move the Overton window and secure long-term institutional buy-in with government agencies and other NGOs.
Q: Who is this for?
A:
STARA targets talent that is more local and more junior than ARENA's, and the program is not ARENA-branded. Examples:
Software engineers or ML practitioners pivoting into AI safety.
Postgrad CS/Math students from top Singaporean universities (NUS, NTU).
People already considering roles in SG AISI or similar orgs.
STARA (Singapore Technical Alignment Research Accelerator) is a 4-week machine learning safety bootcamp running from 2–27 June 2025, aimed at building Singapore’s AI safety research pipeline. It is modeled on the successful ARENA curriculum and delivered in part by alumni of both ARENA and MATS.
We will train 12 technically strong participants, selected from Singapore’s top universities and early-career professionals, with a curriculum adapted from the ARENA syllabus on transformers, interpretability, and evaluations. The goal is to transition them into impactful AI safety research roles, and in the process, solidify Singapore’s position as a serious secondary hub for alignment talent globally.
We are requesting USD 15,303.82 to fund this initial pilot cohort.
Our core goal is to accelerate the development of alignment-relevant technical talent in Singapore. We aim to:
Train 12 participants to become research-contributing alignment engineers,
Generate capstone projects that could plausibly become papers or research submissions,
Place graduates into roles in organizations like SG AISI or global alignment labs.
We’ll achieve this by:
Running a full-time, intensive 4-week bootcamp (in-person where possible),
Embedding participants into the local AI safety ecosystem (via SASH),
Closely mentoring participants with a high teacher-to-student ratio,
Using rigorous screening and technical vetting to ensure quality,
Focusing capstone projects around tractable, valuable research questions.
We propose a minimum budget of USD 15,303.82, covering:
Venue at SASH, an AI safety hub co-located with lorong.ai (government/industry nexus),
Operations support to run logistics, events, and track participant progress,
Compute credits for Colab Pro,
T-shirts and small program costs for branding and morale.
If additional funding becomes available, we would expand the budget to include:
Participant stipends, increasing retention and enabling full-time focus,
Catered food, which increases participant bonding, satisfaction, and onsite time,
Additional ops staff, to reduce burnout risk and improve execution quality.
Organizers are not paid out of this grant—the project director’s salary is already covered by SG AISI, and the technical director is volunteering.
ARENA alum, MATS Spring 2023 cohort
Ran AI Safety Fellowships at NUS and NTU (8 fellows per cohort)
Co-author of MACHIAVELLI (ICML 2023) and cybercapabilities.org (AAAI 2025 workshop)
Leading the operational execution, outreach, and teaching for STARA
Research Engineer at Singapore AISI
Published in EMNLP 2024, NeurIPS 2024, and ICLR 2025 on interpretability and LLM safety
Past mentees now at AI Safety Camp, Apollo Research, UK AISI, U.S. Congress
Responsible for curriculum quality, lectures, capstone mentorship, and technical office hours
Core team at EA NUS, facilitator for NUS Governance Fellowship
Handles university outreach, logistics, and program comms
Co-founder of WhiteBox Research, a technical AI upskilling program in the Philippines
Advisor on strategy, branding, and organizational connections
Together, this team combines prior fellowship execution experience, technical teaching skill, and a publication track record, with ties to institutions like AISI, Apart Research, and WhiteBox.
We see three plausible failure modes:
Insufficient participant quality.
If our outreach fails to attract technically capable applicants, the program output will be weak. We’re mitigating this by front-loading outreach (especially in-person at SASH), using pre-screening tasks, and running a teaser workshop to calibrate expectations.
Mid-program attrition.
Without stipends or external motivation, participants may drop off. We’re addressing this with milestone-based incentive structures (e.g., completion-based stipends) and by embedding participants into a tight cohort with shared accountability (pair programming) and structure.
Post-program disengagement.
If participants don’t stay active in AI safety, the program’s long-term value will decay. To combat this, we’ll actively integrate them into SASH and alumni networks, and introduce them to research mentors and fellowship pipelines.
Failure here wouldn’t just mean wasted resources—it would signal to Singapore’s institutions that alignment is unserious. But success would establish STARA as a credible local training pipeline and justify long-term investment from regional funders, governments, and labs.
Jonathan received a grant of SGD ~30,000 from the Singapore AISI to organize programs targeted primarily at top university students at NUS and NTU.
Tzu Kit Chan
about 3 hours ago
I applaud you and the team for trying to make this happen in Singapore! And to the other comments by Chris and Ryan, glad to see you're implementing the fixes upon the earliest sign of feedback! That's always commendable.
Chris Leong
4 days ago
I strongly agree with this comment that Ryan Kidd left on TARA and I think it applies to this program as well:
"As with all training programs that empower ML engineers and researchers, there is some concern that alumni will work on AI capabilities rather than safety. Therefore, it’s important to select for value alignment in applicants, as well as technical skill."
Have you given much thought to this? (You should probably think carefully about what you want to say publicly as providing too much information may make it easier for folks to hack any attempt to assess them).
My concern isn't just about alumni working on AI capabilities, it's that many people would absolutely love a free ML bootcamp and AI safety is still a relatively niche interest. So having some kind of filtration mechanism seems important to prevent the impact being diluted.
I guess it would be possible to try to convince folks about the importance of safety during a bootcamp, but I think it'd be challenging. The ARENA curriculum is quite intensive, which makes it hard to squeeze in time for people to deeply reflect on their worldview. Also, I'm just generally in favour of programs that do one thing well, since adding more goals makes it harder to hit each individual one out of the park.
If you think there aren't enough folks locally who are interested in AI safety/interpretability, you may want to consider running a variant of Condor Camp or ML4Good instead. I don't know what exactly is in their curriculum, but my impression is that these programs might be more suitable if you're aiming for a mix of technical upskilling and outreach.
Ryan Kidd
4 days ago
It seems like parts of your linked grant proposal were directly copied from TARA without citation, e.g., "grow and support an ambitious, influential, and technically capable local community focused on preventing the most harmful impacts of AI." The KPIs also seem to be copied without citation. Also, the name "STARA" is "TARA" with an "S". Did you ask TARA if you could copy their materials and name?
Jonathan Ng
4 days ago
@RyanKidd Thanks for your comment.
You're right that I drew heavily from TARA's grant proposal. Doing that without citing TARA's grant application was unfair to their effort, and I should have acknowledged their work.
I’ve reached out to Yanni to offer a direct apology and to ask whether they’re comfortable with STARA acknowledging them explicitly in the proposal, and whether they’d prefer further changes to the name or proposal content.