Laude Institute Launches First Slingshots AI Grants Cohort

Key Points
- Laude Institute launches first Slingshots AI grant cohort.
- Fifteen projects receive funding, compute, and engineering support.
- Grants focus on advancing AI evaluation benchmarks.
- Included projects: Terminal Bench, ARC‑AGI, Formula Code, BizBench.
- John Boda Yang leads CodeClash, a competition‑based code assessment.
- Recipients must deliver tangible outcomes such as startups or open‑source tools.
- Yang cautions against benchmarks becoming overly company‑specific.
The Laude Institute announced its inaugural Slingshots grant program, providing funding, compute power, and product and engineering support to 15 AI research projects focused on evaluation. The cohort includes initiatives such as the Terminal Bench coding benchmark, an updated ARC-AGI project, Formula Code from Caltech and UT Austin, and Columbia's BizBench. SWE‑Bench co‑founder John Boda Yang leads the new CodeClash competition framework. Recipients are expected to deliver tangible outcomes such as startups or open‑source codebases, while Yang cautions against benchmarks becoming overly company‑specific.
Program Overview
The Laude Institute unveiled its first batch of Slingshots grants, a new accelerator designed to advance the science and practice of artificial intelligence. The program supplies resources that are often unavailable in typical academic settings, including funding, compute power, and product and engineering support. In return, grant recipients commit to producing a concrete work product, such as a startup, an open‑source codebase, or another type of artifact.
Cohort Composition and Focus
The inaugural cohort consists of fifteen projects, with a particular emphasis on AI evaluation. Notable projects include Terminal Bench, a command‑line coding benchmark, and the latest version of the long‑running ARC‑AGI project. Formula Code, a collaboration between researchers at Caltech and the University of Texas at Austin, aims to evaluate AI agents’ ability to optimize existing code. From Columbia University, BizBench proposes a comprehensive benchmark for “white‑collar AI agents.” Additional grants explore novel structures for reinforcement learning and model compression.
CodeClash and Industry Concerns
SWE‑Bench co‑founder John Boda Yang is part of the cohort, leading the new CodeClash project. CodeClash assesses code through a dynamic, competition‑based framework, seeking to drive progress by keeping benchmarks relevant and challenging. Yang expressed concern that benchmarks could become overly specific to individual companies, emphasizing the need for broader, open evaluation standards.