🔥 ARC Prize 2026 has officially launched!

Looking back, our lab’s research around ARC has spanned several flavors:

  1. Work directly targeting ARC-AGI competition (e.g., Jiwon/Hyunseok in 2024)
  2. Research that is directly relevant to ARC-AGI solving but didn’t translate into competition submissions:
    • 2-1) Benchmarks providing unconventional paradigms (e.g., O2ARC/ARCLE/ARCTraj for imitation learning/reinforcement learning, GIFARC for connecting common-sense knowledge and abstracted forms of information)
    • 2-2) Methodologies: a symbolic solver with a knowledge graph; DIAR/LDCQ/DT (a concept we pioneered, though it couldn’t escape the hungry appetite for massive offline trajectories); genetic programming with LLM refinement; LLMs with feedback loops; GFlowNet-based solving, which gracefully pivoted into data augmentation; and model-based RL
  3. Research that started from the ARC-AGI wave but evolved into:
    • 3-1) Methodological work pushing SoTA on related benchmarks (e.g., skill RL, curriculum learning, n-step chunking in RL; soft prompt/memory augmentation at test-time evaluated on mathematical reasoning)
    • 3-2) Underlying ML foundations — studying program spaces for ARC-AGI-like problems, which opens doors to geometric deep learning, continuous-to-discrete transforms, and representation learning more broadly
    • 3-3) Analytical work explaining phenomena in and around large models (e.g., analyzing LLM reasoning capabilities on ARC-AGI, findings from generating Multiple-Choice LARC problems, LLM addiction)
  4. Research rooted in broader interests in general intelligence and learning mechanisms (e.g., language of motion, active inference, RAG)

With that context, a few words to the team:

This year, for students directly working on ARC-AGI-2/3 solving, I’d encourage you to take ownership and build a competition-ready working model as quickly as possible, then maintain and evolve it throughout the year — rapidly integrating modules that your labmates develop. This is a great way to sharpen your engineering skills through real competition experience. Becoming a great researcher requires many things, but I believe that research conducted by someone with proven engineering and competition ability is far more likely to be solid and error-free (NVARC comes from Kaggle Grandmasters at NVIDIA; likewise, The ARChitects are from Lambda). And if you’re considering industry positions, this capability is essential.

On top of the ARC-AGI competition itself, you’re all now aware that creative approaches like CompressARC and TRM have been gaining traction. Your novel module contributions can also be submitted for the Paper Award — and most Paper Award winners have been published at top-tier venues (ICLR, NeurIPS, etc.) and have led to job and PhD opportunities (e.g., NDEA, Tufa labs, MIT, …). So I’d love to see many of you aiming for algorithmic, ML, or learning-methodological contributions as well.

As for me, I plan to shed side projects as much as possible and focus my energy on methods directly applicable to ARC-AGI (an efficient/effective program synthesizer), as well as on benchmarks and research directions beyond that (self-consciousness, etc.).