IMProofBench

Informal Mathematical Proof Benchmark

IMProofBench evaluates the ability of AI systems to produce research-level mathematical proofs. We maintain a curated, largely private repository of PhD-level problems across pure mathematics, measuring genuine mathematical reasoning while preventing data contamination and benchmark overfitting.

Questions

Create and review mathematical proof problems to test frontier AI models.

Community

Connect with mathematical researchers and track contributions.

Dashboard

Real-time statistics and benchmark performance metrics.

Participants

60 total participants

Questions

91 drafts, 14 under review, 1 accepted, 0 graded

About

Learn about the benchmark's goals, methodology, and team.

Key Features

AI Model Testing
Test problems against frontier models with immediate feedback.
Peer Review System
Expert review ensures problem quality and appropriate difficulty.
Automated Grading
Subquestions enable objective evaluation alongside proof assessment.
Privacy Preservation
A majority-private dataset prevents overfitting and gaming.