IMProofBench
Informal Mathematical Proof Benchmark
IMProofBench evaluates the ability of AI systems to create research-level mathematical proofs. We maintain a curated, private repository of PhD-level problems across pure mathematics to measure genuine mathematical reasoning capabilities while preventing data contamination and benchmark overfitting.
Benchmark Status
Question Submission Pipeline: 156 questions
Draft: 28
Under Review: 48
Accepted: 45
Graded
Participants: 178
Models: 15
Questions
Create and review mathematical proof problems to test frontier AI models.
Community
Connect with mathematical researchers and track contributions.
Dashboard
Real-time statistics and benchmark performance metrics.
Primary metric: percentage of questions with a complete and correct solution.
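As a minimal sketch of how this headline metric could be computed (the field names and grading labels here are illustrative assumptions, not the project's actual schema):

```python
# Hypothetical sketch of the IMProofBench headline metric: the
# percentage of graded questions answered with a complete and
# correct solution. All field names are assumptions for illustration.

def benchmark_score(results: list[dict]) -> float:
    """Return the percentage of graded questions solved completely and correctly."""
    graded = [r for r in results if r.get("status") == "graded"]
    if not graded:
        return 0.0
    correct = sum(1 for r in graded if r.get("grade") == "complete_and_correct")
    return 100.0 * correct / len(graded)

# Example: 2 of 3 graded questions are complete and correct; drafts are ignored.
results = [
    {"status": "graded", "grade": "complete_and_correct"},
    {"status": "graded", "grade": "partial"},
    {"status": "graded", "grade": "complete_and_correct"},
    {"status": "draft", "grade": None},
]
print(f"{benchmark_score(results):.1f}%")  # prints "66.7%"
```

Ungraded submissions (drafts, questions under review) are excluded from the denominator so the score reflects only fully adjudicated problems.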