☕︎ DomainEval ☕️
An Auto-Constructed Benchmark for Multi-Domain Code Generation
🏆 Leaderboard
| # | Model | Pass |
|---|-------|------|
📝 Submission
Thank you for your interest in DomainEval. We warmly welcome
researchers to submit additional benchmark results, as we believe
collaborative efforts can significantly advance the study of
Large Language Models and software engineering. For submission
guidelines, please refer to the Submission Guide in our
GitHub repository.