
Codes for Evaluating Generative Benchmarks #26

Open
kygguo opened this issue Oct 10, 2024 · 1 comment

Comments

kygguo commented Oct 10, 2024

Thanks for sharing this awesome repo!

The paper reports results on MMLU, GSM8K, HumanEval, and BigBench-Hard. It seems this repo does not currently contain the code for evaluating on these benchmarks. Could you also share that code? It would be great to follow exactly the same evaluation steps when comparing against other alignment methods.

xwinxu (Collaborator) commented Oct 17, 2024

Thanks for your interest. The AlpacaEval results follow the standard instructions in the original repo: https://github.com/tatsu-lab/alpaca_eval
For the other, non-LLM-judged results, you can refer to this repo, which downloads the data and provides run scripts for the different benchmarks: https://github.com/bigcode-project/bigcode-evaluation-harness
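For reference, here is a minimal sketch of how these two harnesses are typically invoked. The command names and flags below are based on my reading of the two repos' READMEs and may have changed, so treat them as assumptions and check the upstream documentation; the model path and output file are placeholders.

```bash
# AlpacaEval: score model generations with the default LLM judge.
# Assumes you have already produced a JSON file of model outputs in the
# format described in the alpaca_eval README.
pip install alpaca-eval
alpaca_eval --model_outputs outputs/my_model_outputs.json

# bigcode-evaluation-harness: run a generative benchmark such as HumanEval.
# (Install per the repo's README first.)
# --allow_code_execution is required because the harness executes generated code.
accelerate launch main.py \
  --model <your_model_name_or_path> \
  --tasks humaneval \
  --allow_code_execution \
  --n_samples 1 \
  --temperature 0.0
```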
