Vibe-Eval: A hard evaluation suite for measuring progress of multimodal language models

P Padlewski, M Bain, M Henderson, Z Zhu, N Relan, H Pham, D Ong, K Aleksiev…
arXiv preprint arXiv:2405.02287, 2024 - arxiv.org
We introduce Vibe-Eval: a new open benchmark and framework for evaluating multimodal chat models. Vibe-Eval consists of 269 visual understanding prompts, including 100 of hard difficulty, complete with gold-standard responses authored by experts. Vibe-Eval is open-ended and challenging with dual objectives: (i) vibe checking multimodal chat models for day-to-day tasks and (ii) rigorously testing and probing the capabilities of present frontier models. Notably, our hard set contains >50% questions that all frontier models answer incorrectly. We explore the nuances of designing, evaluating, and ranking models on ultra-challenging prompts. We also discuss trade-offs between human and automatic evaluation, and show that automatic model evaluation using Reka Core roughly correlates to human judgment. We offer free API access for the purpose of lightweight evaluation and plan to conduct formal human evaluations for public models that perform well on Vibe-Eval's automatic scores. We release the evaluation code and data; see https://github.com/reka-ai/reka-vibe-eval
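The abstract's claim that automatic (model-judged) scores "roughly correlate" with human judgment is typically checked by computing a correlation coefficient between the two score series. A minimal sketch of that check, using made-up per-prompt scores (the names and numbers below are hypothetical, not the paper's data):

```python
# Hypothetical illustration: measuring how well an automatic judge's
# per-prompt scores track human ratings. The score values are invented
# for demonstration; Vibe-Eval's actual data is in its released repo.
from statistics import mean


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)


# Made-up per-prompt ratings on a 1-5 scale (automatic judge vs. human rater).
auto_scores = [4, 2, 5, 3, 1, 4, 2, 5]
human_scores = [5, 2, 4, 3, 1, 4, 3, 5]

print(pearson(auto_scores, human_scores))
```

A value near 1.0 would indicate that ranking models by the automatic judge closely matches ranking them by human evaluation; in practice rank correlations (Spearman/Kendall) are also common for this comparison.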