Nov 27, 2023 · arXiv 2311.16502: MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI. Authors: Xiang Yue*†, Yuansheng Ni*, Kai Zhang*, Tianyu Zheng*, Ruoqi ...
The introduction of MMMU marks a significant step towards evaluating the capabilities of LMMs in the context of Expert AGI. By assessing both basic perceptual ...
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI" ...
Dec 1, 2023 · We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.
How do we create benchmarks for measuring Expert AGI? Since the definition is based on comparison with skilled adults, a natural starting point is college-level exams.
MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI. In Proceedings of CVPR 2024, pp. 9556-9567. DOI: 10.1109/CVPR52733 ...
We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence (AGI).
... may enhance the reasoning capability of multimodal models.