Apr 5, 2024 · Abstract: Knowledge probing assesses to which degree a language model (LM) has successfully learned relational knowledge during pre-training.
Our experimental evaluation of 22 common LMs shows that our proposed framework, BEAR, can effectively probe for knowledge across different LM types.
Dec 16, 2023 · We propose an approach that uses an LM's inherent ability to estimate the log-likelihood of any given textual statement.
Apr 5, 2024 · We release the BEAR datasets and an open-source framework that implements the probing approach to the research community to facilitate the ...
Feb 29, 2020 · In BEAR, we create separate textual statements for a list of potential answers and select the statement with the lowest (pseudo) perplexity as the LM's prediction.
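Taken together, these snippets describe the probing mechanism: each candidate answer for a relation instance is verbalized as a full statement, every statement is scored by the LM, and the lowest-perplexity statement is taken as the prediction. Below is a minimal sketch of that ranking idea for a causal LM, assuming a Hugging Face model; the model name and example statements are illustrative and not taken from the BEAR release.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM from the Hugging Face hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_nll(statement: str) -> float:
    """Mean per-token negative log-likelihood of a statement; exp() of this is its perplexity."""
    inputs = tokenizer(statement, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return loss.item()

# One verbalized statement per candidate answer (illustrative relation instance).
candidates = {
    "Paris": "The capital of France is Paris.",
    "Lyon": "The capital of France is Lyon.",
    "Berlin": "The capital of France is Berlin.",
}

scores = {answer: mean_nll(text) for answer, text in candidates.items()}
prediction = min(scores, key=scores.get)  # lowest NLL <=> lowest perplexity
print(prediction, scores)
```

For masked LMs, the "(pseudo)" in the snippet above refers to a pseudo log-likelihood obtained by masking and scoring one token at a time; the causal-LM sketch here illustrates only the ranking step over candidate statements.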
Apr 4, 2024 · The BEAR dataset and its larger version, BEAR big, are benchmarks for evaluating common factual knowledge contained in language models.
Apr 7, 2024 · Description: The paper aims to develop a unified framework, BEAR, for assessing relational knowledge in both causal and masked language models.
Apr 7, 2024 · This paper introduces BEAR, a unified framework for evaluating relational knowledge in both causal and masked language models.
BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models. J Wiland, M Ploner, A Akbik. arXiv preprint arXiv:2404.04113, 2024.