Oct 29, 2022 · This paper selects seven representative models and tests their performance on a benchmark dataset.
Text classification is a common NLP task in which an appropriate categorical label must be assigned to a sentence or document (a minimal illustrative sketch follows after these results). The categories depend on the ...
The results demonstrate that the newly proposed algorithm achieves higher accuracy and F1-measure on this type of professional dataset, and even outperforms the ...
24 hours ago · In this work, we present a performance comparison between pre-trained models and standard models on a classification task. We have developed ...
Jun 12, 2024 · In this paper, we adopt pretrained models trained on software engineering data, such as CodeBERT, BERTOverflow, seBERT, and IssueBERT, and ...
Jul 2, 2022 · This study aims to explore the model performance of various deep learning algorithms in text classification tasks on medical notes with respect to different ...
23 hours ago · In this work, we present a comparison between different techniques to perform text classification. We take into consideration seven pre-trained ...
Sep 23, 2024 · The findings of this study offer valuable insights into the nuanced strengths and limitations of pretrained models in addressing the ...
Dec 19, 2023 · Generally, the fine-tuned BERT-cased model outperformed all of the other models compared in terms of classification accuracy, performance, and loss ...
Nov 15, 2024 · Comprehensive overview including systematic review of PLMs, deep learning-based text classification, methods based on pre-trained models, ...
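The results above all describe the same basic setup: a pre-trained model is used to assign a categorical label to a sentence or document. As a rough illustration only (none of the listed papers provide this code), the sketch below assumes the Hugging Face transformers library and the publicly available distilbert-base-uncased-finetuned-sst-2-english checkpoint; the model name and example inputs are placeholders, not the seven models compared in those studies.

```python
# Minimal sketch: labeling documents with a pre-trained text-classification model.
# Assumes the `transformers` package is installed; the checkpoint below is an
# off-the-shelf sentiment model, used here only as a stand-in.
from transformers import pipeline

# Load a pre-trained text-classification pipeline (downloads the model on first use).
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Assign a categorical label (with a confidence score) to each input document.
documents = [
    "The proposed algorithm improves accuracy on the benchmark dataset.",
    "The model fails to generalize beyond the training corpus.",
]
for doc in documents:
    result = classifier(doc)[0]
    print(f"{result['label']:<10} ({result['score']:.3f})  {doc}")
```

Swapping in a domain-specific checkpoint (for example, one fine-tuned on medical notes or software engineering text, as several of the snippets describe) only requires changing the model name passed to the pipeline.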