F-score
In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification.
The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic $F_\beta$ score applies additional weights, valuing one of precision or recall more than the other.
The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if precision and recall are zero.
Etymology
The F-measure is believed to be named after a different F function in Van Rijsbergen's book, when it was introduced at the Fourth Message Understanding Conference (MUC-4, 1992).[1]
Definition
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:[2]
$F_1 = \dfrac{2}{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = 2\cdot\dfrac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \dfrac{2\,\mathrm{tp}}{2\,\mathrm{tp} + \mathrm{fp} + \mathrm{fn}}$.
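As a minimal illustration (a pure-Python sketch; the function name and counts are illustrative, not from any particular library), the F1 score can be computed directly from confusion-matrix counts:

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Compute precision, recall and F1 from raw confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # Harmonic mean of precision and recall; equivalently 2*tp / (2*tp + fp + fn).
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return precision, recall, f1

# Example: 8 true positives, 2 false positives, 4 false negatives
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f1)  # 0.8, 0.666..., 0.727...
```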
Fβ score
A more general F score, $F_\beta$, that uses a positive real factor $\beta$, where $\beta$ is chosen such that recall is considered $\beta$ times as important as precision, is:
$F_\beta = (1 + \beta^2) \cdot \dfrac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}$.
In terms of type I and type II errors this becomes:
$F_\beta = \dfrac{(1 + \beta^2) \cdot \mathrm{tp}}{(1 + \beta^2) \cdot \mathrm{tp} + \beta^2 \cdot \mathrm{fn} + \mathrm{fp}}$.
Two commonly used values for $\beta$ are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision.
The F-measure was derived so that $F_\beta$ "measures the effectiveness of retrieval with respect to a user who attaches $\beta$ times as much importance to recall as precision".[3] It is based on Van Rijsbergen's effectiveness measure
$E_\alpha = 1 - \left(\dfrac{\alpha}{P} + \dfrac{1-\alpha}{R}\right)^{-1}$.
Their relationship is $F_\beta = 1 - E_\alpha$ where $\alpha = \dfrac{1}{1 + \beta^2}$.
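A minimal sketch of the general $F_\beta$ computation follows (pure Python; function name and values are illustrative):

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """Weighted harmonic mean: recall is weighted beta times as heavily as precision."""
    b2 = beta ** 2
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + b2) * precision * recall / (b2 * precision + recall)

precision, recall = 0.5, 0.8
print(f_beta(precision, recall, beta=1))    # ~0.615, the ordinary F1
print(f_beta(precision, recall, beta=2))    # ~0.714, favours recall
print(f_beta(precision, recall, beta=0.5))  # ~0.541, favours precision
```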
Diagnostic testing
This is related to the field of binary classification where recall is often termed "sensitivity".
The table below summarises the standard confusion-matrix terminology and the metrics derived from it, where P is the number of real positive cases and N the number of real negative cases in the data.[4][5][6][7][8][9][10][11]

| | Predicted positive (PP) | Predicted negative (PN) |
|---|---|---|
| Actual positive (P) | True positive (TP), hit | False negative (FN), miss, type II error |
| Actual negative (N) | False positive (FP), false alarm, type I error | True negative (TN), correct rejection |

Derived metrics:

- True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP/P = 1 − FNR
- False negative rate (FNR), miss rate = FN/P = 1 − TPR
- False positive rate (FPR), probability of false alarm, fall-out = FP/N = 1 − TNR
- True negative rate (TNR), specificity (SPC), selectivity = TN/N = 1 − FPR
- Prevalence = P/(P + N)
- Positive predictive value (PPV), precision = TP/PP = 1 − FDR
- False discovery rate (FDR) = FP/PP = 1 − PPV
- False omission rate (FOR) = FN/PN = 1 − NPV
- Negative predictive value (NPV) = TN/PN = 1 − FOR
- Accuracy (ACC) = (TP + TN)/(P + N)
- Balanced accuracy (BA) = (TPR + TNR)/2
- Informedness, bookmaker informedness (BM) = TPR + TNR − 1
- Markedness (MK), deltaP (Δp) = PPV + NPV − 1
- Prevalence threshold (PT) = (√(TPR × FPR) − FPR)/(TPR − FPR)
- Positive likelihood ratio (LR+) = TPR/FPR
- Negative likelihood ratio (LR−) = FNR/TNR
- Diagnostic odds ratio (DOR) = LR+/LR−
- F1 score = 2 PPV × TPR/(PPV + TPR) = 2 TP/(2 TP + FP + FN)
- Fowlkes–Mallows index (FM) = √(PPV × TPR)
- Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR)
- Threat score (TS), critical success index (CSI), Jaccard index = TP/(TP + FN + FP)
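As an illustrative sketch (pure Python; the counts are arbitrary), several of the quantities above can be derived from a single 2×2 confusion matrix:

```python
tp, fn, fp, tn = 90, 10, 30, 870  # illustrative counts
p, n = tp + fn, fp + tn

tpr = tp / p                      # recall / sensitivity
tnr = tn / n                      # specificity
ppv = tp / (tp + fp)              # precision
npv = tn / (tn + fn)
accuracy = (tp + tn) / (p + n)
balanced_accuracy = (tpr + tnr) / 2
f1 = 2 * ppv * tpr / (ppv + tpr)  # equivalently 2*tp / (2*tp + fp + fn)
informedness = tpr + tnr - 1
markedness = ppv + npv - 1

print(round(tpr, 3), round(ppv, 3), round(f1, 3))  # 0.9 0.75 0.818
```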
Dependence of the F-score on class imbalance
The precision-recall curve, and thus the $F_\beta$ score, explicitly depends on the ratio of positive to negative test cases.[12] This means that comparison of the F-score across different problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020[13]) is to use a standard class ratio when making such comparisons.
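A minimal numerical sketch (illustrative values, not from the cited works) shows the effect: the classifier's per-class behaviour, TPR and FPR, is held fixed while negatives are made more common, and the F1 score falls because precision falls.

```python
def f1_from_rates(tpr: float, fpr: float, n_pos: int, n_neg: int) -> float:
    """F1 of a classifier with fixed TPR/FPR evaluated on a given class mix."""
    tp = tpr * n_pos
    fp = fpr * n_neg
    fn = (1 - tpr) * n_pos
    return 2 * tp / (2 * tp + fp + fn)

# Same classifier (TPR=0.8, FPR=0.1), increasingly imbalanced test sets
for n_neg in (1_000, 10_000, 100_000):
    print(n_neg, round(f1_from_rates(0.8, 0.1, n_pos=1_000, n_neg=n_neg), 3))
# 1000 -> 0.842, 10000 -> 0.571, 100000 -> 0.136
```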
Applications
The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[14] It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class.
Earlier works focused primarily on the F1 score, but with the proliferation of large scale search engines, performance goals changed to place more emphasis on either precision or recall,[15] and so $F_\beta$ is seen in wide application.
The F-score is also used in machine learning.[16] However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.[17]
The F-score has been widely used in the natural language processing literature,[18] such as in the evaluation of named entity recognition and word segmentation.
Properties
The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items.[19]
- The F1-score of a classifier which always predicts the positive class converges to 1 as the probability of the positive class increases.
- The F1-score of a classifier which always predicts the positive class is equal to 2p / (1 + p), where p is the proportion of the positive class, since the recall is 1 and the precision is equal to p (see the numerical sketch after this list).[20]
- If the scoring model is uninformative (cannot distinguish between the positive and negative class) then the optimal threshold is 0 so that the positive class is always predicted.
- F1 score is concave in the true positive rate.[21]
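A minimal numerical check of the first two properties (an illustrative sketch; the always-positive classifier is simulated directly from the positive-class proportion p):

```python
def f1_always_positive(p: float) -> float:
    """F1 of a classifier that predicts the positive class for every sample.

    Recall is 1 and precision equals the positive-class proportion p,
    so F1 = 2p / (1 + p).
    """
    return 2 * p / (1 + p)

for p in (0.01, 0.1, 0.5, 0.9, 0.99):
    print(p, round(f1_always_positive(p), 3))
# 0.01 -> 0.02, 0.1 -> 0.182, 0.5 -> 0.667, 0.9 -> 0.947, 0.99 -> 0.995
# (approaches 1 as p approaches 1)
```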
Criticism
David Hand and others criticize the widespread use of the F1 score because it gives equal importance to precision and recall. In practice, different types of misclassification incur different costs; in other words, the relative importance of precision and recall is an aspect of the problem.[22]
According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary evaluation classification.[23]
David M. W. Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class, and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.[24]
Another criticism of F1 is its lack of symmetry: its value may change when the dataset labelling is flipped, i.e. the "positive" samples are renamed "negative" and vice versa. This criticism is addressed by the P4 metric, which is sometimes described as a symmetrical extension of F1.[25]
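A small sketch (illustrative data, pure Python) shows the asymmetry: the same predictions are scored once with each class treated as positive, and the two F1 values differ.

```python
def f1(y_true, y_pred, positive):
    """F1 with the given label treated as the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

print(f1(y_true, y_pred, positive=1))  # ~0.667 when class 1 is treated as positive
print(f1(y_true, y_pred, positive=0))  # 0.8 when class 0 is treated as positive
```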
Difference from Fowlkes–Mallows index
While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean.[26]
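For example, with precision 0.5 and recall 1.0, the F-measure is 2·(0.5·1.0)/(0.5 + 1.0) ≈ 0.667, while the Fowlkes–Mallows index is √(0.5·1.0) ≈ 0.707; the harmonic mean never exceeds the geometric mean.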
Extension to multi-class classification
The F-score is also used for evaluating classification problems with more than two classes (Multiclass classification). A common method is to average the F-score over each class, aiming at a balanced measurement of performance.[27]
Macro F1
Macro F1 is a macro-averaged F1 score. To calculate macro F1, two different averaging formulas have been used: the F-score of the (arithmetic) class-wise precision and recall means, or the arithmetic mean of class-wise F-scores; the latter exhibits more desirable properties.[28]
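A minimal sketch (illustrative per-class precision and recall values, computed one-vs-rest) contrasts the two averaging formulas:

```python
# Per-class (precision, recall) for a 3-class problem (illustrative values)
per_class = [(0.9, 0.8), (0.6, 0.5), (0.3, 0.7)]

def f1(p, r):
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Variant 1: F-score of the arithmetic means of class-wise precision and recall
mean_p = sum(p for p, _ in per_class) / len(per_class)
mean_r = sum(r for _, r in per_class) / len(per_class)
macro_f1_of_means = f1(mean_p, mean_r)

# Variant 2: arithmetic mean of class-wise F-scores (the form with the more
# desirable properties according to the cited work)
macro_mean_of_f1 = sum(f1(p, r) for p, r in per_class) / len(per_class)

print(round(macro_f1_of_means, 3), round(macro_mean_of_f1, 3))  # 0.632 vs 0.604
```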
See also
- BLEU
- Confusion matrix
- Hypothesis tests for accuracy
- METEOR
- NIST (metric)
- Receiver operating characteristic
- ROUGE (metric)
- Uncertainty coefficient, aka Proficiency
- Word error rate
- LEPOR
References
- ^ Sasaki, Y. (2007). "The truth of the F-measure" (PDF). Teach tutor mater. Vol. 1, no. 5. pp. 1–5.
- ^ Aziz Taha, Abdel (2015). "Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool". BMC Medical Imaging. 15 (29): 1–28. doi:10.1186/s12880-015-0068-x. PMC 4533825. PMID 26263899.
- ^ Van Rijsbergen, C. J. (1979). Information Retrieval (2nd ed.). Butterworth-Heinemann.
- ^ Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010. S2CID 2027090.
- ^ Provost, Foster; Tom Fawcett (2013-08-01). "Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking". O'Reilly Media, Inc.
- ^ Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
- ^ Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
- ^ Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
- ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
- ^ Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 13. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
- ^ Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003.
- ^ Brabec, Jan; Komárek, Tomáš; Franc, Vojtěch; Machlica, Lukáš (2020). "On model evaluation under non-constant class imbalance". International Conference on Computational Science. Springer. pp. 74–87. arXiv:2001.05571. doi:10.1007/978-3-030-50423-6_6.
- ^ Siblini, W.; Fréry, J.; He-Guelton, L.; Oblé, F.; Wang, Y. Q. (2020). "Master your metrics with calibration". In M. Berthold; A. Feelders; G. Krempl (eds.). Advances in Intelligent Data Analysis XVIII. Springer. pp. 457–469. arXiv:1909.02827. doi:10.1007/978-3-030-44584-3_36.
- ^ Beitzel, Steven M. (2006). On Understanding and Classifying Web Queries (Ph.D. thesis). IIT. CiteSeerX 10.1.1.127.634.
- ^ X. Li; Y.-Y. Wang; A. Acero (July 2008). Learning query intent from regularized click graphs. Proceedings of the 31st SIGIR Conference. p. 339. doi:10.1145/1390334.1390393. ISBN 9781605581644. S2CID 8482989.
- ^ See, e.g., the evaluation of the [https://dl.acm.org/citation.cfm?id=1119195].
- ^ Powers, David M. W (2015). "What the F-measure doesn't measure". arXiv:1503.06410 [cs.IR].
- ^ Derczynski, L. (2016). Complementarity, F-score, and NLP Evaluation. Proceedings of the International Conference on Language Resources and Evaluation.
- ^ Manning, Christopher (April 1, 2009). An Introduction to Information Retrieval (PDF). Cambridge University Press. Exercise 8.7, p. 200. Retrieved 18 July 2022.
- ^ "What is the baseline of the F1 score for a binary classifier?". https://stats.stackexchange.com/q/390541.
- ^ Lipton, Z.C., Elkan, C.P., & Narayanaswamy, B. (2014). F1-Optimal Thresholding in the Multi-Label Setting. ArXiv, abs/1402.1892.
- ^ Hand, David. "A note on using the F-measure for evaluating record linkage algorithms". doi:10.1007/s11222-017-9746-6. hdl:10044/1/46235. S2CID 38782128. Retrieved 2018-12-08.
- ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (6): 6. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
- ^ Powers, David M W (2011). "Evaluation: From Precision, Recall and F-Score to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63. hdl:2328/27165.
- ^ Sitarz, Mikolaj (2022). "Extending F1 metric, probabilistic approach". arXiv:2210.11997 [cs.LG].
- ^ Tharwat A (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003.
- ^ Opitz, Juri (2024). "A Closer Look at Classification Evaluation Metrics and a Critical Reflection of Common Evaluation Practice". Transactions of the Association for Computational Linguistics. 12: 820–836. arXiv:2404.16958. doi:10.1162/tacl_a_00675.
- ^ J. Opitz; S. Burst (2019). "Macro F1 and Macro F1". arXiv:1911.03347 [stat.ML].