
Incremental decision tree


An incremental decision tree algorithm is an online machine learning algorithm that outputs a decision tree. Many decision tree methods, such as C4.5, construct a tree using a complete dataset. Incremental decision tree methods allow an existing tree to be updated using only new individual data instances, without having to re-process past instances. This may be useful in situations where the entire dataset is not available when the tree is updated (i.e., the data was not stored), where the original dataset is too large to process, or where the characteristics of the data change over time.
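
The idea can be illustrated with a minimal, self-contained sketch (illustrative only, not any published algorithm): a one-level "tree" over a single categorical feature that stores only per-branch class counts. Because the counts are sufficient statistics, each new instance updates the model in constant time, and past instances never need to be stored or revisited.

    from collections import Counter, defaultdict

    class IncrementalStump:
        """A one-level 'tree' over one categorical feature.

        Only per-branch class counts are kept, so the model can be
        updated one instance at a time without revisiting old data.
        """

        def __init__(self):
            # feature value -> counts of class labels seen under that value
            self.counts = defaultdict(Counter)

        def update(self, value, label):
            # Incorporate a single new instance in O(1) time.
            self.counts[value][label] += 1

        def predict(self, value):
            branch = self.counts.get(value)
            if not branch:
                return None  # value never seen: no prediction yet
            return branch.most_common(1)[0][0]  # majority class in branch

    stump = IncrementalStump()
    for value, label in [("sunny", "play"), ("rain", "stay"), ("sunny", "play")]:
        stump.update(value, label)  # one pass over the stream
    print(stump.predict("sunny"))  # -> 'play'

Full incremental decision tree learners generalize this pattern: they maintain such statistics at every node and revise the tree structure itself as new instances arrive.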

Applications

  • On-line learning
  • Data streams
  • Concept drift
  • Data which can be modeled well using a hierarchical model.
  • Systems where a user-interpretable output is desired.

Methods

Here is a short list of incremental decision tree methods, organized by their (usually non-incremental) parent algorithms.

CART family

CART[1] (1984) is a nonincremental decision tree inducer for both classification and regression problems, developed in the mathematics and statistics communities. CART traces its roots to AID (1963).[2]

  • Incremental CART (1989)[3] is Crawford's modification of CART to incorporate data incrementally.

ID3/C4.5 family

ID3 (1986)[4] and C4.5 (1993)[5] were developed by Quinlan and have roots in Hunt's Concept Learning System (CLS, 1966).[6] The ID3 family of tree inducers was developed in the engineering and computer science communities.

  • ID3' (1986)[7] was suggested by Schlimmer and Fisher. It was a brute-force method to make ID3 incremental: after each new data instance is acquired, an entirely new tree is induced using ID3 (see the sketch below this list).
  • ID4 (1986)[7] could incorporate data incrementally. However, certain concepts were unlearnable, because ID4 discards subtrees when a new test is chosen for a node.
  • ID5 (1988)[8] did not discard subtrees, but it also did not guarantee that it would produce the same tree as ID3.
  • ID5R (1989)[9] output the same tree as ID3 for a dataset regardless of the incremental training order. This was accomplished by recursively updating the tree's subnodes. It did not handle numeric variables, multiclass classification tasks, or missing values.
  • ID6MDL (2007)[10] is an extended version of the ID3 and ID5R algorithms that post-prunes the induced tree.
  • ITI (1997)[11] is an efficient method for incrementally inducing decision trees. The same tree is produced for a dataset regardless of the order in which the data are presented, and regardless of whether the tree is induced incrementally or non-incrementally (in batch mode). It can accommodate numeric variables, multiclass tasks, and missing values. Code is available on the web.

Note: ID6NB (2009)[12] is not incremental.
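
The brute-force strategy of ID3' can be sketched as follows; scikit-learn's CART-based DecisionTreeClassifier stands in for ID3 (an assumption made for illustration), since the point is the retraining loop rather than the particular inducer.

    from sklearn.tree import DecisionTreeClassifier

    X_seen, y_seen = [], []  # the brute-force approach retains all past instances
    tree = None

    def incorporate(x, y):
        """Accept one instance, then re-induce the tree from all data so far."""
        global tree
        X_seen.append(x)
        y_seen.append(y)
        tree = DecisionTreeClassifier().fit(X_seen, y_seen)  # full re-induction

    for x, y in [([0, 1], 0), ([1, 1], 1), ([1, 0], 1)]:
        incorporate(x, y)  # each update re-processes every stored instance

    print(tree.predict([[1, 1]]))  # -> [1]

The cost of incorporating the n-th instance grows with n; avoiding this re-processing is precisely what ID4, ID5R, and ITI were designed to do.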

Other Incremental Learning Systems

Several incremental concept learning systems did not build decision trees, but predated and influenced the development of the earliest incremental decision tree learners, notably ID4.[7] Notable among these was Schlimmer and Granger's STAGGER (1986),[13] which learned disjunctive concepts incrementally. STAGGER was developed to examine concepts that changed over time (concept drift). Prior to STAGGER, Michalski and Larson (1978)[14] investigated an incremental variant of AQ (Michalski, 1973),[15] a supervised system for learning concepts in disjunctive normal form (DNF). Experience with these earlier systems and others, including incremental tree-structured unsupervised learning, contributed to a conceptual framework for evaluating incremental decision tree learners specifically, and incremental concept learning generally, along four dimensions that reflect the inherent tradeoffs between learning cost and quality:[7] (1) the cost of updating the knowledge base, (2) the number of observations required to converge on a knowledge base with given characteristics, (3) the total effort (a function of the first two dimensions) that a system exerts, and (4) the quality (often the consistency) of the final knowledge base. Some of the historical context in which incremental decision tree learners emerged is given in Fisher and Schlimmer (1988),[16] which also expands on the four-factor framework used to evaluate and design incremental learning systems.

VFDT Algorithm

The Very Fast Decision Tree (VFDT) learner reduces training time for large incremental data sets by subsampling the incoming data stream: a leaf is split as soon as the examples seen there statistically suffice to identify the best split attribute (see the sketch after this list).

  • VFDT (2000)[17]
  • CVFDT (2001)[18] can adapt to concept drift by using a sliding window on incoming data; old data outside the window is forgotten.
  • VFDTc (2006)[19] extends VFDT for continuous data, concept drift, and application of Naive Bayes classifiers in the leaves.
  • VFML (2003) is a toolkit developed by the creators of VFDT and CVFDT; it is available on the web.
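
The statistical test underlying VFDT is the Hoeffding bound: after n independent observations of a variable with range R, the observed mean is within ε = √(R² ln(1/δ) / 2n) of the true mean with probability 1 − δ. VFDT splits a leaf only when the observed gain of the best attribute exceeds that of the second-best by more than ε. A minimal sketch of this test in Python (parameter names are illustrative):

    import math

    def hoeffding_bound(value_range, delta, n):
        """Half-width of the confidence interval after n observations."""
        return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

    def should_split(best_gain, second_gain, value_range, delta, n):
        """Split when the best attribute confidently beats the runner-up."""
        return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

    # After 500 examples, gains 0.42 vs 0.30, gain range 1.0, delta 1e-6:
    print(should_split(0.42, 0.30, 1.0, 1e-6, 500))  # -> True (bound ≈ 0.118)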

EFDT Algorithm

The Extremely Fast Decision Tree learner[20] is statistically more powerful than VFDT, allowing it to learn more detailed trees from less data. It differs from VFDT in the method for deciding when to insert a new branch into the tree. VFDT waits until it is confident that the best available branch is better than any alternative. In contrast, EFDT splits as soon as it is confident that the best available branch is better than the current alternative. Initially, the current alternative is no branch. This allows EFDT to insert branches much more rapidly than VFDT. During incremental learning this means that EFDT can deploy useful trees much sooner than VFDT.

However, the new branch selection method greatly increases the likelihood of selecting a suboptimal branch. Consequently, EFDT keeps monitoring the performance of all branches and will replace a branch as soon as it is confident there is a better alternative (see the sketch below).
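
The difference between the two split tests can be made concrete with a short sketch (function names are illustrative), reusing the Hoeffding bound ε from the VFDT section. VFDT compares the best candidate attribute against the second-best, while EFDT compares it against the node's current split, whose gain is zero while the node is still a leaf.

    import math

    def epsilon(value_range, delta, n):
        # Hoeffding bound, as in the VFDT sketch above.
        return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

    def vfdt_splits(best_gain, second_best_gain, eps):
        # VFDT: split only when the best attribute beats the runner-up.
        return best_gain - second_best_gain > eps

    def efdt_splits(best_gain, current_split_gain, eps):
        # EFDT: split as soon as the best attribute beats the node's current
        # split; for a leaf the current split is "no split", with gain 0.
        return best_gain - current_split_gain > eps

    eps = epsilon(1.0, 1e-6, 500)  # ≈ 0.118
    print(vfdt_splits(0.40, 0.35, eps))  # -> False: runner-up is too close
    print(efdt_splits(0.40, 0.0, eps))   # -> True: clearly better than no split

Because beating "no split" is a much weaker requirement than beating the runner-up, EFDT commits to branches far earlier; the continued monitoring described above then repairs any suboptimal choices this introduces.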

OLIN and IFN

  • OLIN (2002)[21]
  • IOLIN (2008)[22] is based on the Info-Fuzzy Network (IFN).[23]

References

  1. ^ Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. (1984). Classification and regression trees. Belmont, CA: Wadsworth International. ISBN 978-1-351-46048-4.
  2. ^ Morgan, J.N; Sondquist, J.A. (1963). "Problems in the analysis of survey data, and a proposal" (PDF). J. Amer. Statist. Assoc. 58 (302): 415–434. doi:10.1080/01621459.1963.10500855.
  3. ^ Crawford, S.L. (1989). "Extensions to the CART algorithm". International Journal of Man-Machine Studies. 31 (2): 197–217. doi:10.1016/0020-7373(89)90027-8.
  4. ^ Quinlan, J.R. (1986). "Induction of Decision Trees" (PDF). Machine Learning. 1 (1): 81–106. doi:10.1007/BF00116251. S2CID 13252401.
  5. ^ Quinlan, J.R. (2014) [1993]. C4.5: Programs for machine learning. Elsevier. ISBN 978-1-55860-238-0.
  6. ^ Hunt, E.B.; Marin, J.; Stone, P.J. (1966). Experiments in induction. Academic Press. ISBN 978-0-12-362350-8.
  7. ^ a b c d Schlimmer, J.C.; Fisher, D. (1986). "A case study of incremental concept induction". AAAI'86: Proceedings of the Fifth National Conference on Artificial Intelligence. Morgan Kaufmann. pp. 496–501.
  8. ^ Utgoff, P.E. (1988). "ID5: An incremental ID3" (PDF). Machine Learning Proceedings 1988: Proceedings of the Fifth International Conference on Machine Learning. Morgan Kaufmann. pp. 107–120. doi:10.1016/B978-0-934613-64-4.50017-7. ISBN 978-0-934613-64-4.
  9. ^ Utgoff, P.E. (1989). "Incremental induction of decision trees" (PDF). Machine Learning. 4 (2): 161–186. doi:10.1023/A:1022699900025. S2CID 5293072.
  10. ^ Kroon, M.; Korzec, S.; Adriani, P. (2007). ID6MDL: Post-Pruning Incremental Decision Trees.
  11. ^ Utgoff, P.E.; Berkman, N.C.; Clouse, J.A. (1997). "Decision tree induction based on efficient tree restructuring" (PDF). Machine Learning. 29: 5–44. doi:10.1023/A:1007413323501. S2CID 2743403.
  12. ^ Appavu, S.; Rajaram, R. (2009). "Knowledge-based system for text classification using ID6NB algorithm". Knowledge-Based Systems. 22 (1): 1–7. doi:10.1016/j.knosys.2008.04.006.
  13. ^ Schlimmer, J.C.; Granger, Jr., R.H. (1986). "Incremental learning from noisy data". Machine Learning. 1 (3): 317–354. doi:10.1007/BF00116895. S2CID 33776987.
  14. ^ Michalski, R.S.; Larson, J.B. (1978). Selection of most representative training examples and incremental generation of VL hypotheses: The underlying methodology and the description of the programs ESEL and AQ11 (PDF) (Technical report). University of Illinois, Department of Computer Science. hdl:1920/1544/78-03. UIUCDCS-R-78-867.
  15. ^ Michalski, R.S. (1973). "Discovering classification rules using variable-valued logic system VL1" (PDF). IJCAI'73: Proceedings of the Third International Joint Conference on Artificial Intelligence. Stanford, CA: Morgan Kaufmann. pp. 162–172. hdl:1920/1515/73-01.
  16. ^ Fisher, D.; Schlimmer, J. (1988). Models of Incremental Concept Learning: A coupled research proposal (Technical report). Vanderbilt University. CS-88-05.
  17. ^ Domingos, P.; Hulten, G. (2000). "Mining high-speed data streams" (PDF). Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining. ACM Press. pp. 71–80. doi:10.1145/347090.347107. ISBN 1-58113-233-6. S2CID 8810610.
  18. ^ Hulten, G.; Spencer, L.; Domingos, P. (2001). "Mining time-changing data streams" (PDF). Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining. ACM Press. pp. 97–106. doi:10.1145/502512.502529. ISBN 978-1-58113-391-2. S2CID 6416602.
  19. ^ Gama, J.; Fernandes, R.; Rocha, R. (2006). "Decision trees for mining data streams". Intelligent Data Analysis. 10: 23–45. doi:10.3233/IDA-2006-10103.
  20. ^ Manapragada, C.; Webb, G. I.; Salehi, M. (2018). "Extremely Fast Decision Tree". KDD '18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM Press. pp. 1953–62. arXiv:1802.08780. doi:10.1145/3219819.3220005. ISBN 978-1-4503-5552-0. S2CID 3513409.
  21. ^ Last, M. (2002). "Online classification of nonstationary data streams" (PDF). Intell. Data Anal. 6 (2): 129–147. doi:10.3233/IDA-2002-6203.
  22. ^ Cohen, L.; Avrahami, G.; Last, M.; Kandel, A. (2008). "Info-fuzzy algorithms for mining dynamic data streams" (PDF). Applied Soft Computing. 8 (4): 1283–94. doi:10.1016/j.asoc.2007.11.003.
  23. ^ Maimon, O.; Last, M. (2000). The info-fuzzy network (IFN) methodology. Knowledge Discovery and Data Mining. Kluwer. doi:10.1007/978-1-4757-3296-2. ISBN 978-1-4757-3296-2. S2CID 41520652.