
A Learning Algorithm based on Primary School Teaching Wisdom

Ninan Sajeeth Philip

Abstract

A learning algorithm based on primary school teaching and learning is presented. The method continuously evaluates the performance of the network and retrains it on the examples it repeatedly fails on, until all the examples are correctly classified. Empirical analysis on UCI data shows that the algorithm produces good training data and improves the generalization ability of the network on unseen data. The algorithm has interesting applications in data mining, model evaluation, and the discovery of rare objects.
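
To make the idea concrete, the following is a minimal Python sketch of the evaluate-and-retrain loop the abstract describes. It assumes a generic classifier with scikit-learn-style fit/predict methods; the function name teach_until_mastered and the oversampling scheme (repeating each example in proportion to its failure count) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def teach_until_mastered(clf, X, y, max_rounds=50):
    """Evaluate-and-retrain loop: keep retraining `clf`, oversampling the
    examples it keeps getting wrong, until every training example is
    classified correctly or `max_rounds` is reached."""
    X, y = np.asarray(X), np.asarray(y)
    fail_count = np.zeros(len(X), dtype=int)  # how often each example failed
    clf.fit(X, y)
    for _ in range(max_rounds):
        wrong = clf.predict(X) != y
        if not wrong.any():
            break  # all training examples mastered
        fail_count[wrong] += 1
        # Repeat each example in proportion to its failure count, so the
        # "slow learners" dominate the next round of teaching.  This
        # weighting is an illustrative choice, not prescribed by the paper.
        idx = np.repeat(np.arange(len(X)), 1 + fail_count)
        clf.fit(X[idx], y[idx])
    return clf, fail_count
```

Any estimator exposing fit and predict can be dropped in (for example sklearn.linear_model.SGDClassifier). Examples that retain a high fail count after many rounds are natural candidates for the rare-object discovery application mentioned above.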


Received: 2010-08-11
Accepted: 2011-01-31
Published Online: 2011-02-11
Published in Print: 2010-09-01

© Ninan Sajeeth Philip

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
