Experiments with Two New Boosting Algorithms

ABSTRACT

Boosting is an effective classifier combination method that can improve the classification performance of an unstable learning algorithm, but it does not yield much improvement for a stable learning algorithm. In this paper, multiple TAN classifiers are combined by a combination method called Boosting-MultiTAN, which is compared with the Boosting-BAN classifier, a boosting algorithm based on BAN combination. We describe experiments carried out to assess how well the two algorithms perform on real learning problems. Finally, experimental results show that Boosting-BAN achieves higher classification accuracy on most data sets, while Boosting-MultiTAN performs well on the others. These results argue that boosting algorithms deserve more attention in the machine learning and data mining communities.
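The boosting loop underlying both Boosting-MultiTAN and Boosting-BAN is the standard AdaBoost.M1 reweighting scheme: train a base classifier on weighted examples, measure its weighted error, down-weight the examples it got right, and combine the rounds by weighted voting. The sketch below illustrates that loop. TAN and BAN classifiers are not available in scikit-learn, so Gaussian naive Bayes is used as a hypothetical stand-in base learner; substituting a TAN or BAN implementation gives the combinations studied in the paper.

```python
# AdaBoost.M1 sketch of the boosting loop the paper builds on.
# GaussianNB is a stand-in for the TAN/BAN base classifiers.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris

def adaboost_m1(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)            # start with uniform example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        h = GaussianNB().fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = w[pred != y].sum()       # weighted training error
        if err == 0 or err >= 0.5:     # stop if learner is perfect or too weak
            break
        beta = err / (1.0 - err)
        w[pred == y] *= beta           # down-weight correctly classified examples
        w /= w.sum()                   # renormalize to a distribution
        learners.append(h)
        alphas.append(np.log(1.0 / beta))  # vote weight for this round
    return learners, alphas

def boosted_predict(learners, alphas, X, classes):
    votes = np.zeros((len(X), len(classes)))
    for h, a in zip(learners, alphas):
        pred = h.predict(X)
        for ci, c in enumerate(classes):
            votes[pred == c, ci] += a  # weighted vote for each class
    return classes[votes.argmax(axis=1)]

X, y = load_iris(return_X_y=True)
learners, alphas = adaboost_m1(X, y, n_rounds=5)
y_hat = boosted_predict(learners, alphas, X, np.unique(y))
print("training accuracy:", (y_hat == y).mean())
```

Swapping the base learner is the only change needed to move between the two algorithms: Boosting-MultiTAN boosts an ensemble of TAN classifiers, while Boosting-BAN boosts a BAN classifier inside the same loop.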

Conflicts of Interest

The authors declare no conflicts of interest.

Cite this paper

X. Sun and H. Zhou, "Experiments with Two New Boosting Algorithms," Intelligent Information Management, Vol. 2 No. 6, 2010, pp. 386-390. doi: 10.4236/iim.2010.26047.

Copyright © 2019 by authors and Scientific Research Publishing Inc.

Creative Commons License

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.