Development of Automatically Updated Soundmaps for the Preservation of Natural Environment

Abstract

Automatically Updated Soundmaps are maps that convey the sound content, rather than the visual content, of an area of interest at a certain time instant or over a certain period. Sound features encapsulate information that can be combined with the visual features of the landscape, leading to useful environmental conclusions. This work aims to construct an Automatically Updated Soundmap of an area of environmental interest. A hierarchical pattern recognition approach is proposed that exploits sound recordings collected by a network of microphones. In this way, after appropriate signal processing, the large amounts of information originally available in the raw form of sound recordings can be presented in the concise yet meaningful form of a periodically updated soundmap.
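
For illustration only, the following Python sketch shows one way such a pipeline could be organized: simple spectral features are extracted from each recording, a coarse classifier separates broad sound categories, a second classifier refines the decision within one branch, and the resulting labels are attached to the microphone locations to update the soundmap. The feature set, the class labels, the nearest-centroid classifiers, and the coordinates are hypothetical placeholders; the paper's actual feature extraction and classification stages are not reproduced here.

import numpy as np

def spectral_features(signal, fs):
    # Crude spectral descriptors (centroid and bandwidth) of one recording.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = spectrum ** 2
    total = power.sum() + 1e-12
    centroid = (freqs * power).sum() / total
    bandwidth = np.sqrt(((freqs - centroid) ** 2 * power).sum() / total)
    return np.array([centroid, bandwidth])

class NearestCentroid:
    # Minimal stand-in for a trained classifier at one level of the hierarchy.
    def __init__(self, centroids):
        self.centroids = centroids  # {class label: reference feature vector}
    def predict(self, features):
        return min(self.centroids,
                   key=lambda label: np.linalg.norm(features - self.centroids[label]))

# Level 1 separates broad categories; level 2 refines the "natural" branch.
# The centroid values below are arbitrary placeholders, not trained models.
level1 = NearestCentroid({"natural": np.array([2800.0, 500.0]),
                          "anthropogenic": np.array([4000.0, 2500.0])})
level2 = NearestCentroid({"birdsong": np.array([3000.0, 300.0]),
                          "wind/water": np.array([1000.0, 600.0])})

def classify(signal, fs):
    # Hierarchical decision: coarse label first, finer label only if "natural".
    features = spectral_features(signal, fs)
    coarse = level1.predict(features)
    return level2.predict(features) if coarse == "natural" else coarse

def update_soundmap(soundmap, recordings, fs=16000):
    # Tag each microphone location with the label of its latest recording.
    for location, signal in recordings.items():
        soundmap[location] = classify(signal, fs)
    return soundmap

# Two synthetic one-second recordings stand in for data from the network.
t = np.arange(16000) / 16000.0
recordings = {
    (38.00, 23.70): np.sin(2 * np.pi * 3000 * t),                # tonal signal
    (38.01, 23.71): np.random.default_rng(0).normal(size=16000)  # broadband noise
}
print(update_soundmap({}, recordings))

In a deployed system, the placeholder classifiers would presumably be replaced by models trained on labelled recordings from the monitored area, and the map update would run periodically as new recordings arrive from the microphone network.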

Share and Cite:

I. Paraskevas, S. Potirakis, I. Liaperdos and M. Rangoussi, "Development of Automatically Updated Soundmaps for the Preservation of Natural Environment," Journal of Environmental Protection, Vol. 2, No. 10, 2011, pp. 1388-1391. doi: 10.4236/jep.2011.210161.

Conflicts of Interest

The authors declare no conflicts of interest.

