
Scientific papers

Vol. 49 No. 1 (2025)

Realization of the “Camera dei Diamanti” and performance evaluation of the sound spatialization

DOI
https://doi.org/10.3280/ria1-2025oa19121
Submitted
January 10, 2025
Published
2025-09-22

Abstract

“Camera dei Diamanti” is the virtual reality laboratory of the University of Ferrara. Its main applications range from medicine and psychology to industry; practical examples are hearing-aid research, speech intelligibility, sound quality of industrial products, education, and safety training. This work describes the physical realization of the hardware, i.e. how 41 loudspeakers (including a subwoofer), acoustic treatment, and virtual reality glasses were added to an audiometric chamber. This setup makes it possible to reproduce realistic virtual reality scenarios: audio playback is achieved with Ambisonics technology, which is nowadays a standard for such laboratories, while video reproduction is managed by Unity.
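
The abstract does not specify the Ambisonic order or decoder used in the lab. As a purely illustrative sketch, the Python fragment below shows how a mono source at a given direction can be encoded into first-order Ambisonics (B-format) and decoded to a small loudspeaker layout with a basic sampling (projection) decoder; the channel ordering (ACN), normalization (SN3D), function names, and the four-loudspeaker layout are assumptions, not the configuration of the “Camera dei Diamanti”.

    import numpy as np

    def encode_foa(signal, azimuth_deg, elevation_deg):
        # Encode a mono signal into first-order Ambisonics (ACN order W, Y, Z, X; SN3D assumed).
        az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
        w = signal                                # omnidirectional component
        y = signal * np.sin(az) * np.cos(el)      # left-right
        z = signal * np.sin(el)                   # up-down
        x = signal * np.cos(az) * np.cos(el)      # front-back
        return np.stack([w, y, z, x])

    def sampling_decoder(bformat, speaker_dirs_deg):
        # Basic projection decoder: sample the encoded field at each loudspeaker direction.
        rows = []
        for az_deg, el_deg in speaker_dirs_deg:
            az, el = np.radians(az_deg), np.radians(el_deg)
            rows.append([1.0,
                         np.sin(az) * np.cos(el),
                         np.sin(el),
                         np.cos(az) * np.cos(el)])
        decode_matrix = np.array(rows) / len(speaker_dirs_deg)
        return decode_matrix @ bformat            # one feed per loudspeaker

    # Example: a 1 kHz tone placed 30 degrees to the left, on a hypothetical square layout.
    fs = 48000
    tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
    feeds = sampling_decoder(encode_foa(tone, 30, 0), [(45, 0), (135, 0), (-135, 0), (-45, 0)])

A 41-loudspeaker array such as the one described here would in practice use a higher-order decoder; the sketch keeps first order only for brevity, but the structure of the encode/decode chain is the same.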

An objective methodology based on multichannel microphones has been developed to evaluate the sound-spatialization performance and to characterize the limits of the hardware. The results obtained for the “Camera dei Diamanti” are encouraging: the reproduction error evaluated for single sources, both real and virtual, is around 1°-2°. These values are comparable to the human ability to distinguish two sounds coming from different directions under optimal conditions.
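
The abstract does not detail the multichannel-microphone analysis. One common objective approach to this kind of evaluation, shown below as a hypothetical Python sketch, estimates the direction of arrival from a first-order (B-format) recording through the time-averaged pseudo-intensity vector and then computes the angular error with respect to the intended source direction; the signal names and the reference direction are placeholders, not the paper's actual measurement chain.

    import numpy as np

    def estimate_doa(w, x, y, z):
        # Direction of arrival from B-format signals via the time-averaged pseudo-intensity vector.
        ix, iy, iz = np.mean(w * x), np.mean(w * y), np.mean(w * z)
        azimuth = np.degrees(np.arctan2(iy, ix))
        elevation = np.degrees(np.arctan2(iz, np.hypot(ix, iy)))
        return azimuth, elevation

    def angular_error(estimated_deg, reference_deg):
        # Great-circle angle between the estimated and the intended direction, in degrees.
        az1, el1 = np.radians(estimated_deg)
        az2, el2 = np.radians(reference_deg)
        cos_angle = (np.sin(el1) * np.sin(el2)
                     + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # Usage (placeholder signals): w, x, y, z would be the recorded B-format channels for a
    # source reproduced at, e.g., 30 degrees azimuth and 0 degrees elevation:
    # error_deg = angular_error(estimate_doa(w, x, y, z), (30.0, 0.0))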

