
Scientific Articles

Vol. 49 No. 1 (2025)

Construction of the “Camera dei Diamanti” and verification of its sound-spatialization performance

DOI
https://doi.org/10.3280/ria1-2025oa19121
Submitted
10 January 2025
Published
22 September 2025

Abstract

The “Camera dei Diamanti” is a virtual reality platform developed at the Università di Ferrara. Its main applications range from medicine to psychology and industry, for example research on hearing aids, speech intelligibility, product sound quality, training, and safety. The paper describes the construction of the system, which consists of an audiometric booth housing 41 loudspeakers (one of which is a subwoofer) and a virtual reality headset. This equipment reproduces realistic virtual environments: the audio is rendered with Ambisonics, a technology that has become an industry standard, while the video was developed in Unity.
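The abstract does not report implementation details. As a rough illustration of how an Ambisonics playback chain of this kind typically works, the sketch below (a minimal first-order Python example; the channel ordering, normalization, and four-loudspeaker layout are illustrative assumptions, not taken from the paper, which uses 41 loudspeakers) encodes a virtual source direction into Ambisonics signals and decodes them to a loudspeaker array with a naive sampling decoder.

```python
import numpy as np

def encode_foa(signal, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics (ACN order, SN3D).

    azimuth/elevation are in radians; returns an array of shape
    (4, samples) with channels ordered [W, Y, Z, X].
    """
    w = signal
    y = signal * np.cos(elevation) * np.sin(azimuth)
    z = signal * np.sin(elevation)
    x = signal * np.cos(elevation) * np.cos(azimuth)
    return np.stack([w, y, z, x])

def sampling_decoder(speaker_dirs):
    """Naive sampling (projection) decoder: one row per loudspeaker,
    obtained by evaluating the encoding functions at its direction."""
    rows = [[1.0,
             np.cos(el) * np.sin(az),
             np.sin(el),
             np.cos(el) * np.cos(az)] for az, el in speaker_dirs]
    return np.asarray(rows) / len(speaker_dirs)

# Hypothetical horizontal square of 4 loudspeakers (the real system has 41).
speakers = [(np.deg2rad(a), 0.0) for a in (45, 135, 225, 315)]
D = sampling_decoder(speakers)

t = np.linspace(0.0, 1.0, 48000, endpoint=False)
mono = np.sin(2 * np.pi * 440.0 * t)                # 1 s, 440 Hz test tone
bformat = encode_foa(mono, np.deg2rad(30.0), 0.0)   # virtual source at 30° azimuth
speaker_feeds = D @ bformat                         # shape: (4 loudspeakers, samples)
```

In practice a higher Ambisonics order and an optimized decoder matched to the actual loudspeaker positions would be used; the sketch only shows the encode/decode structure of the signal chain.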

An objective methodology based on multichannel microphones was devised to estimate the quality of the sound spatialization, useful for verifying that the system works as intended and for identifying its limits. The results obtained for the “Camera dei Diamanti” are particularly encouraging: the reproduction error for a single virtual or real sound source is around 1°-2°, comparable to the human ability to distinguish two sounds as coming from different directions under optimal conditions.
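The abstract does not spell out the objective verification procedure. One common way to estimate the reproduced direction of a source with a first-order multichannel (B-format) microphone is to compute the time-averaged acoustic intensity vector from the pressure and velocity channels and compare the resulting azimuth and elevation with the intended direction; the sketch below illustrates this idea under those assumptions and is not necessarily the exact method used by the authors.

```python
import numpy as np

def doa_from_bformat(w, x, y, z):
    """Estimate the direction of arrival from first-order B-format signals.

    w is the omnidirectional pressure channel; x, y, z are the
    figure-of-eight channels. Returns (azimuth, elevation) in degrees,
    taken from the time-averaged active intensity vector.
    """
    ix = np.mean(w * x)   # intensity is proportional to <pressure * velocity>
    iy = np.mean(w * y)
    iz = np.mean(w * z)
    azimuth = np.degrees(np.arctan2(iy, ix))
    elevation = np.degrees(np.arctan2(iz, np.hypot(ix, iy)))
    return azimuth, elevation

def angular_error(estimated, intended):
    """Great-circle angle in degrees between two (azimuth, elevation) pairs."""
    az1, el1 = np.radians(estimated)
    az2, el2 = np.radians(intended)
    cos_angle = (np.sin(el1) * np.sin(el2)
                 + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

For example, with an intended direction of (30°, 0°) and an estimated direction of (31.5°, 0.8°), angular_error returns roughly 1.7°, the same order of magnitude as the 1°-2° error reported above.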

