- Loudspeaker drivers in sound localization:
Usually, when loudspeakers with more than one driver are used for sound spatialization, the differing distances between the drivers and the ears can cause comb filtering that shifts the auditory image vertically. In this research, we want to investigate this effect and ways to prevent it.
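As a back-of-the-envelope illustration (with made-up geometry, not measurements from this project), the notch frequencies of such a comb filter follow directly from the path-length difference between the two drivers and the ear:

```python
# Illustrative sketch: comb-filter notches from a driver path-length difference.
c = 343.0          # speed of sound in air, m/s
path_diff = 0.05   # assumed path-length difference between drivers and the ear, m

delay = path_diff / c   # extra propagation delay of the farther driver, s
# When both drivers radiate the same signal, destructive interference
# (comb-filter notches) occurs where the delay equals an odd half-period:
# f_notch = (2k + 1) / (2 * delay)
notches = [(2 * k + 1) / (2 * delay) for k in range(4)]
print([round(f) for f in notches])   # first notches in Hz: 3430, 10290, 17150, 24010
```

With a 5 cm difference the first notch already falls in a spectral region known to carry elevation cues, which is consistent with the vertical image shift described above.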
- Synthesis of binaural signals with arbitrary trajectories:
In this project, we will create a VR trajectory editor: software that lets a user “draw” the trajectory of a sound. This trajectory can then be used to spatialize the sound either in real time or offline, depending on the length and accuracy required. The selected student must have a strong interest in Unity and skills in C# programming.
- Improving speech localization in Virtual Reality:
Traditional recordings of Head-Related Transfer Functions (HRTFs) assume a point source radiating acoustic energy in all directions with the same intensity. However, the human voice has a distinct directional pattern that is shaped by the torso, the head, etc. These cues are important for adding realism to a virtual scene, since they let us detect where an avatar is facing regardless of its location. In this project, we will collect a database of HRTFs from a mannequin at different rotations and perform subjective experiments for validation. The selected student should have an interest in acoustic research; knowledge of Matlab and R is desirable.
- Eliminating chroma from human voice:
Pitch is a complex phenomenon that comprises chroma (the name of the note) and height (the order of the octaves). In this project we will create a real-time effect that eliminates chroma from a speaker’s voice. This is useful as a sound effect (for expressive purposes) and to investigate possible intelligibility gains of the processed audio.
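As a toy illustration of the chroma/height decomposition (not the planned voice effect itself), chroma can be taken as the fractional part of the log-frequency and height as the octave index:

```python
import math

# Toy decomposition of pitch into chroma and height:
# tones an octave apart share the same chroma but differ in height.
def chroma(f, ref=440.0):
    return math.log2(f / ref) % 1.0        # fractional part of log2-frequency

def height(f, ref=440.0):
    return math.floor(math.log2(f / ref))  # octave index relative to A4

for f in (220.0, 440.0, 880.0):            # A3, A4, A5
    print(f, round(chroma(f), 3), height(f))
```

All three tones print the same chroma (0.0) with heights -1, 0, 1; eliminating chroma means collapsing that first coordinate while preserving the second.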
- HRTF with range control plug-in for Virtual Reality:
In this project, we will create a plug-in, based on Pure-data, for Unity (or similar software). The idea is to use HRTF spatialization techniques developed in our lab in virtual environments such as those created in Unity. Our plug-in will display aural images in the near field at arbitrary azimuth, elevation, and distance (a combination not commonly available). The selected student must have a strong interest in Unity and skills in C# programming.
- Lombard effect onset:
We are investigating the differences in speech intensity of Japanese speakers subjected to alternating periods of silence and noise while engaged in tasks that vary in their communication effort and purpose.
- Build a Pure-data object for transaural spatialization:
Transaural audio is a method used to deliver binaural signals to the ears of a listener using stereo loudspeakers. The idea is to filter a binaural signal (a signal that has been filtered to modify its apparent location) such that the subsequent loudspeaker presentation produces the desired location at the ears of the listener.
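A minimal frequency-domain sketch of the idea, with made-up path gains rather than measured transfer functions: at each frequency the loudspeaker-to-ear paths form a 2×2 matrix, and inverting it cancels the crosstalk so the binaural signal arrives intact at the ears.

```python
import numpy as np

# At one frequency, H[ear, speaker] holds the complex gain of each
# loudspeaker-to-ear path (illustrative values, not measured data).
H = np.array([[1.0 + 0.0j, 0.4 - 0.2j],    # left ear:  direct, crosstalk
              [0.4 - 0.2j, 1.0 + 0.0j]])   # right ear: crosstalk, direct
C = np.linalg.inv(H)                        # crosstalk-cancellation matrix

b = np.array([0.8 + 0.1j, -0.3 + 0.5j])    # binaural signal at this frequency
ears = H @ (C @ b)                          # signal actually reaching the ears
print(np.allclose(ears, b))                 # True: binaural signal restored
```

A practical Pure-data object would apply this inversion per frequency band (or as time-domain filters) and must also cope with listener movement and ill-conditioned H.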
- Build a localization system with two microphones:
Humans, among many other species, can accurately locate the position of objects using sound. Two ears are usually sufficient for the job. Many modern computers and smartphones have several built-in microphones; the iPhone 5, for example, has three. The idea of this project is to create a system capable of recognizing the lateralization of sounds based on inter-microphone level differences and inter-microphone delays, inspired by some of the cues used by humans (ILDs and ITDs, respectively).
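Both cues can be sketched in a simulated two-microphone scenario (all numbers here are illustrative, not the project's implementation):

```python
import numpy as np

fs = 16000                          # sample rate in Hz (illustrative)
rng = np.random.default_rng(0)
src = rng.standard_normal(fs // 10) # 100 ms noise burst as the "source"

# Simulate a source on the left: the left mic receives the signal
# earlier and louder than the right mic.
delay = 8                           # inter-mic delay in samples (~0.5 ms)
left = np.concatenate([src, np.zeros(delay)])
right = 0.6 * np.concatenate([np.zeros(delay), src])

# Inter-microphone level difference (the ILD analogue), in dB
rms = lambda x: np.sqrt(np.mean(x ** 2))
ild_db = 20 * np.log10(rms(left) / rms(right))

# Inter-microphone time difference (the ITD analogue): the lag that
# maximizes the cross-correlation tells by how much the right mic lags.
corr = np.correlate(right, left, mode="full")
itd = np.argmax(corr) - (left.size - 1)
print(round(ild_db, 1), itd)        # positive ILD, itd == 8 → source on the left
```

Combining the two estimates, as humans appear to do across frequency regions, is where the interesting design decisions lie.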
- Loudness perception with headphones and vibration:
In this research, we investigate how vibration could be used to enhance the perception of low frequencies in music.
- Feasibility of using audio signals for steganography:
We are interested in finding ways to convey information using the highest part of the audible spectrum for indoor wayfinding, advisories, etc.
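One conceivable scheme, sketched here purely as an assumption (the frequencies, symbol length, and two-tone keying are illustrative choices, not a design decision of this project): encode bits as short bursts at two near-ultrasonic tones and decode by comparing the energy at each frequency.

```python
import numpy as np

fs = 48000                 # sample rate in Hz
f0, f1 = 18000, 18500      # assumed carrier frequencies for bit 0 and bit 1
sym = 480                  # samples per symbol (10 ms); both tones fit whole cycles

def encode(bits):
    t = np.arange(sym) / fs
    return np.concatenate([np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

def decode(x):
    t = np.arange(sym) / fs
    bits = []
    for k in range(0, x.size, sym):
        s = x[k:k + sym]
        # energy at each candidate frequency (single-bin DFT)
        e0 = abs(np.dot(s, np.exp(-2j * np.pi * f0 * t)))
        e1 = abs(np.dot(s, np.exp(-2j * np.pi * f1 * t)))
        bits.append(int(e1 > e0))
    return bits

msg = [1, 0, 1, 1, 0]
print(decode(encode(msg)))   # [1, 0, 1, 1, 0]
```

Real rooms add reverberation, loudspeaker roll-off, and codec filtering above ~16 kHz, so the feasibility question is precisely how robust such detection remains under those conditions.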
- Quantifying the benefits of bimodal navigation systems:
We’re investigating the benefits of traditional (visual) navigation systems and bimodal (visual + 3D sound) systems in terms of reaction times to sudden events and overall distance traveled.
- Sound lateralization using 5.1 surround system and energy equalization:
In this research, we compare subjective judgements of azimuth obtained by several methods: Vector-Based Amplitude Panning (VBAP), VBAP mixed with a binaural rendition over loudspeakers (VBAP+HRTF), and a newly proposed method based on equalizing spectral energy.
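For reference, the core of 2-D VBAP for a single loudspeaker pair can be sketched as follows (an illustration of the standard technique, not this study's implementation; the ±30° pair mimics the front speakers of a 5.1 layout):

```python
import numpy as np

# 2-D VBAP: express the source direction as a linear combination of the
# two loudspeaker direction vectors, then power-normalize the gains.
def vbap_pair(source_az, spk_az=(-30.0, 30.0)):
    u = lambda a: np.array([np.cos(np.radians(a)), np.sin(np.radians(a))])
    L = np.column_stack([u(spk_az[0]), u(spk_az[1])])  # speaker unit vectors
    g = np.linalg.solve(L, u(source_az))               # raw gains g1, g2
    return g / np.linalg.norm(g)                       # constant-power scaling

print(np.round(vbap_pair(0.0), 3))    # centred source -> equal gains
print(np.round(vbap_pair(30.0), 3))   # source at a speaker -> all gain there
```

The spectral-energy-equalization method under comparison would replace or weight these amplitude gains, which is exactly what the listening tests are designed to evaluate.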
- Sound elevation using 5.1 surround system and energy equalization:
We are investigating the relative influence of spectral cues on elevation localization by comparing judgements of loudspeaker reproduced stimuli spatialized with several methods (VBAP, Ambisonics, real loudspeakers, etc.).