
P.I. profile

    Julián Villegas

    Computer Arts Laboratory
    Associate Professor
    Ph.D. in Computer Science and Engineering, University of Aizu


    Education

    • Ph.D. in Computer Science and Engineering, University of Aizu. Dissertation:
      “Psychoacoustic Roughness Applications in Music: Automatic Retuning and Binaural Perception.” Japan, 2010.
    • M.Sc. Computer Science and Engineering, University of Aizu. Master thesis:
      “Local Consonance Maximization in Realtime.” Japan, 2006.
    • B.S. Electronic Engineering, University of Valle. Graduation project:
      “A Distributed Genetic Algorithm for Solving a Time-Tabling Problem.” Colombia, 2001.

    Research Experience




    Supervisor: Martin Cooke

    Language and Speech Lab., University of the Basque Country

    The EU-funded LISTA project (the Listening Talker) aims to develop the scientific foundations needed to enable the next generation of spoken output technologies. LISTA will target all forms of generated speech by observing how listeners modify their production patterns in realistic environments characterized by noise and natural, rapid interactions.

    Research Associate



    Supervisors: Jun Yamadera and Michael Cohen

    Eyes, JAPAN & University of Aizu

    Implementation of vehicular location-aware augmented reality and spatial audio (sponsored by the Fukushima Technology Centre).

    Guest Researcher



    Supervisor: William Martens

    CIRMMT–McGill University

    Collaborative research on musical dissonance, roughness, spatial audio, and preference.

    External Grants

    • 2016–2017: “Method for improvement of pitch and better singing experience.” Kawai Foundation for Sound and Technology & Music. Role: P.I.
    • 2016–2019: “Saund: Simulation of auditory near-field distance.” Grant-in-Aid for Scientific Research. Role: P.I.
    • 2015–2018: “Study and development of smart supermarket by using visible light communication (VLC) and smartphone technologies.” Grant-in-Aid for Scientific Research. Role: Co-P.I.
    • 2013–2014: “Perceptual tests for next generation of speech codec for mobile phones (3GPP SA4 EVS codec).” Danish Electronics, Light & Acoustics. Role: P.I.

    Teaching Experience

    • (Graduate) Instructor, Spatial Hearing and Virtual 3D Sound, University of Aizu

    • (Graduate) Instructor, Music Technologies, University of Aizu
    • (Graduate) Co-instructor, Introduction to Sound and Audio, University of Aizu
    • (Undergrad.) Co-instructor, Human Interfaces and Virtual Reality, University of Aizu
    • (Undergrad.) Co-instructor, Intro. to Software Engineering exercises, University of Aizu

    Work Experience

    Colombia National Productivity Center (CNPC)

    Cali, Colombia


    IT Manager


    Research and development; coordination of web-based application development; marketing and promotion.


    Patents

    • “Sound spatialization by equalizing filters and delay adjustments,” Julián Villegas (100%). Patent number 2015247, Japan, 2016.

    Honors and Awards

    • Aizu IT Forum Encouragement Prize presented to Sanuki Wataru for “Machi-Beacon” App. Supervised by Julián Villegas and M. Cohen (2014).
    • Best Paper Prize (Student Section). B. Ryskeldiev, Julián Villegas, and M. Cohen. Exploring virtual sound environments with mobile devices. Tohoku Section Joint Conv. of Institutes of Electrical and Information Engineers (2013)
    • Rotary Yoneyama Commemorative Foundation Scholarship (2008–2010)
    • Third Prize, literary contest commemorating Colombia–Japan friendship (2009)
    • Prize for Outstanding Work. First University of Aizu Digital Photo Exhibition (2007)
    • President’s Award, University of Aizu (2006)
    • Colciencias Young Researcher Fellowship, for the research project ‘Impacts of using IT in Industrial Productivity’ (2002–2003)
    • CICCAOTS Scholarship ‘Integrated Network Management’ (2002)
    • Travel grant to attend the Genetic and Evolutionary Computation Conference, Las Vegas (USA) (2000)
    • With “El proyecto del diablo”
      • Official selection Int. Public Television (INPUT) Festival 2000 (Halifax, Canada. 2000)
      • Official selection Festival des Films du Monde (Montreal, Canada. 2000)
      • Official selection, Rencontres Cinémas d’Amérique Latine (Toulouse, France, 2002)


    Skills

    • Programming: C, Java, Matlab, Pure Data, R.
    • Languages: fluent in Spanish and English; conversational Japanese.

    Professional Interests

    • Speech intelligibility, interdisciplinary research on music and sound, psychoacoustics, experimental psychology, realtime programming, visual and aural illusions, binaural audio, etc.

    Professional Affiliations

    • Acoustical Society of America
    • Acoustical Society of Japan
    • Audio Engineering Society

    Publications summary

    • 10 journal articles
    • 3 book chapters
    • More than 70 conference articles
    • Invited talks
    • More than 16 works (non-refereed articles, original music, software, etc.)

    Selected Publications

    Book chapters

    1. M. Cohen and Julián Villegas. “Applications of Audio Augmented Reality: Wearware, Everyware, Anyware, and Awareware.” In Fundamentals of Wearable Computers and Augmented Reality, chapter 13. CRC Press, 2nd edition, 2015.
    2. Julián Villegas and Michael Cohen. “Mapping Musical Scales Onto Virtual 3D Spaces.” In Principles and Applications of Spatial Hearing. World Scientific, 2010.


    Journal articles

    1. J. González-Alonso, Julián Villegas, and M. P. García-Mayo. English compound processing in bilingual and multilingual speakers: The role of dominance. Second Language Research, May 2016.
    2. Julián Villegas. Locating virtual sound sources at arbitrary distances in real-time binaural reproduction. Virtual Reality, 19(3):201–212, Oct 2015.
    3. M. Cooke, C. Mayo, and Julián Villegas. The contribution of durational and spectral changes to the Lombard speech intelligibility benefit. J. Acoust. Soc. Am., 135(2):874–883, Feb 2014.
    4. Julián Villegas and M. Cohen. Roughness Minimization Through Automatic Intonation Adjustments. J. of New Music Research, 39(1):75–92, 2010.
    5. M. S. Alam, M. Cohen, Julián Villegas, and A. Ahmed. Narrowcasting for Articulated Privacy and Attention in SIP Audio Conference. J. of Mobile Multimedia, 5(1):12–28, 2009.

    International refereed conferences

    1. Julián Villegas, T. Stegenborg-Andersen, N. Zacharov, and J. Ramsgaard. A comparison of stimulus presentation methods for listening tests. In Proc. 139th Audio Eng. Soc. Int. Conv., 2016.
    2. D. Erickson, Julián Villegas, I. Wilson, Y. Iguro, J. Moore, and D. Erker. Some acoustic and articulatory correlates of phrasal stress in Spanish. In Proc. 8th Speech Prosody, Boston, MA, 2016.
    3. Julián Villegas, I. Wilson, Y. Iguro, and D. Erickson. Effect of a fixed ultrasound probe on jaw movement during speech. In Proc. Ultrafest VII, 2015.
    4. S. Nogami, T. Nagasaka, Julián Villegas, and J. Huang. Influence of spectral energy distribution on subjective azimuth judgements. In Proc. 139th Audio Eng. Soc. Int. Conv., New York, Oct. 2015.
    5. Julián Villegas. Movement perception of Risset tones with and without artificial spatialization. In Proc. 137th Audio Eng. Soc. Conv., 2014.
    6. Julián Villegas and M. Cooke. Speech modifications induced by alternating noise bands. In Proc. SPiN 2013: The 5th Int. Wkshp. on Speech in Noise: Intelligibility and Quality, Vitoria, Spain, Jan 2013.
    7. Julián Villegas, W. L. Martens, M. Cohen, and I. Wilson. Spatial separation decreases psychoacoustic roughness of high-frequency tones. In J. Acoust. Soc. Am., volume 134, page 4228, 2013.
    8. Julián Villegas and M. Cooke. Maximising objective speech intelligibility by local f0 modulation. In Proc. Interspeech, Sep. 2012.
    9. V. Aubanel, M. Cooke, Julián Villegas, and M. L. G. Lecumberri. Conversing in the presence of a competing conversation: effects on speech production. In Proc. Interspeech, 2011.
    10. Julián Villegas, M. Cooke, V. Aubanel, and M. A. Piccolino-Boniforti. MTRANS: A multi-channel, multi-tier speech annotation tool. In Proc. Interspeech, 2011.

    Nov 14, 2016