#SounDoer# Lecture: Current Directions in Procedural Audio Research

@SounDoer: A video from aesuksection, a lecture on the current state of Procedural Audio research. Andy Farnell gives the opening introduction, and PhD students Christian Heinrichs and Roderick Selfridge each present their research, along with applications of Procedural Audio in the field of Game Audio.
 

Current Directions in Procedural Audio Research
By PhD students Christian Heinrichs and Roderick Selfridge with introduction by Andy Farnell. Location: QMUL, Date: 12th April 2016.
Advances in real time computational audio for virtual worlds, animation, or real world applications continue apace. Queen Mary University of London has become an emerging centre for new research with projects guided by Andrew McPherson, Josh Reiss and Andy Farnell. These two presentations of QMUL doctoral researchers demonstrate the breadth and rigor of this emerging field, covering a range of enquiry from sound design psychology to fluid mechanics.
Roderick Selfridge will present “Real-time Aeroacoustic Sound Synthesis Models”. Aeroacoustic sounds are emitted by objects moving relative to a flow, and include the Aeolian tone, cavity tones and edge tones: sounds like a fence whistling in a storm, or turbines and jets. Selfridge develops efficient models that incorporate fundamental fluid dynamic equations and listener position, and evaluates the results against those produced by offline computational software solving finite difference equations, as well as against physical readings from wind tunnel experiments.
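As a point of reference for the models described above, the fundamental frequency of an Aeolian tone can be estimated from the textbook Strouhal relation f = St · U / d, where U is the air speed, d the cylinder diameter, and St ≈ 0.2 over a wide range of Reynolds numbers. The following Python sketch illustrates only this parameter-to-frequency mapping; it is not Selfridge's model, which also accounts for listener position and multiple compact source types.

```python
import numpy as np

def aeolian_frequency(air_speed, diameter, strouhal=0.2):
    """Estimated fundamental (Hz) of the Aeolian tone from a cylinder.

    Uses the Strouhal relation f = St * U / d, with St ~= 0.2 for a
    cylinder over a wide range of Reynolds numbers. This is only the
    textbook relation, not Selfridge's full real-time model.
    """
    return strouhal * air_speed / diameter

# A 5 mm fence wire in a 10 m/s wind whistles at roughly 400 Hz.
freq = aeolian_frequency(air_speed=10.0, diameter=0.005)

# Naive sine rendering of one second of the tone at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * freq * t)
print(f"Estimated Aeolian tone: {freq:.1f} Hz")
```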
Christian Heinrichs examines the use of human gesture in the design of next-generation procedural game audio. With the dawn of virtual reality there is a move toward gameplay interactions that require sophisticated audio feedback and parameterisation. Heinrichs furthers Procedural Audio research by examining contextual aesthetics and behaviour in the design process to complement realism and efficiency. Physically-based sound engines that match the properties of an object are important but often fail to equal the expressivity of sound performed by a Foley artist. This research explores how gestural interaction can be employed in all stages of the sound design process, starting with the casual exploration of a sound model’s parameter space and leading to its integration in a game.
This lecture was organised by the UK section of the Audio Engineering Society, which holds lectures on the second Tuesday of every month in the London area. For more information, please go to http://www.aes-uk.org/
 
Summary:
  1. It has been more than ten years since Andy Farnell began teaching courses on Practical Synthesis Sound Design and other Procedural Audio topics, using tools such as Pure Data and Csound.
  2. Rod Selfridge, Real-time Synthesis of Aeroacoustic Sounds Using Physical Models.
  3. What is aeroacoustic sound? Sound caused by the interaction of moving air with the boundaries of objects; examples include the Aeolian tone, cavity tone and edge tone.
  4. Why Physical Models?
    Signal-based models – computationally simpler, but their parameters are signal-based rather than physical
    Physically inspired models – added realism for minimal computational cost
    Physical models – designed to achieve accurate sounds based on fundamental equations with real-world parameters
    Computational Fluid Dynamics – accurate solutions, but high computational cost and not real-time
  5. Aeroacoustic Sources can be approximated by compact sources: Monopole, Dipole, Quadrupole…
  6. Aeolian Tone Parameters: Air Speed, Diameter, Length, Distance, Elevation, Azimuth
  7. Tools: Unity, OSC, Pure Data; Sword Sounds, whooshes; Propeller Model, Natural Frequencies (see the OSC sketch after this list).
  8. Christian Heinrichs, Expressive Control and Integration Strategies for Computationally Generated Sound.
  9. Starting from performing sound effects / Foley: in sound design, there is never a 1:1 relationship between sound and image.
  10. Sample-based game audio helps keep some techniques and aesthetics from film; the sound designer has a lot of freedom when designing asynchronous sound.
  11. Sampled Audio: interactivity is limited and needs to be “faked” (e.g. through the use of a large sample library); advanced techniques in constrained audio middleware are needed to achieve even simple continuous interactions.
  12. Computationally Generated Sound (a.k.a. Procedural Audio)
  13. Synthesis vs. Re-enactment; How do we design models for performing instead of for integrating?
  14. A Squeaky Door; Stick-Slip Friction. Mapping a touchpad interface to the squeaky door model; without behavior the model doesn’t sound anything like a door (see the stick-slip sketch after this list).
  15. Component-wise Structure of Computational Audio Models
  16. Pure Data patch demo: Control Strategies for Controlling a Procedural Model of a Squeaking Door https://vimeo.com/118528663
    Paper: Human performance of computational sound models for immersive environments http://krighxz.info/files/Heinrichs-McPherson-Farnell_Human%20performance%20of%20computational%20sound%20models%20for%20immersive%20environments.pdf
  17. Performing procedural audio: Gesture “unlocks” the expressive potential of procedural models. “The power of the foley artist is not that they have the coconuts, it’s that they know how to use the coconuts.”
  18. Working with QMUL (Queen Mary University of London) and Enzien Audio, Heinrichs developed a piece of software called Foley Designer, a prototype aimed at maximizing gesture in the design process.
  19. Integration: Sensors > Control Layers > Animations & Sound Model; Animations > Sound Model; Game Engine > Animations > Sound Model.
  20. Deployment
    Animation data is compatible with animation software and audio middleware
    Parallels to the sample-based workflow
    But with the power of animation-style processing: natural blending between animations; state machines; meta-parameters (see the blending sketch after this list)
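
For item 7, a minimal sketch of the kind of parameter bridge the demos rely on: physical parameters sent over OSC to a Pure Data patch. It assumes the python-osc package and a Pd patch listening on port 9000; the OSC addresses and values are hypothetical, not those of the actual demo.

```python
from pythonosc import udp_client

# A Pure Data patch is assumed to be listening for OSC on this port;
# the address names below are hypothetical.
client = udp_client.SimpleUDPClient("127.0.0.1", 9000)

def update_aeolian_params(air_speed, diameter, distance):
    """Send the physical parameters of an Aeolian tone model to Pd."""
    client.send_message("/aeolian/airspeed", air_speed)
    client.send_message("/aeolian/diameter", diameter)
    client.send_message("/aeolian/distance", distance)

# e.g. a sword whoosh: air speed ramps up and down as the blade swings.
for speed in (5.0, 15.0, 30.0, 12.0):
    update_aeolian_params(air_speed=speed, diameter=0.03, distance=1.0)
```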
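For item 14, a toy version of the stick-slip friction mechanism behind the squeaky door: a mass held by static friction is dragged through a spring until the spring force wins, slips under weaker kinetic friction, and re-sticks. All constants are illustrative; Heinrichs's Pd model is considerably richer.

```python
import numpy as np

def stick_slip(drive_speed=0.2, duration=1.0, sr=44100,
               k=4e4, m=1e-3, mu_s=1.0, mu_k=0.4, normal=1.0):
    """Toy stick-slip oscillator: a mass on a surface is pulled through
    a spring whose far end moves at drive_speed (standing in for the
    door's angular velocity). Constants are illustrative only."""
    n = int(duration * sr)
    dt = 1.0 / sr
    x, v = 0.0, 0.0                          # mass position and velocity
    sticking = True
    out = np.zeros(n)
    for i in range(n):
        support = drive_speed * i * dt       # driven end of the spring
        f_spring = k * (support - x)         # spring force on the mass
        if sticking and abs(f_spring) > mu_s * normal:
            sticking = False                 # break-away: start slipping
        if not sticking:
            direction = np.sign(v) if v else np.sign(f_spring)
            a = (f_spring - mu_k * normal * direction) / m
            v_new = v + a * dt
            if v and v_new * v <= 0.0 and abs(f_spring) <= mu_s * normal:
                v, sticking = 0.0, True      # slip velocity crossed zero
            else:
                v = v_new
                x += v * dt
        out[i] = v   # raw slip velocity; feed a resonator for the door body
    return out

excitation = stick_slip()   # sawtooth-like relaxation oscillation
```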
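For item 20, a sketch of what "natural blending between animations" buys you: recorded gesture curves are crossfaded by a meta-parameter, much as a game engine blends animation clips, and the result drives a sound model's input. The curves and the "urgency" parameter are invented for illustration.

```python
import numpy as np

# Two recorded control gestures ("animations") for one model input,
# e.g. a door's angular velocity over time; the data here is synthetic.
n = 1000
t = np.linspace(0.0, 1.0, n)
gentle_open = 0.3 * np.sin(np.pi * t)            # slow, smooth swing
violent_open = 0.9 * np.sin(np.pi * t) ** 0.5    # fast, abrupt swing

def blend(urgency):
    """Meta-parameter in [0, 1] crossfading between gesture curves,
    analogous to animation blending in a game engine."""
    return (1.0 - urgency) * gentle_open + urgency * violent_open

control_curve = blend(urgency=0.7)  # drives the sound model's input
```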
 
 
SounDoer – Focus On Sound Design
Compiled and translated by @SounDoer. If there are any errors, corrections are most welcome. Please give notice and credit the source when reposting.