Digital sound synthesizers expose parameter spaces with millions of possible configurations, far more than any designer could audition by hand. Quality diversity (QD) evolutionary algorithms offer a way to navigate these expansive sonic landscapes, but their success depends on the sonic feature representations used to describe behaviour.
Traditionally, QD methods have relied on handcrafted descriptors or supervised classifiers to define their behaviour spaces. Both approaches can introduce unintended exploration biases and confine discovery to familiar sonic regions. Björn Þór Jónsson, Çağrı Erdem, Stefano Fasciani, and Kyrre Glette investigate an alternative: unsupervised dimensionality reduction methods that automatically define, and dynamically reconfigure, sonic behaviour spaces during QD search.
The researchers applied Principal Component Analysis (PCA) and autoencoders to project high-dimensional audio features onto the structured grids used by MAP-Elites, and implemented dynamic reconfiguration by retraining these models at regular intervals during the search. Across two experimental scenarios, the automatic approaches achieved significantly greater diversity than handcrafted behaviour spaces while avoiding expert-imposed biases.
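The core mechanism — projecting high-dimensional features down to low-dimensional behaviour coordinates and discretizing them into archive cells — can be sketched in a few lines. This is a minimal, numpy-only illustration, not the authors' implementation; the random matrix standing in for extracted audio features, the grid resolution, and the function names are all hypothetical:

```python
import numpy as np

def pca_project(features, n_components=2):
    """Project feature vectors onto their top principal components
    (PCA computed via SVD of the centred data matrix)."""
    centred = features - features.mean(axis=0)
    # Rows of vt are the principal axes, ordered by explained variance.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

def to_grid_cell(coords, lo, hi, bins=10):
    """Map 2-D behaviour coordinates to a discrete MAP-Elites cell index."""
    scaled = (coords - lo) / (hi - lo + 1e-12)
    return tuple(np.clip((scaled * bins).astype(int), 0, bins - 1))

# Hypothetical stand-in for extracted audio features (e.g. spectral stats):
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 40))   # 200 sounds x 40 features
behaviour = pca_project(features)       # 200 x 2 behaviour coordinates
lo, hi = behaviour.min(axis=0), behaviour.max(axis=0)
occupied = {to_grid_cell(b, lo, hi) for b in behaviour}
```

Each distinct tuple in `occupied` corresponds to one archive cell; MAP-Elites keeps only the highest-fitness sound per cell, so the projection directly determines which sounds compete with each other.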
Dynamic behaviour-space reconfiguration was crucial to these results: periodic retraining maintained evolutionary pressure and prevented the search from stagnating. Among the dimensionality reduction techniques tested, PCA proved the most effective.
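The interplay between the search loop and periodic retraining can be sketched as follows — a toy, numpy-only MAP-Elites where the "genome", feature extraction, fitness, and retraining interval are all hypothetical stand-ins, not the paper's setup. The key step is that after each refit, existing elites are re-binned under the new behaviour space:

```python
import numpy as np

rng = np.random.default_rng(1)
BINS, DIM = 8, 16

def features_of(genome):
    # Hypothetical stand-in for audio feature extraction from a synth patch.
    return np.concatenate([np.sin(3.0 * genome), genome ** 2])

def fit_projection(feats):
    """Refit PCA on all features seen so far; return a projector
    plus the bounds of the projected data (used for grid binning)."""
    mean = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mean, full_matrices=False)
    def project(f):
        return (f - mean) @ vt[:2].T
    pts = project(feats)
    return project, pts.min(axis=0), pts.max(axis=0)

def cell_of(project, lo, hi, f):
    scaled = (project(f) - lo) / (hi - lo + 1e-12)
    return tuple(np.clip((scaled * BINS).astype(int), 0, BINS - 1))

seen = [features_of(rng.normal(size=DIM)) for _ in range(64)]
project, lo, hi = fit_projection(np.array(seen))
archive = {}  # cell -> (fitness, genome, features)

for it in range(2000):
    if archive:  # select a random elite as parent
        _, parent, _ = list(archive.values())[rng.integers(len(archive))]
    else:
        parent = rng.normal(size=DIM)
    child = parent + 0.2 * rng.normal(size=DIM)   # mutation
    feats = features_of(child)
    seen.append(feats)
    fitness = -float(np.sum(child ** 2))          # toy fitness function
    cell = cell_of(project, lo, hi, feats)
    if cell not in archive or fitness > archive[cell][0]:
        archive[cell] = (fitness, child, feats)
    if (it + 1) % 500 == 0:                       # periodic reconfiguration
        project, lo, hi = fit_projection(np.array(seen))
        rebinned = {}
        for fit, g, f in archive.values():        # re-bin elites in new space
            c = cell_of(project, lo, hi, f)
            if c not in rebinned or fit > rebinned[c][0]:
                rebinned[c] = (fit, g, f)
        archive = rebinned
```

Because the projection is refit on the growing pool of observed features, cells that were saturated under the old space can split apart under the new one, which is one plausible mechanism for the sustained evolutionary pressure the authors report.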
This research is a significant step toward fully automated sonic discovery systems. It shows that vast synthesis parameter spaces can be explored without manual descriptor design or the constraints of supervised training, opening new avenues for sonic exploration and discovery in digital sound synthesis.