To realize Humus, I applied methods developed in diglo, specifically real-time video tracking based on the dissemination of computer vision techniques through the OpenCV library. Of particular interest is the genealogy of this technology: OpenCV, first developed at Intel, was later maintained at Willow Garage Robotics, founded by Scott Hassan, whose initial projects included an autonomous, solar-powered vessel for marine exploration. Hassan had previously served as lead programmer of the Google search engine in its early phase, when it was still known as BackRub.
Meanwhile, at the opposite end of the globe, in the Patagonian Andes, Teresa Pereda, together with Juan Pablo Ferlat, filmed a cluster of so-called “ojos de agua”, the local term for what geology describes as sand boils or sand volcanoes: a phenomenon grounded in the fluid mechanics of unconsolidated sediments. Pereda’s sustained attention to water and temporality (see restitución), together with my own interest in the metaphorical potential of silt (limo) as an unstable, granular fluid, provided the conceptual substrate for vectorizing a musical form. This form is organized around gesture–movement and energy divergence–convergence, producing articulation regimes derived not from syntax but from physically emergent phenomena.
In this sense, the orchestration between the capture of sand volcanoes and the construction of a video-based instrument using OpenCV establishes an unexpected equivalence between two types of “navigators”: Hassan, mapping the ever-expanding, potentially infinite space of the web; and Pereda, navigating the infinite space of seismic and subterranean potentiality.

tools
Humus was implemented as a real-time transduction system using OpenCV integrated within a Max/MSP/Jitter environment. The video footage of sand boils is processed through parallel computer vision pipelines combining background subtraction, blob detection, optical flow, and velocity estimation. These algorithms operate at different spatial and temporal resolutions, producing multiple streams of control data derived from particle density, motion vectors, and divergence patterns.
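The instrument itself lives in Max/MSP/Jitter, but the analysis stages can be illustrated with a short, self-contained sketch in Python and OpenCV. Everything specific here is an assumption made for clarity: the file name, the thresholds, and the choice of MOG2 background subtraction and Farneback optical flow stand in for whatever the actual pipelines use.

```python
# Illustrative sketch, not the original Max/MSP/Jitter patch: a Python/OpenCV
# pipeline producing control streams from footage of sand boils.
# "sand_boils.mp4" and all thresholds are hypothetical.
import cv2
import numpy as np

cap = cv2.VideoCapture("sand_boils.mp4")
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
prev_gray = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Background subtraction: isolate moving sediment against the static bed.
    fg_mask = backsub.apply(frame)
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)

    # 2. Blob detection: connected regions of the foreground mask give
    #    particle density and the positions of slow, aggregate clusters.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blob_areas = [cv2.contourArea(c) for c in contours if cv2.contourArea(c) > 20]
    particle_density = sum(blob_areas) / (gray.shape[0] * gray.shape[1])

    # 3. Dense optical flow: per-pixel motion vectors between consecutive frames.
    if prev_gray is not None:
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        vx, vy = flow[..., 0], flow[..., 1]

        # 4. Velocity and divergence estimates: mean speed and the spatial
        #    derivative of the flow field, used downstream as control data.
        speed = np.sqrt(vx ** 2 + vy ** 2)
        mean_velocity = float(speed.mean())
        divergence = float((np.gradient(vx, axis=1) + np.gradient(vy, axis=0)).mean())

        print(particle_density, mean_velocity, divergence)

    prev_gray = gray

cap.release()
```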
These data streams drive a sound synthesis and processing engine built on a bank of fifty audio samples. Each sample layer is parameterized across pitch, spatialization, time-stretching, and Doppler-based equalization. Low-frequency materials are controlled by coarse blob-level features, such as aggregate motion and low-velocity clusters, while higher-frequency and vocal materials are articulated by fast-moving particles and high-gradient flow vectors.
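As an illustration of this frequency-registered mapping, the following sketch scales the three feature streams from the previous example onto per-layer controls for a fifty-sample bank. The parameter names, the numeric ranges, and the split at the middle of the bank are hypothetical; only the bank size and the low/high division of labor come from the description above.

```python
# Hypothetical mapping sketch: vision features -> per-layer sound parameters.
import numpy as np

NUM_SAMPLES = 50  # bank size mentioned in the text


def clamp01(x):
    return float(np.clip(x, 0.0, 1.0))


def map_features(particle_density, mean_velocity, divergence):
    """Return illustrative control values for each of the fifty sample layers."""
    params = []
    for i in range(NUM_SAMPLES):
        rank = i / (NUM_SAMPLES - 1)          # 0 = lowest layer, 1 = highest
        if rank < 0.5:
            # Low-frequency layers: driven by aggregate, slow-moving features.
            amplitude = clamp01(particle_density * 10.0)
            pitch_shift = -12.0 * (1.0 - rank) * clamp01(1.0 - mean_velocity)
        else:
            # Higher-frequency / vocal layers: driven by fast particles and
            # strongly divergent flow.
            amplitude = clamp01(mean_velocity / 5.0)
            pitch_shift = 12.0 * rank * clamp01(abs(divergence) * 50.0)
        params.append({
            "layer": i,
            "amplitude": amplitude,
            "pitch_shift_semitones": pitch_shift,
            "time_stretch": 1.0 + 2.0 * clamp01(abs(divergence) * 20.0),
            "pan": clamp01(0.5 + divergence * 25.0),          # crude spatialization
            "doppler_eq": clamp01(mean_velocity / 10.0),      # velocity-keyed EQ
        })
    return params
```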
The instrument thus functions as a transducer between sedimentary fluid dynamics and sound, converting granular motion into continuous control signals that shape musical behavior in real time. The mapping strategy privileges emergent physical relationships over predefined musical syntax, allowing the system’s output to be governed by the intrinsic dynamics of the observed phenomenon.
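One practical detail implied by "continuous control signals" is that raw per-frame features are noisy and need conditioning before they can shape musical behavior convincingly. A minimal sketch of such conditioning, assuming a simple one-pole smoother (the actual signal treatment in Humus is not specified here):

```python
# Illustrative smoothing of frame-by-frame features into continuous controls.
class Smoother:
    def __init__(self, alpha=0.1):
        self.alpha = alpha      # 0 < alpha <= 1; smaller = smoother, slower
        self.value = None

    def update(self, x):
        # y[n] = alpha * x[n] + (1 - alpha) * y[n-1]
        if self.value is None:
            self.value = x
        else:
            self.value = self.alpha * x + (1.0 - self.alpha) * self.value
        return self.value


# Usage: one smoother per control stream, updated once per analysed frame.
velocity_ctrl = Smoother(alpha=0.05)
# smoothed = velocity_ctrl.update(mean_velocity)
```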
This instrumental logic originates in an earlier mimetic experiment with von Reichenbach’s graphic-to-analog converter (torrentes extenderse) and later evolved into an autonomous plugin architecture applied to other works, including buzzing by Silvia Rivas.
