Same visual instrument, different approaches

It's interesting to compare the two very different series of live visuals gigs I've been doing. The music played by John Law's "Boink!" is fairly structured, but there's a lot of improvisation and it's metrically very challenging (i.e. there's no sign of 4/4 time anywhere). Toby Marks's Banco de Gaia is more solid metrically, but as it's largely digital, the sonic palette is extremely varied. I've used a very similar setup for both - Resolume on a laptop, and an E-MU keyboard as my main controller - but the way I use it is quite different.

For Boink! I know roughly what's coming next in terms of broad structure, but the solos are often drawn out and unpredictable in character. I might have prepared a number of clips that suddenly become redundant if the players decide to head in a direction I'm not prepared for. So I rely heavily on effects to adapt what I've got set up. 

For Banco I've no idea what's coming up (unless Toby leans over and tells me). But because the music tends to move in fairly regular phrases I know when to act, even if I don't really know how to react. So I use more audio-reactive effects and often just hit, hope, then react very quickly once I can hear what's going on.

The key thing for me is to somehow, on the spur of the moment, express visually some aspect of what's being played. Simply providing visual wallpaper is something I'll never do. My goal - which I think I'm still a little way from at the moment - is to provide visuals that are as expressive, responsive and flexible as if I had a visual instrument as generatively powerful as any musical one.