Eggs for Tea

I’ve just released some new music!

This one has been percolating for a while. The story begins with a remix contest and a piece of left-field “indie” audio workstation software known as EnergyXT.

I’m constantly looking for different tools and techniques to help me make music – to produce more fluidly and expressively, and to organise and build up an arrangement from a few small patterns elegantly.

One feature that I often look for is the ability to have alias patterns. Instead of pasting copies of a pattern across the arrangement, you copy aliases or clones. If you edit or tweak one of the copies, they all update. In principle I love this way of working, because I like structure – I want to start with something rough, e.g. a basic beat, then build up an arrangement with it, and then iterate on the beat to make it sound better.

There’s a related principle when working agile – always be ready to ship. Start with a basic version of a complete product that works, and then see what details need to improve.

So back to EnergyXT. It’s built to work this way, with alias patterns. Notably, big-name software doesn’t work like this – e.g. it’s not possible in Ableton Live or Bitwig; in Logic Pro X it’s possible but a little cumbersome.

I was looking at using it more and found on the KVR forum that someone was running an unusual remix contest. The source material was a short excerpt of some strange samples (thanks Kejkz). The rules were that you had to use the default elements built into EnergyXT – the sampler and the synth. The prize: a license for EnergyXT!

Remix contests are great – you get some source material to work with, some kind of deadline, and maybe some constraints. I often find myself much more focused and productive in these contest situations, so I entered.

I decided to use the default drum samples and a basic synth preset from EnergyXT – I thought they sounded great! Once I had my beat + bassline, I layered on some stabs from the source samples and before long I had a track. Great when things come together quickly.

The track sat around for a while – I’d often play it out in DJ or live sets. Cut to earlier this year and I thought it was time to get it out, and that it could do with a psychedelic cut-up treatment from Sharkweek, featuring vocals from Michael Chen.

So here it is! Go listen to it everywhere!

The cover art is based on a photo of Alcatraz.

How to sync the Web Audio API with Web MIDI

The Web Audio API is an incredibly fun audio playground, well supported across modern browsers. You can quite easily do things like build synthesisers and sound effects, or slice up audio samples – in JavaScript code, in your web browser.
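
For a quick flavour, here’s a minimal sketch that beeps a sawtooth tone for half a second (this audioContext is the same one referenced in the snippets below):

// Build a tiny audio graph: oscillator -> gain -> speakers.
const audioContext = new AudioContext();
const osc = audioContext.createOscillator();
osc.type = 'sawtooth';
osc.frequency.value = 220; // A3

const gain = audioContext.createGain();
gain.gain.value = 0.2; // keep it quiet

osc.connect( gain ).connect( audioContext.destination );
osc.start();
osc.stop( audioContext.currentTime + 0.5 ); // beep for half a second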

There’s also a companion API for sending and receiving MIDI events. This allows you to play your Web Audio synthesiser using a physical (piano-style) keyboard, or send MIDI notes and controller information out to play other instruments or devices, either hardware or other software.
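
Getting hold of the MIDI output ports looks something like this – midiOutPorts is the array used in the later snippets:

// Ask the browser for MIDI access (the user may see a permission prompt).
let midiOutPorts = [];
navigator.requestMIDIAccess().then( ( midiAccess ) => {
  // Collect the output ports so we can send notes out to them.
  midiOutPorts = [ ...midiAccess.outputs.values() ];
  console.log( midiOutPorts.map( ( port ) => port.name ) );
} );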

I’ve been experimenting with these APIs because I want a more flexible way to sequence and trigger musical parts live.

In my app I have various patterns of music. Some contain audio events – for example a drum beat, made of different drum sounds triggered in Web Audio in the browser. Others contain MIDI events – for example, a bassline. These MIDI patterns are sent out to other synth software – they don’t make sound in the browser.

What’s the problem?

When I’m playing my different patterns, I want the notes to line up in sync.

The Web Audio API and Web MIDI API use different scheduling, so things don’t line up by default.

This post explains how to get these to play back in sync so I can combine audio events and MIDI events in a performance or piece.

How to sync audio and MIDI

Web Audio events are scheduled relative to when the AudioContext started, in seconds.

Web MIDI events are scheduled relative to when the page started loading, in milliseconds.

So we have two differences to account for:

  • Seconds vs. milliseconds – a factor of 1000
  • When the page started loading vs when the AudioContext started

This second item is the tricky one – the AudioContext doesn’t necessarily start when the page loads; it could be much later.

We can measure this difference by using the high resolution time API, and comparing that to the current AudioContext time.

// Current time since the page started loading, in milliseconds.
const perfNow = window.performance.now();
// Current time since the AudioContext started, in seconds.
const audioNow = audioContext.currentTime;
// How far the audio clock lags behind the performance clock, in seconds.
const audioContextOffsetSec = ( perfNow / 1000.0 ) - audioNow;

This tells us how late audio events are relative to MIDI or real time. (MIDI events are sent in close to real time.)

So to sync we need to offset (delay) MIDI events by this latency:

// Convert the AudioContext start time to milliseconds,
// then delay it by the offset to land in the MIDI time base.
const timestamp = ( startSeconds * 1000 );
const offset = ( audioContextOffsetSec * 1000 );
midiOutPorts[0].send(
  [ 0x90 + 0, midiNote, 100 ], // note-on (0x90) on channel 1, velocity 100
  timestamp + offset
);
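
Putting it together – a minimal sketch that triggers a Web Audio sound and a MIDI note at the same musical moment (drumBuffer is an assumed, already-decoded AudioBuffer):

// Pick a start time half a second from now, in AudioContext time.
const startSeconds = audioContext.currentTime + 0.5;

// Audio side: play the drum buffer at startSeconds.
const source = audioContext.createBufferSource();
source.buffer = drumBuffer;
source.connect( audioContext.destination );
source.start( startSeconds );

// MIDI side: the same moment, converted to milliseconds and offset.
midiOutPorts[0].send(
  [ 0x90, 60, 100 ], // note-on, middle C, velocity 100
  startSeconds * 1000 + audioContextOffsetSec * 1000
);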

For a long time I had this backwards – I’d schedule my audio events earlier by audioContextOffsetSec, trying to account for the latency, but this breaks down when you are scheduling close to now: there’s no room to pull audio events earlier, because the AudioContext can’t schedule into the past.

I’ve put up a complete example on GitHub as a demo – take a look.

For a deeper dive on how to build a reliable, accurate sequencer in Web Audio and JavaScript, check out A Tale of Two Clocks on HTML5 Rocks. Spoiler alert: there are more than two clocks.

Hopefully this article helps someone – it took me a while to get my head around this. Although the Web MIDI API is still experimental, I’m really excited to see what apps and tools will emerge in its wake.

🚤 💻 🎛

Padded Landscape

a loop-based generative audio-visual experiment

A while ago I decided to relax a bit more with my music production. The idea was to remove the pressure to finish and release tracks, and just noodle around and make loops. If I had fun jamming, I’d render the parts as a loop, archive it, and move on.

It was really liberating to focus in on short sections of music, and not concern myself with arrangement.

I might set out to make an “east coast whine” sound, and end up with a little trap/hip hop loop (Kufca). Or I might want to make a classic “synth strings” sound, and then turn that into a vocal-sample driven 90s big-beat loop (Mivova).

Around the same time, I was experimenting with the Web Audio API. It’s really powerful, making it easy to slice up audio files, sequence audio and MIDI, and even build custom synthesis and effects processing.

I’ve also been dabbling in ways to combine audio and (primitive) animation. Wouldn’t it be cool to jam out on a sequencer or MIDI controller, and synthesise visual and audio content at the same time?

How the audio works

There are currently nine songs/loops of material set up. Each song is divided into four stem parts – sometimes a single instrument (drums, bass, or vocals), sometimes a combination.

I’ve named the different mix layers using words that you might use to navigate a mountainous landscape:

  • alpine – typically drums
  • ridge – typically bass
  • uplands – lead, chords, synth
  • hills – vocals, texture, arpeggio, pad

These aren’t hard & fast categories; for example, some songs have different drum patterns in alpine and ridge. The idea here is that if you combine say a ridge and alpine from one song and hills from another, it won’t sound terrible 🙂

The playback is automated and has some gentle randomisation so it’s not exactly the same each time – there’s a rough code sketch of this logic after the list.

  • Playback happens in 64 beat cycles.
  • The order of the songs is shuffled. 🗂
  • For each song, a part is selected at random in each cycle, until all parts are playing.
  • Then the parts are removed in a random order.
  • When there’s only one or two parts of the outgoing song playing, a new song kicks in. This means parts from two different songs will overlap – like a DJ blend.
  • Each part might fade in or filter in slowly over the cycle, or start fully turned up. Similarly, parts may fade or filter out. 🎛
  • When we run out of songs, we shuffle the order and start again. 🎲
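
Here’s that sketch – songQueue, songs and startNextSong are placeholders, not the actual Padded Landscape code:

const PARTS = [ 'alpine', 'ridge', 'uplands', 'hills' ];

// Fisher–Yates shuffle, so each run gets a different song order.
function shuffle( items ) {
  const result = [ ...items ];
  for ( let i = result.length - 1; i > 0; i-- ) {
    const j = Math.floor( Math.random() * ( i + 1 ) );
    [ result[ i ], result[ j ] ] = [ result[ j ], result[ i ] ];
  }
  return result;
}

let songQueue = shuffle( songs ); // songs is a placeholder array
let playing = [];
let adding = true;

// Called once per 64-beat cycle.
function onCycle() {
  if ( adding ) {
    // Bring in a random part that isn't playing yet.
    const remaining = PARTS.filter( ( part ) => ! playing.includes( part ) );
    playing.push( remaining[ Math.floor( Math.random() * remaining.length ) ] );
    adding = playing.length < PARTS.length;
  } else {
    // Remove parts one at a time, in random order.
    playing.splice( Math.floor( Math.random() * playing.length ), 1 );
    if ( playing.length <= 2 ) {
      startNextSong(); // placeholder: begins overlapping the next song
    }
  }
}

function startNextSong() {
  if ( songQueue.length === 0 ) {
    songQueue = shuffle( songs ); // ran out – reshuffle and start again
  }
  const next = songQueue.shift();
  // ... begin next song's cycles, overlapping the outgoing parts ...
}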

To add a little punctuation & spice to this, I’ve added in a handful of sound effects. Some are recordings of cutting card or ripping paper, but there’s also an airhorn. These are randomly sprinkled on cycle changes, especially if a song is changing. These are fed into a dub-delay effect.
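
A dub-delay in Web Audio boils down to a feedback loop of nodes – something like this (the settings here are illustrative):

// Classic dub-delay: delay -> filter -> feedback gain -> back into delay.
const delay = audioContext.createDelay( 2.0 );
delay.delayTime.value = 0.375; // dotted-eighth-ish echo

const tone = audioContext.createBiquadFilter();
tone.type = 'lowpass';
tone.frequency.value = 1200; // darken each repeat

const feedback = audioContext.createGain();
feedback.gain.value = 0.6; // how long the echoes trail

delay.connect( tone );
tone.connect( feedback );
feedback.connect( delay ); // the feedback loop
delay.connect( audioContext.destination );

// Feed an effect source (e.g. the airhorn sample node) into the delay:
// sfxSource.connect( delay );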

There are also some texture samples – maybe birds, a dripping tap, the ocean, or sea scouts around a campfire. When the song changes, a beat-synched chunk is sampled and looped from one of these textures, to add another layer to the transition.
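
Sampling a beat-synched chunk mostly comes down to setting loop points on the source node – a sketch, with bpm, textureBuffer and nextCycleTime assumed:

const texture = audioContext.createBufferSource();
texture.buffer = textureBuffer; // an assumed, already-decoded texture recording
texture.loop = true;

// Loop a chunk that lasts a whole number of beats.
const beatSeconds = 60 / bpm;
texture.loopStart = 4.0; // an arbitrary point in the recording
texture.loopEnd = texture.loopStart + beatSeconds * 4; // a one-bar chunk

texture.connect( audioContext.destination );
// Start at the next cycle boundary, from the loop start point.
texture.start( nextCycleTime, texture.loopStart );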

The mixing and effects processing happens live in your web browser, using Web Audio nodes – filters, delays.

How the animation works

The mountains are React components containing SVG. There are four mountain ranges, to match the four mix layers. If a ridge (e.g. a bassline) part is playing, you’ll see a moderately tall mountain range. When parts fade in or out over the cycle, the mountains fade with them.

Each mountain is an SVG polygon, and the points are animated, parallax-scrolling style. There’s a codepen prototype here.

When a song comes in, random colours are picked for the front (hills) & back (alpine) range. The intermediate two colours are interpolated.
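
The interpolation itself is just a per-channel blend between the two random colours – roughly:

// Blend two RGB colours; t = 0 gives colour a, t = 1 gives colour b.
const lerp = ( a, b, t ) => Math.round( a + ( b - a ) * t );

function lerpColor( [ r1, g1, b1 ], [ r2, g2, b2 ], t ) {
  return [ lerp( r1, r2, t ), lerp( g1, g2, t ), lerp( b1, b2, t ) ];
}

const hills = [ 255, 99, 71 ];   // random front colour (example values)
const alpine = [ 70, 130, 180 ]; // random back colour (example values)
const uplands = lerpColor( hills, alpine, 1 / 3 );
const ridge = lerpColor( hills, alpine, 2 / 3 );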

What next?

Occasionally I may add new songs to Padded Landscape. One song will get released on Cartoon Beats later this year – Redline Train, based on the Maenyb dub / reggae loop. Keep an ear out for it – this will include a remix from nsu (of Newclear Music).

If you need a low-maintenance soundscape…
fire up the site and click play!

Manhole Covers

Click play to listen to an evolving soundtrack as you scroll through the post.

Anunaku – Teleported

If you read this blog you might already know I like taking photos of tiles. This post is about another photographic hobby of mine – manhole covers.

Here’s one from Amsterdam which (to my eyes) evokes a computer keyboard. I did a quick google to find out what Purator is – and found a website devoted to manhole covers!

Herzel – Glide

So far, the Czech Republic is winning – two manhole covers of note. I think this one is from the quad inside Prague Castle.

CYRK – Memorial

This one’s just a regular manhole from a random street in Prague, but it’s a damn cool design, especially against the backdrop of the concentric intersected rings of the pavement.

Noir – Disruption

Actually it looks like the USA (or LA to be precise) is a close second, though the manhole covers aren’t nearly as interesting. This one’s a pretty great nod to futuristic 80s vector art.

That one looks more square than the others. I heard that “Why are manhole covers round?” is an interview question at Microsoft. I feel like this is the one question I wouldn’t have a problem with.

Shedbug – There’s Hope for You Yet

This one here is not really noteworthy, except for the clear pride that LA has that this manhole cover was made in India.

Wyatt Marshall – This & That

Nothing to say about this one from Canada, except that the sun was nice and low when I took the photo.


Computer & Nerd Boxes

First stop on our summer holiday was Sunnyvale. The first thing to do in Silicon Valley – the Computer History Museum. So much historical computer info and hardware, all housed in Silicon Graphics’ old headquarters.

After the CHM we visited ex-colleague Aidan at his new workplace – one of the many Apple campuses dotted around. He’s in Wolfe, where marketing happens, and we also peeked at the new, impenetrable Apple Park. Impressive and inspiring places to work, nice food, and overly carbonated water on tap!

Memorable Moments from 2019 Automattic Grand Meetup

I’ve finally recovered from the Automattic Grand Meetup – it took me a while this year, as I spent a few days nomad-working in Cocoa Beach and then found myself sick for quite a few days. One reason it took so long to recover was that the meetup was totally amazing!

The keynote speakers this year were consistently fantastic. Each one made me look at the world slightly differently – very thought-provoking. Here are some memorable moments 🙂

Alexander Rose of the Long Now Foundation

I’d heard about the Long Now Foundation, but never really looked past the surface. At face value, it’s a compelling idea – how can we think and operate on timescales larger than our experience?

The work they are actually doing is delightfully idiosyncratic, and generally inspiring engineering (of the mechanical variety). I learned a lot about exactly what they mean by “long now clock”, and lots of fascinating details about what they are building.

Stephen Wolfram

Stephen Wolfram was an intense experience! He casually walked us through a bunch of big ideas, noodling in his interactive, natural-language-based system for so-called “computational essays”.

His way of working really resonated with me – this seems like an important part of the next wave of computing. It really was like science fiction unfolding on stage. The Wolfram language allows you to jot down questions in a much more precise format. The system has types & semantics so that it really understands the places, concepts and numbers that you type, and it has “knowledge” – it’s connected to many vast data sources.

What I found compelling about this is its expressive and communicative power. When we use natural language, there’s always a “translation gap” – the symbols in the language are an approximation for what we mean. Can we develop a more precise way to communicate? Could such a system provide a common, standard set of tools for white papers or public policy development?

Scott Berkun

Scott Berkun presented a very holistic view of design. He’s currently working on a new book – it was refreshing to hear a talk that wasn’t selling a book, but workshopping one in progress!

Everything is design, everything is designed, everyone is a designer. Often the nominal “designer” is working within constraints that they don’t control, and the “true” designer might be a producer, a mayor, a lobbyist, etc.

I really appreciated the “call to arms” in Scott’s talk. How can we ask questions and peel the layers back to understand what patterns and agents are really designing the experiences, cities, and products that we all use every day? Get involved here!

WordCamp Brisbane talk – Cool Stuff Inside Gutenberg

I spoke at WordCamp Brisbane last weekend – my first WordCamp! I chose Brisbane as it seems like the strongest community in my part of the world. It didn’t disappoint – the 2019 event broke records as the biggest #WCBNE ever!

For my talk topic, I wanted to shine a light on all the work going into the Gutenberg project. This code base powers the block editor in WordPress, but there’s so much potential here.

Inside Gutenberg there’s a rich library of components that you can use to build the custom blocks your site needs, dashboard interfaces in wp-admin, and more. You can even use these packages outside WordPress – the possibilities are endless!
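
For a taste, registering a minimal block with these packages looks roughly like this (the block name and fields are made up for illustration):

import { registerBlockType } from '@wordpress/blocks';
import { TextControl } from '@wordpress/components';

registerBlockType( 'my-plugin/greeting', {
  title: 'Greeting',
  category: 'widgets',
  attributes: {
    message: { type: 'string', default: '' },
  },
  // The editing UI reuses Gutenberg's own component library.
  edit: ( { attributes, setAttributes } ) => (
    <TextControl
      label="Message"
      value={ attributes.message }
      onChange={ ( message ) => setAttributes( { message } ) }
    />
  ),
  // What gets saved into the post content.
  save: ( { attributes } ) => <p>{ attributes.message }</p>,
} );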

Demo – Page Soundtrack

To demo this, I decided to invest some of my spare time in building something fun that I might use. The idea was to add a soundtrack to blog posts and pages.

When writing a post, you add loops to the page. When the user is reading (scrolling) through the page, it will automatically sync and crossfade between the loops, a bit like a DJ.
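
Under the hood, the crossfade boils down to mapping scroll position onto a pair of gain nodes – a simplified sketch, not the exact plugin code (loopA and loopB are GainNodes feeding the two loops):

// Crossfade between two loops as the reader scrolls from startY to endY.
function crossfadeOnScroll( loopA, loopB, startY, endY ) {
  window.addEventListener( 'scroll', () => {
    const raw = ( window.scrollY - startY ) / ( endY - startY );
    const t = Math.min( 1, Math.max( 0, raw ) );
    // Equal-power curves keep the combined loudness steady.
    loopA.gain.setTargetAtTime(
      Math.cos( t * Math.PI / 2 ), audioContext.currentTime, 0.1 );
    loopB.gain.setTargetAtTime(
      Math.sin( t * Math.PI / 2 ), audioContext.currentTime, 0.1 );
  } );
}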

The two blocks – a loop block and a play button – use Gutenberg components to allow the author to configure things like loop settings, and the page tempo. Have a play with it!

Explore more

  • https://developer.wordpress.org/block-editor/
  • https://github.com/WordPress/gutenberg/
  • https://thegutenbergsite.com
  • https://make.wordpress.org/design/2019/08/12/project-reviewing-wordpress-components/
  • https://gziolo.pl/2019/07/15/growing-javascript-skills-with-wordpress/
  • https://reactjs.org

Building a Dining Booth

A little while ago we had some work done on our house to make it better.

We swapped the windowless, dank bathroom with the large, sun-filled laundry (with a vaulted/angled ceiling), extended the hallway to the back of the house, and added a toothbrush nook.

Next to our kitchen there was a nice chunk of space, with a view out to the back yard (and forthcoming deck). I was really keen on turning this into a dining booth – a bit like a cafe/diner booth, but a bit bigger.

So I borrowed a circular saw, started measuring, and built it!

Vogel Street Party

In 2014 the first Vogel Street Party happened in the warehouse precinct in Dunedin. The reason for the party was to celebrate the local community making things happen & the general rejuvenation of the area.

It was incredible! 

From 3pm to 10pm on Saturday the 18th of Oct 2014:

  • the street was closed to traffic
  • street food vendors sold delicious wares
  • a huge range of activities for young and old were held
  • a HUGE LED wall screen showed animation, video and digital art
  • musicians performed over the afternoon
  • there was an upcycled street-fashion show
  • DJs played into the evening

Also the party coincided with the Dunedin Street Art Festival. Local and international artists transformed walls around the area into vibrant pieces of art.

A strong group of volunteers made this happen. I got myself involved from day one and put a lot of energy into the website, booking the DJs, as well as curating & producing the digital screen content.

I also had the privilege of performing – DJing and triggering custom synchronised animations on the big screen.


A huge thanks to everyone who contributed to the event, and the sponsors who backed us!


This post was originally posted in 2014; since then there have been more Vogel Street Parties every year (with a hiatus in 2018), each one getting bigger and better! See you at the next one 🙂