How to get custom ringtones & text alerts on your iPhone

This page has a musical soundtrack – click the Play button to hear it as you scroll.

Haszari – Redline Train Rubadub (chords & strings)

I like default settings… but I also like to customise things. Especially my sound world. It’s quite possible to get a custom ringtone on an iPhone, though it’s a bit of a hidden feature. I’m not sure about other phones (e.g. Android) – let me know how that works in the comments.

If you just want to download my new dub-reggae ringtone, scroll down!


Haszari – Redline Train Rubadub (drums & bass)

I make my ringtones on my laptop – because that’s where I make music and wrangle all my samples. You can also do this on an iPhone (details below).

  • Make or record some audio less than 30 seconds in duration.
  • Encode it in AAC (.m4a) format.
  • Rename the file extension to .m4r (Apple’s ringtone format).

Then, connect your phone to your laptop via USB. In recent versions of macOS you should see your phone in the Finder sidebar:

You’ll notice there’s no mention of ringtones anywhere here.

BUT you can drag your .m4r files over and drop them and they will show up on your phone!


How to make a ringtone – right on your iPhone

Without a laptop, it’s a similar process, using GarageBand on iOS.

  • Open GarageBand on your phone, start a new project.
  • Make or record less than 30 seconds of sound/music.
    • This is the tricky bit, one reason why I prefer desktop.
  • Share song and select Ringtone.

Why make custom ringtones?

I like making music but like a lot of creatively minded people I can get bogged down polishing things to perfection. Or feel paralysed by the many half-finished projects or the long list of ideas.

Producing and rendering ringtones is a way of focusing on a low-stakes outcome. Also it’s a fun way to road-test ideas – if you still like it after using it as a ringtone, maybe it’s a keeper!

Free download

Haszari – Redline Train Rubadub

My new ringtone is a more laid-back, casual “rubadub” style version of Redline Train – an unreleased dub reggae tune. It’s the soundtrack for this page!

You might have already heard the song – it’s part of Padded Landscape, my continuously-evolving loop-based audiovisual installation & website.

When someone rings me, this gentle reggae ditty will echo through my surroundings as I scrabble to answer my phone. When you send me a text, a little dubby chord stab lets me know!

Download the files and drag them on to your phone – this archive includes .m4r files for iPhones and .mp3 files which should work on other devices (e.g. Android, Samsung). Let me know if your phone uses some other format.

Daniel Crooks – time & space from a different angle

In 2019 I gave a talk at my first ever WordCamp (in Brisbane). Looking back I’m feeling lucky to have travelled and attended in person. Like so many other communities, WordCamps are now pretty much 100% online & distributed.

Whenever I find myself in a new place there are a few things that I’ll typically do:

  1. Wander around and look at buildings and other urban scenery.
  2. Explore an art gallery, ideally full of contemporary work.
  3. Find a local club with DJs or live music and see what’s on.


In Brisbane there is a fantastic art gallery – QAGOMA. I think I spent a solid morning there and then went back to co-work in the cafe, riverside, the following day.

Not only is QAGOMA chock-full of contemporary art; the building and grounds are an archetypal demonstration of modernist concrete, hedges, sculpture and lawn working in harmony. It’s like walking around a two-point perspective tech-drawing artist’s impression of the future, and I love it 👌

Daniel Crooks

Work from a fellow New Zealander – Daniel Crooks – immediately grabbed me, early in my visit.

(Side note: I’m grateful to Australia’s larger economy and mineral wealth for helping develop and promote so many creative Kiwis.)

Daniel Crooks is not a painter; he works with moving images. I’m always interested to see what artists do with motion or video, and how they can transcend the standard formats (e.g. rectangular, linear film).

Train no. 1

The first work I saw was Train no. 1, an ultra-wide format video work. A shot of a train platform is stretched across multiple screens in a technique that looks like a visual analogue of audio timestretching.

The video is sliced up horizontally into windows, and multiple similar windows are laid out horizontally, producing repetitions of bits of action across the frame, reminiscent of audio delay. Anyone who’s heard my music or me DJing knows that I’m pretty obsessed by dub delay. I’m experimenting with GLSL shaders in an attempt to simulate this effect.

Phantom Ride

Crooks’ Phantom Ride was running in the next room. This is a much larger, two-screen installation. And a much richer work, with beautiful hi-definition photography of urban and countryside scenery with one common element throughout – railway lines, and the unstoppable, continuous progression of time.

I draw a parallel between this work and music, carving up time into chunks and sections (verse, chorus, bridge) while maintaining a common rhythm across the changes. I also loved that such a simple technique and idea could produce such a powerful, meditative space, and that the work could be looped infinitely without appearing as such – make the temporal edits (or, the frame) an integral part of the work.

The accompanying musical score is by Byron Scullin. The music plays a huge role in the contemplative nature of the piece. To me this is an audiovisual work realised by a collaboration, though I could imagine that the ideas and themes are driven by Crooks. I’ll have to seek out more sound from Scullin and hear what he’s all about. Bass Bath sounds intriguing!

I was grabbed by lots of the work on display – here are some photos of other highlights!

Darryl Baser “Second Selfie” launch party

I played a little gig on the weekend – had a great time. The show was the release party for Dunedin songwriter and local media personality Darryl Baser’s Second Selfie album.

An assortment of local songwriters played. A particular favourite for me was the Dragonfly Rustlers. Tight folk-blues harmonies, and a unique ability to trade solo/rhythm guitar duties back & forth so quickly it made my head spin. They made it look and sound effortless, almost as though it was one person playing two guitars.

And of course I played a little Haszari set too 🙂

Playing this gig was really inspiring. Everyone played and sang amazingly, the drum kit had a suitcase for a kick drum, and the tiny room was rammed with a rowdy bunch – there was the odd good-natured heckle.

Collaborating with Darryl on Blank Canvas Insomnia was an invigorating experience. A year or so ago he sent me 3 stem tracks for the song: vocals, guitar, and a percussion track. I built a slightly unhinged electronic track around these parts, which is on the full CD/download version of the album.

For the show we played it live – Darryl on vocals and guitar, me on beats, bleeps, and dub delay. I need to find more people to jam with like this! Or start honing my own lyrics and vocals.

Eggs for Tea

I’ve just released some new music!

This one has been percolating for a while. The story begins with a remix contest and a piece of left-field “indie” audio workstation software known as EnergyXT.

I’m constantly looking for different tools and techniques to help me make music. To produce music more fluidly and expressively, and to organise and build up an arrangement from a few small patterns elegantly.

One feature that I often look for is the ability to have alias patterns. Instead of pasting copies of a pattern across the arrangement, you copy aliases or clones. If you edit or tweak one of the copies, they all update. In principle I love this way of working, because I like structure – I want to start with something rough, e.g. a basic beat, then build up an arrangement with it, and then iterate on the beat to make it sound better.
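At heart, alias patterns are just reference semantics. Here’s a rough JavaScript sketch of how aliases differ from copies – not how any particular DAW implements it, just the principle:

```javascript
// A minimal sketch of alias (linked) patterns: each arrangement slot
// holds a reference to one shared pattern object, not a copy.
const beat = { name: "basic beat", steps: ["kick", null, "snare", null] };

// Place the same pattern at several points in the arrangement.
const arrangement = [beat, beat, beat, beat];

// Iterate on the pattern once...
beat.steps[1] = "hat";

// ...and every placement reflects the change.
console.log(arrangement.every((p) => p.steps[1] === "hat")); // true
```

With pasted copies, that last edit would only touch one bar, and you’d be stuck updating the rest by hand.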

There’s a related principle when working agile – always be ready to ship. Start with a basic version of a complete product that works, and then see what details need to improve.

So back to EnergyXT. It’s built to work this way, with alias patterns. Notable big name software doesn’t work like this – e.g. it’s not possible in Ableton Live or Bitwig; in Logic Pro X it’s possible but a little cumbersome.

I was looking at using it more when I found, on the KVR forum, that someone was running an unusual remix contest. The source material was a short excerpt of some strange samples (thanks Kejkz). The rules were that you had to use the default elements built into EnergyXT – the sampler, the synth. The prize: a license for EnergyXT!

Remix contests are great – you get some source material to work with, some kind of deadline, and maybe some constraints. I often find myself much more focused and productive in these contest situations, so I entered.

I decided to use the default drum samples and a basic synth preset from EnergyXT – I thought they sounded great! Once I had my beat + bassline, I layered on some stabs from the source samples and before long I had a track. Great when things come together quickly.

The track sat around for a while – I’d often play it out in DJ or live sets. Cut to earlier this year and I thought it was time to get it out, and that it could do with a psychedelic cut-up treatment from Sharkweek, featuring vocals from Michael Chen.

So here it is! Go listen to it everywhere!

The cover art is based on a photo of Alcatraz.

How to sync the Web Audio API with Web MIDI

The Web Audio API is an incredibly fun audio playground, widely supported in most browsers. You can quite easily do things like build synthesisers and sound effects, or slice up audio samples – in JavaScript code, in your web browser.

There’s also a companion API for sending and receiving MIDI events. This allows you to play your Web Audio synthesiser using a physical (piano-style) keyboard, or send MIDI notes and controller information out to play other instruments or devices, either hardware or other software.

I’ve been experimenting with these APIs because I want a more flexible way to sequence and trigger musical parts live.

In my app I have various patterns of music. Some contain audio events – for example a drum beat, made of different drum sounds triggered in Web Audio in the browser. Others contain MIDI events – for example, a bassline. These MIDI patterns are sent out to other synth software – they don’t make sound in the browser.

What’s the problem?

When I’m playing my different patterns, I want the notes to line up in sync.

The Web Audio API and Web MIDI API use different scheduling, so things don’t line up by default.

This post explains how to get these to play back in sync so I can combine audio events and MIDI events in a performance or piece.

How to sync audio and MIDI

Web Audio events are scheduled relative to when the AudioContext started, in seconds.

Web MIDI events are scheduled relative to when the page started loading, in milliseconds.

So we have two differences to account for:

  • Seconds vs. milliseconds – i.e. multiply by 1000
  • When the page started loading vs. when the AudioContext started

This second item is the tricky one – the AudioContext doesn’t necessarily start when the page loads; it could start much later.

We can measure this difference by using the High Resolution Time API (performance.now()), and comparing that to the current AudioContext time.

const perfNow = performance.now();
const audioNow = audioContext.currentTime;
const audioContextOffsetSec = ( perfNow / 1000.0 ) - audioNow;

This tells us how late audio events are relative to MIDI or real time. (MIDI events are sent in close to real time.)

So to sync we need to offset (delay) MIDI events by this latency:

const timestamp = ( startSeconds * 1000 );
const offset = ( audioContextOffsetSec * 1000 );
// midiOutput is a MIDIOutput obtained via navigator.requestMIDIAccess()
midiOutput.send(
  [ 0x90 + 0, midiNote, 100 ], // note on, channel 1, velocity 100
  timestamp + offset
);

For a long time I had this backwards – I’d schedule my audio events earlier by audioContextOffsetSec, trying to account for the latency, but this breaks down when you are scheduling close to now. The AudioContext can only schedule so far in advance.

I’ve put up a complete example on GitHub as a demo – take a look.

For a deeper dive on how to build a reliable, accurate sequencer in Web Audio and JavaScript, check out A Tale of Two Clocks on HTML5 Rocks. Spoiler alert: there are more than two clocks.

Hopefully this article helps someone – it took me a while to get my head around this. Although the Web MIDI API is still experimental, I’m really excited to see what apps and tools will emerge in its wake.

🚀 💻 🎛

Padded Landscape

a loop-based generative audio-visual experiment

A while ago I decided to relax a bit more with my music production. The idea was to remove the pressure to finish and release tracks, and just noodle around and make loops. If I had fun jamming, I’d render the parts as a loop, archive it, and move on.

It was really liberating to focus in on short sections of music, and not concern myself with arrangement.

I might set out to make an “east coast whine” sound, and end up with a little trap/hip hop loop (Kufca). Or I might want to make a classic “synth strings” sound, and then turn that into a vocal-sample driven 90s big-beat loop (Mivova).

Around the same time, I was experimenting with the Web Audio API. It’s really powerful, making it easy to slice up audio files, sequence audio and midi, and even build custom synthesis and effects processing.

I’ve also been dabbling in ways to combine audio and (primitive) animation. Wouldn’t it be cool to jam out on a sequencer or midi controller, and synthesise visual and audio content at the same time?

How the audio works

There are currently nine songs/loops of material set up. Each song is divided into four stem parts – sometimes a single instrument (drums, bass, or vocals), sometimes a combination.

I’ve named the different mix layers using words that you might use to navigate a mountainous landscape:

  • alpine – typically drums
  • ridge – typically bass
  • uplands – lead, chords, synth
  • hills – vocals, texture, arpeggio, pad

These aren’t hard & fast categories; for example, some songs have different drum patterns in alpine and ridge. The idea here is that if you combine say a ridge and alpine from one song and hills from another, it won’t sound terrible 🙂

The playback is automated and has some gentle randomisation so it’s not exactly the same each time.

  • Playback happens in 64 beat cycles.
  • The order of the songs is shuffled. 🗂
  • For each song, a part is selected at random in each cycle, until all parts are playing.
  • Then the parts are removed in a random order.
  • When there’s only one or two parts of the outgoing song playing, a new song kicks in. This means parts from two different songs will overlap – like a DJ blend.
  • Each part might fade in or filter in slowly over the cycle, or start fully turned up. Similarly, parts may fade or filter out. 🎛
  • When we run out of songs, we shuffle the order and start again. 🎲
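Ignoring the fades, effects and song overlap, the add-then-remove cycle logic above could be sketched something like this (the names here are made up for illustration):

```javascript
// A simplified sketch of the 64-beat cycle logic: shuffle the parts,
// add one per cycle until all four play, then remove them again.
function shuffle(items) {
  // Fisher–Yates shuffle (returns a new array).
  const result = [...items];
  for (let i = result.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [result[i], result[j]] = [result[j], result[i]];
  }
  return result;
}

const PARTS = ["alpine", "ridge", "uplands", "hills"];

// Yields, per cycle, the set of parts playing for one song:
// build up in a random order, then strip back down.
function* songCycles() {
  const playing = [];
  for (const part of shuffle(PARTS)) {
    playing.push(part);
    yield [...playing]; // one more part each cycle
  }
  for (const part of shuffle(PARTS)) {
    playing.splice(playing.indexOf(part), 1);
    yield [...playing]; // one fewer part each cycle
  }
}
```

In the real thing, the next song’s cycles start overlapping once the outgoing song is down to its last one or two parts – that’s the DJ-blend moment.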

To add a little punctuation & spice to this, I’ve added in a handful of sound effects. Some are recordings of cutting card or ripping paper, but there’s also an airhorn. These are randomly sprinkled on cycle changes, especially if a song is changing. These are fed into a dub-delay effect.

There are also some texture samples – maybe birds, a dripping tap, the ocean, or sea scouts around a campfire. When the song changes, a beat-synched chunk is sampled and looped from one of these textures, to add another layer to the transition.

The mixing and effects processing happens live in your web browser, using Web Audio nodes – filters, delays.

How the animation works

The mountains are React components containing SVG. There are four mountain ranges, to match the four mix layers. If a ridge (e.g. a bassline) part is playing, you’ll see a moderately tall mountain range. When parts fade in or out over the cycle, the mountains fade in too.

Each mountain is an SVG polygon, and the points are animated, parallax-scrolling style. There’s a codepen prototype here.

When a song comes in, random colours are picked for the front (hills) & back (alpine) range. The intermediate two colours are interpolated.
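The interpolation is simple linear blending per colour channel; a sketch, assuming colours are [r, g, b] arrays (which may not match the actual implementation):

```javascript
// Linearly interpolate between two RGB colours ([r, g, b], 0–255).
// t = 0 gives the front (hills) colour, t = 1 the back (alpine) colour.
function lerpColor(front, back, t) {
  return front.map((channel, i) =>
    Math.round(channel + (back[i] - channel) * t)
  );
}

const hills = [255, 0, 0];  // front range
const alpine = [0, 0, 255]; // back range

// The two intermediate ranges sit a third and two-thirds of the way back.
const uplands = lerpColor(hills, alpine, 1 / 3); // [170, 0, 85]
const ridge = lerpColor(hills, alpine, 2 / 3);   // [85, 0, 170]
```

Because only the two end colours are random, the four ranges always read as one coherent palette.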

What next?

Occasionally I may add new songs to Padded Landscape. One song will get released on Cartoon Beats later this year – Redline Train, based on the Maenyb dub / reggae loop. Keep an ear out for it – this will include a remix from nsu (of Newclear Music).

If you need a low-maintenance soundscape…
fire up the site and click play!

Manhole Covers

Click play to listen to an evolving soundtrack as you scroll through the post.

Anunaku – Teleported

If you read this blog you might already know I like taking photos of tiles. This post is about another photographic hobby of mine – manhole covers.

Here’s one from Amsterdam which (to my eyes) evokes a computer keyboard. I did a quick google to find out what Purator is – and found a website devoted to manhole covers!

Herzel – Glide

So far, the Czech Republic is winning – two manhole covers of note. I think this one is from the quad inside Prague Castle.

CYRK – Memorial

This one’s just a regular manhole from a random street in Prague, but it’s a damn cool design, especially against the backdrop of the concentric intersected rings of the pavement.

Noir – Disruption

Actually it looks like the USA (or LA to be precise) is a close second, though the manhole covers aren’t nearly as interesting. This one’s a pretty great nod to futuristic 80s vector art.

That one looks more square than the others. I heard that “Why are manhole covers round?” is an interview question at Microsoft. I feel like this is the one question I wouldn’t have a problem with.

Shedbug – There’s Hope for You Yet

This one here is not really noteworthy, except for the clear pride that LA has that this manhole cover was made in India.

Wyatt Marshall – This & That

Nothing to say about this one from Canada, except that the sun was nice and low when I took the photo.

Shedbug – There’s Hope for You Yet

Computer & Nerd Boxes

First stop on our summer holiday was Sunnyvale. The first thing to do in Silicon Valley – the Computer History Museum. So much historical computer info and hardware, all housed in Silicon Graphics’ old headquarters.

After the CHM we visited ex-colleague Aidan at his new workplace – one of the many Apple campuses dotted around. He’s in Wolfe, where marketing happens, and we also peeked at the new, impenetrable Apple Park. Impressive and inspiring places to work, nice food, and overly carbonated water on tap!

Memorable Moments from 2019 Automattic Grand Meetup

I’ve finally recovered from the Automattic Grand Meetup – it took me a while this year. I spent a few days nomad-working in Cocoa Beach, and found myself sick for quite a few days. One reason it took so long to recover was that the meetup was totally amazing!

The keynote speakers this year were consistently fantastic. Each one made me look at the world slightly differently – very thought-provoking. Here are some memorable moments 🙂

Alexander Rose of the Long Now Foundation

I’d heard about the Long Now Foundation, but never really looked past the surface. At face value, it’s a compelling idea – how can we think and operate on timescales larger than our experience?

The work they are actually doing is delightfully idiosyncratic, and generally inspiring engineering (of the mechanical variety). I learned a lot about exactly what they mean by “long now clock”, and lots of fascinating details about what they are building.

Stephen Wolfram

Stephen Wolfram was an intense experience! He casually walked us through a bunch of big ideas, noodling in his interactive, natural-language based system for so-called “computational essays”.

His way of working really resonated with me – this seems like an important part of the next wave of computing. It really was like science fiction unfolding on stage. The Wolfram language allows you to jot down questions in a much more precise format. The system has types & semantics so that it really understands the places, concepts and numbers that you type, and it has “knowledge” – it’s connected to many vast data sources.

What I found compelling about this is its expressive and communicative power. When we use natural language, there’s always a “translation gap” – the symbols in the language are an approximation for what we mean. Can we develop a more precise way to communicate? Could such a system provide a common, standard set of tools for white papers or public policy development?

Scott Berkun

Scott Berkun presented a very holistic view of design. He’s currently working on a new book – refreshing to see a talk that isn’t selling a book, but workshopping one that’s in progress!

Everything is design, everything is designed, everyone is a designer. Often the nominal “designer” is working within constraints that they don’t control, and the “true” designer might be a producer, a mayor, a lobbyist, etc.

I really appreciated the “call to arms” in Scott’s talk. How can we ask questions and peel the layers back to understand what patterns and agents are really designing the experiences, cities, and products that we all use every day? Get involved here!

WordCamp Brisbane talk – Cool Stuff Inside Gutenberg

I spoke at WordCamp Brisbane last weekend – my first WordCamp! I chose Brisbane as it seems like the strongest community in my part of the world. It didn’t disappoint – the 2019 event broke records as the biggest #WCBNE ever!

For my talk topic, I wanted to shine a light on all the work going into the Gutenberg project. This code base powers the block editor in WordPress, but there’s so much potential here.

Inside Gutenberg there’s a rich library of components that you can use to build the custom block your site needs, dashboard interfaces in wp-admin, and more. You can even use these packages outside WordPress – the possibilities are endless!

Demo – Page Soundtrack

To demo this, I decided to invest some of my spare time in building something fun that I might use. The idea was to add a soundtrack to blog posts and pages.

When writing a post, you add loops to the page. When the user is reading (scrolling) through the page, it will automatically sync and crossfade between the loops, a bit like a DJ.
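The scroll-driven crossfade could be computed with a pure function like this – a sketch of the idea, not necessarily the plugin’s actual code:

```javascript
// Given the scroll position and the page positions of two adjacent
// loops, return gain levels (0–1) for an equal-power crossfade.
function crossfadeGains(scrollY, loopAStart, loopBStart) {
  // How far we've scrolled from loop A towards loop B, clamped to 0..1.
  const t = Math.min(
    1,
    Math.max(0, (scrollY - loopAStart) / (loopBStart - loopAStart))
  );
  // An equal-power curve keeps perceived loudness steady mid-fade.
  return {
    gainA: Math.cos((t * Math.PI) / 2),
    gainB: Math.sin((t * Math.PI) / 2),
  };
}
```

In the browser, these values would drive two Web Audio GainNodes as the reader scrolls, so each loop hands over smoothly to the next.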

The two blocks – a loop block and a play button – use Gutenberg components to allow the author to configure things like loop settings, and the page tempo. Have a play with it!
