Welcome back to The Mage’s Workbench, a mini-series dedicated to exploring various production techniques as they relate to chiptune and chip-adjacent workflows. This month, the topic at hand is post-production in a hybrid chiptune workflow. In this context, ‘hybrid’ refers to the combination of a modern production environment like a digital audio workstation (or DAW) with more traditional trackers used by many chiptune artists. As with most things, there are lots of ‘right answers,’ but perhaps this episode will shed light on a conundrum you’re facing in the studio. If you’ve ever wondered how to lock in the tempo of audio exported from a tracker and sync it up with something else, then this article is for you!
For this article, we’re going to use the following tracks as reference:
Both of these examples were written using the same reference tempo of 150 BPM. For those familiar, this speed is one that jibes particularly well with the 2A03 (the CPU and sound chip in the NES). For the time being, the number itself is not as relevant as the fact that the two stems are approximately the same tempo. The word ‘approximately’ is the operative term here. Here’s why:
While not all chips are created equal, you may encounter some subtle idiosyncrasies that introduce unique challenges when multitracking. Check out what we get when both examples are dropped into the same project file:
Right away we can see (and hear) a noticeable gap at the beginning of our tracker stem. This is an intentional artifact of FamiTracker designed to preserve the integrity of exported audio. If you’re following along at home and your track looks similar, trim out that silence and slide the audio earlier (to the left) so that the two tracks begin at the same time. For demonstration purposes, we’re using the latest version of Audacity since it is flexible, free, and available on all operating systems.
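If you find yourself trimming a lot of stems, this step can also be scripted. Below is a rough sketch in Python using only the standard library’s wave module; the function name, the threshold value, and the 16-bit-PCM assumption are all mine, not anything FamiTracker or Audacity prescribe:

```python
import array
import wave


def trim_leading_silence(in_path, out_path, threshold=200):
    """Copy a WAV file, dropping everything before the first non-silent frame.

    threshold is an absolute 16-bit sample value; anything at or below it
    counts as silence. 200 is an arbitrary guess -- tune it to your stems.
    Assumes 16-bit PCM on a little-endian machine (WAV data is little-endian).
    """
    with wave.open(in_path, "rb") as w:
        params = w.getparams()
        assert params.sampwidth == 2, "sketch assumes 16-bit PCM"
        frames = w.readframes(params.nframes)

    samples = array.array("h", frames)
    ch = params.nchannels

    # Find the first frame containing a sample louder than the threshold.
    start = 0  # if the whole file is 'silent', fall through and keep it all
    for i in range(0, len(samples), ch):
        if any(abs(s) > threshold for s in samples[i:i + ch]):
            start = i
            break

    with wave.open(out_path, "wb") as w:
        w.setparams(params)  # frame count is corrected automatically on close
        w.writeframes(samples[start:].tobytes())
```

Doing it by hand in Audacity works just as well, of course; the script only saves clicks when a whole album’s worth of stems needs the same treatment.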
After trimming, this is what we end up with:
As you can hear, both tracks are in time with one another. However, even with ‘optimal’ tempo and speed settings, it doesn’t take long for some noticeable discrepancies to surface. If, for instance, we take both tracks and loop them, this is what we end up with on the 16th iteration:
It should go without saying, but in very few circumstances would this be ideal. At its most disparate, the tracker audio is about 40 milliseconds behind the DAW’s audio. Granted, a deviation of this magnitude only accumulates after 7 minutes, but the effect leading up to it is noticeable after only a few minutes. Without getting too technical, the divergence comes down to an engine clock speed that isn’t a whole number. In other words, our tracker audio doesn’t end up being exactly 150 BPM.
Since we want both tracks to share the same tempo, we need a way to manipulate the audio right down to the millisecond. With some audio software, this is actually pretty straightforward. In FL Studio, for example, audio clips can be manually stretched. While altering the duration of audio in this way would normally affect the clip’s pitch as well, FL Studio has some built-in time-stretching algorithms that make this kind of manipulation simple. In particular, e3 Mono is an off-line (not real-time) stretch method that is tailor-made for this type of precise adjustment.
Because we have already committed to using universally accessible tools, though, let’s examine how to do this in Audacity. There are a few more steps involved, but the results are undeniable!
The first order of business is to identify a clear, rhythmic landmark* as late into the song as possible. Ideally, the goal should be to align both tracks to this exact point. Since the DAW track can (presumably) be trusted to maintain a consistent tempo, we will use that as our anchor point (so it won’t be altered at all). In our example, the chosen target is the last beat of the last bar.
* this process is made far less cumbersome if both stems contain percussion, as the attack of a drum hit is very easy to identify when looking at a waveform
Now that we know where in the song to line up our tracks, let’s get an accurate timestamp for that target. In the latest version of Audacity, you can create a click track, or metronome. To do this, add a new mono track from the Tracks menu (Ctrl + Shift + N on Windows). With this new track selected, add a Rhythm Track from the Generate menu. You’ll be presented with a dialogue box containing various text fields and sliders for configuring the click track. How you configure these parameters* will depend on your song, but the important thing is that the target section follows the same tempo as our anchor track, which, in this case, is our DAW audio. Once you’re happy with your settings, select ‘OK.’
* selecting a sharp metronome sound (e.g. Cowbell) makes identifying the exact start point of each tick much easier
With a clear target and an easy-to-visualize method for pinpointing its exact timestamp, click any of the tracks at the target time (it helps to zoom in). Now, access the Labels submenu from the Edit menu and select ‘Add Label at Selection.’ A new label track will appear with a marker and a blank text field. Give it a name (e.g. ‘Target’) and press Enter. This step isn’t required, but it allows you to reference the target timestamp quickly and accurately, since a vertical marker will appear whenever you hover your mouse over the label.
With our target identified, all that is left is to time-stretch our tracker stem. In FL Studio, this process is effortless, as the bulk of the ‘heavy lifting’ involves clicking and dragging the audio clip until it is the desired length. In Audacity, however, the method we have to use is less tactile. Begin by zooming in near the target timestamp and selecting the track containing the tracker stem. From the Effect menu, select ‘Change Tempo…’ A new dialogue box will appear with a few distinct options; the one we care most about is ‘Percent Change.’ Because the needs of each song will vary greatly, the value you type into this field is not a matter of exact science (or perhaps that science isn’t the best use of your energy). As such, you will have to arrive at your solution through trial and error. A positive value speeds the stem up, pulling its target point earlier; if you overshoot and the tracker stem’s target point now lands ahead of the DAW stem’s target point (which is also marked by the label created earlier), try again with a lower value, and if it still lags behind, try a higher one. As it turns out, the value that syncs the demonstration tracks together is: 0.008.
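Trial and error goes faster with a sensible first guess. Assuming Audacity’s convention that a positive Percent Change shortens the clip by a factor of 1/(1 + p/100), you can derive a starting value from the two target timestamps. The numbers below are hypothetical; read the real ones off your label and selection:

```python
# First-guess value for Audacity's 'Change Tempo' Percent Change field.
# Hypothetical timestamps -- substitute the target times from your own labels.
daw_target_s = 420.000      # anchor (DAW) stem's target timestamp
tracker_target_s = 420.040  # tracker stem's target timestamp, lagging behind

# A positive Percent Change p shrinks the clip by 1/(1 + p/100), so solving
# tracker_target_s / (1 + p/100) == daw_target_s for p gives:
percent_change = (tracker_target_s / daw_target_s - 1) * 100
print(f"Percent Change ~ {percent_change:.4f}")  # prints: Percent Change ~ 0.0095
```

Treat the result only as a starting point; stretch algorithms and rounding mean the final value (0.008 for the demonstration tracks) still has to be dialed in by ear and by eye.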
All right. From the waveform, everything looks right. But since we’re dealing with audio, the real question is: how does it sound?
Not bad at all. As you can hear in the above video, none of the tracks are stuttering or lagging behind. Everything is snappy and locked in tight around a steady BPM of 150. If you notice anything fishy in your own tracks, keep tweaking. The answer most likely lies in a fraction of a percent.
Want some practice? Download the example stems below!
Thanks for reading this episode of The Mage’s Workbench! Got your own process or workflow that you’re excited to share? Have a question about the material in this article? Drop a comment below or chime in on Discord! Don’t stop sharing and don’t stop creating!
Note: Spanish translation by Pixel Guy found here.