SONAR - Development Diary

So I’m going to try something a little different here~

As I’m sure many of you are no doubt aware, I’ve been developing SONAR, my own audio streaming driver for the Mega Drive, since around early August of this year. Progress on it has been fairly steady as of late and I’m hoping to have a proper source release of the driver ready by mid-January (that’s not a promise though!). However, I’m not exactly keen on waiting until then to put the word out about it beyond a few messages littered across various Discord servers. Instead, I would like to try a new post format out.

This is gonna be an ongoing thread where I can share my journey of developing this driver with the community and allow others to voice their feedback as time goes on. Even though I call this a dev diary, the format of this thread is really more like a blog crossed over with a discussion post. This format could either turn out to be really great or really messy, but I’m optimistic about it.

I will go ahead and note right up front that this post is gonna be a lot longer than any of the subsequent updates I write since there’s a pretty sizable timeline to cover up to this point. I promise I’ll try to keep this interesting. That said, this post is going to assume a fair bit of prior knowledge on PCM playback and MD sound drivers, so if you’re not versed in those subjects, I can’t guarantee this will all make sense to you.

Oh, and uh, don’t expect super proper English or an overly professional tone from this post, either. As I write this, it’s a very late Sunday night following an absolutely dreadful shift at work, so I’m pretty much going to write this the same way I would explain it to someone if I was just giving them an off-the-cuff summary of my journey thus far. I’ve got enough professional essay writing shit to do in college anyway, and I get the sense I’d just be wasting everyone’s time if I approached this with any tone but a casual one.

Without further ado, let’s get into it.

The Prologue​

So I guess the most natural place to start is actually what I was doing before I started work on SONAR.

Back in May of this year, I got the bright idea to try and make my own flavor of 1-bit ADPCM. I don’t fully remember exactly what drove me to do this, but nonetheless, I thought it was worth pursuing. I can, however, confirm that the idea of building a driver dedicated to streaming full songs was nowhere on my radar at the time. I was simply trying to put together a format that would act as an extremely lightweight alternative to 4-bit DPCM that still provided a reasonable level of quality; something you could use the same way you might use MegaPCM or DualPCM, but you know… shittier.

I was also doing this as a means to teach myself EASy68K. I’ve been out of practice with high-level languages for quite some time now, so I thought I could use EASy68K to just coast along on the knowledge I already have of 68k asm to write the tools I’d need. This was, in hindsight, really fucking stupid, since the environment EASy68K provides is fairly limited and I’m now in a position where I kinda have to learn a higher-level language anyway (more on that later). I could’ve had a head start on that, but I chose not to. Dumb.

Anyway, I’m not gonna go into detail about what the theory behind this 1-bit format was or how unbearably slow the encoder was, because it’s not super relevant to SONAR. All you really need to take away from this time period is that it was called Variable Delta Step (VDS for short) and that it was a colossal failure. I was pretty put off by the fact that my experiment had essentially boiled down to waiting 45 minutes at a time for the encoder to spit out something that sounded like a broken fax machine made angry love to a jammed floppy drive, so I pretty much chucked the project in the bin after working on it for about 3 days straight.

That said, I wouldn’t say it was a totally pointless endeavor. While working on VDS, I was primarily using Audacity to convert my sound clips from signed 16-bit PCM to unsigned 8-bit PCM. However, there was something I noticed about the conversions that puzzled me quite a bit: the 8-bit samples sounded noisy, even during passages of complete silence. Now, some noisiness is to be expected with the move to a lower-fidelity format, and I did understand that going into it.

However, at some point, the thought crossed my mind that maybe Audacity was simply unsigning the sample data, chopping off the lower 8 bits, and calling it a day, rather than applying any sort of rounding to reduce noise in quieter passages. I decided to investigate further, and sure enough, that’s exactly what it was doing. I responded by showing the program my favorite finger and writing an EASy68K script to handle the conversion properly, and the results were a fair bit cleaner in terms of signal noise.
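If you’re curious, the fix boils down to something like this (a minimal Python sketch of the idea; the actual script was EASy68K, and the function names here are just for illustration):

Code:
def to_u8_truncated(s16):
    # What Audacity seemed to be doing: unsign, then chop off the low 8 bits.
    # Quiet passages always land on the floor of each step, which reads as noise.
    return (s16 + 32768) >> 8

def to_u8_rounded(s16):
    # Round to the nearest 8-bit value instead; the clamp catches the topmost
    # inputs that would otherwise spill past 0xFF.
    return min((s16 + 32768 + 128) >> 8, 255)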

Sometimes you find small victories in your failures, even if you don’t see them right away.

Back to the Drawing Board​

The days following my VDS experiment are a bit of a blur, but I’ll do my best to recount what happened as best I can.

Even though I had thrown in the towel on VDS, I was still hungry to do something PCM-related on the Mega Drive. The idea of an audio streaming driver grew out of ambitions I had when I first joined SSRG years ago; it’s something I had wanted to build back then, before I knew a single Z80 opcode. So, without much of a second thought, I kinda went “fuck it” and decided to see how viable this early dream really was.

I knew going into this that no matter what PCM format I settled on, my driver was likely going to require a cartridge mapper to be anywhere near viable for a full game OST, and that was something I personally felt okay with. To me, streaming music didn’t seem all that different from what the mapper ended up being used for in the only MD game to contain one (SSFII), and when looking at the ratio of how much space music took up vs. every other asset on CD-based games of the era, it seemed pretty reasonable by comparison. I understand some people will disagree with that perspective, but this is just how I personally feel about it.

That said, I still wanted to find a PCM format that would make the most out of every byte without sounding like crap. Having learned my lesson with VDS, I decided that creating a format from scratch was well out of my depth, so I needed to find a pre-existing one that was well-documented, had encoding tools that still worked, and had decoder source code I could reference. However… it turns out very, very few viable PCM formats meet those criteria. COVOX ADPCM was a tempting option, since it could go down to 2 bits per sample, but the encoder I found was old and there was no decoder source. Not to mention… even the 4-bit version of the format sounded worse to me than standard 4-bit DPCM.

That’s when it hit me: why not just use standard 4-bit DPCM? It’s a format I’ve worked with before and fully understand, source code is readily available for an encoder, and I even had some old source code lying around from a DPCM routine that I helped Aurora optimize ages ago. That was enough to convince myself to look into it more deeply, despite a couple of big concerns I had about going in this direction.

Tackling the Shortcomings of DPCM​

Perhaps my biggest concern with the DPCM format was the fact that it was seemingly impossible to detect the end of a sound clip in an elegant way. Most pre-existing implementations of this format on the MD rely on a length counter to track how many bytes of the sound clip are left to play. However, due to Z80 limitations, you can only viably have a length counter max out at either 0x7FFF or 0xFFFF, depending on how it’s implemented. This is fine when the only sound clips you have to worry about are drums or voice clips that don’t last more than 2 seconds tops, but a full song is gonna be maaaaaany bytes longer than what such a small counter could possibly hope to account for (no pun intended).

The way we could get around this is by using an end flag to simply tell the driver that we’re at the end of the sound clip and don’t need to play any more bytes after it. There’s one slight problem, though: that’s damn near impossible to get away with in a delta-based format. If you wanted to designate a byte value of 0x00 as the end flag in an 8-bit raw PCM format, it’s fairly trivial: you simply give any value of 0x00 in the sound clip a promotion to 0x01 and add an extra 0x00 at the very end. In a delta-based format, however, each value corresponds to the difference from the previous sample to the current one. This means every delta-encoded sample needs every sample that came before it to be correct in order to produce the correct value upon decoding. Therefore, you can’t make any byte value an end flag, because doing so could very easily throw off the sample values in a really nasty way. It’s impossible to avoid that… or is it? Let’s take a closer look at the delta decoding table:

0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0xFF, 0xFE, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0

You’ll notice one of the values is 0x80, which corresponds to a 4-bit delta value of 8. This is significant because it means having two delta values of 8 in a row (a byte value of 0x88) would cause the accumulated sample to overflow upon decoding. This caught my eye because ValleyBell’s PCM2DPCM tool has two different anti-overflow algorithms. I figured one of them had to make a byte value of 0x88 impossible, so naturally, I tested both of them on a test sound clip and checked the results with a hex editor. The 2nd anti-overflow algorithm didn’t prevent byte values of 0x88 from being produced, but the 1st one did! This meant that an end flag was possible after all. All I had to do now was swap the decoding values 0x00 and 0x80 around like this:

0x80, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x00, 0xFF, 0xFE, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0

Swapping the positions of those two values basically just lets me use 0x00 instead of 0x88 as the end flag value; it’s easier to detect and there’s less paperwork. Armed with this knowledge, I went ahead and ran a test encoding a sound clip with this modified delta table and anti-overflow. The results were… fucked up, somehow. As it turns out, the anti-overflow algorithm didn’t get along with the new delta table at all. I got around this by converting with the stock delta table plus anti-overflow, then using another quick and dirty EASy68K script to swap the 8s and 0s on each byte after the fact and append the end flag.
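In rough Python terms, that post-processing step looks something like this (a sketch of the idea; the real thing was an EASy68K script):

Code:
def swap_nybble(n):
    # 0 and 8 trade places to match the reordered decode table;
    # every other delta value stays put.
    return {0x0: 0x8, 0x8: 0x0}.get(n, n)

def add_end_flag(dpcm):
    out = bytearray()
    for b in dpcm:
        out.append((swap_nybble(b >> 4) << 4) | swap_nybble(b & 0x0F))
    # Anti-overflow made 0x88 impossible, so after the swap no real data
    # byte can be 0x00, making it safe as the end flag.
    out.append(0x00)
    return bytes(out)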

With my major concern put to rest, I moved to squash another minor concern that I had: the noise level. I would say this is a large part of why people perceive DPCM as sounding kinda crappy. A sound clip encoded with DPCM is more or less an approximation of the source signal, and since we only have 16 possible delta values to work with, there’s going to be more signal noise. This is something I knew I couldn’t outright eliminate without turning to a form of ADPCM, but I knew I could reduce how perceivable it was, so I got to work.

I had already sorta started laying the groundwork for reducing the perception of signal noise back when I worked on VDS, but it was only around now that it started paying off. The 16-bit to 8-bit sound clip conversion script I made back then proved very useful here. Unsurprisingly, the cleaner an 8-bit sound clip is before you convert it, the cleaner the DPCM version will sound after conversion. This made a surprising amount of difference and really makes me wonder if DPCM got a bad rap for the wrong reasons. I still wanted to take it further, though, so I started investigating optimal playback rates. My goal was to figure out at what rates the noise level becomes indistinguishable from the 8-bit equivalent. I also paid particular attention to real MD hardware for these tests, as I suspected the low-pass filter might mask the signal noise a little.

What I ultimately found was, at playback rates of around 20kHz or above, the signal noise differences between 8-bit PCM and 4-bit DPCM were negligible to my ears, especially on real MD hardware and the BlastEm emulator, where the low-pass filter indeed smoothed things out. With that, I had done it. I had a working variation of DPCM that was viable for streaming full songs on the Mega Drive. By my math at the time, one could expect to fit just about 1 minute and 40 seconds of audio per MB of ROM. This means if you dedicated 12 MB worth of ROM mapper banks to your game’s soundtrack, you could fit about 20 minutes of audio within that space, which seemed quite reasonable to me, if a little tight.
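If you want to sanity-check that math yourself, it shakes out roughly like this (exact figures wiggle a bit depending on the rate you assume and whether a MB means 2^20 bytes to you):

Code:
RATE = 20_500                  # samples per second
BYTES_PER_SEC = RATE // 2      # 4-bit DPCM packs two samples into each byte
MB = 1024 * 1024

seconds_per_mb = MB / BYTES_PER_SEC          # ~102 s, i.e. about 1:42
minutes_in_12mb = 12 * seconds_per_mb / 60   # ~20.5 minutes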

It was around this time I shared some of my findings in a few places, including a concept pitch for a ROM hack project where I planned to use the eventual driver. After starting to put together a team for that project, though… I started attending college again, which carved a significant amount of free time out of my schedule. As such, I ultimately had to hit the pause button on both projects. Once I got the hang of my coursework and other responsibilities, though, I found myself ready to develop the driver only a few months later.

At some point early in development (I don’t exactly remember when), I settled on naming the driver SONAR. No, it’s not a clever acronym or even really one that ties into anything related to the technology employed by this driver. In reality, I don’t fully know what drove me to pick the name SONAR aside from maybe the fact that it just sounds cool and powerful. If I had to guess the path of logic my brain took to justify that, it’s probably something flimsy like actual sonar technology being a powerful use of sound waves and this driver also being a powerful use of sound waves… or some other similarly hot bullshit. Questionable logical origins or not, the name felt like enough of a keeper for me to whip up a logo for it, and it’s been here to stay ever since.

DriverLogo.png

Coding the Damn Thing​

After getting the new DPCM format finalized, there were some other goals I needed to focus on while developing SONAR to ensure maximum quality. One of these was DMA protection. For the uninitiated, when a DMA is in progress, the Z80 isn't allowed to access the 68K address space. This means the Z80 will pause if it tries to read samples from ROM during a DMA, which leads to really stuttery playback on drivers that don't take that into account. We can reduce the likelihood of stuttering by reading samples into a buffer faster than we play them back. This allows us to tell the driver to only play from the buffer while a DMA is in progress. Thankfully, the DPCM code I worked on with Aurora was designed to do just that, so designing the playback loop was mostly a matter of reworking that code to fit the new format.
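As a rough mental model of what that looks like (hypothetical names, and Python standing in for Z80 code purely for readability):

Code:
from collections import deque

READ_AHEAD = 64  # how far the buffer stays ahead of the playhead (made up here)

def z80_tick(rom, pos, buf, dma_active):
    # Refill only happens while the 68K bus is free; a DMA blocks ROM reads.
    while not dma_active and len(buf) < READ_AHEAD and pos < len(rom):
        buf.append(rom[pos])
        pos += 1
    # Playback always consumes from the buffer, never straight from ROM,
    # so a DMA window just eats into the read-ahead margin.
    sample = buf.popleft() if buf else 0x80  # underrun fallback: silence
    return pos, sample

rom = bytes(256)  # stand-in for sample data in ROM
buf = deque()
pos, sample = z80_tick(rom, 0, buf, dma_active=False)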

Another goal I had in the beginning was making sure decoding times were equal between the high and low 4-bit delta values. On many DPCM drivers I’ve looked at over the years, the high nybble takes longer to decode each loop than the low nybble, which I figured also contributed to lower-quality playback. As such, I tried my best to make sure cycle times were even between decoding each nybble, and with some finesse, I succeeded! In the early versions of SONAR, decoding times were even for each sample, no matter what. Eventually, however, I conducted a test where I allowed the decoding times to be more uneven and found there was no perceivable drop in quality to my ears. After discovering this, I decided to retire that aspect of SONAR in exchange for a higher maximum playback rate, around 24kHz at the time. However, I still conducted future tests at a rate of 20.5kHz; I simply found the option to go higher appealing. Further tweaks would eventually cut the max rate down to 21.7kHz, which is where it remains today.
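For reference, here’s roughly what the per-byte decode boils down to once you strip away all the cycle counting (a Python sketch using the reordered table from earlier; starting the accumulator at the 0x80 midpoint is my assumption):

Code:
DELTA = [0x80, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40,
         0x00, 0xFF, 0xFE, 0xFC, 0xF8, 0xF0, 0xE0, 0xC0]

def decode(dpcm, sample=0x80):
    out = []
    for b in dpcm:
        if b == 0x00:                               # end flag
            break
        sample = (sample + DELTA[b >> 4]) & 0xFF    # high nybble first...
        out.append(sample)
        sample = (sample + DELTA[b & 0x0F]) & 0xFF  # ...then the low nybble
        out.append(sample)
    return out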

Perhaps my most important goal with SONAR, however, was to make it simple and easy to work with, yet versatile and extremely powerful. Early on, I thought a good way to do this was to allow songs to be split up into individual clips and support a module format to go along with it. The concept was that a song’s module file would be a list of instructions that tell the driver which clips to play in what order. These module instructions are presented as macros for easy editing, readability, and assembling.

Originally, module instructions were translated directly into Z80 instructions that would be executed when a song was ready to play. This turned out to be really wasteful of ROM and Z80 RAM, so I simplified it to a more lightweight system that I call the clip playlist. Each entry in the clip playlist is only 3 bytes long: 1 byte for the clip’s Z80 bank number, and another 2 bytes for the clip’s address within that bank. Additionally, this system supports jumping to earlier or later parts of the clip playlist when the bank value is 0, so things like looping are possible.
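In byte terms, an entry would look something like this (the exact byte order is my guess, not gospel):

Code:
def clip_entry(bank, addr):
    # 3 bytes: the clip's Z80 bank number, then its 16-bit address within
    # that bank (low byte first here, which is a guess on my part).
    return bytes([bank, addr & 0xFF, addr >> 8])

def jump_entry(target):
    # A bank value of 0 turns the entry into a jump within the playlist,
    # which is what makes looping possible.
    return bytes([0, target & 0xFF, target >> 8])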

I also decided that I wanted an intuitive command system of some kind early on. This would be used for things like adjusting the playback speed on the fly, volume control, fading a song in or out, and pausing, stopping, or resetting the current song. Originally I had planned to let the Z80 take care of processing commands all by itself, but I later realized that it would degrade the playback quality too much to be viable. Instead, the plan is to have the Z80 only check for pitch updates and pausing, while a 68K-side command processor handles the more advanced parts.

Oh, I also originally wanted SONAR to handle sfx processing near the beginning of development, but uh… no. That was naive as hell to even dream of and I’m so glad I didn’t because trying to make that magic happen probably would have broken me.

Where Development Stands Now​

Development has been a little sparse this last week or so, what with Turkey Day and all that happening here in the States, but other than that, it’s going fairly smoothly.

Of course, there’s still a lot of work to be done. While I’ve roughly prototyped how a few commands are going to work, I have not started building the 68k side of SONAR yet. Additionally, I’m still using EASy68K scripts for my encoding workflow, which will not be acceptable for the majority of users, so I need to brush up on my high-level language skills again and write a proper all-in-one tool. More than likely, it’ll be written in C#, since it doesn’t immediately make me want to tear my long, beautiful hair out.

There are also some additional things I would like to at least investigate and put on the roadmap prior to the first public release of SONAR. I have a concept for an optimization option in the tool, and I want to see how viable it is. The idea is to break a song down into clips the size of the song’s individual beats, combine the duplicates, and stitch it back together, with a proper module file and clip folder provided as the output. You would need to know the exact BPM of the song for this, and it may not always save much space, but every byte counts, right?
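The core of the idea is simple enough to sketch (Python, and it assumes you’ve already worked out the beat length in samples):

Code:
def dedupe_beats(samples, beat_len):
    # Chop the song into beat-sized clips, merge exact duplicates, and keep
    # the playback order so a module file can be generated from it.
    clips, order, seen = [], [], {}
    for i in range(0, len(samples), beat_len):
        beat = bytes(samples[i:i + beat_len])
        if beat not in seen:
            seen[beat] = len(clips)
            clips.append(beat)
        order.append(seen[beat])
    return clips, order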

Perhaps an even more imminent plan is to create a custom version of Aurora’s new upcoming sound driver, Fractal Sound, that’ll fully support SONAR. This version of Fractal Sound will likely be an SFX-only driver, while SONAR handles the music. However, this does mean there might be a small wait on SONAR, even if it’s finished before Fractal Sound is. I would really prefer to release Fractal + SONAR as the full package, but I might change my mind.

I’ll also likely need to hunt down some beta-testers prior to the first release and write a fair amount of comprehensive documentation on how to work the damn thing. I may also put together my own sound test program (similar to the one Aurora’s making for Fractal) that people can just drop their converted songs into, assemble, and have a quick listen to. There’s probably even more that I’m forgetting to mention, too, but after writing this post for the last 3 days, I can barely be arsed to remember right at this moment. :V

All in all, the future is looking bright for this project and I cannot wait to share more of my journey with all of you as I continue to press forward. If you made it this far through my rambling and babbling, I’m both extremely grateful and, quite frankly, impressed that you were able to bear with me this whole time. I don’t exactly make it the easiest to follow everything I say, so if you have any questions or would like something explained more properly, feel free to ask.

Until next time, I wish you all a wonderful day, afternoon, evening, or whatever~
 
It’s time for another entry in this development diary. Since my last post, I’ve been able to get a decent amount of work done on the project, notwithstanding the chaos of the holidays and some additional hardships I’ve found myself having to push past (I’m currently kicking a breakthrough case of COVID as I write this, and that’s only the tip of the iceberg). If all goes to plan, I should have a polished sound test program put together by this time next month. Without further ado, though, I’d like to share what I’ve managed to add since last time.

Volume Control​

Previously, I’ve mentioned my plans to add support for volume control into SONAR, and I’m pleased to say it’s been fully implemented and works like a charm. It took a fair bit of experimentation to get it to a point that I'm satisfied with, but in the end, my efforts were worth it. Technically, I tried to add volume control much earlier on in the driver’s development, but to explain why that didn’t work out, I have to go on a tangent and give some backstory.

Back in 2018, I made my own modifications to Clownacy’s S1-friendly Sonic 2 Sound Driver. The main tweak was a custom 8-bit PCM playback loop that supported up to a 26,320Hz playback rate and added 16 levels of linear volume control. It was a fully-Z80 driver (it handled FM and PSG playback too), which meant I didn’t have enough space to go the simple route of using pre-calculated tables for volume control. Instead, I made 16 routines that would calculate it manually. I was very careful to count both cycles *and* bytes when designing these routines so that both were equal no matter what volume level was selected. Then, at the end of the driver’s V-int routines, if a volume update was queued, I used an ldir loop to move the instructions of the appropriate routine into a designated part of the PCM playback loop.

While I'm still proud of those early efforts to shove pcm volume control into a sound driver, I quickly found out it wasn't suitable for SONAR. Why? Because each volume control routine took 56 cycles to execute per sample processed! This would have lowered the audio quality significantly, so I quickly scrapped it and decided that I wouldn't do volume control in SONAR at all.

A few months later, while I was showing my code to Aurora, I realized I could move a few register functions around and handle volume control with tables in a similar fashion to DualPCM. This ultimately lowered the max sample rate from 24kHz to 21.7kHz, but I thought it was worth it. Development paused for a little longer with the semester ending and the holiday season ramping up, but recently I was able to sit down and implement volume control alongside the beginnings of the 68k-side command processing code.

My first attempt was actually something Aurora and I had cooked up to improve DualPCM’s method of volume control for her Fractal Sound driver. The idea was to use a clever combination of movep instructions to transfer an entire volume table within a single frame, instead of relying on the Z80 to load it in pieces over the course of a few frames. This approach actually works just fine with infrequent volume updates, but if you try to update the volume level every single frame, it causes noticeable interruptions in the PCM playback. That proved to be too much when paired with a volume fadeout.

The fix for this wasn’t too hard, however. Instead of decrementing the volume by 1 level each frame, I decrement the level by 8 and only send an updated volume table every 8 frames. This effectively left me with 16 levels of volume during fadeouts, and it actually sounded surprisingly smooth despite being a much less fine fadeout implementation. So, I committed my tweaks to the repository, played around with possible filters for the playback (more on that later), went to bed… and then woke up the next day and asked myself, “Okay, but why do I need 128 levels of volume control if 16 levels sound smooth enough?”

Later that day, I went ahead and reduced the volume control down to 16 levels. Initially, I had difficulty figuring out how many decibels each step of volume should decrease by to get the smoothest control possible. I eventually settled on -3dB per step, as it meant the last level could be fully mute without sounding abrupt. Incidentally, according to a few sources, a change of 3dB is about the smallest level difference easily detectable by most people listening to music or speech, so this was perfect!
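Generating a table like that is pretty simple math (a sketch, not the actual tool code; I’m assuming level 0 means full volume here):

Code:
def volume_table(level, steps=16):
    # Each step attenuates by 3 dB; the final step is forced fully mute.
    gain = 0.0 if level == steps - 1 else 10 ** (-3.0 * level / 20.0)
    # Samples are unsigned 8-bit, so scale around the 0x80 center point.
    return bytes(min(255, max(0, round((s - 128) * gain + 128)))
                 for s in range(256))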

Reducing the number of volume levels from 128 to 16 had another huge benefit: the data for the volume tables now took up far less ROM space. In fact, it took up so much less space that I could afford to preload the data for all 16 levels into the bottom half of the Z80’s RAM. This meant I no longer had to have the 68K transfer a new volume table with a series of movep instructions every time a volume update happened. This reduced the 68K overhead of volume updates by a *ton* and still leaves plenty of Z80 RAM for the sequence data (formerly called the “clip playlist”).

The only downside is that I initially had to sacrifice handling pause/unpause logic on the Z80 side in favor of volume updates; there’s not enough time during the V-int routine to process volume, playback speed, and pause updates all independently without audible stuttering in the music playback. However, I found that I could use the sign bit of the speed value as the Z80-side pause/unpause toggle. This way, there’s no need to do anything unconventional or painful like using the 68K to stop the Z80 for an indefinite amount of time.
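Hypothetically, the byte the Z80 checks would look something like this (the exact layout here is just my illustration):

Code:
def speed_byte(speed, paused):
    # Low 7 bits carry the playback speed; bit 7 doubles as the pause flag,
    # so a single byte write from the 68K covers both with one Z80-side check.
    return (0x80 if paused else 0x00) | (speed & 0x7F)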

There are a few more developments related to volume control that I could go over from here, but given their ties to more general concepts that warrant their own section and a desire to move on… we’re moving on. =P

The “Peak Roll-off” Filter​

In the last section, I very briefly mentioned experimenting with filters for SONAR’s output. To illustrate what I mean by this, it might help to take a look at the shape of a standard volume table in SONAR when you plot the raw data visually:
Linear.png
As you can see, the shape of a volume table’s output is linear. If an output sample is 0x00, the table returns a value of 0x00. If an output sample is 0x01, the table returns 0x01. Output sample is 0x34? The table returns 0x34. This may not seem very useful on its own, but if we adjust the table to keep the same shape at a lower amplitude…
Linear_low.png
…we can reduce the level of the output. In this example, 0x00 will correspond to 0x38, while 0xFF corresponds to 0xC7, and every value in between falls somewhere on the linear slope between them. This is how we’re able to control the volume of the output. However, the more inquisitive folks reading through this section are probably asking themselves the same question I did: what happens if we change the shape of this table to something like… this?
wave.png
The answer, in this example, is that it turns the output into unsalvageable mush, but that’s beside the point. I was fascinated by how shifting or distorting the shape of a volume table into something non-linear might manipulate the sound of the output. I didn’t go into this with high hopes of getting anything particularly useful out of it, but I figured there was no harm in a little play, either. I called these output filters.

Of the few shapes I applied to SONAR’s output for science, there was one that stood out, and it was this one:
boost.png
Unlike most of the filters I tested, this one didn’t seem to distort the audio in an unpleasant way. In fact, all it seemed to do was boost the perceived volume of the output by around 3dB. This actually makes a lot of sense when you view the waveform of a song like the one shown below. Notice how the waveform only peaks into the higher and longer sample values very rarely throughout the course of the entire song:
song.png
What this filter shape is essentially doing is smoothly rolling off those rare peaks and boosting the more central parts of the waveform, allowing the output to sound louder without the audible clipping that would’ve resulted from boosting the gain by an equivalent amount.

I was absolutely thrilled to discover this! For those who don’t know, sample playback tends to be fairly quiet on the Mega Drive by default, since the total level value of the PCM channel is locked to 16 on the YM2612. This was a very easy way to make the streamed music more audible without relying on any sort of debug register fuckery.

There was a downside, however. If this filter was applied in real time to the driver’s output, it would decrease the dynamic range of the output by a fair bit. This made signal noise just a bit more audible than I was comfortable with, so I knew I needed to go a slightly different route. Instead of applying the filter to the output, I decided it made more sense to run the raw 16-bit samples through the same kind of filter before they’re converted to 8-bit PCM and, eventually, SONAR’s 4-bit DPCM format. So, I modified my conversion script to apply this filter to the song as it converts, and it ended up working like a charm! The song got a decent volume boost with no discernible clipping, loss in dynamic range, distortion, or increase in signal noise. Sweet!
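I can’t exactly paste the real curve here, but a tanh-style soft saturator gives you the same general shape if you want to play with the concept (a hypothetical stand-in, applied to the 16-bit source the same way the real filter is):

Code:
import math

def peak_rolloff(s16, drive=1.4):
    # Boost the overall gain, then smoothly squash the rare peaks instead
    # of letting them hard-clip; drive controls how hard the knee is.
    x = s16 / 32768.0
    return int(math.tanh(drive * x) / math.tanh(drive) * 32767)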

Improvisational Branches​

Around a month ago, as I was working on the driver and showing the progress off to the Prism Paradise dev team, @ProjectFM suggested a unique idea for the musical direction of the project. His idea was to take full advantage of SONAR’s clip sequencing system by taking a series of loop sequences and playing them in a semi-random fashion. I thought this idea was really interesting and wanted to pursue it, but it meant I needed to find an intelligent way to code it in. After some light experimentation, I added a feature that allows this dynamic style of music playback to be done practically on SONAR.

I call the feature improvisational branching. An improvisational branch is essentially a sequence jump command that randomly picks between one of two paths, allowing for certain sections of a song to be “improvised” during playback. Additionally, improv branches can also be layered in sequence to pick between more than just two paths. This does mean you have to be mindful to jump back onto a “main path” at the end of each branching path when writing the sequence script, but this isn’t that difficult to manage with careful planning, as seen in the example below:

Code:
ExSong:      seqstart
        
.intro       seqclip        exIntro

.loop        seqclip        exMain0
             seqclip        exMain1
             seqclip        exMain0
             seqclip        exMain2
             seqimprov      .branch,.braAlt

.cont        seqclip        exMain7
             seqclip        exMain8
             seqclip        exMain7
             seqclip        exMain9
             seqjump        .loop

.branch      seqclip        exMain3
             seqclip        exMain4
             seqclip        exMain5
             seqclip        exMain6
             seqjump        .cont

.braAlt      seqclip        exMain3Alt
             seqclip        exMain4Alt
             seqclip        exMain5Alt
             seqclip        exMain6Alt
             seqjump        .cont

ExSong_End:  seqend

Originally, I was going to use the Z80’s r (refresh) register as a quick way to determine which path of an improv branch to take, since it’s a counter that could potentially be any number between 0-7F by the time it reaches the branch. However, I eventually wised up and realized this wasn’t going to be nearly random enough. Instead, I opted for a Z80 implementation of a 6502 RNG routine called AX+ Tinyrand8. I chose it because it’s a fairly quick and compact RNG routine that makes good use of self-modifying code and has a sizable period. It’s, of course, much slower on the Z80 than on the 6502, but it’s still about as fast as I could’ve hoped for.
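I’m not going to reproduce Tinyrand8’s internals here, but as far as the branch logic cares, any fast 8-bit generator with a decent period would do the same job. Something like this stand-in (not the real routine):

Code:
def lcg8(state):
    # Full-period 8-bit LCG (a=5, c=1 satisfies the Hull-Dobell conditions
    # for m=256); sample a high bit, since the low bits of an LCG are weak.
    return (state * 5 + 1) & 0xFF

state = 0x5A                   # seeded once at init
state = lcg8(state)            # advanced once per improv branch
take_alt = bool(state & 0x80)  # e.g. picks .braAlt over .branch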

68K-side Code​

The last thing I’d like to give an update on is the code for the 68K-side command processor. I, unfortunately, don’t have as much of a journey to document here because it all came together so quickly, but the good news is it’s basically ready!

The 68K code primarily handles volume, speed, and song playback updates, but that also encompasses a few features I haven’t touched on yet. For one thing, in addition to volume fade-ins and fade-outs, you also have the option to rev the speed of a song up or down in the same kind of way. I have also added support for volume and speed envelopes, so interesting effects like vibrato and tremolo are possible.

Lastly, I have been working on a set of helpful macros that will hopefully make the process of working with SEGA Mapper banks as painless as possible. One example is a macro called “defsong”, which lets users define a song ID corresponding to a label on the same line; it calculates which mapper bank the song sits in and its relative address within said bank, so you don’t need to manually sort out the bank number or address ahead of time. There is also a macro for warning users when a song they’ve included is larger than 1MB or is aligned such that it spans more than 2 mapper banks.
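The bank math itself is nothing exotic; something along these lines, assuming the standard 512 KB SEGA Mapper banks:

Code:
BANK_SIZE = 0x80000  # 512 KB per SEGA Mapper bank

def defsong(rom_addr):
    # Which bank a song starts in, plus its offset within that bank;
    # the real macro works this out at assembly time from the song's label.
    return rom_addr // BANK_SIZE, rom_addr % BANK_SIZE

def spans_too_many_banks(rom_addr, length):
    # The warning case: a song whose placement crosses more than 2 banks.
    first = rom_addr // BANK_SIZE
    last = (rom_addr + length - 1) // BANK_SIZE
    return last - first + 1 > 2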

It’s really cool stuff and, once a proper test program is put together and the source is shared, I can’t wait to hear what you all think of it!

Closing​

That’s gonna be all from me for now. This entry has run fairly long, but I hope you enjoyed reading it nonetheless. Maybe the next few will be a more reasonable length if I can remember to write them at more reasonable intervals of time. Only time will tell. See you all next time~
 
That part about DPCM — I’m like :) I didn’t understand any of it, but I like it!

My target is MK3’s DPCM. It has a 16-value table for decoding 4-bit DPCM into 8-bit PCM; no problems there. Then I started making an encoder from 8-bit PCM into 4-bit DPCM, but I have no knowledge about it. I’ve written some code… but the quality is not very good. Do you have any advice or knowledge about how our ancestors made those samples for MK3 back in the 90s? They got nice quality — the samples sound nice and clear, as much as is possible on the Mega Drive / Genesis. How do I recreate the same preparation for a sample before encoding to get the max possible quality? Increase the volume? Cut the high and low parts of the equalizer and bring the middle up a little?
 