Aurora SDK/Development questions

I just started digging into the Aurora SDK, and am loving it! I haven’t done any Daisy development before, so this is super exciting for me, but I also have a ton of questions. I am interested in making a chord-based pseudo-polyphonic oscillator. If you want to see my starting code, I forked the AuroraSDK repo here and added a new example here.

Would love any help with these questions (though some might be super-noob Daisy questions):

  • So far I have done all my processing in the AudioCallback included in some of the examples. This works, but it introduces a good amount of latency. For example, even when twisting the frequency input, there is a noticeable delay. I assume I am doing something wrong here? How do I get it to update immediately? Feels like maybe I am using this callback the wrong way…
  • Don’t get me wrong, the USB drive is AWESOME, and I love the ease of it, but is there a faster way than compile, copy to USB, eject USB, move USB to module, boot module? Maybe some sort of dual-writable USB drive that I could leave attached to both my computer and the module? I looked around a bit and didn’t see anything.
  • What’s the preferred way to debug using a Daisy platform like this? I just need simple print statement level debugging…

THANK YOU to QuBit so much for adding this SDK platform to Aurora. As a developer, this is amazing.


PS: I ordered several blank SDK covers for the module, love that. However, I wish the inputs/outputs were blank also! Who says I have to have a left/right output? I was planning on doing a chord/individual output…would be nice to have that labeled.

PPS: Made a little TikTok showing off what I have so far here


Just a little update.

  1. I woke up this morning and rebuilt everything and my latency issues went away…so not sure why I was getting that delay in my processing. Seems fast now.

  2. Added/tightened up my oscillator code, adding an individual output & offset. The left output is a ‘chord’ whose size is determined by the time knob (1 to 5). The root is set by the frequency (warp knob), and it then adds the 3rd, 5th, 7th, and 9th depending on the chord size (right now I just default to the minor scale because I like sad songs :sob:). The right output is a single oscillator that defaults to the root, but the reflect knob offsets it to the 3rd, 5th, 7th, or 9th. I use the LEDs to visually show the chord size (in blue), and a white LED to show the current individual offset.

  3. For debugging, it looks like there is a Daisy Seed with a micro-USB input, so I assume I could get the ST-LINK V3 debugger and use that for an easier/faster development setup? But the Daisy Seed in Aurora says ‘Daisy Seed 2’? What’s that all about? The Electro-Smith shop sells the Daisy Seed, but it looks different. Newer version? Older version?

Anyway, I’ll keep cranking away on this. Next steps for my little oscillator project:

  • Add v/oct input for the root note
  • Add a gate input that changes the offset of the individual output - would love to make this like an arpeggio output, maybe have different algorithms for what the next offset would be (up, down, up/down, random, etc)


BTW: The LED outputs are a total game changer for a platform like this.

Does anyone have any tips on using the AuroraSDK to send in v/oct to the warp input?

I’m just not sure how the calibration data intersects with functions like GetCVValue() or GetKnobValue(). The GetCVValue() documentation notes that “when returning warp CV from this function it will be with no calibrated offset, and is identical to reading from the AnalogControl itself.”

So far I have been setting the oscillator frequency using the warp knob like this:

float freq = fmap(hw.GetKnobValue(KNOB_WARP), 10.0, 1500.0, Mapping::LOG);

But now I want to take the CV input…do I just use GetCVValue()? Do I need to map or offset it somehow with the calibration data?

Thanks in advance.


Been making good progress…put up a video just now with a walkthrough of my oscillator firmware for Aurora: Qu-Bit Aurora SDK Patch Walkthru - YouTube


Hey @pj4533 !

Awesome stuff; super cool to see what you’ve done so far!!

I think we’ve spoken a little already about the debugging on slack, but for all intents and purposes, the Daisy Seed2 DFM on the back of the Aurora can be treated as an ordinary Daisy Seed from the debugging perspective.

The Aurora uses the Daisy Bootloader, which can be debugged as normal.
However, the program itself can’t be flashed over the ST-Link when using the bootloader. So you’ll need to update the module via USB on the front panel, and then connect over the ST-Link using the .elf file for the firmware you want to debug.

When using VS Code in the Aurora SDK environment, this can be seen on these lines of the launch.json file.

I usually recommend anyone interested in debugging their own program verify their setup is working correctly by stepping through the Blink program first since the SDK repo is pre-configured to do that.

The only limitations that come up are with using many breakpoints, or with having breakpoints set prior to launching the debugger.
If you run into any issues (errors in the GDB window), you can usually resolve them by clearing all of your breakpoints in the debug toolbar and restarting the debug session.

Regarding your question on using the V/OCT input for your oscillator, the Hardware class has a helper function for this that accesses the calibration data loaded at start up.

Calibration is done before any Aurora is shipped, and should automatically load during the Hardware::Init function.

For example,

// with an aurora::Hardware object named "hw"
// . . .
// Get a root note in the range of 12-84
float my_root_note = 12.0f + (hw.GetKnobValue(KNOB_WARP) * 72.0f);
// Get V/Oct CV in the range of midi notes 0-60
float my_voct_cv = hw.GetWarpVoct();
// Combine to limit within a reasonable range (you could do this in frequency or midi notes)
float note = daisysp::fclamp(my_root_note + my_voct_cv, 0.0f, 127.0f);
// convert to frequency
float freq = daisysp::mtof(note);
// and use with your oscillator, or anything else as desired:

Hope that answers your questions, and I’m excited to see how your project progresses!!

Thanks for the reply! I am just getting back from vacation, hope to dive back into work on my little oscillator project this week.

When you say the program can’t be flashed with the ST-Link, is that because the Daisy in Aurora doesn’t have the headers soldered to connect it to the ST-Link, or is it something inherent to the design of Aurora?

Re: the v/oct stuff, THANK YOU! I didn’t see that mtof function, and just converting to MIDI feels weird, but whatever works!

Thanks again, I’ll post when I have something interesting to show again.

btw: are we going to see an SDK for the Nautilus? I’d love to play with that 3rd output and those buttons/LEDs!

So the ST-Link is physically capable of flashing the Aurora, but we don’t have any software set up to programmatically write the bootloader applications, or to ignore an older binary on the USB drive when the program starts.

So there are two approaches you can take:

  1. Upload your app via the USB drive, and then connect the ST-Link, and run the launch debug command with the .elf file generated at the same time as the .bin file loaded.
  2. Build your application as a non-bootloader app, which the ST-Link can program directly, and connect to the Aurora while your program is running.

The latter requires a different linker script and disables the firmware-update functionality; it also leaves less memory available for your program.

Note: If you do choose the latter, it will remove the Daisy bootloader from the internal flash, and the bootloader will have to be reloaded before you’re able to put the official Aurora firmware back on your module.
You can reinstall the Daisy bootloader by running make program-boot with the Daisy in the System bootloader, or by connecting to the Daisy Web Programmer and selecting “Flash Bootloader Image” from the “Advanced” drop-down.

Re. MIDI Notes – it’s definitely a weird change if you’re used to working directly with frequency, but I personally find it a lot easier (and computationally cheaper) to manage musical intervals, chords, scales, etc. with MIDI notes, and then convert to frequency as a last step.

Depending on what you’re doing (multiple octaves, etc.) doing math directly on frequency may still be cheaper. So your mileage may vary, and you may want to convert to frequency sooner than later.

Re. SDK for Nautilus

It’s definitely on the agenda, but it’ll probably be at least a few weeks. There are a few things from the official firmware that need to be pushed back to libDaisy, and DaisySP before we can release a public SDK.
Knowing someone wants it definitely pushes it higher up into the queue :wink:


Made an account to say that I’m interested too!
Any news on this?


Not sure where @pj4533 is at on developing this firmware, but if you want to check it out in its current state I built a .bin file from his example for you to try!

Here is the .bin file:
PeejOsc.bin (95.3 KB)

Just place it onto your Aurora’s USB drive, and make sure it’s the only .bin file on the drive.
Then put the drive into your Aurora and turn on the module; the firmware will update automatically!

Here’s a video from October of @pj4533 walking through the oscillator:


Thanks, @michael
Sorry, I was unclear. I meant an SDK for Nautilus.
Any developments on that?

Hello all,
I understand the many reasons, but in a way it is unfortunate, that the original “spectral reverb” source code is not included in the SDK. I am not saying I would fully understand it. :slight_smile: I guess, it is far more complex than the ringmodulator example, with all the FFT’ing, forwards and backwards. And obviously it would make egg hunting much less fun.

As I like the Aurora as it is for the most part, the thing I am still having the hardest time wrapping my head around is the “blur” parameter.

In a way I would expect “frequency blur” to leak input frequencies into neighboring bands, effectively producing more and more colored noise and ending up with white noise at the full-CW position. But what I perceive when I turn “blur” up is more of a resonator.

This could be a side effect of how “blur” is applied, which is what I currently cannot evaluate. If the blur is applied directly after the FFT, and the result of blurring is then quantized to a fixed number of fixed frequency bands, then these might be more uniformly “excited” and thus produce a continuous resonance of all bands.

So in a way, what I would be interested to work on would be a modification of the original Aurora code to let “blur” diffuse the frequencies more, instead of increasing a fixed spectral resonance. Or I would like to be able to switch the blur CV input to something that allows me to shift the spectrum, so the end result does not always sound the same almost regardless of the input.

I do not know if I am imagining this correctly. Maybe I am actually hearing the window function of the FFT at high blur amounts.

What do you guys think?


I wonder if this would address the issue where high blur seems to turn all inputs into mush at the same perceived pitch.

Seconding this. I would love to see something that helps address the issue where high blur turns all inputs into mush at the same perceived pitch, or at least one of four depending on FFT size.

@andrewikenberry - any news on Aurora firmwares? I’m still loving FDN Verb and Aurora main at certain settings, but it’s so easy to get to uncomfortable extremes with Blur. If the firmware remains closed-source, perhaps Shift + Blur could do… something that helps avoid this?