Challenging would be an understatement. I had to create an editor from scratch in canvas to support the inline visuals, then a DSL that generates the code for each permutation of audio and scalar parameters, then the language itself, which is Turing complete and controls the whole thing in a VM, choosing the optimal permutation for each case. All edits and recompilation have to complete in a few ms so as not to disrupt the experience, all across a thread boundary (the WebAudio AudioWorklet). The audio engine is in WebAssembly, as that was the only way to get the performance needed. You can check out the code[0], the project is open-source!
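To make the permutation idea concrete, here's a minimal sketch of that kind of codegen; it's not the project's actual code, and the names (`genKernel`, `ParamKind`) are made up for illustration. Each parameter is either a scalar (constant over the render block) or an audio-rate signal (one value per sample). With N parameters there are 2^N permutations, and generating one specialized kernel per permutation avoids branching on every parameter inside the per-sample loop:

```typescript
// Hypothetical sketch of per-permutation codegen, not the app's actual code.
type ParamKind = 'scalar' | 'audio'

interface ParamSpec {
  name: string
  kind: ParamKind
}

// Build the source for one specialized kernel: scalar params are read once
// outside the sample loop, audio-rate params are indexed inside it.
function genKernel(params: ParamSpec[], body: string): string {
  const scalars = params.filter(p => p.kind === 'scalar')
  const audios = params.filter(p => p.kind === 'audio')
  return `(out, n, ${params.map(p => p.name).join(', ')}) => {
  ${scalars.map(p => `const ${p.name}_v = ${p.name};`).join('\n  ')}
  for (let i = 0; i < n; i++) {
    ${audios.map(p => `const ${p.name}_v = ${p.name}[i];`).join('\n    ')}
    out[i] = ${body};
  }
}`
}

// One permutation: freq is audio-rate, amp is a plain scalar.
const src = genKernel(
  [{ name: 'freq', kind: 'audio' }, { name: 'amp', kind: 'scalar' }],
  'Math.sin(freq_v * i) * amp_v'
)
const kernel = eval(src) as
  (out: Float32Array, n: number, freq: Float32Array, amp: number) => void
```

In a real engine you'd compile these ahead of time (or to WebAssembly) rather than `eval` strings, but the specialization principle is the same: the VM picks the kernel matching the current scalar/audio shape of the inputs.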
Try tweaking the accent multiplier from .5 down to .1 - you can get there, but it requires a lot of value tweaking. There's no singular TB-303 sound, but the components are there.
A music livecoding app[0]. It's open-source[1] and has been in the works for years in various iterations, but I've finally settled on the format and delivery. I'm now trying to make it as newbie-friendly as possible with tutorials[2], videos[3], and ready-made instruments[4] to begin with. I'm also thinking of expanding it into a general-purpose creative editor in a standalone Electron app and bundling in other livecoding languages as well, for graphics too.
This looks amazing! Well done! Are these visualizers open-source somewhere? I'd love to use something like that in my music livecoding project[0] (it's open-source[1]). I've only now started adding a little support[2] for shaders, but only very simple stuff so far. Yours look incredible! Are you interested in collaborating, maybe?
Thanks! A repo for v3 is coming; for v2 you can find it here: https://github.com/stagas/lm2 - I just didn't have the time to do it, and this launch is a beta. For the actual launch there will be a repo for v3 as well!
Thank you! The idea is to build full tracks, hence the timeline and minimap in the app, and built-in tools like labels and timeline functions for making arrangements over time. The DSP is state-of-the-art: I'm doing codegen of all the permutations of scalar/audio parameter inputs, so the execution is always optimal. There is still room for improvement, of course.
If it were clipping it would show as red in the amplitude visualizer. However, the bass does have a lot of energy, and that might be the cause of the clipping you're hearing. You can multiply it by a factor to reduce its amplitude.
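As a rough sketch of what "multiply by a factor" means at the sample level (this is generic DSP, not the app's internals; `applyGain` and `clips` are hypothetical names): samples outside [-1, 1] will clip on output, and scaling the whole signal down by a constant gain brings the peaks back inside that range.

```typescript
// Scale every sample by a constant gain factor.
function applyGain(samples: Float32Array, gain: number): Float32Array {
  const out = new Float32Array(samples.length)
  for (let i = 0; i < samples.length; i++) out[i] = samples[i] * gain
  return out
}

// True if any sample exceeds the [-1, 1] range and would clip.
function clips(samples: Float32Array): boolean {
  for (const s of samples) if (s > 1 || s < -1) return true
  return false
}

// A hot bass line peaking at -1.4 clips; scaling by 0.6 tames it.
const bass = new Float32Array([0.2, -1.4, 0.9])
const tamed = applyGain(bass, 0.6)
```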
[0]: https://github.com/loopmaster-xyz