
It's not about quantization error (which is quantifiable as noise) or the kind of nonlinearities you're talking about, but timing concerns.

It's basically the difference between naive automation in a DAW and sample-accurate automation: it's not about the granularity of your changes, but the fact that sample accuracy lets your system reproduce the same thing every time. Not so many years ago, online renders in certain DAWs were perceptually and measurably different from offline renders because of things like this - you want to be able to tell a user that what they hear while they work is the same thing they'll get when they go back and render.
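A toy illustration of the difference (hypothetical code, not any particular DAW's API): block-rate automation samples the parameter once per buffer, so the rendered output depends on where block boundaries fall; sample-accurate automation evaluates the ramp at every sample, so it doesn't.

```python
# Toy comparison: block-rate vs sample-accurate gain automation.
# Hypothetical sketch - the names and the linear ramp are assumptions,
# not any real DAW's implementation.

def ramp(sample_index, total_samples):
    """Linear automation ramp from 0.0 to 1.0 over the render."""
    return sample_index / total_samples

def render_block_rate(total_samples, block_size):
    """One gain value per block: the result shifts if block_size changes."""
    out = []
    for start in range(0, total_samples, block_size):
        gain = ramp(start, total_samples)        # sampled once per block
        out.extend(gain for _ in range(min(block_size, total_samples - start)))
    return out

def render_sample_accurate(total_samples, block_size):
    """Gain evaluated per sample: identical regardless of block_size."""
    out = []
    for start in range(0, total_samples, block_size):
        for i in range(start, min(start + block_size, total_samples)):
            out.append(ramp(i, total_samples))   # sampled every sample
    return out

# Sample-accurate renders match for any block size; block-rate ones don't.
a = render_sample_accurate(1024, 64)
b = render_sample_accurate(1024, 128)
c = render_block_rate(1024, 64)
d = render_block_rate(1024, 128)
```

Here `a == b` while `c != d`, which is the "offline render sounds different from what you heard" problem in miniature.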

With both MIDI 1.0 and 2.0 that's rather difficult when factoring in live input, because your production system has wack drivers on top of a non-realtime OS and can't provide guarantees. MIDI 2.0 takes a good step in that direction with its synchronization features, but I have doubts it will be utilized to the point where we can guarantee received events are replicated as sent, given the accuracy of reception and of clock synchronization. Maybe we'll get it, idk.



> Not so many years ago, online renders in certain DAWs were perceptually and quantifiably different than offline renders because of things like this

and how has moving from 7 bits to 32 bits helped with this? rendering the changes in values across 32 bits is going to take a bit more cpu power than doing it across 7 bits. that's not really relevant here.

moving from 7 bits to 32 bits allows for smoother transitions - which is fantastic, but remember that the sound coming out is the culmination of a lot of different factors: having more bits doesn't change the 1+1 behavior.
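To make the resolution jump concrete, here's one common way to upscale a 7-bit MIDI 1.0 controller value to 32 bits by repeating the bit pattern down the word, so min maps to min and max maps to full scale. This is a sketch; the actual MIDI 2.0 translation rules are defined in the spec (and are center-preserving), so treat this as illustrative only:

```python
def upscale_7_to_32(v):
    """Upscale a 7-bit value (0-127) to 32 bits by bit repetition.
    Illustrative only - not the MIDI 2.0 spec's official translation."""
    assert 0 <= v <= 0x7F
    return ((v << 25) | (v << 18) | (v << 11) | (v << 4) | (v >> 3)) & 0xFFFFFFFF

print(hex(upscale_7_to_32(0)))    # 0x0
print(hex(upscale_7_to_32(127)))  # 0xffffffff
```

Smoother sweeps, same arithmetic underneath - which is the point being made above.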

> With MIDI 1 and 2.0 that's rather difficult when factoring in live input due to the fact that your production system has wack drivers on top of a non-realtime OS and can't provide guarantees.

great, so possibly adding more instability. I guess that's a "change", but probably not a real one unless the underlying protocol is changed - otherwise the outcome is the same: nondeterministic results. and I think I'm ok with that.

midi 2.0: great, but don't expect much to be different - the music world has moved beyond midi (again); it will be interesting to see how midi adapts past 2.0.


I'm not arguing with you, just agreeing in a different way :D

Nonlinearity is fun. I'm a big fan of it, and have spent a lot of time on the DSP side developing nonlinear processing that can be predictable and repeatable, plus all the garbage associated with making it sound good.

My issue is more that MIDI 2.0 addresses only part of the problem. If I press N keys at the same time, the N messages should carry the same timestamp and the synth should be able to render them at the same time - but I'm doubtful that systems will handle this deterministically, both when recording the incoming messages and when replicating them the way the performer played them.
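MIDI 2.0's UMP format has Jitter Reduction (JR) Timestamp utility messages aimed at exactly this: a sender can prefix a group of simultaneous events with one shared timestamp. A rough sketch of stamping a chord that way (the bit layout follows my reading of the UMP format and should be treated as an assumption, not a reference encoding):

```python
# Hypothetical sketch: prefix simultaneous MIDI 2.0 note-ons with one
# shared JR Timestamp utility message, so the receiver can schedule
# them at the same instant. Bit layouts are my reading of UMP - verify
# against the spec before relying on them.

def jr_timestamp_word(group, ticks):
    """32-bit UMP utility message: mt=0x0, status=0x2 (JR Timestamp),
    16-bit sender clock time in the low bits."""
    return (0x0 << 28) | ((group & 0xF) << 24) | (0x2 << 20) | (ticks & 0xFFFF)

def note_on_words(group, channel, note, velocity16):
    """64-bit MIDI 2.0 channel voice note-on (mt=0x4, opcode 0x9),
    with 16-bit velocity and no attribute data."""
    w0 = ((0x4 << 28) | ((group & 0xF) << 24) | (0x9 << 20)
          | ((channel & 0xF) << 16) | ((note & 0x7F) << 8))
    w1 = (velocity16 & 0xFFFF) << 16
    return [w0, w1]

def stamp_chord(group, channel, notes, velocity16, ticks):
    """One JR Timestamp, then all note-ons: the whole chord shares one instant."""
    words = [jr_timestamp_word(group, ticks)]
    for n in notes:
        words += note_on_words(group, channel, n, velocity16)
    return words

# C major triad, all three note-ons stamped with the same tick.
chord = stamp_chord(group=0, channel=0, notes=[60, 64, 67],
                    velocity16=0x8000, ticks=1234)
```

The protocol can express "these N events are simultaneous"; whether drivers and hosts on a non-realtime OS actually deliver and replay them that way is the open question above.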




