
> Everything is serial so far. Got it. Here's the thing though: Plugin A processes packet n, Plugin B processes packet n-1, Plugin C processes packet n-2, [...] Plugin G processes packet n-6. Now you have 7 independent threads processing 7 independent data packets. As long as the queues between plugins are suitably small you won't introduce latency.

The process is realtime, so you cannot receive events ahead of time. It actually runs the way you describe, but you can only do so much processing within the length of a single buffer. The typical solution is to increase the buffer length, which increases latency, or to decrease it, which increases overhead.
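The tradeoff is easy to make concrete. A quick sketch (the 48 kHz sample rate and buffer sizes are just illustrative, not from the thread):

```python
# Buffer-size vs. latency tradeoff at an illustrative 48 kHz sample rate.
SAMPLE_RATE = 48_000  # samples per second

def buffer_latency_ms(buffer_frames: int, sample_rate: int = SAMPLE_RATE) -> float:
    """One buffer's worth of latency, in milliseconds."""
    return buffer_frames / sample_rate * 1000

def callbacks_per_second(buffer_frames: int, sample_rate: int = SAMPLE_RATE) -> float:
    """How often the audio callback fires -- a proxy for scheduling overhead."""
    return sample_rate / buffer_frames

for frames in (64, 256, 1024):
    print(f"{frames:5d} frames: {buffer_latency_ms(frames):6.2f} ms latency, "
          f"{callbacks_per_second(frames):7.1f} callbacks/s")
```

A 64-frame buffer gives ~1.3 ms of latency but 750 callbacks per second; 1024 frames cuts the callback rate to ~47/s at the cost of ~21 ms of latency per buffer.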

> Each pedal processes its data concurrently (but not parallel with) with every other pedal.

That's how it works.
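The pipelined arrangement from the quote can be sketched with bounded queues between stages. This is a toy model only (real audio threads avoid locks, allocation, and blocking queues entirely); the three lambdas stand in for plugins:

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """One 'plugin' in the pipeline: pull a packet, process it, push it on."""
    while True:
        packet = inbox.get()
        if packet is None:        # shutdown sentinel
            outbox.put(None)
            return
        outbox.put(fn(packet))

# Three toy plugins in series; maxsize=1 keeps queueing latency bounded,
# as the quoted comment suggests.
q0, q1, q2, q3 = (queue.Queue(maxsize=1) for _ in range(4))
plugins = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
threads = [threading.Thread(target=stage, args=(f, i, o))
           for f, i, o in zip(plugins, (q0, q1, q2), (q1, q2, q3))]
for t in threads:
    t.start()

for n in range(5):                # feed packets n, n+1, ...
    q0.put(n)
q0.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
for t in threads:
    t.join()
print(results)  # each packet n comes out as (n + 1) * 2 - 3
```

While packet n sits in the first stage, packet n-1 is in the second and n-2 in the third, which is exactly the concurrency the parent describes; the constraint from the realtime side is that every stage must still finish its packet within one buffer period.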

> The single core performance of these CPUs are absolutely atrocious, but the devs make it work for latency sensitive activities like gaming.

I am talking about realistic simulations. You can certainly run simple models without added latency; that's not a problem.

> Only if it is evicted from the L3 cache, and the 3950X has 64MB of it. That's over a second(!!) of latency at 16 channel+192kHz+32 bits/sample audio.

That's nothing. A typical chain can consist of dozens of plugins across dozens of channels. A case as simple as 16 channels with simple processing is not a problem.
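For scale, the data rate in the quoted example is straightforward arithmetic:

```python
# Data rate of the quoted stream: 16 channels, 192 kHz, 32-bit (4-byte) samples.
channels = 16
sample_rate = 192_000
bytes_per_sample = 4

bytes_per_second = channels * sample_rate * bytes_per_sample
print(f"{bytes_per_second / 1e6:.1f} MB/s")   # ~12.3 MB/s

# Seconds of that audio that fit in a 64 MB L3 cache -- ignoring that the
# cache also has to hold code, plugin state, and everything else running.
l3_bytes = 64 * 2**20
seconds_in_cache = l3_bytes / bytes_per_second
print(f"{seconds_in_cache:.1f} s")
```

About 12.3 MB/s, so 64 MB holds roughly 5 seconds of raw samples. The catch is that the working set is not just the samples: every plugin instance carries its own state (impulse responses, oversampled buffers, lookup tables), multiplied across dozens of plugins and channels.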

> Speaking of channels, that seems like a natural opportunity for parallelism.

That works pretty well. If you are able to run your single chain in realtime, you can typically run as many chains as you have available cores.
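A minimal sketch of that channel-level parallelism, assuming each chain is an independent function of its own channel's buffer (`run_chain` is a stand-in, not a real plugin API, and a real engine would pin lock-free workers to cores rather than use Python threads):

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(samples):
    """Stand-in for a full plugin chain applied to one channel's buffer."""
    # Here just a gain stage; a real chain would run many plugins in series.
    return [s * 0.5 for s in samples]

def process_block(channel_buffers, pool):
    """Run every channel's chain in parallel; chains share no state."""
    return list(pool.map(run_chain, channel_buffers))

channel_buffers = [[1.0, 2.0], [4.0, 8.0]]
with ThreadPoolExecutor(max_workers=len(channel_buffers)) as pool:
    out = process_block(channel_buffers, pool)
print(out)  # [[0.5, 1.0], [2.0, 4.0]]
```

This scales cleanly precisely because the chains don't depend on each other, which is why channels parallelize well while a single serial chain does not.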


