Hacker News

Oh jeeze, I don't think there'd be anything simple about having to divine facts about user space by the code flow graph near the interrupted instruction on basically every thread switch.


You don't need to worry about the CFG here, a spinlock is literally just a ~4-instruction loop (or a few more, depending on the form you use). All you need is to handle a few common codegen patterns. Like if you see the instruction pointer in the middle of a mov + xchg + test + jne sequence then you know it's a spinlock. If you don't detect it in some canonical form then you're back where we are now; whatever. It's not complicated.
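For context, here is a minimal sketch of the kind of acquire loop being described (my own illustration, not any particular kernel's heuristic): a C11 test-and-set spinlock whose body a compiler typically lowers to a short mov/xchg/test/jne sequence on x86, which is the canonical pattern a kernel could plausibly recognize at the interrupted instruction pointer.

```c
#include <stdatomic.h>

/* A minimal test-and-set spinlock. On x86, atomic_exchange typically
 * compiles to an xchg instruction, and the whole acquire loop is only
 * a handful of instructions -- the canonical pattern described above. */
typedef struct { atomic_int locked; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    /* Spin until the previous value was 0, i.e. the lock was free. */
    while (atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
        ;
}

static void spin_unlock(spinlock_t *l) {
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```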


The issue with spin locks isn't the lock sequence itself, it's the code between it being locked and unlocked.


I'm not sure which issue you're referring to, but the one I was trying to address was the one above about the spinning itself causing a priority inversion: "you now have a descheduled thread that owns the lock and a spinning thread waiting to acquire it, which is a recipe for priority inversion."


A) It's not just a priority inversion; the problem happens with simple round-robin schedulers too.

B) The problem is the descheduled thread that holds the lock and isn't in the lock sequence. Simply killing the time slice of the spinning thread, without knowing what it's waiting on, may lead to even worse behavior.

The answer here is simple: just don't use pure spinlocks if you can be preempted.
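To make that advice concrete, a sketch (my own example, assuming POSIX threads; on Linux an uncontended pthread_mutex stays in user space and a contended one sleeps in the kernel via futex): use a blocking mutex so a waiter is descheduled instead of burning its timeslice spinning on a preempted owner.

```c
#include <pthread.h>

/* Sketch: a blocking mutex instead of a pure spinlock. If the lock
 * holder is preempted mid-critical-section, waiters block in the
 * kernel rather than spinning, so no timeslice is wasted waiting
 * on a descheduled owner. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* sleeps if contended */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}
```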


> it's not just a priority inversion.

Did I claim it was...?

> Simply killing the time slice of the spinning thread without knowing what it's waiting on may lead to even worse behavior.

It's not obvious to me how likely this is compared to the other case, but if you're trying to make a case for why a kernel doing that would result in worse behavior in typical cases, it would probably help to explain this.

> The answer here is simple: just don't use pure spinlocks if you can be preempted.

I don't get why you're arguing with me on this. I wasn't telling anyone to use spinlocks, nor claiming this is the one and only problem with spinlocks. I was just saying a kernel could be a little smarter about a particular case by examining the instruction sequence.


Your idea doesn't solve the problem. By the time the lock acquirer is preempted and the kernel has a chance to do something clever, a significant fraction of a timeslice has already been wasted.


Maybe hardware support would make it viable, at least at an academic level.



