> For addition, subtraction, and multiplication, you just take one operand, calculate the largest possible second operand for that data type which won't overflow, and check that the actual second operand doesn't exceed it
Sure, but that calculation is CPU-dependent, since it depends on how the underlying hardware represents signed integers. And because signed overflow is undefined behavior in C, there is no portable way to detect it after it has happened, as I'm sure you know.
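For concreteness, the pre-check style the quote describes looks roughly like this for `int` addition. This is just a sketch with my own naming; the point is that the test happens *before* the addition, so no overflow ever occurs:

```c
#include <limits.h>
#include <stdbool.h>

/* Hypothetical helper (my naming): report whether a + b would
   overflow int, checked BEFORE the addition so no undefined
   behavior occurs. INT_MAX - b (or INT_MIN - b) is the "largest
   possible second operand" from the quote. */
bool add_would_overflow(int a, int b)
{
    if (b > 0)
        return a > INT_MAX - b;  /* would exceed INT_MAX */
    if (b < 0)
        return a < INT_MIN - b;  /* would fall below INT_MIN */
    return false;                /* adding 0 can never overflow */
}
```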
I implemented a simplistic BIGNUM library in C once (that's where I pulled that expanding multiply code in the other comment from). The only truly portable way to do that is to make your bignums sign-magnitude and use exclusively unsigned arithmetic on them. That's what I was envisioning in my original point about performance degradation due to overflow checking.
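To illustrate the sign-magnitude approach: here is a single-limb sketch (my own example, not the library's actual code) where the sign lives in a flag and all arithmetic is done on an unsigned magnitude, where wraparound is well defined:

```c
#include <stdbool.h>

/* Hypothetical single-limb sign-magnitude number: a sign flag plus
   an unsigned magnitude. A real bignum would use an array of limbs
   and propagate carries/borrows between them. */
struct sm { bool neg; unsigned mag; };

/* Add two sign-magnitude values using only unsigned operations. */
struct sm sm_add(struct sm a, struct sm b)
{
    struct sm r;
    if (a.neg == b.neg) {         /* same sign: add magnitudes */
        r.neg = a.neg;
        r.mag = a.mag + b.mag;    /* multi-limb version carries here */
    } else if (a.mag >= b.mag) {  /* differing signs: subtract smaller */
        r.neg = a.neg;
        r.mag = a.mag - b.mag;
    } else {
        r.neg = b.neg;
        r.mag = b.mag - a.mag;
    }
    if (r.mag == 0)
        r.neg = false;            /* normalize -0 to +0 */
    return r;
}
```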
Realistically, of course, most CPUs these days are two's complement, and you can make signed overflow defined by compiling with "-fwrapv", which I would guess is what you're doing.
Yes, I'm assuming two's complement, but there's not a lot of hardware around these days that isn't. I'm writing a library for something that already assumes two's complement elsewhere.
If the code had to be portable to one's-complement hardware, I would add special cases for it. The big problem would be laying my hands on such hardware for testing; if you haven't tested it, how do you know it works?
As for "-fwrapv", it's not portable either, and I need to cover both signed and unsigned math. It's also not compatible with what I need to link to (I've gone down this road already). I also need to cover the largest native word sizes, so the trick of using a larger word size won't work for me.
I'm only dealing with arrays of numbers though, so I can often amortize the checking calculations over many array elements instead of doing them each time. This is an example of knitting the checks into the overall algorithm instead of using a generic approach.
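One way that amortization can look (a sketch of my own, under the assumption of summing an array of unsigned values): derive a single bound up front for the whole array instead of testing every addition individually:

```c
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch: sum an array with one amortized overflow check.
   If every element is <= max_elem and n * max_elem fits in an
   unsigned int, the running sum can never wrap, so the hot loop
   needs no per-addition bound calculation. */
bool sum_array_checked(const unsigned *a, size_t n,
                       unsigned max_elem, unsigned *out)
{
    unsigned sum = 0;
    size_t i;

    /* One up-front check replaces n per-addition checks. */
    if (max_elem != 0 && n > UINT_MAX / max_elem)
        return false;            /* n * max_elem could exceed UINT_MAX */

    for (i = 0; i < n; i++) {
        if (a[i] > max_elem)
            return false;        /* element violates the assumed bound */
        sum += a[i];             /* safe: sum <= n * max_elem */
    }
    *out = sum;
    return true;
}
```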
As things stand, there's currently no one-size-fits-all answer to this problem in most languages.
I do like how Python has handled this: integers are arbitrary-precision and simply can't overflow. That comes at the expense of performance, though. What this type of solution needs is an option for unchecked native arithmetic for cases where you need maximum speed.