This story reminded me of the time I was trying to exploit a web app as part of a bounty programme.
Unfortunately the application was so unstable that any attempt at SQL injection, or really sending any kind of malformed request, would simply crash the entire site for hours, presumably until an admin came and restarted the server. DoS wasn't included in the programme so I never won a bounty. I'm sure it could have been exploited, but it was simply so rickety and shoddy that I couldn't figure out how.
Reminds me of the time at Sun when I tried to use XBugTool to report a bug against itself, and it got extremely angry at me, and trapped my input focus in the bug description text editor, where I vented my frustration.
I once took out the internal telephony network of a large bank's head office simply by scanning the network for devices. They weren't tracking IP addresses (or anything else) in any form of database, or even a spreadsheet, and I was tasked with finding and updating every device on a particular network. So I used nmap and did a subnet scan to find anything that would respond on common ports.
It all went down in a heap and I felt very guilty for a few minutes until I realised that this infrastructure was so fantastically unstable that a TCP half-open (just a SYN packet!) could kill it stone dead. That's not my fault.
You can't blame the postman if knocking on the front door demolishes your building.
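For anyone curious what that kind of sweep looks like: below is a minimal sketch of probing a host's common ports in Python. Note this is a full connect() scan, which completes the TCP handshake; nmap's default `-sS` SYN scan is gentler still, sending only the SYN and never finishing the handshake, and even that was enough to kill the bank's kit. The function name and port list here are just illustrative.

```python
import socket

# A few commonly-open TCP ports (illustrative, not nmap's actual default list)
COMMON_PORTS = [22, 23, 80, 443, 445, 3389]

def scan_host(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
        finally:
            s.close()
    return open_ports
```

To sweep a /24 you'd just loop this over 192.168.1.1–254. The point of the story, of course, is that a stack this polite (let alone a bare SYN) should never take anything down.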
At a previous job, I had to do a penetration test on a platform, and I had the same thing. Any SQL errors would just crash the back end entirely and I'd have to wait for them to bring it back up manually, which could take a long time since I was on the US West coast and they were based in the UK.
They had plenty of other security issues too (I easily gained a root shell via template injection, plus multiple XSS issues, CSRF, basically everything in the OWASP Top 10); to call their security posture Swiss cheese would be an understatement.
A couple months after my test, the entire project was scrapped.
I feel like there's an analogue to the CAP theorem for code, where you can have an un-exploitable application if you don't need it to keep running in the presence of unusual requests.
Secure, Available, Unattended: choose any 2? You can have a secure & available app, but you need to keep patching it. You can have an available & unattended app, but it won't be secure. Or you can have a secure & unattended app; it will just need to crash a lot.
Available and Unattended are the same thing. The third value is 'Cheap'.
You want unattended servers? Fine. You want someone not to be able to steal the keys from a backup? We can do that too. You want the servers to be able to restart at 2 am without getting someone out of bed to come type a passphrase into a console on each server?
HSMs were quite expensive. People made any number of attempts to make them cheap and found out why they aren't (Adi Shamir has entered the chat); it was often a too-good-to-be-true scenario.
Security through instability?