It's my suspicion, at least. In fact I have two of those drives, bought at the same time, and both died in the same sudden way with no more than two weeks between them.
Yes, there was a well-publicized Seagate issue [1] that I think was related to uptime, and you could fix it after the fact by grabbing a shell through the TTL serial on the jumper pins. (Edit: thanks jaclaz for providing a link with more details!) It was claimed Seagate would fix it for you under warranty as well, although shipping a drive always has risks.
HPE had two rounds of enterprise SSDs that failed because the uptime counter overflowed, but I never saw anything about fixing those after the fact. And I think I had seen a different SSD uptime-based failure a year or so before.
IMHO, it's best to avoid same-batch storage, and if that's not possible, stagger the online time to try to give enough time to notice a failure, obtain replacement storage, install replacement storage, and migrate data. Backups are important too, but it's nicer to have a path towards mostly online recovery. And some mostly replaceable data is hard to justify backups for (do I need three copies of format-shifted media? probably not; if my online storage fails, I can re-rip)
I don't recall hearing about this for Western Digital drives, but there's some Xbox 360 stuff that I thought involved the TTL serial on WD drives... It's certainly worth exploring. WD Green drives do also have a very short default timeout to park the heads, and as a result can experience a large number of parking cycles in some applications, and the parking ramp can wear out; I don't think this is really recoverable, as the heads are likely to get damaged and debris may damage the platters.
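For anyone wondering whether their WD Green is racking up parking cycles, smartmontools' smartctl exposes the relevant counter: SMART attribute 193 (Load_Cycle_Count), the head load/unload count. A rough sketch, not a definitive procedure; /dev/sda and the sample attribute line below are placeholders:

```shell
# On a real system you'd run something like:
#   sudo smartctl -A /dev/sda | awk '$1 == 193 { print "Load cycles:", $NF }'
# The awk filter just picks the raw value out of the SMART attribute table.
# A line from such a table looks roughly like this (illustrative values):
sample='193 Load_Cycle_Count 0x0032 195 195 000 Old_age Always - 16060'
printf '%s\n' "$sample" | awk '$1 == 193 { print "Load cycles:", $NF }'
# prints: Load cycles: 16060
```

If the count is climbing fast, the idle3-tools package (idle3ctl) can read and adjust the WD idle timer that triggers the aggressive parking, though I'd double-check it supports your specific model before writing anything.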
Only for the record, at the time the Seagate issue was due to a bug in the firmware: when the drive was powered on and found a counter at certain values, it went into a sort of loop and failed to "boot" (the internal OS) any further.
This happened only on some disk drives because it was initially triggered by defective testing equipment on some production lines, see "Root cause" here:
Doesn't ring a bell for me, but given the highly questionable tiering/marketing methods from HDD manufacturers these days and the fact that they have been reducing warranty durations by some 50%, that type of planned obsolescence wouldn't surprise me.
The drives are 3.5" 2TB WD Green purchased around 2012, and had been in use for about 1 year when they both died.
If that's not a typo, then it seems like those drives have been powered off for about 10 years.
I think powered off hard drives are commonly said to retain data for um... maybe 3 years (from rough memory). So, your drives have probably lost their magnetism (and thus the data). :(
Naah, that may apply to SSDs, not to good ol' (rotating platters) hard disks; the magnetism does not evaporate.
The only issue that may happen on a hard disk that's been unpowered for several years is so-called bearing seizing: the (fluid) bearing of the motor/platters may become stuck. It is relatively rare, though some particular makes/models are more prone to it, and it is (usually) fixable. In some cases the platters can be made to rotate freely again, but you need a specialized service, as the disk needs to be opened; in other cases the bearing can be replaced, and some specialized tools are needed:
That reminds me of a work colleague a few years ago. He got an ancient drive working again by tapping it on the side with a screwdriver while powering it on, to get it "unstuck".
Not a typo, they've been powered off for around a decade. But HDDs (just like floppies) don't lose their magnetization in just 3 years, or even 10 years, barring the very rare case of an extreme fluke or environmental exposure. Flash memory (memory cards, USB sticks, SSDs) loses electrical charge relatively fast, however. That might be what you're thinking of.
Note that many controllers store parts of their firmware/config on the disk platters, so without the original platters the board may not even show up via SATA.