So ... when I first got my current main server, babylon4 (which was new to me, but not by any stretch new hardware — including its disks), I set it up with Solaris 10u5 x86. I installed a mirrored pair of 2.5" 80GB SATA laptop drives in the single bay intended for a boot drive, and configured the main array of twelve 300GB SATA disks as a ZFS RAIDZ2 pool. RAIDZ2, with two parity disks, should be able to survive any two drive failures and continue operating in degraded mode.
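For the record, the pool creation looks roughly like this. The pool name and the cXtYd0 device names are placeholders, not babylon4's actual ones:

```shell
# Twelve-disk RAIDZ2 pool: any two of the twelve disks can fail
# before data is lost.  Device names below are illustrative only.
zpool create tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0

zpool status tank    # should show a single raidz2 vdev of twelve disks
```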
So, about two months later, we had an overnight cascade failure of three drives, and the array went down.
I reconfigured the remaining nine drives as an eight-drive RAIDZ2 plus a single hot spare, and restored all the data. Over the next few months, one more drive failed; the hot spare was automatically pulled in as a replacement, just as it should. When I got my hands on replacements for the (by now) four failed drives, I added them in as hot spares. We've had no further failures.
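For anyone following along at home, the rebuilt layout was roughly this (device names again illustrative, not the real ones):

```shell
# Eight-disk RAIDZ2 plus one hot spare.
zpool create tank raidz2 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    spare c1t8d0

# As replacement disks arrived, they went in as additional hot spares:
zpool add tank spare c1t9d0 c1t10d0 c1t11d0
```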
Recently something hosed the boot archive and took the system down. All the zpools were intact, so we didn't lose any user data, but I ended up reinstalling with Solaris 10u8. Then, not long after, Sun ... er ... Oracle released Solaris 10u9 as a developer release. Completely unsupported, sure, but I can't spare the money Oracle wants for a support contract anyway, so what's the difference? So I live-upgraded the machine to u9 and upgraded all the zpools to ZFS version 22. Among other things, ZFS version 22 supports RAIDZ3, an even-higher-reliability format for large disk pools that uses three independent parity stripes.
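The upgrade itself is just the usual Live Upgrade dance plus one command for the pools. The boot-environment name and media path here are my own placeholders:

```shell
lucreate -n s10u9                 # clone the current boot environment
luupgrade -u -n s10u9 -s /mnt     # upgrade the clone from the mounted u9 media
luactivate s10u9 && init 6        # activate the new BE and reboot into it

# After rebooting into u9, bring every pool up to the new on-disk version:
zpool upgrade -a                  # -> pool version 22
```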
So, yesterday I made one last differential backup, then blew away the RAIDZ2 zpool, reconfigured the array as an eleven-drive RAIDZ3 plus a single hot spare, and restored all the data overnight. RAIDZ3 and a hot spare may be a little paranoid ... but I just increased the size of the working set by 600GB, and it should be able to survive up to four drive failures, as long as it finishes rebuilding the first failed drive on the hot spare before the fourth drive fails.
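The reshuffle itself, modulo real device names, amounts to:

```shell
# Destroy the old RAIDZ2 pool (only after that last differential backup!)
zpool destroy tank

# Eleven-disk RAIDZ3 plus one hot spare: three parity stripes mean three
# concurrent failures are survivable outright, and a fourth is survivable
# once the spare finishes resilvering the first casualty.
zpool create tank raidz3 \
    c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 \
    spare c1t11d0
```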
As an incidental side note: the statfs() call in a 32-bit Linux kernel overflows when called on a filesystem with more than 2TB of free space...