

It’s about a 5:1 cost ratio per TB these days, and honestly it’s worthwhile to just go all nvme when you consider the reliability, performance, and noise benefits. A raid 5 of nvme can come out cheaper and faster than a raid 1 of hdds.
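If you want to sanity-check that for your own setup, here’s a rough $/usable-TB calculator; the prices and drive sizes in the example call are placeholders I made up, not quotes, so plug in whatever your market actually looks like:

```
# Rough $/usable-TB comparison for mirror vs single-parity arrays.
# All prices and sizes below are placeholder assumptions, not real quotes.

def usable_tb(drive_tb, drives, level):
    """Usable capacity: raid1 keeps one drive's worth, raid5 loses one drive to parity."""
    if level == "raid1":
        return drive_tb
    if level == "raid5":
        return drive_tb * (drives - 1)
    raise ValueError(f"unknown level: {level}")

def cost_per_usable_tb(price_per_tb, drive_tb, drives, level):
    total_cost = price_per_tb * drive_tb * drives
    return total_cost / usable_tb(drive_tb, drives, level)

# Example: 2x8TB hdd mirror vs 5x2TB nvme raid 5 (placeholder prices).
hdd = cost_per_usable_tb(price_per_tb=18, drive_tb=8, drives=2, level="raid1")
nvme = cost_per_usable_tb(price_per_tb=50, drive_tb=2, drives=5, level="raid5")
print(f"hdd raid1 : ${hdd:.0f}/usable TB")
print(f"nvme raid5: ${nvme:.0f}/usable TB")
```

Which side wins on pure $/TB depends heavily on drive sizes, array width, and where the deals are; the mirror-vs-parity efficiency difference is what narrows the gap for nvme.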
I don’t think I’m adding any more hard drives to my home ceph array at this point.
This conversation is about ssds vs hdds in a server environment, but I’m not sure those claims hold in either environment.
sata ssds look identical to sata hdds from the host’s point of view: same protocol, same block interface, the drive just completes reads and writes a lot faster.
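You can see how little the OS cares about the difference with a quick sysfs peek (minimal sketch, Linux only); the rotational hint is basically the only per-device distinction the block layer makes:

```
# List block devices with the kernel's rotational hint.
# 1 = spinning disk, 0 = non-rotational (ssd/nvme). Linux only.
from pathlib import Path

for dev in sorted(Path("/sys/block").iterdir()):
    rot_file = dev / "queue" / "rotational"
    if not rot_file.exists():
        continue
    rotational = rot_file.read_text().strip() == "1"
    model_file = dev / "device" / "model"
    model = model_file.read_text().strip() if model_file.exists() else "?"
    kind = "rotational (hdd)" if rotational else "non-rotational (ssd/nvme)"
    print(f"{dev.name:10s} {kind:28s} {model}")
```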
I could see some argument about nvme interrupt/polling overhead versus sata at scale, but you’re not going to see a difference on a modern CPU with fewer than 10 nvme drives.
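If you’re curious whether polling is even in play on your box, the knobs are visible in sysfs; a tiny sketch (Linux only, assumes the in-kernel nvme driver, paths can vary by kernel version):

```
# Report NVMe polling-related knobs, if present. Purely informational sketch.
from pathlib import Path

def read(path):
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "n/a"

# Dedicated poll queues the nvme driver was loaded with (0 = interrupt-driven only).
print("nvme poll_queues:", read("/sys/module/nvme/parameters/poll_queues"))

# Per-device flag for I/O polling on the request queue.
for dev in sorted(Path("/sys/block").glob("nvme*n1")):
    print(f"{dev.name} io_poll:", read(dev / "queue" / "io_poll"))
```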
Sequential performance is mostly meaningless these days; workstation and server workloads are both limited by iops and latency. Raid adds a little latency, but iops scale pretty much linearly until you run out of CPU or memory bandwidth.
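The iops/latency relationship is just Little’s law (iops is roughly outstanding I/Os divided by average latency), so the scaling claim is easy to put numbers on. The latencies below are assumptions for illustration, and this ignores the raid 5 small-write penalty:

```
# Little's law back-of-envelope: IOPS ~= outstanding I/Os / average latency.
# Latency figures are illustrative assumptions, not measurements.

def iops(outstanding_ios, latency_s):
    return outstanding_ios / latency_s

one_nvme = iops(32, 100e-6)   # nvme: ~100 us per random read at QD32 -> ~320k IOPS
one_hdd = iops(1, 8e-3)       # hdd: one head, ~8 ms per seek -> ~125 IOPS

# Striping across N drives multiplies the IOPS ceiling while per-I/O latency
# stays roughly one drive's latency (until CPU or memory bandwidth saturates).
for n in (1, 4, 8):
    print(f"{n} x nvme: ~{n * one_nvme:,.0f} random read IOPS, still ~100 us per I/O")
print(f"1 x hdd : ~{one_hdd:,.0f} random read IOPS, ~8 ms per I/O")
```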
Any file system will always be faster on an ssd than on an hdd. xfs/ext4/btrfs don’t have any hdd-specific optimizations as far as I know. ZFS does, but that’s not going to make ssds slower than hdds; it just causes some write amplification.
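The write amplification I’m talking about is easy to put a number on for the worst case: a random overwrite smaller than a record turns into a read-modify-write of the whole record because of copy-on-write. A rough sketch that ignores compression, metadata, and the ZIL:

```
# Worst-case data-path write amplification for sub-record random overwrites.
# Purely illustrative arithmetic; ignores compression, metadata, and the ZIL.

def write_amplification(recordsize_kib, write_kib):
    """Each overwrite smaller than a record rewrites the whole record."""
    return max(recordsize_kib / write_kib, 1.0)

for recordsize in (16, 128, 1024):
    wa = write_amplification(recordsize, write_kib=4)
    print(f"recordsize={recordsize}K, 4K random writes -> ~{wa:.0f}x amplification")
```

That’s why people tune recordsize down for database workloads, but even untuned it’s nowhere near enough to make an ssd lose to an hdd on random I/O.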
Used enterprise ssds are cheaper and faster than new consumer ssds; you can buy them super cheap on eBay, e.g. 2TB with PLP for $100. However, you need to make sure you can fit a 22110 m.2 or have an adapter cable for u.2.
You’re always going to be better off building raid on ssds than on hdds, as long as you have the budget for it.