It’s about a 5:1 cost ratio these days, so it’s honestly pretty worthwhile to just go all NVMe when you consider the reliability, performance, and noise benefits. A RAID 5 of NVMe drives can be cheaper and faster than a RAID 1 of HDDs.
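On the cost side, the parity layout helps close the gap: RAID 5 only burns one drive's worth of capacity on redundancy, while RAID 1 burns half. A quick sketch with placeholder prices (only the 5:1 ratio comes from the comment above; the absolute $/TB figures are made up):

```python
# Back-of-envelope: cost per *usable* TB for a 4-drive NVMe RAID 5 vs a
# 2-drive HDD RAID 1, assuming an illustrative 5:1 raw price ratio
# ($50/TB NVMe vs $10/TB HDD -- placeholders, not real quotes).

def cost_per_usable_tb(price_per_raw_tb: float, usable_fraction: float) -> float:
    # You pay for every raw TB, but only usable_fraction of it holds data.
    return price_per_raw_tb / usable_fraction

nvme_raid5 = cost_per_usable_tb(50, usable_fraction=3 / 4)  # 4 drives, 1 drive of parity
hdd_raid1 = cost_per_usable_tb(10, usable_fraction=1 / 2)   # 2 drives, mirrored

print(f"NVMe RAID 5: ${nvme_raid5:.2f} per usable TB")       # 66.67
print(f"HDD RAID 1:  ${hdd_raid1:.2f} per usable TB")        # 20.00
print(f"Effective ratio: {nvme_raid5 / hdd_raid1:.1f}:1")    # ~3.3:1, down from 5:1 raw
```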
I don’t think I’m adding any more hard drives to my home ceph array at this point.
NVMe, and flash in general, works very differently from HDDs both internally and at the OS level. It’s a common misunderstanding that SSDs are ready to replace HDDs in all situations. For example, you actually can NOT scale SSD performance linearly the way HDDs do when combining them in a RAID. You also can not scale them in size. At some point, the same number of HDDs will actually be MORE performant than the SSDs in terms of throughput.
I wrote another related comment somewhere here.
Servers are an entirely different thing, as they use different file systems that optimize for SSDs. Also, they implement layered hardware controllers for the flash chips rather than having a single controller per chip. In servers, SSDs might be the future for many use cases. The consumer market is not nearly there yet.
This conversation is about SSDs vs HDDs in a server environment, but I’m not sure those claims are true in either environment.
SATA SSDs look identical to SATA HDDs from the OS’s point of view; the drive’s controller is just able to complete writes faster.
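You can see that from the OS side on Linux: a SATA SSD and a SATA HDD both appear as ordinary /dev/sdX block devices with the same queue attributes, and the only per-device hint the kernel keeps is the rotational flag. A minimal sketch (the device names are just examples):

```python
# Minimal sketch: SATA SSDs and SATA HDDs both show up as plain /dev/sdX
# block devices on Linux; the kernel merely records a "rotational" hint.
# Device names here are examples -- substitute the ones on your system.
from pathlib import Path

for dev in ("sda", "sdb"):
    rot = Path(f"/sys/block/{dev}/queue/rotational")
    if rot.exists():
        kind = "HDD (rotational)" if rot.read_text().strip() == "1" else "SSD (non-rotational)"
        print(f"/dev/{dev}: {kind}")
    else:
        print(f"/dev/{dev}: not present")
```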
I could see some argument about NVMe interrupt/polling overhead versus SATA at scale, but you’re not going to see a difference on a modern CPU with fewer than 10 NVMe drives.
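A rough order-of-magnitude check on the interrupt path (every number below is an assumption, and it deliberately ignores interrupt coalescing and per-CPU completion queues, which make the real cost lower):

```python
# Worst-case interrupt load for a small NVMe array (all figures assumed):
# one completion interrupt per I/O, no coalescing, ~2 us of CPU per interrupt.
drives = 10
iops_per_drive = 200_000          # heavy 4k random workload, per drive
cpu_us_per_interrupt = 2

interrupts_per_sec = drives * iops_per_drive
cores_busy = interrupts_per_sec * cpu_us_per_interrupt / 1_000_000

print(f"{interrupts_per_sec:,} interrupts/s ~ {cores_busy:.1f} cores")
# ~2,000,000 interrupts/s ~ 4 cores at the absolute worst; coalescing and
# per-CPU completion queues cut and spread that load, which is why a handful
# of drives doesn't register on a modern many-core CPU.
```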
Sequential performance is meaningless these days; workstation and server performance are both limited by IOPS and latency. RAID increases latency slightly, but IOPS scale linearly until you run out of CPU or memory bandwidth.
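The scaling claim is basically Little’s law: outstanding I/Os = IOPS × latency. Striping across N drives multiplies the IOPS ceiling while per-I/O latency stays roughly flat; you just need enough queue depth to feed it. A sketch with assumed per-drive figures:

```python
# Little's law: concurrency = throughput * latency.
# Assumed per-drive figures: ~100 us random-read latency, ~500k IOPS each.
per_drive_iops = 500_000
latency_s = 100e-6

for n in (1, 2, 4, 8):
    array_iops = n * per_drive_iops        # IOPS ceiling scales with drive count
    qd_needed = array_iops * latency_s     # outstanding I/Os needed to sustain it
    print(f"{n} drives: {array_iops:>9,} IOPS, needs ~{qd_needed:.0f} I/Os in flight")
```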
Any file system will always be faster on an SSD than on an HDD. XFS/ext4/Btrfs don’t have any HDD-specific optimizations as far as I know. ZFS does, but that’s not going to make SSDs slower than HDDs; it just causes some write amplification.
Enterprise SSDs are cheaper and faster than consumer SSDs; you can buy them super cheap on eBay: 2 TB with PLP for $100. However, you need to make sure you can fit a 22110 M.2 or have an adapter cable for U.2.
You’re always going to be better off building RAID on SSDs than on HDDs as long as you have the budget for it.