Tiering is a fundamental concept in HPC. We tier everything, starting from registers, through L1-L2 cache, NUMA-shared L3, memory, and SSD cache. It only makes sense to add HDD to the list as long as it's cost effective.
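To make the idea concrete, here's a minimal sketch of the promotion/demotion mechanics behind an SSD cache sitting in front of an HDD tier. The class name and the LRU policy are illustrative choices, not any particular filesystem's implementation:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small fast tier (think SSD) in front of a
    large slow tier (think HDD). Blocks are promoted on read; the
    coldest block is demoted when the fast tier fills up (LRU)."""

    def __init__(self, fast_capacity: int):
        self.fast = OrderedDict()   # block_id -> data, ordered by recency
        self.slow = {}              # block_id -> data
        self.fast_capacity = fast_capacity

    def write(self, block_id, data):
        # New data lands in the slow tier; it earns promotion by being read.
        self.slow[block_id] = data

    def read(self, block_id):
        if block_id in self.fast:                # fast-tier hit
            self.fast.move_to_end(block_id)
            return self.fast[block_id]
        data = self.slow.pop(block_id)           # slow-tier hit: promote
        self.fast[block_id] = data
        if len(self.fast) > self.fast_capacity:  # evict coldest back down
            cold_id, cold_data = self.fast.popitem(last=False)
            self.slow[cold_id] = cold_data
        return data
```

Real tiering systems add write-back, hysteresis, and background migration on top of this, which is exactly where the operational pain discussed below tends to come from.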
There is nothing wrong with tiering in HPC. In fact, it's the best way to make a service cost-effective without compromising end-user performance.
It's a matter of priorities. In my experience, while tiering between HDD and SSD saves money, it frequently comes at a great sanity/frustration cost for those maintaining it. It can make commercial sense if there's a team (internal or external) to deal with things going sideways.
The point is that as long as HDDs are cheaper per terabyte, they will keep being used. SSDs aren't replacing them in environments where latency isn't an issue.
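The capacity math behind that argument looks roughly like this (the prices below are hypothetical placeholders, not current market data):

```python
# Illustrative break-even arithmetic for a bulk storage tier.
# All numbers are made up for the sake of the example.
hdd_usd_per_tb = 15.0    # hypothetical HDD price per TB
ssd_usd_per_tb = 50.0    # hypothetical SSD price per TB
capacity_tb = 10_000     # size of the bulk tier

hdd_cost = hdd_usd_per_tb * capacity_tb
ssd_cost = ssd_usd_per_tb * capacity_tb
print(f"HDD tier: ${hdd_cost:,.0f}  SSD tier: ${ssd_cost:,.0f}  "
      f"ratio: {ssd_cost / hdd_cost:.1f}x")
```

At any multiple like that, a petabyte-scale archive whose access pattern tolerates HDD latency has no economic reason to move to flash.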