• BombOmOm@lemmy.world · 4 days ago

      I’m looking forward to when SSDs aren’t that much more expensive than HDDs for the same capacity. Seems HDDs have been holding their own better than I thought as of late.

      • doodledup@lemmy.world · 4 days ago

        There are fundamental problems with how SSDs work. Large-capacity flash might soon become a thing in servers, but there won’t be any cost-effective, large-capacity SSDs in the consumer market for at least 10 years.

        The problem is how operating systems access the data: they assume a page-organized, sequential disk and access the data that way. SSD controllers essentially need to translate that to how the flash works internally (which is completely different). This causes latency and extreme fragmentation on large SSDs over time.
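
        A rough sketch of the idea (toy Python; real flash translation layers are far more sophisticated, and the page counts here are made up):

        ```python
        # Toy flash translation layer (FTL): the OS addresses logical blocks,
        # but NAND flash can't overwrite a page in place, so every rewrite
        # goes to a fresh physical page and the old copy becomes stale garbage.
        class ToyFTL:
            def __init__(self, num_pages):
                self.mapping = {}                  # logical page -> physical page
                self.free = list(range(num_pages))
                self.stale = set()                 # physical pages holding dead data

            def write(self, logical_page):
                new_phys = self.free.pop(0)        # out-of-place write
                old_phys = self.mapping.get(logical_page)
                if old_phys is not None:
                    self.stale.add(old_phys)       # must be garbage-collected later
                self.mapping[logical_page] = new_phys

        ftl = ToyFTL(num_pages=8)
        for _ in range(3):
            ftl.write(logical_page=0)              # rewrite the same logical block
        print(ftl.mapping)                         # {0: 2} -- the data has moved twice
        print(ftl.stale)                           # {0, 1} -- stale pages piling up
        ```

        The growing pile of stale pages is what the controller’s garbage collection has to clean up in the background, which is where the extra latency and the “fragmentation” come from.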

        Instead of buying a 20 TB SSD you’re much better off buying four 5 TB HDDs. In the long run you’ll probably get better read and write speeds if they’re configured in RAID 0. Plus, it’s a lot cheaper. Large SSDs in the consumer market are possible, they just don’t make any sense for performance and cost reasons.
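
        Back-of-the-envelope for the striping claim (the per-drive figure is an assumed value for a typical 7200 rpm drive, and RAID overhead is ignored):

        ```python
        # Ideal RAID 0 scaling: sequential throughput stripes across all members.
        hdd_mb_s = 250                 # assumed per-HDD sequential throughput
        for n in (1, 2, 4):
            print(f"{n} x HDD in RAID 0: ~{n * hdd_mb_s} MB/s sequential (ideal)")
        ```

        That only covers sequential throughput, though; random I/O is where SSDs pull far ahead, which is what the replies get into.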

        • ShortN0te@lemmy.ml · 4 days ago

          But fragmentation on SSDs does not really matter, does it? Yes, you need to keep track of all the fragments, but that is not really a problem as far as I am aware. To my knowledge, increasing latency on bigger storage is a problem that faces all the storage technologies we have at the moment.

      • JustinA · 4 days ago

        It’s about a 5:1 cost ratio these days; honestly it’s pretty worthwhile to just go all NVMe when you consider the reliability, performance, and noise benefits. A RAID 5 of NVMe drives can be cheaper and faster than a RAID 1 of HDDs.
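
        Part of why that works out is the parity math: a mirror burns half its raw capacity, while RAID 5 only gives up one drive’s worth. A quick sketch (drive counts are just examples):

        ```python
        # Usable-capacity efficiency of the two layouts mentioned above.
        def raid1_efficiency(num_drives=2):
            return 1 / num_drives                  # every drive is a full mirror copy

        def raid5_efficiency(num_drives):
            return (num_drives - 1) / num_drives   # one drive's worth lost to parity

        print(f"RAID 1 (2 drives): {raid1_efficiency():.0%} usable")
        for n in (3, 4, 5, 8):
            print(f"RAID 5 ({n} drives): {raid5_efficiency(n):.0%} usable")
        ```

        So the raw $/TB gap shrinks once you account for how much usable space each layout actually delivers.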

        I don’t think I’m adding any more hard drives to my home Ceph array at this point.

        • doodledup@lemmy.world · 4 days ago

          NVMe, and flash in general, works very differently from HDDs, both internally and at the OS level. It’s a common misunderstanding that SSDs are ready to replace HDDs in all situations. For example, you actually can NOT scale SSD performance linearly the way you can with HDDs when combining them in a RAID. You also can’t scale them in size the same way. At some point, the same number of HDDs will actually be MORE performant than the SSDs in terms of throughput.
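
          A crude model of the kind of scaling ceiling being described here (every number is an illustrative placeholder, not a measurement):

          ```python
          # Aggregate throughput scales with drive count until some shared
          # bottleneck (PCIe lanes, HBA, CPU, memory bandwidth) caps the array.
          def array_throughput(num_drives, per_drive_mb_s, bottleneck_mb_s=16_000):
              return min(num_drives * per_drive_mb_s, bottleneck_mb_s)

          for n in (1, 2, 4, 8, 16):
              nvme = array_throughput(n, per_drive_mb_s=3500)
              hdd = array_throughput(n, per_drive_mb_s=250)
              print(f"{n:2d} drives: NVMe ~{nvme} MB/s, HDD ~{hdd} MB/s")
          ```

          In this toy model the NVMe side hits the shared ceiling after a handful of drives while the HDDs keep scaling; whether the HDDs ever actually catch up depends entirely on where that ceiling sits, which is the point of contention below.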

          I wrote another related comment somewhere here.

          Servers are an entirely different thing, as they use different file systems that are optimized for SSDs. They also implement layered hardware controllers for the flash chips rather than a single controller per chip. In servers, SSDs might be the future for many use cases. The consumer market is not nearly there yet.

          • JustinA · 4 days ago

            This conversation is about SSDs vs HDDs in a server environment, but I’m not sure those claims are true in either environment.

            SATA SSDs look identical to SATA HDDs as far as the OS is concerned; the controller is just able to write things down faster.

            I could see some argument about NVMe interrupts/polling being slower than SATA at scale, but you’re not going to see a difference on a modern CPU with fewer than 10 NVMe drives.

            Sequential performance is meaningless these days; workstation and server performance are both limited by IOPS and latency. RAID increases latency slightly, but IOPS scale linearly until you run out of CPU or memory bandwidth.
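
            A rough way to put numbers on that is Little’s law: in-flight I/Os = IOPS × latency. The per-drive figures below are assumptions, not benchmarks:

            ```python
            # Little's law: latency = outstanding I/Os / IOPS, so at a fixed queue
            # depth, adding drives multiplies IOPS and divides the average latency.
            def avg_latency_ms(queue_depth, num_drives, per_drive_iops):
                total_iops = num_drives * per_drive_iops
                return queue_depth / total_iops * 1000

            for drives, iops, kind in ((1, 200, "HDD"), (8, 200, "HDD"),
                                       (1, 500_000, "NVMe"), (8, 500_000, "NVMe")):
                lat = avg_latency_ms(queue_depth=32, num_drives=drives, per_drive_iops=iops)
                print(f"{drives} x {kind}: ~{drives * iops:,} IOPS, ~{lat:.3f} ms avg at QD32")
            ```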

            Any file system will always be faster on an SSD than on an HDD. XFS/ext4/Btrfs don’t have any HDD-specific optimizations as far as I know. ZFS does, but that’s not going to make SSDs slower than HDDs; it just causes some write amplification.

            Enterprise SSDs are cheaper and faster than consumer SSDs; you can buy them super cheap on eBay: 2 TB with PLP for $100. However, you need to make sure you can fit a 22110 M.2 drive or have an adapter cable for U.2.

            You’re always going to be better off building a RAID on SSDs than on HDDs as long as you have the budget for it.

    • doodledup@lemmy.world · 4 days ago

      Tiering is a fundamental concept in HPC. We tier everything, from registers through L1/L2 cache, NUMA-shared L3, main memory, and SSD cache. It only makes sense to add HDDs to the list as long as they’re cost-effective.
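
      The classic back-of-the-envelope for why that pays off (hit rates and latencies are made-up, order-of-magnitude values):

      ```python
      # Effective access time of a tiered hierarchy: each tier is probed in
      # order and a miss falls through to the next, slower one.
      tiers = [                      # (name, hit rate, latency in nanoseconds)
          ("L1 cache", 0.90, 1),
          ("L2/L3",    0.95, 20),
          ("DRAM",     0.99, 100),
          ("NVMe SSD", 0.95, 100_000),
          ("HDD",      1.00, 10_000_000),   # the last tier catches everything
      ]

      effective_ns = 0.0
      reach_probability = 1.0              # chance an access gets this deep
      for name, hit_rate, latency_ns in tiers:
          effective_ns += reach_probability * latency_ns
          reach_probability *= (1 - hit_rate)

      print(f"Effective access time: ~{effective_ns:.1f} ns")
      ```

      With decent hit rates in the upper tiers, the HDD layer at the bottom barely moves the average, which is exactly why the cheapest tier stays worth keeping around.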

      There is nothing wrong with tiering in HPC. In fact, it’s the best way to make your service cost-effective without compromising on end-user performance.

      • nickwitha_k (he/him)@lemmy.sdf.org · 4 days ago

        It is a matter of priorities. In my experience, while it saves money, tiering between HDD and SSD frequently comes at a great sanity/frustration cost for those maintaining it. It can make sense commercially if there’s a team (internal or external) to take care of things when they go sideways.

        • doodledup@lemmy.world · 4 days ago

          The point is that as long as HDDs are cheaper, they will definitely be used. SSDs are not replacing them in environments where latency isn’t an issue.