• doodledup@lemmy.world · 6 days ago

    There are fundamental problems with how SSDs work. Large-capacity flash might soon become a thing in servers, but there won’t be any cost-effective large SSDs in the consumer market for at least 10 years.

    The problem is how operating systems access the data: they address the drive as a flat, sequentially numbered array of blocks, the same way they address a spinning disk. The SSD controller’s translation layer has to map those block addresses onto how the flash is actually organised internally (which is completely different). That translation adds latency and causes severe fragmentation on large SSDs over time.
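    To make that mismatch concrete, here is a minimal sketch of a page-mapped translation layer. The names and sizes (TinyFTL, PAGE, the page counts) are made up for illustration and don’t reflect any real controller. The point it shows: flash can’t be overwritten in place, so every update of the same logical block lands on a fresh physical page and leaves a stale copy behind that garbage collection has to reclaim later.

```python
PAGE = 4096  # bytes per NAND page (typical size, assumed here)

class TinyFTL:
    """Toy page-mapped flash translation layer: logical pages are remapped on every write."""
    def __init__(self, total_pages):
        self.map = {}        # logical page number -> physical page number
        self.flash = {}      # physical page number -> data (simulated NAND)
        self.stale = set()   # physical pages holding superseded data
        self.next_free = 0   # next physical page to program
        self.total = total_pages

    def write(self, lpn, data):
        # NAND cannot be overwritten in place: an update goes to a fresh
        # physical page and the old copy becomes stale garbage.
        if lpn in self.map:
            self.stale.add(self.map[lpn])
        if self.next_free == self.total:
            raise RuntimeError("no free pages -- garbage collection required")
        self.flash[self.next_free] = data
        self.map[lpn] = self.next_free
        self.next_free += 1

    def read(self, lpn):
        # The OS asks for logical page lpn; the FTL resolves the indirection.
        return self.flash[self.map[lpn]]

ftl = TinyFTL(total_pages=1024)
for i in range(3):
    ftl.write(lpn=7, data=f"version {i}".encode())
print(ftl.read(7))          # b'version 2' -- only the latest copy is reachable
print(sorted(ftl.stale))    # [0, 1] -- two stale pages left to clean up later
```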

    Instead of buying a 20 TB SSD you’re much better off buying four 5 TB HDDs. You’ll probably get better write and read speeds in the long run if they’re configured in RAID 0. Plus, it’s a lot cheaper. Large SSDs in the consumer market are possible, they just don’t make any sense for performance and cost reasons.
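    Rough numbers behind that claim, as a sketch only: the per-drive figures below are assumptions, not benchmarks, and they only describe large sequential transfers.

```python
# Back-of-envelope sequential throughput for the RAID 0 suggestion above.
hdd_seq_mb_s = 200          # assumed sequential throughput of one 5 TB HDD
sata_ssd_mb_s = 550         # assumed SATA SSD ceiling, for comparison
drives = 4

raid0_seq_mb_s = drives * hdd_seq_mb_s   # RAID 0 stripes I/O across all drives
print(f"RAID 0 of {drives} HDDs: ~{raid0_seq_mb_s} MB/s sequential")
print(f"Single SATA SSD:      ~{sata_ssd_mb_s} MB/s sequential")
# Caveat: striping only helps large sequential transfers; random-access
# latency is still bound by HDD seek times, where any SSD wins easily.
```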

    • ShortN0te@lemmy.ml · 5 days ago

      But the fragmentation on SSDs does not really matter, does it? Yes, you need to keep track of all the fragments, but as far as I am aware that is not really a problem. To my knowledge, increasing latency on bigger storage is a problem that all storage technologies we have atm face.
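      For a sense of what “keeping track of all the fragments” amounts to, here is a rough, hypothetical calculation for a 20 TB drive, assuming a flat page-level mapping with 4 KiB granularity and 4-byte entries (both assumptions; real controllers use coarser mappings or cache only part of the table).

```python
# Size of a flat page-level mapping table for a hypothetical 20 TB SSD.
capacity_bytes = 20 * 10**12    # 20 TB drive (decimal TB, as marketed)
page_size = 4096                # assumed 4 KiB mapping granularity
entry_size = 4                  # assumed 4-byte physical address per entry

entries = capacity_bytes // page_size
table_bytes = entries * entry_size
print(f"{entries:,} mapping entries")                      # ~4.9 billion
print(f"~{table_bytes / 10**9:.0f} GB of mapping table")   # ~20 GB
# Tracking the fragments is bookkeeping, but at this scale it is a lot of
# bookkeeping to hold or cache somewhere on the drive.
```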