Comment by bullen a day ago

25 replies

The best SSD purchase of my life was the last batch of Intel X25-E drives (64GB 45nm SLC, rated for 100,000 writes per cell, from 2011) that I found on eBay around 2021.

I ordered one first, expecting it to be used or fake, but the packaging looked good (original and untampered) and the Intel disk software said it had only the factory number of reads/writes, so I went all in and bought all the disks they had...

30x at $100 instead of the original $1,000 price tag. Still, $3,000 sounds like an awful lot when it's only 64GB disks, but I know how it feels when your OS drive corrupts, and that's not something I want to keep experiencing over and over every 5 years (if you are lucky).

Now, with a few years of 24/7 operation under their belt, I can confidently say this was exactly "How to buy a SSD".

Panzer04 a day ago

Are you joking?

You bought 30 64GB SSDs from 2010 in 2021?

You realise $3k buys you much faster, much higher capacity modern SSDs with the same longevity characteristics? Hell, if it really matters there's still some Optane floating around, with 100k-cycle endurance ratings and much, much better performance.

  • krackers a day ago

    Not the same longevity characteristics; I thought newer SSDs are TLC, compared to the SLC of the old drives.

    • Panzer04 16 hours ago

      Typically true. The more expensive the drive, the more likely it'll be MLC or SLC, and if you're willing to go out of spec it's usually possible to reprogram a drive controller to operate in SLC mode.

Havoc a day ago

Big fan of the second-hand enterprise SSD strategy too, but wow, not like that.

The slightly more modern Intel MLCs like the S3700 crush the early SLCs across the board, including endurance. It would have cost you a third of the price for vastly superior specs.

  • bullen a day ago

    No. MLC lasts 5 years, tops.

    I have 5x SLC (40-60GB) drives from 2010 still running. Not a single SLC has failed for me ever.

    I also have 5x MLC drives (120-400GB ones) that failed. Every MLC has failed for me.

    The stats don't look too good.

    • archi42 9 hours ago

      That's pure anecdata. We don't even know your workload or configuration.

      Contrary anecdata: I just replaced my old SSDs: a 2013 64GB 20nm MLC at 19% wear level and a 2018 500GB TLC at 34% wear level. Not because they failed, but because I had the OS on a 64GB RAID1 and needed more space. The only optimization was setting "noatime".

      But that's still a horribly small N, so even the combined data is essentially meaningless.

      Btw, I replaced them with a bunch of HGST DC SS200 1.6TB drives from 2018, two of which have about as much capacity as your 30 disks. The 15nm MLC NAND is rated for 3 DWPD and is at a 3% wear level. The dual-ported SAS3 interface is overkill for me.

      I went for a 5 disk RAID6, and could replace it another 8 times while still keeping some spare change for a visit at a gourmet restaurant.

    • Dylan16807 a day ago

      How much are you writing to those drives?

      Have you tried any drives that will extend their pseudo-SLC cache across the entire space, and then only partitioned 25-30% of the nameplate capacity? That'll get you a terabyte for less than $300.
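      A minimal sketch of that under-partitioning approach, assuming a hypothetical 1TB drive at /dev/sdX (the device name and sizes are placeholders, and this destroys all data on the drive):

```shell
# Tell the controller every block is free first (destroys all data!).
blkdiscard /dev/sdX

# Partition only ~30% of the drive; the unpartitioned remainder
# stays available to the controller as spare / pSLC area.
parted --script /dev/sdX mklabel gpt
parted --script /dev/sdX mkpart primary 1MiB 300GiB
```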

      Personally I've had two SSDs in active use and both have done a lot better than that. One was MLC and died after 13 years, and the other is TLC and still working after 10 years.

    • Havoc a day ago

      >No MLC lasts 5 years tops.

      A 64GB Intel X25-E is rated for about 2 PB TBW.

      An S3700 (400GB) is rated in the 7 PB TBW range and gets you 400GB of usable space, not 64GB.
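      To put those TBW figures in perspective, here is a rough back-of-the-envelope conversion to drive writes per day; the 5-year window is my assumption for illustration, not a quoted warranty term:

```python
# Rough DWPD estimate from the TBW figures quoted above.
# Assumes a 5-year window, purely for illustration.
def dwpd(tbw_tb: float, capacity_gb: float, years: float = 5.0) -> float:
    full_drive_writes = (tbw_tb * 1000) / capacity_gb
    return full_drive_writes / (years * 365)

x25e = dwpd(2000, 64)    # ~2 PB TBW on a 64GB X25-E
s3700 = dwpd(7000, 400)  # ~7 PB TBW on a 400GB S3700
print(round(x25e, 1), round(s3700, 1))  # → 17.1 9.6
```

      So the old SLC drive does come out ahead per gigabyte, but both are far beyond what a desktop workload writes.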

      >The stats don't look too good.

      It seems to me that you're trying very hard not to look at the stats, and insisting on extrapolating from your small-sample personal experience?

      Frankly, for $3k you could have built a pure Optane rig of equivalent capacity that would have crushed both your X25-E suggestion and my S3700, if you're really obsessed with endurance.

      I'm generally of the "meet people where they are and support their journey" persuasion, but when someone says a 64GB SATA II drive with no TRIM and really bad metrics across the board is their best SSD buy, I gotta say something.

    • vel0city 7 hours ago

      I've had a few Kingston V300 120GB SATA MLC SSDs I bought on a stupidly cheap sale at Microcenter and tossed into a RAID 0 for funzies in 2012. They're still running just fine after being online all the time for the last decade.

    • justsomehnguy a day ago

      Depends on the usage and the initial claimed DWPD.

      I've seen a Samsung 860 Pro (DWPD of 0.6) doing fine after years under LUKS (a worst case for an SSD). As soon as you go for DWPD > 1 (real or effective), wearout is not a problem.

      • hangonhn a day ago

        Why is LUKS bad for SSDs? I'm thinking of using LUKS for my USB thumb drive.
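        For context on the LUKS point: by default dm-crypt does not pass TRIM/discard through to the drive, so the SSD's controller never learns which blocks are free, which hurts wear leveling. If the (minor) information leak of visible free space is acceptable, discard can be enabled explicitly; a sketch with placeholder device and mapping names:

```shell
# Hypothetical device /dev/sdX1 and mapping name "cryptdata".
# One-off: pass discard/TRIM through when opening the container.
cryptsetup open --allow-discards /dev/sdX1 cryptdata

# LUKS2 only: store the flag persistently in the header
# (the mapping must already be open).
cryptsetup refresh --persistent --allow-discards cryptdata
```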

justsomehnguy a day ago

> I can confidently say this was exactly "How to buy a SSD".

More like "How to spend $3k and think you did something".

For the amount you spent you could get any, literally any, SSD, use only 64GB and be fine for decades. Or use more than 64GB and be fine for... decades anyway.

You literally could buy a server class mixed workload SATA drive with a DWPD of 4.

https://www.solidigm.com/products/data-center/d3/s4620.html

And quite amusingly, any modern SATA SSD runs at the top of the SATA3/SATA600 spec, at ~500MB/s for read and write:

    Sequential Bandwidth - 100% Read (up to): 550 MB/s
    Sequential Bandwidth - 100% Write (up to): 500 MB/s
    Random Read (100% Span): 85000 IOPS
    Random Write (100% Span): 48000 IOPS

While the Intel® X25-E Extreme SATA Solid-State Drive is SATA2/SATA300 and tops out at 250MB/s reads:

    Sustained sequential read: up to 250 MB/s
    Sustained sequential write: up to 170 MB/s
    Random 4 KB reads: >35,000 IOPS
    Random 4 KB writes: >3,300 IOPS

https://download.intel.com/newsroom/kits/ssd/pdfs/Extreme-SA...
  • bullen a day ago

    You would think that, but I also have Samsung's industrial 3.5TB drives and they are flaky at best.

    Eternal growth does not exist; SSDs peaked in 2011 for durability without complexity.

    Just like DDR3 has the lowest CAS latency with ok bandwidth and longevity.

    DDR4 actually breaks after 10 years.

    DDR2 probably lasts more than 100 years.

    Think about that: any device manufactured in the coming 50 years will be outlived by the 32-bit Raspberry Pi 2!

    You just need a bunch of older SD cards and distributed storage so that you don't lose data.

    • gruez a day ago

      >Just like DDR3 has the lowest CAS latency with ok bandwidth and longevity.

      Source? AFAIK successive generations eventually reached the same or slightly lower CAS latency in absolute terms. However, because CAS latency is measured in clock cycles, and successive generations have higher clock speeds, the latency "number" is higher, but that's an illusion. DDR3-1600 CL8 has the same absolute latency as DDR4-3200 CL16.
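      That equivalence is easy to check: true CAS latency in nanoseconds is cycles times the clock period, and since DDR transfers twice per clock, the period in ns is 2000 divided by the MT/s rate:

```python
# Absolute CAS latency in nanoseconds.
# DDR transfers twice per clock, so period_ns = 2000 / (MT/s).
def cas_ns(mt_per_s: int, cl_cycles: int) -> float:
    return cl_cycles * 2000 / mt_per_s

print(cas_ns(1600, 8))   # DDR3-1600 CL8  → 10.0 ns
print(cas_ns(3200, 16))  # DDR4-3200 CL16 → 10.0 ns
```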

      >DDR2 probably lasts more than 100 years.

      >Think about that, any device manufactured in the coming 50 years will be outlived by 32-bit Raspberry 2!

      What's the point of it lasting 100 years if it's terribly out of date? An IDE drive from the 2000s is basically unusable today, 20 years later. A CPU from around the same era is basically on its last legs because software support is being dropped[1]. Your SSDs are going to suffer the same fate. And that's not even factoring in other considerations like power consumption and the hassle of trying to connect 30 drives to a computer.

      [1] eg. https://www.reddit.com/r/linux/comments/18mrxjk/debian_end_o...

      • bullen a day ago

        As you increase clock speed you increase energy, and then your components fail faster.

        The point is "the 1000-hour computer": we are headed into rent-seeking hardware.

        I'm obviously not going to use 30 SSDs in one computer.

        You can google "permacomputing" if you really want to binge.

    • stavros a day ago

      I don't use SSDs, but the HDDs in my NAS started failing one by one after a few years. Whenever one failed, I just put a new one in, and that was it. ZFS rebalanced things automatically and I went on with my life.

      It just sounds like you spent $3k to solve a problem you could have solved with $200...

      Unrelatedly, does the name "Realms of Despair" mean anything to you?

      • bullen a day ago

        HDDs make noise; you can only use them if you have more rooms than you need.

        Nope

  • Panzer04 a day ago

    Assuming PCIe SSDs are acceptable, they could have bought an Optane drive with an endurance rating of 60 drive writes per day for five years XD

    SSDs today have lower ratings because very few (consumer) use cases would prefer more write endurance over +100/50/33% capacity from more bits per cell.

    • bullen a day ago

      I looked at those; too expensive, and hard to use one drive across 30 computers.