What is the Long-term Data Endurance of the Extreme 120GB?

I did a quick search in this section of the forums and on Google for the Long-term Data Endurance (LDE) of the SanDisk Extreme 120GB SSD, but I did not come up with any hard answers. The closest I came was a piece of text in my Google search results for the SanDisk Ultra drives stating that they could have 60 terabytes written before performance was affected. What can I expect from the SanDisk Extreme 120GB drive?

I have two 120GB Extremes in a RAID 0 config with about 1TB of data written on each drive. I have tried to use the SSD-Life Lite tool, but because of my RAID array it does not detect the SSDs. Would it be possible to find a Linux tool that does the same as SSD-Life: turn RAID off in my BIOS, boot to a live CD/USB, examine the SSDs, then change the BIOS back and have everything work as normal? I did break the array when I upgraded the firmware for the drives, and when I was done I put the settings back and Windows booted normally, but I think trying to do that for this may not be that easy.
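
Something like the rough sketch below is what I have in mind, assuming a live Linux environment with smartmontools installed and the drives showing up as plain block devices (the /dev/sdX names are just my guess):

```python
# Rough sketch only: dump the SMART attribute table for each SSD using
# smartctl from smartmontools, assuming the RAID members are still visible
# as ordinary block devices under Linux.
import subprocess

for device in ("/dev/sda", "/dev/sdb"):      # guessed device names
    print(f"=== {device} ===")
    result = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True)
    print(result.stdout)
```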

Basically I am trying to figure out how much life I can get out of these drives before I see a performance drop.

Also, I did see the MTBF for the 120GB rated at 2.5 million hours. How is this different from the LDE? And 2.5 million hours is about 285 years; how do they come up with such numbers?

Thanks,

Woz

Ok, I think I can answer your questions.

First of all, to see the drives inside a RAID array, you can use CrystalDiskInfo.

Second of all, for the 120GB drive, when the parameter E9 reaches about 330,000 (decimal) NAND gigabytes written, E7 will be at 10 and the drive will have fully used its guaranteed NAND cycles. How this translates into host writes depends on your workload and how compressible or random it is. It could be as low as 100TiB of host writes given a difficult workload, or as high as 600TiB given a good workload. You will likely be able to see what your workload is like after examining the drives with CrystalDiskInfo: compare E9 to F1 to see how many host writes you have done compared to the NAND writes the drive has done.
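
As a rough illustration of that arithmetic (the numbers below are made up, not read from your drives):

```python
# Illustration only, with made-up raw values; substitute the decimal raw
# values of E9 and F1 that CrystalDiskInfo reports for your own drives.
NAND_SPEC_GIB = 330_000      # approximate guaranteed NAND writes (E9) quoted above

e9_nand_gib = 1_280          # example: lifetime NAND GiB written (E9)
f1_host_gib = 1_000          # example: lifetime host GiB written (F1)

write_amplification = e9_nand_gib / f1_host_gib
host_writes_at_spec = NAND_SPEC_GIB / write_amplification

print(f"Write amplification: {write_amplification:.2f}")
print(f"Host writes at end of spec: ~{host_writes_at_spec:,.0f} GiB")
```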

Once E7 reaches 10, how long the drive lasts after that is anyone’s guess. If you are lucky, you may get double that amount of life out of the drive or more. But once it is at 10, it is basically on borrowed time.

Third, there is NO performance drop as the drive ages. When it finally dies, based on what endurance testing on the SandForce controller has shown, it will most likely just panic lock and never be accessible again. Like all drives, keep backups and it shouldn’t be a problem.

Fourth, MTBF has nothing to do with Long-term Data Endurance. MTBF is based on the overall calculated (not tested) random component failure percentage. It does not take into account firmware quality, component aging, or wearout. In the real world, I would expect several good years of service out of an SSD.
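
To put a number on why an MTBF figure is not a per-drive lifetime: spread over a large population, a 2.5-million-hour MTBF works out to roughly a 0.35% chance of any given drive failing in a year of continuous use. A back-of-the-envelope sketch, not a datasheet value:

```python
# Back-of-the-envelope conversion of MTBF to an annualized failure rate (AFR).
# This is a population statistic, not a prediction for any single drive.
MTBF_HOURS = 2_500_000       # rated MTBF quoted for the 120GB Extreme
HOURS_PER_YEAR = 8_766       # 24 * 365.25

afr = HOURS_PER_YEAR / MTBF_HOURS
print(f"Annualized failure rate: {afr:.2%}")   # roughly 0.35% per drive-year
```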

Thank you for the reply. I actually have been using CrystalDiskInfo for some time now; I believe it was what I used to determine my firmware was out of date for the Extremes. From what you said, I think I haven’t even put a dent into what my drives can do. Perhaps you could look over the two pics I am including here and confirm that I have nothing to worry about for a long time.

Do I have this right, that what determines the “Health Status 100%” is the number in ‘E7’?

One thing I do not understand is why ‘F1’ and ‘F2’ are both 0? 

https://dl.dropbox.com/u/13003947/SSD1.png

https://dl.dropbox.com/u/13003947/SSD2.png

Also, what is a good program to do some hard drive benchmarking? I would like to compare my SATA 3 SSD array to my SATA 2 HDD array in another computer.

Quote: Third, there is NO performance drop as the drive ages. When it finally dies, based on what endurance testing on the SandForce controller has shown, it will most likely just panic lock and never be accessible again. Like all drives, keep backups and it shouldn’t be a problem.

I am sorry, but this is wrong in this case. If you have two drives in a RAID array, TRIM isn’t working, therefore write speed can decrease over time. Furthermore, in general, writing to flash memory chips gets slower the more often you write to them:

Wikipedia: “As a chip wears out, its erase/program operations slow down considerably.”

I am not sure if this also applies to the Toggle NAND flash memory type SanDisk uses in the Extreme drives, but I assume so.

Look at the raw values for E9/F1/F2 instead of the normalised values.

They are currently expressed in hexadecimal, but can be changed to decimal numbers using the menu:

Function -> Advanced Feature -> Raw Values -> 10 [DEC]

Don’t select either of the byte-split “10 [DEC] - 2 byte”-type options.

Then the raw values will be a little more understandable for mortals.

Looking at your values, Drive 1 is running at about 1.28 write amplification, and thus would reach end-of-specification life (given a similar workload for the rest of its long, long life) at about 256,000 GiB written. At its current rate, that will be at around 520 thousand hours, or 60 years of age (something else will definitely happen before then … probably).

Drive 2 is running slightly worse amplification (but with less written over time), so at its 1.41 write amplification it would reach end-of-specification life at about 234,000 GiB written. That makes it about 565 thousand hours (less written on this drive, so a longer life), or 66 years of age.
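
For anyone who wants to reproduce the projection, here is the back-of-the-envelope arithmetic I’m using (the inputs below are placeholders, not the exact values from the screenshots):

```python
# Sketch of the projection above; plug in your own decimal raw values.
NAND_SPEC_GIB = 330_000      # guaranteed NAND writes for the 120GB drive

def projected_life(nand_gib_written, host_gib_written, power_on_hours):
    """Estimate host writes and hours at end-of-specification NAND life,
    assuming the current workload (and write amplification) continues."""
    write_amp = nand_gib_written / host_gib_written
    host_gib_at_spec = NAND_SPEC_GIB / write_amp
    hours_at_spec = power_on_hours * (host_gib_at_spec / host_gib_written)
    return write_amp, host_gib_at_spec, hours_at_spec

# Placeholder inputs roughly matching Drive 1's ~1.28 write amplification:
wa, gib, hours = projected_life(nand_gib_written=1_280,
                                host_gib_written=1_000,
                                power_on_hours=2_000)
print(f"WA {wa:.2f}, ~{gib:,.0f} GiB host writes, ~{hours / 8_766:.0f} years")
```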

Of course, this needs to be taken with a reasonably sized grain of salt, as I have no idea how the controller or NAND will handle decades of life … in all likelihood it probably won’t. But yeah, you really have nothing to worry about in terms of NAND wear.

Good programs to do SSD benching are CrystalDiskMark, AS SSD, and ATTO.

Just a suggestion of how far a SanDisk Extreme 120GB drive may go past its specification life:

After 95 days of constant write load, this drive has endured 721,201 GB of NAND writes, which is more than twice the specification life of the NAND. The 4 reallocations happened very early in the drive’s life (within the first few TiB written). The drive is still going as fast as it did when it was originally started: about 125MB per second write average, with 145MB per second during the write cycle.

@memrob wrote:

I am sorry, but this is wrong in this case. If you have two drives in a RAID array, TRIM isn’t working, therefore write speed can decrease over time. Furthermore, in general, writing to flash memory chips gets slower the more often you write to them:

Wikipedia: “As a chip wears out, its erase/program operations slow down considerably.”

I am not sure if this also applies to the Toggle NAND flash memory type SanDisk uses in the Extreme drives, but I assume so.

I actually have evidence to back up my claims:

http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm&p=5135106&viewfull=1#post5135106

After running a SanDisk Extreme through 96 days of writing, there has been no degradation in write performance. In fact, it is a little bit faster now than it was when the drive started.

The sort of performance degradation the Wikipedia article talks about occurs at very much higher states of NAND wear, and can sometimes be seen if you abuse an older Intel drive to death. It takes more like 10-20 times the rated NAND life, and doesn’t occur before the drive reaches end-of-specification life. In any case, the SandForce controller will panic lock well before this effect comes into play (as seen on several SandForce-based drives that have already been write-endurance tested).

As for the performance degradation from missing TRIM while in RAID, you can reduce this by not allocating all the drive space to the array (leaving more spare area for garbage collection), or simply secure erase the drives when they become too annoyingly slow. It has nothing to do with NAND aging, which is what this thread is about.

For people with Intel 7-series chipsets running RST 11.2 or newer, SSD RAID 0 arrays do support TRIM; people running older Intel chipsets or alternative chipsets do not have TRIM support.

AnandTech - Intel Brings TRIM to RAID-0 SSD Arrays on 7-Series Motherboards, We Test It
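
As a quick sanity check on the OS side (Windows), the snippet below queries whether Windows is issuing TRIM at all; note it says nothing about whether the RST driver actually passes TRIM through to the array members:

```python
# Windows-only check of the OS-level TRIM setting via fsutil.
# "DisableDeleteNotify = 0" means the OS issues TRIM; whether the RAID
# driver forwards it to the drives is a separate question.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True)
print(result.stdout.strip())
```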