Reply
Newbie
Posts: 1
Registered: ‎02-11-2019

Confused

[ Edited ]

 I probably don't exactly know how these things work, so I'd appreciate some explanation. I recently purchased a factory-sealed 128GB SanDisk Ultra. When I popped it into my laptop, it tells me the available space is 119GB. I do know there is usually slightly less space than what is labeled, but I don't understand a 9GB difference. Can someone enlighten me? That's a lot of space to be missing, in my opinion.

SanDisk Guru
Posts: 4,548
Registered: ‎07-18-2007

Re: Confused

It is due to the way the OS calculates capacity. The article below explains it:

 

https://kb.sandisk.com/app/answers/detail/a_id/46/

Lsi
Newbie
Posts: 15
Registered: ‎03-28-2018

Re: Confused

Storage has been marketed this way for ages, basically for simplicity.  The gap comes from units: the OS reports capacity in binary units (1,024 bytes per "KB", so 1,073,741,824 bytes per "GB"), while manufacturers label drives in decimal units (1,000 bytes per KB, as SI defines the prefix).  So a 480GB-class drive is one with approximately 480,000,000,000 bytes, whereas 480 binary gigabytes would be 515,396,075,520 bytes.
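A quick sketch of the arithmetic (numbers from the original poster's card; nothing drive-specific assumed):

```python
# A "128 GB" card is 128 * 10**9 bytes on the label.
# The OS divides by 1024**3 but still prints "GB", so:
labeled_bytes = 128 * 1000**3          # manufacturer's decimal gigabytes
reported_gb = labeled_bytes / 1024**3  # what the OS displays as "GB"
print(f"{reported_gb:.1f} GB")         # ~119.2 GB, matching the 119GB reading
```

Same number of bytes either way; only the unit the OS prints changes.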

 

Any form of storage has reserved space to enable (theoretically) seamless defect management, but beyond that, flash-based media needs additional reserved space for factory-locked overprovisioning.  Without overprovisioning, flash-based storage would perform badly, inconsistently, and unreliably by comparison, even with the highest-grade SLC chips.

 

The general rule of thumb is that the most reliable / fastest-performing flash-based drives designed for enterprise "write intensive" steady-state workloads are not that way primarily because of their flash quality, but because of massive overprovisioning.  Combining a high overprovisioning percentage with higher-quality grades of flash just pushes the endurance (but also the cost) even higher.  The Micron 5100 MAX drive is a classic example of the benefits of a high percentage of locked factory overprovisioning, despite using 3D eTLC flash.

 

Consumer drives normally have low levels of overprovisioning, which is also why they are more prone to stuttering and uneven performance, and why they normally only quote "burst" data rates rather than sustained / steady-state ones.  But this gives users the most usable space for their money, and such drives are engineered to be "good enough" rather than to deliver ultimate performance under every load condition.
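For anyone curious how overprovisioning is usually quoted as a percentage, here's a rough sketch.  The formula and the example figures below are illustrative assumptions, not the specs of any particular drive:

```python
# Overprovisioning is commonly expressed relative to usable capacity:
#   OP% = (raw_capacity - usable_capacity) / usable_capacity * 100
def op_percent(raw_bytes, usable_bytes):
    return (raw_bytes - usable_bytes) / usable_bytes * 100

# Hypothetical example: 2048 GiB of raw NAND sold as a 1920 GB drive.
raw = 2048 * 1024**3     # raw NAND, binary units
usable = 1920 * 1000**3  # advertised capacity, decimal units
print(f"{op_percent(raw, usable):.1f}% overprovisioning")  # ~14.5%
```

Enterprise "write intensive" drives push that percentage much higher by exposing less of the raw NAND to the user.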