Avoiding bad sectors

Hi,

Apologies if this is already covered elsewhere, I searched but was unable to find the answer I’m looking for.

I have a SanDisk Ultra Plus 240GB (SDSSDHP256G) drive which has recently developed problems. I've been using it as a LUKS-encrypted data volume on a Linux system, mounted as an ordinary filesystem. One day something went wrong and the system could no longer read the files on it. After a reboot the drive wouldn't mount at all, reporting 'Can't read superblock'.
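For context, the mount sequence is roughly the following (the device and mapper names here are placeholders rather than my exact setup):

    # unlock the LUKS container, then mount the filesystem inside it
    cryptsetup open /dev/sdb1 datavol
    mount /dev/mapper/datavol /mnt/data

The 'Can't read superblock' error comes from the mount step.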

Using dd I have been able to establish that there are a few groups of bad sectors: ~7MB at 11%, ~112MB at 29%, ~260MB at 49%, and further unreadable areas from 71% of the drive capacity through to the end. So the majority of the drive can still be read successfully.
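For what it's worth, the read scan was something along these lines (the actual device name and block size may differ, this is just the general shape of it):

    # read the whole raw device, carrying on past unreadable blocks instead of
    # aborting, and note the offsets at which dd reports read errors
    dd if=/dev/sdb of=/dev/null bs=1M conv=noerror status=progress

The percentages above are just those error offsets expressed relative to the 240GB capacity.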

Obviously this drive has problems and should be replaced for any serious use, but I'm a techie and I like to do what I can to repair things. I realise I could repartition the drive so that the partitions only cover the working areas (roughly as sketched below), but is there a tool that will teach the drive itself to map out the failed areas and give me what's left as a single volume?
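By repartitioning around the damage I mean something like this (the offsets are made-up placeholders based on the rough percentages above, not measured values):

    # leave gaps over the known-bad regions and only create partitions
    # covering the readable ranges
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart data1 ext4 1MiB 24GiB    # before the ~11% bad area
    parted -s /dev/sdb mkpart data2 ext4 26GiB 64GiB   # between the ~11% and ~29% areas
    # ...and so on for the remaining good ranges

That works, but it leaves me with several small volumes rather than one.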

I’ve attached an image of the SanDisk SSD Dashboard report showing the SMART values. The Dashboard reports the drive as healthy, but running the SMART self-tests causes the Dashboard to crash, and from what I can see those self-tests fail.
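If the exact attributes would help, I can also dump them on the Linux side with smartmontools, e.g.:

    # print all SMART attributes plus the self-test log
    smartctl -a /dev/sdb
    # start a long (extended) self-test
    smartctl -t long /dev/sdb

(again, /dev/sdb is just a placeholder for the SSD's device node).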

Cheers,

Mark

The drive's firmware should be handling defect management transparently, without the OS ever seeing bad sectors or read errors. Your best bet is to update the firmware to the latest version and then secure erase the drive, which gives the controller a chance to retire the failing blocks.
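If you want to do the erase from Linux, a rough sketch with hdparm looks like this. It is a generic ATA Secure Erase recipe rather than anything SanDisk-specific; it destroys all data on the drive, and the device name and password are placeholders, so double-check everything before running it:

    # confirm the drive is not 'frozen' (a suspend/resume cycle often unfreezes it)
    hdparm -I /dev/sdb | grep -i frozen
    # set a temporary user password, then issue the secure erase
    hdparm --user-master u --security-set-pass p /dev/sdb
    hdparm --user-master u --security-erase p /dev/sdb

The SanDisk Dashboard should also be able to handle the firmware update itself, if you haven't already used it for that.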

To confirm the drive is still reliable enough to use after this process, run a full-disk read/write test with something like HD Tune Pro, then check that the SMART health data is OK and that the SMART 'failed' status has not been triggered. If it has, your BIOS (unless it is buggy) will warn you to back up and replace the drive the next time the system boots.
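If you would rather stay on Linux instead of using HD Tune Pro, a rough equivalent (destructive, so run it after the erase and before putting data back on the drive) is:

    # four-pass destructive write-and-verify test of the whole drive
    badblocks -wsv /dev/sdb
    # then check the overall SMART health assessment
    smartctl -H /dev/sdb

Any errors from badblocks, or a health result other than PASSED, means the drive really is past the point of being worth saving.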