How is ExpressCache 1.3.2 working?

Thanks very much AlleyViper!   Your explanations give me the confidence to give this a try.

Unexpectedly, when I examined my original cache partition it showed the starting offset as 2048.   I’d always heard to start at 4096 (or multiples) to get a 4K alignment??

I was researching SSD partitioning and ran into the following: http://www.tomshardware.com/forum/292105-32-best-format-partition-performance-wear-leveling#6100306 .  One thing they say is that trying to use more than 80% of the allocated space on an SSD will result in performance issues.   Well, 0.8 x 29 GB = 23.2 GB – that number looks familiar!   Maybe these full-cache slowdowns are just symptoms of a normal SSD issue of needing at least 20% slack space, and reducing the cache partition size is the actual fix and not just a work-around…

1024/2048/4096KB is also 4K aligned, so don’t worry. With diskpart you can check the current partition offset with “list partition” after “select disk #”. Currently my 16GB ReadyCache partition (created with eccmd) has a valid 1024KB offset.
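For example, a quick check looks something like this (the “1” is just a placeholder; use whatever disk number “list disk” shows for the ReadyCache):

diskpart
  list disk
  select disk 1
  list partition
  exit

The Offset column in the “list partition” output just needs to be a multiple of 4KB, so 1024KB, 2048KB, and 4096KB are all fine.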

What you pointed out is the reason why I’ve already asked if this isn’t simply some SandForce SSD firmware issue (in case this is an old SF-based SSD). The amount of free space needed for consistent performance can vary a lot with the controller/firmware; some older drives might even need half of their size free to make up for lacking garbage collection, wear leveling, TRIM, etc.

But such things *should* cause a noticeable IOPS drop, not a total halt for some seconds like we experience on boot, so the caching software is also a big suspect.

Anyway, I’m using only 16GB because I’ve seen delays with less than 24GB cached. Otherwise, I’d set a 24GB limit if the software were working at 100%, to ensure better wear leveling for such a small drive. Strangely, my 32GB SSD came with 1 reallocated sector from the factory, which put health at 95%.

Well, up to now my 32GB ReadyCache has been working great; however, a few niggles (but not complaints) are:

  1. During Windows 7 boot-up the logo will pause (like it’s crashed); however, around 5 seconds later it will continue and Windows will load as per normal.

  2. The network takes much longer to connect, so much so that I am now manually starting my Steam client, as this loads very quickly. I’ve asked about a solution for this on this forum ( http://forums.sandisk.com/t5/SanDisk-ReadyCache-SSD/network-taking-a-long-time-to-connect-since-installing/td-p/320394 ).

  3. The ReadyCache is so quick I now don’t see the Windows welcome screen; it skips this and the desktop loads. This isn’t a complaint; however, it would be nice if we could fine-tune the ReadyCache!

Apart from these I’m extremely happy with my ReadyCache SSD, and it has well surpassed my expectations!

AlleyViper - thanks so much for your response.  Just followed your instructions (which were very easy to follow!), and I’m hoping for a good result!  Will report back to let you know how it went.  Thanks again!

OK, so I capped my ReadyCache partition at 16GB using the instructions below:

  1. From the command line: ECCmd -format (this will clear the information out of the cache)
  2. Delete the partition (this is done from the Disk Management pane by right-clicking on the drive and selecting “delete”)
  3. From the command line: ECCmd -partition (drive number) 16384 (this will create a partition of 16GB in size)
  4. From the command line: ECCmd -format (this will format the new partition and make it ready for EC/RC to utilize)
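To make the above concrete, a minimal worked example, assuming the cache SSD shows up as drive 1 (double-check with diskpart “list disk” first, since the drive number is the risky part) and with the size given in MB (16 x 1024 = 16384):

ECCmd -format
(delete the old cache partition in Disk Management)
ECCmd -partition 1 16384
ECCmd -format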

Here is a shot from ExpressCache - https://www.dropbox.com/s/ipdhp1fnzr9i6tl/cache16gbpartition.JPG

Here is my sub 1 minute boot to desktop - https://www.youtube.com/watch?v=GEJDe0XFOZ4

Here is my very long boot just to login screen before partitioning to 16Gb - https://www.youtube.com/watch?v=40FpvBixpiI&list=UUh6FciVZH5s5E3TNHhjV9LQ

So far so good!!! I will update if things change.  Sandisk needs to fix this problem!!!

rlewandowski23, that’s strange; using -partition # 16384 gives me an exact 16GB total in the GUI. About 12-14GB will be filled then.

Good luck to anyone who tries this and can post their findings. I’ll be away from my test machine for a couple of weeks.

I used eccmd -partition (drive # shown in diskpart, in my case 1) 16384, and the ExpressCache splash screen, eccmd -info, and MiniTool Partition Wizard all show a 16.0 GB partition.   Maybe you typo’d 18384 – that’s very close to the 18432 of an 18 GB partition.

I’d leave it be at 18 GB – my memory is that AlleyViper was the only person observing delays as low as 17 GB; other reports were 23 GB and up.   If you can, let us know what happens as you fill past 17 GB.
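(For anyone following along: the size argument to -partition appears to be in MB, i.e. GB x 1024, so 16 x 1024 = 16384, 18 x 1024 = 18432, 25 x 1024 = 25600. A single mistyped digit shifts the partition size by a couple of GB.)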

Is this product not compatible with VMWare?

Whenever I start my VMware, my HDD starts to read or write something like crazy, and after a while either it freezes or it tries to continue like normal (but not normal…)

Just a thought:

As I look over the delay problems reported in this thread, I’m not so sure they are specifically related to the 1.3.2 software as much as the general issue of over-provisioning of an SSD.   For those needing an intro to the subject, here is a good one written by an LSI/SandForce guy: http://www.edn.com/design/systems-design/4404566/1/Understanding-SSD-over-provisioning .

My thought is this:  Suppose Sandisk offered this same caching SSD product in a 64 GB size with a 32 GB active partition.

The retail cost should only be about $20 more than the current product.  Considering how much caches get written to and how much housekeeping is involved over time, I’d pay an extra $20 for a “double provisioned” 32 GB cache with both increased performance and longevity.
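For what it’s worth, my understanding of the usual definition (roughly what that EDN article works from) is:

over-provisioning % = (physical NAND capacity - user-visible capacity) / user-visible capacity

so a 64 GB drive exposing only 32 GB would be 100% over-provisioned, versus the roughly 7% every drive effectively gets for free from the GB vs GiB difference.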

Yeah, you’re right, I did typo that.  I did do an 18GB partition.  The partition command I ran was: ECCmd -partition (drive number) 18432.

So far my cache has filled up to 14GB.  But I haven’t loaded any PC games or anything yet.  Do we know the maximum cache partition size we can use before we start seeing problems?

NWGuy, the thought that the drive is too small and should be larger and over-provisioned makes tons of sense.  The OCZ Synapse drive that I had was set up that way: it was a 64GB drive that was over-provisioned to 32GB.  I was wondering why ReadyCache wasn’t set up that way.

I thought I was getting a better drive that was better supported via software by replacing my OCZ Synapse (which uses the now-defunct Dataplex software)… over time I am discovering that may not be the case :(.

I think the SanDisk / Condusiv ReadyCache product is good.    Especially since the release of the 1.3.2 software update, there have been relatively few bug reports.

A 17 GB fill of the default 29.xx GB partition is the lowest reported fill where delays occurred; that was AlleyViper.   All the other reports I found started at just over 23 GB, there were several in the 23.xx range, and then up from there.    It bears noting that the cache is still working at 100% fill for almost everyone; the task at this point is mostly just trying to improve performance.

Just to clarify, when I posted that delay with only 17GB cached it was due to a Win logo freeze at the usual point, but it was way shorter (only about 2-3s) than the annoying ones with >22GB filled. They seem related, anyway.

Also, after reducing the caching partition I’ve had no more occasional freezes later in the boot with the HDD LED stuck lit. Those happened when the cache was near full.

Most probably, a cache partition of 2/3 the SSD size should be near the limit before trouble happens for most, given that it won’t fill completely. I still hope most issues can be solved either by software or an SSD firmware update.


Reset the cache twice, still hanging on Windows boot.


Hello,

When will 1.3.3 be coming out to address these boot delay issues?

Thanks.

Hooked it up to my RAID card to see if that works better again. :P It just disappears after a while and sets the RAID alarm off, so I guess it’s RMA time. :P

mattschnaidt, rlewandowski23, NWGuy, and anyone that tried a cache size reduction: did you also notice any improvements after this time of testing?

With my cache drive capped at 18GB I haven’t had any problems yet.  I wish SanDisk would fix this so I didn’t have to cap the drive. Honestly, it’s beginning to seem like a design flaw, and they should have made it a 64GB drive that is over-provisioned to 32GB.

This is exactly what the OCZ Synapse drive I used to use did.  Honestly, SanDisk should just release a new drive and give us all discounts on a new one if we send the old one in.

I tried “Preview” before submitting a long post and could not find a way to edit the post further; in so doing I lost the post – out of time to type it all again now.

So without explanations, my recommendation for anyone experiencing what they feel are excessive boot delays is to try a partition size of 25GB (25 x 1024 = 25600) using one of the methods previously discussed in this thread.  That may be the largest cache size you can use and still reduce this problem, if you are experiencing it.
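For example, using the same four-step method quoted earlier (drive number 1 is just an assumption here; substitute whatever diskpart’s “list disk” reports for your ReadyCache), the partition step becomes: ECCmd -partition 1 25600.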

Regardless of the partition size one uses, occasionally seeing the Windows logo for up to two revolutions of the dots is part of normal cache operation.  Testing continues: I have two of these in different machines now, one will remain at the original 29.xx GB partition and one will vary.

I’ve given more thought to over-provisioning since I mentioned it as a possible way to reduce cache housekeeping delays.  If I personally bought a 64 GB SSD cache drive, I’d probably run it in the 50-60 GB range rather than half of that to over-provision.    In other words, I’d accept reduced longevity and occasional housekeeping delays to gain additional files cached.

Another issue is marketing.   Although one could justify over-provisioning a small SSD used as a cache due to the relatively low total cost of going from 32 to 64 GB of NAND, what about a 1TB SSD?    How are you going to sell increased reliability for one product and not another?   I can see it for ‘mission critical’, enterprise-level, cost-is-relatively-no-object installations, but for the home market, “I can give you TWICE the storage for the same money” would be impossible (IMO) to beat.

Yeah, that would be an unfortunate design flaw if it’s failing because the relatively cheap controller it uses can’t keep up with all the writes when full, causing resets or whatever.

That being said, I never got close to full on mine, so I guess it just failed after a while.