How is ExpressCache 1.3.2 working?

After installation, most of the problems from 1.3.1 remain on my SB850 board (I’ve tested this system >12h Prime-stable multiple times): the delayed boot with the cache >20GB filled, a tendency to crash on boot when the cache is about 28GB filled (stuck loading Windows with the HDD LED on, on both W7 SP1 and W8.1 x64), and sometimes a hang when idle after a few hours (probably while ReadyCache is doing some maintenance). Excluding drives doesn’t make a difference to this behaviour, but disconnecting the SSD so ReadyCache stops working restores system stability.

Right after setting the cache to only 16GB (and also excluding two HDDs I don’t want cached), I haven’t had any problems with the cache nearly filled for the last two weeks: no frozen-boot delay and no system crashes.

Couldn’t this be some SandForce firmware issue that causes trouble with an almost-full drive on certain SATA controllers, which ReadyCache simply triggers?

I don’t seem to have the “cache has been reset” problem. When it happens, it’s usually due to an understandable trigger (a manual or scheduled defrag of cached drives, updates on boot, large file changes, etc.).

Another thing to look into, if it isn’t solved already: when I installed 1.3.0 on this PC, which had 4 HDDs with 7 visible partitions, ReadyCache wouldn’t work; the cache would always stay at 0.07GB. After reducing the number of partitions to 4 (by merging partitions on the same drives), it started caching normally.

1.3.2 resulted in 3 blue screens of death in one week. Prior to installing 1.3.2/1.3.110, I had disabled caching, and my Windows 7 system was rock solid, if a little slow. Since installing 1.3.2, the same instability that caused me to disable it previously has reemerged.

mattschnaidt, maybe you could try what I did: since reducing the cache to only 16GB, my stability problems seem to have disappeared.

To do it, I ran diskpart from an elevated Command Prompt:

diskpart (opens diskpart)

list disk (list drives on system, take note of #)

select disk # (to select the sandisk SSD listed with 29GB)

clean (clears partitioning, make sure you have the correct #, else you’ll lose some other drive’s data)

exit (exits diskpart)

eccmd -partition # 16384 (creates a 16GB cache partition for use)

eccmd -format (makes that partition ready for readycache use)

After that, launch ReadyCache, skip the error message, and size changes should take effect immediately. A reboot is advisable anyway.

To revert to using the full drive, just repeat the procedure but run “eccmd -partition #” without specifying any size. No need to reinstall the software.
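For anyone unsure of the size argument in the step above: eccmd -partition takes the size in MiB, so a 16GB cache is 16384. A quick Python sketch of the arithmetic (the gib_to_mib helper is just for illustration, it is not part of eccmd):

```python
def gib_to_mib(gib: int) -> int:
    """Convert a cache size in GiB to the MiB value eccmd -partition expects."""
    return gib * 1024

# Sizes mentioned in this thread, for the 32GB ReadyCache drive:
for gib in (8, 16, 18, 24):
    print(f"eccmd -partition <disk#> {gib_to_mib(gib)}  # {gib}GB cache")
```

This also makes the later confusion in the thread easy to spot: 16384 is 16GB, while 18432 is 18GB.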

To make sure this is a ReadyCache-related BSOD, you could run BlueScreenView and analyse the dump to check the stop code and the files most involved at that moment.

AlleyViper, I’m noticing the boot delay, and what my feeble memory says is a cache-operation slowdown, as I approach 28GB. I’d like to try reducing my cache partition size as you show above, but I want to make sure I get the syntax right, as I’ve gotten used to GUI partitioning utilities.

For comparison, here is a method slotmonsta posted in the “ReadyCache ssd hangs on startup” thread to make an 8k partition:

  1. From the command line: ECCmd -format (this will clear the information out of the cache) 
  2. Delete the partition (this is done from the Disk Management pane by right-clicking on the drive and selecting “delete” on the partition) 
  3. From the command line: ECCmd -partition (drive number) 8192 (this will create a partition of 8GB in size) 
  4. From the command line: ECCmd -format (this will format the new partition and make it ready for EC/RC to utilize)

So:   eccmd -partition requires a drive number, but eccmd -format does not?

Thanks for helping someone who started with 8" floppies but hasn’t used the command line in years and doesn’t want to partition or format the wrong hard drive!


Doing slotmonsta’s procedure should end up the same (except for the exact cache size). I just find it more practical to use diskpart, as it lists the correct drive number to use next with eccmd and is also CMD-based. It’s just a matter of fewer clicks, and I also skip the first format instruction because the partition will be deleted anyway. Just do whatever you’re more comfortable with!

The eccmd -partition command can take an optional drive ID, which I find safer to specify (as slotmonsta first instructed), but eccmd -format doesn’t accept other arguments/switches, so I assume it only works when it finds a valid partition on a valid SanDisk SSD. If you just run eccmd, the available commands are described (except for the unsupported but working: -exclude DriveLetter; -clearexclusions; -preload Filename [UsageCount]).

Btw, I chose 16GB (after a few days of good results with only 8GB, which is a bit short for caching) because most problems on my system seem to start when there’s about >20GB of filled cache: either a delayed boot or a random hard crash when the cache is nearly filled.

Thanks very much AlleyViper!   Your explanations give me the confidence to give this a try.

Unexpectedly, when I examined my original cache partition, it showed the starting offset as 2048. I’d always heard to start at 4096 (or multiples) to get 4K alignment?

I was researching SSD partitioning and ran into an article saying that trying to use more than 80% of the allocated space on an SSD will result in performance issues. Well, 0.8 x 29 GB = 23.2 GB, and that number looks familiar! Maybe these full-cache slowdowns are just symptoms of a normal SSD’s need for at least 20% slack space, and reducing the cache partition size is the actual fix and not just a work-around…
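The arithmetic checks out, and it is easy to see where that rule of thumb lands for other sizes too (the 80% figure is the article's rule of thumb, not a SanDisk spec):

```python
def fill_threshold_gb(capacity_gb: float, usable_fraction: float = 0.8) -> float:
    """GB of cached data at which the 80%-rule slowdown would be expected to begin."""
    return capacity_gb * usable_fraction

# Default 29GB ReadyCache partition: threshold lands right where delays were reported.
print(round(fill_threshold_gb(29), 1))  # 23.2
# A 16GB partition would hit its 80% point well before the drive itself is full.
print(round(fill_threshold_gb(16), 1))  # 12.8
```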

1024/2048/4096KB offsets are all 4K-aligned, so don’t worry. With diskpart you can check the current partition offset with “list partition” after “select disk #”. Currently my 16GB ReadyCache partition (created with eccmd) has a valid 1024KB offset.
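To see why all those offsets are fine: 4K alignment only requires the offset in bytes to be a multiple of 4096, and any whole-MiB offset (1024KB, 2048KB, …) satisfies that. A quick check (is_4k_aligned is just an illustrative helper):

```python
def is_4k_aligned(offset_kib: int) -> bool:
    """True if a partition offset given in KiB falls on a 4KiB boundary."""
    return (offset_kib * 1024) % 4096 == 0

# The offsets discussed in this thread are all aligned:
for kib in (1024, 2048, 4096):
    print(kib, is_4k_aligned(kib))  # all True
```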

What you pointed out is the reason I already asked whether this isn’t simply a SandForce SSD firmware issue (in case this is an old SF-based SSD). The amount of free space needed for consistent performance can vary a lot with the controller/firmware; some older drives might even need half the drive free to make up for lacking garbage collection, wear leveling, TRIM, etc.

But such things *should* cause a noticeable IOPS drop, not a total halt for some seconds like we experience on boot, so the caching software is also a big suspect.

Anyway, I’m using only 16GB because I’ve seen delays with less than 24GB cached. Otherwise, if the software were working 100%, I’d set a 24GB limit to ensure better wear leveling for such a small drive. Strangely, my 32GB SSD came with 1 reallocated sector from the factory, which put its health at 95%.

Well, up to now my 32GB ReadyCache has been working great. However, a few niggles (but not complaints) are:

  1. During Windows 7 boot-up the logo will pause (like it’s crashed), but around 5 seconds later it will continue and Windows will load as normal.

  2. The network takes much longer to load, so much so that I am now manually starting my Steam client, as it loads very quickly. I’ve asked about a solution for this on this forum.

  3. The ReadyCache is so quick I now don’t see the Windows welcome screen; it skips it and the desktop loads. This isn’t a complaint, but it would be nice if we could fine-tune the ReadyCache!

Apart from these, I’m extremely happy with my ReadyCache SSD, and it’s well past my expectations!

AlleyViper - thanks so much for your response.  Just followed your instructions (which were very easy to follow!), and I’m hoping for a good result!  Will report back to let you know how it went.  Thanks again!

Ok, so I capped my ReadyCache partition at 16GB using the instructions below:

  1. From the command line: ECCmd -format (this will clear the information out of the cache)
  2. Delete the partition (this is done from the Disk Management pane by right-clicking on the drive and selecting “delete” on the partition)
  3. From the command line: ECCmd -partition (drive number) 16384 (this will create a partition of 16GB in size)
  4. From the command line: ECCmd -format (this will format the new partition and make it ready for EC/RC to utilize)

Here is a shot from ExpressCache -

Here is my sub 1 minute boot to desktop -

Here is my very long boot just to the login screen before partitioning to 16GB -

So far so good!!! I will update if things change.  Sandisk needs to fix this problem!!!

rlewandowski23, that’s strange; using -partition # 16384 gives me an exact 16GB total in the GUI. About 12-14GB will be filled then.

Good luck to anyone who tried and can post her/his findings. I’ll be away from my test machine for a couple of weeks.

I used eccmd -partition (drive # shown in diskpart, in my case 1) 16384, and the ExpressCache splash screen, eccmd -info, and MiniTool Partition Wizard all show a 16.0 GB partition. Maybe you typo’d 18384, which is very close to the 18432 of an 18 GB partition.

I’d leave it at 18 GB; my memory is that AlleyViper was the only person observing delays as low as 17 GB, and other reports were 23 GB and up. If you can, let us know what happens as you fill past 17 GB.

Is this product not compatible with VMware?

Whenever I start VMware, my HDD starts reading or writing like crazy, and after a while either it freezes or it tries to continue like normal (but not normal…).

Just a thought:

As I look over the delay problems reported in this thread, I’m not so sure they are specifically related to the 1.3.2 release so much as the general issue of overprovisioning an SSD. For those needing an intro to the subject, there is a good one written by an LSI/SandForce engineer.

My thought is this:  Suppose Sandisk offered this same caching SSD product in a 64 GB size with a 32 GB active partition.

The retail cost should only be about $20 more than the current product. Considering how much caches get written to and how much housekeeping is involved over time, I’d pay an extra $20 for a “double provisioned” 32 GB cache with both increased performance and longevity.
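As a rough sketch of what that extra flash buys, overprovisioning is usually quoted as spare capacity relative to user-visible capacity. Using the numbers from this thread (the 64GB/32GB part is the poster's hypothetical; 32GB physical with ~29GB usable is the current drive):

```python
def overprovision_pct(physical_gb: float, usable_gb: float) -> float:
    """Overprovisioning: spare capacity as a percentage of user-visible capacity."""
    return (physical_gb - usable_gb) / usable_gb * 100.0

# The proposed "double provisioned" part: 64GB of flash behind a 32GB cache.
print(overprovision_pct(64, 32))            # 100.0
# The current drive: 32GB of flash behind a ~29GB partition.
print(round(overprovision_pct(32, 29), 1))  # 10.3
```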

Yeah, you’re right, I did typo that. I did make an 18GB partition. The partition command I ran was ECCmd -partition (drive number) 18432.

So far my cache has filled up to 14GB, but I haven’t loaded any PC games or anything yet. Do we know the maximum cache partition size we can use before we start seeing problems?

NWGuy, the thought that the drive is too small and should be larger and overprovisioned makes tons of sense. The OCZ Synapse drive that I had was set up that way: it was a 64GB drive overprovisioned down to 32GB. I was wondering why ReadyCache wasn’t set up that way.

I thought I was getting a better drive, better supported via software, by replacing my OCZ Synapse (which uses the now-defunct Dataplex software)… over time I am discovering that may not be the case :(.

I think the SanDisk / Condusiv ReadyCache product is good. Especially since the release of the 1.3.2 software update, there have been relatively few bug reports.

A 17 GB fill of the default 29.xx GB partition is the lowest reported fill where delays occurred; that was AlleyViper. All the other reports I found started at just over 23 GB (there were several in the 23.xx range, and then up from there). It bears noting that the cache still works at 100% fill for almost everyone, so at this point the task is mostly just improving performance.

Just to clarify, when I posted that delay with only 17GB cached, it was a Windows-logo freeze at the usual point, but it was way shorter (only about 2-3s) than the annoying ones with >22GB filled. They seem related, anyway.

Also, after reducing the caching partition, I’ve had no more occasional freezes later in the boot with the HDD LED stuck lit. Those happened when the cache was near full.

Most probably, a cache partition of about 2/3 the SSD’s size should be near the limit before trouble happens for most people, given that it won’t fill completely. I still hope most issues can be solved either by software or by an SSD firmware update.


Reset the cache twice, still hanging on Windows boot.



When will 1.3.3 be coming out to address these boot delay issues?


Hooked it up to my RAID card to see if that works better again :P It just disappears after a while and sets the RAID alarm off, so I guess it’s RMA time :P