AlleyViper, here’s why I think you can go higher than 23 Gb without delays, even though you and others have seen issues starting near that point: as far as I know, all of those reports were based on running the full available OEM 29.82 Gb size for the active partition!
Now if that is not the case, and if you have experienced cache delays with smaller partition sizes, by all means don’t go above what works for you – and please post here.
We also need to be sure everyone reporting has upgraded to firmware 1.3.2, as bug reports of all kinds have decreased significantly with that revision, and I know some of the delay reports involving partition fill percentages were made on earlier firmware.
OK, here goes: How a cache works.
You start with a certain low number of disk reads per time period or reboot cycle to initially fill the cache – let’s say you begin by caching any LBA that has been read 5 times.
At some point your cache gets close to full (let’s say 80%) – clearly caching every LBA with 5 reads is going to take more space than you have. So you increment the read count required for an LBA to be cached by one, making it 6 in this case, and then delete any cached LBAs with fewer than 6 reads.
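To make that concrete, here’s a tiny Python sketch of the read-count/threshold behavior I’m describing. It’s purely illustrative – the capacity, trigger percentage, and every name in it are my own guesses, not anything pulled from the actual firmware:

```python
# Illustrative sketch only - all sizes, names and numbers are assumptions.
CACHE_CAPACITY_LBAS = 1_000_000   # hypothetical cache size, in LBAs
HOUSEKEEPING_TRIGGER = 0.80       # start housekeeping near 80% full

read_counts = {}   # LBA -> number of reads seen so far
cache = set()      # LBAs currently held in the cache
threshold = 5      # minimum reads before an LBA gets cached

def record_read(lba):
    """Count a host read and cache the LBA once it crosses the threshold."""
    read_counts[lba] = read_counts.get(lba, 0) + 1
    if read_counts[lba] >= threshold:
        cache.add(lba)
    if len(cache) >= HOUSEKEEPING_TRIGGER * CACHE_CAPACITY_LBAS:
        housekeeping()

def housekeeping():
    """Raise the bar by one read and evict the slower-moving LBAs."""
    global threshold
    threshold += 1
    for lba in [l for l in cache if read_counts[l] < threshold]:
        cache.discard(lba)
```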
In my case, with a 16 Gb partition, I observed the cache filling, then decreasing, then refilling again, just as you would expect – with slight boot delays where you would expect housekeeping to be clearing out the low-activity LBAs marked for deletion. Over several cycles, the highest cache fill I’ve observed was 86.25% and the lowest (after the initial fill) was 76.25%.
What people are observing with the OEM 29.82 Gb partition is that the initial cache housekeeping pass, triggered somewhere around 80% fill, is taking a lot longer than 5 seconds or so to complete – yet initial housekeeping passes on smaller partitions finish quickly, disproportionately quickly. It’s possible that some absolute limit is being exceeded – say, if I ran a “scan every file” virus scan over and over to max out the array of LBA read counts – but my guess is that that scenario has been anticipated. Most likely we’re running out of what I’ll call ‘scratch space’ when the partition is set to 29.82 Gb.
As an example, when deleting slower-moving LBAs, one might populate a separate array for housekeeping to work from – perhaps there is not enough room in the 2.18 Gb (code?) space between 29.82 Gb and 32 Gb for this array. Maybe at one time everything fit, but some code change ended up taking more space.
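Just to illustrate that guess, here’s a toy calculation of why a housekeeping pass could slow down disproportionately once its working data outgrows a fixed reserved area. Everything here (the function, entry_bytes, reserved_bytes) is invented for the example, not taken from the firmware:

```python
# Toy model of the 'scratch space' idea - all parameters are assumptions.
def plan_housekeeping(cached_lbas, entry_bytes, reserved_bytes):
    """Guess how a pass might behave if its candidate list must fit a fixed area."""
    needed = cached_lbas * entry_bytes
    if needed <= reserved_bytes:
        return "single fast pass"            # everything fits: quick housekeeping
    chunks = -(-needed // reserved_bytes)    # ceiling division
    return f"{chunks} slower passes"         # has to chunk, so it takes much longer
```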
If that’s the case, it MAY be a very near miss – maybe a 29.81 Gb partition would work. Most likely, making another full 2.18 Gb available by reducing the active partition size to 29.82 - 2.18 = 27.64 Gb would work. That would cover the “Oh, I thought I had the entire 2.18 Gb space for my data structures!” scenario. Not that that sort of thing ever happens :) Nor is it likely what’s happening here, since everyone’s cache would be affected.
HOWEVER, there’s always the chance that something does happen as cache fill nears 23 Gb in absolute terms. Having observed a maximum fill of 86% prior to housekeeping, call it 90% to be safe: a 25 Gb partition x 0.9 = 22.5 Gb of maximum fill, so AlleyViper, I really think you will be ‘safe’ with a 25 Gb partition.
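For anyone who wants to check my arithmetic, here it is as a few lines of Python – plain math, no firmware knowledge assumed:

```python
# Sizing arithmetic from the discussion above - nothing firmware-specific here.
suspect_fill_gb = 23.0   # absolute fill level where trouble has been reported
worst_case_fill = 0.90   # 86.25% observed before housekeeping, rounded up for margin

print(round(suspect_fill_gb / worst_case_fill, 2))  # 25.56 - largest partition staying under 23 Gb
print(round(25 * worst_case_fill, 2))               # 22.5  - worst-case fill of a 25 Gb partition
print(round(29.82 - 2.18, 2))                       # 27.64 - OEM size minus one more 2.18 Gb block
```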
There is another possibility too. We’ve been addressing this problem by reformatting and repartitioning the drive using firmware 1.3.2 – what if that is all that’s necessary? Has anyone tried a ‘clear out and start over’ from the command line using firmware 1.3.2 and the OEM partition size?