Both of mine have been running without reset for over a month now. I think the varying cache fill we observe is normal operation.
Let’s say I load an application and do some work – those LBAs’ access counts rise, perhaps crossing the current caching threshold. Including them may bring the total cached LBA count up to an 80+% cache fill, at which point the “I’m getting close to full” algorithm will raise the access criterion for caching, reducing the number of LBAs cached. I think this kind of “hysteresis loop” is a normal result of a cache using integers for read counts. E.g., raising the threshold from 50 reads to 51 will move some number of LBAs out of the cache – but then if the fill drops below some percentage, you want to lower the criterion (again by an integer) to keep the cache as full as possible, and some number of LBAs come back into the cache. And all of this changes constantly with what files we use day to day.
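Just to make the idea concrete, here’s a toy sketch of that hysteresis loop. This is purely my guess at how such a controller might behave – the function names, the 80%/60% bounds, and the integer step are all made up, not anything from the actual firmware:

```python
import random

def adjust_threshold(threshold, fill_pct, high=0.80, low=0.60):
    """Raise or lower the integer read-count threshold based on cache fill.

    Hypothetical policy: get pickier when the cache is nearly full,
    more generous when it is underused, otherwise leave it alone.
    """
    if fill_pct > high:
        return threshold + 1          # nearly full: raise the bar
    if fill_pct < low:
        return max(1, threshold - 1)  # underused: admit more LBAs
    return threshold

def cache_fill(read_counts, threshold, capacity):
    """Fraction of cache capacity occupied by LBAs at/above the threshold."""
    cached = sum(1 for c in read_counts if c >= threshold)
    return min(cached, capacity) / capacity

# Simulate: many moderately hot LBAs push the fill above 80%, the
# threshold ticks up, fewer LBAs qualify, and the fill settles back
# into the 60-80% band -- the "hysteresis loop" described above.
random.seed(0)
counts = [random.randint(1, 100) for _ in range(10_000)]
threshold, capacity = 50, 6_000
for _ in range(5):
    fill = cache_fill(counts, threshold, capacity)
    print(f"threshold={threshold}, fill={fill:.0%}")
    threshold = adjust_threshold(threshold, fill)
```

Because the threshold only moves in whole-integer steps, the fill percentage can never sit exactly at a target – it oscillates in a band, which would explain the varying cache fill we see.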
A varying fill percentage is the cost of having all the work done for us by algorithms while we watch TV :laughing: . If we are willing to go to the trouble and expense of moving all the operating system files (or all files) to an SSD, we can get even better performance. I personally like the tradeoff: minimal effort on my part for a noticeable performance increase and extended hard drive life.