I am using GTMetrix to analyze our website.
I found something strange: it takes 221 ms to load the 1.32 KB favicon.png, but only 26 ms to load the 1.61 KB logo.png, as shown below:
Why does favicon.png take so long to load? I have tested many times, and favicon.png always takes about 10 times longer than other files of similar size.
All these resources are cached by Cloudflare and converted to WebP format:
Update:
I carefully compared logo.png and favicon.png, and indeed the response headers of logo.png include an "age" header, but those of favicon.png do not.
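A quick way to reproduce this header check is a short Python sketch (the helper names are mine, and the exact favicon/logo URLs are assumptions based on my site layout):

```python
# Sketch: compare cache-related response headers of two resources.
from urllib.request import Request, urlopen

def cache_headers(headers):
    """Extract the cache-related headers we care about from a header mapping."""
    h = {k.lower(): v for k, v in headers.items()}
    return {
        "cf-cache-status": h.get("cf-cache-status"),
        # a missing 'age' header suggests this edge copy was just fetched
        "age": h.get("age"),
    }

def probe(url):
    req = Request(url, headers={"User-Agent": "cache-probe"})
    with urlopen(req) as resp:
        return cache_headers(dict(resp.getheaders()))

# Example (requires network access; URLs are assumptions):
# print(probe("https://www.datanumen.com/favicon.png"))
# print(probe("https://www.datanumen.com/logo.png"))
```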
So I proceeded to disable "Tiered Cache", but my options look different from those in that answer:
After that, I ran several tests via GTMetrix.
- For about half of the tests, the result is better: all resources are cached and delivered quickly.
- However, for the remaining tests, the result is worse: several resources are "MISS" and take a long time to load, even though in the previous test (from the same location) the same resources were "HIT", which indicates they had already been cached.
So I wonder: why does a previously "HIT" resource become "MISS" after only a few seconds? Does Cloudflare clear its edge cache that quickly? I checked the Cloudflare document at https://blog.cloudflare.com/edge-cache-expire-ttl-easiest-way-to-override/ and it says my plan (Pro) uses 1 hour as the TTL.
I also see that Cloudflare provides a "Cache Reserve" service that increases the lifetime of cached objects, which seems designed to solve exactly this issue?
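To quantify how often a resource flips from "HIT" to "MISS", one can probe the same URL once per test run and count the transitions. A minimal sketch (the flip-counting helper is my own, not a Cloudflare tool):

```python
# Sketch: count HIT -> MISS transitions across repeated probes of one resource.
def count_hit_to_miss(statuses):
    """Given a sequence of cf-cache-status values, count HIT -> MISS flips."""
    flips = 0
    for prev, cur in zip(statuses, statuses[1:]):
        if prev == "HIT" and cur == "MISS":
            flips += 1
    return flips

# e.g. statuses collected once per GTMetrix run:
print(count_hit_to_miss(["MISS", "HIT", "HIT", "MISS", "HIT"]))  # -> 1
```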
Update2:
To prevent cache eviction, I enabled "Cache Reserve" in Cloudflare. However, since I have disabled "Tiered Cache", I get the following warning:
So, is disabling "Tiered Cache" while enabling "Cache Reserve" a good practice?
Update 3:
After enabling "Cache Reserve" and disabling "Tiered Cache", I ran several tests and found a strange phenomenon: the first several tests are normal, but then one test shows a very long load time, as below:
Below are the first several tests:
https://gtmetrix.com/reports/www.datanumen.com/VjtAGiyz/
https://gtmetrix.com/reports/www.datanumen.com/PRuta7WK/
https://gtmetrix.com/reports/www.datanumen.com/2hdYzL2B/
And this is the final one:
https://gtmetrix.com/reports/www.datanumen.com/8wKD5y9T/
I analyzed the waterfall of the final one and found the slowness is caused by a few resources. They are cached, but their responses do not contain an "age" header, as below:
But I have already disabled "Tiered Cache", so why does this still occur?

The Cloudflare thread mentions:
The solution there was:
That was also mentioned in this thread.
When you see a "HIT" in the "cf-cache-status" header, it means the requested resource was served from Cloudflare's cache. A "MISS", on the other hand, means the resource had to be fetched from the origin server because it was not in the cache. That can happen because of:
- Expiration of the cache entry: each cached resource has an associated Time To Live (TTL). When the TTL expires, the resource is removed from the cache and the next request for it will be a "MISS". The TTL can be set by the "cache-control" or "expires" headers from the origin server, or by Cloudflare's caching rules. If the TTL is very short, a resource can become a "MISS" shortly after being a "HIT".
- Cache eviction due to limited space: Cloudflare has a limited amount of space for caching resources on each of its edge servers. If the cache becomes full, Cloudflare may evict some resources to make room for new ones. That can cause a resource to become a "MISS" even if it was recently a "HIT". See "Introducing Cache Reserve: massively extending Cloudflare's cache" from Alex Krivit.
- Different edge server: each of Cloudflare's edge servers maintains its own cache. If your tests were handled by different edge servers, they could have different resources in their caches. See more in "Third Time's the Cache, No More" from Edward Wang.
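Cache eviction is easiest to picture with an LRU (least-recently-used) policy. Cloudflare's actual eviction policy is more sophisticated, but this minimal LRU sketch illustrates how a resource that was just a "HIT" can still be evicted the moment the cache fills up:

```python
# Minimal LRU cache sketch to illustrate eviction (Cloudflare's real
# policy is more complex; this is only a conceptual model).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return "MISS"
        self.store.move_to_end(key)  # mark as recently used
        return "HIT"

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently-used entry

cache = LRUCache(capacity=2)
cache.put("favicon.png", b"...")
print(cache.get("favicon.png"))   # HIT
cache.put("logo.png", b"...")
cache.put("banner.jpg", b"...")   # cache is full: favicon.png gets evicted
print(cache.get("favicon.png"))   # MISS, even though it was just a HIT
```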
"Tiered Cache" and "Cache Reserve" are part of Cloudflare's Argo Smart Routing feature.
When Tiered Cache is enabled, a lower-tier edge data center that does not have a requested resource in its cache can fetch it from an upper-tier data center that does, instead of going all the way back to the origin server. That can make cache misses at the edge faster and reduces origin load.
Cache Reserve, on the other hand, is a persistent data store (built on Cloudflare's R2 storage) that retains copies of cacheable resources for much longer than the edge caches do. If a resource has been evicted from an edge cache, it can still be served from Cache Reserve instead of being treated as a full "MISS" against your origin.
Note: the edge cache itself still does not guarantee that a resource stays cached for its full TTL. As mentioned before, if the cache on a particular edge server becomes full, Cloudflare may remove some resources before their TTL expires to make room for new ones. That process is known as cache eviction.
In your case, if you see resources switching from "HIT" to "MISS" status within a short period of time, it could be due to cache eviction (as mentioned above), or to the request being routed to a different edge server with a different cache state.
To control the cache behavior more precisely, you may want to set the "cache-control" headers on your origin server to specify the desired "max-age" for each resource. You can also use Cloudflare's Page Rules to set custom caching rules for different parts of your site.
By disabling Tiered Cache and enabling Cache Reserve, you are asking Cloudflare to keep evicted resources available in persistent storage (Cache Reserve) but not to share cached resources between data centers (Tiered Cache).
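For example, the remaining freshness of an edge copy follows directly from "max-age" and "Age". A sketch (the header names are standard HTTP; the helper function is my own):

```python
# Sketch: how many seconds of freshness an edge copy has left,
# given its Cache-Control and Age response headers.
import re

def remaining_ttl(cache_control, age):
    """max-age minus the time the copy has already sat in the cache."""
    m = re.search(r"max-age=(\d+)", cache_control)
    if not m:
        return None  # no max-age directive; freshness is decided elsewhere
    return max(0, int(m.group(1)) - int(age))

# A copy cached 120 s ago with Cache-Control: max-age=3600 stays fresh for:
print(remaining_ttl("public, max-age=3600", "120"))  # -> 3480
```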
The potential issue here is that if a resource is not in the cache of the edge server that receives a request (and the server cannot get it from an upper tier because Tiered Cache is disabled), the request has to go all the way back to the origin server. That can increase load on your origin server and increase your bandwidth usage (hence the warning about "increased origin load and significant read/write operation charges").
In your case, if you are experiencing issues with Cloudflare's cache eviction and frequently getting "MISS" statuses, enabling Cache Reserve could indeed be beneficial. While your configuration is not generally recommended, according to the warning message, it could still work in your situation.
From there, you would need to monitor the effects of this configuration on your server load, bandwidth usage, and website performance, and see whether this setup works for you.