The reason I ask is that, from what I have read in the documentation, the tool should be checking for actual user problems in the Core Web Vitals assessment, and if there are no issues there, then looking for more issues is unnecessary. The second section is about potential problems. Is this correct? Here is an example:

Everything is passing:

[Screenshot: Core Web Vitals assessment passing all tests]

But there are a bunch of potential issues:

[Screenshot: "Diagnose performance issues" section listing problems]

Are those issues necessary to deal with or does the fact that it passes everything in the first part mean everything is all good?


1 answer below

Barry Pollard:

In general, the Google Chrome team (of which I am part, btw) advises concentrating on real-user data rather than lab-based data, which may not be representative of your traffic. It's not unheard of for lab-based Lighthouse results to be much slower than what your users really experience, if they are all on faster connections and devices than the relatively conservative settings that Lighthouse simulates.
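To make the "passing" assessment in the top section concrete: each Core Web Vitals metric's 75th-percentile field value is compared against Google's published "good" thresholds (LCP ≤ 2.5 s, INP ≤ 200 ms, CLS ≤ 0.1). A minimal sketch of that check, with a hypothetical helper name, might look like:

```python
# Published "good" thresholds for the Core Web Vitals metrics.
GOOD_THRESHOLDS = {
    "LCP": 2500,   # Largest Contentful Paint, milliseconds
    "INP": 200,    # Interaction to Next Paint, milliseconds
    "CLS": 0.1,    # Cumulative Layout Shift, unitless
}

def passes_core_web_vitals(p75_values):
    """Return True if every reported p75 metric is within the 'good' range.

    `p75_values` maps a metric name to its 75th-percentile field value,
    e.g. {"LCP": 1800, "INP": 150, "CLS": 0.05}.
    """
    return all(
        value <= GOOD_THRESHOLDS[metric]
        for metric, value in p75_values.items()
        if metric in GOOD_THRESHOLDS
    )

# Field data like the passing screenshot above: all metrics in the good range.
print(passes_core_web_vitals({"LCP": 1800, "INP": 150, "CLS": 0.05}))  # True
# One slow metric (LCP at 3.2 s) is enough to fail the assessment.
print(passes_core_web_vitals({"LCP": 3200, "INP": 150, "CLS": 0.05}))  # False
```

This is only an illustration of the pass/fail logic, not PageSpeed Insights' actual implementation; the real assessment is computed from CrUX field data over a trailing 28-day window.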

However, I note that the real-user data is based on your whole origin, as they do not have the data for the specific URL you tested:

[Screenshot: PSI using Origin data in the top section]
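You can detect this fallback programmatically. Assuming the PageSpeed Insights API v5 response shape, the `loadingExperience` object carries an `origin_fallback: true` flag when CrUX lacks enough data for the specific URL and origin-level data is returned instead. A sketch, operating on an already-fetched response dict:

```python
def field_data_scope(psi_response):
    """Classify the field-data section of a PSI API response.

    Returns 'url' when URL-level CrUX data was available, 'origin' when
    the API fell back to whole-origin data, and 'none' when there is no
    field data at all.
    """
    experience = psi_response.get("loadingExperience")
    if not experience or "metrics" not in experience:
        return "none"
    return "origin" if experience.get("origin_fallback") else "url"

# Example response fragment where only origin-level data was available,
# matching the situation in the screenshot above:
sample = {
    "loadingExperience": {
        "origin_fallback": True,
        "metrics": {"LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 1900}},
    }
}
print(field_data_scope(sample))  # origin
```

If this returns 'origin' for a page, the green assessment describes your site as a whole, not that page, which is exactly the caveat discussed below.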

It could be that this specific page is particularly slow, but that is offset by lots of other, quicker pages on your site, giving an overall green passing score. This could happen if, for example, this is your home page and it includes a rich, detailed, but slow-to-load video.

Or perhaps your site includes admin pages visited by your site's team, and they are such a big proportion of page views that they influence the overall site score.

Google Search Console lists pages in "URL groups", so if you have different groups it's worth checking there to see whether some pages are slow and some are fast.

Additionally, it may be that your users are all repeat visitors who have many of your site's resources cached, so they get super-fast speeds in real life. But new users suffer (and maybe don't become repeat visitors because of that!).

If your real-user data is based on fast users, then changes in your visitor population can result in drastic shifts in that data.

TL;DR: it's advisable to concentrate on real-user data. But the fact that it's so far off from what the lab-based data shows is a little concerning. That suggests either the lab-based data is completely unrepresentative (which is possible, but it's rare to be that far off), or something else is going on - and it might pay to figure out what that is!