To measure the effect of using the WebP image format on a website, I ran Lighthouse on a site before and after converting its images to WebP. To keep other factors from affecting the results, I used wget to download the website with all of its resources and then served the files locally with Python's http.server. I then ran Lighthouse against the local copy using the following code:
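For reference, the mirroring and serving steps were along these lines (the exact wget flags shown here are an assumption, as is the exact target URL; any standard mirroring invocation works):

    # mirror the page with all of its resources, rewriting links so the copy works offline
    wget --mirror --page-requisites --convert-links --adjust-extension --no-parent https://nums.edu.pk/
    # serve the downloaded files locally on port 8000
    python3 -m http.server 8000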
const fs = require("fs");
const lighthouse = require("lighthouse");
const chromeLauncher = require("chrome-launcher");
// (Newer Lighthouse releases are ESM-only; with those, use import statements instead.)

async function runLighthouse(url, outputPath, iterations) {
  // Launch a single headless Chrome instance and reuse it for every iteration.
  const chrome = await chromeLauncher.launch({ chromeFlags: ["--headless"] });
  for (let i = 0; i < iterations; i++) {
    const options = {
      disableCache: true, // intended to keep cached responses from skewing repeat runs
      logLevel: "info",
      output: "json",
      port: chrome.port,
      // Throttle the network through the DevTools protocol,
      // capping download throughput at 1000 Kbps.
      throttlingMethod: "devtools",
      throttling: {
        downloadThroughputKbps: 1000,
      },
      emulatedFormFactor: "desktop",
    };
    const runnerResult = await lighthouse(url, options);
    // Save each run's JSON report as 1.json, 2.json, ...
    const filename = `${outputPath}/${i + 1}.json`;
    const reportJson = runnerResult.report;
    fs.writeFileSync(filename, reportJson);
    console.log(`Lighthouse report saved to ${filename}`);
  }
  await chrome.kill();
}
I ran Lighthouse with the following parameters:
url: "http://localhost:8000/nums.edu.pk/",
outputPath: "JSONs/batch3/nums/before",
iterations: 5,
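Putting those together, the before run boils down to a call like this (the corresponding after run is assumed to differ only in the output path):

    // Collect five reports for the unoptimized copy of the site.
    runLighthouse(
      "http://localhost:8000/nums.edu.pk/",
      "JSONs/batch3/nums/before",
      5
    ).catch(console.error);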
Even after averaging the runs, the results were inconsistent, with a variance of 10-50% between iterations, and this was before any optimization. I was measuring the following metrics: total-byte-weight, interactive (Time to Interactive), and largest-contentful-paint.
I then converted all of the images on the website to WebP and updated the HTML to reference the WebP versions instead of the original formats. This reduced the total size of the website by 80-90%.
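For reference, the metrics can be read back out of each saved report along these lines (readMetrics is an illustrative helper; on older Lighthouse versions the field is rawValue rather than numericValue):

    const fs = require("fs");

    // Read one saved Lighthouse report and extract the three metrics of interest.
    function readMetrics(reportPath) {
      const report = JSON.parse(fs.readFileSync(reportPath, "utf8"));
      return {
        totalByteWeight: report.audits["total-byte-weight"].numericValue, // bytes
        timeToInteractive: report.audits["interactive"].numericValue, // milliseconds
        largestContentfulPaint: report.audits["largest-contentful-paint"].numericValue, // milliseconds
      };
    }

    console.log(readMetrics("JSONs/batch3/nums/before/1.json"));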
When I ran Lighthouse again, I still got inconsistent results, and when I compared the before and after reports, the post-optimization runs sometimes showed a larger byte weight and longer timings, which should not happen given that the total page size had dropped so significantly.
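Since the numbers are averaged per side before being compared, the comparison step looks roughly like this (it reuses the readMetrics helper sketched above; the .../after directory name is only an assumption about where the post-optimization reports live):

    // Average one metric across the saved runs in a directory.
    function averageMetric(dir, runs, pick) {
      let sum = 0;
      for (let i = 1; i <= runs; i++) {
        sum += pick(readMetrics(`${dir}/${i}.json`));
      }
      return sum / runs;
    }

    const before = averageMetric("JSONs/batch3/nums/before", 5, (m) => m.totalByteWeight);
    const after = averageMetric("JSONs/batch3/nums/after", 5, (m) => m.totalByteWeight); // assumed path
    console.log(`total-byte-weight: before=${before} bytes, after=${after} bytes`);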
So, any help with this, or a suggestion for an alternative way to measure the impact, would be greatly appreciated.