This article has been kindly supported by our dear friends at DebugBear, who help optimize web performance to improve user experience.

There is no shortage of ways to measure the speed of a webpage. The tooling to get a detailed report, from the time it takes to establish a server connection to the time it takes for the full page to render, is out there. In fact, there's great tooling right under the hood of most browsers in DevTools that can do many of the things a tried-and-true service like WebPageTest offers, complete with recommendations for improving specific metrics.

I don't know about you, but it often feels like I'm missing something when measuring page speed performance. Even with all of the available tools at my disposal, I still find myself reaching for several of them. Certain tools are designed for certain metrics with certain assumptions that produce certain results. So, what I have is a hodgepodge of reports that needs to be collected, combined, and crunched before I have a clear picture of what's going on. Not the best way to get a high-level view of performance.

The folks at DebugBear understand this situation all too well, and they were kind enough to give me an account to poke around their site speed and Core Web Vitals reporting features. I've had time to work with DebugBear and thought I'd give you a peek at it, with some notes on my experience using it to monitor performance. If you're like me, it's hard to invest in a tool, particularly a paid one, before seeing how it actually works and fits into your work.

Monitoring vs. Measuring

Before we actually log in and look at reports, I think it's worth getting a little semantic. The key word here is "monitoring" performance. After using DebugBear, I began realizing that what I've been doing all along is "measuring" performance. And the difference between "monitoring" and "measuring" is big.

When I'm measuring performance, I'm only getting a snapshot at a particular time and place. There's no context about page speed performance before or after that snapshot because it stands alone. That's the "thing" I've been missing in my performance efforts. Think of it like a single data point on a line chart: there are no surrounding points to compare my result to, which keeps me asking, "Is this a good result or a bad result?"

I could capture that data and feed it into a spreadsheet so that I have a record of performance results over time, one that can be used to spot where performance is improving and, conversely, where it is failing. That seems like a lot of work, even if it adds value.

The other issue is that the data I'm getting back is based on lab simulations, where I can add throttling, determine the device that's used, and set the network connection, among other simulated conditions.

On that note, it's worth calling out that there are multiple flavors of network throttling. One is powered by Lighthouse, which observes data by testing on a fast connection and estimates the amount of time it takes to load on slower connections. This is the type of network throttling you will find in PageSpeed Insights, and it is the default method in Lighthouse. DebugBear explains this nicely in its blog:

Simulated throttling provides low variability and makes tests quick and cheap to run. However, it can also lead to inaccuracies, as Lighthouse doesn't fully replicate all browser features and network behaviors.

In contrast, tools like DebugBear and WebPageTest use more realistic throttling that accurately reflects network round trips on a higher-latency connection.

Real usage data would be better, of course. And we can get that with real user monitoring (RUM), where a snippet of code on my site collects real data from real users under real network conditions, sends it to a server, and parses it for reporting.

That's where a tool like DebugBear makes a lot of sense. It measures performance on an automated schedule (no more manual runs, though you can still do that with their free tool) and monitors the results by keeping an eye on the historical data (no more isolated data points). From there, DebugBear notifies me when it spots an outlier in the results, so I am always in the know. And in both cases, I know I'm working with high-quality, realistic data.

All I had to do to set up performance monitoring for a page was provide DebugBear with a URL, and data flowed in immediately, with subsequent automated tests running on a four-hour basis, which is configurable.

Once that was in place, DebugBear produced a dashboard of results. This is probably what you want to see first, right? You can probably look at that screenshot and see the immediate value of this high-level view of page performance.
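To make the two throttling flavors mentioned above more concrete, here is a sketch of how you could select each one in a Lighthouse configuration. The setting names (`throttlingMethod`, `throttling`) come from Lighthouse's documented config options; the numeric values are illustrative assumptions based on Lighthouse's mobile defaults, not DebugBear's actual settings.

```javascript
// Sketch: two Lighthouse configs contrasting simulated vs. applied throttling.
// Values below are assumptions for illustration, not DebugBear's configuration.

// Simulated throttling: the page loads on a fast connection, and Lighthouse
// estimates how long it would have taken on a slower one.
const simulatedRun = {
  extends: 'lighthouse:default',
  settings: {
    throttlingMethod: 'simulate',
    throttling: {
      rttMs: 150,               // modeled round-trip time
      throughputKbps: 1638.4,   // modeled bandwidth
      cpuSlowdownMultiplier: 4, // modeled CPU slowdown
    },
  },
};

// DevTools throttling: latency and bandwidth limits are actually applied
// while the page loads, so round trips happen at realistic speeds.
const appliedRun = {
  extends: 'lighthouse:default',
  settings: {
    throttlingMethod: 'devtools',
    throttling: {
      requestLatencyMs: 562.5,
      downloadThroughputKbps: 1474.56,
      uploadThroughputKbps: 675,
      cpuSlowdownMultiplier: 4,
    },
  },
};
```

The trade-off in the quote plays out here: `simulate` is cheap and stable because nothing is actually slowed down, while `devtools` takes the full (throttled) load time on every run but observes real network behavior.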
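And to illustrate the RUM idea of a snippet collecting data from real users, here is a minimal sketch. The `/rum-collect` endpoint and the `buildPayload` helper are hypothetical names for this example; DebugBear's real snippet works differently. The browser APIs used (`PerformanceObserver`, `navigator.sendBeacon`) are standard.

```javascript
// Minimal RUM sketch: observe Largest Contentful Paint from a real user's
// session and beacon it to a (hypothetical) collection endpoint.

// Pure helper: turn one observation into a compact JSON payload.
function buildPayload(metricName, value, page) {
  return JSON.stringify({
    metric: metricName,
    value: Math.round(value),
    page: page,
    ts: Date.now(),
  });
}

// Browser-only wiring; guarded so the snippet is inert outside a browser.
if (typeof PerformanceObserver !== 'undefined' &&
    typeof navigator !== 'undefined' && navigator.sendBeacon) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // sendBeacon queues the POST without blocking page unload.
      navigator.sendBeacon(
        '/rum-collect',
        buildPayload('LCP', entry.startTime, location.pathname)
      );
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

The server side then aggregates these payloads per page, which is where the "parsed for reporting" step happens.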