The Need for Speed
Updated: Apr 21
A slow website frustrates users, damages brand perception, increases operating expenses, and costs revenue. Every bug fix, new feature, and system upgrade can affect performance, which leaves developers constantly working to keep the site fast. It is easy to say that the code just needs to execute faster, but code often performs operations that genuinely take time: encryption, large content handling, complex graphics, image manipulation, and third-party dependencies. The strategy for a faster website is therefore not to optimize every line of code, but to decide what is 'fast enough' and to optimize the end-to-end experience holistically.
Web Performance Optimization (WPO) tools, such as the Google Speed Score, measure the speed of a web page, and it is important to understand what is actually being measured. These tools score a page against web optimization rules that improve how the front-end code (HTML, CSS, etc.) renders and executes in the browser. Per Google’s definition: “These rules are general front-end best practices you can apply at any stage of web development”. In other words, the score does not look at your infrastructure, application performance, database queries, data-center distribution, load handling, or the content being loaded. Most importantly, it should not be used to measure the overall speed of the site. It is, nonetheless, a measurement of how well the pages were developed and optimized.
Understanding thresholds and what each metric means is the first step to maximizing the potential of a web page. Once these foundational concepts are in place, it is advised to employ techniques in three areas for optimum efficiency: server-side optimization, front-end optimization, and content distribution.
Server-side response time measures how long the server takes to deliver the items needed to begin rendering a page (subtracting any network latency). It covers a variety of back-end concerns: compiler options, application logic, database indexes, CPU, memory management, and so on. Identification and analysis are key to understanding bottlenecks within the system, letting the developer focus on the areas that are time-costly and slow the generation of page content. Static content such as images or CSS is typically easy to scale across web servers and load balancers. Requests to the application server that must query the database or fetch data from other resources, however, face harder scalability and performance challenges: each one adds load, that is, more data and traffic the system must handle, and that added stress can slow things down considerably. That is why it is important to focus on server-side requests and analyze response times under realistic load thresholds.
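When analyzing responses under load, averages hide the slow tail that users actually feel, so percentile summaries are the usual lens. The sketch below is illustrative (the function names and sample timings are assumptions, not from any specific tool): it takes response times collected during a load test and reports p50/p95/p99.

```python
# Hypothetical latency analysis for a load test: report percentiles,
# which reveal the slow tail that an average would hide.
# Names and sample data here are illustrative assumptions.

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

def summarize(samples: list[float]) -> dict[str, float]:
    return {
        "p50": percentile(samples, 50),
        "p95": percentile(samples, 95),
        "p99": percentile(samples, 99),
    }

# Example: mostly-fast responses (ms) with a slow tail under load.
times = [120, 130, 125, 140, 135, 128, 900, 1100, 122, 138]
print(summarize(times))  # p50 stays ~130 ms while p95/p99 expose the tail
```

A page whose median is fast but whose p95 is over a second still feels slow to a meaningful share of users, which is why load analysis should track the tail, not the mean.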
The ultimate goal is to improve the response time of server-side calls and to reduce the number of calls made. Highly interactive websites in particular make many calls to the application server to retrieve more data as the site reports progress or is browsed by a user. One general practice is server-side caching, which reduces network latency across servers and minimizes server round-trip requests.
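As a minimal sketch of server-side caching (the decorator, TTL value, and `fetch_user` lookup are all hypothetical stand-ins for a real data layer): memoize an expensive lookup for a fixed time-to-live, so repeated requests inside the window never touch the back end.

```python
import time
from functools import wraps

# Minimal server-side cache sketch: memoize an "expensive" lookup
# (standing in for a DB query) for a fixed time-to-live (TTL).
# All names and the TTL value are illustrative assumptions.

def ttl_cache(ttl_seconds: float):
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]              # cache hit: no back-end call
            value = fn(*args)              # cache miss: do the real work
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache(ttl_seconds=60)
def fetch_user(user_id: int) -> dict:
    global calls
    calls += 1                             # stands in for a slow DB query
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_user(7)
fetch_user(7)                              # served from cache
print(calls)                               # -> 1
```

The trade-off is freshness: a longer TTL means fewer round trips but staler data, which is why cache invalidation strategy matters as much as the cache itself.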
Optimizing the back-end is only one part of the equation; the other piece is the front-end. Only about 10–20% of the response time for an end user is spent fetching the HTML document from the web server and showing it in the browser; the remaining 80–90% is spent waiting for the components of the page to download. Optimizing the front-end experience is therefore key to dramatically improving response time. Several best practices address this:
Optimizing browser caching — static resources are stored in the browser’s cache, reducing download time on repeat visits.
Minimizing request overhead — reduce upload size, enable compression, minify CSS, HTML and optimize images.
Minimizing payload size — reduce the size of responses, downloads, and cached pages.
Optimizing browser rendering — improve the browser's page layout.
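Two of the practices above, minification and compression, can be sketched in a few lines. This is a deliberately naive illustration (the regex-based minifier and the CSS snippet are assumptions, not a production tool): strip comments and whitespace from a stylesheet, gzip the result, and compare the byte counts at each stage.

```python
import gzip
import re

# Sketch of "minimize request overhead": naively minify a CSS snippet
# (illustration only -- real minifiers parse the syntax), then gzip it,
# and compare payload sizes at each stage.

css = """
/* button styles */
.button {
    color: #ffffff;
    background-color: #3366ff;
    padding: 10px 20px;
}
""" * 50  # repeat so compression has repetition to exploit

def naive_minify(text: str) -> str:
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.S)  # drop comments
    text = re.sub(r"\s+", " ", text)                    # collapse whitespace
    return text.replace("; ", ";").replace(" {", "{").strip()

minified = naive_minify(css)
compressed = gzip.compress(minified.encode())

print(len(css), len(minified), len(compressed))  # sizes shrink at each step
```

In practice the server advertises compression via the `Content-Encoding: gzip` response header, and the browser decompresses transparently; minification and compression stack, since they remove different kinds of redundancy.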
Content Delivery Network Distribution
A content delivery network (CDN) is a collection of web servers distributed across multiple locations that delivers content to users more efficiently. The user’s proximity to a web server has a direct impact on response time, so deploying content across multiple, geographically dispersed servers makes pages load faster from the user’s perspective. The main problem a CDN addresses is latency: the time it takes for the host server to receive, process, and deliver a request for a page resource (images, CSS files, etc.). Because 80–90% of the end user’s response time is spent downloading the components of a page, placing static content close to the end user makes obvious sense. The overall result is faster load times for each user, which in turn also improves search engine ranking.
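The core routing idea of a CDN can be sketched as "serve each user from the nearest edge". The toy below is an assumption-laden illustration (real CDNs route on measured latency and anycast, not raw geography, and the edge coordinates here are approximate): pick the edge location with the smallest great-circle distance to the user.

```python
from math import radians, sin, cos, asin, sqrt

# Toy sketch of CDN edge selection: choose the geographically closest
# edge for a user. Real CDNs route on measured latency/anycast; the
# edge names and coordinates below are illustrative assumptions.

EDGES = {
    "us-east":  (39.0, -77.5),
    "eu-west":  (53.3, -6.3),
    "ap-south": (19.1, 72.9),
}

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def nearest_edge(user_latlon):
    return min(EDGES, key=lambda name: haversine_km(user_latlon, EDGES[name]))

print(nearest_edge((48.8, 2.3)))   # a user near Paris -> eu-west
```

Shorter physical distance means fewer network hops and less propagation delay, which is exactly the latency the CDN exists to cut for static assets.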