Performance at Pull Request Level

We are taught to perform our best, be it in studies, sports, or other extracurricular activities. I heard the same ideology on my first day at work: “We need to plan to improve our TP90 and TP50 numbers.” So I learned that besides performing at work, we need to make sure our web application performs well too (which makes total sense).

Why is performance important? We live in an age where everybody wants quick results; no one is ready to wait. If users have to wait, they simply move on to the second-best solution available in the market (and we cannot let that happen). Speed and time are money. The plan of action we decided to move forward with was to detect any performance discrepancies before pushing code changes to the pipeline. Thus comes the concept of evaluating performance at the PR (pull request) level.

Business Impact

Page load time is obviously an important part of any website’s user experience. Many times we let page load time slide to accommodate a more appealing design, a new well-designed feature, or more content on our web pages. Unfortunately, website visitors tend to care more about speed than about all the bells and whistles we want to add to our websites.

  • Performance Impacts Conversion
  • Performance Impacts User Engagement
  • Performance Impacts OpEx and Revenue
  • Performance Impacts Usability

Build & Deploy Phase

  • A bunch of smart engineers modify the code and submit a PR to a specific plugin repo.
  • Git receives the pull request and notifies Jenkins that it needs to get to work on validating those changes.
  • Jenkins jobs (template-based jobs) are then kicked off to validate the PRs.
  • The kicked-off Jenkins job performs the steps below (a sketch follows this list):
    • a. Build & Unit Test Phase
      • Clones the repo from Git
      • Installs the dependencies via npm
      • Runs the unit tests (leveraging Docker)
      • Generates the code coverage report
      • Builds the plugin code and produces the dist artifacts
    • b. Deploy Phase
      • Deploys the dist to CDN/S3 with a unique version
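
As a rough illustration, these steps could be driven by a small Node script like the one below. The repo URL, npm script names, and S3 bucket are illustrative assumptions, not the actual project’s configuration.

```js
// Hypothetical sketch of the per-PR build-and-deploy steps. The repo URL,
// npm script names, and S3 bucket name are illustrative, not the real ones.
const { execSync } = require('child_process');

const prVersion = process.env.PR_VERSION; // unique version for this PR, set by the CI job
const run = (cmd) => execSync(cmd, { stdio: 'inherit' });

run('git clone https://github.example.com/org/plugin.git .'); // clone the plugin repo
run('npm install');          // install the dependencies
run('npm test');             // run the unit tests (the real job runs these inside Docker)
run('npm run coverage');     // generate the code coverage report
run('npm run dist');         // build the distributable plugin bundle

// deploy the dist to CDN/S3 under a unique version
run(`aws s3 cp dist/ s3://plugin-cdn/${prVersion}/ --recursive`);
```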

Functional Testing

  • Since the code change is ready to be consumed, a functional test suite is kicked off.
  • It runs a basic set of functional tests to ensure the PR has not introduced any regressions (leveraging Docker).
  • The functional tests run as a Grunt task (a sketch follows this list).
  • Cross-browser/cross-platform combinations are covered at this stage.
  • Test suites run in parallel and cover the broader combinations to provide faster feedback to the team.
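
For illustration, a minimal Gruntfile registering such a functional test task might look like the following; the grunt-mocha-test plugin, task name, and test paths are assumptions, not the project’s actual setup.

```js
// Minimal Gruntfile sketch for the functional test task. The grunt-mocha-test
// plugin, task name, and test file paths are illustrative assumptions.
module.exports = function (grunt) {
  grunt.initConfig({
    mochaTest: {
      functional: {
        options: { reporter: 'spec', timeout: 30000 }, // generous timeout for browser round-trips
        src: ['test/functional/**/*.spec.js'],
      },
    },
  });

  grunt.loadNpmTasks('grunt-mocha-test');

  // `grunt functional-tests` runs the suite; CI would invoke this after deploy
  grunt.registerTask('functional-tests', ['mochaTest:functional']);
};
```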

Performance Testing

The objective of this phase is to add performance testing to the chain of build, unit test, deploy, and functional tests before the PR is pushed to the master pipeline. In the current scenario, PRs are pushed after passing all of the above tests except the performance phase: performance testing only enters the picture after all modifications and updates have been made, right before the release, to measure whether any new functionality has altered performance. Including performance testing for every pull request decreases the overhead engineers face at release time, and it provides a mechanism to measure a real-world scenario for every change.

Process

  • Once a PR version of the module is available in CDN/S3, we leverage the open-source tool WebPageTest to test the performance of the shell.
  • We simulate requests from different geographical locations (US-East, US-West, Australia, France, Canada, etc.) with varying bandwidths (DSL and Cable) against the PROD/Pre-PROD version of our application.
  • Prior to launching the application and logging in, we overwrite a cookie, which switches the module version from what’s in prod to our desired PR version in CDN/S3.
  • At this point, a simple launch and login are performed, and WebPageTest generates a detailed result summary.
  • We define thresholds on the generated results to ensure we are not degrading or deviating from our base benchmarks.
  • This helps identify any change in performance (a sketch of this flow follows below).
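
One way to drive this flow is the open-source webpagetest npm wrapper. In the sketch below, the WPT server URL, API key, application URL, cookie name, test locations, and the threshold are all illustrative assumptions, not our actual values.

```js
// Hedged sketch of a per-PR WebPageTest run. Server URL, API key, app URL,
// cookie name, locations, and the 3000 ms threshold are all illustrative.
const WebPageTest = require('webpagetest');

const wpt = new WebPageTest('https://wpt.example.com', process.env.WPT_API_KEY);

// WPT script: set the override cookie first so the app loads the PR version
// of the module from CDN/S3, then navigate to the login page.
const script = [
  `setCookie\thttps://app.example.com\tmoduleVersion=${process.env.PR_VERSION}`,
  'navigate\thttps://app.example.com/login',
].join('\n');

const locations = ['ec2-us-east-1:Chrome', 'ec2-us-west-1:Chrome'];
const profiles = ['DSL', 'Cable'];

locations.forEach((location) => {
  profiles.forEach((connectivity) => {
    wpt.runTest(script, { location, connectivity, pollResults: 5 }, (err, result) => {
      if (err) throw err;
      const { loadTime, SpeedIndex } = result.data.median.firstView;
      console.log(`${location}/${connectivity}: load ${loadTime} ms, SpeedIndex ${SpeedIndex}`);
      if (loadTime > 3000) {
        console.error('Threshold exceeded: regression against the base benchmark');
        process.exitCode = 1;
      }
    });
  });
});
```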

Custom Markers

We need the ability to assess and understand the performance characteristics of our application. With the help of custom markers, we can expose high-precision timestamps for better measurement of our application’s performance.
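
In the browser, such custom markers map naturally onto the standard User Timing API (performance.mark / performance.measure); the marker names below are illustrative, not our actual instrumentation.

```js
// Custom markers via the browser's User Timing API. The marker names
// ("login-start", "shell-ready") are illustrative examples.
performance.mark('login-start');

// ... application code: authenticate, fetch the shell, render ...

performance.mark('shell-ready');
performance.measure('login-to-shell', 'login-start', 'shell-ready');

// The measure exposes a high-precision (sub-millisecond) duration,
// which tools like WebPageTest can report alongside their own metrics.
const [measure] = performance.getEntriesByName('login-to-shell');
console.log(`login-to-shell: ${measure.duration.toFixed(1)} ms`);
```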

When

Gone are the days when performance testing was done only when we were almost ready to ship. At the pace we are currently building, performance testing is done with each pull request (PR). The current mechanism validates each PR and updates it with the performance impact. Once the test has run with the DSL/Cable bandwidth(s), the PR is updated with links to the detailed reports, and engineers are notified of the performance status (see the sketch below). The goal is to proactively identify performance bottlenecks.
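
One possible way to surface that verdict on the PR itself is a commit status. The sketch below uses GitHub’s commit-status API with an illustrative repo, token variable, context name, and threshold; the actual notification mechanism may differ.

```js
// Hypothetical sketch: publish the performance verdict back to the PR via
// GitHub's commit-status API. Repo, context, and threshold are illustrative.
const THRESHOLD_MS = 3000; // assumed base benchmark for median load time

async function publishPerformanceStatus(sha, medianLoadTime, reportUrl) {
  const state = medianLoadTime <= THRESHOLD_MS ? 'success' : 'failure';
  const res = await fetch(`https://api.github.com/repos/org/plugin/statuses/${sha}`, {
    method: 'POST',
    headers: {
      Authorization: `token ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      state,                               // success or failure, shown on the PR
      context: 'performance/webpagetest',  // label for this check
      description: `Median load time: ${medianLoadTime} ms`,
      target_url: reportUrl,               // link to the detailed WebPageTest report
    }),
  });
  if (!res.ok) throw new Error(`GitHub status update failed: ${res.status}`);
}
```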

Authors:
Raj Vasikarla
Vanya Sehgal