Empowering our team with On Demand Test Environments

Shift left: the theory and concepts behind our On-Demand Test Environment


The Problem

At Bold Penguin, we take developer experience very seriously. We also take software reliability very seriously. As our team has grown, we noticed code spending a lot more time in our testing environments. Our automated delivery pipeline makes it easy for us to move from left to right very confidently, but any bugs or issues introduced into the pipeline led to traffic jams: older changes needed additional work before they could go to production, which delayed getting newer changes into the pipeline.

Not only was this causing us to deliver value to our customers more slowly, it also started to create a negative feedback loop. Because the time between writing code and shipping it was increasing, we found ourselves creating larger pull requests, which increased the surface area of testing, which led to even more time in our lower environments, which led to even larger pull requests, and so on. Our current systems made sense when we were a much smaller team, but we knew we had to make changes if we were going to scale effectively.

Background

Our automated delivery pipelines are triggered by any change to the master branch. Any update is immediately compiled and deployed to our internal testing environment (Alpha). At this point, the changes will either be approved to go to the next environment, or a subsequent PR will need to be merged into master to revert them.

As we’ve grown, we have found that our velocity has started to work against us. The one-way nature of our automated delivery pipelines has made us uncomfortable putting changes into our test environment, out of fear that one of those changes will introduce undesired side effects and essentially block our pipeline.

On-Demand Test Environments

One of the ways we are approaching our pipeline slowdown is by introducing tooling that lets us provision test environments quickly. On-demand environments for our front-end applications are created automatically when pull requests are opened. They exist outside of our normal pipeline and only for as long as they are needed. By making deployed versions of our code publicly accessible, we get the opportunity for feedback before code gets merged in. Engineers reviewing the code don’t have to pull down every pull request to test locally, and our product team can ensure we are solving the correct problems. We even share these builds with our customers to get early feedback on new features!

Here’s an example:

When we push a PR, a new GitHub Action triggers.


When it finishes, it creates a deployment on the Pull Request.


Clicking the View deployment button brings you to a live, on-demand test environment.


How It Works

There are a few pieces that make the magic happen.

  1. We’re using GitHub Actions to orchestrate the compilation of our front-end assets after a git push. While we use other tools for CI and CD, GitHub Actions lets developers see progress in their native tool suite. Also, it was really simple to get started.

  2. In our cloud environment, we’ve created an S3 bucket with static web hosting configured on it. Additionally, we’ve generated an IAM user with write permissions to this bucket. In our example, this bucket is named bold-penguin-preview-sites.

  3. Once the assets are compiled, we use GitHub Actions to copy the build artifacts into a special S3 folder, using the project name and PR number as part of the key (see the workflow sketch after this list). For example, PR #123 for the project called agent-website would have its assets copied to:

    s3://bold-penguin-preview-sites/agent-website-123/index.html

  4. The real magic is in how we’re using nginx as a proxy to S3 static web hosting. We’ve essentially configured nginx to read the URL and extract the project name and pull request number. It then uses these fields to construct a full path into the S3 bucket that is configured for static web hosting.

    For example, in the sample nginx configuration sketched after this list, the server_name directive is used to match the incoming hostname -- which in our case contains the project name and pull request number.
  5. Lastly, we’ve made a wildcard DNS entry which routes any requests to *.preview.boldpenguin.com to our nginx host.
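
To make steps 1 and 3 concrete, here is a minimal sketch of what such a workflow could look like. The build commands, output directory (dist), secret names, action versions, and AWS region are illustrative assumptions; only the bucket name and the project-name-plus-PR-number key convention come from the steps above.

    # Hypothetical workflow sketch: build the front-end on every pull request
    # and copy the artifacts to the preview bucket under a <project>-<PR> key.
    name: Preview Site

    on:
      pull_request:

    jobs:
      preview:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4

          - name: Build front-end assets
            run: |
              npm ci
              npm run build

          - name: Configure AWS credentials
            uses: aws-actions/configure-aws-credentials@v4
            with:
              aws-access-key-id: ${{ secrets.PREVIEW_AWS_ACCESS_KEY_ID }}
              aws-secret-access-key: ${{ secrets.PREVIEW_AWS_SECRET_ACCESS_KEY }}
              aws-region: us-east-1

          - name: Copy artifacts to the preview bucket
            run: |
              aws s3 sync ./dist \
                "s3://bold-penguin-preview-sites/${{ github.event.repository.name }}-${{ github.event.pull_request.number }}/"

With that key convention, PR #123 on agent-website lands under agent-website-123/, which is exactly the prefix the nginx configuration below reconstructs from the subdomain.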
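
And here is one way the nginx piece described in step 4 could look. The S3 website-endpoint region and the resolver address are assumptions; the essential parts are the named capture in server_name and the proxy_pass into the bucket’s static-hosting endpoint.

    # Hypothetical sketch: map <project>-<PR>.preview.boldpenguin.com to the
    # matching prefix in the preview bucket's static website endpoint.
    server {
        listen 80;

        # Capture everything before .preview.boldpenguin.com, e.g. "agent-website-123".
        server_name ~^(?<preview_site>.+)\.preview\.boldpenguin\.com$;

        location / {
            # A resolver is required because proxy_pass contains a variable;
            # use whatever DNS resolver is available in your environment.
            resolver 8.8.8.8;

            # The S3 website endpoint uses the Host header to find the bucket.
            proxy_set_header Host bold-penguin-preview-sites.s3-website-us-east-1.amazonaws.com;
            proxy_pass http://bold-penguin-preview-sites.s3-website-us-east-1.amazonaws.com/$preview_site$request_uri;
        }
    }

A request for agent-website-123.preview.boldpenguin.com/ is then proxied through to the agent-website-123/ prefix of the bucket, and S3 static web hosting serves the index.html shown earlier.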

So in a nutshell:

  1. GitHub Actions builds & copies artifacts to an S3 bucket, using a project- and PR-specific S3 key.
  2. Nginx is configured to parse the Host header of incoming requests and perform a pass-through proxy to S3 static web hosting.
  3. A wildcard DNS (*.preview.boldpenguin.com) resolves to the nginx container above.

Next Steps

Since introducing on-demand test environments, we have increased our developer productivity and happiness, gotten feedback sooner, and started to speed back up. We’re thinking a lot about how we can continue to scale our processes and tooling to meet our ever-growing team’s demands. If these sound like problems you’re interested in helping us solve, we’d love to talk with you! Visit https://www.boldpenguin.com/careers to see our latest openings.

