Serving Static Sites with Fastly, S3, and Middleman
Update: This article is now out of date. Please refer to this tutorial, which remains up to date.
In November we announced our partnership with Fastly to power the new HashiCorp releases service. Since then, we have expanded our use of Fastly to front all of our static sites. You may have noticed subtle frontend and backend changes to our various websites - this post details the steps we took to migrate our static sites to Fastly.
Overview
At a high level, our new static site architecture works as follows:
Atlas is responsible for building the static site and uploading it to S3. Fastly is configured to pull from the S3 bucket as the origin and dynamically rewrites URL paths to support multiple sites in the same bucket. Cache times are set to one year, and we use surrogate keys to purge a specific subset of the cache during deploys. As an added bonus, we pre-warm the cache and check for broken links to ensure the best experience possible!
What is a Static Site?
At HashiCorp, we believe thoughtful design and user experience are key factors in the success of a project. Not only do we put significant time and effort into the command-line interfaces, error messages, and output, but each project is given its own unique identity and website. Each project's website is built with Middleman, a static website generator written in Ruby. Here are just a few of our static sites:
- https://www.vagrantup.com/
- https://www.terraform.io/
- https://www.hashicorp.com/
The HashiCorp static sites have historically been hosted on a PaaS with a series of custom buildpacks. This approach posed a few key problems. First, every request to a static site hit a Ruby web server, which then loaded a pre-built file from disk. This resulted in less-than-optimal performance and made global caching difficult and costly. A common complaint from customers was that our static websites were slow or inaccessible from different parts of the world. Second, the site was rebuilt on each deploy, which led to occasional deployment issues ranging from conflicting Ruby versions to stalled pushes and incomplete website deploys.
Compiling Static Sites
While static sites were previously built and served by a PaaS, we had to separate the build stage from the deploy stage as part of this effort. Because of our familiarity with the tools, we chose to build the static sites using Atlas and deploy them onto Amazon S3 using the popular s3cmd tool.
In Atlas we configure everything using a Packer template. In the case of our static sites, we use a simple Packer template that spins up a Docker container, builds the static site, and uploads the static site to S3. For a complete example, please see the Packer template used to build the Terraform static site. The Packer template also calls our custom bash script, which uses s3cmd to upload the static site to S3. That command looks like this:
s3cmd \
--quiet \
--delete-removed \
--guess-mime-type \
--no-mime-magic \
--acl-public \
--recursive \
--add-header="Cache-Control: max-age=31536000" \
--add-header="x-amz-meta-surrogate-key: site-$PROJECT" \
sync "$DIR/build/" "s3://<bucket>/$PROJECT/latest/"
- --quiet tells s3cmd to suppress non-error output.
- --delete-removed tells s3cmd to remove old files that are no longer tracked (part of sync).
- --guess-mime-type tells s3cmd to guess the MIME type from the file extension instead of using python-magic (which is horribly inaccurate).
- --no-mime-magic tells s3cmd to disable the python-magic MIME detection entirely.
- --acl-public tells s3cmd to make the resources public, read-only.
- --recursive tells s3cmd to recurse into subdirectories and folders.
- The first --add-header call sets the cache control on the resource to one year.
- The second --add-header call sets a surrogate key. $PROJECT is a variable that is populated by Atlas with the name of the current site (like "terraform").
Of special note are the Surrogate-Key header and the cache timers. As you will see later, we tell Fastly to cache our content for a year at a time, and we purge the cache using a surrogate key on deploy. You can read more in the Fastly documentation on purging by surrogate key. This will become clearer in a moment.
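Our deploy script sets the key as S3 object metadata, which S3 serves back as an x-amz-meta-surrogate-key response header. For Fastly to purge by that key, the metadata header has to be promoted to the Surrogate-Key header Fastly understands. A minimal sketch of what that promotion might look like in vcl_fetch (illustrative only; the exact wiring is not shown in this post):

sub vcl_fetch {
  # S3 returns our uploaded metadata as "x-amz-meta-surrogate-key";
  # copy it into Surrogate-Key so Fastly can purge objects by this key.
  if (beresp.http.x-amz-meta-surrogate-key) {
    set beresp.http.Surrogate-Key = beresp.http.x-amz-meta-surrogate-key;
    # Avoid exposing the raw metadata header to clients.
    unset beresp.http.x-amz-meta-surrogate-key;
  }
}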
The next part of our deploy script performs a soft purge on the surrogate key for the static site. It's a simple HTTP request we make with curl:
curl \
--fail \
--silent \
--output /dev/null \
--request "POST" \
--header "Accept: application/json" \
--header "Fastly-Key: $FASTLY_API_KEY" \
--header "Fastly-Soft-Purge: 1" \
"https://api.fastly.com/service/$FASTLY_SERVICE_ID/purge/site-$PROJECT"
The environment variables are automatically populated by Atlas when the script is executed. Notice the URL includes site-$PROJECT, which is the same value as the Surrogate-Key header we set on the resources. We use this surrogate key to purge only the content for the specific static site.
Lastly, we pre-warm the new cache. Fastly does not have a mechanism for doing this directly, so we use wget instead:
wget \
--recursive \
--delete-after \
--level 0 \
--quiet \
"https://$PROJECT_URL/"
This will recursively spider all pages and assets, triggering a miss and an origin lookup, which will be cached on the return trip by Fastly. This technique also gives us a bit of continuous integration around our static sites, since it will catch broken internal links and fail the build.
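Because wget exits with a non-zero status when any spidered page or asset fails to download, the deploy script only needs to check that status to fail the build on broken links. A hedged sketch of how that check might look (the wrapper and error message are illustrative; only the wget invocation comes from the command above):

#!/usr/bin/env bash
set -euo pipefail

# Spider the freshly deployed site. Every request warms the Fastly cache,
# and any broken internal link causes wget to exit non-zero, failing the build.
if ! wget --recursive --delete-after --level 0 --quiet "https://$PROJECT_URL/"; then
  echo "Cache warm failed: broken links or unreachable assets detected" >&2
  exit 1
fi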
Caching with Fastly custom VCL
Because Amazon limits the number of buckets per account, we decided to put all our static websites in the same S3 bucket. After some discussion and research, we decided on the following structure:
<bucket>
\_ <project>
\_ <version>
For example, Terraform's deployment looks like this:
<bucket>
\_ terraform
\_ latest
In the future, we have plans for versioned documentation, hence the "latest" subfolder. But because all our sites are configured in the same bucket, we need to write custom Varnish configuration to rewrite our requests to the backend dynamically based on the incoming domain. Here is what that configuration block looks like:
if (req.http.host ~ "terraform.io") {
set req.http.host = "<bucket>.s3-website-us-east-1.amazonaws.com";
set req.url = "/terraform/latest" req.url;
return(lookup);
}
This snippet, which lives in vcl_recv, does the following:
- Conditionally filters based on the request host.
- Dynamically rewrites the backend request to the S3 endpoint.
- Prepends the project's path in the bucket to the request URL:
terraform.io/(.*) => <bucket>.s3-website-us-east-1.amazonaws.com/terraform/latest/$1
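Because every static site shares the same bucket, this pattern simply repeats once per hostname. A sketch of what a second site might look like alongside the first (the /vagrant/latest prefix is an assumption for illustration; only the terraform block above is from our actual configuration):

if (req.http.host ~ "terraform.io") {
  set req.http.host = "<bucket>.s3-website-us-east-1.amazonaws.com";
  set req.url = "/terraform/latest" req.url;
  return(lookup);
}

# Illustrative second site; the "/vagrant/latest" path prefix is assumed.
if (req.http.host ~ "vagrantup.com") {
  set req.http.host = "<bucket>.s3-website-us-east-1.amazonaws.com";
  set req.url = "/vagrant/latest" req.url;
  return(lookup);
}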
One roadblock we encountered with this approach occurred when the backend performed a redirect. Since the backend sees its request URL as /terraform/latest, the redirect includes that prefix, resulting in the client being redirected to terraform.io/terraform/latest/. After some trial and error, we determined that the best way to circumvent this issue was to rewrite any backend response redirects, so that a Location value such as /terraform/latest/docs/ becomes just /docs/ before it reaches the client:
if (beresp.status == 301 || beresp.status == 302) {
set beresp.http.location = regsub(beresp.http.location, "^/(.+)/latest/", "/");
}
Additionally, we found that some of the AWS metadata headers were unnecessary, so we strip them before sending the response to the client:
unset beresp.http.x-amz-id-2;
unset beresp.http.x-amz-request-id;
unset beresp.http.x-amz-meta-s3cmd-attrs;
unset beresp.http.server;
Forcing www and TLS
Traditionally, our static sites have been available on the non-TLS, TLS, non-www, and www variations of each domain. This proved problematic for caching, since it would require caching four different versions of the site. Instead, we implemented a canonical URL for each site in the form https://www.PROJECT and established proper redirects from all other forms. Fastly made this very easy with built-in support for forcing SSL and the ability to write our own custom Varnish configuration.
Before passing the request to the backend, we first check if the host does not start with the www prefix:
if (req.http.host !~ "^www\..+") {
set req.http.host = "www." req.http.host;
set req.http.x-varnish-redirect = "https://" req.http.host req.url;
error 750 req.http.x-varnish-redirect;
}
It is important to note a few things:
- If and only if the header does not start with "www" already, we rewrite the original host header and prefix it with "www." If you are hosting other sites that exist on a non-www domain, you will need to exclude them from this check (see the sketch after this list).
- We are forcing the TLS redirect here as well, even if the original request was via non-TLS. This saves us from issuing multiple redirects to the user.
- We preserve the original request path and query string by appending the original request URL.
- We return an error with a custom error code and pass the result (which is the full URL to redirect to) to the error call.
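As mentioned in the first note, hosts that should intentionally stay on a bare (non-www) domain need to be excluded from the rewrite. A hedged sketch of one way to do that (the excluded hostname is hypothetical, not one of our real sites):

# Skip the www rewrite for hosts that intentionally live on a bare domain.
# "status.example.com" is a hypothetical exclusion for illustration only.
if (req.http.host !~ "^www\..+" && req.http.host !~ "^status\.example\.com$") {
  set req.http.host = "www." req.http.host;
  set req.http.x-varnish-redirect = "https://" req.http.host req.url;
  error 750 req.http.x-varnish-redirect;
}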
"750" is our custom error code. By default, Varnish does not allow you to perform a redirect during a recv or fetch. Instead, the generally accepted way to perform a redirect is to use a custom error code and then set the proper headers in the vcl_error function. In our case, this custom redirect looks like this:
if (obj.status == 750) {
set obj.http.location = obj.response;
set obj.http.Strict-Transport-Security = "max-age=31536000; includeSubdomains; preload";
set obj.status = 301;
return (deliver);
}
Again, a few things to note here:
- We explicitly check for our custom error code, otherwise we allow the rest of the vcl_error function to take place.
- We set the HTTP Location header to the value passed to the error function (the redirect location).
- We set HSTS headers on the redirect (more on this later in the post).
- We explicitly set the HTTP status code to a 301 to tell the client to follow the location header as a permanent redirect.
Forcing TLS can be a bit trickier with Varnish because the VCL cannot directly inspect the incoming protocol, but thankfully Fastly includes an easy-to-use macro for forcing SSL:
if (!req.http.Fastly-SSL) {
error 801 "Force SSL";
}
In this case, "Fastly-SSL" is a header Fastly sets on requests that arrive over TLS, and "801" is a special error code that Fastly uses to force SSL on the request. Note that we put this check after all other redirects in an effort to reduce the number of redirects we perform.
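Putting the two redirects together, the ordering inside vcl_recv matters: the www check runs first and its 750 redirect already points at an https URL, so the Fastly-SSL check only fires for plain-HTTP requests that are already on the canonical www host. A sketch of that ordering, assembled from the snippets above (the sub wrapper and numbering are ours for illustration):

sub vcl_recv {
  # 1. Canonicalize the host first; the 750 redirect already targets https.
  if (req.http.host !~ "^www\..+") {
    set req.http.host = "www." req.http.host;
    set req.http.x-varnish-redirect = "https://" req.http.host req.url;
    error 750 req.http.x-varnish-redirect;
  }

  # 2. Then force TLS for requests already on the www host.
  if (!req.http.Fastly-SSL) {
    error 801 "Force SSL";
  }

  # 3. Finally, rewrite the request toward the S3 origin (see above).
}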
Enabling HSTS and other Security Headers
HashiCorp is committed to modern security practices. As such, in addition to our efforts to serve all content via TLS, all our static sites now include the HSTS headers. Additionally, our sites have been submitted to the HSTS preload list for modern browsers. This will ensure our sites are only served over TLS and reduce the chances of someone performing a MITM attack. Enabling HSTS on Fastly is incredibly easy.
First, we set the HTTP response header on a successful fetch. It looks like this:
sub vcl_fetch {
# ...
set beresp.http.Strict-Transport-Security = "max-age=31536000; includeSubdomains; preload";
}
Additionally, because we redirect any non-www, non-TLS version of our static sites, we need to add the HSTS headers to the redirect as well. Since there is no first-class way to perform a redirect in Varnish, we use a custom error code and then change the resulting header when it's sent back to the browser. That is the reason you saw HSTS added to the error redirect above.
Finally, we add the following custom HTTP headers to all responses:
set beresp.http.X-XSS-Protection = "1; mode=block";
set beresp.http.X-Content-Type-Options = "nosniff";
set beresp.http.X-Frame-Options = "sameorigin";
In short:
- X-XSS-Protection enables the cross-site scripting filter built into most modern web browsers.
- X-Content-Type-Options prevents MIME-sniffing a response away from the declared content type.
- X-Frame-Options prevents clickjacking by prohibiting our static sites from being included on other domains in an iframe.
Amazing Support
If you recall from our post back in November, one of the deciding factors in choosing Fastly as our CDN was the amazing support we received. The Fastly team was very helpful in finding the right parameters to tune and in writing the custom Varnish configuration. Their support team is always available, replies promptly, and makes sure every question is answered.
Conclusion
We could not be happier with the recent changes and the amazing support we received from Fastly. Our customers have already noticed the improved performance, and we will continue to make sure our static sites are as accessible as possible.