Introduction
There are countless ways to host a website today. For smaller sites, basic hosting providers like GitHub Pages or Netlify are sufficient as they require minimal setup, allowing your site to go live in mere minutes.
However, if you’re dealing with a larger website and don’t want to compromise on load times, Amazon Web Services (AWS) offers an efficient solution. Using AWS CloudFront and S3 buckets, you can host a static website with low latency and high performance, ensuring that it is always readily available to users.
In this guide, we’ll walk you through the process of hosting a static website on S3 using AWS CloudFront. We’ll also cover how to use CloudTrail for logging changes and how to provision the infrastructure for static hosting using Infrastructure as Code (IaC) tools like Terraform in GitOps. Additionally, we’ll discuss some shift-left practices in CI pipelines to optimise the development cycles.
Benefits of Hosting a Static Website on S3 With AWS CloudFront
Using S3 buckets and CloudFront distributions can provide a robust, secure, and efficient environment for hosting a static website. Here’s how:
Scalability
S3 can automatically handle large amounts of traffic, ensuring your website remains accessible during traffic spikes without manual intervention.
Low Latency
AWS CloudFront serves your content through a global network of edge locations, allowing faster load times for users across varied geographical locations.
Cost-Effective
S3 and CloudFront’s pay-as-you-go model lets you pay only for the storage and bandwidth you actually use, making it cost-effective for both small and large websites.
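As a rough illustration of the pay-as-you-go model, here is a back-of-envelope sketch of monthly hosting cost. The per-GB rates below are placeholders for illustration only, not current AWS pricing; always check the AWS pricing pages for real numbers.

```python
def estimate_monthly_cost(storage_gb: float, transfer_gb: float,
                          s3_rate_per_gb: float = 0.023,
                          cf_rate_per_gb: float = 0.085) -> float:
    """Rough monthly cost: S3 storage plus CloudFront data transfer out.

    The default rates are illustrative placeholders, not real AWS pricing.
    """
    return round(storage_gb * s3_rate_per_gb + transfer_gb * cf_rate_per_gb, 2)

# A 2 GB static site serving roughly 50 GB of traffic per month
print(estimate_monthly_cost(2, 50))  # -> 4.3
```

The point of the sketch is that with no servers to keep running, cost scales with what you store and what you serve, which is why the model suits both small and large static sites.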
High Availability
S3 is designed for 99.999999999% durability, ensuring your data is safe and readily available. CloudFront’s distributed nature further strengthens availability.
Custom Domain Support
You can configure CloudFront to use your custom domain name, which can help reinforce your brand identity and website credibility.
Security
S3 and CloudFront provide various security options, including SSL/TLS encryption, IAM roles for access control, and the ability to serve content over HTTPS to protect user data.
Before diving into how to host a static website, let’s take a look at the tools we’ll be using.
AWS CloudFront
CloudFront is a service by Amazon that accelerates the distribution of both static and dynamic web content. This includes HTML, CSS, JavaScript, images, and other media types.
AWS CloudFront delivers content through a worldwide network of data centres called edge locations. This helps reduce latency, which is the time delay between a user’s request for content and the delivery of that content. By routing each request to the edge location with the lowest latency, CloudFront improves the loading speed of your website and enhances the user experience.
Route 53
Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. You can use Route 53 to perform three main functions in any combination: domain registration, DNS routing, and health checking.
AWS Certificate Manager
AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
CloudTrail
CloudTrail enables auditing, security monitoring, and operational troubleshooting by tracking user activity and API usage. CloudTrail logs, continuously monitors, and retains account activity related to actions across your AWS infrastructure, giving you control over storage, analysis, and remediation actions.
AWS WAF
AWS WAF is a web application firewall that lets you monitor the HTTP(S) requests that are forwarded to your protected web application resources.
Amazon S3
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers data scalability, availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for various use cases, including data lakes, websites, mobile applications, backup and restore, archiving, enterprise applications, IoT devices, and big data analytics.
Terraform
Terraform is an open-source infrastructure-as-code tool created by HashiCorp. Users define and manage data centre infrastructure using a declarative configuration language known as HashiCorp Configuration Language (HCL), or optionally JSON.
GitHub Actions
GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipelines. You can create workflows to build and test every pull request to your repository or deploy merged pull requests to production.
Now that you are familiar with the services we’ll be using, let’s move on to the steps involved in hosting a static website. Below is a high-level overview of the AWS services that will be utilised to host the static content, with the static content pushed to the S3 bucket through a CI pipeline.
Provisioning AWS Infrastructure Using CloudFront, S3, and Terraform With GitHub Actions Pipelines
The general steps are as follows:
1. The repository contains Terraform code for CloudFront, S3, AWS WAF, CloudTrail, and more. For further details, refer to the README.md file in the repository.
2. The first step is to create an S3 bucket where our Terraform state file will be stored and to set up a backend configuration. For more information, refer to the Terraform documentation on state files and state locking with AWS DynamoDB. We also need to enable bucket versioning.
3. Once the bucket is created, we set up the backend configuration for Terraform in backend.tf. Here, the bucket parameter specifies the name of the S3 bucket and the key parameter defines the name under which the state file will be created.
terraform {
  backend "s3" {
    bucket = "caw-aws-aps1-demo-s3-tfstate"
    key    = "s3-backend-demo-infra-module.tfstate"
    region = "ap-south-1"
  }
}
4. We also set up the provider in provider.tf. Alias is used to set up providers for two different AWS regions as WAF needs to be configured in us-east-1. For more information on multiple provider configurations, refer to the Terraform documentation.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.45.0"
    }
  }
}

provider "aws" {
  region     = "ap-south-1"
  access_key = var.access_key_id
  secret_key = var.access_key_secret
}

provider "aws" {
  alias      = "aws-waf-web-acl-provider"
  region     = "us-east-1"
  access_key = var.access_key_id
  secret_key = var.access_key_secret
}
5. Create the S3 bucket for static website hosting in s3.tf.
resource "aws_s3_bucket" "this-s3-bucket" {
  bucket        = "${local.parent_org_name}-${local.cloud_provider}-${local.region}-${local.environment}-s3-${local.project}-${var.s3_bucket_name}"
  force_destroy = true
}

resource "aws_s3_bucket_website_configuration" "this-s3-website-configuration" {
  bucket = var.bucket-name

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }

  depends_on = [aws_s3_bucket.this-s3-bucket]
}

resource "aws_s3_bucket_policy" "this-s3-policy" {
  bucket = var.bucket-name
  policy = <<POLICY
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::${var.bucket-name}/*"
    }
  ]
}
POLICY

  depends_on = [aws_s3_bucket.this-s3-bucket]
}

resource "aws_s3_bucket_acl" "this-s3-acl_policy" {
  bucket = var.bucket-name

  access_control_policy {
    grant {
      grantee {
        id   = data.aws_canonical_user_id.current.id
        type = "CanonicalUser"
      }
      permission = "READ"
    }

    grant {
      grantee {
        id   = data.aws_canonical_user_id.current.id
        type = "CanonicalUser"
      }
      permission = "READ_ACP"
    }

    grant {
      grantee {
        id   = data.aws_canonical_user_id.current.id
        type = "CanonicalUser"
      }
      permission = "WRITE"
    }

    grant {
      grantee {
        id   = data.aws_canonical_user_id.current.id
        type = "CanonicalUser"
      }
      permission = "WRITE_ACP"
    }

    grant {
      grantee {
        type = "Group"
        uri  = "https://acs.amazonaws.com/groups/global/AllUsers"
      }
      permission = "READ_ACP"
    }

    grant {
      grantee {
        type = "Group"
        uri  = "https://acs.amazonaws.com/groups/global/AllUsers"
      }
      permission = "READ"
    }

    owner {
      id = data.aws_canonical_user_id.current.id
    }
  }
}
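To make the bucket naming convention in s3.tf easier to follow, here is a hypothetical Python helper that mirrors the same pattern; the example values below are made up for illustration and are not the actual locals from the repository.

```python
def bucket_name(parent_org: str, cloud: str, region: str, env: str,
                project: str, suffix: str) -> str:
    """Mirror of the Terraform naming convention used in s3.tf:
    <org>-<cloud>-<region>-<env>-s3-<project>-<suffix>."""
    return f"{parent_org}-{cloud}-{region}-{env}-s3-{project}-{suffix}"

# Illustrative values only
print(bucket_name("caw", "aws", "aps1", "demo", "portal", "static-site"))
# -> caw-aws-aps1-demo-s3-portal-static-site
```

Encoding the org, cloud, region, environment, and project into the name keeps bucket names unique across accounts and makes resources easy to identify at a glance.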
6. Create the CloudFront distribution in cloudfront.tf. Note: These parameters are for demo/blog purposes only. Follow CloudFront best practices for production use.
resource "aws_cloudfront_distribution" "this-cdn-portal" {
  origin {
    domain_name = data.aws_s3_bucket.demo-portal-s3-bucket.bucket_domain_name
    origin_id   = data.aws_s3_bucket.demo-portal-s3-bucket.bucket_domain_name
  }

  wait_for_deployment = false
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = data.aws_s3_bucket.demo-portal-s3-bucket.bucket_domain_name
    compress               = true
    cache_policy_id        = data.aws_cloudfront_cache_policy.cache_policy_cloudfront.id
    viewer_protocol_policy = "allow-all"
  }

  web_acl_id  = aws_wafv2_web_acl.this-waf-web-acl.arn
  price_class = "PriceClass_All"

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }
}
7. Create the AWS WAF web ACL in aws-waf.tf.
resource "aws_wafv2_web_acl" "this-waf-web-acl" {
  provider    = aws.aws-waf-web-acl-provider
  name        = "${local.parent_org_name}-${local.cloud_provider}-${local.region}-${local.environment}-waf_web_acl"
  description = "Bot Control waf aws acl"
  scope       = "CLOUDFRONT"

  default_action {
    allow {}
  }

  rule {
    name     = "AWS-AWSManagedRulesBotControlRuleSet"
    priority = 0

    statement {
      managed_rule_group_statement {
        vendor_name = "AWS"
        name        = "AWSManagedRulesBotControlRuleSet"
      }
    }

    override_action {
      count {}
    }

    visibility_config {
      sampled_requests_enabled   = true
      cloudwatch_metrics_enabled = true
      metric_name                = "AWS-AWSManagedRulesBotControlRuleSet"
    }
  }

  visibility_config {
    sampled_requests_enabled   = true
    cloudwatch_metrics_enabled = true
    metric_name                = "AWS-AWSManagedRulesBotControlRuleSet"
  }
}
8. Create a workflow in main.yml to rebuild the site and deploy it automatically after each update. The workflow is triggered only when files under specific directories are modified in a pull request targeting the develop branch.
name: Plan using Terraform

on:
  pull_request:
    branches:
      - develop
    paths:
      - 'terraform/dev/**'
9. The Terraform code is scanned for security issues using tfsec. This constitutes the static code analysis phase for the Terraform code.
- name: Clone repo
  uses: actions/checkout@master

- name: tfsec
  id: tfsec
  uses: aquasecurity/tfsec-action@v1.0.0
  with:
    soft_fail: true
10. More gates like this can be added to the CI pipeline, such as TFCost, which can project the approximate cost of the infrastructure being provisioned by Terraform. If the security check fails, the build is halted.
11. Use GitHub Actions to set up the Terraform project and run Terraform jobs.
- name: Check out code
  uses: actions/checkout@v2

- name: Setup Terraform
  uses: hashicorp/setup-terraform@v2
  with:
    terraform_version: 1.3.7
12. Configure AWS credentials and initialize Terraform.
- name: Configure AWS Credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-region: ap-south-1
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

- name: Initialize Terraform
  id: init
  run: |
    terraform init \
      -backend-config="access_key=${{ secrets.AWS_ACCESS_KEY_ID }}" \
      -backend-config="secret_key=${{ secrets.AWS_SECRET_ACCESS_KEY }}"
13. The credentials are sourced from GitHub Secrets to ensure that sensitive information does not accidentally end up in a public repository. To set these up, head to your repository -> Settings -> Secrets and variables -> Actions.
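If you prefer the command line, the same secrets can be added with the GitHub CLI instead of the web UI. This is a sketch assuming `gh` is installed and authenticated against the repository; the secret names must match those referenced in the workflow.

```shell
# Store the AWS credentials as repository secrets
# (each command prompts for the secret value so it never lands in shell history)
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
```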
14. Configure the Terraform plan and GitHub outputs that comment on the changes to be made in infrastructure.
- name: Plan Terraform
  id: plan
  continue-on-error: true
  run: |
    terraform plan -no-color
15. Once we create a new branch and open a pull request, the pull request triggers the GitHub Actions workflow, which runs on GitHub Actions runners.
git checkout -b demo-project
git add .
git commit -m "First commit"
git push origin demo-project
16. The plan output is automatically logged as a PR comment: a GitHub Actions script step collects the outcomes of the earlier tfsec, init, and plan steps and publishes the results as a comment.
- uses: actions/github-script@v6
  if: github.event_name == 'pull_request'
  env:
    PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
  with:
    script: |
      const output = `#### Terraform Tfsec 🤖\`${{ steps.tfsec.outcome }}\`
      #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`

      <details><summary>Show Plan</summary>

      \`\`\`
      ${process.env.PLAN}
      \`\`\`

      </details>

      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`, Working Directory: \`${{ env.tf_actions_working_dir }}\`, Workflow: \`${{ github.workflow }}\`*`;

      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })
17. An email and a Slack notification are sent to reviewers to review the changes.
18. After the reviewers’ approval, the Pull Request is merged and another pipeline is triggered which provisions the infrastructure.
19. The infrastructure changes are applied after the manual approval is given.
20. After the infrastructure deployment, Route 53 entries need to be configured with the ACM certificate and added to AWS CloudFront. Please note that the Terraform code for Route 53 is beyond the scope of this article and can be configured manually.
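Although the Terraform code for Route 53 is out of scope here, a rough sketch of an alias record pointing at the distribution could look like the following. The hosted zone variable and the record name are assumptions for illustration, and note that the ACM certificate referenced by CloudFront must be issued in us-east-1.

```hcl
resource "aws_route53_record" "this-cdn-alias" {
  zone_id = var.hosted_zone_id  # assumed variable holding your Route 53 hosted zone ID
  name    = "www.example.com"   # placeholder domain
  type    = "A"

  alias {
    name                   = aws_cloudfront_distribution.this-cdn-portal.domain_name
    zone_id                = aws_cloudfront_distribution.this-cdn-portal.hosted_zone_id
    evaluate_target_health = false
  }
}
```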
21. After the infrastructure has been deployed, the next step is to push the built code to the S3 bucket so that it can be hosted. For this, we need another pipeline to synchronise the project’s build files with S3.
aws s3 sync --follow-symlinks --delete --no-progress ./dist/apps/<foldername>/ s3://${{ secrets.<bucketname> }}/
22. Create an invalidation for AWS CloudFront using the following command.
aws cloudfront create-invalidation --distribution-id ${{secrets.<distribution_idName>}} --paths "/*"
After creating the CloudFront invalidation, verify that the changes are propagated across all edge locations. This typically takes 5-10 minutes depending on the distribution settings. Monitor the invalidation status in the CloudFront console or using AWS CLI. Once complete, test the website through the CloudFront URL to ensure all content is being served correctly. For enhanced security, consider implementing additional WAF rules and enabling CloudFront access logging to track request patterns.
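From the CLI, the invalidation status can be polled with `aws cloudfront get-invalidation`; the distribution and invalidation IDs below are placeholders you would substitute with the values returned by the create-invalidation call.

```shell
# Status is "InProgress" until the invalidation finishes, then "Completed"
aws cloudfront get-invalidation \
  --distribution-id <distribution_id> \
  --id <invalidation_id> \
  --query 'Invalidation.Status'

# Or block until the invalidation completes
aws cloudfront wait invalidation-completed \
  --distribution-id <distribution_id> \
  --id <invalidation_id>
```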
Conclusion
The combination of Terraform and GitHub Actions offers a powerful and secure way to set up a static website using AWS CloudFront and S3 buckets. By leveraging Terraform’s Infrastructure as Code (IaC) approach and GitOps practices, the process of provisioning and configuring the necessary resources is automated and streamlined. Additionally, security checks like TFSEC are integrated into CI pipelines to ensure the code meets security standards. This approach provides an efficient solution for anyone looking to set up a static website using similar methods.
Want to simplify your web hosting setup? Schedule a call with us to discuss how we can help you build a secure, scalable, and automated infrastructure tailored to your needs.
FAQs
What is the benefit of using Amazon S3 and CloudFront for hosting a static website?
Amazon S3 and CloudFront offer scalability, low latency, high availability, and cost-efficiency. S3 can handle large volumes of traffic, while CloudFront ensures faster load times by distributing content through a global network of edge locations.
Can I use a custom domain with a static website hosted using S3 Buckets and CloudFront?
Yes, you can use Amazon Route 53 to set up a custom domain name for your static website. By configuring DNS settings and associating the domain with your CloudFront distribution, you can serve your website using your own branded URL.
What security measures are available when hosting a static website on AWS S3 and CloudFront?
AWS offers several security features, including SSL/TLS encryption, IAM roles for access control, and the use of AWS WAF to protect against web attacks. Additionally, CloudFront can serve content over HTTPS, ensuring secure data transmission.