This series of articles will review how to use GitHub Actions securely. Personally, I do not like trusting someone's random GitHub action to deploy my code, and I am unsure how thoroughly GitHub vets these solutions for security. Therefore, I either deploy my own runners or use GitHub's verified actions, which can be found at The Official GitHub Actions Repository.
Set Up Your Repo Correctly
You will need to configure the following secrets:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
- S3_BUCKET
- S3_BUCKET_REGION
- AWS_CLOUDFRONT_DISTRIBUTION_ID
These are the same secrets you would set up in your GitHub repo for a preconfigured action like the one below.
I used to hardcode my bucket name and region. However, after The AWS S3 Denial of Wallet Amplification Attack, I have started adding a random string to the end of all my bucket names and storing them as secrets.
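If you prefer the terminal to the repository settings page, the GitHub CLI can create these secrets for you. The snippet below is only a sketch: it assumes gh is installed and authenticated for the repo, that the AWS values are already exported in your shell, and the bucket prefix is a made-up example.

# Assumes the AWS values are already exported in the current shell.
gh secret set AWS_ACCESS_KEY_ID --body "$AWS_ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --body "$AWS_SECRET_ACCESS_KEY"
gh secret set S3_BUCKET_REGION --body "$S3_BUCKET_REGION"
gh secret set AWS_CLOUDFRONT_DISTRIBUTION_ID --body "$AWS_CLOUDFRONT_DISTRIBUTION_ID"

# Add a random suffix to the bucket name so it is harder to guess,
# then store the full name as a secret too. "my-site" is a placeholder prefix.
S3_BUCKET="my-site-$(openssl rand -hex 4)"
gh secret set S3_BUCKET --body "$S3_BUCKET"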
Old Site Workflow
I noticed an old site I maintain was broken and looked at my deployment for it. I realized that when I set it up several years ago, I just trusted some random GitHub action. The site was really just a fun little project, so it didn't matter much, but I was exposing my AWS credentials to some random person's action. They could do anything with them.
name: S3 deploy
on:
  push:
    branches:
      - master
jobs:
  run:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v3
      - name: Deploy
        uses: reggionick/s3-deploy@v3
        with:
          folder: src
          bucket: ${{ secrets.S3_BUCKET }}
          bucket-region: ${{ secrets.S3_BUCKET_REGION }}
          delete-removed: true
          filesToInclude: '.*/*,*/*,**'
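Because this workflow handed my access keys to a third-party action, the safest assumption is that those keys are compromised and should be rotated. If you want to do that rotation from the terminal, something like the sketch below works; deploy-user is just a placeholder for whichever IAM user owns the key, and the old key ID comes from the list command.

# List the existing access keys for the deployment user (deploy-user is a placeholder).
aws iam list-access-keys --user-name deploy-user

# Create a fresh key pair to store as the new GitHub secrets.
aws iam create-access-key --user-name deploy-user

# Once the new key is in place and the workflow is green, delete the old one.
aws iam delete-access-key --user-name deploy-user --access-key-id AKIAOLDKEYID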
New Base Workflow for Old Site
I created a new workflow for that site, issued new credentials, and now only use official GitHub actions.
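While issuing new credentials, it is also worth scoping them down so a future leak can only touch this one site. The snippet below is just a sketch of one way to do that with an inline IAM policy; the user name, bucket name, and distribution ARN are placeholders, and your site may need a different set of actions.

# Attach a least-privilege inline policy to the deployment user.
# deploy-user, my-site-bucket, and the distribution ARN are placeholders.
aws iam put-user-policy \
  --user-name deploy-user \
  --policy-name site-deploy-least-privilege \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::my-site-bucket"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
        "Resource": "arn:aws:s3:::my-site-bucket/*"
      },
      {
        "Effect": "Allow",
        "Action": ["cloudfront:CreateInvalidation"],
        "Resource": "arn:aws:cloudfront::123456789012:distribution/PLACEHOLDER"
      }
    ]
  }'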
Let’s go through the workflow I am using for a simple site of mine, line by line.
# The name that will show up in your workflow runs
name: S3 deploy
# Holds your triggers
on:
  # Sets up the trigger for pushes to your repo
  push:
    # Sets up which branches to trigger the workflow on
    branches:
      - master
  # Allows the workflow to be triggered manually
  workflow_dispatch:
# Holds the jobs to run
jobs:
  # The job to run
  run:
    # Sets that we will run on the latest Ubuntu version
    runs-on: ubuntu-latest
    steps:
      # Uses an official GitHub action to check out the repo
      - uses: actions/checkout@v3
      # Name for the step
      - name: Configure AWS CLI
        # The commands that install and configure the AWS CLI - see below.
        run: |
          sudo apt-get update
          sudo apt-get install -y awscli
          aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws configure set default.region ${{ secrets.S3_BUCKET_REGION }}
      - name: Deploy to S3
        # Runs the AWS CLI sync, uploading new content and deleting files that no longer exist locally
        run: |
          aws s3 sync src s3://${{ secrets.S3_BUCKET }} --delete
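Before letting a workflow loose with --delete, a local dry run of the same command is a quick sanity check. Assuming the AWS CLI on your machine is configured with the same credentials and S3_BUCKET is exported in your shell, this previews the sync without touching the bucket:

# Preview exactly what would be uploaded and deleted without changing anything.
aws s3 sync src "s3://${S3_BUCKET}" --delete --dryrun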
Old Deployment Method for This Blog
With that old site fixed, I needed to fix how I deploy this blog.
Last week I switched from Windows 11 to Pop!_OS. Previously, I used the following PowerShell script to deploy this blog.
# Define parameters
$bucketName = "MyFolder"
$region = "us-west-1"
$cloudFrontDistributionId = "secret"
$localSiteDirectory = "C:\Projects\personal\thesimpledev.com\"
$buildOutputDirectory = Join-Path -Path $localSiteDirectory -ChildPath "public"
# Step 1: Build the Hugo site
Write-Host "Building the Hugo site..."
Set-Location -Path $localSiteDirectory
hugo
# Check if Hugo build was successful
if ($LASTEXITCODE -ne 0) {
    Write-Host "Hugo build failed."
    exit $LASTEXITCODE
}
# Step 2: Sync the `public` directory with the S3 bucket
Write-Host "Syncing files to S3 bucket: $bucketName..."
aws s3 sync $buildOutputDirectory s3://$bucketName/ --delete --region $region
# Step 3: Invalidate CloudFront distribution (if needed)
Write-Host "Creating CloudFront invalidation..."
$invalidationBatch = "{""Paths"":{""Quantity"":1,""Items"":[""/*""]},""CallerReference"":""$(Get-Date -Format o)""}"
aws cloudfront create-invalidation --distribution-id $cloudFrontDistributionId --invalidation-batch $invalidationBatch > $null 2>&1
Write-Host "Deployment complete."
New Deploy Workflow for TheSimpleDev.com
Using what I learned with my other site, I combined the same approach with the extra functionality from my old script, like the CloudFront invalidation.
name: S3 deploy
on:
  push:
    branches:
      - master
  workflow_dispatch:
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Install Hugo and AWS CLI
        run: |
          sudo apt-get update
          sudo apt-get install -y hugo awscli
      - name: Build the Hugo site
        run: hugo
      - name: Configure AWS CLI
        run: |
          aws configure set aws_access_key_id ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws configure set aws_secret_access_key ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws configure set default.region ${{ secrets.S3_BUCKET_REGION }}
      - name: Sync to S3
        run: aws s3 sync public/ s3://${{ secrets.S3_BUCKET }} --delete
      - name: Create CloudFront invalidation
        run: |
          invalidationBatch=$(jq -n --arg callerReference "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" --argjson items '["/*"]' '{"Paths": {"Quantity": 1, "Items": $items}, "CallerReference": $callerReference}')
          aws cloudfront create-invalidation --distribution-id ${{ secrets.AWS_CLOUDFRONT_DISTRIBUTION_ID }} --invalidation-batch "$invalidationBatch"
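If you ever need to kick off the same invalidation by hand, the AWS CLI also has a --paths shorthand that builds the invalidation batch and caller reference for you. A rough local equivalent, assuming the distribution ID is exported in your shell:

# Equivalent one-liner for local testing; the CLI generates the caller reference.
aws cloudfront create-invalidation \
  --distribution-id "$AWS_CLOUDFRONT_DISTRIBUTION_ID" \
  --paths "/*"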
Conclusion
This was a quick write-up to show how I transitioned from older, insecure actions and local deployment scripts to a more versatile and secure deployment option. While not a detailed walkthrough, I hope my examples provide insights for individuals deploying personal projects on how to do so more securely.
This article will be the first one deployed with this method, so fingers crossed!