If you want to set up quick, efficient provisioning and automation pipelines, and you rely on machine images as part of that framework, you’ll definitely want to prepare and maintain preconfigured images.
With AWS you can of course leverage Amazon’s AMIs for EC2 machine images. If you’re configuring autoscaling for an application, you definitely don’t want your launch configurations to launch new EC2 instances from base Amazon AMIs and then install your application’s prerequisites at runtime. That is slow and tedious, and leads to sluggish, unresponsive autoscaling.
Packer comes in at this point as a great tool to script, automate and pre-bake custom AMI images. (Packer is a tool by HashiCorp, of Terraform fame.) Packer also enables us to store our image configuration in source control and set up pipelines to test our images at creation time, so that when it comes time to launch them, we can be confident they’ll work.
Packer doesn’t only work with Amazon AMIs. It supports tons of other image formats via different Builders, so if you’re on Azure or some other cloud or even on-premise platform you can also use it there.
Below I’ll be listing out the high-level steps to create your own custom AMI using Packer. It’ll be Windows Server 2012 R2 based, enable WinRM connections at build time (to allow Packer to remote in and run various setup scripts), handle sysprep and EC2 configuration such as setting the administrator password and the EC2 computer name, and will even run some provisioning tests with Pester.
You can grab the files / policies required to set this up on your own from my GitHub repo here.
Setting up credentials to run Packer and an IAM role for your Packer build machine to assume
First things first, you need to be able to run Packer with the minimum set of permissions it needs. You can run packer on an EC2 instance that has an EC2 role attached that provides it the right permissions, or if you’re running from a workstation, you’ll probably want to use an IAM user access/secret key.
Here is an IAM policy that you can use for either of these. Note it also includes an iam:PassRole statement that references an AWS account number and specific role. You’ll need to update the account number to your own, and create the Role called Packer-S3-Access in your own account.
IAM Policy for user or instance running Packer:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:CopyImage",
                "ec2:CreateImage",
                "ec2:CreateKeypair",
                "ec2:CreateSecurityGroup",
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteKeypair",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteSnapshot",
                "ec2:DeleteVolume",
                "ec2:DeregisterImage",
                "ec2:DescribeImageAttribute",
                "ec2:DescribeImages",
                "ec2:DescribeInstances",
                "ec2:DescribeRegions",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSnapshots",
                "ec2:DescribeSubnets",
                "ec2:DescribeTags",
                "ec2:DescribeVolumes",
                "ec2:DetachVolume",
                "ec2:GetPasswordData",
                "ec2:ModifyImageAttribute",
                "ec2:ModifyInstanceAttribute",
                "ec2:ModifySnapshotAttribute",
                "ec2:RegisterImage",
                "ec2:RunInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:RequestSpotInstances",
                "ec2:CancelSpotInstanceRequests"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::YOUR_AWS_ACCOUNT_NUMBER_HERE:role/Packer-S3-Access"
        }
    ]
}
IAM Policy to attach to the new Role called Packer-S3-Access. (Note: replace the referenced S3 bucket name with a bucket name of your own, which will be used to provision artifacts into your AMI images. See a little further down for details on the bucket.)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3BucketListing",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": [
                        "",
                        "Packer/"
                    ],
                    "s3:delimiter": [
                        "/"
                    ]
                }
            }
        },
        {
            "Sid": "AllowListingOfdesiredFolder",
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE"
            ],
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "Packer/*"
                    ]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInFolder",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR-OWN-PROVISIONING-S3-BUCKET-HERE/Packer/*"
            ]
        }
    ]
}
This will allow Packer to use the iam_instance_profile configuration value to specify the Packer-S3-Access EC2 role in your image definition file. Essentially, your temporary Packer EC2 instance assumes the Packer-S3-Access role, which grants it just enough privileges to download the bootstrapping files / artifacts you may wish to bake into your custom AMI. This is all quite secure too: the policy only allows the Packer instance to assume this specific role, and the instance itself is temporary.
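One detail worth calling out: for the temporary instance to assume Packer-S3-Access, the role also needs a trust policy (assume role policy) that allows the EC2 service to assume it. This isn't shown in the policies above, so treat the following as a minimal example of the standard EC2 trust relationship (the console creates it for you if you choose EC2 as the trusted entity when creating the role):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ec2.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

If you create the role via the console with EC2 as the service, the matching instance profile (which iam_instance_profile actually references) is created automatically as well.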
Setting up your Packer image definition
Once the above policies and roles are in place, you can set up your main Packer image definition file. This is a JSON file that describes your image definition as well as the scripts and items to provision inside it.
Look at standardBaseImage.json in the GitHub repository to see how this is defined.
standardBaseImage.json
{
    "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "instance_type": "t2.small",
        "ami_name": "Shogan-Server-2012-Build-{{isotime \"2006-01-02\"}}-{{uuid}}",
        "iam_instance_profile": "Packer-S3-Access",
        "user_data_file": "./ProvisionScripts/ConfigureWinRM.ps1",
        "communicator": "winrm",
        "winrm_username": "Administrator",
        "winrm_use_ssl": true,
        "winrm_insecure": true,
        "source_ami_filter": {
            "filters": {
                "name": "Windows_Server-2012-R2_RTM-English-64Bit-Base-*"
            },
            "owners": ["amazon"],
            "most_recent": true
        }
    }],
    "provisioners": [
        {
            "type": "powershell",
            "scripts": [
                "./ProvisionScripts/EC2Config.ps1",
                "./ProvisionScripts/BundleConfig.ps1",
                "./ProvisionScripts/SetupBaseRequirementsAndTools.ps1",
                "./ProvisionScripts/DownloadAndInstallS3Artifacts.ps1"
            ]
        },
        {
            "type": "file",
            "source": "./Tests",
            "destination": "C:/Windows/Temp"
        },
        {
            "type": "powershell",
            "script": "./ProvisionScripts/RunPesterTests.ps1"
        },
        {
            "type": "file",
            "source": "PesterTestResults.xml",
            "destination": "PesterTestResults.xml",
            "direction": "download"
        }
    ],
    "post-processors": [
        {
            "type": "manifest"
        }
    ]
}
When Packer runs, it will build out an EC2 machine as per the definition file, copy across any contents specified, and execute any provisioning scripts defined in this file.
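Kicking off the build is a single command. The sketch below assumes packer is on your PATH and that AWS credentials are available (via environment variables, a credentials file, or an attached instance role):

```shell
# Check the template for syntax and configuration errors first
packer validate standardBaseImage.json

# Build the AMI: Packer launches the temporary EC2 instance,
# provisions it, creates the image, then tears everything down
packer build standardBaseImage.json
```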
The packer image definition in the repository I’ve linked above will:
- Create a Server 2012 R2 base instance.
- Enable WinRM for Packer to be able to connect to the temporary instance.
- Run sysprep to generalize it.
- Set up EC2 configuration.
- Download a bunch of tools (including Pester for running tests once the image build is done).
- Download any S3 artifacts you’ve placed in a specific bucket in your account and store them on the image.
S3 Downloads into your AMI during build
Create a new S3 bucket and give it a unique name of your choice. Set it to private, and create a new virtual folder inside the bucket called Packer. This bucket should have the same name you specified in the Packer-S3-Access role policy in the policy definition section above.
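If you prefer the AWS CLI over the console, the bucket setup looks roughly like this (the installer file name is just a placeholder; a "folder" in S3 is simply a key prefix, so uploading under Packer/ creates it):

```shell
# Create a private bucket (bucket names are globally unique, so pick your own)
aws s3 mb s3://YOUR-OWN-PROVISIONING-S3-BUCKET-HERE

# Upload an artifact under the Packer/ prefix
aws s3 cp ./some-installer.msi s3://YOUR-OWN-PROVISIONING-S3-BUCKET-HERE/Packer/
```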
Place any software installers or artifacts you would like to be baked into your image in the /Packer virtual folder.
Update the DownloadAndInstallS3Artifacts.ps1 script to reference any software installers and execute the installers. (See the commented out section for an example). This PowerShell script will download anything under the /Packer virtual folder and store it in your image under C:\temp\S3Downloads.
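The download side of that script boils down to a Read-S3Object call along these lines. This is a minimal sketch, not the repo script verbatim; the cmdlet comes from the AWSPowerShell module, and the bucket name is a placeholder:

```powershell
Import-Module AWSPowerShell

# Download everything under the Packer/ prefix into the image.
# Read-S3Object creates the destination folder if it doesn't exist.
Read-S3Object -BucketName "YOUR-OWN-PROVISIONING-S3-BUCKET-HERE" `
    -KeyPrefix "Packer" `
    -Folder "C:\temp\S3Downloads"
```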
Testing
Finally, you can add your own Pester tests to validate tasks carried out during the Packer image creation.
Define any custom tests under the /Tests folder.
Here is a simple test that checks that the S3 download of items from /Packer was successful (the Read-S3Object cmdlet will create the folder and download items into it from your bucket):
Describe 'S3 Artifacts Downloads' {
    It 'downloads artifacts from S3' {
        "C:\temp\S3Downloads" | Should -Exist
    }
}
The main image definition file ensures that these are all copied into the image at build time (to the temp directory) and from there Pester executes them.
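The test run that produces PesterTestResults.xml comes down to an Invoke-Pester call along these lines (a sketch, assuming Pester 4.x syntax and the Tests folder copied to C:\Windows\Temp as per the definition file):

```powershell
# Run all tests baked into the image and emit NUnit-format results,
# which the file provisioner then downloads from the instance
Invoke-Pester -Script "C:\Windows\Temp\Tests" `
    -OutputFile "PesterTestResults.xml" `
    -OutputFormat NUnitXml
```

The NUnit format is what most CI servers, TeamCity included, can parse natively.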
Hook up your image build process to a build system like TeamCity and you can get it to output the results of the tests from PesterTestResults.xml.
Have fun automating and streamlining your image builds with Packer!