Amazon Linux Developer Docker Image

November 12, 2020

I've been starting to use AWS a fair bit at work and, by extension, the AWS CLI. The CLI itself is pretty easy to set up on either Windows or Linux (in my case WSL2, as that's my go-to Linux environment now). However, once you start relying on additional tools or development languages, there's often more setup involved. This can be a pain to repeat every time you set up a new PC, or help colleagues set up theirs.

For me this rapidly became a good reason to script the whole process using a Dockerfile to build an image that can be shared and updated easily. This provides an immutable, ephemeral development environment for doing all things AWS. We can bind a local folder to the container while it's running, and connecting via Visual Studio Code is super easy too.

Let's explore my Dockerfile to understand what's going on during the build steps. Most of it is a combination of the tools and utilities I've been using at work, along with hints from the build image that AWS uses in the Amplify Console.

We start from the Amazon Linux 2 base image.

FROM amazonlinux:2

Set a working directory; the Python download and build steps below will run here.

WORKDIR /opt

Next, let's install a few basic prerequisites using yum.

RUN echo 'Install prereqs' && \
    yum -y install groff less which unzip && \
    yum clean all && \
    rm -rf /var/cache/yum

Now let's get some prerequisites in place before we download and build Python (plus a few others that are always useful, like nano and git of course!).

RUN echo 'Install some pre reqs for Python' && \
    yum -y install gcc openssl-devel bzip2-devel libffi-devel tar gzip make git nano && \
    yum clean all && \
    rm -rf /var/cache/yum

Download the Python source and extract it into the /opt folder.

RUN echo 'Download and extract Python 3.8' && \
    curl "https://www.python.org/ftp/python/3.8.6/Python-3.8.6.tgz" -o "Python-3.8.6.tgz" && \
    tar xzf Python-3.8.6.tgz

Now we actually build and install it. Using altinstall installs our version alongside the system default instead of replacing it; overwriting the system Python can break tools (like yum) that depend on it.

RUN echo 'Install Python 3.8' && \
    cd Python-3.8.6 && \
    ./configure --enable-optimizations && \
    make altinstall && \
    rm -f /opt/Python-3.8.6.tgz

Finally for Python I like to set up an alias for the version we installed.

RUN echo 'Alias for python3.8' && \
    echo "alias python=python3.8" >> ~/.bashrc
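
Note that sourcing ~/.bashrc inside a RUN step has no lasting effect, since each RUN gets a fresh shell; what matters is that the alias line is recorded in the file, which interactive shells in the container read on startup. A quick sketch of that logic, using a throwaway rc file as a stand-in for the real ~/.bashrc:

```shell
# Simulate the Dockerfile step against a temporary rc file instead of ~/.bashrc
rc=$(mktemp)
echo "alias python=python3.8" >> "$rc"
# The alias line is now recorded; an interactive bash in the container picks
# it up when it sources ~/.bashrc on startup
grep "alias python" "$rc"
```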

Now on to Node.js, which is pretty important (in my case as a mostly front-end developer of late). There are also global tools we install that need Node. We are using Node Version Manager (nvm), as it's great for installing, running, and switching between different versions of Node. In our case we install the long term support versions of Node 8, 10 and 12 (carbon, dubnium and erbium respectively). We'll also tell nvm to use version 12 by default.

RUN echo 'Install node via nvm' && \
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.0/install.sh | bash
RUN /bin/bash -c ". ~/.nvm/nvm.sh && \
    nvm install lts/carbon && \
    nvm install lts/dubnium && \
    nvm install lts/erbium && \
    nvm use lts/erbium && \
    nvm alias default node && nvm cache clear"  # Default to the latest node and empty cache
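
Once you're in a running container, nvm makes it easy to hop between the installed versions per shell. A sketch of an interactive session (the exact version output will vary):

```shell
nvm ls                 # list the installed versions (8, 10 and 12)
nvm use lts/dubnium    # switch this shell to Node 10
node --version
nvm use default        # back to the default (Node 12, erbium)
```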

Let's install some important and useful CLI tools, namely the AWS CLI, SAM CLI and CDK CLI.

# Install AWS CLI
RUN echo 'Install AWS CLI' && \
    /bin/bash -c "python3.8 -m pip install awscli && rm -rf /root/.cache/pip"

# Install SAM CLI
RUN echo 'Install SAM CLI' && \
    /bin/bash -c "python3.8 -m pip install aws-sam-cli"

# Install CDK CLI
RUN echo 'Install CDK CLI' && \
    /bin/bash -c ". ~/.nvm/nvm.sh && npm install -g aws-cdk"
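
With everything installed, a quick smoke test inside the container confirms the three CLIs are on the PATH (version numbers will vary):

```shell
aws --version
sam --version
cdk --version
```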

Well, that's about it. You can keep adding your own commonly used tools, or set things up in the .bashrc file (or set up another shell, I guess). Finally, I set an entry point, and then it's ready to build and run.

ENTRYPOINT [ "bash", "-c" ]
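
Because the entry point is bash -c, whatever you pass as the container command is executed as a single command string. The same mechanics, demonstrated outside Docker:

```shell
# bash -c treats its first argument as a command string to run, which is
# exactly what the ENTRYPOINT above does with the container's arguments
bash -c 'echo "hello from bash -c"'
```

So, for example, docker container run aws-dev-env:v0.2 "aws --version" runs a one-off command and exits.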

Build the image from the Dockerfile

docker build -t <tag-name>:<version> .

Run the container

docker container run -it <tag-name>:<version> /bin/bash

e.g. docker container run -it aws-dev-env:v0.2 /bin/bash

If you want to mount a working directory as a volume, that's straightforward too, e.g.

docker container run -v /mnt/c/Users/user/path/to/folder/:/tmp -it aws-dev-env:v0.2 /bin/bash

Essentially, the command above mounts an absolute path on the host machine to a folder in the container (the folder is created if it doesn't already exist). You can mount multiple folders by passing additional -v flags with the same structure.
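
For instance, mounting both a project folder and your AWS credentials folder (the host paths here are placeholders for your own) might look like:

```shell
docker container run \
  -v /mnt/c/Users/user/path/to/project/:/workspace \
  -v /mnt/c/Users/user/.aws/:/root/.aws \
  -it aws-dev-env:v0.2 /bin/bash
```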