
Apps in the Cloud: Deploying Your App Using Docker Containers and AWS

Welcome to the first post in a series of articles with one clear objective: to create a cloud-based framework for your applications and give them a jump-start by deploying them on established cloud resources. We will leverage the benefits of cloud infrastructure, such as elastic capacity, redundancy, global availability, high speed, and cost-effectiveness, so that your software's reach can be maximized with little refactoring and few dependencies. By the end of this post, you will have used Docker containers and AWS to create a tangible, platform-agnostic cloud foundation: the canvas on which your application will draw its next iteration of the cloud deployment process. Let's jump right in.

What is the First Step? Virtualize! 

First and foremost, you need to determine whether your application can be virtualized. It is important to estimate how much effort would go into virtualizing your application and to decide whether it is worth the end benefits. There is no magic wand, or, in this case, no single tool that can virtualize any application and its dependencies. That being said, the benefits, the potential gains, and the industry trend toward distributing and hosting solutions in the cloud encouraged me to create this tutorial based on my experiences. Once you have confirmed that your application can be virtualized, we can move on to the next step, which involves Docker and the beauty behind its virtualization mechanisms.

Docker Virtualization

Docker is a platform that virtualizes operating system dependencies and requirements in order to facilitate the creation, configuration, deployment, and execution of applications using containers. For context, containers and virtual machines are alike with regard to resource isolation and allocation, but they differ in that containers virtualize the operating system instead of the hardware. This generally makes containers more portable, lightweight, and efficient.
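You can see this kernel sharing for yourself with a quick experiment (a sketch assuming Docker is installed; the tiny alpine image is just a convenient example):

```shell
# On a Linux host, both commands print the same kernel version, because a
# container reuses the host kernel rather than booting its own.
# Guarded so the snippet degrades gracefully where Docker is unavailable.
if command -v docker >/dev/null 2>&1; then
  uname -r                          # kernel version on the host
  docker run --rm alpine uname -r   # kernel version seen from inside a container
  result=compared
else
  result=skipped
fi
echo "$result"
```

Note that on macOS or Windows, Docker runs containers inside a lightweight VM, so the two versions will differ there.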

[Diagram: Virtual Machines vs. Containers]

How Do I Put My Application in a Docker Container?

• Install a maintained version of Docker Community Edition (CE) or Enterprise Edition (EE) on your supported platform.

• Run “docker --version” to make sure that you have a supported version of Docker:

julianrodriguez$ docker --version
Docker version 18.09.1, build 4c52b90

• Test if your installation works by running a sample Docker image: docker run hello-world


• List the available Docker images on your machine: docker image ls


• List the available Docker containers: docker container ls

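The verification steps above can be sketched as a single shell session (guarded so it degrades gracefully on machines without Docker):

```shell
# Sanity-check a fresh Docker installation end to end.
if command -v docker >/dev/null 2>&1; then
  docker --version          # confirm a supported version is installed
  docker run hello-world    # pull and run the official test image
  docker image ls           # hello-world should now appear among local images
  docker container ls -a    # -a also lists exited containers, like hello-world
  status=verified
else
  status="docker not installed"
fi
echo "$status"
```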

Dockerfiles, the Cooking Recipes of Docker Containers

A Dockerfile defines the blueprint of the environment inside your container and its dependencies. Networking and storage are virtualized inside this environment and isolated from the rest of your system, so ports, files, and other environment-specific requirements need to be specified explicitly. Once they are, the build of your app defined in this Dockerfile behaves exactly the same wherever and whenever it runs.

Here is a Dockerfile example:

#Stage 1
#Start with a base image containing Java runtime
FROM openjdk:8-jdk-alpine as build

#Add maintainer info
LABEL maintainer="maintainer@testemail.com"

#The application's JAR file
ARG JAR_FILE

#Add the application's JAR to the container
COPY ${JAR_FILE} app.jar

#Unpack the JAR file
RUN mkdir -p target/dependency && (cd target/dependency; jar -xf /app.jar)

#Stage 2
#Same Java runtime
FROM openjdk:8-jdk-alpine

#Add volume pointing to /tmp
VOLUME /tmp

#Make port 80 available to the world outside this container
EXPOSE 80
#Copy unpackaged application to a new container
ARG DEPENDENCY=/target/dependency
COPY --from=build ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY --from=build ${DEPENDENCY}/META-INF /app/META-INF
COPY --from=build ${DEPENDENCY}/BOOT-INF/classes /app

#Execute the application
ENTRYPOINT ["java","-cp","app:app/lib/*","your.application.main.endpoint.here"]

As you can see, the example above is fairly self-explanatory and well documented. The main item to consider is the “FROM openjdk:8-jdk-alpine as build” clause, which defines a base Docker image with the Alpine Linux distribution and an embedded Java Development Kit (JDK). This is the foundation of our Docker image, which copies, unpacks, and creates folders, as well as exposing port 80 to host the Java application.

An execution statement is now in place to run your Java application using the main entry point you defined. This example might look extremely simple, and it is, but bear in mind that a common practice when building Docker images is to find a parent or existing Docker file that complies with most of your requirements and then build on top of it. 
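To illustrate that practice of building on top of a parent image, here is a hypothetical Dockerfile that serves a static site by extending the official nginx image (the image tag and paths are examples, unrelated to our Java setup):

```dockerfile
#Start from a maintained parent image that already runs a web server
FROM nginx:alpine

#Copy a local folder of static files into nginx's default web root
COPY ./static-site /usr/share/nginx/html

#nginx listens on port 80 by default
EXPOSE 80
```

Three lines are enough, because the parent image already handles installation, configuration, and startup of the server.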

Building and Running Your App

It is time to give Docker a try with our Dockerfile. We will create the Docker image using the docker build command. Make sure your current directory contains the Dockerfile we created above (the trailing “.” passes it as the build context), and provide a value for the JAR_FILE argument the Dockerfile declares. The --tag parameter is optional, but it makes it easy to identify our new image; note that image names must be lowercase:

julianrodriguez$ docker build --build-arg JAR_FILE=jar_file_path_goes_here --tag=mydockerizeapp .

The Docker image is now created and resides in your local registry. To confirm, you can once again run:

julianrodriguez$ docker image ls

Let’s run the application. To do so, the command is:

julianrodriguez$ docker run -p 4000:exposed_port_goes_here mydockerizeapp

As you can see, in order to properly run and publish your app, you need to map host port 4000 (the -p 4000 part) to the specific port where your application runs inside the container. In our Dockerfile, we exposed port 80, but this could be different for you depending on your specifics.

Once completed, your application is effectively serving and is reachable at localhost:4000 from the web browser or even via curl using the command: curl http://localhost:4000.

You can also run your application in “detached” mode, which frees up your console by allowing the container to run in the background. To do so, simply add -d to the previous command:

julianrodriguez$ docker run -d -p 4000:exposed_port_goes_here mydockerizeapp

Congratulations, your app is “dockerized” and running successfully! Let’s prove it is portable by uploading and running it outside of your local environment.
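With the container detached, a few more commands are handy for managing it. This sketch assumes the port mapping from the examples above, with the image name written in the lowercase form Docker requires:

```shell
# Run detached, then inspect and stop the background container.
if command -v docker >/dev/null 2>&1; then
  CID=$(docker run -d -p 4000:80 mydockerizeapp)
  docker container ls --filter "id=$CID"  # shows the container as "Up"
  docker logs "$CID"                      # app output goes here, not to your console
  docker stop "$CID"                      # graceful shutdown
  outcome=done
else
  outcome=skipped
fi
echo "$outcome"
```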

Deploy Docker Containers Using Amazon Web Services

Our next step is to deploy our application on a cloud-based entity. To do so, we will follow a simple, step-by-step tutorial that leverages the Amazon Elastic Container Service (AWS-ECS).

Now, how about the cost? The AWS resources used here fall under the AWS Free Tier or incur only minimal charges. Simply go ahead and set up your AWS account and install the AWS CLI so that you can run AWS commands on your command line. To do so, you can follow the installation instructions here.

Steps to Deploy your Application Using Amazon ECS

1. Store your Docker containers in the Amazon ECR repository

We need to upload the newly created Docker container to the Amazon ECR repository, which is simply a container registry that enables developers to store, manage, and deploy container images. You can easily find it by searching for “ECR” in the search field of the AWS landing page:


a. Create an ECR repository

To store the Docker container we created, we first need an ECR repository. To create one, we simply follow the wizard on the ECR landing page. The wizard requires a name, which it uses to generate a fully qualified URL to access the repository. This repository URL will be used later on when we need to provide a reference to the image we would like to store in ECS, so keep it handy.


Now that the repository is created, we can push the image to this repository so we can reference it when running the Docker deployment wizard.

To push your image, make sure your current user has permission to call ecr:GetAuthorizationToken. This will be required to push and pull any images from any Amazon ECR repository. Check the permissions for your user using the IAM service on the AWS console.
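Before pushing, it can save time to confirm which IAM identity the AWS CLI is actually using; the standard sts get-caller-identity command prints the account ID and user/role ARN behind your CLI calls (this assumes the AWS CLI is installed and configured):

```shell
# Show the account and identity that subsequent aws commands will act as.
if command -v aws >/dev/null 2>&1; then
  aws sts get-caller-identity
  checked=attempted
else
  checked="aws cli not installed"
fi
echo "$checked"
```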

b. Authenticate Docker for your Amazon ECR registry

To authenticate Docker for the ECR service, you can run the following command on your console:

julianrodriguez$ aws ecr get-login --no-include-email

The output of this command is itself a long command starting with “docker login -u AWS -p”, which is the autogenerated login command to execute (the --no-include-email flag omits the deprecated -e option, which recent Docker versions reject). Copy, paste, and execute it to log in. The output will be a success message confirming access to the ECR services via the CLI.

c. Push your image to your ECR repository

Using the “docker images” command, you can get the image's repository:tag value or its image ID. With that information, you want to label your image with a meaningful tag. The AWS-recommended format is aws_account_id.dkr.ecr.region.amazonaws.com. The specific command to tag an image based on its image ID is:

julianrodriguez$ docker tag the_docker_image_id aws_account_id.dkr.ecr.region.amazonaws.com/the-application-name

Once your image is properly tagged, you can proceed to push it by running:

julianrodriguez$ docker push aws_account_id.dkr.ecr.region.amazonaws.com/the-application-name

Now that the image has been pushed and is ready to be used in the AWS realm, we can continue with the deployment process.
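Putting steps b and c together, here is a sketch of the full tag-and-push sequence; the account ID, region, and names below are placeholders you must replace with your own values:

```shell
# Build the fully qualified ECR repository URI from its parts.
AWS_ACCOUNT_ID=123456789012      # placeholder account ID
REGION=us-east-1                 # placeholder region
APP_NAME=the-application-name    # placeholder repository name
ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${APP_NAME}"
echo "$ECR_URI"

# Then tag and push (commented out; requires Docker and a prior ECR login):
# docker tag the_docker_image_id "$ECR_URI"
# docker push "$ECR_URI"
```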

2. Create a Docker container and a task definition

The container definition establishes the basic specs of the image to deploy. Container name, image URL, memory limits, and port mappings are the basic characteristics to consider.


A task definition provides other attributes in addition to those defined at the container level and allows sharing between containers when possible. Definition name, network name, task execution role, and compatibilities, as well as task memory and task CPU, are values to set in this section.


3. Define a service

A service is defined to run the number of concurrent instances of a task definition in a given ECS cluster. It comprises settings such as the service name, the number of tasks, the security group, and the load balancer type. It is very important to map the port of the service to the port where our application is expected to be reachable. This example maps port 80 by default.
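For reference, the same service can also be defined from the CLI with aws ecs create-service. All names below are hypothetical, and in practice a Fargate service additionally requires an awsvpc network configuration with your real subnet and security-group IDs, so the command is assembled and echoed rather than executed:

```shell
# Assemble a create-service command with placeholder names.
CLUSTER=newAppCluster
SERVICE=newAppContainer-service
TASK_DEF=newAppTask:1
CMD="aws ecs create-service --cluster ${CLUSTER} --service-name ${SERVICE} --task-definition ${TASK_DEF} --desired-count 1 --launch-type FARGATE"
echo "$CMD"
```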


4. Cluster configuration

Now that the container, task, and service definitions are ready, we will create a logical computing unit to run them. The first step in the cloud deployment process is to leverage AWS Fargate to create a cluster. One of the main advantages of doing so is that the management and configuration of individual AWS EC2 instances is completely abstracted away; we just set a few configuration parameters and leave the heavy lifting and intricate work to AWS. This is ideal for early cloud-driven deployments, since we want to spin up our application quickly and reliably. In AWS Fargate, the main attributes to set are the cluster name, the VPC ID, and the subnets. Both the VPC and the subnets are created automatically unless there is an existing setup you would like to use.
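If you prefer the command line to the wizard, a cluster can be created with a single call (the cluster name is an example, and the command requires a configured AWS CLI):

```shell
# Create an ECS cluster that Fargate tasks can then target.
if command -v aws >/dev/null 2>&1; then
  aws ecs create-cluster --cluster-name newAppCluster
  attempted=yes
else
  attempted=no
fi
echo "$attempted"
```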


5. Create, launch, and review deployment resources

Now that all of our settings are done, we can simply click “Create” and let the wizard proceed with the ECS resource creation and the additional required AWS service integrations. AWS shows a status page while the resources are being created.


Once the creation is finished, we can review the cluster specs in the “Amazon ECS > Clusters” section, where we will get an overall perspective of the instances, tasks, and services running on it.


Also, if we select the “newAppContainer-service” that is currently shown as “active,” we will drill down into the service specifics like load balancing, network access, tasks, events, auto-scaling, and more.


6. Run your cloud-deployed application

Now that we have configured and deployed all the necessary resources, the time has come to test the deployment and confirm the availability of our application on the cloud cluster we just created. To do so, we will need to get a reference to the Load Balancer DNS name that was generated during the setup. This URL will serve as the single entry point for our application based on the port mapping done during the container and service configuration.

To find the URL, you need to go to the AWS EC2 service dashboard by searching on the AWS console landing page, just as you did when searching for the ECR service.

Once there, in the left column of the EC2 dashboard, you will find the “Load Balancing” section. Clicking on it populates the right pane with a list of available load balancers. Assuming your AWS account is brand new, only one load balancer will be listed. Once you select it, the bottom “Description” pane is updated with the details of the selected load balancer.
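The same DNS name can also be fetched from the command line, which is convenient for scripting. This sketch assumes a configured AWS CLI and that exactly one load balancer exists in the region:

```shell
# Query the first load balancer's DNS name via the ELBv2 API.
if command -v aws >/dev/null 2>&1; then
  DNS_NAME=$(aws elbv2 describe-load-balancers \
    --query 'LoadBalancers[0].DNSName' --output text 2>/dev/null)
fi
echo "${DNS_NAME:-unavailable}"
# Then test the deployed app: curl "http://${DNS_NAME}"
```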


Here, we will find the DNS name, which is the URL we need. Copy and paste it into a new browser tab, and you will be greeted by your application’s landing page. Yes, that’s it: your app is now in the cloud!


Conclusion

This first post in the “Apps in the Cloud” series has explored the possibilities, requirements, implications, and setup involved in taking your standalone application and deploying it in a cloud environment. Using Docker containers and AWS, we were able to create a tangible cloud foundation. This is only the beginning of a constantly evolving process whose ultimate goal is to maximize the benefits provided by cloud environments like AWS. In upcoming blog posts, I will explore more elaborate and complex services that can greatly improve our initial setup and take our distributed application to the next level. Stay tuned for more to come!
