Learning Microservices Architecture with Bluemix and Docker (Part 3)



Welcome to Part 3, where we will begin using Docker to run our services. Links to the previous sections: Intro – Part 1, Breaking Apart the Monolith – Part 2.

The following sections will take you through the process of packaging each service into its own container and running them all locally with Docker.

Dockerization of services

We will be using Dockerfiles to package our service APIs into containers. You do not need to be familiar with Docker or Dockerfiles; just make sure you have Docker properly installed and I'll guide you through the rest. In each service directory you will now need to create the Dockerfile that I mentioned earlier. Make sure to name each one "Dockerfile" and place it in the location depicted in the image from before. Each Dockerfile will be very similar.
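A minimal Dockerfile along these lines might look like the sketch below. The base image tag and the /src path are assumptions, and `<SERVICE NAME>` is a placeholder you will fill in shortly; the file is laid out so its line numbers match the explanation that follows.

```dockerfile
FROM node:0.12

# copy every file in the current directory into the container
COPY . /src

# install the service's dependencies inside the container
RUN cd /src; npm install

# connection string for the shared MongoDB instance
ENV MONGO_URI mongodb://IbmCloud_…<rest of the URI>

EXPOSE 8080
CMD ["node", "/src/<SERVICE NAME>"]
```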

Line-by-line explanation of the Dockerfile commands

  • Line 1: Downloads the node.js base image and uses it as the starting container.
  • Line 4: Copies every file inside the current directory into the container's /src directory.
  • Line 7: Moves into the container's /src directory and runs npm install. This is somewhat redundant, since node_modules is copied into /src anyway, but I think it's worth learning.
  • Line 10: Sets the MONGO_URI to be the same as the one you used locally and on Bluemix.
  • Line 12: Exposes the container's port 8080 so the API can reach the outside world.
  • Line 13: Sets the command that will run when the container starts up. In this case we run node on the API's js file, which in turn starts the API's server.

Changes you will need to make

Change the name of the file that will run on start. This file is specified in this line:
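That is the final CMD instruction in the Dockerfile template (assuming the /src layout from above):

```dockerfile
CMD ["node", "/src/<SERVICE NAME>"]
```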

Replace <SERVICE NAME> with the correct service name (cartApi.js, productApi.js, or reviewApi.js). Next, we need to export the correct MONGO_URI on line 10: replace "mongodb://IbmCloud_…<rest of the URI>" with the actual Mongo URI (the one you exported earlier). Use the template from above for all three services, making the respective changes to the CMD and the MONGO_URI. That's it! All we need to do now is build the images.

Building images from the Dockerfiles

To create the image for the productAPI service, move into the service directory and use the Docker build command. The syntax will look like this:
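A sketch of the command, assuming an image name of the form username/servicename (`<username>` here is a placeholder for whatever name you choose):

```shell
docker build -t <username>/product .
```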

(Note: the period at the end tells docker build where it can find the Dockerfile; "." means the current directory.)

The username is usually the registry name you use for IBM Containers, but for this part you can really use any username you'd like. The service name will be one of the following: cart, product, or review, depending on which service you are building. The output for my build looks like this; yours will be longer if you've never built a node-based image before.

(screenshot: output of the docker build command)

Repeat these steps for the other two services, changing the image name each time. Once you have all your images built, run docker images to verify that all the images have been created.

Running our images locally with Docker

Run your images using:
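A sketch of the run commands, one per service, using the ports described below (`<username>` is a placeholder for whatever name you used when building):

```shell
docker run -d -p 49160:8080 <username>/cart
docker run -d -p 49161:8080 <username>/product
docker run -d -p 49162:8080 <username>/review
```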

Notice that the -p (publish) flag maps each container's port 8080 to the boot2docker VM's ports 49160–49162 (the port numbers were chosen arbitrarily). Each service must run on a separate port.

If all the containers show up in the output of docker ps, then we can access the APIs through the boot2docker VM's IP, or localhost if running on Linux (on Mac and Windows, all local containers run inside the boot2docker VM). To get the boot2docker VM's IP, run the following command:
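```shell
boot2docker ip
```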

It should return an IP address. Take that address and hit the endpoint in Postman like we did before. Don't forget to add the port to the end of the IP.


You can test all the APIs by choosing their ports and hitting the endpoints. They can all run at the same time now since you have them bound to different boot2docker ports.

Tying it all together

Up until now we’ve been working entirely on the back end. We haven’t attached the front end to the service containers. To do this we will need to make some changes to index.html and app.js.


You really could run the application without changing app.js, but to show that we are actually using the containers, go ahead and delete the following sections:

  1. cart API
  2. review API
  3. product API
  4. OPTIONAL: leave the faker API alone if you later want to reset the DB data.


Find the index.html file under the public directory. This is where we will be setting the front-end API URLs. Find the script tag in the header where the URLs are set. If you are running locally, the URLs should be empty strings. To use the Container Service, we'll need to set them here:

(screenshot: the index.html script tag where the API URLs are set)


For the containers I ran locally, the proper configuration would look like this:
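A sketch of that script tag; the variable names are hypothetical (use whatever names your index.html actually declares), and 192.168.59.103 is just the common boot2docker default, which you must replace with your own VM's IP:

```html
<script>
  // hypothetical variable names; substitute your own boot2docker IP
  var cartApiUrl    = "http://192.168.59.103:49160";
  var productApiUrl = "http://192.168.59.103:49161";
  var reviewApiUrl  = "http://192.168.59.103:49162";
</script>
```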

Remember, I got the boot2docker VM's IP by running the "boot2docker ip" command. The above IP addresses will need to be changed to your specific boot2docker IP.

Note: make sure you prepend http:// to the IP addresses, otherwise you will run into a bunch of cross-domain request issues.

Dockerize the Front End Client

To deploy the express.js front end in a container, you can save this Dockerfile at the root and build it to create the front-end store-client image.
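A sketch of that Dockerfile, mirroring the service Dockerfiles from earlier (the base image tag and /src path are the same assumptions as before; app.js is the express entry point mentioned above):

```dockerfile
FROM node:0.12

# copy the front-end client source into the container
COPY . /src
RUN cd /src; npm install

# same Mongo URI used by the services
ENV MONGO_URI mongodb://IbmCloud_…<rest of the URI>

EXPOSE 8080
CMD ["node", "/src/app.js"]
```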

By now you should have a good understanding of what is going on here. The only thing you will need to configure is the Mongo URI; fill in the DB URI we've been using and you are ready to build. Build this one from the microservices/ root using:
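Something like the following, keeping the same placeholder `<username>` naming as the service images:

```shell
docker build -t <username>/store-client .
```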

Get it running with “docker run”:
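A sketch of the run command, mapping the client to port 49163 as described below:

```shell
docker run -d -p 49163:8080 <username>/store-client
```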

Now hit the boot2docker IP at port 49163! Everything should be working and the data should persist. Play around with it and make sure all the API services are up and running. Looking at your docker ps output, you will see four containers running on separate ports. We have successfully split a monolithic application into three microservices and one front-end client container!

Thank you for reading; please leave comments.


Miguel Clement

Miguel is a Computer Science Senior at Texas A&M University. He joined the jStart Emerging Technology Team in January 2015 and has been exploring the cutting edge ever since.

