Learning Microservices Architecture with Bluemix and Docker (Part 2)


Welcome to part 2, where we will begin splitting up the monolithic application! Links to the previous sections: Intro – Part 1.

TL;DR

If you are not interested in the manual steps involved in breaking the monolithic application down into distinct services, feel free to download the “split” branch of the repository (link). Once you have the “split” branch open, skip down to the Testing our microservices section at the end of this article. However, if you want to dig into the code and follow the process, the steps are given below!

Split into Microservices

Before we get too far ahead, let’s run the code locally. The first step is to tie your MongoLab database to the app by exporting its URI as a local environment variable. Go back to the Bluemix console and click on the app we created. You should see a MongoLab service bound to the app; click the Show Credentials dropdown arrow to reveal the Mongo URI (seen here).

(Screenshot: the MongoLab service credentials in the Bluemix console, with the Mongo URI highlighted.)

Copy the URI string from the credentials and export it locally in an environment variable named “MONGO_URI” (case sensitive). In OS X and Linux the command will look like export MONGO_URI="<your Mongo URI>".

Download all dependencies using “sudo npm install”, start the server using “node app.js”, then open up localhost:8080 in the browser. As simple as that, the store will be up and running. If you had anything in your cart earlier it should all still be there, because the Bluemix app and the local instance are both reading and writing JSON to the same Mongo database.

After playing around with the app for a while, you will notice some usage details in the running application’s logs (sample output below):

(Screenshot: sample log output showing the request counts for the cart, product, and review APIs.)

 

Look at how the cart API receives two hits and the product API gets one, while the review API gets around fifteen requests; it is clearly receiving a large amount of traffic! This was done by design to show how apps are often imbalanced when it comes to API usage. In a monolithic application we would need to scale up the entire backend if the reviews API became a bottleneck. That would be wasteful because the cart and products APIs do not need the increase. In a microservices architecture we can scale up the reviews API independently.

Another point to consider is the importance of the checkout functionality in the cart API. If customers cannot check out or add items to the cart, it would be catastrophic: we would annoy and eventually lose customers. This should also be taken into account when deciding how many instances we want and how much memory/CPU we give each service. Give your most important services more resources to decrease the risk of downtime.

Setting up our workspace

(note: we will assume the “microservices” directory is the root directory of the project)

In the root of your project create a folder called services. Under the services directory create three folders: cartAPI, productAPI, and reviewAPI. These folders will each contain one of the three microservice APIs that will support our final application. In a full-scale microservices application you would ideally have each API service live in its own Git repo, with different developers responsible for each service. For this tutorial we will just store the APIs in their own folders.

We will treat each API like an individual Express project and give each one a package.json file. All three will require the following node_modules:

  1. body-parser: for parsing JSON API payloads
  2. express: the server framework
  3. mongoose: Mongo schema definitions

Here is the package.json for the product API.
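
The original screenshot of this file does not reproduce here, so the sketch below shows roughly what it contains; the name, description, and version numbers are illustrative rather than copied from the repo.

    {
      "name": "product-api",
      "version": "1.0.0",
      "description": "Product microservice for the online store demo",
      "main": "productApi.js",
      "scripts": {
        "start": "node productApi.js"
      },
      "dependencies": {
        "body-parser": "^1.12.0",
        "express": "^4.12.0",
        "mongoose": "^4.0.0"
      }
    }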

You will probably want to modify the name, description, and start command for each respective API; it’s not mandatory, but recommended.

Let’s give each service the schemas it will need. You can find the schemas used by the original application in the microservices/models/ directory. There are two: one for Products and another for Reviews. Create a models directory in each service and copy over the necessary Mongoose models. Use the directory layout shown below if you get stuck.
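
For reference, a Mongoose model file generally looks something like the sketch below; the real field definitions live in microservices/models/product.js, so treat these fields as placeholders.

    var mongoose = require('mongoose');

    // Placeholder fields -- use whatever the original microservices/models/product.js defines.
    var productSchema = new mongoose.Schema({
      name:        String,
      price:       Number,
      description: String
    });

    module.exports = mongoose.model('Product', productSchema);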

Aside from the package.json and Mongoose schemas, you will also want the actual API JavaScript files. Create one js file in each API directory: cartApi.js, productApi.js, and reviewApi.js. We will fill them in later. For now you can run npm install (may require sudo) in all three directories to get all the dependencies. At this point our services directory should look like the layout sketched below. You can ignore the Dockerfiles; we will add those later.
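
Since the directory screenshot does not reproduce here, the services directory at this point should look roughly like this (the Dockerfiles from the original image are omitted; they come later):

    services/
    ├── cartAPI/
    │   ├── cartApi.js
    │   ├── models/
    │   ├── node_modules/
    │   └── package.json
    ├── productAPI/
    │   ├── productApi.js
    │   ├── models/
    │   ├── node_modules/
    │   └── package.json
    └── reviewAPI/
        ├── reviewApi.js
        ├── models/
        ├── node_modules/
        └── package.json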

In the next section we will begin breaking up the monolithic app’s app.js file into the services we just started creating.

Split up the monolithic app.js backend

app.js is where all the API functionality currently exists. We want to take all the APIs out and place them into their respective services. After we are done, app.js will only be used to serve up the front end, and all business logic will be handled by the services. To begin, let’s set up Express in all of our API js files. The following skeleton code will be needed by all APIs:
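
The snippet embedded in the original post does not reproduce here, so below is a minimal sketch of that skeleton, assuming Express 4 with body-parser and Mongoose; the version in the “split” branch may differ slightly.

    // Skeleton shared by all three services.
    var express    = require('express');      // server framework
    var bodyParser = require('body-parser');  // parses JSON request bodies
    var mongoose   = require('mongoose');     // Mongo schemas / data access

    var app = express();

    // Accept JSON and URL-encoded payloads on every route.
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: true }));

    // Connect to the same MongoLab database the monolith uses.
    // MONGO_URI is the environment variable we exported earlier.
    mongoose.connect(process.env.MONGO_URI);

    // This service's endpoints will be registered on an Express router...
    var router = express.Router();

    // ...which is mounted under the /api path.
    app.use('/api', router);

    // Default to port 8080 locally (PORT can override it).
    var port = process.env.PORT || 8080;
    app.listen(port);
    console.log('Service listening on port ' + port);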

Paste that skeleton code into all three service js files (cartApi.js, productApi.js, and reviewApi.js). You can read through the comments to get a better understanding of what each code block does.

Moving on, we need to load the correct Mongoose schemas that we copied over into each service’s JavaScript file. cartApi.js and productApi.js will need the product.js schema; link it using code like this:
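
Assuming each model file exports its compiled Mongoose model (as in the sketch above), the require looks like this:

    // cartApi.js and productApi.js -- load the Product model copied into ./models.
    var Product = require('./models/product');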

The reviewApi.js file will only need the review.js schema.

(note: make sure you copied the proper schemas into each service’s models directory.)

We will now take all of the API code (except for the faker API) out of app.js and move it into the respective files. This should be fairly straightforward, but here is an example productApi.js in case you need help.
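
The original post embedded the finished file here; the sketch below shows the general shape, but the route paths and queries are illustrative -- the “split” branch has the real handlers.

    // productApi.js -- product microservice (illustrative sketch).
    var express    = require('express');
    var bodyParser = require('body-parser');
    var mongoose   = require('mongoose');

    var Product = require('./models/product');

    var app = express();
    app.use(bodyParser.json());
    app.use(bodyParser.urlencoded({ extended: true }));

    mongoose.connect(process.env.MONGO_URI);

    var router = express.Router();

    // GET /api/products -- return every product in the catalog.
    router.get('/products', function (req, res) {
      Product.find({}, function (err, products) {
        if (err) { return res.status(500).send(err); }
        res.json(products);
      });
    });

    // GET /api/products/:id -- return a single product by its Mongo _id.
    router.get('/products/:id', function (req, res) {
      Product.findById(req.params.id, function (err, product) {
        if (err) { return res.status(500).send(err); }
        res.json(product);
      });
    });

    app.use('/api', router);

    var port = process.env.PORT || 8080;
    app.listen(port);
    console.log('Product API listening on port ' + port);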

Follow this pattern for the other two API js files. If you get really stuck, look at the “split” branch in the repo; it has the full solution code.

Once all the API code is placed into the respective services’ JavaScript files, you will be able to test the APIs.

Testing our microservices

This tutorial uses Postman for API testing, but you can use other tools as well. See the “API technical details” section above for a breakdown of all the endpoints.

Testing cartApi.js

cd into the cartAPI directory and run your completed microservice using the “node <service name>Api.js” command. The API should start up immediately on http://localhost:8080/.

(If you receive an error on startup, make sure you have stopped any other node servers running on port 8080.)

All APIs are accessed through the /api/ URL path, so to test the cart service we can hit the http://localhost:8080/api/cart endpoint. In Postman the response will look like this:

(Screenshot: the Postman response for GET http://localhost:8080/api/cart.)
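
Your exact response depends on what is in your cart and on the real product schema, but the body is a JSON array of the items currently in the cart, along the lines of:

    [
      {
        "name": "Sample product",
        "price": 19.99,
        "description": "Placeholder item -- your data will differ"
      }
    ]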

 

Everything looks good for the /cart endpoint (assuming you had fake data in the DB and items in your cart). Next, try testing the following endpoints:

  1. /cart/count – GET – returns number of items in the cart.
  2. /checkout/verifyPayment – PUT – given a year, month, and cardNumber for a payment, returns “status”: “verified” if the card is not expired.

 

To send a JSON payload for a PUT or POST in Postman, use the raw option to hard-code your JSON into the request body (for example, the verifyPayment endpoint’s year and month):

(Screenshot: the raw JSON body entered in Postman for the PUT request.)
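
For example, a raw body like the one below works for the verifyPayment call; the field names come from the endpoint description above and the values are placeholders.

    {
      "year": 2017,
      "month": 12,
      "cardNumber": "4111111111111111"
    }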

You can use these methods to test all the APIs. Be aware that you can only run one node server on port 8080 at a time: you can test the APIs one at a time, or you can change the default port for each service and run them all at once. Once all the services are working, let’s start throwing them into containers!

If you do decide to change the ports, remember to change them back when we start packaging the services into Docker containers.
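
If your skeleton reads the port from process.env.PORT as in the sketch above, you can run each service on its own port without touching the code, for example “PORT=8081 node cartApi.js” (the port number here is just an example).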

The next section will explain how to package the services into Docker containers and how to link all the services to the front end. Link to Part 3.

Miguel Clement

Miguel is a Computer Science Senior at Texas A&M University. He joined the jStart Emerging Technology Team in January 2015 and has been exploring the cutting edge ever since.

