Monolith to Microservices: The Cloud Elements Experience

By Chase Doelling in Developer | Posted Jul 5, 2017

When growing an enterprise-scale system, most teams will at some point consider transitioning to a microservices architecture. Microservices can bring increased agility, productivity, and confidence to teams that need to scale a system as usage grows. There are also trade-offs and fallacies to consider.

This happened to us at Cloud Elements, and a little over a year ago we decided to start transitioning parts of our platform to microservices. We continue to carve independent functions out of our monolithic platform, internally named Soba (because soba noodles are delicious), into microservices. We are taking a fairly common approach: extract compartmentalized pieces of the platform to run as microservices, while the current core monolith keeps running as-is. The end goal is for Soba to support our core platform API and Elements, while every other function scales with our customers as a microservice.


Functions transitioning out of Soba:
  • Polling framework
  • Script execution (named Grover)
  • API gateway

Fun fact: President Grover Cleveland was the only President to have personally executed someone, in his earlier career as a sheriff.


“The main idea is to slowly replace functionality in the system with discrete microservices while minimizing the changes that must be added to the system itself to support this transition. This is important in order to reduce the cost of maintaining the system and minimize the impact of the migration.”

- Vivek Juneja, The New Stack
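In practice, that slow replacement often starts at the edge: route requests for extracted functions to the new services and send everything else to the monolith. A minimal Node.js sketch of that strangler-style routing (the paths, ports, and service layout here are hypothetical, not our actual topology):

```js
// Hypothetical edge router: requests for extracted functions (polling,
// script execution) are forwarded to the new microservices, while
// everything else still goes to the Soba monolith.
const http = require('http');

const ROUTES = [
  { prefix: '/polling', target: { host: 'localhost', port: 3001 } },
  { prefix: '/scripts', target: { host: 'localhost', port: 3002 } },
];
const MONOLITH = { host: 'localhost', port: 3000 }; // Soba

http.createServer((req, res) => {
  const route = ROUTES.find(r => req.url.startsWith(r.prefix));
  const target = route ? route.target : MONOLITH;

  // Forward the request unchanged and stream the response back.
  const upstream = http.request(
    { ...target, path: req.url, method: req.method, headers: req.headers },
    upstreamRes => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstream.on('error', () => {
    res.writeHead(502);
    res.end('Bad gateway');
  });
  req.pipe(upstream);
}).listen(8080);
```

As more functions move out, new prefixes get added to the routing table until the monolith handles only its core API.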

 

Some themes emerge

As the services have scaled, some themes have emerged that inform how we should treat the next service that hits production.

  1. Aggregated logging

“A replicated log is often about auditing, or recovery: having a central point of truth for decisions. Sometimes a replicated log is about building a pipeline with fan-in (aggregating data), or fan-out (broadcasting data), but always building a system where data flows in one direction.” - Tef @ Programming Is Terrible

To provide the best support for our customers and our team, we need a complete snapshot of what's happening across the platform: each service behaves independently, but its logs all flow in one direction to a central point.
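As a concrete sketch of that one-direction flow: each service writes structured JSON lines to stdout with its name attached, and a collector (Fluentd or Logstash, for example) aggregates them into that single point of truth. The field names below are illustrative, not a real schema:

```js
// Minimal structured logger: JSON lines to stdout, one per event,
// tagged with the service name so the aggregator can fan-in by service.
const os = require('os');

function createLogger(service) {
  return (level, msg, fields = {}) => {
    process.stdout.write(JSON.stringify({
      ts: new Date().toISOString(),
      service,              // which microservice emitted this line
      host: os.hostname(),
      level,
      msg,
      ...fields,
    }) + '\n');
  };
}

// Usage inside a hypothetical polling microservice:
const log = createLogger('polling');
log('info', 'poll cycle complete', { elementId: 'abc123', events: 42 });
```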

  2. Containers for aggregated sandboxing

We use containers for serverless functions and customer-generated functions so that they can scale independently of each other. Containers also give us more intelligent sandboxing: there is no cross-talk between customer functions, and we avoid the ‘noisy neighbor’ problem.
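The container boundary is what actually isolates customers from each other; inside a script-execution service like Grover, something like Node's built-in vm module can additionally constrain what a customer script sees and how long it runs. A rough sketch (the function names are ours for illustration, not Grover's actual API):

```js
// Run a customer-supplied script in a separate context with a hard timeout.
// The surrounding container is the real isolation boundary; vm just limits
// what the script can reach in-process and kills runaway synchronous code.
const vm = require('vm');

function runCustomerScript(code, input) {
  const sandbox = { input, output: null };  // only these names are visible
  vm.createContext(sandbox);
  try {
    vm.runInContext(code, sandbox, { timeout: 5000 });
    return { ok: true, output: sandbox.output };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

console.log(runCustomerScript('output = input.a + input.b;', { a: 2, b: 3 }));
// -> { ok: true, output: 5 }
```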

  3. API Gateways

As each of these services scales, we have also extended the API gateway to keep large, DDoS-like payloads from hitting the system, improving security while allowing services to scale in parallel.
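As an illustration of the kind of gateway-level protections we mean, here is a minimal Express sketch that caps payload size and applies a naive per-IP rate limit. The limits are placeholders, and a production gateway would keep its counters in a shared store such as Redis rather than in process memory:

```js
// Gateway-level protections: reject oversized bodies early and
// rate-limit each client IP over a fixed window.
const express = require('express');
const app = express();

app.use(express.json({ limit: '100kb' })); // oversized payloads get a 413

const WINDOW_MS = 60 * 1000;
const MAX_REQUESTS = 120;
const hits = new Map(); // ip -> { count, windowStart }

app.use((req, res, next) => {
  const now = Date.now();
  const entry = hits.get(req.ip) || { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(req.ip, entry);
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).send('Too many requests');
  }
  next();
});

app.post('/api/*', (req, res) => res.json({ received: true }));
app.listen(8080);
```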

 

Benefits we are seeing

We are seeing some performance gains, particularly lower latency for customer-related functions. While it is still too early to share exact numbers, the benefits go beyond performance metrics: operations are more predictable and easier to support, since Soba now performs fewer, more focused operations.

 

Where we are now

The current challenge is how to develop and run these disparate systems, either locally or as remote services, while still keeping them in sync and allowing for dependencies between them. There does not yet seem to be an industry standard for how to best approach this, so it has taken some trial and error to find a process that works for us. We are a little different in that our entire platform is 100% RESTful-API based.

“Now instead of running one app to develop a feature, I need to have access to 5 different, coordinating services!” - Evan Phoenix @ phx.io

To work through this, we are evaluating different approaches and tools that let us better manage our collection of microservices. We are looking at two paths:

  1. Running all of our microservices locally in containers
  2. Running a few services locally and the rest remotely in the cloud (see the sketch after this list for how a service can resolve its dependencies either way)
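For the second path, one simple pattern (sketched below with hypothetical environment variables and URLs, not our actual configuration) is to default every dependency to its remote URL and let a developer override any one of them with a local container:

```js
// Resolve each peer service's base URL: remote by default, local when
// overridden, so a developer only runs the services they are changing.
const DEFAULTS = {
  polling: 'https://polling.internal.example.com',
  scripts: 'https://scripts.internal.example.com',
  gateway: 'https://gateway.internal.example.com',
};

function serviceUrl(name) {
  // e.g. POLLING_URL=http://localhost:3001 swaps in a local container
  // for that one dependency; everything else stays in the cloud.
  return process.env[`${name.toUpperCase()}_URL`] || DEFAULTS[name];
}

console.log(serviceUrl('polling')); // local if overridden, remote otherwise
```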

Tools we are playing with for running services locally:

  • Minikube - a local single-node Kubernetes cluster inside a VM.
  • Docker - contain all the whales! It solves the “works on my machine” problem when collaborating remotely.

Tools we are playing with for running remotely:

  • Now - a lightweight CLI for deploying Node.js or Docker applications. A one-line deploy with the command now spins up an instance on its own URL (custom URLs for paid accounts).

We also use a number of other tools and npm packages; all of our microservices are Node.js.

 
Every microservices architecture is different

Because different customer needs make it different. When looking at serverless vendors, we found customers who run functions longer than the 5-minute Lambda execution limit. To get around this while still providing the best customer experience we could, we ended up building our own function-as-a-service platform that scales as needed, for as long as needed (a sketch of the core idea follows below). In conclusion, there is not yet a typical process for deploying microservices. For our team, this has meant testing different setup options until something feels right for our customers, team, and processes.
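The core idea behind such a runner is small: give each invocation its own worker process and make the timeout configurable rather than provider-imposed. A rough Node.js sketch (file names and options are illustrative, not our actual platform API):

```js
// Invoke a function in its own worker process. The timeout is optional,
// so a customer function can run for 20 minutes if it needs to.
const { fork } = require('child_process');

function invoke(functionFile, payload, { timeoutMs = 0 } = {}) {
  return new Promise((resolve, reject) => {
    const worker = fork(functionFile); // isolated process per invocation
    if (timeoutMs > 0) {
      setTimeout(() => {
        worker.kill();
        reject(new Error('timed out'));
      }, timeoutMs).unref();
    }
    worker.once('message', result => { worker.kill(); resolve(result); });
    worker.once('error', reject);
    worker.send(payload);
  });
}

// The worker file (customer-fn.js) would receive the payload and reply:
//   process.on('message', payload => { /* do work */ process.send(result); });
// invoke('./customer-fn.js', { records: [] }).then(console.log);
```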

More resources on microservices: