The global IT industry evolves rapidly in size, shape, and form, and so do the software development practices behind it. Like any evolutionary process, this one strives to gain new capabilities without losing efficiency. To keep the progress going, the traditional ways of doing things have to be replaced.
For IT development, this means there is a point along the journey of software enhancement where we can no longer pile structure upon structure of ever-increasing complexity without sacrificing performance.
Historically, that point fell around 2011-2012, when software experts at a prominent workshop near Venice coined the term Microservices to define the new architectural style they had been exploring at the time. Dubbed fine-grained SOA (service-oriented architecture, where app components connect via a network), it wasn't an entirely new approach to product design, but rather a refined way of building service-oriented applications.
Strictly speaking, microservices divide the bulk of a product’s functionality into independent chunks of software, while preserving the cohesiveness of the system.
Here’s a general idea of the architectural difference when it comes to comparing microservices vs monolithic software:
Microservice vs Monolithic: Which software architecture is best?
With just over half of enterprises out there adopting the loosely coupled services approach, it's a tough question to crack.
The short answer is – well, it depends.
Microservices are much like government decentralization, which gives power and responsibility to the regions while maintaining essential relations to keep the state solid. The opposite of that is centralized governance – where the decision-making is concentrated.
Now, the choice of a suitable model is dictated by your needs and setup.
A small project will hardly see the advantages of using microservices, just like a small state does not need decentralization. Bigger and more complex projects, on the other hand, may very well benefit from a more advanced design approach.
MintyMint has built several microservice-based products, and we consider it a convenient and productive software model. One of the bright examples is 4friends – a crowdfunding platform for generating recurring donations.
That said, it is not all that simple when you dig deeper. There are many factors to consider when comparing microservices and monolithic architecture.
Comparing Microservices and Monolithic software architecture is not an easy task. We have to remain scientifically objective, after all.
For that reason, a point system seems just right.
1. Performance
When it comes to the inherent performance of an application architecture, there are two key indicators – network latency and data throughput. Latency is the amount of time data takes to travel between two destinations.
Here’s how it works:
To pass information, bytes are converted into an electromagnetic signal. It travels via wires or over the air and is reassembled back into bytes by the receiving party. We can cut down the encoding and decoding time, but since the signal takes time to travel, data transfer will always carry a slight delay. It is a natural consequence of the basic laws of physics.
In this regard, a localized, single-process system is superior to a network of interconnected services calling each other, often across long physical distances. While the latency of a single microservice call is minuscule (around 25 ms), the more calls there are, the higher the total delay.
There are, of course, solutions that minimize this gap, like running all calls in parallel using the so-called fan-out pattern. In that case, the relative difference shrinks as the number of calls grows. And still, Monolithics turn out slightly quicker every time.
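As a rough sketch of why the fan-out pattern helps, here is a small simulation in Python; the latency value is illustrative, and the service names are hypothetical:

```python
import asyncio

# Hypothetical per-call network latency, roughly in line with the
# ~25 ms figure above (values here are illustrative, not measured).
CALL_LATENCY = 0.025  # seconds

async def call_service(name: str) -> str:
    """Simulate one microservice call that takes CALL_LATENCY to answer."""
    await asyncio.sleep(CALL_LATENCY)
    return f"{name}: ok"

async def sequential(names):
    # Calling one service after another: total delay grows linearly,
    # N calls -> roughly N * CALL_LATENCY.
    return [await call_service(n) for n in names]

async def fan_out(names):
    # Fan-out pattern: all calls run in parallel, so the total delay
    # stays close to a single call's latency regardless of N.
    return await asyncio.gather(*(call_service(n) for n in names))

results = asyncio.run(fan_out(["auth", "billing", "catalog"]))
```

Timing both coroutines shows the sequential version's delay growing with the number of calls, while the fan-out version stays near one call's latency.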
The same is true for absolute data throughput (the amount of data processed over time).
A close call in the first round, but the point still goes to the Monolithic architecture.
2. Resource usage & scalability
Now that we've touched on performance, let's examine resource usage.
This is a tricky one.
At first glance, microservice calls use more resources than monolithic ones when doing the same amount of work.
However, since microservices can allocate resources as needed, they use them a lot more intelligently, decreasing memory and CPU load. And the more instances are running, the greater this difference grows in favor of loosely coupled services.
Monolithic software can come out ahead in individual cases (when a call transfers large amounts of data) but falls behind in all other scenarios.
The same principle applies when you need to upgrade computing capabilities as requirements grow. By managing resources more efficiently, decentralized software easily scales up and down, adding or removing cloud computing servers as needed.
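The resource argument boils down to simple arithmetic. A hedged sketch, with made-up memory footprints for hypothetical services:

```python
# Illustrative, invented footprints (MB of RAM) for one copy of each part.
SERVICES = {"auth": 200, "billing": 300, "search": 500}
MONOLITH = sum(SERVICES.values())  # the same code shipped as a single unit

def monolith_cost(replicas: int) -> int:
    # Scaling a monolith means replicating everything, hot or not.
    return MONOLITH * replicas

def microservice_cost(replicas_per_service: dict) -> int:
    # Microservices scale only the parts that are actually under load.
    return sum(SERVICES[name] * n for name, n in replicas_per_service.items())

# Say only "search" is under heavy load and needs 4 copies:
mono = monolith_cost(4)
micro = microservice_cost({"auth": 1, "billing": 1, "search": 4})
```

With these numbers, the monolith needs 4000 MB to handle the load, while the microservice layout gets by with 2500 MB, since the cold services stay at one replica each.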
Clearly, a win for Microservices.
3. Development complexity
Now, let's talk about the complexity of the development process.
While the good old monolithic apps call for broader skill sets from individual developers, microservices projects can be split into smaller tasks shared among highly specialized devs.
Here’s an illustration to help you understand why:
That said, the overall amount of work is often considerably greater with Microservices.
Unlike single-codebase projects, assembling multiple modules may involve several repositories, frameworks, and even different programming languages.
Data synchronization also adds to the complexity of running distributed software, as opposed to its locally contained rival. Once again, there are tools that tackle these issues. Nevertheless, a monolithic architecture is innately clearer and more transparent.
Another point in favor of Monolithics.
Are you still there? Great!
4. Deployment & reliability
One of the main reasons companies prefer microservices is the deployment flexibility the approach provides.
Compared to the bulky structure of monolithic software, its counterpart is simple and flexible enough to ship updates as frequently as desired. In fact, you don't have to roll out the entire system after changing some of the functionality. All you need to do is redeploy that particular service.
What's more, modifying a microservice does not have to affect the services that depend on it. Therefore, a program malfunction won't threaten the operation of the entire system, whereas even a minor code error can stall the entirety of software built with a monolithic approach.
This boosts the software’s reliability, eliminating a whole bunch of critical operational issues.
Something like having more engines on a plane…
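In code, this fault isolation often looks like a simple degrade-gracefully pattern. A minimal sketch with hypothetical services:

```python
def recommendations_service():
    # One service misbehaves...
    raise RuntimeError("recommendations are down")

def articles_service():
    return ["Intro to microservices", "Scaling 101"]

def render_page():
    """Assemble the page, degrading gracefully if one service fails."""
    page = {"articles": articles_service()}
    try:
        page["recommendations"] = recommendations_service()
    except Exception:
        # The failure is contained: the page ships without this one
        # section instead of the whole application going down.
        page["recommendations"] = []
    return page

page = render_page()
```

In a monolith, the same uncaught error inside shared code could take the whole process down; here it costs one page section.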
In addition, microservices are a lot easier to test. The limited number of features dedicated to each service substantially decreases the number of dependencies involved, which makes tests much simpler to write and run. As a result, you can release the product a lot earlier.
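For example, a service whose only dependency is injected can be tested entirely in isolation, with the real dependency swapped for a stub (the pricing logic below is invented for illustration):

```python
def make_price_service(tax_rate_lookup):
    """A tiny 'pricing' microservice; its only dependency is injected."""
    def price_with_tax(amount: float, country: str) -> float:
        return round(amount * (1 + tax_rate_lookup(country)), 2)
    return price_with_tax

# In a test, the real tax service is replaced by a stub, so the pricing
# logic can be verified with no network and no other services running.
def fake_tax_lookup(country: str) -> float:
    return {"DE": 0.19, "US": 0.0}.get(country, 0.1)

price = make_price_service(fake_tax_lookup)
```

Each service's test suite stays small because it only has to cover that service's own behavior, not the whole system.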
In this one, Microservices come out ahead.
So far the score is 2 – 2.
5. Technological flexibility
This is where things get interesting.
There are countless development technologies on the market. Some are quick, some are easier to build with. Some are better for billing, others suit data processing or offer stronger security… You name it.
Microservices empower you to add all of it to the arsenal, taking the best from each technology.
It’s like having all of the superpowers at one’s disposal.
This way, a piece of software can be fast, robust, capable, and secure all at the same time. It's no wonder the method is so popular among architects of complex IT projects like Netflix, Medium, and Uber.
This will, of course, require hiring a whole bunch of specialists to implement, as mentioned earlier. But hey, that development complexity point is already granted to Monolithics, so we can’t complain.
Another win for microservices.
6. Team communication
Finally, team communication plays a key part in the process of IT product development, and it can be affected by software architecture choice.
Here’s the thing: by dividing the software into smaller chunks, Microservices not only distribute the tasks but also the teams, decreasing the number of individual communication channels between devs.
This goes in line with Amazon's well-known "two-pizza rule", which states that a team is too big and inefficient if it can't be fed with two pizzas.
You decide what’s right.
I sure wouldn’t be arguing with Amazon’s expertise in project management.
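The arithmetic behind the rule is easy to check: a team of n people has n(n-1)/2 potential pairwise communication channels, so splitting one large team into smaller ones cuts that count sharply. A quick sketch:

```python
def channels(team_size: int) -> int:
    # Every pair of developers is a potential communication channel:
    # n * (n - 1) / 2 pairs for a team of n people.
    return team_size * (team_size - 1) // 2

one_big_team = channels(12)        # one 12-person team
two_pizza_teams = 2 * channels(6)  # the same people as two 6-person teams
```

Twelve people on one team means 66 channels; the same twelve split into two teams of six means only 30, less than half.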
So, the final round goes to the Microservices architecture, too, and the final score is: Microservices – 4, Monolithics – 2.
Monolithics, it was a fair battle…
Part 2: How do microservices work?
While it did seem like a logical conclusion, it wouldn’t be the ULTIMATE software architecture guide if we didn’t dig deeper into the subject.
So, let’s move on.
Now that we’ve explored how microservices are different from the traditional monolithic software, let’s examine the technology behind the revolutionary architecture.
Just like monolithic apps, modular software can be built with a wide range of coding languages and frameworks. Therefore, most of the rules for choosing a tech stack apply here as well. That being the case, a microservices tech stack is effectively larger and much more versatile than that of traditional software.
Loosely-coupled apps are very complex structure-wise. So, many aspects of the system’s cohesiveness have to be thought through before jumping into the whirlpool of the dev process.
In our case, it is worthwhile to go over all of it gradually – one functionality at a time. So, let’s jump in!
Hosting
First of all, any software has to run somewhere.
There are three main hosting options to consider for microservices:
- Local server – a traditional enterprise computing model. Companies maintain equipment and software in a confined data center, having direct control over its operation.
- Public cloud – a rather modern approach. Here, shared computing resources are provided over the internet and are managed on the side of the cloud provider. We’ve already written about on-demand software recently.
- Private cloud – offers opportunities similar to the public cloud. In this case, though, companies own and manage remote server capacity in-house (for security or compliance reasons, mostly).
It should be noted that there are also hybrid cloud-hosting solutions, but that is a topic for another blog post…
In most cases, public cloud hardware is the go-to choice for running microservices. It offers virtually unlimited processing capabilities on rather flexible terms.
And while there is an array of remote infrastructure providers, the most popular among them are:
- AWS (Amazon Web Services)
- Microsoft Azure
- Google Cloud Platform
- Oracle Cloud
VMs & container management
Now, there are two principal ways of using cloud resources – virtual machines and containers (each containing individual functionality).
Both use remote hardware to perform tasks. VMs, however, emulate entire machines, each with its own operating system, whereas containers share the host OS and therefore don't need to run all that common functionality separately.
This saves a ton of resources: containers launch roughly ten times faster and use far less RAM and CPU. With less overhead and near-zero weight, they make a much more suitable environment for complex applications.
However, while it’s very convenient to have individual containers for each of the services, it is another challenge to successfully manage it all. Crucial tasks like automation of deployment, scaling, and networking, add up to the complexity of running loosely coupled software.
This is where container orchestration tools come in handy, effectively tackling these kinds of issues.
In this regard, the most popular choices on the market are:
- Kubernetes
- Docker Swarm
- Other solutions from major cloud providers like Amazon, Google, and Microsoft.
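At their core, these orchestration tools run a control loop: compare the desired state with the actual state and act on the difference. A toy single-pass version of that idea, with invented service names:

```python
def reconcile(desired: dict, running: dict) -> list:
    """One pass of an orchestrator-style control loop: compare desired
    replica counts with what is running and emit corrective actions."""
    actions = []
    for service, want in desired.items():
        have = running.get(service, 0)
        if want > have:
            actions.append(("start", service, want - have))
        elif want < have:
            actions.append(("stop", service, have - want))
    return actions

# We want 3 "web" and 1 "worker", but have 1 "web" and 2 "worker":
actions = reconcile({"web": 3, "worker": 1}, {"web": 1, "worker": 2})
```

A real orchestrator runs this loop continuously, which is also how it handles crashes: a dead container simply shows up as a deficit on the next pass.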
Service mesh
We have already learned that microservices allow using different technologies for individual software components.
On one hand, this gives the flexibility to assign the best-fitting technology for tackling different tasks within the system. On the other, however, it requires establishing effective interaction between those app components.
This is exactly what a service mesh is there for.
A dedicated infrastructure layer, it enables the services to interact with each other via local “sidecar” proxies instead of calling each other directly over the network. In essence, it is an interpreter between services that often “speak” different programming languages.
Service mesh facilitates horizontal (service to service) communication, as opposed to the API gateways that control vertical (client to server) networking. It is also different from container orchestration tools, which are responsible for resource management only.
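To make the sidecar idea concrete, here is a toy sketch in which a service calls its local "sidecar" object, and the sidecar transparently retries a flaky network call. Real service meshes do far more (mTLS, routing, telemetry); everything here is invented for illustration:

```python
class Sidecar:
    """A toy stand-in for a service-mesh sidecar proxy: the service talks
    to its local sidecar, which handles cross-cutting networking concerns
    (here, just retries) on the way to the remote service."""

    def __init__(self, transport, retries: int = 2):
        self._transport = transport  # the actual network call
        self._retries = retries

    def call(self, request: str) -> str:
        last_error = None
        for _ in range(self._retries + 1):
            try:
                return self._transport(request)
            except ConnectionError as err:
                last_error = err  # retry transparently; the caller never sees it
        raise last_error

# A flaky "remote service" that fails once, then succeeds:
attempts = {"n": 0}
def flaky_transport(request: str) -> str:
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ConnectionError("network blip")
    return f"handled: {request}"

reply = Sidecar(flaky_transport).call("GET /billing")
```

The calling service's code contains no retry logic at all; that concern lives entirely in the sidecar, which is exactly the division of labor a mesh provides.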
Some of the widely applied service mesh solutions include:
- Istio
- Linkerd
- Consul
- AWS App Mesh
API gateways
When designing modular apps, it is crucial to determine how program components will communicate with the system.
Typically, this task is executed via APIs.
API stands for Application Programming Interface. It enables communication between two systems (for example, your laptop and an app) and determines the data transfer protocol. Think of it as a moderator of the client-to-service conversation, one that ensures the message "gets through".
APIs operate via requests and responses: when a system receives a request, it returns a response. The destination of that response is an endpoint – essentially, one end of a communication channel between a server and an external device.
Now, what comes down to a single communication channel in a monolithic app may turn into an array of them in microservices. Splitting software into multiple pieces means that a single client request may require separate responses from several services, resulting in multiple endpoints. An API gateway sorts it all out by providing a single point of entry into the system…
Ok, I know that just spilled over the sane limit of tech terms per paragraph.
Let’s break it down.
Imagine a typical blog page like this one. It contains a text field, a list of recommended articles, a comment section, a login form, a sharing functionality, etc.
In microservices, a separate module owns each of the described components’ data sets. So, when you open up that page you actually communicate with a set of micro-apps.
If these calls were direct, your browser would have to send separate requests to all of those services (each with its own web address) in order to assemble the page. For a number of reasons that exceed this article's threshold for heavy terminology, such an option is inefficient, starting with the network latency discussed earlier.
Another option is to have a distributing entity to sort through multiple client requests and return a single, “comprehensive” response. Kind of like the packing assistant in a grocery store. You know, the one gathering your stuff while you talk to the cashier and check out – to save everyone time.
That's what API gateways are for.
Some of the best API Gateway solutions are provided by Amazon, Ambassador, Kong, Microsoft, Akamai, Mulesoft, Google, and Express.
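To illustrate the aggregation idea from the grocery-store analogy, here is a minimal sketch of a gateway fanning one request out to several stand-in services; there is no real network here, and all names are invented:

```python
# Stand-ins for the blog page's backing microservices.
def text_service(post_id):     return {"body": f"Post {post_id} text"}
def comments_service(post_id): return {"comments": ["Nice read!"]}
def related_service(post_id):  return {"related": ["Another article"]}

def api_gateway(post_id: int) -> dict:
    """Single point of entry: fan the one client request out to each
    backing service and return one combined response."""
    response = {}
    for service in (text_service, comments_service, related_service):
        response.update(service(post_id))
    return response

combined = api_gateway(42)
```

The browser sends one request to one address and gets one response, while the gateway deals with the multiple endpoints behind the scenes.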
Security
Privacy and security issues accompany any IT product development process.
At that, the nature of microservices poses an elevated risk of security breaches, putting additional pressure on software architects.
What's more, securing the containers in which microservices run joined the list of the industry's main data-protection challenges in 2018.
This is happening for two reasons.
First, it is a well-known fact that a system's complexity is inversely proportional to its reliability. This is especially true when it comes to software vulnerabilities: increased interaction between microservices means additional communication channels, and therefore more points of potential penetration.
To make things worse, microservice apps are often hosted on the same physical servers as third-party software. Even the phrase "shared environment" does not sound too safe, to say nothing of the multitude of less obvious ways things can go wrong in a complex cloud infrastructure hosting distributed software.
Luckily for IT enterprises, there are solutions for these issues, too.
Middleware
Last but not least on the list of the main technologies behind microservices is the so-called middleware. These tools are responsible for additional coherence-related tasks like load balancing, proxying, caching, and routing. While somewhat similar to the gateways mentioned above, middleware doesn't expose microservices the way an API does.
In terms of microservices middleware, these are the market leaders:
- NGINX
- HAProxy
- Envoy
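As a flavor of what such middleware does, here is a toy round-robin load balancer in Python; the instance addresses are made up for illustration:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal load-balancing middleware: spread incoming requests
    evenly across the available instances of a service."""

    def __init__(self, instances):
        self._instances = cycle(instances)  # endless round-robin iterator

    def route(self, request: str) -> str:
        target = next(self._instances)
        return f"{target} <- {request}"

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2"])
first = lb.route("GET /a")
second = lb.route("GET /b")
third = lb.route("GET /c")
```

Production tools like HAProxy or NGINX add health checks, weighting, and connection handling on top, but the core routing idea is this simple.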
Although there is no reason for smaller teams and projects to give up the well-established, proven monolithic software architecture, Microservices are more progressive and seem to be here to stay for the foreseeable future.
Yes, the approach is more resource-demanding and complex than traditional development techniques. However, its benefits outweigh the costs, especially for big projects where software reliability and production speed play a key part. The array of technologies backing microservices-based apps is stunning: from design and implementation to deployment and maintenance, versatile tools are there to aid every step of the microservices development process.