Testing Microservices: Principles, Challenges, Case Studies
There’s an old saying that an application is never officially “finished” in the strictest sense – it simply reaches a point where it must be released.
Testing an “old school” monolithic application is one thing, however. Testing microservices is something else entirely. But what are microservices, and what difference do they make in the grand scheme of things? Thankfully, the answers are straightforward – you just need to keep a few key things in mind.
What is Microservices Architecture?
Microservices are nothing new in web application development – big players like Google and Amazon have been using them for over a decade. Recently, though, microservices have seen a jump in use and have become the architecture of choice for an ever-increasing number of applications and users.
One of the most important things to understand about the microservices architecture is that it acts in stark contrast to the monolithic applications of the 1990s and early 2000s.
In a monolith, an application is essentially designed as one massive entity. Every single part of the app – and every feature – is closely intertwined and interconnected in a way that worked well for the era, but that in hindsight was also difficult to build and even harder to maintain over the long run.
For starters, every single part of an app essentially had to be “finished” before the entire thing could be rolled out to end users. Because everything was so tightly connected together, you couldn’t necessarily change one aspect without it creating a ripple effect elsewhere in the application’s code and structure. This also meant that if you needed to push out an update, you had to update the entire application – you couldn’t simply focus your attention on whatever lone feature or task you were trying to take care of in the moment.
Microservices architecture, on the other hand, structures an application not as one massive app but as a series of smaller, more independently deployable services. If you know your app is going to have five basic features, for example, you could dedicate a single service to each one. Then, when taken together, they all add up to something far more powerful than any one of them could be on their own.
The benefit here is that you don’t have to wait for the entire app to be finished before rolling it out to your end users. You can devote smaller, more intensely focused teams to their respective services and deploy each one as it’s ready. Because these services can also be built around core business processes, it’s easy to use and re-use them across multiple applications as the situation demands. Here is a good example of moving from monolith architecture to microservices.
Finally, you don’t have to worry about a change to one service accidentally breaking others. Because everything is loosely coupled by design, this simply isn’t the type of problem you have to worry about with microservices.
Why Traditional Testing Doesn’t Always Work
Part of the reason why traditional testing doesn’t always work for microservices is a matter of scale. Apps built on microservices can use dozens of different services – they may not all be available to test at the exact same time.
Likewise, the way that microservices are all assembled together at the end of a project can make it very difficult to test things “the old-fashioned way” – which is why alternatives are very much a good idea.
Microservices Testing Types
Generally speaking, there are a few core microservices testing types that you and your teams will definitely want to pay attention to moving forward. Not only will they help make sure that your code is performing exactly as it should, but they’ll also guarantee the level of observability you need to always make the right decisions moving forward.
These testing types include:
- Unit testing focuses both on the behavior of each service, observed through changes in its state, and on the interactions and collaborations between objects and their dependencies.
- Integration testing helps make sure that each service operates successfully both individually and as a cohesive whole.
- Performance testing evaluates system responsiveness and stability under a certain workload.
- Component testing dives deep into each service’s code repository, testing all functions within that microservice in isolation.
- Contract testing verifies that what a consumer exchanges with a particular service – and how, exactly, that exchange happens – matches the agreed-upon contract.
- End-to-end testing helps verify that the application as a whole meets all business goals.
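To make the contract testing idea concrete, here is a minimal consumer-side contract check in Python. The response shape and field names are hypothetical, and a real setup would more likely use a dedicated tool such as Pact – but the principle is the same: the consumer encodes its expectations and verifies the provider’s response against them.

```python
# Minimal consumer-driven contract check for a hypothetical
# "user profile" service response. Field names and types are
# illustrative, not a real API.
EXPECTED_CONTRACT = {
    "id": int,
    "email": str,
    "is_active": bool,
}

def check_contract(payload: dict, contract: dict = EXPECTED_CONTRACT) -> list:
    """Return a list of violations; an empty list means the payload honors the contract."""
    violations = []
    for field, expected_type in contract.items():
        if field not in payload:
            violations.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return violations
```

A check like this runs on the consumer side against a recorded or live provider response, so a breaking change in the service surfaces as a failed contract test rather than a production incident.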
Known Challenges When Testing Microservices
- Chain reaction-type errors. Sometimes it’s difficult to find the initial error, since a single user’s action may trigger a chain reaction across multiple microservices communicating with each other. Possible causes include:
– An error in a microservice’s source code
– Incorrect data passed between microservices
– An operation terminated due to a timeout
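One common way to untangle such chain reactions is to attach a correlation ID to every incoming request and include it in every log line each service emits, so an error can be traced back through the whole call chain. A minimal sketch using Python’s standard logging module – the field name `corr_id` and the ID format are our own convention, not a standard:

```python
import logging

# Include the correlation ID in every formatted log line.
logging.basicConfig(format="%(asctime)s %(corr_id)s %(levelname)s %(message)s")

def get_request_logger(corr_id: str) -> logging.LoggerAdapter:
    """Wrap the service logger so every record carries the request's correlation ID."""
    base = logging.getLogger("svc")
    return logging.LoggerAdapter(base, {"corr_id": corr_id})

log = get_request_logger("req-42")
log.error("downstream call timed out")  # log line carries "req-42"
```

When every service in the chain propagates the same ID (typically via an HTTP header), grepping the aggregated logs for one correlation ID reconstructs the full path of the failing request.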
- Additional endpoints may require testing. Besides the endpoints that implement business logic, a microservice should include technical endpoints used for communication between microservices. Each endpoint should be tested, which requires preparing test data in a specific format.
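As a sketch of testing such a technical endpoint, here is a self-contained example that spins up a hypothetical `/health` route using only Python’s standard library and queries it the way another service (or an orchestrator) might:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Hypothetical technical endpoint: reports service liveness."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def check_health(port: int) -> dict:
    """Query /health the way a peer service or orchestrator would."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    server = ThreadingHTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(check_health(server.server_address[1]))
    server.shutdown()
```

In practice the service under test runs in a container rather than a thread, but the test itself looks much the same: call the technical endpoint, then assert on the status and payload.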
- Different communication channels between microservices. Some protocols require additional skills to set up the communication channel.
- Greater reliance on test automation. Automated testing is used more extensively for microservices, which requires skills in writing test scripts and autotests.
Best Practices for Microservices Monitoring
By far, one of the most important principles of monitoring microservices involves treating each service as its own software module. Treat everything as if it were supposed to operate totally independently of everything else. Once each service works well in isolation, you can then verify that the system works well as a whole.
Likewise, you’re going to want to make an effort to test across as many different types of setups as possible. The more diverse the setup your microservices run on, the more likely you are to encounter bugs that you can then put out of their misery.
Finally, never lose sight of the most important goal of all: pleasing end users. Make sure that every decision you make is built around the core business functionality they need when they need it the most and you can’t go wrong.
Options like Raygun APM offer both instrumentation and data collection processes as well as a comprehensive dashboard that you can use to visualize the data your metrics are generating. There are other, more specialized tools you can also use depending on what you’re trying to look at in the moment. Zipkin, for example, is a tool designed specifically to trace calls between microservices and is particularly helpful when it comes to addressing latency issues.
It’s important to make sure you’re using the right tools. At MobiDev, our common choices are JMeter for load testing, Telegraf for metrics gathering, and InfluxDB accompanied by Grafana for building visual dashboards to collate all of the data being collected.
How We Test Microservices at MobiDev
- Every single microservice is tested separately with API tests to reveal initial errors quickly.
- Integration testing is conducted to ensure proper communication between microservices.
- End-to-end app testing is conducted via the API / user interface to ensure that all of the integrated parts function properly.
- At the microservice level, the code is covered by unit tests to ensure the proper functioning of the microservice modules.
- Event and error logging is used for rapid response to unexpected problems.
- Involving test automation specialists, and having QA engineers collaborate with developers, allows testing to be optimized effectively.
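The unit-testing point above can be sketched as follows. The `charge_user` function and its payment gateway collaborator are purely illustrative; the pattern is to inject the dependency and replace it with a mock so the module is tested in complete isolation from other services:

```python
import unittest
from unittest import mock

def charge_user(user_id: int, amount: float, gateway) -> str:
    """Charge via the injected gateway; return a confirmation status."""
    result = gateway.charge(user_id, amount)
    return "confirmed" if result.get("ok") else "declined"

class ChargeUserTest(unittest.TestCase):
    def test_confirmed_when_gateway_ok(self):
        gateway = mock.Mock()
        gateway.charge.return_value = {"ok": True}
        self.assertEqual(charge_user(1, 9.99, gateway), "confirmed")
        # Verify the collaboration, not just the state.
        gateway.charge.assert_called_once_with(1, 9.99)

    def test_declined_when_gateway_rejects(self):
        gateway = mock.Mock()
        gateway.charge.return_value = {"ok": False}
        self.assertEqual(charge_user(1, 9.99, gateway), "declined")
```

Run with `python -m unittest`. Note that the tests cover both the returned state and the interaction with the dependency, matching the two sides of unit testing described earlier.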
Microservices-based Project Case Study
Fairly often, a microservices-based application is a system that comprises a simple user interface and a complicated backend under the hood. A QA engineer who tests microservices must be experienced in using Docker and console utilities for gathering logs and connecting to containers. Programming skills are a plus.
In our recent practice we tested microservice-based software that enables biometric authentication using face and voice recognition. In our case, along with the creation of automated scripts, testing involved collecting and preparing a large dataset for model training and testing.
Depending on the task, different software modules required different datasets. For example, when testing the voice recognition service, voice samples were recorded on various mobile and desktop devices, involving many people of different ages and genders.
The dataset for driver’s license recognition was formed from data collected by external testers who used their real IDs. It was challenging to classify the driver’s licenses by all applicable standards across all U.S. states.
Automated testing was then applied to the prepared datasets: an expected result was defined for each case, and the data was sent via scripts. The output was a report containing the information and test status for each endpoint.
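That reporting step can be sketched roughly like this – the endpoints, payloads, and the fake transport below are illustrative stand-ins for the real scripts:

```python
def run_case(endpoint: str, payload: dict, expected: dict, call) -> dict:
    """Send the payload via an injected transport and compare against the expected result."""
    actual = call(endpoint, payload)
    return {
        "endpoint": endpoint,
        "status": "PASS" if actual == expected else "FAIL",
        "expected": expected,
        "actual": actual,
    }

def build_report(cases, call) -> list:
    """Run every prepared case and collect a per-endpoint status report."""
    return [run_case(ep, payload, expected, call) for ep, payload, expected in cases]

# Usage with a fake transport standing in for real HTTP calls:
fake_call = lambda endpoint, payload: {"recognized": True}
cases = [
    ("/voice/verify", {"sample": "a.wav"}, {"recognized": True}),
    ("/face/verify", {"image": "b.png"}, {"recognized": False}),
]
report = build_report(cases, fake_call)
```

Injecting the transport (`call`) keeps the report logic testable without live services; in production runs it would be replaced by the actual HTTP client used by the scripts.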