Micro services will not change the world, just as the tech trends that came before them (SOA, object orientation, object brokers, XML, client/server, etc.) didn't either. That said, I think that, like those earlier trends, micro services will make an incremental improvement to how we design and build software and systems, but not fundamentally change it. I don't have anything against micro services; in fact I like the model. This is just my two pence worth.
At the moment there is a lot of talk about micro services being the answer to all known problems in IT, including 'fixing' big, complex, hard-to-change enterprise systems, hereafter referred to as 'monoliths'. In my eyes these monoliths are in fact already made up of micro services, except the services are called modules, packages, components and scripts. The real monolith is the fact that all of those modules, packages, components and scripts are written, integrated and tested by a single system integrator supplier, typically in an opaque manner.
I suggest that the fix for the monolith is not the adoption of micro services but a change to the way the system is contracted. However, it is worth noting that those single-contract delivery models, although seen as cumbersome, do provide a lot of intangible benefits. The largest is the ability to delegate responsibility for the non-functional aspects of the system (performance, capacity, stability, guaranteed service levels, etc.) and to put measurable targets in place to get a consistent level of service.
Service Contracts
Micro services provide the ability to make changes quickly and, if the overall architecture is supportive, those changes will be isolated to small areas of the system, thereby limiting the testing required. Right? A number of years ago I worked on a large integrated system with tricky NFRs around performance and availability. The system was based on a service-oriented architecture and designed to be flexible. There is a strong argument that micro services are just an evolution of SOA and that the same principles apply; I tend to agree. In this system we could process a change request and make the technical change in 5 minutes. Great, eh? However, it could take us up to 100 days of effort to regression test and deploy such a change, much to everyone's annoyance (mine included).
The problem was not that we were poor at testing; it was that the contract included service penalties of up to £1,000,000 a month if the performance and availability of the system did not meet the contracted targets. Numbers like that focus the mind and drive risk-averse behaviour when it comes to implementing changes!
Where there are risks such as service penalties, the mitigation is to add rigorous controls and processes. The consequence is that these change processes and testing regimes reduce the speed and agility with which changes can be implemented in the live environment. Couple that with industry regulation, third-party assessments and licensing, and changing any live, business- (or safety- or national-security-) critical IT system, no matter how small the change, becomes quite a risky undertaking.
What does this have to do with micro services?
The system referred to above was built on SOA, and when we do a traditional SOA-based design we aim for coarse-grained, abstract services that encapsulate the complexity of the back end. Micro services are similar but tend to be much finer grained, so there are going to be many more micro services in a system compared to the equivalent SOA platform.
In other words, micro services generate more moving parts. Of course the code logic needs to exist regardless of whether it is exposed as a micro service, but the point of a micro service is that it is visible, flexible and reusable. If a code function moves from being wrapped and protected by its surroundings to being visible and usable in a variety of ways, that exposure creates moving parts. As every engineer knows, the more moving parts a system has, the more fragile it gets.
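To make the "moving parts" point concrete, here is a minimal sketch (the function names and fields are hypothetical, purely for illustration) of the same piece of logic before and after exposure. As an internal function its callers are known and its inputs are trusted by convention; once exposed as a service endpoint, input validation, error shapes and backwards compatibility all become part of a public contract that any consumer can depend on.

```python
# Internal function: callers are known, inputs are trusted by convention.
def calculate_discount(order_total: float, loyalty_years: int) -> float:
    rate = min(0.05 * loyalty_years, 0.25)  # capped at 25%
    return round(order_total * rate, 2)

# The same logic exposed as a (hypothetical) service endpoint: it now has
# a public contract, so validation, versioning and error responses become
# moving surface area that every consumer can depend on.
def discount_endpoint(request: dict) -> dict:
    try:
        total = float(request["order_total"])
        years = int(request["loyalty_years"])
    except (KeyError, TypeError, ValueError):
        return {"status": 400, "error": "invalid request"}
    return {"status": 200, "discount": calculate_discount(total, years)}
```

Nothing about the discount calculation changed, yet the endpoint version carries extra obligations that the wrapped version never had.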
It is this fragility that means the unbounded flexibility promised by micro services is at risk of not being realised. Where fragility exists, change uncertainty and change risk start to materialise. The obvious mitigation is to increase the amount of testing. Automated testing will help, but there will still need to be an element of manual testing to mitigate the risks in high-stakes systems.
Additionally, because micro services create a loosely coupled environment with potentially many alternate paths, change impacts are less well understood. The interactions between services are not always evident and in some cases not consistent. This uncertainty is amplified where changes are split between multiple suppliers and/or contract boundaries. Therefore, as we saw with SOA, it will be possible to make changes quickly, but it will still take time and effort to assure all of the stakeholders and service management teams that those changes didn't break something important.
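One way teams try to contain that assurance cost is consumer-driven contract checking. The sketch below is a hand-rolled illustration of the idea (not the actual API of tools such as Pact, and the service and field names are invented): the consumer records the response shape it depends on, and the provider replays those expectations against its current implementation before releasing a change.

```python
# Expectations recorded by a (hypothetical) consumer of the customer service:
# the fields it actually reads and therefore cannot afford to lose.
CONSUMER_EXPECTATIONS = {
    "get_customer": {"required_fields": {"customer_id", "status"}},
}

# Current provider implementation (stubbed for illustration).
def provider_get_customer(customer_id: str) -> dict:
    return {"customer_id": customer_id, "status": "active", "tier": "gold"}

def verify_contract() -> list:
    """Run the consumer's expectations against the provider; return failures."""
    failures = []
    response = provider_get_customer("C42")
    missing = CONSUMER_EXPECTATIONS["get_customer"]["required_fields"] - response.keys()
    if missing:
        failures.append(f"get_customer missing fields: {sorted(missing)}")
    return failures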
I still think micro services are good however!
What micro services have done, like object-oriented design, SOA and others before them, is formalise leading application architecture thinking into simple patterns that everyone can understand, debate and share. Micro services also push the envelope and increase the significance and importance of good APIs. By 'good' I mean usable and reasonably static in terms of change; API changes are the source of the most complex impact assessments. At the end of the day, though, we are building complex IT systems and there is always going to be a level of risk generated by that complexity. No matter how simple it is to change the code, the higher the criticality of the system, the more rigorous the demands of its users and sponsors to make sure that changes are impact assessed properly and the risks are mitigated with sufficient testing.
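One common discipline for keeping an API "reasonably static" is to allow only additive change: later versions may add fields, but never rename or remove the ones existing consumers read. A minimal sketch of the idea (the endpoint, version numbers and fields are all hypothetical):

```python
# Hypothetical v1 response contract: fields may be ADDED in later minor
# versions, but existing fields are never renamed or removed, so existing
# consumers keep working without triggering a full regression cycle.
def account_summary_v1(account: dict) -> dict:
    return {
        "api_version": "1.1",
        "account_id": account["id"],
        "balance": account["balance"],
        # New optional field introduced in 1.1; 1.0 consumers simply ignore it.
        "currency": account.get("currency", "GBP"),
    }

# A tolerant consumer reads only the fields it needs and ignores the rest,
# which is what makes additive change safe.
def read_balance(summary: dict) -> float:
    return summary["balance"]
```

Under this convention the impact assessment for an additive change is far smaller than for a breaking one, which is precisely why stable APIs matter so much in high-criticality systems.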
With DevOps, automation allows small changes to be deployed more frequently, thereby reducing the risk of individual changes causing unknown outcomes and failures. This is true, but most organisations are not yet in a place where they can safely implement little-and-often changes with no manual test requirements.
The point I am trying to make is that IT technology by itself cannot be seen as the solution to all IT problems. The commercial and legislative aspects also need to change in order to achieve more agility. Some organisations are doing this by taking the integration responsibility back in house and essentially letting suppliers off service levels (because it is just too hard to identify root cause and therefore who is at fault). I wouldn't be surprised to hear these organisations claiming that it is micro services that have allowed them to be more agile when, in fact, it is more likely that the contractual goalposts have moved with their supply chain.
So I don’t think micro services will change the world. Complex IT will stay complex and every time we create a new way of doing things with a new style, paradigm or technology we make it just a little bit more complicated!