
Microservices, or “How to spread the love”

For some time, people have been talking about microservices. I say “some time” for two reasons: 1) It’s a good opening line, and 2) I have no clue how long people have been talking about them. I just heard the term for the first time about four months ago. So if I start talking about them now, while I still know virtually nothing, I can get at least two more future posts on the subject talking about how I was doing it wrong in the beginning.

In the meantime, I have been talking about them quite a bit recently. We’ve been using them on a project at Clear Measure and I’d like to think it’s been successful but it’s too soon to tell. I feel good about what we’ve done which, historically, has always been a good metric for me.

The topic has been covered at a technical and architectural level pretty well by Martin Fowler, so much so that he’s even collected his discussions into a nice little Microservices Resource Guide. In it, he and other ThoughtWorkians define them (to the extent that anything in software containing the word “services” can be defined), point out pros and cons compared to monolithic applications, describe testing strategies, and cover off the major success stories in the space.

That doesn’t leave much ground for me to cover which, from a marketing standpoint, is almost surely the point. But I would like to add my voice if for no other reason than to plug the podcast on the subject.

Check out the Western Devs podcast on microservices

One of the more interesting links on Fowler’s Resource Guide is tucked away at the bottom. It’s a series of posts on how SoundCloud is migrating from a monolith to microservices. Part 1 discusses how they stopped working on the monolith and performed all new work in new microservices, and part 2 is on how they split the monolith up into microservices. There were challenges in both cases, leading to other architectural decisions like event sourcing.

The arguments for and against are, predictably, passionate and academic. “Overkill!” you say. “Clean boundaries!” sez I. “But…DevOps!” you counter. “Yes…DevOps!” I respond. But SoundCloud’s experience, to me, is the real selling point of microservices. Unlike Netflix and Amazon, it’s a scale that is still relatable to many of us. We can picture ourselves in the offices there making the same decisions they went through and running up against the same problems. These guys have BEEN THERE, man! Not moving to microservices because they have to but because they had a real problem and needed a solution.

Now if you read the posts, there’s a certain finality to them. “We ran into this problem so we solved it by doing X.” What’s missing from the narrative is doubt. When they ran into problems that required access to an internal API, did anyone ask if maybe they defined the boundaries incorrectly? Once event sourcing was introduced, was there a question of whether they were going too far down a rabbit hole?

That’s not really the point of these posts, which is merely to relay the decision factors to see if it’s similar enough to your situation to warrant an investigation into microservices. All the same, I think this aspect is important for something still in its relative infancy, because there are plenty of people waiting to tell you “I told you so” as soon as you hit your first snag. Knowing SoundCloud ran into the same doubt can be reassuring. Maybe I’m just waiting for Microservices: The Documentary.

Regardless, there are already plenty of counter-arguments (or more accurately, counter-assumptions) to anecdotal evidence. Maybe the situation isn’t the same. They have infrastructure. They have money and time to rewrite. They have confident, “talented” developers who always know how to solve architectural problems the right way.

So now I’ve more or less done what I always do when I talk microservices, which is talk myself into a corner. Am I for ‘em or agin ‘em? And more importantly, should you, reader, use them?

The answer is: absolutely, of course, and yes. On your current project? That’s a little murkier. The experience is there and microservices have been done successfully. It’s still a bit of a wild west which can be exciting if you ain’t much for book learnin’. But “exciting” isn’t always the best reason to decide on an architecture if someone else is paying the bills. As with any architectural shift, you have to factor in the human variables in your particular project.

For my limited experience, I like them. They solve one set of problems nicely and introduce a new set of problems that are not only tractable, but fun, in this hillbilly’s opinion.

And why else did you get into the industry if not to have fun?

This entry was posted in Microservices. Bookmark the permalink. Follow any comments here with the RSS feed for this post.
  • http://www.sholo.net/ scottt732

    Apologies for hijacking your comments section. I will read up on Fowler’s thoughts.

  • http://www.sholo.net/ scottt732

    So it sounds like you’re using CQRS. You’re definitely safer having your writes within a single container and having the entire transaction in that container, but it’s not always possible when your data is spread around behind other services. The case I would make for a common back-end for the query side is that it allows you to make transactional changes to data across service boundaries without having to add the risk of making additional service calls. If the ‘true’ state of your database is really the sum of the state of a bunch of smaller databases, you run a higher risk of eventual inconsistency if a coordinated set of commands ever fails in an unusual way or the failure condition wasn’t communicated or handled correctly. In a classic RDBMS world, you’d just wrap a set of commands in a transaction and sleep soundly knowing that even if something failed, the database will end up in a consistent state.
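    That classic wrap-it-in-a-transaction guarantee can be sketched in a few lines (using SQLite from Python’s standard library as a stand-in; the accounts table and amounts are purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

# Both updates succeed or neither does; the database can never be
# observed in a half-transferred state.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    pass  # rollback already happened; state is still consistent

print(list(conn.execute("SELECT id, balance FROM accounts ORDER BY id")))
# -> [(1, 70), (2, 30)]
```

    Spread those two updates across two microservices with their own databases and there is no `with conn:` to lean on; you are in coordinated-commands territory.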

  • JontyMC

    Why would you want microservices sharing a common back-end? Each microservice should own its data; otherwise you are no longer separately deployable. That doesn’t mean microservices have to make synchronous calls: each can just store a view of the data owned by another, kept up to date via events.

    Microservices can be made simple by architectural choices. For the write side, make a transactional boundary with a single microservice around each use case, with that microservice being the sole owner of changes to that data. For the read side, have a single microservice per use case subscribing to the data it needs from the write side microservices.

    This is overall simpler for most non-trivial systems, because changes to use cases correlate one-to-one with CI boundaries.
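    A minimal in-process sketch of that split (all names here, such as OrderWriteService and order_placed, are made up for illustration; a real system would publish over a broker rather than an in-memory bus):

```python
# Write side owns the data and publishes events; the read side keeps its
# own denormalized view, updated from events, and never queries the write side.

class EventBus:
    def __init__(self):
        self.subscribers = []

    def publish(self, event):
        for handler in self.subscribers:
            handler(event)

class OrderWriteService:
    """Sole owner of order data: the transactional boundary for writes."""
    def __init__(self, bus):
        self.orders = {}
        self.bus = bus

    def place_order(self, order_id, customer, total):
        self.orders[order_id] = {"customer": customer, "total": total}
        self.bus.publish({"type": "order_placed", "order_id": order_id,
                          "customer": customer, "total": total})

class CustomerSpendReadService:
    """Read side: subscribes to events and maintains its own view of the data."""
    def __init__(self, bus):
        self.spend_by_customer = {}
        bus.subscribers.append(self.handle)

    def handle(self, event):
        if event["type"] == "order_placed":
            c = event["customer"]
            self.spend_by_customer[c] = self.spend_by_customer.get(c, 0) + event["total"]

bus = EventBus()
writes = OrderWriteService(bus)
reads = CustomerSpendReadService(bus)
writes.place_order(1, "alice", 40)
writes.place_order(2, "alice", 10)
print(reads.spend_by_customer["alice"])  # -> 50
```

    Either service can be redeployed independently; the read side only depends on the event contract, not on the write side’s database.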

  • http://kyle.baley.org Kyle Baley

    I’d recommend Fowler’s resource guide, which is actually a series of presentations and articles rather than a single document. He touches on many of the issues and how to mitigate them. This is the main reason I didn’t go into a detailed analysis of the pros and cons as I figured much of it had already been discussed with more authority than I could add.

    The link in my opening paragraph (https://www.nginx.com/blog/introduction-to-microservices/) also has a very good introduction to microservices and discusses the benefits and drawbacks in some detail.

  • http://www.sholo.net/ scottt732

    I haven’t had a chance to read Fowler’s document yet but I’ll take a look. Just hoping to balance the argument a little bit from my own experience and potentially help some developers avoid some aggravation. ‘Microservices’ is a buzzword these days… so be wary.

    On the point about message queues and resiliency, they solve some problems, but they also complicate communications. You can’t easily guarantee things like in-order message delivery or exactly-once delivery. If one endpoint is down, how long do undeliverable messages need to live on the queue? Now you need to know how big the messages are and the worst-case rate of ingestion, then do some arithmetic to figure out how much disk space to buy, and make sure that whenever someone adds a new property to a DTO somewhere, somebody adjusts the knobs accordingly. Do you need to communicate failure synchronously or asynchronously? I.e., is there a message queue involved when things go wrong? What happens when the queue is down? What tools do you set up, buy, or build to monitor the health of the queue? Did you pick the right queue? What does the upgrade scenario look like?
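    A common way out of the exactly-once problem is to accept at-least-once delivery and make every handler idempotent, deduplicating on a message ID. A rough sketch (the message shape is hypothetical, and a real dedup store would be durable rather than an in-memory set):

```python
class IdempotentConsumer:
    """At-least-once delivery means duplicates will arrive; dedup on message
    id so a redelivered message is acknowledged without being reprocessed."""

    def __init__(self):
        self.seen_ids = set()  # in production this would live in durable storage
        self.balance = 0

    def handle(self, message):
        if message["id"] in self.seen_ids:
            return  # duplicate: safe to ack and drop
        self.seen_ids.add(message["id"])
        self.balance += message["amount"]

consumer = IdempotentConsumer()
msg = {"id": "m-1", "amount": 25}
consumer.handle(msg)
consumer.handle(msg)  # broker redelivers after a lost ack
print(consumer.balance)  # -> 25, not 50
```

    That doesn’t answer the monitoring, sizing, or upgrade questions above, but it does mean a redelivered message is an annoyance instead of a correctness bug.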

  • Donald Belcham

    While the idea of exercising only one microservice is certainly the easiest option it isn’t always possible. Microservice boundaries should be defined not based on the consumption of the data but instead based on the relationship, ownership and encapsulation of the concepts that the data supports. There will be times when you have to populate a screen or report and the data required will need to be sourced from two different microservices. One option is to create umbrella services that perform the co-ordination for you. Another option is to architect your solution so that there is redundant data (akin to CQRS read tables) flattened for read purposes. I personally prefer the CQRS style approach where the data is kept fresh by handling messages triggered by the source data changing.

  • http://www.sholo.net/ scottt732

    It’s only really feasible to do that if the microservices share a common back-end and can query directly. This is why I like CQRS and ES in this world. Otherwise, you incur costs from serializing result sets. With web services gluing the services together, the client making the request has to setup (or hopefully reuse) a TCP connection + (hopefully) negotiate TLS, format the request, send it… and wait on the server while it dispatches the HTTP request, parses it, maybe verifies a security token, sets up (or hopefully reuses) a connection to the database, executes the query, serializes the results, sends the response back… so the client can handle error conditions or deserialize the results. Since all of this is generally brittle, add a handful of logging and validation logic in there.

    Compare all this with setting up (or, more likely, reusing) a pooled connection to your database, sending the request, and receiving the response, where the message formatting, parsing, and validation are all handled by a DB client library, work the web service approach still has to deal with on top of everything else.

    I’ve noticed that when people start designing for microservices, they tend to get a little carried away, to the point where every business concept is its own island, the gatekeeper for its own data. It means that if I need to do something that involves multiple business concepts, I need to worry about coordinating the activities and handling the issues that might arise myself… or make another microservice that hides some of that from me and just handle the issues that might arise from calling the new microservice. Even if you’re not putting every concept in a silo, you’re still inevitably going to have to worry about coordinating service calls somewhere. And hopefully you don’t need these things to happen atomically, because coordinating distributed transactions by hand is no fun.

    I’m not saying microservices are a bad idea altogether… but proceed with extreme caution and make sure you’re not overcomplicating things unnecessarily. You can easily take what seems like an elegant approach and end up with a gigantic, over-budget, late, excessively complicated system that nobody really understands and that everyone worries will wake them up in the middle of the night. If you’ve got a monolithic application that can’t handle the load, consider options like load balancing, caching, or buying more powerful hardware… and if you’re in the cloud and can’t meet performance requirements, make sure the cloud isn’t part of the problem. If so, feel free to rent more performance… 9 times out of 10 it will be a cheaper option in the long run than building, maintaining, monitoring, and managing a suite of microservices.

  • http://kyle.baley.org Kyle Baley


    All points worth noting, though I would push back on the one about the probability of failure if you build resiliency into your microservices (e.g. with messaging, retrying on failure, etc.).

    I doubt anyone will claim microservices will fit all, or even most, scenarios and it may forever be a niche architecture. Somewhere in one of the links was the notion that monolithic and microservices are not mutually exclusive either. You can have a monolithic application that uses microservices for certain bounded contexts.

    Luckily, Martin Fowler addresses many of these issues in his resource guide so there’s no excuse for being surprised when you run into them.

  • JontyMC

    Why does microservices necessarily mean more latency/serialization/network IO? Wouldn’t you architect your microservices so a use case exercises exactly one microservice?

  • http://www.sholo.net/ scottt732

    I’ve been in the trenches on a massive monolithic-to-microservices architecture conversion, on the design, implementation, and operations sides. I would urge a ton of caution for the vast majority of use cases. YMMV, but in my experience…

    1) You are introducing tons of latency and overhead via serialization, network I/O, and de-serialization.
    2) What would have been simple, efficient in-process/client-server calls become slow, network I/O-bound async processes.
    3) It adds complexity and latency to message validation and error handling. It’s easy to end up in states that you can’t make sense of.
    4) Without a shared back-end database, you end up storing data across logical/imaginary/microservicey boundaries. In situations where you might have done a join on 3 tables in a monolithic app, you can end up having to jump through hoops to coordinate I/O, guarantee success, and merge results (generally avoidable with careful design).
    5) It is easy to end up in situations where you are streaming data over your network and rolling it up elsewhere. This can put a LOT of strain on your network infrastructure, which hopefully for your sake doesn’t involve hypervisors and clouds.
    6) Predictable performance is nearly impossible to guarantee (SLAs).
    7) Guaranteed message delivery comes at a cost. It’s very difficult + inefficient (impossible?) to get guaranteed message delivery with exactly-once message processing. You end up needing to architect just about every operation to be idempotent.
    8) The number of moving parts explodes, and with it the probability of failure increases. If you thought you had 99.9% reliability with 1 moving part, with n moving parts you effectively have (0.999)^n × 100% reliability.
    9) Troubleshooting situations becomes massively more complicated across machine boundaries–hopefully all of your clocks are in sync and you are tracking the path of your messages. Solutions like Logstash and Splunk become critical.
    10) You have a much larger surface area to worry about securing.
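    The arithmetic in point 8, spelled out: if each of n moving parts is independently available 99.9% of the time, the chance that everything is up at once is 0.999^n.

```python
# Composite availability of n independent components, each 99.9% reliable.
def composite_reliability(per_component, n):
    return per_component ** n

for n in (1, 10, 50, 100):
    print(n, round(composite_reliability(0.999, n) * 100, 1))
# 1 component    -> 99.9%
# 10 components  -> ~99.0%
# 50 components  -> ~95.1%
# 100 components -> ~90.5%
```

    At 100 moving parts you’ve gone from roughly nine hours of downtime a year to over a month, which is why the resiliency work Kyle mentions isn’t optional at that scale.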

    If you’re headed down the rabbit hole regardless, I would strongly encourage you to familiarize yourself with Event Sourcing, Actor-based systems (Akka/Akka.net), and Nathan Marz’s Lambda architecture. Try to absorb as much as you can on all three and choose your path and tooling wisely.