Microservices are a tax your startup probably can't afford

260 points · nexo-v1 · 213 comments · 5/8/2025, 1:23:53 PM · nexo.sh

Comments (213)

asim · 6h ago
> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains. Before that? You’re paying the price without getting the benefit: duplicated infra, fragile local setups, and slow iteration. For example, Segment eventually reversed their microservice split for this exact reason — too much cost, not enough value.

Basically this. Microservices are a design pattern for organisations as opposed to technology. Sounds wrong, but the technology change should follow the organisational breakout into multiple teams delivering separate products or features. And this isn't a first step. You'll have a monolith; it might break out into frontend, backend and a separate service for async background jobs, e.g. PDF creation is often a background task because of how long it takes to produce. Anyway, after that you might end up with more services, and then you have this sprawl of things where you start to think about standardisation, architecture patterns, etc. Before that it's a death sentence, and if your business survives I'd argue it didn't survive because of microservices but in spite of them. The dev time lost in the beginning, say sub-200 engineers, is significant.

candiddevmike · 6h ago
Some resume driven developers will choose microservices for startups as a way to LARP a future megacorp job. Startup may fail, but they at least got some distributed system experience. It takes extremely savvy technical leadership to prevent this.
devin · 6h ago
In my experience, it seems the majority of folks know the pitfalls of microservices, and have since like... 2016? Maybe I'm just blessed to have been at places with good engineering, technical leadership, and places that took my advice seriously, but I feel like the majority of folks I've interacted with all have experienced some horror story with microservices that they don't want to repeat.
Espressosaurus · 5h ago
I feel like it's only in the last 5 years in the tech publicity sphere that I've seen pushback against microservices, and only in the last year or two that the pushback has come to drown out the influencers pushing microservices.

Things are different in the embedded space so I don't have personal experience with any of it.

speed_spread · 4h ago
Pushback was always there from the start. The first edition of O'Reilly's "Building microservices" recommended _against_ microservices, unless you absolutely tried scaling your monolith and team beforehand.

Any organization stuck in microservice hell fully deserves the punishment.

westurner · 5h ago
Does [self-hosted, multi-tenant] serverless achieve similar separation of concerns in comparison to microservices?

Should the URLs contain a version, like /api/v1/?

FWIU OpenAPI API schemas enable e.g. MCP service discovery, but not multi-API workflows or orchestrations.

(Edit: "The Arazzo Specification - A Tapestry for Deterministic API Workflows" by OpenAPI; src: https://github.com/OAI/Arazzo-Specification .. spec: https://spec.openapis.org/arazzo/latest.html (TIL by using this comment as a prompt))

hnthrow90348765 · 5h ago
Hiring will need to change to stop resume-driven development (can't eliminate it completely though), because you're likely to only get monolith roles if you only work on monoliths. Only being able to speak about microservices puts you in the "talk the talk, not walk the walk" category.

It would also be nice to have less fear-driven career advice like "your skills go out of date", which drives people to try adopting the latest things.

Mountain_Skies · 2h ago
Keyword-driven and filtered application processes also heavily incentivize adding to projects whatever is being posted on job sites. If microservices are part of a company's standard template for developer postings, people who want to work at that company will find a way to get it on their resume.
MDGeist · 6h ago
I've also seen the top down version where senior leadership like a CIO/CTO wants to put a huge "modernization" project on their resume and they don't care if it is impossible to maintain or falls over after they move on.
ang_cire · 3h ago
"Cloud Migration"
alaithea · 5h ago
And when it's your technical leadership leveraging buzzword-driven development to rise to the top, you're screwed.
wpollock · 4h ago
So true. It was in March that I saw on HN an advertisement for a vibe coder with 3 years' experience. I believe the term "vibe coding" was invented a month before that! Buzzword hiring is as bad as resume-driven development.
bityard · 5h ago
It could also just be plain old overengineering. Like using Django and leaning on all of the magic contained within it just to implement a simple API that could instead be a very small Flask or FastAPI app.
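To make the size difference concrete, here's a minimal sketch of the kind of small FastAPI app being described; the item model and endpoints are invented for illustration:

    # Hypothetical minimal FastAPI service: one model, two endpoints, no framework magic.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()
    ITEMS: dict[int, dict] = {}  # in-memory store stands in for persistence

    class Item(BaseModel):
        name: str
        price: float

    @app.post("/items/{item_id}")
    def create_item(item_id: int, item: Item):
        ITEMS[item_id] = item.model_dump()
        return {"id": item_id, **ITEMS[item_id]}

    @app.get("/items/{item_id}")
    def read_item(item_id: int):
        if item_id not in ITEMS:
            raise HTTPException(status_code=404, detail="not found")
        return ITEMS[item_id]

Run with `uvicorn app:app`; there's no ORM, admin, or middleware stack to maintain unless the problem actually calls for it.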
dimal · 5h ago
I saw one startup with about fifty engineers, and dozens of services. They had all of the problems that the post describes. Getting anything done was nearly impossible until you were in the system for at least six months and knew how to work around all the issues.

Here’s the kicker: They only had a few hundred MAUs. Not hundreds of thousands. Hundreds of users. So all this complexity was for nothing. They burned through $50M in VC money then went under. It’s a shame because their core product was very innovative and well architected, but it didn’t matter.

jghn · 4h ago
> They only had a few hundred MAUs

Way too many companies believe they're really just temporarily embarrassed BigTech.

danielscrubs · 4h ago
Bad software dev. degrees that focus on fancy architecture that brings nothing to the table except overhead.
betterThanTexas · 4h ago
I don't think I learned basically anything about "fancy architecture" from my undergraduate courses except, ironically, reasoning about coupling and overhead.
ang_cire · 3h ago
I don't remember one solitary lecture on CI/CD, microservices, or even just deployment in general, in Uni. The closest that our comp. sci. classes ever came to touching on anything but the code itself was making us use SVN.
dakiol · 3h ago
I'm not sure where the downside is. The engineers got paid, they managed to put "founder" on their CVs, and enjoyed the ride. Now they are more prepared for their next adventure. The only ones who lost money were the investors, but nobody cares about them.
singron · 6h ago
> You'll have a monolith, it might break out into frontend, backend and a separate service for async background jobs

And when you break these out, you don't actually have to split your code at all. You can deploy your normal monolith with a flag telling it what role to play. The background worker can still run a webserver since it's useful for healthchecks and metrics and the loadbalancer will decide what "roles" get real traffic.
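A minimal sketch of that idea, assuming a Python monolith; the role names and the APP_ROLE variable are illustrative:

    # Hypothetical single-artifact deploy: the same codebase plays different roles
    # depending on a flag, so "web" vs "worker" is deployment config, not a code split.
    import argparse
    import os

    def run_web() -> None:
        # Serve real user traffic (the load balancer only routes requests here).
        print("serving HTTP traffic")

    def run_worker() -> None:
        # Consume background jobs (PDF generation, etc.); can still expose a small
        # HTTP endpoint for health checks and metrics.
        print("processing background jobs")

    def main() -> None:
        parser = argparse.ArgumentParser()
        parser.add_argument("--role", choices=["web", "worker"],
                            default=os.environ.get("APP_ROLE", "web"))
        args = parser.parse_args()
        {"web": run_web, "worker": run_worker}[args.role]()

    if __name__ == "__main__":
        main()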

elevatedastalt · 5h ago
If you are building the same binary for all microservices you lose the dependency-reduction benefit microservices provide, since your build will still break because of some completely unrelated team's code.
roguecoder · 5h ago
If it is possible for that other team to merge a broken build, you are doing it wrong.

If you are concerned about someone else breaking your thing, good! You were going to eventually break it yourself. Write whatever testing gives you confidence that someone else's changes won't break your code, and, bonus, now you can make changes without breaking your code.

motorest · 4h ago
> If it is possible for that other team to merge a broken build, you are doing it wrong.

This assertion is unrealistic and fails to address the problem. The fact that builds can and do break is a very mundane fact of life. There are whole job classes dedicated to mitigating the problems caused by broken builds, and here you are accusing others of doing things wrong. You cannot hide this away by trying to shame software developers for doing things that software developers do.

> Write whatever testing gives you confidence that someone else's changes won't break your code, and, bonus, now you can make changes without breaking your code.

That observation is so naive that it casts doubt on whether you have any professional experience developing software. There are a myriad of ways any commit can break something that go well beyond whether it compiles or not. Why do you think that companies, including FAANGs, still hire small armies of QAs to manually verify that things still work once deployed? Is everyone around you doing things wrong, and you're the only beacon of hope? Unreal.

roguecoder · 4h ago
I haven't seen a broken build in at least nine years, not since I left the company with a merge process built out of bash scripts that took three hours and required manual hand-holding.

I am genuinely curious what situations you are seeing where builds are making it through CI and then don't compile.

It isn't always worth investing in quality, but when it is, it is entirely possible to write essentially bug-free software. I've gone seven months without a bug in production, and for the one we did see we had a signed letter from product saying "I am okay if this feature breaks, because I think writing the tests that can verify this integration was going to take too long."

FAANG companies aren't prioritizing writing software well: they are prioritizing managing 50,000 engineers. Which is a much harder problem, but the management solutions that work for that preclude the techniques that let us write bug-free software.

One of the great things about startups is that it is trivial to manage five engineers, so there is no reason we have to write software badly.

motorest · 2h ago
> (...) it is entirely possible to write essentially bug-free software (...)

You lost what little credibility you had left.

JackSlateur · 3h ago
You are absolutely right.

Of course, if people wrote bug-free code, then there would be no bugs!

Bug-free code in the actual code, or bug-free code in the test code, this is the same story.

If you write stuff and never have any bug, then either:

  - you are lying
  - you do not write much
  - you only write really simple things
  - you are Jesus, came back from heaven to shine his light on us, poor souls
The more complicated, intricate stuff you have, the more bugs you'll get (and only time will allow you to fix that).

Tests are great to define how you think it should work, and to ensure it keeps working that way. Take the time to think about the third point on the bullet list above.

quesera · 1h ago
This seems a bit much.

In the DVCS era, we have inexpensive branching. Do as thou wilt on your topic or epic branches. Rebase them against main/master before merging upwards. Fix what must be fixed first.

Main/master branch should never fail CI. If it does, there is something seriously wrong with your branch lifecycle and/or deployment process.

tuckerman · 5h ago
Even if it builds successfully, I've never worked anywhere where automated tests prevented 100% of problems and I doubt I ever will. For most systems of sufficient complexity you are testing in prod, even if you did a lot of testing before prod as well.
roguecoder · 4h ago
That's even more true for microservices, though, since I have yet to see a microservice architecture that automatically runs end to end tests before deploying.

The post I was replying to said "your build will still break": that's what I was taking issue with. In this day and age there is no reason our trunk build should ever be broken.

mjr00 · 4h ago
> I have yet to see a microservice architecture that automatically runs end to end tests before deploying.

One of the big tenets of independent services is that your APIs are contracts that don't change behaviour. As long as each individual service doesn't introduce breaking changes, the system as a whole should work as expected. If it doesn't, this is indicative of either 1) a specific service lacking test coverage, or 2) doing something wrong, e.g. directly reading from a microservice's database without going through an API.
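One cheap way to hold that "APIs are contracts" line is a test that pins the response shape, so a breaking change fails that service's own CI. A sketch, assuming a hypothetical /api/v1/users endpoint and a `client` test fixture:

    # Hypothetical contract test: the consumer-facing fields of /api/v1/users/{id}
    # are pinned; removing or renaming any of them is flagged as a breaking change.
    REQUIRED_USER_FIELDS = {"id", "email", "created_at"}

    def test_user_response_stays_backward_compatible(client):
        resp = client.get("/api/v1/users/123")
        assert resp.status_code == 200
        body = resp.json()
        # New optional fields are fine; missing required fields break consumers.
        assert REQUIRED_USER_FIELDS <= set(body.keys())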

tuckerman · 3h ago
Yes, I suspect some of the back and forth is the fuzziness of the term "broken build", whether that means the code literally doesn't compile or it does but the code does the wrong thing.

I agree that you can prevent merges that cause compilation errors in nearly all cases!

jimbokun · 5h ago
What about when it’s you breaking your own thing?

A very large code base full of loosely related functionality makes it more and more likely a change in one part will break another part in unexpected ways.

jounker · 5h ago
You’ll still get some isolation since not all pathways share the same code. It’s not all or nothing.
jimbokun · 5h ago
I thought the linked article about how Khan Academy eventually migrated to multiple services was a good example of when introducing micro services is a good idea:

https://blog.khanacademy.org/go-services-one-goliath-project...

They had already scaled the mono service about as far as it could go and had a good sense of what the service boundaries should be based on experience.

PaulHoule · 2h ago
I've tended to use microservices in limited cases where the system had to serve a few requests that had radically different performance requirements, particularly memory utilization. For instance, I had a PHP server that served exactly one URL for which PHP was not a good fit; a specialized server in another language for that one URL gave something like 1000x better performance, and saved money by not needing a much bigger PHP server.

Using Spring or Guice in the Java world, it is frequent that people write "services" that are simply objects implementing some interface, which are injected by the framework. In a case like that you can imagine a service having either an in-process implementation or an out-of-process implementation (e.g. via a web endpoint or some RPC). Frameworks like that are normally thinking at the level of "let's initialize one application in one address space at a time", but it would be nice to see something oriented towards managing applications that live in various address spaces.

Trouble is that some people get this squee when they hear they can use JDK 9 for this project and JDK 10 for another project and JDK 11 for another project, and they'd rather die than eschew the badly broken Python 3.5 for something better. If you standardized absolutely everything I think you could be highly productive with microservices, because you wouldn't have to face gear switching or deal with teams who just don't know that XML serialization worked completely differently in JDK 7 vs JDK 8, and thus the services they make don't quite communicate properly, etc.
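A rough sketch of the in-process vs. out-of-process swap described above, in Python rather than Java; the PriceService name and endpoint are invented. The caller depends only on the interface, and the wiring decides whether the "service" is a local object or a network call:

    # Hypothetical service interface with two interchangeable implementations.
    import json
    import urllib.request
    from typing import Protocol

    class PriceService(Protocol):
        def price_for(self, sku: str) -> float: ...

    class InProcessPriceService:
        def __init__(self, prices: dict[str, float]):
            self._prices = prices

        def price_for(self, sku: str) -> float:
            return self._prices[sku]

    class RemotePriceService:
        def __init__(self, base_url: str):
            self._base_url = base_url

        def price_for(self, sku: str) -> float:
            with urllib.request.urlopen(f"{self._base_url}/prices/{sku}") as resp:
                return json.load(resp)["price"]

    def checkout_total(skus: list[str], prices: PriceService) -> float:
        # Business logic is identical whether the implementation lives in the same
        # address space or behind a network boundary.
        return sum(prices.price_for(s) for s in skus)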

jayd16 · 6h ago
It's weird that your quote and your own explanation offer technical reasons for separate services, but then you say it's not a technical pattern.

You'll need services. They're hard. If something is hard but it needs to be done, you should get good at it.

Like every fad, there's a backlash from people seeing the fad fall apart when used poorly.

Services are a good pattern with trade-offs. Weigh the trade-offs; just don't do things for the sake of doing them.

motorest · 4h ago
> Microservices only pay off when you have (...) independently evolving domains.

I don't see any major epiphany in this. In fact, it reads like a tautology. The very definition of microservice is that it's an independently evolving domain. That's a basic requirement.

9rx · 4h ago
> Sounds wrong

Sounds right, no? Service is what people provide, implied in the scope of a macro economy. Microservice then implies the same type of service, but within the micro economy of a single business.

tstrimple · 4h ago
I put my team through this as an inexperienced lead about 15 years ago. We were a team of less than a dozen who had a nice single solution file that you could build and run the entire stack from. At the end we were looking at roughly a dozen services, all of which required orchestration to get them running and working together. First-hand lessons in YAGNI and "do the simplest thing that works" which have stuck with me the rest of my career.
fallingknife · 6h ago
There are plenty of tech reasons for microservices. e.g. scaling high traffic services separately and separating low priority functionality from critical paths. I would agree that this is usually not a smart thing to do in a small org, but I have seen times where splitting out a high load path into a microservice has been very much worth it at a startup.
bluefirebrand · 5h ago
> scaling high traffic services separately

This is a great optimization once you have high traffic services

Building this way before you have any traffic at all is a great way to build the wrong abstractions because your assumptions about where your load will be might be wrong

gopher_space · 4h ago
Microservices are a technical solution to regional availability and pairing problems, and they start with a spreadsheet telling you when to make them based on requirements vs. cost. They're slow, expensive threads you should have a really good reason to use.

> Building this way before you have any traffic at all is a great way to build the wrong abstractions

These services only make sense to think about within specific traffic contexts. It'd be impossible to build the right abstraction.

mindcrash · 6h ago
I know of an org with ~2-3 devs who decided microservices would be cool. I warned them not to go that way, because they would surely face delivery and other issues they wouldn't have if they built the solution on an architecture archetype that was a better fit for the team and the solution, which in my view should have been a modular monolith. (The codebase at that point was already a monolith, in fact, but had a large amount of tech debt due to the breakneck speed at which features needed to be released.)

They ignored me and went the microservices way.

Guess what?

2 years later the rebuild of the old codebase was done.

3 years later and they are still fighting delivery and other issues they would never have had if they didn't ignore me and just went for the "lame" monolith.

Moral of this short story: I can personally say everything this article says is pretty much true.

xnx · 6h ago
> 3 years later and they are still fighting delivery and other issues

Having added a fancy new technology and a "successful" project to their resume, they're supposed to move on to the next job before the consequences of their actions are fully obvious.

abirch · 6h ago
Microservices are GREAT when 1 team owns each service. I haven't seen a good use case when you have 1 team supporting multiple microservices.
eloisant · 5h ago
1 team supporting multiple services is not great, but a monolith with more than 50 developers working on it (no matter how you split your teams) isn't great either.

That's why I don't like the term "microservice", as it suggests each service should be very small. I don't think it's the case.

You can have a distributed system of multiple services of a decent size.

I know "services of a decent size" isn't as catchy as "go for one huge monolith!" or "microservices!" but that's the sensible way to approach things.

elktown · 5h ago
> but a monolith with more than 50 developers working on it (no matter how you split your teams) isn't great either.

Why can the game industry etc. somehow manage this fine, while in the one place where it's actually possible to adopt this kind of artificial separation over the network, it's somehow considered impossible not to do it beyond an even lower number of devs than a large game has? Suggests confirmation bias to me.

The main problem with microservices is that they're preemptive: split whatever you want when it makes sense after the fact, but intentionally splitting everything up before the fact is madness.

bcrosby95 · 4h ago
Note that the game industry uses the term 'developer' differently. If a game has X developers, the vast majority of those people are not programmers. Engines also do a lot to empower non-programmers to implement logic in video games, taking a lot of the workload off of programmers.
elktown · 4h ago
Sure, but there are enough programmers to make the point.
nand_gate · 3h ago
Consider that game programmers are generally more skilled than enterprise devs due to tougher domain constraints and role scarcity.
elktown · 3h ago
But if that's true, it seems like a bad idea to introduce that chain reaction of complexity to less skilled devs.
pixl97 · 4h ago
How many of those game developers are actually art and asset developers?

How many times have AAA releases been total crap?

How many times have games been delayed by months or years?

How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?

How many times have the console manufacturers said "Yea, actually you have the option of running a client-server architecture with as many services as you want?"

elktown · 4h ago
Talk about an axe to grind. Are you really implying that AAA releases being bad might be due to not having microservices as a method?
pixl97 · 4h ago
No. I'm saying there is no real correlation between the quality of microservices and the quality of monorepos in games, or the amount of work required to build each one as a quality piece of software.

Comparing a game to almost any other piece of software, especially web based software, is how you end up with broken abstractions and bad analogies.

elktown · 3h ago
The point is that it's clearly not a major issue to have large teams working on the same monolithic codebase, the problems are just solved differently or are just vastly overstated in the first place.
Capricorn2481 · 4h ago
> How many times have AAA releases been total crap?

> How many times have games been delayed by months or years?

What are we arguing here? Because I can think of many microservice apps that are crap as well, and have no velocity in development.

> How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?

This is entirely irrelevant. We're talking about the trade-offs of separating networked services that could otherwise be one unit. You're saying "why do games have servers then" which is a befuddling question with an obvious answer.

That's like saying my web server is a microservice because it's not run in my client's browser. It makes no sense.

bcrosby95 · 5h ago
I call them nanoservices.
monero-xmr · 5h ago
We solve the problem of 50 devs working in a single monolith with folder and file structure, separation of concerns, basic stuff like this
bunderbunder · 5h ago
This is really what it comes down to right here. The real challenge is Conway's Law. Both the software architecture and the org chart need to be designed with Conway's Law in mind. If that hasn't happened then deciding between microservices and monolith is ultimately just deciding how you will be punished for your mistake.
roguecoder · 5h ago
People misunderstanding Conway's Law is a big part of the problem for sure. The law says nothing about team boundaries: it talks about communication pathways.

The paranoid socialist in me thinks big companies like team-sized microservices because it lets them prevent workers from talking to each other without completely ruling out producing running software.

When companies instead encourage forums for communication across team boundaries, it unlocks completely different architectural patterns.

bunderbunder · 4h ago
If team boundaries aren't a major influence on your communication pathways, what on earth do you even mean by "team"?
roguecoder · 4h ago
The most common alternative to organizing teams by service boundaries is to organize teams around the business problems to be solved. That is a lot easier to budget for than trying to staff by microservice boundary, doesn't have the coordination and planning overhead, and it means you aren't reliant on up-front planning to get to a functional solution or design.

In high-uncertainty greenfield development, Explore projects or Lean Startup-style experimentation, having developers be close to the users they are serving is very efficient.

It also lets those companies reteam frequently, without needing to change the software to match the new team boundaries, which is very helpful when growing the team.

roguecoder · 5h ago
Part of the problem is that many current programmers came up through functional programming or framework-based development. Microservices are often the first time they encountered modular programming or encapsulation, and so they equate "literally any architecture" with "microservices".

I've worked on monoliths with 400+ developers that were great, but it takes skills that people who have only ever worked in orgs that mandate microservice just don't have.

djtango · 5h ago
Could you elaborate on how functional programming relates to people's relationship with Microservices?
roguecoder · 4h ago
Sure!

Functional programming precludes encapsulation, so it doesn't scale indefinitely the way fractal paradigms can. Eventually, the complexity becomes overwhelming.

One effective solution to that is introducing microservices: programmers can still write entirely functional code, but have encapsulation in the form of services. They have to be micro, though, because conventionally-sized services are still big enough to strain the paradigm.

But I see junior engineers who aren't expected to think about the "architecture", by which they mean the modular design. They are handed a spec and they implement it, Mythical Man Month style. That treats organizing lines of code and organizing services as two completely-distinct activities, and depending on the company junior engineers are often not exposed to modular design until five or ten years into their careers.

antonvs · 1h ago
You’re suffering from a misunderstanding there. Functional programming is all about encapsulation, starting at the individual function boundary (closures can encapsulate state) and then at every layer above that.

Functional languages have some of the most rigorous module systems available. In fact Java adopted such a system recently, showing the weaknesses in its previous support for encapsulation via classes and packages.

wild_egg · 2h ago
> Functional programming precludes encapsulation

Since when? Maybe we have different definitions of "encapsulation" but this clause seems nonsensical to me. FP is huge on encapsulation

mjr00 · 5h ago
Folder and file structure and separation of concerns doesn't change the fact that if you have one deployable artifact, it's all sharing the same runtime when deployed. Which means the underlying versions of Java/Go/Python/etc, or core shared libraries, all need to be updated at the same time. All the code is far more coupled than it first seems.
roguecoder · 5h ago
That is not really an issue I've had with Java, but I would absolutely agree that Python is wildly unsuited as a production backend language.

I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though. It's much easier to use an operationalized language that knows backwards compatibility matters.

mjr00 · 5h ago
I was at AWS RDS when they upgraded the shared control plane code from Java 7 to 8. IIRC it was about 6 months for 5-10 developers more or less full-time. Absolutely massive timesink. The move to separate services happened shortly after that.

> I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though.

There's two things going for separate services (which may or may not be separate repos; remember a single repo can have multiple services):

1. You can do it piecemeal. 90% of your services will be 15-minute changes: update versions in a few files, let automated tests run, it's good to go. The 10% that have deeper compatibility issues can be addressed separately without holding back the rest. You can't separate this if you have a single deployable artifact.

2. Complexity is superlinear with respect to lines of code. Upgrading a single 1mLOC service isn't 10x harder than updating ten 100kLOC services, it's more like 20, 30x harder. Obviously this is hard to measure, but there's a reason these massive legacy codebases get stuck on ancient versions of dependencies. (And a reason companies pay out the ass for Oracle's extended Java 8 support, which they still offer.)

monero-xmr · 5h ago
All of that is so much easier with a single monorepo
mjr00 · 5h ago
Monorepo is orthogonal to services though. You can have a monorepo with multiple services in it.

Even with a monorepo, you will hit a point where you have 1, 10, 100 million lines of e.g. Python, realize you should upgrade from 3.8 to 3.14 because it's EOL, and feel a lot of pain as you have to do a big-bang, all-at-once change, fixing every single breaking change, including from libraries which you also have to update. There's no way around this in current mainstream languages.

dec0dedab0de · 4h ago
Don't know why you're downvoted, but this is the way.

Even if you're using micro services, it's usually best to have them in the same repo organized into different directories.

No matter how many people you have, you really should minimize working on the same files concurrently. This is trivial with most languages

antonvs · 1h ago
I'm guessing you're thinking of a certain kind of application (web apps perhaps?), where a monolith can make sense. But that's not the only kind of application.

We have dozens of service components that are all largely independent of each other - combining them together would be purely a packaging decision, and wouldn’t really simplify much. In some cases, it wouldn’t make sense or even be possible at all.

An example is our execution agent, which executes customer workflows - that’s completely independent both conceptually and from a security perspective. Each agent instance executes a single flow at a time, for resource consumption and security reasons, which entails an ecosystem of services to manage that - messaging, data ingestion at scale (100K flows per day, multi-petabyte “hot” datastore for active data), orchestration, and other supporting services such as data access and network routing.

All of our teams support multiple services, and many of them qualify as microservices.

__MatrixMan__ · 5h ago
As long as that team built those microservices to solve whatever problem they're responsible for solving I think it's better to let the problem domain dictate how many you need. Better to have seams that make sense in terms of the surrounding code than to have them in arbitrary places based on the org chart.

The trouble comes when some political wind blows and reshuffles the org chart, and now you're responsible for some services that only made sense in the context of a political reality that no longer exists.

steveBK123 · 5h ago
Every org I've seen push microservices did exactly the wrong version.

Rather than 1 microservice per team with many devs, it was some team that owns 20 services, generally way more services than developers.

It's probably just how non-lean the Mag7 were at peak vs how lean most other orgs that try to ape them are.

dec0dedab0de · 4h ago
I generally agree, but there are some decent use cases for one team to have multiple services micro or otherwise:

1. when the requirements are better served by a different language/location/environment/platform.. or by deploying a 3rd party app.

2. some of the services need to quickly scale up and down, and you have enough traffic for it to be worth it.

3. if you have a tight SLA for parts of the app but not all of it.

xingped · 6h ago
The best use case is promotion! Welcome to big tech, where all the teams get reshuffled every few months and every microservice exists because some dev needed a promotion. The greater the ratio of microservices to devs, the better your manager looks! (Dev work-life balance be damned, we pay you to ruin your life.)
roguecoder · 5h ago
I mean, "GREAT" until you need to do any kind of refactoring, or the company grows, or shrinks, or reorgs, or you have a feature that needs to change more than one service.

The "one team per microservice" makes code-enclosure style code ownership possible, but it is the least efficient way I have ever seen software written.

I've long wanted to hack an IDE so people are only allowed to change the Java objects they created, and then put six Java programmers in a room and make them write an application, yelling back and forth across the room. "CAN YOU ADD A VERSION OF THAT METHOD THAT ACCEPTS THIS NEW CLASS?" "SURE THING! TRY THAT?"

People discount the costs of microservices because they make management's job easier, especially when companies have adopted global promotion processes. But unless they are solving a real technical constraint, they are a shitty way to work as an engineer.

Alupis · 5h ago
I suspect a lot of the issues teams encounter with microservices stem from a lack of cohesive understanding of microservices.

If people on the team continue to think about the "system" as a monolith (what they already know and are comfortable with), you'll hit friction every step of the way, from design all the way out to deployment. Microservices throw out a lot of traditional assumptions and designs, which can be hard for people to subscribe to.

I think there has to be adequate "buy-in" throughout the org for it to be successful. Turning an existing mono into microservices is very likely to meet lots of internal resistance as people have varying levels of being "with it", so-to-speak.

ellisv · 5h ago
> 2 years later the rebuild of the old codebase was done.
>
> 3 years later and they are still fighting delivery and other issues they would never have had if they didn't ignore me and just went for the "lame" monolith.

Sounds to me like every startup.

ljm · 5h ago
One place I worked at got sold on microservices by Thoughtworks, along with a change to Java as the main language to be used.

As one would expect, they made bank from their consulting endeavor and rode off into the sunset while the rest of us wasted several years of our careers rewriting ugly but functional monolithic code into distributed Java based microservices. We could have been working on features and product but essentially were justifying a grift, adding new and novel bugs as we rebuilt stable APIs from scratch.

The company went under not long after the project was abandoned. Nobody, of course, would be held to account for it. I will no longer touch a tech consultancy like TW with a 10 foot barge pole.

wmf · 4h ago
What language was used before Java?
jihadjihad · 6h ago
Microservices [0]

> grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too

> seem very confusing to grug

0: https://grugbrain.dev/#grug-on-microservices

jayd16 · 6h ago
The short answer is it adds monkey patching to languages that don't have it.
bunderbunder · 6h ago
Monkey patching is a great technique for hacking rudimentary testability into legacy software as part of your preparations for refactoring it for maintainability.

But when I see a plan to use it that doesn't include a plan for how to stop using it again ASAP, I get very worried.

actionfromafar · 6h ago
This is true, but monkey patching is scary. If you can switch over a monolith, and keep a rollback in case of trouble, do that.

Make small changes in the monolith, one at a time, though.

jayd16 · 5h ago
Btw, do any good, modern CI tools support incremental rollout of multiple in-flight changes on monoliths? As in patch A is live, team B wants to rollout A+B and team C wants to rollout A+C. Ideally, A+B+C will eventually go live.

Do cloud/paas providers deeply support this flow anymore? Every dashboard would need to compare across multiple live versions and I haven't tried that in a while.

bunderbunder · 1h ago
I'd say this is a job for feature flags. That way you always have exactly one live version of the code, but still retain the ability to hide WIP from users until it's ready.

If you're instead doing this with feature branches or something like that, then by definition you don't have CI. You have NI: Never Integration.

Because, to an approximation, there's never any point in time where all of the code you're working on is integrated together so that everyone has a chance to see how what they're doing interacts with what everyone else is doing. And yes, it is possible for a branch to successfully auto-merge and produce something that compiles and passes all automated tests, and still introduce a horrible regression defect because of an unanticipated interaction between two different changes on two different feature branches. I don't see it happen often, but when it does it usually creates such a big production SNAFU that even once every 5 years is still way too often for my taste.
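A minimal sketch of the flag approach, assuming flags come from the environment rather than any particular flag service; the checkout flows are stand-ins:

    # Hypothetical feature flags: one live version of the code, with work-in-progress
    # paths hidden behind flags instead of long-lived feature branches.
    import os

    def flag_enabled(name: str) -> bool:
        # e.g. FEATURE_NEW_CHECKOUT=1 set only on a canary deployment
        return os.environ.get(f"FEATURE_{name.upper()}", "0") == "1"

    def legacy_checkout(cart: list[str]) -> str:
        return f"legacy checkout for {len(cart)} items"

    def new_checkout(cart: list[str]) -> str:
        return f"new checkout for {len(cart)} items"

    def checkout(cart: list[str]) -> str:
        # Both paths ship and integrate together; the flag decides which one users see.
        if flag_enabled("new_checkout"):
            return new_checkout(cart)
        return legacy_checkout(cart)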

anonymars · 4h ago
Wouldn't these just be different branches?
jayd16 · 2h ago
I'm referring to rolling out multiple merge downs (possibly from branches) from across the entire org into a monolithic deploy.
BoardsOfCanada · 5h ago
Because the network call turns the rule into a law.
frollogaston · 5h ago
This is also why app backends don't really need statically typed languages, no matter how big the company is. You have a well-defined API on the front, and you have a well-defined DB schema on the back, that's good enough.

The static typing makes even less sense at finer code scopes, like I don't need to keep asserting that a for-loop counter is an int.

tauoverpi · 3h ago
Statically typed languages, when used correctly, save engineering time both as you extend your service and when things go wrong, as the compiler helps you check that the code you've written, to some degree, meets your specification of the problem domain. With a weak type system you can't specify much of the problem domain without increased labour, but with a more expressive type system (and a team that understands how to use it) you can embed enough of the domain specification that implementing part of the business logic incorrectly or violating protocols turns into compile errors instantly, rather than possibly leaking to production.

As for your comment on `any`, the reason why one doesn't want to fall back on such is that you throw out most of the gains of using static types with such a construct when your function likely doesn't work with `any` type (I've never seen a function that works on absolutely anything other than `id :: a -> a` and I argue there isn't one even with RTTI).

Instead you want to declare the subset of types valid for your function using some kind of discriminated union (in rust this is `enum`, zig `union(enum)`, haskell ADTs/GADTs, etc etc) where you set a static bound on the number of things it can be. You use the type system to model your actual problem instead of fighting against it or "lying" (by saying `any`) to the compiler.

The same applies to services, APIs, protocols, and similar. The more work the compiler can help you with staying on spec the less work you have to do later when you've shipped a P1-Critical bug by mistake and none of your tests caught it.

frollogaston · 3h ago
The type system almost never catches a bug that proper testing would miss. And if the code has such nasty untested edge cases that you don't even notice a wrong type going somewhere, it'd probably behave wrongly even with the right types.

Indeed "any" breaks type checking all around it, but it can be contained more easily in a helper func with a simple return type. Most common case is your helper does a SQL query, and it's tedious and redundant to specify the type of rows returned when the SQL is already doing that.

connicpu · 2h ago
It saves development time because if I change an API my language server can immediately notify me about all the now-broken call sites, and I don't have to wait for tests to run to find out about all of them.
tauoverpi · 15m ago
The type system doesn't replace unit/snapshot/property/simulation tests, as its only job is specification. The type system is meant to be used in addition to testing, to reduce the set of possible inputs to a smaller domain such that it's easier to reason about what is possible and what isn't. The same would be true even if you go as far as formal verification of programs; you always need to test even when you have powerful static types!

For example, from `foo :: Semigroup a, Traversable t => t a -> a` I already know that whatever is passed to this function can be traversed and have its sum computed. It's impossible to pass something which doesn't satisfy both of those constraints, as this is a specification given at the type level which is checked at compile time. The things that cannot be captured as part of a type (bounded by the effort of specification) are then left to be captured by tests, which only need to handle the subset of things which the above doesn't capture (e.g. `Semigroup` specifies that you can compute the sum but it doesn't prevent `n + n = n` from being the implementation of `+`; that must be captured by property tests).

Another example, suppose you're working with time:

    tick :: Monad m => m Clock.Present

    zero    :: Clock.Duration
    seconds :: Uint -> Clock.Duration
    minutes :: Uint -> Clock.Duration
    hours   :: Uint -> Clock.Duration

    add :: Clock.Duration -> Clock.Present -> Clock.Future
    sub :: Clock.Duration -> Clock.Present -> Clock.Past

    is :: Clock.Duration -> Clock.Duration -> Bool

    until :: Clock.Future -> Clock.Present -> Clock.Duration
    since :: Clock.Past   -> Clock.Present -> Clock.Duration

    timestamp :: Clock.Present -> Clock.Past

    compare :: Clock.Present -> Clock.Foreign.Present -> Order

    data Order = Ahead Clock.Duration | Equal | Behind Clock.Duration

From the above you can tell what each function should do without looking at the implementation and you can probably write tests for each. Here the interface guides you to handle time in a safer way and tells a story `event = add (hours 5) present` where you cannot mix the wrong type of data ``until event `is` zero``. This is actual code that I've used in a production environment as it saves the team from shooting themselves in the foot with passing a `Clock.Duration` where a `Clock.Present` or `Clock.Future` should have been. Without a static type system you'd likely end up with a mistake mixing those integers up and not having enough test coverage to capture it as the space you must test is much larger than when you've constrained it to a smaller set within the bounds of the backing integer of the above.

In short, types are specifications, programs are proofs that the specification has a possible implementation, and tests ensure it behaves correctly for that the specification cannot constrain (or it would be too much effort to constrain it with types).

As for SQL, I'd rather say the issue is that the SQL schema is not encoded within your type system, and thus when you perform a query the compiler cannot help you with inferring the type from the query. It's possible (in zig [1] at least) to derive the type of a prepared SQL query at compile time, so you write SQL as normal and zig checks that all types line up. It's not that types cannot do this; your tool just isn't expressive enough. F# [2] is capable of this through type providers, where the database schema is imported, making the type system aware of your SQL table layouts and solving the "redundant specification" problem completely.

So with all of that, I assume (and do correct me if I'm wrong) that your view on what types can do is heavily influenced by typescript itself and you've yet to explore more expressive type systems (if so I do recommend trying Elm to see how you can work in an environment where `any` doesn't even exist). What you describe of types is not the way I experience them and it feels as if you're trying to fight against a tool that's there to help you.

[1]: https://rischmann.fr/blog/how-i-built-zig-sqlite
[2]: https://github.com/fsprojects/SQLProvider

roguecoder · 5h ago
"Need"? Probably not. But unlike microservices they don't really have downsides (at least not with modern IDEs and the automatic refactorings they support) and they do offer some benefits.

Statically-typed languages are a form of automatically-verified documentation, and an opportunity to name semantic properties that different modules have in common. Both of those are great, but it is awkward that it is usually treated as an all-or-nothing matter.

Almost no language offers what I actually want: duck typing plus the ability to specify named interfaces for function inputs. Probably the closest I've found is Ruby with a linter to enforce RDoc comments on any public methods.

frollogaston · 4h ago
I'm fine with types in shared libs, just not in the app layer code, where the cost outweighs the benefit. I think you can do the in-between you describe with Typescript, but every time I've been on a team that says "oh you can use `any`," one day they disallow it. Especially in a big corp where someone turns it into a metric and a promo target.
cgannett · 6h ago
grug mention grug brain. grug also have grug brain. grug like grug. grugs together strong unless too many grugs then Overgrug think 9 grugs make baby grug in one month and grug not think it work like that
didip · 5h ago
Micro services show their benefits in a large organization.

It’s a tool to solve people issues. They can remove bureaucratic hurdles and allow devs to somewhat be autonomous again.

In a small startup, you really don't gain much from them, unless the domain really necessitates them, e.g. the company uses Elixir but all of the AI tooling is written in Python/Go.

echelon · 5h ago
If your application has different load or resource requirements, you should build separate services, even in a startup.

You can put most of your crud and domain logic in a monolith, but if you have a GPU workload or something that has very different requirements - that should be its own thing. That pattern shouldn't result in 100 services to maintain, but probably only a few boundaries.

Bias for monolith for everything, but know when you need to carve something out as its own.

At scale, you're 100% correct.

convolvatron · 5h ago
microservices can also cause organizational dependencies and coordination that wouldn't otherwise be necessary. i've seen it create at least as many people issues as solve them. one seemingly innocuous example is the policy of 'everybody just uses whatever services they want', which can hugely increase the ongoing maintenance requirements and seems to require that everyone learn everything in order to be functional. which never happens, which means you're always chasing people down.
hn_throwaway_99 · 5h ago
I probably just haven't checked these comment threads enough yet because I'm surprised I haven't seen this posted, but even though this is a bit old now, https://youtu.be/y8OnoxKotPQ, there is a reason it resonated with so many. It's spot on with the downsides microservices can inflict.

I've certainly seen microservices be a total disaster in large (and small) organizations. I think it's especially important that larger organizations have standards around cross-cutting concerns (e.g. authorization, logging, service-to-service communication, etc.) before they just shout "OK, microservices, and go!"

demarq · 5h ago
One of those teams needs to go.
frollogaston · 5h ago
If they're doing two very different things, why?
demarq · 5h ago
At a larger organization this could be the case, but there is nothing Elixir could possibly be doing for the startup that Go would not do.

Remember the whole topic here is avoiding this tax

frollogaston · 5h ago
I was starting from the assumption that Elixir does something they need, but yeah in most cases Golang would cover the same thing. Even then, you probably have separate Golang and Python, or just two separate Python services.
jerf · 6h ago
Microservices are the software architecture analog to Conway's Law. You can't help but introduce some sort of significant architecture boundary at the boundary between teams, and while that doesn't have to be "microservices" that's certainly a very attractive option. But on the flip side, introducing those heavier-weight boundaries on to yourself, internal to a team, can be very counterproductive.

I can't prove this scales up forever but I've been very happy with making sure that things are carefully abstracted out with dependency injection for anything that makes sense for it to be dependency-injected, and using module boundaries internally to a system as something very analogous to microservices, except that it doesn't go over a network. This goes especially well with using actors, even in a non-actor-focused language, because actors almost automatically have that clean boundary between them and the rest of the world, traversed by a clean concept of messages. This is sometimes called the Modular Monolith.

Done properly, should you later realize something needs to be a microservice, you get clean borders to cut along and clean places to deal with the consequences of turning it into a network service. It isn't perfect but it's a rather nice cost/benefit tradeoff. I've cut, oh, 3 or 4 microservices out of monoliths in the past 5 years or so. It's not something I do everyday, and I'm not optimizing my modular monoliths for that purpose... I do modular monoliths because it is also just a good design methodology... but it is a nice bonus to harvest sometimes. It's one of the rare times when someone comes and quite reasonably expects that extracting something into a shared service will be months and you can be like "would you like a functioning prototype of it next week"?
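To make the modular-monolith-with-actors idea concrete, here's a toy Python sketch (the billing example and message shape are invented): each module owns its state and is addressed only through messages, so the boundary is already clean if it ever needs to become a network service:

    # Hypothetical in-process "actor": other modules talk to it only via messages,
    # never by touching its state, which gives a clean border to cut along later.
    import queue
    import threading

    class BillingModule:
        def __init__(self) -> None:
            self._inbox: queue.Queue = queue.Queue()
            self._balances: dict[str, int] = {}
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, message: dict) -> None:
            # The only public entry point; callers never read _balances directly.
            self._inbox.put(message)

        def _run(self) -> None:
            while True:
                msg = self._inbox.get()
                if msg["type"] == "charge":
                    customer = msg["customer"]
                    self._balances[customer] = self._balances.get(customer, 0) + msg["amount"]

    billing = BillingModule()
    billing.send({"type": "charge", "customer": "acme", "amount": 42})

Swapping the in-memory queue for a real broker or an RPC endpoint later is mostly mechanical, which is the "functioning prototype next week" part.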

roguecoder · 5h ago
Conway's law is about communication, not team boundaries. There is no requirement that we introduce a significant architectural boundary at the boundary between teams: companies choose to do so to avoid having cross-team communication.

The only way for significant architectural boundaries at team boundaries to not result in incredibly painful software, especially for a growing team, is to let the software organize the teams. Which means reorging the company whenever you need to refactor, and somehow guessing right about how many changes each component will need in the coming year.

It also means you can't have product and engineers explore a problem together, or manage by objective with OKRs since engineers aren't connected to business outcomes.

I know that all the ex-Amazonians are convinced this is the only way to build software, but it really, really isn't.

jerf · 3h ago
"Conway's law is about communication, not team boundaries."

I'm a spirit of the law sort of person, not the letter of the law. I don't care how you draw your internal organizational diagram; communication barriers are your team barriers.

It's for management to read the flow over time and keep the de jure boundaries somewhat sync'd with the de facto boundaries, but in the meantime the de facto ones are what will get written into your software.

xcskier56 · 5h ago
Microservices make sense from a technical perspective in startups if:

- You need to use a different language than your core application. E.g. we build Rails apps but need to use R for a data pipeline and 100% could not build this in ruby.

- You have 1 service that has vastly different scaling requirements than the rest of your stack. Then splitting that part off into its own service can help

- You have a portion of your data set that has vastly different security and lifecycle requirements. E.g. you're getting healthcare data from medicare.

Outside of those, and maybe a few other edge cases, I see basically no reason why a small startup should ever choose microservices... you're just setting yourself up for more work for little to no gain.

Scarblac · 5h ago
Splitting off a few services from an application is not the same as using microservices. With microservices you split off basically everything that would be a module in a normal application.
xcskier56 · 5h ago
I think that really depends on your definition. But I will also contend that even splitting your system into 2 or 3 services if it's not for strong reasons will 100% slow you down and cause long term headaches.

One project that I helped design had to split out a segment of the system b/c the data was eligibility records coming from health plans. This data had very different security and lifecycle requirements (e.g. we have to keep it for 7 or 10 years). Splitting out this service simplified some parts, but any time we need to cross the boundary between the 2 services, the work takes probably twice as long as it would if it were in a single service. I don't think it was the wrong decision, but the service definitely did not come for free.

codr7 · 4h ago
If you split off a small, isolated part of the application; that's pretty much the definition of a microservice.
shooker435 · 5h ago
In addition to having 1 service with vastly different scaling requirements, having 1 service with vastly different availability requirements may make sense to separate as well.

If you need to keep the lights on or maintain an SLA and can do so by separating a concern, it can really reduce risk and increase speed when deploying new features on "less important" components.

Akronymus · 5h ago
I personally wouldn't even call those microservices, but rather treat them closer to how a DB server is usually separate from an application one.
mikeocool · 5h ago
I pretty much agree with everything in this article; it's next to impossible to get service boundaries right in a startup environment.

Though, if you're on a small team and really want to use microservices, there are two places where I have found them to be somewhat advantageous:

* wrapping particularly bad third party APIs or integrations — you’re already forced into having a network boundary, so adding a service at the boundary doesn’t increase complexity all that much. Basically this lets you isolate the big chunk of crappy code involved in integrating with the 3rd party, and giving it a nice API your monolith can interact with.

* wrapping particularly hairy dependencies — if you’ve got a dependency with a complex build process that slows down deployments or dev setup — or the dependency relies on something that conflicts with another dependency — wrapping it in its own service and giving it a nice API can be a good way to simplify things for the monolith.
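A minimal sketch of the first pattern, with a hypothetical flaky vendor API: all the retries and odd field names live behind one small function, and whether that function runs in-process or as its own tiny service becomes a deployment detail:

    # Hypothetical anti-corruption wrapper around a messy third-party API.
    import json
    import urllib.request

    VENDOR_URL = "https://vendor.example.com/v2/legacy_lookup"  # illustrative

    def get_customer(customer_id: str) -> dict:
        """Return a clean, stable shape regardless of the vendor's quirks."""
        raw = _call_vendor({"cust-ref": customer_id, "fmt": "3"})
        return {
            "id": customer_id,
            "name": raw.get("CUST_NM", "").title(),
            "active": raw.get("STAT_CD") == "A",
        }

    def _call_vendor(params: dict, retries: int = 3) -> dict:
        body = json.dumps(params).encode()
        for attempt in range(retries):
            try:
                req = urllib.request.Request(
                    VENDOR_URL, data=body, headers={"Content-Type": "application/json"})
                with urllib.request.urlopen(req, timeout=5) as resp:
                    return json.load(resp)
            except OSError:
                if attempt == retries - 1:  # out of retries: let the caller see the failure
                    raise
        return {}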

roguecoder · 5h ago
You only need microservices for massive scale or to enable micromanagement of teams, but that doesn't mean you have to give up on clear module boundaries.

You can get the architectural benefits of microservices by using message-passing-style Object-Oriented programming. It requires the discipline not to reach directly into the database, but assuming you just Don't Do That, a well-encapsulated "object" is a microservice that runs in the same virtual machine as the other microservices.

Java is the most mainstream language that supports that: whenever you find yourself reaching for a microservice, instead create a module, namespace the database tables, and then expose only the smallest possible public interface to other modules. You can test them in isolation, monitor the connections between them, and bonus: it is trivial to deploy changes across multiple "services" at the same time.
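A rough Python analogue of that pattern (the comment is about Java, but the shape carries over), with an invented inventory module: it owns its namespaced tables and exposes only two functions to the rest of the app, assuming a sqlite3-style db connection is passed in:

    # Hypothetical "in-process microservice": the inventory module owns the
    # inventory_* tables; everything else goes through this tiny public interface.
    __all__ = ["stock_level", "reserve_stock"]

    _TABLE = "inventory_items"  # namespaced: only this module touches inventory_* tables

    def stock_level(db, sku: str) -> int:
        row = db.execute(f"SELECT qty FROM {_TABLE} WHERE sku = ?", (sku,)).fetchone()
        return row[0] if row else 0

    def reserve_stock(db, sku: str, qty: int) -> bool:
        # Fails closed rather than letting other modules poke at the table directly.
        if stock_level(db, sku) < qty:
            return False
        db.execute(f"UPDATE {_TABLE} SET qty = qty - ? WHERE sku = ?", (qty, sku))
        return True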

DarkNova6 · 5h ago
Or you have a good understanding of your logical boundaries and enforce them with ArchUnit.
hosh · 4h ago
One of the advantages of the BEAM / OTP ecosystem (Erlang, Elixir, and friends) is that you can construct "microservices" and think through what that means, all within a monolith. When it comes time to break it out, you can.

> Microservices only pay off when you have real scaling bottlenecks, large teams, or independently evolving domains.

The BEAM language platform can cover scaling bottlenecks (at least within certain ranges of scale) and independently evolving domains, but has many of the advantages of working with a monolith when the team is small and searching for product-fit.

Like anything there are tradeoffs. The main one being that you'd have to learn how to write code with immutable data structures, and you have to be more thoughtful on how concurrent processes talk to each other, and what kind of failure modes you want to design into things. Many teams don't know how to hire for more Erlang or Elixir developers.

siliconc0w · 5h ago
The biggest wins for microservices aren't really technical, they're organizational. They force you to break a problem down and allow each team to own a piece of it, including end to end delivery. This allows specialization of labor which is a key driver of productivity - including an ability to experiment and innovate. Every change is incremental by default, and well-documented external APIs are the only way to talk to other domains- no shared databases, filesystems, or internal APIs. It's not free and definitely takes some discipline and tooling to enforce shared standards (every service should have metrics, logging, tracing, discovery, testing, CI/CD, etc) but you'd need to build that muscle with a monolith as well.
utmb748 · 4h ago
You could keep infra as code, with logging, auth and so on in packages, gRPC or message queues for communication, and telemetry, monitoring/alerts and more as code too... we got to the point where creating a new service was just a new repo, a name, a port, and resource utilization.

Agree with the organizational win; also, smaller merge requests in the team were superb.

At around 5-10 devs on a monolith, we ran into conflicts more often: deployments, bigger merge requests, and releasing by feature were problematic. Microservices made the team more productive, but rules about tests/docs/endpoints/code were important.

frollogaston · 5h ago
The DB part can also get technical as performance comes into play. Most startups are probably not encountering this problem, but they could.
no_wizard · 5h ago
This may explain some of the popularity resurgence of SQLite (including distributed SQLite)

It makes Database Per Customer type apps really easy, and that is something a lot of SaaS products could benefit from.
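
As a rough sketch of the database-per-customer idea (assuming the xerial sqlite-jdbc driver is on the classpath; the file layout and helper class are hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // One SQLite file per tenant: backups, migrations, and deletion
    // all become per-customer operations.
    public final class TenantDb {
        public static Connection open(String customerId) throws SQLException {
            return DriverManager.getConnection("jdbc:sqlite:/var/data/tenants/" + customerId + ".db");
        }
    }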

frollogaston · 4h ago
Yeah, I keep telling people at work that we need to figure out how to make it easier for teams to manage their own DBs. There are so many teams trying to shove their data into some other team's DB.
utmb748 · 5h ago
I was in a startup that quit before hitting 1000 simultaneous users in the app, but performance was a top priority, so the data layer stack was quite big.
no_wizard · 6h ago
They have their place. In my experience, a good rule of thumb[0] is to ask whether there are actual benefits to something being a standalone service.

For example, we have an authentication microservice at work. It makes sense that it lives outside of the main application, because it's used in multiple different contexts and the service boundary allows it to be more responsive to changes, upgrades, and security fixes than having it be part of the main app, and it deploys differently than the application. It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process. It has helped keep the code focused on only primary concerns.

That said, you can't apply any of these patterns blindly, as is so often the case. A good technical leader should push back when the benefits don't actually exist. The real issue is lack of experience making technical decisions on merits.

This includes high level executive leaders in the organization. At a startup especially, they are still often involved in many technical decisions. You'd be surprised (well maybe not!) how the highest leadership in a company at a startup will mandate things like using microservices and refuse to listen to anything running counter to such things.

[0]: https://en.wikipedia.org/wiki/Rule_of_thumb

esafak · 6h ago
I don't think this merited a wiki link :)
no_wizard · 6h ago
It's an international forum; there may be at least 1 person who hasn't encountered this colloquialism before. It hinders nothing yet may be informative to someone who's unfamiliar.
Akronymus · 5h ago
And I don't see how the link detracts from the post at all. For ESL people, like me, it can be quite helpful to have such a link. (I find myself looking up such phrases quite often)
nazgulsenpai · 6h ago
Perhaps in consideration of a non-native English speaker who might not understand the phrase.
zsoltkacsandi · 6h ago
Most “benefits” assumed from separation can be achieved with clear interfaces and modular monoliths, without the cognitive and operational tax microservices impose.

> It also adds enough intentional friction that we don't accidentally put logic where it doesn't belong as part of the user authentication process.

Preventing misplaced logic is a matter of good code structure, well defined software development processes and team discipline - not something that requires splitting into a separate microservice, and definitely not something that you want to solve on system architecture level.

zbobet2012 · 5h ago
The largest benefit of microservices has always been lifecycle management, and "clear interfaces" in "modular monoliths" does not in fact solve that. If you update the logging library in a monolith, everyone takes that update even if it breaks half the teams.

That's a "large" organization problem. But "large" is actually not that big (about 5-10 scrum teams before this is a very large problem).

It also means that on critical systems, separating high risk and low risk changes is not possible.

Like all engineering decisions, this is a set of tradeoffs.

zsoltkacsandi · 4h ago
> The largest benefit of microservices has always been lifecycle management, and "clear interfaces" in "modular monoliths" does not in fact solve that.

What lifecycle are we really talking about? There are massive monoliths - like the Linux kernel or PostgreSQL - with long lifespans, clear modularity, and thousands of contributors, all without microservices. Lifecycle management is achievable with good architecture, not necessarily with service boundaries.

> If you update the logging library in a monolith, everyone takes that update even if it breaks half the teams.

This is a vague argument. In a microservice architecture, if multiple systems rely on the structure or semantics of logs — or any shared behavior or state - updating one service without coordination can just as easily break integrations. It’s not the architecture that protects you from this, but communication, discipline, and tests.

> It also means on critical systems separating high risk and low risk changes are not possible.

Risk can be isolated within a monolith through careful modular design, feature flags, interface boundaries, and staged rollouts. Microservices don’t eliminate risk - they often just move it across a network boundary, where failures can be harder to trace and debug.

I’m not against microservices. But the examples given in the comment I responded to reflect the wrong reasons (at least based on what I’ve seen in 15+ years across various workplaces) for choosing or avoiding a microservice architecture.

Microservices don’t solve coupling or modularity issues — they just move communication from in-process calls to network calls. If a system is poorly structured as a monolith, it will likely be a mess as microservices too — just a slower, harder-to-debug one.

no_wizard · 6h ago
>Most “benefits” assumed from separation can be achieved with clear interfaces and modular monoliths, without the cognitive and operational tax microservices impose.

Perhaps yes. Every situation should be evaluated on its merits. This came across as assuming we didn't try other solutions first - we absolutely did. A microservice is the best solution to the problems we needed solved in this case, even better than a modular monolith with clear interfaces.

>without the cognitive and operational tax microservices impose

When done correctly, I don't think there is a tax. Most operational questions should be automated away once discovered. The only 'tax' is that it lives separately from the larger application and is deployed independently, but in practice I haven't seen this add any notable overhead.

>Preventing misplaced logic is a matter of good code structure, well defined software development processes and team discipline

All true, and a microservice can aid all of these things too, but it isn't the solution you should reach for when solving for these things alone, in my opinion. That said, my colleagues and I have observed time saved on enforcing discipline around this issue once we separated the code from the main application. I can't deny that has been a good thing, because it has. Leaving that out would omit a benefit we've actually experienced, and I see no reason to do that.

All told, completely dismissing the value of microservices as a potential solution is no different than completely dismissing other solutions in favor of microservices. Things have their place, there are pros and cons to them, and should be evaluated relative to their merit for the situation.

You may find you never implement microservices, or implement very few, or perhaps the needs of an organization are such that it's a pattern used most of the time. But the technical merits of doing so - with any decision of this nature, not limited to microservices - should have a backing justification that includes why other solutions don't fit.

zsoltkacsandi · 4h ago
> Every situation should be evaluated on its merits. This came across as assuming we didn't try other solutions first - we absolutely did.

I completely agree. But this somewhat contradicts your original comment that caught my eye:

> In my experience, a good rule of thumb[0] is to ask whether there are actual benefits to something being a standalone service.

A rule of thumb is, by nature, a generalization — it simplifies decision making through heuristics. Benefits, on the other hand, are always subjective; they can only be interpreted in a given context.

And based on my experience, there will always be some benefits that can be used to justify factoring something out into a separate service. The challenge is that it's often easy to overemphasize those benefits, even when they don't outweigh the downsides. Your example with the auth service and the added friction is, in my view, a good illustration of a justification that might sound reasonable but can lead to unnecessary complexity. (Just to be clear, my intent here isn't to judge your decisions - I understand these trade-offs are often nuanced - and that's why again there is no good rule of thumb for this)

no_wizard · 3h ago
>I completely agree. But this somewhat contradicts your original comment that caught my eye:

It's a good rule of thumb for when you may want to evaluate whether a microservice is appropriate. That's the point, in context[0]. If you think it might be a relevant solution, that is a pretty good signal that evaluating it may be worth the time.

What I seem to be coming up against when discussing this is people conflate worthwhile evaluation with worthwhile solution. The nuance and details need to live in solution discovery, but how do you arrive at the right solution if you don't first evaluate which solutions might fit a given problem?

I stand by it being a good rule of thumb. If you think there are actual benefits - not perceived, but quantifiable benefits - of something being a standalone independent service, you might have a case for a microservice. I feel it's a good heuristic for narrowing solution evaluation.

It doesn't equate to saying a microservice is the solution if it meets that one criterion, only that you shouldn't rule it out.

>And based on my experience, there will always be some benefits that can be used to justify factoring something out into a separate service. The challenge is that it's often easy to overemphasize those benefits, even when they don't outweigh the downsides. Your example with the auth service and the added friction is, in my view, a good illustration of a justification that might sound reasonable but can lead to unnecessary complexity.

Bias is hard to overcome. Technical decision makers need to be keenly aware of this. I wish it was easier to identify when this comes into the situation as it plays a bigger role in all this than is often realized. I go to great lengths to validate my thoughts when making big technical decisions for this reason, and deciding on something like this is a big technical decision that deserves that approach, in my opinion.

One quality of a good technical decision maker is taking the time to quantify why it's a good solution and why other solutions are not, and that must hold up to the technical scrutiny of your peers. This challenges any assumptions missed. Ideally, your peers should be able to see through anything that wasn't given enough evaluation, like the benefits of doing something being overemphasized.

>good illustration of a justification that might sound reasonable but can lead to unnecessary complexity.

I agree, which is why I state elsewhere that it shouldn't be used solely for this purpose, but I'm not going to leave out a real benefit we saw. If there is a persistent organizational problem where engineers keep wanting to put code somewhere that is, in the broader context, inappropriate or adds complexity where it shouldn't, you may benefit from such friction, and it's okay to evaluate that aspect too.

[0]: By which I mean in the context of the article, which I think dismisses microservices as a potential solution with prejudice

zsoltkacsandi · 2h ago
> By which I mean in the context of the article, which I think dismisses microservices as a potential solution with prejudice

> What I seem to be coming up against when discussing this is people conflate worthwhile evaluation with worthwhile solution.

> It doesn't equate to saying a microservice is the solution if it meets that one criterion, only that you shouldn't rule it out.

I read the article; it's not against microservice architecture itself, but it points out that you shouldn't treat microservices as a starting point or a best practice.

There is even a section named "When Microservices Do Make Sense". It also highlights examples where microservices make sense ("Their post is a good example of how microservices can work when you have the organizational maturity and operational overhead to support them."), so I really do not see where anyone rules out, dismisses, or comes up against anything.

I somewhat understand and agree with what you are trying to argue against, but that's definitely not in this article.

> It's a good rule of thumb for when you may want to evaluate whether a microservice is appropriate.

This is too general to be practically useful — it applies to almost everything in life.

dkarl · 5h ago
My current take on microservices is that people pay serious attention to modularity and API design in the context of microservices. They work hard to break down the problem properly and design good interfaces between parts of the system.

In monoliths, they generally don't.

There's no logical reason why you couldn't pay as much attention to decomposition and API design between the modules of a monolith. You could have the benefit of good design without all the architectural and operational challenges of microservices. Maybe some people succeed at this. But in practice I've never seen it. I've seen people handle the challenges of microservices successfully, and I've never seen a monolith that wasn't an incoherent mess internally.

This is just my experience, one person's observations offered for what they're worth.

In practice, in the context of microservices, I've seen an entire team work together for two weeks to break down a problem coherently, holding off on starting implementation because they knew the design wasn't good enough and it was worth the time to get it right. I've seen people escalate issues with others' designs because they saw a risk and wanted to address it.

In the context of monoliths, I've never seen someone delay implementation so much as a day because they knew the design was half-baked. I rarely see anyone ask for design feedback or design anything as a team until they've screwed something up so badly that it can't be avoided. People sometimes make major design decisions in a split second while coding. What kind of self-respecting senior developer would spend a week getting input on an internal code API before starting to implement? People sometimes aren't even aware that the code they wrote that morning has implications for code that will be written later.

Theoretically this is okay because refactoring is easy in a monolith. Right? ... It is, right?

I'm basically sold on microservices because I know how to get developers to take design seriously when it's a bunch of services talking to each other via REST or grpc, and I don't know how to get them to take the internal design of a monolith seriously.

roguecoder · 4h ago
Bingo!

Every good monolith I've worked in (and I have worked in several, including one that was more than twenty years old) was highly-modular, well-designed with an easy-to-explain architecture.

The other thing they had in common was that code reviews talked about the aesthetics of the code and design, instead of just hunting for errors or skimming for security problems. It was relatively common to throw out the first proposed PR and start over, and that was fine because people were slicing the work small enough they were posting four to six PRs a week anyway.

It took the engineers at the company being willing to collaborate on the craft of software development and prioritize the long-term health of the code over short-term feature delivery. And the result of being willing to go a little bit slower day-to-day was that the actual feature delivery was faster than anywhere else I've ever worked.

Without a functioning professional culture, nothing is going to be great. But at least with microservices people do have to design an API at some point.

rho4 · 4h ago
This is probably the first argument for microservices I heard that makes sense to me.

Not that I would ever want to give up our monolith, but we do experience the problems you point out.

bitcurious · 5h ago
I'll go against the grain and say that microservices have advantages for small dev teams embedded in non-tech orgs.

1. You get to minimize devops/security/admin work. Really a consequence of using serverless tooling, but you land on something like a microservices architecture if you do.

2. You can break out work temporally. This is the big one - when you're a small team supporting multiple products, you often don't have continuity of work. You have one project for a few months, a completely unrelated product for another few months. Microservice architectures are easier to build and maintain in that environment.

roguecoder · 4h ago
Watch out for bit rot, though: it is very easy for a startup to come back to one of those microservices six months later and discover the dependencies are borked and it no longer even builds.

Each repo you create is one more set of Dependabot alerts you need to keep on top of.

codr7 · 5h ago
Microservices minimize devops/security/admin work?

What planet are you living on?

roguecoder · 4h ago
I assume he means building the product out of AWS legos like Lambdas. Stick it all under one account, manage it manually instead of trying to deal with Terraform and it isn't too bad.

Heroku is still way easier, though.

4ndrewl · 3h ago
The article almost gets there, but the key is this:

Microservice architecture is a deployment strategy.

If you have a problem with deployments (e.g. large numbers of teams, perhaps some external suppliers running at different cadences, or with different tech stacks) then microservices are a fine solution to this.

Ensorceled · 5h ago
Years ago I attended a local meetup where the CTO of a local startup gave a presentation on their, mostly successful, microservice rollout.

In the Q&A afterward, another local startup CTO asked about problems their company was having with their microservices.

The successful CTO asked two questions: "How big is your microservices tooling team?" and "How big is your Dev Ops Team?"

His point was, if your development team is not big enough to afford dedicated tooling and DevOps teams, it's not big enough to afford microservices.

utmb748 · 5h ago
I was in an org with a dedicated 10-person DevOps team; it was smooth, and as a dev I could push requests to their repos... but with only 3 DevOps engineers available they were so busy that my requests for basic stuff got buried in the backlog. You can develop, but you still need to maintain from time to time.
addisonj · 4h ago
I hope this is more common knowledge these days... but this is good framing and makes the costs really clear.

What this article doesn't cover... and where a good chunk of my career has been, is when companies are driven to break out into services, which might be due to scale, team size, or becoming a multi-product company. Whatever the reason, it can kill velocity during the transition. In my experience, if this is being done to support becoming multi-product, this loss in velocity comes at the worst time and can sink even very competent teams.

As an industry, the gap between what makes sense for startups and what makes sense for scale can be a huge chasm. To be clear, I don't think it means you should invest in micro-services on the off-chance you need to hit scale (which I think is what many convince themselves of), nor does it mean that you should always head to microservices even when you hit those forcing functions (scaling monoliths is possible!)

That said, modularity, flexibility, and easy evolution are super important as companies grow, and I really do think the next generation of tools and platforms will benefit from suiting themselves better to evolution and flexibility than they do today. One idea I have thought about for some time is platforms that "feel" like a monolith, but are 1) more concrete in building firmer interfaces between subsystems and 2) flexible in how calls happen across these interfaces (imagine being able to run a subsystem embedded, or transparently move calls over an RPC interface). Certainly that is "possible" with well structured code in platforms today... but it isn't always natural.

I am not sure of the answer, but I really hope the next 10 years of my career has fewer massive chasms crossed via huge multi-year painful efforts, and more cautious, careful evolution enabled by well-considered tools and platforms.

bob1029 · 5h ago
Monolith really is the best path and I question if you couldn't make it work in ~100% of cases if you genuinely tried to.

One should consider if they can dive even deeper into the monolithic rabbit hole. For example, do you really need an external hosted SQL provider, or could you embed SQLite?

From a latency & physics perspective, monolith wins every time. Making a call across the network might as well take an eternity by comparison to a local method. Arguments can be made that the latency can be "hidden", but this is generally only true for the more trivial kinds of problems. For many practical businesses, you are typically in a strictly serialized domain which means that you are going to be forced to endure every microsecond of delay. Assuming that a transaction was not in conflict doesn't work at the bank. You need to be sure every time before the caller is allowed to proceed.

The tighter the latency domain, the less you need to think about performance. Things can be so fast by default that you can actually focus on building what the customer is paying for. You stop thinking about the sizes of VMs, who's got the cheapest compute per dollar and other distracting crap.

no_wizard · 5h ago
>I question if you couldn't make it work in ~100% of cases if you genuinely tried to.

You could say this about almost any pattern, if you genuinely tried to make microservices work it could work in ~100% of cases, I'm sure of that.

It's this pattern of dismissing or accepting a solution with strong prejudice, without evaluating the merits, that is the real problem. That's the behavior we need to get away from.

We as an industry may find that modular monoliths trend toward the top as a result (I hate to speculate too much; every company is different and there are in fact other patterns of development beyond the two mentioned), but that would be a side effect if true. The real win is moving away from such prejudiced behavior.

bob1029 · 4h ago
> It's this pattern of dismissing or accepting a solution with strong prejudice, without evaluating the merits, that is the real problem.

I spent a solid 3 years of my career attempting to make micro service architecture work in a B2B SaaS ecosystem. I have experience. This is not prejudice.

> modular monoliths

I don't see the meaningful difference between this and microservices.

no_wizard · 4h ago
>I spent a solid 3 years of my career attempting to make micro service architecture work in a B2B SaaS ecosystem. I have experience. This is not prejudice.

Yes, you do have experience, and it may not match others'. That's my point. At previous jobs, I had terrible microservice experiences; they were everything people complained about them to be. Yet, by setting that aside and really diving into evaluation on merits, I came around on the idea, because I understand the failures of my previous experience came down to misapplication of the concepts, not the concepts themselves.

That's what we need more of: the kind of evaluation and reflection one should do when making these decisions (or being a part of a group that does). I don't think we should discount our own experiences, but we should strive to separate them from the process of appropriate technical decision making, lest we become overly biased for or against something.

>I don't see the meaningful difference between this and microservices.

The most obvious is the independence microservices have. They're truly independent. Sometimes that is exactly what you want

codr7 · 4h ago
Containerization unfortunately pretty much killed embedded DBs; it's a shame, because you can squeeze a lot of performance out of not having to access the DB over a network.
roguecoder · 4h ago
Containerization is another thing that is wildly overused by startups that don't yet have the problems it solves.
rglover · 4h ago
The best architecture approach I've ever found is:

1. Start with a monolith

2. If necessary, set up a job server that can be vertically/horizontally scaled and then give it a private API, or, give it access to the same database as the monolith.

For an overwhelming number of situations, this works great. You separate the heavy compute workloads from the customer-facing CRUD app and can scale the two independently of one another.
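
As a rough sketch of step 2 in the shared-database variant (the jobs table, its columns, and the Postgres-style claim query are assumptions for illustration, not a prescription):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Standalone job server: polls the monolith's database for pending work,
    // so heavy compute runs in a process that scales separately from the web app.
    public class JobServer {
        public static void main(String[] args) throws Exception {
            String url = System.getenv("JDBC_URL"); // e.g. jdbc:postgresql://...
            try (Connection db = DriverManager.getConnection(url)) {
                while (true) {
                    try (Statement st = db.createStatement();
                         ResultSet rs = st.executeQuery(
                             "UPDATE jobs SET status = 'running' WHERE id = " +
                             "(SELECT id FROM jobs WHERE status = 'pending' " +
                             " ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED) " +
                             "RETURNING id, payload")) {
                        if (rs.next()) {
                            runJob(rs.getLong("id"), rs.getString("payload"));
                        } else {
                            Thread.sleep(1000); // queue empty, back off
                        }
                    }
                }
            }
        }

        static void runJob(long id, String payload) {
            // heavy work (PDF generation, exports, email batches) goes here
        }
    }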

The whole microservices thing always seemed like an attempt by cloud providers to just trick you into using their services. The first time I ever played with serverless/lambda, I had a visceral reaction to the deployment process and knew it would end in tragedy.

PathOfEclipse · 3h ago
I don't know what the "right" answer is, but I worked at a company that built a fairly unwieldy monolith that was dragging everyone down as the company matured into a mid-sized one. And once your product is successfully used at scale, it becomes much more difficult to make architectural changes. Is there a middle ground? Is there a way to build a monolith while making it easier to factor apart services earlier rather than later? I don't know, and I don't think the article addresses that either.

The article does mention "invest in modularity", but to be honest, if you're in frantic startup mode dumping code into a monolith, you're probably not caring about modularity either.

Lastly, I would imagine it's easier to start with microservices, or multiple mid-sized services if you're relying on advanced cloud infra like AWS, but that has its own costs and downsides.

gleenn · 6h ago
Has anyone tried something like Polylith, which lets you build all your code like normal functions for local dev and testing and then seamlessly pull parts out into network services as needed?

https://github.com/polyfy/polylith

alaithea · 5h ago
Pretty sure I saw someone say this in the past, but microservices might as well have been a psyop pushed out by larger, successful startups onto smaller, earlier-stage companies and projects. I say "might as well" because I don't think there's any evidence for it, but the number of companies and projects that have glommed onto the microservices idea, only to find their development velocity grind to a halt, has to be in the hundreds at least (thousands?). Whether the consequences were intended or not, microservices have been a gift on the competitive landscape for the startups that pushed microservices in the first place.
karmakaze · 4h ago
I've worked in monoliths done poorly and well, as well as bad and good implementations of microservices (even if done for the wrong reasons). The part of this post on 'if you go microservices' doesn't state things strongly enough. My takeaways comparing what worked vs what didn't:

- Use one-way async messaging. Making a UserService that everything else uses synchronously via RPC/REST/whatever is a very bad idea and an even worse time. You'll struggle for even 2-nines of overall system uptime (because they don't average, they multiply down).

- 'Bounded context' is the most important aspect of microservices to get right. Don't make <noun>-services. You can make a UserManagementService that has canonical information about users. That information is propagated to other services which can work independently each using the eventually consistent information they need about users.

There's other dumb things that people do like sharing a database instance for multiple 'micro'-services and not even having separately accessible schemas. In the end if done well, each microservice is small and pleasant to work on, with coordination between them being the challenging part both technically and humanly.
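
A minimal sketch of the consuming side of that (the event shape and the "orders" context are made up for illustration; the actual bus, whether Kafka, SQS, or something else, is left out):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The user-management service publishes UserUpdated events one-way;
    // this hypothetical orders context keeps its own eventually consistent
    // copy of the few user fields it needs, instead of calling a UserService
    // synchronously on every request.
    record UserUpdated(String userId, String email) {}

    final class OrdersUserView {
        private final Map<String, UserUpdated> byId = new ConcurrentHashMap<>();

        // invoked by whatever message consumer you wire up
        void onUserUpdated(UserUpdated event) {
            byId.put(event.userId(), event);
        }

        // order handling reads local state; no cross-service call on the hot path
        String emailFor(String userId) {
            UserUpdated u = byId.get(userId);
            return u == null ? null : u.email();
        }
    }

If the user service is down, orders keep flowing with slightly stale user data, which is how you avoid the multiplying-nines problem described above.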

parpfish · 5h ago
I’ve read a lot of pros/cons about micro services over the last decade, but don’t have a clear definition for what qualifies.

My current job insists that they have a “simple monolith” because all the code is in a single repo. But that repo has code to build dozens of python packages and docker containers. Tons of deploy scripts. Different teams/employees are isolated to particular parts of the codebase.

It feels a lot like microservices, but I don’t know what the defining feature of microservices is supposed to be

shooker435 · 5h ago
Sounds like microservices deployed from a monorepo...

Which honestly may be the future if LLMs stay in a dev's toolkit. Plugging in an AI model to a monorepo provides so much context that can't be easily communicated across microservices in separate repos.

johncoltrane · 5h ago
In 2016-17, I was involved with a rather large microservice-heavy rewrite project that didn't go particularly well. The main reason was that microservices were actually a good fit for the _planned_ organisational structure, but not for the one that was eventually put in place. When you go from 4 vertically integrated independent teams to 2 backend devs, 2 frontend devs, and 1 "devops" without stopping 5 minutes to rethink the architecture, of course shit will happen.
phodge · 2h ago
This article conflates the Monolith|Microservices and Monorepo|Polyrepo dichotomies. Although it is typical to choose Microservices and Polyrepo together or Monolith with Monorepo, it's not strictly necessary and the two architectural decisions come with different tradeoffs.

For example you may be forced to split out some components into separate services because they require a different technology stack to the monolith, but that doesn't strictly require a separate source code repository.

CharlieDigital · 5h ago
Google had a really great paper on this about 2 years back titled Towards Modern Development of Cloud Applications[0] that talks about how teams often:

    > ... conflate logical boundaries (how code is written) with physical boundaries (how code is deployed)
It's very easy to read and digest and I think it's a great paper that makes the case for building "modular monoliths".

I think many teams do not have a practical guide on how to achieve this. Certainly, Google's solution in this case is far too complex for most teams. But many teams can achieve the 5 core benefits that they mentioned with a simpler setup. I wrote about this in a blog post, A Practical Guide to Modular Monoliths with .NET[1], with a GitHub repo showing how to achieve this[2] as well as a video walkthrough[3].

This approach has proven (for me) to be easy to implement, package, deploy, and manage and is particularly good for startups with all of the qualities mentioned in the Google paper without much complexity added.

[0] https://dl.acm.org/doi/pdf/10.1145/3593856.3595909

[1] https://chrlschn.dev/blog/2024/01/a-practical-guide-to-modul...

[2] https://github.com/CharlieDigital/dn8-modular-monolith

[3] https://www.youtube.com/watch?v=VEggfW0A_Oo

ngrilly · 2h ago
The team responsible for a single microservice at a Big Tech company is often as large as, or even larger than, the entire engineering team of a startup. The same can be true for the size of the codebase. This is why it often doesn't make sense for a startup to introduce microservices.
duxup · 5h ago
I can't imagine a small team following ALL the rules of microservices benefiting much at all. It makes no sense.

For large orgs where each service has a dedicated team it starts to make sense... but then it becomes clear that microservices are an organizational solution.

metalrain · 4h ago
I think separately deployed services built from the same monolithic codebase make a lot of sense. You get to choose resources per service, but can get the benefits of sharing code/tests.
root_axis · 5h ago
The problem is the "micro" part. Service oriented architecture is generally the way to go, but the service boundaries should be defined by engineering constraints, not made arbitrarily small.
frollogaston · 5h ago
Where I work, they consider a service managed full-time by a team of 2-8 people a "microservice." Before that, they had a monolith shared by a dept of ~120.
Cthulhu_ · 6h ago
I've seen microservices get introduced at companies... it never solved a real problem, it was more to scratch a developer's itch, or cargo cult ideas. It started to fall apart when they tried to figure out how to get an order service to fetch the prices of a product from the product pricing service, only to realise they need to hold onto the product price at the time of placing the order (it was a high volume / short product life cycle type of e-commerce), so uhh.. maybe we should duplicate this product into the order service? And then it would need to end up at a payment or invoicing service, more data duplication. And everything had to go through a central message bus to avoid web-like sprawl.

The other one was a microservice architecture in front of the real problem, a Java backend service that hid the real real problem, one or more mainframes. But the consultants got to play in their microservices garden, which was mostly just a REST API in front of a Postgres database that would store blobs of JSON. And of course these microservices would end up needing to talk to each other through REST/JSON.

I've filed this article in my "microservices beef" bookmarks folder if I ever end up in another company that tries to do microservices. Of course, that industry has since moved on to lambdas, which is microservices on steroids.

vjvjvjvjghv · 6h ago
I always tell people if they can’t handle writing decent libraries they also won’t handle microservices. Especially when a 3 person team cranks out 15 microservices, ideally with different languages.
abhisek · 6h ago
Totally agree. Microservices unnecessarily make things complicated for small teams. IMHO they solve the problem of velocity ONLY when a large engineering team is slowed down by too many release and cross-cutting dependencies on a monolith. Although I see people solving that effectively with modular monoliths, merge queues, and CODEOWNERS.

The few cases where microservices probably make sense are when we have a small and well-bounded use case like webhook management, notifications, or maybe read scaling on some master dataset.

mamidon · 6h ago
Can you elaborate a bit on codeowners, I've not heard of that kind of solution before.
Jemaclus · 6h ago
They're a way to assign ownership to individuals or teams on a granular basis, rather than at the repo-level. You can assign entire folders or individual files to people.

Here's more at Github's docs: https://docs.github.com/en/repositories/managing-your-reposi...

mithametacs · 5h ago
You just put a text file with the names of the team or developers who own the directory.
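
For example, a minimal CODEOWNERS file might look like this (the paths and team names are made up):

    # .github/CODEOWNERS - reviews are auto-requested from the owning team
    /billing/   @acme/billing-team
    /search/    @acme/search-team
    *.tf        @acme/platform-team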
bzmrgonz · 5h ago
I agree, most startups could do with a decent hypervisor plus a VPS for web visibility, but honestly self-hosting is fine. I'm surprised no one has built a startup environment in a box of boxes (pfsense/truenas/proxmox/minIO/openwrt) <should cover almost any tech stack imaginable>; if you want bleeding edge, add MicroCloud from Canonical, or Incus.
lenerdenator · 5h ago
You don't know what your micro services need to be until you start running into the problems posed by your monolith.
yawnxyz · 5h ago
I've found using Cloudflare Workers really productive, esp. their R2 and Durable Objects bindings. Are these technically "microservices" and should they be avoided if following trad software patterns?

Using them makes it easy to build endpoints for things like WhatsApp and other integrations

utmb748 · 5h ago
From my experience, microservices were great when there are more devs; the advantage is organizational more than technical.

CI/CD and infra can be as code and shared across services; K8s port-forward for local development, better resource utilization, multiple envs and so on; the available tooling, if set up correctly, usually keeps working.

Another plus not mentioned: usually smaller merge requests, features can be split and better estimated, fewer conflicts during work or testing... and the possibility to share code in packages.

Also, if there are no tests, it doesn't matter whether it's a monorepo or microservices: you can break things easily or spend more time.

You should budget for tests and documentation, and keep working on tech debt.

Another common issue I see: too big a tech stack because something is popular.

stevebmark · 4h ago
> In reality, business logic doesn’t directly map to service boundaries

Love this quote, it should be a poster on the wall of any dev who pushes Domain Driven Design on an engineering team.

nottorp · 5h ago
IT is full of cult-like concepts that promise to solve all your problems. Microservices is just one of them.

The catch is to keep them all in mind and use them in moderation.

Like everything else in life.

sisve · 5h ago
I wish more people would understand that there's a big middle ground between monolith and microservices, and that it's most likely the correct choice for most situations.

Context and nuances

Havoc · 5h ago
You can always use a microservice-like architecture and just not slice it too finely, i.e. too micro.

Stuff like k8s works fine as a Docker delivery vehicle.

mountainriver · 6h ago
I honestly can’t believe we are still talking about microservices.

Just use regular sized services

monero-xmr · 6h ago
My friend briefly worked at a company where every API was a lambda. Each lambda had a git repo. Lambdas would often call into other lambdas. In order to make a feature, it might involve touching 10+ lambdas. They had over 200 lambdas after a year. Total nightmare
sitkack · 5h ago
I think the major issue that I see, and I could be wrong, is that if you want to change some underlying functionality in a dependent function several layers deep, you would need to change all the intermediate functions just so that you could call that dependent function.

I have played around with architectures like this, but I allowed the caller to patch in a dependent function in the call, with those function overlay overrides passed from function to function.

Apologies, used sst

alabastervlog · 5h ago
Lemme guess: their scale was in the tens of requests per minute, and the performance was somehow still bad.
monero-xmr · 5h ago
Yes it was a disaster and he bounced as quick as he could as the CTO could not be reasoned with. Many such cases
mgaunard · 5h ago
Even worse, I've seen large systems where everything was built as nanoservices.
alaithea · 5h ago
There was a point in time (circa 2019-2020) when the madness got so severe that every new feature ended up as a microservice backed by a DB with a single table (plus a couple tables for API keys, migration tracking, etc.)

I love it when all my CRUD has to be abstracted over HTTP. /s

nicman23 · 5h ago
People underestimate how much a single 5 euro VPS with a LAMP stack can do.
demarq · 5h ago
Startups and micro services shouldn’t even be in the same sentence
mvdtnz · 1h ago
The opposite of "microservices" is not "monoliths". The organisation I work at has something like 250-300+ microservices all in a monolith. This is the best of both worlds for large applications, in my opinion.

(It's no coincidence that this company was largely loaded up with ex-Googlers in the early days).

swisniewski · 3h ago
I see this a lot ("if you are a startup, just ship a monolith").

I think this is the wrong way to frame it. The advice should be "just do the scrappy thing".

This distinction is important. Sometimes, creating a separate service is the scrappy thing to do, sometimes creating a monolith is. Sometimes not creating anything is the way to go.

Let's consider a simple example: adding a queue poller. Let's say you need to add some kind of asynchronous processing to your system. Maybe you need to upload data from customer S3 buckets, or you need to send emails or notifications, or some other thing you need to "process offline".

You could add this to your monolith, by adding some sort of background pollers that read an SQS queue, or a table in your database, then do something.

But that's actually pretty complicated, because now you have to worry about how much capacity to allocate to processing your service API and how much capacity to allocate to your pollers, and you have to scale them all up at the same time. If you need more polling, you need more API servers. It becomes a giant pain really quickly.

It's much simpler to just separate them than it is to try to figure out how to jam them together.

Even better though, is to not write a queue poller at all. You should just write a Lambda and point it at your queue.

This is particularly true if you are me, because I wrote the Lambda Queue Poller, it works great, and I have no real reason to want to write it a second time. And I don't even have to maintain it anymore because I haven't worked at AWS since 2016. You should do this too, because my poller is pretty good, and you don't need to write one, and some other schmuck is on the hook for on-call.

Also you don't really need to think about how to scale at all, because Lambda will do it for you.
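
As a rough sketch of that shape using the standard aws-lambda-java-core / aws-lambda-java-events libraries (the handler body here is just a placeholder):

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.SQSEvent;

    // SQS-triggered Lambda: the Lambda service does the polling, batching,
    // retries, and scaling; you only write the per-message handler.
    public class QueueWorker implements RequestHandler<SQSEvent, Void> {
        @Override
        public Void handleRequest(SQSEvent event, Context context) {
            for (SQSEvent.SQSMessage msg : event.getRecords()) {
                process(msg.getBody()); // e.g. send an email, pull from a customer bucket
            }
            return null;
        }

        private void process(String body) {
            // the offline work goes here
        }
    }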

Sure, at some point, using Lambda will be less cost effective than standing up your own infra, but you can worry about that much, much, much later. And chances are there will be other growth opportunities that are much more lucrative than optimizing your compute bill.

There are other reasons why it might be simpler to split things. Putting your control plane and your data plane together just seems like a headache waiting to happen.

If you have things that happen every now and then ("CreateUser", "CreateAccount", etc) and things that happen all the time ("CaptureCustomerClick", or "UpdateDoorDashDriverLocation", etc) you probably want to separate those. Trying to keep them together will just end up causing you pain.

I do agree, however, that having a "Users" service and an "AccountService" and a "FooService" and "BarService" or whatever kind of domain driven nonsense you can think of is a bad idea.

Those things are likely to cause pain and high change correlations, and lead to a distributed monolith.

I think the advice shouldn't be "Use a Monolith", but instead should be "Be Scrappy". You shouldn't create services without good reason (and "domain driven design" is not a good reason). But you also shouldn't "jam things together into a monolith" when there's a good reason not to. N sets of crud objects that are highly related to each other and change in correlated ways don't belong in different services. But things that work fundamentally differently (a queue poller, a control-plane crud system, the graph layer for grocery delivery, an llm, a relational database) should be in different services.

This should also be coupled with "don't deploy stuff you don't need". Managing your own database is waaaaaaay more work than just using Dynamo DB or DSQL or Big Table or whatever....

So, "don't use domain driven design" and "don't create services you don't need" is great advice. But "create a monolith" is not really the right advice.

bossyTeacher · 6h ago
Problem is that, come recruiting time, interview gatekeepers are filtering out candidates who don't have the shiny words of the season (see microservices, unit tests, lots of abstractions, etc.). It's like a dating app game. Everyone knows it's overblown but they are still playing the game. The idea that not every company needs to make the same architectural and technological decisions is a concept way too complex for interview gatekeepers.
hereonout2 · 5h ago
Are unit tests a shiny fad? Second time I've seen it mentioned in this thread. Is there some other type of testing I should be doing, or have I been doing it all wrong for the last two decades?
roguecoder · 4h ago
For unit testing to pay off, it requires having modular units to test.

Programmers coming up through frameworks or functional programming often don't have those, and so the techniques OO unit testers use don't translate well at all. If the first "unit" you build is a microservice, the first possible "unit" test is the isolation test for that service.

I have watched junior engineers crawl over glass to write tests for something because they didn't know how to write testable code yet, and then the tests they write often make refactoring a-la-Martin-Fowler's-book impossible.

(And that is leaving aside the consultancies that want to be able to advertise "100% test coverage!" but don't actually care if the tests make software harder to maintain in the long run because they aren't going to be there.)

Eventually we'll be able to acknowledge that there are a lot of different skills in our profession, and that writing good code isn't about being "smart": it's about knowing how to write code well. But until then people will keep blaming the tools they don't know how to use.

codr7 · 4h ago
Integration testing?

Less mocking, more bang for the buck.

roguecoder · 4h ago
I am so mad that the mockists stole the word "unit test" for their thing. The original definition of a unit test was writing "integration" tests for each of the sub-components of a system.

(Mockist tests are fine for people who really want them, as long as you delete them before checking in the code.)

bossyTeacher · 49m ago
I thought mockist tests are written in a separate test module
httpz · 3h ago
Obligatory KRAZAM video on microservices https://www.youtube.com/watch?v=y8OnoxKotPQ
mannyv · 6h ago
If you don't know what you're doing any architecture will be fine.

If you don't understand the benefit of xyz then don't do it.

Our microservice implementation is great. It scales with no maintenance, and when you have three people that makes a difference.

mattbillenstein · 6h ago
You're probably on the early part of the curve where anything works - small team, simple product, no scale - come back when one or two of these changes...
jmyeet · 5h ago
Microservices are a fad.

Every service boundary you have to cross is a point of friction and a potential source of bugs and issues, so by having more microservices you just have more that can go wrong, by definition.

A service needs to maintain an interface for compatibility reasons. Each microservice needs to do that and do integration testing with every service they interact with. If you can't deploy a microservice without also updating all its dependencies then you don't have an independent service at all. You just have a more complicated deployment with more bugs.

The real problem you're trying to solve is deployment. If a given service takes 10 minutes to restart, then you have a problem. Ideally that should be seconds. But more ideally, you should be able to drain traffic from it then replace it however long it takes and then slowly roll it out checking for canary changes. Even more ideally, this should be largely automated.

Another factor: build times. If a service takes an hour to compile, that's going to be a huge impediment to development speed. What you need is a build system that caches hermetic artifacts so this rarely happens.

With all that above, you end up with what Google has: distributed builds, automated deployment and large, monolithic services.

sergiotapia · 5h ago
microservices are a gigantic waste of time. like TDD.

it takes skill and taste to use only enough of each. unfortunately a lot of VC $$$ has been spent by cloud companies and a whole generation or two of devs are permasoiled by the micro$ervice bug.

don't do it gents. monolith, until you literally cannot go further, then potentially, maybe, reluctantly, spin out a separate service to relieve some pressure.

gavmor · 5h ago
Ha! I always feel more than a little embarrassed when it happens, but I can't sit idly by while TDD is slandered, especially from so seemingly oblique an angle!

While I agree with you regarding microservices (eg language abstractions provide 80% of the encapsulation SOA provides for 20% of the overhead) and I readily acknowledge that 100% test coverage is a quixotic fantasy, I really can't imagine writing reliable software without debuggers, print-statements, or a REPL—all of which TDD replaces in my workflow.

How, I wonder, do you observe the behavior of the program if not through tests? By playing with it? Manually reproducing state? Or, do you simply wait until after the program is written to test its functionality?

I wonder what mental faculties I lack that facilitate your TDD-less approach. Can it be learned?

sergiotapia · 4h ago
i didn't say i don't write tests. i trashed tdd, the practice of tdd is dogma, impractical and unrealistic.
gavmor · 2h ago
> Or, do you simply wait until after the program is written to [observe] its [behavior]?

How is that even possible?

satvikpendem · 21m ago
Write the program, run the program, observe. Or, write the program, write the tests, run the program, observe for breakages. Neither are TDD, the practice of writing tests first before any actual implementation is written, which I agree with the parent is dogma, impractical, and unrealistic.
pydry · 5h ago
Like TDD, microservices are a waste of time if you do them the wrong way and for the wrong reasons.

Like TDD, they're great if done in the right way for the right reasons.