One of the reasons microservice architecture originally became popular was to break apart monolithic applications. In many cases, I bet a big driver was a lack of separation of concerns, and a more modular design was desired. There are many ways to put up walls in software to make it more modular and self-contained. Rails engines are a good way to make a Rails app more modular. The number of times I've seen microservices created for the sake of modularity (not for scaling), and the complexity that has brought, has really soured me on microservices.
djfobbz · 13m ago
I built my first SaaS back in 2009 on Rails 2.3. Fast forward to 2025, it's still running on Rails 2.3 LTS and Ruby 3.3.8, and it's still making money. No complaints here! ;)
giovapanasiti · 9h ago
This is exactly my experience. Most of the time people go to microservices for the wrong reasons, and they regret it for years.
fhub · 19m ago
Some might, but I imagine some have left the company by the time the pain is really felt, and are excited to do it all again at the next company.
whstl · 9h ago
Different sections of an app can use different databases, if the bottleneck is in the database (see the sketch after this list).
Different routes can be served by different servers, if the bottleneck is in CPU usage.
Different async tasks can run on different task runner services, if the problem is tasks competing with each other.
Different test suites can run for different sections of the app, if the problem is with tests taking too long to run.
GitHub and others even allow specific subfolders to be "owned" by different teams (e.g. via CODEOWNERS).
What else is there? Even slowness of compilation and/or initialization can be alleviated, depending on the language or framework.
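On the databases point, Rails has supported multiple databases natively since 6.0. A minimal sketch of how one section of the app gets its own database (the analytics names are illustrative):

    # config/database.yml
    production:
      primary:
        database: app_production
        adapter: postgresql
      analytics:
        database: analytics_production
        adapter: postgresql

    # app/models/analytics_record.rb -- base class for the analytics section
    class AnalyticsRecord < ApplicationRecord
      self.abstract_class = true
      connects_to database: { writing: :analytics, reading: :analytics }
    end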
stronglikedan · 7h ago
I think the point is that all of that adds complexity that is often unnecessary - a premature optimization, if you will. It's a hammer, and to a lot of people everything looks like a nail.
inopinatus · 7h ago
GP isn’t oppositional, they listed runtime constructs that all run off a single monolith. The point being you don’t need so-called microservices for flexibility in the production environment.
leptons · 8h ago
I've built numerous systems on AWS Lambda over the last 10 years, and have never once regretted it. YMMV.
ecshafer · 7h ago
I've regretted 99% of the services I've built on AWS Lambda over the years. Every time it gets more complex than a couple hundred lines of code spread over a few Lambdas, I start to think "if this were just one service, development, deployments, CI/CD, testing, and storage would all be simpler".
leptons · 5h ago
My deployments to Lambda are extremely simple. All I do is hit save in VSCode and the Lambda is updated. Change the env to prod and it deploys instantly to prod.
There are tools that make it easy; I'm still using a tool I built 10 years ago. Very little has changed except the addition of layers, which are also pretty easy and automatically handled in my dev tool.
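For anyone curious, a deploy like that typically boils down to a single AWS CLI call under the hood - a sketch, with the function name and zip file as placeholders:

    aws lambda update-function-code --function-name my-fn --zip-file fileb://bundle.zip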
All the Lambdas I write also run locally, and testing isn't an issue.
The only gripe I have with Lambda is when they deprecate older nodejs versions, and I am forced to update some of my Lambdas to run on current nodejs, which then leads to refactoring due to node module incompatibilities in some specific situations. But those are really nodejs problems and not so much Lambda problems, and it does get me to keep my apps updated.
YMMV.
jt2190 · 5h ago
> One of the reasons microservice architecture originally became popular was to break apart monolithic applications.
I feel like the emphasis was on autoscaling parts of the app independently. (It’s telling that this has been forgotten and now we only remember it as “splitting up the app”.)
mrinterweb · 44m ago
Scaling concerns can be a legitimate reason for a microservice, but I think those scaling concerns should be proven and not assumed before a new microservice is born.
I also hate the argument of "maybe one day we might..." as a justification for a new microservice. The number of times I've seen that premature optimization pay off is far smaller than the number of times it hasn't.
Microservices should be the exception, not the preferred architectural design pattern.
Sometimes I cynically think system architects like them because they make their diagrams look more important.
bwilliams · 1h ago
You can have a modular monolith with multiple entrypoints that enable autoscaling of independent "deploys".
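A sketch of what that can look like in practice (the image and service names are illustrative): the same monolith image runs under two commands, each scaled on its own.

    # docker-compose.yml (illustrative)
    services:
      web:
        image: myapp:latest
        command: bundle exec puma -C config/puma.rb
        deploy:
          replicas: 4    # scale the HTTP entrypoint independently
      worker:
        image: myapp:latest
        command: bundle exec sidekiq
        deploy:
          replicas: 2    # scale the background-job entrypoint independently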
ch4s3 · 54m ago
That’s always made sense to me in theory, but practically speaking a lot of logical units of service just don’t scale independently of your core service. Obviously this isn’t always true, but I think it’s too easy to talk yourself into popping off new services once you’ve built out the infrastructure for it, and the incidental complexity is costly.
nurettin · 9h ago
I use multiple services for resilience. Example: With multiple services that have clear separation of concerns, you can debug and fix your processing layer without stopping the collection layer. You can update a distributor while workers wait and vice versa. This way I never have downtime anxiety. No regrets.
zrail · 8h ago
Separation of services is orthogonal to separation of concerns. There's nothing stopping you from having multiple entry points into the same monolith. I.e. web servers run `puma` and workers run `sidekiq` but both are running the same codebase. This is, in fact, the way that every production Rails app that I've worked with is structured in terms of services.
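As a concrete sketch, the Procfile for that kind of setup is just two entrypoints into one codebase:

    web: bundle exec puma -C config/puma.rb
    worker: bundle exec sidekiq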
Concerns (in the broad sense, not ActiveSupport::Concern) can be separated any number of ways. The important part is delineating and formalizing the boundaries between them. For example, a worker running in Puma might instantiate and call three or four or a dozen different service objects all within different engines to accomplish what it needs, but all of that runs in the same Sidekiq thread.
Inserting HTTP or gRPC requests between layers might enforce clean logical boundaries but often what you end up with is a distributed ball of mud that is harder to reason about than a single codebase.
mrinterweb · 7h ago
For your example, Rails apps handle this case by default: job queues are managed by the deployment separately from the web server. There is a way to force the job queue to process in the same process as your web server, but that's not the way most should be running prod Rails apps. There usually isn't anxiety associated with Rails app deploys, in my experience.
rco8786 · 7h ago
What can you not do in a monolith? You can still have async queues and different event processors that stop and start independently within a monolithic deployment.
JohnBooty · 7h ago
Speaking as a monolith fan, IMO/IME the main drawback is RAM usage per instance.
You can have a "big, beautiful" Rails monolith codebase used by both Puma and Sidekiq queues etc, and that works well from most standpoints. But RAM usage will be pretty high and limit horizontal scaling.
helle253 · 11h ago
I love Rails engines; they're a very slick feature.
I recently migrated a featureset from one Rails project into another, as a mounted engine, and ensuring isolation (but not requiring it!) has been tremendously helpful.
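For anyone who hasn't tried it, mounting an engine in the host app really is a one-liner in the routes file (the engine name here is illustrative):

    # config/routes.rb
    Rails.application.routes.draw do
      mount Uploads::Engine, at: "/uploads"
    end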
capevace · 10h ago
The Filament package for Laravel lets you build similarly encapsulated "plugins" that are basically mini Laravel apps and can be easily added to existing apps.
The plugins can rely on all of the Laravelisms (auth, storage etc) and Filament allows them to easily draw app/admin UI.
hk1337 · 11h ago
I have been looking at using Rails engines recently while playing around with trying to get an idea off the ground.
GGO · 10h ago
Rails engines are one of the most underrated features that everyone should be using more.
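For reference, the core of an isolated engine is tiny (names are illustrative):

    # lib/uploads/engine.rb
    module Uploads
      class Engine < ::Rails::Engine
        # Namespaces routes, helpers, and table names under Uploads
        isolate_namespace Uploads
      end
    end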
malkosta · 7h ago
Offset-based pagination will be a problem on big tables.
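Keyset (cursor-based) pagination avoids scanning past all the skipped rows; a minimal ActiveRecord sketch, with the model name as a placeholder:

    # Instead of Upload.order(:id).offset(100_000).limit(50),
    # resume from the last id seen on the previous page:
    Upload.where("id > ?", last_seen_id).order(:id).limit(50)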
matltc · 10h ago
I miss seeing Rails in the wild.
pqdbr · 9h ago
Rails is not only alive and well, but actually booming.
Octoth0rpe · 7h ago
> Rails is not only alive and well, but actually booming.
Do you have any references that validate this?
Rails 'booming' on a 3 year time scale wouldn't surprise me, but would on a 10 year scale.
henning · 11h ago
This blog post just shows how libraries and frameworks often solve one problem but create another. This leads to the emission of ridiculous sentences like `One of the trickiest aspects of building engines is handling routing correctly`, which would be a non-issue if you just wrote simple code to solve the problem in front of you, instead of doing a bunch of "modular" "engine" framework-y compiler-y nonsense that adds boatloads of complexity just to accomplish one basic thing like handling file uploads.
pqdbr · 9h ago
"one basic thing like handling file uploads" - say no more.
Actually, the article isn't even about handling file uploads - it's about deliberately creating a modular admin panel for dealing with file uploads.
It's not modularity for "framework-y" sake, but to easily deploy that admin panel in other applications with literally a one-liner.
giovapanasiti · 9h ago
I couldn't have written this comment better myself. Thank you, this is exactly the point.