Okay I'm a Nix enthusiast but you'll have to trust me when I say that I'm not criticizing them for moving away from Nix; it isn't that strong of an emotional attachment. However, I'm not really sure I understand some of these complaints and they really could use more explanation. For example:
> The biggest problem with Nix is its commit-based package versioning. Only the latest major version of each package is available, with versions tied to specific commits in the nixpkgs repo.
While Nixpkgs is an amazing resource, Nix != Nixpkgs. Nixpkgs is a poor fit for cases where you want to be able to pull arbitrary versions of toolchains, but it is not the only way to go. For example, there is amazingly good Nix tooling for pulling an arbitrary version of Rust, and other Nix-based developer tools have shown how to do this well.
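As a concrete sketch of what that Rust tooling can look like (this assumes the community oxalica/rust-overlay project, following its README; it is not necessarily what any particular company uses):

```nix
# shell.nix — pull an arbitrary Rust toolchain version via rust-overlay,
# independent of whichever single version nixpkgs currently carries.
let
  rustOverlay = import (builtins.fetchTarball
    "https://github.com/oxalica/rust-overlay/archive/master.tar.gz");
  pkgs = import <nixpkgs> { overlays = [ rustOverlay ]; };
in
pkgs.mkShell {
  # Any published stable version (or a dated beta/nightly) can be requested.
  packages = [ pkgs.rust-bin.stable."1.70.0".default ];
}
```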
> no way of splitting up the Nix dependencies into separate layers
That doesn't make any sense. You can literally just split them into separate layers in whatever arbitrary fashion you'd like. The built-in Nixpkgs docker tooling has some support for this even.
> We also changed the codebase from Rust to Go because of the Buildkit libraries.
This part is not related to Nix, but I find it interesting anyway. Obviously most people don't transition programming languages on a whim; it's generally something you do when you're already planning on rebuilding from scratch anyway. To me it almost sounds like different people worked on Railpack vs Nixpacks.
(I've definitely seen what happens when people not familiar with Nix wind up having to deal with unfinished Nix solutions within an organization. It is not pretty, as most people are unwilling to try to figure out Nix. I don't generally use Nix at work out of fear of causing this situation.)
dleslie · 5h ago
I don't use Nix, however this seems dismissive:
> While Nixpkgs is an amazing resource, Nix != Nixpkgs.
If Nixpkgs is the default and alternatives require additional research and effort then for most users it _is_ Nix.
> That doesn't make any sense. You can literally just split them into separate layers in whatever arbitrary fashion you'd like. The built-in Nixpkgs docker tooling has some support for this even.
Is this obvious, simple, and default behaviour?
whateveracct · 4h ago
Nixpkgs isn't Nix and in production you rarely just use Nixpkgs verbatim. It's trivial to overlay whatever versions you want (including forks), and I'd say it's expected for any company in production to manage their package set.
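As a minimal sketch of what such an overlay looks like (the package name, fork, and hash below are hypothetical placeholders):

```nix
# overlay.nix — pin one package to a fork while leaving the rest of the set alone.
self: super: {
  somepkg = super.somepkg.overrideAttrs (old: {
    version = "1.2.3";     # hypothetical pinned version
    src = super.fetchFromGitHub {
      owner = "yourorg";   # hypothetical fork
      repo = "somepkg";
      rev = "v1.2.3";
      hash = "";           # leave empty; the first build failure reports the real hash
    };
  });
}
```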
We are talking about a company full of professionals. If they need something obvious, simple, and default to manage their build - the core business function that turns their text into deployable artifacts - maybe there is a skill culture issue.
The industry is full of ineptitude though.
eddythompson80 · 3h ago
> The industry is full of ineptitude though.
While I disagree with the person you're replying to, I find your reply dismissive.
I don't know the behind-the-scenes reasons for this, but I can very easily map a very similar situation onto this from my own experience.
Nix is a full-blown functional programming language along with a very rich (and poorly documented, niche, second only to C++ templates in the incomprehensibility of its errors[1]) ecosystem in itself. It's not like "docker" or "kubernetes" where you're mostly dealing with "data" files like yaml, json or a Dockerfile. You're dealing with a complex programming project.
With that in mind:
- You have a core team with 1 or 2 people with Nix passion/expertise.
- Those people do most of the heavy lifting in implementation.
- They onboard the team onto Nix
- They evangelize Nix through the org/company
- They mod and answer all the "#nix-discussions" channel questions
Initially the system is fairly successful and everything is good. Over the next 5-6 years it accumulates a lot of feature asks. The original "Nix person" has long left. Most of the original people have either moved to other projects or aren't particularly passionate about Nix. In fact, the "best" developer you have, who has inherited the whole Nix thing, has only really had to deal with the shit parts of Nix and the system. They are the ones fixing issues, dealing with bugs, etc. All while maintaining 3 stacks: a Nix stack, a Go stack, and a Rust stack.
Eventually that person/team that's annoyed by maintaining the Nix project wins. They want to own that code. They don't want to use Nix any more. They know what's needed, they want to implement it as part of their main Go stack that they are actively working on. They can optimize things for their special case without having to worry about "implementing it the Nix way" or "doing it upstream".
They promise you (the management who is open to the idea, but trying to understand the ROI) feature parity + top 5 feature asks for the initial release. You trust the team enough to let them do what they think is best.
[1]: LLMs are really good at suggesting a solution given an error message. Nix errors bring them to their knees. It's always "Hmmm.... it appears that there is an error in your configuration... have you tried a `git revert`?"
whateveracct · 3h ago
I'm not being dismissive. Well, I am dismissing a lot of industry people's opinions. Because they're bad.
Just because people decide stuff for money doesn't mean I can't call them bad. Not everyone is equally skilled.
And your parable is exactly the issue. The unskilled and loud and whiny do often win, and it's a shame. I see it all the time.
(Also, you're way overstating Nix as a "full blown FP language." It isn't hard to learn. I learned it just by existing on a project with it. Within 6 months I'm apparently a "Nix expert," and people now point at me as one of the people who "knows it" and say "you can't expect everyone to know it like you do." idk, maybe I'm some genius, but I think it's more that I just don't have a bad personality.)
eddythompson80 · 3h ago
So you are being dismissive. That's what you're doing. You're dismissing more than just "stuff for money". You're dismissing anything that doesn't fall under the "skill" or "technical" category. All software projects contain a human element. I was showing an example from my experience on how something like that could happen.
> A perfectly capable (but perhaps a bit esoteric) technology is picked by a smart, passionate person for a project.
> The novel technology is in 1 isolated module that's mostly feature complete for the first 1-3 years.
> People in the team/company deal with that "thing" as a blackbox more and more
> 5-10 years later, mostly new team maintaining the project. They hate the weird choice of tech. "Why is only this one component different???"
> People understand the contract with the "black box" very well, but HATE the "black box". People think "We can implement the black box contract very easily"
whateveracct · 3h ago
Yes I am dismissing. People don't have a right to not be dismissed if I judge them poorly. People are allowed to have bad professional opinions of others.
And I am dismissing the types you describe specifically. I dismiss them (privately amongst the likeminded) at work all the time too. I just put them on a list in my head when they start spouting these sorts of bad values.
kfajdsl · 4h ago
This isn’t “most users”, this is a large company building a product on top of Nix. I’m pretty sure most orgs using Nix at a minimum have a custom overlay.
If you identify these things as an issue, any competent engineer should find a variety of solutions with search and/or LLM assistance within an hour, since they’re not super obscure requirements.
I’m not saying Railway didn’t do this and realize that these common solutions weren’t viable for them, but it’s odd to not mention anything they tried to get around it.
nothrabannosir · 3h ago
To emphasize this point: dleslie's comment is valid on a blog post "we tried Nix for a while to manage our dependencies, we are just building an app and want builds to work, and we decided to move on". For an end user, it is absolutely understandable to assume "nix = nixpkgs".
But as kfajdsl points out: that's not what TFA is. This is a company building a product on top of Nix. Package management is their expertise. Anyone using Nix in that capacity understands the distinction between nix and nixpkgs. Which they certainly do--GP only remarked it was odd they didn't explain it, not that they didn't know.
jchw · 4h ago
> If Nixpkgs is the default and alternatives require additional research and effort then for most users it _is_ Nix.
This feels rather dismissive. They wrote a bespoke solution, not a weekend toy. Surely you'd agree that they have more than just surface-level knowledge of Nix, to be able to distinguish between Nix and Nixpkgs? They're already doing non-trivial things by merging multiple commits of Nixpkgs in order to get different versions of different tools!
> Is this obvious, simple, and default behaviour?
Well, Nix doesn't do much of anything "by default"; it's a pretty generic tool. But insofar as it matters, yes, pretty much. `dockerTools.buildLayeredImage` will in fact automatically build a layered image, and it is the most "obvious" way (IMO) to build a docker image. There is also `dockerTools.buildImage`, but there's no particular reason to use it unless you specifically want a flattened image. (The documentation available is clear enough about this. In fact, in practice, much of the time you'd probably actually want `dockerTools.streamLayeredImage` instead, which is also documented well enough, but that's beside the point here.)
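For reference, a minimal sketch of that usage (the package choices and names here are illustrative):

```nix
# image.nix — buildLayeredImage distributes store paths across layers
# automatically, so commonly shared closures (glibc, bash, ...) can be
# reused between images.
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.buildLayeredImage {
  name = "my-app";
  tag = "latest";
  contents = [ pkgs.bashInteractive pkgs.coreutils ];
  config.Cmd = [ "${pkgs.bashInteractive}/bin/bash" ];
  maxLayers = 50;  # cap on layer count; the default is 100
}
```

`nix-build image.nix` then produces a tarball suitable for `docker load`.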
But that's not my point. As far as I know, Nixpacks don't even use this functionality, I'm pretty sure they wrote their own OCI image building tools. And in that sense, it is not obvious why they can't split the Nix store and the article doesn't explain it.
My point wasn't to be dismissive about the difficulties of Nix; it's that the blog post doesn't really do a good job of explaining things. It makes it sound like these are normal problems in Nix, but they are not; even the official Nixpkgs documentation often points to third-party solutions for when you're working outside of Nixpkgs, since most of the Nixpkgs tooling is geared toward Nixpkgs and NixOS usage. As an example, take a look at this section of the Rust documentation in Nixpkgs:
So even if you're relatively new to Nix, as long as you are reading the documentation you will indeed definitely be aware of the fact that there is more to the Nix ecosystem than just Nixpkgs. It may not be surface-level Nix knowledge, but it's certainly close.
baobun · 42m ago
I think the part that's easy to miss is that their users are devs who will want to specify their own dependencies and versions for arbitrary packages.
Given the way Nix works and the way nixpkgs is structured, pinning a version of any package means pinning a commit of the entire nixpkgs tree. Since builds of node/python/ruby packages depend on stuff outside of the package dir in the tree, you need that mapping between versions and commits. It is also a leaky abstraction, so they will need to expose it to their users, who may now run into situations where they need to align various states of the nixpkgs repo when they just wanted to "yarn add new-fancy-nodejs-package-with-linked-native-deps".
Using nix without nixpkgs may be fine for more scoped use but seems hard to justify for a platform like Railway.
jrockway · 4h ago
I think this is pretty well stated. I'll add that while nixpkgs isn't nix, nixpkgs is kind of the good part. I use NixOS and for the first time in my life, I'm using the latest version of the Linux kernel on release day. That's pretty excellent. While I've come to tolerate Debian Stable in my old age, it is always like stepping a few years into the past ;)
The Nix language is something I could criticize for hours without getting bored, but it is what it is: it's old, they did the best they could, and it's probably not worth changing. The Nix build system feels awfully primitive to me, often rebuilding stuff that doesn't need to be rebuilt for no good reason. (For example, my NixOS installer ISO has a ton of the build depending on the cmdline I pass to the kernel [just console=ttyS2,1500000n8], so changing the speed of my serial port requires about 3 minutes of build time. It's goofy and makes me laugh, and I'm not going to stop using Nix because of it... but it's also something that I wouldn't let happen in MY build.)
Nix for Docker images is, in my opinion, what it's the worst at. A long time ago, I was writing some software in Go and needed to add the pg_dump binary from Postgres to my container image. The infrastructure team suggested using Nix, which I did, but our images blew up from 50MB of our compressed go binary to 1.5GB of God Knows What. pg_dump is 464K. I ended up doing things my way, with Bazel and rules_debian to install apt packages, and the result (on top of distroless) was much cleaner and more compact. My opinion with some actual Nix experience is that a Nix system always ends up being 1.4GB. My installer ISO is 1.4GB. My freshly installed machine is 1.4GB. That's just how it is, for whatever reason.
Finally, the whole "I would like to build a large C++ project" situation is a well worn path. s/C++/Rust doesn't change anything material. There are build systems that exist to make the library situation more tolerable. They are all as complicated as Nix, but some work much better for this use case. Nix is trying to be a build system for building other people's software, supporting nixpkgs, and lands on the very generic side of things. Build systems that are designed for building your software tend to do better at that job. Personally, I'm happy with Bazel and probably wouldn't use anything else (except "go build" for go-only projects), but there are many, many, many other options. 99% of the time, you should use that instead of Nix (and write a flake so people can install the latest version of Your Thing with home-manager; or maybe I'm just the only person that uses their own software day to day and you don't actually need to do that...)
internet_points · 4h ago
> a Nix system always ends up being 1.4GB
That's strange, I never had problems building really tiny docker (release) images with nix, in fact it felt easier than doing it with alpine. You just get exactly what you specify, no more.
(OTOH, when developing in nix, I always end up with a huge /nix/store and have no idea how to clean it without garbage collecting everything and having to wait all over)
chriswarbo · 3h ago
> I always end up with a huge /nix/store and have no idea how to clean it without garbage collecting everything and having to wait all over
FYI you can avoid things getting garbage-collected by doing `nix-store --add-root`; that makes an "(indirect) garbage collector root"[0]. Especially useful if you're using import-from-derivation, since that imported derivation won't appear in the dependencies of your final build output (which, to be clear, is a good thing; since it lets us calculate a derivation, e.g. by solving dependency constraints or whatever, without affecting the eventual hash if that calculation happens to match a previous one!)
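Roughly, assuming a Nix install and a store path you want to protect (the store path below is a placeholder):

```shell
# Register an indirect GC root so the path survives `nix-collect-garbage`.
# (--indirect is the default on newer Nix and may be omitted there.)
nix-store --realise /nix/store/<hash>-my-dev-env \
  --add-root ./keep-dev-env --indirect

# The root is just a symlink; deleting it makes the path collectable again.
rm ./keep-dev-env
```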
Honestly this feels more like rail... wants to make their own version, hence a new railX lol
viraptor · 8h ago
The version selection part sounds weird. The versions in nixpkgs make sense when you're running/building the system. If you're providing runtimes/compilers as a platform, you really want something like what devenv does - provide the versions yourself. You really don't want to end up building an old system to provide an old nodejs - you're leaving security patches in dependencies behind. Devenv does it for example through https://github.com/cachix/nixpkgs-python "All Python versions, kept up-to-date on hourly basis using Nix."
> Railway injects a deployment ID environment variable into all builds.
They could've done it in the next layer after installation. Also, you can split packages into different layers. There's even automation for it if you need batches to keep the number of layers down.
lewo · 6h ago
> With no way of splitting up the Nix dependencies into separate layers
For instance, if your images use bash, you can explicitly create a layer containing the bash closure. This layer can then be used across all your images and is only rebuilt and repushed if this bash closure is modified.
> pull in dependencies often results in massive image sizes with a single /nix/store layer
This is the case for the basic nixpkgs.dockerTools.buildImage function, but it is not true with nix2container, nor with nixpkgs.dockerTools.streamLayeredImage. Instead of writing the layers into the Nix store, these tools build a script that pushes the image using existing store paths (which are Nix runtime dependencies of this script). Regarding the nix2container implementation: it builds a JSON file describing the Nix store paths for all layers and uses Skopeo to push the image (to a Docker daemon, a registry, podman, ...), by consuming this JSON file.
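A minimal sketch of the streaming variant mentioned above (the contents are illustrative):

```nix
# stream.nix — streamLayeredImage builds a small script that writes the
# image tarball to stdout, so the layers are never copied into /nix/store.
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.streamLayeredImage {
  name = "my-app";
  tag = "latest";
  contents = [ pkgs.coreutils ];
}
```

Since the build result is a script rather than a tarball, loading the image looks like `$(nix-build stream.nix) | docker load`.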
Just wanted to say thanks for nix2container. I’ve been using it to do some deploys to AWS (ECR) and my iteration time between builds is down to single digit seconds.
mrbluecoat · 9h ago
> we don’t have any problem with Nix itself. But there is a problem with how we were using it.
A good example of 'use the right tool for the right job'. Nix is great for some use cases and awful for others. The problem is the Nix learning curve is so high that by the time you've grasped it enough to make a decision you feel you've invested too much time to back out now and pivot to something else so you try to shoehorn it to solve the original need.
teekert · 8h ago
I feel it like that as well, but in some ways Nix is a more normal programming paradigm than other OS’s. We’re just not used to thinking about an OS that way. Nix expressions have inputs (a package repo, lots of key-value pairs) and outputs (a Linux system). Idk perhaps in a couple of years it will be much more normal.
I.e., it is very easy for an AI to create a to-spec shell.nix (some Python packages, some Linux packages, some env vars, some path entries, etc.), or a configuration.nix, because of this paradigm.
I do this a lot to include envs with repos that fully support the package. It would probably be more reproducible with flakes (a flake.nix is like a shell.nix but with version pinning… or something, I’m still climbing that learning hill).
Aurornis · 17m ago
From the headline I thought they were making a single incremental change (Nix) but the article sounds like they’re doing an entire rewrite under a new project name:
> Since we transitioned away from Nix, we also transitioned away from the name Nixpacks in favor of Railpack. We also changed the codebase from Rust to Go because of the Buildkit libraries.
Suddenly the move away from Nix seems less like an incremental change and more like one part of a complete overhaul of the entire project. Did they have a team changeover or something? Or did they just want to start over from scratch and rewrite the whole project?
It also seems strange to switch to an entirely different programming language for a single library. I haven’t found library FFI to be a huge obstacle in Rust.
kesor · 8h ago
Looks like they are trying to force versions onto things that have none. Just like trying to force a square peg into a round hole.
"Default versions" breaking things that depend on them? What is that? It is like using docker's ":latest" tag and being surprised each time that a new server falls on its face because the "default" image is actually a different version from the previous "default" image.
I don't understand any of the explanations in this blog post. Seems like people who have zero clue about what a "version" of a software is.
"no way of splitting up the Nix dependencies into separate layers" - Why? Of course you can split /nix/store into as many layers as you need. Do they even know how to use containers and how to use Nix in the first place?
With the clear incompetence of these people, no wonder that their proposed solution smells like a decomposed fish.
Classic NIH syndrome. There is going to be no surprise to see them meet the exact same problems they didn't solve with Nix to infest their new "solution".
jbverschoor · 8h ago
What works better to raise VC (which for many is the goal)
A Nix wrapper or a deployment platform?
BoorishBears · 6h ago
Traction.
I don't think any VC worth the time is going to sit around nitpicking how much Nix matters to their offering if they're making increasing amounts of money.
jbverschoor · 5h ago
Depends who’s pretending to be a VC, or accelerator
setheron · 8h ago
Nix gives you a commit guarantee rather than arbitrary versions. You're going to have a bad time when you hit edge cases: glibc changes or conflicting shared libraries.
It sounds like it's a little bit too late, but I'm happy to provide some consulting on how you can get it to work idiomatically with Nix.
Product looks cool!
nialv7 · 8h ago
nix solves the shared library incompatibility problem by being extremely conservative. every time anything changes, consequential or not - a comment got modified, documentation changes, a testcase got added, etc. - it will rebuild all dependents. and not just that, but all dependents of dependents, and dependents of dependents of dependents, on and on. this often results in massive massive rebuilds.
sure you are not going to get shared library conflicts, but i think this solution is extremely wasteful, and can make development painful too - look at nixpkgs' staging process.
setheron · 5h ago
Nix has support for bit-for-bit reproduction and will not rebuild on comment changes if you specify it.
Of course, lots of software isn't ready for bit-for-bit reproduction, which is why Nix has taken such a pragmatic approach. (I have written a lot about this.)
It's all a series of tradeoffs. If your goal is reproducibility (as close as you can get), you will likely have a larger graph, since you are accounting for more!
Sometimes we like to believe we can have our cake and eat it too rather than understand life's a series of tradeoffs.
When we think we are getting a silver bullet, we've likely just pushed that complexity somewhere else.
nialv7 · 4h ago
IIUC you are talking about CA-derivations? Yeah they may help but it's hard to know how much since it's not in production yet, despite being part of Eelco's original paper describing nix. So my hope isn't high.
> When we think we are getting a silver bullet, we've likely just pushed that complexity somewhere else.
True but we kind of just stopped looking. and I feel much of the solution space hasn't been explored.
smilliken · 5h ago
The reason someone changes a dependency at all is that they expect a difference in behavior. No one would feel motivated to update a dependency if they weren't getting something out of it; that's a waste of effort and an unnecessary risk.
Each person doesn't have to perform the build on their own. A build server will evaluate it and others will pull it from the cache.
The greater waste that nix eliminates is the waste of human time spent troubleshooting something that broke in production because of what should have been an innocent change, and the lost business value from the decreased production. When you trust your dependencies are what you asked for, it frees the mind of doubt and lets you focus on troubleshooting more efficiently towards a problem.
Aside, I spent over a decade on Debian derived distros. I never once had one of these distros complete an upgrade successfully between major versions, despite about 10 attempts spread over those years, though thankfully always on the first sacrificial server attempted. They always failed with interesting issues, sometimes before they really got started, sometimes borking the system and needing a fresh install. With NixOS, the upgrades are so reliable they can be done casually during the workday in production without bothering to check that they were successful. I think that wouldn't be possible if we wanted the false efficiency of substituting similar but different packages to save the build server from building the exact specification. Anything short of this doesn't get us away from the "works on my machine" problem.
eddythompson80 · 3h ago
> You're going to put yourself in a bad time when you have edge cases: glibc changes or conflicting shared libraries.
I totally understand the value proposition of Nix. However, I think saying "bad time" is a bit hyperbolic. At most it's "you'll be losing a pretty significant guarantee compared to Nix." The result is still probably more likely to work correctly than 95% of software out there.
ris · 8h ago
The main problem here is wanting to hang on to the "bespoke version soup" attitude that language package managers encourage (and which is totally unsustainable). The alternative, Mise, doesn't appear to have any ability to understand version constraints between packages, and it certainly doesn't run tests for each installed package to ensure it works correctly with the surrounding versions. So you're not getting remotely the same thing.
jorams · 47m ago
Bespoke version soup is unsustainable, but part of why people keep doing it is that it tends to work fine. It tends to work fine in part because OS-level libraries come from a different, much more conservative world, in which breaking backwards compatibility is something you try to avoid as much as possible.
So they can take a stable, well-managed OS as a base, use tools like mise and asdf to build a bespoke version soup of tools and language runtimes on top, then run an app on top of that. It will almost never break. When it does break, they fiddle with versions and small fixes until it works again, then move on. The fact that it broke is annoying, but unimportant. Anything that introduces friction, requires more learning, or requires more work is a waste of time.
Others would instead look for a solution to stop it from breaking ever again. This solution is allowed to introduce friction, require more learning, or require more work, because they consider the problem important. These people want Nix.
Most people are in the first group, so a company like Railway that wants to grow ends up with a solution that fits that group.
k__ · 8h ago
"the "bespoke version soup" attitude that language package managers encourage"
Care to elaborate what that means and what the alternative is?
matthewbauer · 6h ago
Not sure on the exact take of the OP, but:
Package maintainers often think in terms of constraints like I need a 1.0.0 <= pkg1 < 2.0.0 and a 2.5.0 <= pkg2 < 3.0.0. This tends to make total sense in the micro context of a single package but always falls apart IMO in the macro context. The problem is:
- constraints are not always right (say pkg1==1.9.0 actually breaks things)
- constraints of each dependency combined ends up giving very little degrees of freedom in constraint solving, so that you can’t in fact just take any pkg1 and use it
- even if you can use a given version, your package may have a hidden dependency on one of pkg1’s dependencies that only becomes apparent once you start changing pkg1’s version
Constraint solving is really difficult, and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you. So while you can’t, say, take a version of pkg1 from 2015 and use it with a version of pkg2 from 2025, you can take the whole 2015 Nixpkgs and get both pkg1 and pkg2 from 2015.
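In practice, "taking the whole package set" just means pinning a nixpkgs commit (the commit placeholder and package names below are illustrative):

```nix
# pinned.nix — everything resolves against one nixpkgs snapshot, so the
# resulting package set is internally consistent and fully cached upstream.
let
  pkgsOld = import (builtins.fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<commit-from-2015>.tar.gz") { };
in
[ pkgsOld.pkg1 pkgsOld.pkg2 ]  # both come from the same snapshot
```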
jonhohle · 5h ago
There’s no clear definition (in most languages) of major/minor/patch versioning. Amazon did this reasonably well when I was there, though the patch version was implicitly assigned and the major and minor versions required humans to follow the convention:
You could not depend on a patch version directly in source. You could force a patch version other ways, but each package would depend on a specific major/minor and the patch version was decided at build time. It was expected that differences in the patch version were binary compatible.
Minor version changes were typically source compatible, but not necessarily binary compatible. You couldn’t just arbitrarily choose a new minor version for deployment (well, you could, but not while expecting it to go well).
Major versions were reserved for source or logic breaking changes. Together the major and minor versions were considered the interface version.
There was none of this pinning to arbitrary versions or hashes (though, you could absolutely lock that in at build time).
Any concept of package (version) set was managed by metadata at a higher level. For something like your last example, we would “import” pkg2 from 2025, bringing in its dependency graph. The 2025 graph is known to work, so only packages that declare dependencies on any of those versions would be rebuilt. At the end of the operation you’d have a hybrid graph of 2015, 2025, and whatever new unique versions were created during the merge, and no individual package dependencies were ever touched.
The rules were also clear. There were no arbitrary expressions describing version ranges.
0xbadcafebee · 5h ago
> Constraint solving is really difficult and while it’s a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you.
Thank you, I was looking for an explanation of exactly why I hate Nix so much. It takes a complicated use case, and tries to "solve" it by making your use-case invalid.
It's like the Soylent of software. "It's hard to cook, and I don't want to take time to eat. I'll just slurp down a bland milkshake. Now I don't have to deal with the complexities of food. I've solved the problem!"
lkjdsklf · 5h ago
It’s not an invalid use case in nixpkgs. It’s kind of the point of package overlays.
It removes the “magic” constraint solving that seemingly never works and pushes it to the user to make it work
chriswarbo · 5h ago
> I was looking for an explanation of exactly why I hate Nix so much
Note that the parent said "I think Nixpkgs takes the right approach in mostly avoiding it". As others have already said, Nix != Nixpkgs.
If you want to go down the "solving dependency version ranges" route, then Nix won't stop you. The usual approach is to use your normal language/ecosystem tooling (cabal, npm, cargo, maven, etc.) to create a "lock file"; then convert that into something Nix can import (if it's JSON that might just be a Nixlang function; if it's more complicated then there's probably a tool to convert it, like cabal2nix, npm2nix, cargo2nix, etc.). I personally prefer to run the latter within a Nix derivation, and use it via "import from derivation"; but others don't like importing from derivations, since it breaks the separation between evaluation and building. Either way, this is a very common way to use Nix.
(If you want to be even more hardcore, you could have Nix run the language tooling too; but that tends to require a bunch of workarounds, since language tooling tends to be wildly unreproducible! e.g. see
http://www.chriswarbo.net/projects/nixos/nix_dependencies.ht... )
matthewbauer · 5h ago
I mean, you can do it in Nix using overlays and overrides. But it won’t be cached for you, and there’s a lot of extra fiddling required. I think it’s pretty much the same as how Bazel and Buck work. This is the future, like it or not.
ris · 5h ago
It's the idea that every application can near-arbitrarily choose a bespoke-but-exact mix of versions of every underlying package and assume they all work together. This is the same attitude that leads to seemingly every application on planet earth needing to individually duplicate the work of reacting to every single dependabot update for their thousands of underlying packages, and to deal with the fallout of conflicts when they arise.
Packages in nixpkgs follow the "managed distribution" model, where almost all package combinations can be expected to work together, remain reasonably stable (on the stable branch) for 6 months receiving security backports, then you do all your major upgrades when you jump to the next stable branch when it is released.
jpgvm · 6h ago
Nix generally speaking has a global "nixpkgs" version (I'm greatly over-simplifying here ofc) in which there is a single version of each package.
This is likely the source of their commit-based versioning complaint, i.e. the commits in question are probably https://github.com/NixOS/nixpkgs revisions, if they aren't maintaining their own overlay of derivations.
This is in contrast to systems that allow all of the versions to move independently of each other.
i.e. in the Nix world you don't just update one package; you move atomically to a new set of package versions. You can have full control over this by using your own derivations to customise the exact set of versions; in practice, though, most folks using Nix aren't deep enough into it for that.
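The "move atomically to a new set" mechanics look roughly like this (a sketch; `<rev>` stands in for a real nixpkgs commit hash, and the chosen packages are illustrative):

```nix
# Pin the whole package set to one nixpkgs commit. Every package taken
# from `pinned` comes from that same snapshot, and all of them move
# together when <rev> is bumped.
let
  pinned = import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
  }) { };
in
{
  inherit (pinned) nodejs python3;
}
```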
James_K · 5h ago
Put out fewer versions of things. It is entirely possible to write a piece of software and only change the interface of it at rare intervals. The best solution I can think of though would be to allow one version of a package to provide multiple versions of its interface. Suppose you want to increment the minor version number of your code and this involves changing the signatures of a number of functions, you could design a programming language packaging system such that both versions are defined in the same package, sharing code when needs be.
chriswarbo · 3h ago
That's why everything in Nixpkgs is defined as a function, which takes dependencies as arguments.
It also puts a function in the result, called `override`, which can be called to swap out any of those arguments.
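For example (an illustrative use of `override`; the argument name to swap depends on the particular package's function signature):

```nix
let
  # curl's build function takes an `openssl` argument; `override` swaps
  # it without forking nixpkgs:
  curlWithLibressl = pkgs.curl.override { openssl = pkgs.libressl; };
in
curlWithLibressl
```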
James_K · 3h ago
Which leads to the exact problems defined in this article. Many programs using many library versions. It would be much better from both a security and size perspective if these disparate packages could be served by a single shared object using versioned symbols.
chriswarbo · 3h ago
Hmm, not sure I agree. Most of those arguments get populated by taking the fixed-point, along with any given overlays; so it's easy to swap out a library everywhere, just by sticking in an overlay. The exceptions mostly seem to be things that just don't work with the chosen version of some dependency; and even that's quite rare (e.g. it's common for Nixpkgs maintainers to patch a package in order to make the chosen dependency work; though that can cause other problems!)
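A minimal sketch of that overlay mechanism (package choice and patch file are illustrative):

```nix
# An overlay swaps a library at the fixed point, so every package built
# from `pkgs` that takes `zlib` as an argument sees the patched one.
let
  overlay = final: prev: {
    zlib = prev.zlib.overrideAttrs (old: {
      patches = (old.patches or [ ]) ++ [ ./my-fix.patch ];
    });
  };
  pkgs = import <nixpkgs> { overlays = [ overlay ]; };
in
pkgs.git  # links against the patched zlib
```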
aidenn0 · 5h ago
You can actually have both if you do it right. It's trivial to build a rust package with Nix from a Cargo.lock file, for example. Nixpkgs is contrary to bespoke version soup, but Nix itself can be fine with it.
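"Trivial" as in roughly this, using the stock `rustPlatform` helper (the name and version are illustrative):

```nix
# Build a Rust crate with dependencies vendored at exactly the versions
# recorded in Cargo.lock -- bespoke version soup, reproducibly.
{ pkgs ? import <nixpkgs> { } }:
pkgs.rustPlatform.buildRustPackage {
  pname = "myapp";
  version = "0.1.0";
  src = ./.;
  cargoLock.lockFile = ./Cargo.lock;
}
```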
Cloudef · 8h ago
I don't see why they couldn't have made their own derivations instead of relying on nixpkgs hashes.
No comments yet
rcarmo · 5h ago
Hmmm. Interesting. I looked at the deployment file and... I'm doing a very similar thing, but with docker-compose and caddy: https://github.com/piku/kata/tree/compose
Completely different approach to dependencies, though. For now.
neuroelectron · 1h ago
Nix seems to be being undermined internally. I wonder why that would be happening.
eviks · 7h ago
Couldn't quickly find the blog introducing nixpacks, but weren't these issues clearly visible from the start?
> Smaller Builds
> Better caching
what were the benefits that overcame this, and what about those now?
droelf · 8h ago
We've been working on Pixi, which uses Conda packages. You can control versions precisely and also easily build your own software into a package to ship it. Would be curious to chat if it could be useful as an alternative to `mise`.
femiagbabiaka · 7h ago
The new design is very similar to Dagger, interesting.
lloydatkinson · 9h ago
As someone with only a little experience with Nix, the points here don’t really seem right?
> This approach isn’t clear or maintainable, especially for contributors unfamiliar with Nix’s version management.
> For languages like Node and Python, we ended up only supporting their latest major version.
What is not maintainable about this? That they need to make a list of available versions? So, can this not be automated?
Furthermore, why is Railway defining how a user uses Nix?
Surely one of the points of Nix is that you can take a bare machine and have it configured with exactly what versions of packages you want? Why would Railway need to get in the way of the user and limit their versions anyway?
Or did I misunderstand and they don’t even expose Nix to the user? If so, the original question still stands: can’t they automate that list of package versions?
elbear · 8h ago
The version limits come from the fact that the Nix cache doesn't maintain older versions. So, if you use an older version, you will have to compile from sources. It sounds like they didn't want to take it upon themselves and provide a cache with older versions, even though it doesn't sound like much effort.
Honestly, the reasons given don't feel very solid. Maybe the person who introduced Nix left and the ones remaining didn't like it very much (the language itself is not very nice, the docs weren't great either in the past).
Still, I'm not familiar enough with the stack they chose, but does it provide a level of determinism close to Nix? If not, it might come to bite them or make their life harder later on.
e3bc54b2 · 8h ago
Nix cache (cache.nixos.org) does in fact maintain older versions[0]. In fact, they maintain so much older stuff (binaries and associated source), that they are used in both research[1][2] and are having issues with gigantic cache size[3][4].
And yes, their reasoning implies NIH and just unfamiliarity combined with unwillingness to really understand Nix.
Even if this weren’t true, setting up their own binary cache on S3 would have been trivial. It took me half a day to set one up for our CI pipeline.
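The setup amounts to little more than this (bucket name illustrative; a signing key is optional but recommended):

```shell
# Push build outputs to an S3-backed binary cache:
nix copy --to 's3://my-nix-cache?region=us-east-1' ./result

# Consumers then list it in nix.conf, e.g.:
#   substituters = s3://my-nix-cache?region=us-east-1 https://cache.nixos.org
#   trusted-public-keys = my-cache-key:...  (plus the default cache key)
```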
akho · 6h ago
Nix cache does provide old versions. What they seem to want is old versions built with new versions of dependencies. That's also possible, but you will have to build things.
koolala · 6h ago
Patching old software with newer components? Does another tool really offer that automatically? This article is a promotion for their tool that does?
akho · 5h ago
That’s what dynamic linking does in most linux distributions.
cormacrelf · 8h ago
From the article:
> The way Nixpacks uses Nix to pull in dependencies often results in massive image sizes with a single /nix/store layer ... all Nix and related packages and libraries needed for both the build and runtime are here.
This statement is kinda like “I’m giving up on automobiles because I can’t make them go forward”. This is one of the things Nix can do most reliably. It automates the detection of which runtime dependencies are actually referenced in the resulting binary, using string matching on /nix/store hashes. If they couldn’t make it do that, they’re doing something pretty weird or gravely wrong. I wouldn’t even know where to start to try to stop Nix from solving this automatically!
I wouldn’t read too much into their experience with it. The stuff about versioning is a very normal problem everyone has; it would have been more interesting if they had attempted to solve it.
mplanchard · 5h ago
To be fair to the authors, this IS a problem, albeit one they phrased poorly, especially with building docker images via nix. The store winds up containing way more than you need (eg all of postgres, not just psql), and it can be quite difficult to patch individual packages. Derivations are also not well-pruned in my experience, leading to very bloated docker images relative to using a staged Dockerfile.
Image size isn’t something we’ve focused a lot on, so I haven’t spent a ton of time on it, but searching for “nix docker image size” shows it to be a pretty commonly encountered thing.
codethief · 9h ago
> Or did I misunderstand and they don’t even expose Nix to the user?
That's at least my understanding, yes.
notnmeyer · 9h ago
you can not have a dockerfile in your project at all, push your code to them, and they’d build an image for it with nixpacks. you’d see nix stuff in your build logs, but it’s behind the scenes for the most part.
api · 4h ago
Nix strikes me as an incredibly well thought out solution to a set of problems that should not exist.
The OS should be immutable. Apps and services and drivers/extensions should be self contained. Things should not be installed “on” the OS. This entire concept is a trillion dollar mistake.
wg0 · 7h ago
> We also changed the codebase from Rust to Go because of the Buildkit libraries.
Go is the best choice at the moment for such tools. These tools start a process, do lots of IO and exit.
Very pragmatic choice.
Onavo · 7h ago
Go also statically links all dependencies and reinvents all the wheels usually provided by the system land. Cross compilation is trivial. It is unrivaled when it comes to deployment simplicity.
xvilka · 7h ago
Rust does the same.
bithavoc · 5h ago
Rust can cross-compile, yes, but it's not as seamless. For example, Rust cannot cross-compile Windows binaries from Linux without external support like MinGW.
Go can cross-compile from Linux to Windows, Darwin and FreeBSD without requiring any external tooling.
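Concretely, Go's entire cross-compile story is two environment variables, provided cgo is off (targets shown are illustrative):

```shell
# Build Windows, macOS, and FreeBSD binaries from a Linux host,
# with no extra toolchain installed:
GOOS=windows GOARCH=amd64 CGO_ENABLED=0 go build -o app.exe .
GOOS=darwin  GOARCH=arm64 CGO_ENABLED=0 go build -o app-mac .
GOOS=freebsd GOARCH=amd64 CGO_ENABLED=0 go build -o app-bsd .
```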
hu3 · 6h ago
Yeah, and at least Go provides a giant, Google-engineering-tier standard library, so reinventing the wheel here doesn't hurt productivity so much.
Meanwhile Rust requires a pile of variable quality community driven crates to do basic things.
bigyabai · 5h ago
Both languages have enormous cargo-culting issues when you try to do anything that isn't fizzbuzz. The bigger difference that I'd expect people to identify is that Rust generates freestanding binaries where Go software requires a carefully-set runtime. There are pros and cons to each approach.
hu3 · 2h ago
None of them generate true freestanding executables by default. Both Go and Rust require glibc.
And Go's runtime is built in by default, unlike Java's, so there's nothing to "carefully set".
bigyabai · 2h ago
By that definition, I cannot name a single high-level programming language that generates freestanding binaries.
hu3 · 1h ago
Many of them can, including Rust and Go. Just not with default arguments. You need to pass linking arguments.
This is not done by default to reduce binary sizes.
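For reference, the usual incantations (illustrative; exact flags vary by target and toolchain version):

```shell
# Go: disabling cgo removes the libc dependency entirely:
CGO_ENABLED=0 go build -o app .

# Rust: statically link the C runtime, e.g. via the musl target:
rustup target add x86_64-unknown-linux-musl
RUSTFLAGS="-C target-feature=+crt-static" \
  cargo build --release --target x86_64-unknown-linux-musl
```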
foooorsyth · 7h ago
>The biggest problem with Nix is its commit-based package versioning.
I am naive about Nix, but...
...isn't that like...the whole selling point of Nix? That it's specific about what you're getting, instead of allowing ticking time bombs like python:latest or npm-style glibc:^4.4.4
Odd to attach yourself to Nix then blog against its USP.
grep_name · 45m ago
Eh. I've been using NixOS for years now and still find that I often desperately, desperately wish I could upgrade just one program that I need a new version of without risking that, you know, any individual one of my installed packages has a change between the last update and now that messes up my workflow. Or that I could pin a version of a piece of software I'm happier with, without essentially rolling my own package repo. It is, in fact, the only package manager I'm aware of that makes it such a pain to do that. It's only because people in this thread are insisting it's doable that I say 'such a pain' instead of just 'impossible'.
A few weeks ago I needed to update firefox for a bug fix that was causing a crash, but of course that meant updating all of nixpkgs. When I finished the switch, the new version of pipewire was broken in some subtle way and I had to roll it back and have been dealing with firefox crashing once a week instead. I can't imagine pitching this to my team for development when I'm having this kind of avoidable issue just with regular packages that aren't even language dependencies.
To those who say 'if you want to lock your dependencies for a project, you can just build a nix flake from a locked file using the <rust | python | npm> tools' I say, why the hell would I want to do that? Being able to manage multiple ecosystems from the same configuration tool was half the draw of nix in the first place!
awinter-py · 6h ago
tldr we are moving on from nix because we are selling an alternative to nix?
timeon · 2h ago
This is not appropriate to post here but the site has nothing to do with railways.
> That doesn't make any sense. You can literally just split them into separate layers in whatever arbitrary fashion you'd like. The built-in Nixpkgs docker tooling has some support for this even.
Is this obvious, simple, and default behaviour?
We are talking about a company full of professionals. If they need something obvious, simple, and default to manage their build - the core business function that turns their text into deployable artifacts - maybe there is a skill culture issue.
The industry is full of ineptitude though.
While I disagree with the person you're replying to, I find your reply dismissive.
I don't know the behind-the-scenes reasons for this, but I can very, very easily imagine a similar situation from my experience.
Nix is a full-blown functional programming language along with a very rich (and poorly documented, niche, and second only to C++ templates in error incomprehensibility[1]) ecosystem. It's not like "docker" or "kubernetes", where you're mostly dealing with "data" files like yaml, json, or a Dockerfile. You're dealing with a complex programming project.
With that in mind:
- You have a core team with 1 or 2 people with Nix passion/expertise.
- Those people do most of the heavy lifting in implementation.
- They onboard the team onto Nix
- They evangelize Nix through the org/company
- They mod and answer all the "#nix-discussions" channel questions
Initially the system is fairly successful and everything is good. Over the next 5-6 years it accumulates a lot of feature asks. The original "Nix person" has long left. Most of the original people have either moved to other projects or aren't particularly passionate about Nix anymore. In fact, the "best" developer you have, who has inherited the whole Nix thing, has only really had to deal with all the shit parts of Nix and the system. They are the ones fixing issues, dealing with bugs, etc. All while maintaining 3 stacks: a Nix stack, a Go stack, and a Rust stack.
Eventually that person/team that's annoyed by maintaining the Nix project wins. They want to own that code. They don't want to use Nix any more. They know what's needed, they want to implement it as part of their main Go stack that they are actively working on. They can optimize things for their special case without having to worry about "implementing it the Nix way" or "doing it upstream".
They promise you (the management who is open to the idea, but trying to understand the ROI) feature parity + top 5 feature asks for the initial release. You trust the team enough to let them do what they think is best.
[1]: LLMs are really good at suggesting a solution given an error message. Nix errors bring them to their knees. It's always "Hmmm.... it appears that there is an error in your configuration... have you tried a `git revert`?"
Just because people decide stuff for money doesn't mean I can't call them bad. Not everyone is equally skilled.
And your parable is exactly the issue. The unskilled and loud and whiny do often win, and it's a shame. I see it all the time.
(Also, you're way overstating Nix as a "full blown FP language." It isn't hard to learn. I learned it just by existing on a project with it. Within 6 months, I'm apparently a "Nix expert" and people now point at me as one of the people who "knows it" and "you can't expect everyone to know it like you do." idk, maybe I'm some genius, but I think it's more that I just don't have a bad personality.)
> A perfectly capably (but perhaps a bit esoteric) technology is picked by a smart passionate person for a project.
> The novel technology is in 1 isolated module that's mostly feature complete for the first 1-3 years.
> People in the team/company deal with that "thing" as a blackbox more and more
> 5-10 years later, mostly new team maintaining the project. They hate the weird choice of tech. "Why is only this one component different???"
> People understand the contract with the "black box" very well, but HATE the "black box". People think "We can implement the black box contract very easily"
And I am dismissing the types you describe specifically. I dismiss them (privately amongst the likeminded) at work all the time too. I just put them on a list in my head when they start spouting these sorts of bad values.
If you identify these things as an issue, any competent engineer should find a variety of solutions with search and/or LLM assistance within an hour, since they’re not super obscure requirements.
I’m not saying Railway didn’t do this and realize that these common solutions weren’t viable for them, but it’s odd to not mention anything they tried to get around it.
But as kfajdsl points out: that's not what TFA is. This is a company building a product on top of Nix. Package management is their expertise. Anyone using Nix in that capacity understands the distinction between nix and nixpkgs. Which they certainly do--GP only remarked it was odd they didn't explain it, not that they didn't know.
This feels rather dismissive. They wrote a bespoke solution, not a weekend toy. Surely you'd agree that they have more than just surface-level knowledge of Nix, to be able to distinguish between Nix and Nixpkgs? They're already doing non-trivial things by merging multiple commits of Nixpkgs in order to get different versions of different tools!
> Is this obvious, simple, and default behaviour?
Well, Nix doesn't do much of anything "by default", it's a pretty generic tool. But insofar as it matters, Yes, pretty much. `dockerTools.buildLayeredImage` will in fact automatically build a layered image, and it is the most "obvious" way (IMO) to build a docker image. There is also `dockerTools.buildImage` but there's no particular reason to use it unless you specifically want a flattened image. (The documentation available is clear enough about this. In fact, in practice, much of the time you'd probably actually want `dockerTools.streamLayeredImage` instead, which is also documented well enough, but that's beyond the point here.)
But that's not my point. As far as I know, Nixpacks doesn't even use this functionality; I'm pretty sure they wrote their own OCI image-building tools. And in that sense, it is not obvious why they can't split the Nix store, and the article doesn't explain it.
My point wasn't to be dismissive about the difficulties of Nix; it's that the blog post doesn't really do a good job of explaining things. It makes it sound like these are normal problems in Nix, but they are not; even the official Nixpkgs documentation often points to third-party solutions for when you're working outside of Nixpkgs, since most of the Nixpkgs tooling is geared toward Nixpkgs and NixOS usage. As an example, take a look at this section of the Rust documentation in Nixpkgs:
https://github.com/NixOS/nixpkgs/blob/master/doc/languages-f...
So even if you're relatively new to Nix, as long as you are reading the documentation you will indeed definitely be aware of the fact that there is more to the Nix ecosystem than just Nixpkgs. It may not be surface-level Nix knowledge, but it's certainly close.
The way nix works with the way nixpkgs is structured, pinning a version of any package means pinning a commit of the entire nixpkgs tree. Since builds of node/python/ruby packages depend on stuff outside of the package dir in the tree, you need that mapping between versions and commits. It is also a leaky abstraction, so they will need to expose that to their users, who now may run into situations where they need to align various states of the nixpkgs repo when they just wanted to "yarn add new-fancy-nodejs-package-with-linked-native-deps".
Using nix without nixpkgs may be fine for more scoped use but seems hard to justify for a platform like Railway.
The Nix language is something I could criticize for hours without getting bored, but it is what it is. It's old and they did the best they could and it's probably not worth changing. The Nix build system feels awfully primitive to me, often rebuilding stuff that doesn't need to be rebuilt for no good reason. (For example, my NixOS installer ISO has a ton of the build depending on the cmdline I pass to the kernel [just console=ttyS2,1500000n8], so changing the speed of my serial port requires about 3 minutes of build time. It's goofy and makes me laugh, I'm not going to stop using Nix because of it... but it's also something that I wouldn't let happen in MY build.)
Nix for Docker images is, in my opinion, what it's the worst at. A long time ago, I was writing some software in Go and needed to add the pg_dump binary from Postgres to my container image. The infrastructure team suggested using Nix, which I did, but our images blew up from 50MB of our compressed go binary to 1.5GB of God Knows What. pg_dump is 464K. I ended up doing things my way, with Bazel and rules_debian to install apt packages, and the result (on top of distroless) was much cleaner and more compact. My opinion with some actual Nix experience is that a Nix system always ends up being 1.4GB. My installer ISO is 1.4GB. My freshly installed machine is 1.4GB. That's just how it is, for whatever reason.
Finally, the whole "I would like to build a large C++ project" situation is a well worn path. s/C++/Rust doesn't change anything material. There are build systems that exist to make the library situation more tolerable. They are all as complicated as Nix, but some work much better for this use case. Nix is trying to be a build system for building other people's software, supporting nixpkgs, and lands on the very generic side of things. Build systems that are designed for building your software tend to do better at that job. Personally, I'm happy with Bazel and probably wouldn't use anything else (except "go build" for go-only projects), but there are many, many, many other options. 99% of the time, you should use that instead of Nix (and write a flake so people can install the latest version of Your Thing with home-manager; or maybe I'm just the only person that uses their own software day to day and you don't actually need to do that...)
That's strange, I never had problems building really tiny docker (release) images with nix, in fact it felt easier than doing it with alpine. You just get exactly what you specify, no more.
(OTOH, when developing in nix, I always end up with a huge /nix/store and have no idea how to clean it without garbage collecting everything and having to wait all over)
FYI you can avoid things getting garbage-collected by doing `nix-store --add-root`; that makes an "(indirect) garbage collector root"[0]. Especially useful if you're using import-from-derivation, since that imported derivation won't appear in the dependencies of your final build output (which, to be clear, is a good thing; since it lets us calculate a derivation, e.g. by solving dependency constraints or whatever, without affecting the eventual hash if that calculation happens to match a previous one!)
[0] https://nix.dev/manual/nix/2.18/package-management/garbage-c...
> Railway injects a deployment ID environment variable into all builds.
They could've done it in the next layer after installation. Also, you can split packages into different layers. There's even automation for it if you need batches to keep the number of layers down.
nix2container [1] is actually able to do that: you can explicitly build layers containing a subset of the dependencies required by your image. An example is provided in this section: https://github.com/nlewo/nix2container?tab=readme-ov-file#is...
For instance, if your images use bash, you can explicitly create a layer containing the bash closure. This layer can then be used across all your images and is only rebuilt and repushed if the bash closure is modified.
> > pull in dependencies often results in massive image sizes with a single /nix/store layer
This is the case for the basic nixpkgs.dockerTools.buildImage function but this is not true with nix2container, nor with nixpkgs.dockerTools.streamLayeredImage. Instead of writing the layers in the Nix store, these tools build a script to actually push the image by using existing store paths (which are Nix runtime dependencies of this script). Regarding the nix2container implementation, it builds a JSON file describing the Nix store paths for all layers and uses Skopeo to push the image (to a Docker daemon, a registry, podman, ...), by consuming this JSON file.
(disclaimer: i'm the nix2container author)
[1] https://github.com/nlewo/nix2container
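The explicit-layer version looks roughly like this (a sketch based on the README's isolated-layers example; it assumes `nix2container` and `pkgs` are already in scope):

```nix
# A reusable layer holding the bash closure, shared across images;
# it is only rebuilt/repushed when the bash closure changes.
nix2container.buildImage {
  name = "myapp";
  layers = [
    (nix2container.buildLayer { deps = [ pkgs.bashInteractive ]; })
  ];
  config.entrypoint = [ "${pkgs.bashInteractive}/bin/bash" ];
}
```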
A good example of 'use the right tool for the right job'. Nix is great for some use cases and awful for others. The problem is the Nix learning curve is so high that by the time you've grasped it enough to make a decision you feel you've invested too much time to back out now and pivot to something else so you try to shoehorn it to solve the original need.
I.e. it is very easy for an AI to create a to-spec shell.nix (some Python packages, some Linux packages, some env vars, some path entries, etc.), or a configuration.nix, because of this paradigm.
I do this a lot to include envs with repos that fully support the package. It would probably be more reproducible with flakes (a flake.nix is like a shell.nix but with version pinning… or something, I’m still climbing that learning hill).
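A shell.nix of the kind described is small enough that generation works well; an illustrative example (package choices and variables are made up):

```nix
{ pkgs ? import <nixpkgs> { } }:
pkgs.mkShell {
  # Some Python packages and some Linux packages:
  packages = [
    pkgs.python3
    pkgs.python3Packages.requests
    pkgs.ripgrep
  ];
  # Some env vars and path entries:
  MY_API_URL = "http://localhost:8080";
  shellHook = ''
    export PATH=$PWD/scripts:$PATH
  '';
}
```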
> Since we transitioned away from Nix, we also transitioned away from the name Nixpacks in favor of Railpack. We also changed the codebase from Rust to Go because of the Buildkit libraries.
Suddenly the move away from Nix seems less like an incremental change and more like one part of a complete overhaul of the entire project. Did they have a team changeover or something? Or did they just want to start over from scratch and rewrite the whole project?
It also seems strange to switch to an entirely different programming language for a single library. I haven’t found library FFI to be a huge obstacle in Rust.
"Default versions" breaking things that depend on them? What is that? It is like using docker's ":latest" tag and being surprised each time that a new server falls on its face because the "default" image is actually a different version from the previous "default" image.
I don't understand any of the explanations in this blog post. Seems like people who have zero clue about what a "version" of a software is.
"no way of splitting up the Nix dependencies into separate layers" - Why? Of course you can split /nix/store into as many layers as you need. Do they even know how to use containers and how to use Nix in the first place?
With the clear incompetence of these people, no wonder that their proposed solution smells like a decomposed fish.
Classic NIH syndrome. There is going to be no surprise to see them meet the exact same problems they didn't solve with Nix to infest their new "solution".
A nix wrapper or a deployment platform
I don't think any VC worth the time is going to sit around nitpicking how much Nix matters to their offering if they're making increasing amounts of money.
It sounds like it's a little bit too late, but I'm happy to provide some consulting on how you can get it to work idiomatically with Nix.
Product looks cool!
Sure, you are not going to get shared library conflicts, but I think this solution is extremely wasteful, and it can make development painful too - look at nixpkgs' staging process.
Of course lots of software isn't ready for bit-for-bit reproduction, which is why Nix has taken such a pragmatic approach. (I have written a lot about this.)
It's all a series of tradeoffs. If your goal is reproducibility (as close as you can get), you will likely have a larger graph, since you are accounting for more!
Sometimes we like to believe we can have our cake and eat it too rather than understand life's a series of tradeoffs.
When we think we are getting a silver bullet, we've likely just pushed that complexity somewhere else.
> When we think we are getting a silver bullet, we've likely just pushed that complexity somewhere else.
True but we kind of just stopped looking. and I feel much of the solution space hasn't been explored.
Each person doesn't have to perform the build on their own. A build server will evaluate it and others will pull it from the cache.
The greater waste that nix eliminates is the waste of human time spent troubleshooting something that broke in production because of what should have been an innocent change, and the lost business value from the decreased production. When you trust your dependencies are what you asked for, it frees the mind of doubt and lets you focus on troubleshooting more efficiently towards a problem.
Aside, I spent over a decade on Debian derived distros. I never once had one of these distros complete an upgrade successfully between major versions, despite about 10 attempts spread over those years, though thankfully always on the first sacrificial server attempted. They always failed with interesting issues, sometimes before they really got started, sometimes borking the system and needing a fresh install. With NixOS, the upgrades are so reliable they can be done casually during the workday in production without bothering to check that they were successful. I think that wouldn't be possible if we wanted the false efficiency of substituting similar but different packages to save the build server from building the exact specification. Anything short of this doesn't get us away from the "works on my machine" problem.
I totally understand the value proposition of Nix. However I think saying "bad time" is a bit hyperbolic. At most it's "You'll be losing a pretty significant guarantee compared to Nix". Still probably "packed to be more likely to work correctly" than 95% of software out there.
So they can take a stable, well-managed OS as a base, use tools like mise and asdf to build a bespoke version soup of tools and language runtimes on top, then run an app on top of that. It will almost never break. When it does break, they fiddle with versions and small fixes until it works again, then move on. The fact that it broke is annoying, but unimportant. Anything that introduces friction, requires more learning, or requires more work is a waste of time.
Others would instead look for a solution to stop it from breaking ever again. This solution is allowed to introduce friction, require more learning, or require more work, because they consider the problem important. These people want Nix.
Most people are in the first group, so a company like Railway that wants to grow ends up with a solution that fits that group.
Care to elaborate what that means and what the alternative is?
Package maintainers often think in terms of constraints like "I need 1.0.0 <= pkg1 < 2.0.0 and 2.5.0 <= pkg2 < 3.0.0". This tends to make total sense in the micro context of a single package but always falls apart IMO in the macro context. The problems are:
- constraints are not always right (say pkg1==1.9.0 actually breaks things)
- the combined constraints of all dependencies end up leaving very few degrees of freedom in constraint solving, so you can't in fact just take any pkg1 and use it
- even if you can use a given version, your package may have a hidden dependency on one of pkg1's dependencies, which only becomes apparent once you start changing pkg1's version
Constraint solving is really difficult, and while it's a cool idea, I think Nixpkgs takes the right approach in mostly avoiding it. If you want a given version of a package, you are forced to take the whole package set with you. So while you can't, say, take a version of pkg1 from 2015 and use it with a version of pkg2 from 2025, you can take the whole 2015 Nixpkgs and get both pkg1 and pkg2 from 2015.
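To make that concrete, here's a sketch of pinning an entire historical Nixpkgs snapshot (the commit hash in the URL and the pkg1/pkg2 names are placeholders, not real pins):

```nix
# Pin the whole package set to one nixpkgs commit; every package you
# pull out of `pkgs` then comes from that same consistent snapshot.
let
  pkgs = import (builtins.fetchTarball {
    # Placeholder revision -- substitute the commit you want to pin.
    # A `sha256 = ...;` attribute would normally lock the contents too.
    url = "https://github.com/NixOS/nixpkgs/archive/0000000000000000000000000000000000000000.tar.gz";
  }) { };
in
pkgs.mkShell {
  # pkg1 and pkg2 are illustrative names, both from the same snapshot.
  packages = [ pkgs.pkg1 pkgs.pkg2 ];
}
```

There's no per-package version solving here at all: picking the snapshot picks every version at once.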
You could not depend on a patch version directly in source. You could force a patch version other ways, but each package would depend on a specific major/minor and the patch version was decided at build time. It was expected that differences in the patch version were binary compatible.
Minor version changes were typically source compatible, but not necessarily binary compatible. You couldn't just arbitrarily choose a new minor version for deployment (well, you could, but without expecting it to go well).
Major versions were reserved for source or logic breaking changes. Together the major and minor versions were considered the interface version.
There was none of this pinning to arbitrary versions or hashes (though, you could absolutely lock that in at build time).
Any concept of package (version) set was managed by metadata at a higher level. For something like your last example, we would “import” pkg2 from 2025, bringing in its dependency graph. The 2025 graph is known to work, so only packages that declare dependencies on any of those versions would be rebuilt. At the end of the operation you’d have a hybrid graph of 2015, 2025, and whatever new unique versions were created during the merge, and no individual package dependencies were ever touched.
The rules were also clear. There were no arbitrary expressions describing version ranges.
No comments yet
Thank you, I was looking for an explanation of exactly why I hate Nix so much. It takes a complicated use case, and tries to "solve" it by making your use-case invalid.
It's like the Soylent of software. "It's hard to cook, and I don't want to take time to eat. I'll just slurp down a bland milkshake. Now I don't have to deal with the complexities of food. I've solved the problem!"
It removes the “magic” constraint solving that seemingly never works and pushes it to the user to make it work
Note that the parent said "I think Nixpkgs takes the right approach in mostly avoiding it". As others have already said, Nix != Nixpkgs.
If you want to go down the "solving dependency version ranges" route, then Nix won't stop you. The usual approach is to use your normal language/ecosystem tooling (cabal, npm, cargo, maven, etc.) to create a "lock file"; then convert that into something Nix can import (if it's JSON that might just be a Nixlang function; if it's more complicated then there's probably a tool to convert it, like cabal2nix, npm2nix, cargo2nix, etc.). I personally prefer to run the latter within a Nix derivation, and use it via "import from derivation"; but others don't like importing from derivations, since it breaks the separation between evaluation and building. Either way, this is a very common way to use Nix.
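As a minimal sketch of the "if it's JSON that might just be a Nixlang function" case (the file name `deps.lock.json` and its attribute layout are made up for illustration, not any real ecosystem's format):

```nix
# Read a language-ecosystem lock file from Nixlang and turn each
# locked entry into a fixed-output fetch. The lock file layout here
# is hypothetical; real converters (cabal2nix etc.) do much more.
let
  lock = builtins.fromJSON (builtins.readFile ./deps.lock.json);
  fetchDep = name: dep: builtins.fetchurl {
    url = dep.url;        # exact URL recorded by the language tooling
    sha256 = dep.sha256;  # content hash recorded in the lock file
  };
in
builtins.mapAttrs fetchDep lock.dependencies
```

The point is that the version *solving* already happened in the language tooling; Nix only consumes the solved, hashed result.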
(If you want to be even more hardcore, you could have Nix run the language tooling too; but that tends to require a bunch of workarounds, since language tooling tends to be wildly unreproducible! e.g. see http://www.chriswarbo.net/projects/nixos/nix_dependencies.ht... )
Packages in nixpkgs follow the "managed distribution" model, where almost all package combinations can be expected to work together and remain reasonably stable (on the stable branch) for 6 months, receiving security backports; then you do all your major upgrades by jumping to the next stable branch when it is released.
This is likely the source of their commit-based versioning complaint/issue, i.e. the commits in question are probably https://github.com/NixOS/nixpkgs revisions, if they aren't maintaining their own overlay of derivations.
This is in contrast to systems that allow all of the versions to move independently of each other.
i.e. in the Nix world you don't just update one package; you move atomically to a new set of package versions. You can have full control over this by using your own derivations to customise the exact set of versions, though in practice most folks using Nix aren't deep enough into it for that.
It also puts a function in the result, called `override`, which can be called to swap out any of those arguments.
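For illustration, a sketch of what that looks like (`somePkg` and `someDep` are placeholder attribute names, not real packages):

```nix
# `callPackage` supplies a package's inputs automatically; the resulting
# derivation carries an `override` function that re-calls it with some
# of those arguments swapped out.
let
  pkgs = import <nixpkgs> { };
  patched = pkgs.somePkg.override {
    # Swap one dependency for a patched build of it.
    someDep = pkgs.someDep.overrideAttrs (old: {
      patches = (old.patches or [ ]) ++ [ ./fix.patch ];
    });
  };
in
patched
```

Everything downstream that you build against `patched` then gets the swapped dependency, without touching the original definitions.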
No comments yet
Completely different approach to dependencies, though. For now.
> Smaller Builds

> Better caching
What were the benefits that outweighed this originally, and what about them now?
> This approach isn’t clear or maintainable, especially for contributors unfamiliar with Nix’s version management.
> For languages like Node and Python, we ended up only supporting their latest major version.
What is not maintainable about this? That they need to make a list of available versions? So, can this not be automated?
Furthermore, why is Railway defining how a user uses Nix?
Surely one of the points of Nix is that you can take a bare machine and have it configured with exactly what versions of packages you want? Why would Railway need to get in the way of the user and limit their versions anyway?
Or did I misunderstand and they don’t even expose Nix to the user? If so, the original question still stands: can’t they automate that list of package versions?
Honestly, the reasons given don't feel very solid. Maybe the person who introduced Nix left and the ones remaining didn't like it very much (the language itself is not very nice, the docs weren't great either in the past).
Still, I'm not familiar enough with the stack they chose, but does it provide a level of determinism close to Nix? If not, it might come to bite them or make their life harder later on.
And yes, their reasoning implies NIH and just unfamiliarity combined with unwillingness to really understand Nix.
> The way Nixpacks uses Nix to pull in dependencies often results in massive image sizes with a single /nix/store layer ... all Nix and related packages and libraries needed for both the build and runtime are here.
This statement is kinda like “I’m giving up on automobiles because I can’t make them go forward”. This is one of the things Nix can do most reliably. It automates the detection of which runtime dependencies are actually referenced in the resulting binary, using string matching on /nix/store hashes. If they couldn’t make it do that, they’re doing something pretty weird or gravely wrong. I wouldn’t even know where to start to try to stop Nix from solving this automatically!
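For what it's worth, the built-in Nixpkgs docker tooling leans directly on that closure scanning, and can also split the store across many image layers; a sketch using `dockerTools.buildLayeredImage` (the image name and contents are just examples):

```nix
# buildLayeredImage computes the runtime closure of `contents` by
# scanning build outputs for /nix/store hash references, then spreads
# that closure across up to `maxLayers` image layers so unrelated
# dependency changes don't invalidate each other's cache.
let
  pkgs = import <nixpkgs> { };
in
pkgs.dockerTools.buildLayeredImage {
  name = "example-app";
  maxLayers = 100;
  contents = [ pkgs.hello ];
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

Build-only dependencies never appear in the result, because nothing in the runtime closure references them.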
I wouldn't read too much into their experience with it. The versioning stuff is a very normal problem everyone has; it would have been more interesting if they had attempted to solve it.
Image size isn’t something we’ve focused a lot on, so I haven’t spent a ton of time on it, but searching for “nix docker image size” shows it to be a pretty commonly encountered thing.
That's at least my understanding, yes.
The OS should be immutable. Apps and services and drivers/extensions should be self contained. Things should not be installed “on” the OS. This entire concept is a trillion dollar mistake.
Go is the best choice at the moment for such tools. These tools start a process, do lots of IO and exit.
Very pragmatic choice.
Go can cross-compile from Linux to Windows, Darwin and FreeBSD without requiring any external tooling.
Meanwhile Rust requires a pile of variable-quality, community-driven crates to do basic things.
And Go's runtime is built into the binary by default, unlike Java's, so there's nothing to "carefully set".
This is not done by default to reduce binary sizes.
I am naive about Nix, but...
...isn't that like...the whole selling point of Nix? That it's specific about what you're getting, instead of allowing ticking time bombs like python:latest or npm-style glibc:^4.4.4
Odd to attach yourself to Nix then blog against its USP.
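For context, that specificity is about as cheap as it gets with a flake (nixos-24.05 is a real release branch; the output shown is just an example):

```nix
# flake.nix: the input pins nixpkgs to a branch, and the exact
# revision plus a content hash get recorded in flake.lock --
# the opposite of `python:latest`.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  outputs = { self, nixpkgs }: {
    packages.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.python3;
  };
}
```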
A few weeks ago I needed to update firefox for a bug fix that was causing a crash, but of course that meant updating all of nixpkgs. When I finished the switch, the new version of pipewire was broken in some subtle way and I had to roll it back and have been dealing with firefox crashing once a week instead. I can't imagine pitching this to my team for development when I'm having this kind of avoidable issue just with regular packages that aren't even language dependencies.
To those who say 'if you want to lock your dependencies for a project, you can just build a nix flake from a locked file using the <rust | python | npm> tools' I say, why the hell would I want to do that? Being able to manage multiple ecosystems from the same configuration tool was half the draw of nix in the first place!