Unexpected productivity boost of Rust
208 points by bkolobara on 8/27/2025, 3:48:07 PM | 246 comments | lubeno.dev ↗
What’s remarkable is that (a) I have very little Rust experience overall (mostly a Python programmer), (b) very little virtio experience, and (c) essentially no experience working with any of the libraries involved. Yet, I pulled off the refactor inside of a week, because by the time the project actually compiled, it worked perfectly (with one minor Drop-related bug that was easily found and fixed).
This was aided by libraries that went out of their way to make sure you couldn’t hold them wrong, and it shows.
Seriously, why would you think that assigning a value would stop your script from executing? Maybe the Typescript example is missing some context, but it seems like such a weird case to present as a "data race".
It's obviously not a good idea to rely on such assumptions when programming, and when you find yourself having such a hunch, you should generally stop and verify what the specification actually says. But in this case, the behaviour is weird, and all bets are off. I am not at all surprised that someone would fall for this.
I'm sure I knew the href thing at one point. It's probably even in the documentation. But the API itself leaves a giant hole for this kind of misunderstanding, and it's almost certainly a mistake that a huge number of people have made. The more pieces of documentation we need to keep in our heads in order to avoid daily mistakes, the exponentially more likely it is we're going to make them anyway.
Good software engineering is, IMHO, about making things hard to hold the wrong way. Strong types, pure functions without side effects (when possible), immutable-by-default semantics, and other such practices can go a long way towards forming the basis of software that is hard to misuse.
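To make that concrete, a tiny Rust sketch (all names are made up) of what "hard to hold the wrong way" can look like: a newtype per kind of ID, plus immutable-by-default bindings:

    struct UserId(u64);
    struct OrderId(u64);

    fn cancel_order(_id: OrderId) {}

    fn main() {
        let user = UserId(42);
        // cancel_order(user);     // compile error: expected `OrderId`, found `UserId`
        cancel_order(OrderId(7));

        let total = 10;            // immutable by default
        // total += 1;             // compile error without `let mut total`
        println!("{} {}", total, user.0);
    }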
It greatly heartens me that we've made it to the point where someone writing Javascript for the browser is recommended to consult a spec instead of a matrix of browsers and browser versions.
However, that said, why would a person embark on research instead of making a simple change to the code so that it relies on fewer assumptions, and so that it's readable and understandable by other programmers on their team who don't know the spec by heart?
*(int*)0 = 0;
Modern C compilers might require you to obfuscate this enough to confuse them, because their approach to UB is aggressive: once they see UB they are allowed to do anything, including removing the write entirely. But in the olden days such an assignment consistently led to SIGSEGV and program termination.
IBM did this for a long time
I recall that on DOS, Borland Turbo C would detect writes to address 0 and print a message during normal program exit.
No shit. It's obvious because you literally just read a blog post explaining it. The point is if you sprinkle dozens of "obvious" things through a large enough code base, one of them is going to bite you sooner or later.
It's better if the language helps you avoid them.
On the other hand, that strictness is precisely what leads people to end up with generally reasonable code.
Same thing for the borrow-checker.
[1]: https://downloads.haskell.org/~ghc/7.10.3-rc1/users_guide/ty...
Although you said "mode of operation" and I can't get behind that idea, I think the choice to just wrap overflow by default for the integer types in release builds was probably a mistake. It's good that I can turn it off, but it shouldn't have been the default.
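For anyone unfamiliar, this is the behaviour being discussed; a small sketch (release-build wrapping can be switched off with overflow-checks = true under [profile.release]):

    fn main() {
        let x: u8 = 255;

        // In debug builds this panics; in release builds (default settings) it wraps to 0.
        // let y = x + 1;

        // The explicit alternatives make the intent visible either way:
        let wrapped = x.wrapping_add(1);    // 0, on purpose
        let checked = x.checked_add(1);     // None instead of a silent wrap
        println!("{wrapped} {checked:?}");
    }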
I personally see Rust as an ideal "second system" language, that is, you solve a business case in a more forgiving language first, then switch (parts) to Rust if the case is proven and you need the added performance / reliability.
I know that Rust provides some additional compile-time checks because of its stricter type system, but it doesn't come for free - it's harder to learn and arguably to read
Ownership/borrowing clarifies whether function arguments are given only temporarily to view during the call, or whether they're given to the function to keep and use exclusively. This ensures there won't be any surprise action at a distance when the data is mutated, because it's always clear who can do that. In large programs, and when using 3rd party libraries, this is incredibly useful. Compare that to Go, which has types for slices, but the type system has no opinion on whether data can be appended to a slice or not (what happens depends on capacity at runtime), and you can't lend a slice as a temporary read-only view (without hiding it behind an abstraction that isn't a slice type any more).
Thread safety in the type system reliably catches at compile time a class of data race errors that in other languages could be nearly impossible to find and debug, or at very least would require catching at run time under a sanitizer.
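To illustrate the slice point, a minimal sketch (the function name is made up): a &[u8] parameter is a temporary read-only view, and the signature alone guarantees the callee can't append or mutate:

    // The caller lends a view; it keeps ownership and the buffer cannot change.
    fn checksum(data: &[u8]) -> u32 {
        data.iter().map(|&b| b as u32).sum()
    }

    fn main() {
        let buf: Vec<u8> = vec![1, 2, 3];
        let sum = checksum(&buf);   // buf is unchanged and still owned here
        println!("{sum} {:?}", buf);
    }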
Basically, I don't need ownership if I don't mutate things. It would be nice to have ownership as a concept in case I do decide to mutate things, but it sucks to have to pay attention to it when I don't mutate, and to carry that around all the time in the code.
Non-owning, non-mutating borrow that doesn't require you to clone/copy; transfer of ownership with no clone/copy needed, non-mutating; transfer of ownership where foo can mutate — roughly:
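(A reconstruction of the elided snippets; `Thing`, `tweak` and the function names are placeholders, not anything from the original comment.)

    struct Thing(u32);
    impl Thing { fn tweak(&mut self) { self.0 += 1; } }

    fn borrow_it(_x: &Thing) {}          // non-owning, non-mutating borrow
    fn take_it(_x: Thing) {}             // ownership transferred, no mutation, no clone/copy
    fn take_and_mutate(mut x: Thing) {   // ownership transferred, callee may mutate
        x.tweak();
    }

    fn main() {
        let t = Thing(1);
        borrow_it(&t);                   // t is still usable here
        take_it(t);                      // t is moved; using it below wouldn't compile
        take_and_mutate(Thing(2));
    }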
AFAIK Rust already supports all the different expressivity you're asking for. But if you need two things to maintain ownership over a value, then you have to clone by definition, wrapping in Rc/Arc as needed if you want a single version of the underlying value. You may need to do more syntax juggling than with F# (I don't know the language so I can't speak to it), but that's a tradeoff of being a systems engineering language and targeting a completely different spot on the perf target.

Ah, you are confused on terminology. Borrowing is a thing that only happens when you make references. What you are doing when you pass a non-copy value is moving it.
Generally, anything that is not Copy that you pass to a function should be a (non-mut) reference unless it specifically needs to be something else. This allows you to borrow it in the callee, which means the caller gets it back after the call. That's the workflow the type system works best with; thanks to autoref, having all your functions use borrowed values is the most convenient way to write code.
Note that when you pass a value type to a function, in Rust that is always a copy. For non-Copy types, that just means move semantics, meaning you also must stop using it at the call site. You should not deal with this in general by calling clone on everything, but instead should derive Copy on the types for which it makes sense (small, value semantics), and use borrowed references for the rest.
What I would prefer is that Rust only cares about whether I use it in the caller after the call if I pass a mutable value, because in that case, of course, it could be unsafe if the callee mutates it.
Sometimes Copy cannot be derived and then one needs to implement it or Clone. A few months ago I used Rust again for a short duration, and I had that case. If I recall correctly it was some Serde struct and Copy could not be derived, because the struct had a String or &str inside it. That should be a fairly common case.
Note that calling by value is expensive for large types. What those other languages do is just always call by reference, which you seem to confuse for calling by value.
Rust can certainly not do what you would prefer. In order to typecheck a function, Rust only needs the code of that function and the type definitions of everything else; the contents of the other functions don't matter. This is a very good rule, which makes code much easier to read.
&str is Copy, String is not.
How do they do that without either taking a reference or copying/cloning automatically for you? Would be helpful if you provide an example.
But otherwise I'm not clear what ownership semantics you're trying to express - would be helpful if you could give an example.
Owned objects are exclusively owned by default, but wrapping them in Rc/Arc makes them shared too.
Shared mutable state is the root of all evil. FP languages solve it by banning mutation, but Rust can flip between banning mutation or banning sharing. Mutable objects that aren't shared can't cause unexpected side effects (at least not any more than Rust's shared references).
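A minimal sketch of that "share XOR mutate" rule (nothing here beyond plain std):

    fn main() {
        let mut v = vec![1, 2, 3];

        let shared = &v;            // any number of read-only borrows may coexist
        println!("{}", shared[0]);  // last use of `shared` is here

        let exclusive = &mut v;     // an exclusive borrow: no sharing while it lives
        exclusive.push(4);
        // println!("{}", shared[0]);  // uncommenting this is a compile error:
        //                             // cannot borrow `v` as mutable because it is
        //                             // also borrowed as immutable
        println!("{:?}", v);
    }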
By passing values do you mean 'moving'? Like not passing reference?
So I want to move a value, but also be able to use it after moving it, because I don't mutate it in that other function, where it got moved to. So it is actually more like copying, but without making a copy in memory.
It would be good if Rust realized that I don't have mutating calls anywhere and just let me use the value. When I have a mutation going on, then of course the compiler should throw an error, because that would be unsafe business.
If you call `foo(&value)` then `value` remains available in your calling scope after `foo` returns. If you don't mutate `value` in foo, and foo doesn't do anything other than derive a new value from `value`, then it sounds like a shared reference works for what you're describing?
Rust makes you be explicit as to whether you want to lend out the value or give the value away, which is a design decision, and Rust chooses that the bare syntax `value` is for moving and the `&value` syntax is for borrowing. Perhaps you're arguing that a shared immutable borrow should be the default syntax.
Apologies if I'm misunderstanding!
Syntax is generally the least interesting/important part of languages.
Similarly, Java sidesteps many of these issues by mostly using reference types, but ends up with a different class of errors. So the C/pointer-family static analysis can be quite distinct from that for JVM languages.
Swift is roughly on par with Rust wrt exclusivity and data-race safety, and is catching up on ownership.
Rust traits and macros are really a distinguishing feature, because they enable programmer-defined constraints (instead of just compiler-defined), which makes the standard library smaller.
There's a fine line here: it matters a lot whether we're talking about a "sloppy" 80% solution that later causes problems and is incredibly hard to fix, or if it's a clean minimal subset, which restricts you (by being the minimal thing everyone agrees on) but doesn't have any serious design flaws.
I like to call it getting "union-pilled" and it's really hard to accept otherwise statically-typed languages once you become familiar.
C is statically typed, but its type system tracks much less.
And fwiw I've used unions in TypeScript extensively and I'm not convinced that they're a good idea. They give you a certain flexibility in writing code, yes; whether that flexibility leads to good design choices, idk.
Doesn't have to be compiled to be statically typed... but yeah, probably.
> Be it Java, Go or C++;
Lol! No. All static type systems aren't the same.
TypeScript would be the only one of your examples that brings the same benefit. But the entire system is broken due to infinite JS Wats it has to be compatible with.
> it's harder to learn and arguably to read
It's easier to learn it properly, harder to vibe-push something into it until it seems to work. Granted, vibe-pushing code into seemingly working is a huge part of initially learning to code, so yeah, don't pick Rust as your first language.
It's absolutely not harder to read.
But whereas with interfaces, they typically require you to define up front what your class implements. Rust gives you a late-bound-ish (still compile time, but not defined in the original type) / Inversion of Control way to take whatever you've got and define new things for it. In most languages what types a thing has is defined by the library, but Rust not just allows but is built entirely around taking very simple abstract things and constructing bigger and bigger toolkits of stuff around them. Very non-zero-sum in ways that languages rarely are.
There's a ton of similarity to Extension Methods, where more can get added to the type. But traits / impls are built much more deeply into Rust; they are how everything works. Extension Methods are also, afaik, just methods, whereas with Rust you're really adding new types that an existing, defined-elsewhere thing can express.
I find it super shocking (and not because duh) that Rust's borrow checking gets all the focus, because the type system is such a refreshing, open-ended, late-defined reversal of type-system dogma, of defining everything ahead of time. It seems like such a superpower of Rust that you can keep adding typiness to a thing, keep expanding what a thing can do. The inversion here is, imo, one of the real largely unseen sources of glory for why Rust keeps succeeding: you don't need to fully consider the entire type system of your program ahead of time, you can layer typing onto existing types as you please, as fits, as makes sense, and that is a far more dynamic static type system than the same old highly constrained static type dreck we've suffered for decades. A massive leap forward: static, but still rather dynamic (at compile time).
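For anyone who hasn't seen this in practice, a small hypothetical sketch: a locally defined trait implemented for a type someone else owns (std's Vec), without touching its source:

    // The trait is ours; the type is std's. The orphan rule allows this because
    // at least one of the two (here, the trait) is defined in our crate.
    trait Describe {
        fn describe(&self) -> String;
    }

    impl<T> Describe for Vec<T> {
        fn describe(&self) -> String {
            format!("a Vec with {} elements", self.len())
        }
    }

    fn main() {
        println!("{}", vec![1, 2, 3].describe()); // "a Vec with 3 elements"
    }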
Statically typed does not imply compiled. You can interpret a statically typed language, for instance. And not every compiled language is all that static.
For example, C is statically typed, but also has the ability to play pointer typecasting trickery. So how much can the compiler ever guarantee anything, really? It can't, and we've seen the result is brittle artifacts from C.
Rust is statically-typed and it has all kinds of restrictions on what you can do with those types. You can't just pointer cast one thing to another in Rust, that's going to be rejected by the compiler outright. So Rust code has to meet a higher bar of "static" than most languages that call themselves "static".
Type casting is just one way Rust does this; other ways have been mentioned. They all add up, and the result is Rust artifacts are safer and more secure.
You can't safely do this yourself. That is, you couldn't write safe Rust which performs this operation for two arbitrary things. But Rust of course does do this, actually quite a lot, because if we're careful it's entirely safe.
That famous Quake 3 Arena "Fast inverse square root" which involves type puns? You can just write that in safe Rust and it'll work fine. You shouldn't - on any even vaguely modern hardware the CPU can do this operation faster anyway - but if you insist it's trivial to write it, just slower.
Why can you do that? Well, on all the hardware you'd realistically run Rust on the 32-bit integer types and the 32-bit floating types are the exact same size (duh), same bit order and so on. The CPU does not actually give a shit whether this 32-bit aligned and 32-bit sized value "is" an integer or a floating point number, so "transforming" f32 to u32 or u32 to f32 emits zero CPU instructions, exactly like the rather hairier looking C. So all the Rust standard library has to do is promise that this is OK which on every supported Rust platform it is. If some day they adopted some wheezing 1980s CPU where that can't work they'd have to write custom code for that platform, but so would John Carmack under the same conditions.
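For the curious, a rough safe-Rust version of that bit trick using the std to_bits/from_bits punning described above (magic constant from the original code; as noted, don't actually use this on modern hardware):

    fn fast_inv_sqrt(x: f32) -> f32 {
        let i = x.to_bits();               // f32 -> u32 type pun, compiles to no instructions
        let i = 0x5f3759df - (i >> 1);     // the famous magic-constant bit hack
        let y = f32::from_bits(i);         // u32 -> f32, also free
        y * (1.5 - 0.5 * x * y * y)        // one Newton-Raphson step
    }

    fn main() {
        println!("{} vs {}", fast_inv_sqrt(4.0), 1.0 / 4.0_f32.sqrt());
    }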
The thesis of Rust is that in aggregate, everyone can't be careful, therefore allowing anyone to do it (by default) is entirely unsafe.
Of course you can do unsafe things in Rust, but relegating that work to the room at the back of the video store labeled "adults only" has the effect of raising code quality for everyone. It turns out if you put up some hoops to jump through before you can access the footguns, people who shouldn't be wielding them don't, and average code quality goes up.
Well, the compiler is guaranteed that no mistakes will happen. It's the programmer who loses his guarantees in this case.
https://www.rocksolidknowledge.com/articles/locking-asyncawa...
It's just so brittle. How can anyone think this is a good idea?
It is a tradeoff that makes some other things easier. And probably the compiler is not mature enough to catch this mistake yet, but it will be at some point.
Zig being an auteur language is a very good thing from my perspective, for example you get this new IO approach which is amazing and probably wouldn’t happen if Andrew Kelley wasn’t in the position he is in.
I have been using Rust to write storage engines for the past couple of years, and its async and IO systems have many performance mistakes. The whole ecosystem feels like it is basically designed for writing web servers.
An example is a file format library using IO traits everywhere and using buffered versions for w/e reason. Then you get a couple of extra memcpy calls that are copying huge buffers. Combined with the global-allocation-everywhere approach, it generates a lot of page faults, which tanks performance.
Another example was file-format and plain compute libraries using async IO traits everywhere, which forces everything to be Send + Sync + 'static, which makes it basically impossible to use in a single-threaded context with local allocators.
Another example is a library using Vec everywhere, even when they know what size they'll need, generating memcpys as the Vec grows. The language just makes it too easy.
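To illustrate the Vec point, a small sketch (names made up); reserving up front avoids the repeated grow-and-memcpy cycle:

    fn collect_naive(n: usize) -> Vec<u64> {
        let mut v = Vec::new();            // capacity 0; reallocates ~log2(n) times
        for i in 0..n as u64 {
            v.push(i);
        }
        v
    }

    fn collect_reserved(n: usize) -> Vec<u64> {
        let mut v = Vec::with_capacity(n); // one allocation, no copies while filling
        for i in 0..n as u64 {
            v.push(i);
        }
        v
    }

    fn main() {
        assert_eq!(collect_naive(1000), collect_reserved(1000));
    }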
I’m not saying Rust is bad, it is a super productive ecosystem. But it is good that Zig is able to go deeper and enable more things. Which is possible because one guy can just say “I’ll break the entire IO API so I can make it better”.
Obviously nobody knows they've made this mistake, that's why it is important for the compiler to reject the mistake and let you know.
I don't want to use an auteur language. The fact is Andrew is wrong about some things - everybody is - but because it's Andrew's language, too bad, that's the end of the discussion in Zig.
I like Rust's `break 'label value`. It's very rarely the right thing, but sometimes, just sometimes, it's exactly what you needed and going without is very annoying. However IIUC for some time several key Rust language people hated this language feature, so it was blocked from landing in stable Rust. If Rust was an auteur language, one person's opinion could doom that feature forever.
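For anyone who hasn't used it, a small made-up example of `break 'label value`:

    // Find the first pair (i, j) whose product exceeds a limit, breaking out of
    // both loops with the answer as a value.
    fn first_big_product(limit: u32) -> Option<(u32, u32)> {
        let found = 'outer: loop {
            for i in 1..100 {
                for j in 1..100 {
                    if i * j > limit {
                        break 'outer Some((i, j)); // exits the labeled loop with a value
                    }
                }
            }
            break None; // nothing found
        };
        found
    }

    fn main() {
        println!("{:?}", first_big_product(50)); // Some((1, 51))
    }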
Like, how can anyone think that requiring the user to always remember to explicitly write `mutex.unlock()` or `defer mutex.unlock()` instead of just allowing optional explicit unlock and having it automatically unlock when it goes out of scope by default is a good idea? Both Go and Zig have this flaw. Or, how can anyone think that having a cast that can implicitly convert from any numeric type to any other in conjunction with pervasive type inference is a good idea, like Rust's terrible `as` operator? (I once spent a whole day debugging a bug due to this.)
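For contrast, this is what the drop-based unlocking looks like in Rust (plain std, nothing else):

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0);
        {
            let mut guard = counter.lock().unwrap(); // locked here
            *guard += 1;
        } // guard dropped here: the mutex unlocks itself; nothing to remember to call

        println!("{}", *counter.lock().unwrap());
    }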
As a side note, I hate the `as` cast in Rust. It's so brittle and dangerous it doesn't even feel like a part of the language. It's like a JavaScript developer snuck in and added it without anyone noticing. I hope they get rid of it in an edition.
Rust language hat on: I hope so too. We very much want to, once we've replaced its various use cases.
We have `.into()` for lossless conversions like u32 to u64.
We need to fix the fact that `usize` doesn't participate in lossless conversions (e.g. even on 64-bit you can't convert `usize` to `u64` via `.into()`).
We need to fix the fact that you can't write `.into::<u64>()` to disambiguate types.
And I'm hoping we add `.trunc()` for lossy conversion.
And eventually, after we provide alternatives for all of those and some others, we've talked about changing `as` to be shorthand for `.into()`.
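A small illustration of the footgun and the alternatives that exist today (only current std APIs; the hoped-for `.trunc()` above doesn't exist yet):

    fn main() {
        let big: u64 = 5_000_000_000;

        let silently_wrong = big as u32;      // truncates to 705_032_704, no warning
        let lossless: u64 = u32::MAX.into();  // u32 -> u64 always fits, so Into works
        let checked = u32::try_from(big);     // returns Err instead of corrupting data

        println!("{silently_wrong} {lossless} {checked:?}");
    }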
Not to mention this sort of proliferation of micro-calls for what should be <= 1 instruction has a cost to debug performance and/or compile times (though this is something that should be fixed regardless).
This confused me too at first. You have to do `u64::from(_)` right? It makes sense in a certain way, similar to how you have to do `Vec::<u64>::new()` rather than `Vec::new::<u64>()`, but it is definitely more annoying for `into`.
(Imagine if Python 3 let you import Python 2 modules seamlessly.)
And it doesn't exactly help to compile newer software on an older OS.
Another painful bugbear is when I'm converting to/from usize and I know that it is really either going to be a u64 or maybe u32 in a few cases, and I don't care about breaking usize=u128 or usize=u16 code. Give me a way to say that u32 is Into<usize> for my code!
Presumably the lock is intended to be used for blocking until the commit is created, which would only be guaranteed after the await. Releasing the lock after submitting the transaction to the database but before getting confirmation that it completed successfully would probably result in further edge cases. I'm unfamiliar with rust's async, but is there a join/select that should be used to block, after which the lock should be unlocked?
This is wild. I assume there's at least tooling to catch this kind of error, right?
I already commented on Zig compiler/stdlib code itself, but here's Tigerbeetle and Bun, the two biggest(?) Zig codebases:
https://github.com/search?q=repo%3Atigerbeetle%2Ftigerbeetle...
https://github.com/search?q=repo%3Aoven-sh%2Fbun%20%22%3D%3D...
https://github.com/tigerbeetle/tigerbeetle/blob/b173fdc82700...
https://github.com/tigerbeetle/tigerbeetle/blob/b173fdc82700... (different file, same check.)
If I just need to check for 1 specific error and do something why do I need a switch?
In Rust you have both "match" (like switch) and "if let" which just pattern matches one variant but both are properly checked by the compiler to have only valid values.
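A minimal sketch (the error enum is made up) of `if let` handling exactly one variant while the compiler still checks that the variant exists and belongs to this enum:

    enum FileError { AccessDenied, NotFound }

    fn handle(result: Result<(), FileError>) {
        // Only this one variant is handled here; a typo in the variant name
        // would be a compile error.
        if let Err(FileError::AccessDenied) = result {
            eprintln!("permission problem");
        }
    }

    fn main() {
        handle(Err(FileError::AccessDenied));
        handle(Err(FileError::NotFound));
    }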
If the author had written `FileError.AccessDenid`, this would not have compiled, as it would be comparing with the `FileError` error set.
The global error set is pretty much never used, except when you want to allow a user to provide his own errors, so you allow the method to return `anyerror`.
Like here in `std/tar.zig`: https://github.com/ziglang/zig/blob/50edad37ba745502174e49af...
Or here in `main.zig`: https://github.com/ziglang/zig/blob/50edad37ba745502174e49af...
And in a bunch of other places: https://github.com/search?q=repo%3Aziglang%2Fzig+%22%3D%3D+e...
As I stated before, this error wouldn't even exist in the first place in any codebase: look how the method that fails returns a `FileError` and not an `anyerror`.
It could be rightly argued that it still shouldn't compile though.
The error presented in this example would not be written by any Zig developer. Heck, before this example I didn't even know that you could compare directly to the global error set, and I maintain a small library.
zig and rust do not have the same scope. I honestly do not think they should be compared. Zig is better compared to C, and rust is better compared to C++.
The languages are very different in scope, scale, and design goals; yes. That means there's tradeoffs that might make one language or the other more suitable for a particular person or project, and that means it can be interesting and worthwhile to talk about those tradeoffs.
In particular, Rust's top priority is program correctness -- the language tries hard not to let you write "you're holding it wrong" bugs, whereas Zig tends to choose simplicity and explicitness instead. That difference in design goals is the whole point of the article, not a reason to dismiss it.
I don't know enough Zig to have a qualified opinion on the particular example (besides being very surprised it compiled). However, I thought this post from the front page the other day had more practical and thoughtful examples of this kind of thing: https://www.openmymind.net/Im-Too-Dumb-For-Zigs-New-IO-Inter...
It’s more like picking up a fork and being surprised to find out that it’s burning hot without any visible difference.
No True Scotsman fallacy. It was written by the Zig developer who wrote it.
Languages are a collection of tradeoffs so I'm pretty sure you could find examples for every two languages in existence. It also makes these kinds of comparisons ~useless.
For example, python is AFAIK the lead language to get something done quickly. At least as per AoC leaderboards. It's a horrible language to have in production though (experienced it with 700k+ LOC).
Rust is also ok to do for AoC, but you will (based on stats I saw) need about 2x time to implement. Which in production software is definitely worth it, because of less cost in fixing stupid mistakes, but a code snippet will not show you that.
https://www.tiobe.com/tiobe-index/
https://pypl.github.io/PYPL.html
https://www.statista.com/statistics/793628/worldwide-develop...
https://redmonk.com/sogrady/2024/03/08/language-rankings-1-2...
I'm a rust dev full time. And I agree with everything here. But I also want people to realize it's not "Just Rust" that does this.
In case anyone gets FOMO.
It's really, really good for <1000 LoC day projects that you won't be maintaining. (And, if you're writing entirely in the REPL, you probably won't even be saving the code in the first place.)
Use a type checker! Pyright can get you like 80% of Rust's type safety.
mypy's output is, AFAICT, also non-deterministic, and doesn't support a programmatic format that I know of. This makes it next to impossible to write a wrapper script to diff the errors to, for example, show only errors introduced by the change one is making.
Relying on my devs to manually trawl through 80k lines of errors for ones they might be adding in is a lost cause.
Our codebase also uses SQLAlchemy extensively, which does not play well with typecheckers. (There is an extension to aid in this, but it regrettably SIGSEGVs.)
Also this took me forever to understand:
That will get you:

Regarding the ~80k errors. Yeah, nothing to do here besides slowly grinding away and adding type annotations and fixes until it's resolved.
For the code example pyright gives some hint towards variance but yes it can be confusing.
https://pyright-play.net/?pyrightVersion=1.1.403&code=GYJw9g...
Love this.
It's not true you can't build reliable software in python. People have. There's proof of it everywhere. Tons of examples of reliable software written in python which is not the safest language.
I think the real thing here is more of a skill issue. You don't know how to build reliable software in a language that doesn't have full type coverage. That's just your lack of ability.
I'm not trying to be insulting here. Just stating the logic:
Just spitting facts.

But I can build reliable software without types as well. Many people can. This isn't secret stuff that only I can do. There are thousands and thousands of reliable pieces of software built in Python, Ruby and JavaScript.
Sort of like closing 80% of a submarine's hatches and then diving.
And that's assuming the codebase and all dependencies have correct type annotations.
It’s just a logic bug.
E.g., the code doesn’t match their own English description of the logic: “If yes, redirect to the specific page. If not, go to the dashboard or onboarding page.”
The code is missing the “if not” (probably best expressed using an “else” clause following the if block).
It's fair to point out that the browser API is confusing. You might not think of setting a property as kicking off an asynchronous operation, especially if it seems to have an instantaneous effect at first.
But the basic control flow logic of that code is wrong. Confusion about whether a side-effect from an api call might bail you out from your error is beside the point.
https://tc39.es/ecma262/multipage/
And I find the event loop vs. concurrency via mutexes to be an apples-to-oranges comparison. They both do some form of concurrency, but not nearly in the same way.
Again, Typescript is hardly considered a language. It is just a tool used to keep javascript under some control on large projects. Again, the comparison between rust and Typescript on a language level is not a great match.
As for fearless refactoring, don't get me started. I experienced this the first time I was porting a vanilla js backend to a typescript version. It was awesome. I won't say it works in much the same way as rust does but man, if you ever ported a rest api written in javascript to Typescript - you'd experience a similar effect.
If you want this behavior, it's relatively simple to implement your own mutex on top of futex, but no one is going to expect the behavior it provides.
The problem is not with TypeScript or even JavaScript but an odd Browser API where mutating some random value of an object results in a redirect on the page, but not synchronously.
Even if the language of the browser were Rust, there's nothing about the type system specifically that would have caught this bug (as far as I can tell, anyways. Presumably there's something in the background periodically reading the value of `href` and updating the page accordingly, but since that background job only would have needed read and not write access to the variable, I don't think the borrow checker would have helped here)
That's not a "Typescript" or language issue, that's a DOM/browser API weirdness
Also, Typescript is adding types on top of the JavaScript language, not the DOM API.
There is nothing in our domain of distributed systems based on SaaS products, mobile OSes, and managed cloud environments, that would profit from a borrow checker.
You can even go more crazy with linear types, effects, formal proofs or dependent types.
What Rust has achieved was definitely to make these ideas more mainstream.
In Rust lifetimes for references are part of the type, so &'a str and &'b str could be different types, even though they're both string slice references.
Beyond that, Rust tracks two important "thread safety" properties called Sync and Send, and so if your Thing ends up needing to be Send (because another thread gets given this type) but it's not Send, that's a type error just as surely as if it lacked some other needed property needed for whatever you do with the Thing, like it's not totally ordered (Ord) or it can't be turned into an iterator (IntoIterator)
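A tiny sketch of Send being enforced (std only; `Rc` is the classic non-Send example):

    use std::rc::Rc;
    use std::thread;

    fn main() {
        let owned = String::from("hi");
        thread::spawn(move || println!("{owned}")).join().unwrap(); // String is Send: fine

        let shared = Rc::new(5);
        // thread::spawn(move || println!("{shared}")).join().unwrap();
        // ^ compile error: `Rc<i32>` cannot be sent between threads safely (not Send)
        println!("{shared}");
    }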
Which languages are those?
You can ask any professional python programmer how much time they've spent trying to figure out the methods that are callable on the object returned by some pytorch function, and they will all tell you it's a challenge that occurs at least weekly. You can ask any C++ programmer how much time they've spent debugging segfaults. You can ask any java programmer how much time they've spent debugging null pointer exceptions. These are all common problems that waste an incredible amount of time, that simply do not occur to anywhere close to the same extent in Rust.
It's true that you can get some of these benefits by writing tests. But would tests have prevented the issue that OP mentioned in his post, where acquiring a mutex from one thread and releasing it from another is undefined? It's highly doubtful, unless you have some kind of intensive fuzz-testing infrastructure that everyone talks about and no one seems to actually have. And what is more time-efficient: setting up that infrastructure, running it, seeing that it detects undefined behavior at the point of the mutex being released, and realizing that it happened because the mutex was sent to a different thread? Or simply getting a compile error the moment you write the code that says "hey pal, mutex guards can't be moved to a different thread". Plus, everyone who's worked on a codebase with a lot of tests can tell you that you sometimes end up spending more time fixing tests than you do actually writing code. For whatever reason, I spend much less time fixing types than fixing tests.
There is a compounding benefit as well. When you can refactor easily (and unit tests often do not make refactoring much easier...), you can iterate on your code's architecture until you find one that meshes naturally with your domain. And when your requirements change and your domain evolves, you can refactor again. If refactoring is too expensive to attempt, your architecture will become more and more out-of-sync with your domain until your codebase is unmaintainable spaghetti. If you imagine a simple model where every new requirement either forces you into refactoring your code or spaghettifying your code, and assume that each instance of spaghettification induces a 1% dev speed slowdown, you can see that these refactors become basically essential. Because 100 new requirements in the future, the spaghetti coder will be operating at 36% the productivity of the counterfactual person who did all the refactors. Seen this way, it's clear that you have to do the refactors, and then a major component of productivity is whether you can do them quickly. An area where it's widely agreed rust excels at.
There are plenty of places we can look at Rust and find ourselves wanting more. But that doesn't mean we shouldn't be proud of what Rust has accomplished. It has finally brought many of the innovations of ML and Haskell to the masses, and innovated new type-system features on top of that, leading to a very productive and pleasantly-designed language.
(I also left this comment on reddit, and am copying it here.)
One caveat though - using a normal std Mutex within an async environment is an antipattern and should not be done - you can cause all sorts of issues & I believe even deadlock your entire code. You should be using tokio sync primitives (e.g. tokio Mutex) which can yield to the reactor when they need to block. Otherwise the thread that's running the future blocks forever waiting for that mutex, and that reactor never does anything else, which isn't how tokio is designed.
So the compiler is warning about 1 problem, but you also have to know to be careful to know not to call blocking functions in an async function.
This is simply not true, and the tokio documentation says as much:
"Contrary to popular belief, it is ok and often preferred to use the ordinary Mutex from the standard library in asynchronous code."
https://docs.rs/tokio/latest/tokio/sync/struct.Mutex.html#wh...
there are absolutely situations where tokio's mutex and rwlock are useful, but the vast majority of the time you shouldn't need them
Tokio MutexGuards are Send, unfortunately, so they are really prone to cancellation bugs.
(There's a related discussion about panic-based cancellations and mutex poisoning, which std's mutex has but Tokio's doesn't either.)
[1] spawn_local does exist, though I guess most people don't use it.
The generally recommended alternative is message passing/channels/"actor model" where there's a single owner of data which ensures cancellation doesn't occur -- or, at least that if cancellation happens the corresponding invalid state is torn down as well. But that has its own pitfalls, such as starvation.
This is all very unsatisfying, unfortunately.
In the Rust community, cancellation is pretty well-established nomenclature for this.
Hopefully the video of my talk will be up soon after RustConf, and I'll make a text version of it as well for people that prefer reading to watching.
True. I used std::mutex with tokio and after a few days my API would not respond unless I restarted the container. I was under the impression that if it compiles, it's gonna just work (fearless concurrency) which is usually the case.
Or your server is heavily contended enough that all worker threads are blocked on this mutex and no reactor can make forward progress.
> The exact behavior on locking a mutex in the thread which already holds the lock is left unspecified. However, this function will not return on the second call (it might panic or deadlock, for example).
- The return type of Mutex::lock() is a MutexGuard, which is a smart pointer type that 1) implements Deref so it can be dereferenced to access the underlying data, 2) implements Drop to unlock the mutex when the guard goes out of scope, and 3) implements !Send so the compiler knows it is unsafe to send between threads: https://doc.rust-lang.org/std/sync/struct.MutexGuard.html
- Rust's implementation of async/await works by transforming an async function into a state machine object implementing the Future trait. The compiler generates an enum that stores the current state of the state machine and all the local variables that need to live across yield points, with a poll function that (synchronously) advances the coroutine to the next yield point: https://doc.rust-lang.org/std/future/trait.Future.html
- In Rust, a composite type like a struct or enum automatically implements Send if all of its members implement Send.
- An async runtime that can move tasks between threads requires task futures to implement Send.
So, in the example here: because the author held a lock across an await point, the compiler must store the MutexGuard smart pointer as a field of the Future state machine object. Since MutexGuard is !Send, the future also is !Send, which means it cannot be used with an async runtime that moves tasks between threads.
If the author releases the lock (i.e. drops the lock guard) before awaiting, then the guard does not live across yield points and thus does not need to be persisted as part of the state machine object -- it will be created and destroyed entirely within the span of one call to Future::poll(). Thus, the future object can be Send, meaning the task can be migrated between threads.
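A minimal sketch of that shape, with `Db`/`commit` as made-up stand-ins for the real async work (no executor shown):

    use std::sync::Mutex;

    struct Db;
    impl Db {
        async fn commit(&self, _payload: &str) {}
    }

    // This future is Send, because the guard is dropped before the await point.
    async fn save(db: &Db, state: &Mutex<Vec<String>>) {
        let snapshot = {
            let guard = state.lock().unwrap();
            guard.join(",")
        }; // MutexGuard dropped here, so it never has to live inside the future's state

        db.commit(&snapshot).await;
    }

    fn main() {
        // With a runtime such as tokio, you'd spawn `save(...)` onto it.
        let _ = save;
    }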
A type is “Send” if it can be moved from one thread to another, it is “Sync” if it can be simultaneously accessed from multiple threads.
These traits are automatically applied whenever the compiler knows it is safe to do so. In cases where automatic application is not possible, the developer can explicitly declare a type to have these traits, but doing so is unsafe (requires the ‘unsafe’ keyword and everything that entails).
You can read more at rustinomicon, if you are interested: https://doc.rust-lang.org/nomicon/send-and-sync.html
The compiler knows the Future doesn't implement the Send trait because MutexGuard is not Send and it crosses await points.
Then, tokio the aysnc runtime requires that futures that it runs are Send because it can move them to another thread.
This is how Rust safety works. The internals of std, tokio and other low level libraries are unsafe but they expose interfaces that are impossible to misuse.
https://docs.rs/tokio/latest/tokio/task/fn.spawn.html
If you want to run everything on the same thread then localset enables that. See how the spawn function does not include the send bound.
https://docs.rs/tokio/latest/tokio/task/struct.LocalSet.html
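A small example of that (assuming the tokio crate with the rt and macros features; `Rc` is !Send, so this task couldn't be spawned onto the normal multi-threaded runtime):

    use std::rc::Rc;
    use tokio::task::LocalSet;

    #[tokio::main(flavor = "current_thread")]
    async fn main() {
        let local = LocalSet::new();
        local
            .run_until(async {
                let data = Rc::new(vec![1, 2, 3]); // !Send, and that's fine here
                tokio::task::spawn_local(async move {
                    println!("sum = {}", data.iter().sum::<i32>());
                })
                .await
                .unwrap();
            })
            .await;
    }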
Rust can use that type information and lifetimes to figure out when it's safe and when not.
With Rust, they frontload the complexity, so it's considered to be "hard to learn". But I've got to say, Rust's "complexities" have allowed me to build a taller software tower than I've ever been able to build before in any other language I've used professionally (C/C++/Java/Swift/Javascript/Python).
And that's the thing a lot of people don't get about Rust, because you can only really appreciate it once you've climbed the steep learning curve.
At this point I've gone through several risky and time-consuming (weeks) refactors of a substantial Rust codebase, and every time it's worked out I'm amazed it wasn't the kind of disaster I've experienced refactoring in other languages, where the refactor has to be abandoned because it got so hairy and everyone lost all hope and motivation.
They don't tell you about that kind of pain in the Python tutorial when they talk about how easy it is to not have to type curly braces and have dynamic types everywhere. And you don't really find that pleasure in Rust until you've built enough experience and code to have to do a substantial refactor. So I can understand why the Rust value proposition is dubious for people who are new to the language and programming in general.
[1] https://xkcd.com/1987