The reason you are not seeing crashes when allocating with Rust and freeing with C (or vice versa) is that by default Rust also uses the libc allocator.
https://stdrs.dev/nightly/x86_64-unknown-linux-gnu/src/std/s...
It's funny. When I first tried Rust in 2018 they were still statically linking jemalloc into every binary rustc compiled by default, and that alone very much put me off of the language for a while.
Apparently they did away with jemalloc in favor of the system allocator that same year, but nonetheless, when I came back to it years later, I was very happy to learn of its removal.
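For anyone who wants to see what that default looks like in code, here's a minimal sketch (not from the article): the `#[global_allocator]` line below just restates the default, `std::alloc::System`, which forwards to the platform's malloc/free, and it's also the hook you'd use to plug jemalloc or another allocator crate back in.

```rust
use std::alloc::System;

// Making the default explicit: on most targets Rust's global allocator is
// std::alloc::System, which forwards to the platform's malloc/free (libc on
// Linux). Swapping in another allocator crate would use this same attribute.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    // This Vec's buffer ultimately comes from the system malloc.
    let v = vec![1u8, 2, 3];
    println!("allocated {} bytes via the system allocator", v.len());
}
```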
umanwizard · 1h ago
> that alone very much put me off of the language for a while
Why?
CupricTea · 54m ago
Jemalloc added over a megabyte to every project for only questionable gains, and it was awkward and unwieldy to remove it. While there are good reasons to use a different allocator depending on the project, Rust defaulting to this type of behavior failed a certain personal litmus test on what it wanted to be as a language in that it felt like it was fighting the system rather than integrating with it.
It also does not give a good first impression at all to newcomers to see their hello world project built in release mode take up almost 2MiB of space. Today it's a much more (subjectively) tolerable 136kiB on Windows (considering that Rust std is statically linked).
ainiriand · 1h ago
Sorry, but why would that be a downside in 2018?
mwkaufma · 1h ago
Lots of detail, little substance, and misleading section headers. GPT-generated red flags.
phkahler · 5h ago
Something I'd like to know for mixing Rust and C. I know it's possible to access a struct from both C and Rust code and have seen examples. But those all use accessor functions on the Rust side rather than accessing the members directly. Is it possible to define a structure in one of the languages and then via some wrapper or definitions be able to access it idiomatically in the other language? Can you point to some blog or documentation explaining how?
GrantMoyer · 4h ago
Rust bindgen[1] will automatically generate native Rust structs (and unions) from C headers where possible. Note that c_int, c_char, etc. are just aliases for the corresponding native Rust types.
However, not all C constructs have idiomatic Rust equivalents. For example, bitfields don't exist in Rust, and unlike Rust enums, C enums can have any value of the underlying type. And for ABI reasons, it's very common in C APIs to use a pointer to an opaque type paired with what are effectively accessor functions and methods, so mapping them to accessors and methods on a "Handle" type in Rust is often the most idiomatic Rust representation of the C interface.
[1]: https://github.com/rust-lang/rust-bindgen
Here's one of my recorded talks going through an example of using a `#[repr(C)]` struct (in this case one that's auto-generated by Bindgen): https://youtu.be/LLAUzghhNHg?t=2168
I don't know what examples you've been seeing. The interop structs are just regular Rust structs with the `#[repr(C)]` attribute applied to them, to ensure that the Rust compiler lays the struct out exactly as the C compiler for that target ABI would. Rust code can access their fields just fine. There's no strict need for accessor functions.
https://doc.rust-lang.org/nomicon/other-reprs.html
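As a rough illustration (a sketch, not from the article; `make_point` and the C `struct Point` shown in the comment are hypothetical), direct field access looks like this:

```rust
// Assumed C side:
//   struct Point { int32_t x; int32_t y; };
//   struct Point make_point(int32_t x, int32_t y);

#[repr(C)]
#[derive(Debug, Clone, Copy)]
pub struct Point {
    pub x: i32,
    pub y: i32,
}

extern "C" {
    fn make_point(x: i32, y: i32) -> Point; // hypothetical C function
}

fn main() {
    let mut p = unsafe { make_point(1, 2) };
    p.x += 10; // plain field access on the Rust side, no accessor functions needed
    println!("{:?}", p);
}
```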
stouset · 4h ago
And vice versa. Rust code and C code can both operate on each other’s structs natively.
`#[repr(C)]` instructs the compiler to lay the struct out exactly according to C’s rules: order, alignment, padding, size, etc. Without this, the compiler is allowed a lot more freedom when laying out a struct.
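A small standalone demo of that layout freedom, with the same fields declared with and without `#[repr(C)]`. Exact sizes depend on the target, so this just prints them rather than asserting anything.

```rust
use std::mem::size_of;

#[allow(dead_code)]
struct RustRepr {
    a: u8,
    b: u64,
    c: u8,
}

#[allow(dead_code)]
#[repr(C)]
struct CRepr {
    a: u8,
    b: u64,
    c: u8,
}

fn main() {
    // Typically 16 bytes (fields reordered to reduce padding) vs 24 bytes
    // (C declaration order preserved) on a 64-bit target, but only the
    // #[repr(C)] layout is actually guaranteed.
    println!("default repr: {} bytes", size_of::<RustRepr>());
    println!("repr(C):      {} bytes", size_of::<CRepr>());
}
```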
eatonphil · 5h ago
One of the areas I wonder about this a lot is when integrating Rust code into Postgres, which has its own allocator system. Mostly, right now, when we need complex data structures (non-Postgres data structures) that must live outside of the lexical scope, we put them somewhere global and return a handle to the C code to reference the object. But with the upcoming support for passing an allocator to any data structure (in the Rust standard library anyway) I think this gets a lot easier?
Arnavion · 5h ago
>But with the upcoming support for passing an allocator to any data structure (in the Rust standard library anyway) I think this gets a lot easier?
Yes and no. Even within libstd, some things require A=GlobalAlloc, e.g. `std::io::Read::read_to_end(&mut Vec<u8>)` will only accept Vec<u8, GlobalAlloc>. It cannot be changed to work with Vec<u8, A> because that change would make it not dyn-compatible (née "object-safe").
And as you said it will cut you off from much of the third-party crates ecosystem that also assumes A=GlobalAlloc.
But if the subset of libstd you need supports A != GlobalAlloc, then yes, it's helpful.
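For concreteness, here's a nightly-only sketch of what that parameterization looks like (`#![feature(allocator_api)]`). `ArenaAlloc` is a made-up stand-in for something like a Postgres memory context; here it just forwards to the system allocator.

```rust
#![feature(allocator_api)]

use std::alloc::{AllocError, Allocator, Layout, System};
use std::ptr::NonNull;

/// Made-up stand-in for a foreign allocator (think: a Postgres MemoryContext).
/// Here it simply forwards to the system allocator.
struct ArenaAlloc;

unsafe impl Allocator for ArenaAlloc {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        System.allocate(layout)
    }

    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
        unsafe { System.deallocate(ptr, layout) }
    }
}

fn main() {
    // A Vec whose buffer comes from ArenaAlloc rather than the global allocator.
    let mut v: Vec<u8, ArenaAlloc> = Vec::new_in(ArenaAlloc);
    v.extend_from_slice(b"hello");
    assert_eq!(v.len(), 5);
}
```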
tialaramex · 4h ago
For me the most interesting thing in Allocator is that it's allowed to say OK, you wanted 185 bytes but I only have a 256 byte allocation here, so, here is 256 bytes.
This means that e.g. a growable container type doesn't have to guess that your allocator probably loves powers of 2 and so it should try growing to 256 bytes rather than 185 bytes; it can ask for 185 bytes, get 256, and then pass that on to the user. Significant performance is left on the table when everybody is guessing and can't pass on what they know due to ABI limitations.
Rust containers such as Vec are already prepared for this. For example, Vec::reserve_exact does not promise you're getting exactly the capacity you asked for; it won't do the exponential growth trick, because (unlike with Vec::reserve) you've promised you don't want that, but it would be able to take advantage of a larger capacity provided by the allocator.
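A nightly-only sketch of that idea (again `#![feature(allocator_api)]`): an allocator that rounds every request up to the next power of two and reports the full block size back through the returned `NonNull<[u8]>`. Whether Vec currently forwards that excess into its capacity is an implementation detail, so the example only prints the capacity rather than asserting a value.

```rust
#![feature(allocator_api)]

use std::alloc::{AllocError, Allocator, Layout, System};
use std::ptr::NonNull;

struct PowerOfTwo;

fn round_up(layout: Layout) -> Result<Layout, AllocError> {
    let size = layout.size().next_power_of_two();
    Layout::from_size_align(size, layout.align()).map_err(|_| AllocError)
}

unsafe impl Allocator for PowerOfTwo {
    fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
        // The slice returned here has the rounded-up length, which the caller
        // (e.g. Vec) is allowed to treat as real capacity.
        System.allocate(round_up(layout)?)
    }

    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
        // Recompute the same rounded layout the block was allocated with.
        let layout = round_up(layout).expect("layout was valid at allocation time");
        unsafe { System.deallocate(ptr, layout) }
    }
}

fn main() {
    let mut v: Vec<u8, PowerOfTwo> = Vec::new_in(PowerOfTwo);
    v.reserve_exact(185);
    // Guaranteed to be at least 185; a Vec that forwards the allocator's
    // reported block size could report 256 here.
    println!("capacity: {}", v.capacity());
}
```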
steveklabnik · 5h ago
I’m not sure what those two things have to do with each other, though I did just wake up. The only thing the new allocator stuff would give you is the ability to allocate a standard library data structure with the Postgres allocator. Scoping and handles and such wouldn’t change, and using your own data structures wouldn’t change.
It’s also very possible I’m missing something!
eatonphil · 5h ago
> The only thing the new allocator stuff would give you is the ability to allocate a standard library data structure with the Postgres allocator.
Yeah no this is basically all I'm saying. I'm excited for this.
steveklabnik · 4h ago
Ah yeah, well it's gonna be a good feature for sure when it ships!
tracker1 · 4h ago
Interesting read... and definitely a good base of knowledge to have, especially if you're working in transitional or mixed codebases.
sesm · 5h ago
Section named "The Interview Question That Started Everything" doesn't contain the interview question.
hyperbrainer · 5h ago
That's the first thing on the page.
> Interviewer: “What happens if you allocate memory with C’s malloc and try to free it with Rust’s dealloc, if you get a pointer to the memory from C?”
> Me: “If we do it via FFI then there’s a possibility the program may continue working (because the underlying structs share the same memory layout? right? …right?)”
sesm · 4h ago
That's fair. Personally, I skipped that entire pre-section, thinking it was a long quote from some book.
PoignardAzur · 1h ago
It is, but yeah, the entire article's formatting is pretty weird.
jeroenhd · 57m ago
The entire blog post feels formatted like AI output to me. Repeated checklists with restated points, tables and full blocks of code spread across the page in a very specific way.
I don't know if the author used AI to write this, but if they didn't, this is the person AI agents decided to copy the writing style of.
Edit: A Reddit thread linked somewhere in the comments here, pointing to a post from the author, pretty much confirmed my suspicions: this article is heavily AI-generated and plain wrong in several cases. A good reminder not to use AI slop to learn new topics, because LLMs bullshit half the time and you need to know what you're doing to spot the lies.
Tony_Delco · 4h ago
Fantastic opening line (“Memory oppresses me.”). If this article was written by an AI, it’s the best AI I’ve seen in months.
Seriously though: I already knew the “don’t mix allocators” rule, but I really enjoyed seeing such a careful and hands-on exploration of why it’s dangerous. Thanks for sharing it.
7e · 5h ago
Allocating memory with C and freeing it with Rust is silly. If you want to free a C-allocated pointer in Rust, just have Rust call back into C. Expecting that allocators work identically in both runtimes is unreasonable and borderline insane. Heck, I wouldn't expect allocators to work the same even across releases of libc from the same vendor (or across releases of Rust's std).
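In code, the conservative version of that advice looks roughly like this (a sketch that declares libc's malloc/free directly): memory that came from C's malloc goes back through C's free, never through Rust's dealloc.

```rust
use std::ffi::c_void;

extern "C" {
    fn malloc(size: usize) -> *mut c_void;
    fn free(ptr: *mut c_void);
}

fn main() {
    unsafe {
        // Allocate on the C side...
        let p = malloc(64);
        assert!(!p.is_null());
        // ...use it from Rust...
        std::ptr::write_bytes(p as *mut u8, 0, 64);
        // ...and hand it back to the C allocator, not to std::alloc::dealloc.
        free(p);
    }
}
```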
rectang · 4h ago
I don't agree with your contemptuous framing. It's incorrect, and per the post's author, "dangerous" — but depending on your background it's not "silly" or "borderline insane". It's just naive, and writing a slab allocator as an exercise or making honest explorations like in this blog post will help cure the naivete.
Arnavion · 4h ago
The article is about how and why mixing allocators fails, not if it fails or how to fix the problem.
benmmurphy · 4h ago
Usually when interfacing with a library written in C, the library will export functions for object destruction. It makes sense for that to be part of the interface instead of using the system allocator, because it also gives the library freedom to do extra work during object destruction. If you have simple objects then it's possible to just use the system allocator, but if you have graphs or trees of objects then it's necessary to have a custom destroy function, and there is always some risk that in the future you might be forced to allocate more complex data structures that require multiple allocations.
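The usual shape of that on the Rust side is a thin wrapper whose Drop calls the library's destroy function. A hedged sketch, with `foo_create`/`foo_destroy` as hypothetical names for whatever the C library actually exports:

```rust
use std::ffi::c_void;

extern "C" {
    fn foo_create() -> *mut c_void;      // hypothetical constructor from the C library
    fn foo_destroy(handle: *mut c_void); // hypothetical destructor; may free internal allocations too
}

/// Owning wrapper around the opaque C handle.
struct Foo(*mut c_void);

impl Foo {
    fn new() -> Option<Foo> {
        let p = unsafe { foo_create() };
        if p.is_null() { None } else { Some(Foo(p)) }
    }
}

impl Drop for Foo {
    fn drop(&mut self) {
        // Hand the object back to the library that allocated it.
        unsafe { foo_destroy(self.0) }
    }
}
```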
ryanf · 4h ago
This article looked interesting, but I bounced off it because the author appears to have made heavy use of an LLM to generate the text. How can I trust that the content is worth reading if a person didn't care enough to write it themselves?
zem · 3h ago
it sounds nothing like AI to me! or AI has advanced to the point where it is hard to tell - e.g. I wouldn't expect a sentence like "You’re not just getting 64 bytes of memory. You’re entering into a complex contract with a specific allocator implementation." from one.
pests · 3h ago
While I usually hate all the accusations of writing being LLM generated, I find your example a bit odd, as that phrasing is very typical of ChatGPT, especially when it was glazing everyone after that one update they had to revert.
“It’s not just _________. It’s _________________.”
This was in almost every response, doubling down on the user's ideas and blowing things out of proportion. Stuff like…
“It’s not just a good idea. It’s a ground up rewriting of modern day physics.”
rocky_raccoon · 3h ago
I picked up on it very quickly as well. Here are some more phrases that match that same LLM pattern. Sure, you could argue that someone actually writes like this, but after a while, it becomes excessive.
- Your program continues running with a corrupted heap - a time bomb that will explode unpredictably later.
- You’re not just getting 64 bytes of memory. You’re entering into a complex contract with a specific allocator implementation.
- The Metadata Mismatch
- If it finds glibc’s metadata instead, the best case is an immediate crash. The worst case? Silent corruption that manifests as mysterious bugs hours later.
- Virtual Memory: The Grand Illusion
- CPU Cache Architecture: The Hidden Performance Layer
- Spoiler: it’s even messier than you might think.
zem · 3h ago
huh, interesting, I guess I haven't read enough of it to pick up on the patterns
rectang · 4h ago
I find it hard to believe that an LLM would have come up with this quote to start the article:
> “Memory oppresses me.” - Severian, The Book of the New Sun
That sort of artistic/humorous flourish isn't in character for an LLM.
jml7c5 · 58m ago
It looks like a mix of LLM and human-written content. The (human) author would have been the one who chose to put that quote there.
eviks · 42m ago
But it's easy to believe that this is one of the few things the author added. Doesn't have to be 0% or 100%
TechDebtDevin · 4h ago
Do you see emojis in tables/code now and assume the person is using an LLM? I don't really see it.
Maybe I'm too paranoid! If it's not LLM then I don't think it's a very well-organized post though.
https://www.reddit.com/r/rust/comments/1mh7q73/comment/n6uan...
The reply to that comment is also a good explainer of why the post has such a strong LLM smell for many.
BTW that Reddit post also has replies confirming my suspicions that the technical content wasn't trustworthy, if anyone felt like I was just being snobby about the LLM writing: https://www.reddit.com/r/rust/comments/1mh7q73/comment/n6ubr...
In addition to the emoji, things that jumped out at me were the pervasive use of bullet lists with bold labels and some specific text choices like
> Note: The bash scripts in tools/ dynamically generate Rust code for specialized analysis. This keeps the main codebase clean while allowing complex experiments.
But I did just edit my post to walk it back slightly.
skydhash · 3h ago
Not TFA’s author
As a non-native English speaker, 90% of my vocabulary comes from technical books and SF and fantasy novels. And due to an education done in French, I tend to prefer slightly complicated sentence forms.
If someone uses an LLM to give their posts clarity or for spellchecking, I would applaud them. What I don't agree with, LLM use or no, is meandering and inconsistency.
OmarAssadi · 4h ago
Personally, it is one of the flags, yeah. It's been a while since I've tried ChatGPT or some of the others, but the structure and particular usage felt a lot like what I'd have gotten out of deepseek.
It's not a binary thing, of course, but it's definitely an LLM smell, IMO.
mvieira38 · 4h ago
I mean, are we supposed not to? This doesn't read like a blog at all, it even has the dreaded "Key Takeaways" end section... The content is good and seems genuinely researched, but the text looks "AI enhanced", that's all
jokoon · 4h ago
Any insight on the quantity of paid Rust jobs out there?