Making Rust's macros easier is laudable. Purely from a user's perspective, I find it especially annoying that proc macros need their own crate, even if I understand the reasons for it.
If I read Crabtime correctly it solves that problem, which is nice.
That being said, Crabtime looks more like compile-time eval on steroids to me than an analogue of Zig's comptime.
One (maybe the) distinguishing feature between comptime in Zig and Rust macros seems to me to be access to type information. In Zig you have it[1];
in Rust you don't, and that makes a big difference. It doesn't look like we will get that in Rust anytime soon, and the projects that need it (e.g. cargo-semver-checks) use dirty tricks (parsing RustDoc output from the macro) to accomplish it. I did not see anything like that in Crabtime, but I might have missed it. At any rate, I'd expect compile-time reflection from anything that claims to bring comptime to Rust.
[1] I think, but I am not a Zig expert, so please correct me if I am wrong.
pron · 39d ago
> One (maybe the) distinguishing feature between comptime in Zig and Rust macros seems to me to be access to type information. In Zig you have it[1] in Rust you don't and that makes a big difference.
There are other differences. First, comptime functions aren't syntactic macros. This makes them much easier to reason about and debug. You could think about them as if they were regular functions running at runtime in a partially-typed language with powerful reflection (their simplicity also means they're weaker than macros, but the point is that you can get very far with that, without taking on the difficulties associated with macros). Second, I think that comptime's uniqueness comes not from what it does in isolation, but that it makes other language features redundant, keeping the entire language simple. This means that with one simple yet just-powerful-enough feature you can do away with several other features.
The end result is that Zig is a very simple language with the expressivity of far more complicated languages. That on its own is not super unusual; in a way, JavaScript is like that, too. But Zig does it in a low-level language, and that's revolutionary. It is because of its simplicity that people compare Zig to C, but it's as expressive as C++ while also being safer than C++, let alone C.
Adding comptime to an already-complex language misses out on its greatest benefit.
GrantMoyer · 38d ago
Nit-pick: Javascript is not simple; I personally think it has among the most complex language semantics out of all commonly used languages.
HelloNurse · 38d ago
Javascript would be a simple hybrid of object-oriented and functional principles if backward compatibility with old hacks and "robust" use for web page scripting didn't require a host of redundant features, bizarre syntactic special cases, and evil semantic choices: -- comments, iterating objects and arrays, the absurd equality operators and type conversions, and so on.
efnx · 38d ago
Prototypal inheritance can be pretty awkward and complicated. It's evident in the fact that few young JS devs ever use it, and many don't even know about it!
chuckadams · 38d ago
I'm an old JS dev, having been around from its beginning, and I avoid JS's OOP system of prototypes too (along with the class syntax) and stick to only static closures in object literals. VueJS and TypeScript are also mostly written in that style, so I'm in good company. I'm tickled pink that we're at the point where a dev actually _can_ forget that the OOP system exists, because it's a horror show. I learned OOP on a prototype-based system (LambdaMOO), so it's not like it's foreign to me; JS's system is just so full of warts and wats it's not funny.
It's scary how good TS is at inferring the type, it'll even infer an instance of a class from a literal, so the interop with class-based code is almost seamless (almost: just don't try destructuring a "real" object or re-binding `this` on a static closure)
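As a rough illustration of the closures-in-object-literals style (my sketch, not the commenter's code, runnable in Node or a browser console): state lives in a lexical scope, so methods can be detached freely and `this` never enters the picture.

```javascript
// A counter built from static closures in an object literal — no
// prototypes, no classes, no `this`. State is captured lexically.
function makeCounter() {
  let n = 0;
  return {
    inc: () => ++n,
    get: () => n,
  };
}

const c = makeCounter();
c.inc();
const detached = c.inc; // safe to detach — nothing depends on `this`
detached();
console.log(c.get()); // 2
```

Because the methods are arrow functions, even an explicit `c.inc.call(someOtherObject)` ignores the passed `this` and keeps using the closed-over `n`.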
Lerc · 38d ago
Can you give an example outside of the weirdness caused by automatically converting types? (Or 'with', which is kind of in purgatory now)
I have considered a non backwards compatible JavaScript descendant to clean things up. It would be interesting to hear what you consider to be problems.
Empty slots are skipped in map() and for-in, but not in for-of and the new array from map() will have the same empty slots. delete will change a slot to empty, it won't change the length of the array.
Still, much saner than PHP arrays.
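The hole behavior described above, as a runnable sketch (Node or a browser console):

```javascript
const arr = [1, 2, 3];
delete arr[1];                 // leaves a hole; length is unchanged
console.log(arr.length);       // 3
console.log(1 in arr);         // false — index 1 is now an empty slot

const doubled = arr.map(x => x * 2);
console.log(1 in doubled);     // false — map() skips the hole and preserves it

const forInKeys = [];
for (const i in arr) forInKeys.push(i);
console.log(forInKeys);        // ['0', '2'] — for-in skips the hole

const forOfVals = [];
for (const x of arr) forOfVals.push(x);
console.log(forOfVals);        // [1, undefined, 3] — for-of visits it as undefined
```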
cardanome · 38d ago
> Still, much saner than PHP arrays.
PHP basically has arrays and maps merged into one, much like Lua does.
It is pretty handy. Not sure what you find insane about it.
Of course the functions for arrays in the standard library are consistently inconsistent but that is just a general PHP thing. Isn't a big deal when you use an IDE.
chuckadams · 38d ago
> Not sure what you find insane about it
How about array_filter() returning an associative array, because it doesn't renumber the indexes? You need to run it through array_values() afterward. You run into this all the time when converting to JSON.
For something more obscure but equally infuriating, how about using iterator_to_array() on anything that uses `yield from`? Everything yielded that way will overwrite previously yielded values, unless you pass a magic boolean parameter that of course defaults to doing the wrong thing (PHP is chock full of those). OFC it's because of the behavior of array keys.
How about when you do want associative array semantics and use a string index that consists entirely of numeric digits? It gets cast into an int, always, and you cannot control that. This is super "fun" when you run array_keys on an associative array on which you only used string keys, and naïvely think it will return only strings. Crash at runtime if your function was declared to take just a string.
There are so many more WTFs, these were just off the top of my head from issues I've encountered recently. I make my living with PHP, but you'll never see me defending arrays. Though at least they're zero-based As The Deity Intended It.
GrantMoyer · 38d ago
Off the top of my head:
- There's both undefined and null
- There are three ways to declare variables
- typeof vs instanceof
- for in vs for of
- the way `this` works
- semantics of functions called with the wrong number of args
There's far more that I can't immediately recall.
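A few of the items above, shown concretely (my sketch, runnable in Node or a browser console):

```javascript
// typeof vs instanceof: they answer different questions
console.log(typeof "hi");             // 'string'
console.log("hi" instanceof String);  // false — primitives aren't instances
console.log(typeof new String("hi")); // 'object' — but the wrapper is

// wrong number of arguments is never an error
function add(a, b) { return a + b; }
console.log(add(1));       // NaN — b is undefined
console.log(add(1, 2, 3)); // 3 — the extra argument is silently dropped

// `this` is bound at the call site, not the definition site
const obj = { n: 42, get() { return this.n; } };
const f = obj.get;
console.log(obj.get());        // 42
console.log(f.call({ n: 7 })); // 7 — same function, different `this`
```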
Lerc · 38d ago
A lot of that is what I had in mind. Things like for...of and let fix glaring problems by introducing new constructs instead of changing existing ones, in order to maintain backwards compatibility.
A lot of the fixes are just removing the old broken way of doing things.
I'd still probably change 'for x of ...' to implicitly be 'const x' and require let if you are doing shenanigans with changing the values.
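For comparison, the opt-in const form already behaves that way today, you just have to write it out:

```javascript
// `for (const x of ...)` gives a fresh per-iteration const binding
const doubledVals = [];
for (const x of [1, 2, 3]) {
  doubledVals.push(x * 2);
  // x += 1; // would throw TypeError: Assignment to constant variable
}
console.log(doubledVals); // [2, 4, 6]
```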
I am surprisingly ok with null and undefined. Null means not a thing, undefined means there is not even the notion of the thing.
I'm pretty sure there are more than three ways to declare a variable.
x=5
let x=5
var x=5
const x=5
window.x=5
window["x"]=5 //arguably the same as above.
Object.defineProperty(window,"x",{value:5})
Object.assign(window,{x:5})
at least
int_19h · 36d ago
> Null means not a thing, undefined means there is not even the notion of the thing.
If only "undefined" was consistently the same as lack of variable / collection entry, it would be understandable. But as things are, {x: 1, y: undefined} is not the same as {x: 1} in many ways - indeed, so much so that TS distinguishes between these two cases in its type system.
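A sketch of the ways `{x: 1, y: undefined}` and `{x: 1}` diverge (runnable in Node or a browser console):

```javascript
const a = { x: 1, y: undefined };
const b = { x: 1 };

console.log(a.y === b.y);       // true — both reads produce undefined...
console.log('y' in a);          // true — ...but the key exists on `a`
console.log('y' in b);          // false
console.log(Object.keys(a));    // ['x', 'y'] — enumeration sees it too
console.log(JSON.stringify(a)); // '{"x":1}' — and JSON drops it again
```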
paulddraper · 37d ago
Many of those are setting properties, not variables.
A variable reference can look up properties on the global object, or on other objects with ‘with’
codethief · 34d ago
> There's both undefined and null
To make matters even worse, there's a difference between setting an object property to undefined as opposed to omitting it. So one could say there are two different "undefined"s.
chuckadams · 38d ago
And just for fun, `typeof null` returns `object`, which is an acknowledged bug they'll never fix because compatibility.
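The inconsistency in one place:

```javascript
console.log(typeof null);            // 'object' — the long-standing bug
console.log(null instanceof Object); // false — yet it isn't actually an object
console.log(typeof undefined);       // 'undefined' — this one, at least, is honest
```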
paulddraper · 37d ago
> three ways to declare a variable
FWIW Python has three (default, nonlocal, and global).
And many languages have two.
throwawaymaths · 38d ago
prototypes vs class
codedokode · 38d ago
Javascript has a lot of low-level internal methods like [[Get]], [[Call]], [[ToPrimitive]] which can be redefined. If you believe that JS is easy, do you remember how these low-level methods work? Also, JS has prototypes.
bungle · 38d ago
Simplicity in high-level: Lua, low-level: Zig (combined in one (complex): Rust?)
creata · 38d ago
> This makes them much easier to reason about and debug.
Can you give an example of something that's easier to reason about (e.g., an error that's easier to spot) with Zig's comptime than with macros?
> it makes other language features redundant
I'm guessing (so I might be wrong) that IDEs and users still need to be aware of the common idioms, so why does it matter whether or not those common idioms are implemented in the compiler or using comptime? (I'm not saying it doesn't matter, I'm wondering what benefits you have in mind.)
WhyNotHugo · 38d ago
> Can you give an example of something that's easier to reason about (e.g., an error that's easier to spot) with Zig's comptime than with macros?
Rust proc macros take a stream of tokens and return a stream of tokens. If your macro is meant to return an instance of a specific type, it must output the correct tokens that create that instance via existing interfaces. There's some really ugly indirection in trying to understand what's going on.
This is always harder to reason about than Zig's equivalent, because in Zig you just return the thing that you want to return.
NobodyNada · 38d ago
How does it work if I wanted to construct a type (and maybe a set of helper types, some related functions, etc.), rather than an instance of a type?
If I just wanted to construct an instance of a specific type at compile time in Rust, I'd probably be using a const fn instead of a macro.
cgh · 38d ago
You return the type directly. You can then declare things to be of this type. Eg, from the Zig docs, here's how to construct a generic List type (note the comptime declaration of the generic parameter):
    fn List(comptime T: type) type {
        return struct {
            items: []T,
            len: usize,
        };
    }

    // The generic List data structure can be instantiated by passing in a type:
    var buffer: [10]i32 = undefined;
    var list = List(i32){
        .items = &buffer,
        .len = 0,
    };
samatman · 38d ago
You can't reason about macros, that's not how they work.
You can read their definition, you can expand them, but there's no way to look at a macro call and reason about it, it can do anything at all. In C you don't even know what is and isn't a macro, so Rust has a modest edge in that respect.
Zig just doesn't have this problem to begin with.
creata · 38d ago
Reading a macro's definition and reasoning about its effect is... reasoning. It's not the same as reasoning about something using its inherent limitations, which is the kind of reasoning that I think you're referring to, but it's still reasoning.
samatman · 38d ago
Ok, sure, we can reason about anything. We could reason about machine code, if we had the time and inclination.
I barely participate in Hacker News anymore because it seems to have collectively lost the ability to extract meaning from words, unless an exhausting and totally excessive amount of attention is put into satisfying a misplaced sense of precision. There's no intellectual charity left and it sucks.
No comments yet
dabinat · 37d ago
I have difficulty debugging proc macros. If I need to output some data to aid in debugging a derive macro, the only way I could think of to make that happen was to make it panic with the data as part of the message. This feels like a very clunky way to debug.
pjmlp · 37d ago
Zig's safety is kind of on par with Modula-2 and Object Pascal, which are equally expressive as high-level languages with low-level capabilities, but apparently curly brackets have won.
The two things it adds on top, is the way nullability is handled at compile time, while the former do runtime null checks, and comptime.
baranul · 34d ago
> curly brackets have won
True, but at least there are curly bracket languages heavily influenced by Pascal, such as Go, V (Vlang) (has comptime too and is debatably safer), Odin, etc...
huijzer · 38d ago
I always avoid macros and comptime on purpose as much as possible, because they usually make code much harder to reason about and to debug. Also, macros are often hard to refactor.
Would you agree with my idea, or would you say I am missing something? Does Zig alleviate some of the problems I mentioned?
vijaybritto · 37d ago
Now that I'm older I agree with you, but I would have been furious hearing this ten years ago :)
forks · 39d ago
What are some examples of other language features that comptime makes redundant?
dhruvrajvanshi · 39d ago
Its generics system, for example, is built on top of comptime. A generic struct is just a function that takes a type as an argument and returns a struct.
```
fn Vec(comptime T: type) type {
    return struct {
        // ...
    };
}
```
IMO having a first class generic type parameter syntax is better but this demonstrates OP's point.
SkiFire13 · 38d ago
Nit: comptime does not replace a proper generic system (i.e. a polymorphic type system), but acts more like a templating system (like the one in C++).
Wumpnot · 38d ago
It just looks like C++ templates with a slightly different syntax...
azakai · 38d ago
Exactly, the point is that C++ added templates as a huge new language feature, while in Zig it is just one of the things that is immediately possible thanks to comptime.
throwawaymaths · 38d ago
well, not quite, since you can pass non-type things of generally any level of data-type complexity (as long as it's comptime-valid, which only excludes certain kinds of mutation), and do stuff with them that you couldn't in C++.
That example looks easy enough to replicate in C++ with consteval + template, basically the same except a few minor syntax changes.
Maxatar · 38d ago
You absolutely can't do that in C++ with consteval + template. C++ would need support for reflection to do that, and maybe it will get it in 10 years, maybe not, but as of today this would not be possible.
Furthermore, the original argument wasn't about whether something can or can't be done in C++, it was that this one feature in Zig subsumes what would require a multitude of features from C++, such as consteval, templates, SFINAE, type traits, so on so forth...
Instead of having all these disparate features all of which work in subtly different ways, you have one single feature that unifies all of this functionality together.
listeria · 37d ago
You absolutely can do that in C++ with consteval + template, what's more, you don't even need consteval, constexpr will suffice:
I'd agree that zig's comptime encompasses many of C++'s features, and I appreciate the approach they took to new features by way of builtins (like @typeInfo + @Type for reflection), but this is not a good example.
Furthermore, why is type traits among the list of features that you claim is subsumed by comptime? not only is that not the case, but type traits are not so much a feature of C++ as they are of its standard library, implemented using templates (a feature of the language).
steveklabnik · 38d ago
You're not wrong in general here, but C++ is going to get the core of reflection in C++26. I'm not sure enough of the details to know if it supports doing this, however.
Rust on the other hand... that might be ten years.
pcwalton · 38d ago
There are various reflection crates available on crates.io, as you know.
steveklabnik · 38d ago
Yeah, I always wished that the reflect crate got further along than it has.
I still think that language support is important, but unfortunately, due to what happened, I suspect that will take a long time. And that's disappointing.
pcwalton · 38d ago
We Bevy users use Rust reflection on a daily basis, and are very happy with it :)
I agree it'd be nice if it weren't confined to our community, though.
steveklabnik · 38d ago
I should check it out, I haven't had an actual use-case for reflection lately, so I haven't given bevy_reflect a try yet, but when I do, I'll make sure to give it a shot.
Ygg2 · 38d ago
How does it do reflection on third party types (non-Rust, non-Bevy types)? Those that don't derive Reflect?
Don't coherence rules prevent it?
pcwalton · 37d ago
Assuming by "non-Rust types" you mean "those that the bevy_reflect crate doesn't know about", it's indeed limited by the orphan rule. That being said, bevy_reflect offers many workarounds for this problem. Because bevy_reflect is based on a type registry containing function pointers, you can actually populate it manually for types external to your crate without using the Reflect trait at all if you want to. And if your type contains fields that aren't Reflect, then you can use custom reflection logic.
Ygg2 · 37d ago
non-Rust meaning non-std types, for which Bevy reflect can do manual trait implementation.
Wumpnot · 38d ago
There is no reflection in this example, this is easily replicated in C++
throwawaymaths · 38d ago
it's not, because what you want to be a constexpr is not const. The type signature is comptime []u8, not comptime []const u8
throwawaymaths · 38d ago
i didn't use reflection in this example, but note that consteval shouldn't be able to do this because I mutate the string; it's not const at comptime.
listeria · 37d ago
I tried your example and got an error:
error: type capture contains reference to comptime var
I'm not sure how you were suppossed to use it but here's my attempt:
If you allow `capitalized` to be its own instance then there's no reason to mutate the comptime parameter in the first place, and it can be replicated in C++17.
pron · 38d ago
Generics, interfaces/traits/concepts, macros, conditional compilation, const functions/constexpr. These are four or five different features in C++ or Rust, some of which are quite complex, all expressible as one simple construct: comptime.
SkiFire13 · 38d ago
Comptime can only properly express half of them:
- generics: comptime can implement some kind of polymorphism, but not at the type level. In other words it implements a templating system, not a polymorphic type system;
- interfaces/traits/concepts: comptime implements none of that, it is plain duck typing, just like "old" C++ templates. In fact C++ introduced concepts to improve its situation with templates, while Zig is still behind on that front!
- macros: comptime solves some of the use cases where macros are used, but it cannot produce arbitrary tokens and hence cannot fully replace macros.
I do agree that it can neatly replace conditional compilation and const functions/const expr, but let's not make it seem like comptime solves everything in the world.
throwawaymaths · 38d ago
in practice, aside from interfaces, the only thing you can't do at comptime is generically attach declarations (member functions, consts) to a container type (the best you can do is to do it on a case-by-case basis).
you could probably cobble together an interface system with @compileError, but because of the time order of compilation stages, a failed method call will trigger the compiler error before your custom interface code, making it effectively useless for the 90% of cases you care about.
if I'm not mistaken in principle a small change to the compiler could amend this situation
pcwalton · 38d ago
How do you typecheck generics, with type inference, with comptime?
Or, more generally, address all the issues raised in [1]. You're saying that comptime can fully replicate all the features that a proper generics system has, which is plainly false.
I would say these are more differences than issues, and that some of those presented as more fundamental ones are actually quite small. Suppose that instead of `fn foo (comptime T : type, ...) { typecheck(T); ...}` Zig introduced just a tiny bit of new syntax to allow you to write something like `fn foo (comptime T : typecheck(T), ...) { ... }` -- i.e. the type constraints would be part of the signature -- would you then say it had generics rather than templates? Personally, I have not made up my mind on whether or not such an addition would be very valuable, but even if it is, it can be done later. That small addition would address most "issues" in the article, which I would say are more about IDE support than anything else. But even without it, what you want to know is known at compile time, and the article admits that the compilation errors are already better than those you get with C++ templates (I would say much better).
Now, I'm not saying that Zig's choices always dominate and that all languages would be better off with its approach; far from it. I am saying that it introduces a novel tradeoff that is especially compelling in cases where not only generics but also macros, conditional compilation, and constexprs are otherwise required. In a language like Java these extra features are not required, and so Zig-style comptime would not simplify the language nearly as much.
But even in cases where all these features are needed, I don't think everyone would take Zig's choices over C++'s or Rust's, or vice-versa. To those, like me, for whom language complexity is the biggest problem with C++ or Ada (I used Ada in the nineties), Zig is a revolutionary step forward. I don't think any low-level language has ever been this simple while also being this expressive.
pcwalton · 38d ago
It's interesting that so many replies in this thread (and indeed, most Zig threads) are along the lines of "yes, it doesn't do X today, but Zig could just add X". I'd really like to see arguments in favor of Zig that rely on what it can do today, rather than what it might do someday. After all, you don't extend Rust the same courtesy, and Zig is not that young of a language. And in PL circles Zig has a bit of a reputation for promising things that it has yet to deliver (e.g. static detection of potential stack overflows, which I'm convinced just can't be done in a useful way in the presence of higher order functions).
pron · 38d ago
> It's interesting that so many replies in this thread (and indeed, most Zig threads) are along the lines of "yes, it doesn't do X today, but Zig could just add X".
That wasn't my argument.
> After all, you don't extend Rust the same courtesy
Given that my aesthetic issue with Rust is that it has too many complicated features, I don't see how that courtesy could be extended. There is, indeed, an asymmetry between adding features and removing them, but the aesthetic "points" I'm awarding Zig are not due to features it could add but due to features it hasn't added, while those have not yet been shown to be critical.
I think it's fairly obvious that any feature in any language was added to add some positive value. But every feature also has a negative value, as it makes the language more complicated, which in aggregate may mean fewer programs would be written in it. The challenge is balancing the value of features with their complexity. Even those who prefer Rust's aesthetics to Zig would admit that Zig's novel approach to power/simplicity balance is something we have not seen in programming language design in many years.
pcwalton · 38d ago
> The challenge is balancing the value of features with their complexity. Even those who prefer Rust's aesthetics to Zig would admit that Zig's novel approach to power/simplicity balance is something we have not seen in programming language design in many years.
I disagree. Minimalism in systems language design has been done over and over: see Go for the most recent example. Comptime is something that C++ was already doing in the form of constexpr since 2011 and a space that D had explored for over a decade before Zig came around in the form of "static if" and so forth (in addition to lots of academic work, of course). Stripping out template metaprogramming in favor of leaning heavily on compile-time function evaluation isn't novel either. I think you find the set of features that Zig has to be personally appealing, which is fine. But the argument that it's anything novel is weak, except in the trivial sense that every language is novel because it includes some features and leaves others out (but if every language is novel, then the word "novel" has no meaning).
From my vantage point, Zig is essentially a skin on a subset of C++, one that is in practice less safe than C++ because of the relative immaturity of tooling.
pron · 38d ago
I've been doing low-level programming for over 30 years now, and Zig's use of comptime as a simplifying feature is nothing at all like C++'s or D's (it is more conceptually similar to the role macros play in Lisps). Denying how revolutionary it is for low-level programming seems strange to me. You don't have to like the end result, but clearly Zig offers a novel way to do low-level programming. I was excited and thoroughly impressed by Rust's application of substructural typing despite being familiar with the idea long before Rust came out, even though the overall language doesn't quite suit my taste.
Minimalism is also, of course, not a new idea, but unlike in Go, I wouldn't say minimalism is Zig's point. Expressive low-level languages have always been much more complex than similarly expressive high-level ones, something that many of us have seen as their primary problem, and Zig is the first one that isn't. In other words, the point isn't minimalism as a design aesthetic (I would say that explicitness is a design aesthetic of Zig's much more than minimalism) but rather reducing the complexity that some have seen as the biggest issue with expressive low-level languages.
What was so impressive to me is that we've always known how macros can add almost infinite expressivity to languages that could be considered minimalistic, but they carry their own set of severe complexity issues. Zig showed how comptime can be used to offer much of the power of macros with almost none of their cost. That, too, is revolutionary even without considering the low-level domain specifically (although high-level languages have other options).
Finally, if you want to talk about "safety in practice", considering more than just the language, I don't think we can know without empirical study, but the same claim could be made about Rust. Both Zig the language and Rust the language are safer than either C or C++ (the languages), and Rust the language is safer than Zig the language. But as to their relative safety (or correctness) "in practice" either now or in the future, only time and empirical study will tell. Clearly, languages that offer more soundness, like Idris or ATS, don't always work so well in practice. So I admit I don't know at this time whether Zig (the gestalt) offers more or less correctness in practice than C++ or which of Zig or Rust offers more correctness in practice, but neither do you.
gw2 · 37d ago
> Zig is essentially a skin on a subset of C++, one that is in practice less safe than C++
Give it a rest please. Given your association with Rust, endlessly attacking competing languages is not a good look, regardless of whether your points are technically correct or not.
hollerith · 37d ago
Looks fine to me.
pjmlp · 37d ago
So Mesa, Cedar, Modula-2, Object Pascal, Oberon,... don't count?
pron · 37d ago
Of course they count, but they never reached Zig's expressivity/simplicity ratio. For example, Oberon doesn't have generics. You could argue that Zig doesn't have dynamic dispatch as part of the language, but it's expressive enough for that to be done in a library (https://github.com/alexnask/interface.zig). Put simply, Zig can do pretty much anything C++ can with the same expressivity (by programming languages X and Y having the same expressivity I mean that there is some small constant C such that any program in X could be written in Y in a number of lines that is within a factor of C compared to X).
pjmlp · 37d ago
Modula-2 and Object Pascal evolved to have generics, if that is the "expressiveness problem".
Also, all the ones I mentioned supported binary libraries, which apparently is not something the Zig folks are too keen on supporting, other than a C-like ABI.
Any systems language that doesn't support binary library distribution isn't that relevant to me, and yes, that is also something that I regularly complain about in Rust, and not only me. Microsoft has a talk on their Rust adoption where this is mentioned as a problem, relieved only thanks to the ubiquity of COM as a mechanism for delivering binary libraries on Windows.
pron · 36d ago
I agree that good separate compilation is valuable, but having a full ABI for natively-aot-compiled languages is rather difficult once generics are involved. Even C++ doesn't have one (and if you think about it, even C punts on separate compilation once macros are involved). I think the only such language that offers one -- and not quite to the full extent -- is Swift.
baranul · 35d ago
Another aspect of this argument is that Zig is supposedly not about adding many features, in order to live up to its claim of being a small and simple language with an emphasis on systems programming. Rust's pitch, on the other hand, does not promise small and simple.
creata · 38d ago
> would you then say it had generics rather than templates?
I think pcwalton's "generics" vs. "templates" distinction mostly boils down to parametric typechecking, which Zig's design just can't do. (Can it?)
Although, I vaguely remember some example showing that even Rust in some cases allows a type definition X<T> even when there exists a T such that X<T> would fail to typecheck.
creata · 38d ago
I don't think pron was saying that Zig has a feature-by-feature match for everything that Rust's generics can do. I think his point is that comptime can handle what the target audience of Zig wants from generics. In that regard, I don't think the criticisms there are that big a deal.
throwawaymaths · 38d ago
1. if you wish, you absolutely can check for "extra constraints" on a passed type (or even an anytype parameter) using comptime reflection and the @compileError builtin.
2. if you want to restrict the use of a function to comptime (why you would want to is beyond me), it is possible to do with the @inComptime builtin.
the only tricky bit is that your function could try to call a function that is inaccessible to you because it's transitively restricted, and you'd have a hard time noticing that from the code, but it's not possible for that code to be compiled (barring errors by the Zig team), so it's more of an annoyance than a problem.
They're popping up all over. For some reason, Zig folk want Rust things and Rust folk want Zig things.
pcwalton · 38d ago
I have seen projects like that, and they're a prime example of what I mean when I want to see arguments in favor of Zig that rely on what Zig can do now, and not what it could potentially do in the future. Ten years ago, C++ folks were also promising memory safety with the ISO C++ Core Guidelines; you don't hear much about that anymore, because it turns out it can't be done while keeping the resulting language C++ (for example, seanbax's work for C++, which is far more advanced than this project, is really awesome, but is essentially a different language).
baranul · 34d ago
If people are referring to what Zig or any other language could possibly do, as something valid to be taken under consideration, then it should at least be explicitly stated in their roadmap.
Otherwise, it can be construed as just another wish list or feature request, that main developers have no plans to implement.
rc00 · 38d ago
And what about all the promises of things making it out of Rust Nightly?
When it comes to Zig, one crowd says it's too early to judge because it's not at 1.0 yet. And now, are you saying that the feature development is basically done? What point are you trying to make?
pcwalton · 37d ago
The point I'm making is that I don't believe Zig can be made meaningfully memory safe without breaking compatibility to the extent that memory-safe Zig would effectively be a different language, any more than C can.
pron · 37d ago
I tend to agree, but writing code in a memory-safe language is not the goal. The relevant goal on that particular front (and it is not the only goal people have when writing programs) is to maximise the resulting program's correctness within some effort budget and other requirements (such as memory consumption etc.). Using a memory-safe language is one approach toward achieving that goal, but if that safety comes at a cost of added effort, then showing that's the best approach depends on claims that are hard to verify without a lot more empirical data. The only theoretical justification could be that more soundness results in more practical correctness, but if that were the case then there are languages that offer more soundness than Rust. In other words, if Rust is generally superior to Zig because it offers more soundness, then other languages are superior still. Indeed, soundness is not considered the best approach to attaining every property in general because it can be more expensive than other approaches that achieve the desired outcome, and Rust clearly recognises that.
Rather, both languages' correctness-related claims rely on them being in some better or worse effort/correctness sweet spot, and that can only be measured empirically for those two specific languages. Crucially, results comparing Rust to C or C++ are not relevant. Zig offers more soundness than C, and even if it offers the same soundness as C++ (and I disagree -- I think it offers more) the language's claim is that its simplicity assists correctness. We can only tell to what extent that is true with more empirical observation; I don't think there can be any good guess made here regarding the size of the effect.
kobebrookskC3 · 37d ago
some correctness matters more than others. for a web browser, i would rather use a browser that is buggy/janky/crashy but doesn't give random websites arbitrary native code execution over a browser that is logically correct except for giving random websites rce. the languages that are more sound than rust are probably worse along some axis like ease of use, expressivity or performance. even if they weren't, they're probably not going to buy me much more than a slightly less buggy/janky/crashy browser, once you no longer have exploitable memory safety bugs.
pron · 36d ago
Sure, but once soundness starts trading off other properties that may be important even for that kind of correctness, it is not necessarily the best approach. You don't care what the cause of the exploitable remote execution vulnerability is, and remote code execution is a real vulnerability even in fully memory-safe languages.
If you look at MITRE's CWE Top 25 [1], #2 (Out-of-bounds Write) and #6 (Out-of-bounds Read) are soundly eliminated by both Zig and Rust. It is only when it comes to #8 (Use After Free) that Rust's additional soundness comes into play. But because it comes at a cost, the question is to what extent eliminating #8 adversely impacts the other top vulnerabilities, including those higher on the list. It may be the case that eliminating the eighth most dangerous vulnerability through sound language guarantees may end up being worse overall even if we're only concerned with security (and it isn't the only concern). We can only try to assess that through empirical study.
if you looked at the clr project you would find a credible claim that it doesn't have to be an "effectively different language", that there is a real possibility that "minor annotations are sufficient".
> I don't believe
Of course, we're debating against a belief, and you have so many reasons not to believe it that it will be impossible for you to be swayed by any sort of evidence; you will always find a way to move the goalposts.
pjmlp · 39d ago
That is what I really like about the evolution of metaprogramming in C++.
While it started as a hack on how to use templates back in C++98, it has gotten quite usable nowadays in C++23, and the compile time reflection will make it even better.
All without having another language to learn about, as it happens with Rust macros, with its variations, or reliance on 3rd party crates (syn).
msk-lywenn · 39d ago
How is C++'s template metaprogramming not another language inside C++ today? AFAIK, the syntax and even general logic is still extremely different than regular C++
pjmlp · 39d ago
constexpr, consteval, if constexpr, requires, auto,... are quite regular C++.
codedokode · 38d ago
The problem with C++ metaprogramming is that it is pain to read and understand, unless it is your daily job.
nukem222 · 39d ago
How on earth does zig resolve types before macros? Must be some ~~nuts~~ novel order of evaluation to get that behavior. How is this intended to function? Are there multiple layers of macros? Do you have to declare said level or is it derived? How do you use macros to define types or declare types of variables? Can you use said types in other macros?
pfg_ · 39d ago
Zig doesn't have macros, it has functions which can be run at comptime. You can make a function that returns a type and call it from another function. All declarations are only analyzed when they are first used, and functions when called at comptime are memoized based on their arguments. The order of evaluation is really simple and predictable.
weinzierl · 39d ago
Just for completeness: Rust has functions which can be run at comptime as well. They are called const fn and Rust has them out of the box, no crate required. They are also true Rust and not macros with a separate syntax.
They are still not an adequate substitute for Zig's comptime feature. On the one hand they are, in a sense, much more limited than comptime functions in Zig; on the other (and for better or worse), they also have much higher aspirations than Zig's.
const fn must always be able to be run at compile time or run time and always produce bit-identical results. This is much harder than it looks at first glance because it must also uphold in a cross-compiling scenario where the compile time environment can be vastly different from the run time environment.
This requirement also forbids any kind of side effect, so Rust const fn are essentially pure functions, and I've heard them called exactly that.
WhyNotHugo · 38d ago
What Rust is missing is reflection and the ability to define types and functions via code. Zig's comptime is often used for this: to generate code (for example, a serialiser for a type) or to generate types (generics being a typical thing, but lots of other usages are viable).
vlovich123 · 38d ago
You can define types and functions via code via macros. For example, [1] which creates a new sibling type and injects a ::builder method into your type.
And you can add reflection [2]. So if you can add what you need via crates, is the language actually missing it or is it just not as ergonomic / performant as it needs to be or is it an education problem?
Rust has compile time environment variables and const fns can parse them. It's one very nice and easy way to experiment with or configure rust code at compile time, and should be explored more.
Zambyte · 39d ago
Reading environment variables contradicts my mental model of pure functions. Are they just not pure functions then? Or pure-ish?
kzrdude · 39d ago
The functions are pure but they take an input from a built-in (looks like a macro) that reads the environment variable at compile time.
Also, you only compile once, so how could you tell the difference? You could say that if it uses a const fn, it's a "templated" function that depends on compile-time settings.
Zambyte · 38d ago
I see, I guess if you can't set environment variables during build time that makes it pure-ish enough.
dwattttt · 38d ago
There's two lookups that can occur, distinguishing them makes it clearer.
Looking up the value of an environment variable at runtime is not a const operation, and produces an error if you try to do it in a const fn.
Looking up the value of an environment variable during compile time _can_ be done in a const context, but it'll only happen once. The environment should be considered an input to a const fn, and that makes it "pure".
EDIT: These two operations can both be done in non-const functions too, they're different functions (well, one's a macro).
vlovich123 · 38d ago
You can set environment variables in your build.rs / the user sets it like A=b cargo build.
IshKebab · 39d ago
Yeah I don't really understand why Rust copied that from C++, where constexpr functions only might run at compile time.
C++ ended up having to add consteval and constinit which really are compile-time.
kibwen · 39d ago
It's the other way around. Rust has always had contexts that are guaranteed to be compile-time (const and static items), and gradually added the ability to run some subset of the language at compile-time (const fn) specifically to accommodate const/static items (e.g. to replace the old lazy_static with the modern LazyLock), and naturally also allows these functions to run at runtime if you want (and in what would otherwise be a runtime context, you can enforce compile-time evaluation with a const block).
tialaramex · 39d ago
Huh? What is it you think Rust copied here? I agree that the choice in C++ is essentially worthless, so that in practice you can write functions which are definitely never executed at compile time and aren't constant in any sense, label them constexpr and that compiles anyway. It just becomes yet more noise C++ programmers learn to type by reflex to get the correct behaviour from their compiler, joining explicit.
But in Rust that's not what you're getting. Rust's const fn is none of the options C++ decided it needed, Rust says if the parameters are themselves constants then we promise we can evaluate this at compile time and if appropriate we will -- this means we can use Rust's const fn where we'd use C++ consteval, but the function can also be called at runtime with variable parameters - and we can use Rust's const where we'd use C++ constinit, calling these const fn with constant parameters.
Because Rust is more explicit about safety of course, we can often get away with claiming some value is "constant" in C++ despite actually figuring out what it is at runtime, and Rust isn't OK with that, for example in my code
The Rational type is a big rational, it owns heap allocations so we'll just make one once, at runtime, and then re-use it whenever we need this particular fraction (it's for calculating natural logarithms of arbitrary computable real numbers).
IshKebab · 38d ago
> Rust says if the parameters are themselves constants then we promise we can evaluate this at compile time and if appropriate we will
Well exactly. "if appropriate". So like C++'s `constexpr`, Rust doesn't make any guarantees about compile-time evaluation.
Zig's `comptime` must be evaluated at compile time.
tialaramex · 38d ago
If we want a constant context, we can say so. Because Rust is expression oriented we can write for example a loop (though if you're unfamiliar with Rust it may not be clear why a for loop can't work yet, other loops are fine) and wrap the whole expression in a const block and that'll be evaluated at compile time. For example:
    let a = const {
        let mut x: u64 = 0;
        let mut k = 5;
        loop {
            if k == 0 {
                break x;
            }
            k -= 1;
            x += 2;
            x *= x;
        }
    };
LegionMammal978 · 38d ago
Yeah, it's nice that const blocks are stable now. Before them, you had to use hacks like defining a trait impl with an associated const, which was verbose and messy.
(If I recall correctly, one of the big questions was "Will const blocks unreachable at runtime still be evaluated at compile time?" It looks like the answer was to leave it unspecified.)
kibwen · 38d ago
> Zig's `comptime` must be evaluated at compile time.
Yes, and the equivalent in Rust is any constant context, such as a const item, or a const block inside of a non-const function. Anything in a constant context is guaranteed to run at compile-time.
tyilo · 39d ago
Floats are not guaranteed to be bit-identical at compile time and run time in Rust.
kibwen · 39d ago
It's not quite as bad as it sounds, because the only difference is that the representation of NaN (the sign and the payload bits) isn't guaranteed to be stable. If you're not relying on any specific representation of NaN, then floating-point math in const fn is identical, and observed differences would be considered a soundness bug in the Rust compiler.
weinzierl · 39d ago
I had to look this up, because it has been a while since I tried to use floating-point math in a const fn, and it seems the differences you described were deemed acceptable.
Oh, nice. Sometimes I find it really hard to track the status of new Rust features. For example, the tracking issue I linked is still open with "Stabilize" missing.
what happens if you compile on a system that has a different precision than the system you run on? like suppose you compile on a 64-bit system targeting a 32-bit embedded chip with an fp accelerator, or a 16-bit system with softfloat?
kibwen · 38d ago
I'm not personally familiar with the implementation, but Rust's const fn is evaluated using an interpreter called Miri with its own softfloat implementation, and therefore isn't limited by the precision of the host platform. The act of cross-compilation shouldn't pose a problem, and it would be a soundness issue in the compiler if it did.
throwawaymaths · 38d ago
no the soundness of the compiler is at risk because the target has limitations, not the host.
kibwen · 38d ago
As long as the target is compliant with IEEE 754, which is what Rust expects, it shouldn't be an issue. The only platform that I know of that causes problems is extremely old pre-SSE 32-bit x86, where floats sometimes have 80-bit precision and which can't be worked around because of LLVM limitations which nobody's going to fix because the target is so obscure. Rust will probably just end up deprecating that target and replacing it with a softfloat equivalent.
throwawaymaths · 38d ago
there are multiple IEEE 754 floating-point formats in use: 32-, 64-, and 80-bit.
so your claim is that rust compiler knows in advance which will be used by the target and adjusts its softfloat accordingly?
I'm not convinced. IIRC there are cases for SIMD where there is only a 2 ULP guarantee and some tryhard silicon gives you 1 ULP for the same opcode.
kibwen · 37d ago
> so your claim is that rust compiler knows in advance which will be used by the target and adjusts its softfloat accordingly?
Rust performs FP operations using the precision of the underlying type. For compile time evaluation this is enforced by Miri, and for runtime evaluation this is enforced by carefully emitting the appropriate LLVM IR.
> IIRC there are cases for SIMD where there is only a 2 ULP guarantee and some tryhard silicon gives you 1 ULP for the same opcode.
Rust only permits operations in constant contexts when it's confident that it can make useful guarantees about their behavior. In particular, FP ops in const contexts are currently limited as follows:
"This RFC specifies the behavior of +, - (unary and binary), *, /, %, abs, copysign, mul_add, sqrt, as-casts that involve floating-point types, and all comparison operations on floating-point types."
Last time I checked the float functions that have no bit-identical results (mostly transcendental functions) were missing from Rust's const fn for exactly that reason.
codedokode · 38d ago
They are not guaranteed to be precise anyway.
Asraelite · 35d ago
> They are still not an adequate substitute for Zig's comptime feature. For one and in a sense they are much more limited than comptime functions in Zig
The syntactic restrictions don't really matter; it's still Turing-complete. The key difference is that types are values in Zig but not in Rust, which is a core design feature of the language and can't be changed easily.
johnisgood · 39d ago
So if Rust has Zig's comptime feature, what is this crate? How does it differ, what does it add?
IshKebab · 39d ago
Rust doesn't have Zig's comptime feature. Rust's const fns are normal functions that are capable of running at compile time. It's an optional optimisation; it doesn't add any additional semantic capabilities, because the functions also need to be able to run at runtime.
Zig's comptime functions only run at compile time, so they can do extra things - in particular manipulating types - that you can't do if your function needs to run at runtime. (Don't mention dependent types.)
zozbot234 · 39d ago
Note that dependently-typed code also effectively "runs at compile-time", it's inherent to that programming model. You can "extract" an ordinary program from dependently-typed code which you can then compile to a binary and run as usual, but then that program will not feature dependent types in their full generality.
johnisgood · 39d ago
Okay, so this crate adds Zig's actual comptime?
weinzierl · 39d ago
No, it doesn't because it is based on Rust macros which are strictly less capable than Zig comptime in a crucial way (compile time reflection).
Neither Rust macros nor const fn are 100% what Zig comptime is but they have other properties that Zig comptime lacks. Apples and oranges.
johnisgood · 38d ago
Then the title is misleading (although not surprising).
So what are the major differences between the two anyways? Just to be sure.
bsder · 38d ago
> Zig's comptime functions only run at compile time, so they can do extra things - in particular manipulating types - that you can't do if your function needs to run at runtime.
Careful, I'm not sure this is true. I haven't found a Zig comptime function that doesn't also work just as well as a runtime function.
This is, in fact, the primary characteristic that makes Zig comptime easier to reason about than any "macro" system. If something is wrong in my comptime function, I can normally make a small adjustment to force it to be a runtime function that I can step through and probe and debug.
It's sort of a unification of compile time and run time semantics and it is long overdue. The late John Shutt's Scheme-alike Kernel (https://web.cs.wpi.edu/~jshutt/kernel.html) sort of approached this as did old-school Tcl.
bsder · 37d ago
Correcting myself: Any Zig function which returns a Type can only be run at comptime as a Type can only be created at compile time.
nukem222 · 39d ago
> Zig doesn't have macros, it has functions which can be run at comptime
You raised my hopes and dashed them quite expertly, sir. Bravo!
gliptic · 39d ago
You're probably underestimating what you can do with these.
wavemode · 39d ago
The answer is that zig doesn't have macros (i.e. syntactic transformations). Comptime functions in zig are just that - functions which run at compile time. They run after typechecking of existing types, but they are capable of creating new types. Types in Zig are just values. But they're values that don't exist at runtime.
deredede · 39d ago
Zig's comptime is not macros, it's staged programming / multi-stage programming.
Ygg2 · 39d ago
Zig comptime is a C++ template done better. It also suffers from similar issues as C++ templates. You can't know whether a function works at comptime unless you call it at comptime and it passes.
This blog post is disqualified from any serious discussion, because it doesn't know the distinction between templates, which Zig's comptime constructs are not, and partial evaluation with reified types, which Zig's comptime constructs are.
It's not possible to make a positive contribution after a mistake that basic.
Here's an example of someone getting the design space correct, and therefore contributing to the discussion in a positive way. He doesn't end up liking Zig, for reasons I disagree with, but he does completely evade being not-even-wrong, which is table stakes.
> This blog post is disqualified from any serious discussion, because it doesn't know the distinction between templates
Just because a blog doesn't go full type theory doesn't disqualify it from drawing conclusions based on experience and limitations incurred during actual use.
Something can be very well typed but still suck to use.
Intuition doesn't need to be based on formal understanding. See the periodic table of elements: created by grouping elements by behavior, it turned out to be based on electron orbital configuration.
samatman · 38d ago
The central claim is that Zig's use of comptime is similar enough to templates to conflate them. That's simply incorrect. There's no value in trying to extract information from something which makes such a basic mistake as that, it doesn't contribute to a discussion, it distracts that discussion down a blind alley.
Ygg2 · 38d ago
I think it's insightful to some extent. The problems encountered in C++ templates apply to Zig's comptime as well. And their solutions seem to be along the same lines, i.e. adding constraints.
Edit: on re-reading, the author doesn't understand why negative traits in Rust are a problem (to them, negation is just a basic boolean operation). I think they are abstracting too much, saying cows should be roughly spherical and water should roughly be a superconductor.
pjmlp · 39d ago
C++98 templates done better, we are past that now in C++23.
Also D and Circle, done it before Zig.
andrepd · 39d ago
>we are past that now
Jesus, we very much aren't. The only real step improvement was in C++11 which added constexpr which (in the following years) gradually obviated the need for C++98 style TMP. But there really hasn't been much of a generational improvement since (OK, concepts I guess), and it remains cumbersome and error prone and difficult to debug.
pjmlp · 39d ago
If I compare C++98 template metaprogramming with tag dispatch, ADL, and SFINAE, with what C++23 offers, it is already a complete different world, even if there are still warts to improve.
littlestymaar · 39d ago
This isn't a macro, it works as both macros and templates in C++, and regarding types it works the same way as templates in C++.
SkiFire13 · 38d ago
I'll leave this here, try guessing what this prints:
well i suppose this is a good part of the reason why usingnamespace is likely to go the way of the dodo, though if i had to guess:
hello: 4, 5, 5
nindalf · 39d ago
I tried the library out and it worked pretty well for me.
I had previously written a declarative macro to generate benchmark functions [1]. It worked, but I didn't enjoy the process of getting it working. Nor did I feel confident about making changes to it.
When I rewrote it using crabtime I found the experience much better. I was mostly writing Rust code now, something I was familiar with. The code is much more readable and customisable [2]. For example, instead of having to pass in the names of the modules each time I added a new one, I simply read the files from disk at compile time.
To compare the two, see what the code looks like within the braces of paste!{} in the first one and crabtime::output!{} in the second one. The main difference is that I can construct the strings using Rust code and drop them in with a simple {{ str }}. With paste!, I don't know exactly what I did; I kept messing around until it worked.
Or compare the two loops. In the first one we have `($($year:ident {$($day:ident),+ $(,)?}),+ $(,)?)` while with crabtime we have plain Rust code - `for (year, day) in years_and_days`. I find the latter more readable.
Overall I'm quite pleased with crabtime. Earlier I'd avoid Rust metaprogramming as much as possible, but now I'd be open to writing a macro if the situation called for it.
> Polanas – For their invaluable assistance with testing, design, and insightful feedback that greatly improved the project.
> Your support and contributions have played a vital role in making this crate better—thank you!
mplanchard · 38d ago
At first I was like wait this looks just like eval_macro, which I discovered a couple of weeks ago. Looks like it is just renamed! The new name is great, congrats on the improved branding :)
vlovich123 · 39d ago
Wow this is so neat. Has anyone had any experience with it / feedback? This looks so much nicer than existing macros.
nindalf · 39d ago
The author announced a previous version of this crate a couple of weeks ago and this new version two days ago. So there might not be that many users.
That said I did replace a declarative macro with it. Supposedly I wrote the declarative macro according to git blame but I only have a vague idea how it works. I replaced it with crabtime and I got something that I can understand and maintain.
Overall I’d say I’m very pleased with crabtime. Previously I would have avoided Rust metaprogramming as much as possible, but now I’d feel confident to use it where appropriate.
Ygg2 · 39d ago
It's neat, but I don't think it does the same as Zig's comptime. For one it doesn't have Zig's dynamic behavior, nor more practically compile time reflection.
Rust can't have Zig's comptime dynamic properties, same way Zig will never have Rust's compile time guarantees*.
You can't simultaneously have your dynamic cake and eat it at compile time.
* Theoretically you can have it, but it would require changing language to such extent it's not recognizable.
weinzierl · 39d ago
No experience. I agree that it looks nice and useful, but I don't think it is much like Zig's comptime.
norman784 · 39d ago
This looks nice. Just yesterday I was trying to make my code more concise by using some macro_rules magic, but it was a bit more than what macro_rules can handle, so I ended up just writing the whole thing. I avoid proc macros whenever I can; I wrote my fair share of macros, but I hate them. Most of the time you need to add three new dependencies, syn, quote and proc-macro2, which adds up in compilation times.
This looks worth playing with to see if it can solve my issue. One thing I avoid as much as possible is adding unnecessary dependencies; I didn't check how many dependencies this will add to the project overall.
lifthrasiir · 39d ago
It depends on proc-macro2, syn, quote, toml and rustc_version [1]. First three are legitimately expected for any complex enough procedural macros. Toml and rustc_version are apparently for automatic Cargo configuration and fairly harmless by their own. Their transitive dependencies are also not bad: unicode-ident (from proc-macro2), serde, serde_spanned, toml_datetime (from toml), and semver (from rustc_version).
This looks cool, but how does it impact project compile times? They talk about how caching works for multiple invocations of the same macro with different arguments. It would be nice to have some approximate numbers for how long it takes to create, compile, and execute one of its generated projects.
codedokode · 38d ago
The problem with macros in Rust is that they have full access to your computer. This is literally an invitation for exploitation. I think we will see attacks based on this vulnerability once Rust becomes more popular.
jkelleyrtp · 38d ago
`make myfile.mk` -> pwned
I do share the sentiment - and complain about this frequently - but any environment with build scripts can wreck your computer. Encrypt what you can, I guess, but software engineering is an extremely dangerous job wrt security.
zamalek · 38d ago
It's slightly more insidious: merely opening it in a text editor (assuming it has some form of LSP) could pwn you. Rust definitely isn't alone in this. Quite a few of the editors I know will run in a dumbed-down mode when opening an unknown repo.
kibwen · 37d ago
It's even more insidious than that! Even navigating to a directory in a checkout of a hostile git repo can run arbitrary code if your shell displays git info (what branch you're on, etc).
hypeatei · 38d ago
Do other languages have a security model for this? I've always assumed that building arbitrary code could execute something in most languages.
I think using something like the pledge syscall from OpenBSD in the compiler could be useful. That way, it's controlled at the process level which things can be accessed on the system.
codedokode · 38d ago
C macros and gcc do not allow running arbitrary code during compilation.
Does anyone have an example beyond the one on that page? I'm having a hard time understanding.
So, I'm interested in some metaprogramming right now. I'm setting up Vec3 SIMD types, and it requires a lot of repetition to manage the various variants: f32::Vec3x8, f64::Vec3x16 etc that are all similar internally. This could be handled using traditional macros, procedural macros, or something called "code gen", which I think is string manipulation of code. Could I use crabtime to do this instead? Should I?
> This could be handled using traditional macros, procedural macros, or something called "code gen", which I think is string manipulation of code. Could I use crabtime to do this instead? Should I?
You could, it seems. Crabtime supports both the procedural macros and "code gen" approaches you are talking about.
codedokode · 38d ago
For simply copy-pasting code you could start with the simplest traditional macros. No matter what approach you choose, your code will be a pain to read and understand (maybe we need a "show generated code" button in our IDEs).
conaclos · 38d ago
Actually we have a command to do exactly what you want: `expand macro`. Crabtime claims to have the same thing.
Does anyone else find macros make it hard to grep a code base? This does seem like something semantic grep could solve, but I'm unaware of any semantic grep tools that handle macro use cases.
mplanchard · 38d ago
This is why I generally avoid making new structs via macros, and why I personally dislike the popular error library snafu: breaking “go to definition” and codebase search really needs to be worth it IMO. It doesn’t feel as bad for proc macros that add methods for whatever reason for me, but having to use a type with an opaque definition hidden behind a macro really bugs me.
Nullabillity · 38d ago
Snafu works fine with at least rust-analyzer's gotodef (though go-to-references is indeed broken by it :/).
codedokode · 38d ago
The problem is not macros, it is concatenation of identifiers. I stumbled upon this a lot when working with CSS preprocessors which allow you to write code like this:
.user {
&--profile { color: red; }
}
Now, searching for the "user--profile" CSS class becomes impossible. Despite this, SASS and similar preprocessors seem to be popular and used almost everywhere. Well, I never had high expectations of front-end developers, so I am not very disappointed.
So I think as long as you don't break identifiers, the code should be searchable. But your IDE will probably not be able to help you with auto-complete and navigation.
jgalt212 · 38d ago
> The problem is not macros, it is concatenation of identifiers.
Yes, that's also a problem. As is the MySQLCursorDict class. Now you have to grep every column name of every table your code base accesses.
loeg · 38d ago
More so than calling an ordinary subroutine?
dymk · 38d ago
The code demo in the crabtime readme is actually a good example of something that is now hard to grep. Let's say you see a usage of `Position1`, and you want to find where and how it's defined - well, there is no `enum Position1` in the codebase, because the identifier is concatenated from two separate parts. You lose out on some IDE niceties as well - can't command-click on the definition site to find usages, because there is no definition site (available to you, at least).
cchance · 38d ago
I don't know if i'm too dumb i never understood what comptime gives, i get what macros are for but how does something like crabtime improve things ?
Does this basically allow us to write normal rust code instead of procmacros, with even fewer constraints?
voidhorse · 38d ago
It's a crutch for stone age imperative languages that still have weak expressivity and inferior type systems.
Learn Haskell, Idris, ATS, etc and you will soon find it baffling how easy it is to impress OOP/Imperative programmers and how little they demand of their languages.
jedisct1 · 38d ago
It's nothing like Zig's comptime.
cyber1 · 38d ago
No, this is not Zig comptime at all. Zig's comptime work is on another level, and it's amazing.
metaltyphoon · 38d ago
Instead of just saying its not. Explain what’s so much different here.
tdhz77 · 38d ago
Moments like this make me realize I don't understand programming fundamentals. Imposter syndrome sets in and I realize that my lack of formal education is costing me.
Making Rust's macros easier is laudable. Purely from a user's perspective I find it especially annoying that proc macros need their own crate, even if I understand the reasons for it. If I read Crabtime correctly it solves that problem, which is nice.
That being said, Crabtime looks more like compile-time eval on steroids to me than an analogue of Zig's comptime.
One (maybe the) distinguishing feature between comptime in Zig and Rust macros seems to me to be access to type information. In Zig you have it[1]; in Rust you don't, and that makes a big difference. It doesn't look like we will get that in Rust anytime soon, and the projects that need it (e.g. cargo-semver-checks) use dirty tricks (parsing rustdoc output from the macro) to accomplish it. I did not see anything like that in Crabtime, but I might have missed it. At any rate, I'd expect compile-time reflection from anything that claims to bring comptime to Rust.
[1] I think, but I am not a Zig expert, so please correct me if I am wrong.
There are other differences. First, comptime functions aren't syntactic macros. This makes them much easier to reason about and debug. You could think about them as if they were regular functions running at runtime in a partially-typed language with powerful reflection (their simplicity also means they're weaker than macros, but the point is that you can get very far with that, without taking on the difficulties associated with macros). Second, I think that comptime's uniqueness comes not from what it does in isolation, but that it makes other language features redundant, keeping the entire language simple. This means that with one simple yet just-powerful-enough feature you can do away with several other features.
The end result is that Zig is a very simple language with the expressivity of far more complicated languages. That on its own is not super unusual; in a way, JavaScript is like that, too. But Zig does it in a low-level language, and that's revolutionary. It is because of its simplicity that people compare Zig to C, but it's as expressive as C++ while also being safer than C++, let alone C.
Adding comptime to an already-complex language misses out on its greatest benefit.
It's scary how good TS is at inferring the type, it'll even infer an instance of a class from a literal, so the interop with class-based code is almost seamless (almost: just don't try destructuring a "real" object or re-binding `this` on a static closure)
I have considered a non backwards compatible JavaScript descendant to clean things up. It would be interesting to hear what you consider to be problems.
Still, much saner than PHP arrays.
PHP basically has arrays and maps merged into one. Basically like Lua does.
It is pretty handy. Not sure what you find insane about it.
Of course the functions for arrays in the standard library are consistently inconsistent but that is just a general PHP thing. Isn't a big deal when you use an IDE.
How about array_filter() returning an associative array, because it doesn't renumber the indexes? You need to run it through array_values() after. You run into this all the time when converting to json.
For something more obscure but equally infuriating, how about using iterator_to_array() on anything that uses `yield from`? Everything yielded that way will overwrite previously yielded values, unless you pass a magic boolean parameter that of course defaults to doing the wrong thing (PHP is chock full of those). OFC it's because of the behavior of array keys.
How about when you do want associative array semantics and use a string index that consists entirely of numeric digits? It gets cast into an int, always, and you cannot control that. This is super "fun" when you run array_keys on your associative array you only used string keys on, and naïvely think it will return only strings. Crash at runtime if your function was declared to just take a string.
There are so many more WTFs, these were just off the top of my head from issues I've encountered recently. I make my living with PHP, but you'll never see me defending arrays. Though at least they're zero-based As The Deity Intended It.
- There's both undefined and null
- There are three ways to declare a variable
- typeof vs instanceof
- for in vs for of
- the way `this` works
- semantics of functions called with the wrong number of args
There's far more that I can't immediately recall.
A lot of the fixes are just removing the old broken way of doing things.
I'd still probably change 'for x of ...' to implicitly be 'const x' and require let if you are doing shenanigans with changing the values.
I am surprisingly ok with null and undefined. Null means not a thing, undefined means there is not even the notion of the thing.
I'm pretty sure there are more than three ways to declare a variable.
If only "undefined" were consistently the same as the lack of a variable / collection entry, it would be understandable. But as things are, {x: 1, y: undefined} is not the same as {x: 1} in many ways - indeed, so much so that TS distinguishes between these two cases in its type system.
A variable reference can look up properties on the global object, or on other objects with `with`.
To make matters even worse, there's a difference between setting an object property to undefined as opposed to omitting it. So one could say there are two different "undefined"s.
FWIW Python has three (default, nonlocal, and global).
And many languages have two.
Can you give an example of something that's easier to reason about (e.g., an error that's easier to spot) with Zig's comptime than with macros?
> it makes other language features redundant
I'm guessing (so I might be wrong) that IDEs and users still need to be aware of the common idioms, so why does it matter whether or not those common idioms are implemented in the compiler or using comptime? (I'm not saying it doesn't matter, I'm wondering what benefits you have in mind.)
Rust proc macros take a stream of tokens and return a stream of tokens. If your macro is meant to return an instance of a specific type, it must output the tokens that create that instance via existing interfaces. There's some really ugly indirection in trying to understand what's going on.
This is always harder to reason about than Zig's equivalent, because in Zig you just return the thing that you want to return.
If I just wanted to construct an instance of a specific type at compile time in Rust, I'd probably be using a const fn instead of a macro.
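That point can be sketched in a few lines (the type and function names are hypothetical): a const fn builds a value of a concrete type directly, with no token-stream indirection, as long as everything it does is const-evaluable.

```rust
#[derive(Debug, PartialEq)]
struct Point { x: i32, y: i32 }

// A const fn is callable in const contexts and at runtime alike.
const fn scaled(p: Point, k: i32) -> Point {
    Point { x: p.x * k, y: p.y * k }
}

// Evaluated at compile time because it initializes a const item.
const DOUBLED: Point = scaled(Point { x: 1, y: 2 }, 2);

fn main() {
    assert_eq!(DOUBLED, Point { x: 2, y: 4 });
    // The same function also works with runtime values:
    let k = std::env::args().count() as i32;
    let _ = scaled(Point { x: 1, y: 1 }, k);
    println!("{:?}", DOUBLED);
}
```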
You can read their definition, you can expand them, but there's no way to look at a macro call and reason about it, it can do anything at all. In C you don't even know what is and isn't a macro, so Rust has a modest edge in that respect.
Zig just doesn't have this problem to begin with.
I barely participate in Hacker News anymore because it seems to have collectively lost the ability to extract meaning from words, unless an exhausting and totally excessive amount of attention is put into satisfying a misplaced sense of precision. There's no intellectual charity left and it sucks.
No comments yet
The two things it adds on top are the way nullability is handled at compile time (while the former do runtime null checks) and comptime.
True, but at least there are curly bracket languages heavily influenced by Pascal, such as Go, V (Vlang) (has comptime too and is debatably safer), Odin, etc...
Would you agree with my idea, or would you say I am missing something? Does Zig alleviate some of the problems I mentioned?
```
fn Vec(comptime T: type) type {
    return struct { x: T, y: T, z: T };
}
```
IMO having a first class generic type parameter syntax is better but this demonstrates OP's point.
Furthermore, the original argument wasn't about whether something can or can't be done in C++; it was that this one feature in Zig subsumes what would require a multitude of features in C++, such as consteval, templates, SFINAE, type traits, and so on.
Instead of having all these disparate features all of which work in subtly different ways, you have one single feature that unifies all of this functionality together.
Furthermore, why are type traits among the list of features that you claim are subsumed by comptime? Not only is that not the case, but type traits are not so much a feature of C++ as they are of its standard library, implemented using templates (a feature of the language).
Rust on the other hand... that might be ten years.
I still think that language support is important, but unfortunately, due to what happened, I suspect that will take a long time. And that's disappointing.
I agree it'd be nice if it weren't confined to our community, though.
Don't coherence rules prevent it?
- generics: comptime can implement some kind of polymorphism, but not at the type level. In other words it implements a templating system, not a polymorphic type system;
- interfaces/traits/concepts: comptime implements none of that, it is plain duck typing, just like "old" C++ templates. In fact C++ introduced concepts to improve its situation with templates, while Zig is still behind on that front!
- macros: comptime solves some of the usecases where macros are used, but it cannot produce arbitrary tokens and hence cannot fully replace macros.
I do agree that it can neatly replace conditional compilation and const functions/const expr, but let's not make it seem like comptime solves everything in the world.
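The contrast with duck typing can be shown in a few lines of Rust (the function name is invented for illustration): the requirement is part of the signature, so a bad instantiation is rejected at the call site with an error stated in terms of the trait, not deep inside the generic body.

```rust
use std::ops::Add;

// The bound `T: Add<Output = T> + Copy` is declared up front; the body is
// checked once against the bound, not re-checked per instantiation.
fn sum3<T: Add<Output = T> + Copy>(a: T, b: T, c: T) -> T {
    a + b + c
}

fn main() {
    assert_eq!(sum3(1, 2, 3), 6);
    assert_eq!(sum3(0.5, 0.25, 0.25), 1.0);
    // sum3("a", "b", "c") would fail to compile: &str does not implement Add.
}
```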
You could probably cobble together an interface system with @compileError, but because of the time order of compilation stages, a failed method call will trigger the compiler error before your custom interface code, making it effectively useless for the 90% of cases you care about.
If I'm not mistaken, in principle a small change to the compiler could amend this situation.
Or, more generally, address all the issues raised in [1]. You're saying that comptime can fully replicate all the features that a proper generics system has, which is plainly false.
[1]: https://typesanitizer.com/blog/zig-generics.html
Now, I'm not saying that Zig's choices always dominate and that all languages would be better off with its approach; far from it. I am saying that it introduces a novel tradeoff that is especially compelling in cases where not only generics but also macros, conditional compilation, and constexprs are otherwise required. In a language like Java these extra features are not required, and so Zig-style comptime would not simplify the language nearly as much.
But even in cases where all these features are needed, I don't think everyone would take Zig's choices over C++'s or Rust's, or vice-versa. To those, like me, for whom language complexity is the biggest problem with C++ or Ada (I used Ada in the nineties), Zig is a revolutionary step forward. I don't think any low-level language has ever been this simple while also being this expressive.
That wasn't my argument.
> After all, you don't extend Rust the same courtesy
Given that my aesthetic issue with Rust is that it has too many complicated features, I don't see how that courtesy could be extended. There is, indeed, an asymmetry between adding features and removing them, but the aesthetic "points" I'm awarding Zig is not due to features it could add but due to features it hasn't while they've not yet been shown to be critical.
I think it's fairly obvious that any feature in any language was added to add some positive value. But every feature also has a negative value, as it makes the language more complicated, which in aggregate may mean fewer programs would be written in it. The challenge is balancing the value of features with their complexity. Even those who prefer Rust's aesthetics to Zig would admit that Zig's novel approach to power/simplicity balance is something we have not seen in programming language design in many years.
I disagree. Minimalism in systems language design has been done over and over: see Go for the most recent example. Comptime is something that C++ was already doing in the form of constexpr since 2011 and a space that D had explored for over a decade before Zig came around in the form of "static if" and so forth (in addition to lots of academic work, of course). Stripping out template metaprogramming in favor of leaning heavily on compile-time function evaluation isn't novel either. I think you find the set of features that Zig has to be personally appealing, which is fine. But the argument that it's anything novel is weak, except in the trivial sense that every language is novel because it includes some features and leaves others out (but if every language is novel, then the word "novel" has no meaning).
From my vantage point, Zig is essentially a skin on a subset of C++, one that is in practice less safe than C++ because of the relative immaturity of tooling.
Minimalism is also, of course, not a new idea, but unlike in Go, I wouldn't say minimalism is Zig's point. Expressive low-level languages have always been much more complex than similarly expressive high-level ones, something that many of us have seen as their primary problem, and Zig is the first one that isn't. In other words, the point isn't minimalism as a design aesthetic (I would say that explicitness is a design aesthetic of Zig's much more than minimalism) but rather reducing the complexity that some have seen as the biggest issue with expressive low-level languages.
What was so impressive to me is that we've always known how macros can add almost infinite expressivity to languages that could be considered minimalistic, but they carry their own set of severe complexity issues. Zig showed how comptime can be used to offer much of the power of macros with almost none of their cost. That, too, is revolutionary even without considering the low-level domain specifically (although high-level languages have other options).
Finally, if you want to talk about "safety in practice", considering more than just the language, I don't think we can know without empirical study, but the same claim could be made about Rust. Both Zig the language and Rust the language are safer than either C or C++ (the languages), and Rust the language is safer than Zig the language. But as to their relative safety (or correctness) "in practice" either now or in the future, only time and empirical study will tell. Clearly, languages that offer more soundness, like Idris or ATS, don't always work so well in practice. So I admit I don't know at this time whether Zig (the gestalt) offers more or less correctness in practice than C++ or which of Zig or Rust offers more correctness in practice, but neither do you.
Give it a rest please. Given your association with Rust, endlessly attacking competing languages is not a good look, regardless of whether your points are technically correct or not.
Also, all the ones I mentioned supported binary libraries, which apparently is not something the Zig folks are too keen on supporting, other than a C-like ABI.
For me, any systems language that doesn't support binary library distribution isn't that relevant, and yes, that is also something that I regularly complain about in Rust, and not only me. Microsoft has a talk on their Rust adoption where this is mentioned as a problem, relieved only thanks to the ubiquity of COM as a mechanism to deliver binary libraries on Windows.
I think pcwalton's "generics" vs. "templates" distinction mostly boils down to parametric typechecking, which Zig's design just can't do. (Can it?)
Although, I vaguely remember some example showing that even Rust in some cases allows a type definition X<T> even when there exists a T such that X<T> would fail to typecheck.
2. if you want to restrict the use of a function to comptime (why you would want to is beyond me) it is possible to do with @inComptime builtin.
The only tricky bit is that your function could try to call a function inaccessible to you because it's transitively restricted, and you'd have a hard time noticing that from the code. But it's not possible for that code to be compiled (barring errors by the Zig team), so it's more of an annoyance than a problem.
https://github.com/ityonemo/clr
They're popping up all over. For some reason, Zig folk want Rust things and Rust folk want Zig things.
Otherwise, it can be construed as just another wish list or feature request that the main developers have no plans to implement.
When it comes to Zig, one crowd says it's too early because it's not 1.0 yet and isn't getting there fast enough. And now, are you saying that the feature development is basically done? What point are you trying to make?
Rather, both languages' correctness-related claims rely on them being in some better or worse effort/correctness sweet spot, and that can only be measured empirically for those two specific languages. Crucially, results comparing Rust to C or C++ are not relevant. Zig offers more soundness than C, and even if it offers the same soundness as C++ (and I disagree -- I think it offers more) the language's claim is that its simplicity assists correctness. We can only tell to what extent that is true with more empirical observation; I don't think there can be any good guess made here regarding the size of the effect.
If you look at MITRE's CWE Top 25 [1], #2 (Out-of-bounds Write) and #6 (Out-of-bounds Read) are soundly eliminated by both Zig and Rust. It is only when it comes to #8 (Use After Free) that Rust's additional soundness comes into play. But because it comes at a cost, the question is to what extent eliminating #8 adversely impacts the other top vulnerabilities, including those higher on the list. It may be the case that eliminating the eighth most dangerous vulnerability through sound language guarantees may end up being worse overall even if we're only concerned with security (and it isn't the only concern). We can only try to assess that through empirical study.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
> I don't believe
Of course, we're debating against a belief, and you have so many reasons not to believe it that it will be impossible for you to be swayed by any sort of evidence, and you will always find a way to move the goalposts.
While it started as a hack on how to use templates back in C++98, it has gotten quite usable nowadays in C++23, and the compile time reflection will make it even better.
All without having another language to learn about, as it happens with Rust macros, with its variations, or reliance on 3rd party crates (syn).
They are still not an adequate substitute for Zig's comptime feature. For one, and in a sense, they are much more limited than comptime functions in Zig; for another (and for better or worse), they also have much higher aspirations than Zig's.
const fn must always be able to be run at compile time or run time and always produce bit-identical results. This is much harder than it looks at first glance because it must also uphold in a cross-compiling scenario where the compile time environment can be vastly different from the run time environment.
This requirement also forbids any kind of side effect, so Rust const fn are essentially pure functions and I've heard them called like that.
And you can add reflection [2]. So if you can add what you need via crates, is the language actually missing it or is it just not as ergonomic / performant as it needs to be or is it an education problem?
[1] https://docs.rs/typed-builder/latest/typed_builder/
[2] https://docs.rs/reflect/latest/reflect/
Also, you only compile once, so how could you tell the difference? You could say that if it was using const fn, it's a "templated" function that depends on compile-time settings.
Looking up the value of an environment variable at runtime is not a const operation, and produces an error if you try to do it in a const fn.
Looking up the value of an environment variable during compile time _can_ be done in a const context, but it'll only happen once. The environment should be considered an input to a const fn, and that makes it "pure".
EDIT: These two operations can both be done in non-const functions too, they're different functions (well, one's a macro).
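A small sketch of that distinction: `option_env!` reads the variable while the compiler runs and bakes the result into the binary, whereas `std::env::var` consults the process environment on every call.

```rust
// Captured once, when rustc runs; the binary carries the result.
const AT_BUILD: Option<&str> = option_env!("HOME");

fn main() {
    // Looked up at runtime, each time this line executes.
    let at_run = std::env::var("HOME").ok();
    println!("build-time: {:?}, run-time: {:?}", AT_BUILD, at_run);
}
```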
C++ ended up having to add consteval and constinit which really are compile-time.
But in Rust that's not what you're getting. Rust's const fn is none of the options C++ decided it needed, Rust says if the parameters are themselves constants then we promise we can evaluate this at compile time and if appropriate we will -- this means we can use Rust's const fn where we'd use C++ consteval, but the function can also be called at runtime with variable parameters - and we can use Rust's const where we'd use C++ constinit, calling these const fn with constant parameters.
Because Rust is more explicit about safety of course, we can often get away with claiming some value is "constant" in C++ despite actually figuring out what it is at runtime, and Rust isn't OK with that, for example in my code
We can just calculate what power of two is bigger than SIG_BITS and shift it left at compile time. But... the Rational type is a big rational; it owns heap allocations, so we'll just make one once, at runtime, and then re-use it whenever we need this particular fraction (it's for calculating natural logarithms of arbitrary computable real numbers).

Well, exactly: "if appropriate". So like C++'s `constexpr`, Rust doesn't make any guarantees about compile-time evaluation.
Zig's `comptime` must be evaluated at compile time.
(If I recall correctly, one of the big questions was "Will const blocks unreachable at runtime still be evaluated at compile time?" It looks like the answer was to leave it unspecified.)
Yes, and the equivalent in Rust is any constant context, such as a const item, or a const block inside of a non-const function. Anything in a constant context is guaranteed to run at compile-time.
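A sketch of that guarantee (the function is invented for illustration): anything inside a `const` block, stable since Rust 1.79, is evaluated at compile time even though it sits inside an ordinary runtime function.

```rust
fn mask_low_bits(v: u64) -> u64 {
    // The const block is a constant context: the shift and subtraction
    // are guaranteed to happen at compile time, not per call.
    let mask = const { (1u64 << 40) - 1 };
    v & mask
}

fn main() {
    assert_eq!(mask_low_bits(u64::MAX), (1u64 << 40) - 1);
    println!("{:#x}", mask_low_bits(u64::MAX));
}
```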
Looks like floating point math in const fn is coming. Here is the respective tracking issue: https://github.com/rust-lang/rust/issues/128288
So your claim is that the Rust compiler knows in advance which will be used by the target and adjusts its softfloat accordingly?
I'm not convinced. IIRC there are cases for SIMD where there is only a 2 ULP guarantee and some tryhard silicon gives you 1 ULP for the same opcode.
Rust performs FP operations using the precision of the underlying type. For compile time evaluation this is enforced by Miri, and for runtime evaluation this is enforced by carefully emitting the appropriate LLVM IR.
> IIRC there are cases for SIMD where there is only a 2 ULP guarantee and some tryhard silicon gives you 1 ULP for the same opcode.
Rust only permits operations in constant contexts when it's confident that it can make useful guarantees about their behavior. In particular, FP ops in const contexts are currently limited as follows:
"This RFC specifies the behavior of +, - (unary and binary), *, /, %, abs, copysign, mul_add, sqrt, as-casts that involve floating-point types, and all comparison operations on floating-point types."
https://github.com/rust-lang/rfcs/blob/master/text/3514-floa...
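For the operations that RFC lists, a const item is enough to see compile-time FP evaluation in action (the values here are purely illustrative):

```rust
// Basic float arithmetic, unary minus, and as-casts are among the
// operations permitted in constant contexts.
const TWO_PI: f64 = 2.0 * 3.141592653589793;
const RATIO: f32 = (1.0 / 4.0) as f32;
const NEG: f64 = -TWO_PI;

fn main() {
    assert_eq!(RATIO, 0.25);
    assert!(NEG < 0.0);
    println!("{TWO_PI} {RATIO} {NEG}");
}
```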
The syntactic restrictions don't really matter; it's still Turing-complete. The key difference is that types are values in Zig but not in Rust, which is a core design feature of the language and can't be changed easily.
Zig's comptime functions only run at compile time, so they can do extra things - in particular manipulating types - that you can't do if your function needs to run at runtime. (Don't mention dependent types.)
Neither Rust macros nor const fn are 100% what Zig comptime is but they have other properties that Zig comptime lacks. Apples and oranges.
So what are the major differences between the two anyways? Just to be sure.
Careful, I'm not sure this is true. I haven't found a Zig comptime function that doesn't also work just as well as a runtime function.
This is, in fact, the primary characteristic that makes Zig comptime easier to reason about than any "macro" system. If something is wrong in my comptime function, I can normally make a small adjustment to force it to be a runtime function that I can step through and probe and debug.
It's sort of a unification of compile time and run time semantics and it is long overdue. The late John Shutt's Scheme-alike Kernel (https://web.cs.wpi.edu/~jshutt/kernel.html) sort of approached this as did old-school Tcl.
You raised my hopes and dashed them quite expertly, sir. Bravo!
See https://typesanitizer.com/blog/zig-generics.html
It's not possible to make a positive contribution after a mistake that basic.
Here's an example of someone getting the design space correct, and therefore contributing to the discussion in a positive way. He doesn't end up liking Zig, for reasons I disagree with, but he does completely evade being not-even-wrong, which is table stakes.
https://hirrolot.github.io/posts/why-static-languages-suffer...
Just because a blog doesn't go full type theory doesn't disqualify it from drawing conclusions based on experience and limitations incurred during actual use.
Something can be very well typed but still suck to use.
Intuition doesn't need to be based on formal understanding. See the periodic table of elements: created by grouping elements by behavior, it turned out to be based on electron orbital configuration.
Edit: on re-reading, the author doesn't understand why negative traits in Rust are a problem (not is a basic boolean operation). I think they are abstracting too much, saying cows should be roughly spherical and water should roughly be a superconductor.
Also D and Circle, done it before Zig.
Jesus, we very much aren't. The only real step improvement was in C++11 which added constexpr which (in the following years) gradually obviated the need for C++98 style TMP. But there really hasn't been much of a generational improvement since (OK, concepts I guess), and it remains cumbersome and error prone and difficult to debug.
I had previously written a declarative macro to generate benchmark functions [1]. It worked, but I didn't enjoy the process of getting it working. Nor did I feel confident about making changes to it.
When I rewrote it using crabtime I found the experience much better. I was mostly writing Rust code now, something I was familiar with. The code is much more readable and customisable [2]. For example, instead of having to pass in the names of the modules each time I added a new one, I simply read the files from disk at compile time.
To compare the two, see what the code looks like within the braces of paste!{} in the first one and crabtime::output!{} in the second one. The main difference is that I can construct the strings using Rust code and drop them in with a simple {{ str }}. With paste!, I don't know exactly what I did; I kept messing around until it worked.
Or compare the two loops. In the first one we have `($($year:ident {$($day:ident),+ $(,)?}),+ $(,)?)` while with crabtime we have plain Rust code - `for (year, day) in years_and_days`. I find the latter more readable.
Overall I'm quite pleased with crabtime. Earlier I'd avoid Rust metaprogramming as much as possible, but now I'd be open to writing a macro if the situation called for it.
[1] - https://github.com/nindalf/advent/blob/13ff13/benches/benche...
[2] - https://github.com/nindalf/advent/blob/b72b98/benches/benche...
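For readers who haven't seen the declarative pattern being compared, here is a reduced sketch of its shape (the module and function names are invented for illustration): nested `$(...)` repetition over years and days, matching the matcher quoted above.

```rust
// One module per year, one stub function per day; the matcher mirrors
// the `$($year:ident {$($day:ident),+ $(,)?}),+ $(,)?` shape.
macro_rules! benches {
    ($($year:ident { $($day:ident),+ $(,)? }),+ $(,)?) => {
        $(
            mod $year {
                $( pub fn $day() -> &'static str { stringify!($day) } )+
            }
        )+
    };
}

benches!(
    year2023 { day1, day2 },
    year2024 { day1 },
);

fn main() {
    assert_eq!(year2023::day2(), "day2");
    assert_eq!(year2024::day1(), "day1");
}
```

With crabtime, the same iteration becomes an ordinary `for` loop over a Rust data structure instead of a `macro_rules!` matcher.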
> We would like to extend our heartfelt gratitude to the following individuals for their valuable contributions to this project:
> timonv – For discovering and suggesting the brilliant name for this crate. Read more about it here (https://www.reddit.com/r/rust/comments/1j42fgi/comment/mg6pw...).
> Polanas – For their invaluable assistance with testing, design, and insightful feedback that greatly improved the project.
> Your support and contributions have played a vital role in making this crate better—thank you!
No comments yet
That said I did replace a declarative macro with it. Supposedly I wrote the declarative macro according to git blame but I only have a vague idea how it works. I replaced it with crabtime and I got something that I can understand and maintain.
Overall I’d say I’m very pleased with crabtime. Previously I would have avoided Rust metaprogramming as much as possible, but now I’d feel confident to use it where appropriate.
Rust can't have Zig's comptime dynamic properties, same way Zig will never have Rust's compile time guarantees*.
You can't simultaneously have your dynamic cake and eat it at compile time.
* Theoretically you can have it, but it would require changing language to such extent it's not recognizable.
This looks worth playing with, to see if it can solve my issue. One thing I avoid as much as possible is adding unnecessary dependencies; I didn't check how many dependencies this will add overall to the project.
[1] https://crates.io/crates/crabtime-internal/1.1.1/dependencie...
I do share the sentiment - and complain about this frequently - but any environment with build scripts can wreck your computer. Encrypt what you can, I guess, but software engineering is an extremely dangerous job wrt security.
I think using something like the pledge syscall from OpenBSD in the compiler could be useful. That way, it's controlled at the process level which things can be accessed on the system.
But thank you for letting me learn something useful.
But go off, king.
So, I'm interested in some metaprogramming right now. I'm setting up Vec3 SIMD types, and it requires a lot of repetition to manage the various variants: f32::Vec3x8, f64::Vec3x16 etc that are all similar internally. This could be handled using traditional macros, procedural macros, or something called "code gen", which I think is string manipulation of code. Could I use crabtime to do this instead? Should I?
> This could be handled using traditional macros, procedural macros, or something called "code gen", which I think is string manipulation of code. Could I use crabtime to do this instead? Should I?
You could, it seems. Crabtime supports both the procedural macros and "code gen" approaches you are talking about.
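As a sketch of the declarative-macro route for this (all names hypothetical, and a scalar `Vec3` rather than real SIMD lanes, to keep it dependency-free):

```rust
// Stamp out structurally identical Vec3 types for several scalar types.
macro_rules! vec3 {
    ($name:ident, $t:ty) => {
        #[derive(Clone, Copy, Debug, PartialEq)]
        pub struct $name { pub x: $t, pub y: $t, pub z: $t }

        impl $name {
            pub fn dot(self, o: Self) -> $t {
                self.x * o.x + self.y * o.y + self.z * o.z
            }
        }
    };
}

vec3!(Vec3f32, f32);
vec3!(Vec3f64, f64);

fn main() {
    let a = Vec3f32 { x: 1.0, y: 2.0, z: 3.0 };
    assert_eq!(a.dot(a), 14.0);
    let b = Vec3f64 { x: 0.5, y: 0.5, z: 0.0 };
    assert_eq!(b.dot(b), 0.5);
}
```

Real SIMD variants (e.g. `Vec3x8` over lane types) would add intrinsics per arm, but the repetition structure is the same; crabtime would let that loop over (name, type) pairs be written as ordinary Rust instead of macro matchers.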