Crabtime: Zig’s Comptime in Rust
466 points by klaussilveira 3/19/2025, 6:44:11 PM 195 comments crates.io ↗
Making Rust's macros easier is laudable. Purely from a user's perspective I find it especially annoying that proc macros need their own crate, even if I understand the reasons for it. If I read Crabtime correctly it solves that problem, which is nice.
That being said, Crabtime looks more like compile-time eval on steroids to me than an analogue of Zig's comptime.
One (maybe the) distinguishing feature between comptime in Zig and Rust macros seems to me to be access to type information. In Zig you have it[1]; in Rust you don't, and that makes a big difference. It doesn't look like we will get that in Rust anytime soon, and the projects that need it (e.g. cargo-semver-checks) use dirty tricks (parsing rustdoc output from the macro) to accomplish it. I did not see anything like that in Crabtime, but I might have missed it. At any rate, I'd expect compile-time reflection from anything that claims to bring comptime to Rust.
[1] I think, but I am not a Zig expert, so please correct me if I am wrong.
There are other differences. First, comptime functions aren't syntactic macros. This makes them much easier to reason about and debug. You could think about them as if they were regular functions running at runtime in a partially-typed language with powerful reflection (their simplicity also means they're weaker than macros, but the point is that you can get very far with that, without taking on the difficulties associated with macros). Second, I think that comptime's uniqueness comes not from what it does in isolation, but that it makes other language features redundant, keeping the entire language simple. This means that with one simple yet just-powerful-enough feature you can do away with several other features.
The end result is that Zig is a very simple language with the expressivity of far more complicated languages. That on its own is not super unusual; in a way, JavaScript is like that, too. But Zig does it in a low-level language, and that's revolutionary. It is because of its simplicity that people compare Zig to C, but it's as expressive as C++ while also being safer than C++, let alone C.
Adding comptime to an already-complex language misses out on its greatest benefit.
It's scary how good TS is at inferring the type, it'll even infer an instance of a class from a literal, so the interop with class-based code is almost seamless (almost: just don't try destructuring a "real" object or re-binding `this` on a static closure)
I have considered a non-backwards-compatible JavaScript descendant to clean things up. It would be interesting to hear what you consider to be the problems.
Still, much saner than PHP arrays.
PHP basically has arrays and maps merged into one, much like Lua does.
It is pretty handy. Not sure what you find insane about it.
Of course the functions for arrays in the standard library are consistently inconsistent but that is just a general PHP thing. Isn't a big deal when you use an IDE.
How about array_filter() returning an associative array, because it doesn't renumber the indexes? You need to run it through array_values() after. You run into this all the time when converting to json.
For something more obscure but equally infuriating, how about using iterator_to_array() on anything that uses `yield from`? Everything yielded that way will overwrite previously yielded values, unless you pass a magic boolean parameter that of course defaults to doing the wrong thing (PHP is chock full of those). OFC it's because of the behavior of array keys.
How about when you do want associative array semantics and use a string index that consists entirely of numeric digits? It gets cast into an int, always, and you cannot control that. This is super "fun" when you run array_keys on your associative array you only used string keys on, and naïvely think it will return only strings. Crash at runtime if your function was declared to just take a string.
There are so many more WTFs, these were just off the top of my head from issues I've encountered recently. I make my living with PHP, but you'll never see me defending arrays. Though at least they're zero-based As The Deity Intended It.
- There's both undefined and null
- There's three ways to declare a variable
- typeof vs instanceof
- for in vs for of
- the way `this` works
- semantics of functions called with the wrong number of args
There's far more that I can't immediately recall.
A lot of the fixes are just removing the old broken way of doing things.
I'd still probably change 'for x of ...' to implicitly be 'const x' and require let if you are doing shenanigans with changing the values.
I am surprisingly ok with null and undefined. Null means not a thing, undefined means there is not even the notion of the thing.
I'm pretty sure there are more than three ways to declare a variable.
If only "undefined" were consistently the same as the lack of a variable / collection entry, it would be understandable. But as things are, {x: 1, y: undefined} is not the same as {x: 1} in many ways; indeed, so much so that TS distinguishes between these two cases in its type system.
A variable reference can look up properties on the global object, or on other objects via ‘with’
To make matters even worse, there's a difference between setting an object property to undefined as opposed to omitting it. So one could say there are two different "undefined"s.
FWIW Python has three (default, nonlocal, and global).
And many languages have two.
Can you give an example of something that's easier to reason about (e.g., an error that's easier to spot) with Zig's comptime than with macros?
> it makes other language features redundant
I'm guessing (so I might be wrong) that IDEs and users still need to be aware of the common idioms, so why does it matter whether or not those common idioms are implemented in the compiler or using comptime? (I'm not saying it doesn't matter, I'm wondering what benefits you have in mind.)
Rust proc macros take a stream of tokens and return a stream of tokens. If your macro is meant to return an instance of a specific type, it must output the correct tokens which create that instance via existing interfaces. There's some really ugly indirection in trying to understand what's going on.
This is always harder to reason about than Zig's equivalent, because in Zig you just return the thing that you want to return.
If I just wanted to construct an instance of a specific type at compile time in Rust, I'd probably be using a const fn instead of a macro.
You can read their definition, you can expand them, but there's no way to look at a macro call and reason about it, it can do anything at all. In C you don't even know what is and isn't a macro, so Rust has a modest edge in that respect.
Zig just doesn't have this problem to begin with.
I barely participate in Hacker News anymore because it seems to have collectively lost the ability to extract meaning from words, unless an exhausting and totally excessive amount of attention is put into satisfying a misplaced sense of precision. There's no intellectual charity left and it sucks.
The two things it adds on top are the way nullability is handled at compile time (while the former do runtime null checks), and comptime.
True, but at least there are curly bracket languages heavily influenced by Pascal, such as Go, V (Vlang, which has comptime too and is debatably safer), Odin, etc.
Would you agree with my idea, or would you say I am missing something? Does Zig alleviate some of the problems I mentioned?
```
fn Vec(comptime T: type) type {
    return struct { items: []T };
}
```
IMO having a first class generic type parameter syntax is better but this demonstrates OP's point.
Furthermore, the original argument wasn't about whether something can or can't be done in C++, it was that this one feature in Zig subsumes what would require a multitude of features from C++, such as consteval, templates, SFINAE, type traits, so on so forth...
Instead of having all these disparate features all of which work in subtly different ways, you have one single feature that unifies all of this functionality together.
Furthermore, why are type traits among the list of features you claim are subsumed by comptime? Not only is that not the case, but type traits are not so much a feature of C++ as of its standard library, implemented using templates (a feature of the language).
Rust on the other hand... that might be ten years.
I still think that language support is important, but unfortunately, due to what happened, I suspect that will take a long time. And that’s disappointing.
I agree it'd be nice if it weren't confined to our community, though.
Don't coherence rules prevent it?
- generics: comptime can implement some kind of polymorphism, but not at the type level. In other words it implements a templating system, not a polymorphic type system;
- interfaces/traits/concepts: comptime implements none of that, it is plain duck typing, just like "old" C++ templates. In fact C++ introduced concepts to improve its situation with templates, while Zig is still behind on that front!
- macros: comptime solves some of the use cases where macros are used, but it cannot produce arbitrary tokens and hence cannot fully replace macros.
I do agree that it can neatly replace conditional compilation and const functions/const expr, but let's not make it seem like comptime solves everything in the world.
You could probably cobble together an interface system with @compileError, but because of the time order of compilation stages, a failed method call will trigger the compiler error before your custom interface code runs, making it effectively useless for the 90% of cases you care about.
If I'm not mistaken, in principle a small change to the compiler could amend this situation.
Or, more generally, address all the issues raised in [1]. You're saying that comptime can fully replicate all the features that a proper generics system has, which is plainly false.
[1]: https://typesanitizer.com/blog/zig-generics.html
Now, I'm not saying that Zig's choices always dominate and that all languages would be better off with its approach; far from it. I am saying that it introduces a novel tradeoff that is especially compelling in cases where not only generics but also macros, conditional compilation, and constexprs are otherwise required. In a language like Java these extra features are not required, and so Zig-style comptime would not simplify the language nearly as much.
But even in cases where all these features are needed, I don't think everyone would take Zig's choices over C++'s or Rust's, or vice-versa. To those, like me, for whom language complexity is the biggest problem with C++ or Ada (I used Ada in the nineties), Zig is a revolutionary step forward. I don't think any low-level language has ever been this simple while also being this expressive.
That wasn't my argument.
> After all, you don't extend Rust the same courtesy
Given that my aesthetic issue with Rust is that it has too many complicated features, I don't see how that courtesy could be extended. There is, indeed, an asymmetry between adding features and removing them, but the aesthetic "points" I'm awarding Zig is not due to features it could add but due to features it hasn't while they've not yet been shown to be critical.
I think it's fairly obvious that any feature in any language was added to add some positive value. But every feature also has a negative value, as it makes the language more complicated, which in aggregate may mean fewer programs would be written in it. The challenge is balancing the value of features with their complexity. Even those who prefer Rust's aesthetics to Zig would admit that Zig's novel approach to power/simplicity balance is something we have not seen in programming language design in many years.
I disagree. Minimalism in systems language design has been done over and over: see Go for the most recent example. Comptime is something that C++ was already doing in the form of constexpr since 2011 and a space that D had explored for over a decade before Zig came around in the form of "static if" and so forth (in addition to lots of academic work, of course). Stripping out template metaprogramming in favor of leaning heavily on compile-time function evaluation isn't novel either. I think you find the set of features that Zig has to be personally appealing, which is fine. But the argument that it's anything novel is weak, except in the trivial sense that every language is novel because it includes some features and leaves others out (but if every language is novel, then the word "novel" has no meaning).
From my vantage point, Zig is essentially a skin on a subset of C++, one that is in practice less safe than C++ because of the relative immaturity of tooling.
Minimalism is also, of course, not a new idea, but unlike in Go, I wouldn't say minimalism is Zig's point. Expressive low-level languages have always been much more complex than similarly expressive high-level ones, something that many of us have seen as their primary problem, and Zig is the first one that isn't. In other words, the point isn't minimalism as a design aesthetic (I would say that explicitness is a design aesthetic of Zig's much more than minimalism) but rather reducing the complexity that some have seen as the biggest issue with expressive low-level languages.
What was so impressive to me is that we've always known how macros can add almost infinite expressivity to languages that could be considered minimalistic, but they carry their own set of severe complexity issues. Zig showed how comptime can be used to offer much of the power of macros with almost none of their cost. That, too, is revolutionary even without considering the low-level domain specifically (although high-level languages have other options).
Finally, if you want to talk about "safety in practice", considering more than just the language, I don't think we can know without empirical study, but the same claim could be made about Rust. Both Zig the language and Rust the language are safer than either C or C++ (the languages), and Rust the language is safer than Zig the language. But as to their relative safety (or correctness) "in practice" either now or in the future, only time and empirical study will tell. Clearly, languages that offer more soundness, like Idris or ATS, don't always work so well in practice. So I admit I don't know at this time whether Zig (the gestalt) offers more or less correctness in practice than C++ or which of Zig or Rust offers more correctness in practice, but neither do you.
Give it a rest please. Given your association with Rust, endlessly attacking competing languages is not a good look, regardless of whether your points are technically correct or not.
Also, all the ones I mentioned supported binary libraries, which apparently is not something the Zig folks are too keen on supporting, other than a C-like ABI.
For me, any systems language that doesn't support binary library distribution isn't that relevant, and yes, that is also something that I regularly complain about in Rust, and not only me. Microsoft has a talk on their Rust adoption where this is mentioned as a problem, relieved only thanks to the ubiquity of COM as a mechanism to deliver binary libraries on Windows.
I think pcwalton's "generics" vs. "templates" distinction mostly boils down to parametric typechecking, which Zig's design just can't do. (Can it?)
Although, I vaguely remember some example showing that even Rust in some cases allows a type definition X<T> even when there exists a T such that X<T> would fail to typecheck.
2. If you want to restrict the use of a function to comptime (why you would want to is beyond me), it is possible to do so with the @inComptime builtin.
The only tricky bit is that your function could try to call a function inaccessible to you because it's transitively restricted, and you'd have a hard time noticing that from the code. But it's not possible for that code to be compiled (barring errors by the Zig team), so it's more of an annoyance than a problem.
https://github.com/ityonemo/clr
They're popping up all over. For some reason, Zig folk want Rust things and Rust folk want Zig things.
Otherwise, it can be construed as just another wish list or feature request that the main developers have no plans to implement.
When it comes to Zig, one crowd says it's too early because it isn't reaching 1.0 fast enough. And now, are you saying that the feature development is basically done? What point are you trying to make?
Rather, both languages' correctness-related claims rely on them being in some better or worse effort/correctness sweet spot, and that can only be measured empirically for those two specific languages. Crucially, results comparing Rust to C or C++ are not relevant. Zig offers more soundness than C, and even if it offers the same soundness as C++ (and I disagree -- I think it offers more) the language's claim is that its simplicity assists correctness. We can only tell to what extent that is true with more empirical observation; I don't think there can be any good guess made here regarding the size of the effect.
If you look at MITRE's CWE Top 25 [1], #2 (Out-of-bounds Write) and #6 (Out-of-bounds Read) are soundly eliminated by both Zig and Rust. It is only when it comes to #8 (Use After Free) that Rust's additional soundness comes into play. But because it comes at a cost, the question is to what extent eliminating #8 adversely impacts the other top vulnerabilities, including those higher on the list. It may be the case that eliminating the eighth most dangerous vulnerability through sound language guarantees may end up being worse overall even if we're only concerned with security (and it isn't the only concern). We can only try to assess that through empirical study.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
> I don't believe
Of course, we're debating against a belief, and you have so many reasons not to believe it that it will be impossible for you to be swayed by any sort of evidence, and you will always find a way to move the goalposts.
While it started as a hack on how to use templates back in C++98, it has gotten quite usable nowadays in C++23, and the compile time reflection will make it even better.
All without having another language to learn about, as it happens with Rust macros, with its variations, or reliance on 3rd party crates (syn).
They are still not an adequate substitute for Zig's comptime feature. On one hand they are, in a sense, much more limited than comptime functions in Zig; on the other (and for better or worse), they also have much higher aspirations.
const fn must always be able to be run at compile time or run time and always produce bit-identical results. This is much harder than it looks at first glance because it must also uphold in a cross-compiling scenario where the compile time environment can be vastly different from the run time environment.
This requirement also forbids any kind of side effect, so Rust const fn are essentially pure functions, and I've heard them called exactly that.
And you can add reflection [2]. So if you can add what you need via crates, is the language actually missing it or is it just not as ergonomic / performant as it needs to be or is it an education problem?
[1] https://docs.rs/typed-builder/latest/typed_builder/
[2] https://docs.rs/reflect/latest/reflect/
Also, you only compile once, so how could you tell the difference? You could say - if it was using const fn that it's a "templated" function that depends on compile time settings.
Looking up the value of an environment variable at runtime is not a const operation, and produces an error if you try to do it in a const fn.
Looking up the value of an environment variable during compile time _can_ be done in a const context, but it'll only happen once. The environment should be considered an input to a const fn, and that makes it "pure".
EDIT: These two operations can both be done in non-const functions too, they're different functions (well, one's a macro).
C++ ended up having to add consteval and constinit which really are compile-time.
But in Rust that's not what you're getting. Rust's const fn is none of the options C++ decided it needed. Rust says: if the parameters are themselves constants, then we promise we can evaluate this at compile time, and if appropriate we will. This means we can use Rust's const fn where we'd use C++ consteval, but the function can also be called at runtime with variable parameters; and we can use Rust's const where we'd use C++ constinit, calling these const fn with constant parameters.
Because Rust is more explicit about safety of course, we can often get away with claiming some value is "constant" in C++ despite actually figuring out what it is at runtime, and Rust isn't OK with that, for example in my code
We can just calculate what power of two is bigger than SIG_BITS and shift it left at compile time. But... the Rational type is a big rational; it owns heap allocations, so we'll just make one once, at runtime, and then re-use it whenever we need this particular fraction (it's for calculating natural logarithms of arbitrary computable real numbers).
Well exactly, "if appropriate". So like C++'s `constexpr`, Rust doesn't make any guarantees about compile-time evaluation.
Zig's `comptime` must be evaluated at compile time.
(If I recall correctly, one of the big questions was "Will const blocks unreachable at runtime still be evaluated at compile time?" It looks like the answer was to leave it unspecified.)
Yes, and the equivalent in Rust is any constant context, such as a const item, or a const block inside of a non-const function. Anything in a constant context is guaranteed to run at compile-time.
Looks like floating point math in const fn is coming. Here is the respective tracking issue: https://github.com/rust-lang/rust/issues/128288
So your claim is that the Rust compiler knows in advance which will be used by the target and adjusts its softfloat accordingly?
I'm not convinced. IIRC there are cases for SIMD where there is only a 2 ULP guarantee and some tryhard silicon gives you 1 ULP for the same opcode.
Rust performs FP operations using the precision of the underlying type. For compile time evaluation this is enforced by Miri, and for runtime evaluation this is enforced by carefully emitting the appropriate LLVM IR.
> IIRC there are cases for SIMD where there is only a 2 ULP guarantee and some tryhard silicon gives you 1 ULP for the same opcode.
Rust only permits operations in constant contexts when it's confident that it can make useful guarantees about their behavior. In particular, FP ops in const contexts are currently limited as follows:
"This RFC specifies the behavior of +, - (unary and binary), *, /, %, abs, copysign, mul_add, sqrt, as-casts that involve floating-point types, and all comparison operations on floating-point types."
https://github.com/rust-lang/rfcs/blob/master/text/3514-floa...
The syntactic restrictions don't really matter; it's still Turing-complete. The key difference is that types are values in Zig but not in Rust, which is a core design feature of the language and can't be changed easily.
Zig's comptime functions only run at compile time, so they can do extra things - in particular manipulating types - that you can't do if your function needs to run at runtime. (Don't mention dependent types.)
Neither Rust macros nor const fn are 100% what Zig comptime is but they have other properties that Zig comptime lacks. Apples and oranges.
So what are the major differences between the two anyways? Just to be sure.
Careful, I'm not sure this is true. I haven't found a Zig comptime function that doesn't also work just as well as a runtime function.
This is, in fact, the primary characteristic that makes Zig comptime easier to reason about than any "macro" system. If something is wrong in my comptime function, I can normally make a small adjustment to force it to be a runtime function that I can step through and probe and debug.
It's sort of a unification of compile time and run time semantics and it is long overdue. The late John Shutt's Scheme-alike Kernel (https://web.cs.wpi.edu/~jshutt/kernel.html) sort of approached this as did old-school Tcl.
You raised my hopes and dashed them quite expertly, sir. Bravo!
See https://typesanitizer.com/blog/zig-generics.html
It's not possible to make a positive contribution after a mistake that basic.
Here's an example of someone getting the design space correct, and therefore contributing to the discussion in a positive way. He doesn't end up liking Zig, for reasons I disagree with, but he does completely evade being not-even-wrong, which is table stakes.
https://hirrolot.github.io/posts/why-static-languages-suffer...
Just because a blog doesn't go full type theory doesn't disqualify it from drawing conclusions based on experience and limitations incurred during actual use.
Something can be very well typed but still suck to use.
Intuition doesn't need to be based on formal understanding. See the periodic table of elements: created by grouping elements by behavior, it turned out to be based on electron orbital configuration.
Edit: on re-reading, the author doesn't understand why negative traits in Rust are a problem (`not` is a basic boolean operation). I think they are abstracting too much, effectively saying cows should be roughly spherical and water should roughly be a superconductor.
Also, D and Circle did it before Zig.
Jesus, we very much aren't. The only real step improvement was in C++11 which added constexpr which (in the following years) gradually obviated the need for C++98 style TMP. But there really hasn't been much of a generational improvement since (OK, concepts I guess), and it remains cumbersome and error prone and difficult to debug.
hello: 4, 5, 5
I had previously written a declarative macro to generate benchmark functions [1]. It worked, but I didn't enjoy the process of getting it working. Nor did I feel confident about making changes to it.
When I rewrote it using crabtime I found the experience much better. I was mostly writing Rust code now, something I was familiar with. The code is much more readable and customisable [2]. For example, instead of having to pass in the names of the modules each time I added a new one, I simply read the files from disk at compile time.
To compare the two, see what the code looks like within the braces of paste!{} in the first one and crabtime::output!{} in the second one. The main difference is that I can construct the strings using Rust code and drop them in with a simple {{ str }}. With paste!, I don't know exactly what I did, but I kept messing around until it worked.
Or compare the two loops. In the first one we have `($($year:ident {$($day:ident),+ $(,)?}),+ $(,)?)` while with crabtime we have plain Rust code - `for (year, day) in years_and_days`. I find the latter more readable.
Overall I'm quite pleased with crabtime. Earlier I'd avoid Rust metaprogramming as much as possible, but now I'd be open to writing a macro if the situation called for it.
[1] - https://github.com/nindalf/advent/blob/13ff13/benches/benche...
[2] - https://github.com/nindalf/advent/blob/b72b98/benches/benche...
> We would like to extend our heartfelt gratitude to the following individuals for their valuable contributions to this project:
> timonv – For discovering and suggesting the brilliant name for this crate. Read more about it here (https://www.reddit.com/r/rust/comments/1j42fgi/comment/mg6pw...).
> Polanas – For their invaluable assistance with testing, design, and insightful feedback that greatly improved the project.
> Your support and contributions have played a vital role in making this crate better—thank you!
That said I did replace a declarative macro with it. Supposedly I wrote the declarative macro according to git blame but I only have a vague idea how it works. I replaced it with crabtime and I got something that I can understand and maintain.
Rust can't have Zig's comptime dynamic properties, same way Zig will never have Rust's compile time guarantees*.
You can't simultaneously have your dynamic cake and eat it at compile time.
* Theoretically you can have it, but it would require changing language to such extent it's not recognizable.
This looks worth playing with to see if it can solve my issue. One thing I avoid as much as possible is adding unnecessary dependencies; I didn't check how many dependencies this adds to a project overall.
[1] https://crates.io/crates/crabtime-internal/1.1.1/dependencie...
I do share the sentiment - and complain about this frequently - but any environment with build scripts can wreck your computer. Encrypt what you can, I guess, but software engineering is an extremely dangerous job wrt security.
I think using something like the pledge syscall from OpenBSD in the compiler could be useful. That way, it's controlled at the process level which things can be accessed on the system.
But thank you for letting me learn something useful.
But go off, king.
So, I'm interested in some metaprogramming right now. I'm setting up Vec3 SIMD types, and it requires a lot of repetition to manage the various variants: f32::Vec3x8, f64::Vec3x16 etc that are all similar internally. This could be handled using traditional macros, procedural macros, or something called "code gen", which I think is string manipulation of code. Could I use crabtime to do this instead? Should I?
> This could be handled using traditional macros, procedural macros, or something called "code gen", which I think is string manipulation of code. Could I use crabtime to do this instead? Should I?
You could, it seems. Crabtime supports both the procedural macros and "code gen" approaches you are talking about.
So I think as long as you don't break identifiers, the code should be searchable. But your IDE will probably not be able to help you with auto-complete and navigation.
Yes, that's also a problem, as is the MySQLCursorDict class. Now you have to grep for every column name in every table your code base accesses.
Does this basically allow us to write normal rust code instead of procmacros, with even fewer constraints?
Learn Haskell, Idris, ATS, etc and you will soon find it baffling how easy it is to impress OOP/Imperative programmers and how little they demand of their languages.