As far as I know, Zig has a bunch of things in the works for a better development experience. Almost every day there's something being worked on - like https://github.com/ziglang/zig/pull/24124 just now. I know that Zig had some plans in the past to also work on hot code swapping. At this rate of development, I wouldn't be surprised if hot code swapping was functional within a year on x86_64.
The biggest pain point I personally have with Zig right now is the speed of `comptime` - The compiler has a lot of work to do here, and running a brainF** DSL at compile-time is pretty slow (speaking from experience - it was a really funny experiment). Will we have improvements to this section of the compiler any time soon?
Overall I'm really hyped for these new backends that Zig is introducing. Can't wait to make my own URCL (https://github.com/ModPunchtree/URCL) backend for Zig. ;)
AndyKelley · 31d ago
For comptime perf improvements, I know what needs to be done - I even started working on a branch a long time ago. Unfortunately, it is going to require reworking a lot of the semantic analysis code. Something that absolutely can, should, and will be done, but is competing with other priorities.
lenkite · 31d ago
Thank you for working so hard on Zig. Really looking forward to Zig 1.0 taking the system programming language throne.
Imustaskforhelp · 31d ago
I am not sure, but why can't C, Rust and Zig, along with others (like Ada, Odin etc.) and of course C++ (how did I forget it?), just coexist?
Not sure why, but I was definitely getting some Game of Thrones vibes from your comment. I would love to see some competition, but I don't know - just code in whatever systems programming language is productive for you, I guess.
But I don't know low level languages, so please take my words as just my 2 cents.
brabel · 31d ago
I am just watching the Game of Thrones series right now, so this comment sounds funnier than it should to me :D.
The fight for the Iron Throne, lots of self-proclaimed kings trying to take it... C is like King Joffrey, Rust is maybe Robb Stark?! And Zig... probably princess Daenerys with her dragons.
lmm · 31d ago
The industry has the resources to sustain maybe two and a half proper IDEs with debuggers, profilers, etc. Much as we might wish otherwise, language popularity matters. The likes of LSP mitigate this to a certain extent, but at the moment they only go so far.
rfoo · 31d ago
All system programming languages mentioned by GP share the same set of debuggers and profilers, though. It's not very language specific.
flohofwoe · 31d ago
That's where extensible IDEs like VSCode (and with it the Language Server Protocol and Debug Adapter Protocol) come in.
It's not perfect yet, but I can do C/C++/ObjC, Zig, Odin, C3, Nim, Rust, JS/TS, Python, etc... development and debugging all in the same IDE, and even within the same project.
titzer · 31d ago
For Virgil I went through three different compile-time interpreters. The first walked a tree-like IR that predated SSA. Then, after SSA, I designed a linked-list-like representation specifically for interpretation speed. After dozens of little discrepancies between this custom interpreter and compiled output, I finally got rid of it and wrote an interpreter that works directly on the SSA intermediate representation. In the worst case, the SSA interpreter is only 2X slower than the custom interpreter. In the best case, it's faster, and saves a translation step. I feel it is worth it because of the maintenance burden and bugs it avoids.
9d · 31d ago
Have you considered hiring people to help you with these tasks so you can work in parallel and get more done quicker?
AndyKelley · 31d ago
It's a funny question because, as far as I'm aware, Zig Software Foundation is the only organization among its peers that spends the bulk of its revenue directly paying contributors for their time - something I'm quite proud of.
9d · 31d ago
Oh so then you're already doing that. Well then that's fine, the tasks will get done when they get done then.
BSDobelix · 31d ago
>>spends the bulk of its revenue directly paying contributors
Same with the FreeBSD Foundation (P: OS Improvements): https://freebsdfoundation.org/wp-content/uploads/2024/03/Bud...
Other Foundations are more like the "Penguin Foundation".....
bgthompson · 31d ago
Hot code swapping will be huge for gamedev. The idea that Zig will basically support it by default with a compiler flag is wild. Try doing that, clang.
Retro_Dev · 31d ago
Totally agree with that - although even right now Zig is excellent for gamedev, considering it's performant, uses LLVM (in release modes), compiles REALLY FAST (in debug mode), has near-seamless C integration, and the language itself is really pleasant to use (my opinion).
sgt · 31d ago
Is Zig actually being used for real game dev already?
Visual C++ and tools like Live++ have been doing it for years.
Maybe people should occasionally move away from their UNIX and vi ways.
BSDobelix · 31d ago
>Maybe people should occasionally move away from their UNIX and vi ways.
Maybe when something better comes up, but since you never invested one single minute on improving Inferno we have to wait for another Hero ;)
const_cast · 31d ago
Yes, at a huge cost. That only works on Microsoft platforms.
MSVC++ is a nice compiler, sure, but it's not GCC or Clang. It's very easy to have a great feature set when you purposefully cut down your features to the bare minimum. It's like a high-end restaurant. The menu is concise and small and high quality, but what if I'm allergic to shellfish?
GCC and Clang have completely different goals, and they're much more ambitious. The upside of that is that they work on a lot of different platforms. The downside is that the quality of features may be lower, or some features may be missing.
modernerd · 31d ago
I ended up switching from Zig to C# for a tiny game project because C# already supports cross-platform hot reload by default. (It’s just `dotnet watch`.) Coupled with cross-compilation, AOT compilation and pretty good C interop, C# has been great so far.
baq · 31d ago
Why more games aren’t being developed in lisp is… perhaps not beyond me, but game development missed a turn a couple times.
pjmlp · 31d ago
That is basically what they do when using Lua, Python, C#, Java, but with fewer parentheses, which apparently are too scary for some folks, moving from print(x) to (print x).
There was a famous game with Lisp scripting, Abuse, and Naughty Dog used to have Game Oriented Assembly Lisp.
baq · 31d ago
I had exactly the same title in mind, remember my very young self being in shock when I learned that it was lisp. If you didn't look under the hood you'd never be able to tell, it just worked.
ww520 · 31d ago
Is comptime slowness really an issue? I'm building a JSON-RPC library and heavily relying on comptime to be able to dispatch a JSON request to an arbitrary function. Due to strict static typing, there's no way to dynamically dispatch to a function with arbitrary parameters at runtime. The only way I found was figuring out the function type mapping during compile time using comptime. I'm sure it will blow up the code size with an additional copy of the comptime-generated code for each arbitrary function.
Okx · 31d ago
Yes, last time I checked, Zig's comptime was 20x slower than interpreted Python. Parsing a non-trivial JSON file at comptime is excruciatingly slow and can take minutes.
> Parsing a non-trivial JSON file at comptime is excruciatingly slow
Nevertheless, impressive that you can do so!
Simran-B · 30d ago
I would argue that it's not meaningful to do so for larger files with comptime, as there doesn't seem to be a need to parse JSON the way the target platform would (comptime emulates it) - I'd expect the result to be target-independent. You're also not supposed to do I/O using comptime, and @embedFile kind of falls under that. I suppose it would be better to write a build.zig step for this particular use case, which I think would then also be able to run fast native code?
dnautics · 31d ago
Is it easy to build out a custom backend? I haven't looked at it yet but I'd like to try some experiments with that -- to be specific, I think that I can build out a backend that will consume AIR and produce a memory safety report. (it would identify if you're using undefined values, stack pointer escape, use after free, double free, alias xor mut)
sali0 · 31d ago
URCL is sending me down a rabbit hole. Haven't looked super deeply yet, but the most hilarious timeline would be one where an IR built for Minecraft becomes a viable compilation target for languages.
rurban · 31d ago
Better spend the time at comptime than at runtime. Always a benefit
bgthompson · 31d ago
This is already such a huge achievement, yet as the devlog notes, there is plenty more to come! The idea of a compiler modifying only the parts of a binary that it needs to during compilation is simultaneously refreshing and totally wild, yet now squarely within reach of the Zig project. Exciting times ahead.
9d · 31d ago
> For a larger project like the Zig compiler itself, it takes the time down from 75 seconds to 20 seconds. We’re only just getting started.
Excited to see what he can do with this. He seems like a really smart guy.
What's the package management look like? I tried to get an app with QuickJS + SDL3 working, but the mess of C++ pushed me to Rust where it all just works. Would be glad to try it out in Zig too.
stratts · 31d ago
Package management in Zig is more manual than Rust, involving fetching the package URL using the CLI, then importing the module in your build script. This has its upsides - you can depend on arbitrary archives, so lots of Zig packages of C libraries are just a build script with a dependency on an unmodified tarball release. But obviously it's a little trickier for beginners.
SDL3 has both a native Zig wrapper: https://github.com/Gota7/zig-sdl3
And a more basic repackaging of the C library/API: https://github.com/castholm/SDL
For QuickJS, the only option is the C API: https://github.com/allyourcodebase/quickjs-ng
Zig makes it really easy to use C packages directly like this, though Zig's types are much more strict, so you'll inevitably be doing a lot of casting when interacting with the API.
Even this is pretty usable, handling value conversions and such thanks to comptime. (Take a look at the tests here: https://github.com/eknkc/zquickjs/blob/master/src/root.zig)
It's also worth pointing out that the Zig std library covers a lot more than the rust one. No need for things like rustix, rand, hashbrown, and a few others I always have to add whenever I do rust stuff.
nindalf · 31d ago
You add hashbrown as an explicit dependency? The standard library HashMap is a re-export of hashbrown. Doesn’t it work for you?
vlovich123 · 31d ago
Can’t speak for the OP, but there are a number of high-performance interfaces that avoid redundant computations which are only available directly from hashbrown.
LAC-Tech · 31d ago
huh, does it? I always add it so I can make non-deterministic hashmaps in rust. oh and you need one more for the hashing function I think.
But I did not know hashmap re-exported hashbrown, thanks.
looks like there's no way to access it outside of hashmap.
Though maybe you just need the third party hasher and you can call with_hasher.
IDK man there's a lot going on with rust.
nindalf · 31d ago
Yes that’s exactly it. If speed is a priority and the input is trusted, change the hasher to something faster like ahash. The ahash crate makes this easy.
WalterBright · 31d ago
The dmd D compiler can compile itself (debug build):
real 0m18.444s
user 0m17.408s
sys 0m1.688s
On an ancient processor (it runs so fast I just never upgraded it):
cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 107
model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+
stepping : 2
cpu MHz : 2299.674
cache size : 512 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
AndyKelley · 31d ago
oh, and by the way that includes the package manager, so the compile time accounts for:
* HTTP
* TLS (including aegis-128l, aegis-256, aes128-gcm, aes256-gcm, chacha20poly1305)
* deflate, zstd, and xz
* git protocol
WalterBright · 31d ago
Nice to hear from you, Andrew! I assume you're using a machine newer than 15 years ago :-)
I suppose it would compile faster if I didn't have symbolic debug info turned on.
Anyhow, our users often use dmd for development because of the high speed turnaround, and gdc/ldc for deployment with their more optimized code gen.
AndyKelley · 31d ago
You too! Yeah I think that was a great call. I took inspiration from D for sure when aiming for this milestone that we reached today.
Some people say you should use an old computer for development to help you write faster code. I say you should use a new computer for development, and write the fastest code you possibly can by exploiting all the new CPU instructions and optimizing for newer caching characteristics.
WalterBright · 31d ago
I'm still in the camp of using computers our users tend to have.
Also, self-compile times are strongly related to how much code there is in the compiler, not just the compile speed.
I also confess to being a bit jaded on this. I've been generating code from 8086 processors to the latest. Which instructions and combinations are faster is always flip-flopping around from chip to chip. So I leave it to the gdc/ldc compilers for the top shelf speed, and just try to make the code gen bulletproof and do a solid job.
Working on the new AArch64 has been quite fun. I'll be doing a presentation on it later in the summer. My target machine is a Raspberry Pi, which is a great machine.
Having the two code generators side by side also significantly increased the build times, because it's a lot more code being compiled.
AndyKelley · 31d ago
Fair enough, and yeah I hear you on the compilation cost of all the targets. We don't have aarch64 yet but in addition to x86_64 we do have an LLVM backend, a C backend, a SPIR-V backend, WebAssembly backend, RISC-V backend, and sparc backend. All that plus the stuff I mentioned earlier in 15s on a modern laptop.
WalterBright · 31d ago
I considered a C backend at one time, but C is not expressive enough. The generated code would be ugly. For example, exception handling. D's heavy reliance on common blocks in the object code is another issue. C doesn't support nested functions (static links). And so on.
Never found a user who asked for that, either :-/
steveklabnik · 31d ago
Some users want a C backend, not for maintainability reasons, but for the ability to compile on platforms that have nothing but a C compiler. The maintainability or aesthetics of the C is irrelevant, it's like another intermediate representation.
AndyKelley · 30d ago
Can confirm, Zig's generated C is extremely ugly. We literally treat it as an object format [1].
The MSVC limitations are maddening, from how short string literals must be, to the complete lack of inline assembly when targeting x86_64.
[1]: https://ziglang.org/documentation/0.14.1/std/#std.Target.Obj...
WalterBright · 31d ago
I bet it would be easier to write a code gen for such platforms than to wrestle with generated C code and work around the endless problems.
Anyhow, one of the curious features of D is its ability to translate C code to D code. Curious as it was never intentionally designed, it was discovered by one of our users.
D has the ability to create a .di file from a .d file, which is analogous to writing a .h file from a .c file. When D gained the ability to compile C files, you just ask it to create a .di file, and voila! the C code translated to D!
steveklabnik · 30d ago
Maybe, I don’t use those platforms and so I don’t know from experience, I just know that’s why people asked us in Rust.
I somehow missed that D has that! I try to read the forums now and again, but I should keep more active tabs on how stuff is going :)
miki123211 · 31d ago
I think it would be a good idea to have some kind of "speedbump" tool that makes your software slower, but in a way where optimizing it would also optimize the faster version.
I don't know whether this is technically feasible, maybe you could run it on CPUs with good power management and force them to underclock or something.
hiccuphippo · 30d ago
You could use QEMU to emulate an older CPU; you would need to disable KVM with -no-kvm. There's also a throttle option I found while googling this.
candrewlee · 31d ago
15s is fast, wow.
Do you have any metrics on which parts of the whole compiler, std, package manager, etc. take the longest to compile?
How much does comptime slowness affect the total build time?
mlugg · 31d ago
[edited to fix a formatting problem, sorry!]
Well, one interesting number is what happens when you limit the compiler to this feature set:
* Compilation front-end (tokenizing/parsing, IR lowering, semantic analysis)
* Our own ("self-hosted") x86_64 code generator
* Our own ("self-hosted") ELF linker
...so, that's not including the LLVM backend and LLD linker integration, the package manager, `zig build`, etc. Building this subset of the compiler (on the branch which the 15 second figure is from) takes around 9 seconds. So, 6 seconds quicker.
This is essentially a somewhat-educated guess, so it could be EXTREMELY wrong, but of those 6s, I would imagine that around 1-2 are spent on all the other codegen backends and linkers (they aren't too complex and most of them are fairly incomplete), and probably a good 3s or so are from package management, since that pulls in HTTP, TLS, zip+tar, etc. TLS in particular does bring in some of our std.crypto code which sometimes sucks up more compile time than it really should. The remaining few seconds can be attributed to some "everything else" catch-all.
Amusingly, when I did some slightly more in-depth analysis of compiler performance some time ago, I discovered that most of the compiler's time -- at least during semantic analysis -- is spent analyzing different calls to formatted printing (since they're effectively "templated at compile time" in Zig, so the compiler needs to do a non-trivial amount of work for every different-looking call to something like `std.log.info`). That's not actually hugely unreasonable IMO, because formatted printing is a super common operation, but it's an example of an area we could improve on (both in the compiler itself, and in the standard library by simplifying and speeding up `std.fmt`). This is one example of a case where `comptime` execution is a big contributor to compile times.
However, aside from that one caveat of `std.fmt`, I would say that `comptime` slowness isn't a huge deal for many projects. Really, it depends how much they use `comptime`. You can definitely feel the limited speed of `comptime` execution if you use it heavily (e.g. try to parse a big file at `comptime`). However, most codebases are more restrained in their use of `comptime`; it's like a spice, a bit is lovely, but you don't want to overdo it! As with any kind of metaprogramming, overuse of `comptime` can lead to horribly unreadable code, and many major Zig projects have a pretty tasteful approach to using `comptime` in the right places. So for something like the Zig compiler, the speed of `comptime` execution honestly doesn't factor in that much (aside from that `std.fmt` caveat discussed above). `comptime` is very closely tied in to general semantic analysis (things like type checking) in Zig's design, so we can't really draw any kind of clear line, but on the PR I'm taking these measurements against, the threading actually means that even if semantic analysis (i.e. `comptime` execution plus more stuff) were instantaneous, we wouldn't see a ridiculous performance boost, since semantic analysis is now running in parallel to code generation and linking, and those three phases are faiiirly balanced right now in terms of speed.
In general (note that I am biased, since I'm a major contributor to the project!), I find that the Zig compiler is honestly a fair bit faster than people give it credit for. Like, it might sound pretty awful that (even after these improvements), building a "Hello World" takes (for me) around 0.3s -- but unlike C (where libc is precompiled and just needs to be linked, so the C compiler literally has to handle only the `main` you wrote), the Zig compiler is actually freshly building standard library code to handle, for instance, debug info parsing and stack unwinding in the case of a panic (code which is actually sorta complicated!). Right now, you're essentially getting a clean build of these core standard library components every time you build your Zig project (this will be improved upon in the future with incremental compilation). We're still planning to make some huge improvements to compilation speed across the board of course -- as Andrew says, we're really only just getting started with the x86_64 backend -- but I think we've already got something pretty decently fast.
9d · 31d ago
Wait, is this a competition?
WalterBright · 31d ago
Always!
9d · 31d ago
Dang, I'm far behind then, I haven't even downloaded QBE[1] yet!
When I tried compiling Zig it would take ages, because it would go through the different stages (with the entirety of bootstrapping from wasm).
throwawaymaths · 31d ago
I'm stunned that Zig can compile itself in 75 seconds (even with LLVM)
pjmlp · 31d ago
We used to have such fast compile times with Turbo Pascal and other Pascal dialects, Modula-2, and Oberon dialects, across 16-bit and early 32-bit home computers.
Then everything went south, with the languages that took over mainstream computing.
faresahmed · 31d ago
Not to disagree with you, but even C++ is going through great efforts to improve compile-times through C++20 modules and C++23 standard library modules (import std;). Although no compiler fully supports both, you can get an idea of how they can improve compile-times with clang and libc++
$ # No modules
$ clang++ -std=c++23 -stdlib=libc++ a.cpp # 4.8s
$ # With modules
$ clang++ -std=c++23 -stdlib=libc++ --precompile -o std.pcm /path/to/libc++/v1/std.cppm # 4.6s but this is done once
$ clang++ -std=c++23 -stdlib=libc++ -fmodule-file=std=std.pcm b.cpp # 1.5s
a.cpp and b.cpp are equivalent, but b.cpp does `import std;` while a.cpp imports every standard C++ header file (same thing as import std; you can find them in libc++'s std.cppm).
Notice that this is an extreme example, since we're importing the whole standard library, which is actually discouraged [^1]. Instead you can get through the day with just these flags: `-stdlib=libc++ -fimplicit-modules -fimplicit-module-maps` and of course -std=c++20 or later, no extra files/commands required! But you are restricted to doing import <vector>; and such - no import std.
[^1]: Non-standard headers like `bits/stdc++.h`, which do the same thing (#including the whole standard library), are what is actually discouraged, because of a. non-standardness and b. compile-times. But I can see `import std` solving these two problems and being encouraged once it's widely available!
pjmlp · 31d ago
As a big fan of C++ modules (see my github), we are decades away from widespread adoption, unfortunately.
See regular discussions on C++ reddit, regarding state of modules support across the ecosystem.
9d · 31d ago
C++ will be a very useful, fast, safe, and productive language in 2070.
9d · 31d ago
Am I wrong about this?
Their algorithms were simpler.
Their output was simpler.
As their complexity grew, so did program performance, proportionately.
Not to mention adding language convenience features (generics, closures).
pjmlp · 31d ago
See Ada, released in 1983.
Generics were already present in CLU and ML, initially introduced in 1976.
Check their features.
9d · 31d ago
Yeah but--
rurban · 31d ago
tinycc is still fast. All the current single-pass compilers are fast.
pjmlp · 31d ago
Agreed, I would say the main problem is lack of focus on developer productivity.
OK, the goalposts have moved on what -O0 is expected to deliver in machine code quality; let's then have something like -ffast-compile, or an interpreter/JIT as an alternative toolchain in the box.
Practical example from D land, compile D with dmd during development, use gdc or ldc for release.
WhereIsTheTruth · 31d ago
I said it for D and Nature, and every other language that comes with its own backend: we all have a duty to support projects that try not to depend on LLVM. Compiler R&D has stagnated because of LLVM; far too many languages chose to depend on it, and far too many people don't value fast iteration time - or perhaps they grew to not expect any better?
Fast iteration time with incremental compilation and binary patching, plus good debugging, should be the expectation for new languages, not something niche or "too hard to do".
flohofwoe · 31d ago
OTOH, LLVM caused an explosion of languages created by individuals which immediately had competitive performance and a wide range of supported platforms (Zig being one of them!).
The entire realtime rendering industry is essentially built on top of LLVM (or forks of LLVM), even Microsoft have switched their shader compiler to LLVM and is now (finally) starting to upstream their code.
The compiler infrastructure of most game consoles is Clang based (except Xbox which - so far - sticks to MSVC).
So all in all, LLVM has been a massive success, especially for bootstrapping new things.
pjmlp · 31d ago
Indeed, that is one of the few things I find positive about Go: being bootstrapped, and not dependent on LLVM.
d3ckard · 31d ago
In no way do I want to sound demanding/ungrateful, since Zig is free work, but I am mostly interested in a realistic 1.0 timeline.
Zig is pretty much exactly what I would want from a low-level language; I'm just waiting for it to be stable.
And, of course, kudos - I really appreciate minimalist design philosophy of Zig.
ArtixFox · 31d ago
I am pretty sure serious projects like tigerbeetle freeze a version [most probably latest release] and use it. nightly is for experimental stuff.
treeshateorcs · 31d ago
so, a hello-world program (`zig init`) is 9.3MB compiled. Compared to the 7.6KB you get with `-Doptimize=ReleaseSmall`, that is huge (more than 1000 times larger).
AndyKelley · 31d ago
Indeed, good observation. Another observation is that 82% of that is debug info.
-OReleaseSmall -fno-strip produces a 580K executable, while -ODebug -fstrip produces a 1.4M executable.
Sounds like Julia should consider switching to Zig to get considerable performance gains. I remember authors feeling uneasy with each llvm release worrying about performance degradations.
patagurbon · 31d ago
Julia is effectively hard-locked to LLVM. Large swathes of the ecosystem rely on the presence of LLVM, either for intrinsics, autodiff (Enzyme) or GPU compilation. Never mind Base and Core.
The compiler is fairly retargetable; this is an active area of work. So it's maybe possible in the future to envision Zig as an alternative compiler for fragments of the language.
bobbylarrybobby · 31d ago
Isn't LLVM considered part of Julia’s public API? You've got macros like @code_llvm that actually give you IR
jakobnissen · 31d ago
That could be a way to get compile times down, but I think there is still much to do on the Julia side.
Such as a more fine-grained compile cache, better tooling to prevent invalidations, removal of the world-splitting optimisation, more use of multithreading in the compiler, automatic precompilation of concrete signatures, and generation of lazier code which hot-swaps in code when it is compiled.
eigenspace · 29d ago
People say this about every compiler backend that shows up. I have serious doubts, but if someone wants to take it on as a project it'd be pretty interesting to see what happens.
whinvik · 31d ago
As a complete noob what is the advantage of Zig over other languages? I believe it's a more modern C but what is the modern part?
dns_snek · 31d ago
A few things off the top of my head:
- an integrated build system that doesn't use multiple separate arcane tools and languages
- slices with known length in Zig vs arrays in C (buffer overflows)
- an explicit optional type that you're forced to check, null pointers aren't* allowed (*when they are, for integrating with C code, the type makes that obviously clear)
- enums, tagged unions and enforced exhaustive checks on "switch" expressions
- error handling is explicit, functions return an error (an enum value) that the caller must handle in some way. In C the function might return some integer to indicate an error which you're allowed to completely ignore. What is missing is a standard way of returning some data with an error that's built into the language (the error-struct-passed-through-parameters pattern feels bolted on, there should be special syntax for it)
- "defer", "errdefer" blocks for cleanup after function returns or errors
- comptime code generation (in Zig) instead of macros, type reflection (@typeInfo and friends)
- caller typically makes decisions about how and where memory is allocated by passing an allocator to libraries
- easier (at least for a noob) to find memory leaks by just using GeneralPurposeAllocator
As someone who's always used higher level languages since I started programming and strongly disliked many of the arcane, counterintuitive things about C and surrounding ecosystem whenever I tried it, Zig finally got me into systems programming in a way that I find enjoyable.
fjnfndnf · 31d ago
That's just the backend swapped out - all the analysis and type passes are still present - or is it also reducing verifications?
While a quick compile cycle is beneficial for productivity, this is only the case if it also includes fast tests.
Thus, wouldn't it be easier to just interpret Zig for debug? That would also solve the issue of having to repeat the work for each target.
flohofwoe · 31d ago
> Thus wouldn't it be easier to just interpret zig for debug
The whole point of debug mode is debuggability, and hooking up such an interpreted Zig to a standard debugger like gdb or lldb probably isn't trivial, since those expect an executable with DWARF debug info.
PS: also acceptable debug-mode performance is actually very important, especially in areas like game development.
ArtixFox · 31d ago
Only the backend has been swapped out.
The tests will be fast too, yes.
There is no real need to add an interpreter. Having custom backends means that while they are currently used for debug, far in the future they might be able to compete with LLVM for speed.
Adding an interpreter would be useless, as you would still need to write a custom backend.
The problem is LLVM's slowness for both debug and release.
foresto · 31d ago
Isn't this one of the preconditions for bringing async/await back to Zig?
AndyKelley · 31d ago
I've got that stuff all figured out, should have some interesting updates for everyone over the next 2-3 months. Been redoing I/O from the ground up - mostly standard library work.
sgt · 31d ago
Is it really that important though, to have async/await? I mean, do Zig developers actually need it?
AndyKelley · 31d ago
Funny, this is the central question of the talk that I am working on for Systems Distributed in Amsterdam next week.
sgt · 31d ago
Hoping they make the talks available afterwards on YouTube or somewhere.
audunw · 31d ago
It's the feature I'm waiting for before I start a project I've wanted to do for a while in Zig.
I’d say it’s really important to get async nailed before 1.0, because it’s potentially one of the biggest killer features for many projects.
Zig's async isn't just about I/O; I've found the feature useful in hardware simulation code as well.
foresto · 31d ago
Do developers actually need anything more than assembly?
When choosing a new language (and ecosystem) in which to invest my time, I'm more likely to pick one that's versatile than one that struggles outside of a niche. Even with a background in processes, threads, manual event loops and callbacks, I find that higher level concurrency features make my life easier in enough different situations that first-class support for them is now fairly high on my shopping list.
Do I actually need a language with something resembling coroutines? No; I got by for decades with C and C++. But I want it. It makes me more productive and helps me to keep my code simpler, both of which are valuable to me. These days, I have found myself walking away from languages for being weak in this area.
sgt · 30d ago
Now imagine Zig with channels and go routines...
whitehexagon · 31d ago
Keeping Zig simple and close to C seems the most beneficial. Something that can be used everywhere that C is currently used. Otherwise it moves from systems programming language into applications programming.
nasretdinov · 31d ago
IMO there are plenty of cases where you don't need to squeeze every little drop of performance by going all in with epoll/io_uring directly, but you still want to handle 10k+ concurrent connections more effectively than with threads.
sgt · 31d ago
epoll combined with a thread pool
lossolo · 31d ago
I don't know about others, but it is for me. I guess it depends on what you're working on.
geodel · 31d ago
Reading the link, it seems to me async is never coming back or at least not till 2028.
mlugg · 31d ago
I don't understand how you reached this conclusion.
Nothing is decided for sure, but the plan is most likely to re-introduce stackless coroutines as a set of lower-level primitives [0], and support implementing the planned `std.Io` abstraction [1] using them. The usage isn't quite as "pretty" as our old async/await syntax, but this approach sidesteps a lot of design problems, allows applications to seamlessly switch between different async implementations (e.g. stackless vs stackful), and simplifies the eventual language specification.
It's true that we're not doing this work right now, but that doesn't mean async is "never coming back", nor that you'll be waiting "till 2028".
Sure, I was just tempering expectations. In large projects there are always tons of competing priorities. Considering the many things needed before async, and that not all of them may be complete before other important work like incremental compilation and new backends, it may be a while before we see async.
txdv · 31d ago
Is there a guide on how to build Zig in 20 seconds? (for fast development cycles)
I would like to contribute but faced difficulties because the compilation of all stage1/2/3 combined took a lot of time.
mlugg · 31d ago
The whole "stage1/2/3" jazz is about our bootstrap process; that is, the way you get a Zig compiler starting from nothing but a C compiler. This is a tricky problem because of the fact that the Zig compiler is written in Zig. The bootstrap is unfortunately quite slow to run, for two main reasons:
* We want the final Zig binary it produces to be optimized using LLVM, and LLVM is incredibly slow.
* The start of the bootstrap chain involves a kinda-weird step where we translate a WASM binary to a gigantic C file which we then build; this takes a while and makes the first Zig compiler in the process ("stage1"/"zig1") particularly slow.
Luckily, you very rarely need to bootstrap!
Most of the time, you can simply download a recent Zig binary from ziglang.org. The only reason the bootstrap process exists is essentially so you can make those tarballs yourself (useful if you want to link against system LLVM, or optimize for your native CPU, or you are a distro package maintainer). You don't actually need to do it to develop the compiler; you just need a relatively recent build of Zig to use to build the compiler, and it's fine to grab that from ziglang.org (or a mirror).
Once you have that, it's as simple as `zig build -Dno-lib` in the Zig repository root. The `-Dno-lib` option just prevents the build script from copying the contents of the `lib/` directory into your installation prefix (zig-out by default); that's desirable to avoid when working on the compiler because it's a lot of files so can take a while to copy.
You can also add `-Ddev=x86_64-linux` to build a smaller subset of compiler functionality, speeding up the build more. For the other `-Ddev` options, look at the fields of `Env` in `src/dev.zig`.
candrewlee · 31d ago
This is awesome for Zig, I think this direction is gonna be a primary differentiator when comparing to Rust.
And hey, I wrote a lot of the rendering code for that perf analyzer. Always fun to see your work show up on the internet.
Will macOS also get support for the self-hosted backend at some point?
meepmorp · 31d ago
that work is in progress, though the self-hosted backend is x86 only for now.
xmorse · 31d ago
x86 runs fine on Apple Silicon too, thanks to Rosetta.
mlugg · 31d ago
I believe (although I won't claim to know details) that Rosetta is unable to handle the code our self-hosted backend emits. I don't know whether it's related to the machine code or the Mach-O object, nor do I know in what way it breaks. Feel free to try it out, of course - my understanding could be wrong.
xmorse · 29d ago
Next time you try Rosetta and it fails, can you open an issue with the error messages? I think this task would be perfect for o3 to work on; you basically have to browse hundreds of issues on GitHub and find other people that ran into the same problem.
VWWHFSfQ · 31d ago
I'm interested in Zig but kind of discouraged by the 30 pages of open issues mentioning "segfault" on their Github tracker. It's disheartening for a systems programming language being developed in the 21st century.
cornholio · 31d ago
Zig is not a memory safe language and does not attempt to prevent its users from shooting themselves in the foot; it tries to make those unsafe actions explicit and simple, unlike something like C++ that drowns you in complexity. But if you really want to do pointer wrangling and use memory after freeing it, Zig allows you to do it.
This design philosophy should lead to countless segfaults that are the result of Zig working as designed. It also relegates Zig to the small niche of projects in modern programming where performance and developer productivity are more important than resilience and correctness.
enbugger · 31d ago
Since when are segfaults declared a thing of the 20th century?
pjmlp · 31d ago
Since we discovered better ways of doing systems programming around the early 1980's that aren't tied to UNIX culture.
enbugger · 30d ago
Which ways for example?
pjmlp · 29d ago
Mesa, Modula-2, Object Pascal, Ada for example.
Unfortunately none of them came with free-beer UNIX.
AndyKelley · 31d ago
I see 40 pages in rust-lang/rust. Are you sure this heuristic is measuring what you think it's measuring?
VWWHFSfQ · 31d ago
Oh I wasn't comparing to Rust. But just a quick glance between the two repos shows a pretty big difference between the nature of the "segfault" issues reported.
Every mature compiler (heck, project of any kind) has thousands of bugs open. It’s just a poor metric.
VWWHFSfQ · 31d ago
Yep and like I said, I'm interested in Zig. But it's still somewhat discouraging as a C replacement just because it seems to still have all the same problems but without the decades of tools and static analyzers to help out. But I'm keeping an eye on it.
Retro_Dev · 31d ago
It is my opinion that even if Zig were nothing more than a syntactical tweak of C, it would be preferable over C. C has a lot of legacy cruft that can't go away, and decades of software built with poor practices and habits. The status-quo in Zig is evolving to help mitigate these issues. One obvious example that sets Zig apart from C is error handling built into the language itself.
uecker · 31d ago
What specific legacy cruft bothers you? I think it is a strength of C that it evolves slowly and code from decades ago will still run.
I also do not see how having decades of legacy software is holding anybody back from doing new stuff in C in a better way. New C code can be very nice.
pjmlp · 31d ago
So slowly that what allowed for the Morris worm is still present in C23, and now everyone is rushing into hardware memory tagging as a solution instead.
uecker · 31d ago
Can you be more precise? Memory safety does not only affect C programs and - despite people repeating this over and over - I do not believe it is true that it is actually harder to build safe programs in C compared to many other languages. Ada and Rust certainly have some advantages, but I think this is also exaggerated.
pjmlp · 31d ago
Memory corruption caused by the lack of bounds checking, or of vocabulary types like the ones provided by SDS and glib.
Microsoft had to come up with SAL during Windows XP SP2, Apple with -fbounds-safety alongside Safe C dialect for iBoot firmware, Oracle with ADI on Solaris/SPARC, Apple's ARM PAC extension, ARM and Microsoft's collaboration on Pluton and CHERI Morello, Apple, Microsoft, Google and Samsung's collaboration on ARM's MTE.
Lots of money being spent on R&D, for something WG14 considers exaggerated.
uecker · 31d ago
Those extensions are mostly there to enhance the safety of legacy code, though, and are not necessary when writing new code that is safe. But it is only the latter that is relevant when comparing to new alternative languages.
pjmlp · 31d ago
Where can we find examples of such newly written C code that is memory safe, given that the language features that would make it possible are only now under discussion for future standards, after government and industry pressure?
uecker · 31d ago
Basically any code that does not use pointer arithmetic or raw pointer dereferences and instead puts string handling and buffer management behind abstractions.
The new features we will put in are not there to enable safe programming, but to make it more convenient and to make safety demonstrable.
And I wish there was actually some real industry interest in pushing this forward. Industry seems more interested in claiming this is an unfixable problem and that we all have to switch to Rust, which gives them another decade of punting the problems with existing code.
pjmlp · 31d ago
So basically no examples, just hoping people actually follow best practices, as usual.
Why doesn't WG14 prove the industry wrong then?
uecker · 31d ago
I could help with searching for some public examples if you really do not know any code that uses safe string abstractions in C. But my original aim was to understand what specific "legacy cruft" in C is seen as problematic, and why its presence requires an entirely new language to fix. So far, I did not get a good answer to this. I certainly do know some legacy cruft I want to see go, but its presence does not prevent me from writing bounds-safe code.
WG14 is a very small number of volunteers. It would help if the industry would actually invest development resources for safety on the compiler / language side and in cleaning up legacy code.
pjmlp · 31d ago
Exactly, there are very few pearls of best practices in C, and those that exist are probably in high-integrity computing, with the relevant certification costs.
When all major OS vendors, some of whom are also compiler vendors, see more return on investment in contributing their money to alternative language foundations or open source projects than in sending their employees to either WG14 or WG21, it is kind of clear ISO isn't going the way they would like.
I would not call this an exaggeration, rather not listening.
Additionally, it would not surprise me if one of Zig, Odin, Rust eventually started popping up on console DevKits, or Khronos standards as well.
uecker · 31d ago
I don't know. WG21 is very big, WG14 is very small, so they are different. But in neither case is ISO setting the direction; whoever shows up can influence the standard. Some of my proposals towards safety were opposed by compiler vendors because they do not want to put too much pressure on their customers to upgrade their legacy code. But of course, rewriting the code in another language would be much more effort than maintaining it... So I think the true answer is that nobody wants to invest in the maintenance of legacy code. But this will increasingly also be a problem for other languages once they are not young anymore.
pjmlp · 31d ago
Maybe people should take advantage of the fact that D, Ada and Modula-2 are all part of GCC.
ArtixFox · 31d ago
I'm pretty sure valgrind and friends can be used with Zig.
Zig is still not 1.0 and there aren't many stability guarantees, so making something like Frama-C, even though it is possible, is simply going to be so much pain due to constant breakages, compared to something like C.
Beyond that, tools like Antithesis https://antithesis.com/ exist that can be used for checking bugs. [I don't have any experience with it.]
stratts · 31d ago
What's the state of the art here?
Most of Zig's safety, or lack thereof, seems inherent to allowing manual memory management, and at least comparable to its "C replacement" peers (Odin, C3, etc).
ArtixFox · 31d ago
I guess formal verification tools? That is the peak that even Rust is trying to reach with creusot and friends. Ada has support for it using the SPARK subset (which can use Why3, or have you write the proofs in Coq).
Frama-C exists for C.
Astrée exists for C++, but I don't think lone developers can access it. It is used in Boeing, though.
pjmlp · 31d ago
Comparable to Modula-2 and Object Pascal, hence why those languages ought to do better.
Otherwise it is like getting Modula-2 from 1978, but with C-like syntax, because curly brackets are a must.
garbagepatch · 31d ago
No shame in waiting until 1.0. There's other production ready languages you can use right now so you can ignore Zig until then.
9d · 31d ago
That's about size and popularity, not maturity.
Several very popular, small, mature projects have zero or few open issues.
(And several mature, huge and unpopular ones too.)
xmorse · 31d ago
that issue is ridiculous - what did you expect from randomly increasing the pointer of an array?
tatjam · 31d ago
At the same time, having the access to .ptr be explicit is way better than in C, where the variable you are adding 1 to may or may not be a pointer.
Having the pointer nature of the operation locally explicit is way better than having to scroll back to find the type of the variable.
9d · 31d ago
> And we’re looking at aarch64 next - work that is expected to be accelerated thanks to our new Legalize pass.
Sorry, what?
mlugg · 31d ago
The other comments get the general idea, but here's a slightly more detailed explanation.
Code generation backends in the Zig compiler work by lowering an SSA-like structured IR called AIR to machine code (or actually first to another intermediate data structure called MIR, but don't worry about that). The thing is, AIR is intentionally quite high-level, the intention being that the code emitting AIR (which is complex and difficult to parallelize) doesn't have to waste loads of valuable single-threaded time turning e.g. a single line of Zig code into tens or hundreds of instructions.
However, this approach sort of just moves this work of "expanding" these high-level operations into low-level instructions, from the code producing AIR, into the codegen backend. That's good for compiler performance (and also actually for avoiding spaghetti in the compiler :P), but it makes it much more difficult to write backends, because you need to implement much more complex lowerings, and for a much greater number of operations.
To solve this problem, `Legalize` is a new system we've introduced to the compiler which effectively performs "rewrites" on AIR. The idea is that if a codegen backend doesn't want to support a certain high-level operation, it can set a flag which tells `Legalize` that before sending the AIR to codegen, it should rewrite all occurrences of that instruction to a longer string of simpler instructions. This could hugely simplify the task of writing a backend; we don't have that many legalizations implemented right now, but they could, for instance, convert arithmetic on large integer types (e.g. u256) into multiple operations on a "native" integer size (e.g. u64), significantly decreasing the number of integer sizes which you need to handle in order to get a functional backend. The resulting implementation might not emit as efficient machine code as it otherwise should (generally speaking, manually implementing "expansions" like the one I just mentioned in the backend rather than in `Legalize` will lead to better code because the backend can sort of "plan ahead" better), but you can implement it with way less work. Then, if you want, you can gradually extend the list of operations which the backend actually supports lowering directly, and just turn off the corresponding legalizations; so everything works before and after, but you get better code (and possibly slightly faster compilation) from implementing the operation "directly".
WalterBright · 31d ago
I'm about halfway done writing an AArch64 backend for the dmd D compiler. Of course, the gdc and ldc compilers already support that.
9d · 31d ago
Every few years I look into D and read it and think it's really good and want to write something in it. But then I come across comments on HN that make me feel like it's not good enough compared to something like C++ or Rust. I know I should ignore those, and I do like having a GC, so D seems like maybe a better Go. But one thing I always put ahead of everything else is developer convenience, for example, autocompletion and hover-docs in VS Code. I assume D has these, but I also assume that it's not as well supported as it is in Rust, mostly thanks to some sort of principle that probably has a name by now, where the more hype a language has, the more support its entire ecosystem has, regardless of whether it's a good language or not. Kind of like how writing TypeScript for the web is now amazing to work with, being a zeroeth-class citizen in VS Code (soon to be lowered to first-class after the Go rewrite), simply because JavaScript is the only native language the web speaks. Anyway, if D has wrapper libs for SDL3 and V8, decent docs, works well on both Win&Mac, and has good enough VS Code support, I'd be glad to give it a shot.
WalterBright · 31d ago
D could use a well-oiled marketing machine!
nektro · 31d ago
AFAICT it's a new pass that transforms AIR generated from Sema into AIR understood by a particular backend, since they're not all at the same level of maturity.
xmorse · 31d ago
This could be the greatest programming language development of the last 10 years. Finally a language that compiles fast and is fast at runtime too.
pjmlp · 31d ago
Turbo Pascal 5.5 for MS-DOS, one example out of many others from those days, running on lame IBM PCs.
If anything, it is a generation rediscovering what we have lost.
0points · 31d ago
> Finally a language that compiles fast and is fast at runtime too.
We've been enjoying golang for the last decade and then some ;-)
xmorse · 31d ago
that's why I added "fast at runtime" too
0points · 30d ago
Enlighten me, what's so slow about the golang runtime?
rurban · 31d ago
We are talking Zig here, not Lua.
ArtixFox · 31d ago
there's a long way to go, but it's better than nothing!
The biggest pain point I personally have with Zig right now is the speed of `comptime` - The compiler has a lot of work to do here, and running a brainF** DSL at compile-time is pretty slow (speaking from experience - it was a really funny experiment). Will we have improvements to this section of the compiler any time soon?
Overall I'm really hyped for these new backends that Zig is introducing. Can't wait to make my own URCL (https://github.com/ModPunchtree/URCL) backend for Zig. ;)
Not sure why but I was definitely getting some game of thrones vibes from your comment and I would love to see some competition but I don't know, Just code in whatever is productive to you while being systems programming language I guess.
But I don't know low level languages so please, take my words at 2 cents.
The fight for the Iron Throne, lots of self-proclaimed kings trying to take it... C is like King Joffrey, Rust is maybe Robb Stark?! And Zig... probably princess Daenerys with her dragons.
No comments yet
It's not perfect yet, but I can do C/C++/ObjC, Zig, Odin, C3, Nim, Rust, JS/TS, Python, etc... development and debugging all in the same IDE, and even within the same project.
Same with the FreeBSD Foundation (P: OS Improvements):
https://freebsdfoundation.org/wp-content/uploads/2024/03/Bud...
Other Foundations are more like the "Penguin Foundation".....
Maybe people should occasionally move away from their UNIX and vi ways.
Maybe when something better comes up, but since you never invested one single minute on improving Inferno we have to wait for another Hero ;)
MSVC++ is a nice compiler, sure, but it's not GCC or Clang. It's very easy to have a great feature set when you purposefully cut down your features to the bare minimum. It's like a high-end restaurant. The menu is concise and small and high quality, but what if I'm allergic to shellfish?
GCC and Clang have completely different goals, and they're much more ambitious. The upside of that is that they work on a lot of different platforms. The downside is that the quality of features may be lower, or some features may be missing.
There was a famous game with Lisp scripting, Abuse, and Naughty Dog used to have Game Oriented Assembly Lisp.
Nevertheless, impressive that you can do so!
Excited to see what he can do with this. He seems like a really smart guy.
What's the package management look like? I tried to get an app with QuickJS + SDL3 working, but the mess of C++ pushed me to Rust where it all just works. Would be glad to try it out in Zig too.
SDL3 has both a native Zig wrapper: https://github.com/Gota7/zig-sdl3
And a more basic repackaging on the C library/API: https://github.com/castholm/SDL
For QuickJS, the only option is the C API: https://github.com/allyourcodebase/quickjs-ng
Zig makes it really easy to use C packages directly like this, though Zig's types are much more strict so you'll inevitably be doing a lot of casting when interacting with the API
Even this is pretty usable, handling value conversions and such thanks to comptime. (Take a look at the tests here: https://github.com/eknkc/zquickjs/blob/master/src/root.zig)
But I did not know hashmap re-exported hashbrown, thanks.
looks like there's no way to access it, outside of hashmap.
Though maybe you just need the third party hasher and you can call with_hasher.
IDK man there's a lot going on with rust.
real 0m18.444s user 0m17.408s sys 0m1.688s
On an ancient processor (it runs so fast I just never upgraded it):
cat /proc/cpuinfo processor : 0 vendor_id : AuthenticAMD cpu family : 15 model : 107 model name : AMD Athlon(tm) 64 X2 Dual Core Processor 4400+ stepping : 2 cpu MHz : 2299.674 cache size : 512 KB physical id : 0 siblings : 2 core id : 0 cpu cores : 2 apicid : 0 initial apicid : 0 fpu : yes
oh, and by the way that includes the package manager, so the compile time accounts for:
* HTTP
* TLS (including aegis-128l, aegis-256, aes128-gcm, aes256-gcm, chacha20poly1305)
* deflate, zstd, and xz
* git protocol
I suppose it would compile faster if I didn't have symbolic debug info turned on.
Anyhow, our users often use dmd for development because of the high speed turnaround, and gdc/ldc for deployment with their more optimized code gen.
Some people say you should use an old computer for development to help you write faster code. I say you should use a new computer for development, and write the fastest code you possibly can by exploiting all the new CPU instructions and optimizing for newer caching characteristics.
Also, self-compile times are strongly related to how much code there is in the compiler, not just the compile speed.
I also confess to being a bit jaded on this. I've been generating code from 8086 processors to the latest. Which instructions and combinations are faster is always flip-flopping around from chip to chip. So I leave it to the gdc/ldc compilers for the top shelf speed, and just try to make the code gen bulletproof and do a solid job.
Working on the new AArch64 has been quite fun. I'll be doing a presentation on it later in the summer. My target machine is a Raspberry Pi, which is a great machine.
Having the two code generators side by side also significantly increased the build times, because it's a lot more code being compiled.
Never found a user who asked for that, either :-/
The MSVC limitations are maddening, from how short string literals must be, to the complete lack of inline assembly when targeting x86_64.
[1]: https://ziglang.org/documentation/0.14.1/std/#std.Target.Obj...
Anyhow, one of the curious features of D is its ability to translate C code to D code. Curious as it was never intentionally designed, it was discovered by one of our users.
D has the ability to create a .di file from a .d file, which is analogous to writing a .h file from a .c file. When D gained the ability to compile C files, you just ask it to create a .di file, and voila! the C code translated to D!
I somehow missed that D has that! I try to read the forums now and again, but I should keep more active tabs on how stuff is going :)
I don't know whether this is technically feasible, maybe you could run it on CPUs with good power management and force them to underclock or something.
Do you have any metrics on which parts of the whole compiler, std, package manager, etc. take the longest to compile? How much does comptime slowness affect the total build time?
Well, one interesting number is what happens when you limit the compiler to this feature set:
* Compilation front-end (tokenizing/parsing, IR lowering, semantic analysis)
* Our own ("self-hosted") x86_64 code generator
* Our own ("self-hosted") ELF linker
...so, that's not including the LLVM backend and LLD linker integration, the package manager, `zig build`, etc. Building this subset of the compiler (on the branch which the 15 second figure is from) takes around 9 seconds. So, 6 seconds quicker.
This is essentially a somewhat-educated guess, so it could be EXTREMELY wrong, but of those 6s, I would imagine that around 1-2 are spent on all the other codegen backends and linkers (they aren't too complex and most of them are fairly incomplete), and probably a good 3s or so are from package management, since that pulls in HTTP, TLS, zip+tar, etc. TLS in particular does bring in some of our std.crypto code which sometimes sucks up more compile time than it really should. The remaining few seconds can be attributed to some "everything else" catch-all.
Amusingly, when I did some slightly more in-depth analysis of compiler performance some time ago, I discovered that most of the compiler's time -- at least during semantic analysis -- is spent analyzing different calls to formatted printing (since they're effectively "templated at compile time" in Zig, so the compiler needs to do a non-trivial amount of work for every different-looking call to something like `std.log.info`). That's not actually hugely unreasonable IMO, because formatted printing is a super common operation, but it's an example of an area we could improve on (both in the compiler itself, and in the standard library by simplifying and speeding up `std.fmt`). This is one example of a case where `comptime` execution is a big contributor to compile times.
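To illustrate (a minimal sketch of my own, not taken from the compiler): every call below has a distinct format string / argument-type combination, so each one forces a fresh comptime instantiation of the formatting code for semantic analysis to chew through.

    const std = @import("std");

    pub fn main() void {
        // Three "different-looking" formatted-print calls; each distinct
        // format string + argument-type pairing is analyzed separately.
        std.log.info("x = {d}", .{@as(u32, 42)});
        std.log.info("name = {s}", .{"zig"});
        std.log.info("ratio = {d:.3}", .{@as(f64, 0.5)});
    }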
However, aside from that one caveat of `std.fmt`, I would say that `comptime` slowness isn't a huge deal for many projects. Really, it depends how much they use `comptime`. You can definitely feel the limited speed of `comptime` execution if you use it heavily (e.g. try to parse a big file at `comptime`; there's a toy sketch of this below). However, most codebases are more restrained in their use of `comptime`; it's like a spice: a bit is lovely, but you don't want to overdo it! As with any kind of metaprogramming, overuse of `comptime` can lead to horribly unreadable code, and many major Zig projects have a pretty tasteful approach to using `comptime` in the right places.
So for something like the Zig compiler, the speed of `comptime` execution honestly doesn't factor in that much (aside from that `std.fmt` caveat discussed above). `comptime` is very closely tied in to general semantic analysis (things like type checking) in Zig's design, so we can't really draw any kind of clear line. But on the PR I'm taking these measurements against, the threading actually means that even if semantic analysis (i.e. `comptime` execution plus more stuff) were instantaneous, we wouldn't see a ridiculous performance boost, since semantic analysis is now running in parallel to code generation and linking, and those three phases are faiiirly balanced right now in terms of speed.
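(To give a feel for what "parse a big file at `comptime`" looks like, here's a toy, self-contained sketch; a string literal stands in for what would realistically be a large `@embedFile` input, and all the counting runs in the comptime interpreter.)

    const std = @import("std");

    // In real code this would be @embedFile("big-input.txt") or similar;
    // the literal keeps this sketch self-contained.
    const data = "alpha\nbeta\ngamma\n";

    // Evaluated entirely by the comptime interpreter. Scale the input up
    // a few orders of magnitude and you start to feel its speed.
    const line_count = blk: {
        var n: usize = 0;
        for (data) |c| {
            if (c == '\n') n += 1;
        }
        break :blk n;
    };

    pub fn main() void {
        std.debug.print("{d} lines counted at compile time\n", .{line_count});
    }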
In general (note that I am biased, since I'm a major contributor to the project!), I find that the Zig compiler is honestly a fair bit faster than people give it credit for. Like, it might sound pretty awful that (even after these improvements), building a "Hello World" takes (for me) around 0.3s -- but unlike C (where libc is precompiled and just needs to be linked, so the C compiler literally has to handle only the `main` you wrote), the Zig compiler is actually freshly building standard library code to handle, for instance, debug info parsing and stack unwinding in the case of a panic (code which is actually sorta complicated!). Right now, you're essentially getting a clean build of these core standard library components every time you build your Zig project (this will be improved upon in the future with incremental compilation). We're still planning to make some huge improvements to compilation speed across the board of course -- as Andrew says, we're really only just getting started with the x86_64 backend -- but I think we've already got something pretty decently fast.
[2] https://c9x.me/compile/
Better spelled as Dlang!
When I tried compiling Zig, it would take ages because it would go through different stages (with the entirety of bootstrapping from wasm).
Then everything went south, with the languages that took over mainstream computing.
Notice that this is an extreme example, since we're importing the whole standard library, which is actually discouraged [^1]. Instead you can get through the day with just these flags: `-stdlib=libc++ -fimplicit-modules -fimplicit-module-maps` (and of course `-std=c++20` or later), no extra files/commands required; but then you are restricted to doing `import <vector>;` and such, not `import std`.
[^1]: Non-standard headers like `bits/stdc++.h`, which do the same thing (#including the whole standard library), are what is actually discouraged, because (a) they're non-standard and (b) compile times. But I can see `import std` solving these two problems and being encouraged once it's widely available!
See regular discussions on C++ reddit, regarding state of modules support across the ecosystem.
Their algorithms were simpler.
Their output was simpler.
As their complexity grew, so did the performance of the programs they produced.
Not to mention adding language convenience features (generics, closures).
Generics were already present in CLU and ML, initially introduced in 1976.
Check their features.
OK, the goalposts have moved on what -O0 is expected to deliver in machine code quality; let's then have something like -ffast-compile, or an interpreter/JIT as an alternative toolchain in the box.
Practical example from D land, compile D with dmd during development, use gdc or ldc for release.
Fast iteration times with incremental compilation and binary patching, plus good debugging, should be the expectation for new languages, not something niche or "too hard to do".
The entire realtime rendering industry is essentially built on top of LLVM (or forks of LLVM), even Microsoft have switched their shader compiler to LLVM and is now (finally) starting to upstream their code.
The compiler infrastructure of most game consoles is Clang based (except Xbox which - so far - sticks to MSVC).
So all in all, LLVM has been a massive success, especially for bootstrapping new things.
Zig is pretty much exactly what I would want from a low-level language; I'm just waiting for it to be stable.
And, of course, kudos - I really appreciate the minimalist design philosophy of Zig.
-OReleaseSmall -fno-strip produces a 580K executable, while -ODebug -fstrip produces a 1.4M executable.
Zig's x86 backend makes for a significantly better debugging experience with this Zig-aware lldb fork: https://github.com/ziglang/zig/wiki/LLDB-for-Zig
I don't recall whether it supports stepping through comptime logic at the moment; that was something we discussed recently.
I believe the most relevant links are https://github.com/ziglang/zig/issues/16270 and https://github.com/orgs/ziglang/projects/2/views/1?pane=issu... (as you can see, nothing is concrete yet, just vague mentions of optimization passes)
The compiler is fairly retargetable, and this is an active area of work. So it's maybe possible, in the future, to envision Zig as an alternative compiler for fragments of the language.
Such as a more fine-grained compile cache, better tooling to prevent invalidations, removal of the world splitting optimisation, more use of multithreading in the compiler, automatic precompilation of concrete signatures, and generation of lazier code which hot-swaps in code when it is compiled.
- an integrated build system that doesn't use multiple separate arcane tools and languages
- slices with known length in Zig vs arrays in C (buffer overflows)
- an explicit optional type that you're forced to check, null pointers aren't* allowed (*when they are, for integrating with C code, the type makes that obviously clear)
- enums, tagged unions and enforced exhaustive checks on "switch" expressions
- error handling is explicit, functions return an error (an enum value) that the caller must handle in some way. In C the function might return some integer to indicate an error, which you're allowed to completely ignore. What is missing is a standard way of returning some data with an error that's built into the language (the error-struct-passed-through-parameters pattern feels bolted on; there should be special syntax for it)
- "defer", "errdefer" blocks for cleanup after function returns or errors
- comptime code generation (in Zig) instead of macros, type reflection (@typeInfo and friends)
- caller typically makes decisions about how and where memory is allocated by passing an allocator to libraries
- easier (at least for a noob) to find memory leaks by just using GeneralPurposeAllocator (a few of these points are sketched in the snippet below)
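Here's a small self-contained sketch (my own illustration, so take the details with a grain of salt) touching several of those points: optionals, explicit error handling, exhaustive switch over a tagged union, defer, and caller-supplied allocation.

    const std = @import("std");

    const Shape = union(enum) {
        circle: f64, // radius
        square: f64, // side length
    };

    fn area(s: Shape) f64 {
        // Exhaustive switch: forgetting a variant is a compile error.
        return switch (s) {
            .circle => |r| std.math.pi * r * r,
            .square => |side| side * side,
        };
    }

    fn parsePositive(text: []const u8) !u32 {
        // Error union: callers must try/catch/switch; they can't silently
        // ignore the failure case like a C return code.
        const n = try std.fmt.parseInt(u32, text, 10);
        if (n == 0) return error.NotPositive;
        return n;
    }

    pub fn main() !void {
        var gpa = std.heap.GeneralPurposeAllocator(.{}){};
        defer _ = gpa.deinit(); // reports leaks on exit in debug builds
        const allocator = gpa.allocator();

        // The caller decides where memory comes from.
        const buf = try allocator.alloc(u8, 16);
        defer allocator.free(buf);

        // Optional: must be unwrapped before use.
        const maybe_n: ?u32 = parsePositive("42") catch null;
        if (maybe_n) |n| {
            std.debug.print("n = {d}, area = {d:.2}\n", .{ n, area(.{ .circle = 1.0 }) });
        }
    }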
As someone who's always used higher level languages since I started programming and strongly disliked many of the arcane, counterintuitive things about C and surrounding ecosystem whenever I tried it, Zig finally got me into systems programming in a way that I find enjoyable.
While a quick compile cycle is beneficial for productivity, this is only the case if it also includes fast tests.
Thus, wouldn't it be easier to just interpret Zig for debug builds? That would also solve the issue of having to repeat the work for each target.
The whole point of debug mode is debuggability, and hooking up such an interpreted Zig to a standard debugger like gdb or lldb probably isn't trivial, since those expect an executable with DWARF debug info.
PS: also acceptable debug-mode performance is actually very important, especially in areas like game development.
There is no real need to add an interpreter. Having custom backends means that while they are currently used for debug builds, far in the future they might be able to compete with LLVM for speed.
Adding an interpreter would be useless, as you would still need to write a custom backend.
The problem is LLVM's slowness for both debug and release.
https://github.com/ziglang/zig/wiki/FAQ#what-is-the-status-o...
I’d say it’s really important to get async nailed before 1.0, because it’s potentially one of the biggest killer features for many projects.
Zigs async isn’t just about I/O, I’ve found the feature useful in hardware simulation code as well.
When choosing a new language (and ecosystem) in which to invest my time, I'm more likely to pick one that's versatile than one that struggles outside of a niche. Even with a background in processes, threads, manual event loops and callbacks, I find that higher level concurrency features make my life easier in enough different situations that first-class support for them is now fairly high on my shopping list.
Do I actually need a language with something resembling coroutines? No; I got by for decades with C and C++. But I want it. It makes me more productive and helps me to keep my code simpler, both of which are valuable to me. These days, I have found myself walking away from languages for being weak in this area.
Nothing is decided for sure, but the plan is most likely to re-introduce stackless coroutines as a set of lower-level primitives [0], and to support implementing the planned `std.Io` abstraction [1] using them. The usage isn't quite as "pretty" as our old async/await syntax, but this approach sidesteps a lot of design problems, allows applications to seamlessly switch between different async implementations (e.g. stackless vs stackful), and simplifies the eventual language specification.
It's true that we're not doing this work right now, but that doesn't mean async is "never coming back", nor that you'll be waiting "till 2028".
[0]: https://github.com/ziglang/zig/issues/23446 [1]: https://github.com/ziglang/zig/blob/async-await-demo/lib/std...
I would like to contribute, but I faced difficulties because the compilation of all stage1/2/3 combined took a lot of time.
* We want the final Zig binary it produces to be optimized using LLVM, and LLVM is incredibly slow.
* The start of the bootstrap chain involves a kinda-weird step where we translate a WASM binary to a gigantic C file which we then build; this takes a while and makes the first Zig compiler in the process ("stage1"/"zig1") particularly slow.
Luckily, you very rarely need to bootstrap!
Most of the time, you can simply download a recent Zig binary from ziglang.org. The only reason the bootstrap process exists is essentially so you can make those tarballs yourself (useful if you want to link against system LLVM, optimize for your native CPU, or if you're a distro package maintainer). You don't actually need to do it to develop the compiler; you just need a relatively recent build of Zig to build the compiler with, and it's fine to grab that from ziglang.org (or a mirror).
Once you have that, it's as simple as `zig build -Dno-lib` in the Zig repository root. The `-Dno-lib` option just prevents the build script from copying the contents of the `lib/` directory into your installation prefix (zig-out by default); that's desirable to avoid when working on the compiler, because it's a lot of files and can take a while to copy.
You can also add `-Ddev=x86_64-linux` to build a smaller subset of compiler functionality, speeding up the build more. For the other `-Ddev` options, look at the fields of `Env` in `src/dev.zig`.
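Putting those together, a typical iteration loop while hacking on the compiler looks something like this (assuming a recent `zig` on your PATH):

    # from the root of a ziglang/zig checkout
    zig build -Dno-lib -Ddev=x86_64-linux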
And hey, I wrote a lot of the rendering code for that perf analyzer. Always fun to see your work show up on the internet.
https://github.com/andrewrk/poop
FWIW there is a similar effort for Rust using cranelift: <https://github.com/rust-lang/rustc_codegen_cranelift>
This design philosophy should lead to countless segfaults that are the result of Zig working as designed. It also relegates Zig to the small niche of projects in modern programming where performance and developer productivity are more important than resilience and correctness.
Unfortunately, none of them came with free beer UNIX.
yikes... https://github.com/ziglang/zig/issues/23556
I also do not see how having decades of legacy software is holding anybody back doing new stuff in C in a better way. New C code can be very nice.
Microsoft had to come up with SAL during Windows XP SP2, Apple with -fbounds-safety alongside Safe C dialect for iBoot firmware, Oracle with ADI on Solaris/SPARC, Apple's ARM PAC extension, ARM and Microsoft's collaboration on Pluton and CHERI Morello, Apple, Microsoft, Google and Samsung's collaboration on ARM's MTE.
Lots of money being spent on R&D, for something WG14 considers exaggerated.
The new features we will put in are not there to enable safe programming, but to make it more convenient and to make safety demonstrable.
And I wish there was actually some real industry interest in pushing this forward. Industry seems more interested in claiming this is an unfixable problem and that we all have to switch to Rust, which gives them another decade of punting on the problems with existing code.
Why doesn't WG14 prove the industry wrong then?
WG14 is a very small number of volunteers. It would help if the industry would actually invest development resources for safety on the compiler / language side and in cleaning up legacy code.
When all major OS vendors, some of whom are also compiler vendors, see more return on investment in contributing their money to alternative language foundations or open-source projects than in sending their employees to WG14 or WG21, it is kind of clear that ISO isn't going the way they would like.
I would not call this an exaggeration, rather not listening.
Additionally, it would not surprise me if one of Zig, Odin, Rust eventually started popping up on console DevKits, or Khronos standards as well.
Zig is still not 1.0 and there aren't many stability guarantees, so making something like Frama-C, even though it is possible, is simply going to be so much pain due to constant breakages, compared to something like C.
But it is not impossible, and there have been demos of refinement type checkers: https://github.com/ityonemo/clr
Beyond that, tools like Antithesis (https://antithesis.com/) exist that can be used for finding bugs. (I don't have any experience with it.)
Most of Zig's safety, or lack thereof, seems inherent to allowing manual memory management, and at least comparable to its "C replacement" peers (Odin, C3, etc).
Otherwise it is like getting Modula-2 from 1978, but with C-like syntax, because curly brackets are a must.
Several very popular, small, mature projects have zero or few open issues.
(And several mature, huge and unpopular ones too.)
Having the pointer nature of the operation locally explicit is way better than having to scroll back to find the type of the variable.
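If we're talking about Zig's `.*` dereference syntax, here's a tiny example of what that buys you:

    const std = @import("std");

    pub fn main() void {
        var x: i32 = 1;
        const p = &x;
        p.* += 1; // the .* marks this as a pointer write right at the use site
        std.debug.print("{d}\n", .{x}); // prints 2
    }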
Sorry, what?
Code generation backends in the Zig compiler work by lowering an SSA-like structured IR called AIR to machine code (or actually first to another intermediate data structure called MIR, but don't worry about that). The thing is, AIR is intentionally quite high-level, the intention being that the code emitting AIR (which is complex and difficult to parallelize) doesn't have to waste loads of valuable single-threaded time turning e.g. a single line of Zig code into tens or hundreds of instructions.
However, this approach sort of just moves this work of "expanding" these high-level operations into low-level instructions, from the code producing AIR, into the codegen backend. That's good for compiler performance (and also actually for avoiding spaghetti in the compiler :P), but it makes it much more difficult to write backends, because you need to implement much more complex lowerings, and for a much greater number of operations.
To solve this problem, `Legalize` is a new system we've introduced to the compiler which effectively performs "rewrites" on AIR. The idea is that if a codegen backend doesn't want to support a certain high-level operation, it can set a flag which tells `Legalize` that, before sending the AIR to codegen, it should rewrite all occurrences of that instruction to a longer string of simpler instructions.
This could hugely simplify the task of writing a backend. We don't have that many legalizations implemented right now, but they could, for instance, convert arithmetic on large integer types (e.g. u256) into multiple operations on a "native" integer size (e.g. u64), significantly decreasing the number of integer sizes which you need to handle in order to get a functional backend. The resulting implementation might not emit machine code as efficient as it otherwise could (generally speaking, manually implementing "expansions" like the one I just mentioned in the backend rather than in `Legalize` will lead to better code, because the backend can sort of "plan ahead" better), but you can implement it with way less work.
Then, if you want, you can gradually extend the list of operations which the backend supports lowering directly, and just turn off the corresponding legalizations; everything works before and after, but you get better code (and possibly slightly faster compilation) from implementing the operation "directly".
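To make that wide-integer example concrete, here's roughly the shape of code such a legalization expands to, written as ordinary Zig rather than the compiler's actual AIR (so purely illustrative, not the real `Legalize` implementation): a u128 add becomes two u64 adds plus carry propagation.

    const std = @import("std");

    // One u128 addition, expressed as operations on two u64 "limbs".
    fn add128(a_lo: u64, a_hi: u64, b_lo: u64, b_hi: u64) struct { lo: u64, hi: u64 } {
        const lo = a_lo +% b_lo; // wrapping add of the low halves
        const carry: u64 = @intFromBool(lo < a_lo); // did the low half overflow?
        const hi = a_hi +% b_hi +% carry;
        return .{ .lo = lo, .hi = hi };
    }

    pub fn main() void {
        const r = add128(std.math.maxInt(u64), 0, 1, 0);
        std.debug.print("lo={d} hi={d}\n", .{ r.lo, r.hi }); // lo=0 hi=1
    }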
If anything, it is a generation rediscovering what we have lost.
We've been enjoying golang for the last decade and then some ;-)