In case anyone else is unfamiliar with Zig syntax and wondering: in Zig, `.{"somevalue"}` is an anonymous list (tuple) literal [1], and `.{.somename = "somevalue"}` is an anonymous struct literal [2].
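To make that concrete, a minimal sketch of both forms (illustrative only; syntax as of recent Zig versions):

```zig
const std = @import("std");

pub fn main() void {
    // Anonymous "list" (tuple) literal: fields are numbered 0, 1, ...
    const tuple = .{ "somevalue", 42 };
    // Anonymous struct literal: fields are named.
    const named = .{ .somename = "somevalue" };

    std.debug.print("{s} {d} {s}\n", .{ tuple[0], tuple[1], named.somename });
}
```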
(A bit of an off-topic rant, but Zig documentation is quite bad; it took me a lot more effort than it should have to discover the facts above.)
I agree the standard library docs aren't useful (unless something has changed a lot in the last six months)... it always seemed better to just search and read the standard library source directly.
messe 39 days ago [-]
> I agree the standard library docs aren't useful (unless something has changed a lot in the last six months)... it always seemed better to just search and read the standard library source directly.
They're automatically generated. As I understand it they've been holding off on improving them until they have the self hosted compiler working, as it will be easier at that point.
throwawaymaths 39 days ago [-]
To be fair, it's really hard to "search" for tuples and anonymous structs and get an understanding from "reading the stdlib". It's also enough of a departure from C (though not obviously readable like types-on-the-left) to warrant some sort of big red flag in the lang docs... Understanding this tripped me up a bit.
LAC-Tech 39 days ago [-]
Zig is still not 1.0. I can forgive its lack of docs at this point, especially how surprisingly readable the std lib is.
akira2501 39 days ago [-]
I'm not wild about the new trend in languages of making a bunch of syntax depend on a single small sigil. It gives me a strong feeling of Perl's historical "eclecticness", and I feel like I'm putting more effort into correctly reading the program back into my mind than I should ideally have to.
kzrdude 39 days ago [-]
I'm wondering what the point of the period is here, could it be done without it?
throwawaymaths 39 days ago [-]
I think it allows the parser to be context-free since otherwise it is tricky to distinguish it from a code block. They could have picked a different sigil, but I think . was landed on because it had the fewest semantic conflicts with other concepts in the language (for example % is used to denote modular arithmetic)
alpaca128 39 days ago [-]
I don't know why, but this is part of the language's syntax; for example, when matching a union type each named variant also has that period. Same with the arguments for print statements (`print("{s}", .{value})`).
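A hedged sketch of what that looks like when switching on a tagged union (names here are invented for illustration):

```zig
const std = @import("std");

const Shape = union(enum) {
    circle: f64, // radius
    square: f64, // side length
};

pub fn main() void {
    const s = Shape{ .circle = 2.0 };
    switch (s) {
        // Each variant is written with the leading period here too.
        .circle => |r| std.debug.print("circle, r = {d}\n", .{r}),
        .square => |side| std.debug.print("square, side = {d}\n", .{side}),
    }
}
```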
kzrdude 39 days ago [-]
aha, seems like they use the list argument to print instead of having varargs functions then?
bo0O0od 39 days ago [-]
In Zig, code blocks are expressions and can appear in a lot of places, e.g. on the RHS of an assignment. It's probably possible to not need the dot, but it would be considerably more complicated.
metadat 39 days ago [-]
Is `.{"somevalue"}` a string array of length 1?
The dot seems superfluous, and quite the silly quirk (in an unintuitive way). How do Python and Go get by without such a mystery sigil?
throwawaymaths 39 days ago [-]
It's a tuple of length one containing (a pointer to) an array of u8. A tuple is an anonymous struct whose fields have numeric names; note that tuples can also be accessed with index syntax. Some distinction from bare braces is necessary because "naked" braces are for code blocks. It's only unintuitive because it's different and you're not used to it yet.
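A small sketch of both access forms (illustrative; the numeric field name has to be quoted with `@"0"` syntax):

```zig
const t = .{"somevalue"};
// The single field holds a pointer to the 9-byte string literal
// (a *const [9:0]u8, which coerces to []const u8).
const by_index = t[0];   // index syntax
const by_field = t.@"0"; // field syntax: the field is literally named "0"
```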
codethief 40 days ago [-]
This is really cool, thanks for providing such an in-depth view into Zig's comptime features and how you used them!
ewalk153 40 days ago [-]
Agree. This is one of the most instructive tutorials on the comptime language feature.
Sphax 39 days ago [-]
I didn't really intend to write a tutorial, but glad you liked the post!
curist 40 days ago [-]
By using comptime, the statement can't be composed at runtime, right? That's currently the major holdback for me in spending more time on Zig: if using comptime becomes more common in the Zig community, libraries could become less flexible to use. It feels sort of like function coloring to me, in that the whole call chain also needs to pass down the value as a comptime variable. I've only spent 2 days with Zig, so I would love to learn if I'm wrong on this subject.
Sphax 40 days ago [-]
Only the metadata of the statement is comptime, that is, the type annotation for each bind parameter. So if you have this query
SELECT * FROM user WHERE age = $age{u16}
You _must_ provide a u16 bind parameter. However the value itself is of course not required to be comptime-known, that would make the whole thing unusable.
For what it's worth there are in zig-sqlite variants of the method which bypass the comptime checks; they're not documented properly yet but see all methods named `xyzDynamic`, for example https://github.com/vrischmann/zig-sqlite/blob/master/sqlite....
throwawaymaths 39 days ago [-]
Generally I wouldn't call Zig comptime function coloring. I have written a prime sieve that uses runtime code to precalculate some primes at comptime. Yes, I had to be very careful about what was in the prime number algorithm, but comptime supports that level of complexity, and it was certainly possible to call runtime-intended code at comptime.
Can you call comptime-intended code at runtime? Not directly (or arguably yes, since the call site is "in" the runtime code), but you can usually just make it runtime code instead of comptime code.
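A sketch of that pattern: an ordinary runtime-style function evaluated at compile time to build a table (illustrative only, not the actual sieve mentioned above):

```zig
// Ordinary code, callable at runtime...
fn isPrime(n: u32) bool {
    if (n < 2) return false;
    var i: u32 = 2;
    while (i * i <= n) : (i += 1) {
        if (n % i == 0) return false;
    }
    return true;
}

// ...but also evaluated at compile time: this table is baked into the binary.
const first_primes = blk: {
    var primes: [8]u32 = undefined;
    var count: usize = 0;
    var n: u32 = 2;
    while (count < primes.len) : (n += 1) {
        if (isPrime(n)) {
            primes[count] = n;
            count += 1;
        }
    }
    break :blk primes;
};
```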
skybrian 39 days ago [-]
Often, comptime code couldn’t be executed at runtime because the language features it accesses aren’t available then. But I agree that if a parameter could go either way, you shouldn’t have to write two versions.
longrod 40 days ago [-]
This looks really neat and handy especially the type annotations part. I always got frustrated when accidentally putting the wrong type into a column.
How's the language server support for this? Last time I tried ZLS, it was quite well rounded, so I am curious to know whether these type annotations would work with it or not. It would be really cool if they did.
I have never been able to fully adopt zig due to how frustrating working with strings is but I absolutely loved the comptime functionality.
xmorse 40 days ago [-]
ZLS currently does not run any comptime code, as far as I know.
It would be cool to have an official Zig language server that can do it
Tiberium 40 days ago [-]
I think the only real way you can make a language server understand comptime code is by using the compiler as a library, similar to how Nim does it: nimsuggest (the tool for autocompletion, definitions, etc.) is basically the compiler itself with some nimsuggest-specific stuff, so it understands macros, templates, compile-time code evaluation, etc.
gpderetta 40 days ago [-]
Isn't that how most LSP servers are implemented? I'm only vaguely familiar with clangd.
ranfdev 39 days ago [-]
Yes. An LSP has to work along with the compiler, or it has to become an alternative compiler.
For example, from the article "Why LSP?" [1]
"It is known that compilers are complicated, and a language server is a compiler and then some."
Things are moving in that direction, but far from all language servers are based on their language's compiler yet.
Shadonototra 40 days ago [-]
From what I understood, that is what they are planning to do: have an official built-in language server.
Sphax 40 days ago [-]
zls doesn't support this unfortunately. I don't know what the plans are for comptime support.
nickysielicki 39 days ago [-]
constexpr/consteval/comptime/etc is a game changer for the systems programmer and it feels so close but yet so far. Can someone speak to what language has the best support for this? Some things that I feel are missing in C++:
* arbitrary file I/O. I can take my compile time data and write a python script to put it in a std::array, but I shouldn’t have to.
* non-fixed-sized containers, ie vector
* a generic memoization utility in the standard library for caching. If I want my function to handle any input at runtime, but I want to pre-populate a cache at compile time for values that I know will be called, it's doable but not as easy as it should be. (In general, given the number of algorithms that rely on memoization, I'm somewhat surprised that Python is the only language I know that makes memoization as easy as a decorator.)
formerly_proven 39 days ago [-]
> * arbitrary file I/O. I can take my compile time data and write a python script to put it in a std::array, but I shouldn’t have to.
> * a generic memoization utility in the standard library for caching. If I want my function to handle any input at runtime, but I want to pre-populate a cache at compile time for values that I know will be called, it's doable but not as easy as it should be. (In general, given the number of algorithms that rely on memoization, I'm somewhat surprised that Python is the only language I know that makes memoization as easy as a decorator.)
I don't think it's possible to use comptime to generate wrapper functions duplicating the signature of a given function; at least I haven't found a way to do so. The function signature has to be spelled out in the comptime code.
ntoskrnl 39 days ago [-]
> Can someone speak to what language has the best support for this?
I can give you the Rust perspective, but I'm not sure it's the best.
> arbitrary file I/O
include_str! and include_bytes! make the contents of a file available as a string or a byte array. More complex types would need a build script (or transmute).
> non-fixed-sized containers, ie vector
No support in `const fn`, but you can again use a build script as an escape hatch.
> a generic memoization utility in the standard library for caching
There's nothing in the stdlib, but there are a few third party crates that give you the easy one-line syntax, such as https://crates.io/crates/memoize
jdrek1 39 days ago [-]
> arbitrary file I/O. I can take my compile time data and write a python script to put it in a std::array, but I shouldn’t have to.
You can already use them in a constexpr context, you just can't leave it yet. So creating a vector at compile time and then using it at runtime sadly doesn't work right now, but I think that's being worked on as well.
dataangel 39 days ago [-]
All of this is already done in Lisp. Greenspun's 10th rule.
ArtixFox 39 days ago [-]
...is there any Lisp that can be used for systems programming and that does NOT produce huge binaries, does not use a GC, generates fast and small binaries, and is still interactive? Everyone claims Lisp can be used for anything, and the same goes for Forths, but I can't find a Lisp like that. As for Forths: none of the free ones are fast, and the fast ones are still slow.
On the same note, if someone were creating something like this [which I am thinking of], what would be the better option: go the Forth way or the Lisp way? Both offer the same stuff to some extent.
throwawaymaths 39 days ago [-]
I mean there was a "lisp machine" in the distant past that actually ran lisp on metal. Does that count?
ArtixFox 39 days ago [-]
Obviously not; we don't care about a language that was once used in a machine built specifically for it, because we can't go back in time and program on that specific machine.
I think I've realised what is up with Lisps, Forths, etc.: theoretically you can build all kinds of beautiful abstractions on top of them and use them for anything, but in reality these languages can't do that, and/or their communities don't care to implement it, or it's not their goal.
It might sound bad, but it is what it is. To put it more harshly: these languages are all talk.
jhgb 39 days ago [-]
Here's an absolutely crazy idea that I had the other day... Since Zig has comptime, couldn't a Zig reimplementation of SQLite use comptime for compile-time query compilation? Including generating native code for fixed queries? So far I haven't been able to come up with any counterargument for why this wouldn't work.
pvg 39 days ago [-]
The counter-argument is interpreting the query is not what takes up the time in executing a typical RDBMS query. Imagine a db is just a key-value store, the query is some keys and the db just spits out whatever it found in a giant hashtable. That's not going to get faster if you 'compile' the query. It's not going to get much faster if your db is a more realistic three hashtables, two big arrays and a partridge in a b-tree.
jhgb 39 days ago [-]
It might not be worth it for everything, but extremely simple queries on extremely simple schemas would not be the goal here.
> couldn't a Zig reimplementation of SQLite use comptime for compile-time query compilation? Including generating native code for fixed queries?
SQLite already compiles queries into opcodes and then uses its own VM to interpret them. You would have to reimplement the internal VM.
jhgb 39 days ago [-]
I'm aware of that, but these elementary operations have some definitions in code. Even just calling them in a fixed sequence (as opposed to dispatching at runtime one opcode at a time) would be something optimizable for a sufficiently smart compiler. However, given a good-enough design of the whole system, straight code generation from the query plan shouldn't be a problem either (and at that point, the Zig compiler should be able to do even more work with the generated query code). I mean, any reimplementation would presumably use Zig features heavily anyway -- otherwise you could just link with the C code.
cryptonector 39 days ago [-]
You lose at "reimplementation of SQLite".
jhgb 39 days ago [-]
Possibly, but then again, if one's aim is to replace C with Zig, in the future perhaps completely, you might need some implementation of SQLite in Zig, because SQLite has become a de facto standard file format for many applications (see OGC GeoPackage, for example). If a file format is going to stick around for decades, you'll definitely get other implementations of it sooner or later, in environments that can't or don't want to use C.
cryptonector 39 days ago [-]
So, yes, it'd be nice to have an implementation of SQLite3 in some non-C language. However, that's a tall order.
And here you're proposing Zig while someone else will prefer Rust, and someone else Go. So you'll need N>1 rewrites.
This is why SQLite3 is written in C: for all the ways in which C sucks, it is the most common (lowest) denominator that ticks the portability checkbox.
Language runtimes that want to not have a C runtime embedded are the hardest hit. The best thing to do for those in the short-term is to talk to a C-coded IPC service that runs the C-coded thing that otherwise can't be run.
jhgb 39 days ago [-]
> because for all the ways in which C sucks, it is the most common (lowest) denominator that ticks the portability checkbox.
This is actually something that Zig is supposed to do as well. There should be nothing that you can do in C but can't do in Zig. So hypothetically, in the future (say, twenty years from now), a Zig implementation of SQLite could be a preferred one since you're not losing anything with it. (With the additional benefit that should you want to, you might be able to compile application-specific queries into it.)
noodledoodletwo 39 days ago [-]
What environment cannot run C?
jhgb 39 days ago [-]
Well, for instance, famously, any multiple-stack machine will have a hard time trying to run C, because C assumes a single hardware stack. Although I imagine that currently, in most cases, you'll want to avoid C for software reasons rather than hardware reasons (for example, you're required to deploy pure Java code, or pure C# code, or want to avoid any unsafe code in Rust, things like that).
cryptonector 39 days ago [-]
> C assumes a single hardware stack
No, it doesn't. C does not even assume a stack. A platform where function call frames are allocated on the heap would not be incompatible with C.
You could say that some C code assumes a stack, but that's pretty exceptional.
> Although I imagine that currently, in most cases, you'll want to avoid C for software reasons,
Yes.
> rather than hardware reasons
Hard to imagine hardware on which C could not run. Maybe a JVM chip, but even then, you could compile C to bytecode.
jhgb 39 days ago [-]
> No, it doesn't. C does not even assume a stack. A platform where function call frames are allocated on the heap would not be incompatible with C.
The problem is not with call frames being on the stack or on the heap; even a spaghetti stack would be problematic. One of the problems is that return addresses and data are interleaved in C-style frames, whereas multiple stack machines require them to be separate. It's not that you couldn't write an implementation of C for a stack machine, it's just that it would be probably very primitive and slow. Yes, you can run C at the very least the way it was done on Lisp machines, by allocating a large byte array and treating it as your physical memory, but surely you would want to avoid it if you could. There really is a reason why stack machines historically ran somewhat exotic languages like Forth.
cryptonector 39 days ago [-]
No, really, the C language makes no assumptions about this.
jhgb 39 days ago [-]
You can't take the address of pretty much any input parameter, return value, or local variable of a function on a stack machine, because they're all on the CPU core's hardware data stack, which has no addresses for its elements. In C you should be able to take the address of these objects using an ampersand. That's not making an assumption?
cryptonector 39 days ago [-]
If it's addressable memory, you can address it. That does not assume a stack however. You don't get to assume that they are in some order, not if you want your code to be portable. On typical architectures that do use a stack, if you're very careful, you do get to use the addresses of automatics to figure out whether the stack grows up or down, but that's not specifically something in standard C.
Const-me 39 days ago [-]
GPUs are weird.
HLSL and CUDA have C-like syntax, but underneath the syntax these things are very different from C.
On GPU almost nothing has an address, there's no stack, no malloc/free, no files or printf, and every instruction runs on 32+ threads in lockstep.
voigt 39 days ago [-]
I see Zig, I vote up ;)
Exciting new language with compelling features!
Great work by the author explaining comptime!
noodledoodletwo 40 days ago [-]
Cool article.
I can't figure out why I am supposed to care about Zig beyond it being fun? I get that it's interesting and more safe than C (honestly, though, what the hell isn't).
Say I write Rust pretty regularly for new product development: what does Zig offer to make my life better, my products more stable, etc.?
hansvm 39 days ago [-]
I write Rust for $day_job and Zig for fun. Rust is great, but if I had to rip on a few places where it's lacking for some applications: Zig maintains a small language footprint, is easier to interop with C, and makes it readily apparent when your code is doing anything non-trivial. To make that concrete:
- The small footprint in Zig enables fast compilation (and getting faster) and fast feedback cycles.
- Arbitrary nastiness can happen using From and other traits in Rust, and the tendency to shadow names and rely on type inference can make that unpleasant to track down. In Zig you'd be forced to make a choice (for any non-trivial type) at the return or call site of how you were going to convert things.
- If you want to use the stdlib and have any non-trivial control over how objects are allocated you're in for a rough time in Rust (say, a circular buffer backing some objects and a reusable arena for others). You'll probably be reinventing a lot of wheels.
- Similarly with issues like implicit locking in stdout. It's executing a syscall per line anyway, but the lock can inadvertently make contended writes 1000x slower, so suddenly logging in multithreaded code needs a dedicated logger, and not any of the common options since those fall back to the locking stdout we're trying to avoid.
adamdusty 39 days ago [-]
Rust probably had a fairly small "language footprint" when it was only a few years old with no production use cases as well. I've used both in some small hobby projects. I like writing zig significantly more than rust, but we'll see how long the language remains small and compact.
AndyKelley 39 days ago [-]
On the contrary, it had a larger one because it had green threads which require an entire runtime (similar to Go).
motiejus 39 days ago [-]
I first tried Rust in 2015[1]. It was my first serious attempt at the language.
The language was not small whatsoever at the time. It was not fun.
I recommend trying it again; consider how old the language is now. 7 years is a really long time.
It may not be the smallest language, but it compiles pretty tightly.
DixieDev 39 days ago [-]
Most likely you specifically don't have much reason to care about Zig. Meanwhile, C can still often be found in areas where high performance and precise control over memory are important - such as in small embedded systems and game engine development - and Zig is a great fit as a replacement.
While you can technically use Rust in these domains you'll find yourself jumping through hoops and fighting against quirks that come with it being fairly high-level and very opinionated on how to enforce memory safety.
One such scenario I've encountered is implementing my own memcpy with a loop like `for i in 0..len`. This works in release builds, but without optimisations this gets a deeeeep callstack that eventually also calls memcpy, so you get a stack overflow. Note how memcpy is implemented in rlibc to avoid this issue:
https://docs.rs/rlibc/latest/src/rlibc/lib.rs.html#30-38
noodledoodletwo 39 days ago [-]
Fair enough, thanks for explaining. Basically, if you want to make your own malloc or things like it, Zig is friendlier. I will say, though, I've used Rust for gamedev and found the experience to be really nice. Was I writing my own memcpy, though? Nope.
It's funny how in rust you can write raw asm, but there seem to be quirks somewhere in the middle.
pbronez 40 days ago [-]
The Zig website [0] has an FAQ for this. I’ll copy in an abridged version here for convenience:
==========
Why Zig When There is Already C++, D, and Rust?
- No hidden control flow. If Zig code doesn’t look like it’s jumping away to call a function, then it isn’t.
- No hidden allocations. Zig has a hands-off approach when it comes to heap allocation. There is no new keyword or any other language feature that uses a heap allocator. The entire concept of the heap is managed by library and application code, not by the language.
- First-class support for no standard library. Zig has an entirely optional standard library. Each std lib API only gets compiled into your program if you use it. Zig has equal support for either linking against libc or not linking against it. Zig is friendly to bare-metal and high-performance development.
- A Portable Language for Libraries. Zig is attempting to become the new portable language for libraries by simultaneously making it straightforward to conform to the C ABI for external functions, and introducing safety and language design that prevents common bugs within the implementations.
- A Package Manager and Build System for Existing Projects. Not only can you write Zig code instead of C or C++ code, but you can use Zig as a replacement for autotools, cmake, make, scons, ninja, etc. And on top of this, it (will) provide a package manager for native dependencies. This build system is intended to be appropriate even if the entirety of a project’s codebase is in C or C++.
- Simplicity. Zig has no macros and no metaprogramming, yet is still powerful enough to express complex programs in a clear, non-repetitive way. Even Rust has macros with special cases like format!, which is implemented in the compiler itself. Meanwhile in Zig, the equivalent function is implemented in the standard library with no special case code in the compiler.
- Tooling. Zig provides binary archives for Linux, Windows, macOS and FreeBSD. It is installed by downloading and extracting a single archive, no system configuration needed. It is statically compiled, uses LLVM, has out-of-the-box cross-compilation to most major platforms, and ships with libc source, dynamically compiling it when needed. The Zig build system has caching and compiles C and C++ code with libc support.
> Simplicity. Zig has no macros and no metaprogramming
This one is an odd point; this very article is a demonstration of metaprogramming*:
> Thanks to Zig’s type reflection we can read a row of data into a user-provided type without needing to write any “mapping” function: we know the type we want to read (here the User struct) and can analyse it at compile-time.
* Which is a feature I favor, for the record.
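The kind of reflection being quoted looks roughly like this (a sketch: `row.column` is a hypothetical stand-in for whatever column accessor the driver actually exposes, and field-info names such as `.type` vs `.field_type` have varied across Zig versions):

```zig
const std = @import("std");

const User = struct {
    id: i64,
    name: []const u8,
};

// Fill a T from a row by iterating T's fields at compile time.
fn readAs(comptime T: type, row: anytype) T {
    var result: T = undefined;
    inline for (std.meta.fields(T), 0..) |field, i| {
        // field.type is known at comptime, so the right typed
        // accessor is selected with no runtime "mapping" code.
        @field(result, field.name) = row.column(field.type, i);
    }
    return result;
}
```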
throwawaymaths 39 days ago [-]
I think the poster meant "no macros" (and especially no c-style lexical macros)
formerly_proven 39 days ago [-]
No metaprogramming as in no separate metalanguage (e.g. templates, macro_rules!)
avgcorrection 39 days ago [-]
Homogeneous metaprogramming is a thing...
noodledoodletwo 40 days ago [-]
Yea, I feel like most of these differentiators aren't things most people care about, barring one. Tight integration with C/C++ is potentially useful; beyond that I don't really get it. It's kind of like Hare in that regard?
stonemetal12 39 days ago [-]
In general those are the reasons I see people give for why they still use C instead of C++ or newer languages. Also all of those things were, and maybe still are necessary for embedded systems where C still dominates.
To me it comes down to Zig is a "modern" C, but unlike most other attempts at replacing C it doesn't skip some of C's use cases.
noodledoodletwo 39 days ago [-]
See, I got downvoted a bit... Let me rephrase: other modern languages offer something along the lines of these. Of the things that aren't there for, say, Rust, close integration with C/C++ is nice. (Rust does have no_std, etc.)
ptato 39 days ago [-]
out of that list, rust doesn't offer "no macros and no metaprogramming" and "no hidden control flow". maybe these just aren't things you care about?
noodledoodletwo 39 days ago [-]
Yea, but for the most part you can avoid macros, and in areas where you can't, you can consider them to be keywords, in my opinion anyway. Macros aren't all bad, but they do get abused a lot and become a nightmare for others. FWIW I've only written one macro, and it was for learning purposes only.
sryie 39 days ago [-]
If zig catches on then the question may change from "why zig" to "do I really need rust?". Rust has a higher entry barrier and is harder to use daily. Is the safety guarantee worth it? In my experience, it is easier to fix a bug than to prove to the rust compiler that there is no bug. I just need a language that finds and reports the bugs (before production). Zig's error handling is interesting in this regard and it may be "good enough". Beyond that, I would prefer to spend my time on the real problem rather than appeasing the compiler.
noodledoodletwo 39 days ago [-]
It's really not that hard to use once you invest some time into it. I know where you are coming from, but appeasing the compiler actually means "I am writing safe code", rather than "it compiled, let's see what happens in production".
sryie 39 days ago [-]
"some time" is the entry barrier I was referring to and increases the cost of onboarding and adoption. However, rust also incurs a non-negligible ongoing productivity cost for its complexity.
Rust can provide some guarantees but it still won't protect you from logic errors or a "clever" coworker. In comparison, zig is simple and at the end of the day readability is my best defense against bugs/errors.
I find myself agreeing with pron and others in that discussion when they say no one really wants to use a safe language. What they want are correct programs.
What remains to be seen is how easy it is to write correct programs in each and the values people place on the deltas between the two languages.
mcronce 39 days ago [-]
> rust also incurs a non-negligible ongoing productivity cost for its complexity
This isn't really consistent with my experience. Compared to writing other languages with varying levels of strictness (C, Python, Go to name a few), Rust's compiler saves me a lot of time writing tests and finding bugs.
It doesn't save me the work of fixing them, but in my experience, bugs that are hard to find and easy to fix drastically outnumber bugs that are easy to find and hard to fix.
That said, I haven't tried Zig. It's on my radar, but just haven't had a new project to start lately.
sryie 39 days ago [-]
I am surprised that you don't encounter a productivity cost in rust vs languages with a gc (like python and go). If that were typical then I assume everyone would be using it. What do you consider to be the tradeoffs between rust and other languages? What are its drawbacks in your experience? How much rust experience do you have compared to other languages?
noodledoodletwo 39 days ago [-]
The productivity cost is mostly up front; once you understand borrowing it's seriously not bad in 99% of cases. Also, you can be initially productive in C or C++, but how much time goes into trying to fix undefined behavior, threading issues, etc.? More importantly, when do those hours get spent? Is it after a customer is screaming into your support chatbot, or is it before you launch, during normal development? To me that's the difference and the killer feature. That's why MS, AWS, etc. are using it. Not because it's new and marginally/debatably better than C/C++.
Once I started understanding what the borrow checker was saying, it wasn't "fighting the compiler" anymore; I realized that what I was asking the compiler to do was really not a good idea. It becomes your friend, basically.
I didn't make this post to shill rust, I'm pretty language agnostic to my own detriment... The point is, it's worth considering and trying it again if you didn't get it the first time around. Rust took me 2x to become a tool I'd use, I honestly hated it at first.
sryie 38 days ago [-]
I think we can agree that c and c++ are essentially table stakes at this point and that anything new needs to offer strong incentives.
I like rust (I actually liked it from the start years ago). However, I believe that there are tradeoffs and the companies you named use other languages in addition to rust because they also believe there are tradeoffs.
Rust won't be the last language. I don't want to get too comfortable with what I know and miss out on what I don't. I am excited by new languages that may also have something to offer. However, I can't master them all so a good discussion is valuable and can offer shortcuts through the noise. I think my assumption in my first reply to your original post was that you were interested in the same thing. However, if you are happy with rust then I don't want to spoil your enjoyment.
throwawaymaths 39 days ago [-]
I think a static borrow-checker tool is very likely in the future of the Zig ecosystem (unlikely to be in the mainline, but that's okay), especially after the intermediate representations stabilize.
noodledoodletwo 39 days ago [-]
I'll believe it when I see it hit 1.0 and has a healthy user base.
mongol 40 days ago [-]
Yes, but say you are not yet writing Rust; perhaps you should choose Zig instead?
giancarlostoro 40 days ago [-]
I think you're both asking the same question from different angles. If I understand correctly (and I could be misremembering), Zig is supposed to be compatible with C++, not just C, which is not necessarily straightforward in other languages competing with C/C++. I hear even D has some issues with mangling and whatnot.
In all honesty, I prefer the syntax of D over all the others; it feels the most like Java or C# but with a lot of modern benefits. It feels like D is trying to do too much, though. I would love for the next D standard library to support, out of the box, something similar to what Go offers, especially a very minimalist web server. I think all modern programming languages should be capable of spinning up web servers out of the box; this is one small detail Go got right, in my opinion.
Here's an article from the Chromium team on challenges they faced with trying to integrate Rust (or at least evaluating it); note the entry was last updated in 2020: https://www.chromium.org/Home/chromium-security/memory-safet...
D, Nim, and a few others certainly offer nicer syntax than plain C. It doesn't surprise me that an established product had trouble incorporating Rust, but at the same time, there's a reason they went through with it, right? Memory safety, no data races, etc. There is a motivating reason for using it. Meanwhile, lots of other products have found ways of integrating Rust, some of them risk-averse products.
krylon 38 days ago [-]
I like that the web server in Go's standard library comes as a library, so you can embed a web server into your application, even if it otherwise isn't web-related at all, just to allow for some introspection and/or control.
(To be fair, I think Python and Ruby have HTTP servers in their standard libraries as well, at least minimalist ones.)
noodledoodletwo 40 days ago [-]
I see where you are coming from: Zig is easier to learn, but it doesn't offer what Rust does with respect to safety. That feature is so hard for me to ignore. I respect it, though; some people want to be up and running with a new technology in a day or whatever, and Rust doesn't give you that unless you are very seasoned.
throwawaymaths 39 days ago [-]
Honestly, many applications don't need the level of safety that Rust provides. For small single-threaded CLI apps or cloud lambdas, just allocate into an arena and throw everything away when the program quits: no use-after-free or double-free, because you're never freeing.
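The arena pattern being described is language-agnostic; here is a minimal sketch in C (the `Arena` type and function names are invented for illustration, not taken from any library): everything is bump-allocated out of one block and released with a single free at shutdown, so use-after-free and double-free simply can't occur mid-run.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A minimal bump/arena allocator. Allocations only move a cursor forward;
 * nothing is freed individually. */
typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} Arena;

static Arena arena_init(size_t cap) {
    Arena a = { malloc(cap), 0, cap };
    return a;
}

static void *arena_alloc(Arena *a, size_t n) {
    if (a->base == NULL || a->used + n > a->cap)
        return NULL;                 /* bounded: the arena never grows */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_free_all(Arena *a) {
    free(a->base);                   /* one free releases everything */
    a->base = NULL;
    a->used = 0;
    a->cap = 0;
}
```

For a short-lived process you can even skip `arena_free_all` entirely and let the OS reclaim the memory at exit, which is the "throw everything away when the program quits" variant.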
At the other extreme, if you're writing an operating system or a language VM, you probably want contextual allocators (like an allocator that takes a runtime argument like "which green thread I'm allocating on") which rust makes extremely difficult.
If you really need memory and resource safety, I think the best answer is to be patient. Zig is very easy to parse, and I imagine static analysis build tools will come about which can do what you want out of Rust. Being decoupled from the compiler chain, they could run fast while you guard your PRs to main/dev/release (as you see fit) with static analysis that protects you, with the safety you seek, in a fashion isomorphic to "how Rust does it". There's no reason someone couldn't write such a tool now, but a lot of the things you'd likely want to statically analyze for it (like ZIR/AIR) are highly unstable, so for one's sanity I don't recommend building it out yet.
noodledoodletwo 39 days ago [-]
I actually disagree with that, and think it encourages bad practices in people who don't know better. The rise of fuzz testing on old C applications written with this kind of mentality is a constant source of CVEs. Why expose users to problems if it's easy not to?
Also, it's not like that safety costs much of anything. You really can get comfortable writing Rust, especially "easy Rust" like a single-threaded CLI app. It really is easy, even convenient.
Either Zig is a different tool than Rust, in which case I get it, or people should be waiting for Zig to mature; meanwhile, Rust has been stable for half a decade and at this point "just works". I'm not going to sit through an add-on static analyzer and all the bugs that come with making one for a few years. Other people whose appetite for risk is higher will, though, and I wish them the best. I've seen such efforts in other languages; my conclusion is that it's much better to have it built in.
winter_squirrel 40 days ago [-]
As someone who uses both for different personal projects:
- I use Zig as my build system for Rust, Zig, and C libraries and for linking, since the build system works really well for this purpose
- When I need to write applications or libraries that can benefit from compile-time code, I always try to use Zig, since comptime is much easier to use than a combination of Rust macros and generics
- I like the Zig async story a lot better. Or at least it's much easier to wrap my head around and write code in, compared to Rust + Tokio
On the other hand, sometimes I know a project will benefit from the borrow checker or I want to use some of the awesome rust crates that the community made and I'll use rust instead.
noodledoodletwo 40 days ago [-]
Does a really short summary of this read kind of like, "zig is a modern c, but I like using rust as a modern c++"?
Agree, Tokio is a little tricky for people new to rust no doubt, but it's tricky for a reason. It's saving lives in production.
throwawaymaths 39 days ago [-]
Just FYI: things that are actually "saving lives" are probably realtime applications, which need things like deterministic execution time, no allocation from the OS, and bounded memory usage, none of which Rust generally gives you without a ton of effort.
soggybutter 39 days ago [-]
This is actually the frame of reference I've taken to giving most folks. Zig is to C what Rust is to C++ in my mind. Is there a lot of overlap between all 4? Absolutely. But devs still choose C over C++ (and Rust) in some cases, and I think it'll be similar for Zig
noodledoodletwo 39 days ago [-]
Thanks for explaining, I really appreciate it.
orangetuba 39 days ago [-]
Yes, but Rust offers something more than just modernizing C++. The borrow checker provides something entirely new that has the potential to make a huge impact.
jmull 39 days ago [-]
Just semantics, but I think the borrow checker is part of modernizing C++.
(The way to manage memory in C++ and C is to opt in to a memory management pattern... hopefully you've chosen a good pattern and hopefully you follow it consistently. Patterns are generally backed by utilities and primitives that are hopefully correct, complete, and hopefully make it relatively easy to follow the pattern consistently. Hopefully you can get a linter to help you too. In a sense Rust takes the same approach, but was implemented from the ground up to eliminate all the gaps so you essentially don't have to hope to successfully bridge the gaps yourself.)
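A small Rust sketch of that "from the ground up" point: the memory-management pattern that C and C++ leave to convention becomes a type-system rule. The function names here are illustrative, not from any real API.

```rust
// `consume` takes ownership of the Vec; the allocation is freed when the
// function returns, and the caller's binding is dead from then on. The
// compiler, not a linter or a convention, enforces this.
fn consume(v: Vec<i32>) -> usize {
    v.len()
} // `v` is dropped (freed) here

fn demo() -> usize {
    let data = vec![1, 2, 3];
    let n = consume(data);
    // data.len(); // would not compile: `data` was moved into `consume`
    n
}
```

The commented-out line is the kind of gap a C++ pattern asks you to avoid by discipline; in Rust it is simply rejected at compile time.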
verdagon 39 days ago [-]
Can you elaborate on the saving lives part? I sense a good story, would love to hear it.
[1] https://ziglang.org/documentation/master/#Anonymous-List-Lit...
[2] https://ziglang.org/documentation/master/#Anonymous-Struct-L...
I used "ziglearn" to understand ("chapter 1" and "chapter 2" have tons of foundational stuff): https://ziglearn.org/chapter-1/
I used the language reference for the details: https://ziglang.org/documentation/master/#Introduction
The dot seems superfluous, and quite the silly quirk (in an unintuitive way). How do Python and Go get by without such a mystery sigil?
For what it's worth, zig-sqlite has variants of the methods which bypass the comptime checks; they're not documented properly yet, but see all methods named `xyzDynamic`, for example https://github.com/vrischmann/zig-sqlite/blob/master/sqlite....
Can you call comptime-intended code at runtime? No? (Or yes, because the call site is "in" the runtime code?) If not, why not just make it runtime code instead of comptime code?
How's the language server support for this? Last time I tried ZLS it was quite well rounded, so I am curious to know whether these type annotations would work with it or not. Would be really cool if they do.
I have never been able to fully adopt Zig due to how frustrating working with strings is, but I absolutely loved the comptime functionality.
It would be cool to have an official Zig language server that can do it
For example, from the article "Why LSP?" [1]
"It is known that compilers are complicated, and a language server is a compiler and then some."
[1]: https://matklad.github.io//2022/04/25/why-lsp.html
* arbitrary file I/O. I can take my compile time data and write a python script to put it in a std::array, but I shouldn’t have to.
* non-fixed-sized containers, ie vector
* a generic memoization utility in the standard library for caching. If I want my function to handle any input at runtime, but I want to pre-populate a cache at compile time for values that I know will be called, it’s doable but not as easy as it should be. (In general, given the amount of algorithms that rely on memoization, I’m somewhat surprised that Python is the only language I know that makes memoization as easy as a decorator).
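For reference, the Python decorator alluded to in the last point is `functools.lru_cache` from the standard library: one line turns a naive recursive function into a cached one.

```python
from functools import lru_cache

# Without the cache, this naive recursion is exponential; with it, each
# fib(k) is computed once and reused.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

This is runtime memoization rather than the compile-time pre-population the comment asks for, but it is the ease-of-use bar being pointed at.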
https://ziglang.org/documentation/0.9.1/#embedFile
I haven't tried, but because it should just be equivalent to a string literal, you should be able to further process it using comptime as well.
> * non-fixed-sized containers, ie vector
One way is to just have a function count how many entries you need and then use arrays. This is all pretty straightforward because of comptime.
Real dynamic comptime containers are dependent on making a comptime allocator available. There's a ticket for that: https://github.com/ziglang/zig/issues/1291
> * a generic memoization utility in the standard library for caching. If I want my function to handle any input at runtime, but I want to pre-populate a cache at compile time for values that I know will be called, it’s doable but not as easy as it should be. (In general, given the amount of algorithms that rely on memoization, I’m somewhat surprised that Python is the only language I know that makes memoization as easy as a decorator).
I don't think it's possible to use comptime to generate wrapper functions duplicating the signature of a given function, at least I haven't found a way to do so. The function signature has to be spelled out in the comptime code.
I can give you the Rust perspective, but I'm not sure it's the best.
> arbitrary file I/O
include_str! and include_bytes! make the contents of a file available as a string or a byte array. More complex types would need a build script (or transmute).
> non-fixed-sized containers, ie vector
No support in `const fn`, but you can again use a build script as an escape hatch.
> a generic memoization utility in the standard library for caching
There's nothing in the stdlib, but there are a few third party crates that give you the easy one-line syntax, such as https://crates.io/crates/memoize
There's a proposal for that https://open-std.org/JTC1/SC22/WG21/docs/papers/2020/p1040r6...
> non-fixed-sized containers, ie vector
You can already use them in a constexpr context, you just can't leave it yet. So creating a vector on compile time and then using it on runtime is sadly not working right now but I think that's being worked on as well.
On the same note, let's say someone is creating something like what I have in mind: what would be the better option, to go the Forth way or the Lisp way, since both offer the same stuff to some extent?
I think I've realised what is up with Lisps, Forths, etc. Theoretically you can build all kinds of beautiful abstractions over them and use them for anything.
But in reality these languages can't do that, and/or their communities don't care to implement it, or it's not their goal.
It might sound bad, but it is what it is.
To put it more harshly: these languages are all talk.
SQLite already compiles queries into opcodes and then uses its own VM to interpret them. You would have to reimplement the internal VM.
And here you're proposing Zig while someone else will prefer Rust, and someone else Go. So you'll need N>1 rewrites.
This is why SQLite3 is written in C then, because for all the ways in which C sucks, it is the most common (lowest) denominator that ticks the portability checkbox.
Language runtimes that want to not have a C runtime embedded are the hardest hit. The best thing to do for those in the short-term is to talk to a C-coded IPC service that runs the C-coded thing that otherwise can't be run.
This is actually something that Zig is supposed to do as well. There should be nothing that you can do in C but can't do in Zig. So hypothetically, in the future (say, twenty years from now), a Zig implementation of SQLite could be a preferred one since you're not losing anything with it. (With the additional benefit that should you want to, you might be able to compile application-specific queries into it.)
No, it doesn't. C does not even assume a stack. A platform where function call frames are allocated on the heap would not be incompatible with C.
You could say that some C code assumes a stack, but that's pretty exceptional.
> Although I imagine that currently, in most cases, you'll want to avoid C for software reasons,
Yes.
> rather than hardware reasons
Hard to imagine hardware on which C could not run. Maybe a JVM chip, but even then, you could compile C to bytecode.
The problem is not with call frames being on the stack or on the heap; even a spaghetti stack would be problematic. One of the problems is that return addresses and data are interleaved in C-style frames, whereas multiple stack machines require them to be separate. It's not that you couldn't write an implementation of C for a stack machine, it's just that it would be probably very primitive and slow. Yes, you can run C at the very least the way it was done on Lisp machines, by allocating a large byte array and treating it as your physical memory, but surely you would want to avoid it if you could. There really is a reason why stack machines historically ran somewhat exotic languages like Forth.
HLSL and CUDA have C-like syntax, but underneath the syntax these things are very different from C.
On GPU almost nothing has an address, there's no stack, no malloc/free, no files or printf, and every instruction runs on 32+ threads in lockstep.
Great work of the author explaining comptime!
I can't figure out why I am supposed to care about Zig beyond it being fun. I get that it's interesting and more safe than C (honestly, though, what the hell isn't).
Say you write Rust pretty regularly for new product development: what does Zig offer to make my life better, my products more stable, etc.?
- The small footprint in Zig enables fast compilation (and getting faster) and fast feedback cycles.
- Arbitrary nastiness can happen using From and other traits in Rust, and the tendency to shadow names and rely on type inference can make that unpleasant to track down. In Zig you'd be forced to make a choice (for any non-trivial type) at the return or call site of how you were going to convert things.
- If you want to use the stdlib and have any non-trivial control over how objects are allocated you're in for a rough time in Rust (say, a circular buffer backing some objects and a reusable arena for others). You'll probably be reinventing a lot of wheels.
- Similarly with issues like implicit locking in stdout. It's executing a syscall per line anyway, but the lock can inadvertently make contended writes 1000x slower, so suddenly logging in multithreaded code needs a dedicated logger, and not any of the common options since those fall back to the locking stdout we're trying to avoid.
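On that last point, the usual workaround for stdout's implicit per-line locking is to take the lock once per batch; a sketch (the helper name is illustrative), using only the standard `std::io` API:

```rust
use std::io::{self, Write};

// Every `println!` acquires stdout's internal lock, so contended logging
// from many threads serializes line by line. Acquiring the lock once and
// writing the whole batch through it amortizes that cost.
fn write_lines(lines: &[&str]) -> io::Result<()> {
    let stdout = io::stdout();
    let mut out = stdout.lock(); // hold the lock for the whole batch
    for line in lines {
        writeln!(out, "{}", line)?;
    }
    Ok(()) // lock released when `out` is dropped
}
```

This helps within one process, but as the comment notes, it doesn't compose: any library that falls back to `println!` re-enters the contended path.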
The language was not small whatsoever at the time. It was not fun.
[1]: https://github.com/motiejus/makelua/tree/rust
It may not be the smallest language, but it compiles pretty tightly.
While you can technically use Rust in these domains you'll find yourself jumping through hoops and fighting against quirks that come with it being fairly high-level and very opinionated on how to enforce memory safety.
One such scenario I've encountered is implementing my own memcpy with a loop like `for i in 0..len`. This works in release builds, but without optimisations it produces a deep call stack that eventually also calls memcpy, so you get a stack overflow. Note how memcpy is implemented in rlibc to avoid this issue: https://docs.rs/rlibc/latest/src/rlibc/lib.rs.html#30-38
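For illustration, the rlibc-style workaround looks roughly like this (a sketch in the same spirit as the linked source, not a verbatim copy): a plain `while` loop over raw pointers, because in unoptimised builds a `for i in 0..n` loop goes through `Iterator` machinery whose codegen can itself end up calling memcpy, which recurses forever if the function you are writing *is* memcpy.

```rust
// Safety: caller must guarantee `dest` and `src` are valid for `n` bytes
// and do not overlap.
unsafe fn bytewise_copy(dest: *mut u8, src: *const u8, n: usize) {
    let mut i = 0;
    while i < n {
        *dest.add(i) = *src.add(i); // one byte at a time, no iterators
        i += 1;
    }
}
```

In release builds LLVM will happily vectorize this loop anyway, so the hand-written form costs little where it matters.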
It's funny how in rust you can write raw asm, but there seem to be quirks somewhere in the middle.
==========
Why Zig When There is Already C++, D, and Rust?
- No hidden control flow. If Zig code doesn’t look like it’s jumping away to call a function, then it isn’t.
- No hidden allocations. Zig has a hands-off approach when it comes to heap allocation. There is no new keyword or any other language feature that uses a heap allocator. The entire concept of the heap is managed by library and application code, not by the language.
- First-class support for no standard library. Zig has an entirely optional standard library. Each std lib API only gets compiled into your program if you use it. Zig has equal support for either linking against libc or not linking against it. Zig is friendly to bare-metal and high-performance development.
- A Portable Language for Libraries. Zig is attempting to become the new portable language for libraries by simultaneously making it straightforward to conform to the C ABI for external functions, and introducing safety and language design that prevents common bugs within the implementations.
- A Package Manager and Build System for Existing Projects. Not only can you write Zig code instead of C or C++ code, but you can use Zig as a replacement for autotools, cmake, make, scons, ninja, etc. And on top of this, it (will) provide a package manager for native dependencies. This build system is intended to be appropriate even if the entirety of a project’s codebase is in C or C++.
- Simplicity. Zig has no macros and no metaprogramming, yet is still powerful enough to express complex programs in a clear, non-repetitive way. Even Rust has macros with special cases like format!, which is implemented in the compiler itself. Meanwhile in Zig, the equivalent function is implemented in the standard library with no special case code in the compiler.
- Tooling. Zig provides binary archives for Linux, Windows, macOS and FreeBSD. It is installed by downloading and extracting a single archive, no system configuration needed. It is statically compiled, uses LLVM, has out of the box cross-compilation to most major platforms, and ships w/ libc source and dynamically compiles when needed. The Zig build system has caching and compiles C and C++ code with libc support
==========
[0] https://ziglang.org/learn/why_zig_rust_d_cpp/
This one is an odd point; this very article is a demonstration of metaprogramming*:
> Thanks to Zig’s type reflection we can read a row of data into a user-provided type without needing to write any “mapping” function: we know the type we want to read (here the User struct) and can analyse it at compile-time.
* Which is a feature I favor, for the record.
To me it comes down to Zig is a "modern" C, but unlike most other attempts at replacing C it doesn't skip some of C's use cases.
Rust can provide some guarantees but it still won't protect you from logic errors or a "clever" coworker. In comparison, zig is simple and at the end of the day readability is my best defense against bugs/errors.
There was an interesting comparison (and discussion) of zig vs rust safety a few months ago: https://news.ycombinator.com/item?id=26537693
I find myself agreeing with pron and others in that discussion when they say no one really wants to use a safe language. What they want are correct programs.
What remains to be seen is how easy it is to write correct programs in each and the values people place on the deltas between the two languages.
This isn't really consistent with my experience. Compared to writing other languages with varying levels of strictness (C, Python, Go to name a few), Rust's compiler saves me a lot of time writing tests and finding bugs.
It doesn't save me the work of fixing them, but in my experience, bugs that are hard to find and easy to fix drastically outnumber bugs that are easy to find and hard to fix.
That said, I haven't tried Zig. It's on my radar, but just haven't had a new project to start lately.
Once I started understanding what the borrow checker was saying, it wasn't "fighting the compiler" anymore: I realized that what I was asking the compiler to do was really not a good idea. It becomes your friend, basically.
I didn't make this post to shill Rust; I'm pretty language-agnostic, to my own detriment... The point is, it's worth considering, and worth trying again if you didn't get it the first time around. Rust took me two attempts to become a tool I'd use; I honestly hated it at first.