I started building Bun a little over a year ago, and as of about 20 minutes ago, it's in public beta.
One of the things I'm excited about is bun install.
On Linux, it installs dependencies for a simple Next.js app about 20x faster than any other npm client available today.
hyperfine "bun install --backend=hardlink" "yarn install --no-scripts" "npm install --no-scripts --ignore-scripts" "pnpm install --ignore-scripts" --prepare="rm -rf node_modules" --cleanup="rm -rf node_modules" --warmup=8
Benchmark #1: bun install --backend=hardlink
Time (mean ± σ): 25.8 ms ± 0.7 ms [User: 5.4 ms, System: 28.3 ms]
Range (min … max): 24.4 ms … 27.6 ms 76 runs
Benchmark #2: yarn install --no-scripts
Time (mean ± σ): 568.4 ms ± 15.3 ms [User: 781.6 ms, System: 497.4 ms]
Range (min … max): 550.8 ms … 604.5 ms 10 runs
Benchmark #3: npm install --no-scripts --ignore-scripts
Time (mean ± σ): 1.261 s ± 0.017 s [User: 1.719 s, System: 0.516 s]
Range (min … max): 1.241 s … 1.286 s 10 runs
Benchmark #4: pnpm install --ignore-scripts
Time (mean ± σ): 1.343 s ± 0.003 s [User: 601.3 ms, System: 151.6 ms]
Range (min … max): 1.339 s … 1.348 s 10 runs
Summary
'bun install --backend=hardlink' ran
22.01 ± 0.85 times faster than 'yarn install --no-scripts'
48.85 ± 1.51 times faster than 'npm install --no-scripts --ignore-scripts'
51.99 ± 1.45 times faster than 'pnpm install --ignore-scripts'
zebracanevra 44 days ago [-]
It seems like bun caches the manifest responses. PNPM, for example, resolves all package versions when installing (without a lockfile), which is slower. The registry does have a 300 second cache time, so not faulting you there, but it means your benchmark is on the fully cached path, whereas the uncached path is what you'd actually hit when installing something for the first time. Subsequent installs would use the lockfile, and bun and PNPM both seem fast* in that case.
If I install a simple nextjs app, then remove node_modules, the lockfile, and the ~/.bun/install/cache/*.npm files (i.e. keep the contents, remove the manifests) and then install, bun takes around 3-4s. PNPM is consistently faster for me at around 2-3s.
I'm not familiar with bun's internals so I may be doing something wrong.
One piece of feedback, having the lockfile be binary is a HUGE turn off for me. Impossible to diff. Is there another format?
* I will mention that even in the best case scenario with PNPM (i.e. lockfile and node_modules) it still takes 400ms to start up, which, yes, is quite slow. So every action APART from the initial install is much MUCH faster with bun. I still feel 400ms is good enough for a package manager which is invoked sporadically. Compare that to esbuild which is something you invoke constantly, and having that be fast is such a godsend.
Jarred 44 days ago [-]
> It seems like the main thing that bun does to stay ahead is cache the manifest responses. PNPM, for example, resolves all package versions when installing (without a lockfile), which is slower.
This isn't the main optimization. The main optimization is the system calls used to copy/link files. To see the difference, compare `bun install --backend=copyfile` with `bun install --backend=hardlink` (hardlink should be the default). The other big optimization is the binary formats for both the lockfile and the manifest. npm clients waste a lot of time parsing JSON.
The more minor optimizations have to do with reducing memory usage. The binary lockfile format interns the strings (very repetitive strings). However, many of these strings are tiny, so it's actually more expensive to store a hash and a length separately from the string itself. Instead, Bun stores the string as 8 bytes, and one bit says whether the entire string is contained inside those 8 bytes or if it's a memory offset into the lockfile's string buffer (since 64-bit pointers can't use the full memory address and bun currently only targets 64-bit CPUs, this works).
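Roughly, the idea looks like this (a simplified sketch with hypothetical names, not Bun's actual Zig code):

    // A string handle is 8 bytes. The top bit marks whether the remaining
    // bytes *are* the string (7 bytes or fewer, zero-padded) or an
    // offset + length into the lockfile's shared string buffer.
    const INLINE = 1n << 63n;

    function intern(s: string, buf: { bytes: number[] }): bigint {
      const data = new TextEncoder().encode(s);
      if (data.length <= 7) {
        let handle = INLINE;
        for (let i = 0; i < data.length; i++) {
          handle |= BigInt(data[i]) << BigInt(8 * i);
        }
        return handle; // tiny string: no buffer lookup needed at all
      }
      const offset = buf.bytes.length;
      buf.bytes.push(...data);
      // 31 bits of length, the rest offset; fits because the top bit is the tag.
      return (BigInt(offset) << 31n) | BigInt(data.length);
    }

The win is that most strings compare and hash as a single 8-byte value with no pointer dereference.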
yarn also caches the manifest responses.
> If I install a simple nextjs app, then remove node_modules, the lockfile, and the ~/.bun/install/cache/*.npm files (i.e. keep the contents, remove the manifests) and then install, bun takes around 3-4s. PNPM is consistently faster for me at around 2-3s.
This sounds like a concurrency bug with scheduling tasks from the main thread to the HTTP thread. I would love for someone to help review the code for the thread pool & async IO.
> One piece of feedback, having the lockfile be binary is a HUGE turn off for me. Impossible to diff. Is there another format?
If you do `bun install -y`, it will output as a yarn v1 lockfile.
If you add this to your .gitattributes:
*.lockb binary diff=lockb
It will print the diff as a yarn lockfile.
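You'll also need to register the diff driver itself for the `diff=lockb` attribute to resolve to anything, something like `git config diff.lockb.textconv bun` (textconv runs the file through the given command to produce the text to diff).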
the_duke 44 days ago [-]
Did you benchmark whether the binary lockfile actually makes any appreciable difference for execution time?
Considering the speed with which a fast parser can gobble up JSON, I'm somewhat skeptical that this would be relevant for common operations.
joeblubaugh 44 days ago [-]
> The other big optimization is the binary formats for both the lockfile and the manifest. npm clients waste a lot of time parsing JSON.
Yes he did
Ygg2 43 days ago [-]
Where exactly?
I don't see it either. Where is the perf data showing that this was the issue?
rattray 43 days ago [-]
Take a look at Jarred's Twitter, and you'll see he spends a lot of time profiling things.
Of course, I can't say for sure that he looked at the fastest possible way to parse json here, but my intuition would be that if he didn't, it's because he had an educated guess that it'd still be slower.
Ygg2 39 days ago [-]
That's not logically connected to the statement that "parsing JSON is a major bottleneck".
It's just a comparison of the execution times of several different package managers.
A better comparison would be parsing JSON vs. binary within Bun itself.
jahewson 44 days ago [-]
Fast JSON parsers use many exotic tricks. Picking one optimisation and baking it into the file format isn’t so bad.
the_duke 43 days ago [-]
You don't need to go straight to simdjson et al; something like Rust's serde, which deserializes to typed structs with data like strings borrowed from the input, can be very fast.
IshKebab 43 days ago [-]
It's still very slow compared to binary formats. Especially indexed ones like SQLite.
laumars 43 days ago [-]
Nobody is arguing that JSON is as performant as binary formats. What the others are saying is that the amount of JSON in your average lock file should be small enough that parsing it is negligible.
If you were dealing with a multi-gigabyte lock file then it would be a different matter but frankly I agree with their point that parsing a lock file which is only a few KB shouldn’t be a differentiator (and if it is, then the JSON parser is the issue, and fixing that should be the priority rather than changing to a binary format).
Moreover the earlier comment about lock files needing to be human readable is correct. Being able to read, diff and edit them is absolutely a feature worth preserving even if it costs you a fraction of a second in execution time.
IshKebab 43 days ago [-]
> I agree with their point that parsing a lock file which is only a few KB
You mean a few MB? NPM projects typically have thousands of dependencies. A 10MB lock file wouldn't be atypical and parse time for a 10MB JSON file can absolutely be significant. Especially if you have to do it multiple times.
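Easy to check on your own project with a quick script (node or bun; the lockfile path is whatever your project has):

    import { readFileSync } from "node:fs";

    // Read the lockfile once, then time just the parse.
    const raw = readFileSync("package-lock.json", "utf8");
    console.time("JSON.parse");
    JSON.parse(raw);
    console.timeEnd("JSON.parse");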
> Being able to read, diff and edit them is absolutely a feature worth preserving even if it costs you a fraction of a second in execution time.
You can read and edit a SQLite file way easier than a huge JSON file.
zebracanevra 44 days ago [-]
> If you add this to your .gitattributes:
Not applicable to GitHub etc though.
I'm also not seeing any speed differences when using -y/yarn lockfile. Why not make it the default?
brasic 44 days ago [-]
> Not applicable to GitHub etc though.
GitHub (disclosure: where I work) does respect some directives in a repo’s .gitattributes file. For example, you can use them to override language detection or mark files as generated or vendored to change diff presentation. You can also improve the diff hunk headers we generate by default by specifying e.g. `*.rb diff=ruby` (although come to think of it I don’t know why that’s necessary since we already know the filetype — I’ll look into it)
In principle there's no reason we couldn't extend our existing rich diff support used for diffing things like images to enhance the presentation of lockfile diffs. There's not a huge benefit for text-based lock files but for binary ones (if such a scheme were to take off) it would be a lot more useful.
hoten 44 days ago [-]
Any way to use `.gitattributes` to specify a file is _not_ generated? I work on a repo with a build/ directory with build scripts, which is unfortunately excluded by default from GitHub's file search or quick-file selection (T).
brasic 43 days ago [-]
Yes! Use `<pattern> -linguist-generated` (the minus sets a negative override for any gitattribute).
Does this really work for jump to file? (We're not talking language statistics or suppressing diffs on PRs, which is mostly what the linguist readme is talking about.)
> File finder results exclude some directories like build, log, tmp, and vendor. To search for files within these directories, use the filename code search qualifier.
(The inability to quickly jump to files from the /build/ folder with `T` has been driving me crazy for YEARS!)
Correct me if I'm wrong, but checking those two files:
I don't see `/build` matching anything there. So to me this `/build` suppression from search results seems to be controlled by some other piece of software at GitHub :/
I checked and you're right: The endpoint that returns the file list has a hardcoded set of excludes and pays no attention to `.gitattributes`.
I think it's reasonable to respect the linguist overrides here so I'll open a PR to remove entries from the exclude if the repo has a `-linguist-generated` or `-linguist-vendored` gitattribute for that directory [1]. So in your case you can add
build/** -linguist-generated
to `.gitattributes` and once my PR lands files under `build` should be findable in file-finder.
Thanks for pointing this out! Feel free to DM me on twitter (@cbrasic) if you have more questions.
Does it try to use reflinking with `--backend=copyfile` when the FS supports it?
Jarred 43 days ago [-]
On macOS it explicitly uses clonefile()
On Linux, not yet. I don't have a machine that supports reflinks right now and I am hesitant to push code for this without manually testing it works. That being said, it does use copy_file_range if --backend=copyfile, which can use reflinks.
aconbere 44 days ago [-]
Hmmm, pnpm also uses hardlinks (or reflinks if available) to copy out of a content-addressable on-disk cache.
jbverschoor 43 days ago [-]
Still don't understand why we even need all these inodes. The repo is centrally accessible (and should be read-only btw). Resolving that shouldn't be a problem. It's been more than a decade and npm is still a mess.
aconbere 43 days ago [-]
No arguments here! I'm consistently dismayed at the state of these tools :(
syrusakbary 44 days ago [-]
I'm ultra excited about Bun being finally open sourced, congrats on the amazing progress here Jarred!
Since JSC is actually compilable to Wasm [1] and Zig supports WASI compilation, I wonder how easy it would be to get it running fully in WAPM with WASI. Any thoughts on how feasible that would be?
Congratulations for the release! You are doing impressive work with bun. I find the built-in sqlite particularly exciting, and I cannot wait to move all my projects to bun. Selfishly speaking (my 2012 mbp doesn't support AVX2 instructions), I hope that now that the project is public, since you are going to get a lot of issue reports about the failure on install, you will find some time to get back to issue#67. Thank you, and keep up the excellent work.
nicoburns 44 days ago [-]
Yeah, the install functions of npm/yarn/pnpm are all incredibly slow, and they also seem to get slower super-linearly with the number of dependencies. I have one project where it can take minutes (on my 2015 MacBook - admittedly it's quicker on my brand new machine) just to add one new dependency and re-resolve the lock file. If that can be solved by a reliable tool I'd definitely switch!
oreilles 44 days ago [-]
This is one of, if not the, most insane things in web dev at the moment. Git can diff thousands of files between two commits in less time than it takes to render the result on screen. But somehow it can take actual minutes to find out where to place a dependency in a simple tree with npm. God, why?
capableweb 44 days ago [-]
> it can take actual minutes to find out where to place a dependency in a simple tree with npm. God, why?
npm is famous for a lot of things & reasons, but none of those are "because it's well engineered".
To this day, npm still runs the `preinstall` script after dependencies have actually been downloaded to your disk. It modifies a `yarn.lock` file if you have it on disk when running `npm install`. With lots of things like these, it's hardly surprising that the install is slow.
hoten 44 days ago [-]
Since when would npm install modify a yarn lock file?
I don't know exactly the "since when", but recently I was caught off guard after issuing `npm i` by mistake in a yarn project. It modifies "yarn.lock" by changing some, if not all, of the registry URLs from the yarn package registry to the npm package registry.
tempest_ 44 days ago [-]
People building language tooling often use the language itself, even if it is not very suitable for the task at hand.
This happens because the tooling often requires domain knowledge which they have and if they set out to write tooling for a language they tend to be experienced in that language.
kaba0 42 days ago [-]
JS definitely doesn’t help, but it is a surprisingly fast language with modern runtimes. The problem lies elsewhere.
nicoburns 44 days ago [-]
In fairness, I suspect your average node_modules folder has a lot more files than your average git repo (maybe even an order of magnitude more)
pxc 43 days ago [-]
> it can take actual minutes to find out where to place a dependency in a simple tree with npm. God, why?
Is it even a tree? Does NPM still allow circular dependencies?
nine_k 44 days ago [-]
But who still uses npm for that, and for what reason? Yarn seems much faster.
solardev 44 days ago [-]
Is yarnv2 any better for this? I haven't tried it because it wasn't backward-compatible, but for fresh projects, would that be a better choice?
metaldrone 43 days ago [-]
Yes, yarn v2 (now v3) is a bit faster than yarn v1, but don't expect miracles.
Yarn v2 is backwards compatible though. You just need to use the node_modules "linker" (not the default one) and it's ready to go.
eyelidlessness 43 days ago [-]
> Yarn v2 is backwards compatible though. You just need to use the node_modules "linker" (not the default one) and it's ready to go.
Last I checked, not quite. Yarn 2+ patches some dependencies to support PnP, even if you don’t use PnP. I discovered this while trying out both Yarn 2 and a pre-release version of TypeScript which failed to install—because the patch for it wasn’t available for that pre-release. I would have thought using node_modules would bypass that patching logic, but no such luck.
conaclos 43 days ago [-]
I have just discovered yarn2 / yarn3. The main advantage over npm / pnpm seems to be the Zero Install philosophy [1] and the Plug'n'Play architecture [2]. Do you have any feedback about these features?
By the way, the yarn2 / yarn3 project is hosted on a distinct repository [3].
Maybe the problem lies in too many dependencies in the first place? Nowadays JS development is super bloated.
claytongulick 44 days ago [-]
It's a deeply impressive achievement, absolutely blows my mind that you were able to do it in a year.
andai 44 days ago [-]
Dang, nice work! Any idea where the slowness comes from in the other tools, how Bun manages to be so much faster?
the_duke 44 days ago [-]
The benchmark source code links on the homepage are "Not found".
Also a few questions:
What do you attribute the performance advantage to? How much of it is JavascriptCore instead of v8 versus optimized glue binding implementations in the runtime? If the latter, what are you doing to improve performance?
Similarly for the npm client: how much is just that bun is the only tool in a compiled, GC-free language versus special optimization work?
How does Zigs special support for custom allocators factor in?
Jarred 44 days ago [-]
will fix the broken links shortly
edit: should be fixed
adamwathan 44 days ago [-]
I've been following Jarred's progress on this on Twitter for the last several months and the various performance benchmarks he's shared are really impressive:
Easy to dismiss this as "another JS build tool thing why do we need more of this eye roll" but I think this is worth a proper look — really interesting to follow how much effort has gone into the performance.
slimsag 44 days ago [-]
Agreed, I've also been following his work for a while now. Jarred is pulling off something very impressive here :) Congratulations on the public beta!
clarkmoreno 44 days ago [-]
big agree!
m0meni 44 days ago [-]
As far as I know, one guy made this (https://twitter.com/jarredsumner) working 80h-90h on it a week. His twitter has some cool performance insights into JS and JS engines. It's the biggest Zig codebase too I think.
That's pretty good! Big project to accomplish in a year
ksec 43 days ago [-]
If there was one comment or tweet that led me to follow him, it was this [1]
"1ms is an eternity for computers"
When was the last time you heard a web developer, frontend, backend or web tooling dev state that? [2] It is always: oh, the network latency dominates, or the DB response time dominates. It is only one tenth of a second (100ms), it doesn't matter. It is fast enough(TM).
[2] Actually, Nick Craver did that a lot during his StackOverflow era.
Cthulhu_ 41 days ago [-]
It's because the web developer is so far removed from the hardware; we work in Typescript, which is transpiled to Javascript, which is minified, optimized and compressed before it's sent to the browser, where it's run in a JS engine in a browser in an operating system where there's a few more steps to the actual hardware.
And that's plain JS, most people don't even work in plain JS but in a framework where they don't just say "var x = y" but "publish this event into my Observable stack which triggers a state update which will eventually, somehow, update my component which will then eventually update the DOM in the browser", after which the browser's own whole stack of renders takes over.
Meanwhile in video games you can say `frame[0][0] = 255` to set a pixel to white, or whatever.
I think it's a matter of levels of abstraction; with webapps, I'd argue they've gone even further than Java enterprise applications in the 90's / 2000's.
lewispollard 41 days ago [-]
More web developers need to try out game development, it's a similar enough domain in terms of rendering UI and being responsive, but there's absolutely no tolerance for the kind of time wastage that web developers seem to accept as a given.
switz 44 days ago [-]
I'm really excited about bun – it represents an opportunity to deeply speed up the server-side javascript runtime and squeeze major performance gains for common, real-world use-cases such as React SSR.
Also, Jarred's twitter is a rich follow full of benchmarks and research. If nothing else, this project will prove to be beneficial to the entire ecosystem. Give it some love.
nine_k 44 days ago [-]
Why would V8 be less suitable for stuff like React SSR, compared to JSCore?
I have no opinion, just interested.
oorza 43 days ago [-]
Faster startup -> lower serverless bills for a lot of shops. Shaving 25ms off a request might not seem like it's important, but if that saves you a million serverless seconds in aggregate, you just earned a bonus.
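(For scale: at 25ms saved per request, it takes about 40 million invocations to add up to a million seconds of billed compute.)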
ec109685 42 days ago [-]
For any well trafficked site, the vast majority of their lambda invocations will be from a warm start.
progx 43 days ago [-]
Millions of serverless seconds? Better to get a cheap server for all basic executions and only use serverless for randomly huge traffic.
premun 43 days ago [-]
But what if you're Azure or Amazon? Then you definitely care a lot about this kind of thing
kaba0 42 days ago [-]
I’m fairly sure they have their own servers where they can just run their processes continuously in the background.
immigrantheart 44 days ago [-]
The best part of being a JavaScript programmer is that you have the entire world working for you for free.
choward 44 days ago [-]
What makes this specific to JavaScript? It applies to all open source.
hn_throwaway_99 44 days ago [-]
The open source JS ecosystem, though, has soooooo many folks working on it that you can almost always find something you need. Certainly compared to something like Java, which has a couple "800 pound gorilla" projects, but then it falls off pretty quickly.
Of course, this is very much a double-edged sword, with all the "leftpad" type dependency nightmares and security issues, as well as the "Hot new JS framework of the month" issues. Still, I think the dependency issues are solvable and dependency tooling is getting better, and the frameworks issue has calmed down a bit such that it's easy to stick to a couple major projects (e.g. React, Express) if so desired, but more experimental, cool stuff is always out there.
IshKebab 43 days ago [-]
Yeah definitely a double edged sword. So much of the JS ecosystem is crap written by keen beginners who don't know what they're doing.
You might say "so what they're doing it for free, you don't have to use their stuff", but often you do because the existence of a sort-of-working solution means that other people are much less likely to write a robust solution for the same problem. So everyone ends up using the crap one.
Rust and Go are way way better in that regard.
kaba0 42 days ago [-]
Java’s ecosystem is not much smaller than JS’s, and it has some beautiful libraries in specific niches, with few alternatives.
lenkite 43 days ago [-]
"which has a couple "800 pound gorilla" projects, but then it falls off pretty quickly."
Better to depend on numerous 800 pound long-lived gorillas than a million short-lived mayflies that keep dying.
hn_throwaway_99 43 days ago [-]
My point was that JS has its 800 pound gorillas too, and so if you only want to use that, you can.
But it has so much of a broader ecosystem of other tools that if you're willing to take that risk it's an option. Java basically just doesn't have that.
throwawaymaths 43 days ago [-]
Well those gorillas are really hard to maintain. Suppose you found that it wasn't quite what you wanted. Would you poke your head into the Kafka codebase?
lenkite 43 days ago [-]
Java Gorillas are far, far easier to maintain than JS mayflies. Static typing, top-notch refactoring tools, excellent IDE's with deep language analysis, stable ecosystems, minimal dependency chains, better performance, better profiling tools, trustable repositories with well-supported choices for self-hosting - the list goes on and on.
PS: I have already poked my head into the Kafka codebase in the past. Not the best written project and also confusing because of the Scala mix, but far more readable than several I have seen. And Java makes it easily navigable. Can even auto-gen diagrams to grok it better.
rebolek 43 days ago [-]
If you use some non-mainstream language it's quite the opposite. You're working for the entire world.
immigrantheart 44 days ago [-]
Almost everyone works in JavaScript.
kwizzt 44 days ago [-]
There’s a large world outside of JavaScript…
immigrantheart 43 days ago [-]
There is no company in this world that can avoid JavaScript.
kwizzt 43 days ago [-]
Company using JS != company don’t use other languages/tools. It’s pretty obvious that JS is only a small part of the programming/tech world.
Jatidude 42 days ago [-]
JS is the most widely used programming language in existence.
_gabe_ 43 days ago [-]
At the last company I worked at I only used C# + WPF (it was horrible). A couple jobs ago I only used R. There are companies with entire divisions that never have to touch javascript, and I'm certain there are companies that never use it. There's a very large world of programmers working for insurance/Healthcare/embedded/military/government that is nothing like modern web dev.
skybrian 44 days ago [-]
Often it's for smaller values of "the entire world" though.
latchkey 44 days ago [-]
Definitely beta, but really awesome to see someone working on this stuff.
"I spent most of the day on a memory leak. Probably going to just ship with the memory leak since I need to invest more in docs. It starts to matter after serving about 1,000,000 HTTP requests from web streams"
What's your point in quoting this without any comment? Are you trying to be snarky?
nicce 44 days ago [-]
He gave an example of the awesomeness: someone going so deep into something that only matters in a rare scenario.
pverghese 44 days ago [-]
Their reply was edited to include the first line. When I replied it was just the quote and no context
latchkey 44 days ago [-]
I only added the additional note after the comma, to clarify that it is still cool stuff, even if it is beta and has memory leaks.
At least in my usecase, I do about 35m hits / day... so this would fall over in less than an hour. 1m isn't that large of a number and the author is willing to shrug that off until after launching.
hoten 44 days ago [-]
Yeah... the top response to that tweet is pretty spot on.
I love these super-ambitious projects (see Parcel, Rome.js) because after several years they will still fail in many areas at once!
> Rome is a formatter, linter, bundler, and more for JavaScript, TypeScript, JSON, HTML, Markdown, and CSS.
> Rome is designed to replace Babel, ESLint, webpack, Prettier, Jest, and others.
Haven't seen it since.
> Parcel: The zero configuration build tool for JavaScript, CSS, HTML, TypeScript, React, images, SASS, SVG, Vue, libraries, Less, CoffeeScript, Node, Stylus, Pug, Electron, Elm, WebGL, extensions, GraphQL, MD
As long as you do the bare minimum for each target.
paulhodge 42 days ago [-]
One of the things that makes frontend dev a nightmare is having to wrangle so many different tools and configurations to build a site, even a simple one. The next step to improve the situation is all-in-one tools. We're lucky that people are trying to tackle this hard problem.
jokethrowaway 43 days ago [-]
parcel is great, I always use it for personal projects and it works pretty well
Normally I start with something else (webpack, rollup, whatever happened to be there with the example I'm starting from), then when I hit some roadblocks I just parcel index.html and I have something working.
wonderbore 42 days ago [-]
Parcel is great, until it isn’t. I regularly try to switch to Parcel and it’s just impossible on larger projects. If you start with parcel, great! (maybe, as long as you just need to bundle 2 modules and that’s it)
brundolf 43 days ago [-]
Oh that's all, okay, haha.
onion2k 43 days ago [-]
> Webpack, Babel, yarn, and PostCSS
Something has gone a bit wrong if you're running any of those tools in production.
its-dlh 43 days ago [-]
What do you mean? Those are incredibly common tools for production builds.
onion2k 43 days ago [-]
You don't use them in production though. Your code is built in a pipeline somewhere using those tools, typically CI/CD, and then the artifacts from that process are what gets deployed to production. If you're actually running Webpack on a production server then you're doing something very unusual.
jereees 43 days ago [-]
They are part of the production build pipeline. You need them to have the final product.
onion2k 43 days ago [-]
That's not what anyone means when they say "in production" though. When people talk about things being "in production" they mean "on a production server".
simjnd 42 days ago [-]
No, what most people mean when they say "in production" is "in a real project" in the sense that it is or will be published and used. It means something that is more than a demo / PoC / etc. You're being so ridiculously pedantic it's shocking.
onion2k 42 days ago [-]
> No, what most people mean when they say "in production" is "in a real project" in the sense that it is or will be published and used.
Most companies have things that run "in dev", "in staging", "in CI", and "in production". These map directly to some tools - for example, React has "development" and "production" modes. When someone says a server or a database or a tool is used "in production" they're usually referring to the live environment that users access. Most people use tools like Webpack locally to run code when they're doing dev work, and in CI to build things. If someone said to me "We're running Webpack in production" then I would have questions.
If you use "in production" to mean "anywhere in a project", then how do you differentiate between a staging environment and a production environment? Do you talk about "the staging environment that's in production"? That would be confusing...
whoopdeepoo 43 days ago [-]
You're being absurdly pedantic
alxndr 43 days ago [-]
So then it's accurate to say that no one uses TypeScript in production??
Hamcha 44 days ago [-]
I want to love this so much, kinda sad that Windows support is non-existent outside of WSL (which I try to use as a last-resort option).
I love the bundling of most tasks in one app, especially in an environment where I had friends refuse to interact because of the "framework of the month" problem.
I just wish it didn't rep Zig this much. I'm hyped for Zig as much as the next guy, but the website mentions it twice back to back, and I really think we should stop going "it's better cause it's written in X".
AndyKelley 44 days ago [-]
It's not enough to build something great if nobody knows about it. Marketing is just as important to the success of a project as engineering. By giving such a generous shoutout to Zig, complete with a call for donations, Jarred has effectively created a symbiotic relationship with the Zig project, a win-win situation where both projects are boosted up by each others' spotlights.
Hamcha 43 days ago [-]
A shoutout is very appreciated and putting Zig in the title line definitely is what caught my interest (so yes Zig is being used to promote bun probably more than bun is promoting Zig at this point) but the writing specifically on the website is a bit on the nose, don't you think? "Why is Bun fast? Because of Zig"
Rust is already somewhat infamous for this ("Rewrite it in Rust" is a meme) and has caused it to develop a bit of a stigma, at least in my circles.
I'm still rooting for Zig to get its place among the big ones (and bun seems definitely a nice way to push for it) I just hope that happens without creating the annoying cult-like behaviors that plagued the crab language.
kristoff_it 43 days ago [-]
Bun relies on Zig and Zig is an unfinished product that relies on donations to survive. A shout-out to Zig is not only a nice gesture but also a smart one, since it's in Bun's best interest for Zig to survive at least until v1.
The general concern about "written in Zig" being annoying is fair. I think it's a different beast when paired with a call to donate but regardless, if RIIZ is what worries you most, then you can sleep safe because our motto is "Maintain It With Zig", a conscious rejection of "Rewrite It in *".
Purely from better ergonomics[1], not a compile-time guarantee like Rust has. If your tools for concurrency are better, you're less likely to make a mistake.
Sometimes it is better if written in X. Tools that are themselves written in JS tend to be slow, compared to something like esbuild or swc, which, being written in compiled languages, are very fast.
davidmurdoch 43 days ago [-]
No Windows support is a blocker for me.
julienb_sea 44 days ago [-]
This is super cool and if I was working on more personal projects I would be tempted. In an enterprise context, moving to a reimplementation of core Node APIs is a terrifying prospect. There are infinite possible subtle problems that can appear and debugging even within official Node has been a challenge in the past. I don't know how this concern can be alleviated without seeing proven usage and broader community acceptance.
andyfleming 44 days ago [-]
It is scary, but it already seems more stable and backward-compatible than Deno. With some testing and further stabilization, I have a feeling bun might be a much more feasible and beneficial move.
andrew_ 44 days ago [-]
It's not backwards compatible, it's just Node compatible. Deno is not, and it states as much clearly. Thus the suggestion that bun is more backward-compatible than Deno is incorrect and speaks to a fundamental misunderstanding of the two projects.
andyfleming 44 days ago [-]
Sure, a poor choice of words. Node compatibility is what matters to me (and likely many others) practically as a node developer. Deno's benefits don't outweigh the costs of migrating to it and adopting it more broadly, in my opinion.
8n4vidtmkvmk 44 days ago [-]
tried deno briefly the other day, was a real pain. just wanted to hash some files. they had an std hash lib but dropped it in favor of web crypto, which bizarrely doesn't have a hash.update method for streaming?? not very good. prefer node's impl
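for reference, the node pattern I mean (file path is just a placeholder):

    // node:crypto hashes support incremental update(), so you can stream
    // a big file through without holding it all in memory.
    import { createHash } from "node:crypto";
    import { createReadStream } from "node:fs";

    const hash = createHash("sha256");
    for await (const chunk of createReadStream("./some-big-file")) {
      hash.update(chunk);
    }
    console.log(hash.digest("hex"));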
ksm1717 44 days ago [-]
I have a hunch that this concern can be alleviated with proven usage and broader community acceptance
kylemh 44 days ago [-]
A special dev, releasing a special thing. Chuffed for Jarred. I'm hoping Bun hops to me even quicker than it can run.
AtNightWeCode 43 days ago [-]
Impressive. I have to say that.
Performance is one thing (the benchmarks are probably wrong though), but will it solve any of the headaches you get with NodeJS? I, for instance, have 7 different NodeJS versions installed just to compile all the projects I use. The oldest version is currently 6(!). The NPM dependency hell is best in class. NodeJS LTS is nothing but a tag. Compliance with ECMAScript updates has not been great. It is still a mess with NodeJS to figure out if a package will work on a specific version. Still poor performance in comparison to other tech. And so on...
atom_arranger 43 days ago [-]
There will always be incompatibilities between versions. Ideally the tool would manage its own version, maybe with vendoring. Yarn does this optionally and it works well
At least with this you wouldn’t need to manage versions of the package manager and runtime separately
AtNightWeCode 42 days ago [-]
Well, I can't even run npm install on latest NodeJS LTS version. So, I can't even install yarn to begin with.
I've been following this project for a while now, and it's incredibly ambitious. It will take a while to reach its full potential but Jarred is doing an extraordinary amount of very good work on it, and I think there is a very good chance this will turn out to be a pretty big deal.
angelmm 43 days ago [-]
Impressive work here. Congrats Jarred on the launch of Bun!
I'm eager to see how the different runtimes will shine on the edge. Bun is clearly positioned as one option and the focus on this is stated on the main page.
I believe the traction for Bun will depend a lot on adoption. I see the point of having the fastest(TM) runtime on Linux and macOS, but that comes from the use of specific syscalls. If we move to the edge, it's not clear to me how this will be implemented. Maybe those syscalls are not available, or sandboxing makes things more difficult. The requirements for making it compatible may reduce its performance.
We will see :)
inglor 43 days ago [-]
Hey, excited to see more players in this space and more alternatives which I believe is a win for users.
If there is anything Node core can do better for you to allow better interop ping us. Also - I'm not sure if you're involved with the WinterCG stuff (I _think_ maybe?) but if you're not please come by!
WaffleIronMaker 44 days ago [-]
Note that the bun install script [1] seems to be hosted as an HTML file, not as a text file. I'm not sure to what extent that causes issues, but it seems atypical.
Just took it for a spin w/ SQLite. Pretty nice that it's got TypeScript support and SQLite support out of the box with no dependencies. The same project in Node pulled in over 100 dependencies and was about 1/3 as fast (super basic HTTP endpoint serving 2 rows of data). Bun performed roughly on par with Go. Again, a trivial test, but pretty exciting, nonetheless.
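For the curious, the Bun side of the test looked roughly like this (the database file, table, and port are placeholders; bun:sqlite and Bun.serve are the built-ins):

    import { Database } from "bun:sqlite";

    const db = new Database("app.db");
    const stmt = db.query("SELECT id, name FROM things LIMIT 2");

    Bun.serve({
      port: 3000,
      fetch() {
        // Serve the two rows as JSON on every request.
        return new Response(JSON.stringify(stmt.all()), {
          headers: { "Content-Type": "application/json" },
        });
      },
    });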
pjmlp 43 days ago [-]
> Bun.js uses the JavaScriptCore engine, which tends to start and perform a little faster than more traditional choices like V8
So it isn't really all written in Zig; there is some C++ helping there, actually.
mkishi 43 days ago [-]
Did you consider Deno as misleading?
The fact it states it's powered by JSC before even mentioning Zig makes it pretty clear, imo.
pjmlp 43 days ago [-]
Depends on how it sells the rewrite.
hu3 43 days ago [-]
Zig supports C/C++ cross-compilation out-of-the-box so the integration with JavaScriptCore is at the very least facilitated by Zig.
pjmlp 43 days ago [-]
Still isn't Zig.
Using the same reasoning I can assert having written a JavaScript runtime in e.g. F# by making use of C++/CLI to compile JSC.
hu3 43 days ago [-]
What's the point of your comment?
They don't claim it's pure Zig and even mention JavaScriptCore in their front page.
pjmlp 43 days ago [-]
> Bun: Fast JavaScript runtime, transpiler, and NPM client written in Zig
Definitely reads otherwise and is click bait, given the actual implementation.
cweagans 43 days ago [-]
I'm not really sure what your argument is. It sounds like you're complaining that he used a library to handle the actual javascript parsing/evaluation. There are still platform-specific things that need to be implemented and plugged in to the JS environment to be able to do anything useful (for instance, starting up a server and binding to a port isn't something that the OOTB JavaScriptCore sandbox is going to let you do - you have to implement that separately and plug it in yourself). The transpiler and npm client are completely separate things.
If you read "runtime" as "extensions that allow you to do actually useful stuff" instead of "parser/evaluator", I don't see an issue.
pjmlp 43 days ago [-]
My argument is as sound as the plain English reading of the sentence.
cweagans 42 days ago [-]
I just pointed out that there is a different interpretation of the sentence that makes it work. It's a little arrogant to assume that not only is your interpretation the "correct" one, but that it's the _only_ one.
There is precedent for this too -- for instance, the ".NET runtime" is not only the JIT/AOT compiler for CIL, but also the libraries providing the .net api + standards/mechanisms for loading other libraries, etc.
> Runtime describes software/instructions that are executed while your program is running, especially those instructions that you did not write explicitly, but are necessary for the proper execution of your code. [...] Runtime code is specifically the code required to implement the features of the language itself.
norswap 43 days ago [-]
Is JavaScriptCore really faster than V8 in general?
skavi 43 days ago [-]
Yup, at least according to Speedometer 2.0.
brillout 43 days ago [-]
HatTip [1] just added preliminary Bun support [2].
(HatTip: write universal server code that runs anywhere: Node.js, Edge, Deno, Bun, ... Also for library authors who want to provide a middleware that works anywhere, instead of locking their users into a single JS server runtime.)
@Jarred: We'd be curious to know what you think of HatTip!
The benchmark numbers for React server-side rendering are really impressive. How does bun manage to be so much faster, especially considering that React SSR is actually fairly compute-intensive (building up a DOM tree and all)?
leeoniya 44 days ago [-]
maybe it's bottlenecked on Node's built-in http module;
there are projects like uWebSockets.js (and hyper-express [1], built on top of it) which show a 10x increase in throughput.
One optimization he made is inlining jsx createElement calls
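i.e. instead of emitting a call into React at runtime, the transpiler can emit the object the call would have produced (simplified sketch; the real element shape has a couple more internal fields):

    // What React.createElement("div", null, "hi") builds, written out directly,
    // so SSR skips one function call per element:
    const el = {
      $$typeof: Symbol.for("react.element"),
      type: "div",
      key: null,
      ref: null,
      props: { children: "hi" },
    };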
mhh__ 44 days ago [-]
Gaming benchmarks is pretty easy if you have a very repetitive task. A good compiler developer should be able to make his or her compiler best everyone else's on one benchmark.
andsoitis 44 days ago [-]
"New levels of performance (extending JavaScriptCore, the engine)."
I excitedly thought this project might include a new JavaScript VM, not extending an existing one.
throwawaymaths 43 days ago [-]
At this rate, just give Jarred another 2 years.
crabmusket 43 days ago [-]
What would you like to see from a new JS VM?
oorza 43 days ago [-]
Do you just want a wish list? How about configurable garbage collectors, cacheable JIT outputs, or a standardized bytecode format to facilitate language interop?
crabmusket 43 days ago [-]
Those are great points. I asked because I have seen people get excited about new JS runtimes a lot when Deno came out, and not understanding that it was using V8 under the hood. So many people thought they wanted a "new TypeScript runtime" that would "compile to WASM" without really thinking through how that would work or what it would be. Spoilers: it would work the same as V8.
But thanks for those specific wishlist items, which are much more sensible! Isn't your last item WASM though, with its interface types proposal?
cyansmoker 44 days ago [-]
Congrats!
This is making me reconsider using the node ecosystem in some of my projects.
jakearmitage 44 days ago [-]
Any special magic going on with the interpreter code? Did zig allow you to write a more performant parser/AST walker?
conaclos 44 days ago [-]
Bun's parser is a Zig translation of esbuild's parser. esbuild's parser is already well tuned; Bun takes advantage of Zig to go further.
abdellah123 43 days ago [-]
I wonder how the performance compares between the two?
And can I use bun's parser as a drop-in replacement for esbuild?
conaclos 43 days ago [-]
esbuild is much more mature than bun. The author of esbuild cares a lot about compatibility with other bundlers and stability. Moreover, it is already insanely fast. I am not sure there is much reason to switch from esbuild to bun for bundling or transpiling code.
By the way, I think that bun does not incorporate the patches made to esbuild since the date of the translation.
fuu_dev 44 days ago [-]
Zig makes cross-compiling easier and is on par with C/C++ in terms of performance. There is no magic trick; the rest is just personal preference, I assume.
williamstein 44 days ago [-]
From the page: “Why is Bun fast? An enourmous amount of time spent profiling, benchmarking and optimizing things. The answer is different for every part of Bun, but one general theme: zig's low-level control over memory and lack of hidden control flow makes it much simpler to write fast software.”
dum replaces npm run and npx.
Instead of waiting 200ms for your npm client to start, it will start immediately.
tylerchurch 44 days ago [-]
Any detailed comparison of this vs. Node.js vs. Deno?
throwawaymaths 44 days ago [-]
I did a quick comparison, for my own reasons, using them as an "absolutely stupid runner" which boots a fresh VM, runs some JavaScript that converts a piece of JSON, and gets out (this is likely mostly measuring VM boot and cleanup only). Bun was crazy fast: a factor of 2 over libmozjs, and a factor of 3 over nodejs.
truth_seeker 44 days ago [-]
Roadmap goals look promising, especially under the Runtime section.
gratz, what was using Zig like? What kinds of problems did you have?
akagusu 44 days ago [-]
Bun won my heart because:
- it uses Zig and not Rust
- it uses JavaScriptCore and not V8
- it has built-in support for my favorite database: sqlite
- it is all-in-one
speed_spread 44 days ago [-]
Careful, enough comments like that might just make Zig the new Rust. Rewrite It In Zig! Zig Uber Alles!
OTOH, that could at last free Rust from being the shiniest kid on the block. Carry on...
deepstack 43 days ago [-]
However, the fact that it is NOT using V8 is quite a refreshing change. We need diversity in the JS ecosystem beyond Google's V8. Didn't realize JavaScriptCore is faster than V8.
3836293648 43 days ago [-]
To be fair they only said it starts faster than V8, nothing was said on runtime performance
speed_spread 43 days ago [-]
This is the same weak argument that's being used to sell native compilation on Java. Tons of graphics showing startup time improvement, never a word on what happens after. But the only cases where this metric makes a real difference are CLI utilities and Serverless (if you know others, please tell me)
stolsvik 43 days ago [-]
That’s not true, as far as I've read. I feel it is spelled out pretty thoroughly: startup time and memory footprint are way lower, but peak performance is also lower. JIT outperforms static compilation due to having more information. There are some tools to gather such information by running the program for a while and collecting stats, but JIT still outperforms.
abraxas 42 days ago [-]
Careful there. The JavaScript ecosystem is nothing if not diverse...
crabbygrabby 43 days ago [-]
You do realize people write rust for a reason right? Not because it's new and shiny...
seanw444 43 days ago [-]
Well obviously. It's not going to shine when it's rusty.
speed_spread 43 days ago [-]
macintux 43 days ago [-]
I’ve found that any humor here is downvoted unless it is both very creative and very short (basically no longer than a sentence).
Or maybe that’s just my criteria. Has nothing to do with the subject, and everything to do with what HN is for.
speed_spread 43 days ago [-]
True. I do allow myself some leeway when I get deeper in the conversation tree but it's still noise I'm adding since HN is much flatter than other forums. I really need an outlet that will fulfill my need for expression but without going full Reddit.
Or maybe I just need physical colleagues and coffee break humor.
Shadonototra 43 days ago [-]
that can't happen with Zig, since you can just drop your C/C++ headers and call it a day ;)
the_duke 43 days ago [-]
To be fair: there have been several SEGFAULT issues opened already on the repo.
The only time I've seen a SEGFAULT in Rust was when using a really badly implemented C library wrapper.
pverghese 43 days ago [-]
Kind of unfair if that's your concern since rust has segfault issues as well.
the_duke 43 days ago [-]
Only with unsafe Rust, and the ecosystem is generally very good about keeping unsafe code tightly scoped and correct.
CJefferson 43 days ago [-]
Does it? I've never seen safe Rust segfault.
crabbygrabby 43 days ago [-]
It can't; people are shilling Zig aimlessly.
3836293648 43 days ago [-]
The segfault can happen in safe rust, it just can't be caused by it. Segfaults don't happen immediately
sedatk 44 days ago [-]
What do you have against Rust?
mr90210 43 days ago [-]
The whole "rewrite everything in Rust" thing.
jokethrowaway 43 days ago [-]
Rewriting into a safe language actually has some benefits - namely compile-time safety guarantees.
Only RAM manufacturers like memory leaks.
Cthulhu_ 43 days ago [-]
While true, I think there's a lot of "this is better because it's been written in Rust" software around these days. The emphasis is on the language, not the problem being solved.
conaclos 43 days ago [-]
Agreed. Some projects attract attention just because they are written in Rust. Although Rust is well-designed, the language alone does not make a project good.
krona 43 days ago [-]
Memory leaks in Rust might be less common than in a fully GC'ed language, but saying Rust prevents them, or makes them less common than e.g. Zig/C++, seems like a stretch.
crabbygrabby 43 days ago [-]
You literally cannot get them, period, in Rust. Like, it is not possible. That's the point of Rust.
3836293648 43 days ago [-]
Leaks are absolutely considered safe by Rust. Just look at Box::leak. And if we're comparing to GC'd "leaks", those are typically hashmaps growing out of control, and that can also happen.
voorwerpjes 43 days ago [-]
Leaks can happen in rust. The guarantee safe rust makes is that memory leaks cannot result in undefined behavior.
crabbygrabby 43 days ago [-]
That's mostly a thing teenagers are doing to learn CS; when it's not, it's because the software is vulnerable or slow and benefits from a rewrite in a safe language.
sedatk 43 days ago [-]
How is that worse than Zig in that context?
avgcorrection 43 days ago [-]
It is effectively bait, irrespective of intent. Why would you say that “this pastry is great: it’s blueberry and not chocolate”. Is the implication that 50% of the enjoyment is on account of what it is not? Just weird.
k__ 44 days ago [-]
Haha, good question.
Should have used Nim, to make it interesting :D
aidaman 44 days ago [-]
too complicated
Existenceblinks 43 days ago [-]
The bible is too thick!
jalino23 44 days ago [-]
this is a very interesting difference too! can you please tell me what you like more about zig over rust and JavascriptCore over v8? very interesting cause I've always thought v8 is superior to JC cause v8 is chrome and JC = safari. and zig is very interesting too
skavi 44 days ago [-]
Safari consistently performs better than Chrome in JS benchmarks (on Macs).
easrng 43 days ago [-]
Also some things are wayyy slower on v8 compared to other engines. (I've only checked spidermonkey.) In my experience v8 is faster at running WASM and spidermonkey is faster at running optimized js (not quite asm.js but using some tricks like |0) and starting workers.
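(For anyone unfamiliar, the |0 trick pins values to int32 so the JIT can stay on integer fast paths:)

    // asm.js-style hint: every |0 tells the engine the value is a 32-bit integer.
    function sum(a: number, b: number): number {
      return ((a | 0) + (b | 0)) | 0;
    }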
chrisseaton 43 days ago [-]
> I've always thought v8 is superior to JC cause v8 is chrome and JC = safari
I have never in my life heard anyone say they thought V8 was superior to JSC - what are you basing that on?
jokethrowaway 43 days ago [-]
The entire marketing for Chrome was that V8 was faster.
Since 2019, it seems like JSC is actually faster (they even beat V8 benchmarks).
mi_lk 43 days ago [-]
source please?
conaclos 43 days ago [-]
Every JS engine has interesting design choices. V8 has more documentation than the others (the V8 blog is a great resource [1]). This makes it easier to get an overview of V8's internals. It would be nice to have more internals overviews from other engines, in particular SpiderMonkey. It is hard to find up-to-date info.
Unrelated question, maybe someone knows: each time I've opened a JavaScriptCore or WebKit source file, I've almost never seen any comments. Is it me and my bad sampling, or do these codebases have almost no documentation at all? That's really uncanny.
amelius 43 days ago [-]
> it uses JavaScriptCore and not V8
Isn't JavaScriptCore available only in Apple's ecosystem?
wootest 43 days ago [-]
JavaScriptCore is a constituent part of WebKit, which is pretty much available on washer-dryers these days. But the macOS/iOS/etcOS framework part of JavaScriptCore that adds Objective-C/Swift layers is only available for those platforms, yes: https://developer.apple.com/documentation/javascriptcore
ksec 43 days ago [-]
As much as I wholeheartedly agree, please don't spell it out.
Zig is not perfect, Zig is full of sin, in a world full of righteousness, using Zig should be closely guarded.
nojvek 43 days ago [-]
This is great. Does bun handle yarn.lock files and create them? I.e - can I replace yarn/npm with bun 1:1 for the speed gains?
I also wonder how bun compares to esbuild for js/ts transpile speed gains?
alexarena 44 days ago [-]
Congrats! Cool to see a new class of JS runtimes springing up. Lots to be excited about here, but cold start time seems like a game changer for building at the edge.
Why choose Zig? Is there something it provides that makes it particularly suitable for this type of project?
gnuvince 43 days ago [-]
From the webpage:
> An enourmous amount of time spent profiling, benchmarking and optimizing things. The answer is different for every part of Bun, but one general theme: Zig's low-level control over memory and lack of hidden control flow makes it much simpler to write fast software.
With modern hardware, accessing memory efficiently is key to writing fast software. Programs written in programming languages that don't let the developer control how data is laid out in memory usually face an uphill battle to be fast.
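A toy illustration of what that layout control buys (nothing to do with Bun's internals): summing a field over a million records from a contiguous typed array walks memory sequentially, while an array of heap objects chases a pointer per element.

    const n = 1_000_000;

    // One heap object per record: a pointer chase on every iteration.
    const objects = Array.from({ length: n }, (_, i) => ({ x: i }));
    let a = 0;
    for (const o of objects) a += o.x;

    // Flat, contiguous layout: sequential, cache-friendly reads.
    const xs = Float64Array.from({ length: n }, (_, i) => i);
    let b = 0;
    for (let i = 0; i < n; i++) b += xs[i];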
noduerme 44 days ago [-]
This sounds extremely impressive! One question: Most of my nodejs code relies heavily on a custom wrapper I built around node-mysql ... for some rather complicated historical reasons, not node-mysql2. In general, what database modules are supported out of the box (besides sqlite3)?
8n4vidtmkvmk 44 days ago [-]
try mysql3!
j/k. i wrote it so I'm biased but it hasn't seen much use outside my projects
noduerme 43 days ago [-]
heh. I understand. No one uses mine either. I wrote it around the core to cache server-side prepared statements and old PDO style bindings (WHERE `field`=:var)
I will check yours out if you post a link! or is it just node-mysql3?
Its main selling point is that it uses template strings so you can just do
sql`select f from t where x=${somevar}`
And the lib will take care of escaping somevar. I was getting sick of ORMs which make it even harder to write complex queries.
It's just a wrapper over mysql1 or 2. I forget which
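The mechanism, boiled down (a simplified sketch; the real lib does more, like identifier escaping):

    // A tagged template receives the literal parts and the interpolated values
    // separately, so values can be bound or escaped rather than spliced in as text.
    function sql(strings: TemplateStringsArray, ...values: unknown[]) {
      return { text: strings.join("?"), values };
    }

    const somevar = "O'Brien";
    const q = sql`select f from t where x=${somevar}`;
    // q.text === "select f from t where x=?"; q.values === ["O'Brien"]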
tipiirai 43 days ago [-]
Congratulations! This is one of those epic projects you only get to see a few times in your career. A potential/hopeful game changer, like jQuery or Node.
I'd like to know more about the bundled .bun files. What are they? How are they used? Are they usable in the browser too?
aledalgrande 44 days ago [-]
I wonder if bun also has a different approach to security when it comes to installing packages and running their scripts and/or using the file system at runtime, e.g. not giving access to the whole machine's file system like Node does?
toastal 43 days ago [-]
It's a shame proprietary Discord is their only communication option, and proprietary GitHub their only Git mirror. They're even advertising the Discord in the CLI (https://github.com/Jarred-Sumner/bun/blob/e4fb84275715bb4de4...). Also the shorthand syntax `bun create github-user/repo-name destination` is favoring users choosing GitHub above others instead of not favoring any specific Git forge (the best path I've seen is how nixpkgs supports github:u/repo gitlab:u/repo sourcehut:~u/repo etc. to shorthand popular options but not favoring any, while still being flexible enough to continue extending).
tasubotadas 43 days ago [-]
Why is a proprietary communication tool a problem?
What's next? Is it going to be a shameful practice to use Windows for development?
toastal 43 days ago [-]
> Why is a proprietary communication tools is a problem?
You're signaling to all contributors that you don't value their freedom or privacy.
Not everyone wants to give their data to a corporation. Some users have accessibility needs that straight-up aren't met by Discord's clients, and Discord sends cease-and-desists to everyone who tries to make a better or safer alternative client experience free of charge. There are free and libre alternatives, and choosing not to use or at least support one alongside shows your project's priorities (see: Libera.Chat, mailing lists, Matrix, Zulip, Fediverse, RSS/Atom feeds, hosting Discourse, et al.).
> What's next? It's gonna be a shameful practice to use windows for development?
Slippery nope.
oorza 43 days ago [-]
It's a cost-benefit analysis, like everything else. Supporting only github more than qualifies the Pareto need for the feature, as does Discord for realtime communication. None of the alternatives you mentioned to Discord, for example, are likely to already have a client installed for the vast majority of developers; with that as a litmus, the choices are effectively Discord or Slack. When you're doing the hard calculus of how to spend your most precious resource (time) in a FOSS project, you have to weigh the costs and rewards. Only supporting Github likely has no statistically significant difference in likelihood for contributions. Similarly, hosting conversations on an unusual platform most users are not already using increases the friction of their contributions, so you choose the most popular platform.
I'm sure this project's community would welcome a contribution to mirror their git in a read-only state somewhere else, because why wouldn't they? Similarly, I'm sure they'd be fine with collaborating on setting up bidirectional chat bots so you can communicate with them as you want.
But to expect these things from a nascent project seems ridiculous. We're not talking about React or Spring here, we're talking about a brand new project who should be investing as much time as possible making their software work, not catering to every potential communications niche.
If you've decided that contributing to someone else's code on Github violates your sense of ethics or privacy, that's well within your rights and I respect you for it, but you must have enough self-awareness to recognize that that puts you in the far extreme of digital ethicists. And that shouldn't come with an expectation that your ethics have been catered to.
toastal 43 days ago [-]
It reminds me of every parent's lesson: if everyone's jumping off a bridge, should you too? As stewards of OSS, we should shepherd users onto these FOSS platforms.
Instead, you've bifurcated your community, separating those passionate about FOSS and privacy from those who aren't.
kuschku 43 days ago [-]
> None of the alternatives you mentioned to Discord, for example, are likely to already have a client installed for the vast majority of developers
You seem to have a very distorted view of developers; the vast majority of free software developers are going to have either an IRC client or a Matrix client installed already.
oorza 43 days ago [-]
I've never seen Matrix discussed outside of HackerNews. In the last five years or so, the reaction to people finding out I still use IRC is either "What is IRC?" or "People still use that? Brings me back..."
I don't know a single person in any professional context that doesn't have one of Slack, Teams, or Discord installed.
I may be overstating how much smaller your pool is, but to say that choosing Discord or Slack doesn't grossly expand your reach is just naive.
kuschku 43 days ago [-]
In my pool, having slack or teams installed is less likely than having an IRC or Matrix client installed.
So it's probably just that, different bubbles that rarely intersect.
chrisseaton 43 days ago [-]
> You're signaling to all contributors that you don't value their freedom or privacy.
I don’t think software freedom is a focus of this project. They can’t cater to every unrelated hobby issue.
> shows your project's priorities
Yeah - the priority isn’t software freedom. They can’t prioritise everything.
deepstack 43 days ago [-]
yeah, what happens if, a few years down the road, the company decides to close the free accounts? To me, an open source project needs a way to archive the discussion and make it public, not tied to any particular company.
chrisseaton 43 days ago [-]
> yeah, what happens if, a few years down the road, the company decides to close the free accounts?
I guess they'll move on somewhere else? Likelihood seems low, impact seems low. Why spend energy on it?
> To me, an open source project needs a way to archive the discussion and make it public, not tied to any particular company.
Ok but that's what you're interested in. Most people aren't into that.
I don't get why you'd expect all projects to be focused on your particular hobby interests?
Maybe I love typography. Why isn't this project paying more attention to the typography in their website damnit!
deepstack 43 days ago [-]
>I guess they'll move on somewhere else? Likelihood seems low, impact seems low. Why spend energy on it?
many services don't really support migration of the data. Why would I want my data stuck somewhere?
chrisseaton 43 days ago [-]
> Why would I want my data stuck somewhere?
Nobody wants their data stuck - people assess the likelihood of that as low and the impact as low, so rationally don’t care about it.
If one of my personal projects was unilaterally deleted right now by GitHub it’d be annoying to lose my issues but I could recover ok. And I don’t think it’s likely anyway, so why worry?
People only have so much energy to spend worrying about things. Most would spend it building instead.
Can’t be that hard to understand?
cweagans 43 days ago [-]
> You're signaling to all contributors that you don't value their freedom or privacy.
Disagree strongly with this. It signals to me that the developer cares more about building a good thing than standing up FOSS tools to appease the zealots. Seems very pragmatic.
chakkepolja 43 days ago [-]
My problem is not that it's proprietary but that it doesn't appear in google search, unlike forums.
crabmusket 43 days ago [-]
Even Go's famously-derided "package management" used full URLs like `import "github.com/user/repo"`
toastal 43 days ago [-]
The Vim plugin communities are notorious about the GitHub bias too with almost everything just assuming GitHub.
npm supports shorthands for some Git forges (though no SourceHut or Codeberg), but without the `forge:` syntax you get GitHub as the default, which is also favoritism (no surprise with Microsoft owning both GitHub and npm).
The worst offender IMO though is Elm, which ties its entire package management ecosystem to GitHub: both your identity and the ability to upload a package require a GitHub account, hosting must be there too, and downloading requires that GitHub's servers are actually up (with the community needing hacks to deal with the not-so-uncommon likelihood of GitHub being down and no way to point to a mirror).
viginti_tres 44 days ago [-]
Was trying to benchmark bun but after 2 minutes it got stuck at typescript [1342/1364]. Running on an M1 Mac, nextjs project. I think I'm still going to stick with pnpm.
brunojppb 44 days ago [-]
This is an impressive achievement! Congrats Jarred for this initiative. This is going to help the JS ecosystem move further ahead.
wpnx 44 days ago [-]
Congrats Jarred :) It's been fun watching you build this over the last year on Twitter. Cheers to much success
adamddev1 43 days ago [-]
This looks really, really good! Does it (have plans to) support TCO (tail-call optimization)?
That's great, but PTC is not quite the same thing as TCO (as the blog post says). Too bad no big players in JavaScript seem to be moving to implement that.
radicalriddler 43 days ago [-]
Is bun.sh (the website) open sourced? It says it was built with bun, anyone have a link?
Neat. Anyone try it on ARM yet? I’m interested in 32 bit ARM, but 64 will do too.
gardenhedge 44 days ago [-]
Awesome. The homepage explains it very well which is impressive.
vaughan 43 days ago [-]
Any plans for desktop? An Electron replacement would be nice.
newbieuser 43 days ago [-]
at first glance it looks like a more developer friendly tool compared to deno
picozeta 43 days ago [-]
Nice, but using a memory unsafe language in a web context is too risky in my opinion.
Etheryte 43 days ago [-]
Does that mean you also avoid using Chrome, Firefox, Node, etc?
picozeta 43 days ago [-]
No, because I have to.
But there are many capable languages for developing memory-safe web applications (Haskell, Go, Rust, Java, Python, TypeScript, Elm, Clojure, ...), so why would one choose one that's not safe?
Etheryte 43 days ago [-]
For starters, Go is not even memory safe. That aside, there are many reasons to choose a different language, memory safety is only one of very many tradeoffs that you make when choosing your tooling. In Bun's example, extreme performance has been achieved which no comparable alternative provides. Neither performance nor memory safety are "better" or "worse" by themselves, it's a matter of which tradeoffs you choose.
postalrat 43 days ago [-]
Did you want to say (Rust) but felt compelled to list more?
Can Java, Python, TypeScript, Elm, or Clojure even be considered memory safe, since they run on VMs or interpreters that might not be memory safe?
KenTG 38 days ago [-]
Nice!
asciiresort 44 days ago [-]
Not to be a cynic, but I wonder how much of the motivation to create a competing runtime in recent months is in response to the eye-watering (I know, I know, it's all relative) tens of millions in funding Deno just raised.
I don’t intend this as a knock on this project. Competition is good and, this space, unlike the rest of JavaScript, could do with more players. There are some promising numbers and claims here. I hope it works out.
I’m genuinely posing this intellectual question of financial incentive as a theory for JavaScript fatigue as a whole.
High profile threads on JavaScript fatigue trend on HackerNews multiple times a week. The wide range of theories about why web developers reinvent the wheel strangely leave out the financial incentives.
Everyone claims their framework is fast, powerful, light, batteries-included, modern, retro-futuristic, extensible, opinionated, configurable, zero-config, minimal (3kb, gzipped and minified, of course). The list goes on. A few days ago, I was chatting with someone about how all these JavaScript libraries make these words meaningless. To demonstrate, I screenshared a bunch of landing pages. I haven’t exhaustively, in one sitting, cross-referenced these landing pages, but 90% of the libraries shared the same layout: 3 columns / cards with one of those hyperbolic words.
Previously I thought it was pretentious and weird that Svelte marketed itself as “cybernetically enhanced web apps”. What does that even mean? Then again, none of the descriptor words like light, dynamic, and ergonomic mean much. At least Svelte was memorable.
Occasionally, one of these libraries would describe their interface as being ergonomically designed. As if other developers designed their interfaces to not be ergonomic. It’s like how we’d all like to think we’re nice, good, decent people. The majority of people would not do bad stuff if they perceived it as such.
I do think most JavaScript developers have good intentions. Then I’ve also seen DevRel / Evangelist types who shill their library, with disingenuous benchmarks, knowing full well there are better solutions, or that they can help make existing solutions better, to everyone’s benefit. The spoils include consulting, training, speaking fees, books, swag, collecting Patreon ( there are some controversial projects which come to mind ), resume building, GitHub activity social capital ( I’ve talked to some recruiters who were fixated on the idea that you publish code on GitHub, specifically Github, because to them, VCS=Git=GitHub, or it doesn’t exist )
maclockard 44 days ago [-]
I'm also pretty cynical of most JS rebuild/reinvention projects. I'm very tentatively excited by this one _because_ it looks like all it does is incrementally improve. Having something that is a drop-in API compat replacement for yarn 1/npm and node makes it potentially really easy to get the benefits of incremental perf improvements _without_ needing to reinvent the wheel like yarn 2 or deno.
forty 44 days ago [-]
100% this. Being compatible with nodejs API makes it possibly useful, unlike some other projects which want to throw away the huge npm ecosystem. Why on earth would anyone use JS (or even TS) server side if not to benefit from the ecosystem? Unlike on the web, there are plenty of better languages to use if you don't want npm.
stevage 44 days ago [-]
For me and possibly other full stack devs, because I don't want to stay current in two different languages. Using python and JavaScript sucked. Switching to Node and JS was much better.
Aeolun 44 days ago [-]
> Switching to Node and JS was much better.
Though now you have to stay current in two different runtimes which are subtly different.
Easier, but still a PITA.
kristoff_it 44 days ago [-]
These are legitimate concerns. Looking at the history of bun and its current homepage the main point is that it offers a dramatic speed improvement (plus misc. quality of life stuff).
At this point the ball is in your (and everyone else's) court to put these claims to the test. It should not be terribly hard to see if the speedup is worth your while or not, JS surely doesn't lack bloated projects that you can try to build. My own personal blog is a bloated Gatsby mess that takes half a minute to build.
That's the one true part of the experience that nobody can falsify.
asciiresort 44 days ago [-]
> Replace npm run with bun run and save 160ms on every run
Maybe you can’t falsify this, but it’s a question of risk vs reward.
It’s currently at a 0.1 release, so the chance of it breaking is much higher. And when that happens, it would likely occupy way more time to debug than the hundreds of ms saved.
Also, being new, it has not had a chance to cover all the cases yet. That’s the devil. It’s fast now, but it’s an apples-to-oranges comparison until Bun is at a stable release.
kristoff_it 44 days ago [-]
Yes, making up your own mind with first-hand experience requires investing time and effort, that's why people like to have other people tell them what to think.
Now we've come full circle.
Aeolun 44 days ago [-]
Tbh, just being able to run my TS code with a single executable, without building or ts-node, would be worth it.
Especially if I can somehow hook this into my unit tests.
firloop 44 days ago [-]
Jarred's been working on bun for over a year. I don't get the sense that it's a reaction to anything recent at all, Jarred is just super passionate about building the best JavaScript runtime.
asciiresort 44 days ago [-]
I don’t doubt that.
The fact that this project uses Zig suggests to me the developer is talented, passionate, and willing to challenge the status quo.
When you choose a lesser known language like that to tackle a hard problem, chances are you are confident in yourself and the language.
The problems I pointed out with the JavaScript ecosystem as a whole is that it’s low hanging fruit. It’s not that there aren’t financial opportunities elsewhere, outside JavaScript. There definitely is. But the perception of financial incentives is the low hanging fruit, plus high reward.
In this case, it will boil down to how much Bun innovates versus just being a thin wrapper around existing solutions. And again, I don’t doubt this. Skeptical, in general, but not ruling Bun out.
Tomuus 44 days ago [-]
Deno isn't the only company offering a not-Node JS server runtime. Cloudflare, Shopify, Fastly, AWS, and probably more all have skin in this game.
asciiresort 44 days ago [-]
Deno showed there is space for not-Node, and that developers would be receptive to this.
And yes, Deno is just one player in edge, but you can agree there is much more money involved with all those other players you listed.
It’s going to be a battle for eyeballs among those edge providers then, wouldn’t it? Whether that’s consulting or licensing fees, or just being an acquisition / acquihire player.
Maybe you’re suggesting these players would build a runtime themselves. From my experience, only a fraction of companies, rarely, tackle ambitious projects like this. It’d be hard to justify to management who need quarterly results. Instead, they’ll fork an existing technology and make it better, because you can show incremental progress but keep your baseline. For example, Facebook didn’t rewrite their PHP code right away. They wrote a faster interpreter.
Tomuus 44 days ago [-]
I wouldn't agree that Deno showed that, as I said many companies are making a lot of money from non-Node JS runtimes.
The players I mention have built their own runtime, they're mostly all built on V8 isolates (including Deno Deploy).
This is why I struggle to see where Bun fits in the edge JS world. As far as I understand it, JSC has no Isolate primitive, meaning Bun would have to write this from scratch (or salvage the other parts of WebKit that offer isolation). Otherwise Bun will be limited to using Linux containers on the edge, at which point you re-introduce the startup time you gained by switching from node in the first place.
ignoramous 43 days ago [-]
Someone suggested that Deno Deploy might not be using isolates as the isolation boundary per account, but processes instead (though Deno Deploy may be using isolates to run different functions belonging to the same account).
Shall we count jest and mocha as JS runtimes as well (not for servers, though)?
andyfleming 44 days ago [-]
I think some creators pursue products as ways to fund their passions. For example, was deno deploy always the endgame or is it just a way to fund working on deno? Aside from motivation, how it's implemented and how it affects the community matters.
asciiresort 44 days ago [-]
> it has the edge in mind
This part, very early on in the Bun page, stood out. That’s a monetizable product, even if the code is open source. That to me felt like positioning itself as a potential drop in replacement for Deno Deploy / Edge Functions.
There was serverless, and now the next trend is with edge computing. It’s already happening, but now specifically about runtimes on that edge.
ignoramous 43 days ago [-]
> This part, very early on in the Bun page, stood out. That’s a monetizable product, even if the code is open source.
It actually seems more vendor agnostic in description. It's an overall advantage of bun, but there isn't any first-class bun service pitched (at least at this point).
"Please don't pick the most provocative thing in an article or post to complain about in the thread. Find something interesting to respond to instead."
Why? If it's over TLS you can trust it's being served by the owner of the website. You're having to trust the person who wrote the script anyway. And before anyone says "I'm going to inspect the shell script before I run it", do also note that its purpose is to download and run binaries that you are never going to inspect.
Arnavion 44 days ago [-]
The other thing to be careful about is whether the script is written in a way that a truncated download has no effect. sh will execute stdin line-by-line as it reads it, so if the download fails in the middle of a line, the fragment of that line that did arrive will be executed.
It can be the difference between `rm -rf ~/.cache/foo` and `rm -rf ~/`
The standard way to solve this problem is to put the entire script inside a function, invoke that function in the last line of the script (as the only other top-level statement other than the function definition), and name it in such a way that substrings of the name are unlikely to map to anything that already exists. Note that the bun.sh script does not do this, but also from a quick glance the only thing that could noticeably go wrong is this line:
rmdir $bin_dir/bun-${target}
A truncated download could make it run `rmdir $bin_dir` instead, in which case it'll end up deleting ~/.bun instead of ~/.bun/bin/bun-${target}, which is probably not a big deal.
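A minimal sketch of that pattern (names and paths hypothetical):

  #!/bin/sh
  # Everything lives inside one function, so a download that is cut off
  # mid-stream leaves a script that parses but executes nothing.
  __installer_main() {
      bin_dir="$HOME/.bun/bin"
      target="linux-x64"
      # ...download, unzip, chmod...
      rmdir "$bin_dir/bun-${target}"
  }
  # The only other top-level statement; a truncated script never reaches it.
  __installer_main "$@"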
SahAssar 44 days ago [-]
IMO the main problem is that it isn't clear how updates will work. Some of the curl-to-bash scripts don't do anything with regard to updates at all; some add a PPA or similar on Ubuntu/Debian/Fedora/etc.
It'd be nice to know what and how I should manage updates.
TimTheTinker 44 days ago [-]
True, the only real counterpoint is someone who clones the repo, inspects it, and builds from source.
phist_mcgee 44 days ago [-]
Do you really own your own operating system if you haven't compiled the kernel yourself?
TimTheTinker 44 days ago [-]
Even if you do compile the kernel yourself, do you really own your OS if you haven't compiled the compiler yourself? Did you use a pre-built compiler binary to compile the compiler?
Now we're getting to the real questions in life. :)
(Incidentally, this is probably the most fundamental software supply chain attack vector - manipulate the compiler binary used to compile the compiler used to compile the kernel and userspace. The attack payload would never appear in any sources, but would always be present in binaries.)
You could add additional security to the process by first validating some cryptographic signature or verifying that the downloaded content's hash matches one that the author published.
Both of those just push the overall security a bit down the line, but both are ultimately not completely safe. The only truly safe action to take is to not download it at all.
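For example, downloading first and verifying against a hash the author published out-of-band (the hash below is a placeholder):

  curl -fsSL https://bun.sh/install -o install.sh
  # <published-sha256> stands in for whatever the author actually publishes
  echo "<published-sha256>  install.sh" | sha256sum -c - && sh install.sh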
0des 44 days ago [-]
someone demonstrated a while back how, based on the user agent, you could serve innocuous code to a browser checking the script first, and then a different, malicious payload to curl.
You're running untrusted binaries anyway in the end, so I don't think this is anything more than a neat party trick.
dementiapatien 44 days ago [-]
But this technique lets you serve malicious code to a small number of people using curl|bash, rather than hosting obviously-bad binaries that anyone can inspect and call you out on. It also lets you target the attack to specific users or IP blocks.
Moreutils has two programs that would trivially defeat this:
`sponge` reads the full input before passing it on
`vipe` inserts your editor inline, so you can view/modify the input before passing it on to bash (change an install directory, etc)
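With the bun.sh install script, for example, that might look like:

  # buffer the entire script; nothing runs until the download completes
  curl -fsSL https://bun.sh/install | sponge | sh
  # or open the script in $EDITOR for review/edits before it runs
  curl -fsSL https://bun.sh/install | vipe | sh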
tambourine_man 44 days ago [-]
I’m always flabbergasted how often projects, including and especially the ones dealing with web technologies (JavaScript), fail to write a responsive website.
You could be more impressed that some web pages manage to guess that the browser you use, which presents itself as a desktop browser, is in fact a mobile browser :)
tambourine_man 43 days ago [-]
Mine is not presenting itself as a desktop browser. It just has a smaller screen than what the designer expected. iPhone SE
yarn also caches the manifest responses.
> If I install a simple nextjs app, then remove node_modules, the lockfile, and the ~/.bun/install/cache/*.npm files (i.e. keep the contents, remove the manifests) and then install, bun takes around ~3-4s. PNPM is consistently faster for me at around ~2-3s.
This sounds like a concurrency bug with scheduling tasks from the main thread to the HTTP thread. I would love someone to help review the code for the thread pool & async io.
> One piece of feedback, having the lockfile be binary is a HUGE turn off for me. Impossible to diff. Is there another format?
If you do `bun install -y`, it will output as a yarn v1 lockfile.
If you add this to your .gitattributes:
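(presumably, going by bun's docs, something like)

  *.lockb binary diff=lockb

and tell git how to turn the binary lockfile into text (assuming `bun` is on your PATH):

  git config diff.lockb.textconv bun
  git config diff.lockb.binary true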
It will print the diff as a yarn lockfile.
Considering the speed with which a fast parser can gobble up JSON I'm somewhat skeptical that this would be relevant for common operations.
Yes he did
I don't see it either. I'd want to see perf data showing that was the issue.
https://news.ycombinator.com/item?id=31993429
Of course, I can't say for sure that he looked at the fastest possible way to parse json here, but my intuition would be that if he didn't, it's because he had an educated guess that it'd still be slower.
It's just a comparison of execution times of several different package managers.
Better would be parsing JSON vs binary in Bun.
If you were dealing with a multi-gigabyte lock file then it would be a different matter but frankly I agree with their point that parsing a lock file which is only a few KB shouldn’t be a differentiator (and if it is, then the JSON parser is the issue, and fixing that should be the priority rather than changing to a binary format).
Moreover the earlier comment about lock files needing to be human readable is correct. Being able to read, diff and edit them is absolutely a feature worth preserving even if it costs you a fraction of a second in execution time.
You mean a few MB? NPM projects typically have thousands of dependencies. A 10MB lock file wouldn't be atypical and parse time for a 10MB JSON file can absolutely be significant. Especially if you have to do it multiple times.
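A rough, synthetic sanity check of that claim (hypothetical sizes; real lockfiles vary):

  # build a lockfile-sized JSON blob and time a single parse
  node -e '
  const entries = [];
  for (let i = 0; i < 60000; i++)
    entries.push({ name: "pkg-" + i, version: "1.0.0",
      resolved: "https://registry.npmjs.org/pkg-" + i + "/-/pkg-" + i + "-1.0.0.tgz" });
  const blob = JSON.stringify({ packages: entries });
  console.log((blob.length / 1048576).toFixed(1) + " MB");
  console.time("JSON.parse");
  JSON.parse(blob);
  console.timeEnd("JSON.parse");
  '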
> Being able to read, diff and edit them is absolutely a feature worth preserving even if it costs you a fraction of a second in execution time.
You can read and edit a SQLite file way easier than a huge JSON file.
Not applicable to GitHub etc though.
I'm also not seeing any speed differences when using -y/yarn lockfile. Why not make it the default?
GitHub (disclosure: where I work) does respect some directives in a repo’s .gitattributes file. For example, you can use them to override language detection or mark files as generated or vendored to change diff presentation. You can also improve the diff hunk headers we generate by default by specifying e.g. `*.rb diff=ruby` (although come to think of it I don’t know why that’s necessary since we already know the filetype — I’ll look into it)
In principle there’s no reason we couldn’t extend our existing rich diff support used for diffing things like images to enhance the presentation of lockfile diffs. There’s not a huge benefit for text-based lock files but for binary ones (if such a scheme were to take off) it would be a lot more useful.
Here's a test demonstrating that this usage works: https://github.com/github/linguist/blob/32ec19c013a7f81ffaee...
Quoting the docs on finding files:
https://docs.github.com/en/search-github/searching-on-github...
> File finder results exclude some directories like build, log, tmp, and vendor. To search for files within these directories, use the filename code search qualifier.
(The inability to quickly jump to files in the /build/ folder with `T` has been driving me crazy for YEARS!)
Correct me if I'm wrong, but checking those two files:
- https://github.com/github/linguist/blob/master/lib/linguist/...
- https://github.com/github/linguist/blob/master/lib/linguist/...
I don't see `/build` matching anything there. So to me this `/build` suppression from search results seems like it's controlled by some other piece of software at GitHub :/
Also, files from `/build` are not hidden in diffs, so per this table: https://github.com/github/linguist/blob/HEAD/docs/overrides.... they are not "linguist-generated".
I think it's reasonable to respect the linguist overrides here so I'll open a PR to remove entries from the exclude if the repo has a `-linguist-generated` or `-linguist-vendored` gitattribute for that directory [1]. So in your case you can add
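  # presumably something like (per the `/**` note in [1]):
  build/** -linguist-generated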
to `.gitattributes` and once my PR lands files under `build` should be findable in file-finder.
Thanks for pointing this out! Feel free to DM me on twitter (@cbrasic) if you have more questions.
[1] Recursively matching a directory with gitattributes requires the `/**` syntax unlike .gitignore: https://git-scm.com/docs/gitattributes#:~:text=with%20a%20fe...
On Linux, not yet. I don't have a machine that supports reflinks right now and I am hesitant to push code for this without manually testing it works. That being said, it does use copy_file_range if --backend=copyfile, which can use reflinks.
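(For the curious: a reflink is a copy-on-write clone of a file's data extents, which coreutils can request explicitly. File names hypothetical:)

  # on a filesystem with reflink support, e.g. btrfs or XFS
  truncate -s 100M big.bin
  cp --reflink=always big.bin clone.bin   # instant CoW clone; errors out on ext4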
Since JSC is actually compilable to Wasm [1] and Zig supports WASI compilation, I wonder how easy it would be to get it running fully in WAPM with WASI. Any thoughts on how feasible that would be?
[1]: https://wapm.io/syrusakbary/jsc
npm is famous for a lot of things and for a lot of reasons, but none of them is "because it's well engineered".
To this day, npm still runs the `preinstall` script after dependencies have actually been downloaded to your disk. It modifies a `yarn.lock` file if you have one on disk when running `npm install`. With lots of things like these, it's hardly surprising that the install is slow.
https://github.com/npm/cli/blob/latest/workspaces/arborist/l...
This happens because the tooling often requires domain knowledge which they have, and if they set out to write tooling for a language, they tend to be experienced in that language.
Is it even a tree? Does NPM still allow circular dependencies?
Yarn v2 is backwards compatible though. You just need to use the node_modules "linker" (not the default one) and it's ready to go.
Last I checked, not quite. Yarn 2+ patches some dependencies to support PnP, even if you don’t use PnP. I discovered this while trying out both Yarn 2 and a pre-release version of TypeScript which failed to install—because the patch for it wasn’t available for that pre-release. I would have thought using node_modules would bypass that patching logic, but no such luck.
By the way, the yarn2 / yarn3 project is hosted on a distinct repository [3].
[1] https://yarnpkg.com/features/zero-installs
[2] https://yarnpkg.com/features/pnp
[3] https://github.com/yarnpkg/berry
Also a few questions:
What do you attribute the performance advantage to? How much of it is JavaScriptCore instead of V8 versus optimized glue binding implementations in the runtime? If the latter, what are you doing to improve performance?
Similarly for the npm client: how much is just that bun is the only tool in a compiled, GC-free language versus special optimization work?
How does Zigs special support for custom allocators factor in?
edit: should be fixed
https://twitter.com/jarredsumner/status/1542824445810642946
Easy to dismiss this as "another JS build tool thing why do we need more of this eye roll" but I think this is worth a proper look — really interesting to follow how much effort has gone into the performance.
Congrats on the release :)
https://news.ycombinator.com/item?id=31993615
"1ms is an eternity for computers"
When was the last time you heard a web developer (frontend, backend, or web tooling) state that? [2] It is always: oh, the network latency dominates, or the DB response time dominates. It is only one tenth of a second (100ms), it doesn't matter. It is fast enough™.
[1] https://twitter.com/jarredsumner/status/1477775398700158980
[2] Actually, Nick Craver did that a lot during his Stack Overflow era.
And that's plain JS, most people don't even work in plain JS but in a framework where they don't just say "var x = y" but "publish this event into my Observable stack which triggers a state update which will eventually, somehow, update my component which will then eventually update the DOM in the browser", after which the browser's own whole stack of renders takes over.
Meanwhile in video games you can say `frame[0][0] = 255` to set a pixel to white, or whatever.
I think it's a matter of levels of abstraction; with webapps, I'd argue they've gone even further than Java enterprise applications in the 90's / 2000's.
Also, Jarred's twitter is a rich follow full of benchmarks and research. If nothing else, this project will prove to be beneficial to the entire ecosystem. Give it some love.
Of course, this is very much a double-edged sword, with all the "leftpad"-type dependency nightmares and security issues, as well as the "hot new JS framework of the month" issues. Still, I think the dependency issues are solvable and dependency tooling is getting better, and the frameworks issue has calmed down a bit, such that it's easy to stick to a couple of major projects (e.g. React, Express) if so desired, but more experimental, cool stuff is always out there.
You might say "so what they're doing it for free, you don't have to use their stuff", but often you do because the existence of a sort-of-working solution means that other people are much less likely to write a robust solution for the same problem. So everyone ends up using the crap one.
Rust and Go are way way better in that regard.
Better to depend on numerous 800-pound, long-lived gorillas than a million short-lived mayflies that keep dying.
But it has so much of a broader ecosystem of other tools that if you're willing to take that risk it's an option. Java basically just doesn't have that.
PS: I have already poked my head into the Kafka codebase in the past. Not the best written project and also confusing because of the Scala mix, but far more readable than several I have seen. And Java makes it easily navigable. Can even auto-gen diagrams to grok it better.
"I spent most of the day on a memory leak. Probably going to just ship with the memory leak since I need to invest more in docs. It starts to matter after serving about 1,000,000 HTTP requests from web streams"
https://twitter.com/jarredsumner/status/1543236098087825409
At least in my usecase, I do about 35m hits / day... so this would fall over in less than an hour. 1m isn't that large of a number and the author is willing to shrug that off until after launching.
https://twitter.com/threepointone/status/1543237413190901760
> Longer-term, bun intends to replace Node.js, Webpack, Babel, yarn, and PostCSS (in production).
https://github.com/Jarred-Sumner/bun#limitations--intended-u...
> Rome is a formatter, linter, bundler, and more for JavaScript, TypeScript, JSON, HTML, Markdown, and CSS.
> Rome is designed to replace Babel, ESLint, webpack, Prettier, Jest, and others.
Haven't seen it since.
> Parcel: The zero configuration build tool for JavaScript, CSS, HTML, TypeScript, React, images, SASS, SVG, Vue, libraries, Less, CoffeeScript, Node, Stylus, Pug, Electron, Elm, WebGL, extensions, GraphQL, MD
As long as you do the bare minimum for each target.
Normally I start with something else (webpack, rollup, whatever happened to be there with the example I'm starting from), then when I hit some roadblocks I just parcel index.html and I have something working.
Something has gone a bit wrong if you're running any of those tools in production.
Most companies have things that run "in dev", "in staging", "in CI", and "in production". These map directly to some tools - for example, React has "development" and "production" modes. When someone says a server or a database or a tool is used "in production" they're usually referring to the live environment that users access. Most people use tools like Webpack locally to run code when they're doing dev work, and in CI to build things. If someone said to me "We're running Webpack in production" then I would have questions.
If you use "in production" to mean "anywhere in a project", then how do you differentiate between a staging environment and a production environment? Do you talk about "the staging environment that's in production"? That would be confusing...
I love the bundling of most tasks in one app, especially in an environment where I had friends refuse to interact because of the "framework of the month" problem.
I just wish it didn't rep Zig this much. I'm hyped for Zig as much as the next guy, but the website mentions it twice back to back, and I really think we should stop going "it's better because it's written in X".
Rust is already somewhat infamous for this ("Rewrite it in Rust" is a meme) and has caused it to develop a bit of a stigma, at least in my circles.
I'm still rooting for Zig to get its place among the big ones (and bun seems definitely a nice way to push for it) I just hope that happens without creating the annoying cult-like behaviors that plagued the crab language.
The general concern about "written in Zig" being annoying is fair. I think it's a different beast when paired with a call to donate but regardless, if RIIZ is what worries you most, then you can sleep safe because our motto is "Maintain It With Zig", a conscious rejection of "Rewrite It in *".
https://kristoff.it/blog/maintain-it-with-zig/
I wholeheartedly disagree. I'm much more interested in this because it's written in Zig and not C or C++.
It tells me a few things about the code:
- naive-implementation performance should be pretty good
- written by someone who cares about details
- less likely to have race conditions and memory bugs
- attractive for new devs who want to work on a project in a next-generation language
At this point, the only reason to use a slow or unsafe language is that it's all the author knows.
1. https://kristoff.it/blog/zig-colorblind-async-await/
Performance is one thing (the benchmarks are probably wrong though), but will it solve any of the headaches you get with NodeJS? I, for instance, have 7 different NodeJS versions installed just to compile all the projects I use. The oldest version is currently 6(!). The NPM dependency hell is best in class. NodeJS LTS is nothing but a tag. Compliance with ECMAScript updates has not been great. It's still a mess with NodeJS to figure out if a package will work on a specific version. Still poor performance in comparison to other tech. And so on...
At least with this you wouldn’t need to manage versions of the package manager and runtime separately
npm ERR! Unexpected token '.'
Pain points
- no docs on deploying bun (failed to do it on Fly.io or Railway.app, much less Netlify or Vercel)
- lack of api reference right now
- got a segfault when building a React SSR server
Super hacky (it just has the binary in the repo) but it works: https://bun-fun.fly.dev/
I'm eager to see how the different runtimes will shine on the edge. Bun is clearly positioned as one option, and the focus on this is stated on the main page.
I believe the traction for Bun will depend a lot on adoption. I see the point of having the fastest(TM) runtime on Linux and macOS, but that comes from using specific syscalls. If we move to the edge, it's not clear to me how this will be implemented. Maybe those syscalls are not available, or sandboxing makes things more difficult. The requirements for making it compatible may reduce its performance.
We will see :)
If there is anything Node core can do better for you to allow better interop, ping us. Also, I'm not sure if you're involved with the WinterCG stuff (I _think_ maybe?) but if you're not, please come by!
[1] https://bun.sh/install
So it isn't really all written in Zig; there is some C++ helping there, actually.
The fact it states it's powered by JSC before even mentioning Zig makes it pretty clear, imo.
Using the same reasoning I can assert having written a JavaScript runtime in e.g. F# by making use of C++/CLI to compile JSC.
They don't claim it's pure Zig and even mention JavaScriptCore in their front page.
Definitely reads otherwise and is clickbait, given the actual implementation.
If you read "runtime" as "extensions that allow you to do actually useful stuff" instead of "parser/evaluator", I don't see an issue.
There is precedent for this too -- for instance, the ".NET runtime" is not only the JIT/AOT compiler for CIL, but also the libraries providing the .net api + standards/mechanisms for loading other libraries, etc.
See also https://stackoverflow.com/questions/3900549/what-is-runtime :
> Runtime describes software/instructions that are executed while your program is running, especially those instructions that you did not write explicitly, but are necessary for the proper execution of your code. [...] Runtime code is specifically the code required to implement the features of the language itself.
(HatTip: write universal server code that runs anywhere: Node.js, Edge, Deno, Bun, ... Also for library authors who want to provide a middleware that works anywhere, instead of locking their users into a single JS server runtime.)
@Jarred: We'd be curious to know what you think of HatTip!
[1]: https://github.com/hattipjs/hattip
[2]: https://github.com/hattipjs/hattip/tree/feat/bun
there are projects like uWebSockets.js (and hyper-express [1] built on top of it) which show a 10x increase in throughput.
[1] https://github.com/kartikk221/hyper-express/blob/master/docs...
I excitedly thought this project might include a new JavaScript VM, not extending an existing one.
But thanks for those specific wishlist items, which are much more sensible! Isn't your last item WASM though, with its interface types proposal?
This is making me reconsider using the node ecosystem in some of my projects.
By the way, I think that bun does not include the patches applied to esbuild since the date the code was ported.
https://github.com/Jarred-Sumner/bun/issues/159
Best of luck.
OTOH, that could at last free Rust from being the shiniest kid on the block. Carry on...
Or maybe that’s just my criteria. Has nothing to do with the subject, and everything to do with what HN is for.
Or maybe I just need physical colleagues and coffee break humor.
The only time I've seen a SEGFAULT in Rust was when using a really badly implemented C library wrapper.
Only RAM manufacturers like memory leaks.
Should have used Nim, to make it interesting :D
I have never in my life heard anyone say they thought V8 was superior to JSC - what are you basing that on?
Since 2019, it seems like JSC is actually faster (they even beat V8 benchmarks).
[1] https://v8.dev/blog
Isn't JavaScriptCore available only in Apple's ecosystem?
Zig is not perfect, Zig is full of sin, in a world full of righteousness, using Zig should be closely guarded.
I also wonder how bun compares to esbuild for js/ts transpile speed gains?
Could you share what made you choose Zig over V for the project? It looks like both languages would have been an appropriate choice.
Anybody know how bun (or any other package manager) stacks up to it?
https://pnpm.io/
> An enourmous amount of time spent profiling, benchmarking and optimizing things. The answer is different for every part of Bun, but one general theme: Zig's low-level control over memory and lack of hidden control flow makes it much simpler to write fast software.
With modern hardware, accessing memory efficiently is key to writing fast software. Programs written in programming languages that don't let the developer control how data is laid out in memory usually face an uphill battle to be fast.
I will check yours out if you post a link! Or is it just node-mysql3?
Its main selling point is that it uses template strings, so you can just do
sql`select f from t where x=${somevar}`
And the lib will take care of escaping somevar. I was getting sick of ORMs which make it even harder to write complex queries.
https://news.ycombinator.com/item?id=31759170
bun isn't open source, it is source available (at this point in time, at least): https://github.com/Jarred-Sumner/bun/issues/241
This type of thing needs to stop
https://news.ycombinator.com/newsguidelines.html
https://www.quora.com/What-is-a-coders-worst-nightmare/answe...
thanks to dementiapatien below for the link https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
Going through package registries/repositories is a slow process, so obviously want something faster.
Just GitHub releases? Would you be fine if the URL instead pointed to GitHub in that case?
The previous HN discussion said it better than I can: https://news.ycombinator.com/item?id=17636032
https://imgur.com/a/1yrgHR4