xyst 3 hours ago

In my opinion, the significant drop in memory footprint is truly underrated (13 MB vs 1300 MB). If everybody cared about optimizing for efficiency and performance, the cost of computing wouldn’t be so burdensome.

Even self-hosting on an rpi becomes viable.

  • marcosdumay an hour ago

    It's the result of JavaScript's data-isolation-above-anything-else attitude.

    Or, in other words, it's the unavoidable result of insisting on using a language created for the frontend to write everything else.

    You don't need to rewrite your code in Rust to get that saving. Any other language will do.

    (Personally, I'm surprised all the gains are so small. Looks like it's a very well optimized code path.)

    • jvanderbot an hour ago

      "Rust" really just means "Not javascript" as a recurring pattern in these articles.

    • adastra22 an hour ago

      There is no reason data isolation should cost you 100x memory usage.

      • marcosdumay 11 minutes ago

        There are plenty of reasons. They are just not intrinsic to the isolation; instead they come from complications rooted deep in the underlying system.

        If you rebuild Linux from the ground up with isolation in mind, you will be able to do it more efficiently. People are indeed in the process of rewriting it, but it's far from complete (and moving back and forth, as not every Linux dev cares about it).

  • echoangle 2 hours ago

    If every developer cared for optimizing efficiency and performance, development would become slower and more expensive though. People don’t write bad-performing code because it’s fun but because it’s easier. If hardware is cheap enough, it can be advantageous to quickly write slow code and get a big server instead of spending days optimizing it to save $100 on servers. When scaling up, the tradeoff has to be reconsidered of course.

    • marcos100 an hour ago

      We should all think about optimization and performance all the time and make a conscious decision about whether to do it, given the time constraints and the level of performance we want.

      People write bad-performing code not because it's easier but because they don't know how to do better or don't care.

      Repeating things like "premature optimization is the root of all evil" and "it's cheaper to get a bigger machine than dev time" is harmful, because people stop caring and stop doing it, and if we never do it, it will always be a hard and time-consuming task.

      • 0cf8612b2e1e 34 minutes ago

        It is even worse for widely deployed applications. To pick on some favorites, Microsoft Teams and OneDrive have lousy performance and burn up a ton of CPU. Both are deployed to tens or hundreds of millions of consumers, squandering battery life and electricity globally. Even a tiny performance improvement could lead to a fractional reduction in global energy use.

      • toolz an hour ago

        Strongly disagree with this sentiment. Our jobs are typically to write software in a way that minimizes risk and best ensures the success of the project.

        How many software projects have you seen fail because they couldn't run fast enough or used too many resources? Personally, I've never seen it. I'm sure it exists, but I can't imagine it's a common occurrence. I've rewritten systems because they grew and needed perf upgrades to continue working, but this was always something the business knew about, planned for, and accepted as a strategy for success. The project may have been less successful if it had been written with performance in mind from the beginning.

        With that in mind, I can't think of many things less appropriate to keep in your mind as a first-class concern when building software than performance and optimization. Sure, as you gain experience in your software stack you'll naturally be able to optimize, but since it will possibly never be the reason your projects fail, and presumably your job is to ensure the success of some project, it follows that you should prioritize other things strongly over optimization.

        • MobiusHorizons 17 minutes ago

          I see it all the time: applications that would be very usable and streamlined from a UI perspective are frustrating and painful to use because every action requires a multi-second request, so the experience is mostly reduced to staring at progress spinners.

    • sampullman an hour ago

      I'm not so sure. I use Rust for simple web services now, when I would have used Python or JS/TS before, and the development speed isn't much different. The main draw is the language/type system/borrow checker, and reduced memory/compute usage is a nice bonus.

      • aaronblohowiak an hour ago

        Which framework? Do you write sync or async? I’ve AoC’d rust and really liked it but async seems a bit much.

        • wtetzner 28 minutes ago

          I have to agree, despite using it a lot, async is the worst part of Rust.

          If I had to do some of my projects over again, I'd probably just stick with synchronous Rust and thread pools.

          The concept of async isn't that bad, but its implementation in Rust feels rushed and incomplete.

          For a language that puts so much emphasis on compile time checks to avoid runtime footguns, it's way too easy to clog the async runtime with blocking calls and not realize it.
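
          For example, this compiles without a peep even though the first version stalls a runtime worker thread (a rough sketch assuming Tokio; the file read is just a stand-in for any blocking call):

  // Blocks the async runtime's worker thread; nothing at compile time warns you.
  async fn read_config_bad(path: &str) -> std::io::Result<String> {
      std::fs::read_to_string(path)
  }

  // Same work, shifted onto Tokio's dedicated blocking thread pool.
  async fn read_config_better(path: String) -> std::io::Result<String> {
      tokio::task::spawn_blocking(move || std::fs::read_to_string(&path))
          .await
          .expect("blocking task panicked")
  }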

        • tayo42 28 minutes ago

          If he was OK with Python's performance limitations, then Rust without async is more than enough.

    • treyd 41 minutes ago

      Code is usually run many more times than it is written. It's usually worth spending a bit of extra time to do something the right way the first time, when the alternative is rewriting it under pressure after costs have ballooned. This is proven time and time again, especially in places where inefficient code can be so easily identified upfront.

    • throwaway19972 2 hours ago

      Yea but we also write the same software over and over and over and over again. Perhaps slower, more methodical development might enable more software to be written fewer times. (Does not apply to commercially licensed software or services obviously, which is straight waste.)

      • chaxor an hour ago

        This is a decent point, but in many cases writing software over again can be a great thing, even when replacing some very well established software.

        The trick is getting everyone to switch over and ensuring the newer software is correct and secure. A good example may be OpenSSH. It is very well established, so many will use it - but it has had some issues over the years, and because of that it is actually _very_ difficult now to know the _correct_ way to configure it for the best, modern, performant, and _secure_ operation. There are hundreds of different options, almost all of them existing for 'legacy reasons' (in other words, options no one should ever use in any circumstance that requires security).

        Then along come things like mosh or dropbear, which seem like they _may_ improve security but still basically do the same thing as OpenSSH, so it is unclear whether they have the same security problems that simply don't get reported due to lower use, or whether they aren't vulnerable.

        Meanwhile, things like quicssh-rs reimplement the idea completely differently, such that it is likely far, far more secure (and, importantly, simpler!), but getting more eyes on it for security is still important.

        So effectively, having things like Linux move to Rust (but with the proper foundation behind it rather than some new and untrusted entity) shows that a 'rewrite' of software can be great: not only for removing cruft we now know shouldn't be used because better solutions exist (enforcing only the best, modern crypto or filesystems, and so on), but also for remodeling the software to be simpler, cleaner, more concise, and correct.

    • devmor 2 hours ago

      Caring about efficiency and performance doesn't have to mean spending all your time on it until you've exhausted every possible avenue. Sometimes using the right tools and development stack is enough to make massive gains.

      Sometimes it means spending a couple extra minutes here or there to teach a junior about freeing memory on their PR.

      No one is suggesting it has to be a zero-sum game, but it would be nice to bring some care for the engineering of the craft back into a field that is increasingly dominated by business case demands over all.

      • internet101010 an hour ago

        Exactly. Nobody is saying to min-max from the start - just be a bit more thoughtful and use the right tools for the job in general.

  • btilly an hour ago

    That's because you're churning temporary memory. JS can't free it until garbage collection runs. Rust is able to do a lifetime analysis, and knows it can free it immediately.

    The same will happen on any function where you're calling functions over and over again that create transient data which later gets discarded.
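
    As a made-up illustration (not from the article): the transient buffer below is freed deterministically the moment the function returns, whereas the JS equivalent lingers on the heap until the collector gets around to it.

  fn handle_request(input: &str) -> usize {
      // Transient allocation: exists only for the duration of this call.
      let scratch: Vec<u8> = input.bytes().rev().collect();
      scratch.len()
  } // `scratch` is dropped (freed) right here, with no GC involved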

  • leeoniya 2 hours ago

    fwiw, Bun/webkit is much better in mem use if your code is written in a way that avoids creating new strings. it won't be a 100x improvement, but 5x is attainable.

  • beached_whale an hour ago

    I'm OK if it isn't popular. It will keep compute costs lower for those using it, since the norm is excessive usage.

  • jchw 2 hours ago

    It's a little more nuanced than that, of course: a big reason the memory usage is so high is that Node.JS needs more of it to take advantage of a large multicore machine for compute-intensive tasks.

    > Regarding the abnormally high memory usage, it's because I'm running Node.js in "cluster mode", which spawns 12 processes for each of the 12 CPU cores on my test machine, and each process is a standalone Node.js instance which is why it takes up 1300+ MB of memory even though we have a very simple server. JS is single-threaded so this is what we have to do if we want a Node.js server to make full use of a multi-core CPU.

    On a Raspberry Pi you would certainly not need so many workers even if you did care about peak throughput, I don't think any of them have >4 CPU threads. In practice I do run Node.JS and JVM-based servers on Raspberry Pi (although not Node.JS software that I personally have written.)

    The bigger challenge to a decentralized Internet where everyone self-hosts everything is, well, everything else. Being able to manage servers is awesome. Actually managing servers is less glorious, though:

    - Keeping up with the constant race of security patching.

    - Managing hardware. Which, sometimes, fails.

    - Setting up and testing backup solutions. Which can be expensive.

    - Observability and alerting: you probably want some monitoring so that the first time you find out your drives are dying isn't months after SMART would've warned you. Likewise, you probably don't want the first sign that you've been compromised to be an abuse warning from your ISP, months into unknowingly helping carry out criminal operations.

    - Availability. If your home internet or power goes out, self-hosting makes it a bigger issue than it normally would be. I love the idea of a world where everyone runs their own systems at home, but this is by far the worst consequence. Imagine if all of your e-mails bounced while the power was out.

    Some of these problems are actually somewhat tractable to improve on, but the Internet and computers in general marched on in a different, more centralized direction. At this point I think being able to write self-hostable servers that are efficient and fast is actually not the major problem with self-hosting.

    I still think people should strive to make more efficient servers of course, because some of us are going to self-host anyways, and Raspberry Pis run longer on battery than large rack servers do. If Rust is the language people choose to do that, I'm perfectly content with that. However, it's worth noting that it doesn't have to be the only one. I'd be just as happy with efficient servers in Zig or Go. Or Node.JS/alternative JS-based runtimes, which can certainly do a fine job too, especially when the compute-intensive tasks are not inside of the event loop.

    • pferde an hour ago

      While I agree with pretty much all you wrote, I'd like to point out that e-mail, out of all the services one could conceivably self-host, is quite resilient to temporary outages. You just need a backup mail server somewhere (maybe at another self-hosting friend's or in a datacenter), and set up your DNS MX records accordingly. Incoming mail will be held there until you are back online, and then forwarded to your primary mail server. Everything is transparent to the outside world, no mail gets lost, and no errors are shown to any outside sender.

    • wtetzner 24 minutes ago

      Reducing memory footprint is a big deal for using a VPS as well. Memory is still quite expensive when using cloud computing services.

    • bombela 2 hours ago

      > Imagine if all of your e-mails bounced while the power was out.

      Retry for a while until the destination becomes reachable again. That's how email was originally designed.

      • jasode an hour ago

        >Retry for a while until the destination becomes reachable again. That's how email was originally designed.

        Sure, the SMTP email protocol states guidelines for "retries" but senders don't waste resources retrying forever. E.g. max of 5 days: https://serverfault.com/questions/756086/whats-the-usual-re-...

        So gp's point is that if your home email server is down for an extended power outage (maybe like a week from a bad hurricane) ... and you miss important emails (job interview appointments, bank fraud notifications, etc) ... then that's one of the risks of running an email server on the Raspberry Pi at home.

        Switching to a more energy-efficient language like Rust for server apps so it can run on RPi still doesn't alter the risk calculation above. In other words, many users would still prioritize email reliability (Gmail in the cloud) over self-hosted autonomy of a RPi at home.

        • umanwizard an hour ago

          Another probably even bigger reason people don't self-host email specifically is that practically all email coming from a residential IP is spam from botnets, so email providers routinely block residential IPs.

        • jchw an hour ago

          Yeah, exactly this. The natural disaster in North Carolina is a great example of how I envision this going very badly. When you self-host at home, you just can't have the same kind of redundancy that data centers have.

          I don't think it's an obstacle that's absolutely insurmountable, but it feels like something where we would need to organize the entire Internet around solving problems like these. My personal preference would be to have devices act more independently. e.g. It's possible to sync your KeepassXC with SyncThing at which point any node is equal and thus only if you lose all of your devices simultaneously (e.g. including your mobile computer(s)) are you at risk of any serious trouble. (And it's easy to add new devices to back things up if you are especially worried about that.) I would like it if that sort of functionality could be generalized and integrated into software.

          For something like e-mail, the only way I can envision this working is if any of your devices could act as a destination in the event of a serious outage. I suspect this would be possible to accomplish to some degree today, but it is probably made a lot harder by two independent problems (IPv4 exhaustion/not having directly routable IPs on devices, mobile devices "roaming" through different IP addresses) which force you to rely on some centralized infrastructure anyways (e.g. something like Tailscale Funnels.)

          I for one welcome whoever wants to take on the challenge of making it possible to do reliable, durable self-hosting of all of my services without the pain. I would be an early adopter without question.

djoldman 6 minutes ago

Not trying to be snarky, but for this example, if we can compile to wasm, why not have the client compute this locally?

This would entail zero network hops, probably 100,000+ QRs per second.

IF it is 100,000+ QRs per second, isn't most of the thing we're measuring here dominated by network calls?

jchw 3 hours ago

Haha, I was flabbergasted to see the results of the subprocess approach, incredible. I'm guessing the memory usage being lower for that approach (versus later ones) is because a lot of the heavy lifting is being done in the subprocess which then gets entirely freed once the request is over. Neat.

I have a couple of things I'm wondering about though:

- Node.js is pretty good at IO-bound workloads, but I wonder if this holds up as well when comparing e.g. Go or PHP. I have run into embarrassing situations where my RiiR adventure ended with less performance against even PHP, which makes some sense: PHP has tons of relatively fast C modules for doing some heavy lifting like image processing, so it's not quite so clear-cut.

- The "caveman" approach is a nice one just to show off that it still works, but it obviously has a lot of overhead just because of all of the forking and whatnot. You can do a lot better by not spawning a new process each time. Even a rudimentary approach like having requests and responses stream synchronously and spawning N workers would probably work pretty well. For computationally expensive stuff, this might be a worthwhile approach because it is so relatively simple compared to approaches that reach for native code binding.

  • tln 7 minutes ago

    The native code binding was impressively simple!

    7 lines of rust, 1 small JS change. It looks like napi-rs supports Buffer so that JS change could be easily eliminated too.
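
    For anyone curious, the shape of such a binding with napi-rs is roughly this (a sketch, not the article's actual code; the body just echoes bytes back to show the Buffer round trip):

  use napi::bindgen_prelude::Buffer;
  use napi_derive::napi;

  // Exposed to JS (as `renderQr`); the returned Buffer arrives as a Node.js Buffer.
  #[napi]
  pub fn render_qr(text: String) -> Buffer {
      // Stand-in for the real QR rendering step.
      let png_bytes: Vec<u8> = text.into_bytes();
      png_bytes.into()
  }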

eandre 3 hours ago

Encore.ts is doing something similar for TypeScript backend frameworks, by moving most of the request/response lifecycle into Async Rust: https://encore.dev/blog/event-loops

Disclaimer: I'm one of the maintainers

Dowwie 2 hours ago

Beware the risks of using NIFs with Elixir. They run in the same memory space as the BEAM and can crash not just the process but the entire BEAM. Granted, well-written, safe Rust could lower the chances of this happening, but you need to consider the risk.

pjmlp 2 hours ago

And so what we were doing with Apache, mod_<pick your lang> and C back in 2000, is new again.

At least with Rust it is safer.

ports543u 2 hours ago

While I agree the enhancement is significant, the title of this post makes it seem more like an advertisement for Rust than an optimization article. If you rewrite js code into a native language, be it Rust or C, of course it's gonna be faster and use less resources.

  • mplanchard an hour ago

    Is there an equivalently easy way to expose a native interface from C to JS as the example in the post? Relatedly, is it as easy to generate a QR code in C as it is in Rust (11 LoC)?

  • baq 2 hours ago

    'of course' is not really that obvious except for microbenchmarks like this one.

    • ports543u an hour ago

      I think it is pretty obvious. Native languages are expected to be faster than interpreted, JITed, or automatic-memory-management languages in 99.9% of cases, since in those the programmer has far less control over the operations the processor is doing or the memory it is copying or using.

voiper1 2 hours ago

Wow, that's an incredible writeup.

Super surprised that shelling out was nearly as good as any other method.

Why is the average byte count smaller? Shouldn't it be the same size file? And if not, it's a different algorithm, so not necessarily better?

  • pixelesque 2 hours ago

    > Why is the average bytes smaller? Shouldn't it be the same size file?

    The content being encoded in the PNG was different ("https://www.reddit.com/r/rustjerk/top/?t=all" for the first, "https://youtu.be/cE0wfjsybIQ?t=74" for the second example - not sure whether the benchmark used different things?), so I'd expect the PNG buffer pixels to be different between those two images, and thus the compressed image size to be a bit different even if the DEFLATE compression levels within the PNG were the same.

  • xnorswap 2 hours ago

    That struck me as odd too.

    It may be just additional HTTP headers added to the response, but then it's hardly fair to use that as a point of comparison and treat smaller as "better".

isodev 3 hours ago

This is a really cool comparison, thank you for sharing!

Beyond performance, Rust also brings a high level of portability, and these examples show just how versatile a piece of code can be. Even beyond the server, running this on iOS or Android is also straightforward.

Rust is definitely a happy path.

  • jvanderbot 3 hours ago

    Rust deployment is a happy path, with few caveats. Writing is sometimes less happy than it might otherwise be, but that's the tradeoff.

    My favorite thing about Rust, however, is Rust dependency management. Cargo is a dream, coming from C++ land.

    • krick 2 hours ago

      Everything is a dream when you're coming from C++ land. I'm still incredibly salty about how packages are managed in Rust compared to golang or even PHP (Composer). crates.io looks fine today, because Rust is still relatively unpopular, but one common namespace for all packages encourages name squatting, so in a few years it will be a dumpster worse than PyPI, I guarantee you that. Doing that in a brand-new package manager was incredibly stupid. It came really late to the market; only golang's modules are newer, IIRC (and those are really great). Yet it repeats all the same old mistakes.

      • guitarbill an hour ago

        I don't really understand this argument, and it isn't the first time I've heard it. What problem does namespacing solve, other than name squatting?

        How does a Java style com.foo.bar or Golang style URL help e.g. mitigate supply chain attacks? For Golang, if you search pkg.go.dev for "jwt" there's 8 packages named that. I'm not sure how they are sorted; it doesn't seem to be by import count. Yes, you can see the URL directly, but crates.io also shows the maintainers. Is "github.com/golang-jwt/jwt/v5" "better" than "golang.org/x/oauth2/jwt"? Hard to say at a glance.

        On the flip side, there have been several instances where Cargo packages were started by an individual, but later moved to a team or adopted. The GitHub project may be transferred, but the name stays the same. This generally seems good.

        I honestly can't quite see what the issue is, but I have been wrong many a time before.

      • Imustaskforhelp 2 hours ago

        In my opinion, I like golang's way better, because then you have to be thoughtful about your dependencies, and it also prevents drama (like the Rust Foundation / Cargo drama) (ahem). (If a language is that polarizing, it would be hard to find a job in it.)

        I truly like Rust as a performance language, but I would rather have real, tangible results (admittedly, slower is okay) than imagination within the Rust/performance land.

        I don't want to learn Rust just to feel like I am doing something "good" or "learning" when I can learn golang at a way, way faster rate and do the stuff I like, the stuff I got into programming for.

        Also, just because you haven't learned Rust doesn't make you inferior to anybody.

        You should learn it because you want to think differently and try different things. Not for performance.

        Performance is fickle.

        For example, I was looking at a native benchmark of Rust and Zig (Rust won), and then at a benchmark of Deno and Bun (Bun won) (Bun is written in Zig and Deno in Rust).

        The reason, I suppose, is that Deno doesn't use Actix, and non-Actix servers are rather slower than even Zig.

        It's weird.

        • jvanderbot 2 hours ago

          There are some influential fair comparisons of compiled languages, but for the most part my feeling is that people are moving from an extremely high-level language like Python or JS to Rust to get performance, when any compiled language would be fine. For 90% of them, Go would have been the right choice (for backend or web-enabled systems apps); there was just a hurdle to getting to most other compiled languages.

          Is it just that Rust is somehow more accessible to them? Maybe pointers and manual memory management were just too inaccessible or overburdensome a transition?

          • umanwizard an hour ago

            Rust is the only mainstream language with an ergonomic modern type system and features like exhaustive matching on sum types (AFAIK... maybe I'm forgetting one). Yes things like OCaml and Haskell exist but they are much less mainstream than Rust. I think that's a big part of the appeal.

            In Go instead of having a value that can be one of two different types, you have to have two values one of which you set to the zero value. It feels prehistoric.

            • jvanderbot an hour ago

              That strikes me as an incredibly niche (and probably transient) strength! But I will remember that.

              • umanwizard 37 minutes ago

                It's not niche at all; it's extremely common to need this. Maybe I'm not explaining it well. For example, an idiomatic pattern in Go is to return two values, one of which is an error:

                  func f() (SomeType, error) {
                          // ...
                  }
                
                In Rust you would return one value:

                  fn f() -> anyhow::Result<SomeType> {
                      // ...
                  }
                
                In Go (and similar languages like C) nothing enforces that you actually set exactly one value, and nothing enforces that you actually handle the values that are returned.

                It's even worse if you need to add a variant, because then it's easy to make a mistake and not update some site that consumes it.
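
                The exhaustiveness point, concretely (a made-up enum): add a third variant later and this match stops compiling until every consumer handles it.

  enum Payment {
      Card { last4: String },
      BankTransfer,
  }

  fn describe(p: &Payment) -> String {
      // A non-exhaustive match here is a compile error, not a runtime surprise.
      match p {
          Payment::Card { last4 } => format!("card ending in {last4}"),
          Payment::BankTransfer => "bank transfer".to_string(),
      }
  }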

          • bombela an hour ago

            Not sure how much it weighs on the balance in those types of decisions, but Rust has safe concurrency. That's probably quite a big boost to web server quality, if nothing else.

            • jvanderbot an hour ago

              Go's concurrency is unsafe? Rust's concurrency is automatically safe?

              I am not saying you're wrong, I just don't find it any better than C++ concurrent code; you just have many different lock types that correspond to the borrow-checker's expectations, vs C++'s primitives / lock types.

              Channels are nicer, but that's doable easily in C++ and native to Go.

              • umanwizard 4 minutes ago

                > Go's concurrency is unsafe? Rust's concurrency is automatically safe?

                Yes and yes...

                Rust statically enforces that you don't have data races, i.e. it's not possible in Rust (without unsafe hacks) to forget to guard access to something with a mutex. In every other language this is enforced with code comments and programmer memory.
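
                Concretely (a small made-up sketch): the data lives inside the Mutex, so the only path to it goes through lock().

  use std::sync::Mutex;

  struct Stats {
      hits: Mutex<u64>, // the u64 is only reachable via the lock
  }

  fn record_hit(stats: &Stats) {
      // There is no unguarded way to touch `hits`; "forgot the mutex" can't compile.
      *stats.hits.lock().unwrap() += 1;
  }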

      • joshmarinacci 2 hours ago

        Progress. It doesn’t have to be the best. It just has to be better than C++.

bhelx 3 hours ago

If you have a Java library, take a look at Chicory: https://github.com/dylibso/chicory

It runs on any JVM and has a couple flavors of "ahead-of-time" bytecode compilation.

  • bluejekyll 2 hours ago

    This is great to see. I had my own effort around this that I could never quite get done.

    I didn’t notice this on the front page, what JVM versions is this compatible with?

bdahz 2 hours ago

I'm curious what if we replace Rust with C/C++ in those tiers. Would the results be even better or worse than Rust?

  • znpy 2 hours ago

    It should be pretty much the same.

    The article is mostly about exemplifying the various levels of optimisation you can get by moving “hot code paths” to native code (irrespective of whether you write that code in Rust/C++/C).

    Worth noting that if you’re optimising for memory usage, Rust (or some other native code) might not help you very much until you throw away your whole codebase, which might not always be feasible.

  • Imustaskforhelp 2 hours ago

    Also maybe worth checking out Bun's FFI; I have heard they recently added their own compiler.

echelon 3 hours ago

Rust is simply amazing to do web backend development in. It's the biggest secret in the world right now. It's why people are writing so many different web frameworks and utilities - it's popular, practical, and growing fast.

Writing Rust for web (Actix, Axum) is no different than writing Go, Jetty, Flask, etc. in terms of developer productivity. It's super easy to write server code in Rust.
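
For a sense of how little ceremony is involved, here's a complete hello-world server (my sketch, assuming axum 0.7 and tokio, not code from any particular project):

  use axum::{routing::get, Router};

  #[tokio::main]
  async fn main() {
      // One route, one inline handler.
      let app = Router::new().route("/", get(|| async { "hello" }));
      let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
      axum::serve(listener, app).await.unwrap();
  }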

Compared to writing Python HTTP backends, the Rust code is so much more defect-free.

I've absorbed 10,000+ qps on a couple of cheap tiny VPS instances. My server bill is practically non-existent and I'm serving up crazy volumes without effort.

  • boredumb 2 hours ago

    I've been experimenting with Tide, sqlx, and Askama, and after getting comfortable it's even more ergonomic for me than using golang and its template/SQL libraries. Having compile-time checks on SQL and templates is, in and of itself, a reason to migrate. I think people have a lot of issues with the lifetime scoping, but for most applications it simply isn't something you are explicitly dealing with every day in the way Rust is often portrayed/feared (and once you fully wrap your head around what it's doing, it's as simple as most other language aspects).
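
    The compile-time SQL checking looks roughly like this (a sketch; the users table is made up, and sqlx needs DATABASE_URL available at build time to verify the query):

  use sqlx::PgPool;

  // A typo'd column or mismatched parameter type below is a *build* error,
  // because sqlx::query! checks the SQL against the schema at compile time.
  async fn load_name(pool: &PgPool, user_id: i64) -> Result<String, sqlx::Error> {
      let row = sqlx::query!("SELECT name FROM users WHERE id = $1", user_id)
          .fetch_one(pool)
          .await?;
      Ok(row.name)
  }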

  • kstrauser an hour ago

    I’ve written Python APIs since about 2001 or so. A few weeks ago I used Actix to write a small API server. If you squint and don’t see the braces, it looks an awful lot like a Flask app.

    I had fun writing it, learned some new stuff along the way, and ended up with an API that could serve 80K RPS (according to the venerable ab command) on my laptop with almost no optimization effort. I will absolutely reach for Rust+Actix again for my next project.

    (And I found, fixed, and PR’d a bug in a popular rate limiter, so I got to play in the broader Rust ecosystem along the way. It was a fun project!)

  • JamesSwift 2 hours ago

    > Writing Rust for web (Actix, Axum) is no different than writing Go, Jetty, Flask, etc. in terms of developer productivity. It's super easy to write server code in Rust.

    I would definitely disagree with this after building a microservice (a URL shortener) in Rust. Rust requires you to rethink your design in unique ways, so that you generally can't do things in the 'dumbest way possible' for your v1. I found myself really having to rework my design-brain to fit Rust's model and please the compiler.

    Maybe once that relearning has occurred you can move faster, but it definitely took a lot longer to write an extremely simple service than I would have liked. And scaling that to a full api application would likely be even slower.

    Caveat that this was years ago, right when Actix 2 was coming out I believe, so the framework was in a high amount of flux on top of my needing to get my head around Rust itself.

    • collinvandyck76 2 hours ago

      > Maybe once that relearning has occurred you can move faster

      This has been my experience. I have about a year of Rust experience under my belt, working with an existing codebase (~50K LOC). About halfway through this stretch I started writing the toy/throwaway programs I normally write in Rust instead of Go. Hard to say when it clicked, maybe 7-8 months in, but at some point I stopped struggling with the structure of the program and the fights with the borrow checker, to the point where I don't really have to think about it much anymore.

      • guitarbill an hour ago

        I have a similar experience. Was drawn to Rust not because of performance or safety (although it's a big bonus), but because of the tooling and type system. Eventually, it does get easier. I do think that's a poor argument, kind of like a TV show that gets better in season 2. But I can't discount that it's been much nicer to maintain these tools compared to Python. Dependency version updates are much less scary due to actual type checking.

  • nesarkvechnep 2 hours ago

    It will probably never replace Elixir as my favourite web technology. For writing daemons though, it's already my favourite.

  • manfre 2 hours ago

    > I've absorbed 10,000+ qps on a couple of cheap tiny VPS instances.

    This metric doesn't convey any meaningful information. Performance metrics need context of the type of work completed and server resources used.

dyzdyz010 3 hours ago

Make Rustler great again!