We Tested 7 Languages Under Extreme Load and Only One Didn't Crash
18 points by nnx on 5/28/2025, 8:43:52 AM | 35 comments | freedium.cfd ↗
I also don't understand why under "extreme load" there would be excessive memory pressure in the first place. When a server can't keep up with incoming requests it doesn't need to continue spawning new workers/goroutines. You don't need to .accept() when you don't have the resources to process the incoming request.
Very strange article.
It's a worthless article anyway, for the simple reason that there are no graphs, numbers, or reproducible experiments. The code snippets aren't whole programs and the test harness setup isn't spelled out. The programs don't even look equivalent in what they're doing! It's hard to tell, though, because, for example, some snippets show the outermost loop, while for C++ only the per-request "workload" is shown, and not even all of that.
Even just the how of the testing can make a huge difference in my experience, especially when running synthetic workloads against garbage-collected languages. Most of them will never crash under normal workloads, but if you go out of your way to generate stupid amounts of memory allocations, practically none will be able to keep up.
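The "stupid amounts of memory allocations" scenario is trivial to manufacture; here is a hypothetical sketch (names mine, not from the article) that retains every allocation, which no garbage collector can keep up with because nothing ever becomes collectable:

```go
package main

import (
	"fmt"
	"runtime"
)

// Hypothetical stress sketch: retain every allocation so the heap grows
// without bound. No GC can help here, because nothing ever becomes
// collectable -- the point being that "crashes under allocation
// pressure" are easy to manufacture in any managed runtime.
func allocateAndRetain(iterations, chunkKB int) [][]byte {
	retained := make([][]byte, 0, iterations)
	for i := 0; i < iterations; i++ {
		retained = append(retained, make([]byte, chunkKB*1024))
	}
	return retained
}

// heapAllocMB reports the live heap size in MiB.
func heapAllocMB() uint64 {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.HeapAlloc >> 20
}

func main() {
	before := heapAllocMB()
	kept := allocateAndRetain(1024, 64) // retain roughly 64 MiB
	fmt.Printf("chunks=%d, heap grew by ~%d MiB\n", len(kept), heapAllocMB()-before)
}
```

Whether a benchmark does something like this, or allocates short-lived garbage the GC can reclaim, changes the outcome completely, which is exactly why the unspecified test harness matters.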
The whole article is just nonsense, end-to-end, starting from the first content paragraph:
"Our test environment consisted of a cluster of 16 high-performance servers..."
Why a "cluster"? None of the workloads appear to be distributed or clustered applications! They're not testing Akka or Microsoft Orleans here, so why bother having more than one box?
What operating system was used?
What were the client systems? How many?
Were some of the languages "doing better" simply because they were slower at handling the test loads and hence failing slower?
Were the test clients correctly sending requests as fast as possible, or waiting sequentially for previous requests to complete before sending the next request?
Etc...
It seems like an ideological shitpost.
Instead they show pseudocode with very vague descriptions of failure modes that don't really make sense: "Under our error cascade simulation, some low-level failures in unsafe code regions propagated in ways that eventually caused deadlocks in resource management." That gives no details, nor does "failures in unsafe code regions" sound like a realistic failure case.
Obviously this is a bad idea, even in Erlang's case: while some processes may continue to function, the behavior becomes utterly unpredictable.
In the real world, we would gracefully reject jobs/connections above a certain threshold.
The various error/failure modes are interesting, but not unexpected in retrospect.
I'm still not a fan of the Erlang syntax, it's so hard to read. It's probably just because I've mostly used C-inspired languages.
It helps tremendously that Erlang functions are quite short and self-contained, thanks to the functional style. You can grasp the essence of a function on one screen, without needing background knowledge of a scattered mess of classes, global variables, and other spaghetti.
Really pleasant code to read.
But definitely not a C-inspired language.
Go seems to have been tested with http handling, Rust with sequential computation heavy jobs, C++ with parallel processing and so on.
Maybe these are just single examples from each language, but it's confusing.
Edit: though I agree with another top-level comment that the reported results are too vague and stereotypical to be believed.
I know, but then it goes on to show widely different, half baked examples with poor explanations.
After reading other top-level comments I agree with the others that this is probably AI slop. The only thing I achieved by reading it was wasting a few minutes.
> … using the latest stable versions as of May 2025:
> Go 1.23
Go 1.24 was released in February 2025.
> Rust 1.78
Rustc 1.86 was released in April 2025.
> C++ (using GCC 14.1 with C++23 features)
GCC 15.1 was released in April 2025.
So many giveaways (claiming older language versions are the latest, which points to a model's training cut-off date; the code samples; the explanations).
This is exactly what the Erlang VM was designed for, yet these "programmers" claim to be surprised that, unlike the rest, it didn't crash under heavy load?
The language runtime is what was under test rather than the language itself.
Mediocrity and hype are once again celebrated.
Disclaimer that I know absolutely nothing about Erlang except that I'd rather program in hieroglyphs, but how is a process crashing and restarting an acceptable failure mode?
The title says a single language didn't crash, but it literally does crash and restart if I understand correctly.
In any case this seems to be an extremely narrow test on an extremely specific use-case, where it might be fine to indeed crash and restart, but it's definitely not indicative of the performance of the languages as a whole.
This is nothing you couldn’t do in other languages, but because the entire thing is built into the language and the runtime, and includes tooling to make structuring an application that way easier (“behaviours”), it’s very normal to build Erlang applications that way.
This further extends to entire machines.
[0] https://en.m.wikipedia.org/wiki/Erlang_(programming_language...
Erlang is designed with a mechanism that makes it easy for external processes to monitor for crashes (or hardware failures), rather than an in-process mechanism like exception handling used in many other programming languages.
Erlang was designed with the aim of improving the development of telephony applications.
The Erlang runtime system provides strict process isolation between Erlang processes (this includes data and garbage collection, separated individually by each Erlang process) and transparent communication between processes on different Erlang nodes (on different hosts).
The "let it crash" philosophy prefers that a process be completely restarted rather than trying to recover from a serious failure. Though it still requires handling of errors, this philosophy results in less code devoted to defensive programming where error-handling code is highly contextual and specific.
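The supervision-and-restart structure described above can be imitated, crudely, in other languages. Here is a hypothetical Go sketch (names and restart policy are mine); note it lacks the process isolation the BEAM provides, which is part of why "let it crash" works so much better in Erlang:

```go
package main

import "fmt"

// Rough, hypothetical imitation of Erlang-style supervision: run a
// worker, recover from panics, and restart it up to maxRestarts times.
// Unlike on the BEAM there is no real isolation -- shared state that a
// crashing worker corrupted stays corrupted across restarts.
func supervise(worker func(), maxRestarts int) {
	restarts := 0
	for {
		crashed := make(chan bool, 1)
		go func() {
			defer func() {
				if r := recover(); r != nil {
					fmt.Println("worker crashed:", r)
					crashed <- true
					return
				}
				crashed <- false
			}()
			worker()
		}()
		if !<-crashed {
			return // clean exit: nothing to restart
		}
		restarts++
		if restarts > maxRestarts {
			fmt.Println("restart limit reached, giving up")
			return
		}
	}
}

func main() {
	attempts := 0
	supervise(func() {
		attempts++
		if attempts < 3 {
			panic("simulated failure")
		}
		fmt.Println("worker finished on attempt", attempts)
	}, 5)
}
```

In Erlang this pattern comes built in as the supervisor behaviour, with restart strategies and intensity limits configured declaratively rather than hand-rolled.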