The JVM in the last 6-8 years has been a powerhouse of innovation and cool features. Incredibly impressive!
panny · 22m ago
And a thank you to Oracle for being a good steward of the language.
reactordev · 12m ago
Surely this is written by an LLM.
Paying per core for “enterprise” just because you’re a business isn’t my idea of being a good steward. If anything we should be championing the OpenJDK folks. They are the real heroes.
tombert · 10m ago
I never thought I would be excited for a new release of Java, but ever since Java 21, I have grown to actually enjoy writing the language. Whoever is running it has really done a good job making the language actually fun to write in the last few years.
electric_muse · 49m ago
Execution-time sampling is great for latency stories, but it blurs the line between waiting and working.
CPU-time sampling gives you a cleaner picture of what actually burns cycles, which is the thing you pay for and the thing you can really optimize.
When two lenses disagree, you learn something about whether you’re chasing throughput or latency. That’s the conversation most teams need.
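A minimal Java sketch of the distinction being described, using the standard `ThreadMXBean` API (class name and numbers here are illustrative, not from the thread): a sleeping thread accrues wall-clock time that an execution-time sampler would charge to it, while only the busy loop shows up as CPU time.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class SamplingLenses {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();

        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime();

        // "Waiting": wall time passes, but the CPU is idle.
        Thread.sleep(200);

        // "Working": this loop actually burns cycles.
        long x = 0;
        for (int i = 0; i < 50_000_000; i++) x += i;

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs  = (bean.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;

        // An execution-time sampler charges the thread for the full wall time;
        // a CPU-time sampler only sees the busy loop.
        System.out.println("wall=" + wallMs + "ms cpu=" + cpuMs + "ms (x=" + x + ")");
    }
}
```

On a run of this, wall time comes out well above CPU time because of the sleep; that gap is exactly where the two sampling lenses disagree.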
porridgeraisin · 26m ago
ChatGPT
boroboro4 · 10m ago
Thank you for pointing it out. I went through their comments and they're all like this :-( They have substance, but are very obviously AI-generated.
binary132 · 17m ago
someone should write an LLM detector bot that just leaves this comment on all AI slop
lionkor · 23m ago
what?
alserio · 20m ago
I believe they are saying the commenter looks a lot like they're karma farming with an LLM; they leave a lot of comments like this one.
sumanthvepa · 1m ago
What benefit could one possibly get by farming karma on a site like Hacker News? It's not like one can gather followers or something. I'm always mystified by folks who do this. Would love to understand the motivation.