Ask HN: Real Karma?

1 point by alganet · 18 comments · 7/31/2025, 5:45:04 PM
I was wondering, with AI and stuff, why do we still rely on human likes and dislikes? (in general, not HN-specific)

It seems like an ideal application of text-based inference: to assign points depending on how someone interacts in a discussion (using correct factual arguments, not straying away, not misdirecting, being considerate and reasonable, and so on).
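The criteria above could, hypothetically, be combined into a single karma delta once some text-inference model produces a per-criterion sub-score. A minimal sketch in Python; the criterion names, weights, and scaling are invented for illustration, not a spec:

```python
# Hypothetical sketch: combine per-criterion sub-scores (each in [0, 1],
# assumed to come from some text-inference model) into one karma change.
# Criterion names and weights are illustrative assumptions.

CRITERIA_WEIGHTS = {
    "factual_accuracy": 0.4,
    "on_topic": 0.2,
    "no_misdirection": 0.2,
    "considerate": 0.2,
}

def karma_delta(scores: dict, scale: int = 10) -> int:
    """Weighted sum of sub-scores, mapped to a signed integer karma change."""
    weighted = sum(CRITERIA_WEIGHTS[k] * scores[k] for k in CRITERIA_WEIGHTS)
    # Center at 0.5 so poor comments lose karma and good ones gain it.
    return round((weighted - 0.5) * 2 * scale)

# A comment judged factual, on-topic, and civil gains karma:
print(karma_delta({"factual_accuracy": 0.9, "on_topic": 0.8,
                   "no_misdirection": 0.7, "considerate": 0.9}))  # 7
```

The interesting (and hard) part is of course producing the sub-scores, not aggregating them.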

It surprises me that no one tried it before.

So, why don't we have any automated, explicit karma-based moderator anywhere?

Comments (18)

PaulHoule · 19h ago
I’ve done conceptual work on this for the last year; my RSS reader does try picking high-quality comments from HN relevant to my interests, together with 108 other RSS feeds. The results are promising but not revolutionary yet.

Further progress is blocked by the need to develop a general text classifier, which is in turn blocked by the project of migrating my ArangoDB systems (like my RSS reader) to Postgres.

alganet · 19h ago
Why do you think that is? I mean, the results not being revolutionary. This is not a jab at AI tech, it's a genuine question.

I am 100% on the side that such applications of artificial intelligence should be used with caution and, therefore, be carefully tested. However, this is not the trend with LLMs: people seem to apply them to important tasks without much consideration of quality. Do you have any hypothesis on why this has never been done for such an obvious application (auto-karma)?

PaulHoule · 19h ago
There are all kinds of algorithmic feeds, and search rankings depend on quality signals. There’s nothing really new to the idea; the question is “what do you define as quality?”

alganet · 19h ago
That is a question I already answered in the post: "using correct factual arguments, not straying away, not misdirecting, being considerate and reasonable".

If you have it all figured out except this, do you disagree and see quality as some other aspect? Maybe you agree partially, or maybe you missed that I already addressed this.

Also, you're avoiding the second part of my comment. You seem to care about quality (however you define it), and so do I, but that doesn't seem to be the trend with LLMs, which people employ for all sorts of important tasks without these concerns.

I have a hypothetical answer for that: there is someone (possibly lots of different groups) employing those auto-karma analyzers, but it's concealed and its workings are not public. Would you agree?

PaulHoule · 18h ago
Yes. Every algorithmic ranking is based on somebody's idea of quality. Also, for every result that is published there are numerous unpublished results.

The last one, "considerate and reasonable", is easy; one of my projects that is blocked by that classifier is something that detects hostile signatures on Bluesky profiles. I want to follow a lot of people, but not people who post 20 articles about political outrage a day. The short of it is that if you can classify 2000 profiles as "hostile" or "not hostile", ModernBERT + classical ML or ModernBERT + BiLSTM will work great and will be cheap to train and run.
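The shape of that pipeline can be sketched in a few lines. Here TF-IDF features stand in for ModernBERT embeddings so the example runs without a model download, and the profile texts and labels are invented toy data:

```python
# Sketch of an "embed profiles, then classical ML on top" classifier.
# TF-IDF is a stand-in for ModernBERT embeddings; the toy texts and
# labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

profiles = [
    "daily outrage thread, they are corrupt and must be destroyed",
    "another outrage, the enemy is lying to you, fight back",
    "everything is corrupt, share this outrage before they delete it",
    "wake up, the corrupt elites want to destroy you",
    "photos from my weekend hiking trip in the mountains",
    "sourdough recipe experiments and garden updates",
    "reading group notes on a new sci-fi novel",
    "birdwatching logs and camera gear reviews",
]
labels = ["hostile"] * 4 + ["not_hostile"] * 4

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(profiles, labels)

print(clf.predict(["non-stop outrage about corrupt politicians"])[0])
```

With a real labeled set of ~2000 profiles, swapping the vectorizer for sentence embeddings is the only structural change.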

"Correct factual arguments" is hard. If you prompt ChatGPT or something similar and ask "Is this true?" it will work, I dunno, 90% of the time. A machine that can determine the truth isn't a machine, it's a god.

There are probably signs of "misdirecting" that ModernBERT can see. "Not straying away" could be addressed by a Siamese network classifier that takes a pair of documents and tests "Is B relevant to A?" You could synthesize training data for that by pairing real replies and also pairing random documents.
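That data-synthesis step is easy to show concretely: real (comment, reply) pairs become positive examples, and randomly re-matched pairs become negatives. A pure-Python sketch, with the toy threads invented for illustration:

```python
import random

# Toy (comment, reply) pairs standing in for real thread data.
threads = [
    ("What database do you use?", "Postgres, after migrating off ArangoDB."),
    ("Any good RSS readers?", "I built my own that ranks items by interest."),
    ("Is ModernBERT cheap to train?", "Yes, with classical ML on top it is."),
]

def synthesize_pairs(threads, seed=0):
    """Positives from real replies; negatives from shuffled mismatches."""
    rng = random.Random(seed)
    positives = [(a, b, 1) for a, b in threads]
    negatives = []
    for a, _ in threads:
        # Pick a reply from some *other* thread as the irrelevant partner.
        _, wrong_b = rng.choice([t for t in threads if t[0] != a])
        negatives.append((a, wrong_b, 0))
    return positives + negatives

pairs = synthesize_pairs(threads)
print(len(pairs))  # 3 positives + 3 negatives
```

The pairs would then feed a two-tower (Siamese) relevance classifier; only the labeling trick is shown here.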

alganet · 18h ago
> "Correct factual arguments" is hard

Is it though?

> A machine that can determine the truth isn't a machine, it's a god.

Irrelevant exaggeration.

---

Also, I am not asking how would you do it. I'm asking why do you think nobody did it publicly. You're still avoiding the question.

PaulHoule · 18h ago
> Irrelevant exaggeration.

No. Not at all. See

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

also the problem that many questions can't be answered from a database or from first principles, but only by taking observations, which may not be possible. "Truth" is the most problematic concept in philosophy, and when the word is wheeled out it tends to impair the truth; see "9/11 Truther". Gödel's idea is that you can break first-order logic + arithmetic if you try to introduce the concept of "truth" into the system, because it lets you make statements like "I am lying now". Consider also how it took people hundreds of years to prove Fermat's Last Theorem.

See

https://www.sciencedirect.com/science/article/pii/S187705091...

for research on this topic being done in the open.

alganet · 18h ago
If you don't want to share your opinion, you can just say it.
gsf_emergency_2 · 11h ago
I believe in your hypothesis..

Auto-karma seems like room-temperature superconductors. I'm guessing people competently researching those tend not to want to publicize their lab notebooks. Bridge wouldn't publish a blog post showing us how their code integrates with Pix, would they?

It's like asking dang to show us his current git commits?

(Anyways it's most likely not my opinion that you want, so I'm just pretending to be openHoule here

Edit: https://news.ycombinator.com/item?id=44748570 )

alganet · 5h ago
Your description is wrong.

I think people don't want to share it because their auto-karma sucks, not because it works.

gsf_emergency_2 · 4h ago
I don't think there's been any real progress in room temperature superconductivity. And I think neither dang's recent improvements to HN nor Bridge are that great either. (The last 2 do kinda work, but are far from perfect)

Trying to generate discussion here where there's none to be had I suppose..

Are you of German descent btw? Northern Italian? So blunt & disagreeable, but maybe there's something warm and fuzzy deep down? Not seeing it yet :)

alganet · 3h ago
There are tons of discussions to be had on the subject of auto-karma, which you are trying to evade.

You're trying to create an association between what I'm saying and HN-related karma, which I already dismissed previously.

I'm mostly of Lithuanian descent, but that's also irrelevant.

Also, you don't bother me.

gsf_emergency_2 · 3h ago
(Ah Lithuanian makes more sense, familiarity breeds contempt as they say!)

Sorry! That implied association was accidental, you might even say negligent.

(Even though I do suspect dang has been working on auto-karma which is totally different from the HN-karma, but more for judging the "true merit" of individual posters or posts for the purpose of moderation. Because HN-karma is completely useless for that, but auto karma could work in principle)

Or I might have just completely misunderstood what you meant by auto-karma..

alganet · 3h ago
I don't care about what dang is up to.

You were really never part of the conversation. The most important thing you missed is that your presence does not bother me.

al2o3cr · 18h ago

    I have an hypothetical answer for that: there is someone (possibly,
    lots of different groups) employing those auto-karma analyzers, but
    it's concealed and its workings are not public.
I think you should check whether your carbon monoxide detector is working, because that's some paranoid nonsense.

alganet · 18h ago
Why is it paranoid?

It's an obvious application of the technology; it seems hard to get working, but it's obvious.

I don't see how this is so far-fetched. If you do, you failed to explain it.

joules77 · 18h ago
During such a discussion, it will probably become more interesting to chat with the bot about the decisions it's making than to chat with the other person :)
mytailorisrich · 18h ago
This ignores and negates the whole point of "like" and "dislike", which are ways for readers to react to a comment based on what they think of it.