Ask HN: Real Karma?

1 point by alganet on 7/31/2025, 5:45:04 PM | 12 comments
I was wondering, with AI and stuff, why do we still rely on human likes and dislikes? (in general, not HN-specific)

It seems like an ideal application of text-based inference: to assign points depending on how someone interacts in a discussion (using correct factual arguments, not straying away, not misdirecting, being considerate and reasonable, and so on).

It surprises me that no one tried it before.

So why don't we have an automated, explicit karma-based moderator anywhere?
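
To make it concrete, here is a rough sketch of what I mean, assuming some LLM API can grade each comment against a rubric (the model name, rubric wording, and scoring scale below are just placeholders):

    # Hypothetical "auto-karma" scorer: ask an LLM to grade a comment against a
    # rubric and return a numeric score. Model, rubric, and scale are placeholders.
    import json
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    RUBRIC = (
        "Score the following forum comment from 0 to 10 on each of: "
        "factual correctness, staying on topic, not misdirecting, and "
        "being considerate and reasonable. Reply with JSON like "
        '{"factual": 7, "on_topic": 9, "no_misdirection": 8, "considerate": 6}.'
    )

    def auto_karma(comment_text: str) -> float:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": comment_text},
            ],
        )
        scores = json.loads(response.choices[0].message.content)
        return sum(scores.values()) / len(scores)  # average rubric score

    print(auto_karma("LLMs hallucinate, so automated scoring needs a human in the loop."))

Sum or average those per-comment scores over time and you would have an explicit, automated karma.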

Comments (12)

PaulHoule · 1h ago
I’ve done conceptual work on this for the last year. My RSS reader does try picking high-quality comments from HN relevant to my interests, together with 108 other RSS feeds. The results are promising but not revolutionary yet.

Further progress is blocked by the need to develop a general text classifier, which in turn is blocked by the project of migrating my ArangoDB systems (like my RSS reader) to Postgres.

alganet · 1h ago
Why do you think that is? I mean, the results not being revolutionary. This is not a jab at AI tech; it's a genuine question.

I am 100% on the side that such applications of artificial intelligence should be used with caution and, therefore, be carefully tested. However, this is not the trend with regard to LLMs. It seems people apply them to important tasks without much consideration of quality. Do you have any hypothesis as to why this has never been done for such an obvious theme (auto-karma)?

PaulHoule · 1h ago
There are all kinds of algorithmic feeds, and search rankings depend on quality signals. There’s nothing really new to the idea; the question is “what do you define as quality?”

alganet · 1h ago
That is a question I already answered in the post: "using correct factual arguments, not straying away, not misdirecting, being considerate and reasonable".

If you have it all figured out except for this, then do you disagree and see quality as something else? Maybe you agree partially, or maybe you missed that I already addressed this.

Also, you're avoiding the second part of my comment. You seem to care about quality (however you define it), and so do I, but that doesn't seem to be the trend with LLMs, which people seem to employ for all sorts of important tasks without these concerns.

I have a hypothetical answer for that: there is someone (possibly lots of different groups) employing those auto-karma analyzers, but it's concealed and its workings are not public. Would you agree?

PaulHoule · 1h ago
Yes. Every algorithmic ranking is based on somebody's idea of quality. Also for every result that is published there are numerous unpublished results.

That last one, "considerate and reasonable", is easy. One of my projects that is blocked by that classifier is something that detects hostile signatures on Bluesky profiles: I want to follow a lot of people, but not people who post 20 articles about political outrage a day. The short of it is that if you can classify 2000 profiles as "hostile" or "not hostile", ModernBERT + classical ML or ModernBERT + BiLSTM will work great and will be cheap to train and run.
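
Roughly, the recipe looks like this (a sketch only; the labeled CSV, its column names, and the mean-pooling choice are placeholders, not a finished pipeline):

    # Sketch: ModernBERT embeddings + a classical classifier for "hostile or not".
    # Assumes a hand-labeled profiles.csv with columns "bio_text" and "label".
    import pandas as pd
    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
    model = AutoModel.from_pretrained("answerdotai/ModernBERT-base").eval()

    def embed(texts, batch_size=32):
        """Mean-pooled ModernBERT embeddings for a list of strings."""
        out = []
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(texts[i:i + batch_size], padding=True,
                              truncation=True, max_length=512, return_tensors="pt")
            with torch.no_grad():
                hidden = model(**batch).last_hidden_state         # (B, T, H)
                mask = batch["attention_mask"].unsqueeze(-1)      # (B, T, 1)
                out.append((hidden * mask).sum(1) / mask.sum(1))  # mean over tokens
        return torch.cat(out).numpy()

    df = pd.read_csv("profiles.csv")                  # ~2000 hand-labeled profiles
    X = embed(df["bio_text"].tolist())
    y = (df["label"] == "hostile").astype(int)

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())    # rough accuracy estimate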

"Correct factual arguments" is hard. If you prompt ChatGPT or something similar and ask "Is this true?" it will work, I dunno, 90% of the time. A machine that can determine the truth isn't a machine, it's a god.

There are probably signs of "misdirecting" that ModernBERT can see. "Not straying away" could be addressed by a Siamese network classifier that takes a pair of documents and tests "Is B relevant to A?" You could synthesize training data for that by pairing comments with their real replies and also pairing them with random documents.
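
Something like this, as a rough sketch (the sentence encoder and the `threads` input are placeholders; a real Siamese network would fine-tune the encoder as well, but the pairing trick is the same):

    # Sketch: "Is B relevant to A?" pair classifier with synthesized training data.
    # Positives: real (parent, reply) pairs. Negatives: parents with random replies.
    import random
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

    def make_pairs(threads):
        parents, replies = zip(*threads)
        pos = [(p, r, 1) for p, r in threads]
        neg = [(p, random.choice(replies), 0) for p in parents]
        return pos + neg

    def features(pairs):
        a = encoder.encode([p for p, _, _ in pairs])
        b = encoder.encode([r for _, r, _ in pairs])
        # Siamese-style feature combination: |a - b| and a * b
        return np.hstack([np.abs(a - b), a * b]), np.array([y for _, _, y in pairs])

    threads = [("parent comment about databases", "a reply about Postgres"),
               ("parent comment about cooking", "a reply about sourdough")]  # placeholder data
    X, y = features(make_pairs(threads))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict_proba(X)[:, 1])  # relevance score for each pair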

alganet · 1h ago
> "Correct factual arguments" is hard

Is it though?

> A machine that can determine the truth isn't a machine, it's a god.

Irrelevant exaggeration.

---

Also, I am not asking how you would do it. I'm asking why you think nobody has done it publicly. You're still avoiding the question.

PaulHoule · 1h ago
> Irrelevant exaggeration.

No. Not at all. See

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

also the problem that many questions can't be answered out of the database or by principles but only by taking observations, which may not be possible. "Truth" is the most problematic concept in philosophy, and when the word is wheeled out it impairs the truth; see "9/11 Truther". Gödel's idea is that you can break first-order logic + arithmetic if you try to introduce the concept of "truth" into the system, because it lets you make statements like "I am lying now". Consider also how it took people hundreds of years to prove Fermat's Last Theorem.

See

https://www.sciencedirect.com/science/article/pii/S187705091...

for research on this topic being done in the open.

alganet · 39m ago
If you don't want to share your opinion, you can just say so.

al2o3cr · 1h ago

    I have a hypothetical answer for that: there is someone (possibly
    lots of different groups) employing those auto-karma analyzers, but
    it's concealed and its workings are not public.
I think you should check to see if your carbon monoxide detector is working, because that's some paranoid nonsense.

alganet · 1h ago
Why is it paranoid?

It's an obvious application of the technology, one that seems hard to get working, but obvious nonetheless.

I don't see how this is so far-fetched. If you do, you failed to explain it.

joules77 · 1h ago
During such a discussion it will probably become more interesting to chat with the bot about the decisions it's making than to chat with the other person :)

mytailorisrich · 1h ago
This ignores and negates the whole point of "like" and "dislike", which are ways for readers to react to a comment based on what they think of it.