Ask HN: Real Karma?
1 point by alganet on 7/31/2025, 5:45:04 PM | 10 comments
I was wondering, with AI and stuff, why do we still rely on human likes and dislikes? (in general, not HN-specific)
It seems like an ideal application of text-based inference: to assign points depending on how someone interacts in a discussion (using correct factual arguments, not straying away, not misdirecting, being considerate and reasonable, and so on).
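Something like the sketch below is what I have in mind. It's only a rough illustration: the model name, the rubric, and the naive averaging are placeholders I made up, not a real system.

    # Hypothetical "auto-karma" scorer: ask a chat model to grade a comment
    # against a rubric and average the scores. Assumes OPENAI_API_KEY is set.
    import json
    from openai import OpenAI

    client = OpenAI()

    RUBRIC = (
        "Rate this forum comment from 0 to 10 on each criterion: "
        "factual_accuracy, stays_on_topic, no_misdirection, considerate_tone. "
        "Reply with JSON only, like "
        '{"factual_accuracy": 7, "stays_on_topic": 9, '
        '"no_misdirection": 8, "considerate_tone": 6}'
    )

    def auto_karma(comment: str) -> float:
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any capable chat model
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": comment},
            ],
        )
        # Assumes the model really returns valid JSON; a real system would validate.
        scores = json.loads(response.choices[0].message.content)
        return sum(scores.values()) / len(scores)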
It surprises me that no one tried it before.
So, why don't we have an automated, explicit karma-based moderator anywhere?
Further progress is blocked by the need to develop a general test classifier, which is itself blocked by the project of migrating my ArangoDB systems (like my RSS reader) to Postgres.
I am 100% on the side that such applications of artificial intelligence should be used with caution and, therefore, be carefully tested. However, that is not the trend with LLMs: people seem to apply them to important tasks without much consideration of quality. Do you have any hypothesis on why this has never been done for such an obvious use case (auto-karma)?
If you have it all figured out except for this, then do you disagree and see quality as a separate concern? Maybe you partially agree, or maybe you missed that I already addressed this.
Also, you're avoiding the second part of my comment. You seem to care about quality (however you define it), and so do I, but that doesn't seem to be the trend with LLMs, which people employ for all sorts of important tasks without these concerns.
I have a hypothetical answer for that: someone (possibly many different groups) is already employing these auto-karma analyzers, but it's concealed and its workings are not public. Would you agree?
That last one, "considerate and reasonable", is easy. One of my projects blocked by that classifier is something that detects hostile signatures on Bluesky profiles: I want to follow a lot of people, but not people who post 20 political outrage articles a day. The short of it is that if you can label 2000 profiles as "hostile" or "not hostile", ModernBERT + classical ML or ModernBERT + BiLSTM will work great and will be cheap to train and run.
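Here is a minimal sketch of that route, assuming you already have the labeled profiles. The profile texts and labels below are toy stand-ins, logistic regression stands in for "classical ML", and ModernBERT needs a recent transformers release.

    import torch
    from transformers import AutoTokenizer, AutoModel
    from sklearn.linear_model import LogisticRegression

    MODEL_ID = "answerdotai/ModernBERT-base"
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)

    def embed(texts):
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            out = model(**batch)
        # Mean-pool the last hidden state into one vector per text.
        mask = batch["attention_mask"].unsqueeze(-1)
        return ((out.last_hidden_state * mask).sum(1) / mask.sum(1)).numpy()

    # Toy stand-ins for ~2000 hand-labeled profiles (1 = hostile, 0 = not hostile).
    profiles = [
        "Posts 20 political outrage articles a day and picks fights in replies.",
        "Shares photos of birds, hiking trips, and the occasional recipe.",
    ]
    labels = [1, 0]

    clf = LogisticRegression(max_iter=1000).fit(embed(profiles), labels)
    print(clf.predict(embed(["Nonstop rage-bait quote posts about the news."])))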
"Correct factual arguments" is hard. If you prompt ChatGPT or something similar and ask "Is this true?" it will work, I dunno, 90% of the time. A machine that can determine the truth isn't a machine, it's a god.
There are probably signs of "misdirecting" that ModernBERT can see. "Not straying away" could be addressed by a Siamese network classifier that takes a pair of documents and tests "Is B relevant to A?" You could synthesize training data for that by pairing posts with their real replies and also pairing them with random documents.
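A sketch of just that data-synthesis idea, with a logistic regression over paired embeddings standing in for the Siamese network, a small sentence-transformers model standing in for the encoder, and toy (post, reply) data standing in for real threads:

    import random
    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder

    # Toy (post, reply) pairs standing in for real thread data.
    threads = [
        ("Why is my Postgres query slow?", "Check whether the planner uses the index."),
        ("Any good hiking trails near Denver?", "Try the Chautauqua loop at sunrise."),
    ]

    pairs, labels = [], []
    for post, reply in threads:
        pairs.append((post, reply))        # real reply -> relevant (1)
        labels.append(1)
        _, other_reply = random.choice([t for t in threads if t[0] != post])
        pairs.append((post, other_reply))  # random document -> irrelevant (0)
        labels.append(0)

    def featurize(a, b):
        ea, eb = encoder.encode([a, b])
        # Both embeddings plus their difference, a common Siamese-style feature.
        return np.concatenate([ea, eb, np.abs(ea - eb)])

    X = np.array([featurize(a, b) for a, b in pairs])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict([featurize("Why is my Postgres query slow?",
                                 "Try the Chautauqua loop at sunrise.")]))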
Is it though?
> A machine that can determine the truth isn't a machine, it's a god.
Irrelevant exaggeration.
---
Also, I am not asking how you would do it. I'm asking why you think nobody has done it publicly. You're still avoiding the question.
It's an obvious application of the technology, one that seems hard to get working, but obvious.
I don't see how this is so far-fetched. If you do, you failed to explain it.