Ask HN: Even with AGI, it wouldn't know what you know. Can we preserve that?

1 point by consumer451 | 7/23/2025, 11:07:36 PM | 3 comments
Even if we had true AGI today, and even if that AGI attained top 0.001% human-level intelligence as a product, that product would still not have your experience and knowledge of a given project or domain. It could not replace you.

That AGI didn't experience what you have experienced. How could it, without recording all of your audio and visual input?

Clearly, I am not the first person to think of this. My question: is it important for us to fight the inevitable attempts to capture all of our audio/visual existence? Or is this a Luddite concept in 2025?

Comments (3)

alganet · 22h ago
I don't understand what you are saying.

Is it some elaboration on "It's useless, humans are better, therefore stop trying to collect data"?

If it is, it is more of an assertion than a question. Those are rather common, and there isn't much to say about them.

If it's not, you should express your doubts more clearly. Explain in detail where you're coming from.

bigyabai · 1d ago
In all likelihood, you individually do not live an interesting or successful enough life to be good training data. Data brokers would probably struggle to sell AI companies your personal data.
consumer451 · 1d ago
My post, which was likely not clear, is not about training models. It is about an AI product that could replace everything your brain does. Not just work, but all of your decision-making [0].

If all of your work and knowledge is done via text like Slack or email, or via transcribed online meetings, then replicating it seems relatively easy in the long term. That problem is not that hard, conceptually. However, the AI would still not know some anecdote you heard over beers from someone else, which is how multiple AI concepts were discovered.

If we continue that idea and offload not just work but all of our thinking to a virtual you, it would require A/V data capture from birth, right?

[0] The reason this came to mind is that today I presented a new LLM-enabled feature in my SaaS product to my beta users. One user quickly and unironically responded, "This is awesome! I don't have to use my brain for this anymore!" That scared the crap out of me, and my post here is the generalization of that idea.