Quod natura non dat, artificialis intelligentia (AI) non praestat

vayllon · 5/4/2025, 9:33:07 AM
The title of this article paraphrases the old Latin proverb "Quod natura non dat, Salmantica non praestat," meaning "What nature does not give, Salamanca does not provide." In the same spirit, we could say that artificial intelligence can't make up for what natural, biological intelligence lacks. We're talking about innate abilities like memory, comprehension, or the capacity to learn. Put simply: if someone lacks natural talent, not even ChatGPT can save them.

For those who are not familiar with the University of Salamanca, it is one of the oldest universities in Europe, founded in 1218. The proverb is carved in stone on one of its buildings, which has helped cement its popularity.

And that brings us to the real point of this article: AI won't make us smarter if we don't know how to use it. When it comes to large language models (LLMs), this has everything to do with prompt engineering and context: how we craft our questions, supply context, and provide examples to get meaningful answers, and how we decide whether or not to trust those answers.

Personally, prompt engineering is starting to feel more and more like hypnosis.

When I write complex prompts filled with detailed instructions, I think of those stage magicians who hypnotize people from the audience, telling them how to behave or even who they are (a chicken, say).

With each new version of large language models, this “hypnotic engineering” seems to grow stronger. I wouldn’t be surprised if, in the near future, we start seeing professional “suggesters” —specialists in AI hypnosis through carefully crafted prompts. We might even get new job titles like LLM Hypnotist or AI Whisperer. Imagine movies like The LLM Whisperer—a sequel to The Horse Whisperer.

For instance, in GPT-4.1, we’re already starting to see some highly suggestive prompts that point in this direction. Just an example:

“You are an agent - please keep going until the user’s query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved. You MUST plan extensively before each function call, and reflect extensively on the outcomes of the previous function calls. DO NOT do this entire process by making function calls only,….”
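As a hedged illustration of where instructions like these typically live: in the common chat-completions schema, such "hypnotic" text is sent as a system-role message ahead of the user's question. The sketch below builds only the message structure in plain Python (no API call, and the prompt text is abridged from the quote above), just to make the wiring concrete.

```python
# Illustrative sketch: attaching an agentic system prompt to a chat-style
# request. No network call is made; the structure mirrors the widely used
# "role"/"content" message schema of chat completion APIs.

AGENTIC_SYSTEM_PROMPT = (
    "You are an agent - please keep going until the user's query is "
    "completely resolved, before ending your turn and yielding back to "
    "the user. Only terminate your turn when you are sure that the "
    "problem is solved."
)

def build_messages(user_query: str) -> list[dict]:
    """Assemble the message list: the 'hypnotic' instructions go in the
    system role, the actual question in the user role."""
    return [
        {"role": "system", "content": AGENTIC_SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Find and fix the failing test in my repo.")
```

The point is that the model never sees the system text as "someone else's words": it is simply the first thing in its context, which is exactly why it works like a stage hypnotist's suggestion.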

Not only do we need the skill of a hypnotist to craft these instructions, but we also need the ability of a psychologist to interpret the responses in order to keep the conversation going and even detect hallucinations. In other words, we must be smart enough to use these new tools effectively.

To paraphrase another popular saying: “You must first read, then reflect. Doing so in reverse order is dangerous.” The idea here is that both reading without reflection and reflecting without a knowledge base can lead to bad results.

The same applies when using tools like ChatGPT: we need to know how to ask the right questions, and just as importantly, how to think critically about the answers we get. And this has a lot to do with how much prior knowledge we have about the domain. If we don't know anything about the domain, we'll probably believe whatever the chatbot tells us, and that's when things get really dangerous.

So, in an attempt to hypnotize the audience, I would suggest you cultivate your intelligence, your memory, and your comprehension skills. It's a daily task, like going to the gym. Because if you start delegating your intelligence to ChatGPT and similar tools, you won't have the criteria to use them well. It is well known that if you delegate a skill, you lose it; you can find many examples around you. Please, don't lose your ability to think. It's very dangerous.
