Ask HN: How to Argue Against AI Enthusiasts?
Still, this argument seems irrefutable, and at the same time, when taken seriously, it exerts enormous pressure. It follows a kind of evolutionary logic that can override everything else.
That’s why I wanted to ask: are there people or works that realistically and pragmatically outline the limits of AI, something that can serve as a solid counterpoint to blind optimism? Something that can’t be overridden so easily? I don’t mean the “grand limit cases” like quantum randomness, Gödel’s incompleteness theorems, or similar topics, but rather something much closer to the actual technology: inconsistencies or paradoxes that directly affect neural networks and limit them.
Furthermore, a fundamental guiding strategy or maxim seems to be: “What is the next logical step?” This also makes criticism difficult, because it simulates a kind of logical compulsion, one that elevates a person above all other doubts and relieves them of those doubts. This quickly turns into: “As long as we are following pure logic, we don’t need to worry about anything else.”
I’m also looking for counterarguments to this maxim, from logic, philosophy, and sociology.
Thank you
The only way to truly be critical of AI is to be against it for other reasons, such as its damaging effects on society and its tendency to concentrate wealth at the top without much serious improvement in life for the average person. I think AI is really one of those things that you're either for or against, and there's no middle ground.
I don't like smartphones, professional sports, pickup trucks, anime, or games with pixel art. Please advise on how I can argue against people who are enthusiastic about that kind of stuff.
AI is a bunch of different technologies that have many uses—neural networks, natural language processing, OCR, speech recognition, machine learning, computer vision, image classification, upscaling models, and our favorite new friends "generative pre-trained transformers" (GPT) and "large language models" (LLM) that make up key parts of "generative AI."
Once you make them specify what they're talking about, you can talk about the nature and inherent limitations of the technology.
I like to call GPTs and LLMs "statistical binary string predictors," i.e., given a binary string, predict the expected binary string based on the inputs. It's an amazing technology, don't get me wrong. We're already starting to see the limits, though.
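To make the "statistical predictor" framing concrete, here's a minimal toy sketch of my own (whitespace-separated words stand in for binary strings or real tokenizers; train_bigram and predict_next are made-up names): count which token followed which in the training text, then emit the most frequent continuation.

    from collections import Counter, defaultdict

    def train_bigram(text):
        """Count which token follows which in the training text."""
        tokens = text.split()
        counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, token):
        """Return the continuation seen most often during training."""
        if token not in counts:
            return None  # no statistics for this token, so no prediction
        return counts[token].most_common(1)[0][0]

    model = train_bigram("the cat sat on the mat and the cat ran")
    print(predict_next(model, "the"))  # 'cat' (followed 'the' twice vs. 'mat' once)
    print(predict_next(model, "dog"))  # None: never seen, nothing to predict

A real LLM replaces the raw counts with billions of learned parameters and far longer contexts, but the core operation is still picking a statistically likely continuation of the input.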
Limited context windows. Larger contexts and training sets can mean lower-quality results; more input tokens, lower quality. In some respects, newer models are regressing from earlier ones because they're chasing benchmarks rather than real-world use cases.
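And "limited context window" isn't an abstraction, it's a hard cutoff. A hypothetical sketch (the limit, fit_to_window, and whitespace "tokens" are all made up for illustration):

    MAX_CONTEXT = 8  # toy limit; real models range from thousands to millions of tokens

    def fit_to_window(tokens):
        # Keep only the most recent tokens; anything earlier is silently
        # dropped and can never influence the model's output.
        return tokens[-MAX_CONTEXT:]

    prompt = "never delete the database now summarize this log file for me".split()
    print(fit_to_window(prompt))
    # ['database', 'now', 'summarize', 'this', 'log', 'file', 'for', 'me']
    # "never delete the" has fallen out of the window entirely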
Start to dive into the details. Ask them to acknowledge the problems with LLMs and GPTs. Ask them how they see those problems getting resolved. Most AI fanboys don't understand the technologies involved. Expose that.