This seems a bit naive. Sure, almost everyone loves LLMs, but there is a small, vocal minority of anti-AI luddites. Most of those holdouts won't know about this, but a few will discover it and tactically weaponize it. If I weren't looking to the future, trying to get in on the ground floor of AI's potential, I'd use llms.txt to send user agents I irrationally dislike to the "crap farm" part of my operation, and direct those I considered shameless parasites who destroyed usable search away from my real, interesting, premium human-generated content.
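To be concrete about the mechanism: since llms.txt is just a static pointer to content, the actual misdirection would happen server-side, by inspecting the User-Agent header. A minimal sketch (the bot tokens are real published crawler names; the paths and function are purely illustrative assumptions):

```python
# Hypothetical sketch of the cloaking described above: route known AI
# crawler user agents to low-value filler pages while humans get the
# real content. Paths are made up for illustration.

AI_CRAWLER_TOKENS = ("GPTBot", "ClaudeBot", "CCBot", "PerplexityBot")

def route_for(user_agent: str) -> str:
    """Return the content path to serve for a given User-Agent string."""
    ua = user_agent.lower()
    if any(token.lower() in ua for token in AI_CRAWLER_TOKENS):
        return "/crap-farm/index.html"   # auto-generated filler for bots
    return "/premium/index.html"         # real human-written content

print(route_for("Mozilla/5.0 (compatible; GPTBot/1.0)"))
print(route_for("Mozilla/5.0 (Windows NT 10.0) Firefox/126.0"))
```

In practice the same check would live in a reverse proxy or middleware rather than application code, but the principle is identical.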
I expect that AI will be able to detect this kind of shenanigans, and figure out ways around it, so maybe it will be no problem at all.