The abstract on the ResearchGate page is garbled (equations are duplicated) and doesn't match the PDF.
Some of the author's other publications are shown on his Google Scholar page; in all cases Voltes is the sole author: https://scholar.google.com/citations?user=uA1sptcAAAAJ&hl=en
On an unrelated note, I guess that respected journals are receiving a significant number of papers written with varying amounts of AI assistance. I never envied or enjoyed reviewing papers, and would enjoy such activities even less now.
vicentesteve · 12h ago
I appreciate the comment, and I understand what you're trying to say.
Just to clarify where I'm coming from: I'm a student, not a professional researcher (yet, anyway), and I'm trying to get into this field step by step. I don't have much of a network or people around me to help share my work, so publishing it myself is pretty much the only way I can get feedback from others who are into this sort of thing.
Also, I realize grades and university matter, but I think that trying, even if the result isn't perfect, has to count for something as well. Publishing this is a way of learning, improving, and maybe getting closer to being part of a real research team someday.
You're correct as far as the abstract on ResearchGate is concerned — I didn't realize it got scrambled when I copied it over from the LaTeX version. Thank you for pointing it out to me, I'll fix it.
And sure, I know it sounds odd that I'm the only author, and I understand why you'd question it. But I'm not claiming this is evidence of anything; I'm just presenting something I've been working on and hoping people can point out where there might be something to it, or where it breaks down. That's the entire premise.
As for AI: this was not written by AI. It's just me. Sure, I used LaTeX and a PDF editor, but the writing and the thinking are mine.
Thanks again for reading. If you have any ideas on the actual content, I'd really love to hear them.
lightandlight · 11h ago
I'm encouraged by what you've shared here.
My impression of this topic is that it attracts a lot of cranks: "The secrets of the universe have been revealed, and you are all blind!"
Sounds like you're open to feedback and the possibility of being wrong.
My main concern is that it will be hard for you to find people who could give you excellent feedback on this, because they probably have epistemic shields up around this topic due to all the surrounding crankery. I don't know how to solve this, but I wish you luck.
vicentesteve · 14h ago
Hi everyone,
I’m an independent researcher and recently completed a formal, self-contained proof that P ≠ NP. The result is based on an explicit language L* ∈ NP \ P that resists all polynomial-time machines via a certified local test. The core techniques combine locally testable codes, circuit resistance, and diagonalisation.
Unlike many previous attempts, the proof avoids relativization, natural proofs, and algebrization, and relies entirely on constructive combinatorics and meta-complexity arguments. It has been submitted to the Journal of the ACM and also made publicly available here:
ResearchGate (PDF, with DOI): https://doi.org/10.13140/RG.2.2.14071.74408
I'm sharing this for discussion and feedback from the theoretical CS and cryptography communities. Constructive criticism is welcome; I'd genuinely like to know if any gaps remain. The paper is intended to be rigorous and elementary in its construction.
Thanks in advance, Vicent Esteve Voltes https://orcid.org/0009-0003-1371-7561
It seems the PDF and associated DOI have been removed.
Is there a reason you haven’t put it on ArXiv?
vicentesteve · 11h ago
Thanks for pointing that out. I’m not sure what happened with the DOI — I’m already in contact with ResearchGate to get that fixed. But the PDF is still available and can be viewed or downloaded directly from the ResearchGate page, just below the summary.
As for arXiv: submitting there requires an endorsement, and since I'm new and haven't published much yet, I haven't been able to get one. I hope that will change down the road; I'd really like to submit it there when it's done.
akkartik · 13h ago
I'm able to open the PDF.
akkartik · 11h ago
Did you really (pre)publish these 3 papers in just the last couple of months? https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=756...
I have no idea if the proof is correct or not, but my first impression is that it does not sound bogus, like so many other attempts.
immibis · 12h ago
Proving that parsing the input takes time polynomial in the size of the input is a red flag to me. Normally this is so obvious it's not worth stating.
It's also wrong (I think). It says skipping over l bits is O(l) but an actual Turing machine would have to go to the first unskipped bit and mark it skipped, then go back to the counter at the beginning and decrement it, and so on, making it O(l^2).
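Roughly, that quadratic bound comes from summing the walk-back distances (a back-of-the-envelope estimate, assuming a single-tape machine with the counter at the left end): the k-th skipped bit sits about k cells from the counter, so the total cost is sum_{k=1}^{l} Θ(k) = Θ(l(l+1)/2) = Θ(l^2).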
This is irrelevant since it's more sensible to give the input in a Turing-machine-friendly format when feeding it to a Turing machine.
Bigger red flag: I don't understand why the verifier has to take s in the certificate and evaluate Hn*(s) when Hn*(s) is already given as input. Also r is unused except in an edge case where it also wouldn't be needed? So basically both parts of the certificate are redundant according to the algorithm description and you could run the deterministic verifier without a certificate, meaning the problem is in P if this verifier algorithm is correct.
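To spell that out (assuming the verifier's answer really doesn't depend on the certificate, as the algorithm description seems to say): then x ∈ L* ⟺ ∃c V(x, c) = 1 ⟺ V(x, ε) = 1, so the deterministic polynomial-time machine M(x) := V(x, ε) already decides L*, i.e. L* ∈ P and the claimed separation collapses.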
Another gaping flaw is that the LTC verification (I don't really understand this so will assume it's correct) tests a random sample of q bits yet the machine is said to be deterministic. Deterministic machines don't randomly sample things and have probabilities of getting the right answer.
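For reference, one standard formulation: a code C ⊆ {0,1}^n is (q, δ, ε)-locally testable if there is a randomized tester that queries at most q positions of a word w, accepts with probability 1 when w ∈ C, and rejects with probability at least ε when w is at relative distance at least δ from C. The randomness is doing real work there: a tester that always reads the same q positions accepts any word that agrees with some codeword on those positions, which generally includes words that are far from the code everywhere else.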
Also the diagonalization procedure, to find an input that each decider gets wrong, is to just loop over all possible inputs and feed them to the decider and see if it gets it wrong??? Maybe I skipped over the part where you proved that would actually happen, but otherwise it's nonsense to just assume it's impossible for the decider to get them all right.
Edit: that part was section 4.1. In section 4.1 aren't you assuming that not only has the halting problem been solved, but that all machines halt?
This comment isn't meant to be an exhaustive list.
vicentesteve · 11h ago
Thank you very much for taking the time to read it; I really appreciate such a thoughtful and thorough comment.
You're right on a couple of things. For instance, I probably overstated the time for parsing in a way that makes it sound like it's part of the actual hardness, when really it was just something I wanted to make explicit to be precise. And the point about randomness in the LTC part is valid — the verifier itself is fully deterministic and uses a fixed check index r, but the LTC construction it relies on is based on randomness. I see now that this wasn’t clearly separated in how I wrote it.
About the certificate: I get why it might look redundant at first. The key idea is that the verifier doesn’t go looking for a parity check that fails — it just verifies the one that’s provided. So r isn’t there because the verifier couldn’t find a violation, but because it doesn’t try to; it just checks whether the one it's told about actually fails. That said, I completely agree it could be better explained.
The diagonalization part definitely needs more clarity too. You're right: just looping over inputs only works if you can guarantee that failure will eventually happen — and that’s supposed to come from the circuit-counting bound. I’ll go back and make that implication and its formal reasoning much more explicit.
As for the machines halting — that’s handled by padding every D_i so it always halts within a fixed polynomial time (as mentioned in Lemma 4.1), but again, I agree that’s not emphasized enough and should be stated more clearly.
Overall, your feedback is incredibly helpful — not just in pointing out specific issues, but also in showing me how things might come across to someone reading this fresh. I'm already working on revising the affected sections, and I’d honestly be glad to hear any further thoughts you (or others) might have. Thanks again!
immibis · 44m ago
> The key idea is that the verifier doesn’t go looking for a parity check that fails — it just verifies the one that’s provided. So r isn’t there because the verifier couldn’t find a violation, but because it doesn’t try to; it just checks whether the one it's told about actually fails.
Then you failed. When using the certificate and verifier definition of NP, if the correct answer to the problem instance is false, there must be no certificate that makes the verifier return true.
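In symbols, the verifier definition says L ∈ NP iff there are a polynomial p and a polynomial-time verifier V such that x ∈ L ⟺ ∃c with |c| ≤ p(|x|) and V(x, c) = 1. In particular, x ∉ L forces V(x, c) = 0 for every certificate c; soundness has to hold against all certificates, including malformed or adversarial ones.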
A nondeterministic Turing machine is one that can clone itself and take two different paths without taking extra time. The copies can also make copies so it can clone as many times as it needs to. Copies that return false delete themselves, and if ANY copy returns true the overall answer is true (and we can delete all the copies). Any problem that can be solved by one of these is in NP. That's the definition of NP.
Equivalently, if we already know which one is going to return true, each time the machine wants to clone itself, we can tell the machine which copy is going to return true, and then it either continues being itself or pretends to be the clone, whichever one we told it, without actually cloning. Then we can run this on a normal Turing machine that doesn't clone itself. Of course, if we tell it wrong, it will return false even though it could have returned true if we told it right. This leads to the certificate definition of NP. The data we tell the machine about "the right way to pretend to clone" is the certificate.
Both are equivalent ways to define which problems are NP. The cloning version is the actual definition, but the certificate version is often more convenient, because, ya know, actual computers don't clone themselves.
If the answer to the problem is false, in the cloning version, all copies of the machine return false. That means in the certificate version, which only runs one copy, it will return false no matter which copy we choose, i.e. no matter which certificate we give it. Even if we give the wrong certificate, it's not possible to get a certificate machine for an NP problem to return true when the correct answer is false.
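To make the two views concrete, here's a toy sketch in Python (SUBSET-SUM; my illustration, nothing to do with the paper's construction):

# "Cloning" view: at each number the machine splits into a copy that skips
# it and a copy that takes it; the instance is a yes-instance iff any copy
# hits the target.
def subset_sum_nondet(nums, target):
    def explore(i, total):
        if i == len(nums):
            return total == target
        return explore(i + 1, total) or explore(i + 1, total + nums[i])
    return explore(0, 0)

# Certificate view: a bit string tells one deterministic machine which
# branch the accepting copy would have taken; checking it takes polynomial time.
def subset_sum_verify(nums, target, cert):
    return sum(n for n, bit in zip(nums, cert) if bit) == target

nums = [3, 7, 12, 5]
print(subset_sum_nondet(nums, 15))                # True: 3 + 12 = 15
print(subset_sum_verify(nums, 15, [1, 0, 1, 0]))  # True: the right certificate
print(subset_sum_verify(nums, 15, [1, 1, 0, 0]))  # False: a wrong certificate
print(subset_sum_nondet(nums, 100))               # False: a no-instance
print(any(subset_sum_verify(nums, 100, [b0, b1, b2, b3])
          for b0 in (0, 1) for b1 in (0, 1)
          for b2 in (0, 1) for b3 in (0, 1)))     # False: no certificate works

The last line is the point: on a no-instance, no choice of certificate makes the verifier accept.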
Your verifier machine is claimed to be a certificate machine that solves your problem, but you also say it sometimes returns true when the correct answer is false, if you give the wrong certificate. So there's something wrong with the setup here. I think the verifier machine is just wrong as it doesn't do what it needs to do to prove what you think it's proving.