On behalf of the OpenBao project, I welcome collaboration with future researchers. We were not informed of these vulnerabilities before HashiCorp posted their usual CVE bulletins, which is disappointing. (Especially as HashiCorp's Vault no longer has an Open Source edition ;-)
We've triaged ourselves as affected by 8 of the 9 CVEs (in fixing an earlier Cert Auth vulnerability, we had already remediated the ninth) and have merged patches for most of them.
Happily, the community has done some great work remediating these, and I'm very appreciative of it.
I'm most excited about the audit changes: this was the impetus we needed to make audit devices configuration-driven in the next release series. Leaving audit devices (which, as a reminder, have a socket mode that can make arbitrary TCP calls!) open to API callers is rather unsafe, even with the prefix being limited.
(Edit: And of course it goes without saying, but we're more than happy to accept contributions to the community -- code, docs, technical, or otherwise!)
procaryote · 4h ago
> This default is 30 seconds, matching the default TOTP period. But due to skew, passcodes may remain valid for up to 60 seconds (“daka” in Hebrew), spanning two time windows.
Wait, why would I care this is "daka" in Hebrew? Is this a hallucination or did they edit poorly?
tecleandor · 3h ago
Also... what is "daka"? 60 seconds? Passcodes that remain valid for two time windows? I've been checking the dictionary, and "daka" might mean "minute".
Yeah, it read slightly weird before I got to that point, and then it was obvious it was AI slop.
neom · 4h ago
Maybe just being cute. Author is Yarden Porat from Cyata, an Israeli cybersecurity company.
onecommentman · 2h ago
So perhaps he's using AI writing tools to polish his English, since English may not be his first language and he doesn't want stumbling around English syntax to get in the way of his message.
It may become an English writing style we all have to get used to from non-native English speakers and an actual valid use case for current AI. I know I’d use AI this way when writing something important in a language I’m semi-fluent in. I already use search engines to confirm the proper use and spelling of fashionably popular foreign phrases, instead of an online dictionary.
mike_hearn · 3h ago
Impressive. It's worth reading despite the slight AI sheen to the writing, as it's unusually informative relative to most security articles. The primary takeaway from my POV is to watch out for "helpful" string normalization calls in security sensitive software. Strings should be bags of bytes as much as possible. A lot of the exploits boil down to trying to treat security identifiers as text instead of fixed numeric sequences. Also, even things that look trivial like file paths in error messages can be deadly.
progbits · 3h ago
My take on the normalization is that it happens in the wrong place: you should not do it ad hoc.
If your input from user is a string, define a newtype like UserName and do all validation and normalization once to convert it. All subsequent code should be using that type and not raw strings, so it will be consistent everywhere.
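A minimal Go sketch of that newtype pattern (the name `UserName` and the exact validation rules here are illustrative, not Vault's actual code):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// UserName is a distinct type: once a value of this type exists, it has
// already been validated and normalized, so downstream code never needs
// to (and never should) touch the raw string again.
type UserName string

var ErrBadUserName = errors.New("invalid username")

// NewUserName is the single place where validation and normalization
// happen; everything past the API boundary works with UserName only.
func NewUserName(raw string) (UserName, error) {
	s := strings.ToLower(strings.TrimSpace(raw))
	if s == "" || len(s) > 128 {
		return "", ErrBadUserName
	}
	return UserName(s), nil
}

func main() {
	u, err := NewUserName("  Admin ")
	fmt.Printf("%q %v\n", u, err) // "admin" <nil>
}
```

Because rate limiters, lockout counters, and alias lookups would all receive a `UserName` rather than a raw string, the cross-component case-permutation mismatches described in the writeup can't arise.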
Normal_gaussian · 2h ago
It's ridiculous that we haven't been aggressively boxing login credentials for decades at this point. This kind of issue was well discussed when I did my degree well over a decade ago.
benterix · 2h ago
Yeah, I tolerated the AI tint in this article only because it was very informative otherwise.
shahartal · 2h ago
Hey all — authors of Vault Fault here (I’m Shahar, CEO at Cyata), really appreciate all the thoughtful comments.
Just to clarify - all the vulnerabilities were found manually by a very real human, Yarden Porat.
The writeup was mostly human-written as well, just aimed at a broader audience - which explains the verbosity. We did work with a content writer to help shape the structure and flow, and I totally get that some parts read a bit “sheeny.”
Feedback noted and appreciated - and yep, there’s more coming :)
btw, likely missed with the direct link - we also found a pre-auth RCE in CyberArk Conjur - cyata.ai/vault-fault
v5v3 · 2h ago
Going by the naming, this was reported to Hashicorp prior to 10th June?
And as it's now August, is it redacted as not fixed yet? Why not?
We had earlier pulled support for pre-Vault-1.0 userpass pre-bcrypt hashing (so there's no longer a timing difference there that could be used for enumeration), and using cache busting on lookup should also ensure consistency across storage layers. Plus, we normalized the remaining error messages up to the point when the user's credential is fully validated as correct.
neuralkoi · 3h ago
In non-CA mode, an attacker who has access to the private key of a pinned certificate can:
Present a certificate with the correct public key
Modify the CN in the client certificate to any arbitrary value
Cause Vault to assign the resulting alias.Name to that CN
I agree that this is an issue, but if an attacker has access to the private key of a pinned certificate, you might have some bigger issues...
themk · 52m ago
I've run Vault for a long time, and none of this surprises me. I've even reported some of these to Hashicorp in the past, along with other equally shocking bugs.
The code base is an absolute mess.
The number of bugs and weird edge cases I've found with my quickcheck property testing of their API is shocking, and makes me think their test suites are woefully inadequate.
cipherboy · 46m ago
OpenBao, under the Linux Foundation's OpenSSF, is making meaningful improvements to the code. I'd love to have high-quality reports, if you're willing to revisit these. :-)
fidotron · 45m ago
> The code base is an absolute mess.
This is an understatement, and honestly when I saw it the first time it was enough to make me wonder about all things Hashicorp.
neom · 4h ago
The post covers 9 CVEs (May-June 2025)
(Full chain from default user > admin > root > RCE):
Something feels odd reading the article. It's so verbose like it's trying to explain things like the reader is 5yo.
plantain · 4h ago
AI written, or edited.
Cthulhu_ · 3h ago
I'd say edited. I did wonder if they used AI to find the issues in the first place, but then they would have bragged about that front and center and pivoted to an AI-first security company within seconds. Then again, maybe they used AI to help them map out what happens in the code, even though it's Go code and it should be pretty readable / obvious what happens.
That said, I think it's weird; the vulnerabilities seem to have been found by doing a thorough code review and comprehension, why then cut corners by passing the writeup through AI?
benterix · 2h ago
I don't think they would brag about it if they were found by AI, but based on their description I suspect most of this work was definitely done by LLMs, and then checked by humans.
benterix · 2h ago
It was definitely edited by AI or written on the basis of initial information. Which is a pity because I'd love to see the original, it has more value for me.
natebc · 1h ago
This sentiment sums up why i dislike the broad use of LLMs and generative words/art/music. Genuine Human work has more value to me than anything generated by a computer.
I like humans. I've even loved a few. I like what humans do; warts, typos and awkward phrasing included.
technion · 2h ago
I generally dont like seeing these "blind username enumeration" type issues.
It's nearly always possible to get usernames elsewhere; they are basically public, and the private part is the key and any MFA token. Usernames can get locked out, but the workaround of making user-enumeration sprays always burn CPU hashing time, delaying password checks, doesn't seem like a step forward.
Edit: replaced link with link to HN post, not the article in that post.
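For reference, the usual shape of that workaround is below. This is a sketch, not Vault's or OpenBao's actual code, with sha256 standing in for bcrypt to stay dependency-free: every login walks the same hash-and-compare path whether or not the user exists, so response timing doesn't reveal which usernames are valid.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// users maps username -> sha256(password). A real system would use
// bcrypt or argon2; sha256 keeps this sketch dependency-free.
var users = map[string][32]byte{
	"alice": sha256.Sum256([]byte("hunter2")),
}

// dummyHash is compared against when the user doesn't exist, so the
// code path (and roughly the time spent) is the same either way.
var dummyHash = sha256.Sum256([]byte("dummy"))

func login(user, pass string) bool {
	stored, ok := users[user]
	if !ok {
		stored = dummyHash // still do the full comparison below
	}
	h := sha256.Sum256([]byte(pass))
	match := subtle.ConstantTimeCompare(stored[:], h[:]) == 1
	return ok && match
}

func main() {
	fmt.Println(login("alice", "hunter2"), login("bob", "hunter2"))
}
```

This is exactly the "burn CPU for nonexistent users" cost technion describes: the dummy comparison buys timing uniformity at the price of doing real hashing work for every spray attempt.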
edoceo · 4h ago
But does it affect Bao? Could test there since they are so closely related.
satoqz · 4h ago
OpenBao maintainer here - the majority of these do affect us, more or less. Unfortunately, it seems that we did not receive any prior outreach regarding these vulnerabilities before publication... make of that what you will. We've been hard at work the past few days trying to get a security release out, which will likely land today.
Scandiravian · 4h ago
Thanks for the great work and swift communication
I'm very disappointed to hear that the researchers did not disclose these findings to the OpenBao project before publishing them, so you now have to rush a release like this
Will you reach out to the researchers for an explanation after you've fixed the issues?
wafflemaker · 4h ago
I can explain* researchers (and myself, though have nothing to do with it):
We both learned about OpenBao today.
explanation ≠ excuse
Scandiravian · 4h ago
Thank you for the explanation. It's obviously not great that this was missed, but finger-pointing now doesn't really help anyone, so I'll focus on what seems to me like the root issue
My impression is that there is an information gap about forked projects that led to this issue
I'm on vacation right now, but when I'm back I'll try to set up a small site that lists forks of popular projects, with maybe some information on when each project was forked
Hopefully something like that can make it more likely that these things are responsibly disclosed to all relevant projects
Scandiravian · 4h ago
It sounds like these issues are from before the fork, in which case they will be
It also doesn't sound like the researchers made an effort to safely disclose these findings to the OpenBao project before publishing them, which I think would have been the right thing to do
I mean, this is kinda what you expect from software written in Go, right? The point of Go is to make it so that below-average programmers can write roughly average code, with the tradeoff that above-average programmers can't easily add type-system safety or create abstractions to protect the software from reverting to the mean.
Like, of course a language that sucks for writing parsers will end up with a ton of bugs that would have been fixed by parsing and normalizing all input ASAP. But no, in Go and JavaScript the average type is a string, so you can "ToLower" deep in the code instead of only during parsing, after which it should no longer have been a string type.
neomantra · 1h ago
Quite the hot take on Golang LOL. These were logic and flow errors that could have emerged with any language. These bugs were teased out with deep introspection.
The second paragraph seems more like design issues than a language issue. That said, I’d certainly rather write a parser in Golang than JavaScript, especially once one brings up type safety.
maxall4 · 5h ago
Mmm AI writing gotta love it… /s
markasoftware · 5h ago
it really does have that AI writing style, and these are the sorts of bugs I imagine an AI could have found...I wonder if that's what they did (though they claim it was all manual source code inspection).
darkwater · 4h ago
Having the blog post explaining the findings written - or aided - by an AI doesn't necessarily mean that the findings themselves were found using AI.
Edit: even if the TLD they use is .ai and they heavily promote themselves as revolutionary AI security firm yadda yadda yadda
neomantra · 3h ago
From reading it and mostly from the introduction, it felt like they rolled up their sleeves and really dug into the code. This was refreshing versus the vibe-coding zeitgeist.
I would be curious what AI tools assisted in this and also what tools/models could re-discover them on the unpatched code base now that we know they exist.
Cthulhu_ · 3h ago
I can imagine they could have used AI to analyze, describe and map out what exactly happens in the code. Then again, it's Go, following the flow of code and what exactly is being checked is pretty straightforward (see e.g. https://github.com/hashicorp/vault/blob/main/vault/request_h... which was mentioned in the article)
benterix · 2h ago
> ...I wonder if that's what they did (though they claim it was all manual source code inspection).
Give me one reason why they would do it by hand if they can automate it as much as possible. Vulnerability research is an area without any guarantees: you can spend months looking for bugs and find nothing. These guys are not stupid; they used LLMs to find whatever they could, they probably explored more blind alleys than we will ever know, and then got very good results. Many other companies are doing the same.
tiedemann · 4h ago
TLDR: string parsing is hard, and most of us are vulnerable to bad assumptions and/or never get around to doing those fuzz tests properly when checking that input is handled correctly.
procaryote · 4h ago
A lot of these are on the pattern of normalising input as late as possible, which is an odd choice for a security product.
Cthulhu_ · 3h ago
I'd argue it's odd that they (or LDAP) normalise input in the first place. I can sort of understand username normalization to avoid having both "admin" and "Admin" accounts, but that check only needs to be done when creating an account; when logging in, it should not accept "Admin" as valid for the account "admin".
But I'm neither a security person nor have I done much with authentication since my 2000's PHP hobbying. I suspect an LDAP server has to deal with or try and manage a lot of garbage input because of the sheer number of integrations they often have.
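A sketch of the scheme Cthulhu_ describes, with hypothetical helper names: normalize once at account creation to prevent "admin"/"Admin" twins, but require an exact match at login.

```go
package main

import (
	"fmt"
	"strings"
)

// create normalizes once, so "admin" and "Admin" cannot both exist.
func create(accounts map[string]bool, name string) error {
	key := strings.ToLower(name)
	if accounts[key] {
		return fmt.Errorf("account %q already exists", key)
	}
	accounts[key] = true
	return nil
}

// exists does an exact lookup at login time: "Admin" does not
// silently resolve to the "admin" account.
func exists(accounts map[string]bool, name string) bool {
	return accounts[name]
}

func main() {
	acc := map[string]bool{}
	fmt.Println(create(acc, "admin")) // <nil>
	fmt.Println(create(acc, "Admin")) // account "admin" already exists
	fmt.Println(exists(acc, "Admin"), exists(acc, "admin")) // false true
}
```

The design choice: normalization exists only to keep the namespace collision-free at write time, while authentication stays byte-exact, so there's no late normalization step for components to disagree about.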
LtWorf · 3h ago
I mean… it's hashicorp… did you expect sanity?
One of the vault backends has a size limit and so secret keys larger than 2048 bits would not fit. Amazing tool.
compressedgas · 4h ago
I don't see any parsing going on here. They failed to normalize the input values the way that the LDAP server does before applying rate limiting resulting in an effectively higher than expected login attempt rate limit.
> CVE-2025-6010 - [REDACTED]
OpenBao is reasonably confident in our fix: https://github.com/openbao/openbao/pull/1628
CVE-2025-6010 - [REDACTED]
CVE-2025-6004 - Lockout Bypass https://feedly.com/cve/CVE-2025-6004
Via case permutation in userpass auth
Via input normalization mismatch in LDAP auth
CVE-2025-6011 - Timing-Based Username Enumeration https://feedly.com/cve/CVE-2025-6011
Identify valid usernames
CVE-2025-6003 - MFA Enforcement Bypass https://feedly.com/cve/CVE-2025-6003
Via username_as_alias configuration in LDAP
CVE-2025-6013 - Multiple EntityID Generation https://feedly.com/cve/CVE-2025-6013
Allows LDAP users to generate multiple EntityIDs for the same identity
CVE-2025-6016 - TOTP MFA Weaknesses https://feedly.com/cve/CVE-2025-6016
Aggregated logic flaws in TOTP implementation
CVE-2025-6037 - Certificate Entity Impersonation https://feedly.com/cve/CVE-2025-6037
Existed for 8+ years in Vault
CVE-2025-5999 - Root Privilege Escalation https://feedly.com/cve/CVE-2025-5999
Admin-to-root escalation via policy normalization
CVE-2025-6000 - Remote Code Execution https://feedly.com/cve/CVE-2025-6000
First public RCE in Vault (existed for 9 years)
Via plugin catalog abuse
> https://discuss.hashicorp.com/t/hcsec-2025-14-privileged-vau...