Show HN: Regolith – Regex library that prevents ReDoS CVEs in TypeScript
Why: Many CVEs happen because TypeScript libraries are vulnerable to Regular Expression Denial of Service attacks. I learned about this problem while doing undergraduate research and found that languages like Rust have built-in protection but languages like JavaScript, TypeScript, and Python do not. This library attempts to mitigate these vulnerabilities for TypeScript and JavaScript.
How: Regolith uses Rust's Regex library under the hood to prevent ReDoS attacks. The Rust Regex library implements a linear-time Regex engine that guarantees linear complexity for execution. A ReDoS attack occurs when a malicious input is provided that causes a normal Regex engine to check for a matching string in too many overlapping configurations. This causes the engine to take an extremely long time to compute the Regex, which could cause latency or downtime for a service. By designing the engine to take at most a linear amount of time, we can prevent these attacks at the library level and have software inherit these safety properties.
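To make the attack concrete, here is a minimal illustration (not code from Regolith) of the classic kind of pattern a linear-time engine guards against: nested quantifiers that let a backtracking engine split the input in exponentially many ways.

```typescript
// Classic ReDoS-prone pattern: the run of 'a's can be split between the
// inner and outer quantifier in roughly 2^n ways, and on a non-matching
// input a backtracking engine tries all of them before failing.
const evil = /^(a+)+$/;

console.log(evil.test("aaaa"));  // true  (matches instantly)
console.log(evil.test("aaaa!")); // false (fast here only because n is tiny)

// With n around 30, rejecting "a".repeat(n) + "!" can take minutes in a
// backtracking engine, while a linear-time engine (like Rust's) stays
// proportional to the input length.
```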
I'm really fascinated by making programming languages safer and I would love to hear any feedback on how to improve this project. I'll try to answer all questions posted in the comments.
Thanks! - Jake Roggenbuck
Another thought: since backreferences and lookaround are the features in JS regexes which _cause_ ReDOS, why not just wrap vanilla JS regex, rejecting patterns including them? Wouldn’t that achieve the same result in a simpler way?
This lets you have full backwards compatibility in languages like Python and JS/TS that support backreferences/lookarounds, without running any risk of DOS (including by your own handrolled regexes!)
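A sketch of what such a wrapper might look like — `safeRegex` and its detection pattern are hypothetical, and this naive scan ignores escaping and character-class edge cases that a real pattern parser would need to handle:

```typescript
// Hypothetical pattern-rejecting wrapper: refuse to compile any pattern
// that appears to use backreferences (\1..\9, \k<name>) or lookaround
// ((?=, (?!, (?<=, (?<!). NOTE: this scan is deliberately naive -- a
// production version would parse the pattern properly.
const UNSAFE = /\\[1-9]|\\k<|\(\?=|\(\?!|\(\?<=|\(\?<!/;

function safeRegex(pattern: string, flags?: string): RegExp {
  if (UNSAFE.test(pattern)) {
    throw new Error(`Pattern uses backreferences/lookaround: ${pattern}`);
  }
  return new RegExp(pattern, flags);
}

console.log(safeRegex("[a-z]+").test("hello")); // true -- plain pattern is allowed
// safeRegex("(a+)\\1");     // would throw
// safeRegex("foo(?=bar)");  // would throw
```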
And on modern processors, a suitably implemented check for a timeout would largely be branch-predicted to be a no-op, and would in theory result in no measurable change in performance. Unfortunately, the most optimized and battle-tested implementations seem to have either taken the linear-time NFA approaches, or have technical debt making timeout checks impractical (see comment in [0] on the Python core team's resistance to this) - so we're in a situation where we don't have the best of both worlds. Efforts like [1] are promising, especially if augmented with timeout logic, but early-stage.
[0] https://stackoverflow.com/a/74992735
[1] https://github.com/fancy-regex/fancy-regex
> why not just wrap vanilla JS regex, rejecting patterns including them?
Yeah! I was actually thinking about this too, and it would solve the problem of being server-side only. I'm thinking about making a new version that does just this.
For a pattern-rejecting wrapper, how would you want it to communicate that an unsafe pattern has been created?
This is incorrect. Other features can cause ReDOS.
The other problematic features have linear-time algorithms that could be used, but generally are not (I assume for better average-case performance).
.*,.*,.*,.*,.* etc.
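As a sketch of that point (not a benchmark): five unrestricted `.*` pieces separated by literal commas use no backreferences or lookaround at all, yet rejecting an input costs roughly one attempt per way of distributing the input's commas among the pattern's comma literals — about C(n, 4), i.e. on the order of n^4.

```typescript
// No backreferences, no lookaround -- just .* and literal commas.
// On a non-matching input, a backtracking engine tries every assignment
// of the input's commas to the pattern's four literal commas before
// giving up: polynomial, not exponential, but still a real ReDoS.
const poly = /^.*,.*,.*,.*,.*;$/;

console.log(poly.test("a,b,c,d,e;"));   // true
console.log(poly.test(",".repeat(40))); // false, after ~C(40,4) ≈ 91k backtracks
```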
I believe a timeout is a better (simpler) solution than trying to prevent 'bad' patterns. I use this approach in my own (tiny, ~400 lines) regex library [2]: a limit of at most ~100 operations per input byte. That way there's no need to measure wall-clock time, which can be inaccurate.
[1]: https://stackoverflow.com/questions/2667015/is-regex-too-slo...
[2]: https://github.com/thomasmueller/bau-lang/blob/main/src/test...
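The step-budget idea can be sketched like this — a toy backtracking matcher for literals, `.`, and `*` only, not the linked library's actual code:

```typescript
// Toy backtracking matcher with a step budget: it counts every state it
// explores and aborts once it has spent more than OPS_PER_BYTE steps per
// input character, so pathological patterns fail loudly instead of hanging.
const OPS_PER_BYTE = 100;

class BudgetExceeded extends Error {}

function match(pattern: string, input: string): boolean {
  let ops = 0;
  const budget = OPS_PER_BYTE * Math.max(1, input.length);

  function at(p: number, s: number): boolean {
    if (++ops > budget) throw new BudgetExceeded("step budget exhausted");
    if (p === pattern.length) return s === input.length;
    const star = pattern[p + 1] === "*";
    const first =
      s < input.length && (pattern[p] === "." || pattern[p] === input[s]);
    if (star) {
      // Try matching zero occurrences first, then consume one and retry.
      return at(p + 2, s) || (first && at(p, s + 1));
    }
    return first && at(p + 1, s + 1);
  }

  return at(0, 0);
}

console.log(match("a*b", "aaab")); // true
console.log(match("a*b", "aaac")); // false
```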
This paragraph in particular seems very wholesome, but misguided in light of the tradeoff:
Honestly, the biggest shock here for me is that Rust doesn't support these. Sure, Python has yet to integrate the actually-functional `regex`[3] into stdlib to replace the dreadfully under-specced `re`, but Rust is the new kid on the block! I guess people just aren't writing complex regexes anymore...[4]

RE: simpler wrapper, I personally don't see any reason it wouldn't work, and dropping a whole language seems like a big win if it does. I happened to have some scaffolding on hand for the cursed, dark art of metaregexes, so AFAICT, this pattern would work for a blanket ban: https://regexr.com/8gplg

Ironically, I don't think there's a way to A) prevent false positives on triple backslashes without using lookarounds, or B) highlight the offending groups in full without backrefs!
[1] https://www.regular-expressions.info/backref.html
[2] https://www.regular-expressions.info/lookaround.html
[3] https://github.com/mrabarnett/mrab-regex
[4] We need a regex renaissance IMO, though the feasibility of "just throw a small fine-tuned LLM at it" may delay/obviate that for users that can afford the compute... It's one of the OG AI concepts, back before intuition seemed possible!
There is no TypeScript RegExp, there is only the JavaScript RegExp as implemented in various VMs. There is no TypeScript VM, only JavaScript VMs. And there are no TypeScript CVEs unless it's against the TypeScript compiler, language server, etc.
I have a foggy recollection of compute times exploding for me on a large regex in .NET code, and I used a feature I hadn't seen in JavaScript's RegExp that let me mark off already-matched sections of the regular expression so the engine wouldn't backtrack into them.
Perhaps the answer isn't removing features for linear regex, but adding more features to make it more expressive and tunable?
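The .NET feature described sounds like atomic groups (`(?>...)`). JavaScript has no native syntax for them, but because ES lookaheads are themselves atomic, the well-known `(?=(p))\1` trick can emulate them — ironically, using exactly the lookaround and backreference features discussed above:

```typescript
// Non-atomic: a+ gives back one 'a' on backtracking, so the trailing
// literal 'a' can still match.
console.log(/^(a+)a$/.test("aaaa"));       // true

// Atomic emulation: the lookahead captures a+ greedily and, once it has
// succeeded, cannot be re-entered. \1 then consumes everything that was
// captured, leaving nothing for the trailing 'a' -- the match fails fast
// instead of retrying shorter captures.
console.log(/^(?=(a+))\1a$/.test("aaaa")); // false
```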
I think many people are annoyed with ReDoS as a bug class. It seems like mostly noise in the CVE trackers, library churn, and badge collecting for "researchers". It'd be less of a problem if people stuck to filing CVEs against libraries that might remotely see untrusted input rather than scrambling to collect pointless "scalps" from every tool under the sun that accepts a configuration regex - build tools, very commonly :(
Perhaps you can stop this madness... :)
I totally agree here. Safety can and should be from the language itself.