Show HN: I modeled the Voynich Manuscript with SBERT to test for structure
The Voynich Manuscript is a 15th-century book written in an unknown script. No one’s been able to translate it, and many think it’s a hoax, a cipher, or a constructed language. I wasn’t trying to decode it — I just wanted to see: does it behave like a structured language?
I stripped a handful of common suffix-like endings (aiin, dy, etc.) to isolate what looked like root forms. I know that’s a strong assumption — I call it out directly in the repo — but it helped clarify the clustering. From there, I used SBERT embeddings and KMeans to group similar roots, inferred POS-like roles based on position and frequency, and built a Markov transition matrix to visualize cluster-to-cluster flow.
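Roughly, that pipeline looks like the sketch below. The model name, suffix list, and cluster count here are illustrative placeholders rather than the exact settings I used:

```python
# Minimal sketch of the pipeline: strip suffix-like endings, embed the roots
# with SBERT, cluster with KMeans, then build a cluster-to-cluster transition
# matrix. Settings are illustrative, not the exact ones from the repo.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

SUFFIXES = ("aiin", "dy", "chy")  # suffix-like endings (illustrative subset)

def strip_suffix(word):
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 1:
            return word[: -len(s)]
    return word

def analyze(lines, n_clusters=10):
    """lines: list of lists of EVA-transliterated words, one list per manuscript line."""
    roots = sorted({strip_suffix(w) for line in lines for w in line})
    model = SentenceTransformer("all-MiniLM-L6-v2")  # SBERT encoder
    embeddings = model.encode(roots)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embeddings)
    cluster_of = dict(zip(roots, labels))

    # Markov-style transition counts between cluster IDs, reading left to right.
    transitions = np.zeros((n_clusters, n_clusters))
    for line in lines:
        ids = [cluster_of[strip_suffix(w)] for w in line]
        for a, b in zip(ids, ids[1:]):
            transitions[a, b] += 1
    transitions /= transitions.sum(axis=1, keepdims=True).clip(min=1)  # row-normalize
    return cluster_of, transitions
```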
It’s not translation. It’s not decryption. It’s structural modeling — and it revealed some surprisingly consistent syntax across the manuscript, especially when broken out by section (Botanical, Biological, etc.).
GitHub repo: https://github.com/brianmg/voynich-nlp-analysis
Write-up: https://brig90.substack.com/p/modeling-the-voynich-manuscrip...
I’m new to the NLP space, so I’m sure there are things I got wrong — but I’d love feedback from people who’ve worked with structured language modeling or weird edge cases like this.
https://www.youtube.com/watch?v=p6keMgLmFEk&t=1s
I've been working on a project related to a sensemaking tool called Pol.is [1], but reprojecting its wiki survey data with these new algorithms instead of PCA, and it's amazing what new insight it uncovers!
https://patcon.github.io/polislike-opinion-map-painting/
Painted groups: https://t.co/734qNlMdeh
(Sorry, only really works on desktop)
[1]: https://www.technologyreview.com/2025/04/15/1115125/a-small-...
https://youtu.be/sD-uDZ8zXkc
The traditional NLP techniques of stripping suffixes and POS identification may actually harm embedding quality rather than improve it, since they remove relevant contextual data from the global embedding.
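One way to check that concern empirically is to cluster both the raw words and the suffix-stripped roots and compare a rough separation metric such as silhouette score. A sketch (the metric choice is mine, purely illustrative):

```python
# Sketch: compare clustering geometry with and without suffix stripping.
# A silhouette score is a crude proxy; a large gap between the two runs at
# least shows how much the stripping reshapes the embedding space.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_quality(words, n_clusters=10):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    X = model.encode(sorted(set(words)))
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return silhouette_score(X, labels)

# gap = cluster_quality(raw_words) - cluster_quality(stripped_roots)
```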
Appreciate you calling that out — that’s a great push toward iteration.
Does it make sense to check the process with a control group?
E.g. if we ask a human to write something that resembles a language but isn’t, then conduct this process (remove suffixes, attempt grouping, etc), are we likely to get similar results?
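One cheap control along those lines: generate a pseudo-language by sampling syllables from a skewed distribution (a very rough stand-in for a human making up words), run the identical pipeline, and compare the cluster and transition statistics. A sketch with made-up syllables:

```python
# Sketch of a gibberish control corpus: syllables drawn from a skewed
# distribution, so word shapes repeat the way human-generated filler tends to.
import random

SYLLABLES = ["ka", "to", "shi", "dor", "qu", "ain", "che", "ol"]  # invented
WEIGHTS = [8, 6, 5, 4, 3, 3, 2, 1]  # heavy skew toward a few favourites

def fake_word(rng):
    return "".join(rng.choices(SYLLABLES, weights=WEIGHTS, k=rng.randint(1, 3)))

def fake_corpus(n_lines=500, words_per_line=8, seed=0):
    rng = random.Random(seed)
    return [[fake_word(rng) for _ in range(words_per_line)] for _ in range(n_lines)]

# control = fake_corpus()
# Run the same suffix-stripping / clustering / transition analysis on `control`
# and see whether it produces comparably "consistent" structure.
```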
Reference-mapping each cluster to all the others would be a nice way to indicate that there's no variability left unaccounted for in your analysis.
And yes to the cross-cluster reference idea — I didn’t build a similarity matrix between clusters, but now that you’ve said it, it feels like an obvious next step to test how much signal is really being captured.
Might spin those up as a follow-up. Appreciate the thoughtful nudge.
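A minimal version of that cross-cluster check could be cosine similarity between cluster centroids, assuming the SBERT embeddings and KMeans labels are still in hand. A sketch:

```python
# Sketch: cosine similarity between cluster centroids as a rough cross-cluster
# "reference map". Assumes `embeddings` (n_words x dim) and `labels` (one
# cluster id per word) from the KMeans step.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def cluster_similarity_matrix(embeddings, labels):
    n_clusters = labels.max() + 1
    centroids = np.vstack([embeddings[labels == c].mean(axis=0) for c in range(n_clusters)])
    return cosine_similarity(centroids)  # large off-diagonal values = overlapping clusters
```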
(Before I get yelled at, this isn't prescriptive; it's a personal preference.)
I'd add that just because a method achieves separability, the resulting visualization may not be super informative. The distances between clusters in t-SNE-projected space often have nothing to do with their distances in latent space, for example. So while you get nice separate clusters, it comes at the cost of the projected space greatly distorting or hiding the relationships between points across clusters.
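One quick way to quantify that distortion for whatever projection you're using (my sketch, not something from the post): compare pairwise distances in the embedding space with distances in the 2-D projection.

```python
# Sketch: how much does a 2-D t-SNE projection distort pairwise distances
# relative to the original embedding space? (Spearman rank correlation;
# 1.0 = ordering of distances preserved, lower = more distortion.)
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.manifold import TSNE

def projection_distortion(embeddings, random_state=0):
    projected = TSNE(n_components=2, random_state=random_state).fit_transform(embeddings)
    rho, _ = spearmanr(pdist(embeddings), pdist(projected))
    return rho
```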
The author made an assumption that Voynichese is a Germanic language, and it looks like he was able to make some progress with it.
I’ve also come across accounts that it might be an Uralic or Finno-Ugric language. I think your approach is great, and I wonder if tweaking it for specific language families could go even further.
I didn’t re-map anything back to glyphs in this project — everything’s built off those EVA transliterations as a starting point. So if "okeeodair" exists in the dataset, that’s because someone much smarter than me saw a sequence of glyphs and agreed to call it that.
Unless the author had written tens of books exactly like this one before, which simply didn't survive, of course.
I don't think it's a very novel idea, but I wonder if there's analysis of patterns like that. I haven't seen mentions of page-to-page consistency anywhere.
A lot of work's been done here. There are believed to have been 2 scribes (see Prescott Currier), although Lisa Fagin Davis posits 5. Here's a discussion of an experiment working off of Fagin Davis' position: https://www.voynich.ninja/thread-3783.html
It's not a cipher; it was written by an Egyptian Hebrew-speaking traveller, and Rainer Hannig and his wife were able to build up a fairly good grammar before he died two years ago. [1] The general issue with the manuscript itself is that its grammar and etymological use of words evolve, as the traveller picked up various words and transferred meanings along the way.
But, given that your attempt tries to find similarities between proto-languages that were mixed together, this could be a great way to study/analyze the evolution of languages over time, provided you're able to preserve Bayesian inference on top.
[1] https://www.rainer-hannig.com/voynich/
My main goal was to learn and see if the manuscript behaved like a real language, not necessarily to translate it. Appreciate the link — I’ll check it out (once I get my German up to speed!).
The challenge (as I understand it) is that the vocabulary size is pretty massive — thousands of unique words — and the structure might not be 1:1 with how real language maps. Like, is a “word” in Voynich really a word? Or is it a chunk, or a stem with affixes, or something else entirely? That makes brute-forcing a direct mapping tricky.
That said… using cluster IDs instead of individual word tokens and scoring the outputs with something like a language model seems like a pretty compelling idea. I hadn’t thought of doing it that way. Definitely some room there for optimization or even evolutionary techniques. If nothing else, it could tell us something about how “language-like” the structure really is.
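A crude version of that idea might look like the sketch below: score each line as a sequence of cluster IDs with a simple bigram model and compare against shuffled lines. The bigram scoring and shuffled baseline are my framing of the suggestion, not something already in the repo.

```python
# Sketch: treat each manuscript line as a sequence of cluster IDs, score it
# with a smoothed bigram model, and compare against shuffled lines. A clear
# real-vs-shuffled gap means order matters, i.e. some syntax-like structure.
import math
import random
from collections import Counter

def bigram_logprob(sequences, alpha=1.0):
    unigrams, bigrams = Counter(), Counter()
    for seq in sequences:
        unigrams.update(seq)
        bigrams.update(zip(seq, seq[1:]))
    vocab = max(len(unigrams), 1)
    total, count = 0.0, 0
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            p = (bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab)
            total += math.log(p)
            count += 1
    return total / max(count, 1)  # average log-probability per transition

# real = bigram_logprob(cluster_id_lines)
# baseline = bigram_logprob([random.sample(line, len(line)) for line in cluster_id_lines])
```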
Might be worth exploring — thanks for tossing that out, hopefully someone with more awareness or knowledge in the space sees it!
Maybe a version of scripture that had been "rejected" by some king, and was illegal to reproduce? Take the best radiocarbon dating, figure out who was king back then, check whether he 'sanctioned' any biblical translations, and then go to the version of the Bible before that translation; that might be what was illegal and needed to be encrypted. That's just one plausible story. Who knows, we might find out the phrase "young girl" was simplified to "virgin", and that would potentially be a big secret.
https://www.researchgate.net/publication/368991190_The_Voyni...
For more info, see https://www.voynich.ninja/thread-3940-post-53738.html#pid537...
Yet 10 years later I still hear that the consensus is that there's no agreed-upon translation. So, what, all that Mandaic-gypsies business was nothing? And all the coincidences were… coincidences?
https://www.rainer-hannig.com/voynich/
Also, there might be some characters that are in there just to confuse. For example, that bizarre capital-"P"-like thing with multiple variations sometimes seems to appear far too often to represent real language, so it might just be an obfuscator that's removed prior to decryption. There may be other characters that are abnormally frequent and are likewise unused dummy characters. But I realize the "too many Ps" problem is also consistent with pure fiction.
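For what it's worth, the "appears far too often" intuition is easy to check numerically; a minimal sketch (the threshold is arbitrary):

```python
# Sketch: flag characters whose overall frequency is far above what letters in
# natural-language text usually reach. The 15% threshold is arbitrary.
from collections import Counter

def suspicious_chars(text, threshold=0.15):
    counts = Counter(c for c in text if not c.isspace())
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items() if n / total > threshold}
```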
That said, I just watched a video about the practice of "speaking in tongues" that some Christian congregations follow. From what I understand, it's a practice where believers speak in gibberish during certain rituals.
Studying these "speeches", researchers found patterns and rhythms that the speakers followed without even being aware they exist.
I'm not saying that's what's happening here, but if this was a hoax (or a prank), maybe these patterns emerged simply because the text was produced by a human brain? At best, these patterns can be thought of as shadows of the patterns found in the writer's mother tongue.
People often assert this, but I'm not aware of any evidence for it. If I wrote a manuscript in a pretend language, I would expect it to end up with language-like patterns, some automatically and some intentionally.
Humans aren't random number generators, and they aren't stupid. Therefore, the implicit claim that a human could not create a manuscript containing gibberish that exhibits many language-like patterns seems unlikely to be true.
So we have two options:
1. This is either a real language or an encoded real language that we've never seen before and can't decrypt, even after many years of attempts
2. Or it is gibberish that exhibits features of a real language
I can't help but feel that option 2 is now the more likely choice.
https://youtu.be/vUAaHkGpJy8
It's harder to generate good gibberish than it appears at first.
There's certainly a system to the madness, but it exhibits rather different statistical properties from "proper" languages. Look at section 2.4: https://www.voynich.nu/a2_char.html At the moment, any apparently linguistic patterns are happenstance; the cypher fundamentally obscures the actual distribution (if it encodes a "proper" language).
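One commonly cited statistic behind that claim is conditional character entropy, which is reportedly unusually low for Voynichese compared with most European languages. A sketch of how you'd measure it on a transliteration (my illustration, not taken from the linked page):

```python
# Sketch: conditional character entropy H(next char | current char), in bits.
# Compare the value for a Voynich transliteration against a natural-language
# sample of similar length.
import math
from collections import Counter

def conditional_entropy(text):
    pairs = Counter(zip(text, text[1:]))
    firsts = Counter(text[:-1])
    total = sum(pairs.values())
    h = 0.0
    for (a, b), n in pairs.items():
        p_ab = n / total             # joint probability of the pair
        p_b_given_a = n / firsts[a]  # conditional probability
        h -= p_ab * math.log2(p_b_given_a)
    return h
```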
Shud less kee chicken souls do be gooby good? Mus hess to my rooby roo!
The age of the document can be estimated through various methods, and they all point to it being roughly 600 years old. The vellum parchment, the ink, and the pictures (particularly the clothes and architecture) are perfectly congruent with that.
The weirdest part is that the script has a very low number of different signs, fewer than any known language. That's about the only clue that could point to a hoax afaik.
I have no background in NLP or linguistics, but I do have a question about this:
> I stripped a set of recurring suffix-like endings from each word — things like aiin, dy, chy, and similar variants
This seems to imply stripping the right-hand edges of words, with the assumption that the text was written left to right? Or did you try both possibilities?
Once again, nice work.
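For what it's worth, testing both reading directions is cheap: strip candidate affixes from the right for a left-to-right reading and from the left for a right-to-left reading, then compare the downstream clustering. A minimal sketch (the affix list is illustrative):

```python
# Sketch: strip candidate affixes from either edge of a word, depending on the
# assumed reading direction, so both hypotheses can be run through the pipeline.
AFFIXES = ("aiin", "dy", "chy")  # illustrative, not the exact list

def strip_affix(word, direction="ltr"):
    for a in AFFIXES:
        if direction == "ltr" and word.endswith(a) and len(word) > len(a) + 1:
            return word[: -len(a)]
        if direction == "rtl" and word.startswith(a) and len(word) > len(a) + 1:
            return word[len(a):]
    return word
```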
https://arstechnica.com/science/2024/09/new-multispectral-an...
but imagine if it was just a (wealthy) child's coloring book or practice book for learning to write lol
Even if it was "just" an (extraordinarily wealthy and precocious) child with a fondness for plants, cosmology, and female bodies carefully inscribing nonsense by repeatedly doodling the same few characters in blocks that look like the illuminated manuscripts this child would also need access to, that's still impressive and interesting.
https://www.voynich.ninja/thread-4327-post-60796.html#pid607... is the main forum discussing precisely this. I quite liked this explanation of the apparent structure: https://www.voynich.ninja/thread-4286.html
> RU SSUK UKIA UK SSIAKRAINE IARAIN RA AINE RUK UKRU KRIA UKUSSIA IARUK RUSSUK RUSSAINE RUAINERU RUKIA
That is, there may be two "word types" with different statistical properties (as Feaster's video above describes), perhaps e.g. two different cyphers used "randomly" next to each other. Figuring out how to imitate the MS's statistical properties would let us determine the cypher system and take steps towards determining its language, etc., so most credible work has gone in this direction over the last 10+ years.
This site is a great introduction/deep dive: https://www.voynich.nu/
<quote>
Key Findings
* Cluster 8 exhibits high frequency, low diversity, and frequent line-starts — likely a function word group
* Cluster 3 has high diversity and flexible positioning — likely a root content class
* Transition matrix shows strong internal structure, far from random
* Cluster usage and POS patterns differ by manuscript section (e.g., Biological vs Botanical)
Hypothesis
The manuscript encodes a structured constructed or mnemonic language using syllabic padding and positional repetition. It exhibits syntax, function/content separation, and section-aware linguistic shifts — even in the absence of direct translation.
</quote>
I don't see how it could be random, regardless of whether it is an actual language. Humans are famously terrible at generating randomness.
I wouldn't assume that the writer made decisions based on these goals, but rather that the writer attempted to create a simulacrum of a real language. However, even if they did not, I would expect an attempt at generating a "random" language to ultimately mirror many of the properties of the person's native language.
The arguments that this book is written in a real language rest on the assumption that a human being making up gibberish would not produce something that exhibits many of the properties of a real language; however, I don't see anyone offering any evidence to support this claim.