Show HN: I modeled the Voynich Manuscript with SBERT to test for structure

195 points by brig90 | 5/18/2025, 4:09:01 PM | 47 comments | github.com
I built this project as a way to learn more about NLP by applying it to something weird and unsolved.

The Voynich Manuscript is a 15th-century book written in an unknown script. No one’s been able to translate it, and many think it’s a hoax, a cipher, or a constructed language. I wasn’t trying to decode it — I just wanted to see: does it behave like a structured language?

I stripped a handful of common suffix-like endings (aiin, dy, etc.) to isolate what looked like root forms. I know that’s a strong assumption — I call it out directly in the repo — but it helped clarify the clustering. From there, I used SBERT embeddings and KMeans to group similar roots, inferred POS-like roles based on position and frequency, and built a Markov transition matrix to visualize cluster-to-cluster flow.
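
In sketch form, the core of that pipeline looks roughly like this (simplified and illustrative, not the exact code from the repo; the toy word list, suffix list, and cluster count are placeholders):

    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    # Illustrative values only -- the real suffix list and cluster count live in the repo.
    SUFFIXES = ["aiin", "ain", "dy", "chy", "y"]
    N_CLUSTERS = 2  # toy value for this tiny word list; the real run uses more

    def strip_suffix(word):
        """Remove one suffix-like ending, if present, to approximate a 'root'."""
        for s in sorted(SUFFIXES, key=len, reverse=True):
            if word.endswith(s) and len(word) > len(s) + 1:
                return word[: -len(s)]
        return word

    words = ["okeeodair", "chedy", "qokaiin", "shedy"]  # stand-in for the full EVA word list
    roots = [strip_suffix(w) for w in words]

    # Embed the roots with SBERT and cluster them.
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    embeddings = model.encode(roots)
    labels = KMeans(n_clusters=N_CLUSTERS, random_state=0).fit_predict(embeddings)

    # Markov-style transition matrix over cluster labels, in reading order.
    transitions = np.zeros((N_CLUSTERS, N_CLUSTERS))
    for a, b in zip(labels, labels[1:]):
        transitions[a, b] += 1
    row_sums = transitions.sum(axis=1, keepdims=True)
    transitions = transitions / np.where(row_sums == 0, 1, row_sums)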

It’s not translation. It’s not decryption. It’s structural modeling — and it revealed some surprisingly consistent syntax across the manuscript, especially when broken out by section (Botanical, Biological, etc.).

GitHub repo: https://github.com/brianmg/voynich-nlp-analysis
Write-up: https://brig90.substack.com/p/modeling-the-voynich-manuscrip...

I’m new to the NLP space, so I’m sure there are things I got wrong — but I’d love feedback from people who’ve worked with structured language modeling or weird edge cases like this.

Comments (47)

patcon · 1h ago
I see that you're looking for clusters within PCA projections -- you should look for deeper structure with hot new dimensionality reduction algorithms, like PaCMAP or LocalMAP!
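
Something like this as a drop-in for the PCA step (a rough sketch, assuming the pacmap package; I haven't run it on your data, and the parameters are just library defaults):

    import pacmap  # pip install pacmap

    # Project the SBERT embeddings to 2D with PaCMAP instead of PCA.
    reducer = pacmap.PaCMAP(n_components=2, n_neighbors=10, random_state=42)
    coords_2d = reducer.fit_transform(embeddings)  # embeddings: (n_words, dim) array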

I've been working on a project related to a sensemaking tool called Pol.is [1], reprojecting its wiki survey data with these new algorithms instead of PCA, and it's amazing what new insight they uncover!

https://patcon.github.io/polislike-opinion-map-painting/

Painted groups: https://t.co/734qNlMdeh

(Sorry, only really works on desktop)

[1]: https://www.technologyreview.com/2025/04/15/1115125/a-small-...

staticautomatic · 7m ago
I’ve had much better luck with UMAP than with PCA and t-SNE for reducing embeddings.
brig90 · 1h ago
Thanks for pointing those out — I hadn’t seen PaCMAP or LocalMAP before, but that definitely looks like the kind of structure-preserving approach that would fit this data better than PCA. Appreciate the nudge — going to dig into those a bit more.
minimaxir · 3h ago
A point of note is that the text embeddings model used here is paraphrase-multilingual-MiniLM-L12-v2 (https://huggingface.co/sentence-transformers/paraphrase-mult...), which is about 4 years old. In the NLP world that's effectively ancient, particularly as even small embedding models have improved dramatically thanks to global LLM progress, both in how much information they represent and in how distinct points are in the embedding space. Even modern text embedding models not explicitly trained for multilingual support still do extremely well on that type of data, so they may work better for the Voynich Manuscript, which is effectively an unknown language.

The traditional NLP techniques of stripping suffixes and POS identification may actually harm embedding quality rather than improve it, since they remove relevant contextual data from the global embedding.

brig90 · 3h ago
Totally fair — I defaulted to paraphrase-multilingual-MiniLM-L12-v2 mostly for speed and wide compatibility, but you’re right that it’s long in the tooth by today’s standards. I’d be really curious to see how something like all-mpnet-base-v2 or even text-embedding-ada-002 would behave, especially if we keep the suffixes in and lean into full contextual embeddings rather than reducing to root forms.
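
Roughly what I have in mind for that follow-up (just a sketch; the model name is one candidate, and the word list is a stand-in, but this keeps the suffixes intact):

    from sentence_transformers import SentenceTransformer

    # Swap in a newer general-purpose model and embed the full EVA words,
    # suffixes and all, instead of the stripped "roots".
    model = SentenceTransformer("all-mpnet-base-v2")
    eva_words = ["okeeodair", "qokaiin", "chedy"]  # stand-in for the full word list
    full_word_embeddings = model.encode(eva_words)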

Appreciate you calling that out — that’s a great push toward iteration.

thih9 · 56m ago
(I know nothing about NLP)

Does it make sense to check the process with a control group?

E.g. if we ask a human to write something that resembles a language but isn’t, then conduct this process (remove suffixes, attempt grouping, etc), are we likely to get similar results?
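
For concreteness, something like this as the fake corpus (a deliberately naive sketch; the point is only to produce "language-shaped" nonsense to feed through the identical pipeline):

    import random

    random.seed(0)

    # Tiny made-up syllable inventory; a human faker would presumably do "better".
    SYLLABLES = ["ok", "che", "dy", "qo", "ai", "in", "sh", "ed", "ar", "ol"]

    def fake_word():
        return "".join(random.choices(SYLLABLES, k=random.randint(2, 4)))

    fake_corpus = [fake_word() for _ in range(5000)]
    # ...then run the same suffix stripping, embedding, clustering, and
    # transition-matrix steps on fake_corpus and compare the structure.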

GTP · 8m ago
The link to the write-up seems broken; can you post the correct one?
us-merul · 3h ago
I’ve found this to be one of the most interesting hypotheses: http://voynichproject.org/

The author made an assumption that Voynichese is a Germanic language, and it looks like he was able to make some progress with it.

I’ve also come across accounts that it might be an Uralic or Finno-Ugric language. I think your approach is great, and I wonder if tweaking it for specific language families could go even further.

veqq · 2h ago
This thread discusses the many purported "solutions": https://www.voynich.ninja/thread-4341.html While Bernholz' site is nice, Child's work doesn't shed much light on actually deciphering the MS.
us-merul · 2h ago
Thanks for this! I had come across Child’s hypothesis after doing a search related to Old Prussian and Slavic languages, so I don’t have much context for this solution, and this is helpful to see.
ablanton · 2h ago
Reubend · 1h ago
Most agree that this is not a real solution. Many of the pages translate to nonsense using that scheme, and some of the figures included in the paper don't actually come from the Voynich manuscript in the first place.

For more info, see https://www.voynich.ninja/thread-3940-post-53738.html#pid537...

Avicebron · 3h ago
Maybe I missed it in the README, but how did you do the initial encoding for the "words"? So for example, if you have "okeeodair" as a word, where do you map that back to the original symbols?
brig90 · 3h ago
Yep, that’s exactly right — the words like "okeeodair" come directly from the EVA transliteration files, which map the original Voynich glyphs to ASCII approximations. So I’m not working with the glyphs themselves, but rather the standardized transliterated words based on the EVA (European Voynich Alphabet) system. The transliterations I used can be found here: https://www.voynich.nu/

I didn’t re-map anything back to glyphs in this project — everything’s built off those EVA transliterations as a starting point. So if "okeeodair" exists in the dataset, that’s because someone much smarter than me saw a sequence of glyphs and agreed to call it that.
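
To give a rough idea of what that starting point looks like, here's a simplified sketch of pulling words out of an EVA-style transliteration line (the real files carry more markup than this handles, the locus tag in the example is illustrative of the format rather than copied from a file, and my actual pre-processing differs in the details):

    import re

    def eva_words_from_line(line):
        """Pull EVA words out of one transliteration line (simplified).

        Assumes the common conventions: '#' starts a comment line, locus tags and
        inline annotations sit inside <...> or {...}, and '.' separates words.
        """
        if line.startswith("#"):
            return []
        line = re.sub(r"<[^>]*>|\{[^}]*\}", " ", line)  # drop tags/annotations
        line = re.sub(r"[-=,]", ".", line)              # treat line/paragraph breaks as separators
        return [w for w in line.strip().split(".") if w.isalpha()]

    print(eva_words_from_line("<f1r.P.1;H>  fachys.ykal.ar.ataiin.shol.shory"))
    # -> ['fachys', 'ykal', 'ar', 'ataiin', 'shol', 'shory']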


user32489318 · 57m ago
Would analysis of a similar body of text in a known language yield similar patterns? Put another way, could applying this type of analysis to different kinds of text help us understand what this script describes?
tetris11 · 3h ago
UMAP or t-SNE would be nice, even if PCA already shows nice separation.

Reference mapping each cluster to all the others would be a nice way to indicate that there's no variability left in your analysis

brig90 · 3h ago
Great points — thank you. PCA gave me surprisingly clean separation early on, so I stuck with it for the initial run. But you’re right — throwing UMAP or t-SNE at it would definitely give a nonlinear perspective that could catch subtler patterns (or failure cases).

And yes to the cross-cluster reference idea — I didn’t build a similarity matrix between clusters, but now that you’ve said it, it feels like an obvious next step to test how much signal is really being captured.

Might spin those up as a follow-up. Appreciate the thoughtful nudge.
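
To pin that next step down, something like this is what I'm imagining (an untested sketch: cosine similarity between cluster centroids):

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    def cluster_similarity_matrix(embeddings, labels):
        """Cosine similarity between cluster centroids.

        embeddings: (n_words, dim) array; labels: cluster id per word.
        Off-diagonal values near 1 would suggest the clusters aren't really distinct.
        """
        cluster_ids = np.unique(labels)
        centroids = np.vstack([embeddings[labels == c].mean(axis=0) for c in cluster_ids])
        return cosine_similarity(centroids)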

lukeinator42 · 2h ago
Do you have examples of how this reference mapping is performed? I'm interested in this for embeddings in a different modality, but don't have as much experience on the NLP side of things
tetris11 · 1h ago
Nothing concrete, but you essentially perform shared nearest neighbours using anchor points to each cluster you wish to map to. These form correction vectors you can then use to project from one dataset to another
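
Very roughly, the idea looks like this (a sketch of the concept, not a recipe; it assumes you already have matched anchor pairs per cluster and that both spaces share a dimensionality):

    import numpy as np

    def project_with_anchors(points, point_labels, anchors_src, anchors_dst, anchor_labels):
        """Shift each point by its cluster's mean anchor offset (target minus source).

        points: (n, d) source-space coordinates; anchors_src / anchors_dst: matched
        anchor coordinates in the two spaces; *_labels: cluster id per row.
        """
        projected = points.astype(float).copy()
        for c in np.unique(point_labels):
            mask = anchor_labels == c
            if mask.any():
                correction = (anchors_dst[mask] - anchors_src[mask]).mean(axis=0)
                projected[point_labels == c] += correction
        return projected
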
jszymborski · 3h ago
When I get nice separation with PCA, I personally tend to eschew UMAP, since the relative distance of all the points to one another is easier to interpret. I avoid t-SNE at all costs, because distance in those plots are pretty much meaningless.

(Before I get yelled at, this isn't prescriptive, it's a personal preference.)

minimaxir · 36m ago
PCA having nice separation is extremely uncommon unless your data is unusually clean or has obvious patterns. Even for the comically-easy MNIST dataset, the PCA representation doesn't separate nicely: https://github.com/lmcinnes/umap_paper_notebooks/blob/master...
tomrod · 2h ago
We are of a like mind.
rossant · 24m ago
TIL about the Voynich manuscript. Fascinating. Thank you.
glimshe · 3h ago
I strongly believe the manuscript is undecipherable in the sense that it's all gibberish. I can't prove it, but at this point I think it's more likely than not to be a hoax.
lolinder · 2h ago
Statistical analyses such as this one consistently find patterns that are consistent with a proper language and would be unlikely to have emerged from someone who was just putting gibberish on the page. To get the kinds of patterns these turn up someone would have had to go a large part of the way towards building a full constructed language, which is interesting in its own right.
ahmedfromtunis · 1h ago
Personally, I have no preference to any theory about the book; whichever it turns out to be, I'll take it as is.

That said, I just watched a video about the practice of "speaking in tongues" that some Christian congregations follow. From what I understand, it's a practice where believers speak in gibberish during certain rituals.

Studying these "speeches", researchers found patterns and rhythms that the speakers followed without even being aware they existed.

I'm not saying that's what's happening here, but if this was a hoax (or a prank), maybe these patterns emerged just because the text was inscribed by a human brain? At best, these patterns can be thought of as shadows of the patterns found in the writer's mother tongue.

InsideOutSanta · 2h ago
> would be unlikely to have emerged from someone who was just putting gibberish on the page

People often assert this, but I'm unsure of any evidence. If I wrote a manuscript in a pretend language, I would expect it to end up with language-like patterns, some automatically and some intentionally.

Humans aren't random number generators, and they aren't stupid. Therefore, the implicit claim that a human could not create a manuscript containing gibberish that exhibits many language-like patterns seems unlikely to be true.

So we have two options:

1. This is either a real language or an encoded real language that we've never seen before and can't decrypt, even after many years of attempts

2. Or it is gibberish that exhibits features of a real language

I can't help but feel that option 2 is now the more likely choice.

neom · 2h ago
CamperBob2 · 1h ago
Or Dead Can Dance, e.g. https://www.youtube.com/watch?v=VEVPYVpzMRA .

It's harder to generate good gibberish than it appears at first.

cubefox · 1h ago
Creating gibberish with the statistical properties of a natural language is a very hard task if you do this hundreds of years before the discovery of said statistical properties.
InsideOutSanta · 1h ago
Why?
veqq · 2h ago
> consistent with a proper language

There's certainly a system to the madness, but it exhibits rather different statistical properties from "proper" languages. Look at section 2.4: https://www.voynich.nu/a2_char.html At the moment, any apparently linguistic patterns are happenstance; the cypher fundamentally obscures the actual distribution (if it is a "proper" language).

andoando · 2h ago
Could still be gibberish.

Shud less kee chicken souls do be gooby good? Mus hess to my rooby roo!

himinlomax · 1h ago
There are many aspects that point to the text not being completely random or clumsily written. In particular, it doesn't exhibit many of the faults you'd expect from a non-expert trying to come up with a fake text.

The age of the document can be estimated through various methods that all point to it being ~500 years old. The vellum parchment, the ink, and the pictures (particularly clothes and architecture) are perfectly congruent with that.

The weirdest part is that the script has a very low number of different signs, fewer than any known language. That's about the only clue that could point to a hoax afaik.

ck2 · 1h ago
> "New multispectral analysis of Voynich manuscript reveals hidden details"

https://arstechnica.com/science/2024/09/new-multispectral-an...

but imagine if it was just a (wealthy) child's coloring book or practice book for learning to write lol

Avicebron · 46m ago
> but imagine if it was just a (wealthy) child's coloring book or practice book for learning to write lol

Even if it was "just" an (extraordinarily wealthy and precocious) child with a fondness for plants, cosmology, and female bodies carefully inscribing nonsense by repeatedly doodling the same few characters in blocks that look like the illuminated manuscripts this child would also need access to, that's still impressive and interesting.

andyjohnson0 · 3h ago
This looks very interesting - nice work!

I have no background in NLP or linguistics, but I do have a question about this:

> I stripped a set of recurring suffix-like endings from each word — things like aiin, dy, chy, and similar variants

This seems to imply stripping the right-hand edges of words, with the assumption that the text was written left to right? Or did you try both possibilities?

Once again, nice work.

brig90 · 3h ago
Great question — and you’re right to catch the assumption there. I did assume left-to-right when stripping suffixes, mostly because that’s how the transliteration files were structured and how most Voynich analyses approach it. I didn’t test the reverse — though flipping the structure and checking clustering/syntax behavior would be a super interesting follow-up. Appreciate you calling it out!
veqq · 2h ago
The best work on Voynich has been done by Emma Smith, Coons and Patrick Feaster, about loops and QOKEDAR and CHOLDAIIN cycles. Here's a good presentation: https://www.youtube.com/watch?v=SCWJzTX6y9M Zattera and Roe have also done good work on the "slot alphabet". That so many are making progress in the same direction is quite encouraging!

https://www.voynich.ninja/thread-4327-post-60796.html#pid607... is the main forum discussing precisely this. I quite liked this explanation of the apparent structure: https://www.voynich.ninja/thread-4286.html

> RU SSUK UKIA UK SSIAKRAINE IARAIN RA AINE RUK UKRU KRIA UKUSSIA IARUK RUSSUK RUSSAINE RUAINERU RUKIA

That is, there may be two "word types" with different statistical properties (as Feaster's video above describes), perhaps e.g. two different cyphers used "randomly" next to each other. Figuring out how to imitate the MS's statistical properties would let us determine the cypher system and take steps towards determining its language, so most credible work has gone in this direction over the last 10+ years.

This site is a great introduction/deep dive: https://www.voynich.nu/

brig90 · 2h ago
I’m definitely not a Voynich expert or linguist — I stumbled into this more or less by accident and thought it would make for a fun NLP learning project. Really appreciate you pointing to those names and that forum — I wasn’t aware of the deeper work on QOKEDAR/CHOLDAIIN cycles or the slot alphabet stuff. It’s encouraging to hear that the kind of structure I modeled seems to resonate with where serious research is heading.
akomtu · 2h ago
Ock ohem octei wies barsoom?
nine_k · 3h ago
In short, the manuscript looks like a genuine text, not like a random bunch of characters pretending to be a text.

<quote>

Key Findings

* Cluster 8 exhibits high frequency, low diversity, and frequent line-starts — likely a function word group

* Cluster 3 has high diversity and flexible positioning — likely a root content class

* Transition matrix shows strong internal structure, far from random

* Cluster usage and POS patterns differ by manuscript section (e.g., Biological vs Botanical)

Hypothesis

The manuscript encodes a structured constructed or mnemonic language using syllabic padding and positional repetition. It exhibits syntax, function/content separation, and section-aware linguistic shifts — even in the absence of direct translation.

</quote>

brig90 · 3h ago
Yep, that was my takeaway too — the structure feels too consistent to be random, and it echoes known linguistic patterns.
gchamonlive · 3h ago
I'd be surprised if it was indeed random, but the consistency is really surprising. I say this because I imagine anyone able to produce such a text was a master scribe who had put countless hours into writing other works, and so would be very familiar with such structure; even if he was going for randomness, I doubt he would achieve it.
InsideOutSanta · 2h ago
> the structure feels too consistent to be random

I don't see how it could be random, regardless of whether it is an actual language. Humans are famously terrible at generating randomness.

nine_k · 1h ago
The kind of "randomness" hardly compatible with language-like structure could arise from choosing the glyphs according to purely graphical concerns, "what would look nice here", lines being too long or too short, avoiding repeating sequences or, to the contrary, achieving interesting 2D structures in the text, etc. It's not cryptography-class randomness, but it would be enough to ruin the rather well-expressed structures in the text (see e.g. the transition matrix).
InsideOutSanta · 1h ago
>choosing the glyphs according to purely graphical concerns, "what would look nice here", lines being too long or too short, avoiding repeating sequences or, to the contrary, achieving interesting 2D structures in the text

I wouldn't assume that the writer made decisions based on these goals, but rather that the writer attempted to create a simulacrum of a real language. However, even if they did not, I would expect an attempt at generating a "random" language to ultimately mirror many of the properties of the person's native language.

The arguments that this book is written in a real language rest on the assumption that a human being making up gibberish would not produce something that exhibits many of the properties of a real language; however, I don't see anyone offering any evidence to support this claim.