Microsoft Office is using an artificially complex XML schema as a lock-in tool
112 points by firexcy | 53 comments | 7/19/2025, 4:22:45 AM | blog.documentfoundation.org
Instead of perfect looks, we should focus on the content. Formats like Markdown are nice because they force you to do this. The old way made sense 30 years ago, when information was consumed on paper.
Of course, if we stopped really caring what things look like we could save a lot of energy and time. Just go back to pure HTML without any JavaScript or CSS...
You want to be able to get everything just right for the looks, because there will always be someone negotiating you down because your PDF report does not look right and they know a competitor who "does this heading exactly right".
In theory, garbled content is of course not acceptable, but small deviations should be tolerated.
Unfortunately we have all kinds of power games where you want the exact looks. You don't always have the option to walk away from asshole customers nitpicking BS issues.
For most documents nowadays it makes no sense to see them as a representation of physical paper. And the Word paradigm of representing a document as if it were a piece of paper is obsolete in many areas where it is still being used.
Ironically, Atlassian, with Confluence, is a large force pushing companies away from documents as a representation of paper.
"Interoperability" is something technical enthusiasts talk about and not something that users creating documents care about outside of the people they share with seeing exactly what was created.
Death to user friendliness! Advanced users only! /s
It's called antitrust.
One could now use that exact sentence to describe the most popular open document format of all: HTML and CSS.
It is complex but not complicated. You can start with just a few small parts and get to a usable and clean document within hours of first contact with the languages. The tags and rules are usually quite self-describing while concise, and there are tons and tons of good docs and tools. The development of the standards is also open, and you can peek there if you want to understand decisions and rationales.
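To make that concrete, a complete, valid document is about this small (a minimal sketch; the title and body text are placeholders):

    <!DOCTYPE html>
    <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>A usable document</title>
      </head>
      <body>
        <h1>A usable document</h1>
        <p>Readable in any browser, with no toolchain required.</p>
      </body>
    </html>

Every tag above does what its name says, which is exactly the self-describing quality I mean.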
It's about making software that would display a document in that format correctly.
I.e., a browser.
Wordsmithing your way around this doesn't make them any easier.
And I bet they didn't switch to XML because it was superior to their old file formats, but simply because of the unbelievable XML hype that existed for a short time in the late 1990s and early 2000s.
OOXML was, if anything, an attempt to get ahead of requirements to have a documented interoperable format. I believe it was a consequence of legal settlements with the US or EU but am too tired at the moment to look up sources proving that.
Being able to layer markup with text before, inside elements, and after is especially important --- as anyone with HTML knowledge should know. Being able to namespace things so that, you know, the OLE widget you pulled into your document continues to work? Even more important. And that third-party compiled plugin your company uses for some obscure thing? Guess what. Its metadata gets correctly embedded and saved too, in a way that is forward and backward compatible with tooling that does not have said plugin installed.
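A rough sketch of what that buys you in OOXML terms (the w: and mc: namespaces are from the published spec; the plg: plugin namespace is made up for illustration):

    <w:document
        xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main"
        xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
        xmlns:plg="urn:example:third-party-plugin"
        mc:Ignorable="plg">
      <w:body>
        <w:p>
          <!-- a consumer without the plugin skips plg:* and still round-trips the file -->
          <plg:meta setting="whatever the plugin needs"/>
          <w:r><w:t>The visible text survives either way.</w:t></w:r>
        </w:p>
      </w:body>
    </w:document>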
So no, it wasn't 'hype'.
I feel qualified to opine on this as both a former power user of Word and someone building a word processor for lawyers from scratch[1]. I've spent hours poring over both the .doc and OOXML specs and implementing them. There's a pretty obvious journey visible in those specs: from 1984, when computers were underpowered with RAM rounding to zero, through the 00s, when XML was the hot idea, to today, when MSFT wants everyone on the cloud for life. Unlike, say, an IDE or generic text editor, where developers are excited to work on and dogfood the product via self-hosting, word processors are kind of boring and require separate testing/QA.
It's not "artificial", it's just complex.
MSFT has the deep pockets to fund that development and testing/QA. LibreOffice doesn't.
The business model is just screaming that GPL'd LibreOffice is toast.
[1] Plug: https://tritium.legal
As for complexity, an illustration: while using M365 I recently was confounded by a stretch of text that had background highlighting that was neither highlight markup nor paragraph or style formatting. An AI turned me onto an obscure dialog for background shading at the text level, which explained the mystery. I've been a sophisticated user of M365 for decades and had never encountered such a thing, nor do I have a clear idea of why anyone would use text-level background formatting in preference to the more obvious choices. Yet, there it is. With that kind of complexity and obscurity in the actual product, it's inevitable that the file format would be convoluted and complex.
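For anyone else hunting the same mystery: in the underlying WordprocessingML the two look like this at the run level (a sketch, assuming I'm remembering the element names right):

    <!-- the obvious choice: highlighting, limited to a fixed palette -->
    <w:rPr><w:highlight w:val="yellow"/></w:rPr>

    <!-- the obscure one: character-level shading with an arbitrary fill -->
    <w:rPr><w:shd w:val="clear" w:fill="FFFF00"/></w:rPr>

Two different elements that render almost identically, which is the complexity in a nutshell.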
What isn't acknowledged is that a lot of that complexity isn't purely malicious. OOXML had to capture decades of WordPerfect/Office binary formats, include every oddball feature ever shipped, and satisfy both backwards compatibility and ISO standardisation. A comprehensive schema will inevitably have "dozens or even hundreds of optional or overloaded elements" and long type hierarchies. That's one reason why the spec is huge. Likewise, there's a difference between a complicated but documented standard and a closed format: OOXML is published (you can go and download those 8,000 pages), and the parts of it that matter for basic interoperability are quite small compared with the full kitchen-sink spec.
That doesn't mean the criticism is wrong. The sheer size and complexity of OOXML mean that few free-software developers can afford to implement more than a tiny subset. When the bar is that high, the practical effect is the same as lock-in. For simple document exchange, OpenDocument is significantly leaner and easier to work with, and interoperability bodies like the EU have been encouraging governments to use it for years. The takeaway for anyone designing document formats today should be the same as the article's closing line: complexity imprisons people; simplicity and clarity set them free.
What surprises me is how well LibreOffice handles various file formats, not just OOXML. In some cases LibreOffice has the absolute best support for abandoned file formats. I'm not the one maintaining them, so it's easy enough for me to say "See, you managed just fine". It must be especially frustrating when you have the OpenDocument format, which does effectively the same thing, only simpler.
Considering how little money most free software makes, they can't afford to do a lot of things. It's not a hard bar to hit.
If you're working with an XML schema that is served up in XSD format, using code gen is the best (only) path. I understand it's old and confusing to the new generation, but if you just do it the boomer way you can have the whole job done in like 15 minutes. Hand-coding to an XML interface would be like cutting a board with an unplugged circular saw.
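For example, in Python, xsdata is one such generator (the command line and the Invoice model here are illustrative, standing in for whatever root elements your schema actually defines):

    # one-time generation, done outside the program:
    #   pip install xsdata[cli]
    #   xsdata generate schema.xsd --package generated

    from xsdata.formats.dataclass.parsers import XmlParser
    from generated.invoice import Invoice  # hypothetical module emitted by the generator

    parser = XmlParser()
    invoice = parser.parse("invoice.xml", Invoice)  # typed dataclasses, not a raw DOM
    print(invoice.customer.name)                    # fields mirror the schema

Fifteen minutes, most of it waiting for pip.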
One example I work with sometimes is almost 1 MB of XSDs, and that's a rather small internal data tool. They even have a RESTful JSON variant, but it's not much used, and the complexity is roughly the same (you escape namespace hell, escaping XML chars, etc., but then the tooling around JSON is a bit less evolved). An XML-to-object mapping tool is a must.
The complexity is not artificial, it is completely organic and natural.
It is incidental complexity born of decades of history, backwards compatibility, lip-service to openness, and regulatory compliance checkbox ticking. It wasn't purposefully added, it just happened.
Every large document-based application's file format is like this, no exceptions.
As a random example, Adobe Photoshop PSD files are famously horrific to parse, let alone interpret in any useful way. There are many, many other examples, I don't aim to single out any particular vendor.
All of this boils down to the simple fact that these file formats have no independent existence apart from their editor programs.
They're simply serialised application state, little better than memory-dumps. They encode every single feature the application has, directly. They must! Otherwise the feature states couldn't be saved. It's tautological. If it's in Word, Excel, PowerPoint, or any other Office app somewhere, it has to go into the files too.
There are layers and layers of this history and complex internal state that have to be represented in the file. Everything from compatibility flags, OLE embedding, macros, external data sources, incremental saves, support for the quirks of legacy printers that no longer exist, CMYK, document signing, document review notes, and on and on.
No extra complexity had to be added to the OOXML file formats, that's just a reflection of the complexity of Microsoft Office applications.
Simplicity was never engineered into these file formats. If it had been, it would have been a tremendous extra effort for zero gain to Microsoft.
Don't blame Microsoft for this either, because other vendors did the exact same thing, for the exact same pragmatic reasons.
You might not add features, but that is most likely a losing proposition against those competitors that do have them, because normal users generally each want some tiny subset of features: be it images, tables, internal links, comments, or versions.
What you want is a compiler (e.g., into a different document format) or an interpreter (e.g., for running a search or a spell checker).
That's a task that's massively complicated, because you cannot give an LLM the semantic definition of the XML and your target (both are typically underdocumented and underspecified). Without that information, the LLM would almost certainly generate an incomplete or broken implementation.
Having a debate about the quality of OOXML feels like a waste of time, though. This was all debated in public when Microsoft was making its proprietary products into national standards, and nobody on Microsoft's side debated the formats on the merits, because there obviously weren't any, except a dubious backwards-compatibility promise that was already being broken because MS Office couldn't even render OOXML properly. People trying to open old MS Office documents were advised to try OpenOffice.
They instead did the wise thing and just named themselves after their enemy ("Open Office? Well we have Office Open!"), offered massive discounts and giveaways to budget-strapped European countries for support, and directly suborned individual politicians.
Which means to me that it's potentially a winnable battle at some point in the future, but I don't know why now would be a better outcome than then. Maybe if you could trick MS into fighting with Google about it. Or just maybe, this latest media push is some submarine attempt by Google to start a new fight about file formats?
https://news.ycombinator.com/item?id=44606646
But if you dig hard enough, there's actually links to more evidence of why it is that complicated... so I don't think it was necessarily intentionally done as a method of lock-in, but where's the outrage in that? /s
"Complicated file format has legitimate reasons for being complicated" just doesn't have the same ring to it as a sensationalized accusation with no proof.
'special case everything we ever used to do in office so everything renders exactly the same'
Instead of offering some suitable migration step that properly renders into the new format ONCE, with those specific quirks fixed in place?
"You have opened your Word 97 document in Office 2003. The quirks have been removed, so it might look different now. Check every page before saving as docx."
"You have pasted from a Word 97 document into an Office 2003 OOXML document. Some things will not work."
Obviously parsing the XML is trivial. What is not trivial is what you do with parsed XML and what the parsed structure represents.
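A quick Python sketch makes the gap concrete (report.docx is a stand-in for any Word file): getting the raw text out of a .docx takes a few lines of standard library, but everything that makes it a document is still uninterpreted afterwards:

    import zipfile
    import xml.etree.ElementTree as ET

    W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

    # a .docx is just a ZIP of XML parts; this much really is trivial
    with zipfile.ZipFile("report.docx") as z:
        root = ET.fromstring(z.read("word/document.xml"))

    # ...but this only yields bare runs of text: styles, numbering, fields,
    # tables, and revision marks live elsewhere and mean nothing yet
    for t in root.iter(f"{W}t"):
        print(t.text)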
What is it about serializing XML that would optimize the expression of a data model?
I think this needs to end and it is up to ordinary people to seek alternatives.
Apart from LibreOffice, we still have many other alternatives.
Even TypeScript encourages artificial complexity of interfaces and creates lock-in; that's why Microsoft loves it. That's why they made it Turing-complete, and why they don't want TypeScript to be made backwards compatible with JavaScript via the type annotations ECMAScript proposal. They want complex interfaces, and they want all these complex interfaces locked into their tsc compiler, which they control.
They love it when junior devs use obscure 'cutting edge' or 'enterprise grade' features of their APIs and disregard the benefits of simplicity and backwards compatibility.