That would be a surprising turn of events, and what a turn it would be! Imagine if the next generation of browsers supported XSLT 3.0: entire cabinets full of blog engines, static page generators, and much more could be rewritten. I support this. XSLT is one of those technologies that is too good to disappear. On the contrary: while some JavaScript frameworks have already turned to dust, I can still count on my XML toolchains working in 40 years' time.
rsolva · 2h ago
Oh, that would be something! All I want is to be able to make a website with some reusable elements and simple styling and keep adding content for decades, without having to think about keeping up with upgrades, dependencies and shifting framework paradigms.
I have made a couple of simple websites using PHP to bolt on reusable elements (header, footer, navigation), just because it is the solution that probably will work for ~decades without much churn. But XSLT would be even better!
cousin_it · 2h ago
I realize JS might not be to everyone's taste, but I think I've found the absolute minimal JS solution for reusing headers and footers: https://vladimirslepnev.me/write It's almost exactly like PHP, but client-side, and there's no "flash of content" at all.
em-bee · 1h ago
that's neat. i don't like inline js though. i'd like to be able to load it from a separate file so that i can reuse it on multiple pages.
i am thinking of something like
index.html
<div><navigation/></div>
index.js
function navigation() {
  // replaceWith() with a bare string inserts plain text,
  // so parse the markup into nodes first
  const link = document.createRange().createContextualFragment('<a href="...">...')
  document.querySelector('navigation').replaceWith(link)
}
or maybe
let customnodes = {
navigation: '<a href="...">...',
...
}
then add a function that iterates over customnodes and makes the replacements. even better if i could just iterate over all defined functions. (there is a way, i just didn't take the time to look it up) then the functions could be:
function navigation() {
return '<a href="...">...'
}
and the iterator would wrap that function with the appropriate document.querySelector('navigation').replaceWith call.
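a rough sketch of that iterator (untested, using the customnodes map from above):

function expand(customnodes) {
  for (const [name, html] of Object.entries(customnodes)) {
    // parse each html string into real nodes, then swap the placeholders out
    document.querySelectorAll(name).forEach(el =>
      el.replaceWith(document.createRange().createContextualFragment(html)))
  }
}
expand({ navigation: '<a href="...">...' })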
cousin_it · 1h ago
That'll work but you'll get flash of content I think. (Or layout shift, idk what's the right name for it.) The way I did it, the common elements are defined in common.js and inserted in individual pages, but the insert point is an inline script calling a function, not a div with specific id. Then the scripts run during page load and the page shows correctly right away.
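Schematically it looks something like this (a sketch, with made-up names; the real version is at the link above):

// common.js, loaded with <script src="common.js"></script> in <head>
function siteHeader() {
  // document.write during parsing inserts the markup synchronously,
  // so the page is complete on first paint: no flash, no layout shift
  document.write('<nav><a href="/">Home</a> <a href="/about">About</a></nav>')
}

<!-- in each page, at the spot where the header belongs -->
<script>siteHeader()</script>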
cosmic_cheese · 2h ago
Includes are the feature of PHP that made it catch my eye many years ago. Suddenly not having to duplicate shared headers, footers, etc and keep it all up to date across my sites was magical. I could only barely write code at all back then but mixing PHP includes into otherwise plain HTML felt natural and intuitive.
themafia · 2h ago
The <template> element combined with `cloneNode` works quite well for this. I've got a simple function which takes templates, clones them, fills in nodes with data from a structure, then prepares the result to be added to the page.
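Roughly like this (a simplified sketch, not my exact function; the selector-to-text mapping is made up):

<template id="card">
  <li class="card"><span class="name"></span> <span class="email"></span></li>
</template>

function fromTemplate(id, fills) {
  // deep-clone the template's content, then fill in the mapped nodes
  const node = document.getElementById(id).content.cloneNode(true)
  for (const [selector, text] of Object.entries(fills)) {
    node.querySelector(selector).textContent = text
  }
  return node // caller appends it wherever it belongs
}

document.querySelector('ul').append(
  fromTemplate('card', { '.name': 'Ada', '.email': 'ada@example.com' })
)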
Works a treat and makes most frameworks for the same seem completely pointless.
em-bee · 1h ago
could you share some examples please? i am interested in trying this out.
ocdtrekkie · 2h ago
PHP is a bit more fragile than it used to be. That's probably for the best, but I've had to fix a lot of my 2010-era PHP code to work on PHP 8.x. Though... like, it wasn't super hard. The days when you could just start using a variable passed in a URL, without setting a default if it wasn't there, are gone...
I read the Wikipedia article on XSLT, and as a longtime web developer I do not understand at all how this would be useful. Plenty of people here are saying that if this tech had taken hold we'd have a better world. Is there a clear example somewhere of why and how?
ergonaught · 38m ago
Ancient history for me but once upon a time my company was developing a web application with C++ that used XSLT to render HTML (with our "API" exposing data via XML). It was fast even then, and gave us a great deal of flexibility. We were certainly fans of XSLT.
pyuser583 · 3h ago
There is a fascinating alternative universe where XML standards actually took hold. I've seen it in bits and pieces. It would have been beautiful.
But that universe did not happen.
Lots of "modern" tooling works around the need. For example, in a world of Docker and Kubernetes, are those standards really that important?
I would blame the adoption of containerization for the lack of interest in XML standards, but by the time containerization happened, XML had been all but abandoned.
Maybe it was the adoption of Python, whose JSON libraries are much nicer than its XML ones. Maybe it was the fact that so few XML specs ever became mainstream.
In terms of effort, there is a huge tail in XML, where you're trying to get things working but getting little in return for that effort. XSLT is supposed to be the glue that keeps it all together, but there is no "it" to keep together.
XML also does not play very nice with streaming technologies.
I suspect that eventually XML will make a comeback. Or maybe another SGML dialect. But that time is not now.
Aurornis · 2h ago
I think the simplest explanation is that developers used it and did not like it.
The pro-XML narrative always sounded like what you wrote, as far back as I can remember: The XML people would tell you it was beautiful and perfect and better than everything as long as everyone would just do everything perfectly right at every step. Then you got into the real world and it was frustrating to deal with on every level. The realities of real-world development meant that the picture-perfect XML universe we were promised wasn't practical.
I don't understand your comparison to containerization. That feels like apples and oranges.
mikepurvis · 2h ago
HTML was conceived as a language for marking up a document that was primarily text; XML took the tags and attributes from that and tried to turn them into a data serialization and exchange format. But it was never really well suited to that, and it's obvious from looking at XML-RPC or SOAP payloads that there were fundamental gaps in XML's ability to encode type and structure information inline:
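For instance, a single two-argument call in XML-RPC looks roughly like this, with every type spelled out as a wrapper element:

<methodCall>
  <methodName>add</methodName>
  <params>
    <param><value><int>2</int></value></param>
    <param><value><int>3</int></value></param>
  </params>
</methodCall>

versus the JSON-RPC equivalent:

{"jsonrpc": "2.0", "method": "add", "params": [2, 3], "id": 1}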
I don't think this is the only factor, but I think XML had a lot of this kind of cognitive overhead built in, and that gave it a lot of friction when stacked up against JSON and later YAML... and when it came to communicating with a SPA, it was hard to compete with JS being able to natively eval the payload responses.
UlisesAC4 · 34m ago
To be fair, I cannot trust the shape of your JSON-RPC either: I am not sure if id is truly an integer or if you sent me an integer by mistake, and the same goes for params, or even the payload of the params' param. This is why we ended up adopting OpenAPI for describing HTTP interactions, and IIRC JSON-RPC specifically can also be described with it. At least on the schema side nobody would call XML ambiguous. And you don't need heavier parsing either: the object is a tree, with no more checking for escaped strings, no more issues with hand-coded multiline strings, and no need to separate attributes with commas, since the end tag delimits the scope, and so on.
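For example, a JSON Schema fragment along these lines (just a sketch) removes the ambiguity about id:

{
  "type": "object",
  "properties": {
    "jsonrpc": { "type": "string", "const": "2.0" },
    "id":      { "type": "integer" },
    "params":  { "type": "array" }
  },
  "required": ["jsonrpc", "id"]
}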
bawolff · 34m ago
While i think a lot of xml was a bad idea, some of the issues are not intrinsically the fault of XML but of some really poor design decisions by the people making XML-based languages.
They tended to be design-by-committee messes that included every possible use case as an option.
Anyone who has ever had the misfortune of having to deal with SAML knows what i'm talking about. Its a billion line long specification, everyone only implements 10% of it, and its full of hidden gotchas that will screw up your security if you get them wrong. (Even worse, the underlying xml-signature spec is literally the worst way to do digital signatures possible. Its so bad you'd think someone was intentionally sabotaging it)
In theory this isn't xml's fault, but somehow XML seems to attract really bad spec designers.
themafia · 2h ago
The simplest explanation is that attributes were a mistake. They add another layer to the structure and create confusion as to where data is best stored within it.
XML without attributes probably would have seen wide and ready adoption.
smarx007 · 2h ago
> developers used it and did not like it.
This makes sense.
However, there are two ways to address it:
1) Work towards a more advanced system that addresses the issues (for example, RDF/Turtle – expands XML namespaces to define classes and properties, represents graphs instead of being limited to trees unlike XML and JSON)
2) Throw it away and start from scratch. First, JSON. Then, JSON schema. Jq introduces a kind of "JSONPath". JSONL says hi to XML stream readers. JSONC because comments in config files are useful. And many more primitives that existed around XML were eventually reimplemented.
Note how the discussion around removing XSLT 1 support similarly has two ways forward: yank it out or support XSLT 3.
I lean towards Turtle replacing XML over JSON, and for XSLT 3 to replace XSLT 1 support in the browsers.
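For a taste of option 1, a small Turtle sketch (the URIs are made up):

@prefix schema: <https://schema.org/> .

<https://example.org/posts/1>
    a schema:BlogPosting ;
    schema:headline "XSLT 3 in the browser" ;
    schema:author <https://example.org/people/alice> .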
mpyne · 2h ago
> And many more primitives that existed around XML were eventually reimplemented.
Don't miss that they were reimplemented properly.
Even XML schemas, the one thing you'd think they were great at, ended up seeing several different implementations beyond the original DTD-based schema definitions and beyond XSD.
Some XML things were absolute tire fires that should have been reimplemented even earlier, like XML-DSIG, SAML, SOAP, WS-everything.
It's not surprising devs ended up not liking it, there are actual issues trying to apply XML outside of its strengths. As with networking and the eventual conceit of "smart endpoints, dumb pipes" over ESBs, not all data formats are better off being "smart". Oftentimes the complexity of the business logic is better off in the application layer where you can use a real programming language.
smarx007 · 1h ago
> Even XML schemas, the one thing you'd think they were great at
Of course not! W3C SHACL shapes, on the other hand...
schema.org is also a move in the right direction
mattmanser · 2h ago
Part of the problem was it came in an era before we really understood programming, as a collective. We didn't even really know how to encapsulate objects properly, and you saw it in poor database schema designs, bizarre object inheritance patterns, poorly organised APIs, even the inconsistent method param orders in PHP. It was everywhere. Developers weren't good at laying out even POCOs.
And those bizarre designs went straight into XML, properties often in attributes, nodes that should have been attributes, over nesting, etc.
And we blamed XML for the mess where often it was just inexperience in software design as an industry that was the real cause. But XML had too much flexibility compared to the simplicity of the later JSON, meaning it helped cause the problem. JSON 'solved' the problem by being simpler.
But then the flip side was that it was too strict, and getting one started in code was a tedious PITA where you had to specify a schema even though it didn't exist, or even matter, most of the time.
toyg · 2h ago
Nah, we still have all those issues and more.
The hard truth is that XML lost to the javascript-native format (JSON). Any JavaScript-native format would have won, because "the web" effectively became the world of JavaScript. XML was not js-friendly enough: the parsing infrastructure was largely based on C/C++/Java, and then you'd get back objects with verbose interfaces (again, a c++/java thing) rather than the simple, nested dictionaries that less-skilled "JS-first" developers felt at ease with.
mpyne · 2h ago
The thing is, JSON is even superior in C++.
It's a dumber format but that makes it a better lingua franca between all sorts of programming languages, not just Javascript, especially if you haven't locked in on a schema.
Once you have locked in on a schema and IDL-style tooling to autogenerate adapter classes/objects, then non-JSON interchange formats become viable (if not superior). But even in that world, I'd rather have something like gRPC over XML.
em-bee · 1h ago
that's the thing, XML should have become javascript native so that we could write inline HTML more easily like JSX from react allows us to do.
Aurornis · 2h ago
This is the abstract idealism I was talking about: Every pro-XML person I've talked to wants to discuss XML in the context of a hypothetical perfect world of programming that does not exist, not the world we inhabit.
The few staunch XML supporters I worked with always wanted to divert blame to something else, refusing to acknowledge that maybe XML was the wrong tool for the job or even contributing to the problems.
johannes1234321 · 2h ago
I think a key factor is: XML offers so many ways to serialize that you always have to decide, in the individual case, what the structure should be: what's an attribute, what's text content, what's its own element. And those are important choices that have an impact on later changes.
With JSON you can dump data structures from about any language straight out and it's okay to start toying around and experimenting. Over time you might add logic for filtering out some fields, rename others, move stuff a little around without too much trouble.
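For example (field names made up), the same record dumps one obvious way from JSON, while XML makes you pick a shape first:

JSON.stringify({ id: 7, title: "Hello" })
// -> {"id":7,"title":"Hello"}

<!-- attribute or element? both are "correct": -->
<post id="7"><title>Hello</title></post>
<post><id>7</id><title>Hello</title></post>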
Also, quickly writing the structure up by hand works a lot faster in any editor, without having to repeat closing tags (though at some point the closing brackets and braces take their toll).
However, I agree: once you've got the XML machinery, there is a lot of power in it.
the_mitsuhiko · 3h ago
> I've seen it in bits and pieces. It would have been beautiful.
XHTML, being based on XML, tried to be a strict standard in a world where a non-strict standard already existed, and everybody was made very aware, on a daily basis, that a non-strict standard is much easier to work with.
I think it's very hard to compete with that.
kstrauser · 1h ago
Seconded. I spent a whole lot of effort making my early-2000s websites emit compliant XHTML, because it seemed like the right thing to do and the way we were inevitably heading. And then I (and apparently almost everyone else, at the same time) realized it was a whole lot of busywork with almost nothing to show for it. The only thing XHTML ever added to the mix was a giant error message if you forgot to write "<br/>" instead of "<br>".
Know what? Life's too short to lose time to remembering to close a self-closing tag.
About the time XHTML 1.1 came along, we collectively bailed and went to HTML5, and it was a breath of fresh air.
ndriscoll · 1h ago
I don't understand this sentiment. Never have. For years I've doubted myself, wondering whether I'm mistaken that this is how XHTML really fell out of favor, despite it being what I recall reading at the time. A modern web developer is writing JavaScript or TypeScript, which will make you correctly close your curly braces and parentheses (and so much more, with TypeScript).
Then React introduced faux-XML as JSX except with this huge machinery of a runtime javascript virtual DOM instead of basic template expansion and everyone loves it? And if this react playground I've opened up reflects reality, JSX seems to literally require you to balance opening/closing your tags. The punch-line of the whole joke.
What was the point of this exercise? Why do people use JSX for e.g. blogs, when HTML templating is built into the browser and they do nothing dynamic? For many years it's been hard to shake the feeling that it is some trick to justify six-figure salaries for people making web pages simple enough that an 8-year-old should be up to the task.
That same nagging feeling reassures me about our AI future though. Easy ways to do things have been here the whole time, yet here we are. I don't think companies are as focused on efficiency as they pretend. Clearly social aspects like empire building dominate.
kstrauser · 35m ago
It could be that we have drastically different ideas of "easy ways to do things". To me, XSLT has never remotely resembled an easy way to do things. It seemed like a weird, difficult, almost deliberately obtuse horse designed by committee that technically worked, but in a horrifying kind of way, like finding out that someone wrote a C compiler in Bash. Wow, it's cool that someone was able to do it at all, but I can't imagine ever wanting to jump on that train.
jackero · 2h ago
I legitimately tried my best to like XSLT on the web back in the day.
The idea behind XSLT is nice: creating a stylesheet to transform raw data into presentation. The practice of using it was terrible. It was ugly, it was verbose, it was painful, it had gotchas, it made it easier for scrapers, it bound your data to your presentation more, and so on.
Most of the time I needed to generate XML to later apply an XSLT stylesheet, the resulting XML document was mostly a one-off with no associated spec, not a serious transport document. It raised the question of why I was doing this extra work.
ndriscoll · 1h ago
Making your data easy to scrape (or, more generally, easy to work with) is part of the point. If you're building your web presence, you want people to be able to find the data on your site easily (unless your goal is platform lock-in).
The entire point of XSLT is to separate your data from its presentation. That's why it made it easy to scrape. You could return your data in a more natural domain model and transform it via a stylesheet to its presentation.
And in doing so it is incredibly concise (mostly because XPath is so powerful).
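A complete stylesheet for, say, a link list fits in a few lines. A sketch against a made-up posts/post document:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/posts">
    <html><body><ul>
      <xsl:for-each select="post">
        <li><a href="{@href}"><xsl:value-of select="title"/></a></li>
      </xsl:for-each>
    </ul></body></html>
  </xsl:template>
</xsl:stylesheet>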
hnlmorg · 2h ago
The problem with XML is that it’s horrible to manually read and write, and it takes more effort to parse too. It’s a massive spec which contains footguns (a fair few CVEs exist just because of people using XML on publicly accessible endpoints).
Now I do think there is a need for the complexity supported by XML to exist, but 99% of the time JSON or similar is good enough while being easy to work with.
That all said, XHTML was amazing. I’d have loved to see XHTML become the standard for web markup. But alas that wasn’t to be.
spankalee · 2h ago
XHTML was too rigid: as a user agent, the browser should try to render a document rather than tell the user "tough, the developer screwed up". So XHTML lost to the much more forgiving HTML.
There was an idea to make a forgiving XML for web use cases: https://annevankesteren.nl/2007/10/xml5 but it never got traction.
I saw the rigidity of XHTML as an asset rather than a problem.
But I do agree that I’m likely in the minority of people (outside of web developers at least) that thought that way.
assimpleaspossi · 2h ago
Sometimes it gets lost that XML is a document description language like HTML.
somat · 2h ago
I actually rather like XML(asterisk). But this is one of its warts: it wants to be two things, a markup language (it's in the name, and arguably where it should have stayed) and an object notation. This is where you start to question some of XML's fundamentals, stuff like: why is it redundant? When do you stick data in attributes? Or is it better to nest tags all the way down?
Asterisk: except namespaces. I loathe those. You are skipping happily along, chewing through your XML, xpathing left and right, and then you find out some psychopath has decided to use namespaces, and now everything has become super awkward and formal.
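e.g. in the browser: the moment a default namespace shows up, the plain XPath stops matching and you have to drag a resolver around (doc here is assumed to be a parsed Atom feed):

// no namespace: this just works
doc.evaluate('//title', doc, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null)

// with a default namespace, every step needs a prefix and a resolver
const ns = (prefix) => prefix === 'a' ? 'http://www.w3.org/2005/Atom' : null
doc.evaluate('//a:title', doc, ns, XPathResult.FIRST_ORDERED_NODE_TYPE, null)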
layer8 · 2h ago
Namespaces are essential whenever you want to insert contents defined by one schema into “payload” or “application-defined” elements of another schema. There are also more complex scenarios where attributes from one schema are used to annotate elements from a different schema.
Well, I guess we could do it like libraries in C-land and have every schema add its own informal identifier prefix to avoid name collisions. But there’s a reason why programming languages moved to namespaces as an explicit notion.
SoftTalker · 2h ago
XML was a great idea but the markup was tedious and verbose.
JSON is too simplistic.
Something built from s-expressions would probably have been ideal, but we've known that for 70 years. What a pity.
th0ma5 · 3h ago
I think you're getting at an often-discussed ebb and flow between being extremely controlled and being extremely flexible. XML was astounding compared to system-specific proprietary formats, and then, as the need for formalism grew, people wanted something simpler... And now you see the same thing happening with JSON and the need for more rigor. I personally think there are many forces behind all of this: the context at the time, prevailing senses of which things are chores and which aren't, companies trying to gain advantage. But probably most important is that the vast majority of people, myself included, have only a subset of the historical knowledge about systems and computer science, yet we have to get things done.
Devasta · 2h ago
If XForms was released on browsers today, it would be hailed as a revolutionary technology. Instead, it is just one of the many things thrown away, and even now 20 years after the WHATWG took over we cannot even do a PUT request without Javascript.
> I would blame the adoption of containerization for the lack of interest in XML standards, but by the time containerization happened, XML had been all but abandoned.
Not sure how that is true. XML is a specification for a data format, but you still need to define the schema (i.e., elements, attributes, their meaning). It's not like XML for web pages (XHTML?) could also serve as XML for Linux container descriptions or as XML for Android app manifests.
madeofpalk · 2h ago
I don’t follow why docker killed XML.
SigmundA · 2h ago
>XML also does not play very nice with streaming technologies.
Not sure why; it's just as good as JSON there. If you are going to stream and parse, you need a low-level push or pull parser, not a DOM, exactly as with JSON. See SAX for Java or XmlReader/XmlWriter in .NET.
XSLT 3 even has a streaming mode, I believe, which was badly needed, though it comes with constraints from not having the whole document in memory at once.
I liked XSLT, but there is no need for it; JavaScript is good enough, if not better. Many times you needed an XSLT script extension to get something done that it couldn't do on its own anyway, so you might as well use a full language with good libraries for handling XML instead. See LINQ to XML etc.
somat · 2h ago
Right, both sort of suck at streaming; something about being closed-form tree structures would be my guess. (Strictly speaking, you need to close the tree to serialize it, so there is no clean way to append data in real time; the best you can do is leave the structure open and send fragments.) Having said that, I am not really sure what a good native streaming format would look like. My best guess is something flatter, closer to CSV.
SigmundA · 1h ago
>Right, both sort of suck at streaming, something about being closed form tree structures would be my guess(strictly speaking, you need to close the tree to serialize it, so no clean way to append data in real time, best you can do is to leave the structure open and send fragments).
Again, I don't really agree. It's just that most developers don't seem to understand the difference between a DOM (or parsing JSON into a full object) and using a streaming reader or writer, so they need to be hand-fed a format that forces it on them, such as line-based CSV.
Maybe if JSON and XML allowed top level multiple documents / objects it would have helped like JSON lines.
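i.e. something like JSON Lines, where each line is a complete document and a streaming reader can handle them as they arrive:

{"event": "click",  "ts": 1}
{"event": "scroll", "ts": 2}
{"event": "click",  "ts": 3}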
WJW · 2h ago
> I would blame the adoption of containerization for the lack of interest in XML standards, but by the time containerization happened, XML had been all but abandoned.
It got abandoned because it sucks. New technology gets adopted when it's good. The XML standards were just super meh and difficult to work with. There's really not much more to it than that.
echelon · 2h ago
Google didn't want XML to win.
XHTML would have made the Semantic Web (capital letters) possible. Someone else could have done search better. We might have had a proper P2P web.
They wanted sloppy, because only Google scale could deal with that.
Hopefully the AI era might erode that.
bawolff · 27m ago
The Semantic Web was never going to win. It does not make sense on a fundamental level.
XHTML's failure had nothing to do with it, and is basically unrelated. Even if XHTML had won, i fail to see how that would have helped the Semantic Web in any way, shape, or form.
WJW · 2h ago
XML died because it sucks. Everyone who had to deal with it back in the day jumped to YAML and/or JSON as quickly as they could. Google didn't cause that, but because they're a search engine they followed it.
hnlmorg · 2h ago
I don’t recall Google being the ones to kill XHTML. Got any references to back that claim up?
adfm · 1h ago
If the P in SPA exists to load your app, then why care? If you’ve noticed the herd thin as SSE and hypermedia trim split ends, then you may see the utility in open web tooling.
jongjong · 14m ago
It's a mistake to assume that the utility of a tool has anything to do with its adoption.
The real barrier to adoption for any tool is the network effects of other existing tools, which create attention barriers and cultural barriers that can hinder the adoption of superior alternatives.
A tool has to adhere to, and build on top of, existing conceptual baggage in order to be appealing to the masses of developers.
This is partly because developers believe that the tools they're using now are cutting-edge and optimal... So a radical conceptual reinvention of their current favorite tools will look to them like a step backwards, regardless of how much further it can take them forward.