JSON is not a friendly format to the Unix shell — it’s hierarchical, and cannot be reasonably split on any character
Yes, shell is definitely too weak to parse JSON!
(One reason I started https://oils.pub is because I saw that bash completion scripts try to parse bash in bash, which is an even worse idea than trying to parse JSON in bash)
I'd argue that Awk is ALSO too weak to parse JSON
> The following code assumes that it will be fed valid JSON. It has some basic validation as a function of the parsing and will most likely throw an error if it encounters something strange, but there are no guarantees beyond that.
Yeah I don't like that! If you don't reject invalid input, you're not really parsing
---
OSH and YSH both have JSON built-in, and they have the hierarchical/recursive data structures you need for the common Python/JS-like API:
osh-0.33$ var d = { date: $(date --iso-8601) }
osh-0.33$ json write (d) | tee tmp.txt
{
"date": "2025-06-28"
}
Parse, then pretty print the data structure you got; then create a JSON syntax error on purpose, and it's rejected (now I see the error message could be better). Another example from wezm yesterday: https://mastodon.decentralised.social/@wezm/1147586026608361...
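A minimal sketch of those two steps, assuming the json read builtin and the = pretty-printing keyword as documented for Oils (output omitted, since its exact formatting varies by version):

osh-0.33$ var d2 = null
osh-0.33$ json read (&d2) < tmp.txt      # parse the file written above
osh-0.33$ = d2                           # pretty prints the parsed Dict
osh-0.33$ echo '{"date": ' | json read (&d2)   # invalid JSON is rejected with a parse error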
Awk is great and this is a great post. But dang, awk really shoots itself in the foot with its lack of features it so desperately needs!
Like: printing all but one column somewhere in the middle. It turns into long, long commands that really pull away from the spirit of fast, improvised Unix experimentation.
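For example, dropping just the third column is the classic case; with plain awk you end up spelling out the separator logic yourself (a common idiom, not from the post):

$ echo 'a b c d e' | awk '{for (i = 1; i <= NF; i++) if (i != 3) printf "%s%s", $i, (i < NF ? OFS : ORS)}'
a b d e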
jq and sql both have the same problem :)
SoftTalker · 29m ago
> awk really shoots itself in the foot with its lack of features it so desperately needs
Whence perl.
jcynix · 16m ago
> awk really shoots itself in the foot with its lack of features it so desperately needs!
That's why I use Perl instead (besides some short one liners in awk, which in some cases are even shorter than the Perl version) and do my JSON parsing in Perl.
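For anyone curious, the Perl side can be a one-liner too, using the core JSON::PP module (my example, not the commenter's):

$ perl -MJSON::PP -0777 -ne 'print decode_json($_)->{date}, "\n"' tmp.txt
2025-06-28

-0777 puts Perl in slurp mode, so the whole file lands in $_ at once before decode_json sees it.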
YSH has JSON natively, but for anyone interested, it would be fun to test out the language by writing a JSON parser in YSH
It's fundamentally more powerful than shell and awk because it has garbage-collected data structures - https://www.oilshell.org/blog/2024/09/gc.html
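As a tiny illustration of what that buys you, values nest arbitrarily (a sketch in the style of the YSH tour; nothing here is from the thread):

osh-0.33$ var child = {value: 2, children: []}
osh-0.33$ var tree = {value: 1, children: [child]}
osh-0.33$ json write (tree)   # serializes the nested structure back to JSON

Plain shell variables are flat strings, so there's no direct equivalent without encoding tricks.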
This
diff -rs a/ b/ | awk '/identical/ {print $4}' | xargs rm
is one of my often-used awk one-liners. Unless some filenames contain e.g. whitespace, in which case it's Perl again.
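A guess at what the Perl fallback looks like (not the commenter's actual one-liner; it still misparses filenames that themselves contain " and "):

diff -rs a/ b/ | perl -nle 'unlink $1 if /^Files .* and (.*) are identical$/'

Doing the unlink inside Perl also sidesteps the xargs word-splitting problem.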