Show HN: SJT - A lightweight structured JSON table format for APIs
For example, with Discord’s /messages endpoint:
Raw JSON payload: ~50,110 bytes
Same data encoded with SJT: ~26,494 bytes
So you get about a 50% reduction in size, while still being able to decode incrementally (record by record). Surprisingly, decoding can even be faster than plain JSON, because there’s less string parsing overhead.
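The size win comes from the table layout: plain JSON repeats every key name in every record, while a table format stores the keys once and each record as a row of values. A rough sketch of that idea (this is an illustration of the general header+rows approach, not the actual SJT wire format; see the spec for that):

```javascript
// Hypothetical header+rows table encoding, NOT the real SJT format.
// Plain JSON repeats every key per record; a table layout stores the
// field names once and each record as an array of values.
function encodeTable(records) {
  const header = Object.keys(records[0]); // field names, stored once
  const rows = records.map((r) => header.map((k) => r[k]));
  return JSON.stringify({ header, rows });
}

function* decodeTable(text) {
  // Yields one record at a time, so a consumer can process rows
  // incrementally instead of materializing the whole array first.
  const { header, rows } = JSON.parse(text);
  for (const row of rows) {
    yield Object.fromEntries(header.map((k, i) => [k, row[i]]));
  }
}

const records = [
  { id: 1, author: "alice", content: "hi" },
  { id: 2, author: "bob", content: "hello" },
];
const encoded = encodeTable(records);

// Smaller, because "id"/"author"/"content" appear once, not per record.
console.log(encoded.length < JSON.stringify(records).length); // true
for (const rec of decodeTable(encoded)) console.log(rec);
```

The saving grows with record count, since the per-record key overhead is eliminated for every row after the first.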
Quick benchmark:
| Format      | Size (KB) | Encode Time | Decode Time |
| ----------- | --------- | ----------- | ----------- |
| JSON        | 3849.34   | 41.81 ms    | 51.86 ms    |
| JSON + Gzip | 379.67    | 55.66 ms    | 39.61 ms    |
| MessagePack | 2858.83   | 51.66 ms    | 74.53 ms    |
| SJT (json)  | 2433.38   | 36.76 ms    | 42.13 ms    |
| SJT + Gzip  | 359.00    | 69.59 ms    | 46.82 ms    |
Test conditions:

- Dataset: synthetic tabular dataset of 50,000 records with mixed primitive fields, nested arrays, and nested objects (representative of large REST API payloads).
- Runtime: Node.js 20 (V8 engine).
- Implementation: JavaScript (via sjt.js).
- Size (KB): uncompressed size in kilobytes (estimated for binary formats).
- Encode / Decode (ms): average time in milliseconds to serialize/deserialize the entire dataset.
Spec: https://github.com/SJTF/SJT
JS implementation: https://github.com/yukiakai212/SJT.js
Curious to hear feedback from people who have worked with JSON-heavy APIs, streaming, or compact data formats (CSV, Parquet, etc.).
Or do you think the computational overhead isn't worth the savings?