The best – but not good – way to limit string length

24 points · adam-p · 20 comments · 4/30/2025, 8:37:49 PM · adam-p.ca

Comments (20)

wpollock · 13m ago
Best advice I've heard is to never use the character type in your programming language. Instead, store characters in strings. An array of strings can be used as a string of characters. In this approach, characters become opaque blobs of bytes. This makes it easy to get the two numbers you care about: length in characters and size in bytes.

There is some overhead for this, so it's maybe a technique more suited to backends. Normalization, sanitization, and validation steps are best performed in the frontend.

Also worth knowing is the ICU library, which is often the easiest way to work with Unicode consistently regardless of programming language.

Finally, punycode is a standard way to represent arbitrary Unicode strings as ASCII. It's reversible too (and built into every web browser). You can do size limits on the punycode representation.

wavemode · 2h ago
In the age of unicode (and modern computing in general), all of this is more headache than it's worth. What is actually important is that you limit the size of an HTTP request to your server (perhaps making some exceptions for file upload endpoints). As long as the user's form entries fit within that, let them do what they want.
gcau · 28m ago
I don't think it's practical or useful to just say "limit the size of entire requests" and ignore all the real-world reasons you'd want to actually validate/check data before putting it in your database. That kind of logic is how we get bugs and security holes. This person's write-up gives specific and detailed information that's genuinely useful.


adam-p · 1h ago
If you can get away with that, that's great. But I feel like there are still plenty of cases where you want to limit the lengths of particular fields (and communicate to the user which lengths were exceeded).
jasonthorsness · 3h ago
Huh, apparently HTML input attributes like maxlength don't try anything fancy and just count UTF-16 code units, same as JavaScript strings (I guess it makes sense...). With the prevalence of emoji, this seems like it might not do the right thing.

https://html.spec.whatwg.org/multipage/input.html#attr-input...

jerf · 30m ago
I had this problem recently, logging email subjects into something that has a defined byte-size limit. I went for iterating over graphemes, fitting as many complete graphemes into the byte budget as I could, and then stopping. The idea is: don't show broken graphemes, and fit as much as possible.

This approach probably solves most programmer problems with length. However, if the limit has to be surfaced to an end user who is not intimately familiar with the nature of Unicode encodings (which is, you know, basically everybody), it may be difficult to explain what the limit actually means in any sensible way. About all you can do is give vague hints that the input is nearly too long, and avoid being precise enough for there to be a problem. There doesn't seem to be a perfect solution here: there's no easy way to explain the lengths of these things to end users, and no reason to ever expect them to understand it. That seems fundamental to me.
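A sketch of that approach (the function name `truncateToBytes` is mine, not from the comment), keeping whole grapheme clusters until the next one would exceed a byte budget:

```javascript
// Keep whole grapheme clusters while they fit in maxBytes of UTF-8;
// never emit a partial cluster. Assumes Intl.Segmenter (Node 16+).
function truncateToBytes(s, maxBytes) {
  const enc = new TextEncoder();
  const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
  let out = "";
  let used = 0;
  for (const { segment } of seg.segment(s)) {
    const n = enc.encode(segment).length; // UTF-8 bytes for this cluster
    if (used + n > maxBytes) break;       // the next cluster doesn't fit; stop
    out += segment;
    used += n;
  }
  return out;
}

console.log(truncateToBytes("h\u00e9llo", 3)); // "hé" (1 + 2 bytes; "l" won't fit)
```

Multi-code-point clusters like ZWJ emoji sequences are kept or dropped as a unit, which is exactly the "no broken graphemes" property.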

aidenn0 · 1h ago
This doesn't seem to cover truncation, but rather acceptance/rejection. If you are given something with "too many" codepoints, but need to use it anyways it seems like it would make sense to truncate it on a grapheme cluster boundary.
adam-p · 1h ago
I don't get into truncation much, but I do mention the risk of:

a) failing to truncate on a code point sequence boundary (a bug React Native iOS used to have)[1], and

b) failing to truncate on a grapheme cluster boundary (a bug React Native Android seems to still have)[2]

[1]: https://adam-p.ca/blog/2025/04/string-length/#utf-16-code-un...

[2]: https://adam-p.ca/blog/2025/04/string-length/#unicode-code-p...

neuroelectron · 4h ago
This is why my website is going to be ASCII only.
poincaredisk · 3h ago
Which is a reasonable and clean solution - I love simplicity of ASCII like every programmer does.

Except ASCII is not enough to represent my language, or even my name. Unicode is complex, but I'm glad it's here. I'm old enough to remember the absolute nightmare that was multi-language support before Unicode and now the problem of encodings is... almost solved.

HeyImAlex · 2h ago
Thank you for writing this! It’s something I’ve always wanted a comprehensive guide on, now I have something to point to.
Retr0id · 3h ago
> The byte size allowed would need to be about 100x the length limit. That’s… kind of a lot?

Would it need to be, though? ~10x ought to be enough for any realistic string that wasn't especially crafted to be annoying.

adam-p · 1h ago
Valid question, and I think you're right in the abstract and most of the time. But I also think you end up with a mismatch.

What's the concrete spec for the limit if you've only got 10x storage per grapheme cluster?

You probably end up providing the limit in bytes. That's fine, but it's no longer the "hybrid counting" thing anymore.

aidenn0 · 1h ago
They show a single Hindi character that is 15 bytes in UTF-8. That's enough over 10 that it would be believable that Hindi words could get uncomfortably close to the 10x limit.
Retr0id · 1h ago
A single hindi character, yes. But they also mention that only ~25% of hindi characters use combining marks.
bsder · 3h ago
TIL: in the worst case, "20 UTF-8 bytes" == "1 Hindi character".

Going to have to remember that.


o11c · 3h ago
Note that normalization involves rearranging combining characters of different combining classes:

  > Array.from("\u{10FFff}\u0300\u0327".normalize('NFC')).map(x=>x.codePointAt().toString(16))
  [ '10ffff', '327', '300' ]
If a precombined character exists, the relevant accent will be pulled into the base regardless of where it is in the sequence. Note also that normalization can change the visual length (see below) under some circumstances.

The article is somewhat wrong when it says Unicode may "change character normalization rules"; new combining characters may be added (which affects the class sort above) but new precombined ones cannot.

---

There's one important notion of "length" that this doesn't cover: how wide is this on the screen?

For variable-width fonts of course this is very difficult. For monospace fonts, there are several steps for the least-bad answer:

* Zeroth, if you have reason to believe a later stage has a limit on the number of combining characters or will normalize, do the normalization yourself if that won't ruin your other concerns. (TODO - since there are some precomposed characters with multiple accents, can this actually make things worse?)

* First, deal with whitespace. Do you collapse space? What forms of line separator do you accept? How far apart are tab stops?

* Second, deal with any nonprintable/control/format characters (including spaces you don't recognize), e.g. escaping them or replacing them by their printable form but adding the "inverted" attribute.

* Third, deal with any leading combining characters ("leading" meaning immediately after a nonprintable or a line separator) by synthesizing an NBSP (which is not a space), which has width 1. Likewise, synthesize missing Hangul fillers anywhere in the line.

* Now, iterate through the codepoints, checking their EastAsianWidth (note that you can usually have a table combining this lookup with the earlier stages): -1 for a control character, 0 for a combining character (unless dealing with a system that's too dumb to strip them), 1 or 2 for normal characters.

* Any codepoints that are Ambiguous or in one of the Private Use Areas should be counted both ways (you want to produce two separate counts). Any combining characters that are enclosing should be treated as ambiguous (unless the base was already wide). Likewise for the Korean Hangul LVT sequences, you should produce a range of lengths (since in practice, whether they will combine depends on whether the font includes that exact sequence).

* If you encounter any ZWJ sequences, regardless of whether or not they correspond to a known emoji, count them both ways (min length being the max of any single component, max length as counted all separately).

* Flag characters are evil, since they violate Unicode's random-access rule. Count them both as if they would render separately and if they would render as a flag.

* TODO what about Ideographic Description Characters?

* Finally, hard-code any exceptions you encounter in the wild, e.g. there are some Arabic codepoints that are really supposed to be more than 2 columns.

For the purpose of layout, you should mostly work based on the largest possible count. But if the smallest possible count is different, you need to use some sort of absolute positioning so you don't mess up the user's terminal.
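The core codepoint-iteration step can be sketched as below. This is a deliberately simplified version of that procedure: the combining and wide ranges are a small illustrative subset of the real EastAsianWidth and combining-class tables, and none of the ambiguous/ZWJ/flag handling above is included. The function name `monospaceWidth` is mine.

```javascript
// A tiny subset of the combining-mark ranges (width 0).
function isCombining(cp) {
  return (cp >= 0x0300 && cp <= 0x036f) ||  // combining diacritical marks
         (cp >= 0x1ab0 && cp <= 0x1aff) ||  // combining diacritical marks extended
         (cp >= 0x20d0 && cp <= 0x20ff);    // combining marks for symbols
}

// A tiny subset of the East Asian Wide/Fullwidth ranges (width 2).
function isWide(cp) {
  return (cp >= 0x1100 && cp <= 0x115f) ||  // Hangul Jamo (leading consonants)
         (cp >= 0x2e80 && cp <= 0x9fff) ||  // CJK radicals .. unified ideographs
         (cp >= 0xac00 && cp <= 0xd7a3) ||  // Hangul syllables
         (cp >= 0xf900 && cp <= 0xfaff) ||  // CJK compatibility ideographs
         (cp >= 0xff00 && cp <= 0xff60);    // fullwidth forms
}

// Estimate monospace display columns: 0 for combining marks,
// 2 for wide characters, 1 for everything else.
function monospaceWidth(s) {
  let width = 0;
  for (const ch of s) {               // for..of iterates by code point
    const cp = ch.codePointAt(0);
    if (isCombining(cp)) continue;    // width 0
    width += isWide(cp) ? 2 : 1;
  }
  return width;
}

console.log(monospaceWidth("abc"));      // 3
console.log(monospaceWidth("漢字"));     // 4
console.log(monospaceWidth("e\u0301")); // 1 (combining acute is free)
```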

adam-p · 57m ago
> The article is somewhat wrong when it says Unicode may "change character normalization rules"; new combining characters may be added (which affects the class sort above) but new precombined ones cannot.

That's fair. I updated the wording in the post.

Thanks for the display info. It's cool and horrible and out of scope for my post.

adam-p · 5h ago
@dang Can the title be changed? It should be "The best – but not good – way to limit string length". Thanks.
dang · 4h ago
Fixed!