Another problem is line breaks. Have a <textarea>? Line breaks are counted as \n on the client (affecting maxlength attribute and JavaScript calculations using textarea.value.length), but submitted as \r\n. This has bitten me on “2000 character maximum” feedback forms at least twice: client says it’s fine, server says it’s too long, and promptly throws everything away.
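A minimal TypeScript sketch of a client-side guard for this, assuming the server counts the submitted CRLF form; submittedLength is a hypothetical helper name:

    // Count the value as the server will see it: form submission normalizes
    // line breaks in a <textarea> to CRLF, so each \n counts as two characters.
    function submittedLength(value: string): number {
      return value.replace(/\r?\n/g, "\r\n").length;
    }

    console.log("a\nb".length);           // 3 on the client
    console.log(submittedLength("a\nb")); // 4 as submitted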
wpollock · 6h ago
Best advice I've heard is to never use the character type in your programming language. Instead, store characters in strings. An array of strings can be used as a string of characters. In this approach, characters become opaque blobs of bytes. This makes it easy to get the two numbers you care about: length in characters and size in bytes.
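A minimal sketch of that idea in TypeScript, assuming a runtime with Intl.Segmenter (modern browsers, Node 16+):

    // Split a string into user-perceived characters (grapheme clusters);
    // each element of the array is an opaque "character" string.
    const graphemes = new Intl.Segmenter(undefined, { granularity: "grapheme" });
    function toCharacters(s: string): string[] {
      return Array.from(graphemes.segment(s), (seg) => seg.segment);
    }

    const s = "né👩‍👩‍👧";
    console.log(toCharacters(s).length);             // length in characters: 3
    console.log(new TextEncoder().encode(s).length); // size in bytes (UTF-8): 21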
There is some overhead to this, so it's maybe a technique more suited to backends. Normalization, sanitization, and validation steps are best performed in the frontend.
Also worth knowing is the ICU library, which is often the easiest way to work with Unicode consistently regardless of programming language.
Finally, punycode is a standard way to represent arbitrary Unicode strings as ASCII. It's reversible too (and built into every web browser). You can do size limits on the punycode representation.
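A minimal sketch, assuming the userland punycode npm package (the similarly named Node built-in is deprecated) and its documented encode/decode API:

    import punycode from "punycode/";

    const original = "mañana";
    const ascii = punycode.encode(original);          // "maana-pta": ASCII only
    console.log(punycode.decode(ascii) === original); // true: fully reversible
    console.log(ascii.length);                        // enforce your size limit on this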
BTW, you shouldn't store passwords in strings in the first place. Many programming languages have an alternative to hold secrets in memory safely.
saagarjha · 36m ago
This is generally a bad idea, even if you ignore the obvious overhead of doing so. At some point you are going to create a "real" string out of the thing you have, and it is not going to behave like you expect if you just blindly use the array's properties to compute things like length. Nor will those properties really have well-defined semantics unless you are careful about what the "characters" you're storing in strings are.
fsckboy · 1h ago
>length in characters and size in bytes
You change the word you use as if those words have inherent meanings that we can draw upon. They don't.
It would be clearer to write "length in characters and length in bytes".
[Linguistically speaking, words don't carry meanings; it is we who ascribe meaning to words. We use words to say what we want to say, but words don't limit us in what we can say.]
wild_egg · 5h ago
> validation steps are best performed in the frontend.
I'm really hoping we have very different definitions of "frontend"
wpollock · 5h ago
I meant the web server, not in the end user's browser! (So by backend, I meant the application and data layers.)
frizlab · 4h ago
Swift’s Character type represents an extended grapheme cluster, which is the correct thing to do.
jasonthorsness · 9h ago
Huh, apparently HTML input attributes like maxlength don't try anything fancy and just count UTF-16 code units, same as JavaScript strings (I guess it makes sense...). With the prevalence of emojis, this seems like it might not do the right thing.
https://html.spec.whatwg.org/multipage/input.html#attr-input...
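A quick illustration of the mismatch (plain TypeScript; the family emoji is a single grapheme cluster built from five code points):

    const family = "👩‍👩‍👧";        // one visible "character" (grapheme cluster)
    console.log(family.length);      // 8: UTF-16 code units, what maxlength counts
    console.log([...family].length); // 5: code points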
Which is a reasonable and clean solution - I love the simplicity of ASCII, like every programmer does.
Except ASCII is not enough to represent my language, or even my name. Unicode is complex, but I'm glad it's here. I'm old enough to remember the absolute nightmare that was multi-language support before Unicode and now the problem of encodings is... almost solved.
fsckboy · 3h ago
>ASCII is not enough to represent my language, or even my name.
Hebrew and Arabic don't include vowels. While you think that writing your language needs vowels, we can tell from the existence of Hebrew and Arabic that you are probably wrong. It would take some getting used to, but it's just like that "scramble the letters in the middle of words, you can still read" thing:
https://www.sciencealert.com/word-jumble-meme-first-last-let...
>Aocdrnig to a rscheearch at Cmabrigde Vinervtisy, it deosn't mttaer in waht oredr the Itteers in a wrod are, the olny iprmoetnt ting is taht the frist and Isat Itteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit porbelm. Tihs is bcusee the huamn mnid deos not raed ervey teter by istlef, but the wrod as a wlohe.
Your language, too, is redundant and could be modified to be simpler to write.
I'm not asking you to write your language with no vowels, I'm simply saying you could reduce to ASCII, get used to it, and civilization could move on. Stop clinging to the past, you are holding up the flying cars.
cjs_ac · 46m ago
Once again, I request that 95% of the world's population change the way it does almost everything, so that I can simplify my code.
Thank you for writing this comment; it's cleared up some self-esteem issues I've been having about whether I'm clever enough to start my own company.
saagarjha · 34m ago
It's hilarious that the guy telling people to go back to ASCII is the one saying "stop clinging to the past".
smrq · 58m ago
English itself lost some lovely letters because of the printing press (RIP, þ), so I suppose simplifying writing systems in the name of technological simplicity isn't unprecedented.
HeyImAlex · 8h ago
Thank you for writing this! It’s something I’ve always wanted a comprehensive guide on, now I have something to point to.
aidenn0 · 8h ago
This doesn't seem to cover truncation, but rather acceptance/rejection. If you are given something with "too many" codepoints but need to use it anyway, it seems like it would make sense to truncate it on a grapheme cluster boundary.
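A sketch of such truncation in TypeScript, assuming Intl.Segmenter is available:

    // Keep at most maxGraphemes whole grapheme clusters; never cut inside one.
    function truncateGraphemes(s: string, maxGraphemes: number): string {
      const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
      let out = "";
      let count = 0;
      for (const { segment } of seg.segment(s)) {
        if (++count > maxGraphemes) break;
        out += segment;
      }
      return out;
    }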
adam-p · 7h ago
I don't get into truncation much, but I do mention the risk of:
a) failing to truncate on a code point sequence boundary (a bug React Native iOS used to have)[1], and
b) failing to truncate on a grapheme cluster boundary (a bug React Native Android seems to still have)[2]
[1]: https://adam-p.ca/blog/2025/04/string-length/#utf-16-code-un...
[2]: https://adam-p.ca/blog/2025/04/string-length/#unicode-code-p...
> The byte size allowed would need to be about 100x the length limit. That’s… kind of a lot?
Would it need to be, though? ~10x ought to be enough for any realistic string that wasn't especially crafted to be annoying.
adam-p · 7h ago
Valid question, and I think you're right in the abstract and most of the time. But I also think you end up with a mismatch.
What's the concrete spec for the limit if you've only got 10x storage per grapheme cluster?
Probably you end up providing the limit in bytes. That's fine, but it's no longer the "hybrid counting" thing.
aidenn0 · 8h ago
They show a single Hindi character that is 15 bytes in UTF-8. That's enough over 10 that it would be believable that Hindi words could get uncomfortably close to the 10x limit.
chrismorgan · 24m ago
Triple conjuncts are very uncommon in Indic scripts, though there are a few in common use, like stri, a single-syllable word that means woman or wife in many languages. Pick your Indic script, and that’ll be LETTER SA, SIGN VIRAMA, LETTER TA, SIGN VIRAMA, LETTER RA, VOWEL SIGN I. Most Indic syllables/grapheme clusters are a single consonant and a single vowel sign, if not the inherent vowel -a. Conjuncts use their script’s SIGN VIRAMA to suppress the inherent vowel and normally join the next consonant graphically (an orthographic choice rarely broken, a little like ß being written as ss in German).
I’m not so confident about Hindi, though 25% seems very low if we’re talking frequency; but in Telugu writing it’s definitely a lot more than 25% of syllables that specify a vowel sign and thus take at least two Unicode scalar values to represent.
My feeling (as a white fellow moved to India, with well above average knowledge of Indian languages and Unicode for a place like HN, but not yet fluent in any Indian language) is that some four-bytes-per-code-point script might conceivably get realistic existing texts above an average of 10 bytes per syllable for at least twenty syllables, and that most Indic languages could sustain it indefinitely in specific deliberate styles of writing.
Retr0id · 7h ago
A single Hindi character, yes. But they also mention that only ~25% of Hindi characters use combining marks.
saagarjha · 30m ago
Most of them are vowels. They're pretty common. (Also, I feel like you of all people would understand the issues with "only 25% of the time this happens, therefore surprising behavior at the edges is unlikely to happen".)
o11c · 9h ago
Note that normalization involves rearranging combining characters of different combining classes:
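For example, a minimal TypeScript sketch using the built-in String.prototype.normalize:

    // U+0301 (acute, combining class 230) and U+0327 (cedilla, class 202)
    // are put into canonical order during normalization, so both typing
    // orders end up canonically equivalent.
    const a = "e\u0301\u0327"; // e + acute + cedilla
    const b = "e\u0327\u0301"; // e + cedilla + acute
    console.log(a.normalize("NFC") === b.normalize("NFC")); // true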
If a precombined character exists, the relevant accent will be pulled into the base regardless of where it is in the sequence. Note also that normalization can change the visual length (see below) under some circumstances.
The article is somewhat wrong when it says Unicode may "change character normalization rules"; new combining characters may be added (which affects the class sort above) but new precombined ones cannot.
---
There's one important notion of "length" that this doesn't cover: how wide is this on the screen?
For variable-width fonts, of course, this is very difficult. For monospace fonts, there are several steps for the least-bad answer (a simplified sketch follows below):
* Zeroth, if you have reason to believe a later stage has a limit on the number of combining characters or will normalize, do the normalization yourself if that won't ruin your other concerns. (TODO - since there are some precomposed characters with multiple accents, can this actually make things worse?)
* First, deal with whitespace. Do you collapse space? What forms of line separator do you accept? How far apart are tab stops?
* Second, deal with any nonprintable/control/format characters (including spaces you don't recognize), e.g. escaping them or replacing them by their printable form but adding the "inverted" attribute.
* Third, deal with any leading combining characters (leading meaning: immediately after a nonprintable or a line separator): treat them by synthesizing an NBSP (which is not a space), which has length 1. Likewise, synthesize missing Hangul fillers anywhere in the line.
* Now, iterate through the codepoints, checking their EastAsianWidth (note that you can usually have a table combining this lookup with the earlier stages): -1 for a control character, 0 for a combining character (unless dealing with a system that's too dumb to strip them), 1 or 2 for normal characters.
* Any codepoints that are Ambiguous or in one of the Private Use Areas should be counted both ways (you want to produce two separate counts). Any combining characters that are enclosing should be treated as ambiguous (unless the base was already wide). Likewise for the Korean Hangul LVT sequences, you should produce a range of lengths (since in practice, whether they will combine depends on whether the font includes that exact sequence).
* If you encounter any ZWJ sequences, regardless of whether or not they correspond to a known emoji, count them both ways (min length being the max of any single component, max length as counted all separately).
* Flag characters are evil, since they violate Unicode's random-access rule. Count them both as if they would render separately and if they would render as a flag.
* TODO what about Ideographic Description Characters?
* Finally, hard-code any exceptions you encounter in the wild, e.g. there are some Arabic codepoints that are really supposed to be more than 2 columns.
For the purpose of layout, you should mostly work based on the largest possible count. But if the smallest possible count is different, you need to use some sort of absolute positioning so you don't mess up the user's terminal.
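As a rough illustration of the counting core (the EastAsianWidth and ambiguous steps above), here is a heavily simplified TypeScript sketch; the character ranges are tiny illustrative subsets, not the real Unicode tables, and controls, ZWJ sequences, and flags are omitted:

    // Produce a min/max column count for a monospace display.
    function columnRange(s: string): { min: number; max: number } {
      let min = 0;
      let max = 0;
      for (const ch of s) {
        const cp = ch.codePointAt(0)!;
        if (cp >= 0x0300 && cp <= 0x036f) continue; // combining mark: width 0
        const wide =
          (cp >= 0x1100 && cp <= 0x115f) || // Hangul Jamo (subset)
          (cp >= 0x4e00 && cp <= 0x9fff);   // CJK Unified Ideographs (subset)
        const ambiguous = cp >= 0xe000 && cp <= 0xf8ff; // Private Use Area
        if (ambiguous) {
          min += 1; // narrow in some fonts/terminals...
          max += 2; // ...wide in others, so count both ways
        } else {
          const w = wide ? 2 : 1;
          min += w;
          max += w;
        }
      }
      return { min, max };
    }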
adam-p · 7h ago
> The article is somewhat wrong when it says Unicode may "change character normalization rules"; new combining characters may be added (which affects the class sort above) but new precombined ones cannot.
That's fair. I updated the wording in the post.
Thanks for the display info. It's cool and horrible and out of scope for my post.
DemocracyFTW2 · 30m ago
> * TODO what about Ideographic Description Characters?
I've never encountered them rendered other than with widths like any other CJK character, i.e. with (nominally) double width. There may be software that makes an effort to render IDSes (Ideographic Description Sequences) as existing or generated ideographs (or whatever you may call those), but I have yet to see it. There may, however, and IMO more likely, be situations where you want to grant the user an input of exactly one, or up to a certain number of, CJK characters, e.g. for the purpose of searching, and grant them the ability to use IDSes for unencoded or incompletely known characters. But in that case you're clearly leaving the boundaries of what is Unicode and entering into the grammar of your search engine's customized search strings. Meaning that you probably don't need to handle IDCs separately at all, other than treating them like any other fullwidth CJK codepoint.
bsder · 9h ago
TIL: In worst case, "20 UTF-8 bytes" == "1 Hindi character"
Going to have to remember that.
saagarjha · 29m ago
You can go way beyond that, although at some point I think it's unlikely that the character is something that is semantically valid.
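For instance (TypeScript, assuming Intl.Segmenter):

    // A base character plus any number of combining marks is still a single
    // grapheme cluster, so one "character" can be arbitrarily many bytes.
    const zalgo = "e" + "\u0301".repeat(50); // e + 50 combining acute accents
    const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
    console.log([...seg.segment(zalgo)].length);         // 1 grapheme
    console.log(new TextEncoder().encode(zalgo).length); // 101 UTF-8 bytes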
wavemode · 9h ago
In the age of Unicode (and modern computing in general), all of this is more headache than it's worth. What is actually important is that you limit the size of an HTTP request to your server (perhaps making some exceptions for file upload endpoints). As long as the user's form entries fit within that, let them do what they want.
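A minimal sketch of that cap with Express in TypeScript (the route path and the limits here are hypothetical):

    import express from "express";

    const app = express();
    // Cap ordinary JSON bodies at 100 KB...
    app.use(express.json({ limit: "100kb" }));
    // ...but allow a larger raw body on an upload endpoint.
    app.post("/upload", express.raw({ limit: "50mb", type: "*/*" }),
      (_req, res) => { res.sendStatus(204); });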
gcau · 6h ago
I don't think it's practical or useful to just say "limit the size of entire requests" and ignore all the real-world reasons you'd want to actually validate/check data before putting it in your database. The logic you're using is how we get bugs and security holes. This person's write-up gives specific and detailed information that's genuinely useful.
adam-p · 7h ago
If you can get away with that, that's great. But I feel like there are still plenty of cases where you want to limit the lengths of particular fields (and communicate to the user which lengths were exceeded).
jerf · 6h ago
I had this problem recently, logging email subjects into something that has a defined byte size limit. I went for iterating over graphemes and fitting as many complete graphemes into the byte budget as I could, then stopping. The idea is: don't show broken graphemes, and fit as much as I can.
This approach probably solves most programmer problems with length. However, if this has to be surfaced to an end user who is not intimately familiar with the nature of Unicode encodings, which is, you know, basically everybody, it may be difficult to explain what the limits actually mean in any sensible way. About all you can do is give vague hints about the input being nearly too long and avoid being precise enough for there to be a problem. There doesn't seem to me to be a perfect solution here; the intrinsic problem, that there is no easy way to explain the lengths of these things to end users and no reason to ever expect them to understand them, seems fundamental to me.
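A sketch of the grapheme-packing approach described above (TypeScript, assuming Intl.Segmenter and a UTF-8 byte limit):

    // Pack as many whole grapheme clusters as fit into maxBytes of UTF-8.
    function fitGraphemesInBytes(s: string, maxBytes: number): string {
      const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
      const encoder = new TextEncoder();
      let out = "";
      let used = 0;
      for (const { segment } of seg.segment(s)) {
        const size = encoder.encode(segment).length;
        if (used + size > maxBytes) break; // stop before breaking a grapheme
        out += segment;
        used += size;
      }
      return out;
    }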
adam-p · 11h ago
@dang Can the title be changed? It should be "The best – but not good – way to limit string length". Thanks.