In multiple courses, books, and jobs, I have seen text fields defined as VARCHAR(255) as kind of the default for "shortish" text. Is there any good reason that a length of 255 is chosen so often, other than being a nice round number? Is it a holdout from some time in the past when there was a good reason (whether or not it applies today)?
I realize, of course, that a tighter limit would be more ideal, if you somehow know the maximum length of the string. But if you are using VARCHAR(255) that probably indicates that you don't know the max length, only that it is a "shortish" string.
Note: I found this question (varchar(255) v tinyblob v tinytext), which says that VARCHAR(n) requires n+1 bytes of storage for n<=255, n+2 bytes of storage for n>255. Is this the only reason? That seems kind of arbitrary, since you would only be saving two bytes compared to VARCHAR(256), and you could just as easily save another two bytes by declaring it VARCHAR(253).
Historically, 255 characters has often been the maximum length of a VARCHAR in some DBMSes, and it sometimes still winds up being the effective maximum if you want to use UTF-8 and have the column indexed (because of index length limitations).
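To make the index-length point concrete, here is a minimal sketch assuming older MySQL/InnoDB defaults (a 767-byte index key limit) and the 3-byte utf8 character set; the table and index names are made up:

```sql
-- 255 utf8 characters is the largest fully indexable length under a
-- 767-byte key limit, because each utf8 character can take up to 3 bytes.
CREATE TABLE tags (
    name VARCHAR(255) CHARACTER SET utf8,
    UNIQUE KEY uk_name (name)     -- 255 * 3 = 765 bytes: fits
);

CREATE TABLE tags_too_long (
    name VARCHAR(256) CHARACTER SET utf8,
    UNIQUE KEY uk_name (name)     -- 256 * 3 = 768 bytes: fails with
                                  -- "Specified key was too long; max key length is 767 bytes"
);
```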
255 is used because it's the largest number of characters that can be counted with an 8-bit number. It makes full use of that 8-bit count without requiring another whole byte just to allow a few characters beyond 255.
When used this way, a VARCHAR stores the length of the string plus one byte of overhead, so you might as well set the limit to 255, unless you want a hard limit (like 50) on the number of characters in the field.
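If you do want that hard limit, here is a small sketch of what it buys you (MySQL syntax with strict SQL mode assumed; the table and column names are made up):

```sql
CREATE TABLE people (
    nickname VARCHAR(50)    -- hard limit on the field
);

INSERT INTO people VALUES (REPEAT('x', 60));
-- Fails with "Data too long for column 'nickname'" in strict SQL mode;
-- non-strict servers silently truncate the value to 50 characters instead.
```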
Probably because both SQL Server and Sybase (to name two I am familiar with) used to cap VARCHAR columns at 255 characters. For SQL Server, this changed with version 7 around 1998... but old habits sometimes die hard.
I'm going to answer the literal question: no, there isn't a good reason you see VARCHAR(255) used so often (there are indeed reasons, as discussed in the other answers, just not good ones). You won't find many examples of projects that have failed catastrophically because the architect chose VARCHAR(300) instead of VARCHAR(255). This would be an issue of near-total insignificance even if you were talking about CHAR instead of VARCHAR.
When you compute 2^8 you get 256, but computers count from 0, so the largest value an 8-bit number can hold is 255. You can see the same number in IP netmasks and in IP addresses themselves. 255 is the maximum value of an 8-bit unsigned integer: 11111111 in binary is 255. Does that help?
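As a quick sanity check, using MySQL's built-in CONV and POW functions:

```sql
SELECT CONV('11111111', 2, 10) AS eight_ones_in_decimal,  -- '255'
       POW(2, 8) - 1           AS max_unsigned_8bit;      -- 255
```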
Note: I found this question (varchar(255) v tinyblob v tinytext), which says that VARCHAR(n) requires n+1 bytes of storage for n<=255, n+2 bytes of storage for n>255. Is this the only reason? That seems kind of arbitrary, since you would only be saving two bytes compared to VARCHAR(256), and you could just as easily save another two bytes by declaring it VARCHAR(253).
No, you don't save two bytes by declaring VARCHAR(253).
The implementation of a varchar is most likely a length counter plus a variable-length, non-terminated array. This means that if you store "hello" in a VARCHAR(255), it occupies 6 bytes: one byte for the length (the number 5) and 5 bytes for the five letters. Declaring 253 instead of 255 saves nothing, because the one-byte counter is needed either way; the second counter byte only appears once the declared maximum goes past 255.
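A small sketch of that boundary (MySQL syntax; a single-byte character set is assumed so the 255-byte threshold lines up with 255 characters, and the table name is made up):

```sql
CREATE TABLE prefix_demo (
    v255 VARCHAR(255),   -- 1-byte length prefix: 'hello' takes 1 + 5 = 6 bytes
    v256 VARCHAR(256)    -- 2-byte length prefix: 'hello' takes 2 + 5 = 7 bytes
) CHARACTER SET latin1;

INSERT INTO prefix_demo VALUES ('hello', 'hello');

-- The prefix lives in the row format, not in the value itself:
SELECT CHAR_LENGTH(v255) AS chars, LENGTH(v255) AS data_bytes
FROM prefix_demo;        -- 5, 5
```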
An unsigned 1-byte number can hold the range [0, 255] inclusive. So when you see 255, it is mostly because programmers think in base 10 (get the joke?) :)
Actually, for a while, 255 was the largest size you could give a VARCHAR in MySQL, and there are advantages to using VARCHAR over TEXT when it comes to indexing and other issues.
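One of those indexing differences, sketched in MySQL syntax (the table and index names are illustrative):

```sql
CREATE TABLE notes (
    title VARCHAR(255),
    body  TEXT,
    INDEX idx_title (title),       -- a VARCHAR column can be indexed in full
    INDEX idx_body  (body(100))    -- a TEXT column needs an explicit key prefix length
);
```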
In many applications, like MS Office (until version 2000 or 2002), the maximum number of characters per cell was 255. Moving data between programs capable of handling more than 255 characters per field and those applications was a nightmare. These days, that limit is less and less of a hindrance.