n[var]char stores Unicode while [var]char just stores single-byte characters.
[n]char requires a fixed number of characters of the exact length while [n]varchar accepts a variable number of characters up to and including the defined length.
Another difference is length. Both nchar and nvarchar can be up to 4,000 characters long, while char and varchar can be up to 8,000 characters long. But in SQL Server you can also use [n]varchar(max), whose maximum storage size is 2^31-1 bytes (2,147,483,647 bytes, about 2 GB); that works out to roughly two billion characters for varchar(max) and roughly one billion for nvarchar(max).
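For anyone who wants to see those limits concretely, here is a minimal T-SQL sketch (the variable names are made up), using local variables because a single table row could not hold all of these fixed-length columns at once:

DECLARE @FixedUnicode nchar(4000);     -- fixed length, up to 4,000 characters
DECLARE @VarUnicode   nvarchar(4000);  -- variable length, up to 4,000 characters
DECLARE @FixedAnsi    char(8000);      -- fixed length, up to 8,000 characters
DECLARE @VarAnsi      varchar(8000);   -- variable length, up to 8,000 characters
DECLARE @Huge         nvarchar(max);   -- up to 2^31-1 bytes (about 2 GB) of storage

SET @VarUnicode = N'abc';
SELECT LEN(@VarUnicode)        AS Characters,  -- 3
       DATALENGTH(@VarUnicode) AS Bytes;       -- 6 (two bytes per character)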
Just to add something more:
nchar - adds trailing spaces to the data.
nvarchar - does not add trailing spaces to the data.
So, if you are going to filter your dataset by an 'nchar' field, you may want to use RTRIM to remove the spaces.
E.g.
nchar(10) field called BRAND stores the word NIKE.
It adds 6 spaces to the right of the word.
So, when filtering, the expression should read:
RTRIM(Fields!BRAND.Value) = "NIKE"
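The same idea in plain T-SQL, as a rough sketch (the variable name is made up):

DECLARE @Brand nchar(10) = N'NIKE';            -- stored as 'NIKE' plus 6 trailing spaces

SELECT DATALENGTH(@Brand)        AS Bytes,     -- 20: always 10 characters * 2 bytes
       '[' + @Brand + ']'        AS Padded,    -- [NIKE      ]
       '[' + RTRIM(@Brand) + ']' AS Trimmed;   -- [NIKE]

Note that a plain T-SQL equality test such as @Brand = N'NIKE' already ignores trailing spaces, so the RTRIM mainly matters in report expressions like the one above and in client code that compares strings exactly.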
Hope this helps someone out there because I was struggling with it for a bit just now!
nchar and char pretty much operate in exactly the same way as each other, as do nvarchar and varchar. The only difference between them is that nchar/nvarchar store Unicode characters (essential if you require the use of extended character sets) whilst char/varchar do not.
Because Unicode characters require more storage, nchar/nvarchar fields take up twice as much space per character (so, for example, in earlier versions of SQL Server the maximum length of an nvarchar field is 4,000, versus 8,000 for varchar).
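A quick sketch of the doubled storage (the variable names are made up):

DECLARE @Ansi    varchar(50)  = 'hello';
DECLARE @Unicode nvarchar(50) = N'hello';

SELECT DATALENGTH(@Ansi)    AS VarcharBytes,   -- 5
       DATALENGTH(@Unicode) AS NvarcharBytes;  -- 10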
My attempt to summarize and correct the existing answers:
First, char and nchar will always use a fixed amount of storage space, even when the string to be stored is smaller than the available space, whereas varchar and nvarchar will use only as much storage space as is needed to store that string (plus two bytes of overhead, presumably to store the string length). So remember, "var" means "variable", as in variable space.
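A small sketch of that fixed vs. variable behaviour (the variable names are made up); note that DATALENGTH does not include the couple of bytes of length overhead, which live in the row itself:

DECLARE @Fixed    char(10)    = 'abc';
DECLARE @Variable varchar(10) = 'abc';

SELECT DATALENGTH(@Fixed)    AS CharBytes,     -- 10: padded out to the declared length
       DATALENGTH(@Variable) AS VarcharBytes;  -- 3: only what the string needs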
The second major point to understand is that nchar and nvarchar store strings using exactly two bytes per character, whereas char and varchar use an encoding determined by the collation code page, which will usually be exactly one byte per character (though there are exceptions, see below). By using two bytes per character, a very wide range of characters can be stored, so the basic thing to remember here is that nchar and nvarchar tend to be a much better choice when you want internationalization support, which you probably do.
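To see why that matters, here is a sketch (the variable names are made up); the exact varchar result depends on your database's default collation, but under a typical Latin1 collation the Greek letters are lost:

DECLARE @Ansi    varchar(20)  = N'Ωμέγα';   -- converted to the collation's code page
DECLARE @Unicode nvarchar(20) = N'Ωμέγα';

SELECT @Ansi    AS VarcharValue,    -- likely '?????' under a Latin1 collation
       @Unicode AS NvarcharValue;   -- 'Ωμέγα', preserved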
Now for some finer points.
First, nchar and nvarchar columns always store data using UCS-2. This means that exactly two bytes per character will be used, and any Unicode character in the Basic Multilingual Plane (BMP) can be stored by an nchar or nvarchar field. However, characters outside the BMP are not handled as real characters. For example, according to Wikipedia, the code points for Egyptian hieroglyphs fall outside of the BMP. There are, therefore, Unicode strings that can be represented in UTF-8 and other true Unicode encodings but that SQL Server will only see as pairs of surrogate code units rather than as single characters, and strings written in Egyptian hieroglyphs would be among them. Fortunately your users probably don't write in that script, but it's something to keep in mind!
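A sketch of that point, using an emoji since it also lies outside the BMP:

DECLARE @Glyph nvarchar(10) = N'😀';     -- a code point outside the BMP

SELECT DATALENGTH(@Glyph) AS Bytes,      -- 4: stored as a UTF-16 surrogate pair
       LEN(@Glyph)        AS Characters; -- 2 under classic (non-_SC) collations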
Another confusing but interesting point that other posters have highlighted is that char and varchar fields may use two bytes per character for certain characters if the collation code page requires it. (Martin Smith gives an excellent example in which he shows how Chinese_Traditional_Stroke_Order_100_CS_AS_KS_WS exhibits this behavior. Check it out.)
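As a rough sketch of what that looks like (the table name is made up; the collation is the one referenced above):

CREATE TABLE dbo.CodePageDemo (
    Val varchar(10) COLLATE Chinese_Traditional_Stroke_Order_100_CS_AS_KS_WS
);
INSERT INTO dbo.CodePageDemo (Val) VALUES (N'中');

SELECT Val, DATALENGTH(Val) AS Bytes   -- 2: a double-byte code-page character in a varchar column
FROM dbo.CodePageDemo;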
UPDATE: As of SQL Server 2012, there are finally supplementary character (_SC) collations, for example Latin1_General_100_CI_AS_SC, with which nchar/nvarchar data is treated as UTF-16 and can truly cover the entire Unicode range.
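A sketch of opting in to one of those collations (the table name is made up); with an _SC collation the built-in string functions treat supplementary characters as single characters:

CREATE TABLE dbo.Glyphs (
    Glyph nvarchar(10) COLLATE Latin1_General_100_CI_AS_SC
);
INSERT INTO dbo.Glyphs (Glyph) VALUES (N'😀');

SELECT Glyph,
       LEN(Glyph)        AS Characters,  -- 1 under the _SC collation
       DATALENGTH(Glyph) AS Bytes        -- 4: still a surrogate pair on disk
FROM dbo.Glyphs;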
nchar [(n)]
(national character) n defines the string length and must be a value from 1 through 4,000. The storage size is two times n bytes.

nvarchar [(n | max)]
(national character varying) n defines the string length and can be a value from 1 through 4,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB).

char [(n)]
(character) Non-Unicode string data. n defines the string length and must be a value from 1 through 8,000. The storage size is n bytes.

varchar [(n | max)]
(character varying) n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB).

The differences are:
nchar is fixed-length and can hold Unicode characters. It uses two bytes of storage per character.
varchar is variable-length and cannot hold Unicode characters. It uses one byte of storage per character.
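A sketch contrasting those two lines (the variable names are made up):

DECLARE @n nchar(10)   = N'abc';   -- fixed length, Unicode
DECLARE @v varchar(10) = 'abc';    -- variable length, non-Unicode

SELECT DATALENGTH(@n) AS NcharBytes,    -- 20: padded to 10 characters, 2 bytes each
       DATALENGTH(@v) AS VarcharBytes;  -- 3: one byte per character, no padding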