Does the database (maximum) field length affect performance?

Published 2019-02-17 09:08

Question:

At my company, we have a legacy database with various tables and therefore many, many fields.

A lot of the fields seem to have large limits (ex: NVARCHAR(MAX)) that are never reached.

Does arbitrarily declaring fields at their maximum width, or 2 to 3 times larger than what is normally entered, negatively affect performance?

How should one balance performance with field lengths? Is there a balance?

Answer 1:

There are two parts to this question:

Does using NVARCHAR over VARCHAR hurt performance? Yes. Storing data in Unicode fields doubles the storage requirements: the data in those fields is twice the size it needs to be (at least until SQL Server 2008 R2, which introduced Unicode compression). Your table scans take twice as long, and only half as much data fits in memory in the buffer cache.
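As a quick illustration of the doubled storage (the variable names are just examples), `DATALENGTH` reports the byte size of a value:

```sql
-- NVARCHAR (Unicode) uses 2 bytes per character; VARCHAR uses 1
DECLARE @ascii VARCHAR(50)  = 'hello',
        @uni   NVARCHAR(50) = N'hello';

SELECT DATALENGTH(@ascii) AS varchar_bytes,   -- 5 bytes
       DATALENGTH(@uni)   AS nvarchar_bytes;  -- 10 bytes
```

The same doubling applies to every row on disk and in the buffer cache.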

Does using MAX hurt performance? Not directly, but if you use VARCHAR(MAX), NVARCHAR(MAX), and similar fields, and you need to index the table, you won't be able to rebuild those indexes online in SQL Server 2005/2008/2008 R2. (Denali, the release that became SQL Server 2012, brings some improvements around tables with MAX columns so that some of their indexes can be rebuilt online.)
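A sketch of the limitation (the table and column names are hypothetical): on the versions mentioned above, an online rebuild fails when the index contains a MAX column.

```sql
-- Hypothetical table whose clustered index includes a LOB column
CREATE TABLE dbo.Documents (
    Id   INT IDENTITY PRIMARY KEY,
    Body NVARCHAR(MAX)
);

-- On SQL Server 2005/2008/2008 R2 this raises an error because the
-- clustered index covers an NVARCHAR(MAX) column; you would have to
-- rebuild with ONLINE = OFF, which takes blocking locks:
ALTER INDEX ALL ON dbo.Documents REBUILD WITH (ONLINE = ON);
```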



Answer 2:

Yes. The query optimizer estimates how many rows fit in a page based on the declared column widths; if you have a lot of varchar fields that are declared much larger than necessary, SQL Server can internally estimate the wrong number of rows for your queries.
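For example (the table and column are hypothetical), a sort on an oversized column forces the engine to size its memory grant from the declared width rather than the actual data:

```sql
-- Hypothetical table: Notes is declared far wider than the data it holds
CREATE TABLE dbo.Tickets (
    Id    INT PRIMARY KEY,
    Notes NVARCHAR(4000)   -- actual values are rarely over 100 characters
);

-- The sort below reserves memory assuming each Notes value averages
-- about half its declared width (~4,000 bytes per row), so the memory
-- grant is far larger than the real data requires:
SELECT Id, Notes
FROM dbo.Tickets
ORDER BY Notes;
```

Right-sizing the declared width keeps these estimates, and the memory grants that follow from them, closer to reality.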