Why shouldn't I use “Hungarian Notation”?

Posted 2018-12-31 09:18

I know what Hungarian refers to - giving information about a variable, parameter, or type as a prefix to its name. Everyone seems to be rabidly against it, even though in some cases it seems to be a good idea. If I feel that useful information is being imparted, why shouldn't I put it right there where it's available?

See also: Do people use the Hungarian naming conventions in the real world?

30 answers
明月照影归
Answer 2 · 2018-12-31 09:57

Joel's article is great, but it seems to omit one major point:

Hungarian makes a particular 'idea' (kind + identifier name) unique, or near-unique, across the codebase - even a very large codebase.

That's huge for code maintenance. It means you can use good ol' single-line text search (grep, findstr, 'find in all files') to find EVERY mention of that 'idea'.

Why is that important when we have IDEs that know how to read code? Because they're not very good at it yet. This is hard to see in a small codebase, but obvious in a large one, where the 'idea' might be mentioned in comments, XML files, Perl scripts, and also in places outside source control (documents, wikis, bug databases).

You do have to be a little careful even here - e.g. token-pasting in C/C++ macros can hide mentions of the identifier. Such cases can be dealt with using coding conventions, and anyway they tend to affect only a minority of the identifiers in the codebase.

P.S. To the point about using the type system vs. Hungarian - it's best to use both. You only need wrong code to look wrong if the compiler won't catch it for you. There are plenty of cases where it is infeasible to make the compiler catch it. But where it's feasible - yes, please do that instead!

When considering feasibility, though, do consider the negative effects of splitting up types. e.g. in C#, wrapping 'int' with a non-built-in type has huge consequences. So it makes sense in some situations, but not in all of them.
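The "use the type system where feasible" point can be sketched in Python. This is a hypothetical example (the `RowIndex`/`ColIndex` names are invented for illustration): `typing.NewType` wraps a plain `int` in a distinct static type at zero runtime cost, so a checker like mypy catches swapped arguments that Hungarian prefixes would only make visible to a human reader.

```python
from typing import NewType

# Hypothetical "kinds" of integer: a row index vs. a column index.
# Both are plain ints at runtime, but a static checker (e.g. mypy)
# treats them as distinct, incompatible types.
RowIndex = NewType("RowIndex", int)
ColIndex = NewType("ColIndex", int)

def cell_value(grid: list[list[int]], row: RowIndex, col: ColIndex) -> int:
    # Indexing works because RowIndex/ColIndex are ints at runtime.
    return grid[row][col]

grid = [[1, 2], [3, 4]]
r = RowIndex(1)
c = ColIndex(0)

print(cell_value(grid, r, c))        # grid[1][0] -> 3
# cell_value(grid, c, r)             # mypy error: arguments swapped
```

Note the trade-off the answer mentions: unlike a bare `int`, the wrapped values must be constructed explicitly, which is exactly the friction that makes wrapping infeasible in some situations.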

泛滥B
Answer 3 · 2018-12-31 09:59

Most people use Hungarian notation the wrong way and get the wrong results.

Read this excellent article by Joel Spolsky: Making Wrong Code Look Wrong.

In short, Hungarian Notation where you prefix your variable names with their machine type (string, int, and so on), so-called Systems Hungarian, is bad because it's useless.

Hungarian Notation as its author intended it, where you prefix the variable name with its kind (using Joel's example: safe string or unsafe string), so-called Apps Hungarian, has its uses and is still valuable.
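The safe/unsafe distinction can be sketched in Python, using the `us`/`s` prefixes from Joel's article (the variable and function names here are illustrative, not from the article's code):

```python
import html

# Apps Hungarian sketch: "us_" marks an unsafe (raw user) string,
# "s_" marks a safe (HTML-encoded) one. The names, not the types,
# carry the kind, so a line like page.append(us_comment) looks wrong.
def encode(us_text: str) -> str:
    # Encoding is the only way to go from "us" to "s".
    return html.escape(us_text)

us_comment = "<script>alert('xss')</script>"
s_comment = encode(us_comment)

page = []
page.append(s_comment)        # reads correctly: a safe string is written
# page.append(us_comment)     # reads wrong: a bare "us_" being written
print(page[0])
```

The convention does nothing at runtime; its entire value is that the incorrect line is visibly incorrect to a reviewer.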

刘海飞了
Answer 4 · 2018-12-31 09:59

In the words of the master:

http://www.joelonsoftware.com/articles/Wrong.html

An interesting read, as usual.

Extracts:

"Somebody, somewhere, read Simonyi’s paper, where he used the word “type,” and thought he meant type, like class, like in a type system, like the type checking that the compiler does. He did not. He explained very carefully exactly what he meant by the word “type,” but it didn’t help. The damage was done."

"But there’s still a tremendous amount of value to Apps Hungarian, in that it increases collocation in code, which makes the code easier to read, write, debug, and maintain, and, most importantly, it makes wrong code look wrong."

Make sure you have some time before reading Joel On Software. :)

不再属于我。
Answer 5 · 2018-12-31 10:00

Hungarian notation only makes sense in languages without user-defined types. In a modern functional or OO-language, you would encode information about the "kind" of value into the datatype or class rather than into the variable name.

Several answers reference Joel's article. Note, however, that his example is in VBScript, which didn't support user-defined classes (for a long time at least). In a language with user-defined types, you would solve the same problem by creating an HtmlEncodedString type and letting the Write method accept only that. In a statically typed language the compiler will catch any encoding errors; in a dynamically typed one you would get a runtime exception. Either way you are protected against writing unencoded strings. Hungarian notation just turns the programmer into a human type-checker, which is the kind of job that is typically better handled by software.

Joel distinguishes between "systems hungarian" and "apps hungarian": "systems hungarian" encodes the built-in types like int, float and so on, while "apps hungarian" encodes "kinds", which is higher-level meta-information about the variable beyond the machine type. In an OO or modern functional language you can create user-defined types, so there is no distinction between type and "kind" in this sense; both can be represented by the type system, and "apps" hungarian is just as redundant as "systems" hungarian.

So to answer your question: Systems Hungarian would only be useful in an unsafe, weakly typed language where e.g. assigning a float value to an int variable will crash the system. Hungarian notation was specifically invented in the sixties for use in BCPL, a pretty low-level language which didn't do any type checking at all. I don't think any language in general use today has this problem, but the notation lived on as a kind of cargo cult programming.

Apps Hungarian makes sense if you are working with a language without user-defined types, like legacy VBScript or early versions of VB. Perhaps also early versions of Perl and PHP. Again, using it in a modern language is pure cargo cult.

In any other language, Hungarian is just ugly, redundant and fragile. It repeats information already known from the type system, and you should not repeat yourself. Use a descriptive name for the variable that describes the intent of this specific instance of the type. Use the type system to encode invariants and meta-information about "kinds" or "classes" of variables, i.e. types.

The general point of Joel's article, having wrong code look wrong, is a very good principle. However, an even better protection against bugs is, when at all possible, to have wrong code detected automatically by the compiler.

皆成旧梦
Answer 6 · 2018-12-31 10:03

In my experience, it is bad because:

1 - you break all the code if you need to change the type of a variable (e.g. if you need to extend a 32-bit integer to a 64-bit integer);

2 - it is useless information, as the type is either already in the declaration, or you use a dynamic language where the actual type should not be so important in the first place.

Moreover, with a language supporting generic programming (i.e. functions where the type of some variables is not determined when you write the function) or with a dynamic typing system (i.e. where the type is not even determined at compile time), how would you name your variables? And most modern languages support one or the other, even if in a restricted form.
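The generics point can be made concrete with a small Python sketch (the function is a made-up example): when the element type is a type variable, there is simply no type to encode in the name.

```python
from typing import TypeVar

T = TypeVar("T")

# In a generic function the element type is unknown at the time the
# function is written, so a Systems-Hungarian prefix cannot be chosen:
# should the parameter be called iItems, sItems, or something else?
def first(items: list[T]) -> T:
    return items[0]

print(first([10, 20]))      # works for ints...
print(first(["a", "b"]))    # ...and for strings, with one neutral name
```

One descriptive, type-neutral name covers every instantiation, which is exactly what Hungarian prefixes cannot do.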

人气声优
Answer 7 · 2018-12-31 10:04

Isn't scope more important than type these days, e.g.

* l for local
* a for argument
* m for member
* g for global
* etc

With modern refactoring techniques, search-and-replace on a symbol because you changed its type is tedious. The compiler will catch type changes, but it often will not catch incorrect use of scope; sensible naming conventions help there.
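A sketch of scope prefixes rather than type prefixes, using one hypothetical convention (`m_` member, `a_` argument, `l_` local; the class and names here are invented for illustration):

```python
# Scope-Hungarian sketch: the prefix says where a name lives, not
# what type it is, so it survives any change of the underlying type.
class Account:
    def __init__(self, a_rate: float) -> None:
        self.m_rate = a_rate                  # member, set from argument

    def interest(self, a_balance: float) -> float:
        l_amount = a_balance * self.m_rate    # local intermediate result
        return l_amount

acct = Account(0.05)
print(acct.interest(100.0))   # 100.0 * 0.05 -> 5.0
```

A line like `self.a_rate = ...` would look wrong on sight, which is the scope-based analogue of Joel's "wrong code looks wrong".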
