I have been re-factoring someone else's JavaScript code.
BEFORE:
function SomeObj(flag) {
    var _private = true;
    this.flag = (flag) ? true : false;
    this.version = "1.1 (prototype)";
    if (!this._someProperty) this._init();
    // leading underscore hints at what should be a 'private' to me
    this.reset(); // assumes reset has been added...
}
SomeObj.prototype.reset = function () {
    /* perform some actions */
};
/* UPDATE */
SomeObj.prototype.getPrivate = function () {
    return _private; // won't see the constructor's _private; throws a ReferenceError
};
/* ...several other functions appended via `prototype`... */
/* ...several other functions appended via `prototype`...*/
AFTER:
var SomeObj = function (flag) {
    var _private = true;
    this.flag = (flag) ? true : false;
    this.version = "2.0 (constructor)";
    this.reset = function () {
        /* perform some actions */
    };
    /* UPDATE */
    this.getPrivate = function () {
        return _private; // will return true
    };
    /* other functions and function calls here */
};
For me the first example looks difficult to read, especially in a larger context. Adding methods like `reset` on like this, via the `prototype` property, seems much less controlled, as it can presumably happen anywhere in the script. My refactored code (the second example above) looks much neater to me and is therefore easier to read because it's self-contained. I've gained some privacy with the variable declarations, but I've lost the possibilities of the prototype chain.
...
QUESTIONS:
Firstly, I'm interested to know what else I have lost by foregoing `prototype`, or if there are larger implications to the loss of the prototype chain. This article is 6 years old but claims that using the `prototype` property is much more efficient on a large scale than closure patterns.

Both the examples above would still be instantiated with the `new` operator; they are both 'classical'-ish constructors. Eventually I'd even like to move away from this into a model where all the properties and functions are declared as `var`s and I expose one method that is capable of returning an object opening up all the properties and methods I need, which have privileged access (by virtue of closure) to those that are private. Something like this:

var SomeObj = (function () {
    /* all the stuff mentioned above, declared as 'private' `var`s */
    /* UPDATE */
    var getPrivate = function () {
        return _private;
    };
    var expose = function (flag) {
        // just returns `flag` for now
        // but could expose other properties
        return {
            flag: flag || false, // flag from argument, or default value
            getPrivate: getPrivate
        };
    };
    return {
        expose: expose
    };
})(); // IIFE

// instead of having to write `var whatever = new SomeObj(true);` use...
var whatever = SomeObj.expose();
There are a few answers on StackOverflow addressing the 'prototype vs. closure' question (here and here, for example). But, as with the `prototype` property, I'm interested in what a move towards this and away from the `new` operator means for the efficiency of my code and for any loss of possibility (e.g. `instanceof` is lost). If I'm not going to be using prototypal inheritance anyway, do I actually lose anything in foregoing the `new` operator?

A looser question, if I'm permitted, given that I'm asking for specifics above: if `prototype` and `new` really are the most efficient way to go, with more advantages (whatever you think they might be) than closure, are there any guidelines or design patterns for writing them in a neater fashion?
...
UPDATE:
Note that `expose` returns a new object each time, so this is where the instantiation happens. As I understand this, where that object refers to methods declared in the `SomeObj` closure, they are the same methods across all objects (unless overwritten). In the case of the `flag` variable (which I've now corrected), this can be inherited from the argument of `expose`, have a default value, or again refer back to an encapsulated pre-existing method or property. So there are instances of objects being produced, and there is some inheritance (plus polymorphism?) going on here.
So to repeat question 2: if I'm not going to be using prototypal inheritance anyway, do I actually lose anything in foregoing the `new` operator?
Many thanks for answers so far, which have helped to clarify my question.
Answers:
You already answered this question: you lose the prototype chain. (Actually you don't lose it, but your prototype will always be empty.) The consequences are:
There is a small performance/memory impact, because methods are created for each instance. But it depends a lot on the JavaScript engine, and you should worry about it only if you need to create a large number of objects.
You can't monkey patch instances by modifying the prototype. Not a big issue either, since doing that leads to a maintenance nightmare.
Let me make a small pedantic correction to your question: it's not a matter of "prototype vs. closure"; in fact, the concept of a closure is orthogonal to a prototype-based language.
The question is really about how you are going to create objects: define a new object from scratch each time, or clone it from a prototype.
The example that you show of using functions to limit scope is a common practice in JavaScript, and you can keep doing that even if you decide to use prototypes. For example:
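A sketch of that combination (the enclosed `defaultFlag` variable and the body of `reset` are illustrative, not code from the original answer):

var SomeObj = (function () {
    // private to this scope, shared by every instance
    var defaultFlag = false;

    function SomeObj(flag) {
        this.flag = (flag !== undefined) ? !!flag : defaultFlag;
    }

    // shared methods still live on the prototype...
    SomeObj.prototype.reset = function () {
        this.flag = defaultFlag; // ...and can still see the enclosed variable
    };

    return SomeObj;
})();

var obj = new SomeObj(true); // all instances share one copy of reset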
If you are worried about modularization, take a look at require.js, which is an implementation of a technique called AMD (asynchronous module definition). Some people don't like AMD and some people love it. My experience with it was positive: it helped me a lot to create a modular JavaScript app for the browser.
There are some libraries that make your life with prototypes easier: composejs, dejavu, and my own barman (yes, it's shameless self-promotion, but you can look into the source code to see ways of dealing with object definitions).
About patterns: since you can easily hide object instantiation behind factory methods, you can still use `new` to clone a prototype internally.

I'm sure someone can provide an answer, but I'll at least give it a shot. There are at least two reasons to use `prototype`:

`prototype` methods can be used statically
Creating a method as an object member means that it is created for every instance of the object. That's more memory per object, and it slows down object creation (hence the efficiency point you raise).

People tend to say that `prototype` methods are like class methods whereas member methods are like object methods, but this is very misleading, since methods in the prototype chain can still use the object instance.

You can define the prototype as an object itself, so you may like the syntax better (but it's not all that different):
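Something along these lines (a sketch only; the method names are illustrative):

function SomeObj(flag) {
    this.flag = (flag) ? true : false;
}

SomeObj.prototype = {
    constructor: SomeObj, // restore the reference the literal just overwrote
    reset: function () {
        this.flag = false;
    },
    getFlag: function () {
        return this.flag;
    }
};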
Your argument that it seems less controlled is valid to me. I get that it is weird to have two blocks involved in creating an object. However, it's a bit spurious in that there is nothing stopping someone from overwriting the prototype of your other object anyway.
Foregoing `new`

If you're only going to be creating specific object instances on the fly via `{}` notation, it's not really different from using `new` anyway. You would need to use `new` to create multiple instances of the same object from a class (`function`) definition. This is not unusual, as it applies to any object-oriented programming language, and it has to do with reuse.

For your current application, this may work great. However, if you came up with some awesome plugin that was reusable across contexts, it could get annoying to have to rewrite it a lot. I think that you are looking for something like require.js, which allows you to define "modules" that you can import with the `require` function. You define a module within a `define` function closure, so you get to keep the constructor and prototype definitions wrapped together anyway, and no one else can touch them until they've imported that module.
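A minimal sketch of what that looks like with require.js (the module name `someObj` and the file layout are assumptions for illustration):

// someObj.js
define(function () {
    function SomeObj(flag) {
        this.flag = (flag) ? true : false;
    }
    SomeObj.prototype.reset = function () {
        /* perform some actions */
    };
    return SomeObj; // only consumers who require the module get the constructor
});

// elsewhere:
require(['someObj'], function (SomeObj) {
    var whatever = new SomeObj(true);
});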
Advantages of closure vs. `prototype`
They are not mutually exclusive:
http://jsfiddle.net/ExplosionPIlls/HPjV7/1/
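Roughly, a constructor can keep per-instance private state in a closure while shared behaviour still lives on the prototype. A sketch of the idea (my own illustration, not the code from the fiddle):

function SomeObj(flag) {
    var _private = true; // per-instance, visible only to privileged methods

    this.flag = (flag) ? true : false;

    // privileged method: one copy per instance, closes over _private
    this.getPrivate = function () {
        return _private;
    };
}

// shared method: one copy for all instances, no access to _private
SomeObj.prototype.reset = function () {
    this.flag = false;
};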
Question by question:
Basically, having the `reset` method in the `prototype` means that all instances of your constructor will share the exact same copy of the method. By creating a local method inside the constructor, you'll have one copy of the method per instance, which will consume more memory (this may become a problem if you have a lot of instances). Other than that, both versions are identical; changing `function SomeObj` to `var SomeObj = function` only differs in how `SomeObj` is hoisted in its parent scope. You said you "gained some privacy with the variable declarations", but I didn't see any private variables there...

With the IIFE approach you mentioned, you'll lose the ability to check `instance instanceof SomeObj`.
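To make that last point concrete (a sketch; the names are illustrative, not from the original answer):

var ConstructorVersion = function (flag) {
    this.flag = flag || false;
};
var a = new ConstructorVersion(true);
a instanceof ConstructorVersion; // true

var IIFEVersion = (function () {
    var expose = function (flag) {
        return { flag: flag || false };
    };
    return { expose: expose };
})();
var b = IIFEVersion.expose(true);
// b is a plain object literal; there is no constructor to test it against
b instanceof Object; // true, but that's all you can say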
Not sure if this answers your question, but there is also `Object.create`, where you can still set the prototype but get rid of the `new` keyword. You lose the ability to have constructors, though.
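For example (a sketch of the `Object.create` approach; the `proto` object and the `makeSomeObj` factory are names made up for illustration):

var proto = {
    reset: function () {
        this.flag = false;
    }
};

// a factory function instead of a constructor and `new`
function makeSomeObj(flag) {
    var obj = Object.create(proto); // obj inherits reset via the prototype chain
    obj.flag = flag || false;
    return obj;
}

var whatever = makeSomeObj(true);
whatever.reset();
proto.isPrototypeOf(whatever); // true; `instanceof` needs a constructor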
In my experience, the only thing you lose by not using `.prototype` is memory - each object ends up owning its own copy of the function objects defined therein. If you only intend instantiating "small" numbers of objects, this is not likely to be a big problem.
Regarding your specific questions:
The second comment on that linked article is highly relevant. The author's benchmark is wrong - it's testing the overhead of running a constructor that also declares four inner functions. It's not testing the subsequent performance of those functions.
Your "closure and expose" code sample is not OO, it's just a namespace with some enclosed private variables. Since it doesn't use
new
it's no use if you ever hope to instantiate objects from it.I can't answer this - "it depends" is as good an answer as you can get for this.