I was wondering: why is Nullable<T> a value type, if it is designed to mimic the behavior of reference types? I understand things like GC pressure, but I don't feel convinced - if we want to have an int acting like a reference, we are probably OK with all the consequences of having a real reference type. I can see no reason why Nullable<T> is not just a boxed version of the T struct.
As a value type:
- it still needs to be boxed and unboxed, and moreover, the boxing must be a bit different than with "normal" structs (to treat a null-valued nullable like a real null)
- it needs to be treated differently when checking for null (done simply in Equals, no real problem)
- it is mutable, breaking the rule that structs should be immutable (OK, it is logically immutable)
- it needs a special restriction to disallow recursion like Nullable<Nullable<T>>

Doesn't making Nullable<T> a reference type solve those issues?
Rephrased and updated:

I've modified my reason list a bit, but my general question is still open: how would a reference-type Nullable<T> be worse than the current value-type implementation? Is it only GC pressure and the "small, immutable" rule? It still feels strange to me...
The reason is that it was not designed to act like a reference type. It was designed to act like a value type, except in just one particular. Let's look at some ways value types and reference types differ.
The main difference between a value type and a reference type is that a value type is self-contained (the variable contains the actual value), while a reference type refers to another value.
Some other differences are entailed by this. The fact that we can alias reference types directly (which has both good and bad effects) comes from this. So too do differences in what equality means:
A value type has a concept of equality based on the value contained, which can optionally be redefined (there are logical restrictions on how this redefinition can happen*). A reference type has a concept of identity that is meaningless with value types (as they cannot be directly aliased, two such values cannot be identical) and that cannot be redefined; this also gives the default for its concept of equality. By default, == deals with this value-based equality when it comes to value types†, but with identity when it comes to reference types. Also, even when a reference type is given a value-based concept of equality, and has it used for ==, it never loses the ability to be compared to another reference for identity.
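For instance (a small sketch of those defaults; StringBuilder keeps identity-based ==, while string redefines == to value equality):

using System;
using System.Text;

int a = 5, b = 5;
Console.WriteLine(a == b);                    // True: value-based equality for a value type

var sb1 = new StringBuilder("x");
var sb2 = new StringBuilder("x");
Console.WriteLine(sb1 == sb2);                // False: == means identity for this reference type

string s1 = "x";
string s2 = new string(new[] { 'x' });
Console.WriteLine(s1 == s2);                  // True: string gives == value-based equality...
Console.WriteLine(ReferenceEquals(s1, s2));   // False: ...but identity comparison stays available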
Another difference entailed by this is that reference types can be null - a value that refers to another value allows for a value that doesn't refer to any value, which is what a null reference is.
Also, some of the advantages of keeping value-types small relate to this, since being based on value, they are copied by value when passed to functions.
Some other differences are implied but not entailed by this. That it's often a good idea to make value types immutable is implied but not entailed by the core difference: there are advantages to immutability that don't depend on implementation matters, but there are also such advantages with reference types (indeed, some relating to safety with aliases apply more immediately to reference types), and there are reasons why one may break this guideline. So it's not a hard and fast rule - with nested value types the risks involved are so heavily reduced that I would have few qualms in making a nested value type mutable, even though my style leans heavily toward making even reference types immutable whenever practical.
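As a small aside on why that guideline exists (a hypothetical MutablePoint, not part of the answer above): value types are copied silently, so mutations can be lost without warning.

using System.Collections.Generic;

var array = new MutablePoint[1];
array[0].X = 5;     // fine: an array element is accessed in place, so the write sticks

var list = new List<MutablePoint> { new MutablePoint() };
// list[0].X = 5;   // error CS1612: the indexer returns a copy, so the write would be lost
var copy = list[0];
copy.X = 5;         // mutates only the local copy; list[0].X is still 0

struct MutablePoint { public int X; }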
Some further differences between value types and reference types are arguably implementation details. That a value type in a local variable has its value stored on the stack has been argued to be an implementation detail; probably a pretty obvious one if your implementation has a stack, and certainly an important one in some cases, but not core to the definition. It's also often overstated (for a start, a reference type in a local variable also has the reference itself on the stack; for another, there are plenty of times when a value-type value is stored on the heap).
Some further advantages in value types being small relate to this.
Now, Nullable<T> is a type that behaves like a value type in all the ways described above, except that it can take a null value. Maybe the matter of local values being stored on the stack isn't all that important (it's more an implementation detail than anything else), but the rest is inherent to how it is defined.
Nullable<T> is defined as:

public struct Nullable<T> where T : struct
{
    private bool hasValue;
    internal T value;
    /* methods and properties I won't go into here */
}
Most of the implementation from this point is obvious. Some special handling is needed to allow null to be assigned to it - treated as if default(Nullable<T>) had been assigned - and some special handling when boxed; the rest follows (including that it can be compared for equality with null).
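To make that special handling concrete (behaviour you can check with any C# compiler):

using System;

int? n = null;                       // compiled as n = default(Nullable<int>)
Console.WriteLine(n.HasValue);       // False
Console.WriteLine(n == null);        // True

object boxed = n;                    // boxing a null-valued nullable yields a null reference
Console.WriteLine(boxed == null);    // True

n = 42;
object boxed2 = n;                   // boxing a nullable with a value boxes the underlying int
Console.WriteLine(boxed2.GetType()); // System.Int32, not Nullable<Int32>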
If Nullable<T> were a reference type, then we'd need special handling to allow for all the rest to occur, along with special handling for features in how .NET helps the developer (for example, we'd need special handling to make it descend from ValueType). I'm not even sure it would be possible.
*There are some restrictions on how we are allowed to redefine equality. Combining those rules with those used in the defaults, generally we can allow two values to be considered equal that would be considered unequal by default, but it rarely makes sense to consider two values unequal that the default would consider equal. An exception is the case where a struct contains only value types, but where said value types redefine equality. This is the result of an optimisation, and is generally considered a bug rather than by design.
†An exception is floating-point types. Because of the definition of value types in the CLI standard, double.NaN.Equals(double.NaN) and float.NaN.Equals(float.NaN) return true. But because of the definition of NaN in ISO 60559, float.NaN == float.NaN and double.NaN == double.NaN both return false.
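For example:

using System;

Console.WriteLine(float.NaN == float.NaN);        // False: NaN compares unequal to itself under ==
Console.WriteLine(float.NaN.Equals(float.NaN));   // True: Equals follows CLI value-type semantics
Console.WriteLine(double.NaN == double.NaN);      // False
Console.WriteLine(double.NaN.Equals(double.NaN)); // True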
Edited to address the updated question...
You can box and unbox values if you want to use a struct as a reference. However, the Nullable<> type basically allows you to enhance any value type with an additional state flag that tells whether the value shall be treated as null or whether the struct is "valid".
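That is (a minimal sketch of the flag in use):

using System;

int? x = null;                            // HasValue == false: treat the value as null
Console.WriteLine(x.HasValue);            // False
Console.WriteLine(x.GetValueOrDefault()); // 0: no exception, just default(int)

x = 7;                                    // HasValue == true: the struct is "valid"
Console.WriteLine(x.Value);               // 7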
So, to address your points:

1. This is an advantage when used in collections, or because of the different semantics (copying instead of referencing).
2. No, it doesn't. The CLR respects this when boxing and unboxing, so you actually never box a Nullable<> instance: boxing a Nullable<> which has no value returns a null reference, and unboxing does the opposite.
3. Nope.
4. Again, this isn't the case. In fact, generic constraints of the form where T : struct do not allow nullable value types, which makes sense given the special boxing/unboxing behavior. Since this constraint is defined on the Nullable<T> type itself, you cannot nest them, without any special treatment being needed to prevent it (see the quick check below).
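You can verify that restriction directly; the constraint rejects nullable value types at compile time:

static void RequiresStruct<T>() where T : struct { }

RequiresStruct<int>();         // fine: int is a non-nullable value type
// RequiresStruct<int?>();     // error CS0453: 'int?' must be a non-nullable value type
// Nullable<Nullable<int>> n;  // rejected for the same reason: Nullable<T> declares where T : struct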
Why not use references? I already mentioned the important semantic differences. But apart from those, reference types use much more memory: each instance, especially in a 64-bit environment, uses up not only heap memory for the value itself, but also memory for the reference, the instance's type information, locking bits, etc. So, apart from the semantic and performance differences (indirection via a reference), for the most common entities you end up using a multiple of the memory required by the value itself. And the GC gets more objects to handle, which makes the total performance compared to structs even worse.
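As a rough sketch of that overhead on 64-bit .NET (approximate figures and a hypothetical class wrapper, not exact guarantees; the Unsafe.SizeOf call needs modern .NET or the System.Runtime.CompilerServices.Unsafe package):

using System;

// Per element of a large array:
//   int? stored inline:            8 bytes (4-byte int + bool flag, padded)
//   a hypothetical class wrapper:  8 bytes for the reference
//                               + 16 bytes for object header and method-table pointer
//                               +  8 bytes for the int field plus padding
//                               = 32 bytes, roughly 4x, plus one more object for the GC to track
Console.WriteLine(System.Runtime.CompilerServices.Unsafe.SizeOf<int?>()); // typically prints 8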
It is not mutable; check again.

The boxing is different too; an empty one "boxes" to null.

But: it is small (barely bigger than T), immutable, and encapsulates only structs - ideal as a struct. Perhaps more importantly, so long as T is truly a "value", then T? is a logical "value" too.
I coded MyNullable as a class. I can't really see why it couldn't be a class, besides avoiding heap memory pressure.
namespace ClassLibrary1
{
    using NFluent;
    using NUnit.Framework;

    [TestFixture]
    public class MyNullableShould
    {
        [Test]
        public void operator_equals_btw_nullable_and_value_works()
        {
            var myNullable = new MyNullable<int>(1);

            Check.That(myNullable == 1).IsEqualTo(true);
            Check.That(myNullable == 2).IsEqualTo(false);
        }

        [Test]
        public void Can_be_compared_with_operator_equal_equals()
        {
            var myNullable = new MyNullable<int>(1);
            var myNullable2 = new MyNullable<int>(1);
            Check.That(myNullable == myNullable2).IsTrue();

            var myNullable3 = new MyNullable<int>(2);
            Check.That(myNullable == myNullable3).IsFalse();
        }
    }
}
namespace ClassLibrary1
{
    using System;

    public class MyNullable<T> where T : struct
    {
        internal T value;

        public MyNullable(T value)
        {
            this.value = value;
            this.HasValue = true;
        }

        public bool HasValue { get; }

        public T Value
        {
            get
            {
                if (!this.HasValue) throw new InvalidOperationException("Cannot grab value when has no value");
                return this.value;
            }
        }

        public static explicit operator T(MyNullable<T> value)
        {
            return value.Value;
        }

        public static implicit operator MyNullable<T>(T value)
        {
            return new MyNullable<T>(value);
        }

        public static bool operator ==(MyNullable<T> n1, MyNullable<T> n2)
        {
            // Since this is a class, a null reference must also count as "no value",
            // otherwise comparing against a null MyNullable<T> would throw.
            bool n1HasValue = !ReferenceEquals(n1, null) && n1.HasValue;
            bool n2HasValue = !ReferenceEquals(n2, null) && n2.HasValue;
            if (!n1HasValue) return !n2HasValue;
            if (!n2HasValue) return false;
            return Equals(n1.value, n2.value);
        }

        public static bool operator !=(MyNullable<T> n1, MyNullable<T> n2)
        {
            return !(n1 == n2);
        }

        public override bool Equals(object other)
        {
            if (!this.HasValue) return other == null;
            if (other == null) return false;
            return this.value.Equals(other);
        }

        public override int GetHashCode()
        {
            return this.HasValue ? this.value.GetHashCode() : 0;
        }

        public T GetValueOrDefault()
        {
            return this.value;
        }

        public T GetValueOrDefault(T defaultValue)
        {
            return this.HasValue ? this.value : defaultValue;
        }

        public override string ToString()
        {
            return this.HasValue ? this.value.ToString() : string.Empty;
        }
    }
}
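One behaviour this class version cannot reproduce, though, is the boxing behaviour described in the accepted answer; a quick check (assuming the MyNullable<T> above):

using System;

var mine = new MyNullable<int>(1);
object o1 = mine;                         // no boxing happens: just a reference copy
Console.WriteLine(o1.GetType());          // MyNullable`1[System.Int32]: the wrapper stays visible

int? real = 1;
object o2 = real;                         // the CLR unwraps Nullable<T> when boxing
Console.WriteLine(o2.GetType());          // System.Int32

int? none = null;
Console.WriteLine((object)none == null);  // True: a null-valued int? boxes to a null reference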