It is really unbelievable, but real: this code will not compile:
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class Range : Attribute
{
    public decimal Max { get; set; }
    public decimal Min { get; set; }
}

public class Item
{
    [Range(Min = 0m, Max = 1000m)] // compile error: 'Min' is not a valid named attribute argument because it is not a valid attribute parameter type
    public decimal Total { get; set; }
}
While this works:
[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class Range : Attribute
{
    public double Max { get; set; }
    public double Min { get; set; }
}

public class Item
{
    [Range(Min = 0d, Max = 1000d)]
    public decimal Total { get; set; }
}
Can anyone tell me why double is OK while decimal is not?
Taken from this answer by JaredPar.
From the specs: the types of positional and named parameters for an attribute class are limited to the attribute parameter types, which are bool, byte, char, double, float, int, long, sbyte, short, string, uint, ulong, ushort, object, System.Type, enum types, and single-dimensional arrays of those types. decimal is not on that list. The underlying reason is that attribute arguments have to be stored as constants in the assembly metadata, and decimal, unlike double, is not a CLR primitive type, so there is no way to encode a decimal constant there.
A practical workaround is to pass the values as strings, which are allowed as attribute parameter types even though they are not primitive value types, and parse them to decimal when the attribute is read. Avoid passing doubles and converting them, since the rounding involved will make the results less accurate.
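A minimal sketch of that string-based workaround might look like the following; the MinValue/MaxValue helper properties are illustrative names I have added, not part of the original answer:

using System;
using System.Globalization;

[AttributeUsage(AttributeTargets.Property | AttributeTargets.Field)]
public class Range : Attribute
{
    // The raw bounds are accepted as strings, which are a valid attribute parameter type.
    public string Min { get; set; }
    public string Max { get; set; }

    // Hypothetical helpers: parse the strings to decimal when the attribute is read via reflection.
    public decimal MinValue => decimal.Parse(Min, CultureInfo.InvariantCulture);
    public decimal MaxValue => decimal.Parse(Max, CultureInfo.InvariantCulture);
}

public class Item
{
    [Range(Min = "0", Max = "1000")] // compiles: string is a valid attribute parameter type
    public decimal Total { get; set; }
}

Code that inspects the attribute then works with MinValue and MaxValue rather than the raw strings, keeping full decimal precision end to end.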