Why does the System.Net.IPAddress class allow the following strings to be converted to valid IP addresses?
$b = [ipaddress]"10.10.10"
$b.IPAddressToString
#10.10.0.10
$c = [ipaddress]"10.10"
$c.IPAddressToString
#10.0.0.10
$d = [ipaddress]"10"
$d.IPAddressToString
#0.0.0.10
I can see the pattern: the last part of the string becomes the last octet of the IPAddress object, the leading parts become the left-most octets, and zeros fill any unspecified octets in the middle.
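That layout can be confirmed by inspecting the raw bytes with GetAddressBytes (a quick check using the examples above):

([ipaddress]"10.10.10").GetAddressBytes() # 10 10 0 10 - zero padded into the third byte
([ipaddress]"10").GetAddressBytes()       # 0 0 0 10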
But why does it do this? As a user, I'd expect the conversion to fail unless all four octets are specified. Because these conversions are allowed, unexpected results like this are possible when checking whether a string is a valid IP address:
[bool]("10" -as [ipaddress]) #Outputs True
According to https://msdn.microsoft.com/en-us/library/system.net.ipaddress.parse.aspx:
The number of parts (each part is separated by a period) in ipString determines how the IP address is constructed. A one part address is stored directly in the network address. A two part address, convenient for specifying a class A address, puts the leading part in the first byte and the trailing part in the right-most three bytes of the network address. A three part address, convenient for specifying a class B address, puts the first part in the first byte, the second part in the second byte, and the final part in the right-most two bytes of the network address.
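The multi-byte trailing parts described there are easy to see in the raw bytes (the numeric values below are my own examples, worked out from the quoted rules):

([ipaddress]"10.258").GetAddressBytes()     # 10 0 1 2    (258 = 1*256 + 2, stored in the last three bytes)
([ipaddress]"10.10.258").GetAddressBytes()  # 10 10 1 2   (258 stored in the last two bytes)
([ipaddress]"3232235521").GetAddressBytes() # 192 168 0 1 (a one-part address fills all four bytes)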