Is there a place in the documentation, or something I can look up dynamically in Xcode 6, that shows all of the defined operator overloads for numbers, such as the binary arithmetic and comparison operators?
Swift supports the four standard arithmetic operators for all number types:
Addition (+)
Subtraction (-)
Multiplication (*)
Division (/)
Swift supports all standard C comparison operators:
Equal to (a == b)
Not equal to (a != b)
Greater than (a > b)
Less than (a < b)
Greater than or equal to (a >= b)
Less than or equal to (a <= b)
The reason I'd like to know is so I can tell when I'll have to use type conversion and when I won't, because a built-in operator overload already covers the two operand types.
I have a related question on type casting and automatic upscaling, but before posting it I want to make sure I understand the rules Swift defines by default.
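For example (a made-up snippet, just to illustrate the kind of situation I mean):
let small: UInt8 = 5
let big = 1_000                  // inferred as Int
// small * big                   // is there a built-in overload of * for (UInt8, Int)?
let product = Int(small) * big   // or do I always have to convert one side myself?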
As Martin said, you can see a sort of header file that declares these functions by Command-clicking Int or some other Swift type. For example, the multiplication functions look like this:
func *(lhs: UInt8, rhs: UInt8) -> UInt8
func *(lhs: Float, rhs: Float) -> Float
func *(lhs: Int, rhs: Int) -> Int
func *(lhs: UInt, rhs: UInt) -> UInt
func *(lhs: Int64, rhs: Int64) -> Int64
func *(lhs: Float80, rhs: Float80) -> Float80
func *(lhs: Double, rhs: Double) -> Double
func *(lhs: UInt64, rhs: UInt64) -> UInt64
func *(lhs: Int32, rhs: Int32) -> Int32
func *(lhs: UInt32, rhs: UInt32) -> UInt32
func *(lhs: Int16, rhs: Int16) -> Int16
func *(lhs: UInt16, rhs: UInt16) -> UInt16
func *(lhs: Int8, rhs: Int8) -> Int8
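The comparison operators live in that same generated interface. From what I remember (so double-check in your own copy of Xcode), == and < get one overload per concrete numeric type, and the remaining comparisons are generic functions defined in terms of Equatable and Comparable:
func ==(lhs: Int, rhs: Int) -> Bool
func <(lhs: Int, rhs: Int) -> Bool
func ==(lhs: Double, rhs: Double) -> Bool
func <(lhs: Double, rhs: Double) -> Bool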
All of these operator functions take two operands of the same type, and the arithmetic ones return a value of that type. That's why you often have to do conversions before doing arithmetic or comparisons on mixed types.
You only have to convert variables or constants that already have a concrete type. Raw numeric literals are untyped until the compiler infers a type from context, so you can mix them freely in arithmetic.
In this situation both operands are constants, and Swift inferred a type for each when it was defined:
let a = 42       // inferred as Int
let b = 3.14     // inferred as Double (not Float)
a + Int(b)       // returns an Int (45); Int(b) truncates 3.14 to 3
But that doesn't mean the decimal point permanently fixes the literal's type. An integer literal will adapt to a Float or Double declaration, while a floating-point literal will not adapt to an Int; you have to convert it explicitly:
var y: Float = 1     // 1.0; the integer literal is used as a Float
var x: Int = 1.1     // error: a floating-point literal can't become an Int implicitly
var z = Int(1.1)     // 1; explicit conversion truncates the value
That's why you can do stuff like this:
10 + 10.4             // 20.4; both operands are literals, so 10 is inferred as a Double
But once a value has a concrete type, that flexibility is gone:
var foo = 10          // inferred as Int
foo + 10.4            // error: no overload of + takes an Int and a Double
Double(foo) + 10.4    // 20.4
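The same rule applies to the comparison operators from the question; a quick sketch (the names are just illustrative):
let i = 42         // Int
let d = 3.14       // Double
// i > d           // error: no overload of > takes an Int and a Double
Double(i) > d      // true (42.0 > 3.14)
i > Int(d)         // true (42 > 3)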