I have a working TCP socket setup on my Go server. I accept an incoming connection, run a for loop and read incoming data using the net.Conn.Read function.
But it just doesn't make sense to me. How does it know the full message has been received before it returns the message size?
This is my code currently:
func (tcpSocket *TCPServer) HandleConnection(conn net.Conn) {
	fmt.Println("Handling connection! ", conn.RemoteAddr().String(), " connected!")
	receiveBuffer := make([]byte, 50) // largest message we ever get is 50 bytes
	defer func() {
		fmt.Println("Closing connection for: ", conn.RemoteAddr().String())
		conn.Close()
	}()
	for {
		// how does it know the end of a message vs the start of a new one?
		messageSize, err := conn.Read(receiveBuffer)
		if err != nil {
			return
		}
		if messageSize > 0 { // update keep-alive since we got a message
			conn.SetReadDeadline(time.Now().Add(time.Second * 5))
		}
	}
}
Let's say my application sends a message which is 6 bytes long (it could be any size). How does conn.Read
know when it's received the end of said message before it continues?
My experience mainly lies in C#, so Go is a bit unusual here. In my C# application the messages carry the size of the message in the first byte, then I use a for loop to read the remaining bytes up to that size.
Yet the above Go code seems to get the full message and continues - it somehow automatically knows the size of my message?
I am really confused about how this is happening, or whether it's just working by luck and I'm actually approaching it wrong.
All my messages have a header in the first byte stating the size of the message. But it seems I don't need it on a Go server - am I misunderstanding how this works?