When a TCP server accepts a connection on a listening port, it gets a new socket dedicated to that client. The listening socket remains valid and can continue to accept further clients on the same port.
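A minimal sketch of that behavior (Python standard library only; the setup is mine, not from the question): the listening socket keeps accepting while each `accept()` hands back a fresh per-client socket.

```python
import socket
import threading

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen()
port = listener.getsockname()[1]

def connect_once():
    # A client connects to the listening port and immediately disconnects.
    with socket.create_connection(("127.0.0.1", port)):
        pass

listener_fd = listener.fileno()
client_fds = []
for _ in range(2):
    t = threading.Thread(target=connect_once)
    t.start()
    conn, addr = listener.accept()   # a brand-new socket per accepted client
    client_fds.append(conn.fileno())
    conn.close()
    t.join()

listener.close()
```

Both clients were served through the same listening port, yet each `accept()` returned a socket distinct from the listener itself.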
Why did the original FTP specification RFC 959 decide to create both a control port and a data port?
Would there be any reason to do this in a similar custom protocol?
It seems to me that this could have been easily specified on a single port.
Given all the problems FTP has with firewalls and NATs, it seems that a single port would have been much better.
For a general protocol implementation, the only reason I could think that you would want to do this is so that you can serve the files from a different host than the commands are going to.
The IETF now strongly discourages allocating more than one port for a new protocol, so we likely won't be seeing this design in the future.
Newer transport protocols such as SCTP are designed to solve some of the shortcomings of TCP that could lead one to use multiple ports. TCP's head-of-line blocking prevents you from having multiple independent requests/streams in flight on a single connection, which can be a problem for some realtime applications; SCTP instead supports multiple streams within one association.
FTP was designed at a time when the stupidity of a modern firewall was inconceivable. TCP ports are intended for exactly this functionality: multiplexing multiple connections on a single IP address. They are NOT a substitute for access control lists, and they are NOT intended to extend IPv4 to 48-bit addresses, either.
Any new non-IPv6 protocol will have to deal with the current mess, so it should stick to a small contiguous range of ports.
In my view, it's just a bad design choice in the first place. Back when FTP was invented, firewalls and NAT did not exist... Nowadays, the real question is more "why do people still want to use FTP?" Everything FTP does can be done over HTTP in a better way.
You should have a look at the RTSP + RTP protocols. It is a similar design: each stream can be sent on a different port, and statistics about jitter, reordering, etc. are sent on yet another port.
Plus, there is no connection setup, since RTP runs over UDP. However, RTSP was developed when firewalls were already commonplace, so a mode was added in which all these streams can be embedded in one TCP connection with HTTP-like syntax.
Guess what? The multi-port protocol is much simpler (IMO) to implement than the HTTP-multiplexed one, and it has a lot more features. If you are not concerned with firewall problems, why choose a complicated multiplexing scheme when one already exists (TCP ports)?
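For illustration, the TCP-embedded mode frames each packet inside the RTSP connection as `'$'`, a one-byte channel id, and a two-byte big-endian length, followed by the payload (RFC 2326's interleaved mode). A minimal sketch of that framing (helper names are mine):

```python
import struct

def frame_interleaved(channel: int, payload: bytes) -> bytes:
    # RTSP interleaved binary frame (RFC 2326, section 10.12):
    # '$' | 1-byte channel | 2-byte big-endian length | payload
    return b"$" + struct.pack("!BH", channel, len(payload)) + payload

def parse_interleaved(data: bytes):
    # Inverse of frame_interleaved: recover (channel, payload).
    if data[0:1] != b"$":
        raise ValueError("not an interleaved frame")
    channel, length = struct.unpack("!BH", data[1:4])
    return channel, data[4:4 + length]

frame = frame_interleaved(0, b"rtp-packet-bytes")
ch, payload = parse_interleaved(frame)
```

Even this single-connection mode has to reinvent a small demultiplexing layer (the channel id) that separate ports would have provided for free.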
IIRC, the issue wasn't that FTP uses two (i.e., more than one) ports. The issue is that the control connection is initiated by the client while the data connection is initiated by the server. The biggest difference between FTP and HTTP is that in HTTP the client pulls data, whereas in FTP the server pushes it. The NAT issue comes from the server pushing data through a firewall that doesn't know to expect the connection.
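Passive mode is the usual workaround: the client opens the data connection too, after the server tells it where to connect in a `227` reply. A small sketch of decoding that reply (the helper is mine; the reply format is the standard one from RFC 959):

```python
import re

def parse_pasv_reply(reply: str) -> tuple[str, int]:
    """Parse an FTP 227 reply such as
    '227 Entering Passive Mode (192,168,1,2,197,143)'
    into the (host, port) the client should dial for the data connection."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError(f"not a valid 227 reply: {reply!r}")
    h1, h2, h3, h4, p_hi, p_lo = map(int, m.groups())
    return f"{h1}.{h2}.{h3}.{h4}", p_hi * 256 + p_lo

host, port = parse_pasv_reply("227 Entering Passive Mode (192,168,1,2,197,143)")
```

With both connections now outbound from the client, a stateless firewall on the client side no longer has to anticipate an inbound data connection.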
FTP's use of separate ports for control and data is rather elegant, IMHO. Think about all of the headaches in HTTP surrounding the multiplexing of control and data: trailing headers, rules surrounding pipelined requests, connection keep-alives, and so on. Much of that is avoided by explicitly separating control from data, not to mention that it becomes possible to do interesting things like encrypt one channel and not the other, or give the control channel a higher QoS than the data.
Like many of the older wire protocols, FTP is suited for use by humans. That is, it is quite easy to drive FTP from a terminal session. The designers of FTP anticipated that users might want to continue working with the remote host while data was transferring; that would have been difficult if commands and data shared the same channel.