HTTP creates the connection again and again for each piece of data transferred over the network, whereas WebSockets are static: the connection is made once at the start and stays open until the transmission is done. But if WebSockets are static, then why does the latency differ for each data packet?
The latency test app I have created shows a different time lag for each packet. So what is the advantage of a WebSocket being a static connection, or is this a common issue with WebSockets?
Do I need to create a buffer to control the flow of data, because the data transmission is continuous? Does the latency increase when data transmission is continuous?
There is no overhead to establish a new connection with a statically open WebSocket (the connection is already open and established), but when you're talking to a server halfway around the world, the packets still have to travel that distance, so there is latency.
That's just how networking works.
You get a near-immediate response from a server on your own LAN, and the further away the server gets (in terms of network topology), the more routers each packet must transit through and the more total delay there is. As you witnessed in your earlier question related to this topic, when you do a
tracert
from your location to your server location, you saw a LOT of different hops that each packet has to traverse. The time for each of these hops adds up, and busy routers may each add a small delay if they aren't processing your packet instantly.

The latency between sending a packet and getting a response is just 2x the packet transit time, plus whatever time your server takes to respond, plus perhaps a tiny bit of overhead for TCP (since it's a reliable protocol, it needs acknowledgements). You cannot speed up the transit time unless you pick a server that is closer or somehow influence the route the packet takes to a faster one (this is mostly not under your control once you've selected a local ISP to use).
No amount of buffering on your end will decrease the roundtrip time to your server.
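If you want to see that floor for yourself, here's a minimal sketch (TypeScript, browser WebSocket API) of measuring the round trip over a single, already-open connection. The wss://example.com/echo URL and the assumption that the server echoes each message straight back are mine, not from your setup; swap in your own endpoint and message format.

```typescript
// Minimal round-trip measurement over one persistent WebSocket.
// Assumes a hypothetical echo endpoint at wss://example.com/echo.
const socket = new WebSocket("wss://example.com/echo");

socket.addEventListener("open", () => {
  // Send a timestamped ping once a second over the already-open connection.
  setInterval(() => {
    socket.send(JSON.stringify({ type: "ping", sentAt: performance.now() }));
  }, 1000);
});

socket.addEventListener("message", (event) => {
  const msg = JSON.parse(event.data as string);
  if (msg.type === "ping") {
    // Round trip = transit there + server processing + transit back.
    const rtt = performance.now() - msg.sentAt;
    console.log(`round trip: ${rtt.toFixed(1)} ms`);
  }
});
```

Run this against a server on your LAN and then against your remote server and you'll see the difference is almost entirely transit time; the connection setup cost is paid only once in both cases, and the per-message numbers will still jitter for all the routing reasons above.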
In addition, the more hops in the network there are between your client and server, the more variation you may get from one moment to the next in the transit time. Each router the packet traverses and each link it travels on has its own load, congestion, etc. that varies with time. There will likely be a minimum transit time that you will ever observe (it will never be faster than x), but many things can make it slower than that at some moments. There could even be a case of an ISP taking a router offline for maintenance, which puts more load on the other routers handling the traffic, or a route between hops going down, so a temporary but slower and longer route is substituted in its place. There are literally hundreds of things that can cause the transit time to vary from moment to moment. In general, it won't vary a lot from one minute to the next, but it can easily vary through the day or over longer periods of time.
You haven't said whether this is relevant or not, but when you have poor latency on a given roundtrip, or when performance is very important, what you want to do is minimize the number of roundtrips that you wait for. You can do that in a few ways:
1. Don't sequence small pieces of data. The slowest way to send lots of data is to send a little bit of data, wait for a response, send a little more data, wait for a response, etc. If you had 100 bytes to send and you sent the data 1 byte at a time, waiting for a response each time, and your roundtrip time was X, you'd have 100X as your total time to send all the data. Instead, collect a larger piece of the data and send it all at once. If you send the 100 bytes all at once, you'd probably only have a total delay of X rather than 100X (see the sketch after this list).
2. If you can, send data in parallel. As explained above, the pattern of send data, wait for response, send more data, wait for response is slow when the roundtrip time is poor. If your data can be tagged such that it stands on its own, then sometimes you can send data in parallel without waiting for prior responses. In the above example, it was very slow to send 1 byte, wait for a response, send the next byte, wait for a response. But if you send 1 byte, then send the next byte, then send the next byte, and only some time later process all the responses, you get much, much better throughput. Obviously, if you already have 100 bytes of data, you may as well just send that all at once, but if the data is arriving in real time, you may want to send it out as it arrives and not wait for prior responses. Whether you can do this depends entirely upon the data protocol between your client and server.
3. Send bigger pieces of data at a time. If you can, send bigger chunks of data at once. Depending upon your app, it may or may not make sense to wait for data to accumulate before sending it, but if you already have 100 bytes of data, then try to send it all at once rather than sending it in smaller pieces.
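Here's a rough sketch of points 1 and 3, again in TypeScript against the browser WebSocket API. The 50 ms flush interval, the wss://example.com/data endpoint, and the JSON-array message format are arbitrary choices for illustration; the real point is one send() per batch instead of one per tiny piece of data.

```typescript
// Collect small pieces of data and send them as one message per flush,
// instead of one WebSocket frame (and one round-trip wait) per piece.
const socket = new WebSocket("wss://example.com/data"); // assumed endpoint

const pending: string[] = [];

// Callers just queue data as it arrives in real time.
function queue(piece: string): void {
  pending.push(piece);
}

// Every 50 ms, flush whatever has accumulated as a single message.
setInterval(() => {
  if (pending.length > 0 && socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify(pending.splice(0, pending.length)));
  }
}, 50);
```

For point 2, the same idea applies: tag each queued piece with an id, keep sending without waiting, and match responses to ids as they come back rather than blocking on each one.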