I have a C application running on Linux. The app reads keyboard input from a terminal and sends it to a remote server.
The code below configures the terminal:
// save old terminal attributes
if (tcgetattr(0, &ttyold) != 0) {
    fprintf(stderr, "Failed getting terminal attributes\n");
    goto out;
}
ttynew = ttyold;
ttynew.c_iflag = 0;
ttynew.c_oflag = 0;
// disable canonical mode (don't buffer by line)
ttynew.c_lflag &= ~ICANON;
// disable local echo
ttynew.c_lflag &= ~ECHO;
ttynew.c_cc[VMIN] = 1;
ttynew.c_cc[VTIME] = 1;
// set new terminal attributes
if (tcsetattr(0, TCSANOW, &ttynew) != 0) {
    fprintf(stderr, "Failed setting terminal attributes\n");
    goto out;
}
I did not write this app; I'm just trying to understand the code.
I don't understand why the previous engineer disabled echo. The data being sent isn't secret, so what else could be the reason? Performance? Disabling buffering?
In addition, I'd be happy to get an explanation of the line "ttynew.c_lflag &= ~ICANON;".
Thanks in advance.
Echo here is about who displays what you type, not secrecy. If the receiving end doesn't echo your keystrokes back, you need to enable local echo so you can see what you type. If the receiving end does echo (as most remote shells do), you disable local echo, otherwise you end up seeing every character twice.
Here everything is explained:
Basically, with ICANON set the terminal driver buffers input and delivers it to the application only after end-of-line. "ttynew.c_lflag &= ~ICANON;" ANDs the local-mode flags with the bitwise complement of ICANON, which clears just that one bit and leaves the others untouched. With canonical mode off, reads return per character rather than per line.