I am working on a single-threaded applet that creates a proxy virtual device (more precisely, a virtual Xbox 360 pad); I manage to create it through the uinput interface, I set it up properly, and it works just fine.
In order to feed commands to this virtual device, I read events from another, real device (in this case a PS3 pad), and I open the real device file with these flags:
fd = open("/dev/input/event22", O_RDONLY); // open the PS3 pad
The main loop is something like (minus error checking):
while (run) {
    struct input_event ev = {0};
    // blocks until the PS3 pad produces an event
    read(fd, &ev, sizeof(struct input_event));
    // convert from PS3 --> Xbox 360
    convert(&ev);
    // write to the new virtual pad
    write(fd_virtual, &ev, sizeof(struct input_event));
}
As you can imagine, the read(fd, &ev, sizeof(struct input_event)) call blocks until an event arrives, and I would like to have some sort of timeout so I can cycle through the loop and check for other events / execute other code.
For these reasons I am thinking of wrapping that read(fd... call inside an epoll loop, so I can also have a timeout.
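Something like this is what I have in mind (just a sketch, error checking omitted; the 10 ms timeout is an arbitrary placeholder):

#include <sys/epoll.h>

int epfd = epoll_create1(0);
struct epoll_event reg = { .events = EPOLLIN, .data.fd = fd };
epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &reg);

while (run) {
    struct epoll_event evt;
    int n = epoll_wait(epfd, &evt, 1, 10); // wake up at least every 10 ms
    if (n > 0) {
        struct input_event ev = {0};
        read(fd, &ev, sizeof(struct input_event)); // readable now, won't block
        convert(&ev);
        write(fd_virtual, &ev, sizeof(struct input_event));
    } else {
        // timeout: check for other events / execute other code
    }
}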
Question is, would it be efficient to have it done this way?
By virtue of using epoll_wait, am I introducing additional delays to the current loop, thus delays in responsiveness of the virtual pad?
> By virtue of using epoll_wait, am I introducing additional delays to the current loop, thus delays in responsiveness of the virtual pad?
Yes, you sure do.
> would it be efficient to have it done this way?
I'm sure yes, but that very much depends on your definition of "efficient".
What we're talking about here is a human input device. What we care about most when dealing with HIDs is latency: it shouldn't lag, and the reaction to a keypress should be instant. What is "instant" for a human being? There is a nice discussion on that, but one argument that I like most is that at a high-level athletics competition you'd be disqualified for starting less than 100 ms after the signal.
But that 100 ms is the time budget for the whole processing of the input signal, from key press to some perceivable change in the game. The Wikipedia page on input lag has some numbers on how this budget is usually spent.
Anyway, I think 1 ms is an absolutely safe overhead for your proxy to add, and no one will notice it; let's say that's our goal for maximum added latency (as in the definition of "efficient").
So, let's assume that you're satisfied with the response time of your current code. What changes when you add an epoll_wait() call? Basically, you're adding the time of another syscall: instead of one syscall to get the value, you're now making two. So potentially it's about twice as slow as your original code (let's forget about the difference in processing time between different syscalls for the moment). But is it really that bad?
To answer that question we need some estimate of what a syscall's overhead is. If we're too lazy to measure it ourselves, we can use some numbers from 20 years ago, some numbers from people who care about syscalls, some IPC numbers from microkernel guys (they always care), some random numbers from StackOverflow, or just ask Rich, and settle on something microsecond-level as a safe assumption.
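If we're not that lazy, a quick way to get our own number is to time a cheap syscall in a tight loop; here getpid via syscall(2) is just an arbitrary choice, and the result will vary with hardware and kernel:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const long N = 1000000; // enough iterations to average out noise
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        syscall(SYS_getpid); // forces a real kernel round trip
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per syscall\n", ns / N);
    return 0;
}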
So the question boils down to whether adding some (let's even say 10) microseconds is noticeable within your millisecond (as in 1000 µs) time budget. I think it's not.
There is just one little possible problem when you go from "just adding epoll_wait()" to
> cycle through the loop and check for other events/execute other code.
You need to be careful to stay within your time budget for those loops and checks. But then again, 1 ms is probably more than enough for you here.
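For instance, a simple way to bound that work is to run it only when epoll_wait() reports a timeout, and keep it short (do_other_checks() here is a hypothetical placeholder for your other code):

// inside the epoll loop, with a 1 ms timeout as the scheduling slot
int n = epoll_wait(epfd, &evt, 1, 1);
if (n > 0) {
    // forward the pad event as before
} else {
    do_other_checks(); // hypothetical; must itself finish well under 1 ms
}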