What is an efficient way to implement tail in *NIX? I came up with (wrote) two simple solutions, both using a kind of circular buffer to load lines into a circular structure (an array, or a doubly linked circular list just for fun). I've seen part of an older implementation in busybox, and from what I understood it used fseek to find EOF and then read the data "backwards". Is there anything cleaner and faster out there? I was asked this in an interview and the interviewer did not look satisfied. Thanks in advance.
This approach implements tail's -n option: read backwards from the end of the file until N line breaks have been read or the beginning of the file is reached, then print what was just read.
I don't think any fancy data structures are needed here.
Here is the source code of tail if you're interested.
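In case a concrete version helps, here is a rough sketch of that idea in C. It's only a sketch under a few assumptions: the file is seekable, it ends with a newline, the 4096-byte chunk size is arbitrary, and the helper name tail_n is made up for illustration.

#include <stdio.h>

/* Sketch: seek to EOF, scan backwards in fixed-size chunks until
   more than n newlines have been seen (the extra one is the
   terminator of the line before the wanted region), then print
   everything from that point to EOF.                             */
static void tail_n(FILE *f, long n)
{
    char buf[4096];
    long newlines = 0;

    fseek(f, 0, SEEK_END);
    long pos = ftell(f);                 /* bytes left to scan */

    while (pos > 0) {
        long chunk = pos < (long)sizeof buf ? pos : (long)sizeof buf;
        pos -= chunk;
        fseek(f, pos, SEEK_SET);
        if (fread(buf, 1, chunk, f) != (size_t)chunk)
            break;                       /* read error: just print from here */

        for (long i = chunk - 1; i >= 0; i--) {
            if (buf[i] == '\n' && ++newlines > n) {
                pos += i + 1;            /* first byte of the oldest wanted line */
                goto print;
            }
        }
    }
print:
    fseek(f, pos, SEEK_SET);
    size_t r;
    while ((r = fread(buf, 1, sizeof buf, f)) > 0)
        fwrite(buf, 1, r, stdout);
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    tail_n(f, 10);                       /* last 10 lines, like plain tail */
    fclose(f);
    return 0;
}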
I don't think there are solutions other than "keep the latest N lines while reading the data forward" or "start from the end and go backwards until you have read the Nth line".
The point is that you'd use one or the other depending on the context.
The "go to the end and scan backwards" approach is better when tail is given a random-access file, or when the data is small enough to fit in memory. In that case the runtime is minimized, since you only scan the data that has to be output (so it's "optimal" in that sense).
Your solution (keep the latest N lines) is better when tail is fed from a pipeline, or when the data is huge. In that case the other approach wastes too much memory, so it is not practical, and since the source is probably slower than tail anyway, scanning all the data as it arrives doesn't cost that much.
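A minimal sketch of that forward-reading, keep-the-latest-N-lines variant, assuming POSIX getline() is available and N is fixed at 10 here (the ring buffer of line pointers is just one way to arrange it):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>

#define N 10                            /* number of lines to keep */

int main(void)
{
    char  *line[N] = {0};               /* circular buffer of the last N lines */
    size_t cap[N]  = {0};
    size_t total   = 0;

    /* getline() (re)allocates each slot as needed; this never seeks,
       so it works on pipes as well as on regular files               */
    while (getline(&line[total % N], &cap[total % N], stdin) != -1)
        total++;

    size_t have = total < N ? total : N;
    for (size_t i = total - have; i < total; i++)
        fputs(line[i % N], stdout);

    for (size_t i = 0; i < N; i++)
        free(line[i]);
    return 0;
}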
First use fseek to find the end of the file, then subtract 512, fseek to that offset, and read forward from there to the end. Count the number of line breaks; if there are too few, do the same again with a subtracted offset of 1024 ... but in 99% of cases 512 will be enough. This (1) avoids reading the whole file forward, and (2) is probably more efficient than reading strictly backwards from the end, because reading forward is typically faster.