cat multiple files over one ssh connection and get return value for each

Published 2019-08-01 19:55

Question:

As said in the title, I'm trying to cat multiple files (whose content needs to be appended to existing files on the host) over one ssh connection and get a return value for each, i.e. whether the cat for that particular file was successful or not. Up to now, I did this for each file individually, by simply repeating the following command for each one and checking the return value.

cat specific_file | ssh user@host -i /root/.ssh/id_rsa "cat >> result/specific_file"

I then checked the return value for each transfer (automatically) and could thereby determine the status of each file. My question is: is it possible to do this over one single ssh connection, but still obtain a return value for every single file?

Thanks in advance!

EDIT:

As you can see in the following, I generate a command for one specific file and then check the return code. The method containing this part of the program is then called for the different types of files.

If the file was successfully logged, it can be removed from the client, and further logging to that type of file is (still) allowed. If the file exceeds its limits, this bit is unset in another part of the program.

/* build the per-file pipeline: cat <local file> | ssh ... "cat >> <remote file>" */
sprintf(command,"/bin/cat /root/%s%s | /usr/bin/ssh log-bot@192.168.5.1 -i"
        " /root/.ssh/id_rsa \"/bin/cat >> result/%s%s\"", apmac,
        ending_str[source], apmac, ending_str[dest]);
rc = system(command); /* shell exit status: non-zero means the transfer failed */

if(rc != 0)
{
    sprintf(buffer, "[%i] - LOG ERROR from %s to %s CODE %i ", (int)time(0),
            ending_str[source], ending_str[dest], rc);
    sprintf(filename,"%s%s%s", LOCAL, apmac, ending_str[source]);
}
else
{
    sprintf(filename,"%s%s%s", LOCAL, apmac, ending_str[source]);
    remove(filename);
    sprintf(buffer, "[%i] - LOG process from %s to %s ok", (int)time(0),
            ending_str[source], ending_str[dest]);

    /* re-allow logging for the file type that was just shipped successfully */
    switch(source)
    {
    case PROBE_FILE:
        LOG_MASK_SET(globals->log_mask, LOG_MASK_PROB);
        break;
    case LIST_FILE:
        LOG_MASK_SET(globals->log_mask, LOG_MASK_LIST);
        break;
    case SCAN_FILE:
        LOG_MASK_SET(globals->log_mask, LOG_MASK_SCAN);
        break;
    default:
        /* Other cases not YET considered */
        break;
    }
}

SECOND EDIT:

I'm running this code on a router as part of a program. Please note that I'm not allowed to add new libraries or non-basic functions to the system. In addition, the ssh client on the system itself doesn't allow "-M" mode.

Answer 1:

Edit in response to the added info (and code):

Regarding the code: I'd strongly consider writing a script/program on the receiving end that talks to the sending process over the ssh pipe. That way you have full flexibility.

The simplest thing that could work would still appear to be sending an archive over to the receiving host. On the receiving end, filter the archive with a script that

  • untars each file into a temporary location
  • tries the appending operation cat >> specific_file
  • prints a 'result record' to stdout as feedback to the sender

So you'd do:

tar cf - file1 file2 file3 |
   ssh log-bot@remote /home/log-bot/handle_logappends.sh |
   while read resultcode filename
   do
       echo "$filename resulted in code $resultcode"
   done

To handle the feedback in C/C++, look at popen(3): it lets you read the streaming feedback as if from a file, simple!
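A minimal sketch of that C side, assuming the pipeline shown above (the host, key path, and the 256-byte name limit are illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* the same tar | ssh pipeline as above, run as a child process */
    const char *cmd =
        "tar cf - file1 file2 file3 | "
        "ssh log-bot@192.168.5.1 -i /root/.ssh/id_rsa /home/log-bot/handle_logappends.sh";

    FILE *fp = popen(cmd, "r");        /* read the script's stdout as a stream */
    if (fp == NULL)
        return EXIT_FAILURE;

    int code;
    char filename[256];

    /* each feedback line is "<code> <filename>" */
    while (fscanf(fp, "%d %255s", &code, filename) == 2) {
        if (code == 0)
            printf("OK:   %s\n", filename);
        else
            printf("FAIL: %s (code %d)\n", filename, code);
    }

    int status = pclose(fp);           /* exit status of the whole pipeline */
    return status == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}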

An example of such a handle_logappends.sh script on the receiving end:

#!/bin/bash
set -e # bail on unexpected errors (the per-file cat below is deliberately exempted)

TEMPDIR="/tmp/.receiving_$RANDOM"
mkdir "$TEMPDIR"

trap "rm -rf '$TEMPDIR/'" INT ERR EXIT

tar xvf - -C "$TEMPDIR/" | while read -r filename
do
    echo "unpacked file $filename" >&2   # progress to stderr; stdout is reserved for feedback

    ## implement your file append logic here :)
    ## e.g. (?):
    cat "$TEMPDIR/$filename" >> "result/$filename" && rc=0 || rc=$?

    ## HERE COMES THE FEEDBACK PART: '<code> <filename>'
    echo "$rc" "$filename"
done

The really neat part of this is that, since everything is in streaming mode, the feedback for the first file(s) may arrive while the sending tar is still pushing the later files to the receiving host. No unnecessary delays!

I included a tiny bit of sane error handling/cleanup but I would suggest

  • perhaps receiving the whole archive first, then iterating through the files?
  • doing the appends in atomic fashion (i.e. on a copy, then moving the copy into place only if the whole append operation succeeded; this prevents partially appended logs - see the sketch below)
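
A minimal sketch of the atomic variant for the loop body above (assuming result/ and $TEMPDIR sit on the same filesystem, so that mv is an atomic rename):

# append onto a private copy, then atomically swap it into place
cp "result/$filename" "$TEMPDIR/$filename.new" 2>/dev/null || : > "$TEMPDIR/$filename.new"
cat "$TEMPDIR/$filename" >> "$TEMPDIR/$filename.new" \
    && mv "$TEMPDIR/$filename.new" "result/$filename" \
    && rc=0 || rc=$?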

Hope that helps!


Older answer:

You'd usually employ devious little tricks (not) like:

tar cf - file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xf - -C result/"

Add a verbose flag to see progress details:

tar cf - file1 file2 file3 | ssh user@host -i /root/.ssh/id_rsa "tar xvf - -C result/"

If you want, you can substitute cpio for tar. Add options to get more functionality (e.g. -p to preserve permissions).


To do various separate steps over a single logical connection, you can use an ssh master connection:

ssh user@host -i /root/.ssh/id_rsa -M -S /tmp/ssh_ctl -Nf  # log in once: master, background, no command
                                                           # (/tmp/ssh_ctl is an arbitrary control-socket path)

for specific_file in file1 file2 file3
do
     cat "$specific_file" |
         ssh -S /tmp/ssh_ctl user@host "cat >> 'result/$specific_file'"
     # check/use the per-file error code ($?) here
done
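
When everything is done, the master connection can be torn down again (same illustrative control-socket path as above):

ssh -S /tmp/ssh_ctl -O exit user@host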



Answer 2:

How about building on libssh2 instead of scripting ssh, and using the sftp subsystem instead of building your own file-transfer system in shell?

There's an example of performing one file append in libssh2/examples/sftp_append.c; just repeat it for each of the files you want.
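A condensed sketch of that per-file loop, assuming an already connected and authenticated LIBSSH2_SESSION (the helper name append_files, the result array, and buffer sizes are illustrative, not part of libssh2; error handling is trimmed):

#include <libssh2.h>
#include <libssh2_sftp.h>
#include <stdio.h>
#include <sys/types.h>

/* Append the contents of each local file to "result/<name>" on the host.
   Per-file status is reported through the result array (0 = ok). */
static void append_files(LIBSSH2_SESSION *session,
                         const char **files, int nfiles, int *result)
{
    LIBSSH2_SFTP *sftp = libssh2_sftp_init(session);

    for (int i = 0; i < nfiles; i++) {
        char remote[512];
        snprintf(remote, sizeof remote, "result/%s", files[i]);

        /* open the remote file for appending, creating it if necessary */
        LIBSSH2_SFTP_HANDLE *h = libssh2_sftp_open(sftp, remote,
                LIBSSH2_FXF_WRITE | LIBSSH2_FXF_CREAT | LIBSSH2_FXF_APPEND,
                LIBSSH2_SFTP_S_IRUSR | LIBSSH2_SFTP_S_IWUSR);

        FILE *local = fopen(files[i], "rb");
        result[i] = (h && local) ? 0 : -1;

        char buf[4096];
        size_t n;
        while (result[i] == 0 && (n = fread(buf, 1, sizeof buf, local)) > 0) {
            char *p = buf;
            while (n > 0) {                 /* short writes are possible */
                ssize_t w = libssh2_sftp_write(h, p, n);
                if (w < 0) { result[i] = -1; break; }
                p += w;
                n -= (size_t)w;
            }
        }

        if (local) fclose(local);
        if (h) libssh2_sftp_close(h);
    }

    libssh2_sftp_shutdown(sftp);
}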



Answer 3:

If you look at the problem from a different tactical view, you could cat all the files over inside one master file. That master file is a shell script with here-documents embedding each file's contents. Then execute the master shell script and ls the files to verify them - all in one ssh session. It's not pretty or elegant, but it will work. A sketch of generating such a script follows.
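
A minimal sketch of generating such a master script on the fly and piping it to a remote shell (assuming each file ends in a newline and never contains the delimiter EOF_MARKER on a line of its own):

{
    for f in file1 file2 file3
    do
        # emit: append a here-document to the remote file, then report cat's status
        echo "cat >> 'result/$f' <<'EOF_MARKER'"
        cat "$f"
        echo "EOF_MARKER"
        echo "echo \$? $f"
    done
} | ssh user@host -i /root/.ssh/id_rsa /bin/sh

Each "<code> <filename>" line that comes back on stdout tells you whether the append for that file succeeded.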



Tags: linux ssh cat