Is it possible to read only the first N bytes from an HTTP server?

Posted 2019-01-23 13:27

Question:

Here is the question.

Given the URL http://www.example.com, can we read just the first N bytes of the page?

  • using wget, we can only download the whole page.
  • using curl, there is -r: `0-499` specifies the first 500 bytes. That seems to solve the problem, but:

    You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.

  • using urllib in Python. There is a similar question here, but according to Konstantin's comment, is that really true?

    Last time I tried this technique it failed because it was actually impossible to read from the HTTP server only specified amount of data, i.e. you implicitly read all HTTP response and only then read first N bytes out of it. So at the end you ended up downloading the whole 1Gb malicious response.
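That claim is worth checking: with Python's standard urllib the response object is a stream, so `read(N)` pulls at most N body bytes off the socket rather than buffering the whole response first. A minimal sketch against a throwaway local server (the server, port, and N below are illustrative, not from the question):

```python
import http.server
import threading
import urllib.request

# Throwaway local server that serves a deliberately large body, so we can
# check that urllib does not force us to download all of it.
class BigBodyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"x" * 1_000_000  # stand-in for a huge (or malicious) page
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        try:
            self.wfile.write(body)
        except (BrokenPipeError, ConnectionResetError):
            pass  # client hung up early -- exactly what we want

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), BigBodyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 500
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    first_bytes = resp.read(N)  # pulls at most N body bytes off the socket

print(len(first_bytes))  # 500
```

Closing the response early means the server may still be mid-send; the kernel simply tears the connection down, so the 1 GB worst case never crosses the wire.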

So the problem is: how can we read the first N bytes from an HTTP server in practice?

Regards & Thanks

Answer 1:

    curl <url> | head -c 499

or

    curl <url> | dd bs=1 count=499

should do it.

There are also simpler utilities with perhaps broader availability, like

    netcat host 80 <<"HERE" | dd bs=1 count=499 of=output.fragment
    GET /urlpath/query?string=more&bloddy=stuff

    HERE


Answer 2:

You can do it natively with the following curl command (no need to download the whole document). According to the curl man page:

RANGES: HTTP/1.1 introduced byte-ranges. Using this, a client can request to get only one or more subparts of a specified document. curl supports this with the -r flag.

Get the first 100 bytes of a document:
    curl -r 0-99 http://www.get.this/

Get the last 500 bytes of a document:  
    curl -r -500 http://www.get.this/

`curl` also supports simple ranges for FTP files; there you can only specify the start and stop positions.

Get the first 100 bytes of a document using FTP:
    curl -r 0-99 ftp://www.get.this/README

It works for me even with a Java web app deployed to GigaSpaces.
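The same byte-range request can be sent from Python; whether you get back 206 Partial Content or a plain 200 tells you whether the server honoured the range or ignored it, so the caller still caps the read. A hedged sketch (the function name and URL are placeholders):

```python
import urllib.request

# Send a byte-range request and report whether the server honoured it.
# Servers that ignore ranges answer 200 with the full body instead of
# 206 Partial Content, so we cap the read at n bytes either way.
def fetch_first_bytes(url, n):
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{n - 1}"})
    with urllib.request.urlopen(req) as resp:
        honoured = resp.status == 206
        data = resp.read(n)  # read at most n bytes regardless of status
    return honoured, data
```

For example, `fetch_first_bytes("http://www.get.this/", 100)` mirrors `curl -r 0-99`.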



Answer 3:

You should also be aware that many HTTP/1.1 servers do not have this feature enabled, so that when you attempt to get a range, you'll instead get the whole document.

You will have to receive the whole document anyway, so you can fetch it with curl and pipe it to head, for example.

head

    -c, --bytes=[-]N
        print the first N bytes of each file; with the leading '-',
        print all but the last N bytes of each file



Answer 4:

Make a socket connection. Read the bytes you want. Close, and you're done.
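That idea, sketched in Python (host, port, and path are placeholders; the request is plain HTTP/1.0 to keep the framing simple):

```python
import socket

# "Make a socket connection, read the bytes you want, close": read at most
# n bytes of the raw response (status line and headers included), then hang up.
def read_first_bytes(host, port, path, n):
    request = (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        chunks, remaining = [], n
        while remaining > 0:
            chunk = sock.recv(min(remaining, 4096))
            if not chunk:  # server closed before n bytes arrived
                break
            chunks.append(chunk)
            remaining -= len(chunk)
    return b"".join(chunks)
```

Note that the n bytes here include the status line and headers; skip past the first `\r\n\r\n` if you only want body bytes.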



Answer 5:

I came here looking for a way to time the server's processing time, which I thought I could measure by telling curl to stop downloading after 1 byte or something.

For me, the better solution turned out to be to do a HEAD request, since this usually lets the server process the request as normal but does not return any response body:

    time curl --head <URL>
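The same measurement can be taken from Python if you need it programmatically (the function name is illustrative; host, port, and path are placeholders). HEAD makes the server do the work of a GET but send headers only:

```python
import http.client
import time

# Time how long the server takes to answer a HEAD request. The response
# carries headers but no body, so transfer time is negligible.
def time_head(host, port, path="/"):
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        start = time.perf_counter()
        conn.request("HEAD", path)
        resp = conn.getresponse()
        resp.read()  # HEAD responses have no body; returns b"" at once
        elapsed = time.perf_counter() - start
        return resp.status, elapsed
    finally:
        conn.close()
```

Keep in mind this measures processing plus one round trip, just like `time curl --head` does.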