How to get a list of all paths/files on a webpage

Posted 2019-09-01 04:53

I use wget -p $url to download all the files on a webpage so that I can build a list of them. But for some URLs it turns out that wget fetches only index.html. Is there a way to get a list of the files at a specific URL with wget or cURL? Do I need to inspect the request and response headers?

1 answer
叛逆
#2 · 2019-09-01 05:03

Many servers disable directory listings, and when a directory contains a default document (such as index.html), the server serves that document instead of a listing, so you cannot browse the directory's contents either way.
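On the header question: you can inspect what the server sends back with a HEAD request, but the headers only describe the document being served; they will not reveal what else is in the directory. A minimal sketch with curl (example.com/files/ is a placeholder for the directory you are probing):

    # Send a HEAD request and print only the response headers.
    # A 200 with Content-Type: text/html just means some HTML came back;
    # headers alone cannot tell you whether it is a generated listing
    # or a default document.
    curl -I http://example.com/files/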

You need to implement a spider that parses the HTML, extracts every path, file, and link it declares, and builds a directory structure from what it finds. Then you can download those files. A minimal sketch of this is shown below.
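One way to approximate that spider is wget's own spider mode, which follows links without saving the files (flag behavior and the exact log format vary across wget versions, so treat this as a starting point rather than a finished tool):

    # Crawl recursively without downloading anything; -nv logs one terse
    # line per URL, and --no-parent keeps the crawl below the start URL.
    wget --spider -r -nv --no-parent -o wget.log "$url"

    # Extract the unique URLs from the log into a plain file list.
    grep -oE 'https?://[^ "]+' wget.log | sort -u > files.txt

Keep in mind that any crawler, wget's or your own, can only discover files that are actually linked from the HTML; files that sit on the server but are never referenced stay invisible.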
