Question:
I have several web pages on several different sites that I want to mirror completely. This means that I will need images, CSS, etc., and the links need to be converted. This functionality would be similar to using Firefox to "Save Page As" and selecting "Web Page, complete". I'd like to name the files and corresponding directories as something sensible (e.g. myfavpage1.html, myfavpage1.dir).
I do not have access to the servers, and they are not my pages. Here is one sample link: Click Me!
A little more clarification... I have about 100 pages that I want to mirror (many from slow servers), I will be cron'ing the job on Solaris 10 and dumping the results every hour to a samba mount for people to view. And, yes, I have obviously tried wget with several different flags but I haven't gotten the results for which I am looking.
So, pointing to the GNU wget page is not really helpful. Let me start with where I am with a simple example.
wget --mirror -w 2 -p --html-extension --tries=3 -k -P stackperl.html "https://stackoverflow.com/tags/perl"
From this, I should see the https://stackoverflow.com/tags/perl page in the stackperl.html file, if I had the flags correct.
Answer 1:
If you're just looking to run a command and get a copy of a web site, use the tools that others have suggested, such as wget, curl, or some of the GUI tools. I use my own personal tool that I call webreaper (that's not the Windows WebReaper, though). There are a few Perl programs I know about, including webmirror and a few others you can find on CPAN.
If you're looking to do this inside a Perl program you are writing (since you have the "perl" tag on your question), there are many tools on CPAN that can help you at each step (a quick sketch follows the list):
- Downloading content: LWP::Simple, LWP::UserAgent, WWW::Mechanize
- Link extraction: HTML::LinkExtor, HTML::SimpleLinkExtor
- Link rewriting: HTML::Parser
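For instance, here is a minimal sketch that stitches a couple of those together (the URL and file name are just placeholders, and you would still have to fetch and rewrite the page requisites yourself in a second pass):

#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;
use HTML::SimpleLinkExtor;

my $url  = 'https://stackoverflow.com/tags/perl';    # placeholder URL
my $file = 'myfavpage1.html';                        # placeholder output name

my $mech = WWW::Mechanize->new( autocheck => 1 );
$mech->get($url);
$mech->save_content($file);    # write the raw HTML to disk

# collect the links and requisites you'd fetch and rewrite in a second pass
my $extor = HTML::SimpleLinkExtor->new;
$extor->parse( $mech->content );
print "$_\n" for $extor->links;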
Good luck, :)
Answer 2:
For an HTML-ized version of your sites you could use WinHTTrack - a free, open-source, GPL program. It will pull down pre-rendered versions of your pages, graphics, documents, zip files, movies, etc. Of course, since this is a mirrored copy, any dynamic backend code such as database calls won't be dynamic anymore.
http://www.httrack.com/
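If you'd rather script it than click through the GUI, the command-line httrack can do the same job; roughly something like this (the output directory and the filter here are made-up examples):

httrack "https://stackoverflow.com/tags/perl" -O /export/mirror/myfavpage1.dir "+*.stackoverflow.com/*" -v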
Answer 3:
Personally, the last time I had the urge to do this, I wrote a Python script which made a copy of my browser cache, then manually visited all the pages I wished to mirror. A very ugly solution, but it has the nice advantage of not triggering any "don't scrape my page" alarms. Thanks to Opera's links tab bar, "manually" downloading tens of thousands of pages wasn't nearly as hard as you'd think.
Answer 4:
I'll echo the "it's not clear" comment. Are these web pages/sites that you've created, and you want to deploy them on multiple servers? If so, use relative references in your HTML, and you should be OK. Or, use a <base> in your <head> and adjust it on each site. But, relativity is really the way to go.
Or, are you saying that you'd like to download websites (like the Stack Overflow homepage, perl.com, etc.) to have local copies on your computer? I'll agree with Daniel - use wget.
Jim
Answer 6:
You may use the GNU wget tool to grab an entire site like this:
wget -r -p -np -k URL
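For the single-page, "Web Page, complete" style grab described in the question, a variation along these lines may be closer (untested sketch; -P takes a directory prefix, so stackperl.dir is just an example name):

wget -p -k --html-extension -w 2 --tries=3 -P stackperl.dir "https://stackoverflow.com/tags/perl"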
Or, if you use Perl, try these modules (a tiny example follows the list):
LWP::Simple
WWW::Mechanize
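For example, grabbing a single page with LWP::Simple takes about two lines (the URL and file name are placeholders):

use LWP::Simple;
getstore('https://stackoverflow.com/tags/perl', 'myfavpage1.html');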
Answer 7:
If wget is complicated or you don't have a Linux box, you could always use WebZip.
Answer 8:
It sounds like you want the caching functionality provided by a good proxy server.
Maybe look into something like SQUID? Pretty sure it can do it.
This is more of a sysadmin-type question than a programming one, though.
Answer 9:
On most modern websites the front end tells only a small part of the story. Regardless of tools for stripping out the HTML, CSS, and JavaScript, you will still be missing the core functionality that lives on the server.
Or maybe you were meaning something else.