Save all image files from a website

Posted 2019-01-24 12:50

I'm creating a small app for myself where I run a Ruby script and save all of the images from my blog.

I can't figure out how to save the image files after I've identified them. Any help would be much appreciated.

require 'rubygems'
require 'nokogiri'
require 'open-uri'

url = '[my blog url]'
doc = Nokogiri::HTML(open(url))

doc.css("img").each do |item|
  #something
end

4 Answers
做个烂人 · 2019-01-24 13:19
URL = '[my blog url]'

require 'nokogiri' # gem install nokogiri
require 'open-uri' # already part of your ruby install

Nokogiri::HTML(open(URL)).xpath("//img/@src").each do |src|
  uri = URI.join( URL, src ).to_s # make absolute uri
  File.open(File.basename(uri),'wb'){ |f| f.write(open(uri).read) }
end

Using the code to convert to absolute paths from here: How can I get the absolute URL when extracting links using Nokogiri?
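For illustration, here is how URI.join resolves an img src against the page URL; the URLs below are made up:

require 'uri'

# Hypothetical URLs, for illustration only.
URI.join('http://example.com/blog/post.html', 'images/pic.png').to_s
#=> "http://example.com/blog/images/pic.png"
URI.join('http://example.com/blog/', 'http://cdn.example.net/pic.png').to_s
#=> "http://cdn.example.net/pic.png" (an already-absolute src is left alone)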

甜甜的少女心 · 2019-01-24 13:20
# Pass src as a separate argument so the shell never interprets it.
system('wget', item['src'])

Edit: This assumes you're on a Unix system with wget installed :) Edit 2: Updated the code to grab the img src from Nokogiri.
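Putting that together with the question's loop, a minimal sketch, assuming a Unix-like system with wget on the PATH (the blog URL is a placeholder):

require 'nokogiri'
require 'open-uri'

url = '[my blog url]'
doc = Nokogiri::HTML(open(url))

doc.css('img').each do |item|
  src = item['src']
  next if src.nil? || src.empty?
  # Shell out to wget; separate arguments avoid shell injection.
  system('wget', src)
end

Note that this only works for absolute src values; relative paths would need to be resolved first, as the URI.join answer above shows.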

smile是对你的礼貌 · 2019-01-24 13:37

Tip: there's a simple way to get images from a page's head/body using the Scrapifier gem. The cool thing is that you can also define which image types you want returned (jpg, png, gif).

Give it a try: https://github.com/tiagopog/scrapifier
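A sketch of what that might look like; the call shape below (a scrapify method mixed into String, with an images: filter) is an assumption based on the gem's README, so check the repo for the current API:

require 'scrapifier' # gem install scrapifier

# Assumed API per the gem's README: String#scrapify returns a hash of
# page metadata, and :images filters the results by extension.
data = 'http://example.com'.scrapify(images: [:jpg, :png])
puts data[:images]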

Hope you enjoy.

smile是对你的礼貌 · 2019-01-24 13:45

Assuming the src attribute is an absolute URL, maybe something like:

if item['src'] =~ /([^\/]+)$/
  File.open($1, 'wb') { |f| f.write(open(item['src']).read) }
end
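As an aside, File.basename (used in the first answer) extracts the same trailing segment without a regex; a quick comparison with a made-up URL:

url = 'http://example.com/images/pic.png'
url =~ /([^\/]+)$/
$1                  #=> "pic.png"
File.basename(url)  #=> "pic.png"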