I'm creating a small app for myself: a Ruby script that downloads all of the images from my blog.
I can't figure out how to save the image files once I've identified them. Any help would be much appreciated.
require 'rubygems'
require 'nokogiri'
require 'open-uri'
url = '[my blog url]'
doc = Nokogiri::HTML(open(url))
doc.css("img").each do |item|
  # something
end
URL = '[my blog url]'
require 'nokogiri'  # gem install nokogiri
require 'open-uri'  # part of the standard library
Nokogiri::HTML(URI.open(URL)).xpath("//img/@src").each do |src|
  uri = URI.join(URL, src.value).to_s  # make the URI absolute
  File.open(File.basename(uri), 'wb') { |f| f.write(URI.open(uri).read) }
end
The code for converting relative paths to absolute URLs comes from here: How can I get the absolute URL when extracting links using Nokogiri?
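To see what that URI.join step does, here is a minimal sketch with hypothetical URLs — it resolves both relative and root-relative src values against the page's address:

```ruby
require 'uri'

# URI.join resolves a (possibly relative) src against the page URL.
# Both URLs below are hypothetical examples.
base = 'http://example.com/blog/post.html'
URI.join(base, 'img/cat.png').to_s   # => "http://example.com/blog/img/cat.png"
URI.join(base, '/img/cat.png').to_s  # => "http://example.com/img/cat.png"
```

An src that is already absolute passes through unchanged, which is why the answer above can apply it to every image uniformly.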
Assuming the src attribute is an absolute URL, maybe something like:
if item['src'] =~ /([^\/]+)$/
  File.open($1, 'wb') { |f| f.write(URI.open(item['src']).read) }
end
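The capture group in that regex grabs everything after the last "/". As a quick sanity check (with a hypothetical src), File.basename from the standard library produces the same filename:

```ruby
# Both approaches extract the filename portion of the URL.
# The src below is a hypothetical example.
src = 'http://example.com/images/photo.jpg'
src =~ /([^\/]+)$/
$1                  # => "photo.jpg"
File.basename(src)  # => "photo.jpg"
```

Note that either form keeps any query string as part of the filename, so a src like ".../photo.jpg?v=2" would need extra cleanup.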
Tip: there's a simple way to get images from a page's head/body using the Scrapifier gem. A nice feature is that you can also define which image types you want returned (jpg, png, gif).
Give it a try: https://github.com/tiagopog/scrapifier
Hope you enjoy.
system("wget #{item['src']}")  # plain string, not %x{}: %x would run wget itself and then pass its output to system
Edit: This is assuming you're on a Unix system with wget :)
Edit 2: Updated code for grabbing the img src from nokogiri.
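One caveat with interpolating the src straight into a shell command: a URL containing spaces or shell metacharacters would break it. A small sketch using Shellwords from the standard library (the src below is a hypothetical example):

```ruby
require 'shellwords'

# shellescape backslash-escapes characters the shell would interpret,
# so the URL survives as a single wget argument.
src = 'http://example.com/images/my photo.jpg'  # hypothetical src with a space
cmd = "wget #{src.shellescape}"
# the space in the filename is now backslash-escaped
```

Alternatively, system('wget', src) with the URL as a separate argument bypasses the shell entirely, so no escaping is needed at all.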