I have a script that generates around 16,000 HTML pages and saves them to the file system. After 1013 pages I get the error "Too many open files".
This is the Ruby code that generates the files:
```ruby
FileUtils.mkdir_p "public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}"
FileUtils.mkdir_p "public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}/#{n/1000}"
html_file = File.new("public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}/#{n/1000}/#{n}.html", "w")
html_file.write(html)
html_file.close
```
As you can see, I close the file in the last line.
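For what it's worth, I know the block form of `File.open` closes the handle automatically when the block exits, even if the write raises; this is just a minimal sketch of that alternative, not what the script currently does:

```ruby
# Minimal sketch (not my current code): the block form closes the file
# automatically when the block exits, even if write raises an exception.
File.open("public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}/#{n/1000}/#{n}.html", "w") do |f|
  f.write(html)
end
```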
Does somebody know what I am doing wrong here? I am on Ubuntu 8.04.4 LTS.
Thanks a lot
Edit:
This is the whole script:
```ruby
def self.fetching_directory_page(n = 1, letter = nil)
  id  = letter == '' ? "" : "/#{letter.upcase}"
  url = "this is a valid url :)"
  agent = WWW::Mechanize.new
  page  = agent.get(url)
  html  = page.search('div#my_profile_body').to_html
  prefix = id == '' ? 'all' : letter
  FileUtils.mkdir_p "public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}"
  FileUtils.mkdir_p "public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}/#{n/1000}"
  html_file = File.new("public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}/#{n/1000}/#{n}.html", "w")
  html_file.write(html)
  html_file.close
  puts "+ CREATED #{prefix}/#{n/1000}/#{n}.html"
  new_url = page.parser.xpath("//a[@class='next_page']")[0]['href'] rescue nil
  if new_url.present?
    self.fetching_directory_page(n + 1, letter)
  end
end
```
It fetches all the users of my users directory and saves each page for caching reasons. It generates 16,000 files in total.
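In case the recursion itself matters: the method calls itself once per page, so by the last page there would be thousands of stack frames, each still holding its own `WWW::Mechanize` agent and page object (and presumably whatever connection the agent keeps open). Below is a rough, untested sketch of the same flow as a loop that reuses a single agent; the method name and variables here are mine, not in the real script. I haven't switched to this yet, I only mention it because the depth of the recursion might be related.

```ruby
# Untested sketch: the same flow rewritten as a loop that reuses one agent,
# so only one page/connection is referenced at a time.
def self.fetching_directory_pages(letter = nil)
  prefix = letter.blank? ? 'all' : letter
  agent  = WWW::Mechanize.new
  url    = "this is a valid url :)"   # same placeholder as above
  n      = 1
  while url
    page = agent.get(url)
    html = page.search('div#my_profile_body').to_html
    dir  = "public/users_directory/#{DEFAULT_COUNTRY_CODE}/#{prefix}/#{n / 1000}"
    FileUtils.mkdir_p(dir)
    File.open("#{dir}/#{n}.html", "w") { |f| f.write(html) }
    puts "+ CREATED #{prefix}/#{n / 1000}/#{n}.html"
    link = page.parser.xpath("//a[@class='next_page']")[0]
    url  = link && link['href']
    n   += 1
  end
end
```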
This is the output of `ulimit -a`:
```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 24640
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 24000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 24640
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
After editing /etc/security/limits I no longer get the "Too many open files" error, but the script just gets stuck.
`lsof -u username` returns a list of roughly 600 entries, and that number doesn't change while the script is running.
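To see whether descriptors are actually accumulating inside the Ruby process itself (rather than relying on the per-user `lsof` view), I could log the count from within the script. A small, untested, Linux-only sketch using `/proc/self/fd`:

```ruby
# Untested, Linux-only sketch: count the file descriptors this process holds
# by listing /proc/self/fd (subtracting the "." and ".." entries).
def self.open_fd_count
  Dir.entries("/proc/self/fd").size - 2
end

# e.g. print it every 100 pages inside the fetch loop:
# puts "open fds: #{open_fd_count}" if n % 100 == 0
```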