I had a test case like this:
scenario "there should be an SVG tag" do
...
page.find("svg")
end
For some reason, Capybara could not find the svg tag, even though the tag was there when I looked in the page's source (and it was visible on the rendered page).
I was only able to get it to find the SVG tag after I did something like:
scenario "there should be an SVG tag" do
...
page.find("#layers *[xmlns='http://www.w3.org/2000/svg']")
end
(Note: the svg element is inside the element with the ID "layers".)
Does anyone have any ideas? I use Selenium as the driver.
It turns out this is an issue with Firefox's built-in XPath evaluator.
Using Firebug, I was able to verify that the call that Selenium uses:
document.evaluate("//svg", document, null, 9, null).singleNodeValue
doesn't return any elements, whereas
document.evaluate("//div", document, null, 9, null).singleNodeValue
returns the first div on the page.
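You can also reproduce this check from the test itself rather than through Firebug: Capybara's evaluate_script runs a JavaScript expression through the driver. A rough sketch (the XPath and the null comparison are purely illustrative):

# Ask Firefox's own XPath evaluator for an svg node via the Selenium driver;
# this evaluates to true when singleNodeValue comes back null (nothing found).
page.evaluate_script(
  "document.evaluate('//svg', document, null, 9, null).singleNodeValue === null"
)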
There may be some way of handling the namespacing that would get Firefox to return svg elements; for now I've just looked for elements with the svg xmlns attribute, as in the selector above.
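A namespace-agnostic XPath might also work here, since local-name() matches elements regardless of their namespace. Something along these lines (a sketch I haven't verified against this exact setup):

# Match any element whose local name is "svg", ignoring its namespace.
page.find(:xpath, "//*[local-name()='svg']")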
I have found a solution which enables the use of CSS selectors:
scenario "there should be an SVG tag" do
...
Nokogiri::HTML.parse(page.body).css('svg')
end
Strange and annoying that it doesn't work out of the box using page.find(), though.
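If you want the scenario to actually fail when no svg element is rendered, the Nokogiri lookup can be wrapped in an expectation. A minimal sketch, assuming RSpec's expect syntax is available:

scenario "there should be an SVG tag" do
  # ... visit the page that should render the SVG ...
  # Parse the rendered page with Nokogiri and assert an svg element exists.
  svg_nodes = Nokogiri::HTML.parse(page.body).css("svg")
  expect(svg_nodes).not_to be_empty
end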