Before you tell me to use parse_url: it's not nearly good enough and has too many bugs. There are many questions on the subject of parsing URLs to be found on here, but nearly all of them deal with parsing only some specific class of URLs, or are otherwise incomplete.
I'm looking for a definitive RFC-compliant URL parser in PHP that will reliably process any URL that a browser is likely to encounter. In this I include:
- Page-internal links: `#`, `#title`
- Page-relative URLs: `blah/thing.php`
- Site-relative URLs: `/blah/thing.php`
- Anonymous-protocol URLs: `//ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js`
- Callto URLs: `callto:+442079460123`
- File URLs: `file:///Users/me/thisfile.txt`
- Mailto URLs: `mailto:user@example.com?subject=hello`, `mailto:?subject=hello`
as well as support for all the usual scheme/authentication/domain/path/query/fragment components, breaking all of those elements out into an array with extra flags for relative/schemeless URLs. Ideally it would come with a URL reconstructor (like http_build_url) supporting the same elements, and I'd also like validation to be applied (i.e. it should make a best-guess interpretation of an invalid URL but flag it as such, just as browsers do).
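Purely to illustrate the shape of result I'm after (none of these key names come from an existing library; they're made up for the example), a protocol-relative URL might parse to something like:

```php
// Illustration only: the kind of array I'd want back for
// '//ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js'.
$parts = array(
    'scheme'     => null,                                     // no scheme given
    'host'       => 'ajax.googleapis.com',
    'path'       => '/ajax/libs/jquery/1.8.1/jquery.min.js',
    'query'      => null,
    'fragment'   => null,
    'relative'   => false,                                    // flag: path is absolute
    'schemeless' => true,                                     // flag: protocol-relative URL
    'valid'      => true,                                     // flag: passed validation
);
```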
This answer contained a tantalising Fermat-style reference to such a beast, but it doesn't actually go anywhere.
I've looked in all the major frameworks, but they only seem to provide thin wrappers around parse_url, which is generally a bad place to start since it makes so many mistakes.
So, does such a thing exist?
Not sure how many bugs `parse_url()` has, but this might help (source: http://tools.ietf.org/html/rfc3986#page-51):
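A minimal sketch of that approach in PHP, using the regex from RFC 3986 Appendix B; the `split_url()` name and the shape of the returned array are illustrative choices, not part of the RFC:

```php
<?php
// Parse a URL with the regex from Appendix B of RFC 3986.
function split_url($url)
{
    $rfc3986 = '~^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?~';
    preg_match($rfc3986, $url, $m);   // always matches: every component is optional

    return array(
        'scheme'    => isset($m[2]) ? $m[2] : '',  // e.g. "http"
        'authority' => isset($m[4]) ? $m[4] : '',  // e.g. "user@host:8080"
        'path'      => isset($m[5]) ? $m[5] : '',  // e.g. "/blah/thing.php"
        'query'     => isset($m[7]) ? $m[7] : '',  // e.g. "subject=hello"
        'fragment'  => isset($m[9]) ? $m[9] : '',  // e.g. "title"
    );
}

// Example:
// print_r(split_url('//ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js'));
// => authority "ajax.googleapis.com", path "/ajax/libs/jquery/1.8.1/jquery.min.js",
//    everything else empty.
```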
It breaks down the location as:
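Using the RFC's own capture-group numbering:

```
scheme    = $2
authority = $4
path      = $5
query     = $7
fragment  = $9
```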
To rebuild, you could use:
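Here is a sketch of the reverse step, in the spirit of http_build_url(); `join_url()` is an illustrative name, it assumes the array shape returned by `split_url()` above, and it follows the recomposition order from RFC 3986 section 5.3:

```php
<?php
// Reassemble a URL from the component array produced by split_url().
function join_url(array $parts)
{
    $url = '';
    if ($parts['scheme'] !== '') {
        $url .= $parts['scheme'] . ':';
    }
    if ($parts['authority'] !== '') {
        $url .= '//' . $parts['authority'];
    }
    $url .= $parts['path'];
    if ($parts['query'] !== '') {
        $url .= '?' . $parts['query'];
    }
    if ($parts['fragment'] !== '') {
        $url .= '#' . $parts['fragment'];
    }
    return $url;
}

// Round trip:
// echo join_url(split_url('mailto:user@example.com?subject=hello'));
// => mailto:user@example.com?subject=hello
//
// Caveat: an empty-but-present authority (e.g. file:///Users/me/thisfile.txt)
// is collapsed by this sketch; a fuller version would track presence
// separately from emptiness.
```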