I have an HTML (not XHTML) document that renders fine in Firefox 3 and IE 7. It uses fairly basic CSS for styling and renders correctly as HTML.
I'm now after a way of converting it to PDF. I have tried:
- DOMPDF: it had huge problems with tables. I factored out my large nested tables and that helped (before, it was just consuming up to 128M of memory and then dying; that's my memory limit in php.ini), but it still makes a complete mess of tables and doesn't seem to handle images. The tables were just basic stuff with some border styles to add lines at various points;
- HTML2PDF and HTML2PS: I actually had better luck with this. It rendered some of the images (all the images are Google Chart URLs), and the table formatting was much better, but it seemed to hit some complexity problem I haven't figured out yet and kept dying with unknown node_type() errors. Not sure where to go from here; and
- Htmldoc: this seems to work fine on basic HTML but has almost no support for CSS whatsoever, so you have to do everything in HTML (I didn't realize it was still 2001 in Htmldoc-land...), which makes it useless to me.
I tried a Windows app called Html2Pdf Pilot that actually did a pretty decent job, but I need something that at a minimum runs on Linux and ideally runs on-demand via PHP on the web server.
What am I missing, or how can I resolve this issue?
I recommend TCPDF or DOMPDF, in that order.
Try grabbing the latest nightly dompdf build - I was using an older version that was a terrible resource hog and took forever to render my PDF. After grabbing a nightly from here, it only took a few seconds to generate the PDF, and it was rendered just as nicely as with PrinceXML / DocRaptor. It seems they've seriously optimized the dompdf code since I last used it!
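For anyone trying it, here is a minimal sketch using dompdf's classic (pre-namespaced) API; the include path and HTML string are placeholders, and newer releases expose a Dompdf\Dompdf class with loadHtml()/render() instead:

    <?php
    // Minimal dompdf sketch (classic API; newer builds use the Dompdf\Dompdf class).
    require_once 'dompdf/dompdf_config.inc.php';

    $html = '<h1>Invoice</h1><p>Styled with plain CSS.</p>'; // placeholder markup

    $dompdf = new DOMPDF();
    $dompdf->load_html($html);
    $dompdf->set_paper('a4', 'portrait');
    $dompdf->render();
    $dompdf->stream('invoice.pdf'); // sends the generated PDF to the browser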
I've tried a lot of different PHP libraries, including all of the ones listed here. In my opinion, TCPDF is the best compromise between performance and usability. It's very simple to install and use, and it performs well in small-to-medium applications. If you need high performance and very large PDF documents, use the Zend_PDF module instead, but be prepared to do some hard coding!
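A minimal TCPDF sketch, for reference - the include path and HTML string are placeholders, and writeHTML() is TCPDF's built-in (and somewhat limited) HTML renderer:

    <?php
    // Minimal TCPDF sketch: include path and $html content are placeholders.
    require_once 'tcpdf/tcpdf.php';

    $pdf = new TCPDF('P', 'mm', 'A4', true, 'UTF-8');
    $pdf->AddPage();

    $html = '<h1>Report</h1><table border="1"><tr><td>Cell</td></tr></table>';
    $pdf->writeHTML($html, true, false, true, false, '');

    $pdf->Output('report.pdf', 'D'); // 'D' forces a download; 'F' saves to disk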
This question is pretty old already, but I haven't seen anyone mention CutyCapt, so I will :)
CutyCapt
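CutyCapt is a small WebKit-based command-line tool that renders a page the way a browser would and writes it out as PDF (among other formats). A rough sketch of calling it from PHP - the binary path, xvfb-run wrapper, and URL are assumptions about a typical headless Linux setup:

    <?php
    // Rough sketch: shell out to CutyCapt to render a URL to PDF.
    // Paths and the xvfb-run wrapper are assumptions; adjust for your install.
    $url = escapeshellarg('http://localhost/report.html');
    $out = escapeshellarg('/tmp/report.pdf');

    exec("xvfb-run --server-args='-screen 0, 1024x768x24' CutyCapt --url=$url --out=$out", $lines, $status);

    if ($status !== 0) {
        die('CutyCapt failed with exit code ' . $status);
    }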
1) Use mPDF!
a) extract the library into yourfolder
b) create file.php in yourfolder and insert code like the sketch below
c) open file.php from your browser
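A minimal mPDF sketch for file.php, assuming the classic include-based API (the include path and HTML are placeholders; mPDF 7+ uses the namespaced \Mpdf\Mpdf class instead):

    <?php
    // Minimal mPDF sketch for file.php; paths and markup are placeholders.
    include 'mpdf/mpdf.php';

    $html = '<h1>Hello</h1><p>Rendered by mPDF.</p>';

    $mpdf = new mPDF();
    $mpdf->WriteHTML($html);
    $mpdf->Output('output.pdf', 'I'); // 'I' displays the PDF inline in the browser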
2) Use pdfToHtml!
1) extract pdftohtml.exe to your root folder
2) inside that folder, in an anyfile.php file, put code like the sketch below (assuming there is a source example.pdf in the same folder)
3) open FinalFolder; the converted files will be there (as many pages as the source PDF had)
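A rough sketch of what anyfile.php could contain - the flags and output prefix are assumptions, so run pdftohtml -h to confirm them for your build:

    <?php
    // Rough sketch for anyfile.php: shell out to pdftohtml to convert example.pdf.
    if (!is_dir('FinalFolder')) {
        mkdir('FinalFolder');
    }

    // -c keeps the page layout (complex mode); the last argument is the output prefix.
    exec('pdftohtml.exe -c example.pdf FinalFolder/example', $lines, $status);

    if ($status !== 0) {
        die('pdftohtml failed with exit code ' . $status);
    }

    echo 'Done - check FinalFolder for the generated HTML pages.';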
After some investigation and general hair-pulling, the solution seems to be HTML2PDF. DOMPDF did a terrible job with tables, borders and even moderately complex layout, and htmldoc seems reasonably robust but is almost completely CSS-ignorant, and I don't want to go back to doing HTML layout without CSS just for that one program.
HTML2PDF looked the most promising, but I kept getting a weird error about null reference arguments to node_type. I finally found the solution. Basically, PHP 5.1.x did regex replaces (preg_replace_*) on strings of any size without complaint. PHP 5.2.1 introduced a php.ini config directive called pcre.backtrack_limit. What this parameter does is limit the size of string for which matching is done. Why it was introduced, I don't know. The default value was chosen as 100,000. Why such a low value? Again, no idea.
A bug was raised against PHP 5.2.1 for this, which is still open almost two years later.
What's horrifying about this is that when the limit is exceeded, the replace just silently fails. At least if an error had been raised and logged you'd have some indication of what happened, why and what to change to fix it. But no.
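For what it's worth, you can detect the silent failure yourself: when the limit is exceeded, preg_replace() returns NULL and preg_last_error() reports PREG_BACKTRACK_LIMIT_ERROR. A minimal sketch (the pattern, input file and retry limit are only illustrative):

    <?php
    // Minimal sketch: detect a preg_replace() that failed on pcre.backtrack_limit.
    $html  = file_get_contents('report.html');          // placeholder input
    $clean = preg_replace('/<!--.*?-->/s', '', $html);  // placeholder pattern

    if ($clean === null && preg_last_error() === PREG_BACKTRACK_LIMIT_ERROR) {
        // The match blew past pcre.backtrack_limit; raise it and retry.
        ini_set('pcre.backtrack_limit', '1000000');      // value is illustrative
        $clean = preg_replace('/<!--.*?-->/s', '', $html);
    }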
So I have a 70k HTML file to turn into PDF. It requires the following php.ini settings:
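Something along these lines (the exact numbers are illustrative; size them to your document and available RAM):

    ; illustrative values only
    pcre.backtrack_limit = 1000000   ; the 100,000 default is too low for large HTML
    memory_limit = 1024M             ; html2pdf needs plenty of headroom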
Now the astute reader may have noticed that my HTML file is smaller than 100k. The only reason I can guess as to why I hit this problem is that html2pdf does a conversion into xhtml as part of the process. Perhaps that took me over (although nearly 50% bloat seems odd). Whatever the case, the above worked.
Now, html2pdf is a resource hog. My 70k file takes approximately 5 minutes and at least 500-600M of RAM to create a 35-page PDF. Not quick enough (by far) for a real-time download, unfortunately, and the memory-to-input ratio is roughly 10,000-to-1 (600M of RAM for a 70k file), which is utterly ridiculous.
Unfortunately, that's the best I've come up with.