I've used Tesseract a bit and its results leave much to be desired. I'm currently trying to OCR very small images (35x15 pixels, without a border, though I've tried adding one with ImageMagick with no OCR advantage); they range from 2 to 5 characters in a fairly consistent font, but the characters vary enough that simply using an image checksum or similar lookup is not going to work.
What options exist for OCR besides sticking with Tesseract or doing a complete custom training of it? Also, it would be VERY helpful if this were compatible with Heroku-style hosting (at least where I can compile the bins and shove them over).
I have successfully used GOCR in the past for small image OCR. I would say accuracy was around 85%, after getting the grayscale options set properly, on fairly regular fonts. It fails miserably when the fonts get complicated and has trouble with multiline layouts.
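To show what "getting the grayscale options set properly" looks like in practice, here is a minimal Ruby wrapper around the gocr command line. This is a hedged sketch: -i (input file) and -l (0-255 gray-level threshold) are documented gocr options, but the threshold value of 160 and the image path are illustrative, not values from the answer — you would tune -l per image set.

```ruby
# Hedged sketch: wrapping GOCR from Ruby. The gray-level threshold
# (160) and the image path are illustrative assumptions; tune -l
# against your own images.
require "open3"

# Build the argument vector. -i names the input image and -l sets
# the 0-255 gray-level threshold (both documented gocr options).
def gocr_command(image_path, gray_level: 160)
  ["gocr", "-l", gray_level.to_s, "-i", image_path]
end

# Run gocr and return the recognized text; raises if gocr is not
# installed or exits non-zero.
def gocr_text(image_path, gray_level: 160)
  out, status = Open3.capture2(*gocr_command(image_path, gray_level: gray_level))
  raise "gocr failed: #{status}" unless status.success?
  out.strip
end
```

Sweeping gray_level over a few values (e.g. 120, 160, 200) and keeping the most plausible result is a cheap way to squeeze accuracy out of small images.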
Also have a look at Ocropus, which is maintained by Google. It's related to Tesseract, but from what I understand its OCR engine is different. With just the default models included, it achieves near-99% accuracy on high-quality images, handles layout pretty well, and provides HTML output with information about formatting and lines. However, in my experience its accuracy drops sharply when the image quality isn't good enough. That said, training is relatively simple, and you might want to give it a try.
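As a concrete starting point, here is a hedged Ruby sketch of driving the Ocropus pipeline. The ocropus-* script names follow the ocropy distribution (binarize, segment into lines, recognize, emit hOCR); the "book" output directory and page image name are illustrative, and the quoted ???? globs are expanded by the ocropus scripts themselves.

```ruby
# Hedged sketch of the Ocropus (ocropy) pipeline from Ruby.
# Script names follow the ocropy project; "book" and "page.png"
# are illustrative, not values from the answer.
def ocropus_commands(image, outdir: "book")
  [
    "ocropus-nlbin #{image} -o #{outdir}",                     # binarize and deskew
    "ocropus-gpageseg '#{outdir}/????.bin.png'",               # split pages into lines
    "ocropus-rpred '#{outdir}/????/??????.bin.png'",           # recognize with the default model
    "ocropus-hocr '#{outdir}/????.bin.png' -o #{outdir}.html"  # hOCR/HTML output with line info
  ]
end

# Run each stage in order, stopping at the first failure.
def run_ocropus(image)
  ocropus_commands(image).each do |cmd|
    system(cmd) or raise "step failed: #{cmd}"
  end
end
```

The hOCR file produced by the last step is where the formatting and line information mentioned above ends up.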
Both of them are easily callable from the command line. GOCR usage is very straightforward; just type
gocr -h
and you should have all the information you need. Ocropus is a bit more tricky: its pipeline runs as several separate command-line steps, so you will probably want to wrap it in a script.

We use OCR XTR Lite from Vividata at my office. It uses the ScanSoft engine and is very accurate, but it isn't a free solution. It is currently scripted from bash, and I process 75,000 to 150,000 pages a day with it. Accuracy is almost perfect, and it auto-rotates the images to determine the OCR orientation.