OCR Toolkit User's Manual

Last modified: July 27, 2010

This documentation is for those who want to use the toolkit for OCR, but are not interested in extending the toolkit itself.

Overview

The toolkit provides the functionality to segment an image page into text lines, words, and characters, to sort them in reading order, and to generate an output string.

Before you can use the OCR toolkit, you must first train characters from sample pages; this training data is then used by the toolkit to classify characters:

[Image: images/overview.png]

Hence the proper use of this toolkit requires the following two steps:

1. train the characters of some sample pages with Gamera's training dialog
2. run the recognition on the actual documents with the training data from the first step

There are two ways to use this toolkit: you can either use the script ocr4gamera.py as provided by the toolkit, or you can build your own recognition scripts with the aid of the Python library functions provided by the toolkit. Both alternatives are described below.

Using the script ocr4gamera.py

The ocr4gamera.py script takes an image and previously trained data, and segments the image into individual glyphs. The training data is used to classify these glyphs and convert them into a string. The final text is written to standard output or can optionally be stored in a text file. A word-by-word correction can also be performed on the recognized text.

The end-user application ocr4gamera.py will be installed to /usr/bin unless you have explicitly chosen a different location. Its synopsis is:

ocr4gamera.py -x <trainingdata> [options] <imagefile>

Options can be given in short form (one dash, one character) or long form (two dashes, string). When called with -h, --help, or an invalid option, a usage message is printed. The other options are listed below; an example invocation follows the list:

-x trainingdata, --xml-file=trainingdata
This option is required. trainingdata must be an xml file created with Gamera's training dialog.
-o outfile, --output=outfile
Writes the output text to outfile. When not given, the result is printed to stdout.
-a, --automatic-group
Uses Gamera's automatic grouping algorithm during classification. This can be helpful when glyphs are fragmented.
-d, --deskew
Performs a skew correction before page segmentation.
-f, --filter
Enables some basic filter operations like deleting very big or very small connected components.
-D, --dictionary-correction
Enables the dictionary-check post-processing step. This requires one of the Unix spelling tools aspell or ispell to be installed. Do not forget to also install the needed language dictionary and to select it, either via the LANG environment variable or with the -L option.
-L language, --dictionary-language=language
Sets the dictionary language for the correction process. When not given, the language from the locale settings (aspell) or the default language (ispell) is used.
-e number, --edit-distance=number
Sets the maximum edit distance between a recognized word and its corrected word. The distance is calculated with Gamera's built-in function edit_distance. The value must be an integer; the default is 2.
-c csvfile, --extra_chars_csvfile=csvfile

Reads a user-defined list of comma-separated pairs (classname, output), one pair per line, as in the following example:

latin.small.ligature.st,st
latin.small.ligature.ft,ft
latin.small.letter.long.s,s
-R rules, --heuristic_rules=rules
Applies heuristic rules for the disambiguation of some characters. rules can be roman (the default) or none (for no rules).
-v level, --information=level
Sets the verbosity level to level. When set to 1, debug information is printed to stdout. When set to 2, three images are additionally written to the current directory: debug_lines.png has the detected textlines marked, debug_chars.png has all segmented characters marked, and debug_words.png has all words marked. This can be useful for identifying segmentation errors.
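
For example, the following invocation (all file names are placeholders) deskews the image scan.png, recognizes it with the training data in training.xml, and writes the result to result.txt:

ocr4gamera.py -x training.xml -d -o result.txt scan.png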

Writing custom scripts

If you want to write your own scripts for recognition, you can use ocr4gamera.py as a good starting point.

In order to access the OCR Toolkit classes and functions, you must import them at the beginning of your script:

from gamera.toolkits.ocr.ocr_toolkit import *
from gamera.toolkits.ocr.classes import Textline,Page,ClassifyCCs

After that you can segment an image with the Page class and its method segment():

# load_image and ONEBIT live in the Gamera core, which must be
# imported and initialized in standalone scripts
from gamera.core import *
init_gamera()

img = load_image("image.png")
if img.data.pixel_type != ONEBIT:
    img = img.to_onebit()
result_page = Page(img)
result_page.segment()
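
As a quick check of the segmentation, the following small sketch prints some statistics; it only uses the textlines and glyphs attributes that also appear in the classification example below:

# print how many textlines were found and how many glyphs each line has
print "number of textlines:", len(result_page.textlines)
for i, line in enumerate(result_page.textlines):
    print "line", i, "contains", len(line.glyphs), "glyphs"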

The Page object result_page now contains all segmentation information, such as textlines, words, and characters, in reading order. You can then classify the characters line by line with a kNN classifier and print the document text:

# load training data into classifier
from gamera import knn
cknn = knn.kNNInteractive([], \
          ["aspect_ratio", "moments", "volume64regions"], 0)
cknn.from_xml_filename("trainingdata.xml")

# classify characters and create output text
for line in result_page.textlines:
    line.glyphs = \
           cknn.classify_and_update_list_automatic(line.glyphs)
    line.sort_glyphs()
    print "Text of line", textline_to_string(line)

Note that the function textline_to_string is global and not bound to a class instance. This function requires that the class names of the characters have been chosen according to the standard Unicode character names, as in the examples in the following table:

Character   Unicode Name             Class Name
---------   ----------------------   ----------------------
!           EXCLAMATION MARK         exclamation.mark
2           DIGIT TWO                digit.two
A           LATIN CAPITAL LETTER A   latin.capital.letter.a
a           LATIN SMALL LETTER A     latin.small.letter.a
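
This naming convention means that a class name can be mapped back to its character via the standard Unicode names, for instance with Python's unicodedata module. The following sketch merely illustrates the convention; it is not the toolkit's actual implementation:

import unicodedata

def classname_to_char(classname):
    # "latin.small.letter.a" -> "LATIN SMALL LETTER A" -> u"a"
    return unicodedata.lookup(classname.replace(".", " ").upper())

print classname_to_char("digit.two")   # prints "2"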

For more information on how to fine control the segmentation process, see the developer's manual.