OCRising 17th-century French prints

In the past few years, OCR tools have dramatically improved in efficiency and accuracy. We have therefore decided to create a ground truth bank for 17th-century French prints – because of research we are currently carrying out on stylometry, but also because it is one sure way to obtain non-normalised 17th-century French texts.

A first corpus

Most of our training data is taken from literary texts, especially plays. A first test has been carried out on the following texts (c. 110,000 words and 19,000 lines), which will gradually be uploaded to our GitHub repo. Our corpus contains the following prints:

  • Œuvres de Corneille (1664)
  • Œuvres de Racine (1676 and 1697)
  • Lettres de Guez de Balzac (1624)
  • Caractères de La Bruyère (1688)
  • Histoire amoureuse de Bussy (1665)
  • Achile et Polixene de Campistron (1687)
  • Princesse de Clèves de La Fayette (1678)
  • Discours de la Méthode de Descartes (1637)
  • Chrysolite de Mareschal (1627)
  • Recueil de lettres nouvelles de Faret (1627)
  • Œuvres de Campistron (1690)
  • L’Amour tirannique de Scudéry (1639)
  • Panthée de Tristan L’Hermite (1639)
  • Clélie de Scudéry (1656)
  • Ibrahim de Scudéry (1651)
  • Mariane de Tristan L’Hermite (1637)
  • La mort des enfants de Boyer (1648)
  • Œuvres de Pradon (1697)
  • Méduse de Boyer (1697)
  • Statira de Pradon (1680)
  • Traictez de l’équilibre des liqueurs de Pascal (1663)
  • La Manière d’amolir les os de Papin (1682)
  • Dom Garcie de Navarre de Molière (1694)

Not all the prints are online: those available are described here.

Because we mainly work on the end of the century, most of the training data dates from this period, but we have tried to include some works from the early 17th c. (Tristan L’Hermite, Faret, Descartes, Guez de Balzac, Mareschal). We have also tried to diversify the genres with science (Papin and Pascal), novels (Scudéry, La Fayette), philosophy (Descartes), theatre (Molière, Racine, Boyer, Pradon, Campistron), poetry (Regnier, Scarron)…

Transcription is relatively diplomatic: the long ſ is maintained (plaiſir and not plaisir), but ligatures which are no longer relevant in contemporary French (ſt, st, ct) are not kept, unlike those still in use (œ). One of the reasons is that not all these ligatures exist in Unicode: some would have to be encoded with MUFI (Medieval Unicode Font Initiative), such as ct, and some with yet other encodings (such as sp, cf. the Aletheia project).

GLYPH   DESCRIPTION                     UNICODE   MUFI
œ       Latin Small Ligature O E        U+0153    –
st      Latin Small Ligature S T        U+FB06    –
ſt      Latin Small Ligature Long S T   U+FB05    –
ct      Latin Small Ligature C T        –         EEC5
sp      Latin Small Ligature S P        –         –

(Keeping ligatures is only possible with high-resolution images, so we are currently carrying out some tests with 400 dpi images: the data is available here. We recommend the Andron Scriptor Web font rather than Junicode to display this data.)
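Ligatures that should not be kept occasionally slip into the transcriptions. They can be normalised afterwards with a simple sed pass – a minimal sketch, assuming GNU sed and transcriptions stored as .gt.txt files (the file layout is illustrative, not necessarily ours):

$ # replace ﬅ (U+FB05) and ﬆ (U+FB06) with two characters, keeping the long ſ
$ sed -i 's/ﬅ/ſt/g; s/ﬆ/st/g' data/*/*.gt.txt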

In order to have a model that is as useful as possible, it is important to train it on varied data. Most of our data is in 72 dpi, but 400 dpi and even 600 dpi images have been added to the training set.

Pascal, Equilibre (600dpi)
Papin, Manière (400dpi)
Balzac, Lettres (72dpi)

Since italic characters are important at the beginning of the century, we included some examples in our data.

Tristan L’Hermite, Panthée (1639)

Ground truth

One has to differentiate between the training data and the tool used for training. In that sense, a ground truth works the same way as a lemmatised and POS-tagged gold corpus: it is data that can be re-used with any tool that processes similar data.

OCCURRENCE   LEMMA    POS
je           je       PROper
te           tu       PROper
prie         prier    VERcjg
,            ,        PONfbl
De           de       PRE
peur         peur     NOMcom
qu’          que      CONsub
il           il       PROper

Ground truth for OCR is made of images of the printed text (.png files) and their transcriptions (.txt files) – for an example, cf. here.
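Concretely, each line image is paired with a transcription file sharing its base name – a hypothetical layout, assuming the .gt.txt suffix convention used by Ocropy and Kraken's legacy training format (the file names are purely illustrative):

$ ls data/page_01/
line_01.gt.txt  line_01.png  line_02.gt.txt  line_02.png  …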

Same line: image and transcription

As we can see, the quality of the image looks poor: this is because it has been binarised, i.e. turned into pure black and white (and not greyscale).
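Binarisation can be done with Kraken's command-line interface, for instance (the file names are illustrative):

$ kraken -i page.png page_bw.png binarize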

This data is produced via a segmenter that divides the page into lines, each line then being transcribed manually via an interface – OCR systems sometimes offer different transcription interfaces.
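With Kraken, for instance, segmentation can also be run from the command line, the detected line boundaries being saved as JSON (again, file names are illustrative):

$ kraken -i page_bw.png lines.json segment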

Unfortunately, segmenters are not always accurate and mistakes can be made. Despite this kind of problem, which rarely happens with proprietary/commercial solutions, we insist on using open-source solutions, to avoid being dependent on a single piece of software and to guarantee free access to quality tools for everyone.

Segmentation failure: two lines instead of one

Training the model

The ground truth can be used by different OCR engines. We have decided to use Kraken, but other tools like Ocropy or Tesseract are available. The training data being separated into different folders (one per page), we use the following command:

$ ketos train data/*/*.png

stage 0/∞  [####################################]  1162/1162          
Accuracy report (1161) 0.7022 4825 1437
stage 1/∞  [####################################]  1162/1162          
Accuracy report (2323) 0.8750 4825 603
stage 2/∞  [####################################]  1162/1162          
Accuracy report (3485) 0.9190 4825 391
stage 3/∞  [####################################]  1162/1162          
Accuracy report (4647) 0.9380 4825 299
stage 4/∞  [####################################]  1162/1162          
Accuracy report (5809) 0.9472 4825 255
stage 5/∞  [####################################]  1162/1162          
Accuracy report (6971) 0.9474 4825 254
stage 6/∞  [####################################]  1162/1162          
Accuracy report (8133) 0.9542 4825 221
stage 7/∞  [####################################]  1162/1162          
Accuracy report (9295) 0.9554 4825 215
stage 8/∞  [####################################]  1162/1162          
Accuracy report (10457) 0.9559 4825 213
stage 9/∞  [####################################]  1162/1162          
Accuracy report (11619) 0.9569 4825 208
stage 10/∞  [####################################]  1162/1162          
Accuracy report (12781) 0.9573 4825 206
stage 11/∞  [####################################]  1162/1162          
Accuracy report (13943) 0.9588 4825 199
stage 12/∞  [####################################]  1162/1162          
Accuracy report (15105) 0.9600 4825 193
stage 13/∞  [####################################]  1162/1162          
Accuracy report (16267) 0.9583 4825 201
stage 14/∞  [####################################]  1162/1162          
Accuracy report (17429) 0.9610 4825 188
stage 15/∞  [####################################]  1162/1162          
Accuracy report (18591) 0.9621 4825 183
stage 16/∞  [####################################]  1162/1162          
Accuracy report (19753) 0.9606 4825 190
stage 17/∞  [####################################]  1162/1162          
Accuracy report (20915) 0.9615 4825 186
stage 18/∞  [####################################]  1162/1162          
Accuracy report (22077) 0.9602 4825 192
stage 19/∞  [####################################]  1162/1162          
Accuracy report (23239) 0.9617 4825 185
stage 20/∞  [####################################]  1162/1162          
Accuracy report (24401) 0.9610 4825 188
Moving best model model_18591.mlmodel (0.9620725388601036) to model_best.mlmodel
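The resulting model_best.mlmodel can then be applied to a new binarised page by chaining Kraken's segmentation and recognition steps – for instance (file names are illustrative):

$ kraken -i page_bw.png page.txt segment ocr -m model_best.mlmodel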

In each accuracy report, the number in parentheses is the iteration count, followed by the character accuracy on the validation set, the total number of characters and the number of errors (e.g. 0.7022 = 1 − 1437/4825). The model is then evaluated and a report is produced with the following command line:

$ ketos test -m model_best.mlmodel data/*/*.png

The result is a detailed description of the model's performance that looks like the following:

=== report  ===

48038	Characters
814	Errors
98.31%	Accuracy

381	Insertions
52	Deletions
381	Substitutions

Count	Missed	%Right
37818	402	98.94%	Latin
9590	324	96.62%	Common
630	36	94.29%	Inherited

Errors	Correct-Generated
76	{ ' } - { ’ }
59	{ ’ } - { ' }
47	{ SPACE } - {  }
26	{ , } - {  }
25	{ f } - { ſ }
24	{ r } - {  }
23	{ , } - { . }
17	{ e } - {  }
16	{ t } - {  }
15	{ r } - { t }
14	{ l } - {  }
12	{ COMBINING ACUTE ACCENT } - {  }
11	{ l } - { ſ }
11	{ i } - {  }
11	{ a } - {  }
10	{ . } - {  }
10	{ n } - {  }
9	{ c } - {  }
8	{ ſ } - {  }
8	{ ' } - {  }
8	{ s } - {  }
8	{ u } - {  }
8	{ o } - {  }
8	{ ? } - {  }
8	{ m } - {  }
7	{ COMBINING GRAVE ACCENT } - {  }
7	{  } - { u }
7	{  } - { . }
7	{ COMBINING CIRCUMFLEX ACCENT } - {  }

This evaluation is particularly important, since it details all the errors made by the OCR. In this very case, we can see that the most frequent mistake is the confusion between curly and straight apostrophes, due to our inconsistency during transcription, which can easily be corrected with the sed command shown below.
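A minimal sketch of such a correction, assuming GNU sed, the curly apostrophe as the desired form, and ground truth files with the .gt.txt suffix:

$ # normalise every straight apostrophe to the curly form in the ground truth
$ sed -i "s/'/’/g" data/*/*.gt.txt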

Thanks

Special thanks to Thibault Clérice for his magic XSLT stylesheets and to Matthias Gille-Levenson for his help with Ocropy.

Cite this article: Simon Gabay, « OCRising 17th-century French prints », e-ditiones, accessed 21/08/2019, [https://editiones.hypotheses.org/1958].