Extracting data from sale catalogues

Looking for 17th c. French manuscripts is not a simple task: many of those found in libraries are forgeries or misattributions. How, then, can we authenticate documents? Paleography is a first, obvious solution, but tracing the history of documents through time is another. We are therefore planning to extract data from sale catalogues and build a database of the manuscripts sold, in order to reconstitute their history.

[NOTE] This blog post is derived from a course given at the ENS for the Artl@s project. Our slides are available on HAL:

https://hal.archives-ouvertes.fr/cel-01951107

Catalogues

There are two kinds of catalogues:

  • Fixed-price catalogues, published (more or less) monthly by experts and listing the contents of their stock.
  • Auction catalogues, published (most of the time) for the sale of a collection.

For a first test, we have acquired copies of the Revue des autographes held at the Bibliothèque nationale de France (https://catalogue.bnf.fr/ark:/12148/cb32858189v). These are fixed-price catalogues published by Gabriel Charavay (1818-†1879, cf. CTHS), then by his son Eugène (1858-†1892) and his granddaughter Gabrielle (1866-†1948).

The Revue des autographes is at first a publication on the autograph market, inspired by the Amateur d’autographes, founded in 1862 by the same Gabriel Charavay and continued by his brother Jacques Charavay (1809-†1867, cf. Wikipedia). In December 1868, it starts including a list of manuscripts sold by Gabriel, and within a year the news section is reduced to a minimum.

Between 1868 and 1936, three different layouts are used by Gabriel Charavay and his descendants.

Our goal is to extract data from this catalogue to create a database, using GROBID dictionaries.

GROBID dictionaries

GROBID dictionaries is derived from the generic GROBID and is developed by M. Khemakhem with L. Romary (INRIA-CMB Berlin). It is primarily used to convert PDFs of dictionaries into XML-TEI documents, but it can also be used with catalogues, which follow a similar layout (cf. KHEMAKHEM et al., 2018).

The basic structure of the encoding would therefore follow the generic model of GROBID dictionaries, with one entry per item sold, as sketched below.
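To give an idea of the target format, here is a minimal, invented sketch of one catalogue item, assuming the dictionary-style mapping (the author of the autograph in <form>, the description and price in <sense>); the inner elements (<orth>, <def>) are only indicative and the actual output may differ:

<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <!-- metadata about the catalogue (title, seller, date, etc.) -->
  </teiHeader>
  <text>
    <body>
      <!-- one <entry> per item sold -->
      <entry>
        <form>
          <orth>SÉVIGNÉ (Marie de Rabutin-Chantal, marquise de)</orth>
        </form>
        <sense>
          <def>L. a. s. à M. de Pomponne, Paris, 1671, 2 p. in-4, 25 fr.</def>
        </sense>
      </entry>
    </body>
  </text>
</TEI>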

The idea is that GROBID dictionaries needs one training for each level (i.e. one XML node).

In practice, the training data will produce a different encoding at each level.
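For instance (the sample entries are invented), the body segmentation and lexical entry levels could produce encodings along these lines; the element names follow the GROBID dictionaries conventions, but the exact output may differ:

<!-- Body segmentation: the text of each page is split into one <entry> per item -->
<body>
  <entry>1. BALZAC (Honoré de), romancier. L. a. s., 1 p. in-8, 20 fr.</entry>
  <entry>2. CORNEILLE (Pierre), poète dramatique. P. s., 1 p. in-4, 150 fr.</entry>
</body>

<!-- Lexical entry: each <entry> is split into the author (<form>) and the description with its price (<sense>) -->
<entry>
  <form>1. BALZAC (Honoré de), romancier.</form>
  <sense>L. a. s., 1 p. in-8, 20 fr.</sense>
</entry>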

A first GROBID model for catalogues

A first model has been trained and is available online: https://github.com/gabays/grobid. It has been trained on the first layout of our catalogue. We used only the first four levels of GROBID dictionaries:

  1. Dictionary segmentation
  2. Body segmentation
  3. Lexical entry
  4. Form and sense

Each training follows the same pattern:

  1. Creation of training data
  2. Annotation of the training data
  3. Training
  4. Quality control

The procedure is entirely done within a Docker container synchronised with the toyData folder (available here). After installing Docker (cf. Mohamed’s wiki), we therefore have to run the container:

$ docker run -v /Users/gabaysimon/Documents/Grobid/toyData:/grobid/grobid-dictionaries/resources -p 8080:8080 -it medkhem/grobid-dictionaries bash

The PDFs used for training are placed in the following folder: dataset/dictionary-segmentation/corpus/pdf. We then create training data using the following command line:

$ java -jar /grobid/grobid-dictionaries/target/grobid-dictionaries-0.4.3-SNAPSHOT.one-jar.jar -dIn resources/dataset/dictionary-segmentation/corpus/pdf/ -dOut resources -exe createTrainingDictionarySegmentation

The data created is annotated in XML (we strongly recommend the author mode in Oxygen). Markup has to be added without adding or removing any character (including spaces). For the Dictionary segmentation level, we only have two elements to add (using cmd+E), as illustrated in the sketch after this list:

  • <headnote> (for page numbers)
  • <body> (for the text of each page).
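As a rough illustration (the content is invented, and the generated files also contain a small TEI header and <lb/> milestones, which are left untouched), an annotated page could look like this:

<text>
  <headnote>3<lb/></headnote>
  <body>1. BALZAC (Honoré de), romancier. L. a. s., 1 p. in-8, 20 fr.<lb/>
  2. CORNEILLE (Pierre), poète dramatique. P. s., 1 p. in-4, 150 fr.<lb/></body>
</text>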

The data created then has to be moved to the dataset/Dictionary-segmentation/corpus folder and split the following way (cf. example here):

  • .xml files are placed in the dataset/Dictionary-segmentation/corpus/tei folder
  • .training.dictionarySegmentation and .training.dictionarySegmentation.rawtxt files are placed in the dataset/Dictionary-segmentation/corpus/raw folder

The tei and raw folders are then copied into the dataset/Dictionary-segmentation/evaluation folder for evaluation purposes (cf. example here). Now we can train:

$ mvn generate-resources -P train_dictionary_segmentation -e

During training, GROBID verifies that the results obtained match those in the dataset/Dictionary-segmentation/evaluation folder. Since the evaluation data is here simply a copy of the training data, the scores therefore need to be 100%.

As mentioned, the procedure is repeated for each training level. Results should look like those available here: https://github.com/gabays/grobid/tree/master/toyData/dataset

References

  • Mohamed KHEMAKHEM, Laurent ROMARY, Simon GABAY, Hervé BOHBOT, Francesca FRONTINI, et al., «Automatically Encoding Encyclopedic-like Resources in TEI», The annual TEI Conference and Members Meeting, Sep. 2018, Tokyo, Japan. 〈https://tei2018.dhii.asia/〉 〈hal-01819505〉
  • Simon GABAY, Mohamed KHEMAKHEM, Laurent ROMARY, «Les catalogues et GROBID», Du catalogue aux humanités numériques : quelles méthodes pour quels résultats ?, Paris, France, 2018. 〈cel-01951107〉
  • Mohamed KHEMAKHEM, Luca FOPPIANO, Laurent ROMARY, «Automatic Extraction of TEI Structures in Digitized Lexical Resources using Conditional Random Fields», Electronic lexicography, eLex 2017, Leiden (Netherlands). 〈hal-01508868v2〉
  • Mohamed KHEMAKHEM, Axel HEROLD, Laurent ROMARY, «Enhancing Usability for Automatically Structuring Digitised Dictionaries», GLOBALEX workshop at LREC 2018, Miyazaki (Japan). 〈hal-01708137〉

Thanks

Thanks to Prof. Volker Schröder for his help with manuscript sale catalogues, to Mohamed Khemakhem and Laurent Romary for their help with GROBID, and to Béatrice Joyeux-Prunel for her invitation to the ENS.




