
Computing the Page in Early-Modern Europe, Oxford, 15 March 2024

Teaching Room, Oxford e-Research Centre, Keble Rd, Oxford

This workshop is hosted by the Visual Geometry Group (VGG) in association with the AHRC-funded Envisioning Dante project. 

Programme

0915 Welcome and introduction

0930-1045  Papers 1 and 2

Coffee

1115-1230 Papers 3 and 4

Lunch

1330-1530 Papers 5, 6 and 7

Coffee

1600 Discussion and next steps

1700 End

Paper 1

Segmenting Dante, 1470-1620

Guyda Armstrong, Giles Bergel, Rebecca Bowen, Abhishek Dutta, Simon Gilson, Gloria Moorman, David Pinto, Prasanna Sridhar, Andrew Zisserman, Guanqi Zhang (Manchester, Oxford)

Abstract: This paper will present initial findings from a three-year project on the nearly complete corpus of early editions of Dante’s Commedia. Led by the University of Manchester in partnership with VGG in Oxford, the project has so far digitised almost 100 books and applied various segmentation models to the resulting images. The presentation will report results using Mask R-CNN, Faster R-CNN and Mask2Former models, with ground truth established and inference reviewed using the VGG Image Annotator (VIA) tools. The project has also begun to explore computer-aided image comparison, visual search and OCR. The presentation will discuss how domain knowledge (including reading as well as seeing the page) might intersect with the visual analysis, as well as some of the complexities of early printed books that the visual analysis both surfaces and seeks to address.
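For readers unfamiliar with this kind of pipeline, the sketch below shows roughly what page segmentation with an instance-segmentation model looks like in practice. It is a minimal illustration using an off-the-shelf torchvision Mask R-CNN with generic pretrained weights, not the project’s fine-tuned models or class vocabulary; the file name is a placeholder.

```python
# Minimal sketch (not the project's code): run an off-the-shelf Mask R-CNN from
# torchvision on a digitised page image. The project fine-tunes its own models
# on VIA-annotated ground truth; the weights, threshold and file name below are
# illustrative assumptions only.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

page = to_tensor(Image.open("commedia_page.jpg").convert("RGB"))  # hypothetical scan
with torch.no_grad():
    prediction = model([page])[0]

# Keep confident detections; each comes with a box, a label and a soft mask.
keep = prediction["scores"] > 0.7
boxes = prediction["boxes"][keep]
masks = prediction["masks"][keep]  # shape: (N, 1, H, W), values in [0, 1]
print(f"{len(boxes)} page regions detected")
```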

Paper 2

‘Werck der bücher’: Rediscovering the history of early modern printing using type recognition and type shape analysis

Florian Kordon (Pattern Recognition Lab of FAU Erlangen-Nürnberg); Nikolaus Weichselbaumer (Johannes Gutenberg-Universität Mainz)

Abstract: The ‘werck der bücher’ (work of books) project challenges the conventional view of the invention of printing, focusing on the period between 1440 and 1470 as one of diverse technological experimentation. Utilizing both typographic methods and computer vision techniques, we aim to examine the variety of printing techniques from this era, from blockbooks to movable type, to understand their development and interaction. Our work seeks to clarify how these technologies coexisted and possibly shared methodologies, moving away from the idea of isolated competition. This talk will discuss the methods we developed for recognizing font groups using locally combined OCR models and recognizing glyphs in early modern prints using Joint Energy-based Models. We will further discuss our methodology for recognizing and classifying the subtle differences in shape between different glyphs from a large collection of incunables.

Paper 3

Interpretable computer vision analysis of historical prints

Mathieu Aubry, Sonat Baltaci (École des Ponts ParisTech)

Abstract: In this talk, we will present interpretable AI algorithms and their application to curated datasets of early printed books. First, we will present the algorithms we used for illustration extraction and interpretable similarity search, and the first results of our VHS project on the diffusion of scientific illustrations. Second, we will present an interpretable clustering algorithm and show how it can be used to cluster ornaments printed from woodblocks collected by the ROIi project. Finally, we will give an overview of other projects related to historical document analysis developed in our team, emphasising the online tools we have built and made available to foster collaborations with historians.
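As a point of reference, a non-interpretable baseline for the clustering task described above might look like the sketch below: ornament crops encoded with a generic CNN backbone and grouped with k-means. This is not the team’s interpretable algorithm; the directory, cluster count and backbone are assumptions for illustration.

```python
# Baseline sketch only, not the interpretable clustering algorithm from the talk:
# cluster ornament crops by off-the-shelf CNN features with k-means. The input
# folder and the number of clusters are illustrative assumptions.
import glob
import numpy as np
import torch
import torchvision
from torchvision import transforms
from PIL import Image
from sklearn.cluster import KMeans

backbone = torchvision.models.resnet18(weights="DEFAULT")
backbone.fc = torch.nn.Identity()  # keep the 512-d pooled features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

paths = sorted(glob.glob("ornaments/*.jpg"))  # hypothetical crops
features = []
with torch.no_grad():
    for p in paths:
        img = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        features.append(backbone(img).squeeze(0).numpy())

labels = KMeans(n_clusters=20, n_init=10).fit_predict(np.stack(features))
for path, label in zip(paths, labels):
    print(label, path)
```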

Paper 4

Layout analysis and networks of early modern printers

Hassan El-Hajj (Max Planck Institute for the History of Science); Matteo Valleriani (Max Planck Institute for the History of Science/Technische Universität Berlin/Tel Aviv University)

Abstract: In a recent study conducted as part of the project “The Sphere. Knowledge System Evolution and the Shared Scientific Identity of Europe,” it was concluded that, with reference to the collection of historical sources analysed, about 50 percent of the collection had been produced through a process of “imitation”: the printers involved “imitated” books produced by other printers. This imitation concerned both content and form, particularly book size and layout. It was not a single model imitated equally by all printers, however, but a series of books that, from time to time, gave rise to a more or less intense process of imitation. In this way it has been possible to establish networks of printers who can safely be said to have known each other’s books (networks of awareness).

The method for reaching this result was to develop a distance metric between portions of the fingerprints of the books in the collection, a metric that treats fingerprints as vectors. Through this method, books similar in content and form are automatically identified simply on the basis of the bibliographic metadata of the fingerprint. However, only 84 percent of the identified books turn out to actually be part of such concatenations of imitated sources. This result, while important, is a limitation that grows more severe as the collection of sources grows larger, since manual empirical validation is needed to eliminate the 16 percent of incorrect identifications.

The presentation will briefly introduce the study by elaborating on (a) the concept of a “network of awareness” and its possible historical interpretations based on early modern methods of text production and sale, (b) the limitations of the method developed, and (c) how, through the integration of machine learning, greater accuracy can be achieved in the detection of similar texts and thus of networks of printers.
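The abstract does not specify how the fingerprints are vectorised or which metric is used, so the sketch below is only a loose illustration of the general idea: each fingerprint string is turned into a character n-gram count vector and books are compared by cosine distance. The fingerprint strings and edition names are invented examples.

```python
# Illustrative sketch only: the study's actual vectorisation and distance metric
# are not given in the abstract. Here each fingerprint string becomes a
# character n-gram count vector and books are compared by cosine distance.
# Edition names and fingerprint strings are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_distances

fingerprints = {
    "Sphaera_Venice_1550": "n-um esse a-i- n-o- sustr",      # hypothetical
    "Sphaera_Wittenberg_1553": "n-um esse a-i- n-o- susto",   # hypothetical
    "Sphaera_Paris_1561": "o-is r-ae t-um quaek",             # hypothetical
}

titles = list(fingerprints)
vectors = CountVectorizer(analyzer="char", ngram_range=(2, 3)).fit_transform(
    fingerprints.values()
)
distances = cosine_distances(vectors)

# Books whose fingerprints lie close together are candidate links in a
# "network of awareness"; any threshold would still need manual validation.
for i in range(len(titles)):
    for j in range(i + 1, len(titles)):
        print(f"{titles[i]} <-> {titles[j]}: {distances[i, j]:.3f}")
```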

Paper 5

Vignettes discovery in historical ornaments: Rey database and practices, limits and advances

Sayan Chaki, Rémi Emonet, Fabienne Vial-Bonacci, Amaury Habrard, Christelle Bahier-Porte, Thierry Fournel

Laboratoire Hubert Curien UMR 5516 (Univ. Lyon, UJM-Saint-Étienne, CNRS, Institut d’Optique Graduate School, Inria), F-42023 Saint-Étienne, France; IHRIM – Institut d’Histoire des Représentations et des Idées dans les Modernités, UMR 5317, France; IUF – Institut Universitaire de France

Abstract: The Marc-Michel Rey database, currently under construction, aims to gather detailed data on the ornaments printed in books addressed to or attributed to this bookseller-publisher, who played a central role in the dissemination of Enlightenment thought in the 18th century. The unsupervised decomposition of compound ornaments into vignettes can aid book attribution and provide a better understanding of editorial practices during a period of censorship. This paper highlights a shortcoming that currently limits automation of the task via segmentation as the number of glyphs increases. A possible improvement based on diffusion rendering will be discussed.

Paper 6

Camels, Nautili, Trees and More: On the Digital Exploration of Ottoman Nature in Travelogues

Doris Gruber (ÖAW), Michela Vignoli (AIT), Jacopo Jandl (ÖNB), Michael Seidl (AIT)

Abstract: Visiting foreign regions strongly impacts people. In the early modern period, many travelers recorded their experiences, and these travelogues have survived in large numbers and inspired a wealth of research. Nature – animals, plants and landscapes – is often a central theme of these reports. Yet the role of these representations, the extent to which they reflect desires, expectations or needs, and how they served as instruments for the cultural, military or political aims of the people involved in their production, as well as of their audiences, has hardly been investigated.

The project Ottoman Nature in Travelogues, 1501–1850: A Digital Analysis, funded by the Austrian Science Fund (FWF P 35245), is dedicated to this question. The focus lies on printed travelogues about the Ottoman Empire. In doing so, we explore new computational methods, some of which use machine learning. The presentation summarizes the most important results of the first project phase on image analysis and outlines further plans.

The following institutions collaborate in the project: the Institute for Habsburg and Balkan Studies (IHB) of the Austrian Academy of Sciences (ÖAW), the Austrian Institute of Technology (AIT) as well as the Austrian National Library (ÖNB).

Paper 7

Multi-Modal Insights into Early Modern Religious Visual Culture 

Drew Thomas, University College Dublin

Abstract: This paper presents ‘Visualizing Faith: Print, Piety and Power’, a project investigating Protestant and Catholic visual communication in the Holy Roman Empire during the Reformation era. Leveraging the Ornamento corpus, which used computer vision and the VISE image matching software to identify and track visual elements in books printed from 1450 to 1600, we employ the WISE interface developed by VGG for text-based image search. By integrating the WISE API with our Solr index, we can refine searches by both visual elements and bibliographic metadata. Additionally, we use GPT-4 to generate captions for illustrations, enhancing our understanding of visual rhetoric. By combining these two methods, we can better understand the evolution of graphic design and visual media during Europe’s first mass-media event.
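To make the combined query concrete, the sketch below shows one way a text-based image search could be intersected with a Solr bibliographic filter. The endpoint URLs, parameters, field names and response shapes are assumptions for illustration only, not the actual WISE or Ornamento APIs.

```python
# Hypothetical sketch of the kind of combined query described above: a
# text-to-image search service on one side, a Solr bibliographic index on the
# other. Endpoints, parameters and field names are placeholders, not the real
# WISE or Ornamento interfaces.
import requests

def search_images(query: str, top_k: int = 50) -> list[dict]:
    """Ask a text-based image search service for matching illustrations."""
    resp = requests.get(
        "https://example.org/wise/search",          # placeholder endpoint
        params={"q": query, "limit": top_k},
    )
    resp.raise_for_status()
    return resp.json()["results"]                   # assumed response shape

def filter_by_metadata(hits: list[dict], place: str, year_to: int) -> list[dict]:
    """Keep only hits whose book record in Solr matches the bibliographic filter."""
    kept = []
    for hit in hits:
        resp = requests.get(
            "https://example.org/solr/books/select",  # placeholder Solr core
            params={
                "q": f'id:"{hit["book_id"]}" AND place:"{place}" AND year:[* TO {year_to}]',
                "wt": "json",
            },
        )
        if resp.json()["response"]["numFound"] > 0:
            kept.append(hit)
    return kept

hits = filter_by_metadata(search_images("crucifixion woodcut"), "Wittenberg", 1550)
print(len(hits), "illustrations match both the visual and bibliographic query")
```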

Contact: giles.bergel@eng.ox.ac.uk; 44 7866 566982

Workshop – Computer Vision for the Investigation of Ancient Documents, Saint-Étienne, 6–7 April 2023

Organized by the team of the ANR project ROIi, the workshop will bring together projects developing computer vision tools that help human experts analyse ancient documents. It will provide an opportunity to discuss recent machine-learning methods likely to improve or extend the capabilities of image retrieval and query-image analysis.

ROIi workshops: 15–16 June 2022, Université Jean Monnet, Saint-Étienne

The ANR ROIi project team is organizing two days of discussion addressing the issues raised by the construction of a database that incorporates a feature for identifying anomalies within historical ornaments, with a view to authenticating a book.

Provisional programme:

Registration is free but required: registration form.

Detector-encoder AutoEncoder for decomposition into M.-M. Rey’s vignettes


A machine learning model has been developed for the analysis of composed ornaments [Khan M. S., Emonet R. and Fournel T., 2021] in order to help human experts during the process of attribution to the publisher Marc-Michel Rey [Bahier-Porte and Vial-Bonacci, 2019]. The model is designed to detect vignette patterns, assigning each a bounding box and a probability, and then to reconstruct them from features when the assigned probability exceeds a predefined threshold. Such a decomposition into vignettes can be achieved after learning the parameters of the model from a dictionary of vignettes used by M.-M. Rey. Out-of-dictionary or suspicious vignettes are therefore missing or poorly reproduced in the reconstructed image (Fig. 1).

Figure 1. (Left) An input image composed of randomly distributed vignettes, with detected bounding boxes in red; out-of-dictionary vignettes (ground truth) are indicated by blue bounding boxes. (Right) The reconstructed image.

The architecture of the model is an object detector placed upstream of a decoder, to strengthen both the detection and the copy-and-paste of vignettes in the dictionary. More precisely, a Single Shot MultiBox Detector [Liu et al 2016] is used to predict boxes bounding predicted in-dictionary vignettes. In the detection part, layers are added to align the predicted bounding boxes [He et al 2017] and transform the feature maps into 1 × 1 × 1024 feature vectors. In the decoding part, the feature vectors are fed into three fully connected linear layers and then reshaped into 128 × 128 reconstructed vignettes. The detector-encoder was first trained separately using the Distance-IoU loss [Zheng et al 2020], followed by the stacked neural network, named DAE for Detector-AutoEncoder (https://gitlab.huma-num.fr/ANR-ROIi).
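The decoding head described above can be sketched as follows: a 1024-dimensional RoI feature vector is passed through three fully connected layers and reshaped into a 128 × 128 reconstructed vignette. The hidden layer widths are assumptions, and the SSD detector and RoI alignment that produce the feature vectors are not re-implemented here.

```python
# Sketch of the decoding head described in the text: a 1024-d feature vector per
# detected vignette, three fully connected layers, then a reshape to 128x128.
# Hidden layer widths are assumed; the SSD detector and RoI alignment upstream
# are omitted.
import torch
import torch.nn as nn

class VignetteDecoder(nn.Module):
    def __init__(self, feature_dim: int = 1024, out_size: int = 128):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(feature_dim, 2048), nn.ReLU(),   # hidden widths assumed
            nn.Linear(2048, 4096), nn.ReLU(),
            nn.Linear(4096, out_size * out_size), nn.Sigmoid(),
        )
        self.out_size = out_size

    def forward(self, roi_features: torch.Tensor) -> torch.Tensor:
        # roi_features: (N, 1024) vectors, one per detected vignette
        flat = self.decoder(roi_features)
        return flat.view(-1, 1, self.out_size, self.out_size)

decoder = VignetteDecoder()
reconstructions = decoder(torch.randn(4, 1024))  # four detected vignettes
print(reconstructions.shape)                     # torch.Size([4, 1, 128, 128])
```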

DAE was tested on a dataset of images synthesized from real vignettes randomly assigned to two arbitrary classes, normal and abnormal, and placed in the image area without any typographical composition in order to properly challenge the model (https://documentation.huma-num.fr/nakala/). The normalized cross-correlation index between an original vignette and its reconstruction shows a clear separation between the normal and abnormal populations (Fig. 2). DAE now has to be compared with state-of-the-art algorithms.

Figure 2. Histograms of the normalized cross-correlation index between an original vignette and the reconstructed one for abnormal and normal vignettes.
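For reference, a normalized cross-correlation index of this kind can be computed as in the sketch below; the project’s exact formulation may differ. High values indicate a faithful reconstruction (an in-dictionary vignette), while low values flag out-of-dictionary or suspicious vignettes.

```python
# Sketch of a normalized cross-correlation index between an original vignette
# and its reconstruction; the project's exact formulation may differ.
import numpy as np

def ncc(original: np.ndarray, reconstructed: np.ndarray) -> float:
    a = original.astype(np.float64).ravel()
    b = reconstructed.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

# Toy check: a vignette correlates highly with a lightly noised copy of itself
# and poorly with an unrelated patch.
rng = np.random.default_rng(0)
vignette = rng.random((128, 128))
print(ncc(vignette, vignette + 0.05 * rng.random((128, 128))))  # close to 1
print(ncc(vignette, rng.random((128, 128))))                    # close to 0
```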

Rémi Emonet, Mohammad Sadil Khan, and Thierry Fournel

Laboratoire Hubert Curien (LHC), Saint-Étienne

References

Khan M. S., Emonet R. and Fournel T., Vignette detection and reconstruction of composed ornaments with a strengthened autoencoder, https://hal.archives-ouvertes.fr/hal-03409930, 2021

Bahier-Porte C. and Vial-Bonacci F., Le commerce de la librairie à la lumière de la correspondance – Marc Michel Rey, Pierre Rousseau, Charles Weissenbruch, in Journal encyclopédique aux humanités numériques. Trois siècles d’histoire du livre et de la pensée à travers le Fonds Weissenbruch, Bruxelles, Archives générales du Royaume, p. 205-222, 2019

Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C. Y. and Berg A. C., SSD: Single Shot MultiBox Detector, in European Conference on Computer Vision, Springer, Cham, p. 21-37, 2016

He K., Gkioxari G., Dollár P. and Girshick R., Mask R-CNN, in Proceedings of the IEEE International Conference on Computer Vision, p. 2961-2969, 2017

Zheng Z., Wang P., Liu W., Li J., Ye R. and Ren D., Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression, in Proceedings of the AAAI Conference on Artificial Intelligence, p. 12993-13000, 2020