
2026/01/28

12th Meeting of the Keepers Registry Technical Advisory Committee – 5 February 2026

The 12th meeting of the Keepers Registry Technical Advisory Committee (TAC) will be held online on 5 February 2026, bringing together a renewed group of international experts to support the ongoing development of the Keepers Registry service. This meeting marks the start of a new two-year term for incoming members of the Committee. The TAC plays a key role in advising the ISSN International Centre on the strategic, technical, and community-driven evolution of the Keepers Registry, which provides a global overview of the long-term preservation of digital serial publications. The Committee is composed of professionals representing archiving agencies participating in the Keepers Registry, user communities, and independent experts in digital preservation and scholarly communication.

Members for the 2026-2028 term include:

Keepers Registry Archiving Agencies
Alicia Wise (CLOCKSS)
Miguel Mardero Arellano (Rede Cariniana, IBICT)
Kate Davis (Scholars Portal)

Keepers Registry User Groups
Kylie van Zyl (African Journals OnLine – AJOL)
Brendan O’Connell (Directory of Open Access Journals – DOAJ)
Courtney Mumma (University of Texas at Austin)

Independent Experts
Paul Wheatley (Preserve Together)
Daniel Villanueva Rivas (Universidad Nacional Autónoma de México – UNAM)

The Committee is chaired by the Director of the ISSN International Centre and supported by staff from the Centre as required. Peter Burnhill, a consultant to the ISSN-IC Director, also contributes to its work. The Technical Advisory Committee brings together expertise from libraries, preservation networks, open access infrastructures, research institutions and international initiatives, reflecting the diversity of the communities served by the Keepers Registry. Its members will contribute strategic insight and practical guidance to help shape the future development of the service, ensuring that it remains robust, inclusive and responsive to the evolving needs of publishers, libraries and researchers worldwide.

2025/12/19

Signing of an MoU with the AI4LAM association

Since 2018, the National Library of Norway and Stanford University Library (USA) have been collaborating to promote the adoption of artificial intelligence in archives, libraries, and museums (LAM). In 2024, the two institutions signed an agreement to establish an association, which is being set up progressively and will ultimately be funded by contributions from its members.

In December 2025, the ISSN International Centre signed a memorandum of understanding (MoU) with the AI4LAM association. As a member, the ISSN International Centre will benefit from the work and tools developed by the network’s members, and will participate in working groups dealing with topics related to its 2029 action plan. Further information will be provided when the new AI4LAM association website is operational.

2025/12/18

Fantastic Futures 2025

The Fantastic Futures conference has been held annually since 2018. It brings together heritage institutions that want to use artificial intelligence tools to improve access to their digitised collections and the quality of their metadata. The following projects are particularly relevant to the implementation of the 2029 action plan at the ISSN International Centre, and more broadly to building staff familiarity with these technologies.

The French Ministry of Culture has launched the Comparia website (https://comparia.beta.gouv.fr/), which compares the performance of several generative artificial intelligences. A user submits a query, which the site sends to two AIs chosen at random; each AI processes the query and returns its answer. The user then evaluates the two answers and finally receives an assessment of the energy each AI consumed in producing them.

As part of their mission to preserve the memory of government institutions, the British government archives have developed a tool to create metadata from the vast quantity of documents they process. According to its website, the Apache Tika™ toolkit can detect and extract metadata and text from over a thousand different file types, such as PPT, XLS and PDF; because they can all be processed through a single interface, Tika is useful for search engine indexing, content analysis, translation and much more. A tool of this kind could help the ISSN International Centre process publishers' requests.
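A minimal sketch of what such an extraction step might look like, using Apache Tika's Python bindings (the tika package); the file name below is purely illustrative:

```python
# Minimal sketch: extracting text and metadata with Apache Tika via its
# Python bindings (pip install tika). The file name is illustrative only;
# the first call starts (or connects to) a local Tika server automatically.
from tika import parser

def extract(path: str) -> dict:
    """Return Tika's parsed metadata and plain-text content for one file."""
    parsed = parser.from_file(path)  # works for PDF, PPT, XLS and many more
    return {
        "metadata": parsed.get("metadata", {}),        # e.g. title, author, dates
        "text": (parsed.get("content") or "").strip(),
    }

if __name__ == "__main__":
    result = extract("sample_publisher_file.pdf")      # hypothetical file name
    print(result["metadata"].get("Content-Type"))
    print(result["text"][:200])
```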

The Library of Congress has implemented an AI tool that automatically creates metadata for digital or digitised monographs; the service is managed by Digirati (https://digirati.com/). Similarly, the Harvard Library uses Apache Airflow to automate the ingestion of resources and the extraction of their metadata, which are then compared, via Elasticsearch, with the records already described in the library catalogue. Harvard also uses Better Binary Quantisation (BBQ) to store data in vector form. Elastic's website describes it as follows: 'BBQ is a leap forward in quantisation for Lucene and Elasticsearch, reducing float32 dimensions to bits and delivering ~95% memory reduction while maintaining high ranking quality.' It outperforms traditional approaches such as product quantisation (PQ), with indexing 20–30 times faster and queries 2–5 times faster, at no loss of accuracy.
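For readers unfamiliar with BBQ, the following minimal sketch (an assumption-laden illustration, not Harvard's actual configuration) shows an Elasticsearch 8.x index whose embeddings use the bbq_hnsw option, together with a kNN query against it; the index name, field names and 384-dimension vectors are invented for the example.

```python
# Minimal sketch (not Harvard's configuration): an Elasticsearch 8.x index
# whose embeddings use Better Binary Quantisation (index_options type
# "bbq_hnsw"), plus a kNN query against it. Index/field names and the
# 384-dimension embedding size are illustrative assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

es.indices.create(
    index="catalogue-records",
    mappings={
        "properties": {
            "title": {"type": "text"},
            "embedding": {
                "type": "dense_vector",
                "dims": 384,
                "index": True,
                "similarity": "cosine",
                "index_options": {"type": "bbq_hnsw"},  # BBQ-compressed HNSW graph
            },
        }
    },
)

# Compare an incoming resource's embedding with records already indexed.
hits = es.search(
    index="catalogue-records",
    knn={"field": "embedding", "query_vector": [0.0] * 384,
         "k": 5, "num_candidates": 50},
)
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```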

The Yale Library, like the National Library of Luxembourg, is also producing metadata with AI. The Luxembourg library had to process a backlog of around 75,000 deposited digital files: ChatGPT 4.0 was tried first to generate metadata, but the results for subject indexing were disappointing, so ANNIF (https://annif.org/) was preferred. The National Libraries of Sweden and Germany are engaged in similar projects. The German National Library (DNB) has launched a project to improve the performance of generative AI in German: seventeen million digital publications were selected, including thirteen million periodicals, and the texts were reworked to anonymise them and to modify them so that they are no longer subject to copyright. These texts have been tokenised and will be used to train AI models in German.
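As an indication of how ANNIF is typically queried for automated subject indexing, here is a minimal sketch that calls the REST suggest endpoint of a locally running ANNIF instance; the base URL and the project identifier "yso-en" are assumptions, not details from the Luxembourg project.

```python
# Minimal sketch, assuming a locally running ANNIF instance with an already
# trained project (the project id "yso-en" and the base URL are assumptions).
# ANNIF exposes a REST API whose /suggest endpoint returns ranked subjects.
import requests

ANNIF_URL = "http://localhost:5000/v1"   # assumed local deployment
PROJECT_ID = "yso-en"                    # hypothetical project identifier

def suggest_subjects(text: str, limit: int = 5) -> list[dict]:
    """Ask ANNIF for subject suggestions for a piece of text."""
    resp = requests.post(
        f"{ANNIF_URL}/projects/{PROJECT_ID}/suggest",
        data={"text": text, "limit": limit},
    )
    resp.raise_for_status()
    return resp.json()["results"]        # each result has uri, label, score

if __name__ == "__main__":
    for s in suggest_subjects("Long-term preservation of digital serials"):
        print(f'{s["score"]:.3f}  {s["label"]}  {s["uri"]}')
```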

The KB, nationale bibliotheek (formerly the Koninklijke Bibliotheek of the Netherlands) has published a statement (https://www.kb.nl/en/ai-statement) aimed at technically limiting the use of its digital collections, in particular the Delpher site of digitised continuing resources (https://www.delpher.nl), by commercial companies training their generative AI.

Finally, the Stanford Library presented a project involving the digitisation of typewritten cards containing marine biological observations. These cards were processed by AI to generate JSON metadata files. The presenters emphasised the importance of providing very detailed prompts to the AI.
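To illustrate what a "very detailed prompt" for this kind of card-to-JSON task might look like, the sketch below sends the OCR text of one card to an OpenAI-compatible chat endpoint and requests a fixed JSON structure; the model name, field list and sample card are assumptions, not Stanford's actual pipeline.

```python
# Minimal sketch, not Stanford's actual pipeline: turn the OCR text of one
# typewritten card into a JSON metadata record via an OpenAI-compatible
# chat endpoint. Model name, JSON fields and the example card are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are a cataloguing assistant. From the card text below,
return ONLY a JSON object with exactly these keys:
"species", "location", "date" (ISO 8601 if possible), "observer", "notes".
Use null for anything the card does not state. Do not invent values.

Card text:
{card_text}
"""

def card_to_json(card_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": PROMPT.format(card_text=card_text)}],
        response_format={"type": "json_object"},  # constrain output to JSON
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    sample = "Pacific herring observed off Monterey, 12 March 1952. Obs: J. Smith."
    print(json.dumps(card_to_json(sample), indent=2))
```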