International Identifier for serials
and other continuing resources, in the electronic and print world

2025/12/19

Signing of an MoU with the AI4LAM association

Since 2018, the National Library of Norway and Stanford University Library (USA) have collaborated to promote the adoption of artificial intelligence in archives, libraries, and museums (LAM). In 2024, the two institutions signed an agreement to establish an association, which will be set up gradually and will ultimately operate on contributions from its members.

In December 2025, the ISSN International Centre signed a memorandum of understanding (MoU) with the AI4LAM association. As a member, the ISSN International Centre will benefit from the work and tools developed by the network’s members, and will participate in working groups dealing with topics related to its 2029 action plan. Further information will be provided when the new AI4LAM association website is operational.

2025/12/18

Fantastic Futures 2025

This meeting has been held annually since 2018. It brings together heritage institutions seeking to improve access to their digitised collections and the quality of their metadata through artificial intelligence tools. The following projects are particularly relevant to the implementation of the 2029 action plan at the ISSN International Centre and, more broadly, to the Centre's wider familiarisation with AI.

The French Ministry of Culture has launched the Comparia website, which compares the performance of several generative artificial intelligences (https://comparia.beta.gouv.fr/). Users submit a query, which the site sends to two randomly selected AIs; each analyses the query and returns an answer. The user then rates the two answers and finally receives an assessment of the energy each AI consumed in producing its answer.

As part of their mission to preserve the memory of government institutions, the British government archives have developed a tool to create metadata from the vast quantity of documents they process. According to its website, the Apache Tika™ toolkit can detect and extract metadata and text from over a thousand different file types, such as PPT, XLS and PDF. These file types can all be processed through a single interface, making Tika useful for search engine indexing, content analysis, translation, and much more. This tool could be used to support publishers’ requests.
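The "single interface over a thousand file types" that Tika offers rests on a detect-then-dispatch pattern: identify the file type first, then route the bytes to a type-specific parser. The sketch below illustrates that pattern in plain Python; it is not Tika's own code, and the extractor is a hypothetical stand-in that only records the detected type.

```python
# Minimal sketch of detect-then-dispatch metadata extraction (not Apache
# Tika itself): match leading "magic bytes" to identify the file type,
# then hand off to one extraction entry point for all types.

MAGIC_NUMBERS = {
    b"%PDF-": "application/pdf",
    b"PK\x03\x04": "application/zip",                  # also XLSX/PPTX containers
    b"\xd0\xcf\x11\xe0": "application/x-ole-storage",  # legacy PPT/XLS
}

def detect(data: bytes) -> str:
    """Return a MIME type by matching leading magic bytes."""
    for magic, mime in MAGIC_NUMBERS.items():
        if data.startswith(magic):
            return mime
    return "application/octet-stream"

def extract_metadata(data: bytes) -> dict:
    """One entry point for every file type (Tika's parse() plays this role)."""
    mime = detect(data)
    # A real toolkit would now parse the container; we only record the type.
    return {"content_type": mime, "size_bytes": len(data)}

print(extract_metadata(b"%PDF-1.7 ..."))
```

In Tika proper, detection combines magic bytes, file names, and declared types, and each MIME type maps to a dedicated parser behind the same interface.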

The Library of Congress has implemented an AI tool that automatically creates metadata for digital or digitised monographs. This service is managed by Digirati (https://digirati.com/). Similarly, the Harvard Library uses Apache Airflow to automate the ingestion of resources with metadata extraction; the results are then compared, via Elasticsearch, to the indexing of resources already described in the library catalogue. Harvard also uses Better Binary Quantisation (BBQ) to store data in vector form. BBQ is described on the Elastic website as follows: ‘BBQ is a leap forward in quantisation for Lucene and Elasticsearch, reducing float32 dimensions to bits and delivering ~95% memory reduction while maintaining high ranking quality.’ It outperforms traditional approaches such as product quantisation (PQ) in terms of indexing speed (20–30 times faster) and query speed (2–5 times faster), with no loss of accuracy.
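The arithmetic behind the quoted ~95% memory reduction is straightforward: each float32 dimension (32 bits) is reduced to a single bit, and similarity becomes a cheap bit-count operation. The toy sketch below, in pure Python, shows the idea; it is an illustration of binary quantisation in general, not Lucene's actual BBQ algorithm, which adds error-correction terms to preserve ranking quality.

```python
# Toy illustration of binary quantisation (not Lucene's actual BBQ code):
# keep only the sign of each float32 dimension, pack the signs into bits,
# and compare vectors with a Hamming distance on the packed bits.

def binarize(vec: list[float]) -> int:
    """Pack a float vector into an int: bit i is set iff vec[i] > 0."""
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits -- the quantised distance measure."""
    return bin(a ^ b).count("1")

q = binarize([0.3, -1.2, 0.8, 0.05])   # dims 0, 2, 3 positive
d = binarize([0.1, -0.4, -0.9, 0.2])   # dims 0, 3 positive
print(hamming(q, d))                   # the vectors differ in dimension 2 only

# Memory: 4 float32 dims = 128 bits; 1 bit per dim = 4 bits. That is a 32x
# cut (~97% before overhead), in line with the ~95% figure Elastic reports.
```

Hamming distance on packed bits maps to hardware popcount instructions, which is where the quoted query-speed gains over product quantisation come from.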

The Yale Library, like the National Library of Luxembourg, is also producing metadata using AI. The Luxembourg library had to process a backlog of around 75,000 deposited digital files. ChatGPT 4.0 was initially used to generate metadata, but the results were disappointing for subject indexing, so ANNIF (https://annif.org/) was preferred. The National Libraries of Sweden and Germany are engaged in similar projects. The German National Library (DNB) has launched a project to improve the performance of generative AI in German: seventeen million digital publications were selected, including thirteen million periodicals, then reworked to anonymise the texts and modify them so that they are no longer subject to copyright. These texts have been ‘tokenised’ and will be used to train AI models in German.
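‘Tokenising’ a corpus for AI training means mapping text to integer IDs drawn from a learned vocabulary, since language models consume ID sequences rather than raw text. The sketch below shows the idea with a simple word-frequency vocabulary; it is a stand-in for the subword tokenisers (such as byte-pair encoding) used in practice, and is not the DNB's actual pipeline.

```python
# Toy tokeniser sketch: build an ID vocabulary from a corpus, then map
# text to ID sequences. Real training pipelines learn subword units
# (e.g. byte-pair encoding) instead of whole words, but the principle
# -- text in, integer IDs out -- is the same.

from collections import Counter

def build_vocab(corpus: list[str], size: int) -> dict[str, int]:
    """Assign IDs to the most frequent whitespace tokens; ID 0 is <unk>."""
    counts = Counter(word for text in corpus for word in text.lower().split())
    vocab = {"<unk>": 0}
    for word, _ in counts.most_common(size - 1):
        vocab[word] = len(vocab)
    return vocab

def tokenise(text: str, vocab: dict[str, int]) -> list[int]:
    """Map text to vocabulary IDs; unknown words fall back to <unk>."""
    return [vocab.get(w, 0) for w in text.lower().split()]

corpus = ["die Bibliothek sammelt Texte", "die Texte werden tokenisiert"]
vocab = build_vocab(corpus, size=8)
print(tokenise("die Texte der Bibliothek", vocab))
```

Because only ID sequences (plus the vocabulary) are needed for training, tokenised text is also one step further removed from the anonymised, copyright-cleared source documents.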

The KB, nationale bibliotheek (formerly the Koninklijke Bibliotheek of the Netherlands) has published a statement (https://www.kb.nl/en/ai-statement) announcing technical measures to limit the use of its digital collections by commercial companies training generative AI, particularly content from the Delpher site of digitised continuing resources (https://www.delpher.nl).

Finally, the Stanford Library presented a project involving the digitisation of typewritten cards containing marine biological observations. These cards were processed by AI to generate JSON metadata files. The presenters emphasised the importance of providing very detailed prompts to the AI.
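The workflow Stanford described, a detailed prompt asking the model for structured JSON, followed by validation of the reply before it is accepted as metadata, can be sketched as below. The field names, prompt wording, and sample reply are hypothetical illustrations; the actual model call is omitted.

```python
# Sketch of prompt-to-JSON metadata extraction with validation.
# Field names and the sample reply are hypothetical; in the real
# workflow the reply would come from an AI given the card image.

import json

PROMPT = """You are transcribing a typewritten card of marine biological
observations. Return ONLY a JSON object with exactly these keys:
"species" (string), "location" (string), "date" (YYYY-MM-DD string),
"observer" (string). Use null for any field you cannot read."""

REQUIRED_KEYS = {"species", "location", "date", "observer"}

def parse_card_reply(reply: str) -> dict:
    """Parse the model's reply; reject it unless every key is present."""
    record = json.loads(reply)
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"incomplete metadata, missing: {sorted(missing)}")
    return record

# A hypothetical well-formed reply:
sample = ('{"species": "Pisaster ochraceus", "location": "Monterey Bay", '
          '"date": "1932-07-14", "observer": null}')
print(parse_card_reply(sample)["species"])
```

Spelling out the exact keys, types, and a fallback for illegible fields in the prompt, as the presenters emphasised, is what makes the replies regular enough to validate mechanically.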

2025/11/19

Charleston Library Conference 2025

By Gaëlle Béquet, 19 November 2025

The 2025 Charleston Library Conference highlighted several major trends currently transforming the academic, documentary and publishing ecosystems in the United States of America. Universities are experiencing a significant decline in student enrolment, a trend exacerbated by a fall in the number of international students. This contraction directly impacts libraries, which are facing reduced acquisition budgets and consequently cutting back on subscriptions to journals and reference databases. In this context, publishers’ transformative agreements are receiving less support, particularly when they impose embargoes on open access content availability.

In scientific publishing, there are several signs of accelerating change. Wiley, for example, illustrates this paradoxical dynamic: while researchers are under increasing pressure to publish, libraries are buying less. Wiley is also continuing to grow in Asia, as evidenced by its office in Beijing with 75 employees, and it has recently published recommendations on the use of AI for authors. Many researchers now object to their articles or books being used to train AI models, while publishers are using detection tools to automatically identify content generated by AI in submitted manuscripts. Finally, several speakers raised the possibility of small publishing houses and learned societies disappearing, as they are threatened by increasing concentration in the sector.

The conference also addressed the evolving role of librarians. According to Lorcan Dempsey, proficiency in programming languages such as Python will be essential for effective interaction with the artificial intelligence systems that are becoming standard throughout the document chain.

Issues related to open access were another highlight of the discussions. Works available via OAPEN are widely captured by robots and AI systems that can bypass authentication mechanisms by mimicking human user behaviour. This automated retrieval disregards Creative Commons licences and undermines data protection efforts. Site managers can no longer rely on the declared identity of visitors, leading some to consider a radical scenario in which websites would no longer offer direct access to data and would instead delegate intermediation to AI, which would present the information to end users. At the same time, several participants noted a decline in the use of Google for information searches, with users increasingly turning to conversational AI tools instead.

The issue of scientific integrity was also addressed. Cabells now lists almost 19,000 questionable journals in its database. The term ‘grimpact’ has been proposed to describe the negative effects that biased research based on distorted data or unethical practices can cause.

Regarding infrastructure and tools, there were presentations on advances in the Collaborative Collection Lifecycle Project, notably the CYCLOPS prototype developed by Index Data, which is based on a NISO Recommended Practice currently open for public comment.

Finally, the conference provided an opportunity to discover new tools for libraries. These include digitisation and optical character recognition solutions adapted to non-Latin languages, such as spaCy (https://spacy.io). There was also mention of an inspiring collaboration between Lehigh University and Elsevier, which aims to create a chemistry literature analysis tool to save teachers valuable time when exploring the existing literature prior to conducting experiments.