One of the issues to be resolved by the CRISalid community, specifically within our "SoVisu+" project, is achieving a comprehensive recording of scientific production by research institutions.
The "SoVisu+ Harvester" project aims to provide a robust and flexible solution for parallel querying of the many platforms that reference researchers' publications : on the French level, archives like Hal, Sudoc, OpenEdition etc. and aggregators such as a Scanr, data.idref.fr; on the international level OpenAlex, Pubmed, Wos, Scopus...). Additionally, the tool ensures the conversion of these references to a common standardized model (inspired by the ABES SciencePlus model, which in turn is based on the most popular ontologies in the field), in order to record them in the institutional knowledge graph.
Institutions will then 'just' have to work on collecting and aligning their researchers' identifiers (IdRef, idHAL, ORCID, etc.), a task that can be aided by the mass alignment services offered by ABES/IdRef, and soon by the SoVisu+ software itself, which will offer this feature.
Solving one problem makes another worse
However, in addressing the challenge of gathering data from different platforms, the SoVisu+ Harvester introduces a new challenge around deduplication.
It's not unusual for bibliographic references to have been entered manually on several platforms, by different researchers or support staff, with slightly different metadata. At the scale of an institution, distinguishing real duplicates from false ones - publications on similar subjects by the same authors, variants of an article or derivative works, preprints, etc. - can be a significant challenge, especially when metadata are poor and identifiers such as DOIs are missing. Deduplication is one of those problems for which traditional algorithmic approaches, such as rule engines, have shown their limitations.
Don't GPT-4 or Llama 2 make the problem easy?
One might think that with the advent of the large language models of the GPT-3 generation, the solution to the problem is just around the corner: can't anyone see for themselves that a modern chatbot, given the appropriate prompt, is capable of discriminating between true and false duplicates?
And even if a general-purpose chatbot can't do it very well, wouldn't a little fine-tuning be enough for it to perform as well as a library professional? Simply deploy such an LLM behind an API and you're done.
Perhaps this is only due to the current state of the art of LLMs, but we see two problems with this idea:
- The first problem is that if an LLM has to compare all publications in pairs to determine which are duplicates, it will generate a huge number of queries, or extremely long prompts: comparing n publications pairwise means n(n-1)/2 comparisons, as the sketch below illustrates.
Whether the model is accessed in SaaS (software as a service) mode and billed per token, or deployed on your own GPUs, on-premises or in the cloud, you will either end up with a big bill or with long waiting times.
Therefore, in any case, a first step is needed to identify duplicate candidates, and this step should be designed economically: it cannot rely on recent, resource-intensive large language models.
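To give an order of magnitude (the collection sizes below are arbitrary, illustrative figures), a back-of-the-envelope calculation shows how quickly naive pairwise comparison explodes:

```python
from math import comb

# Naive pairwise deduplication needs n * (n - 1) / 2 comparisons.
# The collection sizes are arbitrary, purely illustrative figures.
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7,} references -> {comb(n, 2):>13,} pairwise LLM calls")
# Output:
#   1,000 references ->       499,500 pairwise LLM calls
#  10,000 references ->    49,995,000 pairwise LLM calls
# 100,000 references -> 4,999,950,000 pairwise LLM calls
```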
- The second problem is that the prompt engineering approach will inevitably require LLMs with billions of parameters.
Even once the first stage of identifying candidate duplicates is in place, it's still preferable to work cost-efficiently. Basically, the deduplication issue can be reduced to a classification problem, and this type of problem can typically be addressed with encoder-only transformers, such as the models derived from Google's BERT. So why should we need a generative LLM? Deploying generative AI where non-generative AI would do is not regarded as good practice, whether from an engineering, economic, or ecological perspective.
SVP-merger project: solving the problem of duplicates with transformers, in a non-generative way
Therefore, we would like to investigate an approach that doesn't rely on generative AI.
For duplicate candidate identification
We believe that the duplicate candidate identification stage could be achieved:
- either via semantic search (projection of bibliographic reference embeddings into a high-dimensional vector space and nearest neighbor search)
- or even simply with the good old scoring/ranking of similar records offered by Lucene-based search engines (Solr, Elasticsearch)
The good news is that we no longer have to choose between the two strategies: Solr and Elasticsearch both support representing text as dense vectors, and the latest versions of Elasticsearch even combine the two approaches (hybrid search with reciprocal rank fusion). However, if semantic search turns out to be the more effective approach, this raises the question of fine-tuning or even re-training the encoder model (see below), especially if we want to compute embeddings on the entire bibliographic record, and not just on the title.
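As a minimal sketch of this first stage, assuming an off-the-shelf multilingual bi-encoder and an in-memory index (in production the vectors would live in Solr or Elasticsearch), candidate retrieval could look like this:

```python
from sentence_transformers import SentenceTransformer, util

# Assumption: a general-purpose multilingual model; whether it needs fine-tuning
# (or re-training) on bibliographic records is an open question discussed below.
bi_encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Fictitious references, flattened to plain text for embedding.
corpus = [
    "Deep learning for bibliographic deduplication. Doe J., Martin A. 2021.",
    "Apprentissage profond pour le dédoublonnage bibliographique. Doe J., Martin A. 2021.",
    "A survey of knowledge graphs in research information systems. Smith B. 2020.",
]
corpus_embeddings = bi_encoder.encode(corpus, convert_to_tensor=True)

query = "Deep learning for bibliographic de-duplication (preprint). J. Doe, A. Martin, 2021."
query_embedding = bi_encoder.encode(query, convert_to_tensor=True)

# Nearest-neighbor search: only the top-k hits become candidate duplicate pairs,
# which keeps the expensive pairwise classification stage small.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 3))
```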
It's worth noting that this sequence of calls (semantic search, then LLM) is very similar to current RAG (retrieval augmented generation) architectures, which are becoming a real design pattern in the field. It may therefore be possible to implement it via an orchestrator such as LangChain, which offers, and this is good news, off-the-shelf integrations for Solr and Elasticsearch.
For duplicate resolution
Here comes the tricky part: automatically discriminating true duplicates from false ones.
While exploring the landscape of state-of-the-art tools for deduplicating bibliographic references, we were quite surprised to discover that no one seems to have tried to address the problem with a "cross-encoder" such as those provided by the popular Sentence Transformers ecosystem. It appears to us, however, that this is a straightforward and well-established way of solving deduplication problems involving natural language content. A few works have used BERT to compute semantic similarities between records, but they neither benefit from the Sentence Transformers architecture nor train classifiers (Gyawali 2020).
As a reminder, Sentence Transformers are an optimization of BERT-family models: they use a siamese BERT architecture to obtain better-quality embeddings at a higher level of granularity than the word. A downstream task (typically a classification task) can be plugged into the pre-trained model and then trained on a specific dataset, which also fine-tunes the model. The cross-encoder option is optimal whenever the task requires classifying sentences taken in pairs (cause and effect, synonyms/antonyms, duplicates, etc.).
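As a minimal sketch of how such a cross-encoder would be used at inference time (the model path is hypothetical, standing in for a cross-encoder fine-tuned on labeled duplicate/non-duplicate pairs, and the reference pairs are fictitious), classifying a candidate pair boils down to:

```python
from sentence_transformers import CrossEncoder

# Hypothetical path to a cross-encoder fine-tuned on duplicate / non-duplicate
# reference pairs; with num_labels=1, predict() returns a single score per pair.
cross_encoder = CrossEncoder("models/svp-merger-cross-encoder", num_labels=1)

candidate_pairs = [
    (  # likely a true duplicate: same work, slightly different metadata
        "Deep learning for bibliographic deduplication. Doe J., Martin A. 2021.",
        "Deep learning for bibliographic de-duplication (preprint). J. Doe, A. Martin, 2021.",
    ),
    (  # likely a false duplicate: different works
        "Deep learning for bibliographic deduplication. Doe J., Martin A. 2021.",
        "A survey of knowledge graphs in research information systems. Smith B. 2020.",
    ),
]

scores = cross_encoder.predict(candidate_pairs)  # sigmoid scores in [0, 1]
labels = ["duplicate" if score > 0.5 else "non-duplicate" for score in scores]
print(list(zip(labels, scores)))
```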
Solution overview
The diagram below shows the architecture of the proposed two-stage solution:
- Embedding of the bibliographic reference and nearest-neighbor search within a semantic index (or a hybrid index containing both structured metadata and vectors).
- Assignment of a "duplicate" or "non-duplicate" label to each candidate pair by a neural classifier based on a BERT cross-encoder.
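Putting the two stages together (reusing the assumed bi-encoder and cross-encoder from the previous sketches, with an arbitrary decision threshold), the pipeline for a single incoming reference could be orchestrated roughly as follows:

```python
from sentence_transformers import util

def find_duplicates(reference_text, corpus, corpus_embeddings,
                    bi_encoder, cross_encoder, top_k=10, threshold=0.5):
    """Illustrative two-stage pipeline: semantic retrieval, then pair classification."""
    # Stage 1: embed the incoming reference and retrieve its nearest neighbors.
    query_embedding = bi_encoder.encode(reference_text, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=top_k)[0]

    # Stage 2: score each (incoming reference, candidate) pair with the cross-encoder.
    pairs = [(reference_text, corpus[hit["corpus_id"]]) for hit in hits]
    scores = cross_encoder.predict(pairs)

    return [
        (corpus[hit["corpus_id"]], float(score))
        for hit, score in zip(hits, scores)
        if score > threshold  # arbitrary decision boundary, to be tuned on real data
    ]
```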
SVP-merger project challenges
Training the downstream classification task
The main disadvantage of the BERT cross-encoder approach compared to the use of generative AI is that zero-shot and few-shot querying methods are not available: training the BERT-based cross-encoder classifier will require training data of sufficient quality and quantity.
This obstacle is far from insurmountable. Indeed, we have several approaches at hand for generating this kind of training data:
- Our SVP Harvester tool is already advanced enough to provide us with real-life near-duplicates, which could then be manually annotated with an annotation-productivity tool such as Doccano.
- Another technique is to collect references with identical DOIs but whose metadata differs between platforms (Hammerton 2012); see the sketch after this list.
- And what about using a conversational LLM to generate training data from a palette of carefully chosen examples?
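As a sketch of the second approach (the structure of the harvested records below is an assumption about what SVP Harvester could expose, not its actual output format), positive training pairs can be mined by grouping records that share a DOI but differ in their surface metadata:

```python
from collections import defaultdict
from itertools import combinations

def mine_positive_pairs(harvested_records):
    """Group records by DOI and turn metadata variants into 'duplicate' examples.

    `harvested_records` is assumed to be a list of dicts such as
    {"doi": "...", "source": "hal", "title": "...", "contributors": [...]}.
    """
    by_doi = defaultdict(list)
    for record in harvested_records:
        if record.get("doi"):
            by_doi[record["doi"].lower()].append(record)

    pairs = []
    for records in by_doi.values():
        for a, b in combinations(records, 2):
            # Same DOI but differing surface metadata -> a useful "hard" positive.
            if a["title"] != b["title"] or a["source"] != b["source"]:
                pairs.append((a, b, 1))  # label 1 = duplicate
    return pairs
```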
The importance of this task of generating training datasets should not be underestimated: by packaging the data for platforms such as the Hugging Face Hub or Kaggle, we open the way to a process of continuous improvement of the model. This iterative process may even take a participative form through the organization of challenges.
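Once labeled pairs exist, regardless of how they were obtained, training the classifier itself is the standard Sentence Transformers cross-encoder recipe; a minimal sketch, assuming a multilingual BERT checkpoint and fictitious training pairs:

```python
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# Fictitious labeled pairs; in practice these come from annotation, DOI mining, etc.
train_examples = [
    InputExample(texts=["Deep learning for bibliographic deduplication. Doe J., Martin A. 2021.",
                        "Deep learning for bibliographic de-duplication (preprint). J. Doe, A. Martin, 2021."],
                 label=1.0),
    InputExample(texts=["Deep learning for bibliographic deduplication. Doe J., Martin A. 2021.",
                        "A survey of knowledge graphs in research information systems. Smith B. 2020."],
                 label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Assumption: a multilingual base checkpoint, since the metadata are not always in English.
cross_encoder = CrossEncoder("bert-base-multilingual-cased", num_labels=1)
cross_encoder.fit(train_dataloader=train_dataloader, epochs=1, warmup_steps=10)
cross_encoder.save("models/svp-merger-cross-encoder")  # hypothetical path loaded in the inference sketch above
```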
Re-training BERT from scratch? Hopefully not...
Another, more fundamental, uncertainty concerns the ability of BERT-based models to provide efficient vector representations of bibliographic references. The use of pre-trained models, like any transfer learning approach, rests on the idea that the model has extracted from its training corpus structural features that will be relevant to the target task. The corpora on which BERT and its derived models have been trained consist of sentences whose syntax and logic are not homologous to those of bibliographic references, apart from the title. As a result, it may prove necessary to re-train BERT, and then Sentence-BERT, on a corpus composed entirely of bibliographic references, which is a far more ambitious project.
A call for collaboration
We're aware that we're not the only team dealing with this problem, which is common in the fields of bibliometrics and research information management (for example, there seems to be a project underway at WorldCat). Our particular requirement is that deduplication must operate on bibliographic metadata for which an English version is not always available.
Don't hesitate to contact us
We are of course open to collaboration with other teams or individuals. If you are interested in contributing to the project in any way, if you have an alternative to the analysis proposed here, if you already have a similar project underway, if you can help build training data or already have data that could serve that purpose, or if you need such an application component for your own projects, please do not hesitate to contact us at contact@crisalid.org.