Arian Askari

Visiting Researcher @ University of Amsterdam | PhD candidate and Marie Skłodowska-Curie Research Fellow @ Leiden University | Text Mining | Information Retrieval | Large Language Models

About Me

As a PhD candidate and a Marie Skłodowska-Curie Research Fellow at Leiden University, I study large language models from a retrieval point of view and develop effective retrieval models for web and professional search. I have over four years of experience in data engineering, back-end development, and software engineering, and I have published multiple papers at prestigious conferences during my PhD and MSc. My passion lies in pushing the boundaries of information retrieval through the capabilities of LLMs, while applying these advances to real-world problems and practical applications.

I am also a Visiting Researcher at the University of Amsterdam, where I conduct research in collaboration with Evangelos Kanoulas and Mohammad Aliannejadi. My initial project during this visit investigated conversational search in the legal domain, and its outcome was recently published in the CIKM 2023 long paper track. Currently, my research revolves around training and analyzing Large Language Models (LLMs) for text retrieval tasks; an early outcome is a CIKM resource paper titled “A Test Collection of Synthetic Documents for Training Rankers: ChatGPT vs. Human Experts”, which analyzes whether ChatGPT’s responses can serve as training data for Q&A retrieval models.

Publications

EMNLP 2023 - Main track

Expand, Highlight, Generate: RL-driven Document Generation for Passage Reranking

Generating synthetic training data based on large language models (LLMs) for ranking models has gained attention recently. Prior studies use LLMs to build pseudo query-document pairs by generating synthetic queries from documents in a corpus. In this paper, we propose a new perspective on data augmentation: generating synthetic documents from queries. To achieve this, we propose DocGen, which consists of a three-step pipeline that utilizes the few-shot capabilities of LLMs. The DocGen pipeline performs synthetic document generation by (i) expanding, (ii) highlighting the original query, and then (iii) generating a synthetic document that is likely to be relevant to the query. To further improve the relevance between generated synthetic documents and their corresponding queries, we propose DocGen-RL, which regards the estimated relevance of the document as a reward and leverages reinforcement learning (RL) to optimize the DocGen pipeline. Extensive experiments demonstrate that the DocGen pipeline and DocGen-RL significantly outperform existing state-of-the-art data augmentation methods, such as InPars, indicating that our new perspective of generating documents leverages the capacity of LLMs in generating synthetic data more effectively. We release the code, generated data, and model checkpoints to foster research in this area.
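
The following is a minimal sketch, in Python, of the three-step prompting pipeline the abstract describes. The `llm` argument stands in for any few-shot-capable LLM API; the prompts and function names are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of the DocGen pipeline; `llm` is any callable that
    # maps a prompt string to a completion string.

    def expand(llm, query: str) -> str:
        """Step (i): expand the query with related terms."""
        return llm(
            "Expand the following search query with related terms:\n"
            f"Query: {query}\nExpanded query:"
        )

    def highlight(expanded: str, query: str) -> str:
        """Step (ii): mark the original query terms inside the expansion."""
        for term in query.split():
            expanded = expanded.replace(term, f"*{term}*")
        return expanded

    def generate_document(llm, highlighted: str) -> str:
        """Step (iii): generate a document likely to be relevant to the query."""
        return llm(
            "Write a passage answering the query below. Terms marked with * "
            f"are the most important.\nQuery: {highlighted}\nPassage:"
        )

    def docgen(llm, query: str) -> str:
        return generate_document(llm, highlight(expand(llm, query), query))

In DocGen-RL, the relevance that a ranker assigns to the (query, generated document) pair would then serve as the reward signal for optimizing this pipeline.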

ECIR 2024

Answer Retrieval in Legal Community Question Answering

The task of answer retrieval in the legal domain aims to help users seek relevant legal advice from massive amounts of professional responses. Two main challenges hinder applying answer retrieval approaches from other domains to the legal domain: (1) a huge knowledge gap between lawyers and non-professionals; and (2) a mix of informal and formal content on legal QA websites. To tackle these challenges, we propose CE_FS, a novel cross-encoder (CE) re-ranker based on fine-grained structured inputs. CE_FS uses additional structured information in the CQA data to improve the effectiveness of cross-encoder re-rankers. Furthermore, we propose LegalQA: a real-world benchmark dataset for evaluating answer retrieval in the legal domain. Experiments conducted on LegalQA show that our proposed method significantly outperforms strong cross-encoder re-rankers fine-tuned on MS MARCO. Our novel finding is that structurally adding the tags of each question, besides the question description and title, to the input of cross-encoder re-rankers boosts the rankers’ effectiveness. While we study our proposed method in the legal domain, we believe that it can be applied in similar applications in other domains.
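
As a rough illustration of the fine-grained structured input, the sketch below concatenates a question's title, description, and tags before scoring with a cross-encoder. The field separator and model checkpoint are illustrative assumptions, not necessarily the paper's choices.

    from sentence_transformers import CrossEncoder

    def build_query_text(title: str, description: str, tags: list[str]) -> str:
        # Tags are appended as an extra structured field alongside
        # the title and description.
        return f"{title} [SEP] {description} [SEP] {' '.join(tags)}"

    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    query = build_query_text(
        "Landlord kept my deposit",
        "My landlord refuses to return my security deposit after I moved out.",
        ["tenant-rights", "security-deposit"],
    )
    score = model.predict([(query, "In most jurisdictions, a landlord must return ...")])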

CIKM 2023

A Test Collection of Synthetic Documents for Training Rankers: ChatGPT vs. Human Experts

In this resource paper, we investigate the usefulness of generative Large Language Models (LLMs) in generating training data for cross-encoder re-rankers in a novel direction: generating synthetic documents instead of synthetic queries. We introduce a new dataset, ChatGPT-RetrievalQA, and compare the effectiveness of strong models fine-tuned on both LLM-generated and human-generated data. We build ChatGPT-RetrievalQA based on an existing dataset, the Human ChatGPT Comparison Corpus (HC3), consisting of public question collections with human responses and answers from ChatGPT. We fine-tune a range of cross-encoder re-rankers on either human-generated or ChatGPT-generated data. Our evaluation on MS MARCO DEV, TREC DL 2019, and TREC DL 2020 demonstrates that cross-encoder re-ranking models trained on LLM-generated responses are significantly more effective for out-of-domain re-ranking than those trained on human responses. For in-domain re-ranking, the human-trained re-rankers outperform the LLM-trained re-rankers. Our novel findings suggest that generative LLMs have high potential in generating training data for neural retrieval models and can be used to augment training data, especially in domains with smaller amounts of labeled data. We believe that our dataset, ChatGPT-RetrievalQA, presents various opportunities for analyzing and improving rankers with human and synthetic data. We release our data, code, and model checkpoints for future work.
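
A minimal sketch of the fine-tuning setup compared in the paper: train a cross-encoder re-ranker on (question, answer, label) triples drawn from either the human or the ChatGPT collection. The file names, base model, and hyperparameters below are illustrative assumptions.

    from sentence_transformers import CrossEncoder, InputExample
    from torch.utils.data import DataLoader

    def load_pairs(path):
        # Assumed format: question \t answer \t label (1 relevant, 0 not)
        examples = []
        with open(path) as f:
            for line in f:
                question, answer, label = line.rstrip("\n").split("\t")
                examples.append(InputExample(texts=[question, answer], label=float(label)))
        return examples

    train = load_pairs("chatgpt_responses.tsv")  # or "human_responses.tsv"
    model = CrossEncoder("bert-base-uncased", num_labels=1)
    model.fit(train_dataloader=DataLoader(train, shuffle=True, batch_size=16), epochs=1)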

CIKM 2023

CLosER: Conversational Legal Longformer with Expertise-Aware Passage Response Ranker for Long Contexts

In this paper, we investigate the task of response ranking in conversational legal search. We propose a novel method for conversational passage response retrieval (ConvPR) for long conversations in domains with mixed levels of expertise. Conversational legal search is challenging because the domain includes long, multi-participant dialogues with domain-specific language. Furthermore, as opposed to other domains, there typically is a large knowledge gap between the questioner (a layman) and the responders (lawyers) participating in the same conversation. We collect and release a large-scale real-world dataset called LegalConv with nearly one million legal conversations from a legal community question answering (CQA) platform. We address the particular challenges of processing legal conversations with our novel Conversational Legal Longformer with Expertise-Aware Response Ranker, called CLosER. The proposed method has two main innovations compared to state-of-the-art methods for ConvPR: (i) Expertise-Aware Post-Training, a learning objective that takes into account the knowledge gap between participants in the conversation; and (ii) a simple but effective strategy for re-ordering the context utterances in long conversations to overcome the limitations of the sparse attention mechanism of the Longformer architecture. Evaluation on our large collection shows that our proposed method substantially and significantly outperforms existing state-of-the-art models on the response selection task. Our analysis indicates that our Expertise-Aware Post-Training, i.e., continued pre-training or domain/task adaptation, plays an important role in the achieved effectiveness. Our proposed method is generalizable to other tasks with domain-specific challenges and can facilitate future research on conversational search in other domains.
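
To illustrate the re-ordering idea (not the paper's exact strategy), the sketch below scores each context utterance by recency plus lexical overlap with the current question and keeps the highest-scoring turns at the front, where a Longformer's local attention window can attend to them. The heuristic is an assumption for illustration only.

    def reorder_context(utterances: list[str], question: str, keep: int = 8) -> list[str]:
        q_terms = set(question.lower().split())

        def score(item):
            idx, utt = item
            overlap = len(q_terms & set(utt.lower().split()))
            recency = idx / max(len(utterances) - 1, 1)  # later turns score higher
            return overlap + recency

        ranked = sorted(enumerate(utterances), key=score, reverse=True)
        return [utt for _, utt in ranked[:keep]]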

ECIR 2023

Injecting the BM25 Score as Text Improves BERT-Based Re-rankers

In this paper, we propose a novel approach for combining first-stage lexical retrieval models and Transformer-based re-rankers: we inject the relevance score of the lexical model as a token in the middle of the input of the cross-encoder re-ranker. Prior work has shown that interpolation between the relevance scores of lexical and BERT-based re-rankers may not consistently result in higher effectiveness. Our idea is motivated by the finding that BERT models can capture numeric information. We compare several representations of the BM25 score and inject them as text into the input of four different cross-encoders. We additionally analyze the effect for different query types, and investigate the effectiveness of our method for capturing exact-matching relevance. Evaluation on the MS MARCO Passage collection and the TREC DL collections shows that the proposed method significantly improves over all cross-encoder re-rankers as well as the common interpolation methods. We show that the improvement is consistent for all query types. We also find an improvement in exact-matching capabilities over both BM25 and the cross-encoders. Our findings indicate that cross-encoder re-rankers can efficiently be improved, without additional computational burden or extra steps in the pipeline, by explicitly adding the output of the first-stage ranker to the model input, and that this effect is robust across models and query types.
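
The core idea can be sketched in a few lines: verbalize the first-stage BM25 score and place it as plain text between the query and the document in the cross-encoder input. The exact verbalization and rounding below are assumptions; the paper compares several representations of the score.

    from sentence_transformers import CrossEncoder

    def inject_bm25(query: str, doc: str, bm25_score: float) -> tuple[str, str]:
        # The cross-encoder then sees: [CLS] query bm25 <score> [SEP] document [SEP]
        return f"{query} bm25 {round(bm25_score, 1)}", doc

    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    pair = inject_bm25("what is information retrieval", "Information retrieval is ...", 17.4)
    score = model.predict([pair])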

SIGIR 2019

Anonymous Commenting: A Greedy Approach to Balance Utilization and Anonymity for Instagram Users

In many online services, anonymous commenting is not possible; as a result, users cannot express their critical opinions without worrying about the consequences. The naïve approaches to anonymous commenting that are currently available cause problems for analytical services that run on user comments. In this paper, we explore anonymous commenting approaches and their pros and cons. We also propose methods for anonymous commenting that protect user privacy while still allowing sentiment analytics for service providers. Our experiments, conducted on a real dataset gathered from Instagram comments, indicate the effectiveness of our proposed methods in both privacy protection and sentiment analytics. The proposed methods are independent of any particular website and can be utilized in various domains.

ECIR 2022

Expert Finding in Legal Community Question Answering

Expert finding has been well-studied in community question answering (QA) systems in various domains. However, none of these studies addresses expert finding in the legal domain, where the goal is for citizens to find lawyers based on their expertise. In the legal domain, there is a large knowledge gap between the experts and the searchers, and the content on legal QA websites consists of a combination of formal and informal communication. In this paper, we propose methods for generating query-dependent textual profiles for lawyers that cover several aspects, including sentiment, comments, and recency. We combine these query-dependent profiles with existing expert finding methods. Our experiments are conducted on a novel dataset gathered from an online legal QA service. We find that taking different lawyer profile aspects into account improves over the best baseline model. We make our dataset publicly available for future work.
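
As a toy illustration of a query-dependent textual profile (the selection heuristic is an assumption, not the paper's method), one could gather a lawyer's past answers, keep those most similar to the query, and concatenate them into a single profile document for an expert-finding ranker:

    def build_profile(query: str, past_answers: list[str], top_k: int = 5) -> str:
        # Rank past answers by term overlap with the query and join the best ones.
        q = set(query.lower().split())
        ranked = sorted(
            past_answers,
            key=lambda a: len(q & set(a.lower().split())),
            reverse=True,
        )
        return " ".join(ranked[:top_k])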

CIKM 2018

Query Understanding via Entity Attribute Identification

Understanding searchers’ queries is an essential component of semantic search systems. In many cases, search queries involve specific attributes of an entity in a knowledge base (KB), which can be further used to find query answers. In this study, we aim to advance query understanding by identifying queries’ related entity attributes from a knowledge base. To this end, we introduce the task of entity attribute identification and propose two methods to address it: (i) a model based on Markov Random Fields, and (ii) a learning-to-rank model. We develop a human-annotated test collection and show that our proposed methods bring significant improvements over the baseline methods.
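
Viewed as a ranking problem, the task can be sketched with a toy scorer that ranks a KB entity's attributes by lexical overlap with the query; this stands in for the Markov Random Field and learning-to-rank models of the paper.

    def rank_attributes(query: str, attributes: dict[str, str]) -> list[tuple[str, float]]:
        # Score each attribute by how many query terms appear in its
        # name or value, normalized by query length.
        q_terms = set(query.lower().split())
        scored = []
        for name, value in attributes.items():
            terms = set(name.lower().replace("_", " ").split()) | set(value.lower().split())
            scored.append((name, len(q_terms & terms) / max(len(q_terms), 1)))
        return sorted(scored, key=lambda x: x[1], reverse=True)

    # e.g. rank_attributes("einstein birth date", {"birth_date": "14 March 1879"})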

DESIRES 2021

Combining lexical and neural retrieval with longformer-based summarization for effective case law retrieval

In this paper, we combine lexical and neural ranking models for case law retrieval. In this task, the query is a full case document, and the candidate documents are prior cases that are potentially relevant to the current case. Most documents are longer than 1024 tokens, which makes retrieval and classification with Transformer-based models problematic. We create shorter query documents with different methods: term extraction, noun phrase extraction, entity extraction, and automatic summarization using Longformer-Encoder-Decoder (LED). We then combine the summaries with five different ranking models: a BM25 ranker, statistical language modelling, the Deep Relevance Matching Model (DRMM), a Vanilla BERT ranker, and a Longformer ranker. We optimised all models and combined the best lexical ranker with neural retrieval models using different ensemble classifiers. We evaluate our methods on the retrieval benchmarks from COLIEE’20 and COLIEE’21. We beat state-of-the-art models for case law retrieval with both benchmark sets. Our experiments show the importance of tuning lexical retrieval methods, summarizing query documents, and combining lexical and neural models into one ranker for effective case law retrieval.
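
One of the query-shortening methods, LED summarization, can be sketched as follows. The checkpoint and generation settings are illustrative; in practice a summarization-fine-tuned LED checkpoint would be used.

    from transformers import AutoTokenizer, LEDForConditionalGeneration

    tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")
    model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

    def summarize_case(text: str, max_new_tokens: int = 256) -> str:
        # Encode up to 16k tokens of the query case and generate a short
        # summary to use as the retrieval query.
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=16384)
        summary_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
        return tokenizer.decode(summary_ids[0], skip_special_tokens=True)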

COLIEE 2022

LeiBi@COLIEE 2022: Aggregating Tuned Lexical Models with a Cluster-driven BERT-based Model for Case Law Retrieval

This paper summarizes our approaches submitted to the case law retrieval task in the Competition on Legal Information Extraction/Entailment (COLIEE) 2022. Our methodology consists of four steps; in detail, given a legal case as a query, we reformulate it by extracting various meaningful sentences or n-grams. Then, we utilize the pre-processed query case to retrieve an initial set of possible relevant legal cases, which we further re-rank. Lastly, we aggregate the relevance scores obtained by the first stage and the re-ranking models to improve retrieval effectiveness. In each step of our methodology, we explore various well-known and novel methods. In particular, to reformulate the query cases aiming to make them shorter, we extract unigrams using three different statistical methods: KLI, PLM, IDF-r, as well as models that leverage embeddings (e.g., KeyBERT). Moreover, we investigate if automatic summarization using Longformer-Encoder-Decoder (LED) can produce an effective query representation for this retrieval task. Furthermore, we propose a novel re-ranking cluster-driven approach, which leverages Sentence-BERT models that are pre-tuned on large amounts of data for embedding sentences from query and candidate documents. Finally, we employ a linear aggregation method to combine the relevance scores obtained by traditional IR models and neural-based models, aiming to incorporate the semantic understanding of neural models and the statistically measured topical relevance. We show that aggregating these relevance scores can improve the overall retrieval effectiveness.
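
The final aggregation step can be sketched as a per-query min-max normalization of the lexical and neural scores followed by a linear combination; the weight `alpha` is a tunable assumption.

    def minmax(scores: dict[str, float]) -> dict[str, float]:
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        return {doc: (s - lo) / span for doc, s in scores.items()}

    def aggregate(lexical: dict[str, float], neural: dict[str, float], alpha: float = 0.5):
        # Linearly combine normalized scores for documents ranked by both models.
        lex, neu = minmax(lexical), minmax(neural)
        docs = lex.keys() & neu.keys()
        fused = {d: alpha * lex[d] + (1 - alpha) * neu[d] for d in docs}
        return sorted(fused.items(), key=lambda x: x[1], reverse=True)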

Data

ChatGPT-RetrievalQA

ChatGPT-RetrievalQA: Can ChatGPT's responses act as training data for Q&A retrieval models?

A dataset for training and evaluating Question Answering (QA) retrieval models on ChatGPT responses, with the possibility of training and evaluating on real human responses. Given a set of questions and the corresponding ChatGPT and human responses, we build two separate collections: one from ChatGPT and one from humans. By doing so, we provide several analysis opportunities from an information retrieval perspective regarding the usefulness of ChatGPT responses for training retrieval models. We provide the dataset for both an end-to-end retrieval setup and a re-ranking setup. To give flexibility for other analyses, we organize all the files separately for ChatGPT and human responses. While ChatGPT is a powerful language model that can produce impressive answers, it is not immune to mistakes or hallucinations. Furthermore, the source of the information generated by ChatGPT is not transparent, and usually there is no source for the generated information even when it is correct. This is a bigger concern in domains such as law, medicine, science, and other professional fields where trustworthiness and accountability are critical. Retrieval models, as opposed to generative models, retrieve the actual (true) information from sources, and search engines provide the source of each retrieved item. This is why information retrieval, even when ChatGPT is available, remains an important application, especially in situations where reliability is vital.
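
As a sketch of how the two parallel collections could be derived from HC3-style records, the snippet below splits each record's human and ChatGPT answers into separate collections; the field names are assumptions about the schema, not the dataset's documented format.

    import json

    def split_collections(path: str):
        # One JSON record per line: a question id plus human and ChatGPT answers.
        human_coll, chatgpt_coll = {}, {}
        with open(path) as f:
            for line in f:
                rec = json.loads(line)
                human_coll[rec["id"]] = rec["human_answers"]
                chatgpt_coll[rec["id"]] = rec["chatgpt_answers"]
        return human_coll, chatgpt_coll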

Invited talks

Talk at Leiden Data Science Meetup on GenAI for Pharma and Biotech

Generating Synthetic Documents for Cross-Encoder Re-Rankers: A comparative study of ChatGPT versus human experts

I presented our latest paper, accepted at CIKM 2023, and talked about the usefulness of generative Large Language Models (LLMs), particularly ChatGPT, in generating training data for cross-encoder re-rankers. What made it fascinating to me was the audience’s anticipation before they saw the results. A key highlight of my talk was that ChatGPT-trained cross-encoder re-rankers significantly outperformed their human-trained counterparts on out-of-domain datasets. This raises questions about the potential for LLMs to replace humans in data augmentation, generating synthetic documents for information retrieval models in the future.

Talk at Amazon AI reading group

Expand, Highlight, Generate: RL-driven Document Generation for Passage Reranking

I presented our EMNLP 2023 paper, “Expand, Highlight, Generate: RL-driven Document Generation for Passage Reranking”, at the Amazon AI reading group in January 2024.

Academic activities

I have been a reviewer for the full paper track of the following conferences:

  • ACL 2023
  • SIGIR 2022, 2023
  • WWW 2023
  • ECIR 2022, 2023
  • EACL 2023
  • ACL-IJCNLP 2021

I have been a reviewer for the following journal:

  • Artificial Intelligence In Medicine 2022 (Volume 128)

I have been a member of the organizing team for the following workshop:

  • Dutch-Belgian Information Retrieval Workshop (DIR 2021)

I have been a teaching assistant for the following courses:

  • Information Retrieval, Spring 2022, Leiden University (LIACS)
  • Information Retrieval, Fall 2018, Shahid Beheshti University
  • Database Lab, Spring 2018, Shahid Beheshti University
  • Database Lab, Spring 2017, Institute for Higher Education ACECR Khuzestan
  • Operating Systems Lab, Fall 2016, Institute for Higher Education ACECR Khuzestan

A Little More About Me

To be written!