Investigation of Simple-but-Effective Architecture for Long-form Text Matching with Transformers

Chen Shen, Jin Wang
Long-form text matching plays a significant role in many real-world Natural Language Processing (NLP) and Information Retrieval (IR) applications. Recently, Transformer-based models such as BERT have been widely applied to this problem and have achieved promising results. However, they are all built on the Siamese Network architecture and thus require extra techniques to capture matching signals and remedy the problem of late interaction. In this paper, we investigate the sequence pair classification architecture as a solution to long-form text matching: we concatenate the pair of long-form texts into one sequence, which is fed into a pre-trained language model for fine-tuning. Initial experimental results show that this simple baseline can outperform state-of-the-art approaches in the field without further optimization. These findings illustrate that sequence pair classification, which has not been explored by previous studies, is a promising solution to this problem. We also conduct an in-depth empirical analysis to present more comprehensive results supporting our claim and to provide further insights for researchers in this direction.
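As a minimal sketch of the sequence pair classification input described above, the two long-form texts can be concatenated into one sequence with BERT-style special tokens and truncated to the model's maximum length. The whitespace "tokenizer", the function name, and the head-truncation policy here are illustrative assumptions, not the authors' exact implementation:

```python
def build_pair_input(text_a, text_b, max_len=512):
    """Concatenate two texts into one sequence: [CLS] A [SEP] B [SEP].

    Long-form texts usually exceed max_len, so the longer segment is
    truncated from the tail until the pair fits (a simple assumed policy;
    real systems may truncate differently).
    """
    # Stand-in for a real subword tokenizer.
    tok_a = text_a.split()
    tok_b = text_b.split()
    budget = max_len - 3  # reserve [CLS] and two [SEP] tokens
    while len(tok_a) + len(tok_b) > budget:
        if len(tok_a) >= len(tok_b):
            tok_a.pop()
        else:
            tok_b.pop()
    tokens = ["[CLS]"] + tok_a + ["[SEP]"] + tok_b + ["[SEP]"]
    # Segment ids let the model distinguish the two texts,
    # enabling full cross-attention between them in every layer.
    segment_ids = [0] * (len(tok_a) + 2) + [1] * (len(tok_b) + 1)
    return tokens, segment_ids
```

Because the two texts share one input sequence, every Transformer layer attends across them, which is precisely the early interaction that Siamese architectures lack.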