
Investigating the Effects of Pre-Trained BERT to Improve Sparse Data Recommender Systems

EasyChair Preprint no. 6743

7 pages · Date: October 3, 2021

Abstract

Recommender systems play an important role in many natural language processing applications, such as e-commerce services. Matrix factorization (MF) is a powerful method for recommender systems, but it suffers from the sparse data problem. To overcome this problem, some previous models use neural networks to represent additional information, such as product item reviews, to enhance MF-based methods, and they obtain improvements in recommender systems. However, these models use conventional pre-trained word embeddings, which raises the question of whether recent powerful models such as pre-trained BERT can further improve these review-enhanced MF-based methods. In this work, we investigate the effect of utilizing a pre-trained BERT model to improve several previous models, focusing in particular on specific sparse data settings. Experimental results and intensive analyses on the MovieLens dataset show promising findings for our model, which open directions for improving this ongoing model to solve the sparse data problem in MF-based recommender systems.
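To make the idea concrete, the following is a minimal, hypothetical sketch of review-enhanced matrix factorization: item factors are augmented with fixed text-derived vectors. Here random vectors stand in for pre-trained BERT embeddings of item reviews (the paper's actual model, architecture, and hyperparameters are not reproduced); the sketch only illustrates how fixed review representations can be combined with learned factors on a sparse rating matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, n_items, k = 8, 6, 4
# Sparse rating matrix: 0 means "unobserved" (about 30% of entries observed).
R = np.where(rng.random((n_users, n_items)) < 0.3,
             rng.integers(1, 6, size=(n_users, n_items)), 0)
mask = R > 0

# Stand-in for pre-trained BERT review embeddings, projected to k dims.
# In the paper's setting these would come from encoding item reviews.
review_emb = 0.1 * rng.normal(size=(n_items, k))

P = 0.1 * rng.normal(size=(n_users, k))  # user factors
Q = 0.1 * rng.normal(size=(n_items, k))  # learned item factors (residual)

lr, reg = 0.01, 0.05
for _ in range(500):
    V = Q + review_emb            # item representation = learned + review part
    E = mask * (R - P @ V.T)      # error on observed entries only
    P += lr * (E @ V - reg * P)   # full-batch gradient steps
    Q += lr * (E.T @ P - reg * Q)

pred = P @ (Q + review_emb).T
rmse = np.sqrt(((mask * (R - pred)) ** 2).sum() / mask.sum())
```

Because the review vectors are fixed, an item with few or no ratings still receives a non-trivial representation, which is the intuition behind using review text to mitigate data sparsity.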

Keyphrases: collaborative filtering, convolutional neural networks, matrix factorization, pre-trained BERT, recommender systems

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:6743,
  author       = {Nguyen Huy Xuan and Le Minh Nguyen and Long H. Trieu},
  title        = {Investigating the Effects of Pre-Trained BERT to Improve Sparse Data Recommender Systems},
  howpublished = {EasyChair Preprint no. 6743},
  year         = {EasyChair, 2021}}