
HealthMavericks@MEDIQA-Chat 2023: Benchmarking Different Transformer Based Models for Clinical Dialogue Summarization

EasyChair Preprint no. 10545

18 pages
Date: July 12, 2023

Abstract

In recent years, many Transformer-based models have been created to address the dialogue summarization problem. While there has been a lot of work on understanding how these models stack up against each other in summarizing everyday conversations, such as the ones found in the DialogSum dataset, there have been few analyses of these models on clinical dialogue summarization. In this article, we describe our solution to the MEDIQA-Chat 2023 Shared Tasks, part of the ACL-ClinicalNLP 2023 workshop, in which we benchmark several popular Transformer architectures, namely BioBart, Flan-T5, DialogLED, and OpenAI GPT-3, on the problem of clinical dialogue summarization. We analyse their performance on two tasks: summarizing short conversations and summarizing long conversations. In addition, we benchmark two popular summarization ensemble methods and report their performance.
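As an illustration of the kind of zero-shot setup benchmarked in this work, the minimal sketch below summarizes a short doctor-patient exchange with Flan-T5 via Hugging Face Transformers. The checkpoint (google/flan-t5-base), the prompt wording, and the example dialogue are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: zero-shot clinical dialogue summarization with Flan-T5.
# Assumes the `transformers` library and a PyTorch backend are installed.
from transformers import pipeline

# Checkpoint chosen for illustration; the paper also benchmarks
# BioBart, DialogLED, and OpenAI GPT-3.
summarizer = pipeline("text2text-generation", model="google/flan-t5-base")

# Hypothetical short doctor-patient conversation.
dialogue = (
    "Doctor: What brings you in today?\n"
    "Patient: I've had a persistent cough and mild fever for three days.\n"
    "Doctor: Any shortness of breath?\n"
    "Patient: No, just the cough."
)

# Instruction-style prompt; wording is an assumption, not the paper's prompt.
prompt = "Summarize the following doctor-patient conversation:\n" + dialogue

summary = summarizer(prompt, max_new_tokens=64)[0]["generated_text"]
print(summary)
```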

Keyphrases: Clinical Dialog Summarization, clinical text, Healthcare, LLMs, Reproducible AI, Summarization, transformers

BibTeX entry
BibTeX does not have the right entry type for preprints, so the following is a workaround for producing the correct reference:
@Booklet{EasyChair:10545,
  author = {Kunal Suri and Saumajit Saha and Atul Singh},
  title = {HealthMavericks@MEDIQA-Chat 2023: Benchmarking Different Transformer Based Models for Clinical Dialogue Summarization},
  howpublished = {EasyChair Preprint no. 10545},
  year = {EasyChair, 2023}
}