
Video Based Fire Detection Using Xception and ConvLSTM

EasyChair Preprint no. 4306

10 pages · Date: October 1, 2020

Abstract

Immediate detection of wildfires can aid firefighters in saving lives. The research community has invested considerable effort in detecting fires using vision-based systems, due to their ability to monitor vast open spaces. Most current state-of-the-art vision-based systems operate on individual images, limiting them to spatial features only. This paper presents a novel system that exploits the spatio-temporal information available within a video sequence to classify a scene as fire or non-fire. The system, in its initial step, selects 15 key frames from an input video sequence. The frame selection step allows the system to capture the entire movement available in a video sequence regardless of its duration. The spatio-temporal information among those frames is then captured using a deep convolutional neural network (CNN) called Xception, pre-trained on ImageNet, and a convolutional long short-term memory network (ConvLSTM). The system is evaluated on a challenging new dataset, presented in this paper, containing 70 fire and 70 non-fire sequences. The dataset contains aerial shots of fire and fire-like sequences, such as fog, sunrise and bright flashing objects, captured using a dynamic/moving camera for an average duration of 13 seconds. The classification accuracy of 95.83% highlights the effectiveness of the proposed system in tackling such challenging scenarios.
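The abstract outlines a two-stage pipeline: 15 key frames are sampled from the input video, per-frame spatial features are extracted with an ImageNet-pretrained Xception, and a ConvLSTM models the temporal evolution of those features before a fire/non-fire decision is made. The sketch below shows one way such a pipeline could be assembled in TensorFlow/Keras; the uniform frame sampling, the 224x224 input resolution, the frozen backbone and the layer sizes are assumptions for illustration, not the authors' exact configuration.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

NUM_FRAMES = 15          # key frames selected per video (from the abstract)
FRAME_SIZE = (224, 224)  # assumed input resolution

def select_key_frames(video_frames, num_frames=NUM_FRAMES):
    """Pick frames spread uniformly over the whole video, so the sampled
    clip covers the full motion regardless of duration. (Uniform sampling
    is an assumption; the paper may use a different selection criterion.)"""
    idx = np.linspace(0, len(video_frames) - 1, num_frames).astype(int)
    return [video_frames[i] for i in idx]

def build_model():
    # Per-frame spatial features from Xception, pre-trained on ImageNet.
    backbone = Xception(weights="imagenet", include_top=False,
                        input_shape=(*FRAME_SIZE, 3))
    backbone.trainable = False  # assumed frozen, used as a feature extractor

    inputs = layers.Input(shape=(NUM_FRAMES, *FRAME_SIZE, 3))
    # Apply the CNN to every frame while keeping the time dimension.
    features = layers.TimeDistributed(backbone)(inputs)
    # The ConvLSTM captures the spatio-temporal evolution across frames.
    x = layers.ConvLSTM2D(filters=64, kernel_size=3, padding="same")(features)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # fire vs. non-fire
    return models.Model(inputs, outputs)

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])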

Keyphrases: deep learning, fire detection, video processing

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:4306,
  author       = {Tanmay Verlekar and Alexandre Bernardino},
  title        = {Video Based Fire Detection Using Xception and ConvLSTM},
  howpublished = {EasyChair Preprint no. 4306},
  year         = {EasyChair, 2020}}