
Stateful Premise Selection by Recurrent Neural Networks

14 pages · Published: May 27, 2020

Abstract

In this work we develop a new learning-based method for selecting facts (premises) when proving new goals over large formal libraries. Unlike previous methods, which choose sets of facts independently of each other based on their rank, the new method uses a notion of state that is updated each time a fact is chosen. Our stateful architecture is based on recurrent neural networks, which have recently been very successful in stateful tasks such as language translation. The new method is combined with data augmentation techniques, evaluated in several ways on a standard large-theory benchmark, and compared to a state-of-the-art premise selection approach based on gradient boosted trees. It is shown to perform significantly better and to solve many new problems.
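
The abstract's key idea is that premise selection is conditioned on a state updated after every chosen fact, rather than ranking each fact independently. The paper page does not include code, so the following is only a minimal illustrative sketch of that idea, not the authors' architecture: the class name, the GRU cell, the dot-product scoring, and all dimensions are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class StatefulPremiseSelector(nn.Module):
    """Hypothetical sketch: greedily pick premises, scoring each
    candidate against a hidden state that is updated after every
    selection, so later choices depend on earlier ones."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        # A GRU cell carries the "state" updated after each chosen fact.
        self.rnn = nn.GRUCell(embed_dim, embed_dim)

    def select(self, goal_vec, premise_vecs, k: int):
        # goal_vec: (embed_dim,) embedding of the goal to prove
        # premise_vecs: (n, embed_dim) embeddings of candidate premises
        state = goal_vec.clone()  # initialise the state from the goal
        chosen, available = [], list(range(premise_vecs.size(0)))
        for _ in range(min(k, len(available))):
            cand = premise_vecs[available]   # remaining candidates
            scores = cand @ state            # dot-product relevance to state
            best = available[int(scores.argmax())]
            chosen.append(best)
            available.remove(best)
            # Update the state with the premise just selected, so the
            # next choice is made relative to what was already picked.
            state = self.rnn(premise_vecs[best].unsqueeze(0),
                             state.unsqueeze(0)).squeeze(0)
        return chosen

# Usage: pick 3 premises for a goal from 10 candidates (random embeddings).
selector = StatefulPremiseSelector(embed_dim=64)
goal = torch.randn(64)
premises = torch.randn(10, 64)
print(selector.select(goal, premises, k=3))
```

The contrast with stateless ranking is in the loop: a rank-based selector would compute `scores` once against the goal and take the top k, whereas here each selection feeds back into the state before the next score is computed.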

Keyphrases: automated theorem proving, machine learning, recurrent neural networks

In: Elvira Albert and Laura Kovács (editors). LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning. EPiC Series in Computing, vol 73, pages 409--422

BibTeX entry
@inproceedings{LPAR23:Stateful_Premise_Selection_by,
  author    = {Bartosz Piotrowski and Josef Urban},
  title     = {Stateful Premise Selection by Recurrent Neural Networks},
  booktitle = {LPAR-23: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning},
  editor    = {Elvira Albert and Laura Kov\'{a}cs},
  series    = {EPiC Series in Computing},
  volume    = {73},
  pages     = {409--422},
  year      = {2020},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {https://easychair.org/publications/paper/g38n},
  doi       = {10.29007/j5hd}}