Seminars

Links' Seminars and Public Events
2023
Fri 23rd Jun
11:00 am
12:00 pm
Seminar by Florent Capelli
Speaker: Florent Capelli — florent.capelli.me/

Title: A simpler FPRAS for nOBDD

Abstract: In this talk, we revisit the algorithm by Arenas, Croquevielle, Jayaram and Riveros that approximates the number of words of length n accepted by a non-deterministic finite automaton. We explain the algorithm and its techniques in a modular and general way, without tying them to the particular case of counting words of an automaton. We illustrate the soundness of the approach by applying it to the problem of approximately counting the satisfying assignments of a non-deterministic OBDD.
B21
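
For intuition about the counting problem behind this talk, here is a minimal sketch of exact counting of the length-n words accepted by an NFA via on-the-fly subset construction. This is not the FPRAS of Arenas, Croquevielle, Jayaram and Riveros; it is the exact baseline whose worst-case exponential cost motivates the approximation scheme. The function name and NFA encoding are illustrative assumptions, not code from the talk.

```python
def count_words_exact(delta, start, finals, alphabet, n):
    """Exact count of length-n words accepted by an NFA.

    delta: dict mapping (state, letter) -> iterable of successor states.
    Uses on-the-fly subset construction, so it is exponential in the number
    of NFA states in the worst case; the FPRAS discussed in the talk trades
    this exactness for a polynomial-time approximation.
    """
    # table: reachable determinised state (a frozenset of NFA states)
    #        -> number of length-i words reaching it
    table = {frozenset([start]): 1}
    for _ in range(n):
        nxt = {}
        for subset, cnt in table.items():
            for a in alphabet:
                succ = frozenset(q2 for q in subset
                                 for q2 in delta.get((q, a), ()))
                nxt[succ] = nxt.get(succ, 0) + cnt
        table = nxt
    return sum(cnt for subset, cnt in table.items() if subset & finals)

# Toy usage: words over {a, b} of length 3 containing at least one 'a'.
delta = {("q0", "a"): ["q1"], ("q0", "b"): ["q0"],
         ("q1", "a"): ["q1"], ("q1", "b"): ["q1"]}
print(count_words_exact(delta, "q0", {"q1"}, "ab", 3))  # prints 7
```
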
Fri 2nd Jun
11:00 am
12:30 pm
Seminar by Martin Berger
Title: Search-Based Regular Expression Inference on a GPU

Abstract: Regular expression inference (REI) is a supervised machine
learning and program synthesis problem that takes a cost metric for regular
expressions, and positive and negative examples of strings as input. It
outputs a regular expression that is precise (i.e., accepts all positive
and rejects all negative examples), and minimal w.r.t. the cost metric.
We present a novel algorithm for REI over arbitrary alphabets that is
enumerative and trades off time for space. Our main algorithmic idea is to
implement the search space of regular expressions succinctly as a
contiguous matrix of bitvectors. Collectively, the bitvectors represent, as
characteristic sequences, all sub-languages of the infix-closure of the
union of positive and negative examples. Mathematically, this is a semiring
of (a variant of) formal power series. Infix-closure enables bottom-up
compositional construction of larger from smaller regular expressions using
the operations of our semiring. This minimises data movement and
data-dependent branching, hence maximises data-parallelism. In addition,
the infix-closure remains unchanged during the search, hence search can be
staged: first pre-compute various expensive operations, and then run the
compute-intensive search process. We provide two C++ implementations, one
for general purpose CPUs and one for Nvidia GPUs (using CUDA). We benchmark
both on Google Colab Pro: the GPU implementation is on average over 1000x
faster than the CPU implementation on the hardest benchmarks.

Joint work with Mojtaba Valizadeh

Download: martinfriedrichberger.net/pldi2023.html
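
As a rough illustration of the central data structure described in the abstract above (a sketch only, with names of my own choosing, not the paper's C++/CUDA implementation): sub-languages of the infix-closure of the examples can be represented as characteristic bitvectors, with union as bitwise OR and concatenation re-restricted to the infix-closure.

```python
def infix_closure(words):
    """All infixes (contiguous substrings, including the empty word) of the
    positive and negative example strings, in a fixed order so that bit k
    always denotes the same infix."""
    infixes = {""}
    for w in words:
        for i in range(len(w)):
            for j in range(i + 1, len(w) + 1):
                infixes.add(w[i:j])
    return sorted(infixes)

def to_bitvector(language, index):
    """Characteristic sequence of a sub-language of the infix-closure,
    packed into a Python int used as a bitvector."""
    bv = 0
    for w in language:
        bv |= 1 << index[w]
    return bv

def union(bv1, bv2):
    # Union of two sub-languages is bitwise OR of their bitvectors.
    return bv1 | bv2

def concat(bv1, bv2, infixes, index):
    """Concatenation, re-restricted to the infix-closure: concatenations that
    leave the closed set are dropped, so every intermediate language fits in
    a fixed-width bitvector."""
    out = 0
    for i, u in enumerate(infixes):
        if (bv1 >> i) & 1:
            for j, v in enumerate(infixes):
                if (bv2 >> j) & 1 and (u + v) in index:
                    out |= 1 << index[u + v]
    return out

# Toy usage with example strings {"ab", "ba"}:
infixes = infix_closure(["ab", "ba"])
index = {w: k for k, w in enumerate(infixes)}
a, b = to_bitvector({"a"}, index), to_bitvector({"b"}, index)
ab = concat(a, b, infixes, index)
print(sorted(w for k, w in enumerate(infixes) if (ab >> k) & 1))  # ['ab']
```
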

Permanent link to this article: https://team.inria.fr/links/fr/seminars/