AsyDiL

Short description

Dictionary learning (DL) is a technique for adapting sparse representations to training data, with many applications in signal and image processing, as well as in machine learning tasks such as classification. Although very versatile, DL essentially builds linear representations, which limits performance in certain applications. Some steps towards nonlinearity have been made, notably kernel DL, but the topic is largely open.
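To make the linearity of standard DL concrete, here is a minimal sketch of sparse coding with a fixed dictionary, using Orthogonal Matching Pursuit (a standard greedy algorithm; the dictionary sizes and sparsity level below are illustrative, not taken from the project):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D
    and represent y as a linear combination of them."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit of y on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms (columns)
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true                        # a 2-sparse signal
x = omp(D, y, k=2)
print(np.count_nonzero(x), np.linalg.norm(y - D @ x))
```

The representation D @ x is linear in the selected atoms; DL proper would additionally update the columns of D on a training set.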

We propose a structural extension to DL: to replace the dictionary atoms, which are vectors, with sets, for example cones around a central vector. A single atom is chosen from an atom-set for sparse representation. This extension significantly increases nonlinearity and poses interesting optimization problems, both for computing the best sparse representation and for DL itself. It can be applied to standard DL as well as to kernel or other DL forms.
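For the cone example, choosing the single best atom from an atom-set has a simple closed form: among unit vectors within half-angle alpha of the central atom, the one best aligned with a residual is the residual's direction if it lies inside the cone, and otherwise the central atom rotated towards the residual by exactly alpha. This is only an illustrative sketch of that geometric selection step (the function name and the half-angle value are ours, not the project's); the project's actual optimization may differ:

```python
import numpy as np

def best_cone_atom(d, r, alpha):
    """Among unit vectors within angle alpha of the central atom d,
    return the one most correlated with the residual r (closed form)."""
    d = d / np.linalg.norm(d)
    r_hat = r / np.linalg.norm(r)
    theta = np.arccos(np.clip(d @ r_hat, -1.0, 1.0))
    if theta <= alpha:
        return r_hat                  # r itself lies inside the cone
    # otherwise rotate d towards r by alpha, landing on the cone boundary
    u = r_hat - (d @ r_hat) * d       # component of r_hat orthogonal to d
    u /= np.linalg.norm(u)
    return np.cos(alpha) * d + np.sin(alpha) * u

rng = np.random.default_rng(1)
d = rng.standard_normal(10)           # central vector of the atom-set
r = rng.standard_normal(10)           # residual to be matched
a = best_cone_atom(d, r, alpha=0.2)
# the chosen atom matches r at least as well as the central vector does
print(a @ (r / np.linalg.norm(r)) >= (d / np.linalg.norm(d)) @ (r / np.linalg.norm(r)))
```

Because the selected atom depends on the signal being represented, the resulting representation is no longer linear in the input, which is the source of the added nonlinearity.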

The new structure is called Asymmetric DL, due to the lack of reconstruction capability in the absence of the represented signal. However, many applications are still possible, such as denoising and classification. Our focus will be on anomaly detection, with particular interest in graph data coming from bank transactions, for which we have access to real data.

Funding

This work is supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS – UEFISCDI, project number PN-III-P4-PCE-2021-0154, within PNCDI III. Amount: 1,068,030 lei (about 215,000 euro).